From e4e4245ee359bcbc9c2eabd687dd9c87c41d705d Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 16 Aug 2024 11:26:07 -0700 Subject: [PATCH 001/368] One Round DKG (#589) * Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++ * Initial eVRF implementation Not quite done yet. It still needs to communicate the resulting points and proofs, extract them from the Pedersen Commitments in order to return them, and then be tested. * Add the openings of the PCs to the eVRF as necessary * Add implementation of secq256k1 * Make DKG Encryption a bit more flexible No longer requires the use of an EncryptionKeyMessage, and allows pre-defined keys for encryption. * Make NUM_BITS an argument for the field macro * Have the eVRF take a Zeroizing private key * Initial eVRF-based DKG * Add embedwards25519 curve * Inline the eVRF into the DKG library Due to how we're handling share encryption, we'd either need two circuits or to dedicate this circuit to the DKG. The latter makes sense at this time. * Add documentation to the eVRF-based DKG * Add paragraph claiming robustness * Update to the new eVRF proof * Finish routing the eVRF functionality Still needs errors and serialization, along with a few other TODOs. * Add initial eVRF DKG test * Improve eVRF DKG Updates how we calculate verification shares, improves performance when extracting multiple sets of keys, and adds more to the test for it. * Start using a proper error for the eVRF DKG * Resolve various TODOs Supports recovering multiple key shares from the eVRF DKG. Inlines two loops to save 2**16 iterations. Adds support for creating a constant time representation of scalars < NUM_BITS. * Ban zero ECDH keys, document non-zero requirements * Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519 * Add Ristretto eVRF trait impls * Support participating multiple times in the eVRF DKG * Only participate once per key, not once per key share * Rewrite processor key-gen around the eVRF DKG Still a WIP. * Finish routing the new key gen in the processor Doesn't touch the tests, coordinator, or Substrate yet. `cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor` does pass. * Deduplicate and better document in processor key_gen * Update serai-processor tests to the new key gen * Correct number of yx coefficients, get processor key gen test to pass * Add embedded elliptic curve keys to Substrate * Update processor key gen tests to the eVRF DKG * Have set_keys take signature_participants, not removed_participants Now no one is removed from the DKG. However, only `t` participants publish the key. Uses a BitVec for an efficient encoding of the participants. * Update the coordinator binary for the new DKG This does not yet update any tests. * Add sensible Debug to key_gen::[Processor, Coordinator]Message * Have the DKG explicitly declare how to interpolate its shares Removes the hack for MuSig where we multiply keys by the inverse of their Lagrange interpolation factor. * Replace Interpolation::None with Interpolation::Constant Allows the MuSig DKG to keep the secret share as the original private key, enabling FROST nonces to be derived consistently regardless of the MuSig context. * Get coordinator tests to pass * Update spec to the new DKG * Get clippy to pass across the repo * cargo machete * Add an extra sleep to ensure expected ordering of `Participation`s * Update orchestration * Remove bad panic in coordinator It expected ConfirmationShare to be n-of-n, not t-of-n. 
* Improve documentation on functions * Update TX size limit We now no longer have to support the ridiculous case of having 49 DKG participations within a 101-of-150 DKG. It does remain quite high due to needing to _sign_ so many times. It may be optimal for parties with multiple key shares to independently send their preprocesses/shares (despite the overhead that'll cause with signatures and the transaction structure). * Correct error in the Processor spec document * Update a few comments in the validator-sets pallet * Send/Recv Participation one at a time Sending all, then attempting to receive all in an expected order, wasn't working even with notable delays between sending messages. This points to the mempool not working as expected... * Correct ThresholdKeys serialization in modular-frost test * Update existing TX size limit test for the new DKG parameters * Increase time allowed for the DKG on the GH CI * Correct construction of signature_participants in serai-client tests Fault identified by akil. * Further contextualize DkgConfirmer by ValidatorSet Caught by a safety check that we wouldn't reuse preprocesses across messages. That raises the question of whether we were previously reusing preprocesses (reusing keys). Except that'd have caused a variety of signing failures (suggesting we had some staggered timing avoiding it in practice but yes, this was possible in theory). * Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests * Correct shimmed setting of a secq256k1 key * cargo fmt * Don't use `[0; 32]` for the embedded keys in the coordinator rotation test The key_gen function expects the random values to already be decided. * Big-endian secq256k1 scalars Also restores the prior, safer, Encryption::register function. --- .github/workflows/crypto-tests.yml | 4 + Cargo.lock | 116 ++- Cargo.toml | 30 +- coordinator/Cargo.toml | 1 + coordinator/src/main.rs | 153 +-- coordinator/src/substrate/mod.rs | 41 +- coordinator/src/tests/tributary/chain.rs | 22 +- coordinator/src/tests/tributary/dkg.rs | 278 ++---- coordinator/src/tests/tributary/mod.rs | 120 +-- coordinator/src/tests/tributary/sync.rs | 4 +- coordinator/src/tests/tributary/tx.rs | 15 +- coordinator/src/tributary/db.rs | 23 +- coordinator/src/tributary/handle.rs | 424 ++------ coordinator/src/tributary/mod.rs | 37 - coordinator/src/tributary/scanner.rs | 207 +--- coordinator/src/tributary/signing_protocol.rs | 114 ++- coordinator/src/tributary/spec.rs | 64 +- coordinator/src/tributary/transaction.rs | 276 ++---- coordinator/tributary/src/lib.rs | 10 +- crypto/dkg/Cargo.toml | 34 + crypto/dkg/src/encryption.rs | 214 ++-- crypto/dkg/src/evrf/mod.rs | 584 +++++++++++ crypto/dkg/src/evrf/proof.rs | 861 +++++++++++++++++ crypto/dkg/src/lib.rs | 102 +- crypto/dkg/src/musig.rs | 43 +- crypto/dkg/src/pedpop.rs | 35 +- crypto/dkg/src/promote.rs | 1 + crypto/dkg/src/tests/evrf/mod.rs | 79 ++ crypto/dkg/src/tests/evrf/proof.rs | 118 +++ crypto/dkg/src/tests/mod.rs | 8 +- crypto/dkg/src/tests/pedpop.rs | 18 +- crypto/evrf/circuit-abstraction/Cargo.toml | 20 + crypto/evrf/circuit-abstraction/LICENSE | 21 + crypto/evrf/circuit-abstraction/README.md | 3 + .../evrf/circuit-abstraction/src/gadgets.rs | 39 + crypto/evrf/circuit-abstraction/src/lib.rs | 192 ++++ crypto/evrf/divisors/Cargo.toml | 34 + crypto/evrf/divisors/LICENSE | 21 + crypto/evrf/divisors/README.md | 4 + crypto/evrf/divisors/src/lib.rs | 287 ++++++ crypto/evrf/divisors/src/poly.rs | 430 +++++++++ crypto/evrf/divisors/src/tests/mod.rs | 235 +++++ 
crypto/evrf/divisors/src/tests/poly.rs | 129 +++ crypto/evrf/ec-gadgets/Cargo.toml | 20 + crypto/evrf/ec-gadgets/LICENSE | 21 + crypto/evrf/ec-gadgets/README.md | 3 + crypto/evrf/ec-gadgets/src/dlog.rs | 529 ++++++++++ crypto/evrf/ec-gadgets/src/lib.rs | 130 +++ crypto/evrf/embedwards25519/Cargo.toml | 39 + crypto/evrf/embedwards25519/LICENSE | 21 + crypto/evrf/embedwards25519/README.md | 21 + crypto/evrf/embedwards25519/src/backend.rs | 293 ++++++ crypto/evrf/embedwards25519/src/lib.rs | 47 + crypto/evrf/embedwards25519/src/point.rs | 415 ++++++++ crypto/evrf/embedwards25519/src/scalar.rs | 52 + .../evrf/generalized-bulletproofs/Cargo.toml | 33 + crypto/evrf/generalized-bulletproofs/LICENSE | 21 + .../evrf/generalized-bulletproofs/README.md | 6 + .../src/arithmetic_circuit_proof.rs | 679 +++++++++++++ .../src/inner_product.rs | 360 +++++++ .../evrf/generalized-bulletproofs/src/lib.rs | 328 +++++++ .../generalized-bulletproofs/src/lincomb.rs | 265 +++++ .../src/point_vector.rs | 121 +++ .../src/scalar_vector.rs | 146 +++ .../src/tests/arithmetic_circuit_proof.rs | 250 +++++ .../src/tests/inner_product.rs | 113 +++ .../generalized-bulletproofs/src/tests/mod.rs | 27 + .../src/transcript.rs | 188 ++++ crypto/evrf/secq256k1/Cargo.toml | 39 + crypto/evrf/secq256k1/LICENSE | 21 + crypto/evrf/secq256k1/README.md | 5 + crypto/evrf/secq256k1/src/backend.rs | 295 ++++++ crypto/evrf/secq256k1/src/lib.rs | 47 + crypto/evrf/secq256k1/src/point.rs | 414 ++++++++ crypto/evrf/secq256k1/src/scalar.rs | 52 + crypto/frost/src/tests/vectors.rs | 1 + deny.toml | 1 + networks/monero/ringct/clsag/src/multisig.rs | 7 +- networks/monero/wallet/src/send/multisig.rs | 29 +- networks/monero/wallet/tests/runner/mod.rs | 2 +- orchestration/Cargo.toml | 2 + orchestration/src/main.rs | 74 +- orchestration/src/processor.rs | 10 +- processor/Cargo.toml | 7 +- processor/messages/src/lib.rs | 149 ++- processor/src/key_gen.rs | 913 ++++++++++-------- processor/src/main.rs | 67 +- processor/src/networks/mod.rs | 5 +- processor/src/networks/monero.rs | 2 +- processor/src/tests/key_gen.rs | 146 ++- spec/DKG Exclusions.md | 23 - .../Distributed Key Generation.md | 38 +- spec/processor/Processor.md | 32 +- substrate/abi/Cargo.toml | 8 +- substrate/abi/src/validator_sets.rs | 6 +- substrate/client/Cargo.toml | 2 + substrate/client/src/serai/validator_sets.rs | 39 +- .../client/tests/common/validator_sets.rs | 27 +- substrate/client/tests/validator_sets.rs | 34 +- substrate/node/Cargo.toml | 4 + substrate/node/src/chain_spec.rs | 45 +- substrate/primitives/Cargo.toml | 4 +- substrate/primitives/src/account.rs | 11 + substrate/primitives/src/networks.rs | 27 + substrate/runtime/src/abi.rs | 27 +- substrate/validator-sets/pallet/Cargo.toml | 13 +- substrate/validator-sets/pallet/src/lib.rs | 125 ++- .../validator-sets/primitives/src/lib.rs | 8 +- tests/coordinator/Cargo.toml | 4 + tests/coordinator/src/lib.rs | 46 +- tests/coordinator/src/tests/key_gen.rs | 143 ++- tests/coordinator/src/tests/mod.rs | 31 +- tests/coordinator/src/tests/rotation.rs | 43 +- tests/full-stack/src/tests/mod.rs | 18 +- tests/processor/Cargo.toml | 4 +- tests/processor/src/lib.rs | 76 +- tests/processor/src/networks.rs | 2 +- tests/processor/src/tests/batch.rs | 2 + tests/processor/src/tests/key_gen.rs | 144 ++- tests/processor/src/tests/mod.rs | 13 +- tests/processor/src/tests/send.rs | 2 + 121 files changed, 10388 insertions(+), 2480 deletions(-) create mode 100644 crypto/dkg/src/evrf/mod.rs create mode 100644 crypto/dkg/src/evrf/proof.rs create mode 100644 
crypto/dkg/src/tests/evrf/mod.rs create mode 100644 crypto/dkg/src/tests/evrf/proof.rs create mode 100644 crypto/evrf/circuit-abstraction/Cargo.toml create mode 100644 crypto/evrf/circuit-abstraction/LICENSE create mode 100644 crypto/evrf/circuit-abstraction/README.md create mode 100644 crypto/evrf/circuit-abstraction/src/gadgets.rs create mode 100644 crypto/evrf/circuit-abstraction/src/lib.rs create mode 100644 crypto/evrf/divisors/Cargo.toml create mode 100644 crypto/evrf/divisors/LICENSE create mode 100644 crypto/evrf/divisors/README.md create mode 100644 crypto/evrf/divisors/src/lib.rs create mode 100644 crypto/evrf/divisors/src/poly.rs create mode 100644 crypto/evrf/divisors/src/tests/mod.rs create mode 100644 crypto/evrf/divisors/src/tests/poly.rs create mode 100644 crypto/evrf/ec-gadgets/Cargo.toml create mode 100644 crypto/evrf/ec-gadgets/LICENSE create mode 100644 crypto/evrf/ec-gadgets/README.md create mode 100644 crypto/evrf/ec-gadgets/src/dlog.rs create mode 100644 crypto/evrf/ec-gadgets/src/lib.rs create mode 100644 crypto/evrf/embedwards25519/Cargo.toml create mode 100644 crypto/evrf/embedwards25519/LICENSE create mode 100644 crypto/evrf/embedwards25519/README.md create mode 100644 crypto/evrf/embedwards25519/src/backend.rs create mode 100644 crypto/evrf/embedwards25519/src/lib.rs create mode 100644 crypto/evrf/embedwards25519/src/point.rs create mode 100644 crypto/evrf/embedwards25519/src/scalar.rs create mode 100644 crypto/evrf/generalized-bulletproofs/Cargo.toml create mode 100644 crypto/evrf/generalized-bulletproofs/LICENSE create mode 100644 crypto/evrf/generalized-bulletproofs/README.md create mode 100644 crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs create mode 100644 crypto/evrf/generalized-bulletproofs/src/inner_product.rs create mode 100644 crypto/evrf/generalized-bulletproofs/src/lib.rs create mode 100644 crypto/evrf/generalized-bulletproofs/src/lincomb.rs create mode 100644 crypto/evrf/generalized-bulletproofs/src/point_vector.rs create mode 100644 crypto/evrf/generalized-bulletproofs/src/scalar_vector.rs create mode 100644 crypto/evrf/generalized-bulletproofs/src/tests/arithmetic_circuit_proof.rs create mode 100644 crypto/evrf/generalized-bulletproofs/src/tests/inner_product.rs create mode 100644 crypto/evrf/generalized-bulletproofs/src/tests/mod.rs create mode 100644 crypto/evrf/generalized-bulletproofs/src/transcript.rs create mode 100644 crypto/evrf/secq256k1/Cargo.toml create mode 100644 crypto/evrf/secq256k1/LICENSE create mode 100644 crypto/evrf/secq256k1/README.md create mode 100644 crypto/evrf/secq256k1/src/backend.rs create mode 100644 crypto/evrf/secq256k1/src/lib.rs create mode 100644 crypto/evrf/secq256k1/src/point.rs create mode 100644 crypto/evrf/secq256k1/src/scalar.rs delete mode 100644 spec/DKG Exclusions.md diff --git a/.github/workflows/crypto-tests.yml b/.github/workflows/crypto-tests.yml index d9d1df08..bf20ede3 100644 --- a/.github/workflows/crypto-tests.yml +++ b/.github/workflows/crypto-tests.yml @@ -35,6 +35,10 @@ jobs: -p multiexp \ -p schnorr-signatures \ -p dleq \ + -p generalized-bulletproofs \ + -p generalized-bulletproofs-circuit-abstraction \ + -p ec-divisors \ + -p generalized-bulletproofs-ec-gadgets \ -p dkg \ -p modular-frost \ -p frost-schnorrkel diff --git a/Cargo.lock b/Cargo.lock index d743f1df..ff21fe66 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -1109,6 +1109,7 @@ checksum = "1bc2832c24239b0141d5674bb9174f9d68a8b5b3f2753311927c172ca46f7e9c" dependencies = [ "funty", "radium", + "serde", "tap", "wyz", 
] @@ -2195,15 +2196,27 @@ dependencies = [ name = "dkg" version = "0.5.1" dependencies = [ + "blake2", "borsh", "chacha20", "ciphersuite", "dleq", + "ec-divisors", + "embedwards25519", "flexible-transcript", + "generalized-bulletproofs", + "generalized-bulletproofs-circuit-abstraction", + "generalized-bulletproofs-ec-gadgets", + "generic-array 1.1.0", "multiexp", + "pasta_curves", + "rand", + "rand_chacha", "rand_core", "schnorr-signatures", + "secq256k1", "std-shims", + "subtle", "thiserror", "zeroize", ] @@ -2295,6 +2308,18 @@ version = "1.0.17" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0d6ef0072f8a535281e4876be788938b528e9a1d43900b82c2569af7da799125" +[[package]] +name = "ec-divisors" +version = "0.1.0" +dependencies = [ + "dalek-ff-group", + "group", + "hex", + "pasta_curves", + "rand_core", + "zeroize", +] + [[package]] name = "ecdsa" version = "0.16.9" @@ -2375,6 +2400,26 @@ dependencies = [ "zeroize", ] +[[package]] +name = "embedwards25519" +version = "0.1.0" +dependencies = [ + "blake2", + "ciphersuite", + "crypto-bigint", + "dalek-ff-group", + "ec-divisors", + "ff-group-tests", + "generalized-bulletproofs-ec-gadgets", + "generic-array 0.14.7", + "hex", + "hex-literal", + "rand_core", + "rustversion", + "subtle", + "zeroize", +] + [[package]] name = "enum-as-inner" version = "0.5.1" @@ -3046,6 +3091,36 @@ dependencies = [ "serde_json", ] +[[package]] +name = "generalized-bulletproofs" +version = "0.1.0" +dependencies = [ + "blake2", + "ciphersuite", + "flexible-transcript", + "multiexp", + "rand_core", + "zeroize", +] + +[[package]] +name = "generalized-bulletproofs-circuit-abstraction" +version = "0.1.0" +dependencies = [ + "ciphersuite", + "generalized-bulletproofs", + "zeroize", +] + +[[package]] +name = "generalized-bulletproofs-ec-gadgets" +version = "0.1.0" +dependencies = [ + "ciphersuite", + "generalized-bulletproofs-circuit-abstraction", + "generic-array 1.1.0", +] + [[package]] name = "generator" version = "0.8.1" @@ -5789,8 +5864,7 @@ dependencies = [ [[package]] name = "pasta_curves" version = "0.5.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d3e57598f73cc7e1b2ac63c79c517b31a0877cd7c402cdcaa311b5208de7a095" +source = "git+https://github.com/kayabaNerve/pasta_curves?rev=a46b5be95cacbff54d06aad8d3bbcba42e05d616#a46b5be95cacbff54d06aad8d3bbcba42e05d616" dependencies = [ "blake2b_simd", "ff", @@ -5799,6 +5873,7 @@ dependencies = [ "rand", "static_assertions", "subtle", + "zeroize", ] [[package]] @@ -7922,6 +7997,26 @@ dependencies = [ "cc", ] +[[package]] +name = "secq256k1" +version = "0.1.0" +dependencies = [ + "blake2", + "ciphersuite", + "crypto-bigint", + "ec-divisors", + "ff-group-tests", + "generalized-bulletproofs-ec-gadgets", + "generic-array 0.14.7", + "hex", + "hex-literal", + "k256", + "rand_core", + "rustversion", + "subtle", + "zeroize", +] + [[package]] name = "secrecy" version = "0.8.0" @@ -8006,6 +8101,7 @@ checksum = "cd0b0ec5f1c1ca621c432a25813d8d60c88abe6d3e08a3eb9cf37d97a0fe3d73" name = "serai-abi" version = "0.1.0" dependencies = [ + "bitvec", "borsh", "frame-support", "parity-scale-codec", @@ -8030,6 +8126,7 @@ version = "0.1.0" dependencies = [ "async-lock", "bitcoin", + "bitvec", "blake2", "ciphersuite", "dockertest", @@ -8087,6 +8184,7 @@ name = "serai-coordinator" version = "0.1.0" dependencies = [ "async-trait", + "bitvec", "blake2", "borsh", "ciphersuite", @@ -8124,10 +8222,12 @@ dependencies = [ "ciphersuite", "dkg", "dockertest", + "embedwards25519", "hex", 
"parity-scale-codec", "rand_core", "schnorrkel", + "secq256k1", "serai-client", "serai-docker-tests", "serai-message-queue", @@ -8379,7 +8479,9 @@ dependencies = [ name = "serai-node" version = "0.1.0" dependencies = [ + "ciphersuite", "clap", + "embedwards25519", "frame-benchmarking", "futures-util", "hex", @@ -8405,6 +8507,7 @@ dependencies = [ "sc-transaction-pool", "sc-transaction-pool-api", "schnorrkel", + "secq256k1", "serai-env", "serai-runtime", "sp-api", @@ -8426,11 +8529,13 @@ name = "serai-orchestrator" version = "0.0.1" dependencies = [ "ciphersuite", + "embedwards25519", "flexible-transcript", "hex", "home", "rand_chacha", "rand_core", + "secq256k1", "zalloc", "zeroize", ] @@ -8440,6 +8545,7 @@ name = "serai-primitives" version = "0.1.0" dependencies = [ "borsh", + "ciphersuite", "frame-support", "parity-scale-codec", "rand_core", @@ -8458,11 +8564,14 @@ version = "0.1.0" dependencies = [ "async-trait", "bitcoin-serai", + "blake2", "borsh", "ciphersuite", "const-hex", "dalek-ff-group", + "dkg", "dockertest", + "ec-divisors", "env_logger", "ethereum-serai", "flexible-transcript", @@ -8620,9 +8729,9 @@ dependencies = [ name = "serai-validator-sets-pallet" version = "0.1.0" dependencies = [ + "bitvec", "frame-support", "frame-system", - "hashbrown 0.14.5", "pallet-babe", "pallet-grandpa", "parity-scale-codec", @@ -8631,6 +8740,7 @@ dependencies = [ "serai-dex-pallet", "serai-primitives", "serai-validator-sets-primitives", + "serde", "sp-application-crypto", "sp-core", "sp-io", diff --git a/Cargo.toml b/Cargo.toml index 3416d222..bce4ebe3 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -30,9 +30,16 @@ members = [ "crypto/ciphersuite", "crypto/multiexp", - "crypto/schnorr", "crypto/dleq", + + "crypto/evrf/secq256k1", + "crypto/evrf/embedwards25519", + "crypto/evrf/generalized-bulletproofs", + "crypto/evrf/circuit-abstraction", + "crypto/evrf/divisors", + "crypto/evrf/ec-gadgets", + "crypto/dkg", "crypto/frost", "crypto/schnorrkel", @@ -118,18 +125,32 @@ members = [ # to the extensive operations required for Bulletproofs [profile.dev.package] subtle = { opt-level = 3 } -curve25519-dalek = { opt-level = 3 } ff = { opt-level = 3 } group = { opt-level = 3 } crypto-bigint = { opt-level = 3 } +secp256k1 = { opt-level = 3 } +curve25519-dalek = { opt-level = 3 } dalek-ff-group = { opt-level = 3 } minimal-ed448 = { opt-level = 3 } multiexp = { opt-level = 3 } -monero-serai = { opt-level = 3 } +secq256k1 = { opt-level = 3 } +embedwards25519 = { opt-level = 3 } +generalized-bulletproofs = { opt-level = 3 } +generalized-bulletproofs-circuit-abstraction = { opt-level = 3 } +ec-divisors = { opt-level = 3 } +generalized-bulletproofs-ec-gadgets = { opt-level = 3 } + +dkg = { opt-level = 3 } + +monero-generators = { opt-level = 3 } +monero-borromean = { opt-level = 3 } +monero-bulletproofs = { opt-level = 3 } +monero-mlsag = { opt-level = 3 } +monero-clsag = { opt-level = 3 } [profile.release] panic = "unwind" @@ -158,6 +179,9 @@ matches = { path = "patches/matches" } option-ext = { path = "patches/option-ext" } directories-next = { path = "patches/directories-next" } +# The official pasta_curves repo doesn't support Zeroize +pasta_curves = { git = "https://github.com/kayabaNerve/pasta_curves", rev = "a46b5be95cacbff54d06aad8d3bbcba42e05d616" } + # https://github.com/alloy-rs/core/issues/717 alloy-sol-type-parser = { git = "https://github.com/alloy-rs/core", rev = "446b9d2fbce12b88456152170709a3eaac929af0" } diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index ae4e2be7..85865650 100644 
--- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -20,6 +20,7 @@ workspace = true async-trait = { version = "0.1", default-features = false } zeroize = { version = "^1.5", default-features = false, features = ["std"] } +bitvec = { version = "1", default-features = false, features = ["std"] } rand_core = { version = "0.6", default-features = false, features = ["std"] } blake2 = { version = "0.10", default-features = false, features = ["std"] } diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 58de348d..87db0135 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -16,7 +16,6 @@ use ciphersuite::{ Ciphersuite, Ristretto, }; use schnorr::SchnorrSignature; -use frost::Participant; use serai_db::{DbTxn, Db}; @@ -114,16 +113,17 @@ async fn add_tributary( // If we're rebooting, we'll re-fire this message // This is safe due to the message-queue deduplicating based off the intent system let set = spec.set(); - let our_i = spec - .i(&[], Ristretto::generator() * key.deref()) - .expect("adding a tributary for a set we aren't in set for"); + processors .send( set.network, processor_messages::key_gen::CoordinatorMessage::GenerateKey { - id: processor_messages::key_gen::KeyGenId { session: set.session, attempt: 0 }, - params: frost::ThresholdParams::new(spec.t(), spec.n(&[]), our_i.start).unwrap(), - shares: u16::from(our_i.end) - u16::from(our_i.start), + session: set.session, + threshold: spec.t(), + evrf_public_keys: spec.evrf_public_keys(), + // TODO + // params: frost::ThresholdParams::new(spec.t(), spec.n(&[]), our_i.start).unwrap(), + // shares: u16::from(our_i.end) - u16::from(our_i.start), }, ) .await; @@ -166,12 +166,9 @@ async fn handle_processor_message( // We'll only receive these if we fired GenerateKey, which we'll only do if if we're // in-set, making the Tributary relevant ProcessorMessage::KeyGen(inner_msg) => match inner_msg { - key_gen::ProcessorMessage::Commitments { id, .. } | - key_gen::ProcessorMessage::InvalidCommitments { id, .. } | - key_gen::ProcessorMessage::Shares { id, .. } | - key_gen::ProcessorMessage::InvalidShare { id, .. } | - key_gen::ProcessorMessage::GeneratedKeyPair { id, .. } | - key_gen::ProcessorMessage::Blame { id, .. } => Some(id.session), + key_gen::ProcessorMessage::Participation { session, .. } | + key_gen::ProcessorMessage::GeneratedKeyPair { session, .. } | + key_gen::ProcessorMessage::Blame { session, .. 
} => Some(*session), }, ProcessorMessage::Sign(inner_msg) => match inner_msg { // We'll only receive InvalidParticipant/Preprocess/Share if we're actively signing @@ -421,125 +418,33 @@ async fn handle_processor_message( let txs = match msg.msg.clone() { ProcessorMessage::KeyGen(inner_msg) => match inner_msg { - key_gen::ProcessorMessage::Commitments { id, commitments } => { - vec![Transaction::DkgCommitments { - attempt: id.attempt, - commitments, - signed: Transaction::empty_signed(), - }] + key_gen::ProcessorMessage::Participation { session, participation } => { + assert_eq!(session, spec.set().session); + vec![Transaction::DkgParticipation { participation, signed: Transaction::empty_signed() }] } - key_gen::ProcessorMessage::InvalidCommitments { id, faulty } => { - // This doesn't have guaranteed timing - // - // While the party *should* be fatally slashed and not included in future attempts, - // they'll actually be fatally slashed (assuming liveness before the Tributary retires) - // and not included in future attempts *which begin after the latency window completes* - let participant = spec - .reverse_lookup_i( - &crate::tributary::removed_as_of_dkg_attempt(&txn, spec.genesis(), id.attempt) - .expect("participating in DKG attempt yet we didn't save who was removed"), - faulty, - ) - .unwrap(); - vec![Transaction::RemoveParticipantDueToDkg { - participant, - signed: Transaction::empty_signed(), - }] - } - key_gen::ProcessorMessage::Shares { id, mut shares } => { - // Create a MuSig-based machine to inform Substrate of this key generation - let nonces = crate::tributary::dkg_confirmation_nonces(key, spec, &mut txn, id.attempt); - - let removed = crate::tributary::removed_as_of_dkg_attempt(&txn, genesis, id.attempt) - .expect("participating in a DKG attempt yet we didn't track who was removed yet?"); - let our_i = spec - .i(&removed, pub_key) - .expect("processor message to DKG for an attempt we aren't a validator in"); - - // `tx_shares` needs to be done here as while it can be serialized from the HashMap - // without further context, it can't be deserialized without context - let mut tx_shares = Vec::with_capacity(shares.len()); - for shares in &mut shares { - tx_shares.push(vec![]); - for i in 1 ..= spec.n(&removed) { - let i = Participant::new(i).unwrap(); - if our_i.contains(&i) { - if shares.contains_key(&i) { - panic!("processor sent us our own shares"); - } - continue; - } - tx_shares.last_mut().unwrap().push( - shares.remove(&i).expect("processor didn't send share for another validator"), - ); - } - } - - vec![Transaction::DkgShares { - attempt: id.attempt, - shares: tx_shares, - confirmation_nonces: nonces, - signed: Transaction::empty_signed(), - }] - } - key_gen::ProcessorMessage::InvalidShare { id, accuser, faulty, blame } => { - vec![Transaction::InvalidDkgShare { - attempt: id.attempt, - accuser, - faulty, - blame, - signed: Transaction::empty_signed(), - }] - } - key_gen::ProcessorMessage::GeneratedKeyPair { id, substrate_key, network_key } => { - // TODO2: Check the KeyGenId fields - - // Tell the Tributary the key pair, get back the share for the MuSig signature - let share = crate::tributary::generated_key_pair::<D>( + key_gen::ProcessorMessage::GeneratedKeyPair { session, substrate_key, network_key } => { + assert_eq!(session, spec.set().session); + crate::tributary::generated_key_pair::<D>( &mut txn, - key, - spec, + genesis, &KeyPair(Public(substrate_key), network_key.try_into().unwrap()), - id.attempt, ); - // TODO: Move this into generated_key_pair? 
- match share { - Ok(share) => { - vec![Transaction::DkgConfirmed { - attempt: id.attempt, - confirmation_share: share, - signed: Transaction::empty_signed(), - }] - } - Err(p) => { - let participant = spec - .reverse_lookup_i( - &crate::tributary::removed_as_of_dkg_attempt(&txn, spec.genesis(), id.attempt) - .expect("participating in DKG attempt yet we didn't save who was removed"), - p, - ) - .unwrap(); - vec![Transaction::RemoveParticipantDueToDkg { - participant, - signed: Transaction::empty_signed(), - }] - } - } - } - key_gen::ProcessorMessage::Blame { id, participant } => { - let participant = spec - .reverse_lookup_i( - &crate::tributary::removed_as_of_dkg_attempt(&txn, spec.genesis(), id.attempt) - .expect("participating in DKG attempt yet we didn't save who was removed"), - participant, - ) - .unwrap(); - vec![Transaction::RemoveParticipantDueToDkg { - participant, + // Create a MuSig-based machine to inform Substrate of this key generation + let confirmation_nonces = + crate::tributary::dkg_confirmation_nonces(key, spec, &mut txn, 0); + + vec![Transaction::DkgConfirmationNonces { + attempt: 0, + confirmation_nonces, signed: Transaction::empty_signed(), }] } + key_gen::ProcessorMessage::Blame { session, participant } => { + assert_eq!(session, spec.set().session); + let participant = spec.reverse_lookup_i(participant).unwrap(); + vec![Transaction::RemoveParticipant { participant, signed: Transaction::empty_signed() }] + } }, ProcessorMessage::Sign(msg) => match msg { sign::ProcessorMessage::InvalidParticipant { .. } => { diff --git a/coordinator/src/substrate/mod.rs b/coordinator/src/substrate/mod.rs index fb1e3aed..d1946b7e 100644 --- a/coordinator/src/substrate/mod.rs +++ b/coordinator/src/substrate/mod.rs @@ -10,7 +10,7 @@ use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; use serai_client::{ SeraiError, Block, Serai, TemporalSerai, - primitives::{BlockHash, NetworkId}, + primitives::{BlockHash, EmbeddedEllipticCurve, NetworkId}, validator_sets::{primitives::ValidatorSet, ValidatorSetsEvent}, in_instructions::InInstructionsEvent, coins::CoinsEvent, @@ -60,13 +60,46 @@ async fn handle_new_set( { log::info!("present in set {:?}", set); - let set_data = { + let validators; + let mut evrf_public_keys = vec![]; + { let serai = serai.as_of(block.hash()); let serai = serai.validator_sets(); let set_participants = serai.participants(set.network).await?.expect("NewSet for set which doesn't exist"); - set_participants.into_iter().map(|(k, w)| (k, u16::try_from(w).unwrap())).collect::<Vec<_>>() + validators = set_participants + .iter() + .map(|(k, w)| { + ( + <Ristretto as Ciphersuite>::read_G::<&[u8]>(&mut k.0.as_ref()) + .expect("invalid key registered as participant"), + u16::try_from(*w).unwrap(), + ) + }) + .collect::<Vec<_>>(); + for (validator, _) in set_participants { + // This is only run for external networks which always do a DKG for Serai + let substrate = serai + .embedded_elliptic_curve_key(validator, EmbeddedEllipticCurve::Embedwards25519) + .await? 
+ .expect("Serai called NewSet on a validator without an Embedwards25519 key"); + // `embedded_elliptic_curves` is documented to have the second entry be the + // network-specific curve (if it exists and is distinct from Embedwards25519) + let network = + if let Some(embedded_elliptic_curve) = set.network.embedded_elliptic_curves().get(1) { + serai.embedded_elliptic_curve_key(validator, *embedded_elliptic_curve).await?.expect( + "Serai called NewSet on a validator without the embedded key required for the network", + ) + } else { + substrate.clone() + }; + evrf_public_keys.push(( + <[u8; 32]>::try_from(substrate) + .expect("validator-sets pallet accepted a key of an invalid length"), + network, + )); + } }; let time = if let Ok(time) = block.time() { @@ -90,7 +123,7 @@ async fn handle_new_set( const SUBSTRATE_TO_TRIBUTARY_TIME_DELAY: u64 = 120; let time = time + SUBSTRATE_TO_TRIBUTARY_TIME_DELAY; - let spec = TributarySpec::new(block.hash(), time, set, set_data); + let spec = TributarySpec::new(block.hash(), time, set, validators, evrf_public_keys); log::info!("creating new tributary for {:?}", spec.set()); diff --git a/coordinator/src/tests/tributary/chain.rs b/coordinator/src/tests/tributary/chain.rs index 7fc6a064..746c611b 100644 --- a/coordinator/src/tests/tributary/chain.rs +++ b/coordinator/src/tests/tributary/chain.rs @@ -7,12 +7,8 @@ use zeroize::Zeroizing; use rand_core::{RngCore, CryptoRng, OsRng}; use futures_util::{task::Poll, poll}; -use ciphersuite::{ - group::{ff::Field, GroupEncoding}, - Ciphersuite, Ristretto, -}; +use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto}; -use sp_application_crypto::sr25519; use borsh::BorshDeserialize; use serai_client::{ primitives::NetworkId, @@ -52,12 +48,22 @@ pub fn new_spec( let set = ValidatorSet { session: Session(0), network: NetworkId::Bitcoin }; - let set_participants = keys + let validators = keys .iter() - .map(|key| (sr25519::Public((::generator() * **key).to_bytes()), 1)) + .map(|key| ((::generator() * **key), 1)) .collect::>(); - let res = TributarySpec::new(serai_block, start_time, set, set_participants); + // Generate random eVRF keys as none of these test rely on them to have any structure + let mut evrf_keys = vec![]; + for _ in 0 .. 
keys.len() { + let mut substrate = [0; 32]; + OsRng.fill_bytes(&mut substrate); + let mut network = vec![0; 64]; + OsRng.fill_bytes(&mut network); + evrf_keys.push((substrate, network)); + } + + let res = TributarySpec::new(serai_block, start_time, set, validators, evrf_keys); assert_eq!( TributarySpec::deserialize_reader(&mut borsh::to_vec(&res).unwrap().as_slice()).unwrap(), res, diff --git a/coordinator/src/tests/tributary/dkg.rs b/coordinator/src/tests/tributary/dkg.rs index 04a528f9..aafa9a33 100644 --- a/coordinator/src/tests/tributary/dkg.rs +++ b/coordinator/src/tests/tributary/dkg.rs @@ -1,5 +1,4 @@ use core::time::Duration; -use std::collections::HashMap; use zeroize::Zeroizing; use rand_core::{RngCore, OsRng}; @@ -9,7 +8,7 @@ use frost::Participant; use sp_runtime::traits::Verify; use serai_client::{ - primitives::{SeraiAddress, Signature}, + primitives::Signature, validator_sets::primitives::{ValidatorSet, KeyPair}, }; @@ -17,10 +16,7 @@ use tokio::time::sleep; use serai_db::{Get, DbTxn, Db, MemDb}; -use processor_messages::{ - key_gen::{self, KeyGenId}, - CoordinatorMessage, -}; +use processor_messages::{key_gen, CoordinatorMessage}; use tributary::{TransactionTrait, Tributary}; @@ -54,44 +50,41 @@ async fn dkg_test() { tokio::spawn(run_tributaries(tributaries.clone())); let mut txs = vec![]; - // Create DKG commitments for each key + // Create DKG participation for each key for key in &keys { - let attempt = 0; - let mut commitments = vec![0; 256]; - OsRng.fill_bytes(&mut commitments); + let mut participation = vec![0; 4096]; + OsRng.fill_bytes(&mut participation); - let mut tx = Transaction::DkgCommitments { - attempt, - commitments: vec![commitments], - signed: Transaction::empty_signed(), - }; + let mut tx = + Transaction::DkgParticipation { participation, signed: Transaction::empty_signed() }; tx.sign(&mut OsRng, spec.genesis(), key); txs.push(tx); } let block_before_tx = tributaries[0].1.tip().await; - // Publish all commitments but one - for (i, tx) in txs.iter().enumerate().skip(1) { + // Publish t-1 participations + let t = ((keys.len() * 2) / 3) + 1; + for (i, tx) in txs.iter().take(t - 1).enumerate() { assert_eq!(tributaries[i].1.add_transaction(tx.clone()).await, Ok(true)); - } - - // Wait until these are included - for tx in txs.iter().skip(1) { wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await; } - let expected_commitments: HashMap<_, _> = txs + let expected_participations = txs .iter() .enumerate() .map(|(i, tx)| { - if let Transaction::DkgCommitments { commitments, .. } = tx { - (Participant::new((i + 1).try_into().unwrap()).unwrap(), commitments[0].clone()) + if let Transaction::DkgParticipation { participation, .. 
} = tx { + CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Participation { + session: spec.set().session, + participant: Participant::new((i + 1).try_into().unwrap()).unwrap(), + participation: participation.clone(), + }) } else { - panic!("txs had non-commitments"); + panic!("txs wasn't a DkgParticipation"); } }) - .collect(); + .collect::<Vec<_>>(); async fn new_processors( db: &mut MemDb, @@ -120,28 +113,30 @@ processors } - // Instantiate a scanner and verify it has nothing to report + // Instantiate a scanner and verify it has the first `t - 1` participations to report (and isn't + // waiting for `t`) let processors = new_processors(&mut dbs[0], &keys[0], &spec, &tributaries[0].1).await; - assert!(processors.0.read().await.is_empty()); + assert_eq!(processors.0.read().await.get(&spec.set().network).unwrap().len(), t - 1); - // Publish the last commitment + // Publish the rest of the participations let block_before_tx = tributaries[0].1.tip().await; - assert_eq!(tributaries[0].1.add_transaction(txs[0].clone()).await, Ok(true)); - wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, txs[0].hash()).await; - sleep(Duration::from_secs(Tributary::<MemDb, Transaction, LocalP2p>::block_time().into())).await; + for tx in txs.iter().skip(t - 1) { + assert_eq!(tributaries[0].1.add_transaction(tx.clone()).await, Ok(true)); + wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await; + } - // Verify the scanner emits a KeyGen::Commitments message + // Verify the scanner emits all KeyGen::Participations messages handle_new_blocks::<_, _, _, _, _, LocalP2p>( &mut dbs[0], &keys[0], &|_, _, _, _| async { panic!("provided TX caused recognized_id to be called after DkgParticipation") }, &processors, &(), &|_| async { panic!( "test tried to publish a new Tributary TX from handle_application_tx after DkgParticipation" ) }, &spec, &tributaries[0].1.reader(), ) .await; { let mut msgs = processors.0.write().await; - assert_eq!(msgs.len(), 1); let msgs = msgs.get_mut(&spec.set().network).unwrap(); - let mut expected_commitments = expected_commitments.clone(); - expected_commitments.remove(&Participant::new((1).try_into().unwrap()).unwrap()); - assert_eq!( - msgs.pop_front().unwrap(), - CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments { - id: KeyGenId { session: spec.set().session, attempt: 0 }, - commitments: expected_commitments - }) - ); + assert_eq!(msgs.len(), keys.len()); + for expected in &expected_participations { + assert_eq!(&msgs.pop_front().unwrap(), expected); + } assert!(msgs.is_empty()); } for (i, key) in keys.iter().enumerate().skip(1) { let processors = new_processors(&mut dbs[i], key, &spec, &tributaries[i].1).await; let mut msgs = processors.0.write().await; - assert_eq!(msgs.len(), 1); let msgs = msgs.get_mut(&spec.set().network).unwrap(); - let mut expected_commitments = expected_commitments.clone(); - expected_commitments.remove(&Participant::new((i + 1).try_into().unwrap()).unwrap()); - assert_eq!( - msgs.pop_front().unwrap(), - CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments { - id: KeyGenId { session: spec.set().session, attempt: 0 }, - commitments: expected_commitments - }) - ); + assert_eq!(msgs.len(), keys.len()); + for expected in &expected_participations { + assert_eq!(&msgs.pop_front().unwrap(), expected); 
+ } assert!(msgs.is_empty()); } - // Now do shares - let mut txs = vec![]; - for (k, key) in keys.iter().enumerate() { - let attempt = 0; - - let mut shares = vec![vec![]]; - for i in 0 .. keys.len() { - if i != k { - let mut share = vec![0; 256]; - OsRng.fill_bytes(&mut share); - shares.last_mut().unwrap().push(share); - } - } - - let mut txn = dbs[k].txn(); - let mut tx = Transaction::DkgShares { - attempt, - shares, - confirmation_nonces: crate::tributary::dkg_confirmation_nonces(key, &spec, &mut txn, 0), - signed: Transaction::empty_signed(), - }; - txn.commit(); - tx.sign(&mut OsRng, spec.genesis(), key); - txs.push(tx); - } - - let block_before_tx = tributaries[0].1.tip().await; - for (i, tx) in txs.iter().enumerate().skip(1) { - assert_eq!(tributaries[i].1.add_transaction(tx.clone()).await, Ok(true)); - } - for tx in txs.iter().skip(1) { - wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await; - } - - // With just 4 sets of shares, nothing should happen yet - handle_new_blocks::<_, _, _, _, _, LocalP2p>( - &mut dbs[0], - &keys[0], - &|_, _, _, _| async { - panic!("provided TX caused recognized_id to be called after some shares") - }, - &processors, - &(), - &|_| async { - panic!( - "test tried to publish a new Tributary TX from handle_application_tx after some shares" - ) - }, - &spec, - &tributaries[0].1.reader(), - ) - .await; - assert_eq!(processors.0.read().await.len(), 1); - assert!(processors.0.read().await[&spec.set().network].is_empty()); - - // Publish the final set of shares - let block_before_tx = tributaries[0].1.tip().await; - assert_eq!(tributaries[0].1.add_transaction(txs[0].clone()).await, Ok(true)); - wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, txs[0].hash()).await; - sleep(Duration::from_secs(Tributary::<MemDb, Transaction, LocalP2p>::block_time().into())).await; - - // Each scanner should emit a distinct shares message - let shares_for = |i: usize| { - CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Shares { - id: KeyGenId { session: spec.set().session, attempt: 0 }, - shares: vec![txs - .iter() - .enumerate() - .filter_map(|(l, tx)| { - if let Transaction::DkgShares { shares, .. 
} = tx { - if i == l { - None - } else { - let relative_i = i - (if i > l { 1 } else { 0 }); - Some(( - Participant::new((l + 1).try_into().unwrap()).unwrap(), - shares[0][relative_i].clone(), - )) - } - } else { - panic!("txs had non-shares"); - } - }) - .collect::<HashMap<_, _>>()], - }) - }; - - // Any scanner which has handled the prior blocks should only emit the new event - for (i, key) in keys.iter().enumerate() { - handle_new_blocks::<_, _, _, _, _, LocalP2p>( - &mut dbs[i], - key, - &|_, _, _, _| async { panic!("provided TX caused recognized_id to be called after shares") }, - &processors, - &(), - &|_| async { panic!("test tried to publish a new Tributary TX from handle_application_tx") }, - &spec, - &tributaries[i].1.reader(), - ) - .await; - { - let mut msgs = processors.0.write().await; - assert_eq!(msgs.len(), 1); - let msgs = msgs.get_mut(&spec.set().network).unwrap(); - assert_eq!(msgs.pop_front().unwrap(), shares_for(i)); - assert!(msgs.is_empty()); - } - } - - // Yet new scanners should emit all events - for (i, key) in keys.iter().enumerate() { - let processors = new_processors(&mut MemDb::new(), key, &spec, &tributaries[i].1).await; - let mut msgs = processors.0.write().await; - assert_eq!(msgs.len(), 1); - let msgs = msgs.get_mut(&spec.set().network).unwrap(); - let mut expected_commitments = expected_commitments.clone(); - expected_commitments.remove(&Participant::new((i + 1).try_into().unwrap()).unwrap()); - assert_eq!( - msgs.pop_front().unwrap(), - CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Commitments { - id: KeyGenId { session: spec.set().session, attempt: 0 }, - commitments: expected_commitments - }) - ); - assert_eq!(msgs.pop_front().unwrap(), shares_for(i)); - assert!(msgs.is_empty()); - } - - // Send DkgConfirmed let mut substrate_key = [0; 32]; OsRng.fill_bytes(&mut substrate_key); let mut network_key = vec![0; usize::try_from((OsRng.next_u64() % 32) + 32).unwrap()]; @@ -319,17 +173,19 @@ let mut txs = vec![]; for (i, key) in keys.iter().enumerate() { - let attempt = 0; let mut txn = dbs[i].txn(); - let share = - crate::tributary::generated_key_pair::<MemDb>(&mut txn, key, &spec, &key_pair, 0).unwrap(); - txn.commit(); - let mut tx = Transaction::DkgConfirmed { + // Claim we've generated the key pair + crate::tributary::generated_key_pair::<MemDb>(&mut txn, spec.genesis(), &key_pair); + + // Publish the nonces + let attempt = 0; + let mut tx = Transaction::DkgConfirmationNonces { attempt, - confirmation_share: share, + confirmation_nonces: crate::tributary::dkg_confirmation_nonces(key, &spec, &mut txn, 0), signed: Transaction::empty_signed(), }; + txn.commit(); tx.sign(&mut OsRng, spec.genesis(), key); txs.push(tx); } @@ -341,6 +197,35 @@ wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await; } + // This should not cause any new processor event as the processor doesn't handle DKG confirming + for (i, key) in keys.iter().enumerate() { + handle_new_blocks::<_, _, _, _, _, LocalP2p>( + &mut dbs[i], + key, + &|_, _, _, _| async { + panic!("provided TX caused recognized_id to be called after DkgConfirmationNonces") + }, + &processors, + &(), + // The Tributary handler should publish ConfirmationShare itself after ConfirmationNonces + &|tx| async { assert_eq!(tributaries[i].1.add_transaction(tx).await, Ok(true)) }, + &spec, + &tributaries[i].1.reader(), + ) + .await; + { + assert!(processors.0.read().await.get(&spec.set().network).unwrap().is_empty()); + } + } + + // Yet once these TXs are on-chain, the tributary 
should itself publish the confirmation shares + // This means in the block after the next block, the keys should be set onto Serai + // Sleep twice as long as two blocks, in case there's some stability issue + sleep(Duration::from_secs( + 2 * 2 * u64::from(Tributary::<MemDb, Transaction, LocalP2p>::block_time()), + )) + .await; + struct CheckPublishSetKeys { spec: TributarySpec, key_pair: KeyPair, } @@ -351,19 +236,24 @@ &self, _db: &(impl Sync + Get), set: ValidatorSet, - removed: Vec<SeraiAddress>, key_pair: KeyPair, + signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>, signature: Signature, ) { assert_eq!(set, self.spec.set()); - assert!(removed.is_empty()); assert_eq!(self.key_pair, key_pair); assert!(signature.verify( - &*serai_client::validator_sets::primitives::set_keys_message(&set, &[], &key_pair), + &*serai_client::validator_sets::primitives::set_keys_message(&set, &key_pair), &serai_client::Public( frost::dkg::musig::musig_key::<Ristretto>( &serai_client::validator_sets::primitives::musig_context(set), - &self.spec.validators().into_iter().map(|(validator, _)| validator).collect::<Vec<_>>() + &self + .spec + .validators() + .into_iter() + .zip(signature_participants) + .filter_map(|((validator, _), included)| included.then_some(validator)) + .collect::<Vec<_>>() ) .unwrap() .to_bytes() diff --git a/coordinator/src/tests/tributary/mod.rs b/coordinator/src/tests/tributary/mod.rs index c3f98311..340809e1 100644 --- a/coordinator/src/tests/tributary/mod.rs +++ b/coordinator/src/tests/tributary/mod.rs @@ -6,7 +6,7 @@ use ciphersuite::{group::Group, Ciphersuite, Ristretto}; use scale::{Encode, Decode}; use serai_client::{ - primitives::{SeraiAddress, Signature}, + primitives::Signature, validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ValidatorSet, KeyPair}, }; use processor_messages::coordinator::SubstrateSignableId; @@ -32,8 +32,8 @@ impl PublishSeraiTransaction for () { &self, _db: &(impl Sync + serai_db::Get), _set: ValidatorSet, - _removed: Vec<SeraiAddress>, _key_pair: KeyPair, + _signature_participants: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>, _signature: Signature, ) { panic!("publish_set_keys was called in test") @@ -84,23 +84,25 @@ fn tx_size_limit() { use tributary::TRANSACTION_SIZE_LIMIT; let max_dkg_coefficients = (MAX_KEY_SHARES_PER_SET * 2).div_ceil(3) + 1; - let max_key_shares_per_individual = MAX_KEY_SHARES_PER_SET - max_dkg_coefficients; - // Handwave the DKG Commitments size as the size of the commitments to the coefficients and - // 1024 bytes for all overhead - let handwaved_dkg_commitments_size = (max_dkg_coefficients * MAX_KEY_LEN) + 1024; - assert!( - u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() >= - (handwaved_dkg_commitments_size * max_key_shares_per_individual) - ); + // n coefficients + // 2 ECDH values per recipient, and the encrypted share + let elements_outside_of_proof = max_dkg_coefficients + ((2 + 1) * MAX_KEY_SHARES_PER_SET); + // Then Pedersen Vector Commitments for each DH done, and the associated overhead in the proof + // It's handwaved as one commitment per DH, where we do 2 per coefficient and 1 for the explicit + // ECDHs + let vector_commitments = (2 * max_dkg_coefficients) + (2 * MAX_KEY_SHARES_PER_SET); + // Then we have commitments to the `t` polynomial of length 2 + 2 nc, where nc is the number of + // commitments + let t_commitments = 2 + (2 * vector_commitments); + // The remainder of the proof should be ~30 elements + let proof_elements = 30; - // Encryption key, PoP (2 elements), message - let elements_per_share = 4; - let handwaved_dkg_shares_size = - (elements_per_share * MAX_KEY_LEN * MAX_KEY_SHARES_PER_SET) + 
1024; - assert!( - u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() >= - (handwaved_dkg_shares_size * max_key_shares_per_individual) - ); + let handwaved_dkg_size = + ((elements_outside_of_proof + vector_commitments + t_commitments + proof_elements) * + MAX_KEY_LEN) + + 1024; + // Further scale by two in case of any errors in the above + assert!(u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() >= (2 * handwaved_dkg_size)); } #[test] @@ -143,84 +145,34 @@ fn serialize_sign_data() { #[test] fn serialize_transaction() { - test_read_write(&Transaction::RemoveParticipantDueToDkg { + test_read_write(&Transaction::RemoveParticipant { participant: <Ristretto as Ciphersuite>::G::random(&mut OsRng), signed: random_signed_with_nonce(&mut OsRng, 0), }); - { - let mut commitments = vec![random_vec(&mut OsRng, 512)]; - for _ in 0 .. (OsRng.next_u64() % 100) { - let mut temp = commitments[0].clone(); - OsRng.fill_bytes(&mut temp); - commitments.push(temp); - } - test_read_write(&Transaction::DkgCommitments { - attempt: random_u32(&mut OsRng), - commitments, - signed: random_signed_with_nonce(&mut OsRng, 0), - }); - } + test_read_write(&Transaction::DkgParticipation { + participation: random_vec(&mut OsRng, 4096), + signed: random_signed_with_nonce(&mut OsRng, 0), + }); - { - // This supports a variable share length, and variable amount of sent shares, yet share length - // and sent shares is expected to be constant among recipients - let share_len = usize::try_from((OsRng.next_u64() % 512) + 1).unwrap(); - let amount_of_shares = usize::try_from((OsRng.next_u64() % 3) + 1).unwrap(); - // Create a valid vec of shares - let mut shares = vec![]; - // Create up to 150 participants - for _ in 0 ..= (OsRng.next_u64() % 150) { - // Give each sender multiple shares - let mut sender_shares = vec![]; - for _ in 0 .. amount_of_shares { - let mut share = vec![0; share_len]; - OsRng.fill_bytes(&mut share); - sender_shares.push(share); - } - shares.push(sender_shares); - } + test_read_write(&Transaction::DkgConfirmationNonces { + attempt: random_u32(&mut OsRng), + confirmation_nonces: { + let mut nonces = [0; 64]; + OsRng.fill_bytes(&mut nonces); + nonces + }, + signed: random_signed_with_nonce(&mut OsRng, 0), + }); - test_read_write(&Transaction::DkgShares { - attempt: random_u32(&mut OsRng), - shares, - confirmation_nonces: { - let mut nonces = [0; 64]; - OsRng.fill_bytes(&mut nonces); - nonces - }, - signed: random_signed_with_nonce(&mut OsRng, 1), - }); - } - - for i in 0 .. 
2 { - test_read_write(&Transaction::InvalidDkgShare { - attempt: random_u32(&mut OsRng), - accuser: frost::Participant::new( - u16::try_from(OsRng.next_u64() >> 48).unwrap().saturating_add(1), - ) - .unwrap(), - faulty: frost::Participant::new( - u16::try_from(OsRng.next_u64() >> 48).unwrap().saturating_add(1), - ) - .unwrap(), - blame: if i == 0 { - None - } else { - Some(random_vec(&mut OsRng, 500)).filter(|blame| !blame.is_empty()) - }, - signed: random_signed_with_nonce(&mut OsRng, 2), - }); - } - - test_read_write(&Transaction::DkgConfirmed { + test_read_write(&Transaction::DkgConfirmationShare { attempt: random_u32(&mut OsRng), confirmation_share: { let mut share = [0; 32]; OsRng.fill_bytes(&mut share); share }, - signed: random_signed_with_nonce(&mut OsRng, 2), + signed: random_signed_with_nonce(&mut OsRng, 1), }); { diff --git a/coordinator/src/tests/tributary/sync.rs b/coordinator/src/tests/tributary/sync.rs index 18f60864..a0b68839 100644 --- a/coordinator/src/tests/tributary/sync.rs +++ b/coordinator/src/tests/tributary/sync.rs @@ -29,7 +29,7 @@ async fn sync_test() { let mut keys = new_keys(&mut OsRng); let spec = new_spec(&mut OsRng, &keys); // Ensure this can have a node fail - assert!(spec.n(&[]) > spec.t()); + assert!(spec.n() > spec.t()); let mut tributaries = new_tributaries(&keys, &spec) .await @@ -142,7 +142,7 @@ async fn sync_test() { // Because only `t` validators are used in a commit, take n - t nodes offline // leaving only `t` nodes. Which should force it to participate in the consensus // of next blocks. - let spares = usize::from(spec.n(&[]) - spec.t()); + let spares = usize::from(spec.n() - spec.t()); for thread in p2p_threads.iter().take(spares) { thread.abort(); } diff --git a/coordinator/src/tests/tributary/tx.rs b/coordinator/src/tests/tributary/tx.rs index da9433b6..9b948f36 100644 --- a/coordinator/src/tests/tributary/tx.rs +++ b/coordinator/src/tests/tributary/tx.rs @@ -37,15 +37,14 @@ async fn tx_test() { usize::try_from(OsRng.next_u64() % u64::try_from(tributaries.len()).unwrap()).unwrap(); let key = keys[sender].clone(); - let attempt = 0; - let mut commitments = vec![0; 256]; - OsRng.fill_bytes(&mut commitments); - - // Create the TX with a null signature so we can get its sig hash let block_before_tx = tributaries[sender].1.tip().await; - let mut tx = Transaction::DkgCommitments { - attempt, - commitments: vec![commitments.clone()], + // Create the TX with a null signature so we can get its sig hash + let mut tx = Transaction::DkgParticipation { + participation: { + let mut participation = vec![0; 4096]; + OsRng.fill_bytes(&mut participation); + participation + }, signed: Transaction::empty_signed(), }; tx.sign(&mut OsRng, spec.genesis(), &key); diff --git a/coordinator/src/tributary/db.rs b/coordinator/src/tributary/db.rs index fda1c47b..095f18af 100644 --- a/coordinator/src/tributary/db.rs +++ b/coordinator/src/tributary/db.rs @@ -18,7 +18,6 @@ use crate::tributary::{Label, Transaction}; #[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)] pub enum Topic { - Dkg, DkgConfirmation, SubstrateSign(SubstrateSignableId), Sign([u8; 32]), @@ -46,15 +45,13 @@ pub enum Accumulation { create_db!( Tributary { SeraiBlockNumber: (hash: [u8; 32]) -> u64, - SeraiDkgCompleted: (spec: ValidatorSet) -> [u8; 32], + SeraiDkgCompleted: (set: ValidatorSet) -> [u8; 32], TributaryBlockNumber: (block: [u8; 32]) -> u32, LastHandledBlock: (genesis: [u8; 32]) -> [u8; 32], // TODO: Revisit the point of this FatalSlashes: (genesis: [u8; 32]) -> 
Vec<[u8; 32]>, - RemovedAsOfDkgAttempt: (genesis: [u8; 32], attempt: u32) -> Vec<[u8; 32]>, - OfflineDuringDkg: (genesis: [u8; 32]) -> Vec<[u8; 32]>, // TODO: Combine these two FatallySlashed: (genesis: [u8; 32], account: [u8; 32]) -> (), SlashPoints: (genesis: [u8; 32], account: [u8; 32]) -> u32, @@ -67,11 +64,9 @@ create_db!( DataReceived: (genesis: [u8; 32], data_spec: &DataSpecification) -> u16, DataDb: (genesis: [u8; 32], data_spec: &DataSpecification, signer_bytes: &[u8; 32]) -> Vec, - DkgShare: (genesis: [u8; 32], from: u16, to: u16) -> Vec, + DkgParticipation: (genesis: [u8; 32], from: u16) -> Vec, ConfirmationNonces: (genesis: [u8; 32], attempt: u32) -> HashMap>, - DkgKeyPair: (genesis: [u8; 32], attempt: u32) -> KeyPair, - KeyToDkgAttempt: (key: [u8; 32]) -> u32, - DkgLocallyCompleted: (genesis: [u8; 32]) -> (), + DkgKeyPair: (genesis: [u8; 32]) -> KeyPair, PlanIds: (genesis: &[u8], block: u64) -> Vec<[u8; 32]>, @@ -123,12 +118,12 @@ impl AttemptDb { pub fn attempt(getter: &impl Get, genesis: [u8; 32], topic: Topic) -> Option { let attempt = Self::get(getter, genesis, &topic); - // Don't require explicit recognition of the Dkg topic as it starts when the chain does + // Don't require explicit recognition of the DkgConfirmation topic as it starts when the chain + // does // Don't require explicit recognition of the SlashReport topic as it isn't a DoS risk and it // should always happen (eventually) if attempt.is_none() && - ((topic == Topic::Dkg) || - (topic == Topic::DkgConfirmation) || + ((topic == Topic::DkgConfirmation) || (topic == Topic::SubstrateSign(SubstrateSignableId::SlashReport))) { return Some(0); @@ -155,16 +150,12 @@ impl ReattemptDb { // 5 minutes for attempts 0 ..= 2, 10 minutes for attempts 3 ..= 5, 15 minutes for attempts > 5 // Assumes no event will take longer than 15 minutes, yet grows the time in case there are // network bandwidth issues - let mut reattempt_delay = BASE_REATTEMPT_DELAY * + let reattempt_delay = BASE_REATTEMPT_DELAY * ((AttemptDb::attempt(txn, genesis, topic) .expect("scheduling re-attempt for unknown topic") / 3) + 1) .min(3); - // Allow more time for DKGs since they have an extra round and much more data - if matches!(topic, Topic::Dkg) { - reattempt_delay *= 4; - } let upon_block = current_block_number + reattempt_delay; let mut reattempts = Self::get(txn, genesis, upon_block).unwrap_or(vec![]); diff --git a/coordinator/src/tributary/handle.rs b/coordinator/src/tributary/handle.rs index fbce7dd9..c5378cc7 100644 --- a/coordinator/src/tributary/handle.rs +++ b/coordinator/src/tributary/handle.rs @@ -13,7 +13,7 @@ use serai_client::{Signature, validator_sets::primitives::KeyPair}; use tributary::{Signed, TransactionKind, TransactionTrait}; use processor_messages::{ - key_gen::{self, KeyGenId}, + key_gen::self, coordinator::{self, SubstrateSignableId, SubstrateSignId}, sign::{self, SignId}, }; @@ -38,33 +38,20 @@ pub fn dkg_confirmation_nonces( txn: &mut impl DbTxn, attempt: u32, ) -> [u8; 64] { - DkgConfirmer::new(key, spec, txn, attempt) - .expect("getting DKG confirmation nonces for unknown attempt") - .preprocess() + DkgConfirmer::new(key, spec, txn, attempt).preprocess() } pub fn generated_key_pair( txn: &mut D::Transaction<'_>, - key: &Zeroizing<::F>, - spec: &TributarySpec, + genesis: [u8; 32], key_pair: &KeyPair, - attempt: u32, -) -> Result<[u8; 32], Participant> { - DkgKeyPair::set(txn, spec.genesis(), attempt, key_pair); - KeyToDkgAttempt::set(txn, key_pair.0 .0, &attempt); - let preprocesses = ConfirmationNonces::get(txn, 
spec.genesis(), attempt).unwrap(); - DkgConfirmer::new(key, spec, txn, attempt) - .expect("claiming to have generated a key pair for an unrecognized attempt") - .share(preprocesses, key_pair) +) { + DkgKeyPair::set(txn, genesis, key_pair); } -fn unflatten( - spec: &TributarySpec, - removed: &[::G], - data: &mut HashMap>, -) { +fn unflatten(spec: &TributarySpec, data: &mut HashMap>) { for (validator, _) in spec.validators() { - let Some(range) = spec.i(removed, validator) else { continue }; + let Some(range) = spec.i(validator) else { continue }; let Some(all_segments) = data.remove(&range.start) else { continue; }; @@ -88,7 +75,6 @@ impl< { fn accumulate( &mut self, - removed: &[::G], data_spec: &DataSpecification, signer: ::G, data: &Vec, @@ -99,10 +85,7 @@ impl< panic!("accumulating data for a participant multiple times"); } let signer_shares = { - let Some(signer_i) = self.spec.i(removed, signer) else { - log::warn!("accumulating data from {} who was removed", hex::encode(signer.to_bytes())); - return Accumulation::NotReady; - }; + let signer_i = self.spec.i(signer).expect("transaction signer wasn't a member of the set"); u16::from(signer_i.end) - u16::from(signer_i.start) }; @@ -115,11 +98,7 @@ impl< // If 2/3rds of the network participated in this preprocess, queue it for an automatic // re-attempt - // DkgConfirmation doesn't have a re-attempt as it's just an extension for Dkg - if (data_spec.label == Label::Preprocess) && - received_range.contains(&self.spec.t()) && - (data_spec.topic != Topic::DkgConfirmation) - { + if (data_spec.label == Label::Preprocess) && received_range.contains(&self.spec.t()) { // Double check the attempt on this entry, as we don't want to schedule a re-attempt if this // is an old entry // This is an assert, not part of the if check, as old data shouldn't be here in the first @@ -129,10 +108,7 @@ impl< } // If we have all the needed commitments/preprocesses/shares, tell the processor - let needs_everyone = - (data_spec.topic == Topic::Dkg) || (data_spec.topic == Topic::DkgConfirmation); - let needed = if needs_everyone { self.spec.n(removed) } else { self.spec.t() }; - if received_range.contains(&needed) { + if received_range.contains(&self.spec.t()) { log::debug!( "accumulation for entry {:?} attempt #{} is ready", &data_spec.topic, @@ -141,7 +117,7 @@ impl< let mut data = HashMap::new(); for validator in self.spec.validators().iter().map(|validator| validator.0) { - let Some(i) = self.spec.i(removed, validator) else { continue }; + let Some(i) = self.spec.i(validator) else { continue }; data.insert( i.start, if let Some(data) = DataDb::get(self.txn, genesis, data_spec, &validator.to_bytes()) { @@ -152,10 +128,10 @@ impl< ); } - assert_eq!(data.len(), usize::from(needed)); + assert_eq!(data.len(), usize::from(self.spec.t())); // Remove our own piece of data, if we were involved - if let Some(i) = self.spec.i(removed, Ristretto::generator() * self.our_key.deref()) { + if let Some(i) = self.spec.i(Ristretto::generator() * self.our_key.deref()) { if data.remove(&i.start).is_some() { return Accumulation::Ready(DataSet::Participating(data)); } @@ -167,7 +143,6 @@ impl< fn handle_data( &mut self, - removed: &[::G], data_spec: &DataSpecification, bytes: &Vec, signed: &Signed, @@ -213,21 +188,15 @@ impl< // TODO: If this is shares, we need to check they are part of the selected signing set // Accumulate this data - self.accumulate(removed, data_spec, signed.signer, bytes) + self.accumulate(data_spec, signed.signer, bytes) } fn check_sign_data_len( &mut self, - 
removed: &[::G], signer: ::G, len: usize, ) -> Result<(), ()> { - let Some(signer_i) = self.spec.i(removed, signer) else { - // TODO: Ensure processor doesn't so participate/check how it handles removals for being - // offline - self.fatal_slash(signer.to_bytes(), "signer participated despite being removed"); - Err(())? - }; + let signer_i = self.spec.i(signer).expect("signer wasn't a member of the set"); if len != usize::from(u16::from(signer_i.end) - u16::from(signer_i.start)) { self.fatal_slash( signer.to_bytes(), @@ -254,12 +223,9 @@ impl< } match tx { - Transaction::RemoveParticipantDueToDkg { participant, signed } => { - if self.spec.i(&[], participant).is_none() { - self.fatal_slash( - participant.to_bytes(), - "RemoveParticipantDueToDkg vote for non-validator", - ); + Transaction::RemoveParticipant { participant, signed } => { + if self.spec.i(participant).is_none() { + self.fatal_slash(participant.to_bytes(), "RemoveParticipant vote for non-validator"); return; } @@ -274,268 +240,106 @@ impl< let prior_votes = VotesToRemove::get(self.txn, genesis, participant).unwrap_or(0); let signer_votes = - self.spec.i(&[], signed.signer).expect("signer wasn't a validator for this network?"); + self.spec.i(signed.signer).expect("signer wasn't a validator for this network?"); let new_votes = prior_votes + u16::from(signer_votes.end) - u16::from(signer_votes.start); VotesToRemove::set(self.txn, genesis, participant, &new_votes); if ((prior_votes + 1) ..= new_votes).contains(&self.spec.t()) { - self.fatal_slash(participant, "RemoveParticipantDueToDkg vote") + self.fatal_slash(participant, "RemoveParticipant vote") } } - Transaction::DkgCommitments { attempt, commitments, signed } => { - let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else { - self.fatal_slash(signed.signer.to_bytes(), "DkgCommitments with an unrecognized attempt"); - return; - }; - let Ok(()) = self.check_sign_data_len(&removed, signed.signer, commitments.len()) else { - return; - }; - let data_spec = DataSpecification { topic: Topic::Dkg, label: Label::Preprocess, attempt }; - match self.handle_data(&removed, &data_spec, &commitments.encode(), &signed) { - Accumulation::Ready(DataSet::Participating(mut commitments)) => { - log::info!("got all DkgCommitments for {}", hex::encode(genesis)); - unflatten(self.spec, &removed, &mut commitments); - self - .processors - .send( - self.spec.set().network, - key_gen::CoordinatorMessage::Commitments { - id: KeyGenId { session: self.spec.set().session, attempt }, - commitments, - }, - ) - .await; - } - Accumulation::Ready(DataSet::NotParticipating) => { - assert!( - removed.contains(&(Ristretto::generator() * self.our_key.deref())), - "NotParticipating in a DkgCommitments we weren't removed for" - ); - } - Accumulation::NotReady => {} - } - } - - Transaction::DkgShares { attempt, mut shares, confirmation_nonces, signed } => { - let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else { - self.fatal_slash(signed.signer.to_bytes(), "DkgShares with an unrecognized attempt"); - return; - }; - let not_participating = removed.contains(&(Ristretto::generator() * self.our_key.deref())); - - let Ok(()) = self.check_sign_data_len(&removed, signed.signer, shares.len()) else { - return; - }; - - let Some(sender_i) = self.spec.i(&removed, signed.signer) else { - self.fatal_slash( - signed.signer.to_bytes(), - "DkgShares for a DKG they aren't participating in", - ); - return; - }; - let sender_is_len = u16::from(sender_i.end) - u16::from(sender_i.start); - 
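A note on the RemoveParticipant vote accumulation above: the `(prior_votes + 1) ..= new_votes` range check fires the fatal slash exactly once, on whichever vote crosses the threshold, even when a single voter carries multiple key shares. A minimal sketch of that property (not part of the patch; `crosses_threshold` is a hypothetical helper and `t = 67` an arbitrary example value):

  // Sketch: the slash triggers iff t falls within the votes newly added by this voter
  fn crosses_threshold(prior_votes: u16, new_votes: u16, t: u16) -> bool {
    ((prior_votes + 1) ..= new_votes).contains(&t)
  }

  fn main() {
    let t = 67;
    // A voter with 3 key shares moves the tally from 65 to 68; 66 ..= 68 contains 67
    assert!(crosses_threshold(65, 68, t));
    // Any later vote starts above t, so the slash can't fire a second time
    assert!(!crosses_threshold(68, 70, t));
    // A tally which never reaches t never fires
    assert!(!crosses_threshold(60, 66, t));
  }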
for shares in &shares { - if shares.len() != (usize::from(self.spec.n(&removed) - sender_is_len)) { - self.fatal_slash(signed.signer.to_bytes(), "invalid amount of DKG shares"); - return; - } - } - - // Save each share as needed for blame - for (from_offset, shares) in shares.iter().enumerate() { - let from = - Participant::new(u16::from(sender_i.start) + u16::try_from(from_offset).unwrap()) - .unwrap(); - - for (to_offset, share) in shares.iter().enumerate() { - // 0-indexed (the enumeration) to 1-indexed (Participant) - let mut to = u16::try_from(to_offset).unwrap() + 1; - // Adjust for the omission of the sender's own shares - if to >= u16::from(sender_i.start) { - to += u16::from(sender_i.end) - u16::from(sender_i.start); - } - let to = Participant::new(to).unwrap(); - - DkgShare::set(self.txn, genesis, from.into(), to.into(), share); - } - } - - // Filter down to only our share's bytes for handle - let our_shares = if let Some(our_i) = - self.spec.i(&removed, Ristretto::generator() * self.our_key.deref()) - { - if sender_i == our_i { - vec![] - } else { - // 1-indexed to 0-indexed - let mut our_i_pos = u16::from(our_i.start) - 1; - // Handle the omission of the sender's own data - if u16::from(our_i.start) > u16::from(sender_i.start) { - our_i_pos -= sender_is_len; - } - let our_i_pos = usize::from(our_i_pos); - shares - .iter_mut() - .map(|shares| { - shares - .drain( - our_i_pos .. - (our_i_pos + usize::from(u16::from(our_i.end) - u16::from(our_i.start))), - ) - .collect::>() - }) - .collect() - } - } else { - assert!( - not_participating, - "we didn't have an i while handling DkgShares we weren't removed for" - ); - // Since we're not participating, simply save vec![] for our shares - vec![] - }; - // Drop shares as it's presumably been mutated into invalidity - drop(shares); - - let data_spec = DataSpecification { topic: Topic::Dkg, label: Label::Share, attempt }; - let encoded_data = (confirmation_nonces.to_vec(), our_shares.encode()).encode(); - match self.handle_data(&removed, &data_spec, &encoded_data, &signed) { - Accumulation::Ready(DataSet::Participating(confirmation_nonces_and_shares)) => { - log::info!("got all DkgShares for {}", hex::encode(genesis)); - - let mut confirmation_nonces = HashMap::new(); - let mut shares = HashMap::new(); - for (participant, confirmation_nonces_and_shares) in confirmation_nonces_and_shares { - let (these_confirmation_nonces, these_shares) = - <(Vec, Vec)>::decode(&mut confirmation_nonces_and_shares.as_slice()) - .unwrap(); - confirmation_nonces.insert(participant, these_confirmation_nonces); - shares.insert(participant, these_shares); - } - ConfirmationNonces::set(self.txn, genesis, attempt, &confirmation_nonces); - - // shares is a HashMap>>>, with the values representing: - // - Each of the sender's shares - // - Each of the our shares - // - Each share - // We need a Vec>>, with the outer being each of ours - let mut expanded_shares = vec![]; - for (sender_start_i, shares) in shares { - let shares: Vec>> = Vec::<_>::decode(&mut shares.as_slice()).unwrap(); - for (sender_i_offset, our_shares) in shares.into_iter().enumerate() { - for (our_share_i, our_share) in our_shares.into_iter().enumerate() { - if expanded_shares.len() <= our_share_i { - expanded_shares.push(HashMap::new()); - } - expanded_shares[our_share_i].insert( - Participant::new( - u16::from(sender_start_i) + u16::try_from(sender_i_offset).unwrap(), - ) - .unwrap(), - our_share, - ); - } - } - } - - self - .processors - .send( - self.spec.set().network, - 
key_gen::CoordinatorMessage::Shares { - id: KeyGenId { session: self.spec.set().session, attempt }, - shares: expanded_shares, - }, - ) - .await; - } - Accumulation::Ready(DataSet::NotParticipating) => { - assert!(not_participating, "NotParticipating in a DkgShares we weren't removed for"); - } - Accumulation::NotReady => {} - } - } - - Transaction::InvalidDkgShare { attempt, accuser, faulty, blame, signed } => { - let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else { - self - .fatal_slash(signed.signer.to_bytes(), "InvalidDkgShare with an unrecognized attempt"); - return; - }; - let Some(range) = self.spec.i(&removed, signed.signer) else { - self.fatal_slash( - signed.signer.to_bytes(), - "InvalidDkgShare for a DKG they aren't participating in", - ); - return; - }; - if !range.contains(&accuser) { - self.fatal_slash( - signed.signer.to_bytes(), - "accused with a Participant index which wasn't theirs", - ); - return; - } - if range.contains(&faulty) { - self.fatal_slash(signed.signer.to_bytes(), "accused self of having an InvalidDkgShare"); - return; - } - - let Some(share) = DkgShare::get(self.txn, genesis, accuser.into(), faulty.into()) else { - self.fatal_slash( - signed.signer.to_bytes(), - "InvalidDkgShare had a non-existent faulty participant", - ); - return; - }; + Transaction::DkgParticipation { participation, signed } => { + // Send the participation to the processor self .processors .send( self.spec.set().network, - key_gen::CoordinatorMessage::VerifyBlame { - id: KeyGenId { session: self.spec.set().session, attempt }, - accuser, - accused: faulty, - share, - blame, + key_gen::CoordinatorMessage::Participation { + session: self.spec.set().session, + participant: self + .spec + .i(signed.signer) + .expect("signer wasn't a validator for this network?") + .start, + participation, }, ) .await; } - Transaction::DkgConfirmed { attempt, confirmation_share, signed } => { - let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else { - self.fatal_slash(signed.signer.to_bytes(), "DkgConfirmed with an unrecognized attempt"); - return; - }; + Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed } => { + let data_spec = + DataSpecification { topic: Topic::DkgConfirmation, label: Label::Preprocess, attempt }; + match self.handle_data(&data_spec, &confirmation_nonces.to_vec(), &signed) { + Accumulation::Ready(DataSet::Participating(confirmation_nonces)) => { + log::info!( + "got all DkgConfirmationNonces for {}, attempt {attempt}", + hex::encode(genesis) + ); + ConfirmationNonces::set(self.txn, genesis, attempt, &confirmation_nonces); + + // Send the expected DkgConfirmationShare + // TODO: Slight race condition here due to set, publish tx, then commit txn + let key_pair = DkgKeyPair::get(self.txn, genesis) + .expect("participating in confirming key we don't have"); + let mut tx = match DkgConfirmer::new(self.our_key, self.spec, self.txn, attempt) + .share(confirmation_nonces, &key_pair) + { + Ok(confirmation_share) => Transaction::DkgConfirmationShare { + attempt, + confirmation_share, + signed: Transaction::empty_signed(), + }, + Err(participant) => Transaction::RemoveParticipant { + participant: self.spec.reverse_lookup_i(participant).unwrap(), + signed: Transaction::empty_signed(), + }, + }; + tx.sign(&mut OsRng, genesis, self.our_key); + self.publish_tributary_tx.publish_tributary_tx(tx).await; + } + Accumulation::Ready(DataSet::NotParticipating) | Accumulation::NotReady => {} + } + } + + Transaction::DkgConfirmationShare 
{ attempt, confirmation_share, signed } => { let data_spec = DataSpecification { topic: Topic::DkgConfirmation, label: Label::Share, attempt }; - match self.handle_data(&removed, &data_spec, &confirmation_share.to_vec(), &signed) { + match self.handle_data(&data_spec, &confirmation_share.to_vec(), &signed) { Accumulation::Ready(DataSet::Participating(shares)) => { - log::info!("got all DkgConfirmed for {}", hex::encode(genesis)); - - let Some(removed) = removed_as_of_dkg_attempt(self.txn, genesis, attempt) else { - panic!( - "DkgConfirmed for everyone yet didn't have the removed parties for this attempt", - ); - }; + log::info!( + "got all DkgConfirmationShare for {}, attempt {attempt}", + hex::encode(genesis) + ); let preprocesses = ConfirmationNonces::get(self.txn, genesis, attempt).unwrap(); + // TODO: This can technically happen under very very very specific timing as the txn - // put happens before DkgConfirmed, yet the txn commit isn't guaranteed to - let key_pair = DkgKeyPair::get(self.txn, genesis, attempt).expect( - "in DkgConfirmed handling, which happens after everyone \ - (including us) fires DkgConfirmed, yet no confirming key pair", + // put happens before DkgConfirmationShare, yet the txn isn't guaranteed to be + // committed + let key_pair = DkgKeyPair::get(self.txn, genesis).expect( + "in DkgConfirmationShare handling, which happens after everyone \ + (including us) fires DkgConfirmationShare, yet no confirming key pair", ); - let mut confirmer = DkgConfirmer::new(self.our_key, self.spec, self.txn, attempt) - .expect("confirming DKG for unrecognized attempt"); + + // Determine the bitstring representing who participated before we move `shares` + let validators = self.spec.validators(); + let mut signature_participants = bitvec::vec::BitVec::with_capacity(validators.len()); + for (participant, _) in validators { + signature_participants.push( + (participant == (::generator() * self.our_key.deref())) || + shares.contains_key(&self.spec.i(participant).unwrap().start), + ); + } + + // Produce the final signature + let mut confirmer = DkgConfirmer::new(self.our_key, self.spec, self.txn, attempt); let sig = match confirmer.complete(preprocesses, &key_pair, shares) { Ok(sig) => sig, Err(p) => { - let mut tx = Transaction::RemoveParticipantDueToDkg { - participant: self.spec.reverse_lookup_i(&removed, p).unwrap(), + let mut tx = Transaction::RemoveParticipant { + participant: self.spec.reverse_lookup_i(p).unwrap(), signed: Transaction::empty_signed(), }; tx.sign(&mut OsRng, genesis, self.our_key); @@ -544,23 +348,18 @@ impl< } }; - DkgLocallyCompleted::set(self.txn, genesis, &()); - self .publish_serai_tx .publish_set_keys( self.db, self.spec.set(), - removed.into_iter().map(|key| key.to_bytes().into()).collect(), key_pair, + signature_participants, Signature(sig), ) .await; } - Accumulation::Ready(DataSet::NotParticipating) => { - panic!("wasn't a participant in DKG confirmination shares") - } - Accumulation::NotReady => {} + Accumulation::Ready(DataSet::NotParticipating) | Accumulation::NotReady => {} } } @@ -618,19 +417,8 @@ impl< } Transaction::SubstrateSign(data) => { - // Provided transactions ensure synchrony on any signing protocol, and we won't start - // signing with threshold keys before we've confirmed them on-chain - let Some(removed) = - crate::tributary::removed_as_of_set_keys(self.txn, self.spec.set(), genesis) - else { - self.fatal_slash( - data.signed.signer.to_bytes(), - "signing despite not having set keys on substrate", - ); - return; - }; let signer = 
data.signed.signer; - let Ok(()) = self.check_sign_data_len(&removed, signer, data.data.len()) else { + let Ok(()) = self.check_sign_data_len(signer, data.data.len()) else { return; }; let expected_len = match data.label { @@ -653,11 +441,11 @@ impl< attempt: data.attempt, }; let Accumulation::Ready(DataSet::Participating(mut results)) = - self.handle_data(&removed, &data_spec, &data.data.encode(), &data.signed) + self.handle_data(&data_spec, &data.data.encode(), &data.signed) else { return; }; - unflatten(self.spec, &removed, &mut results); + unflatten(self.spec, &mut results); let id = SubstrateSignId { session: self.spec.set().session, @@ -678,16 +466,7 @@ impl< } Transaction::Sign(data) => { - let Some(removed) = - crate::tributary::removed_as_of_set_keys(self.txn, self.spec.set(), genesis) - else { - self.fatal_slash( - data.signed.signer.to_bytes(), - "signing despite not having set keys on substrate", - ); - return; - }; - let Ok(()) = self.check_sign_data_len(&removed, data.signed.signer, data.data.len()) else { + let Ok(()) = self.check_sign_data_len(data.signed.signer, data.data.len()) else { return; }; @@ -697,9 +476,9 @@ impl< attempt: data.attempt, }; if let Accumulation::Ready(DataSet::Participating(mut results)) = - self.handle_data(&removed, &data_spec, &data.data.encode(), &data.signed) + self.handle_data(&data_spec, &data.data.encode(), &data.signed) { - unflatten(self.spec, &removed, &mut results); + unflatten(self.spec, &mut results); let id = SignId { session: self.spec.set().session, id: data.plan, attempt: data.attempt }; self @@ -740,8 +519,7 @@ impl< } Transaction::SlashReport(points, signed) => { - // Uses &[] as we only need the length which is independent to who else was removed - let signer_range = self.spec.i(&[], signed.signer).unwrap(); + let signer_range = self.spec.i(signed.signer).unwrap(); let signer_len = u16::from(signer_range.end) - u16::from(signer_range.start); if points.len() != (self.spec.validators().len() - 1) { self.fatal_slash( diff --git a/coordinator/src/tributary/mod.rs b/coordinator/src/tributary/mod.rs index cc9bdb1e..6e2f2661 100644 --- a/coordinator/src/tributary/mod.rs +++ b/coordinator/src/tributary/mod.rs @@ -1,7 +1,3 @@ -use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; - -use serai_client::validator_sets::primitives::ValidatorSet; - use tributary::{ ReadWrite, transaction::{TransactionError, TransactionKind, Transaction as TransactionTrait}, @@ -24,39 +20,6 @@ pub use handle::*; pub mod scanner; -pub fn removed_as_of_dkg_attempt( - getter: &impl Get, - genesis: [u8; 32], - attempt: u32, -) -> Option::G>> { - if attempt == 0 { - Some(vec![]) - } else { - RemovedAsOfDkgAttempt::get(getter, genesis, attempt).map(|keys| { - keys.iter().map(|key| ::G::from_bytes(key).unwrap()).collect() - }) - } -} - -pub fn removed_as_of_set_keys( - getter: &impl Get, - set: ValidatorSet, - genesis: [u8; 32], -) -> Option::G>> { - // SeraiDkgCompleted has the key placed on-chain. - // This key can be uniquely mapped to an attempt so long as one participant was honest, which we - // assume as a presumably honest participant. - // Resolve from generated key to attempt to fatally slashed as of attempt. 
- - // This expect will trigger if this is prematurely called and Substrate has tracked the keys yet - // we haven't locally synced and handled the Tributary - // All callers of this, at the time of writing, ensure the Tributary has sufficiently synced - // making the panic with context more desirable than the None - let attempt = KeyToDkgAttempt::get(getter, SeraiDkgCompleted::get(getter, set)?) - .expect("key completed on-chain didn't have an attempt related"); - removed_as_of_dkg_attempt(getter, genesis, attempt) -} - pub async fn publish_signed_transaction( txn: &mut D::Transaction<'_>, tributary: &Tributary, diff --git a/coordinator/src/tributary/scanner.rs b/coordinator/src/tributary/scanner.rs index 9b56e0a0..c0b906ed 100644 --- a/coordinator/src/tributary/scanner.rs +++ b/coordinator/src/tributary/scanner.rs @@ -1,15 +1,17 @@ -use core::{marker::PhantomData, ops::Deref, future::Future, time::Duration}; -use std::{sync::Arc, collections::HashSet}; +use core::{marker::PhantomData, future::Future, time::Duration}; +use std::sync::Arc; use zeroize::Zeroizing; +use rand_core::OsRng; + use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; use tokio::sync::broadcast; use scale::{Encode, Decode}; use serai_client::{ - primitives::{SeraiAddress, Signature}, + primitives::Signature, validator_sets::primitives::{KeyPair, ValidatorSet}, Serai, }; @@ -67,8 +69,8 @@ pub trait PublishSeraiTransaction { &self, db: &(impl Sync + Get), set: ValidatorSet, - removed: Vec, key_pair: KeyPair, + signature_participants: bitvec::vec::BitVec, signature: Signature, ); } @@ -129,17 +131,12 @@ mod impl_pst_for_serai { &self, db: &(impl Sync + Get), set: ValidatorSet, - removed: Vec, key_pair: KeyPair, + signature_participants: bitvec::vec::BitVec, signature: Signature, ) { - // TODO: BoundedVec as an arg to avoid this expect - let tx = SeraiValidatorSets::set_keys( - set.network, - removed.try_into().expect("removing more than allowed"), - key_pair, - signature, - ); + let tx = + SeraiValidatorSets::set_keys(set.network, key_pair, signature_participants, signature); async fn check(serai: SeraiValidatorSets<'_>, set: ValidatorSet, (): ()) -> bool { if matches!(serai.keys(set).await, Ok(Some(_))) { log::info!("another coordinator set key pair for {:?}", set); @@ -249,18 +246,15 @@ impl< let genesis = self.spec.genesis(); - let current_fatal_slashes = FatalSlashes::get_as_keys(self.txn, genesis); - // Calculate the shares still present, spinning if not enough are - // still_present_shares is used by a below branch, yet it's a natural byproduct of checking if - // we should spin, hence storing it in a variable here - let still_present_shares = { + { // Start with the original n value - let mut present_shares = self.spec.n(&[]); + let mut present_shares = self.spec.n(); // Remove everyone fatally slashed + let current_fatal_slashes = FatalSlashes::get_as_keys(self.txn, genesis); for removed in ¤t_fatal_slashes { let original_i_for_removed = - self.spec.i(&[], *removed).expect("removed party was never present"); + self.spec.i(*removed).expect("removed party was never present"); let removed_shares = u16::from(original_i_for_removed.end) - u16::from(original_i_for_removed.start); present_shares -= removed_shares; @@ -276,79 +270,17 @@ impl< tokio::time::sleep(core::time::Duration::from_secs(60)).await; } } - - present_shares - }; + } for topic in ReattemptDb::take(self.txn, genesis, self.block_number) { let attempt = AttemptDb::start_next_attempt(self.txn, genesis, topic); - log::info!("re-attempting 
{topic:?} with attempt {attempt}"); + log::info!("potentially re-attempting {topic:?} with attempt {attempt}"); // Slash people who failed to participate as expected in the prior attempt { let prior_attempt = attempt - 1; - let (removed, expected_participants) = match topic { - Topic::Dkg => { - // Every validator who wasn't removed is expected to have participated - let removed = - crate::tributary::removed_as_of_dkg_attempt(self.txn, genesis, prior_attempt) - .expect("prior attempt didn't have its removed saved to disk"); - let removed_set = removed.iter().copied().collect::>(); - ( - removed, - self - .spec - .validators() - .into_iter() - .filter_map(|(validator, _)| { - Some(validator).filter(|validator| !removed_set.contains(validator)) - }) - .collect(), - ) - } - Topic::DkgConfirmation => { - panic!("TODO: re-attempting DkgConfirmation when we should be re-attempting the Dkg") - } - Topic::SubstrateSign(_) | Topic::Sign(_) => { - let removed = - crate::tributary::removed_as_of_set_keys(self.txn, self.spec.set(), genesis) - .expect("SubstrateSign/Sign yet have yet to set keys"); - // TODO: If 67% sent preprocesses, this should be them. Else, this should be vec![] - let expected_participants = vec![]; - (removed, expected_participants) - } - }; - - let (expected_topic, expected_label) = match topic { - Topic::Dkg => { - let n = self.spec.n(&removed); - // If we got all the DKG shares, we should be on DKG confirmation - let share_spec = - DataSpecification { topic: Topic::Dkg, label: Label::Share, attempt: prior_attempt }; - if DataReceived::get(self.txn, genesis, &share_spec).unwrap_or(0) == n { - // Label::Share since there is no Label::Preprocess for DkgConfirmation since the - // preprocess is part of Topic::Dkg Label::Share - (Topic::DkgConfirmation, Label::Share) - } else { - let preprocess_spec = DataSpecification { - topic: Topic::Dkg, - label: Label::Preprocess, - attempt: prior_attempt, - }; - // If we got all the DKG preprocesses, DKG shares - if DataReceived::get(self.txn, genesis, &preprocess_spec).unwrap_or(0) == n { - // Label::Share since there is no Label::Preprocess for DkgConfirmation since the - // preprocess is part of Topic::Dkg Label::Share - (Topic::Dkg, Label::Share) - } else { - (Topic::Dkg, Label::Preprocess) - } - } - } - Topic::DkgConfirmation => unreachable!(), - // If we got enough participants to move forward, then we expect shares from them all - Topic::SubstrateSign(_) | Topic::Sign(_) => (topic, Label::Share), - }; + // TODO: If 67% sent preprocesses, this should be them. 
Else, this should be vec![] + let expected_participants: Vec<::G> = vec![]; let mut did_not_participate = vec![]; for expected_participant in expected_participants { @@ -356,8 +288,9 @@ impl< self.txn, genesis, &DataSpecification { - topic: expected_topic, - label: expected_label, + topic, + // Since we got the preprocesses, we were supposed to get the shares + label: Label::Share, attempt: prior_attempt, }, &expected_participant.to_bytes(), @@ -373,15 +306,8 @@ impl< // Accordingly, clear did_not_participate // TODO - // If during the DKG, explicitly mark these people as having been offline - // TODO: If they were offline sufficiently long ago, don't strike them off - if topic == Topic::Dkg { - let mut existing = OfflineDuringDkg::get(self.txn, genesis).unwrap_or(vec![]); - for did_not_participate in did_not_participate { - existing.push(did_not_participate.to_bytes()); - } - OfflineDuringDkg::set(self.txn, genesis, &existing); - } + // TODO: Increment the slash points of people who didn't preprocess in some expected window + // of time // Slash everyone who didn't participate as expected // This may be overzealous as if a minority detects a completion, they'll abort yet the @@ -411,75 +337,22 @@ impl< then preprocesses. This only sends preprocesses). */ match topic { - Topic::Dkg => { - let mut removed = current_fatal_slashes.clone(); + Topic::DkgConfirmation => { + if SeraiDkgCompleted::get(self.txn, self.spec.set()).is_none() { + log::info!("re-attempting DKG confirmation with attempt {attempt}"); - let t = self.spec.t(); - { - let mut present_shares = still_present_shares; - - // Load the parties marked as offline across the various attempts - let mut offline = OfflineDuringDkg::get(self.txn, genesis) - .unwrap_or(vec![]) - .iter() - .map(|key| ::G::from_bytes(key).unwrap()) - .collect::>(); - // Pop from the list to prioritize the removal of those recently offline - while let Some(offline) = offline.pop() { - // Make sure they weren't removed already (such as due to being fatally slashed) - // This also may trigger if they were offline across multiple attempts - if removed.contains(&offline) { - continue; - } - - // If we can remove them and still meet the threshold, do so - let original_i_for_offline = - self.spec.i(&[], offline).expect("offline was never present?"); - let offline_shares = - u16::from(original_i_for_offline.end) - u16::from(original_i_for_offline.start); - if (present_shares - offline_shares) >= t { - present_shares -= offline_shares; - removed.push(offline); - } - - // If we've removed as many people as we can, break - if present_shares == t { - break; - } - } - } - - RemovedAsOfDkgAttempt::set( - self.txn, - genesis, - attempt, - &removed.iter().map(::G::to_bytes).collect(), - ); - - if DkgLocallyCompleted::get(self.txn, genesis).is_none() { - let Some(our_i) = self.spec.i(&removed, Ristretto::generator() * self.our_key.deref()) - else { - continue; + // Since it wasn't completed, publish our nonces for the next attempt + let confirmation_nonces = + crate::tributary::dkg_confirmation_nonces(self.our_key, self.spec, self.txn, attempt); + let mut tx = Transaction::DkgConfirmationNonces { + attempt, + confirmation_nonces, + signed: Transaction::empty_signed(), }; - - // Since it wasn't completed, instruct the processor to start the next attempt - let id = - processor_messages::key_gen::KeyGenId { session: self.spec.set().session, attempt }; - - let params = - frost::ThresholdParams::new(t, self.spec.n(&removed), our_i.start).unwrap(); - let shares = 
u16::from(our_i.end) - u16::from(our_i.start); - - self - .processors - .send( - self.spec.set().network, - processor_messages::key_gen::CoordinatorMessage::GenerateKey { id, params, shares }, - ) - .await; + tx.sign(&mut OsRng, genesis, self.our_key); + self.publish_tributary_tx.publish_tributary_tx(tx).await; } } - Topic::DkgConfirmation => unreachable!(), Topic::SubstrateSign(inner_id) => { let id = processor_messages::coordinator::SubstrateSignId { session: self.spec.set().session, @@ -496,6 +369,8 @@ impl< crate::cosign_evaluator::LatestCosign::get(self.txn, self.spec.set().network) .map_or(0, |cosign| cosign.block_number); if latest_cosign < block_number { + log::info!("re-attempting cosigning {block_number:?} with attempt {attempt}"); + // Instruct the processor to start the next attempt self .processors @@ -512,6 +387,8 @@ impl< SubstrateSignableId::Batch(batch) => { // If the Batch hasn't appeared on-chain... if BatchInstructionsHashDb::get(self.txn, self.spec.set().network, batch).is_none() { + log::info!("re-attempting signing batch {batch:?} with attempt {attempt}"); + // Instruct the processor to start the next attempt // The processor won't continue if it's already signed a Batch // Prior checking if the Batch is on-chain just may reduce the non-participating @@ -529,6 +406,11 @@ impl< // If this Tributary hasn't been retired... // (published SlashReport/took too long to do so) if crate::RetiredTributaryDb::get(self.txn, self.spec.set()).is_none() { + log::info!( + "re-attempting signing slash report for {:?} with attempt {attempt}", + self.spec.set() + ); + let report = SlashReport::get(self.txn, self.spec.set()) .expect("re-attempting signing a SlashReport we don't have?"); self @@ -575,8 +457,7 @@ impl< }; // Assign them 0 points for themselves report.insert(i, 0); - // Uses &[] as we only need the length which is independent to who else was removed - let signer_i = self.spec.i(&[], validator).unwrap(); + let signer_i = self.spec.i(validator).unwrap(); let signer_len = u16::from(signer_i.end) - u16::from(signer_i.start); // Push `n` copies, one for each of their shares for _ in 0 .. 
signer_len { diff --git a/coordinator/src/tributary/signing_protocol.rs b/coordinator/src/tributary/signing_protocol.rs index a90ed479..af334149 100644 --- a/coordinator/src/tributary/signing_protocol.rs +++ b/coordinator/src/tributary/signing_protocol.rs @@ -55,7 +55,7 @@ */ use core::ops::Deref; -use std::collections::HashMap; +use std::collections::{HashSet, HashMap}; use zeroize::{Zeroize, Zeroizing}; @@ -63,10 +63,7 @@ use rand_core::OsRng; use blake2::{Digest, Blake2s256}; -use ciphersuite::{ - group::{ff::PrimeField, GroupEncoding}, - Ciphersuite, Ristretto, -}; +use ciphersuite::{group::ff::PrimeField, Ciphersuite, Ristretto}; use frost::{ FrostError, dkg::{Participant, musig::musig}, @@ -77,10 +74,8 @@ use frost_schnorrkel::Schnorrkel; use scale::Encode; -use serai_client::{ - Public, - validator_sets::primitives::{KeyPair, musig_context, set_keys_message}, -}; +#[rustfmt::skip] +use serai_client::validator_sets::primitives::{ValidatorSet, KeyPair, musig_context, set_keys_message}; use serai_db::*; @@ -89,6 +84,7 @@ use crate::tributary::TributarySpec; create_db!( SigningProtocolDb { CachedPreprocesses: (context: &impl Encode) -> [u8; 32] + DataSignedWith: (context: &impl Encode) -> (Vec, HashMap>), } ); @@ -117,16 +113,22 @@ impl SigningProtocol<'_, T, C> { }; let encryption_key_slice: &mut [u8] = encryption_key.as_mut(); - let algorithm = Schnorrkel::new(b"substrate"); + // Create the MuSig keys let keys: ThresholdKeys = musig(&musig_context(self.spec.set()), self.key, participants) .expect("signing for a set we aren't in/validator present multiple times") .into(); + // Define the algorithm + let algorithm = Schnorrkel::new(b"substrate"); + + // Check if we've prior preprocessed if CachedPreprocesses::get(self.txn, &self.context).is_none() { + // If we haven't, we create a machine solely to obtain the preprocess with let (machine, _) = AlgorithmMachine::new(algorithm.clone(), keys.clone()).preprocess(&mut OsRng); + // Cache and save the preprocess to disk let mut cache = machine.cache(); assert_eq!(cache.0.len(), 32); #[allow(clippy::needless_range_loop)] @@ -137,13 +139,15 @@ impl SigningProtocol<'_, T, C> { CachedPreprocesses::set(self.txn, &self.context, &cache.0); } + // We're now guaranteed to have the preprocess, hence why this `unwrap` is safe let cached = CachedPreprocesses::get(self.txn, &self.context).unwrap(); - let mut cached: Zeroizing<[u8; 32]> = Zeroizing::new(cached); + let mut cached = Zeroizing::new(cached); #[allow(clippy::needless_range_loop)] for b in 0 .. 
32 { cached[b] ^= encryption_key_slice[b]; } encryption_key_slice.zeroize(); + // Create the machine from the cached preprocess let (machine, preprocess) = AlgorithmSignMachine::from_cache(algorithm, keys, CachedPreprocess(cached)); @@ -156,8 +160,29 @@ impl SigningProtocol<'_, T, C> { mut serialized_preprocesses: HashMap<Participant, Vec<u8>>, msg: &[u8], ) -> Result<(AlgorithmSignatureMachine<Ristretto, Schnorrkel>, [u8; 32]), Participant> { - let machine = self.preprocess_internal(participants).0; + // We can't clear the preprocess as we still need it to accumulate all of the shares + // We do save the message we signed so any future calls with distinct messages panic + // This assumes the txn deciding this data is committed before the share is broadcast + if let Some((existing_msg, existing_preprocesses)) = + DataSignedWith::get(self.txn, &self.context) + { + assert_eq!(msg, &existing_msg, "obtaining a signature share for a distinct message"); + assert_eq!( + &serialized_preprocesses, &existing_preprocesses, + "obtaining a signature share with a distinct set of preprocesses" + ); + } else { + DataSignedWith::set( + self.txn, + &self.context, + &(msg.to_vec(), serialized_preprocesses.clone()), + ); + } + // Get the preprocessed machine + let (machine, _) = self.preprocess_internal(participants); + + // Deserialize all the preprocesses let mut participants = serialized_preprocesses.keys().copied().collect::<Vec<_>>(); participants.sort(); let mut preprocesses = HashMap::new(); @@ -170,13 +195,14 @@ impl SigningProtocol<'_, T, C> { ); } + // Sign the share let (machine, share) = machine.sign(preprocesses, msg).map_err(|e| match e { FrostError::InternalError(e) => unreachable!("FrostError::InternalError {e}"), FrostError::InvalidParticipant(_, _) | FrostError::InvalidSigningSet(_) | FrostError::InvalidParticipantQuantity(_, _) | FrostError::DuplicatedParticipant(_) | - FrostError::MissingParticipant(_) => unreachable!("{e:?}"), + FrostError::MissingParticipant(_) => panic!("unexpected error during sign: {e:?}"), FrostError::InvalidPreprocess(p) | FrostError::InvalidShare(p) => p, })?; @@ -207,24 +233,24 @@ impl SigningProtocol<'_, T, C> { } // Get the keys of the participants, noted by their threshold is, and return a new map indexed by -// the MuSig is. +// their MuSig is.
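Since the renumbering in the function below is easy to misread, here is a self-contained sketch of the mapping it performs (not part of the patch; plain u16s stand in for frost's Participant type, and `musig_indices` is a hypothetical helper). Participants are keyed by their possibly non-contiguous threshold i's, while MuSig requires contiguous indices starting at 1, so whoever actually participated is sorted by threshold i and renumbered:

  // Sketch: sort the participating threshold i's, then assign MuSig i's 1 ..= n
  fn musig_indices(mut threshold_is: Vec<u16>) -> Vec<(u16, u16)> {
    threshold_is.sort();
    threshold_is
      .into_iter()
      .enumerate()
      // 0-indexed enumeration to 1-indexed MuSig i
      .map(|(raw_i, threshold_i)| (threshold_i, u16::try_from(raw_i).unwrap() + 1))
      .collect()
  }

  fn main() {
    // Validators with threshold i's 2, 5, and 9 participated; MuSig indexes them 1, 2, 3
    assert_eq!(musig_indices(vec![9, 2, 5]), vec![(2, 1), (5, 2), (9, 3)]);
  }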
fn threshold_i_map_to_keys_and_musig_i_map( spec: &TributarySpec, - removed: &[<Ristretto as Ciphersuite>::G], our_key: &Zeroizing<<Ristretto as Ciphersuite>::F>, mut map: HashMap<Participant, Vec<u8>>, ) -> (Vec<<Ristretto as Ciphersuite>::G>, HashMap<Participant, Vec<u8>>) { // Insert our own index so calculations aren't offset let our_threshold_i = spec - .i(removed, <Ristretto as Ciphersuite>::generator() * our_key.deref()) - .expect("MuSig t-of-n signing a for a protocol we were removed from") + .i(<Ristretto as Ciphersuite>::generator() * our_key.deref()) + .expect("not in a set we're signing for") .start; + // Asserts we weren't unexpectedly already present assert!(map.insert(our_threshold_i, vec![]).is_none()); let spec_validators = spec.validators(); let key_from_threshold_i = |threshold_i| { for (key, _) in &spec_validators { - if threshold_i == spec.i(removed, *key).expect("MuSig t-of-n participant was removed").start { + if threshold_i == spec.i(*key).expect("validator wasn't in a set they're in").start { return *key; } } @@ -235,29 +261,37 @@ fn threshold_i_map_to_keys_and_musig_i_map( let mut threshold_is = map.keys().copied().collect::<Vec<_>>(); threshold_is.sort(); for threshold_i in threshold_is { - sorted.push((key_from_threshold_i(threshold_i), map.remove(&threshold_i).unwrap())); + sorted.push(( + threshold_i, + key_from_threshold_i(threshold_i), + map.remove(&threshold_i).unwrap(), + )); } // Now that signers are sorted, with their shares, create a map with the is needed for MuSig let mut participants = vec![]; let mut map = HashMap::new(); - for (raw_i, (key, share)) in sorted.into_iter().enumerate() { - let musig_i = u16::try_from(raw_i).unwrap() + 1; + let mut our_musig_i = None; + for (raw_i, (threshold_i, key, share)) in sorted.into_iter().enumerate() { + let musig_i = Participant::new(u16::try_from(raw_i).unwrap() + 1).unwrap(); + if threshold_i == our_threshold_i { + our_musig_i = Some(musig_i); + } participants.push(key); - map.insert(Participant::new(musig_i).unwrap(), share); + map.insert(musig_i, share); } - map.remove(&our_threshold_i).unwrap(); + map.remove(&our_musig_i.unwrap()).unwrap(); (participants, map) } -type DkgConfirmerSigningProtocol<'a, T> = SigningProtocol<'a, T, (&'static [u8; 12], u32)>; +type DkgConfirmerSigningProtocol<'a, T> = SigningProtocol<'a, T, (&'static [u8; 12], ValidatorSet, u32)>; pub(crate) struct DkgConfirmer<'a, T: DbTxn> { key: &'a Zeroizing<<Ristretto as Ciphersuite>::F>, spec: &'a TributarySpec, - removed: Vec<<Ristretto as Ciphersuite>::G>, txn: &'a mut T, attempt: u32, } @@ -268,19 +302,19 @@ impl DkgConfirmer<'_, T> { spec: &'a TributarySpec, txn: &'a mut T, attempt: u32, - ) -> Option<DkgConfirmer<'a, T>> { - // This relies on how confirmations are inlined into the DKG protocol and they accordingly - // share attempts - let removed = crate::tributary::removed_as_of_dkg_attempt(txn, spec.genesis(), attempt)?; - Some(DkgConfirmer { key, spec, removed, txn, attempt }) + ) -> DkgConfirmer<'a, T> { + DkgConfirmer { key, spec, txn, attempt } } + fn signing_protocol(&mut self) -> DkgConfirmerSigningProtocol<'_, T> { - let context = (b"DkgConfirmer", self.attempt); + let context = (b"DkgConfirmer", self.spec.set(), self.attempt); SigningProtocol { key: self.key, spec: self.spec, txn: self.txn, context } } fn preprocess_internal(&mut self) -> (AlgorithmSignMachine<Ristretto, Schnorrkel>, [u8; 64]) { - let participants = self.spec.validators().iter().map(|val| val.0).collect::<Vec<_>>(); + // This preprocesses with just us as we only decide the participants after obtaining + // preprocesses + let participants = vec![<Ristretto as Ciphersuite>::generator() * self.key.deref()]; self.signing_protocol().preprocess_internal(&participants) } // Get the preprocess for this confirmation.
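Taken together, DkgConfirmer now drives a two-round, t-of-n confirmation. A rough usage sketch under stated assumptions (not part of the patch; `confirmation_flow_sketch` is hypothetical, presumes it lives in this module so the existing imports apply, and takes the accumulated nonce/share maps from the surrounding Tributary handlers):

  // Sketch: one validator's path through the confirmation rounds, with the
  // RemoveParticipant handling of Err(participant) elided
  fn confirmation_flow_sketch<T: DbTxn>(
    key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
    spec: &TributarySpec,
    txn: &mut T,
    attempt: u32,
    key_pair: &KeyPair,
    all_nonces: HashMap<Participant, Vec<u8>>,
    all_shares: HashMap<Participant, Vec<u8>>,
  ) -> Result<[u8; 64], Participant> {
    // Round 1: publish our nonces, preprocessed with just our key, as the actual
    // signing set is only fixed once t-worth of nonces appear on-chain
    let _nonces: [u8; 64] = DkgConfirmer::new(key, spec, txn, attempt).preprocess();
    // Round 2: given everyone's nonces, publish our share over the generated key pair
    let _share: [u8; 32] =
      DkgConfirmer::new(key, spec, txn, attempt).share(all_nonces.clone(), key_pair)?;
    // Completion: aggregate the 64-byte MuSig signature used by set_keys
    DkgConfirmer::new(key, spec, txn, attempt).complete(all_nonces, key_pair, all_shares)
  }

In practice these calls are split across distinct Tributary transactions (DkgConfirmationNonces, then DkgConfirmationShare) rather than made back-to-back.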
@@ -293,14 +327,9 @@ impl DkgConfirmer<'_, T> { preprocesses: HashMap>, key_pair: &KeyPair, ) -> Result<(AlgorithmSignatureMachine, [u8; 32]), Participant> { - let participants = self.spec.validators().iter().map(|val| val.0).collect::>(); - let preprocesses = - threshold_i_map_to_keys_and_musig_i_map(self.spec, &self.removed, self.key, preprocesses).1; - let msg = set_keys_message( - &self.spec.set(), - &self.removed.iter().map(|key| Public(key.to_bytes())).collect::>(), - key_pair, - ); + let (participants, preprocesses) = + threshold_i_map_to_keys_and_musig_i_map(self.spec, self.key, preprocesses); + let msg = set_keys_message(&self.spec.set(), key_pair); self.signing_protocol().share_internal(&participants, preprocesses, &msg) } // Get the share for this confirmation, if the preprocesses are valid. @@ -318,8 +347,9 @@ impl DkgConfirmer<'_, T> { key_pair: &KeyPair, shares: HashMap>, ) -> Result<[u8; 64], Participant> { - let shares = - threshold_i_map_to_keys_and_musig_i_map(self.spec, &self.removed, self.key, shares).1; + assert_eq!(preprocesses.keys().collect::>(), shares.keys().collect::>()); + + let shares = threshold_i_map_to_keys_and_musig_i_map(self.spec, self.key, shares).1; let machine = self .share_internal(preprocesses, key_pair) diff --git a/coordinator/src/tributary/spec.rs b/coordinator/src/tributary/spec.rs index 92905490..efc792e6 100644 --- a/coordinator/src/tributary/spec.rs +++ b/coordinator/src/tributary/spec.rs @@ -9,7 +9,7 @@ use frost::Participant; use scale::Encode; use borsh::{BorshSerialize, BorshDeserialize}; -use serai_client::{primitives::PublicKey, validator_sets::primitives::ValidatorSet}; +use serai_client::validator_sets::primitives::ValidatorSet; fn borsh_serialize_validators( validators: &Vec<(::G, u16)>, @@ -49,6 +49,7 @@ pub struct TributarySpec { deserialize_with = "borsh_deserialize_validators" )] validators: Vec<(::G, u16)>, + evrf_public_keys: Vec<([u8; 32], Vec)>, } impl TributarySpec { @@ -56,16 +57,10 @@ impl TributarySpec { serai_block: [u8; 32], start_time: u64, set: ValidatorSet, - set_participants: Vec<(PublicKey, u16)>, + validators: Vec<(::G, u16)>, + evrf_public_keys: Vec<([u8; 32], Vec)>, ) -> TributarySpec { - let mut validators = vec![]; - for (participant, shares) in set_participants { - let participant = ::read_G::<&[u8]>(&mut participant.0.as_ref()) - .expect("invalid key registered as participant"); - validators.push((participant, shares)); - } - - Self { serai_block, start_time, set, validators } + Self { serai_block, start_time, set, validators, evrf_public_keys } } pub fn set(&self) -> ValidatorSet { @@ -88,24 +83,15 @@ impl TributarySpec { self.start_time } - pub fn n(&self, removed_validators: &[::G]) -> u16 { - self - .validators - .iter() - .map(|(validator, weight)| if removed_validators.contains(validator) { 0 } else { *weight }) - .sum() + pub fn n(&self) -> u16 { + self.validators.iter().map(|(_, weight)| *weight).sum() } pub fn t(&self) -> u16 { - // t doesn't change with regards to the amount of removed validators - ((2 * self.n(&[])) / 3) + 1 + ((2 * self.n()) / 3) + 1 } - pub fn i( - &self, - removed_validators: &[::G], - key: ::G, - ) -> Option> { + pub fn i(&self, key: ::G) -> Option> { let mut all_is = HashMap::new(); let mut i = 1; for (validator, weight) in &self.validators { @@ -116,34 +102,12 @@ impl TributarySpec { i += weight; } - let original_i = all_is.get(&key)?.clone(); - let mut result_i = original_i.clone(); - for removed_validator in removed_validators { - let removed_i = all_is - 
.get(removed_validator) - .expect("removed validator wasn't present in set to begin with"); - // If the queried key was removed, return None - if &original_i == removed_i { - return None; - } - - // If the removed was before the queried, shift the queried down accordingly - if removed_i.start < original_i.start { - let removed_shares = u16::from(removed_i.end) - u16::from(removed_i.start); - result_i.start = Participant::new(u16::from(original_i.start) - removed_shares).unwrap(); - result_i.end = Participant::new(u16::from(original_i.end) - removed_shares).unwrap(); - } - } - Some(result_i) + Some(all_is.get(&key)?.clone()) } - pub fn reverse_lookup_i( - &self, - removed_validators: &[::G], - i: Participant, - ) -> Option<::G> { + pub fn reverse_lookup_i(&self, i: Participant) -> Option<::G> { for (validator, _) in &self.validators { - if self.i(removed_validators, *validator).map_or(false, |range| range.contains(&i)) { + if self.i(*validator).map_or(false, |range| range.contains(&i)) { return Some(*validator); } } @@ -153,4 +117,8 @@ impl TributarySpec { pub fn validators(&self) -> Vec<(::G, u64)> { self.validators.iter().map(|(validator, weight)| (*validator, u64::from(*weight))).collect() } + + pub fn evrf_public_keys(&self) -> Vec<([u8; 32], Vec)> { + self.evrf_public_keys.clone() + } } diff --git a/coordinator/src/tributary/transaction.rs b/coordinator/src/tributary/transaction.rs index 8d8bdd4c..860dbd0f 100644 --- a/coordinator/src/tributary/transaction.rs +++ b/coordinator/src/tributary/transaction.rs @@ -12,7 +12,6 @@ use ciphersuite::{ Ciphersuite, Ristretto, }; use schnorr::SchnorrSignature; -use frost::Participant; use scale::{Encode, Decode}; use processor_messages::coordinator::SubstrateSignableId; @@ -130,32 +129,26 @@ impl SignData { #[derive(Clone, PartialEq, Eq)] pub enum Transaction { - RemoveParticipantDueToDkg { + RemoveParticipant { participant: ::G, signed: Signed, }, - DkgCommitments { - attempt: u32, - commitments: Vec>, + DkgParticipation { + participation: Vec, signed: Signed, }, - DkgShares { + DkgConfirmationNonces { + // The confirmation attempt attempt: u32, - // Sending Participant, Receiving Participant, Share - shares: Vec>>, + // The nonces for DKG confirmation attempt #attempt confirmation_nonces: [u8; 64], signed: Signed, }, - InvalidDkgShare { - attempt: u32, - accuser: Participant, - faulty: Participant, - blame: Option>, - signed: Signed, - }, - DkgConfirmed { + DkgConfirmationShare { + // The confirmation attempt attempt: u32, + // The share for DKG confirmation attempt #attempt confirmation_share: [u8; 32], signed: Signed, }, @@ -197,29 +190,22 @@ pub enum Transaction { impl Debug for Transaction { fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> { match self { - Transaction::RemoveParticipantDueToDkg { participant, signed } => fmt - .debug_struct("Transaction::RemoveParticipantDueToDkg") + Transaction::RemoveParticipant { participant, signed } => fmt + .debug_struct("Transaction::RemoveParticipant") .field("participant", &hex::encode(participant.to_bytes())) .field("signer", &hex::encode(signed.signer.to_bytes())) .finish_non_exhaustive(), - Transaction::DkgCommitments { attempt, commitments: _, signed } => fmt - .debug_struct("Transaction::DkgCommitments") + Transaction::DkgParticipation { signed, .. } => fmt + .debug_struct("Transaction::DkgParticipation") + .field("signer", &hex::encode(signed.signer.to_bytes())) + .finish_non_exhaustive(), + Transaction::DkgConfirmationNonces { attempt, signed, .. 
} => fmt + .debug_struct("Transaction::DkgConfirmationNonces") .field("attempt", attempt) .field("signer", &hex::encode(signed.signer.to_bytes())) .finish_non_exhaustive(), - Transaction::DkgShares { attempt, signed, .. } => fmt - .debug_struct("Transaction::DkgShares") - .field("attempt", attempt) - .field("signer", &hex::encode(signed.signer.to_bytes())) - .finish_non_exhaustive(), - Transaction::InvalidDkgShare { attempt, accuser, faulty, .. } => fmt - .debug_struct("Transaction::InvalidDkgShare") - .field("attempt", attempt) - .field("accuser", accuser) - .field("faulty", faulty) - .finish_non_exhaustive(), - Transaction::DkgConfirmed { attempt, confirmation_share: _, signed } => fmt - .debug_struct("Transaction::DkgConfirmed") + Transaction::DkgConfirmationShare { attempt, signed, .. } => fmt + .debug_struct("Transaction::DkgConfirmationShare") .field("attempt", attempt) .field("signer", &hex::encode(signed.signer.to_bytes())) .finish_non_exhaustive(), @@ -261,43 +247,32 @@ impl ReadWrite for Transaction { reader.read_exact(&mut kind)?; match kind[0] { - 0 => Ok(Transaction::RemoveParticipantDueToDkg { + 0 => Ok(Transaction::RemoveParticipant { participant: Ristretto::read_G(reader)?, signed: Signed::read_without_nonce(reader, 0)?, }), 1 => { - let mut attempt = [0; 4]; - reader.read_exact(&mut attempt)?; - let attempt = u32::from_le_bytes(attempt); + let participation = { + let mut participation_len = [0; 4]; + reader.read_exact(&mut participation_len)?; + let participation_len = u32::from_le_bytes(participation_len); - let commitments = { - let mut commitments_len = [0; 1]; - reader.read_exact(&mut commitments_len)?; - let commitments_len = usize::from(commitments_len[0]); - if commitments_len == 0 { - Err(io::Error::other("zero commitments in DkgCommitments"))?; - } - - let mut each_commitments_len = [0; 2]; - reader.read_exact(&mut each_commitments_len)?; - let each_commitments_len = usize::from(u16::from_le_bytes(each_commitments_len)); - if (commitments_len * each_commitments_len) > TRANSACTION_SIZE_LIMIT { + if participation_len > u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() { Err(io::Error::other( - "commitments present in transaction exceeded transaction size limit", + "participation present in transaction exceeded transaction size limit", ))?; } - let mut commitments = vec![vec![]; commitments_len]; - for commitments in &mut commitments { - *commitments = vec![0; each_commitments_len]; - reader.read_exact(commitments)?; - } - commitments + let participation_len = usize::try_from(participation_len).unwrap(); + + let mut participation = vec![0; participation_len]; + reader.read_exact(&mut participation)?; + participation }; let signed = Signed::read_without_nonce(reader, 0)?; - Ok(Transaction::DkgCommitments { attempt, commitments, signed }) + Ok(Transaction::DkgParticipation { participation, signed }) } 2 => { @@ -305,36 +280,12 @@ impl ReadWrite for Transaction { reader.read_exact(&mut attempt)?; let attempt = u32::from_le_bytes(attempt); - let shares = { - let mut share_quantity = [0; 1]; - reader.read_exact(&mut share_quantity)?; - - let mut key_share_quantity = [0; 1]; - reader.read_exact(&mut key_share_quantity)?; - - let mut share_len = [0; 2]; - reader.read_exact(&mut share_len)?; - let share_len = usize::from(u16::from_le_bytes(share_len)); - - let mut all_shares = vec![]; - for _ in 0 .. share_quantity[0] { - let mut shares = vec![]; - for _ in 0 .. 
key_share_quantity[0] { - let mut share = vec![0; share_len]; - reader.read_exact(&mut share)?; - shares.push(share); - } - all_shares.push(shares); - } - all_shares - }; - let mut confirmation_nonces = [0; 64]; reader.read_exact(&mut confirmation_nonces)?; - let signed = Signed::read_without_nonce(reader, 1)?; + let signed = Signed::read_without_nonce(reader, 0)?; - Ok(Transaction::DkgShares { attempt, shares, confirmation_nonces, signed }) + Ok(Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed }) } 3 => { @@ -342,53 +293,21 @@ impl ReadWrite for Transaction { reader.read_exact(&mut attempt)?; let attempt = u32::from_le_bytes(attempt); - let mut accuser = [0; 2]; - reader.read_exact(&mut accuser)?; - let accuser = Participant::new(u16::from_le_bytes(accuser)) - .ok_or_else(|| io::Error::other("invalid participant in InvalidDkgShare"))?; - - let mut faulty = [0; 2]; - reader.read_exact(&mut faulty)?; - let faulty = Participant::new(u16::from_le_bytes(faulty)) - .ok_or_else(|| io::Error::other("invalid participant in InvalidDkgShare"))?; - - let mut blame_len = [0; 2]; - reader.read_exact(&mut blame_len)?; - let mut blame = vec![0; u16::from_le_bytes(blame_len).into()]; - reader.read_exact(&mut blame)?; - - // This shares a nonce with DkgConfirmed as only one is expected - let signed = Signed::read_without_nonce(reader, 2)?; - - Ok(Transaction::InvalidDkgShare { - attempt, - accuser, - faulty, - blame: Some(blame).filter(|blame| !blame.is_empty()), - signed, - }) - } - - 4 => { - let mut attempt = [0; 4]; - reader.read_exact(&mut attempt)?; - let attempt = u32::from_le_bytes(attempt); - let mut confirmation_share = [0; 32]; reader.read_exact(&mut confirmation_share)?; - let signed = Signed::read_without_nonce(reader, 2)?; + let signed = Signed::read_without_nonce(reader, 1)?; - Ok(Transaction::DkgConfirmed { attempt, confirmation_share, signed }) + Ok(Transaction::DkgConfirmationShare { attempt, confirmation_share, signed }) } - 5 => { + 4 => { let mut block = [0; 32]; reader.read_exact(&mut block)?; Ok(Transaction::CosignSubstrateBlock(block)) } - 6 => { + 5 => { let mut block = [0; 32]; reader.read_exact(&mut block)?; let mut batch = [0; 4]; @@ -396,16 +315,16 @@ impl ReadWrite for Transaction { Ok(Transaction::Batch { block, batch: u32::from_le_bytes(batch) }) } - 7 => { + 6 => { let mut block = [0; 8]; reader.read_exact(&mut block)?; Ok(Transaction::SubstrateBlock(u64::from_le_bytes(block))) } - 8 => SignData::read(reader).map(Transaction::SubstrateSign), - 9 => SignData::read(reader).map(Transaction::Sign), + 7 => SignData::read(reader).map(Transaction::SubstrateSign), + 8 => SignData::read(reader).map(Transaction::Sign), - 10 => { + 9 => { let mut plan = [0; 32]; reader.read_exact(&mut plan)?; @@ -420,7 +339,7 @@ impl ReadWrite for Transaction { Ok(Transaction::SignCompleted { plan, tx_hash, first_signer, signature }) } - 11 => { + 10 => { let mut len = [0]; reader.read_exact(&mut len)?; let len = len[0]; @@ -445,109 +364,59 @@ impl ReadWrite for Transaction { fn write(&self, writer: &mut W) -> io::Result<()> { match self { - Transaction::RemoveParticipantDueToDkg { participant, signed } => { + Transaction::RemoveParticipant { participant, signed } => { writer.write_all(&[0])?; writer.write_all(&participant.to_bytes())?; signed.write_without_nonce(writer) } - Transaction::DkgCommitments { attempt, commitments, signed } => { + Transaction::DkgParticipation { participation, signed } => { writer.write_all(&[1])?; - writer.write_all(&attempt.to_le_bytes())?; - if 
commitments.is_empty() { - Err(io::Error::other("zero commitments in DkgCommitments"))? - } - writer.write_all(&[u8::try_from(commitments.len()).unwrap()])?; - for commitments_i in commitments { - if commitments_i.len() != commitments[0].len() { - Err(io::Error::other("commitments of differing sizes in DkgCommitments"))? - } - } - writer.write_all(&u16::try_from(commitments[0].len()).unwrap().to_le_bytes())?; - for commitments in commitments { - writer.write_all(commitments)?; - } + writer.write_all(&u32::try_from(participation.len()).unwrap().to_le_bytes())?; + writer.write_all(participation)?; signed.write_without_nonce(writer) } - Transaction::DkgShares { attempt, shares, confirmation_nonces, signed } => { + Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed } => { writer.write_all(&[2])?; writer.write_all(&attempt.to_le_bytes())?; - - // `shares` is a Vec which is supposed to map to a HashMap>. Since we - // bound participants to 150, this conversion is safe if a valid in-memory transaction. - writer.write_all(&[u8::try_from(shares.len()).unwrap()])?; - // This assumes at least one share is being sent to another party - writer.write_all(&[u8::try_from(shares[0].len()).unwrap()])?; - let share_len = shares[0][0].len(); - // For BLS12-381 G2, this would be: - // - A 32-byte share - // - A 96-byte ephemeral key - // - A 128-byte signature - // Hence why this has to be u16 - writer.write_all(&u16::try_from(share_len).unwrap().to_le_bytes())?; - - for these_shares in shares { - assert_eq!(these_shares.len(), shares[0].len(), "amount of sent shares was variable"); - for share in these_shares { - assert_eq!(share.len(), share_len, "sent shares were of variable length"); - writer.write_all(share)?; - } - } - writer.write_all(confirmation_nonces)?; signed.write_without_nonce(writer) } - Transaction::InvalidDkgShare { attempt, accuser, faulty, blame, signed } => { + Transaction::DkgConfirmationShare { attempt, confirmation_share, signed } => { writer.write_all(&[3])?; writer.write_all(&attempt.to_le_bytes())?; - writer.write_all(&u16::from(*accuser).to_le_bytes())?; - writer.write_all(&u16::from(*faulty).to_le_bytes())?; - - // Flattens Some(vec![]) to None on the expectation no actual blame will be 0-length - assert!(blame.as_ref().map_or(1, Vec::len) != 0); - let blame_len = - u16::try_from(blame.as_ref().unwrap_or(&vec![]).len()).expect("blame exceeded 64 KB"); - writer.write_all(&blame_len.to_le_bytes())?; - writer.write_all(blame.as_ref().unwrap_or(&vec![]))?; - - signed.write_without_nonce(writer) - } - - Transaction::DkgConfirmed { attempt, confirmation_share, signed } => { - writer.write_all(&[4])?; - writer.write_all(&attempt.to_le_bytes())?; writer.write_all(confirmation_share)?; signed.write_without_nonce(writer) } Transaction::CosignSubstrateBlock(block) => { - writer.write_all(&[5])?; + writer.write_all(&[4])?; writer.write_all(block) } Transaction::Batch { block, batch } => { - writer.write_all(&[6])?; + writer.write_all(&[5])?; writer.write_all(block)?; writer.write_all(&batch.to_le_bytes()) } Transaction::SubstrateBlock(block) => { - writer.write_all(&[7])?; + writer.write_all(&[6])?; writer.write_all(&block.to_le_bytes()) } Transaction::SubstrateSign(data) => { - writer.write_all(&[8])?; + writer.write_all(&[7])?; data.write(writer) } Transaction::Sign(data) => { - writer.write_all(&[9])?; + writer.write_all(&[8])?; data.write(writer) } Transaction::SignCompleted { plan, tx_hash, first_signer, signature } => { - writer.write_all(&[10])?; + 
writer.write_all(&[9])?; writer.write_all(plan)?; writer .write_all(&[u8::try_from(tx_hash.len()).expect("tx hash length exceed 255 bytes")])?; @@ -556,7 +425,7 @@ impl ReadWrite for Transaction { signature.write(writer) } Transaction::SlashReport(points, signed) => { - writer.write_all(&[11])?; + writer.write_all(&[10])?; writer.write_all(&[u8::try_from(points.len()).unwrap()])?; for points in points { writer.write_all(&points.to_le_bytes())?; @@ -570,15 +439,16 @@ impl ReadWrite for Transaction { impl TransactionTrait for Transaction { fn kind(&self) -> TransactionKind<'_> { match self { - Transaction::RemoveParticipantDueToDkg { participant, signed } => { + Transaction::RemoveParticipant { participant, signed } => { TransactionKind::Signed((b"remove", participant.to_bytes()).encode(), signed) } - Transaction::DkgCommitments { attempt, commitments: _, signed } | - Transaction::DkgShares { attempt, signed, .. } | - Transaction::InvalidDkgShare { attempt, signed, .. } | - Transaction::DkgConfirmed { attempt, signed, .. } => { - TransactionKind::Signed((b"dkg", attempt).encode(), signed) + Transaction::DkgParticipation { signed, .. } => { + TransactionKind::Signed(b"dkg".to_vec(), signed) + } + Transaction::DkgConfirmationNonces { attempt, signed, .. } | + Transaction::DkgConfirmationShare { attempt, signed, .. } => { + TransactionKind::Signed((b"dkg_confirmation", attempt).encode(), signed) } Transaction::CosignSubstrateBlock(_) => TransactionKind::Provided("cosign"), @@ -645,11 +515,14 @@ impl Transaction { fn signed(tx: &mut Transaction) -> (u32, &mut Signed) { #[allow(clippy::match_same_arms)] // Doesn't make semantic sense here let nonce = match tx { - Transaction::RemoveParticipantDueToDkg { .. } => 0, + Transaction::RemoveParticipant { .. } => 0, - Transaction::DkgCommitments { .. } => 0, - Transaction::DkgShares { .. } => 1, - Transaction::InvalidDkgShare { .. } | Transaction::DkgConfirmed { .. } => 2, + Transaction::DkgParticipation { .. } => 0, + // Uses a nonce of 0 as it has an internal attempt counter we distinguish by + Transaction::DkgConfirmationNonces { .. } => 0, + // Uses a nonce of 1 due to internal attempt counter and due to following + // DkgConfirmationNonces + Transaction::DkgConfirmationShare { .. } => 1, Transaction::CosignSubstrateBlock(_) => panic!("signing CosignSubstrateBlock"), @@ -668,11 +541,10 @@ impl Transaction { nonce, #[allow(clippy::match_same_arms)] match tx { - Transaction::RemoveParticipantDueToDkg { ref mut signed, .. } | - Transaction::DkgCommitments { ref mut signed, .. } | - Transaction::DkgShares { ref mut signed, .. } | - Transaction::InvalidDkgShare { ref mut signed, .. } | - Transaction::DkgConfirmed { ref mut signed, .. } => signed, + Transaction::RemoveParticipant { ref mut signed, .. } | + Transaction::DkgParticipation { ref mut signed, .. } | + Transaction::DkgConfirmationNonces { ref mut signed, .. } => signed, + Transaction::DkgConfirmationShare { ref mut signed, .. } => signed, Transaction::CosignSubstrateBlock(_) => panic!("signing CosignSubstrateBlock"), diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 0ea74bfe..9b23dc6c 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -50,13 +50,17 @@ pub(crate) use crate::tendermint::*; pub mod tests; /// Size limit for an individual transaction. -pub const TRANSACTION_SIZE_LIMIT: usize = 3_000_000; +// This needs to be big enough to participate in a 101-of-150 eVRF DKG with each element taking +// `MAX_KEY_LEN`. 
This also needs to be big enough to participate in signing 520 Bitcoin inputs
+// with 49 key shares, and signing 120 Monero inputs with 49 key shares.
+// TODO: Add a test for these properties
+pub const TRANSACTION_SIZE_LIMIT: usize = 2_000_000;

/// Amount of transactions a single account may have in the mempool.
pub const ACCOUNT_MEMPOOL_LIMIT: u32 = 50;

/// Block size limit.
-// This targets a growth limit of roughly 45 GB a day, under load, in order to prevent a malicious
+// This targets a growth limit of roughly 30 GB a day, under load, in order to prevent a malicious
participant from flooding disks and causing out of space errors in other processes.
-pub const BLOCK_SIZE_LIMIT: usize = 3_001_000;
+pub const BLOCK_SIZE_LIMIT: usize = 2_001_000;

pub(crate) const TENDERMINT_MESSAGE: u8 = 0;
pub(crate) const TRANSACTION_MESSAGE: u8 = 1;

diff --git a/crypto/dkg/Cargo.toml b/crypto/dkg/Cargo.toml
index 7ed301f5..cde0d153 100644
--- a/crypto/dkg/Cargo.toml
+++ b/crypto/dkg/Cargo.toml
@@ -36,9 +36,26 @@ multiexp = { path = "../multiexp", version = "0.4", default-features = false }
schnorr = { package = "schnorr-signatures", path = "../schnorr", version = "^0.5.1", default-features = false }
dleq = { path = "../dleq", version = "^0.4.1", default-features = false }

+# eVRF DKG dependencies
+subtle = { version = "2", default-features = false, features = ["std"], optional = true }
+generic-array = { version = "1", default-features = false, features = ["alloc"], optional = true }
+blake2 = { version = "0.10", default-features = false, features = ["std"], optional = true }
+rand_chacha = { version = "0.3", default-features = false, features = ["std"], optional = true }
+generalized-bulletproofs = { path = "../evrf/generalized-bulletproofs", default-features = false, optional = true }
+ec-divisors = { path = "../evrf/divisors", default-features = false, optional = true }
+generalized-bulletproofs-circuit-abstraction = { path = "../evrf/circuit-abstraction", optional = true }
+generalized-bulletproofs-ec-gadgets = { path = "../evrf/ec-gadgets", optional = true }
+
+secq256k1 = { path = "../evrf/secq256k1", optional = true }
+embedwards25519 = { path = "../evrf/embedwards25519", optional = true }
+
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["getrandom"] }
+rand = { version = "0.8", default-features = false, features = ["std"] }

ciphersuite = { path = "../ciphersuite", default-features = false, features = ["ristretto"] }

+generalized-bulletproofs = { path = "../evrf/generalized-bulletproofs", features = ["tests"] }
+
+ec-divisors = { path = "../evrf/divisors", features = ["pasta"] }
+pasta_curves = "0.5"

[features]
std = [
@@ -62,5 +79,22 @@
  "dleq/serialize"
]
borsh = ["dep:borsh"]
+evrf = [
+  "std",
+
+  "dep:subtle",
+  "dep:generic-array",
+
+  "dep:blake2",
+  "dep:rand_chacha",
+
+  "dep:generalized-bulletproofs",
+  "dep:ec-divisors",
+  "dep:generalized-bulletproofs-circuit-abstraction",
+  "dep:generalized-bulletproofs-ec-gadgets",
+]
+evrf-secp256k1 = ["evrf", "ciphersuite/secp256k1", "secq256k1"]
+evrf-ed25519 = ["evrf", "ciphersuite/ed25519", "embedwards25519"]
+evrf-ristretto = ["evrf", "ciphersuite/ristretto", "embedwards25519"]
tests = ["rand_core/getrandom"]
default = ["std"]

diff --git a/crypto/dkg/src/encryption.rs b/crypto/dkg/src/encryption.rs
index 51cf6b06..1ad721f6 100644
--- a/crypto/dkg/src/encryption.rs
+++ b/crypto/dkg/src/encryption.rs
@@ -98,11 +98,11 @@ fn ecdh(private: &Zeroizing, public: C::G) -> Zeroizing {
-fn cipher(context: &str,
ecdh: &Zeroizing) -> ChaCha20 { +fn cipher(context: [u8; 32], ecdh: &Zeroizing) -> ChaCha20 { // Ideally, we'd box this transcript with ZAlloc, yet that's only possible on nightly // TODO: https://github.com/serai-dex/serai/issues/151 let mut transcript = RecommendedTranscript::new(b"DKG Encryption v0.2"); - transcript.append_message(b"context", context.as_bytes()); + transcript.append_message(b"context", context); transcript.domain_separate(b"encryption_key"); @@ -134,7 +134,7 @@ fn cipher(context: &str, ecdh: &Zeroizing) -> ChaCha20 { fn encrypt( rng: &mut R, - context: &str, + context: [u8; 32], from: Participant, to: C::G, mut msg: Zeroizing, @@ -197,7 +197,7 @@ impl EncryptedMessage { pub(crate) fn invalidate_msg( &mut self, rng: &mut R, - context: &str, + context: [u8; 32], from: Participant, ) { // Invalidate the message by specifying a new key/Schnorr PoP @@ -219,7 +219,7 @@ impl EncryptedMessage { pub(crate) fn invalidate_share_serialization( &mut self, rng: &mut R, - context: &str, + context: [u8; 32], from: Participant, to: C::G, ) { @@ -243,7 +243,7 @@ impl EncryptedMessage { pub(crate) fn invalidate_share_value( &mut self, rng: &mut R, - context: &str, + context: [u8; 32], from: Participant, to: C::G, ) { @@ -300,14 +300,14 @@ impl EncryptionKeyProof { // This still doesn't mean the DKG offers an authenticated channel. The per-message keys have no // root of trust other than their existence in the assumed-to-exist external authenticated channel. fn pop_challenge( - context: &str, + context: [u8; 32], nonce: C::G, key: C::G, sender: Participant, msg: &[u8], ) -> C::F { let mut transcript = RecommendedTranscript::new(b"DKG Encryption Key Proof of Possession v0.2"); - transcript.append_message(b"context", context.as_bytes()); + transcript.append_message(b"context", context); transcript.domain_separate(b"proof_of_possession"); @@ -323,9 +323,9 @@ fn pop_challenge( C::hash_to_F(b"DKG-encryption-proof_of_possession", &transcript.challenge(b"schnorr")) } -fn encryption_key_transcript(context: &str) -> RecommendedTranscript { +fn encryption_key_transcript(context: [u8; 32]) -> RecommendedTranscript { let mut transcript = RecommendedTranscript::new(b"DKG Encryption Key Correctness Proof v0.2"); - transcript.append_message(b"context", context.as_bytes()); + transcript.append_message(b"context", context); transcript } @@ -337,58 +337,17 @@ pub(crate) enum DecryptionError { InvalidProof, } -// A simple box for managing encryption. -#[derive(Clone)] -pub(crate) struct Encryption { - context: String, - i: Option, - enc_key: Zeroizing, - enc_pub_key: C::G, +// A simple box for managing decryption. 
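+// A rough usage sketch with illustrative locals (`enc_key_msg` is a received
+// `EncryptionKeyMessage`, `accusation_proof` a received `EncryptionKeyProof`; this is a
+// hedged sketch, not code from this patch):
+//   let mut decryption = Decryption::new(context);
+//   let msg = decryption.register(from, enc_key_msg);
+//   // When adjudicating an accusation, decrypt the contested share with the sender's key proof
+//   let share = decryption.decrypt_with_proof(from, decryptor, contested, Some(accusation_proof))?;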
+#[derive(Clone, Debug)] +pub(crate) struct Decryption { + context: [u8; 32], enc_keys: HashMap, } -impl fmt::Debug for Encryption { - fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { - fmt - .debug_struct("Encryption") - .field("context", &self.context) - .field("i", &self.i) - .field("enc_pub_key", &self.enc_pub_key) - .field("enc_keys", &self.enc_keys) - .finish_non_exhaustive() +impl Decryption { + pub(crate) fn new(context: [u8; 32]) -> Self { + Self { context, enc_keys: HashMap::new() } } -} - -impl Zeroize for Encryption { - fn zeroize(&mut self) { - self.enc_key.zeroize(); - self.enc_pub_key.zeroize(); - for (_, mut value) in self.enc_keys.drain() { - value.zeroize(); - } - } -} - -impl Encryption { - pub(crate) fn new( - context: String, - i: Option, - rng: &mut R, - ) -> Self { - let enc_key = Zeroizing::new(C::random_nonzero_F(rng)); - Self { - context, - i, - enc_pub_key: C::generator() * enc_key.deref(), - enc_key, - enc_keys: HashMap::new(), - } - } - - pub(crate) fn registration(&self, msg: M) -> EncryptionKeyMessage { - EncryptionKeyMessage { msg, enc_key: self.enc_pub_key } - } - pub(crate) fn register( &mut self, participant: Participant, @@ -402,13 +361,109 @@ impl Encryption { msg.msg } + // Given a message, and the intended decryptor, and a proof for its key, decrypt the message. + // Returns None if the key was wrong. + pub(crate) fn decrypt_with_proof( + &self, + from: Participant, + decryptor: Participant, + mut msg: EncryptedMessage, + // There's no encryption key proof if the accusation is of an invalid signature + proof: Option>, + ) -> Result, DecryptionError> { + if !msg.pop.verify( + msg.key, + pop_challenge::(self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()), + ) { + Err(DecryptionError::InvalidSignature)?; + } + + if let Some(proof) = proof { + // Verify this is the decryption key for this message + proof + .dleq + .verify( + &mut encryption_key_transcript(self.context), + &[C::generator(), msg.key], + &[self.enc_keys[&decryptor], *proof.key], + ) + .map_err(|_| DecryptionError::InvalidProof)?; + + cipher::(self.context, &proof.key).apply_keystream(msg.msg.as_mut().as_mut()); + Ok(msg.msg) + } else { + Err(DecryptionError::InvalidProof) + } + } +} + +// A simple box for managing encryption. 
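+// A matching sketch for the encrypting side (again illustrative, not part of the patch; the
+// recipient's key must be registered before encrypting to them):
+//   let encryption = Encryption::new(context, my_index, &mut rng);
+//   let to_broadcast = encryption.registration(msg);
+//   let encrypted = encryption.encrypt(&mut rng, to, share);
+//   // Once all encryption is done, reduce to the decryption half
+//   let decryption = encryption.into_decryption();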
+#[derive(Clone)] +pub(crate) struct Encryption { + context: [u8; 32], + i: Participant, + enc_key: Zeroizing, + enc_pub_key: C::G, + decryption: Decryption, +} + +impl fmt::Debug for Encryption { + fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { + fmt + .debug_struct("Encryption") + .field("context", &self.context) + .field("i", &self.i) + .field("enc_pub_key", &self.enc_pub_key) + .field("decryption", &self.decryption) + .finish_non_exhaustive() + } +} + +impl Zeroize for Encryption { + fn zeroize(&mut self) { + self.enc_key.zeroize(); + self.enc_pub_key.zeroize(); + for (_, mut value) in self.decryption.enc_keys.drain() { + value.zeroize(); + } + } +} + +impl Encryption { + pub(crate) fn new( + context: [u8; 32], + i: Participant, + rng: &mut R, + ) -> Self { + let enc_key = Zeroizing::new(C::random_nonzero_F(rng)); + Self { + context, + i, + enc_pub_key: C::generator() * enc_key.deref(), + enc_key, + decryption: Decryption::new(context), + } + } + + pub(crate) fn registration(&self, msg: M) -> EncryptionKeyMessage { + EncryptionKeyMessage { msg, enc_key: self.enc_pub_key } + } + + pub(crate) fn register( + &mut self, + participant: Participant, + msg: EncryptionKeyMessage, + ) -> M { + self.decryption.register(participant, msg) + } + pub(crate) fn encrypt( &self, rng: &mut R, participant: Participant, msg: Zeroizing, ) -> EncryptedMessage { - encrypt(rng, &self.context, self.i.unwrap(), self.enc_keys[&participant], msg) + encrypt(rng, self.context, self.i, self.decryption.enc_keys[&participant], msg) } pub(crate) fn decrypt( @@ -426,18 +481,18 @@ impl Encryption { batch, batch_id, msg.key, - pop_challenge::(&self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()), + pop_challenge::(self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()), ); let key = ecdh::(&self.enc_key, msg.key); - cipher::(&self.context, &key).apply_keystream(msg.msg.as_mut().as_mut()); + cipher::(self.context, &key).apply_keystream(msg.msg.as_mut().as_mut()); ( msg.msg, EncryptionKeyProof { key, dleq: DLEqProof::prove( rng, - &mut encryption_key_transcript(&self.context), + &mut encryption_key_transcript(self.context), &[C::generator(), msg.key], &self.enc_key, ), @@ -445,38 +500,7 @@ impl Encryption { ) } - // Given a message, and the intended decryptor, and a proof for its key, decrypt the message. - // Returns None if the key was wrong. 
- pub(crate) fn decrypt_with_proof(
-   &self,
-   from: Participant,
-   decryptor: Participant,
-   mut msg: EncryptedMessage,
-   // There's no encryption key proof if the accusation is of an invalid signature
-   proof: Option>,
- ) -> Result, DecryptionError> {
-   if !msg.pop.verify(
-     msg.key,
-     pop_challenge::(&self.context, msg.pop.R, msg.key, from, msg.msg.deref().as_ref()),
-   ) {
-     Err(DecryptionError::InvalidSignature)?;
-   }
-
-   if let Some(proof) = proof {
-     // Verify this is the decryption key for this message
-     proof
-       .dleq
-       .verify(
-         &mut encryption_key_transcript(&self.context),
-         &[C::generator(), msg.key],
-         &[self.enc_keys[&decryptor], *proof.key],
-       )
-       .map_err(|_| DecryptionError::InvalidProof)?;
-
-     cipher::(&self.context, &proof.key).apply_keystream(msg.msg.as_mut().as_mut());
-     Ok(msg.msg)
-   } else {
-     Err(DecryptionError::InvalidProof)
-   }
+ pub(crate) fn into_decryption(self) -> Decryption {
+   self.decryption
  }
}
diff --git a/crypto/dkg/src/evrf/mod.rs b/crypto/dkg/src/evrf/mod.rs
new file mode 100644
index 00000000..3d043138
--- /dev/null
+++ b/crypto/dkg/src/evrf/mod.rs
@@ -0,0 +1,584 @@
+/*
+ We implement a DKG using an eVRF, as detailed in the eVRF paper. For the eVRF itself, we do not
+ use a Paillier-based construction, nor the detailed construction premised on a Bulletproof.
+
+ For reference, the detailed construction premised on a Bulletproof involves two curves, notated
+ here as `C` and `E`, where the scalar field of `C` is the field of `E`. Accordingly, Bulletproofs
+ over `C` can efficiently perform group operations of points of curve `E`. Each participant has a
+ private point (`P_i`) on curve `E` committed to over curve `C`. The eVRF selects a pair of
+ scalars `a, b`, where the participant proves in-Bulletproof the points `A_i, B_i` are
+ `a * P_i, b * P_i`. The eVRF proceeds to commit to `A_i.x + B_i.x` in a Pedersen Commitment.
+
+ Our eVRF uses
+ [Generalized Bulletproofs](
+   https://repo.getmonero.org/monero-project/ccs-proposals
+     /uploads/a9baa50c38c6312efc0fea5c6a188bb9/gbp.pdf
+ ).
+ This allows us much larger witnesses without growing the reference string, and enables us to
+ efficiently sample challenges off in-circuit variables (via placing the variables in a vector
+ commitment, then challenging from a transcript of the commitments). We proceed to use
+ [elliptic curve divisors](
+   https://repo.getmonero.org/-/project/54/
+     uploads/eb1bf5b4d4855a3480c38abf895bd8e8/Veridise_Divisor_Proofs.pdf
+ )
+ (which require the ability to sample a challenge off in-circuit variables) to prove discrete
+ logarithms efficiently.
+
+ This is done via having a private scalar (`p_i`) on curve `E`, not a private point, and
+ publishing the public key for it (`P_i = p_i * G`, where `G` is a generator of `E`). The eVRF
+ samples two points with unknown discrete logarithms `A, B`, and the circuit proves a Pedersen
+ Commitment commits to `(p_i * A).x + (p_i * B).x`.
+
+ With the eVRF established, we now detail our other novel aspect. The eVRF paper expects secret
+ shares to be sent to the other parties yet does not detail a precise way to do so. If we
+ encrypted the secret shares with some stream cipher, each recipient would have to attest validity
+ or accuse the sender of impropriety. We want an encryption scheme where anyone can verify the
+ secret shares were encrypted properly, without additional info, efficiently.
+
+ Please note from the published commitments, it's possible to calculate a commitment to the
+ secret share each party should receive (`V_i`).
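+
+ As a concrete sketch (a hypothetical helper, not part of this file), `V_i` is the published
+ coefficient commitments evaluated at `i`:
+
+   // V_i = sum_j (i^j * C_j), via Horner's rule, mirroring `polynomial` below
+   fn commitment_to_share(coefficient_commitments: &[C::G], i: Participant) -> C::G {
+     let i = C::F::from(u64::from(u16::from(i)));
+     let mut res = C::G::identity();
+     for commitment in coefficient_commitments.iter().rev() {
+       res = (res * i) + *commitment;
+     }
+     res
+   }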
+
+ We have the sender sample two scalars per recipient, denoted `x_i, y_i` (where `i` is the
+ recipient index). They perform the eVRF to prove a Pedersen Commitment commits to
+ `z_i = (x_i * P_i).x + (y_i * P_i).x` and `x_i, y_i` are the discrete logarithms of `X_i, Y_i`
+ over `G`. They then publish the encrypted share `s_i + z_i` and `X_i, Y_i`.
+
+ The recipient is able to decrypt the share via calculating
+ `s_i - ((p_i * X_i).x + (p_i * Y_i).x)`.
+
+ To verify the secret share, we have the `F` terms of the Pedersen Commitments revealed (where
+ `F, H` are generators of `C`, `F` is used for binding and `H` for blinding). This already needs
+ to be done for the eVRF outputs used within the DKG, in order to obtain the commitments to the
+ coefficients. When we have the commitment `Z_i = z_i * F`, we simply
+ check `s_i * F = Z_i + V_i`.
+
+ In order to open the Pedersen Commitments to their `F` terms, we transcript the commitments and
+ the claimed openings, then assign random weights to each pair of `(commitment, opening)`. The
+ prover proves knowledge of the discrete logarithm of the sum of the weighted commitments, minus
+ the sum of the weighted openings, over `H`.
+
+ The benefit to this construction is that given a broadcast channel which is reliable and
+ ordered, only `t` messages must be broadcast from honest parties in order to create a `t`-of-`n`
+ multisig. If the encrypted secret shares were not verifiable, one would need at least `t + n`
+ messages to ensure every participant has a correct dealing and can participate in future
+ reconstructions of the secret. This would also require all `n` parties be online, whereas this is
+ robust to threshold `t`.
+*/
+
+use core::ops::Deref;
+use std::{
+  io::{self, Read, Write},
+  collections::{HashSet, HashMap},
+};
+
+use rand_core::{RngCore, CryptoRng};
+
+use zeroize::{Zeroize, Zeroizing};
+
+use blake2::{Digest, Blake2s256};
+use ciphersuite::{
+  group::{
+    ff::{Field, PrimeField},
+    Group, GroupEncoding,
+  },
+  Ciphersuite,
+};
+use multiexp::multiexp_vartime;
+
+use generalized_bulletproofs::arithmetic_circuit_proof::*;
+use ec_divisors::DivisorCurve;
+
+use crate::{Participant, ThresholdParams, Interpolation, ThresholdCore, ThresholdKeys};
+
+pub(crate) mod proof;
+use proof::*;
+pub use proof::{EvrfCurve, EvrfGenerators};
+
+/// Participation in the DKG.
+///
+/// `Participation` is meant to be broadcast to all other participants over an authenticated,
+/// reliable broadcast channel.
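+///
+/// A hedged lifecycle sketch (the 32-byte `context` and the key material are placeholders):
+/// ```ignore
+/// let participation =
+///   EvrfDkg::participate(rng, &generators, context, t, &evrf_public_keys, &my_evrf_key)?;
+/// let mut buf = vec![];
+/// participation.write(&mut buf)?;
+/// // The reader must know `n`, the amount of participants, to parse the encrypted shares
+/// let read_back = Participation::read(&mut buf.as_slice(), n)?;
+/// assert_eq!(read_back, participation);
+/// ```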
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub struct Participation {
+  proof: Vec,
+  encrypted_secret_shares: HashMap,
+}
+
+impl Participation {
+  pub fn read(reader: &mut R, n: u16) -> io::Result {
+    // TODO: Replace `len` with some calculation deterministic to the params
+    let mut len = [0; 4];
+    reader.read_exact(&mut len)?;
+    let len = usize::try_from(u32::from_le_bytes(len)).expect("<32-bit platform?");
+
+    // Don't allocate a buffer for the claimed length
+    // Read chunks until we reach the claimed length
+    // This means if we were told to read GB, we must actually be sent GB before allocating as such
+    const CHUNK_SIZE: usize = 1024;
+    let mut proof = Vec::with_capacity(len.min(CHUNK_SIZE));
+    while proof.len() < len {
+      let next_chunk = (len - proof.len()).min(CHUNK_SIZE);
+      let old_proof_len = proof.len();
+      proof.resize(old_proof_len + next_chunk, 0);
+      reader.read_exact(&mut proof[old_proof_len ..])?;
+    }
+
+    let mut encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
+    for i in (1 ..= n).map(Participant) {
+      encrypted_secret_shares.insert(i, C::read_F(reader)?);
+    }
+
+    Ok(Self { proof, encrypted_secret_shares })
+  }
+
+  pub fn write(&self, writer: &mut W) -> io::Result<()> {
+    writer.write_all(&u32::try_from(self.proof.len()).unwrap().to_le_bytes())?;
+    writer.write_all(&self.proof)?;
+    for i in (1 ..= u16::try_from(self.encrypted_secret_shares.len())
+      .expect("writing a Participation which has an n > u16::MAX"))
+      .map(Participant)
+    {
+      writer.write_all(self.encrypted_secret_shares[&i].to_repr().as_ref())?;
+    }
+    Ok(())
+  }
+}
+
+fn polynomial(
+  coefficients: &[Zeroizing],
+  l: Participant,
+) -> Zeroizing {
+  let l = F::from(u64::from(u16::from(l)));
+  // This should never be reached since Participant is explicitly non-zero
+  assert!(l != F::ZERO, "zero participant passed to polynomial");
+  let mut share = Zeroizing::new(F::ZERO);
+  for (idx, coefficient) in coefficients.iter().rev().enumerate() {
+    *share += coefficient.deref();
+    if idx != (coefficients.len() - 1) {
+      *share *= l;
+    }
+  }
+  share
+}
+
+#[allow(clippy::type_complexity)]
+fn share_verification_statements(
+  rng: &mut (impl RngCore + CryptoRng),
+  commitments: &[C::G],
+  n: u16,
+  encryption_commitments: &[C::G],
+  encrypted_secret_shares: &HashMap,
+) -> (C::F, Vec<(C::F, C::G)>) {
+  debug_assert_eq!(usize::from(n), encryption_commitments.len());
+  debug_assert_eq!(usize::from(n), encrypted_secret_shares.len());
+
+  let mut g_scalar = C::F::ZERO;
+  let mut pairs = Vec::with_capacity(commitments.len() + encryption_commitments.len());
+  for commitment in commitments {
+    pairs.push((C::F::ZERO, *commitment));
+  }
+
+  let mut weight;
+  for (i, enc_share) in encrypted_secret_shares {
+    let enc_commitment = encryption_commitments[usize::from(u16::from(*i)) - 1];
+
+    weight = C::F::random(&mut *rng);
+
+    // s_i F
+    g_scalar += weight * enc_share;
+    // - Z_i
+    let weight = -weight;
+    pairs.push((weight, enc_commitment));
+    // - V_i
+    {
+      let i = C::F::from(u64::from(u16::from(*i)));
+      // The first `commitments.len()` pairs are for the commitments
+      (0 .. commitments.len()).fold(weight, |exp, j| {
+        pairs[j].0 += exp;
+        exp * i
+      });
+    }
+  }
+
+  (g_scalar, pairs)
+}
+
+/// Errors from the eVRF DKG.
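+///
+/// These are returned when the parameters/inputs to `participate`/`verify` are themselves
+/// malformed. An invalid `Participation` from another party is not an error; `verify` reports
+/// such parties via `VerifyResult::Invalid` instead.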
+#[derive(Clone, PartialEq, Eq, Debug, thiserror::Error)] +pub enum EvrfError { + #[error("n, the amount of participants, exceeded a u16")] + TooManyParticipants, + #[error("the threshold t wasn't in range 1 <= t <= n")] + InvalidThreshold, + #[error("a public key was the identity point")] + PublicKeyWasIdentity, + #[error("participating in a DKG we aren't a participant in")] + NotAParticipant, + #[error("a participant with an unrecognized ID participated")] + NonExistentParticipant, + #[error("the passed in generators did not have enough generators for this DKG")] + NotEnoughGenerators, +} + +/// The result of calling EvrfDkg::verify. +pub enum VerifyResult { + Valid(EvrfDkg), + Invalid(Vec), + NotEnoughParticipants, +} + +/// Struct to perform/verify the DKG with. +#[derive(Debug)] +pub struct EvrfDkg { + t: u16, + n: u16, + evrf_public_keys: Vec<::G>, + group_key: C::G, + verification_shares: HashMap, + #[allow(clippy::type_complexity)] + encrypted_secret_shares: + HashMap::G; 2], C::F)>>, +} + +impl EvrfDkg { + // Form the initial transcript for the proofs. + fn initial_transcript( + invocation: [u8; 32], + evrf_public_keys: &[::G], + t: u16, + ) -> [u8; 32] { + let mut transcript = Blake2s256::new(); + transcript.update(invocation); + for key in evrf_public_keys { + transcript.update(key.to_bytes().as_ref()); + } + transcript.update(t.to_le_bytes()); + transcript.finalize().into() + } + + /// Participate in performing the DKG for the specified parameters. + /// + /// The context MUST be unique across invocations. Reuse of context will lead to sharing + /// prior-shared secrets. + /// + /// Public keys are not allowed to be the identity point. This will error if any are. + pub fn participate( + rng: &mut (impl RngCore + CryptoRng), + generators: &EvrfGenerators, + context: [u8; 32], + t: u16, + evrf_public_keys: &[::G], + evrf_private_key: &Zeroizing<::F>, + ) -> Result, EvrfError> { + let Ok(n) = u16::try_from(evrf_public_keys.len()) else { Err(EvrfError::TooManyParticipants)? 
}; + if (t == 0) || (t > n) { + Err(EvrfError::InvalidThreshold)?; + } + if evrf_public_keys.iter().any(|key| bool::from(key.is_identity())) { + Err(EvrfError::PublicKeyWasIdentity)?; + }; + let evrf_public_key = ::generator() * evrf_private_key.deref(); + if !evrf_public_keys.iter().any(|key| *key == evrf_public_key) { + Err(EvrfError::NotAParticipant)?; + }; + + let transcript = Self::initial_transcript(context, evrf_public_keys, t); + // Further bind to the participant index so each index gets unique generators + // This allows reusing eVRF public keys as the prover + let mut per_proof_transcript = Blake2s256::new(); + per_proof_transcript.update(transcript); + per_proof_transcript.update(evrf_public_key.to_bytes()); + + // The above transcript is expected to be binding to all arguments here + // The generators are constant to this ciphersuite's generator, and the parameters are + // transcripted + let EvrfProveResult { coefficients, encryption_masks, proof } = match Evrf::prove( + rng, + &generators.0, + per_proof_transcript.finalize().into(), + usize::from(t), + evrf_public_keys, + evrf_private_key, + ) { + Ok(res) => res, + Err(AcError::NotEnoughGenerators) => Err(EvrfError::NotEnoughGenerators)?, + Err( + AcError::DifferingLrLengths | + AcError::InconsistentAmountOfConstraints | + AcError::ConstrainedNonExistentTerm | + AcError::ConstrainedNonExistentCommitment | + AcError::InconsistentWitness | + AcError::Ip(_) | + AcError::IncompleteProof, + ) => { + panic!("failed to prove for the eVRF proof") + } + }; + + let mut encrypted_secret_shares = HashMap::with_capacity(usize::from(n)); + for (l, encryption_mask) in (1 ..= n).map(Participant).zip(encryption_masks) { + let share = polynomial::(&coefficients, l); + encrypted_secret_shares.insert(l, *share + *encryption_mask); + } + + Ok(Participation { proof, encrypted_secret_shares }) + } + + /// Check if a batch of `Participation`s are valid. + /// + /// If any `Participation` is invalid, the list of all invalid participants will be returned. + /// If all `Participation`s are valid and there's at least `t`, an instance of this struct + /// (usable to obtain a threshold share of generated key) is returned. If all are valid and + /// there's not at least `t`, `VerifyResult::NotEnoughParticipants` is returned. + /// + /// This DKG is unbiased if all `n` people participate. This DKG is biased if only a threshold + /// participate. + pub fn verify( + rng: &mut (impl RngCore + CryptoRng), + generators: &EvrfGenerators, + context: [u8; 32], + t: u16, + evrf_public_keys: &[::G], + participations: &HashMap>, + ) -> Result, EvrfError> { + let Ok(n) = u16::try_from(evrf_public_keys.len()) else { Err(EvrfError::TooManyParticipants)? 
};
+    if (t == 0) || (t > n) {
+      Err(EvrfError::InvalidThreshold)?;
+    }
+    if evrf_public_keys.iter().any(|key| bool::from(key.is_identity())) {
+      Err(EvrfError::PublicKeyWasIdentity)?;
+    };
+    for i in participations.keys() {
+      if u16::from(*i) > n {
+        Err(EvrfError::NonExistentParticipant)?;
+      }
+    }
+
+    let mut valid = HashMap::with_capacity(participations.len());
+    let mut faulty = HashSet::new();
+
+    let transcript = Self::initial_transcript(context, evrf_public_keys, t);
+
+    let mut evrf_verifier = generators.0.batch_verifier();
+    for (i, participation) in participations {
+      let evrf_public_key = evrf_public_keys[usize::from(u16::from(*i)) - 1];
+
+      let mut per_proof_transcript = Blake2s256::new();
+      per_proof_transcript.update(transcript);
+      per_proof_transcript.update(evrf_public_key.to_bytes());
+
+      // Clone the verifier so if this proof is faulty, it doesn't corrupt the verifier
+      let mut verifier_clone = evrf_verifier.clone();
+      let Ok(data) = Evrf::::verify(
+        rng,
+        &generators.0,
+        &mut verifier_clone,
+        per_proof_transcript.finalize().into(),
+        usize::from(t),
+        evrf_public_keys,
+        evrf_public_key,
+        &participation.proof,
+      ) else {
+        faulty.insert(*i);
+        continue;
+      };
+      evrf_verifier = verifier_clone;
+
+      valid.insert(*i, (participation.encrypted_secret_shares.clone(), data));
+    }
+    debug_assert_eq!(valid.len() + faulty.len(), participations.len());
+
+    // Perform the batch verification of the eVRFs
+    if !generators.0.verify(evrf_verifier) {
+      // If the batch failed, verify them each individually
+      for (i, participation) in participations {
+        if faulty.contains(i) {
+          continue;
+        }
+        let mut evrf_verifier = generators.0.batch_verifier();
+        Evrf::::verify(
+          rng,
+          &generators.0,
+          &mut evrf_verifier,
+          context,
+          usize::from(t),
+          evrf_public_keys,
+          evrf_public_keys[usize::from(u16::from(*i)) - 1],
+          &participation.proof,
+        )
+        .expect("evrf failed basic checks yet prover wasn't prior marked faulty");
+        if !generators.0.verify(evrf_verifier) {
+          valid.remove(i);
+          faulty.insert(*i);
+        }
+      }
+    }
+    debug_assert_eq!(valid.len() + faulty.len(), participations.len());
+
+    // Perform the batch verification of the shares
+    let mut sum_encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
+    let mut sum_masks = HashMap::with_capacity(usize::from(n));
+    let mut all_encrypted_secret_shares = HashMap::with_capacity(usize::from(t));
+    {
+      let mut share_verification_statements_actual = HashMap::with_capacity(valid.len());
+      if !{
+        let mut g_scalar = C::F::ZERO;
+        let mut pairs = Vec::with_capacity(valid.len() * (usize::from(t) + evrf_public_keys.len()));
+        for (i, (encrypted_secret_shares, data)) in &valid {
+          let (this_g_scalar, mut these_pairs) = share_verification_statements::(
+            &mut *rng,
+            &data.coefficients,
+            evrf_public_keys
+              .len()
+              .try_into()
+              .expect("n prior checked to be <= u16::MAX couldn't be converted to a u16"),
+            &data.encryption_commitments,
+            encrypted_secret_shares,
+          );
+          // Queue this into our batch
+          g_scalar += this_g_scalar;
+          pairs.extend(&these_pairs);

+          // Also push this g_scalar onto these_pairs so these_pairs can be verified individually
+          // upon error
+          these_pairs.push((this_g_scalar, generators.0.g()));
+          share_verification_statements_actual.insert(*i, these_pairs);
+
+          // Also format this data as we'd need it upon success
+          let mut formatted_encrypted_secret_shares = HashMap::with_capacity(usize::from(n));
+          for (j, enc_share) in encrypted_secret_shares {
+            /*
+              We calculate verification shares as the sum of the encrypted scalars, minus their
+              masks. This only does one scalar multiplication, and `1+t` point additions (with
+              one negation), and is accordingly much cheaper than interpolating the commitments.
+              This is only possible because we already interpolated the commitments to verify the
+              encrypted secret share.
+            */
+            let sum_encrypted_secret_share =
+              sum_encrypted_secret_shares.get(j).copied().unwrap_or(C::F::ZERO);
+            let sum_mask = sum_masks.get(j).copied().unwrap_or(C::G::identity());
+            sum_encrypted_secret_shares.insert(*j, sum_encrypted_secret_share + enc_share);
+
+            let j_index = usize::from(u16::from(*j)) - 1;
+            sum_masks.insert(*j, sum_mask + data.encryption_commitments[j_index]);
+
+            formatted_encrypted_secret_shares.insert(*j, (data.ecdh_keys[j_index], *enc_share));
+          }
+          all_encrypted_secret_shares.insert(*i, formatted_encrypted_secret_shares);
+        }
+        pairs.push((g_scalar, generators.0.g()));
+        bool::from(multiexp_vartime(&pairs).is_identity())
+      } {
+        // If the batch failed, verify them each individually
+        for (i, pairs) in share_verification_statements_actual {
+          if !bool::from(multiexp_vartime(&pairs).is_identity()) {
+            valid.remove(&i);
+            faulty.insert(i);
+          }
+        }
+      }
+    }
+    debug_assert_eq!(valid.len() + faulty.len(), participations.len());
+
+    let mut faulty = faulty.into_iter().collect::>();
+    if !faulty.is_empty() {
+      faulty.sort_unstable();
+      return Ok(VerifyResult::Invalid(faulty));
+    }
+
+    // We check that at least t key shares of people have participated in contributing entropy
+    // Since the key shares of the participants exceed t, meaning if they're malicious they can
+    // reconstruct the key regardless, this is safe to the threshold
+    {
+      let mut participating_weight = 0;
+      let mut evrf_public_keys_mut = evrf_public_keys.to_vec();
+      for i in valid.keys() {
+        let evrf_public_key = evrf_public_keys[usize::from(u16::from(*i)) - 1];
+
+        // Remove this key from the Vec to prevent double-counting
+        /*
+          Double-counting would be a risk if multiple participants shared an eVRF public key and
+          participated. This code does still allow such participants (in order to let participants
+          be weighted), and any one of them participating will count as all participating. This is
+          fine as any one such participant will be able to decrypt the shares for themselves and
+          all other participants, so this is still a key generated by an amount of participants who
+          could simply reconstruct the key.
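+
+          As a concrete sketch: with eVRF keys `[K, K, J]` (one participant holding two key
+          shares under `K`), a single valid participation under `K` has the `retain` below remove
+          both `K` entries, adding 2 (that participant's full weight) to `participating_weight`.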
+ */ + let start_len = evrf_public_keys_mut.len(); + evrf_public_keys_mut.retain(|key| *key != evrf_public_key); + let end_len = evrf_public_keys_mut.len(); + let count = start_len - end_len; + + participating_weight += count; + } + if participating_weight < usize::from(t) { + return Ok(VerifyResult::NotEnoughParticipants); + } + } + + // If we now have >= t participations, calculate the group key and verification shares + + // The group key is the sum of the zero coefficients + let group_key = valid.values().map(|(_, evrf_data)| evrf_data.coefficients[0]).sum::(); + + // Calculate each user's verification share + let mut verification_shares = HashMap::with_capacity(usize::from(n)); + for i in (1 ..= n).map(Participant) { + verification_shares + .insert(i, (C::generator() * sum_encrypted_secret_shares[&i]) - sum_masks[&i]); + } + + Ok(VerifyResult::Valid(EvrfDkg { + t, + n, + evrf_public_keys: evrf_public_keys.to_vec(), + group_key, + verification_shares, + encrypted_secret_shares: all_encrypted_secret_shares, + })) + } + + pub fn keys( + &self, + evrf_private_key: &Zeroizing<::F>, + ) -> Vec> { + let evrf_public_key = ::generator() * evrf_private_key.deref(); + let mut is = Vec::with_capacity(1); + for (i, evrf_key) in self.evrf_public_keys.iter().enumerate() { + if *evrf_key == evrf_public_key { + let i = u16::try_from(i).expect("n <= u16::MAX yet i > u16::MAX?"); + let i = Participant(1 + i); + is.push(i); + } + } + + let mut res = Vec::with_capacity(is.len()); + for i in is { + let mut secret_share = Zeroizing::new(C::F::ZERO); + for shares in self.encrypted_secret_shares.values() { + let (ecdh_keys, enc_share) = shares[&i]; + + let mut ecdh = Zeroizing::new(C::F::ZERO); + for point in ecdh_keys { + let (mut x, mut y) = + ::G::to_xy(point * evrf_private_key.deref()).unwrap(); + *ecdh += x; + x.zeroize(); + y.zeroize(); + } + *secret_share += enc_share - ecdh.deref(); + } + + debug_assert_eq!(self.verification_shares[&i], C::generator() * secret_share.deref()); + + res.push(ThresholdKeys::from(ThresholdCore { + params: ThresholdParams::new(self.t, self.n, i).unwrap(), + interpolation: Interpolation::Lagrange, + secret_share, + group_key: self.group_key, + verification_shares: self.verification_shares.clone(), + })); + } + res + } +} diff --git a/crypto/dkg/src/evrf/proof.rs b/crypto/dkg/src/evrf/proof.rs new file mode 100644 index 00000000..ce9c57d1 --- /dev/null +++ b/crypto/dkg/src/evrf/proof.rs @@ -0,0 +1,861 @@ +use core::{marker::PhantomData, ops::Deref, fmt}; + +use subtle::*; +use zeroize::{Zeroize, Zeroizing}; + +use rand_core::{RngCore, CryptoRng, SeedableRng}; +use rand_chacha::ChaCha20Rng; + +use generic_array::{typenum::Unsigned, ArrayLength, GenericArray}; + +use blake2::{Digest, Blake2s256}; +use ciphersuite::{ + group::{ + ff::{Field, PrimeField, PrimeFieldBits}, + Group, GroupEncoding, + }, + Ciphersuite, +}; + +use generalized_bulletproofs::{ + *, + transcript::{Transcript as ProverTranscript, VerifierTranscript}, + arithmetic_circuit_proof::*, +}; +use generalized_bulletproofs_circuit_abstraction::*; + +use ec_divisors::{DivisorCurve, new_divisor}; +use generalized_bulletproofs_ec_gadgets::*; + +/// A pair of curves to perform the eVRF with. 
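+///
+/// The embedded curve must be one whose base field is this (outer) ciphersuite's scalar field,
+/// as the bound below requires. That's what lets Bulletproofs over the outer curve evaluate the
+/// embedded curve's group law in-circuit, per the design notes in `mod.rs`.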
+pub trait EvrfCurve: Ciphersuite { + type EmbeddedCurve: Ciphersuite::F>>; + type EmbeddedCurveParameters: DiscreteLogParameters; +} + +#[cfg(feature = "evrf-secp256k1")] +impl EvrfCurve for ciphersuite::Secp256k1 { + type EmbeddedCurve = secq256k1::Secq256k1; + type EmbeddedCurveParameters = secq256k1::Secq256k1; +} + +#[cfg(feature = "evrf-ed25519")] +impl EvrfCurve for ciphersuite::Ed25519 { + type EmbeddedCurve = embedwards25519::Embedwards25519; + type EmbeddedCurveParameters = embedwards25519::Embedwards25519; +} + +#[cfg(feature = "evrf-ristretto")] +impl EvrfCurve for ciphersuite::Ristretto { + type EmbeddedCurve = embedwards25519::Embedwards25519; + type EmbeddedCurveParameters = embedwards25519::Embedwards25519; +} + +fn sample_point(rng: &mut (impl RngCore + CryptoRng)) -> C::G { + let mut repr = ::Repr::default(); + loop { + rng.fill_bytes(repr.as_mut()); + if let Ok(point) = C::read_G(&mut repr.as_ref()) { + if bool::from(!point.is_identity()) { + return point; + } + } + } +} + +/// Generators for eVRF proof. +#[derive(Clone, Debug)] +pub struct EvrfGenerators(pub(crate) Generators); + +impl EvrfGenerators { + /// Create a new set of generators. + pub fn new(max_threshold: u16, max_participants: u16) -> EvrfGenerators { + let g = C::generator(); + let mut rng = ChaCha20Rng::from_seed(Blake2s256::digest(g.to_bytes()).into()); + let h = sample_point::(&mut rng); + let (_, generators) = + Evrf::::muls_and_generators_to_use(max_threshold.into(), max_participants.into()); + let mut g_bold = vec![]; + let mut h_bold = vec![]; + for _ in 0 .. generators { + g_bold.push(sample_point::(&mut rng)); + h_bold.push(sample_point::(&mut rng)); + } + Self(Generators::new(g, h, g_bold, h_bold).unwrap()) + } +} + +/// The result of proving for an eVRF. +pub(crate) struct EvrfProveResult { + /// The coefficients for use in the DKG. + pub(crate) coefficients: Vec>, + /// The masks to encrypt secret shares with. + pub(crate) encryption_masks: Vec>, + /// The proof itself. + pub(crate) proof: Vec, +} + +/// The result of verifying an eVRF. +pub(crate) struct EvrfVerifyResult { + /// The commitments to the coefficients for use in the DKG. + pub(crate) coefficients: Vec, + /// The ephemeral public keys to perform ECDHs with + pub(crate) ecdh_keys: Vec<[::G; 2]>, + /// The commitments to the masks used to encrypt secret shares with. + pub(crate) encryption_commitments: Vec, +} + +impl fmt::Debug for EvrfVerifyResult { + fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { + fmt.debug_struct("EvrfVerifyResult").finish_non_exhaustive() + } +} + +/// A struct to prove/verify eVRFs with. +pub(crate) struct Evrf(PhantomData); +impl Evrf { + // Sample uniform points (via rejection-sampling) on the embedded elliptic curve + fn transcript_to_points( + seed: [u8; 32], + coefficients: usize, + ) -> Vec<::G> { + // We need to do two Diffie-Hellman's per coefficient in order to achieve an unbiased result + let quantity = 2 * coefficients; + + let mut rng = ChaCha20Rng::from_seed(seed); + let mut res = Vec::with_capacity(quantity); + for _ in 0 .. 
quantity { + res.push(sample_point::(&mut rng)); + } + res + } + + /// Read a Variable from a theoretical vector commitment tape + fn read_one_from_tape(generators_to_use: usize, start: &mut usize) -> Variable { + // Each commitment has twice as many variables as generators in use + let commitment = *start / (2 * generators_to_use); + // The index will be less than the amount of generators in use, as half are left and half are + // right + let index = *start % generators_to_use; + let res = if (*start / generators_to_use) % 2 == 0 { + Variable::CG { commitment, index } + } else { + Variable::CH { commitment, index } + }; + *start += 1; + res + } + + /// Read a set of variables from a theoretical vector commitment tape + fn read_from_tape( + generators_to_use: usize, + start: &mut usize, + ) -> GenericArray { + let mut buf = Vec::with_capacity(N::USIZE); + for _ in 0 .. N::USIZE { + buf.push(Self::read_one_from_tape(generators_to_use, start)); + } + GenericArray::from_slice(&buf).clone() + } + + /// Read `PointWithDlog`s, which share a discrete logarithm, from the theoretical vector + /// commitment tape. + fn point_with_dlogs( + start: &mut usize, + quantity: usize, + generators_to_use: usize, + ) -> Vec> { + // We define a serialized tape of the discrete logarithm, then for each divisor/point, we push: + // zero, x**i, y x**i, y, x_coord, y_coord + // We then chunk that into vector commitments + // Here, we take the assumed layout and generate the expected `Variable`s for this layout + + let dlog = Self::read_from_tape(generators_to_use, start); + + let mut res = Vec::with_capacity(quantity); + let mut read_point_with_dlog = || { + let zero = Self::read_one_from_tape(generators_to_use, start); + let x_from_power_of_2 = Self::read_from_tape(generators_to_use, start); + let yx = Self::read_from_tape(generators_to_use, start); + let y = Self::read_one_from_tape(generators_to_use, start); + let divisor = Divisor { zero, x_from_power_of_2, yx, y }; + + let point = ( + Self::read_one_from_tape(generators_to_use, start), + Self::read_one_from_tape(generators_to_use, start), + ); + + res.push(PointWithDlog { dlog: dlog.clone(), divisor, point }); + }; + + for _ in 0 .. 
quantity {
+      read_point_with_dlog();
+    }
+    res
+  }
+
+  fn muls_and_generators_to_use(coefficients: usize, ecdhs: usize) -> (usize, usize) {
+    const MULS_PER_DH: usize = 7;
+    // 1 DH to prove the discrete logarithm corresponds to the eVRF public key
+    // 2 DHs per generated coefficient
+    // 2 DHs per generated ECDH
+    let expected_muls = MULS_PER_DH * (1 + (2 * coefficients) + (2 * 2 * ecdhs));
+    let generators_to_use = {
+      let mut padded_pow_of_2 = 1;
+      while padded_pow_of_2 < expected_muls {
+        padded_pow_of_2 <<= 1;
+      }
+      // This may be as small as 16, which would create an excessive amount of vector commitments
+      // We set a floor of 1024 rows for bandwidth reasons
+      padded_pow_of_2.max(1024)
+    };
+    (expected_muls, generators_to_use)
+  }
+
+  fn circuit(
+    curve_spec: &CurveSpec,
+    evrf_public_key: (C::F, C::F),
+    coefficients: usize,
+    ecdh_commitments: &[[(C::F, C::F); 2]],
+    generator_tables: &[GeneratorTable],
+    circuit: &mut Circuit,
+    transcript: &mut impl Transcript,
+  ) {
+    let (expected_muls, generators_to_use) =
+      Self::muls_and_generators_to_use(coefficients, ecdh_commitments.len());
+    let (challenge, challenged_generators) =
+      circuit.discrete_log_challenge(transcript, curve_spec, generator_tables);
+    debug_assert_eq!(challenged_generators.len(), 1 + (2 * coefficients) + ecdh_commitments.len());
+
+    // The generators tables/challenged generators are expected to have the following layouts
+    // G, coefficients * [A, B], ecdhs * [P]
+    #[allow(non_snake_case)]
+    let challenged_G = &challenged_generators[0];
+
+    // Execute the circuit for the coefficients
+    let mut tape_pos = 0;
+    {
+      let mut point_with_dlogs =
+        Self::point_with_dlogs(&mut tape_pos, 1 + (2 * coefficients), generators_to_use)
+          .into_iter();
+
+      // Verify the discrete logarithm is in fact the discrete logarithm of the eVRF public key
+      let point = circuit.discrete_log(
+        curve_spec,
+        point_with_dlogs.next().unwrap(),
+        &challenge,
+        challenged_G,
+      );
+      circuit.equality(LinComb::from(point.x()), &LinComb::empty().constant(evrf_public_key.0));
+      circuit.equality(LinComb::from(point.y()), &LinComb::empty().constant(evrf_public_key.1));
+
+      // Verify the DLog claims against the sampled points
+      for (i, pair) in challenged_generators[1 ..].chunks(2).take(coefficients).enumerate() {
+        let mut lincomb = LinComb::empty();
+        debug_assert_eq!(pair.len(), 2);
+        for challenged_generator in pair {
+          let point = circuit.discrete_log(
+            curve_spec,
+            point_with_dlogs.next().unwrap(),
+            &challenge,
+            challenged_generator,
+          );
+          // For each point in this pair, add its x coordinate to a lincomb
+          lincomb = lincomb.term(C::F::ONE, point.x());
+        }
+        // Constrain the sum of the two x coordinates to be equal to the value in the Pedersen
+        // commitment
+        circuit.equality(lincomb, &LinComb::from(Variable::V(i)));
+      }
+      debug_assert!(point_with_dlogs.next().is_none());
+    }
+
+    // Now execute the circuit for the ECDHs
+    let mut challenged_generators = challenged_generators.iter().skip(1 + (2 * coefficients));
+    for (i, ecdh) in ecdh_commitments.iter().enumerate() {
+      let challenged_generator = challenged_generators.next().unwrap();
+      let mut lincomb = LinComb::empty();
+      for ecdh in ecdh {
+        let mut point_with_dlogs =
+          Self::point_with_dlogs(&mut tape_pos, 2, generators_to_use).into_iter();
+
+        // One proof of the ECDH secret * G for the commitment published
+        let point = circuit.discrete_log(
+          curve_spec,
+          point_with_dlogs.next().unwrap(),
+          &challenge,
+          challenged_G,
+        );
+        circuit.equality(LinComb::from(point.x()),
&LinComb::empty().constant(ecdh.0));
+        circuit.equality(LinComb::from(point.y()), &LinComb::empty().constant(ecdh.1));
+
+        // One proof of the ECDH secret * P for the ECDH
+        let point = circuit.discrete_log(
+          curve_spec,
+          point_with_dlogs.next().unwrap(),
+          &challenge,
+          challenged_generator,
+        );
+        // For each point in this pair, add its x coordinate to a lincomb
+        lincomb = lincomb.term(C::F::ONE, point.x());
+      }
+
+      // Constrain the sum of the two x coordinates to be equal to the value in the Pedersen
+      // commitment
+      circuit.equality(lincomb, &LinComb::from(Variable::V(coefficients + i)));
+    }
+
+    debug_assert_eq!(expected_muls, circuit.muls());
+    debug_assert!(challenged_generators.next().is_none());
+  }
+
+  /// Convert a scalar to a sequence of coefficients for the polynomial 2**i, where the sum of the
+  /// coefficients is F::NUM_BITS.
+  ///
+  /// Despite the name, the returned coefficients are not guaranteed to be bits (0 or 1).
+  ///
+  /// This scalar will presumably be used in a discrete log proof. That requires calculating a
+  /// divisor which is variable time to the amount of points interpolated. Since the amount of
+  /// points interpolated is equal to the sum of the coefficients in the polynomial, we need all
+  /// scalars to have a constant sum of their coefficients (instead of one variable to its bits).
+  ///
+  /// We achieve this by finding the highest non-0 coefficient, decrementing it, and increasing the
+  /// immediately less significant coefficient by 2. This increases the sum of the coefficients by
+  /// 1 (-1+2=1).
+  fn scalar_to_bits(scalar: &::F) -> Vec {
+    let num_bits = u64::from(<::EmbeddedCurve as Ciphersuite>::F::NUM_BITS);
+
+    // Obtain the bits of the private key
+    let num_bits_usize = usize::try_from(num_bits).unwrap();
+    let mut decomposition = vec![0; num_bits_usize];
+    for (i, bit) in scalar.to_le_bits().into_iter().take(num_bits_usize).enumerate() {
+      let bit = u64::from(u8::from(bit));
+      decomposition[i] = bit;
+    }
+
+    // The following algorithm only works if the value of the scalar exceeds num_bits
+    // If it doesn't, we increase it by the modulus such that it does exceed num_bits
+    {
+      let mut less_than_num_bits = Choice::from(0);
+      for i in 0 .. num_bits {
+        less_than_num_bits |= scalar.ct_eq(&::F::from(i));
+      }
+      let mut decomposition_of_modulus = vec![0; num_bits_usize];
+      // Decompose negative one
+      for (i, bit) in (-::F::ONE)
+        .to_le_bits()
+        .into_iter()
+        .take(num_bits_usize)
+        .enumerate()
+      {
+        let bit = u64::from(u8::from(bit));
+        decomposition_of_modulus[i] = bit;
+      }
+      // Increment it by one
+      decomposition_of_modulus[0] += 1;
+
+      // Add the decomposition onto the decomposition of the modulus
+      for i in 0 .. num_bits_usize {
+        let new_decomposition = <_>::conditional_select(
+          &decomposition[i],
+          &(decomposition[i] + decomposition_of_modulus[i]),
+          less_than_num_bits,
+        );
+        decomposition[i] = new_decomposition;
+      }
+    }
+
+    // Calculate the sum of the coefficients
+    let mut sum_of_coefficients: u64 = 0;
+    for decomposition in &decomposition {
+      sum_of_coefficients += *decomposition;
+    }
+
+    /*
+      Now, because we added a log2(k)-bit number to a k-bit number, we may have our sum of
+      coefficients be *too high*. We attempt to reduce the sum of the coefficients accordingly.
+
+      This algorithm is guaranteed to complete as expected. Take the sequence `222`. `222` becomes
+      `032` becomes `013`.
Even if the next coefficient in the sequence is `2`, the third + coefficient will be reduced once and the next coefficient (`2`, increased to `3`) will only + be eligible for reduction once. This demonstrates, even for a worst case of log2(k) `2`s + followed by `1`s (as possible if the modulus is a Mersenne prime), the log2(k) `2`s can be + reduced as necessary so long as there is a single coefficient after (requiring the entire + sequence be at least of length log2(k) + 1). For a 2-bit number, log2(k) + 1 == 2, so this + holds for any odd prime field. + + To fully type out the demonstration for the Mersenne prime 3, with scalar to encode 1 (the + highest value less than the number of bits): + + 10 - Little-endian bits of 1 + 21 - Little-endian bits of 1, plus the modulus + 02 - After one reduction, where the sum of the coefficients does in fact equal 2 (the target) + */ + { + let mut log2_num_bits = 0; + while (1 << log2_num_bits) < num_bits { + log2_num_bits += 1; + } + + for _ in 0 .. log2_num_bits { + // If the sum of coefficients is the amount of bits, we're done + let mut done = sum_of_coefficients.ct_eq(&num_bits); + + for i in 0 .. (num_bits_usize - 1) { + let should_act = (!done) & decomposition[i].ct_gt(&1); + // Subtract 2 from this coefficient + let amount_to_sub = <_>::conditional_select(&0, &2, should_act); + decomposition[i] -= amount_to_sub; + // Add 1 to the next coefficient + let amount_to_add = <_>::conditional_select(&0, &1, should_act); + decomposition[i + 1] += amount_to_add; + + // Also update the sum of coefficients + sum_of_coefficients -= <_>::conditional_select(&0, &1, should_act); + + // If we updated the coefficients this loop iter, we're done for this loop iter + done |= should_act; + } + } + } + + for _ in 0 .. num_bits { + // If the sum of coefficients is the amount of bits, we're done + let mut done = sum_of_coefficients.ct_eq(&num_bits); + + // Find the highest coefficient currently non-zero + for i in (1 .. decomposition.len()).rev() { + // If this is non-zero, we should decrement this coefficient if we haven't already + // decremented a coefficient this round + let is_non_zero = !(0.ct_eq(&decomposition[i])); + let should_act = (!done) & is_non_zero; + + // Update this coefficient and the prior coefficient + let amount_to_sub = <_>::conditional_select(&0, &1, should_act); + decomposition[i] -= amount_to_sub; + + let amount_to_add = <_>::conditional_select(&0, &2, should_act); + // i must be at least 1, so i - 1 will be at least 0 (meaning it's safe to index with) + decomposition[i - 1] += amount_to_add; + + // Also update the sum of coefficients + sum_of_coefficients += <_>::conditional_select(&0, &1, should_act); + + // If we updated the coefficients this loop iter, we're done for this loop iter + done |= should_act; + } + } + debug_assert!(bool::from(decomposition.iter().sum::().ct_eq(&num_bits))); + + decomposition + } + + /// Prove a point on an elliptic curve had its discrete logarithm generated via an eVRF. 
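+  ///
+  /// A hedged call sketch, mirroring how `EvrfDkg::participate` invokes this (argument names are
+  /// illustrative):
+  /// ```ignore
+  /// let EvrfProveResult { coefficients, encryption_masks, proof } =
+  ///   Evrf::prove(rng, generators, per_proof_transcript, usize::from(t), evrf_public_keys, evrf_private_key)?;
+  /// ```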
+ pub(crate) fn prove( + rng: &mut (impl RngCore + CryptoRng), + generators: &Generators, + transcript: [u8; 32], + coefficients: usize, + ecdh_public_keys: &[<::EmbeddedCurve as Ciphersuite>::G], + evrf_private_key: &Zeroizing<<::EmbeddedCurve as Ciphersuite>::F>, + ) -> Result, AcError> { + let curve_spec = CurveSpec { + a: <::EmbeddedCurve as Ciphersuite>::G::a(), + b: <::EmbeddedCurve as Ciphersuite>::G::b(), + }; + + // A tape of the discrete logarithm, then [zero, x**i, y x**i, y, x_coord, y_coord] + let mut vector_commitment_tape = vec![]; + + let mut generator_tables = Vec::with_capacity(1 + (2 * coefficients) + ecdh_public_keys.len()); + + // A function to calculate a divisor and push it onto the tape + // This defines a vec, divisor_points, outside of the fn to reuse its allocation + let mut divisor_points = + Vec::with_capacity((::F::NUM_BITS as usize) + 1); + let mut divisor = + |vector_commitment_tape: &mut Vec<_>, + dlog: &[u64], + push_generator: bool, + generator: <::EmbeddedCurve as Ciphersuite>::G, + dh: <::EmbeddedCurve as Ciphersuite>::G| { + if push_generator { + let (x, y) = ::G::to_xy(generator).unwrap(); + generator_tables.push(GeneratorTable::new(&curve_spec, x, y)); + } + + { + let mut generator = generator; + for coefficient in dlog { + let mut coefficient = *coefficient; + while coefficient != 0 { + coefficient -= 1; + divisor_points.push(generator); + } + generator = generator.double(); + } + debug_assert_eq!( + dlog.iter().sum::(), + u64::from(::F::NUM_BITS) + ); + } + divisor_points.push(-dh); + let mut divisor = new_divisor(&divisor_points).unwrap().normalize_x_coefficient(); + divisor_points.zeroize(); + + vector_commitment_tape.push(divisor.zero_coefficient); + + for coefficient in divisor.x_coefficients.iter().skip(1) { + vector_commitment_tape.push(*coefficient); + } + for _ in divisor.x_coefficients.len() .. + ::XCoefficientsMinusOne::USIZE + { + vector_commitment_tape.push(::F::ZERO); + } + + for coefficient in divisor.yx_coefficients.first().unwrap_or(&vec![]) { + vector_commitment_tape.push(*coefficient); + } + for _ in divisor.yx_coefficients.first().unwrap_or(&vec![]).len() .. 
+ ::YxCoefficients::USIZE + { + vector_commitment_tape.push(::F::ZERO); + } + + vector_commitment_tape + .push(divisor.y_coefficients.first().copied().unwrap_or(::F::ZERO)); + + divisor.zeroize(); + drop(divisor); + + let (x, y) = ::G::to_xy(dh).unwrap(); + vector_commitment_tape.push(x); + vector_commitment_tape.push(y); + + (x, y) + }; + + // Start with the coefficients + let evrf_public_key; + let mut actual_coefficients = Vec::with_capacity(coefficients); + { + let mut dlog = Self::scalar_to_bits(evrf_private_key); + let points = Self::transcript_to_points(transcript, coefficients); + + // Start by pushing the discrete logarithm onto the tape + for coefficient in &dlog { + vector_commitment_tape.push(<_>::from(*coefficient)); + } + + // Push a divisor for proving that we're using the correct scalar + evrf_public_key = divisor( + &mut vector_commitment_tape, + &dlog, + true, + <::EmbeddedCurve as Ciphersuite>::generator(), + <::EmbeddedCurve as Ciphersuite>::generator() * evrf_private_key.deref(), + ); + + // Push a divisor for each point we use in the eVRF + for pair in points.chunks(2) { + let mut res = Zeroizing::new(C::F::ZERO); + for point in pair { + let (dh_x, _) = divisor( + &mut vector_commitment_tape, + &dlog, + true, + *point, + *point * evrf_private_key.deref(), + ); + *res += dh_x; + } + actual_coefficients.push(res); + } + debug_assert_eq!(actual_coefficients.len(), coefficients); + + dlog.zeroize(); + } + + // Now do the ECDHs for the encryption + let mut encryption_masks = Vec::with_capacity(ecdh_public_keys.len()); + let mut ecdh_commitments = Vec::with_capacity(2 * ecdh_public_keys.len()); + let mut ecdh_commitments_xy = Vec::with_capacity(ecdh_public_keys.len()); + for ecdh_public_key in ecdh_public_keys { + ecdh_commitments_xy.push([(C::F::ZERO, C::F::ZERO); 2]); + + let mut res = Zeroizing::new(C::F::ZERO); + for j in 0 .. 
2 { + let mut ecdh_private_key; + loop { + ecdh_private_key = ::F::random(&mut *rng); + // Generate a non-0 ECDH private key, as necessary to not produce an identity output + // Identity isn't representable with the divisors, hence the explicit effort + if bool::from(!ecdh_private_key.is_zero()) { + break; + } + } + let mut dlog = Self::scalar_to_bits(&ecdh_private_key); + let ecdh_commitment = ::generator() * ecdh_private_key; + ecdh_commitments.push(ecdh_commitment); + ecdh_commitments_xy.last_mut().unwrap()[j] = + <::G as DivisorCurve>::to_xy(ecdh_commitment).unwrap(); + + // Start by pushing the discrete logarithm onto the tape + for coefficient in &dlog { + vector_commitment_tape.push(<_>::from(*coefficient)); + } + + // Push a divisor for proving that we're using the correct scalar for the commitment + divisor( + &mut vector_commitment_tape, + &dlog, + false, + <::EmbeddedCurve as Ciphersuite>::generator(), + <::EmbeddedCurve as Ciphersuite>::generator() * ecdh_private_key, + ); + // Push a divisor for the key we're performing the ECDH with + let (dh_x, _) = divisor( + &mut vector_commitment_tape, + &dlog, + j == 0, + *ecdh_public_key, + *ecdh_public_key * ecdh_private_key, + ); + *res += dh_x; + + ecdh_private_key.zeroize(); + dlog.zeroize(); + } + encryption_masks.push(res); + } + debug_assert_eq!(encryption_masks.len(), ecdh_public_keys.len()); + + // Now that we have the vector commitment tape, chunk it + let (_, generators_to_use) = + Self::muls_and_generators_to_use(coefficients, ecdh_public_keys.len()); + + let mut vector_commitments = + Vec::with_capacity(vector_commitment_tape.len().div_ceil(2 * generators_to_use)); + for chunk in vector_commitment_tape.chunks(2 * generators_to_use) { + let g_values = chunk[.. generators_to_use.min(chunk.len())].to_vec().into(); + let h_values = chunk[generators_to_use.min(chunk.len()) ..].to_vec().into(); + vector_commitments.push(PedersenVectorCommitment { + g_values, + h_values, + mask: C::F::random(&mut *rng), + }); + } + + vector_commitment_tape.zeroize(); + drop(vector_commitment_tape); + + let mut commitments = Vec::with_capacity(coefficients + ecdh_public_keys.len()); + for coefficient in &actual_coefficients { + commitments.push(PedersenCommitment { value: **coefficient, mask: C::F::random(&mut *rng) }); + } + for enc_mask in &encryption_masks { + commitments.push(PedersenCommitment { value: **enc_mask, mask: C::F::random(&mut *rng) }); + } + + let mut transcript = ProverTranscript::new(transcript); + let commited_commitments = transcript.write_commitments( + vector_commitments + .iter() + .map(|commitment| { + commitment + .commit(generators.g_bold_slice(), generators.h_bold_slice(), generators.h()) + .ok_or(AcError::NotEnoughGenerators) + }) + .collect::>()?, + commitments + .iter() + .map(|commitment| commitment.commit(generators.g(), generators.h())) + .collect(), + ); + for ecdh_commitment in ecdh_commitments { + transcript.push_point(ecdh_commitment); + } + + let mut circuit = Circuit::prove(vector_commitments, commitments.clone()); + Self::circuit( + &curve_spec, + evrf_public_key, + coefficients, + &ecdh_commitments_xy, + &generator_tables, + &mut circuit, + &mut transcript, + ); + + let (statement, Some(witness)) = circuit + .statement( + generators.reduce(generators_to_use).ok_or(AcError::NotEnoughGenerators)?, + commited_commitments, + ) + .unwrap() + else { + panic!("proving yet wasn't yielded the witness"); + }; + statement.prove(&mut *rng, &mut transcript, witness).unwrap(); + + // Push the reveal onto the transcript + 
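+    // Each Pedersen commitment has the form C_i = g * value_i + h * mask_i; "the reveal" is
+    // g * value_i for each one. The verifier later binds these reveals to the commitments by
+    // aggregating both with transcript-derived weights w_i and checking a Schnorr proof of
+    // knowledge of sum(w_i * mask_i), the discrete logarithm over h of the aggregate difference.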
for commitment in &commitments { + transcript.push_point(generators.g() * commitment.value); + } + + // Define a weight to aggregate the commitments with + let mut agg_weights = Vec::with_capacity(commitments.len()); + agg_weights.push(C::F::ONE); + while agg_weights.len() < commitments.len() { + agg_weights.push(transcript.challenge::()); + } + let mut x = commitments + .iter() + .zip(&agg_weights) + .map(|(commitment, weight)| commitment.mask * *weight) + .sum::(); + + // Do a Schnorr PoK for the randomness of the aggregated Pedersen commitment + let mut r = C::F::random(&mut *rng); + transcript.push_point(generators.h() * r); + let c = transcript.challenge::(); + transcript.push_scalar(r + (c * x)); + r.zeroize(); + x.zeroize(); + + Ok(EvrfProveResult { + coefficients: actual_coefficients, + encryption_masks, + proof: transcript.complete(), + }) + } + + /// Verify an eVRF proof, returning the commitments output. + #[allow(clippy::too_many_arguments)] + pub(crate) fn verify( + rng: &mut (impl RngCore + CryptoRng), + generators: &Generators, + verifier: &mut BatchVerifier, + transcript: [u8; 32], + coefficients: usize, + ecdh_public_keys: &[<::EmbeddedCurve as Ciphersuite>::G], + evrf_public_key: <::EmbeddedCurve as Ciphersuite>::G, + proof: &[u8], + ) -> Result, ()> { + let curve_spec = CurveSpec { + a: <::EmbeddedCurve as Ciphersuite>::G::a(), + b: <::EmbeddedCurve as Ciphersuite>::G::b(), + }; + + let mut generator_tables = Vec::with_capacity(1 + (2 * coefficients) + ecdh_public_keys.len()); + { + let (x, y) = + ::G::to_xy(::generator()) + .unwrap(); + generator_tables.push(GeneratorTable::new(&curve_spec, x, y)); + } + let points = Self::transcript_to_points(transcript, coefficients); + for generator in points { + let (x, y) = ::G::to_xy(generator).unwrap(); + generator_tables.push(GeneratorTable::new(&curve_spec, x, y)); + } + for generator in ecdh_public_keys { + let (x, y) = ::G::to_xy(*generator).unwrap(); + generator_tables.push(GeneratorTable::new(&curve_spec, x, y)); + } + + let (_, generators_to_use) = + Self::muls_and_generators_to_use(coefficients, ecdh_public_keys.len()); + + let mut transcript = VerifierTranscript::new(transcript, proof); + + let dlog_len = ::ScalarBits::USIZE; + let divisor_len = 1 + + ::XCoefficientsMinusOne::USIZE + + ::YxCoefficients::USIZE + + 1; + let dlog_proof_len = divisor_len + 2; + + let coeffs_vc_variables = dlog_len + ((1 + (2 * coefficients)) * dlog_proof_len); + let ecdhs_vc_variables = ((2 * ecdh_public_keys.len()) * dlog_len) + + ((2 * 2 * ecdh_public_keys.len()) * dlog_proof_len); + let vcs = (coeffs_vc_variables + ecdhs_vc_variables).div_ceil(2 * generators_to_use); + + let all_commitments = + transcript.read_commitments(vcs, coefficients + ecdh_public_keys.len()).map_err(|_| ())?; + let commitments = all_commitments.V().to_vec(); + + let mut ecdh_keys = Vec::with_capacity(ecdh_public_keys.len()); + let mut ecdh_keys_xy = Vec::with_capacity(ecdh_public_keys.len()); + for _ in 0 .. 
ecdh_public_keys.len() { + let ecdh_keys_i = [ + transcript.read_point::().map_err(|_| ())?, + transcript.read_point::().map_err(|_| ())?, + ]; + ecdh_keys.push(ecdh_keys_i); + // This bans zero ECDH keys + ecdh_keys_xy.push([ + <::G as DivisorCurve>::to_xy(ecdh_keys_i[0]).ok_or(())?, + <::G as DivisorCurve>::to_xy(ecdh_keys_i[1]).ok_or(())?, + ]); + } + + let mut circuit = Circuit::verify(); + Self::circuit( + &curve_spec, + ::G::to_xy(evrf_public_key).ok_or(())?, + coefficients, + &ecdh_keys_xy, + &generator_tables, + &mut circuit, + &mut transcript, + ); + + let (statement, None) = + circuit.statement(generators.reduce(generators_to_use).ok_or(())?, all_commitments).unwrap() + else { + panic!("verifying yet was yielded a witness"); + }; + + statement.verify(rng, verifier, &mut transcript).map_err(|_| ())?; + + // Read the openings for the commitments + let mut openings = Vec::with_capacity(commitments.len()); + for _ in 0 .. commitments.len() { + openings.push(transcript.read_point::().map_err(|_| ())?); + } + + // Verify the openings of the commitments + let mut agg_weights = Vec::with_capacity(commitments.len()); + agg_weights.push(C::F::ONE); + while agg_weights.len() < commitments.len() { + agg_weights.push(transcript.challenge::()); + } + + let sum_points = + openings.iter().zip(&agg_weights).map(|(point, weight)| *point * *weight).sum::(); + let sum_commitments = + commitments.into_iter().zip(agg_weights).map(|(point, weight)| point * weight).sum::(); + #[allow(non_snake_case)] + let A = sum_commitments - sum_points; + + #[allow(non_snake_case)] + let R = transcript.read_point::().map_err(|_| ())?; + let c = transcript.challenge::(); + let s = transcript.read_scalar::().map_err(|_| ())?; + + // Doesn't batch verify this as we can't access the internals of the GBP batch verifier + if (R + (A * c)) != (generators.h() * s) { + Err(())?; + } + + if !transcript.complete().is_empty() { + Err(())? + }; + + let encryption_commitments = openings[coefficients ..].to_vec(); + let coefficients = openings[.. coefficients].to_vec(); + Ok(EvrfVerifyResult { coefficients, ecdh_keys, encryption_commitments }) + } +} diff --git a/crypto/dkg/src/lib.rs b/crypto/dkg/src/lib.rs index 478f400f..48037bcd 100644 --- a/crypto/dkg/src/lib.rs +++ b/crypto/dkg/src/lib.rs @@ -21,6 +21,10 @@ pub mod encryption; #[cfg(feature = "std")] pub mod pedpop; +/// The one-round DKG described in the [eVRF paper](https://eprint.iacr.org/2024/397). +#[cfg(all(feature = "std", feature = "evrf"))] +pub mod evrf; + /// Promote keys between ciphersuites. #[cfg(feature = "std")] pub mod promote; @@ -205,25 +209,37 @@ mod lib { } } - /// Calculate the lagrange coefficient for a signing set. 
- pub fn lagrange(i: Participant, included: &[Participant]) -> F { - let i_f = F::from(u64::from(u16::from(i))); + #[derive(Clone, PartialEq, Eq, Debug, Zeroize)] + pub(crate) enum Interpolation { + Constant(Vec), + Lagrange, + } - let mut num = F::ONE; - let mut denom = F::ONE; - for l in included { - if i == *l { - continue; + impl Interpolation { + pub(crate) fn interpolation_factor(&self, i: Participant, included: &[Participant]) -> F { + match self { + Interpolation::Constant(c) => c[usize::from(u16::from(i) - 1)], + Interpolation::Lagrange => { + let i_f = F::from(u64::from(u16::from(i))); + + let mut num = F::ONE; + let mut denom = F::ONE; + for l in included { + if i == *l { + continue; + } + + let share = F::from(u64::from(u16::from(*l))); + num *= share; + denom *= share - i_f; + } + + // Safe as this will only be 0 if we're part of the above loop + // (which we have an if case to avoid) + num * denom.invert().unwrap() + } } - - let share = F::from(u64::from(u16::from(*l))); - num *= share; - denom *= share - i_f; } - - // Safe as this will only be 0 if we're part of the above loop - // (which we have an if case to avoid) - num * denom.invert().unwrap() } /// Keys and verification shares generated by a DKG. @@ -232,6 +248,8 @@ mod lib { pub struct ThresholdCore { /// Threshold Parameters. pub(crate) params: ThresholdParams, + /// The interpolation method used. + pub(crate) interpolation: Interpolation, /// Secret share key. pub(crate) secret_share: Zeroizing, @@ -246,6 +264,7 @@ mod lib { fmt .debug_struct("ThresholdCore") .field("params", &self.params) + .field("interpolation", &self.interpolation) .field("group_key", &self.group_key) .field("verification_shares", &self.verification_shares) .finish_non_exhaustive() @@ -255,6 +274,7 @@ mod lib { impl Zeroize for ThresholdCore { fn zeroize(&mut self) { self.params.zeroize(); + self.interpolation.zeroize(); self.secret_share.zeroize(); self.group_key.zeroize(); for share in self.verification_shares.values_mut() { @@ -266,16 +286,14 @@ mod lib { impl ThresholdCore { pub(crate) fn new( params: ThresholdParams, + interpolation: Interpolation, secret_share: Zeroizing, verification_shares: HashMap, ) -> ThresholdCore { let t = (1 ..= params.t()).map(Participant).collect::>(); - ThresholdCore { - params, - secret_share, - group_key: t.iter().map(|i| verification_shares[i] * lagrange::(*i, &t)).sum(), - verification_shares, - } + let group_key = + t.iter().map(|i| verification_shares[i] * interpolation.interpolation_factor(*i, &t)).sum(); + ThresholdCore { params, interpolation, secret_share, group_key, verification_shares } } /// Parameters for these keys. @@ -304,6 +322,15 @@ mod lib { writer.write_all(&self.params.t.to_le_bytes())?; writer.write_all(&self.params.n.to_le_bytes())?; writer.write_all(&self.params.i.to_bytes())?; + match &self.interpolation { + Interpolation::Constant(c) => { + writer.write_all(&[0])?; + for c in c { + writer.write_all(c.to_repr().as_ref())?; + } + } + Interpolation::Lagrange => writer.write_all(&[1])?, + }; let mut share_bytes = self.secret_share.to_repr(); writer.write_all(share_bytes.as_ref())?; share_bytes.as_mut().zeroize(); @@ -352,6 +379,20 @@ mod lib { ) }; + let mut interpolation = [0]; + reader.read_exact(&mut interpolation)?; + let interpolation = match interpolation[0] { + 0 => Interpolation::Constant({ + let mut res = Vec::with_capacity(usize::from(n)); + for _ in 0 .. 
n { + res.push(C::read_F(reader)?); + } + res + }), + 1 => Interpolation::Lagrange, + _ => Err(io::Error::other("invalid interpolation method"))?, + }; + let secret_share = Zeroizing::new(C::read_F(reader)?); let mut verification_shares = HashMap::new(); @@ -361,6 +402,7 @@ mod lib { Ok(ThresholdCore::new( ThresholdParams::new(t, n, i).map_err(|_| io::Error::other("invalid parameters"))?, + interpolation, secret_share, verification_shares, )) @@ -383,6 +425,7 @@ mod lib { /// View of keys, interpolated and offset for usage. #[derive(Clone)] pub struct ThresholdView { + interpolation: Interpolation, offset: C::F, group_key: C::G, included: Vec, @@ -395,6 +438,7 @@ mod lib { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { fmt .debug_struct("ThresholdView") + .field("interpolation", &self.interpolation) .field("offset", &self.offset) .field("group_key", &self.group_key) .field("included", &self.included) @@ -480,12 +524,13 @@ mod lib { included.sort(); let mut secret_share = Zeroizing::new( - lagrange::(self.params().i(), &included) * self.secret_share().deref(), + self.core.interpolation.interpolation_factor(self.params().i(), &included) * + self.secret_share().deref(), ); let mut verification_shares = self.verification_shares(); for (i, share) in &mut verification_shares { - *share *= lagrange::(*i, &included); + *share *= self.core.interpolation.interpolation_factor(*i, &included); } // The offset is included by adding it to the participant with the lowest ID @@ -496,6 +541,7 @@ mod lib { *verification_shares.get_mut(&included[0]).unwrap() += C::generator() * offset; Ok(ThresholdView { + interpolation: self.core.interpolation.clone(), offset, group_key: self.group_key(), secret_share, @@ -528,6 +574,14 @@ mod lib { &self.included } + /// Return the interpolation factor for a signer. + pub fn interpolation_factor(&self, participant: Participant) -> Option { + if !self.included.contains(&participant) { + None? + } + Some(self.interpolation.interpolation_factor(participant, &self.included)) + } + /// Return the interpolated, offset secret share. 
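+  // Note this share was already scaled by this signer's interpolation factor when the view was
+  // created. With Lagrange interpolation and included = {1, 2, 3}, participant 1's factor is
+  // (2 * 3) / ((2 - 1) * (3 - 1)) = 3.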
pub fn secret_share(&self) -> &Zeroizing { &self.secret_share diff --git a/crypto/dkg/src/musig.rs b/crypto/dkg/src/musig.rs index 4d6b54c8..82843272 100644 --- a/crypto/dkg/src/musig.rs +++ b/crypto/dkg/src/musig.rs @@ -7,8 +7,6 @@ use std_shims::collections::HashMap; #[cfg(feature = "std")] use zeroize::Zeroizing; -#[cfg(feature = "std")] -use ciphersuite::group::ff::Field; use ciphersuite::{ group::{Group, GroupEncoding}, Ciphersuite, @@ -16,7 +14,7 @@ use ciphersuite::{ use crate::DkgError; #[cfg(feature = "std")] -use crate::{Participant, ThresholdParams, ThresholdCore, lagrange}; +use crate::{Participant, ThresholdParams, Interpolation, ThresholdCore}; fn check_keys(keys: &[C::G]) -> Result> { if keys.is_empty() { @@ -67,6 +65,7 @@ pub fn musig_key(context: &[u8], keys: &[C::G]) -> Result(context, keys)?; let mut res = C::G::identity(); for i in 1 ..= keys_len { + // TODO: Calculate this with a multiexp res += keys[usize::from(i - 1)] * binding_factor::(transcript.clone(), i); } Ok(res) @@ -104,38 +103,26 @@ pub fn musig( binding.push(binding_factor::(transcript.clone(), i)); } - // Multiply our private key by our binding factor - let mut secret_share = private_key.clone(); - *secret_share *= binding[pos]; + // Our secret share is our private key + let secret_share = private_key.clone(); // Calculate verification shares let mut verification_shares = HashMap::new(); - // When this library offers a ThresholdView for a specific signing set, it applies the lagrange - // factor - // Since this is a n-of-n scheme, there's only one possible signing set, and one possible - // lagrange factor - // In the name of simplicity, we define the group key as the sum of all bound keys - // Accordingly, the secret share must be multiplied by the inverse of the lagrange factor, along - // with all verification shares - // This is less performant than simply defining the group key as the sum of all post-lagrange - // bound keys, yet the simplicity is preferred - let included = (1 ..= keys_len) - // This error also shouldn't be possible, for the same reasons as documented above - .map(|l| Participant::new(l).ok_or(DkgError::InvalidSigningSet)) - .collect::, _>>()?; let mut group_key = C::G::identity(); - for (l, p) in included.iter().enumerate() { - let bound = keys[l] * binding[l]; - group_key += bound; + for l in 1 ..= keys_len { + let key = keys[usize::from(l) - 1]; + group_key += key * binding[usize::from(l - 1)]; - let lagrange_inv = lagrange::(*p, &included).invert().unwrap(); - if params.i() == *p { - *secret_share *= lagrange_inv; - } - verification_shares.insert(*p, bound * lagrange_inv); + // These errors also shouldn't be possible, for the same reasons as documented above + verification_shares.insert(Participant::new(l).ok_or(DkgError::InvalidSigningSet)?, key); } debug_assert_eq!(C::generator() * secret_share.deref(), verification_shares[¶ms.i()]); debug_assert_eq!(musig_key::(context, keys).unwrap(), group_key); - Ok(ThresholdCore { params, secret_share, group_key, verification_shares }) + Ok(ThresholdCore::new( + params, + Interpolation::Constant(binding), + secret_share, + verification_shares, + )) } diff --git a/crypto/dkg/src/pedpop.rs b/crypto/dkg/src/pedpop.rs index 1faeebe5..adfc6958 100644 --- a/crypto/dkg/src/pedpop.rs +++ b/crypto/dkg/src/pedpop.rs @@ -22,9 +22,9 @@ use multiexp::{multiexp_vartime, BatchVerifier}; use schnorr::SchnorrSignature; use crate::{ - Participant, DkgError, ThresholdParams, ThresholdCore, validate_map, + Participant, DkgError, ThresholdParams, Interpolation, 
ThresholdCore, validate_map, encryption::{ - ReadWrite, EncryptionKeyMessage, EncryptedMessage, Encryption, EncryptionKeyProof, + ReadWrite, EncryptionKeyMessage, EncryptedMessage, Encryption, Decryption, EncryptionKeyProof, DecryptionError, }, }; @@ -32,10 +32,10 @@ use crate::{ type FrostError = DkgError>; #[allow(non_snake_case)] -fn challenge(context: &str, l: Participant, R: &[u8], Am: &[u8]) -> C::F { +fn challenge(context: [u8; 32], l: Participant, R: &[u8], Am: &[u8]) -> C::F { let mut transcript = RecommendedTranscript::new(b"DKG FROST v0.2"); transcript.domain_separate(b"schnorr_proof_of_knowledge"); - transcript.append_message(b"context", context.as_bytes()); + transcript.append_message(b"context", context); transcript.append_message(b"participant", l.to_bytes()); transcript.append_message(b"nonce", R); transcript.append_message(b"commitments", Am); @@ -86,15 +86,15 @@ impl ReadWrite for Commitments { #[derive(Debug, Zeroize)] pub struct KeyGenMachine { params: ThresholdParams, - context: String, + context: [u8; 32], _curve: PhantomData, } impl KeyGenMachine { /// Create a new machine to generate a key. /// - /// The context string should be unique among multisigs. - pub fn new(params: ThresholdParams, context: String) -> KeyGenMachine { + /// The context should be unique among multisigs. + pub fn new(params: ThresholdParams, context: [u8; 32]) -> KeyGenMachine { KeyGenMachine { params, context, _curve: PhantomData } } @@ -129,11 +129,11 @@ impl KeyGenMachine { // There's no reason to spend the time and effort to make this deterministic besides a // general obsession with canonicity and determinism though r, - challenge::(&self.context, self.params.i(), nonce.to_bytes().as_ref(), &cached_msg), + challenge::(self.context, self.params.i(), nonce.to_bytes().as_ref(), &cached_msg), ); // Additionally create an encryption mechanism to protect the secret shares - let encryption = Encryption::new(self.context.clone(), Some(self.params.i), rng); + let encryption = Encryption::new(self.context, self.params.i, rng); // Step 4: Broadcast let msg = @@ -225,7 +225,7 @@ impl ReadWrite for SecretShare { #[derive(Zeroize)] pub struct SecretShareMachine { params: ThresholdParams, - context: String, + context: [u8; 32], coefficients: Vec>, our_commitments: Vec, encryption: Encryption, @@ -274,7 +274,7 @@ impl SecretShareMachine { &mut batch, l, msg.commitments[0], - challenge::(&self.context, l, msg.sig.R.to_bytes().as_ref(), &msg.cached_msg), + challenge::(self.context, l, msg.sig.R.to_bytes().as_ref(), &msg.cached_msg), ); commitments.insert(l, msg.commitments.drain(..).collect::>()); @@ -472,9 +472,10 @@ impl KeyMachine { let KeyMachine { commitments, encryption, params, secret } = self; Ok(BlameMachine { commitments, - encryption, + encryption: encryption.into_decryption(), result: Some(ThresholdCore { params, + interpolation: Interpolation::Lagrange, secret_share: secret, group_key: stripes[0], verification_shares, @@ -486,7 +487,7 @@ impl KeyMachine { /// A machine capable of handling blame proofs. pub struct BlameMachine { commitments: HashMap>, - encryption: Encryption, + encryption: Decryption, result: Option>, } @@ -505,7 +506,6 @@ impl Zeroize for BlameMachine { for commitments in self.commitments.values_mut() { commitments.zeroize(); } - self.encryption.zeroize(); self.result.zeroize(); } } @@ -598,14 +598,13 @@ impl AdditionalBlameMachine { /// authenticated as having come from the supposed party and verified as valid. 
Usage of invalid /// commitments is considered undefined behavior, and may cause everything from inaccurate blame /// to panics. - pub fn new( - rng: &mut R, - context: String, + pub fn new( + context: [u8; 32], n: u16, mut commitment_msgs: HashMap>>, ) -> Result> { let mut commitments = HashMap::new(); - let mut encryption = Encryption::new(context, None, rng); + let mut encryption = Decryption::new(context); for i in 1 ..= n { let i = Participant::new(i).unwrap(); let Some(msg) = commitment_msgs.remove(&i) else { Err(DkgError::MissingParticipant(i))? }; diff --git a/crypto/dkg/src/promote.rs b/crypto/dkg/src/promote.rs index 7cad4f23..c8dcaed0 100644 --- a/crypto/dkg/src/promote.rs +++ b/crypto/dkg/src/promote.rs @@ -113,6 +113,7 @@ impl> GeneratorPromotion< Ok(ThresholdKeys { core: Arc::new(ThresholdCore::new( params, + self.base.core.interpolation.clone(), self.base.secret_share().clone(), verification_shares, )), diff --git a/crypto/dkg/src/tests/evrf/mod.rs b/crypto/dkg/src/tests/evrf/mod.rs new file mode 100644 index 00000000..e6fd2230 --- /dev/null +++ b/crypto/dkg/src/tests/evrf/mod.rs @@ -0,0 +1,79 @@ +use std::collections::HashMap; + +use zeroize::Zeroizing; +use rand_core::OsRng; +use rand::seq::SliceRandom; + +use ciphersuite::{group::ff::Field, Ciphersuite}; + +use crate::{ + Participant, + evrf::*, + tests::{THRESHOLD, PARTICIPANTS, recover_key}, +}; + +mod proof; +use proof::{Pallas, Vesta}; + +#[test] +fn evrf_dkg() { + let generators = EvrfGenerators::::new(THRESHOLD, PARTICIPANTS); + let context = [0; 32]; + + let mut priv_keys = vec![]; + let mut pub_keys = vec![]; + for i in 0 .. PARTICIPANTS { + let priv_key = ::F::random(&mut OsRng); + pub_keys.push(::generator() * priv_key); + priv_keys.push((Participant::new(1 + i).unwrap(), Zeroizing::new(priv_key))); + } + + let mut participations = HashMap::new(); + // Shuffle the private keys so we iterate over a random subset of them + priv_keys.shuffle(&mut OsRng); + for (i, priv_key) in priv_keys.iter().take(usize::from(THRESHOLD)) { + participations.insert( + *i, + EvrfDkg::::participate( + &mut OsRng, + &generators, + context, + THRESHOLD, + &pub_keys, + priv_key, + ) + .unwrap(), + ); + } + + let VerifyResult::Valid(dkg) = EvrfDkg::::verify( + &mut OsRng, + &generators, + context, + THRESHOLD, + &pub_keys, + &participations, + ) + .unwrap() else { + panic!("verify didn't return VerifyResult::Valid") + }; + + let mut group_key = None; + let mut verification_shares = None; + let mut all_keys = HashMap::new(); + for (i, priv_key) in priv_keys { + let keys = dkg.keys(&priv_key).into_iter().next().unwrap(); + assert_eq!(keys.params().i(), i); + assert_eq!(keys.params().t(), THRESHOLD); + assert_eq!(keys.params().n(), PARTICIPANTS); + group_key = group_key.or(Some(keys.group_key())); + verification_shares = verification_shares.or(Some(keys.verification_shares())); + assert_eq!(Some(keys.group_key()), group_key); + assert_eq!(Some(keys.verification_shares()), verification_shares); + + all_keys.insert(i, keys); + } + + // TODO: Test for all possible combinations of keys + assert_eq!(Pallas::generator() * recover_key(&all_keys), group_key.unwrap()); +} diff --git a/crypto/dkg/src/tests/evrf/proof.rs b/crypto/dkg/src/tests/evrf/proof.rs new file mode 100644 index 00000000..5750c6c4 --- /dev/null +++ b/crypto/dkg/src/tests/evrf/proof.rs @@ -0,0 +1,118 @@ +use std::time::Instant; + +use rand_core::OsRng; + +use zeroize::{Zeroize, Zeroizing}; +use generic_array::typenum::{Sum, Diff, Quot, U, U1, U2}; +use blake2::{Digest, Blake2b512}; 
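+// Pallas and Vesta form a cycle of curves: the scalar field of each is the base field of the
+// other. That's what lets Vesta serve as Pallas's embedded curve in the EvrfCurve impl below,
+// with Vesta point coordinates natively representable in the Pallas-side circuit.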
+ +use ciphersuite::{ + group::{ + ff::{FromUniformBytes, Field, PrimeField}, + Group, + }, + Ciphersuite, Secp256k1, Ed25519, Ristretto, +}; +use pasta_curves::{Ep, Eq, Fp, Fq}; + +use generalized_bulletproofs::tests::generators; +use generalized_bulletproofs_ec_gadgets::DiscreteLogParameters; + +use crate::evrf::proof::*; + +#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)] +pub(crate) struct Pallas; +impl Ciphersuite for Pallas { + type F = Fq; + type G = Ep; + type H = Blake2b512; + const ID: &'static [u8] = b"Pallas"; + fn generator() -> Ep { + Ep::generator() + } + fn hash_to_F(dst: &[u8], msg: &[u8]) -> Self::F { + // This naive concat may be insecure in a real world deployment + // This is solely test code so it's fine + Self::F::from_uniform_bytes(&Self::H::digest([dst, msg].concat()).into()) + } +} + +#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)] +pub(crate) struct Vesta; +impl Ciphersuite for Vesta { + type F = Fp; + type G = Eq; + type H = Blake2b512; + const ID: &'static [u8] = b"Vesta"; + fn generator() -> Eq { + Eq::generator() + } + fn hash_to_F(dst: &[u8], msg: &[u8]) -> Self::F { + // This naive concat may be insecure in a real world deployment + // This is solely test code so it's fine + Self::F::from_uniform_bytes(&Self::H::digest([dst, msg].concat()).into()) + } +} + +pub struct VestaParams; +impl DiscreteLogParameters for VestaParams { + type ScalarBits = U<{ <::F as PrimeField>::NUM_BITS as usize }>; + type XCoefficients = Quot, U2>; + type XCoefficientsMinusOne = Diff; + type YxCoefficients = Diff, U1>, U2>, U2>; +} + +impl EvrfCurve for Pallas { + type EmbeddedCurve = Vesta; + type EmbeddedCurveParameters = VestaParams; +} + +fn evrf_proof_test() { + let generators = generators(1024); + let vesta_private_key = Zeroizing::new(::F::random(&mut OsRng)); + let ecdh_public_keys = [ + ::G::random(&mut OsRng), + ::G::random(&mut OsRng), + ]; + let time = Instant::now(); + let res = + Evrf::::prove(&mut OsRng, &generators, [0; 32], 1, &ecdh_public_keys, &vesta_private_key) + .unwrap(); + println!("Proving time: {:?}", time.elapsed()); + + let time = Instant::now(); + let mut verifier = generators.batch_verifier(); + Evrf::::verify( + &mut OsRng, + &generators, + &mut verifier, + [0; 32], + 1, + &ecdh_public_keys, + C::EmbeddedCurve::generator() * *vesta_private_key, + &res.proof, + ) + .unwrap(); + assert!(generators.verify(verifier)); + println!("Verifying time: {:?}", time.elapsed()); +} + +#[test] +fn pallas_evrf_proof_test() { + evrf_proof_test::(); +} + +#[test] +fn secp256k1_evrf_proof_test() { + evrf_proof_test::(); +} + +#[test] +fn ed25519_evrf_proof_test() { + evrf_proof_test::(); +} + +#[test] +fn ristretto_evrf_proof_test() { + evrf_proof_test::(); +} diff --git a/crypto/dkg/src/tests/mod.rs b/crypto/dkg/src/tests/mod.rs index f21d7254..4399d72a 100644 --- a/crypto/dkg/src/tests/mod.rs +++ b/crypto/dkg/src/tests/mod.rs @@ -6,7 +6,7 @@ use rand_core::{RngCore, CryptoRng}; use ciphersuite::{group::ff::Field, Ciphersuite}; -use crate::{Participant, ThresholdCore, ThresholdKeys, lagrange, musig::musig as musig_fn}; +use crate::{Participant, ThresholdCore, ThresholdKeys, musig::musig as musig_fn}; mod musig; pub use musig::test_musig; @@ -19,6 +19,9 @@ use pedpop::pedpop_gen; mod promote; use promote::test_generator_promotion; +#[cfg(all(test, feature = "evrf"))] +mod evrf; + /// Constant amount of participants to use when testing. pub const PARTICIPANTS: u16 = 5; /// Constant threshold of participants to use when testing. 
@@ -43,7 +46,8 @@ pub fn recover_key(keys: &HashMap> let included = keys.keys().copied().collect::>(); let group_private = keys.iter().fold(C::F::ZERO, |accum, (i, keys)| { - accum + (lagrange::(*i, &included) * keys.secret_share().deref()) + accum + + (first.core.interpolation.interpolation_factor(*i, &included) * keys.secret_share().deref()) }); assert_eq!(C::generator() * group_private, first.group_key(), "failed to recover keys"); group_private diff --git a/crypto/dkg/src/tests/pedpop.rs b/crypto/dkg/src/tests/pedpop.rs index 3ae383e3..42d7af67 100644 --- a/crypto/dkg/src/tests/pedpop.rs +++ b/crypto/dkg/src/tests/pedpop.rs @@ -14,7 +14,7 @@ use crate::{ type PedPoPEncryptedMessage = EncryptedMessage::F>>; type PedPoPSecretShares = HashMap>; -const CONTEXT: &str = "DKG Test Key Generation"; +const CONTEXT: [u8; 32] = *b"DKG Test Key Generation "; // Commit, then return commitment messages, enc keys, and shares #[allow(clippy::type_complexity)] @@ -31,7 +31,7 @@ fn commit_enc_keys_and_shares( let mut enc_keys = HashMap::new(); for i in (1 ..= PARTICIPANTS).map(Participant) { let params = ThresholdParams::new(THRESHOLD, PARTICIPANTS, i).unwrap(); - let machine = KeyGenMachine::::new(params, CONTEXT.to_string()); + let machine = KeyGenMachine::::new(params, CONTEXT); let (machine, these_commitments) = machine.generate_coefficients(rng); machines.insert(i, machine); @@ -147,14 +147,12 @@ mod literal { // Verify machines constructed with AdditionalBlameMachine::new work assert_eq!( - AdditionalBlameMachine::new( - &mut OsRng, - CONTEXT.to_string(), - PARTICIPANTS, - commitment_msgs.clone() - ) - .unwrap() - .blame(ONE, TWO, msg.clone(), blame.clone()), + AdditionalBlameMachine::new(CONTEXT, PARTICIPANTS, commitment_msgs.clone()).unwrap().blame( + ONE, + TWO, + msg.clone(), + blame.clone() + ), ONE, ); } diff --git a/crypto/evrf/circuit-abstraction/Cargo.toml b/crypto/evrf/circuit-abstraction/Cargo.toml new file mode 100644 index 00000000..1346be49 --- /dev/null +++ b/crypto/evrf/circuit-abstraction/Cargo.toml @@ -0,0 +1,20 @@ +[package] +name = "generalized-bulletproofs-circuit-abstraction" +version = "0.1.0" +description = "An abstraction for arithmetic circuits over Generalized Bulletproofs" +license = "MIT" +repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/circuit-abstraction" +authors = ["Luke Parker "] +keywords = ["bulletproofs", "circuit"] +edition = "2021" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[dependencies] +zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } + +ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] } + +generalized-bulletproofs = { path = "../generalized-bulletproofs" } diff --git a/crypto/evrf/circuit-abstraction/LICENSE b/crypto/evrf/circuit-abstraction/LICENSE new file mode 100644 index 00000000..659881f1 --- /dev/null +++ b/crypto/evrf/circuit-abstraction/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2024 Luke Parker + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this 
permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/crypto/evrf/circuit-abstraction/README.md b/crypto/evrf/circuit-abstraction/README.md new file mode 100644 index 00000000..95149d93 --- /dev/null +++ b/crypto/evrf/circuit-abstraction/README.md @@ -0,0 +1,3 @@ +# Generalized Bulletproofs Circuit Abstraction + +A circuit abstraction around `generalized-bulletproofs`. diff --git a/crypto/evrf/circuit-abstraction/src/gadgets.rs b/crypto/evrf/circuit-abstraction/src/gadgets.rs new file mode 100644 index 00000000..08e5214e --- /dev/null +++ b/crypto/evrf/circuit-abstraction/src/gadgets.rs @@ -0,0 +1,39 @@ +use ciphersuite::{group::ff::Field, Ciphersuite}; + +use crate::*; + +impl Circuit { + /// Constrain two linear combinations to be equal. + pub fn equality(&mut self, a: LinComb, b: &LinComb) { + self.constrain_equal_to_zero(a - b); + } + + /// Calculate (and constrain) the inverse of a value. + /// + /// A linear combination may optionally be passed as a constraint for the value being inverted. + /// A reference to the inverted value and its inverse is returned. + /// + /// May panic if any linear combinations reference non-existent terms, the witness isn't provided + /// when proving/is provided when verifying, or if the witness is 0 (and accordingly doesn't have + /// an inverse). + pub fn inverse( + &mut self, + lincomb: Option>, + witness: Option, + ) -> (Variable, Variable) { + let (l, r, o) = self.mul(lincomb, None, witness.map(|f| (f, f.invert().unwrap()))); + // The output of a value multiplied by its inverse is 1 + // Constrain `1 o - 1 = 0` + self.constrain_equal_to_zero(LinComb::from(o).constant(-C::F::ONE)); + (l, r) + } + + /// Constrain two linear combinations as inequal. + /// + /// May panic if any linear combinations reference non-existent terms. 
+  pub fn inequality(&mut self, a: LinComb<C::F>, b: &LinComb<C::F>, witness: Option<(C::F, C::F)>) {
+    let l_constraint = a - b;
+    // The existence of a multiplicative inverse means a-b != 0, which means a != b
+    self.inverse(Some(l_constraint), witness.map(|(a, b)| a - b));
+  }
+}
diff --git a/crypto/evrf/circuit-abstraction/src/lib.rs b/crypto/evrf/circuit-abstraction/src/lib.rs
new file mode 100644
index 00000000..9971480d
--- /dev/null
+++ b/crypto/evrf/circuit-abstraction/src/lib.rs
@@ -0,0 +1,192 @@
+#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![doc = include_str!("../README.md")]
+#![deny(missing_docs)]
+#![allow(non_snake_case)]
+
+use zeroize::{Zeroize, ZeroizeOnDrop};
+
+use ciphersuite::{
+  group::ff::{Field, PrimeField},
+  Ciphersuite,
+};
+
+use generalized_bulletproofs::{
+  ScalarVector, PedersenCommitment, PedersenVectorCommitment, ProofGenerators,
+  transcript::{Transcript as ProverTranscript, VerifierTranscript, Commitments},
+  arithmetic_circuit_proof::{AcError, ArithmeticCircuitStatement, ArithmeticCircuitWitness},
+};
+pub use generalized_bulletproofs::arithmetic_circuit_proof::{Variable, LinComb};
+
+mod gadgets;
+
+/// A trait for the transcript, whether proving or verifying, as necessary for sampling
+/// challenges.
+pub trait Transcript {
+  /// Sample a challenge from the transcript.
+  ///
+  /// It is the caller's responsibility to have properly transcripted all variables prior to
+  /// sampling this challenge.
+  fn challenge<F: PrimeField>(&mut self) -> F;
+}
+impl Transcript for ProverTranscript {
+  fn challenge<F: PrimeField>(&mut self) -> F {
+    self.challenge()
+  }
+}
+impl Transcript for VerifierTranscript<'_> {
+  fn challenge<F: PrimeField>(&mut self) -> F {
+    self.challenge()
+  }
+}
+
+/// The witness for the satisfaction of this circuit.
+#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
+struct ProverData<C: Ciphersuite> {
+  aL: Vec<C::F>,
+  aR: Vec<C::F>,
+  C: Vec<PedersenVectorCommitment<C>>,
+  V: Vec<PedersenCommitment<C>>,
+}
+
+/// A struct representing a circuit.
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub struct Circuit<C: Ciphersuite> {
+  muls: usize,
+  // A series of linear combinations which must evaluate to 0.
+  constraints: Vec<LinComb<C::F>>,
+  prover: Option<ProverData<C>>,
+}
+
+impl<C: Ciphersuite> Circuit<C> {
+  /// Returns the amount of multiplications used by this circuit.
+  pub fn muls(&self) -> usize {
+    self.muls
+  }
+
+  /// Create an instance to prove satisfaction of a circuit with.
+  // TODO: Take the transcript here
+  #[allow(clippy::type_complexity)]
+  pub fn prove(
+    vector_commitments: Vec<PedersenVectorCommitment<C>>,
+    commitments: Vec<PedersenCommitment<C>>,
+  ) -> Self {
+    Self {
+      muls: 0,
+      constraints: vec![],
+      prover: Some(ProverData { aL: vec![], aR: vec![], C: vector_commitments, V: commitments }),
+    }
+  }
+
+  /// Create an instance to verify a proof with.
+  // TODO: Take the transcript here
+  pub fn verify() -> Self {
+    Self { muls: 0, constraints: vec![], prover: None }
+  }
+
+  /// Evaluate a linear combination.
+  ///
+  /// Yields WL aL + WR aR + WO aO + WCG CG + WCH CH + WV V + c.
+  ///
+  /// May panic if the linear combination references non-existent terms.
+  ///
+  /// Returns None if not a prover.
+ pub fn eval(&self, lincomb: &LinComb) -> Option { + self.prover.as_ref().map(|prover| { + let mut res = lincomb.c(); + for (index, weight) in lincomb.WL() { + res += prover.aL[*index] * weight; + } + for (index, weight) in lincomb.WR() { + res += prover.aR[*index] * weight; + } + for (index, weight) in lincomb.WO() { + res += prover.aL[*index] * prover.aR[*index] * weight; + } + for (WCG, C) in lincomb.WCG().iter().zip(&prover.C) { + for (j, weight) in WCG { + res += C.g_values[*j] * weight; + } + } + for (WCH, C) in lincomb.WCH().iter().zip(&prover.C) { + for (j, weight) in WCH { + res += C.h_values[*j] * weight; + } + } + for (index, weight) in lincomb.WV() { + res += prover.V[*index].value * weight; + } + res + }) + } + + /// Multiply two values, optionally constrained, returning the constrainable left/right/out + /// terms. + /// + /// May panic if any linear combinations reference non-existent terms or if the witness isn't + /// provided when proving/is provided when verifying. + pub fn mul( + &mut self, + a: Option>, + b: Option>, + witness: Option<(C::F, C::F)>, + ) -> (Variable, Variable, Variable) { + let l = Variable::aL(self.muls); + let r = Variable::aR(self.muls); + let o = Variable::aO(self.muls); + self.muls += 1; + + debug_assert_eq!(self.prover.is_some(), witness.is_some()); + if let Some(witness) = witness { + let prover = self.prover.as_mut().unwrap(); + prover.aL.push(witness.0); + prover.aR.push(witness.1); + } + + if let Some(a) = a { + self.constrain_equal_to_zero(a.term(-C::F::ONE, l)); + } + if let Some(b) = b { + self.constrain_equal_to_zero(b.term(-C::F::ONE, r)); + } + + (l, r, o) + } + + /// Constrain a linear combination to be equal to 0. + /// + /// May panic if the linear combination references non-existent terms. + pub fn constrain_equal_to_zero(&mut self, lincomb: LinComb) { + self.constraints.push(lincomb); + } + + /// Obtain the statement for this circuit. + /// + /// If configured as the prover, the witness to use is also returned. 
+ #[allow(clippy::type_complexity)] + pub fn statement( + self, + generators: ProofGenerators<'_, C>, + commitments: Commitments, + ) -> Result<(ArithmeticCircuitStatement<'_, C>, Option>), AcError> { + let statement = ArithmeticCircuitStatement::new(generators, self.constraints, commitments)?; + + let witness = self + .prover + .map(|mut prover| { + // We can't deconstruct the witness as it implements Drop (per ZeroizeOnDrop) + // Accordingly, we take the values within it and move forward with those + let mut aL = vec![]; + std::mem::swap(&mut prover.aL, &mut aL); + let mut aR = vec![]; + std::mem::swap(&mut prover.aR, &mut aR); + let mut C = vec![]; + std::mem::swap(&mut prover.C, &mut C); + let mut V = vec![]; + std::mem::swap(&mut prover.V, &mut V); + ArithmeticCircuitWitness::new(ScalarVector::from(aL), ScalarVector::from(aR), C, V) + }) + .transpose()?; + + Ok((statement, witness)) + } +} diff --git a/crypto/evrf/divisors/Cargo.toml b/crypto/evrf/divisors/Cargo.toml new file mode 100644 index 00000000..d4e3a2d0 --- /dev/null +++ b/crypto/evrf/divisors/Cargo.toml @@ -0,0 +1,34 @@ +[package] +name = "ec-divisors" +version = "0.1.0" +description = "A library for calculating elliptic curve divisors" +license = "MIT" +repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/divisors" +authors = ["Luke Parker "] +keywords = ["ciphersuite", "ff", "group"] +edition = "2021" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[dependencies] +rand_core = { version = "0.6", default-features = false } +zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } + +group = "0.13" + +hex = { version = "0.4", optional = true } +dalek-ff-group = { path = "../../dalek-ff-group", features = ["std"], optional = true } +pasta_curves = { version = "0.5", default-features = false, features = ["bits", "alloc"], optional = true } + +[dev-dependencies] +rand_core = { version = "0.6", features = ["getrandom"] } + +hex = "0.4" +dalek-ff-group = { path = "../../dalek-ff-group", features = ["std"] } +pasta_curves = { version = "0.5", default-features = false, features = ["bits", "alloc"] } + +[features] +ed25519 = ["hex", "dalek-ff-group"] +pasta = ["pasta_curves"] diff --git a/crypto/evrf/divisors/LICENSE b/crypto/evrf/divisors/LICENSE new file mode 100644 index 00000000..36fd4d60 --- /dev/null +++ b/crypto/evrf/divisors/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2023-2024 Luke Parker + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
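As an aside before moving into the divisor library: the circuit abstraction above is driven by
allocating multiplication gates and constraining linear combinations over them. A minimal sketch,
not part of this patch, assuming some Ciphersuite `C` (the function name here is illustrative
only):

  // Sketch: prove knowledge of x satisfying x * x = 9 (here, x = 3)
  fn sketch_square_is_nine<C: Ciphersuite>() {
    let x = C::F::from(3u64);
    // This toy circuit opens no Pedersen (vector) commitments
    let mut circuit = Circuit::<C>::prove(vec![], vec![]);
    // Allocate a multiplication gate, witnessing aL = aR = x, so aO = x * x
    let (_l, _r, o) = circuit.mul(None, None, Some((x, x)));
    // Constrain 1 * aO - 9 = 0
    circuit.constrain_equal_to_zero(LinComb::from(o).constant(-C::F::from(9u64)));
    // circuit.statement(generators, commitments) then yields the statement (and, when proving,
    // the witness) for the generalized-bulletproofs prover, as in the eVRF proof above
  }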
diff --git a/crypto/evrf/divisors/README.md b/crypto/evrf/divisors/README.md new file mode 100644 index 00000000..51ba542a --- /dev/null +++ b/crypto/evrf/divisors/README.md @@ -0,0 +1,4 @@ +# Elliptic Curve Divisors + +An implementation of a representation for and construction of elliptic curve +divisors, intended for Eagen's [EC IP work](https://eprint.iacr.org/2022/596). diff --git a/crypto/evrf/divisors/src/lib.rs b/crypto/evrf/divisors/src/lib.rs new file mode 100644 index 00000000..d71aa8a4 --- /dev/null +++ b/crypto/evrf/divisors/src/lib.rs @@ -0,0 +1,287 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] +#![allow(non_snake_case)] + +use group::{ + ff::{Field, PrimeField}, + Group, +}; + +mod poly; +pub use poly::*; + +#[cfg(test)] +mod tests; + +/// A curve usable with this library. +pub trait DivisorCurve: Group { + /// An element of the field this curve is defined over. + type FieldElement: PrimeField; + + /// The A in the curve equation y^2 = x^3 + A x + B. + fn a() -> Self::FieldElement; + /// The B in the curve equation y^2 = x^3 + A x + B. + fn b() -> Self::FieldElement; + + /// y^2 - x^3 - A x - B + /// + /// Section 2 of the security proofs define this modulus. + /// + /// This MUST NOT be overriden. + // TODO: Move to an extension trait + fn divisor_modulus() -> Poly { + Poly { + // 0 y**1, 1 y*2 + y_coefficients: vec![Self::FieldElement::ZERO, Self::FieldElement::ONE], + yx_coefficients: vec![], + x_coefficients: vec![ + // - A x + -Self::a(), + // 0 x^2 + Self::FieldElement::ZERO, + // - x^3 + -Self::FieldElement::ONE, + ], + // - B + zero_coefficient: -Self::b(), + } + } + + /// Convert a point to its x and y coordinates. + /// + /// Returns None if passed the point at infinity. + fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)>; +} + +/// Calculate the slope and intercept between two points. +/// +/// This function panics when `a @ infinity`, `b @ infinity`, `a == b`, or when `a == -b`. +pub(crate) fn slope_intercept(a: C, b: C) -> (C::FieldElement, C::FieldElement) { + let (ax, ay) = C::to_xy(a).unwrap(); + debug_assert_eq!(C::divisor_modulus().eval(ax, ay), C::FieldElement::ZERO); + let (bx, by) = C::to_xy(b).unwrap(); + debug_assert_eq!(C::divisor_modulus().eval(bx, by), C::FieldElement::ZERO); + let slope = (by - ay) * + Option::::from((bx - ax).invert()) + .expect("trying to get slope/intercept of points sharing an x coordinate"); + let intercept = by - (slope * bx); + debug_assert!(bool::from((ay - (slope * ax) - intercept).is_zero())); + debug_assert!(bool::from((by - (slope * bx) - intercept).is_zero())); + (slope, intercept) +} + +// The line interpolating two points. +fn line(a: C, mut b: C) -> Poly { + // If they're both the point at infinity, we simply set the line to one + if bool::from(a.is_identity() & b.is_identity()) { + return Poly { + y_coefficients: vec![], + yx_coefficients: vec![], + x_coefficients: vec![], + zero_coefficient: C::FieldElement::ONE, + }; + } + + // If either point is the point at infinity, or these are additive inverses, the line is + // `1 * x - x`. The first `x` is a term in the polynomial, the `x` is the `x` coordinate of these + // points (of which there is one, as the second point is either at infinity or has a matching `x` + // coordinate). 
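+  // e.g. for a = (x0, y0) and b = -a = (x0, -y0), this produces x - x0, which is zero exactly at
+  // the x coordinate the two points share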
+ if bool::from(a.is_identity() | b.is_identity()) || (a == -b) { + let (x, _) = C::to_xy(if !bool::from(a.is_identity()) { a } else { b }).unwrap(); + return Poly { + y_coefficients: vec![], + yx_coefficients: vec![], + x_coefficients: vec![C::FieldElement::ONE], + zero_coefficient: -x, + }; + } + + // If the points are equal, we use the line interpolating the sum of these points with the point + // at infinity + if a == b { + b = -a.double(); + } + + let (slope, intercept) = slope_intercept::(a, b); + + // Section 4 of the proofs explicitly state the line `L = y - lambda * x - mu` + // y - (slope * x) - intercept + Poly { + y_coefficients: vec![C::FieldElement::ONE], + yx_coefficients: vec![], + x_coefficients: vec![-slope], + zero_coefficient: -intercept, + } +} + +/// Create a divisor interpolating the following points. +/// +/// Returns None if: +/// - No points were passed in +/// - The points don't sum to the point at infinity +/// - A passed in point was the point at infinity +#[allow(clippy::new_ret_no_self)] +pub fn new_divisor(points: &[C]) -> Option> { + // A single point is either the point at infinity, or this doesn't sum to the point at infinity + // Both cause us to return None + if points.len() < 2 { + None?; + } + if points.iter().sum::() != C::identity() { + None?; + } + + // Create the initial set of divisors + let mut divs = vec![]; + let mut iter = points.iter().copied(); + while let Some(a) = iter.next() { + if a == C::identity() { + None?; + } + + let b = iter.next(); + if b == Some(C::identity()) { + None?; + } + + // Draw the line between those points + divs.push((a + b.unwrap_or(C::identity()), line::(a, b.unwrap_or(-a)))); + } + + let modulus = C::divisor_modulus(); + + // Pair them off until only one remains + while divs.len() > 1 { + let mut next_divs = vec![]; + // If there's an odd amount of divisors, carry the odd one out to the next iteration + if (divs.len() % 2) == 1 { + next_divs.push(divs.pop().unwrap()); + } + + while let Some((a, a_div)) = divs.pop() { + let (b, b_div) = divs.pop().unwrap(); + + // Merge the two divisors + let numerator = a_div.mul_mod(b_div, &modulus).mul_mod(line::(a, b), &modulus); + let denominator = line::(a, -a).mul_mod(line::(b, -b), &modulus); + let (q, r) = numerator.div_rem(&denominator); + assert_eq!(r, Poly::zero()); + + next_divs.push((a + b, q)); + } + + divs = next_divs; + } + + // Return the unified divisor + Some(divs.remove(0).1) +} + +#[cfg(any(test, feature = "pasta"))] +mod pasta { + use group::{ff::Field, Curve}; + use pasta_curves::{ + arithmetic::{Coordinates, CurveAffine}, + Ep, Fp, Eq, Fq, + }; + use crate::DivisorCurve; + + impl DivisorCurve for Ep { + type FieldElement = Fp; + + fn a() -> Self::FieldElement { + Self::FieldElement::ZERO + } + fn b() -> Self::FieldElement { + Self::FieldElement::from(5u64) + } + + fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> { + Option::>::from(point.to_affine().coordinates()) + .map(|coords| (*coords.x(), *coords.y())) + } + } + + impl DivisorCurve for Eq { + type FieldElement = Fq; + + fn a() -> Self::FieldElement { + Self::FieldElement::ZERO + } + fn b() -> Self::FieldElement { + Self::FieldElement::from(5u64) + } + + fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> { + Option::>::from(point.to_affine().coordinates()) + .map(|coords| (*coords.x(), *coords.y())) + } + } +} + +#[cfg(any(test, feature = "ed25519"))] +mod ed25519 { + use group::{ + ff::{Field, PrimeField}, + Group, GroupEncoding, + }; + use 
dalek_ff_group::{FieldElement, EdwardsPoint}; + + impl crate::DivisorCurve for EdwardsPoint { + type FieldElement = FieldElement; + + // Wei25519 a/b + // https://www.ietf.org/archive/id/draft-ietf-lwig-curve-representations-02.pdf E.3 + fn a() -> Self::FieldElement { + let mut be_bytes = + hex::decode("2aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa984914a144").unwrap(); + be_bytes.reverse(); + let le_bytes = be_bytes; + Self::FieldElement::from_repr(le_bytes.try_into().unwrap()).unwrap() + } + fn b() -> Self::FieldElement { + let mut be_bytes = + hex::decode("7b425ed097b425ed097b425ed097b425ed097b425ed097b4260b5e9c7710c864").unwrap(); + be_bytes.reverse(); + let le_bytes = be_bytes; + + Self::FieldElement::from_repr(le_bytes.try_into().unwrap()).unwrap() + } + + // https://www.ietf.org/archive/id/draft-ietf-lwig-curve-representations-02.pdf E.2 + fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> { + if bool::from(point.is_identity()) { + None?; + } + + // Extract the y coordinate from the compressed point + let mut edwards_y = point.to_bytes(); + let x_is_odd = edwards_y[31] >> 7; + edwards_y[31] &= (1 << 7) - 1; + let edwards_y = Self::FieldElement::from_repr(edwards_y).unwrap(); + + // Recover the x coordinate + let edwards_y_sq = edwards_y * edwards_y; + let D = -Self::FieldElement::from(121665u64) * + Self::FieldElement::from(121666u64).invert().unwrap(); + let mut edwards_x = ((edwards_y_sq - Self::FieldElement::ONE) * + ((D * edwards_y_sq) + Self::FieldElement::ONE).invert().unwrap()) + .sqrt() + .unwrap(); + if u8::from(bool::from(edwards_x.is_odd())) != x_is_odd { + edwards_x = -edwards_x; + } + + // Calculate the x and y coordinates for Wei25519 + let edwards_y_plus_one = Self::FieldElement::ONE + edwards_y; + let one_minus_edwards_y = Self::FieldElement::ONE - edwards_y; + let wei_x = (edwards_y_plus_one * one_minus_edwards_y.invert().unwrap()) + + (Self::FieldElement::from(486662u64) * Self::FieldElement::from(3u64).invert().unwrap()); + let c = + (-(Self::FieldElement::from(486662u64) + Self::FieldElement::from(2u64))).sqrt().unwrap(); + let wei_y = c * edwards_y_plus_one * (one_minus_edwards_y * edwards_x).invert().unwrap(); + Some((wei_x, wei_y)) + } + } +} diff --git a/crypto/evrf/divisors/src/poly.rs b/crypto/evrf/divisors/src/poly.rs new file mode 100644 index 00000000..b818433b --- /dev/null +++ b/crypto/evrf/divisors/src/poly.rs @@ -0,0 +1,430 @@ +use core::ops::{Add, Neg, Sub, Mul, Rem}; + +use zeroize::Zeroize; + +use group::ff::PrimeField; + +/// A structure representing a Polynomial with x**i, y**i, and y**i * x**j terms. +#[derive(Clone, PartialEq, Eq, Debug, Zeroize)] +pub struct Poly> { + /// c[i] * y ** (i + 1) + pub y_coefficients: Vec, + /// c[i][j] * y ** (i + 1) x ** (j + 1) + pub yx_coefficients: Vec>, + /// c[i] * x ** (i + 1) + pub x_coefficients: Vec, + /// Coefficient for x ** 0, y ** 0, and x ** 0 y ** 0 (the coefficient for 1) + pub zero_coefficient: F, +} + +impl> Poly { + /// A polynomial for zero. + pub fn zero() -> Self { + Poly { + y_coefficients: vec![], + yx_coefficients: vec![], + x_coefficients: vec![], + zero_coefficient: F::ZERO, + } + } + + /// The amount of terms in the polynomial. 
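+  // e.g. for y + 2 x + 1 (once tidied), this is 3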
+  #[allow(clippy::len_without_is_empty)]
+  #[must_use]
+  pub fn len(&self) -> usize {
+    self.y_coefficients.len() +
+      self.yx_coefficients.iter().map(Vec::len).sum::<usize>() +
+      self.x_coefficients.len() +
+      usize::from(u8::from(self.zero_coefficient != F::ZERO))
+  }
+
+  // Remove high-order zero terms, allowing the length of the vectors to equal the amount of terms.
+  pub(crate) fn tidy(&mut self) {
+    let tidy = |vec: &mut Vec<F>| {
+      while vec.last() == Some(&F::ZERO) {
+        vec.pop();
+      }
+    };
+
+    tidy(&mut self.y_coefficients);
+    for vec in self.yx_coefficients.iter_mut() {
+      tidy(vec);
+    }
+    while self.yx_coefficients.last() == Some(&vec![]) {
+      self.yx_coefficients.pop();
+    }
+    tidy(&mut self.x_coefficients);
+  }
+}
+
+impl<F: From<u64> + Zeroize + PrimeField> Add<&Self> for Poly<F> {
+  type Output = Self;
+
+  fn add(mut self, other: &Self) -> Self {
+    // Expand to be the needed size
+    while self.y_coefficients.len() < other.y_coefficients.len() {
+      self.y_coefficients.push(F::ZERO);
+    }
+    while self.yx_coefficients.len() < other.yx_coefficients.len() {
+      self.yx_coefficients.push(vec![]);
+    }
+    for i in 0 .. other.yx_coefficients.len() {
+      while self.yx_coefficients[i].len() < other.yx_coefficients[i].len() {
+        self.yx_coefficients[i].push(F::ZERO);
+      }
+    }
+    while self.x_coefficients.len() < other.x_coefficients.len() {
+      self.x_coefficients.push(F::ZERO);
+    }
+
+    // Perform the addition
+    for (i, coeff) in other.y_coefficients.iter().enumerate() {
+      self.y_coefficients[i] += coeff;
+    }
+    for (i, coeffs) in other.yx_coefficients.iter().enumerate() {
+      for (j, coeff) in coeffs.iter().enumerate() {
+        self.yx_coefficients[i][j] += coeff;
+      }
+    }
+    for (i, coeff) in other.x_coefficients.iter().enumerate() {
+      self.x_coefficients[i] += coeff;
+    }
+    self.zero_coefficient += other.zero_coefficient;
+
+    self.tidy();
+    self
+  }
+}
+
+impl<F: From<u64> + Zeroize + PrimeField> Neg for Poly<F> {
+  type Output = Self;
+
+  fn neg(mut self) -> Self {
+    for y_coeff in self.y_coefficients.iter_mut() {
+      *y_coeff = -*y_coeff;
+    }
+    for yx_coeffs in self.yx_coefficients.iter_mut() {
+      for yx_coeff in yx_coeffs.iter_mut() {
+        *yx_coeff = -*yx_coeff;
+      }
+    }
+    for x_coeff in self.x_coefficients.iter_mut() {
+      *x_coeff = -*x_coeff;
+    }
+    self.zero_coefficient = -self.zero_coefficient;
+
+    self
+  }
+}
+
+impl<F: From<u64> + Zeroize + PrimeField> Sub for Poly<F> {
+  type Output = Self;
+
+  fn sub(self, other: Self) -> Self {
+    self + &-other
+  }
+}
+
+impl<F: From<u64> + Zeroize + PrimeField> Mul<F> for Poly<F> {
+  type Output = Self;
+
+  fn mul(mut self, scalar: F) -> Self {
+    if scalar == F::ZERO {
+      return Poly::zero();
+    }
+
+    for y_coeff in self.y_coefficients.iter_mut() {
+      *y_coeff *= scalar;
+    }
+    for coeffs in self.yx_coefficients.iter_mut() {
+      for coeff in coeffs.iter_mut() {
+        *coeff *= scalar;
+      }
+    }
+    for x_coeff in self.x_coefficients.iter_mut() {
+      *x_coeff *= scalar;
+    }
+    self.zero_coefficient *= scalar;
+    self
+  }
+}
+
+impl<F: From<u64> + Zeroize + PrimeField> Poly<F> {
+  #[must_use]
+  fn shift_by_x(mut self, power_of_x: usize) -> Self {
+    if power_of_x == 0 {
+      return self;
+    }
+
+    // Shift up every x coefficient
+    for _ in 0 ..
power_of_x { + self.x_coefficients.insert(0, F::ZERO); + for yx_coeffs in &mut self.yx_coefficients { + yx_coeffs.insert(0, F::ZERO); + } + } + + // Move the zero coefficient + self.x_coefficients[power_of_x - 1] = self.zero_coefficient; + self.zero_coefficient = F::ZERO; + + // Move the y coefficients + // Start by creating yx coefficients with the necessary powers of x + let mut yx_coefficients_to_push = vec![]; + while yx_coefficients_to_push.len() < power_of_x { + yx_coefficients_to_push.push(F::ZERO); + } + // Now, ensure the yx coefficients has the slots for the y coefficients we're moving + while self.yx_coefficients.len() < self.y_coefficients.len() { + self.yx_coefficients.push(yx_coefficients_to_push.clone()); + } + // Perform the move + for (i, y_coeff) in self.y_coefficients.drain(..).enumerate() { + self.yx_coefficients[i][power_of_x - 1] = y_coeff; + } + + self + } + + #[must_use] + fn shift_by_y(mut self, power_of_y: usize) -> Self { + if power_of_y == 0 { + return self; + } + + // Shift up every y coefficient + for _ in 0 .. power_of_y { + self.y_coefficients.insert(0, F::ZERO); + self.yx_coefficients.insert(0, vec![]); + } + + // Move the zero coefficient + self.y_coefficients[power_of_y - 1] = self.zero_coefficient; + self.zero_coefficient = F::ZERO; + + // Move the x coefficients + self.yx_coefficients[power_of_y - 1] = self.x_coefficients; + self.x_coefficients = vec![]; + + self + } +} + +impl> Mul for Poly { + type Output = Self; + + fn mul(self, other: Self) -> Self { + let mut res = self.clone() * other.zero_coefficient; + + for (i, y_coeff) in other.y_coefficients.iter().enumerate() { + let scaled = self.clone() * *y_coeff; + res = res + &scaled.shift_by_y(i + 1); + } + + for (y_i, yx_coeffs) in other.yx_coefficients.iter().enumerate() { + for (x_i, yx_coeff) in yx_coeffs.iter().enumerate() { + let scaled = self.clone() * *yx_coeff; + res = res + &scaled.shift_by_y(y_i + 1).shift_by_x(x_i + 1); + } + } + + for (i, x_coeff) in other.x_coefficients.iter().enumerate() { + let scaled = self.clone() * *x_coeff; + res = res + &scaled.shift_by_x(i + 1); + } + + res.tidy(); + res + } +} + +impl> Poly { + /// Perform multiplication mod `modulus`. + #[must_use] + pub fn mul_mod(self, other: Self, modulus: &Self) -> Self { + ((self % modulus) * (other % modulus)) % modulus + } + + /// Perform division, returning the result and remainder. + /// + /// Panics upon division by zero, with undefined behavior if a non-tidy divisor is used. + #[must_use] + pub fn div_rem(self, divisor: &Self) -> (Self, Self) { + // The leading y coefficient and associated x coefficient. 
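+    // Terms are ordered with powers of y dominant: the leading term is the term with the highest
+    // power of y, with ties broken by the power of x. Long division then repeatedly cancels the
+    // remainder's leading term against the divisor's.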
+
+impl<F: From<u64> + Zeroize + PrimeField> Poly<F> {
+  /// Perform multiplication mod `modulus`.
+  #[must_use]
+  pub fn mul_mod(self, other: Self, modulus: &Self) -> Self {
+    ((self % modulus) * (other % modulus)) % modulus
+  }
+
+  /// Perform division, returning the result and remainder.
+  ///
+  /// Panics upon division by zero, with undefined behavior if a non-tidy divisor is used.
+  #[must_use]
+  pub fn div_rem(self, divisor: &Self) -> (Self, Self) {
+    // The leading y coefficient and associated x coefficient.
+    let leading_y = |poly: &Self| -> (_, _) {
+      if poly.y_coefficients.len() > poly.yx_coefficients.len() {
+        (poly.y_coefficients.len(), 0)
+      } else if !poly.yx_coefficients.is_empty() {
+        (poly.yx_coefficients.len(), poly.yx_coefficients.last().unwrap().len())
+      } else {
+        (0, poly.x_coefficients.len())
+      }
+    };
+
+    let (div_y, div_x) = leading_y(divisor);
+    // If this divisor is actually a scalar, don't perform long division
+    if (div_y == 0) && (div_x == 0) {
+      return (self * divisor.zero_coefficient.invert().unwrap(), Poly::zero());
+    }
+
+    // Remove leading terms until the value is less than the divisor
+    let mut quotient: Poly<F> = Poly::zero();
+    let mut remainder = self.clone();
+    loop {
+      // If there's nothing left to divide, return
+      if remainder == Poly::zero() {
+        break;
+      }
+
+      let (rem_y, rem_x) = leading_y(&remainder);
+      if (rem_y < div_y) || (rem_x < div_x) {
+        break;
+      }
+
+      let get = |poly: &Poly<F>, y_pow: usize, x_pow: usize| -> F {
+        if (y_pow == 0) && (x_pow == 0) {
+          poly.zero_coefficient
+        } else if x_pow == 0 {
+          poly.y_coefficients[y_pow - 1]
+        } else if y_pow == 0 {
+          poly.x_coefficients[x_pow - 1]
+        } else {
+          poly.yx_coefficients[y_pow - 1][x_pow - 1]
+        }
+      };
+      let coeff_numerator = get(&remainder, rem_y, rem_x);
+      let coeff_denominator = get(divisor, div_y, div_x);
+
+      // We want coeff_denominator scaled by x to equal coeff_numerator
+      // x * d = n
+      // n / d = x
+      let mut quotient_term = Poly::zero();
+      // Because this is the coefficient for the leading term of a tidied polynomial, it must be
+      // non-zero
+      quotient_term.zero_coefficient = coeff_numerator * coeff_denominator.invert().unwrap();
+
+      // Add the necessary yx powers
+      let delta_y = rem_y - div_y;
+      let delta_x = rem_x - div_x;
+      let quotient_term = quotient_term.shift_by_y(delta_y).shift_by_x(delta_x);
+
+      let to_remove = quotient_term.clone() * divisor.clone();
+      debug_assert_eq!(get(&to_remove, rem_y, rem_x), coeff_numerator);
+
+      remainder = remainder - to_remove;
+      quotient = quotient + &quotient_term;
+    }
+    debug_assert_eq!((quotient.clone() * divisor.clone()) + &remainder, self);
+
+    (quotient, remainder)
+  }
+}
+
+impl<F: From<u64> + Zeroize + PrimeField> Rem<&Self> for Poly<F> {
+  type Output = Self;
+
+  fn rem(self, modulus: &Self) -> Self {
+    self.div_rem(modulus).1
+  }
+}
+
+impl<F: From<u64> + Zeroize + PrimeField> Poly<F> {
+  /// Evaluate this polynomial with the specified x/y values.
+  ///
+  /// Panics on polynomials with terms whose powers exceed 2**64.
+  #[must_use]
+  pub fn eval(&self, x: F, y: F) -> F {
+    let mut res = self.zero_coefficient;
+    for (pow, coeff) in
+      self.y_coefficients.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
+    {
+      res += y.pow([pow]) * coeff;
+    }
+    for (y_pow, coeffs) in
+      self.yx_coefficients.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
+    {
+      let y_pow = y.pow([y_pow]);
+      for (x_pow, coeff) in
+        coeffs.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
+      {
+        res += y_pow * x.pow([x_pow]) * coeff;
+      }
+    }
+    for (pow, coeff) in
+      self.x_coefficients.iter().enumerate().map(|(i, v)| (u64::try_from(i + 1).unwrap(), v))
+    {
+      res += x.pow([pow]) * coeff;
+    }
+    res
+  }
+}
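An editorial sketch of the `div_rem` contract (same assumptions as the sketches above): the final `debug_assert` checks `quotient * divisor + remainder == self`, here exercised by dividing x² + 1 by x.

```rust
// Editorial sketch: div_rem of (x**2 + 1) by x yields quotient x, remainder 1.
let mut numerator = Poly::<F>::zero();
numerator.x_coefficients = vec![F::ZERO, F::ONE]; // x**2
numerator.zero_coefficient = F::ONE; // + 1

let mut x = Poly::<F>::zero();
x.x_coefficients = vec![F::ONE];

let (quotient, remainder) = numerator.div_rem(&x);
assert_eq!(quotient, x); // x**2 / x = x
assert_eq!(remainder.zero_coefficient, F::ONE); // remainder 1
```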
+
+impl<F: From<u64> + Zeroize + PrimeField> Poly<F> {
+  /// Differentiate a polynomial, reduced by a modulus with a leading y term y**2 x**0, by x and
+  /// y.
+  ///
+  /// This function panics if a y**2 term is present within the polynomial.
+  #[must_use]
+  pub fn differentiate(&self) -> (Poly<F>, Poly<F>) {
+    assert!(self.y_coefficients.len() <= 1);
+    assert!(self.yx_coefficients.len() <= 1);
+
+    // Differentiation by x practically involves:
+    // - Dropping everything without an x component
+    // - Shifting everything down a power of x
+    // - Multiplying the new coefficient by the power it was previously used with
+    let diff_x = {
+      let mut diff_x = Poly {
+        y_coefficients: vec![],
+        yx_coefficients: vec![],
+        x_coefficients: vec![],
+        zero_coefficient: F::ZERO,
+      };
+      if !self.x_coefficients.is_empty() {
+        let mut x_coeffs = self.x_coefficients.clone();
+        diff_x.zero_coefficient = x_coeffs.remove(0);
+        diff_x.x_coefficients = x_coeffs;
+
+        let mut prior_x_power = F::from(2);
+        for x_coeff in &mut diff_x.x_coefficients {
+          *x_coeff *= prior_x_power;
+          prior_x_power += F::ONE;
+        }
+      }
+
+      if !self.yx_coefficients.is_empty() {
+        let mut yx_coeffs = self.yx_coefficients[0].clone();
+        diff_x.y_coefficients = vec![yx_coeffs.remove(0)];
+        diff_x.yx_coefficients = vec![yx_coeffs];
+
+        let mut prior_x_power = F::from(2);
+        for yx_coeff in &mut diff_x.yx_coefficients[0] {
+          *yx_coeff *= prior_x_power;
+          prior_x_power += F::ONE;
+        }
+      }
+
+      diff_x.tidy();
+      diff_x
+    };
+
+    // Differentiation by y is trivial
+    // It's the y coefficient as the zero coefficient, and the yx coefficients as the x
+    // coefficients
+    // This is thanks to any y term over y^2 being reduced out
+    let diff_y = Poly {
+      y_coefficients: vec![],
+      yx_coefficients: vec![],
+      x_coefficients: self.yx_coefficients.first().cloned().unwrap_or(vec![]),
+      zero_coefficient: self.y_coefficients.first().cloned().unwrap_or(F::ZERO),
+    };
+
+    (diff_x, diff_y)
+  }
+
+  /// Normalize the x coefficient to 1.
+  ///
+  /// Panics if there is no x coefficient to normalize or if it cannot be normalized to 1.
+  #[must_use]
+  pub fn normalize_x_coefficient(self) -> Self {
+    let scalar = self.x_coefficients[0].invert().unwrap();
+    self * scalar
+  }
+}
diff --git a/crypto/evrf/divisors/src/tests/mod.rs b/crypto/evrf/divisors/src/tests/mod.rs
new file mode 100644
index 00000000..bd8de441
--- /dev/null
+++ b/crypto/evrf/divisors/src/tests/mod.rs
@@ -0,0 +1,235 @@
+use rand_core::OsRng;
+
+use group::{ff::Field, Group};
+use dalek_ff_group::EdwardsPoint;
+use pasta_curves::{Ep, Eq};
+
+use crate::{DivisorCurve, Poly, new_divisor};
+
+// Equation 4 in the security proofs
+fn check_divisor<C: DivisorCurve>(points: Vec<C>) {
+  // Create the divisor
+  let divisor = new_divisor::<C>(&points).unwrap();
+  let eval = |c| {
+    let (x, y) = C::to_xy(c).unwrap();
+    divisor.eval(x, y)
+  };
+
+  // Decide challenges
+  let c0 = C::random(&mut OsRng);
+  let c1 = C::random(&mut OsRng);
+  let c2 = -(c0 + c1);
+  let (slope, intercept) = crate::slope_intercept::<C>(c0, c1);
+
+  let mut rhs = <C as DivisorCurve>::FieldElement::ONE;
+  for point in points {
+    let (x, y) = C::to_xy(point).unwrap();
+    rhs *= intercept - (y - (slope * x));
+  }
+  assert_eq!(eval(c0) * eval(c1) * eval(c2), rhs);
+}
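A usage sketch of `check_divisor` (editorial, mirroring how the tests below call it): any multiset of points summing to the identity admits a divisor, and Equation 4 relates the divisor's evaluations at the three collinear challenges to a product over the interpolated points.

```rust
// Editorial sketch: new_divisor requires the points to sum to the identity.
let a = Ep::random(&mut OsRng);
let b = Ep::random(&mut OsRng);
check_divisor(vec![a, b, -(a + b)]);
```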
+
+fn test_divisor<C: DivisorCurve>() {
+  for i in 1 ..= 255 {
+    println!("Test iteration {i}");
+
+    // Select points
+    let mut points = vec![];
+    for _ in 0 .. i {
+      points.push(C::random(&mut OsRng));
+    }
+    points.push(-points.iter().sum::<C>());
+    println!("Points {}", points.len());
+
+    // Perform the original check
+    check_divisor(points.clone());
+
+    // Create the divisor
+    let divisor = new_divisor::<C>(&points).unwrap();
+
+    // For a divisor interpolating 256 points, as one does when interpreting a 255-bit discrete
+    // log with the result of its scalar multiplication against a fixed generator, the lengths of
+    // the yx/x coefficients shouldn't supersede the following bounds
+    assert!((divisor.yx_coefficients.first().unwrap_or(&vec![]).len()) <= 126);
+    assert!((divisor.x_coefficients.len() - 1) <= 127);
+    assert!(
+      (1 + divisor.yx_coefficients.first().unwrap_or(&vec![]).len() +
+        (divisor.x_coefficients.len() - 1) +
+        1) <=
+        255
+    );
+
+    // Decide challenges
+    let c0 = C::random(&mut OsRng);
+    let c1 = C::random(&mut OsRng);
+    let c2 = -(c0 + c1);
+    let (slope, intercept) = crate::slope_intercept::<C>(c0, c1);
+
+    // Perform the logarithmic derivative check
+    {
+      let dx_over_dz = {
+        let dx = Poly {
+          y_coefficients: vec![],
+          yx_coefficients: vec![],
+          x_coefficients: vec![C::FieldElement::ZERO, C::FieldElement::from(3)],
+          zero_coefficient: C::a(),
+        };
+
+        let dy = Poly {
+          y_coefficients: vec![C::FieldElement::from(2)],
+          yx_coefficients: vec![],
+          x_coefficients: vec![],
+          zero_coefficient: C::FieldElement::ZERO,
+        };
+
+        let dz = (dy.clone() * -slope) + &dx;
+
+        // We want dx/dz, and dz/dx is equal to dy/dx - slope
+        // Sagemath claims this, dy / dz, is the proper inverse
+        (dy, dz)
+      };
+
+      {
+        let sanity_eval = |c| {
+          let (x, y) = C::to_xy(c).unwrap();
+          dx_over_dz.0.eval(x, y) * dx_over_dz.1.eval(x, y).invert().unwrap()
+        };
+        let sanity = sanity_eval(c0) + sanity_eval(c1) + sanity_eval(c2);
+        // This verifies the dx/dz polynomial is correct
+        assert_eq!(sanity, C::FieldElement::ZERO);
+      }
+
+      // Logarithmic derivative check
+      let test = |divisor: Poly<_>| {
+        let (dx, dy) = divisor.differentiate();
+
+        let lhs = |c| {
+          let (x, y) = C::to_xy(c).unwrap();
+
+          let n_0 = (C::FieldElement::from(3) * (x * x)) + C::a();
+          let d_0 = (C::FieldElement::from(2) * y).invert().unwrap();
+          let p_0_n_0 = n_0 * d_0;
+
+          let n_1 = dy.eval(x, y);
+          let first = p_0_n_0 * n_1;
+
+          let second = dx.eval(x, y);
+
+          let d_1 = divisor.eval(x, y);
+
+          let fraction_1_n = first + second;
+          let fraction_1_d = d_1;
+
+          let fraction_2_n = dx_over_dz.0.eval(x, y);
+          let fraction_2_d = dx_over_dz.1.eval(x, y);
+
+          fraction_1_n * fraction_2_n * (fraction_1_d * fraction_2_d).invert().unwrap()
+        };
+        let lhs = lhs(c0) + lhs(c1) + lhs(c2);
+
+        let mut rhs = C::FieldElement::ZERO;
+        for point in &points {
+          let (x, y) = <C as DivisorCurve>::to_xy(*point).unwrap();
+          rhs += (intercept - (y - (slope * x))).invert().unwrap();
+        }
+
+        assert_eq!(lhs, rhs);
+      };
+      // Test the divisor and the divisor with a normalized x coefficient
+      test(divisor.clone());
+      test(divisor.normalize_x_coefficient());
+    }
+  }
+}
+
+fn test_same_point<C: DivisorCurve>() {
+  let mut points = vec![C::random(&mut OsRng)];
+  points.push(points[0]);
+  points.push(-points.iter().sum::<C>());
+  check_divisor(points);
+}
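An editorial note on the coefficient bounds asserted in `test_divisor` above, summed term by term:

```rust
// 1 (the y coefficient) + 126 (y x**i coefficients) + 127 (x**i coefficients past the
// normalized x**1) + 1 (the constant term) = 255, so the divisor for a 255-bit
// discrete log fits in 255 committed values.
assert_eq!(1 + 126 + 127 + 1, 255);
```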
+
+fn test_subset_sum_to_infinity<C: DivisorCurve>() {
+  // Internally, a binary tree algorithm is used
+  // This executes the first pass to end up with [0, 0] for further reductions
+  {
+    let mut points = vec![C::random(&mut OsRng)];
+    points.push(-points[0]);
+
+    let next = C::random(&mut OsRng);
+    points.push(next);
+    points.push(-next);
+    check_divisor(points);
+  }
+
+  // This executes the first pass to end up with [0, X, -X, 0]
+  {
+    let mut points = vec![C::random(&mut OsRng)];
+    points.push(-points[0]);
+
+    let x_1 = C::random(&mut OsRng);
+    let x_2 = C::random(&mut OsRng);
+    points.push(x_1);
+    points.push(x_2);
+
+    points.push(-x_1);
+    points.push(-x_2);
+
+    let next = C::random(&mut OsRng);
+    points.push(next);
+    points.push(-next);
+    check_divisor(points);
+  }
+}
+
+#[test]
+fn test_divisor_pallas() {
+  test_divisor::<Ep>();
+  test_same_point::<Ep>();
+  test_subset_sum_to_infinity::<Ep>();
+}
+
+#[test]
+fn test_divisor_vesta() {
+  test_divisor::<Eq>();
+  test_same_point::<Eq>();
+  test_subset_sum_to_infinity::<Eq>();
+}
+
+#[test]
+fn test_divisor_ed25519() {
+  // Since we're implementing Wei25519 ourselves, check the isomorphism works as expected
+  {
+    let incomplete_add = |p1, p2| {
+      let (x1, y1) = EdwardsPoint::to_xy(p1).unwrap();
+      let (x2, y2) = EdwardsPoint::to_xy(p2).unwrap();
+
+      // mmadd-1998-cmo
+      let u = y2 - y1;
+      let uu = u * u;
+      let v = x2 - x1;
+      let vv = v * v;
+      let vvv = v * vv;
+      let R = vv * x1;
+      let A = uu - vvv - R.double();
+      let x3 = v * A;
+      let y3 = (u * (R - A)) - (vvv * y1);
+      let z3 = vvv;
+
+      // Normalize from XYZ to XY
+      let x3 = x3 * z3.invert().unwrap();
+      let y3 = y3 * z3.invert().unwrap();
+
+      // Edwards addition -> Wei25519 coordinates should be equivalent to Wei25519 addition
+      assert_eq!(EdwardsPoint::to_xy(p1 + p2).unwrap(), (x3, y3));
+    };
+
+    for _ in 0 .. 256 {
+      incomplete_add(EdwardsPoint::random(&mut OsRng), EdwardsPoint::random(&mut OsRng));
+    }
+  }
+
+  test_divisor::<EdwardsPoint>();
+  test_same_point::<EdwardsPoint>();
+  test_subset_sum_to_infinity::<EdwardsPoint>();
+}
diff --git a/crypto/evrf/divisors/src/tests/poly.rs b/crypto/evrf/divisors/src/tests/poly.rs
new file mode 100644
index 00000000..c630a69e
--- /dev/null
+++ b/crypto/evrf/divisors/src/tests/poly.rs
@@ -0,0 +1,129 @@
+use rand_core::OsRng;
+
+use group::ff::Field;
+use pasta_curves::Ep;
+
+use crate::{DivisorCurve, Poly};
+
+type F = <Ep as DivisorCurve>::FieldElement;
+
+#[test]
+fn test_poly() {
+  let zero = F::ZERO;
+  let one = F::ONE;
+
+  {
+    let mut poly = Poly::zero();
+    poly.y_coefficients = vec![zero, one];
+
+    let mut modulus = Poly::zero();
+    modulus.y_coefficients = vec![one];
+    assert_eq!(poly % &modulus, Poly::zero());
+  }
+
+  {
+    let mut poly = Poly::zero();
+    poly.y_coefficients = vec![zero, one];
+
+    let mut squared = Poly::zero();
+    squared.y_coefficients = vec![zero, zero, zero, one];
+    assert_eq!(poly.clone() * poly.clone(), squared);
+  }
+
+  {
+    let mut a = Poly::zero();
+    a.zero_coefficient = F::from(2u64);
+
+    let mut b = Poly::zero();
+    b.zero_coefficient = F::from(3u64);
+
+    let mut res = Poly::zero();
+    res.zero_coefficient = F::from(6u64);
+    assert_eq!(a.clone() * b.clone(), res);
+
+    b.y_coefficients = vec![F::from(4u64)];
+    res.y_coefficients = vec![F::from(8u64)];
+    assert_eq!(a.clone() * b.clone(), res);
+    assert_eq!(b.clone() * a.clone(), res);
+
+    a.x_coefficients = vec![F::from(5u64)];
+    res.x_coefficients = vec![F::from(15u64)];
+    res.yx_coefficients = vec![vec![F::from(20u64)]];
+    assert_eq!(a.clone() * b.clone(), res);
+    assert_eq!(b * a.clone(), res);
+
+    // res is now 20xy + 8*y + 15*x + 6
+    // res ** 2 =
+    // 400*x^2*y^2 + 320*x*y^2 + 64*y^2 + 600*x^2*y + 480*x*y + 96*y + 225*x^2 + 180*x + 36
+
+    let mut squared = Poly::zero();
+    squared.y_coefficients = vec![F::from(96u64), F::from(64u64)];
+    squared.yx_coefficients =
+      vec![vec![F::from(480u64), F::from(600u64)], vec![F::from(320u64), F::from(400u64)]];
+    squared.x_coefficients = vec![F::from(180u64), F::from(225u64)];
+    squared.zero_coefficient = F::from(36u64);
+    assert_eq!(res.clone() * res, squared);
+  }
+}
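An editorial sketch of `mul_mod` (defined in poly.rs above), which composes the `%` behavior `test_poly` exercises: with a modulus whose leading term is y itself, y * y reduces to zero.

```rust
// Editorial sketch: y * y mod y == 0
let mut y = Poly::<F>::zero();
y.y_coefficients = vec![F::ONE];
assert_eq!(y.clone().mul_mod(y.clone(), &y), Poly::zero());
```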
+
+#[test]
+fn test_differentiation() {
+  let random = || F::random(&mut OsRng);
+
+  let input = Poly {
+    y_coefficients: vec![random()],
+    yx_coefficients: vec![vec![random()]],
+    x_coefficients: vec![random(), random(), random()],
+    zero_coefficient: random(),
+  };
+  let (diff_x, diff_y) = input.differentiate();
+  assert_eq!(
+    diff_x,
+    Poly {
+      y_coefficients: vec![input.yx_coefficients[0][0]],
+      yx_coefficients: vec![],
+      x_coefficients: vec![
+        F::from(2) * input.x_coefficients[1],
+        F::from(3) * input.x_coefficients[2]
+      ],
+      zero_coefficient: input.x_coefficients[0],
+    }
+  );
+  assert_eq!(
+    diff_y,
+    Poly {
+      y_coefficients: vec![],
+      yx_coefficients: vec![],
+      x_coefficients: vec![input.yx_coefficients[0][0]],
+      zero_coefficient: input.y_coefficients[0],
+    }
+  );
+
+  let input = Poly {
+    y_coefficients: vec![random()],
+    yx_coefficients: vec![vec![random(), random()]],
+    x_coefficients: vec![random(), random(), random(), random()],
+    zero_coefficient: random(),
+  };
+  let (diff_x, diff_y) = input.differentiate();
+  assert_eq!(
+    diff_x,
+    Poly {
+      y_coefficients: vec![input.yx_coefficients[0][0]],
+      yx_coefficients: vec![vec![F::from(2) * input.yx_coefficients[0][1]]],
+      x_coefficients: vec![
+        F::from(2) * input.x_coefficients[1],
+        F::from(3) * input.x_coefficients[2],
+        F::from(4) * input.x_coefficients[3],
+      ],
+      zero_coefficient: input.x_coefficients[0],
+    }
+  );
+  assert_eq!(
+    diff_y,
+    Poly {
+      y_coefficients: vec![],
+      yx_coefficients: vec![],
+      x_coefficients: vec![input.yx_coefficients[0][0], input.yx_coefficients[0][1]],
+      zero_coefficient: input.y_coefficients[0],
+    }
+  );
+}
diff --git a/crypto/evrf/ec-gadgets/Cargo.toml b/crypto/evrf/ec-gadgets/Cargo.toml
new file mode 100644
index 00000000..cbd35639
--- /dev/null
+++ b/crypto/evrf/ec-gadgets/Cargo.toml
@@ -0,0 +1,20 @@
+[package]
+name = "generalized-bulletproofs-ec-gadgets"
+version = "0.1.0"
+description = "Gadgets for working with an embedded Elliptic Curve in a Generalized Bulletproofs circuit"
+license = "MIT"
+repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/ec-gadgets"
+authors = ["Luke Parker <lukeparker5132@gmail.com>"]
+keywords = ["bulletproofs", "circuit", "divisors"]
+edition = "2021"
+
+[package.metadata.docs.rs]
+all-features = true
+rustdoc-args = ["--cfg", "docsrs"]
+
+[dependencies]
+generic-array = { version = "1", default-features = false, features = ["alloc"] }
+
+ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }
+
+generalized-bulletproofs-circuit-abstraction = { path = "../circuit-abstraction" }
diff --git a/crypto/evrf/ec-gadgets/LICENSE b/crypto/evrf/ec-gadgets/LICENSE
new file mode 100644
index 00000000..659881f1
--- /dev/null
+++ b/crypto/evrf/ec-gadgets/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2024 Luke Parker
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/crypto/evrf/ec-gadgets/README.md b/crypto/evrf/ec-gadgets/README.md
new file mode 100644
index 00000000..95149d93
--- /dev/null
+++ b/crypto/evrf/ec-gadgets/README.md
@@ -0,0 +1,3 @@
+# Generalized Bulletproofs EC Gadgets
+
+Gadgets for working with an embedded elliptic curve in a `generalized-bulletproofs` circuit.
diff --git a/crypto/evrf/ec-gadgets/src/dlog.rs b/crypto/evrf/ec-gadgets/src/dlog.rs
new file mode 100644
index 00000000..ef4b8c83
--- /dev/null
+++ b/crypto/evrf/ec-gadgets/src/dlog.rs
@@ -0,0 +1,529 @@
+use core::fmt;
+
+use ciphersuite::{
+  group::ff::{Field, PrimeField, BatchInverter},
+  Ciphersuite,
+};
+
+use generalized_bulletproofs_circuit_abstraction::*;
+
+use crate::*;
+
+/// Parameters for a discrete logarithm proof.
+///
+/// This isn't required to be implemented by the Field/Group/Ciphersuite, solely a struct, to
+/// enable parameterization of discrete log proofs to the bitlength of the discrete logarithm.
+/// While that may be F::NUM_BITS, a discrete log proof for a full scalar, it could also be 64,
+/// a discrete log proof for a u64 (such as if opening a Pedersen commitment in-circuit).
+pub trait DiscreteLogParameters {
+  /// The amount of bits used to represent a scalar.
+  type ScalarBits: ArrayLength;
+
+  /// The amount of x**i coefficients in a divisor.
+  ///
+  /// This is the amount of points in a divisor (the amount of bits in a scalar, plus one)
+  /// divided by two.
+  type XCoefficients: ArrayLength;
+
+  /// The amount of x**i coefficients in a divisor, minus one.
+  type XCoefficientsMinusOne: ArrayLength;
+
+  /// The amount of y x**i coefficients in a divisor.
+  ///
+  /// This is the amount of points in a divisor (the amount of bits in a scalar, plus one) plus
+  /// one, divided by two, minus two.
+  type YxCoefficients: ArrayLength;
+}
+
+/// A tabled generator for proving/verifying discrete logarithm claims.
+#[derive(Clone)]
+pub struct GeneratorTable<F: PrimeField, Parameters: DiscreteLogParameters>(
+  GenericArray<(F, F), Parameters::ScalarBits>,
+);
+
+impl<F: PrimeField, Parameters: DiscreteLogParameters> fmt::Debug
+  for GeneratorTable<F, Parameters>
+{
+  fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
+    fmt
+      .debug_struct("GeneratorTable")
+      .field("x", &self.0[0].0)
+      .field("y", &self.0[0].1)
+      .finish_non_exhaustive()
+  }
+}
+
+impl<F: PrimeField, Parameters: DiscreteLogParameters> GeneratorTable<F, Parameters> {
+  /// Create a new table for this generator.
+  ///
+  /// The generator is assumed to be well-formed and on-curve. This function may panic if it's
+  /// not.
+  pub fn new(curve: &CurveSpec<F>, generator_x: F, generator_y: F) -> Self {
+    // mdbl-2007-bl
+    fn dbl<F: PrimeField>(a: F, x1: F, y1: F) -> (F, F) {
+      let xx = x1 * x1;
+      let w = a + (xx + xx.double());
+      let y1y1 = y1 * y1;
+      let r = y1y1 + y1y1;
+      let sss = (y1 * r).double().double();
+      let rr = r * r;
+
+      let b = x1 + r;
+      let b = (b * b) - xx - rr;
+
+      let h = (w * w) - b.double();
+      let x3 = h.double() * y1;
+      let y3 = (w * (b - h)) - rr.double();
+      let z3 = sss;
+
+      // Normalize from XYZ to XY
+      let z3_inv = z3.invert().unwrap();
+      let x3 = x3 * z3_inv;
+      let y3 = y3 * z3_inv;
+
+      (x3, y3)
+    }
+
+    let mut res = Self(GenericArray::default());
+    res.0[0] = (generator_x, generator_y);
+    for i in 1 ..
Parameters::ScalarBits::USIZE { + let last = res.0[i - 1]; + res.0[i] = dbl(curve.a, last.0, last.1); + } + + res + } +} + +/// A representation of the divisor. +/// +/// The coefficient for x**1 is explicitly excluded as it's expected to be normalized to 1. +#[derive(Clone)] +pub struct Divisor { + /// The coefficient for the `y` term of the divisor. + /// + /// There is never more than one `y**i x**0` coefficient as the leading term of the modulus is + /// `y**2`. It's assumed the coefficient is non-zero (and present) as it will be for any divisor + /// exceeding trivial complexity. + pub y: Variable, + /// The coefficients for the `y**1 x**i` terms of the polynomial. + // This subtraction enforces the divisor to have at least 4 points which is acceptable. + // TODO: Double check these constants + pub yx: GenericArray, + /// The coefficients for the `x**i` terms of the polynomial, skipping x**1. + /// + /// x**1 is skipped as it's expected to be normalized to 1, and therefore constant, in order to + /// ensure the divisor is non-zero (as necessary for the proof to be complete). + // Subtract 1 from the length due to skipping the coefficient for x**1 + pub x_from_power_of_2: GenericArray, + /// The constant term in the polynomial (alternatively, the coefficient for y**0 x**0). + pub zero: Variable, +} + +/// A point, its discrete logarithm, and the divisor to prove it. +#[derive(Clone)] +pub struct PointWithDlog { + /// The point which is supposedly the result of scaling the generator by the discrete logarithm. + pub point: (Variable, Variable), + /// The discrete logarithm, represented as coefficients of a polynomial of 2**i. + pub dlog: GenericArray, + /// The divisor interpolating the relevant doublings of generator with the inverse of the point. + pub divisor: Divisor, +} + +/// A struct containing a point used for the evaluation of a divisor. +/// +/// Preprocesses and caches as much of the calculation as possible to minimize work upon reuse of +/// challenge points. +struct ChallengePoint { + y: F, + yx: GenericArray, + x: GenericArray, + p_0_n_0: F, + x_p_0_n_0: GenericArray, + p_1_n: F, + p_1_d: F, +} + +impl ChallengePoint { + fn new( + curve: &CurveSpec, + // The slope between all of the challenge points + slope: F, + // The x and y coordinates + x: F, + y: F, + // The inversion of twice the y coordinate + // We accept this as an argument so that the caller can calculcate these with a batch inversion + inv_two_y: F, + ) -> Self { + // Powers of x, skipping x**0 + let divisor_x_len = Parameters::XCoefficients::USIZE; + let mut x_pows = GenericArray::default(); + x_pows[0] = x; + for i in 1 .. divisor_x_len { + let last = x_pows[i - 1]; + x_pows[i] = last * x; + } + + // Powers of x multiplied by y + let divisor_yx_len = Parameters::YxCoefficients::USIZE; + let mut yx = GenericArray::default(); + // Skips x**0 + yx[0] = y * x; + for i in 1 .. 
divisor_yx_len { + let last = yx[i - 1]; + yx[i] = last * x; + } + + let x_sq = x.square(); + let three_x_sq = x_sq.double() + x_sq; + let three_x_sq_plus_a = three_x_sq + curve.a; + let two_y = y.double(); + + // p_0_n_0 from `DivisorChallenge` + let p_0_n_0 = three_x_sq_plus_a * inv_two_y; + let mut x_p_0_n_0 = GenericArray::default(); + // Since this iterates over x, which skips x**0, this also skips p_0_n_0 x**0 + for (i, x) in x_pows.iter().take(divisor_yx_len).enumerate() { + x_p_0_n_0[i] = p_0_n_0 * x; + } + + // p_1_n from `DivisorChallenge` + let p_1_n = two_y; + // p_1_d from `DivisorChallenge` + let p_1_d = (-slope * p_1_n) + three_x_sq_plus_a; + + ChallengePoint { x: x_pows, y, yx, p_0_n_0, x_p_0_n_0, p_1_n, p_1_d } + } +} + +// `DivisorChallenge` from the section `Discrete Log Proof` +fn divisor_challenge_eval( + circuit: &mut Circuit, + divisor: &Divisor, + challenge: &ChallengePoint, +) -> Variable { + // The evaluation of the divisor differentiated by y, further multiplied by p_0_n_0 + // Differentation drops everything without a y coefficient, and drops what remains by a power + // of y + // (y**1 -> y**0, yx**i -> x**i) + // This aligns with p_0_n_1 from `DivisorChallenge` + let p_0_n_1 = { + let mut p_0_n_1 = LinComb::empty().term(challenge.p_0_n_0, divisor.y); + for (j, var) in divisor.yx.iter().enumerate() { + // This does not raise by `j + 1` as x_p_0_n_0 omits x**0 + p_0_n_1 = p_0_n_1.term(challenge.x_p_0_n_0[j], *var); + } + p_0_n_1 + }; + + // The evaluation of the divisor differentiated by x + // This aligns with p_0_n_2 from `DivisorChallenge` + let p_0_n_2 = { + // The coefficient for x**1 is 1, so 1 becomes the new zero coefficient + let mut p_0_n_2 = LinComb::empty().constant(C::F::ONE); + + // Handle the new y coefficient + p_0_n_2 = p_0_n_2.term(challenge.y, divisor.yx[0]); + + // Handle the new yx coefficients + for (j, yx) in divisor.yx.iter().enumerate().skip(1) { + // For the power which was shifted down, we multiply this coefficient + // 3 x**2 -> 2 * 3 x**1 + let original_power_of_x = C::F::from(u64::try_from(j + 1).unwrap()); + // `j - 1` so `j = 1` indexes yx[0] as yx[0] is the y x**1 + // (yx omits y x**0) + let this_weight = original_power_of_x * challenge.yx[j - 1]; + p_0_n_2 = p_0_n_2.term(this_weight, *yx); + } + + // Handle the x coefficients + // We don't skip the first one as `x_from_power_of_2` already omits x**1 + for (i, x) in divisor.x_from_power_of_2.iter().enumerate() { + // i + 2 as the paper expects i to start from 1 and be + 1, yet we start from 0 + let original_power_of_x = C::F::from(u64::try_from(i + 2).unwrap()); + // Still x[i] as x[0] is x**1 + let this_weight = original_power_of_x * challenge.x[i]; + + p_0_n_2 = p_0_n_2.term(this_weight, *x); + } + + p_0_n_2 + }; + + // p_0_n from `DivisorChallenge` + let p_0_n = p_0_n_1 + &p_0_n_2; + + // Evaluation of the divisor + // p_0_d from `DivisorChallenge` + let p_0_d = { + let mut p_0_d = LinComb::empty().term(challenge.y, divisor.y); + + for (var, c_yx) in divisor.yx.iter().zip(&challenge.yx) { + p_0_d = p_0_d.term(*c_yx, *var); + } + + for (i, var) in divisor.x_from_power_of_2.iter().enumerate() { + // This `i+1` is preserved, despite most not being as x omits x**0, as this assumes we + // start with `i=1` + p_0_d = p_0_d.term(challenge.x[i + 1], *var); + } + + // Adding x effectively adds a `1 x` term, ensuring the divisor isn't 0 + p_0_d.term(C::F::ONE, divisor.zero).constant(challenge.x[0]) + }; + + // Calculate the joint numerator + // p_n from `DivisorChallenge` + let p_n = 
p_0_n * challenge.p_1_n; + // Calculate the joint denominator + // p_d from `DivisorChallenge` + let p_d = p_0_d * challenge.p_1_d; + + // We want `n / d = o` + // `n / d = o` == `n = d * o` + // These are safe unwraps as they're solely done by the prover and should always be non-zero + let witness = + circuit.eval(&p_d).map(|p_d| (p_d, circuit.eval(&p_n).unwrap() * p_d.invert().unwrap())); + let (_l, o, n_claim) = circuit.mul(Some(p_d), None, witness); + circuit.equality(p_n, &n_claim.into()); + o +} + +/// A challenge to evaluate divisors with. +/// +/// This challenge must be sampled after writing the commitments to the transcript. This challenge +/// is reusable across various divisors. +pub struct DiscreteLogChallenge { + c0: ChallengePoint, + c1: ChallengePoint, + c2: ChallengePoint, + slope: F, + intercept: F, +} + +/// A generator which has been challenged and is ready for use in evaluating discrete logarithm +/// claims. +pub struct ChallengedGenerator( + GenericArray, +); + +/// Gadgets for proving the discrete logarithm of points on an elliptic curve defined over the +/// scalar field of the curve of the Bulletproof. +pub trait EcDlogGadgets { + /// Sample a challenge for a series of discrete logarithm claims. + /// + /// This must be called after writing the commitments to the transcript. + /// + /// The generators are assumed to be non-empty. They are not transcripted. If your generators are + /// dynamic, they must be properly transcripted into the context. + /// + /// May panic/have undefined behavior if an assumption is broken. + #[allow(clippy::type_complexity)] + fn discrete_log_challenge( + &self, + transcript: &mut T, + curve: &CurveSpec, + generators: &[GeneratorTable], + ) -> (DiscreteLogChallenge, Vec>); + + /// Prove this point has the specified discrete logarithm over the specified generator. + /// + /// The discrete logarithm is not validated to be in a canonical form. The only guarantee made on + /// it is that it's a consistent representation of _a_ discrete logarithm (reuse won't enable + /// re-interpretation as a distinct discrete logarithm). + /// + /// This does ensure the point is on-curve. + /// + /// This MUST only be called with `Variable`s present within commitments. + /// + /// May panic/have undefined behavior if an assumption is broken, or if passed an invalid + /// witness. 
+ fn discrete_log( + &mut self, + curve: &CurveSpec, + point: PointWithDlog, + challenge: &DiscreteLogChallenge, + challenged_generator: &ChallengedGenerator, + ) -> OnCurve; +} + +impl EcDlogGadgets for Circuit { + // This is part of `DiscreteLog` from `Discrete Log Proof`, specifically, the challenges and + // the calculations dependent solely on them + fn discrete_log_challenge( + &self, + transcript: &mut T, + curve: &CurveSpec, + generators: &[GeneratorTable], + ) -> (DiscreteLogChallenge, Vec>) { + // Get the challenge points + // TODO: Implement a proper hash to curve + let (c0_x, c0_y) = loop { + let c0_x: C::F = transcript.challenge(); + let Some(c0_y) = + Option::::from(((c0_x.square() * c0_x) + (curve.a * c0_x) + curve.b).sqrt()) + else { + continue; + }; + // Takes the even y coordinate as to not be dependent on whatever root the above sqrt + // happens to returns + // TODO: Randomly select which to take + break (c0_x, if bool::from(c0_y.is_odd()) { -c0_y } else { c0_y }); + }; + let (c1_x, c1_y) = loop { + let c1_x: C::F = transcript.challenge(); + let Some(c1_y) = + Option::::from(((c1_x.square() * c1_x) + (curve.a * c1_x) + curve.b).sqrt()) + else { + continue; + }; + break (c1_x, if bool::from(c1_y.is_odd()) { -c1_y } else { c1_y }); + }; + + // mmadd-1998-cmo + fn incomplete_add(x1: F, y1: F, x2: F, y2: F) -> Option<(F, F)> { + if x1 == x2 { + None? + } + + let u = y2 - y1; + let uu = u * u; + let v = x2 - x1; + let vv = v * v; + let vvv = v * vv; + let r = vv * x1; + let a = uu - vvv - r.double(); + let x3 = v * a; + let y3 = (u * (r - a)) - (vvv * y1); + let z3 = vvv; + + // Normalize from XYZ to XY + let z3_inv = Option::::from(z3.invert())?; + let x3 = x3 * z3_inv; + let y3 = y3 * z3_inv; + + Some((x3, y3)) + } + + let (c2_x, c2_y) = incomplete_add::(c0_x, c0_y, c1_x, c1_y) + .expect("randomly selected points shared an x coordinate"); + // We want C0, C1, C2 = -(C0 + C1) + let c2_y = -c2_y; + + // Calculate the slope and intercept + // Safe invert as these x coordinates must be distinct due to passing the above incomplete_add + let slope = (c1_y - c0_y) * (c1_x - c0_x).invert().unwrap(); + let intercept = c0_y - (slope * c0_x); + + // Calculate the inversions for 2 c_y (for each c) and all of the challenged generators + let mut inversions = vec![C::F::ZERO; 3 + (generators.len() * Parameters::ScalarBits::USIZE)]; + + // Needed for the left-hand side eval + { + inversions[0] = c0_y.double(); + inversions[1] = c1_y.double(); + inversions[2] = c2_y.double(); + } + + // Perform the inversions for the generators + for (i, generator) in generators.iter().enumerate() { + // Needed for the right-hand side eval + for (j, generator) in generator.0.iter().enumerate() { + // `DiscreteLog` has weights of `(mu - (G_i.y + (slope * G_i.x)))**-1` in its last line + inversions[3 + (i * Parameters::ScalarBits::USIZE) + j] = + intercept - (generator.1 - (slope * generator.0)); + } + } + for challenge_inversion in &inversions { + // This should be unreachable barring negligible probability + if challenge_inversion.is_zero().into() { + panic!("trying to invert 0"); + } + } + let mut scratch = vec![C::F::ZERO; inversions.len()]; + let _ = BatchInverter::invert_with_external_scratch(&mut inversions, &mut scratch); + + let mut inversions = inversions.into_iter(); + let inv_c0_two_y = inversions.next().unwrap(); + let inv_c1_two_y = inversions.next().unwrap(); + let inv_c2_two_y = inversions.next().unwrap(); + + let c0 = ChallengePoint::new(curve, slope, c0_x, c0_y, inv_c0_two_y); + let c1 = 
ChallengePoint::new(curve, slope, c1_x, c1_y, inv_c1_two_y); + let c2 = ChallengePoint::new(curve, slope, c2_x, c2_y, inv_c2_two_y); + + // Fill in the inverted values + let mut challenged_generators = Vec::with_capacity(generators.len()); + for _ in 0 .. generators.len() { + let mut challenged_generator = GenericArray::default(); + for i in 0 .. Parameters::ScalarBits::USIZE { + challenged_generator[i] = inversions.next().unwrap(); + } + challenged_generators.push(ChallengedGenerator(challenged_generator)); + } + + (DiscreteLogChallenge { c0, c1, c2, slope, intercept }, challenged_generators) + } + + // `DiscreteLog` from `Discrete Log Proof` + fn discrete_log( + &mut self, + curve: &CurveSpec, + point: PointWithDlog, + challenge: &DiscreteLogChallenge, + challenged_generator: &ChallengedGenerator, + ) -> OnCurve { + let PointWithDlog { divisor, dlog, point } = point; + + // Ensure this is being safely called + let arg_iter = [point.0, point.1, divisor.y, divisor.zero]; + let arg_iter = arg_iter.iter().chain(divisor.yx.iter()); + let arg_iter = arg_iter.chain(divisor.x_from_power_of_2.iter()); + let arg_iter = arg_iter.chain(dlog.iter()); + for variable in arg_iter { + debug_assert!( + matches!(variable, Variable::CG { .. } | Variable::CH { .. } | Variable::V(_)), + "discrete log proofs requires all arguments belong to commitments", + ); + } + + // Check the point is on curve + let point = self.on_curve(curve, point); + + // The challenge has already been sampled so those lines aren't necessary + + // lhs from the paper, evaluating the divisor + let lhs_eval = LinComb::from(divisor_challenge_eval(self, &divisor, &challenge.c0)) + + &LinComb::from(divisor_challenge_eval(self, &divisor, &challenge.c1)) + + &LinComb::from(divisor_challenge_eval(self, &divisor, &challenge.c2)); + + // Interpolate the doublings of the generator + let mut rhs_eval = LinComb::empty(); + // We call this `bit` yet it's not constrained to being a bit + // It's presumed to be yet may be malleated + for (bit, weight) in dlog.into_iter().zip(&challenged_generator.0) { + rhs_eval = rhs_eval.term(*weight, bit); + } + + // Interpolate the output point + // intercept - (y - (slope * x)) + // intercept - y + (slope * x) + // -y + (slope * x) + intercept + // EXCEPT the output point we're proving the discrete log for isn't the one interpolated + // Its negative is, so -y becomes y + // y + (slope * x) + intercept + let output_interpolation = LinComb::empty() + .constant(challenge.intercept) + .term(C::F::ONE, point.y) + .term(challenge.slope, point.x); + let output_interpolation_eval = self.eval(&output_interpolation); + let (_output_interpolation, inverse) = + self.inverse(Some(output_interpolation), output_interpolation_eval); + rhs_eval = rhs_eval.term(C::F::ONE, inverse); + + self.equality(lhs_eval, &rhs_eval); + + point + } +} diff --git a/crypto/evrf/ec-gadgets/src/lib.rs b/crypto/evrf/ec-gadgets/src/lib.rs new file mode 100644 index 00000000..463eedd6 --- /dev/null +++ b/crypto/evrf/ec-gadgets/src/lib.rs @@ -0,0 +1,130 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] +#![allow(non_snake_case)] + +use generic_array::{typenum::Unsigned, ArrayLength, GenericArray}; + +use ciphersuite::{group::ff::Field, Ciphersuite}; + +use generalized_bulletproofs_circuit_abstraction::*; + +mod dlog; +pub use dlog::*; + +/// The specification of a short Weierstrass curve over the field `F`. 
+/// +/// The short Weierstrass curve is defined via the formula `y**2 = x**3 + a*x + b`. +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub struct CurveSpec { + /// The `a` constant in the curve formula. + pub a: F, + /// The `b` constant in the curve formula. + pub b: F, +} + +/// A struct for a point on a towered curve which has been confirmed to be on-curve. +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub struct OnCurve { + pub(crate) x: Variable, + pub(crate) y: Variable, +} + +impl OnCurve { + /// The variable for the x-coordinate. + pub fn x(&self) -> Variable { + self.x + } + /// The variable for the y-coordinate. + pub fn y(&self) -> Variable { + self.y + } +} + +/// Gadgets for working with points on an elliptic curve defined over the scalar field of the curve +/// of the Bulletproof. +pub trait EcGadgets { + /// Constrain an x and y coordinate as being on the specified curve. + /// + /// The specified curve is defined over the scalar field of the curve this proof is performed + /// over, offering efficient arithmetic. + /// + /// May panic if the prover and the point is not actually on-curve. + fn on_curve(&mut self, curve: &CurveSpec, point: (Variable, Variable)) -> OnCurve; + + /// Perform incomplete addition for a fixed point and an on-curve point. + /// + /// `a` is the x and y coordinates of the fixed point, assumed to be on-curve. + /// + /// `b` is a point prior checked to be on-curve. + /// + /// `c` is a point prior checked to be on-curve, constrained to be the sum of `a` and `b`. + /// + /// `a` and `b` are checked to have distinct x coordinates. + /// + /// This function may panic if `a` is malformed or if the prover and `c` is not actually the sum + /// of `a` and `b`. + fn incomplete_add_fixed(&mut self, a: (C::F, C::F), b: OnCurve, c: OnCurve) -> OnCurve; +} + +impl EcGadgets for Circuit { + fn on_curve(&mut self, curve: &CurveSpec, (x, y): (Variable, Variable)) -> OnCurve { + let x_eval = self.eval(&LinComb::from(x)); + let (_x, _x_2, x2) = + self.mul(Some(LinComb::from(x)), Some(LinComb::from(x)), x_eval.map(|x| (x, x))); + let (_x, _x_2, x3) = + self.mul(Some(LinComb::from(x2)), Some(LinComb::from(x)), x_eval.map(|x| (x * x, x))); + let expected_y2 = LinComb::from(x3).term(curve.a, x).constant(curve.b); + + let y_eval = self.eval(&LinComb::from(y)); + let (_y, _y_2, y2) = + self.mul(Some(LinComb::from(y)), Some(LinComb::from(y)), y_eval.map(|y| (y, y))); + + self.equality(y2.into(), &expected_y2); + + OnCurve { x, y } + } + + fn incomplete_add_fixed(&mut self, a: (C::F, C::F), b: OnCurve, c: OnCurve) -> OnCurve { + // Check b.x != a.0 + { + let bx_lincomb = LinComb::from(b.x); + let bx_eval = self.eval(&bx_lincomb); + self.inequality(bx_lincomb, &LinComb::empty().constant(a.0), bx_eval.map(|bx| (bx, a.0))); + } + + let (x0, y0) = (a.0, a.1); + let (x1, y1) = (b.x, b.y); + let (x2, y2) = (c.x, c.y); + + let slope_eval = self.eval(&LinComb::from(x1)).map(|x1| { + let y1 = self.eval(&LinComb::from(b.y)).unwrap(); + + (y1 - y0) * (x1 - x0).invert().unwrap() + }); + + // slope * (x1 - x0) = y1 - y0 + let x1_minus_x0 = LinComb::from(x1).constant(-x0); + let x1_minus_x0_eval = self.eval(&x1_minus_x0); + let (slope, _r, o) = + self.mul(None, Some(x1_minus_x0), slope_eval.map(|slope| (slope, x1_minus_x0_eval.unwrap()))); + self.equality(LinComb::from(o), &LinComb::from(y1).constant(-y0)); + + // slope * (x2 - x0) = -y2 - y0 + let x2_minus_x0 = LinComb::from(x2).constant(-x0); + let x2_minus_x0_eval = self.eval(&x2_minus_x0); + let (_slope, _x2_minus_x0, o) = self.mul( 
+ Some(slope.into()), + Some(x2_minus_x0), + slope_eval.map(|slope| (slope, x2_minus_x0_eval.unwrap())), + ); + self.equality(o.into(), &LinComb::empty().term(-C::F::ONE, y2).constant(-y0)); + + // slope * slope = x0 + x1 + x2 + let (_slope, _slope_2, o) = + self.mul(Some(slope.into()), Some(slope.into()), slope_eval.map(|slope| (slope, slope))); + self.equality(o.into(), &LinComb::from(x1).term(C::F::ONE, x2).constant(x0)); + + OnCurve { x: x2, y: y2 } + } +} diff --git a/crypto/evrf/embedwards25519/Cargo.toml b/crypto/evrf/embedwards25519/Cargo.toml new file mode 100644 index 00000000..bbae482b --- /dev/null +++ b/crypto/evrf/embedwards25519/Cargo.toml @@ -0,0 +1,39 @@ +[package] +name = "embedwards25519" +version = "0.1.0" +description = "A curve defined over the Ed25519 scalar field" +license = "MIT" +repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/embedwards25519" +authors = ["Luke Parker "] +keywords = ["curve25519", "ed25519", "ristretto255", "group"] +edition = "2021" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[dependencies] +rustversion = "1" +hex-literal = { version = "0.4", default-features = false } + +rand_core = { version = "0.6", default-features = false, features = ["std"] } + +zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] } +subtle = { version = "^2.4", default-features = false, features = ["std"] } + +generic-array = { version = "0.14", default-features = false } +crypto-bigint = { version = "0.5", default-features = false, features = ["zeroize"] } + +dalek-ff-group = { path = "../../dalek-ff-group", version = "0.4", default-features = false } + +blake2 = { version = "0.10", default-features = false, features = ["std"] } +ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] } +ec-divisors = { path = "../divisors" } +generalized-bulletproofs-ec-gadgets = { path = "../ec-gadgets" } + +[dev-dependencies] +hex = "0.4" + +rand_core = { version = "0.6", features = ["std"] } + +ff-group-tests = { path = "../../ff-group-tests" } diff --git a/crypto/evrf/embedwards25519/LICENSE b/crypto/evrf/embedwards25519/LICENSE new file mode 100644 index 00000000..91d893c1 --- /dev/null +++ b/crypto/evrf/embedwards25519/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2022-2024 Luke Parker + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
diff --git a/crypto/evrf/embedwards25519/README.md b/crypto/evrf/embedwards25519/README.md new file mode 100644 index 00000000..5f7f5e47 --- /dev/null +++ b/crypto/evrf/embedwards25519/README.md @@ -0,0 +1,21 @@ +# embedwards25519 + +A curve defined over the Ed25519 scalar field. + +This curve was found via +[tevador's script](https://gist.github.com/tevador/4524c2092178df08996487d4e272b096) +for finding curves (specifically, curve cycles), modified to search for curves +whose field is the Ed25519 scalar field (not the Ed25519 field). + +``` +p = 0x1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed +q = 0x0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96b +D = -420435 +y^2 = x^3 - 3*x + 4188043517836764736459661287169077812555441231147410753119540549773825148767 +``` + +The embedding degree is `(q-1)/2`. + +This curve should not be used with single-coordinate ladders, and points should +always be represented in a compressed form (preventing receiving off-curve +points). diff --git a/crypto/evrf/embedwards25519/src/backend.rs b/crypto/evrf/embedwards25519/src/backend.rs new file mode 100644 index 00000000..304fa0bc --- /dev/null +++ b/crypto/evrf/embedwards25519/src/backend.rs @@ -0,0 +1,293 @@ +use zeroize::Zeroize; + +// Use black_box when possible +#[rustversion::since(1.66)] +use core::hint::black_box; +#[rustversion::before(1.66)] +fn black_box(val: T) -> T { + val +} + +pub(crate) fn u8_from_bool(bit_ref: &mut bool) -> u8 { + let bit_ref = black_box(bit_ref); + + let mut bit = black_box(*bit_ref); + let res = black_box(bit as u8); + bit.zeroize(); + debug_assert!((res | 1) == 1); + + bit_ref.zeroize(); + res +} + +macro_rules! math_op { + ( + $Value: ident, + $Other: ident, + $Op: ident, + $op_fn: ident, + $Assign: ident, + $assign_fn: ident, + $function: expr + ) => { + impl $Op<$Other> for $Value { + type Output = $Value; + fn $op_fn(self, other: $Other) -> Self::Output { + Self($function(self.0, other.0)) + } + } + impl $Assign<$Other> for $Value { + fn $assign_fn(&mut self, other: $Other) { + self.0 = $function(self.0, other.0); + } + } + impl<'a> $Op<&'a $Other> for $Value { + type Output = $Value; + fn $op_fn(self, other: &'a $Other) -> Self::Output { + Self($function(self.0, other.0)) + } + } + impl<'a> $Assign<&'a $Other> for $Value { + fn $assign_fn(&mut self, other: &'a $Other) { + self.0 = $function(self.0, other.0); + } + } + }; +} + +macro_rules! from_wrapper { + ($wrapper: ident, $inner: ident, $uint: ident) => { + impl From<$uint> for $wrapper { + fn from(a: $uint) -> $wrapper { + Self(Residue::new(&$inner::from(a))) + } + } + }; +} + +macro_rules! field { + ( + $FieldName: ident, + $ResidueType: ident, + + $MODULUS_STR: ident, + $MODULUS: ident, + $WIDE_MODULUS: ident, + + $NUM_BITS: literal, + $MULTIPLICATIVE_GENERATOR: literal, + $S: literal, + $ROOT_OF_UNITY: literal, + $DELTA: literal, + ) => { + use core::{ + ops::{DerefMut, Add, AddAssign, Neg, Sub, SubAssign, Mul, MulAssign}, + iter::{Sum, Product}, + }; + + use subtle::{Choice, CtOption, ConstantTimeEq, ConstantTimeLess, ConditionallySelectable}; + use rand_core::RngCore; + + use crypto_bigint::{Integer, NonZero, Encoding, impl_modulus}; + + use ciphersuite::group::ff::{ + Field, PrimeField, FieldBits, PrimeFieldBits, helpers::sqrt_ratio_generic, + }; + + use $crate::backend::u8_from_bool; + + fn reduce(x: U512) -> U256 { + U256::from_le_slice(&x.rem(&NonZero::new($WIDE_MODULUS).unwrap()).to_le_bytes()[.. 
32]) + } + + impl ConstantTimeEq for $FieldName { + fn ct_eq(&self, other: &Self) -> Choice { + self.0.ct_eq(&other.0) + } + } + + impl ConditionallySelectable for $FieldName { + fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self { + $FieldName(Residue::conditional_select(&a.0, &b.0, choice)) + } + } + + math_op!($FieldName, $FieldName, Add, add, AddAssign, add_assign, |x: $ResidueType, y| x + .add(&y)); + math_op!($FieldName, $FieldName, Sub, sub, SubAssign, sub_assign, |x: $ResidueType, y| x + .sub(&y)); + math_op!($FieldName, $FieldName, Mul, mul, MulAssign, mul_assign, |x: $ResidueType, y| x + .mul(&y)); + + from_wrapper!($FieldName, U256, u8); + from_wrapper!($FieldName, U256, u16); + from_wrapper!($FieldName, U256, u32); + from_wrapper!($FieldName, U256, u64); + from_wrapper!($FieldName, U256, u128); + + impl Neg for $FieldName { + type Output = $FieldName; + fn neg(self) -> $FieldName { + Self(self.0.neg()) + } + } + + impl<'a> Neg for &'a $FieldName { + type Output = $FieldName; + fn neg(self) -> Self::Output { + (*self).neg() + } + } + + impl $FieldName { + /// Perform an exponentation. + pub fn pow(&self, other: $FieldName) -> $FieldName { + let mut table = [Self(Residue::ONE); 16]; + table[1] = *self; + for i in 2 .. 16 { + table[i] = table[i - 1] * self; + } + + let mut res = Self(Residue::ONE); + let mut bits = 0; + for (i, mut bit) in other.to_le_bits().iter_mut().rev().enumerate() { + bits <<= 1; + let mut bit = u8_from_bool(bit.deref_mut()); + bits |= bit; + bit.zeroize(); + + if ((i + 1) % 4) == 0 { + if i != 3 { + for _ in 0 .. 4 { + res *= res; + } + } + + let mut factor = table[0]; + for (j, candidate) in table[1 ..].iter().enumerate() { + let j = j + 1; + factor = Self::conditional_select(&factor, &candidate, usize::from(bits).ct_eq(&j)); + } + res *= factor; + bits = 0; + } + } + res + } + } + + impl Field for $FieldName { + const ZERO: Self = Self(Residue::ZERO); + const ONE: Self = Self(Residue::ONE); + + fn random(mut rng: impl RngCore) -> Self { + let mut bytes = [0; 64]; + rng.fill_bytes(&mut bytes); + $FieldName(Residue::new(&reduce(U512::from_le_slice(bytes.as_ref())))) + } + + fn square(&self) -> Self { + Self(self.0.square()) + } + fn double(&self) -> Self { + *self + self + } + + fn invert(&self) -> CtOption { + let res = self.0.invert(); + CtOption::new(Self(res.0), res.1.into()) + } + + fn sqrt(&self) -> CtOption { + // (p + 1) // 4, as valid since p % 4 == 3 + let mod_plus_one_div_four = $MODULUS.saturating_add(&U256::ONE).wrapping_div(&(4u8.into())); + let res = self.pow(Self($ResidueType::new_checked(&mod_plus_one_div_four).unwrap())); + CtOption::new(res, res.square().ct_eq(self)) + } + + fn sqrt_ratio(num: &Self, div: &Self) -> (Choice, Self) { + sqrt_ratio_generic(num, div) + } + } + + impl PrimeField for $FieldName { + type Repr = [u8; 32]; + + const MODULUS: &'static str = $MODULUS_STR; + + const NUM_BITS: u32 = $NUM_BITS; + const CAPACITY: u32 = $NUM_BITS - 1; + + const TWO_INV: Self = $FieldName($ResidueType::new(&U256::from_u8(2)).invert().0); + + const MULTIPLICATIVE_GENERATOR: Self = + Self(Residue::new(&U256::from_u8($MULTIPLICATIVE_GENERATOR))); + const S: u32 = $S; + + const ROOT_OF_UNITY: Self = $FieldName(Residue::new(&U256::from_be_hex($ROOT_OF_UNITY))); + const ROOT_OF_UNITY_INV: Self = Self(Self::ROOT_OF_UNITY.0.invert().0); + + const DELTA: Self = $FieldName(Residue::new(&U256::from_be_hex($DELTA))); + + fn from_repr(bytes: Self::Repr) -> CtOption { + let res = U256::from_le_slice(&bytes); + 
CtOption::new($FieldName(Residue::new(&res)), res.ct_lt(&$MODULUS)) + } + fn to_repr(&self) -> Self::Repr { + let mut repr = [0; 32]; + repr.copy_from_slice(&self.0.retrieve().to_le_bytes()); + repr + } + + fn is_odd(&self) -> Choice { + self.0.retrieve().is_odd() + } + } + + impl PrimeFieldBits for $FieldName { + type ReprBits = [u8; 32]; + + fn to_le_bits(&self) -> FieldBits { + self.to_repr().into() + } + + fn char_le_bits() -> FieldBits { + let mut repr = [0; 32]; + repr.copy_from_slice(&MODULUS.to_le_bytes()); + repr.into() + } + } + + impl Sum<$FieldName> for $FieldName { + fn sum>(iter: I) -> $FieldName { + let mut res = $FieldName::ZERO; + for item in iter { + res += item; + } + res + } + } + + impl<'a> Sum<&'a $FieldName> for $FieldName { + fn sum>(iter: I) -> $FieldName { + iter.cloned().sum() + } + } + + impl Product<$FieldName> for $FieldName { + fn product>(iter: I) -> $FieldName { + let mut res = $FieldName::ONE; + for item in iter { + res *= item; + } + res + } + } + + impl<'a> Product<&'a $FieldName> for $FieldName { + fn product>(iter: I) -> $FieldName { + iter.cloned().product() + } + } + }; +} diff --git a/crypto/evrf/embedwards25519/src/lib.rs b/crypto/evrf/embedwards25519/src/lib.rs new file mode 100644 index 00000000..858f4ada --- /dev/null +++ b/crypto/evrf/embedwards25519/src/lib.rs @@ -0,0 +1,47 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] + +use generic_array::typenum::{Sum, Diff, Quot, U, U1, U2}; +use ciphersuite::group::{ff::PrimeField, Group}; + +#[macro_use] +mod backend; + +mod scalar; +pub use scalar::Scalar; + +pub use dalek_ff_group::Scalar as FieldElement; + +mod point; +pub use point::Point; + +/// Ciphersuite for Embedwards25519. +/// +/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition +/// between the two. This means `dst: b"abc", data: b"def"`, will produce the same scalar as +/// `dst: "abcdef", data: b""`. Please use carefully, not letting dsts be substrings of each other. 
+#[derive(Clone, Copy, PartialEq, Eq, Debug, zeroize::Zeroize)] +pub struct Embedwards25519; +impl ciphersuite::Ciphersuite for Embedwards25519 { + type F = Scalar; + type G = Point; + type H = blake2::Blake2b512; + + const ID: &'static [u8] = b"embedwards25519"; + + fn generator() -> Self::G { + Point::generator() + } + + fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F { + use blake2::Digest; + Scalar::wide_reduce(Self::H::digest([dst, data].concat()).as_slice().try_into().unwrap()) + } +} + +impl generalized_bulletproofs_ec_gadgets::DiscreteLogParameters for Embedwards25519 { + type ScalarBits = U<{ Scalar::NUM_BITS as usize }>; + type XCoefficients = Quot, U2>; + type XCoefficientsMinusOne = Diff; + type YxCoefficients = Diff, U1>, U2>, U2>; +} diff --git a/crypto/evrf/embedwards25519/src/point.rs b/crypto/evrf/embedwards25519/src/point.rs new file mode 100644 index 00000000..9d24e88a --- /dev/null +++ b/crypto/evrf/embedwards25519/src/point.rs @@ -0,0 +1,415 @@ +use core::{ + ops::{DerefMut, Add, AddAssign, Neg, Sub, SubAssign, Mul, MulAssign}, + iter::Sum, +}; + +use rand_core::RngCore; + +use zeroize::Zeroize; +use subtle::{Choice, CtOption, ConstantTimeEq, ConditionallySelectable}; + +use ciphersuite::group::{ + ff::{Field, PrimeField, PrimeFieldBits}, + Group, GroupEncoding, + prime::PrimeGroup, +}; + +use crate::{backend::u8_from_bool, Scalar, FieldElement}; + +#[allow(non_snake_case)] +fn B() -> FieldElement { + FieldElement::from_repr(hex_literal::hex!( + "5f07603a853f20370b682036210d463e64903a23ea669d07ca26cfc13f594209" + )) + .unwrap() +} + +fn recover_y(x: FieldElement) -> CtOption { + // x**3 - 3 * x + B + ((x.square() * x) - (x.double() + x) + B()).sqrt() +} + +/// Point. +#[derive(Clone, Copy, Debug, Zeroize)] +#[repr(C)] +pub struct Point { + x: FieldElement, // / Z + y: FieldElement, // / Z + z: FieldElement, +} + +impl ConstantTimeEq for Point { + fn ct_eq(&self, other: &Self) -> Choice { + let x1 = self.x * other.z; + let x2 = other.x * self.z; + + let y1 = self.y * other.z; + let y2 = other.y * self.z; + + (self.x.is_zero() & other.x.is_zero()) | (x1.ct_eq(&x2) & y1.ct_eq(&y2)) + } +} + +impl PartialEq for Point { + fn eq(&self, other: &Point) -> bool { + self.ct_eq(other).into() + } +} + +impl Eq for Point {} + +impl ConditionallySelectable for Point { + fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self { + Point { + x: FieldElement::conditional_select(&a.x, &b.x, choice), + y: FieldElement::conditional_select(&a.y, &b.y, choice), + z: FieldElement::conditional_select(&a.z, &b.z, choice), + } + } +} + +impl Add for Point { + type Output = Point; + #[allow(non_snake_case)] + fn add(self, other: Self) -> Self { + // add-2015-rcb + + let a = -FieldElement::from(3u64); + let B = B(); + let b3 = B + B + B; + + let X1 = self.x; + let Y1 = self.y; + let Z1 = self.z; + let X2 = other.x; + let Y2 = other.y; + let Z2 = other.z; + + let t0 = X1 * X2; + let t1 = Y1 * Y2; + let t2 = Z1 * Z2; + let t3 = X1 + Y1; + let t4 = X2 + Y2; + let t3 = t3 * t4; + let t4 = t0 + t1; + let t3 = t3 - t4; + let t4 = X1 + Z1; + let t5 = X2 + Z2; + let t4 = t4 * t5; + let t5 = t0 + t2; + let t4 = t4 - t5; + let t5 = Y1 + Z1; + let X3 = Y2 + Z2; + let t5 = t5 * X3; + let X3 = t1 + t2; + let t5 = t5 - X3; + let Z3 = a * t4; + let X3 = b3 * t2; + let Z3 = X3 + Z3; + let X3 = t1 - Z3; + let Z3 = t1 + Z3; + let Y3 = X3 * Z3; + let t1 = t0 + t0; + let t1 = t1 + t0; + let t2 = a * t2; + let t4 = b3 * t4; + let t1 = t1 + t2; + let t2 = t0 - t2; + let t2 = a * t2; + let t4 = t4 + t2; + let 
t0 = t1 * t4; + let Y3 = Y3 + t0; + let t0 = t5 * t4; + let X3 = t3 * X3; + let X3 = X3 - t0; + let t0 = t3 * t1; + let Z3 = t5 * Z3; + let Z3 = Z3 + t0; + Point { x: X3, y: Y3, z: Z3 } + } +} + +impl AddAssign for Point { + fn add_assign(&mut self, other: Point) { + *self = *self + other; + } +} + +impl Add<&Point> for Point { + type Output = Point; + fn add(self, other: &Point) -> Point { + self + *other + } +} + +impl AddAssign<&Point> for Point { + fn add_assign(&mut self, other: &Point) { + *self += *other; + } +} + +impl Neg for Point { + type Output = Point; + fn neg(self) -> Self { + Point { x: self.x, y: -self.y, z: self.z } + } +} + +impl Sub for Point { + type Output = Point; + #[allow(clippy::suspicious_arithmetic_impl)] + fn sub(self, other: Self) -> Self { + self + other.neg() + } +} + +impl SubAssign for Point { + fn sub_assign(&mut self, other: Point) { + *self = *self - other; + } +} + +impl Sub<&Point> for Point { + type Output = Point; + fn sub(self, other: &Point) -> Point { + self - *other + } +} + +impl SubAssign<&Point> for Point { + fn sub_assign(&mut self, other: &Point) { + *self -= *other; + } +} + +impl Group for Point { + type Scalar = Scalar; + fn random(mut rng: impl RngCore) -> Self { + loop { + let mut bytes = [0; 32]; + rng.fill_bytes(bytes.as_mut()); + let opt = Self::from_bytes(&bytes); + if opt.is_some().into() { + return opt.unwrap(); + } + } + } + fn identity() -> Self { + Point { x: FieldElement::ZERO, y: FieldElement::ONE, z: FieldElement::ZERO } + } + fn generator() -> Self { + Point { + x: FieldElement::from_repr(hex_literal::hex!( + "0100000000000000000000000000000000000000000000000000000000000000" + )) + .unwrap(), + y: FieldElement::from_repr(hex_literal::hex!( + "2e4118080a484a3dfbafe2199a0e36b7193581d676c0dadfa376b0265616020c" + )) + .unwrap(), + z: FieldElement::ONE, + } + } + fn is_identity(&self) -> Choice { + self.z.ct_eq(&FieldElement::ZERO) + } + #[allow(non_snake_case)] + fn double(&self) -> Self { + // dbl-2007-bl-2 + let X1 = self.x; + let Y1 = self.y; + let Z1 = self.z; + + let w = (X1 - Z1) * (X1 + Z1); + let w = w.double() + w; + let s = (Y1 * Z1).double(); + let ss = s.square(); + let sss = s * ss; + let R = Y1 * s; + let RR = R.square(); + let B_ = (X1 * R).double(); + let h = w.square() - B_.double(); + let X3 = h * s; + let Y3 = w * (B_ - h) - RR.double(); + let Z3 = sss; + + let res = Self { x: X3, y: Y3, z: Z3 }; + // If self is identity, res will not be well-formed + // Accordingly, we return self if self was the identity + Self::conditional_select(&res, self, self.is_identity()) + } +} + +impl Sum for Point { + fn sum>(iter: I) -> Point { + let mut res = Self::identity(); + for i in iter { + res += i; + } + res + } +} + +impl<'a> Sum<&'a Point> for Point { + fn sum>(iter: I) -> Point { + Point::sum(iter.cloned()) + } +} + +impl Mul for Point { + type Output = Point; + fn mul(self, mut other: Scalar) -> Point { + // Precompute the optimal amount that's a multiple of 2 + let mut table = [Point::identity(); 16]; + table[1] = self; + for i in 2 .. 16 { + table[i] = table[i - 1] + self; + } + + let mut res = Self::identity(); + let mut bits = 0; + for (i, mut bit) in other.to_le_bits().iter_mut().rev().enumerate() { + bits <<= 1; + let mut bit = u8_from_bool(bit.deref_mut()); + bits |= bit; + bit.zeroize(); + + if ((i + 1) % 4) == 0 { + if i != 3 { + for _ in 0 .. 
4 { + res = res.double(); + } + } + + let mut term = table[0]; + for (j, candidate) in table[1 ..].iter().enumerate() { + let j = j + 1; + term = Self::conditional_select(&term, candidate, usize::from(bits).ct_eq(&j)); + } + res += term; + bits = 0; + } + } + other.zeroize(); + res + } +} + +impl MulAssign for Point { + fn mul_assign(&mut self, other: Scalar) { + *self = *self * other; + } +} + +impl Mul<&Scalar> for Point { + type Output = Point; + fn mul(self, other: &Scalar) -> Point { + self * *other + } +} + +impl MulAssign<&Scalar> for Point { + fn mul_assign(&mut self, other: &Scalar) { + *self *= *other; + } +} + +impl GroupEncoding for Point { + type Repr = [u8; 32]; + + fn from_bytes(bytes: &Self::Repr) -> CtOption { + // Extract and clear the sign bit + let mut bytes = *bytes; + let sign = Choice::from(bytes[31] >> 7); + bytes[31] &= u8::MAX >> 1; + + // Parse x, recover y + FieldElement::from_repr(bytes).and_then(|x| { + let is_identity = x.is_zero(); + + let y = recover_y(x).map(|mut y| { + y = <_>::conditional_select(&y, &-y, y.is_odd().ct_eq(&!sign)); + y + }); + + // If this the identity, set y to 1 + let y = + CtOption::conditional_select(&y, &CtOption::new(FieldElement::ONE, 1.into()), is_identity); + // Create the point if we have a y solution + let point = y.map(|y| Point { x, y, z: FieldElement::ONE }); + + let not_negative_zero = !(is_identity & sign); + // Only return the point if it isn't -0 + CtOption::conditional_select( + &CtOption::new(Point::identity(), 0.into()), + &point, + not_negative_zero, + ) + }) + } + + fn from_bytes_unchecked(bytes: &Self::Repr) -> CtOption { + Point::from_bytes(bytes) + } + + fn to_bytes(&self) -> Self::Repr { + let Some(z) = Option::::from(self.z.invert()) else { + return [0; 32]; + }; + let x = self.x * z; + let y = self.y * z; + + let mut res = [0; 32]; + res.as_mut().copy_from_slice(&x.to_repr()); + + // The following conditional select normalizes the sign to 0 when x is 0 + let y_sign = u8::conditional_select(&y.is_odd().unwrap_u8(), &0, x.ct_eq(&FieldElement::ZERO)); + res[31] |= y_sign << 7; + res + } +} + +impl PrimeGroup for Point {} + +impl ec_divisors::DivisorCurve for Point { + type FieldElement = FieldElement; + + fn a() -> Self::FieldElement { + -FieldElement::from(3u64) + } + fn b() -> Self::FieldElement { + B() + } + + fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> { + let z: Self::FieldElement = Option::from(point.z.invert())?; + Some((point.x * z, point.y * z)) + } +} + +#[test] +fn test_curve() { + ff_group_tests::group::test_prime_group_bits::<_, Point>(&mut rand_core::OsRng); +} + +#[test] +fn generator() { + assert_eq!( + Point::generator(), + Point::from_bytes(&hex_literal::hex!( + "0100000000000000000000000000000000000000000000000000000000000000" + )) + .unwrap() + ); +} + +#[test] +fn zero_x_is_invalid() { + assert!(Option::::from(recover_y(FieldElement::ZERO)).is_none()); +} + +// Checks random won't infinitely loop +#[test] +fn random() { + Point::random(&mut rand_core::OsRng); +} diff --git a/crypto/evrf/embedwards25519/src/scalar.rs b/crypto/evrf/embedwards25519/src/scalar.rs new file mode 100644 index 00000000..f2d6e61f --- /dev/null +++ b/crypto/evrf/embedwards25519/src/scalar.rs @@ -0,0 +1,52 @@ +use zeroize::{DefaultIsZeroes, Zeroize}; + +use crypto_bigint::{ + U256, U512, + modular::constant_mod::{ResidueParams, Residue}, +}; + +const MODULUS_STR: &str = "0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96b"; + +impl_modulus!(EmbedwardsQ, U256, MODULUS_STR); 
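+// `impl_modulus!` (from crypto-bigint) declares `EmbedwardsQ`, a `ResidueParams` type carrying
+// the precomputed Montgomery constants for constant-time arithmetic modulo `MODULUS_STR`.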
+type ResidueType = Residue<EmbedwardsQ, { EmbedwardsQ::LIMBS }>;
+
+/// The Scalar field of Embedwards25519.
+#[derive(Clone, Copy, PartialEq, Eq, Default, Debug)]
+#[repr(C)]
+pub struct Scalar(pub(crate) ResidueType);
+
+impl DefaultIsZeroes for Scalar {}
+
+pub(crate) const MODULUS: U256 = U256::from_be_hex(MODULUS_STR);
+
+const WIDE_MODULUS: U512 = U512::from_be_hex(concat!(
+  "0000000000000000000000000000000000000000000000000000000000000000",
+  "0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96b",
+));
+
+field!(
+  Scalar,
+  ResidueType,
+  MODULUS_STR,
+  MODULUS,
+  WIDE_MODULUS,
+  252,
+  10,
+  1,
+  "0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96a",
+  "0000000000000000000000000000000000000000000000000000000000000064",
+);
+
+impl Scalar {
+  /// Perform a wide reduction, presumably to obtain a non-biased Scalar field element.
+  pub fn wide_reduce(bytes: [u8; 64]) -> Scalar {
+    Scalar(Residue::new(&reduce(U512::from_le_slice(bytes.as_ref()))))
+  }
+}
+
+#[test]
+fn test_scalar_field() {
+  ff_group_tests::prime_field::test_prime_field_bits::<_, Scalar>(&mut rand_core::OsRng);
+}
diff --git a/crypto/evrf/generalized-bulletproofs/Cargo.toml b/crypto/evrf/generalized-bulletproofs/Cargo.toml
new file mode 100644
index 00000000..9dfc95a5
--- /dev/null
+++ b/crypto/evrf/generalized-bulletproofs/Cargo.toml
@@ -0,0 +1,33 @@
+[package]
+name = "generalized-bulletproofs"
+version = "0.1.0"
+description = "Generalized Bulletproofs"
+license = "MIT"
+repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/generalized-bulletproofs"
+authors = ["Luke Parker "]
+keywords = ["ciphersuite", "ff", "group"]
+edition = "2021"
+
+[package.metadata.docs.rs]
+all-features = true
+rustdoc-args = ["--cfg", "docsrs"]
+
+[dependencies]
+rand_core = { version = "0.6", default-features = false, features = ["std"] }
+
+zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
+
+blake2 = { version = "0.10", default-features = false, features = ["std"] }
+
+multiexp = { path = "../../multiexp", version = "0.4", default-features = false, features = ["std", "batch"] }
+ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }
+
+[dev-dependencies]
+rand_core = { version = "0.6", features = ["getrandom"] }
+
+transcript = { package = "flexible-transcript", path = "../../transcript", features = ["recommended"] }
+
+ciphersuite = { path = "../../ciphersuite", features = ["ristretto"] }
+
+[features]
+tests = []
diff --git a/crypto/evrf/generalized-bulletproofs/LICENSE b/crypto/evrf/generalized-bulletproofs/LICENSE
new file mode 100644
index 00000000..ad3c2fd5
--- /dev/null
+++ b/crypto/evrf/generalized-bulletproofs/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2021-2024 Luke Parker
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/crypto/evrf/generalized-bulletproofs/README.md b/crypto/evrf/generalized-bulletproofs/README.md new file mode 100644 index 00000000..da588b8d --- /dev/null +++ b/crypto/evrf/generalized-bulletproofs/README.md @@ -0,0 +1,6 @@ +# Generalized Bulletproofs + +An implementation of +[Generalized Bulletproofs](https://repo.getmonero.org/monero-project/ccs-proposals/uploads/a9baa50c38c6312efc0fea5c6a188bb9/gbp.pdf), +a variant of the Bulletproofs arithmetic circuit statement to support Pedersen +vector commitments. diff --git a/crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs b/crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs new file mode 100644 index 00000000..e0c6e464 --- /dev/null +++ b/crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs @@ -0,0 +1,679 @@ +use rand_core::{RngCore, CryptoRng}; + +use zeroize::{Zeroize, ZeroizeOnDrop}; + +use multiexp::{multiexp, multiexp_vartime}; +use ciphersuite::{group::ff::Field, Ciphersuite}; + +use crate::{ + ScalarVector, PointVector, ProofGenerators, PedersenCommitment, PedersenVectorCommitment, + BatchVerifier, + transcript::*, + lincomb::accumulate_vector, + inner_product::{IpError, IpStatement, IpWitness, P}, +}; +pub use crate::lincomb::{Variable, LinComb}; + +/// An Arithmetic Circuit Statement. +/// +/// Bulletproofs' constraints are of the form +/// `aL * aR = aO, WL * aL + WR * aR + WO * aO = WV * V + c`. +/// +/// Generalized Bulletproofs modifies this to +/// `aL * aR = aO, WL * aL + WR * aR + WO * aO + WCG * C_G + WCH * C_H = WV * V + c`. +/// +/// We implement the latter, yet represented (for simplicity) as +/// `aL * aR = aO, WL * aL + WR * aR + WO * aO + WCG * C_G + WCH * C_H + WV * V + c = 0`. +#[derive(Clone, Debug)] +pub struct ArithmeticCircuitStatement<'a, C: Ciphersuite> { + generators: ProofGenerators<'a, C>, + + constraints: Vec>, + C: PointVector, + V: PointVector, +} + +impl<'a, C: Ciphersuite> Zeroize for ArithmeticCircuitStatement<'a, C> { + fn zeroize(&mut self) { + self.constraints.zeroize(); + self.C.zeroize(); + self.V.zeroize(); + } +} + +/// The witness for an arithmetic circuit statement. +#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)] +pub struct ArithmeticCircuitWitness { + aL: ScalarVector, + aR: ScalarVector, + aO: ScalarVector, + + c: Vec>, + v: Vec>, +} + +/// An error incurred during arithmetic circuit proof operations. +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub enum AcError { + /// The vectors of scalars which are multiplied against each other were of different lengths. + DifferingLrLengths, + /// The matrices of constraints are of different lengths. + InconsistentAmountOfConstraints, + /// A constraint referred to a non-existent term. + ConstrainedNonExistentTerm, + /// A constraint referred to a non-existent commitment. + ConstrainedNonExistentCommitment, + /// There weren't enough generators to prove for this statement. + NotEnoughGenerators, + /// The witness was inconsistent to the statement. + /// + /// Sanity checks on the witness are always performed. 
If the library is compiled with debug
+  /// assertions on, the satisfaction of all constraints and validity of the commitments is
+  /// additionally checked.
+  InconsistentWitness,
+  /// There was an error from the inner-product proof.
+  Ip(IpError),
+  /// The proof wasn't complete and the necessary values could not be read from the transcript.
+  IncompleteProof,
+}
+
+impl<C: Ciphersuite> ArithmeticCircuitWitness<C> {
+  /// Constructs a new witness instance.
+  pub fn new(
+    aL: ScalarVector<C::F>,
+    aR: ScalarVector<C::F>,
+    c: Vec<PedersenVectorCommitment<C>>,
+    v: Vec<PedersenCommitment<C>>,
+  ) -> Result<Self, AcError> {
+    if aL.len() != aR.len() {
+      Err(AcError::DifferingLrLengths)?;
+    }
+
+    // The Pedersen Vector Commitments don't have their variables' lengths checked as they aren't
+    // paired off with each other as aL, aR are
+
+    // The PVC commit function ensures there's enough generators for their amount of terms
+    // If there aren't enough/the same generators when this is proven for, it'll trigger
+    // InconsistentWitness
+
+    let aO = aL.clone() * &aR;
+    Ok(ArithmeticCircuitWitness { aL, aR, aO, c, v })
+  }
+}
+
+struct YzChallenges<C: Ciphersuite> {
+  y_inv: ScalarVector<C::F>,
+  z: ScalarVector<C::F>,
+}
+
+impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> {
+  // The amount of multiplications performed.
+  fn n(&self) -> usize {
+    self.generators.len()
+  }
+
+  // The amount of constraints.
+  fn q(&self) -> usize {
+    self.constraints.len()
+  }
+
+  // The amount of Pedersen vector commitments.
+  fn c(&self) -> usize {
+    self.C.len()
+  }
+
+  // The amount of Pedersen commitments.
+  fn m(&self) -> usize {
+    self.V.len()
+  }
+
+  /// Create a new ArithmeticCircuitStatement for the specified relationship.
+  ///
+  /// The `LinComb`s passed as `constraints` will be bound to evaluate to 0.
+  ///
+  /// The constraints are not transcripted. They're expected to be deterministic from the context
+  /// and higher-level statement. If your constraints are variable, you MUST transcript them before
+  /// calling prove/verify.
+  ///
+  /// The commitments are expected to have been transcripted externally to this statement's
+  /// invocation. That's practically ensured by taking a `Commitments` struct here, which is only
+  /// obtainable via a transcript.
+  pub fn new(
+    generators: ProofGenerators<'a, C>,
+    constraints: Vec<LinComb<C::F>>,
+    commitments: Commitments<C>,
+  ) -> Result<Self, AcError> {
+    let Commitments { C, V } = commitments;
+
+    for constraint in &constraints {
+      if Some(generators.len()) <= constraint.highest_a_index {
+        Err(AcError::ConstrainedNonExistentTerm)?;
+      }
+      if Some(C.len()) <= constraint.highest_c_index {
+        Err(AcError::ConstrainedNonExistentCommitment)?;
+      }
+      if Some(V.len()) <= constraint.highest_v_index {
+        Err(AcError::ConstrainedNonExistentCommitment)?;
+      }
+    }
+
+    Ok(Self { generators, constraints, C, V })
+  }
+
+  fn yz_challenges(&self, y: C::F, z_1: C::F) -> YzChallenges<C> {
+    let y_inv = y.invert().unwrap();
+    let y_inv = ScalarVector::powers(y_inv, self.n());
+
+    // Powers of z *starting with z**1*
+    // We could reuse powers and remove the first element, yet this is cheaper than the shift
+    // that would be required
+    let q = self.q();
+    let mut z = ScalarVector(Vec::with_capacity(q));
+    z.0.push(z_1);
+    for _ in 1 .. q {
+      z.0.push(*z.0.last().unwrap() * z_1);
+    }
+    z.0.truncate(q);
+
+    YzChallenges { y_inv, z }
+  }
+
+  /// Prove for this statement/witness.
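+  ///
+  /// This commits to the multiplication vectors (via `AI`, `AO`, and the blinding vector `S`),
+  /// samples the challenges `y` and `z`, builds the `t` polynomial binding the constraints,
+  /// commits to its coefficients, opens the committed polynomials at the challenge `x`, and
+  /// reduces the remaining checks to a single inner-product argument.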
+ pub fn prove( + self, + rng: &mut R, + transcript: &mut Transcript, + mut witness: ArithmeticCircuitWitness, + ) -> Result<(), AcError> { + let n = self.n(); + let c = self.c(); + let m = self.m(); + + // Check the witness length and pad it to the necessary power of two + if witness.aL.len() > n { + Err(AcError::NotEnoughGenerators)?; + } + while witness.aL.len() < n { + witness.aL.0.push(C::F::ZERO); + witness.aR.0.push(C::F::ZERO); + witness.aO.0.push(C::F::ZERO); + } + for c in &mut witness.c { + if c.g_values.len() > n { + Err(AcError::NotEnoughGenerators)?; + } + if c.h_values.len() > n { + Err(AcError::NotEnoughGenerators)?; + } + // The Pedersen vector commitments internally have n terms + while c.g_values.len() < n { + c.g_values.0.push(C::F::ZERO); + } + while c.h_values.len() < n { + c.h_values.0.push(C::F::ZERO); + } + } + + // Check the witness's consistency with the statement + if (c != witness.c.len()) || (m != witness.v.len()) { + Err(AcError::InconsistentWitness)?; + } + + #[cfg(debug_assertions)] + { + for (commitment, opening) in self.V.0.iter().zip(witness.v.iter()) { + if *commitment != opening.commit(self.generators.g(), self.generators.h()) { + Err(AcError::InconsistentWitness)?; + } + } + for (commitment, opening) in self.C.0.iter().zip(witness.c.iter()) { + if Some(*commitment) != + opening.commit( + self.generators.g_bold_slice(), + self.generators.h_bold_slice(), + self.generators.h(), + ) + { + Err(AcError::InconsistentWitness)?; + } + } + for constraint in &self.constraints { + let eval = + constraint + .WL + .iter() + .map(|(i, weight)| *weight * witness.aL[*i]) + .chain(constraint.WR.iter().map(|(i, weight)| *weight * witness.aR[*i])) + .chain(constraint.WO.iter().map(|(i, weight)| *weight * witness.aO[*i])) + .chain( + constraint.WCG.iter().zip(&witness.c).flat_map(|(weights, c)| { + weights.iter().map(|(j, weight)| *weight * c.g_values[*j]) + }), + ) + .chain( + constraint.WCH.iter().zip(&witness.c).flat_map(|(weights, c)| { + weights.iter().map(|(j, weight)| *weight * c.h_values[*j]) + }), + ) + .chain(constraint.WV.iter().map(|(i, weight)| *weight * witness.v[*i].value)) + .chain(core::iter::once(constraint.c)) + .sum::(); + + if eval != C::F::ZERO { + Err(AcError::InconsistentWitness)?; + } + } + } + + let alpha = C::F::random(&mut *rng); + let beta = C::F::random(&mut *rng); + let rho = C::F::random(&mut *rng); + + let AI = { + let alg = witness.aL.0.iter().enumerate().map(|(i, aL)| (*aL, self.generators.g_bold(i))); + let arh = witness.aR.0.iter().enumerate().map(|(i, aR)| (*aR, self.generators.h_bold(i))); + let ah = core::iter::once((alpha, self.generators.h())); + let mut AI_terms = alg.chain(arh).chain(ah).collect::>(); + let AI = multiexp(&AI_terms); + AI_terms.zeroize(); + AI + }; + let AO = { + let aog = witness.aO.0.iter().enumerate().map(|(i, aO)| (*aO, self.generators.g_bold(i))); + let bh = core::iter::once((beta, self.generators.h())); + let mut AO_terms = aog.chain(bh).collect::>(); + let AO = multiexp(&AO_terms); + AO_terms.zeroize(); + AO + }; + + let mut sL = ScalarVector(Vec::with_capacity(n)); + let mut sR = ScalarVector(Vec::with_capacity(n)); + for _ in 0 .. 
n { + sL.0.push(C::F::random(&mut *rng)); + sR.0.push(C::F::random(&mut *rng)); + } + let S = { + let slg = sL.0.iter().enumerate().map(|(i, sL)| (*sL, self.generators.g_bold(i))); + let srh = sR.0.iter().enumerate().map(|(i, sR)| (*sR, self.generators.h_bold(i))); + let rh = core::iter::once((rho, self.generators.h())); + let mut S_terms = slg.chain(srh).chain(rh).collect::>(); + let S = multiexp(&S_terms); + S_terms.zeroize(); + S + }; + + transcript.push_point(AI); + transcript.push_point(AO); + transcript.push_point(S); + let y = transcript.challenge(); + let z = transcript.challenge(); + let YzChallenges { y_inv, z } = self.yz_challenges(y, z); + let y = ScalarVector::powers(y, n); + + // t is a n'-term polynomial + // While Bulletproofs discuss it as a 6-term polynomial, Generalized Bulletproofs re-defines it + // as `2(n' + 1)`-term, where `n'` is `2 (c + 1)`. + // When `c = 0`, `n' = 2`, and t is `6` (which lines up with Bulletproofs having a 6-term + // polynomial). + + // ni = n' + let ni = 2 * (c + 1); + // These indexes are from the Generalized Bulletproofs paper + #[rustfmt::skip] + let ilr = ni / 2; // 1 if c = 0 + #[rustfmt::skip] + let io = ni; // 2 if c = 0 + #[rustfmt::skip] + let is = ni + 1; // 3 if c = 0 + #[rustfmt::skip] + let jlr = ni / 2; // 1 if c = 0 + #[rustfmt::skip] + let jo = 0; // 0 if c = 0 + #[rustfmt::skip] + let js = ni + 1; // 3 if c = 0 + + // If c = 0, these indexes perfectly align with the stated powers of X from the Bulletproofs + // paper for the following coefficients + + // Declare the l and r polynomials, assigning the traditional coefficients to their positions + let mut l = vec![]; + let mut r = vec![]; + for _ in 0 .. (is + 1) { + l.push(ScalarVector::new(0)); + r.push(ScalarVector::new(0)); + } + + let mut l_weights = ScalarVector::new(n); + let mut r_weights = ScalarVector::new(n); + let mut o_weights = ScalarVector::new(n); + for (constraint, z) in self.constraints.iter().zip(&z.0) { + accumulate_vector(&mut l_weights, &constraint.WL, *z); + accumulate_vector(&mut r_weights, &constraint.WR, *z); + accumulate_vector(&mut o_weights, &constraint.WO, *z); + } + + l[ilr] = (r_weights * &y_inv) + &witness.aL; + l[io] = witness.aO.clone(); + l[is] = sL; + r[jlr] = l_weights + &(witness.aR.clone() * &y); + r[jo] = o_weights - &y; + r[js] = sR * &y; + + // Pad as expected + for l in &mut l { + debug_assert!((l.len() == 0) || (l.len() == n)); + if l.len() == 0 { + *l = ScalarVector::new(n); + } + } + for r in &mut r { + debug_assert!((r.len() == 0) || (r.len() == n)); + if r.len() == 0 { + *r = ScalarVector::new(n); + } + } + + // We now fill in the vector commitments + // We use unused coefficients of l increasing from 0 (skipping ilr), and unused coefficients of + // r decreasing from n' (skipping jlr) + + let mut cg_weights = Vec::with_capacity(witness.c.len()); + let mut ch_weights = Vec::with_capacity(witness.c.len()); + for i in 0 .. 
witness.c.len() { + let mut cg = ScalarVector::new(n); + let mut ch = ScalarVector::new(n); + for (constraint, z) in self.constraints.iter().zip(&z.0) { + if let Some(WCG) = constraint.WCG.get(i) { + accumulate_vector(&mut cg, WCG, *z); + } + if let Some(WCH) = constraint.WCH.get(i) { + accumulate_vector(&mut ch, WCH, *z); + } + } + cg_weights.push(cg); + ch_weights.push(ch); + } + + for (i, (c, (cg_weights, ch_weights))) in + witness.c.iter().zip(cg_weights.into_iter().zip(ch_weights)).enumerate() + { + let i = i + 1; + let j = ni - i; + + l[i] = c.g_values.clone(); + l[j] = ch_weights * &y_inv; + r[j] = cg_weights; + r[i] = (c.h_values.clone() * &y) + &r[i]; + } + + // Multiply them to obtain t + let mut t = ScalarVector::new(1 + (2 * (l.len() - 1))); + for (i, l) in l.iter().enumerate() { + for (j, r) in r.iter().enumerate() { + let new_coeff = i + j; + t[new_coeff] += l.inner_product(r.0.iter()); + } + } + + // Per Bulletproofs, calculate masks tau for each t where (i > 0) && (i != 2) + // Per Generalized Bulletproofs, calculate masks tau for each t where i != n' + // With Bulletproofs, t[0] is zero, hence its omission, yet Generalized Bulletproofs uses it + let mut tau_before_ni = vec![]; + for _ in 0 .. ni { + tau_before_ni.push(C::F::random(&mut *rng)); + } + let mut tau_after_ni = vec![]; + for _ in 0 .. t.0[(ni + 1) ..].len() { + tau_after_ni.push(C::F::random(&mut *rng)); + } + // Calculate commitments to the coefficients of t, blinded by tau + debug_assert_eq!(t.0[0 .. ni].len(), tau_before_ni.len()); + for (t, tau) in t.0[0 .. ni].iter().zip(tau_before_ni.iter()) { + transcript.push_point(multiexp(&[(*t, self.generators.g()), (*tau, self.generators.h())])); + } + debug_assert_eq!(t.0[(ni + 1) ..].len(), tau_after_ni.len()); + for (t, tau) in t.0[(ni + 1) ..].iter().zip(tau_after_ni.iter()) { + transcript.push_point(multiexp(&[(*t, self.generators.g()), (*tau, self.generators.h())])); + } + + let x: ScalarVector = ScalarVector::powers(transcript.challenge(), t.len()); + + let poly_eval = |poly: &[ScalarVector], x: &ScalarVector<_>| -> ScalarVector<_> { + let mut res = ScalarVector::::new(poly[0].0.len()); + for (i, coeff) in poly.iter().enumerate() { + res = res + &(coeff.clone() * x[i]); + } + res + }; + let l = poly_eval(&l, &x); + let r = poly_eval(&r, &x); + + let t_caret = l.inner_product(r.0.iter()); + + let mut V_weights = ScalarVector::new(self.V.len()); + for (constraint, z) in self.constraints.iter().zip(&z.0) { + // We use `-z`, not `z`, as we write our constraint as `... 
+ WV V = 0` not `= WV V + ..` + // This means we need to subtract `WV V` from both sides, which we accomplish here + accumulate_vector(&mut V_weights, &constraint.WV, -*z); + } + + let tau_x = { + let mut tau_x_poly = vec![]; + tau_x_poly.extend(tau_before_ni); + tau_x_poly.push(V_weights.inner_product(witness.v.iter().map(|v| &v.mask))); + tau_x_poly.extend(tau_after_ni); + + let mut tau_x = C::F::ZERO; + for (i, coeff) in tau_x_poly.into_iter().enumerate() { + tau_x += coeff * x[i]; + } + tau_x + }; + + // Calculate u for the powers of x variable to ilr/io/is + let u = { + // Calculate the first part of u + let mut u = (alpha * x[ilr]) + (beta * x[io]) + (rho * x[is]); + + // Incorporate the commitment masks multiplied by the associated power of x + for (i, commitment) in witness.c.iter().enumerate() { + let i = i + 1; + u += x[i] * commitment.mask; + } + u + }; + + // Use the Inner-Product argument to prove for this + // P = t_caret * g + l * g_bold + r * (y_inv * h_bold) + + let mut P_terms = Vec::with_capacity(1 + (2 * self.generators.len())); + debug_assert_eq!(l.len(), r.len()); + for (i, (l, r)) in l.0.iter().zip(r.0.iter()).enumerate() { + P_terms.push((*l, self.generators.g_bold(i))); + P_terms.push((y_inv[i] * r, self.generators.h_bold(i))); + } + + // Protocol 1, inlined, since our IpStatement is for Protocol 2 + transcript.push_scalar(tau_x); + transcript.push_scalar(u); + transcript.push_scalar(t_caret); + let ip_x = transcript.challenge(); + P_terms.push((ip_x * t_caret, self.generators.g())); + IpStatement::new( + self.generators, + y_inv, + ip_x, + // Safe since IpStatement isn't a ZK proof + P::Prover(multiexp_vartime(&P_terms)), + ) + .unwrap() + .prove(transcript, IpWitness::new(l, r).unwrap()) + .map_err(AcError::Ip) + } + + /// Verify a proof for this statement. + pub fn verify( + self, + rng: &mut R, + verifier: &mut BatchVerifier, + transcript: &mut VerifierTranscript, + ) -> Result<(), AcError> { + let n = self.n(); + let c = self.c(); + + let ni = 2 * (c + 1); + + let ilr = ni / 2; + let io = ni; + let is = ni + 1; + let jlr = ni / 2; + + let l_r_poly_len = 1 + ni + 1; + let t_poly_len = (2 * l_r_poly_len) - 1; + + let AI = transcript.read_point::().map_err(|_| AcError::IncompleteProof)?; + let AO = transcript.read_point::().map_err(|_| AcError::IncompleteProof)?; + let S = transcript.read_point::().map_err(|_| AcError::IncompleteProof)?; + let y = transcript.challenge(); + let z = transcript.challenge(); + let YzChallenges { y_inv, z } = self.yz_challenges(y, z); + + let mut l_weights = ScalarVector::new(n); + let mut r_weights = ScalarVector::new(n); + let mut o_weights = ScalarVector::new(n); + for (constraint, z) in self.constraints.iter().zip(&z.0) { + accumulate_vector(&mut l_weights, &constraint.WL, *z); + accumulate_vector(&mut r_weights, &constraint.WR, *z); + accumulate_vector(&mut o_weights, &constraint.WO, *z); + } + let r_weights = r_weights * &y_inv; + + let delta = r_weights.inner_product(l_weights.0.iter()); + + let mut T_before_ni = Vec::with_capacity(ni); + let mut T_after_ni = Vec::with_capacity(t_poly_len - ni - 1); + for _ in 0 .. ni { + T_before_ni.push(transcript.read_point::().map_err(|_| AcError::IncompleteProof)?); + } + for _ in 0 .. 
(t_poly_len - ni - 1) { + T_after_ni.push(transcript.read_point::().map_err(|_| AcError::IncompleteProof)?); + } + let x: ScalarVector = ScalarVector::powers(transcript.challenge(), t_poly_len); + + let tau_x = transcript.read_scalar::().map_err(|_| AcError::IncompleteProof)?; + let u = transcript.read_scalar::().map_err(|_| AcError::IncompleteProof)?; + let t_caret = transcript.read_scalar::().map_err(|_| AcError::IncompleteProof)?; + + // Lines 88-90, modified per Generalized Bulletproofs as needed w.r.t. t + { + let verifier_weight = C::F::random(&mut *rng); + // lhs of the equation, weighted to enable batch verification + verifier.g += t_caret * verifier_weight; + verifier.h += tau_x * verifier_weight; + + let mut V_weights = ScalarVector::new(self.V.len()); + for (constraint, z) in self.constraints.iter().zip(&z.0) { + // We use `-z`, not `z`, as we write our constraint as `... + WV V = 0` not `= WV V + ..` + // This means we need to subtract `WV V` from both sides, which we accomplish here + accumulate_vector(&mut V_weights, &constraint.WV, -*z); + } + V_weights = V_weights * x[ni]; + + // rhs of the equation, negated to cause a sum to zero + // `delta - z...`, instead of `delta + z...`, is done for the same reason as in the above WV + // matrix transform + verifier.g -= verifier_weight * + x[ni] * + (delta - z.inner_product(self.constraints.iter().map(|constraint| &constraint.c))); + for pair in V_weights.0.into_iter().zip(self.V.0) { + verifier.additional.push((-verifier_weight * pair.0, pair.1)); + } + for (i, T) in T_before_ni.into_iter().enumerate() { + verifier.additional.push((-verifier_weight * x[i], T)); + } + for (i, T) in T_after_ni.into_iter().enumerate() { + verifier.additional.push((-verifier_weight * x[ni + 1 + i], T)); + } + } + + let verifier_weight = C::F::random(&mut *rng); + // Multiply `x` by `verifier_weight` as this effects `verifier_weight` onto most scalars and + // saves a notable amount of operations + let x = x * verifier_weight; + + // This following block effectively calculates P, within the multiexp + { + verifier.additional.push((x[ilr], AI)); + verifier.additional.push((x[io], AO)); + // h' ** y is equivalent to h as h' is h ** y_inv + let mut log2_n = 0; + while (1 << log2_n) != n { + log2_n += 1; + } + verifier.h_sum[log2_n] -= verifier_weight; + verifier.additional.push((x[is], S)); + + // Lines 85-87 calculate WL, WR, WO + // We preserve them in terms of g_bold and h_bold for a more efficient multiexp + let mut h_bold_scalars = l_weights * x[jlr]; + for (i, wr) in (r_weights * x[jlr]).0.into_iter().enumerate() { + verifier.g_bold[i] += wr; + } + // WO is weighted by x**jo where jo == 0, hence why we can ignore the x term + h_bold_scalars = h_bold_scalars + &(o_weights * verifier_weight); + + let mut cg_weights = Vec::with_capacity(self.C.len()); + let mut ch_weights = Vec::with_capacity(self.C.len()); + for i in 0 .. 
self.C.len() { + let mut cg = ScalarVector::new(n); + let mut ch = ScalarVector::new(n); + for (constraint, z) in self.constraints.iter().zip(&z.0) { + if let Some(WCG) = constraint.WCG.get(i) { + accumulate_vector(&mut cg, WCG, *z); + } + if let Some(WCH) = constraint.WCH.get(i) { + accumulate_vector(&mut ch, WCH, *z); + } + } + cg_weights.push(cg); + ch_weights.push(ch); + } + + // Push the terms for C, which increment from 0, and the terms for WC, which decrement from + // n' + for (i, (C, (WCG, WCH))) in + self.C.0.into_iter().zip(cg_weights.into_iter().zip(ch_weights)).enumerate() + { + let i = i + 1; + let j = ni - i; + verifier.additional.push((x[i], C)); + h_bold_scalars = h_bold_scalars + &(WCG * x[j]); + for (i, scalar) in (WCH * &y_inv * x[j]).0.into_iter().enumerate() { + verifier.g_bold[i] += scalar; + } + } + + // All terms for h_bold here have actually been for h_bold', h_bold * y_inv + h_bold_scalars = h_bold_scalars * &y_inv; + for (i, scalar) in h_bold_scalars.0.into_iter().enumerate() { + verifier.h_bold[i] += scalar; + } + + // Remove u * h from P + verifier.h -= verifier_weight * u; + } + + // Prove for lines 88, 92 with an Inner-Product statement + // This inlines Protocol 1, as our IpStatement implements Protocol 2 + let ip_x = transcript.challenge(); + // P is amended with this additional term + verifier.g += verifier_weight * ip_x * t_caret; + IpStatement::new(self.generators, y_inv, ip_x, P::Verifier { verifier_weight }) + .unwrap() + .verify(verifier, transcript) + .map_err(AcError::Ip)?; + + Ok(()) + } +} diff --git a/crypto/evrf/generalized-bulletproofs/src/inner_product.rs b/crypto/evrf/generalized-bulletproofs/src/inner_product.rs new file mode 100644 index 00000000..ae3ec876 --- /dev/null +++ b/crypto/evrf/generalized-bulletproofs/src/inner_product.rs @@ -0,0 +1,360 @@ +use multiexp::multiexp_vartime; +use ciphersuite::{group::ff::Field, Ciphersuite}; + +#[rustfmt::skip] +use crate::{ScalarVector, PointVector, ProofGenerators, BatchVerifier, transcript::*, padded_pow_of_2}; + +/// An error from proving/verifying Inner-Product statements. +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub enum IpError { + /// An incorrect amount of generators was provided. + IncorrectAmountOfGenerators, + /// The witness was inconsistent to the statement. + /// + /// Sanity checks on the witness are always performed. If the library is compiled with debug + /// assertions on, whether or not this witness actually opens `P` is checked. + InconsistentWitness, + /// The proof wasn't complete and the necessary values could not be read from the transcript. + IncompleteProof, +} + +#[derive(Clone, PartialEq, Eq, Debug)] +pub(crate) enum P { + Verifier { verifier_weight: C::F }, + Prover(C::G), +} + +/// The Bulletproofs Inner-Product statement. +/// +/// This is for usage with Protocol 2 from the Bulletproofs paper. +#[derive(Clone, Debug)] +pub(crate) struct IpStatement<'a, C: Ciphersuite> { + generators: ProofGenerators<'a, C>, + // Weights for h_bold + h_bold_weights: ScalarVector, + // u as the discrete logarithm of G + u: C::F, + // P + P: P, +} + +/// The witness for the Bulletproofs Inner-Product statement. +#[derive(Clone, Debug)] +pub(crate) struct IpWitness { + // a + a: ScalarVector, + // b + b: ScalarVector, +} + +impl IpWitness { + /// Construct a new witness for an Inner-Product statement. + /// + /// If the witness is less than a power of two, it is padded to the nearest power of two. 
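+  /// For example, witnesses of length 5 are padded to length 8 by appending zero scalars.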
+  ///
+  /// This function returns None if the lengths of a, b are mismatched or either are empty.
+  pub(crate) fn new(mut a: ScalarVector<C::F>, mut b: ScalarVector<C::F>) -> Option<Self> {
+    if a.0.is_empty() || (a.len() != b.len()) {
+      None?;
+    }
+
+    // Pad to the nearest power of 2
+    let missing = padded_pow_of_2(a.len()) - a.len();
+    a.0.reserve(missing);
+    b.0.reserve(missing);
+    for _ in 0 .. missing {
+      a.0.push(C::F::ZERO);
+      b.0.push(C::F::ZERO);
+    }
+
+    Some(Self { a, b })
+  }
+}
+
+impl<'a, C: Ciphersuite> IpStatement<'a, C> {
+  /// Create a new Inner-Product statement.
+  ///
+  /// This does not perform any transcripting of any variables within this statement. They must be
+  /// deterministic to the existing transcript.
+  pub(crate) fn new(
+    generators: ProofGenerators<'a, C>,
+    h_bold_weights: ScalarVector<C::F>,
+    u: C::F,
+    P: P<C>,
+  ) -> Result<Self, IpError> {
+    if generators.h_bold_slice().len() != h_bold_weights.len() {
+      Err(IpError::IncorrectAmountOfGenerators)?
+    }
+    Ok(Self { generators, h_bold_weights, u, P })
+  }
+
+  /// Prove for this Inner-Product statement.
+  ///
+  /// Returns an error if this statement couldn't be proven for (such as if the witness isn't
+  /// consistent).
+  pub(crate) fn prove(
+    self,
+    transcript: &mut Transcript,
+    witness: IpWitness<C>,
+  ) -> Result<(), IpError> {
+    let (mut g_bold, mut h_bold, u, mut P, mut a, mut b) = {
+      let IpStatement { generators, h_bold_weights, u, P } = self;
+      let u = generators.g() * u;
+
+      // Ensure we have the exact amount of generators
+      if generators.g_bold_slice().len() != witness.a.len() {
+        Err(IpError::IncorrectAmountOfGenerators)?;
+      }
+      // Acquire a local copy of the generators
+      let g_bold = PointVector::<C>(generators.g_bold_slice().to_vec());
+      let h_bold = PointVector::<C>(generators.h_bold_slice().to_vec()).mul_vec(&h_bold_weights);
+
+      let IpWitness { a, b } = witness;
+
+      let P = match P {
+        P::Prover(point) => point,
+        P::Verifier { ..
} => { + panic!("prove called with a P specification which was for the verifier") + } + }; + + // Ensure this witness actually opens this statement + #[cfg(debug_assertions)] + { + let ag = a.0.iter().cloned().zip(g_bold.0.iter().cloned()); + let bh = b.0.iter().cloned().zip(h_bold.0.iter().cloned()); + let cu = core::iter::once((a.inner_product(b.0.iter()), u)); + if P != multiexp_vartime(&ag.chain(bh).chain(cu).collect::>()) { + Err(IpError::InconsistentWitness)?; + } + } + + (g_bold, h_bold, u, P, a, b) + }; + + // `else: (n > 1)` case, lines 18-35 of the Bulletproofs paper + // This interprets `g_bold.len()` as `n` + while g_bold.len() > 1 { + // Split a, b, g_bold, h_bold as needed for lines 20-24 + let (a1, a2) = a.clone().split(); + let (b1, b2) = b.clone().split(); + + let (g_bold1, g_bold2) = g_bold.split(); + let (h_bold1, h_bold2) = h_bold.split(); + + let n_hat = g_bold1.len(); + + // Sanity + debug_assert_eq!(a1.len(), n_hat); + debug_assert_eq!(a2.len(), n_hat); + debug_assert_eq!(b1.len(), n_hat); + debug_assert_eq!(b2.len(), n_hat); + debug_assert_eq!(g_bold1.len(), n_hat); + debug_assert_eq!(g_bold2.len(), n_hat); + debug_assert_eq!(h_bold1.len(), n_hat); + debug_assert_eq!(h_bold2.len(), n_hat); + + // cl, cr, lines 21-22 + let cl = a1.inner_product(b2.0.iter()); + let cr = a2.inner_product(b1.0.iter()); + + let L = { + let mut L_terms = Vec::with_capacity(1 + (2 * g_bold1.len())); + for (a, g) in a1.0.iter().zip(g_bold2.0.iter()) { + L_terms.push((*a, *g)); + } + for (b, h) in b2.0.iter().zip(h_bold1.0.iter()) { + L_terms.push((*b, *h)); + } + L_terms.push((cl, u)); + // Uses vartime since this isn't a ZK proof + multiexp_vartime(&L_terms) + }; + + let R = { + let mut R_terms = Vec::with_capacity(1 + (2 * g_bold1.len())); + for (a, g) in a2.0.iter().zip(g_bold1.0.iter()) { + R_terms.push((*a, *g)); + } + for (b, h) in b1.0.iter().zip(h_bold2.0.iter()) { + R_terms.push((*b, *h)); + } + R_terms.push((cr, u)); + multiexp_vartime(&R_terms) + }; + + // Now that we've calculate L, R, transcript them to receive x (26-27) + transcript.push_point(L); + transcript.push_point(R); + let x: C::F = transcript.challenge(); + let x_inv = x.invert().unwrap(); + + // The prover and verifier now calculate the following (28-31) + g_bold = PointVector(Vec::with_capacity(g_bold1.len())); + for (a, b) in g_bold1.0.into_iter().zip(g_bold2.0.into_iter()) { + g_bold.0.push(multiexp_vartime(&[(x_inv, a), (x, b)])); + } + h_bold = PointVector(Vec::with_capacity(h_bold1.len())); + for (a, b) in h_bold1.0.into_iter().zip(h_bold2.0.into_iter()) { + h_bold.0.push(multiexp_vartime(&[(x, a), (x_inv, b)])); + } + P = (L * (x * x)) + P + (R * (x_inv * x_inv)); + + // 32-34 + a = (a1 * x) + &(a2 * x_inv); + b = (b1 * x_inv) + &(b2 * x); + } + + // `if n = 1` case from line 14-17 + + // Sanity + debug_assert_eq!(g_bold.len(), 1); + debug_assert_eq!(h_bold.len(), 1); + debug_assert_eq!(a.len(), 1); + debug_assert_eq!(b.len(), 1); + + // We simply send a/b + transcript.push_scalar(a[0]); + transcript.push_scalar(b[0]); + Ok(()) + } + + /* + This has room for optimization worth investigating further. It currently takes + an iterative approach. It can be optimized further via divide and conquer. + + Assume there are 4 challenges. + + Iterative approach (current): + 1. Do the optimal multiplications across challenge column 0 and 1. + 2. Do the optimal multiplications across that result and column 2. + 3. Do the optimal multiplications across that result and column 3. 
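+
+  (Each challenge column `i` holds the pair `(x_i, x_i**-1)`. The "optimal multiplications"
+  across two columns compute every product of one entry per column, reusing prior products,
+  which is exactly the table `challenge_products` returns.)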
+ + Divide and conquer (worth investigating further): + 1. Do the optimal multiplications across challenge column 0 and 1. + 2. Do the optimal multiplications across challenge column 2 and 3. + 3. Multiply both results together. + + When there are 4 challenges (n=16), the iterative approach does 28 multiplications + versus divide and conquer's 24. + */ + fn challenge_products(challenges: &[(C::F, C::F)]) -> Vec { + let mut products = vec![C::F::ONE; 1 << challenges.len()]; + + if !challenges.is_empty() { + products[0] = challenges[0].1; + products[1] = challenges[0].0; + + for (j, challenge) in challenges.iter().enumerate().skip(1) { + let mut slots = (1 << (j + 1)) - 1; + while slots > 0 { + products[slots] = products[slots / 2] * challenge.0; + products[slots - 1] = products[slots / 2] * challenge.1; + + slots = slots.saturating_sub(2); + } + } + + // Sanity check since if the above failed to populate, it'd be critical + for product in &products { + debug_assert!(!bool::from(product.is_zero())); + } + } + + products + } + + /// Queue an Inner-Product proof for batch verification. + /// + /// This will return Err if there is an error. This will return Ok if the proof was successfully + /// queued for batch verification. The caller is required to verify the batch in order to ensure + /// the proof is actually correct. + pub(crate) fn verify( + self, + verifier: &mut BatchVerifier, + transcript: &mut VerifierTranscript, + ) -> Result<(), IpError> { + let IpStatement { generators, h_bold_weights, u, P } = self; + + // Calculate the discrete log w.r.t. 2 for the amount of generators present + let mut lr_len = 0; + while (1 << lr_len) < generators.g_bold_slice().len() { + lr_len += 1; + } + + let weight = match P { + P::Prover(_) => panic!("prove called with a P specification which was for the prover"), + P::Verifier { verifier_weight } => verifier_weight, + }; + + // Again, we start with the `else: (n > 1)` case + + // We need x, x_inv per lines 25-27 for lines 28-31 + let mut L = Vec::with_capacity(lr_len); + let mut R = Vec::with_capacity(lr_len); + let mut xs: Vec = Vec::with_capacity(lr_len); + for _ in 0 .. 
lr_len { + L.push(transcript.read_point::().map_err(|_| IpError::IncompleteProof)?); + R.push(transcript.read_point::().map_err(|_| IpError::IncompleteProof)?); + xs.push(transcript.challenge()); + } + + // We calculate their inverse in batch + let mut x_invs = xs.clone(); + { + let mut scratch = vec![C::F::ZERO; x_invs.len()]; + ciphersuite::group::ff::BatchInverter::invert_with_external_scratch( + &mut x_invs, + &mut scratch, + ); + } + + // Now, with x and x_inv, we need to calculate g_bold', h_bold', P' + // + // For the sake of performance, we solely want to calculate all of these in terms of scalings + // for g_bold, h_bold, P, and don't want to actually perform intermediary scalings of the + // points + // + // L and R are easy, as it's simply x**2, x**-2 + // + // For the series of g_bold, h_bold, we use the `challenge_products` function + // For how that works, please see its own documentation + let product_cache = { + let mut challenges = Vec::with_capacity(lr_len); + + let x_iter = xs.into_iter().zip(x_invs); + let lr_iter = L.into_iter().zip(R); + for ((x, x_inv), (L, R)) in x_iter.zip(lr_iter) { + challenges.push((x, x_inv)); + verifier.additional.push((weight * x.square(), L)); + verifier.additional.push((weight * x_inv.square(), R)); + } + + Self::challenge_products(&challenges) + }; + + // And now for the `if n = 1` case + let a = transcript.read_scalar::().map_err(|_| IpError::IncompleteProof)?; + let b = transcript.read_scalar::().map_err(|_| IpError::IncompleteProof)?; + let c = a * b; + + // The multiexp of these terms equate to the final permutation of P + // We now add terms for a * g_bold' + b * h_bold' b + c * u, with the scalars negative such + // that the terms sum to 0 for an honest prover + + // The g_bold * a term case from line 16 + #[allow(clippy::needless_range_loop)] + for i in 0 .. generators.g_bold_slice().len() { + verifier.g_bold[i] -= weight * product_cache[i] * a; + } + // The h_bold * b term case from line 16 + for i in 0 .. generators.h_bold_slice().len() { + verifier.h_bold[i] -= + weight * product_cache[product_cache.len() - 1 - i] * b * h_bold_weights[i]; + } + // The c * u term case from line 16 + verifier.g -= weight * c * u; + + Ok(()) + } +} diff --git a/crypto/evrf/generalized-bulletproofs/src/lib.rs b/crypto/evrf/generalized-bulletproofs/src/lib.rs new file mode 100644 index 00000000..dc88e68c --- /dev/null +++ b/crypto/evrf/generalized-bulletproofs/src/lib.rs @@ -0,0 +1,328 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] +#![allow(non_snake_case)] + +use core::fmt; +use std::collections::HashSet; + +use zeroize::Zeroize; + +use multiexp::{multiexp, multiexp_vartime}; +use ciphersuite::{ + group::{ff::Field, Group, GroupEncoding}, + Ciphersuite, +}; + +mod scalar_vector; +pub use scalar_vector::ScalarVector; +mod point_vector; +pub use point_vector::PointVector; + +/// The transcript formats. +pub mod transcript; + +pub(crate) mod inner_product; + +pub(crate) mod lincomb; + +/// The arithmetic circuit proof. +pub mod arithmetic_circuit_proof; + +/// Functionlity useful when testing. +#[cfg(any(test, feature = "tests"))] +pub mod tests; + +/// Calculate the nearest power of two greater than or equivalent to the argument. +pub(crate) fn padded_pow_of_2(i: usize) -> usize { + let mut next_pow_of_2 = 1; + while next_pow_of_2 < i { + next_pow_of_2 <<= 1; + } + next_pow_of_2 +} + +/// An error from working with generators. 
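+///
+/// This is returned by `Generators::new` upon construction of an invalid set of generators.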
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
+pub enum GeneratorsError {
+  /// The provided list of generators for `g` (bold) was empty.
+  GBoldEmpty,
+  /// The provided list of generators for `h` (bold) did not match `g` (bold) in length.
+  DifferingGhBoldLengths,
+  /// The amount of provided generators was not a power of two.
+  NotPowerOfTwo,
+  /// A generator was used multiple times.
+  DuplicatedGenerator,
+}
+
+/// A full set of generators.
+#[derive(Clone)]
+pub struct Generators<C: Ciphersuite> {
+  g: C::G,
+  h: C::G,
+
+  g_bold: Vec<C::G>,
+  h_bold: Vec<C::G>,
+  h_sum: Vec<C::G>,
+}
+
+/// A batch verifier of proofs.
+#[must_use]
+#[derive(Clone)]
+pub struct BatchVerifier<C: Ciphersuite> {
+  g: C::F,
+  h: C::F,
+
+  g_bold: Vec<C::F>,
+  h_bold: Vec<C::F>,
+  h_sum: Vec<C::F>,
+
+  additional: Vec<(C::F, C::G)>,
+}
+
+impl<C: Ciphersuite> fmt::Debug for Generators<C> {
+  fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
+    let g = self.g.to_bytes();
+    let g: &[u8] = g.as_ref();
+
+    let h = self.h.to_bytes();
+    let h: &[u8] = h.as_ref();
+
+    fmt.debug_struct("Generators").field("g", &g).field("h", &h).finish_non_exhaustive()
+  }
+}
+
+/// The generators for a specific proof.
+///
+/// This potentially has been reduced in size from the original set of generators, as beneficial
+/// to performance.
+#[derive(Copy, Clone)]
+pub struct ProofGenerators<'a, C: Ciphersuite> {
+  g: &'a C::G,
+  h: &'a C::G,
+
+  g_bold: &'a [C::G],
+  h_bold: &'a [C::G],
+}
+
+impl<C: Ciphersuite> fmt::Debug for ProofGenerators<'_, C> {
+  fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
+    let g = self.g.to_bytes();
+    let g: &[u8] = g.as_ref();
+
+    let h = self.h.to_bytes();
+    let h: &[u8] = h.as_ref();
+
+    fmt.debug_struct("ProofGenerators").field("g", &g).field("h", &h).finish_non_exhaustive()
+  }
+}
+
+impl<C: Ciphersuite> Generators<C> {
+  /// Construct an instance of Generators for usage with Bulletproofs.
+  pub fn new(
+    g: C::G,
+    h: C::G,
+    g_bold: Vec<C::G>,
+    h_bold: Vec<C::G>,
+  ) -> Result<Self, GeneratorsError> {
+    if g_bold.is_empty() {
+      Err(GeneratorsError::GBoldEmpty)?;
+    }
+    if g_bold.len() != h_bold.len() {
+      Err(GeneratorsError::DifferingGhBoldLengths)?;
+    }
+    if padded_pow_of_2(g_bold.len()) != g_bold.len() {
+      Err(GeneratorsError::NotPowerOfTwo)?;
+    }
+
+    let mut set = HashSet::new();
+    let mut add_generator = |generator: &C::G| {
+      assert!(!bool::from(generator.is_identity()));
+      let bytes = generator.to_bytes();
+      !set.insert(bytes.as_ref().to_vec())
+    };
+
+    assert!(!add_generator(&g), "g was prior present in empty set");
+    if add_generator(&h) {
+      Err(GeneratorsError::DuplicatedGenerator)?;
+    }
+    for g in &g_bold {
+      if add_generator(g) {
+        Err(GeneratorsError::DuplicatedGenerator)?;
+      }
+    }
+    for h in &h_bold {
+      if add_generator(h) {
+        Err(GeneratorsError::DuplicatedGenerator)?;
+      }
+    }
+
+    let mut running_h_sum = C::G::identity();
+    let mut h_sum = vec![];
+    let mut next_pow_of_2 = 1;
+    for (i, h) in h_bold.iter().enumerate() {
+      running_h_sum += h;
+      if (i + 1) == next_pow_of_2 {
+        h_sum.push(running_h_sum);
+        next_pow_of_2 *= 2;
+      }
+    }
+
+    Ok(Generators { g, h, g_bold, h_bold, h_sum })
+  }
+
+  /// Create a BatchVerifier for proofs which use these generators.
+  pub fn batch_verifier(&self) -> BatchVerifier<C> {
+    BatchVerifier {
+      g: C::F::ZERO,
+      h: C::F::ZERO,
+
+      g_bold: vec![C::F::ZERO; self.g_bold.len()],
+      h_bold: vec![C::F::ZERO; self.h_bold.len()],
+      h_sum: vec![C::F::ZERO; self.h_sum.len()],
+
+      additional: Vec::with_capacity(128),
+    }
+  }
+
+  /// Verify all proofs queued for batch verification in this BatchVerifier.
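+  ///
+  /// This evaluates a single multiexponentiation accumulating every queued proof, returning
+  /// true only if it sums to the identity (i.e. only if all queued proofs are valid).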
+ #[must_use] + pub fn verify(&self, verifier: BatchVerifier) -> bool { + multiexp_vartime( + &[(verifier.g, self.g), (verifier.h, self.h)] + .into_iter() + .chain(verifier.g_bold.into_iter().zip(self.g_bold.iter().cloned())) + .chain(verifier.h_bold.into_iter().zip(self.h_bold.iter().cloned())) + .chain(verifier.h_sum.into_iter().zip(self.h_sum.iter().cloned())) + .chain(verifier.additional) + .collect::>(), + ) + .is_identity() + .into() + } + + /// The `g` generator. + pub fn g(&self) -> C::G { + self.g + } + + /// The `h` generator. + pub fn h(&self) -> C::G { + self.h + } + + /// A slice to view the `g` (bold) generators. + pub fn g_bold_slice(&self) -> &[C::G] { + &self.g_bold + } + + /// A slice to view the `h` (bold) generators. + pub fn h_bold_slice(&self) -> &[C::G] { + &self.h_bold + } + + /// Reduce a set of generators to the quantity necessary to support a certain amount of + /// in-circuit multiplications/terms in a Pedersen vector commitment. + /// + /// Returns None if reducing to 0 or if the generators reduced are insufficient to provide this + /// many generators. + pub fn reduce(&self, generators: usize) -> Option> { + if generators == 0 { + None?; + } + + // Round to the nearest power of 2 + let generators = padded_pow_of_2(generators); + if generators > self.g_bold.len() { + None?; + } + + Some(ProofGenerators { + g: &self.g, + h: &self.h, + + g_bold: &self.g_bold[.. generators], + h_bold: &self.h_bold[.. generators], + }) + } +} + +impl<'a, C: Ciphersuite> ProofGenerators<'a, C> { + pub(crate) fn len(&self) -> usize { + self.g_bold.len() + } + + pub(crate) fn g(&self) -> C::G { + *self.g + } + + pub(crate) fn h(&self) -> C::G { + *self.h + } + + pub(crate) fn g_bold(&self, i: usize) -> C::G { + self.g_bold[i] + } + + pub(crate) fn h_bold(&self, i: usize) -> C::G { + self.h_bold[i] + } + + pub(crate) fn g_bold_slice(&self) -> &[C::G] { + self.g_bold + } + + pub(crate) fn h_bold_slice(&self) -> &[C::G] { + self.h_bold + } +} + +/// The opening of a Pedersen commitment. +#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)] +pub struct PedersenCommitment { + /// The value committed to. + pub value: C::F, + /// The mask blinding the value committed to. + pub mask: C::F, +} + +impl PedersenCommitment { + /// Commit to this value, yielding the Pedersen commitment. + pub fn commit(&self, g: C::G, h: C::G) -> C::G { + multiexp(&[(self.value, g), (self.mask, h)]) + } +} + +/// The opening of a Pedersen vector commitment. +#[derive(Clone, PartialEq, Eq, Debug, Zeroize)] +pub struct PedersenVectorCommitment { + /// The values committed to across the `g` (bold) generators. + pub g_values: ScalarVector, + /// The values committed to across the `h` (bold) generators. + pub h_values: ScalarVector, + /// The mask blinding the values committed to. + pub mask: C::F, +} + +impl PedersenVectorCommitment { + /// Commit to the vectors of values. + /// + /// This function returns None if the amount of generators is less than the amount of values + /// within the relevant vector. 
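+  ///
+  /// The returned commitment is
+  /// `(mask * h) + sum(g_values[i] * g_bold[i]) + sum(h_values[i] * h_bold[i])`.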
+ pub fn commit(&self, g_bold: &[C::G], h_bold: &[C::G], h: C::G) -> Option { + if (g_bold.len() < self.g_values.len()) || (h_bold.len() < self.h_values.len()) { + None?; + }; + + let mut terms = vec![(self.mask, h)]; + for pair in self.g_values.0.iter().cloned().zip(g_bold.iter().cloned()) { + terms.push(pair); + } + for pair in self.h_values.0.iter().cloned().zip(h_bold.iter().cloned()) { + terms.push(pair); + } + let res = multiexp(&terms); + terms.zeroize(); + Some(res) + } +} diff --git a/crypto/evrf/generalized-bulletproofs/src/lincomb.rs b/crypto/evrf/generalized-bulletproofs/src/lincomb.rs new file mode 100644 index 00000000..291b3b0b --- /dev/null +++ b/crypto/evrf/generalized-bulletproofs/src/lincomb.rs @@ -0,0 +1,265 @@ +use core::ops::{Add, Sub, Mul}; + +use zeroize::Zeroize; + +use ciphersuite::group::ff::PrimeField; + +use crate::ScalarVector; + +/// A reference to a variable usable within linear combinations. +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +#[allow(non_camel_case_types)] +pub enum Variable { + /// A variable within the left vector of vectors multiplied against each other. + aL(usize), + /// A variable within the right vector of vectors multiplied against each other. + aR(usize), + /// A variable within the output vector of the left vector multiplied by the right vector. + aO(usize), + /// A variable within a Pedersen vector commitment, committed to with a generator from `g` (bold). + CG { + /// The commitment being indexed. + commitment: usize, + /// The index of the variable. + index: usize, + }, + /// A variable within a Pedersen vector commitment, committed to with a generator from `h` (bold). + CH { + /// The commitment being indexed. + commitment: usize, + /// The index of the variable. + index: usize, + }, + /// A variable within a Pedersen commitment. + V(usize), +} + +// Does a NOP as there shouldn't be anything critical here +impl Zeroize for Variable { + fn zeroize(&mut self) {} +} + +/// A linear combination. +/// +/// Specifically, `WL aL + WR aR + WO aO + WCG C_G + WCH C_H + WV V + c`. 
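+///
+/// Statements bind these linear combinations to evaluate to 0. For example, constraining the
+/// output of the first multiplication to equal the value of the first Pedersen commitment may
+/// be written as `LinComb::empty().term(F::ONE, Variable::aO(0)).term(-F::ONE, Variable::V(0))`.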
+#[derive(Clone, PartialEq, Eq, Debug, Zeroize)] +#[must_use] +pub struct LinComb { + pub(crate) highest_a_index: Option, + pub(crate) highest_c_index: Option, + pub(crate) highest_v_index: Option, + + // Sparse representation of WL/WR/WO + pub(crate) WL: Vec<(usize, F)>, + pub(crate) WR: Vec<(usize, F)>, + pub(crate) WO: Vec<(usize, F)>, + // Sparse representation once within a commitment + pub(crate) WCG: Vec>, + pub(crate) WCH: Vec>, + // Sparse representation of WV + pub(crate) WV: Vec<(usize, F)>, + pub(crate) c: F, +} + +impl From for LinComb { + fn from(constrainable: Variable) -> LinComb { + LinComb::empty().term(F::ONE, constrainable) + } +} + +impl Add<&LinComb> for LinComb { + type Output = Self; + + fn add(mut self, constraint: &Self) -> Self { + self.highest_a_index = self.highest_a_index.max(constraint.highest_a_index); + self.highest_c_index = self.highest_c_index.max(constraint.highest_c_index); + self.highest_v_index = self.highest_v_index.max(constraint.highest_v_index); + + self.WL.extend(&constraint.WL); + self.WR.extend(&constraint.WR); + self.WO.extend(&constraint.WO); + while self.WCG.len() < constraint.WCG.len() { + self.WCG.push(vec![]); + } + while self.WCH.len() < constraint.WCH.len() { + self.WCH.push(vec![]); + } + for (sWC, cWC) in self.WCG.iter_mut().zip(&constraint.WCG) { + sWC.extend(cWC); + } + for (sWC, cWC) in self.WCH.iter_mut().zip(&constraint.WCH) { + sWC.extend(cWC); + } + self.WV.extend(&constraint.WV); + self.c += constraint.c; + self + } +} + +impl Sub<&LinComb> for LinComb { + type Output = Self; + + fn sub(mut self, constraint: &Self) -> Self { + self.highest_a_index = self.highest_a_index.max(constraint.highest_a_index); + self.highest_c_index = self.highest_c_index.max(constraint.highest_c_index); + self.highest_v_index = self.highest_v_index.max(constraint.highest_v_index); + + self.WL.extend(constraint.WL.iter().map(|(i, weight)| (*i, -*weight))); + self.WR.extend(constraint.WR.iter().map(|(i, weight)| (*i, -*weight))); + self.WO.extend(constraint.WO.iter().map(|(i, weight)| (*i, -*weight))); + while self.WCG.len() < constraint.WCG.len() { + self.WCG.push(vec![]); + } + while self.WCH.len() < constraint.WCH.len() { + self.WCH.push(vec![]); + } + for (sWC, cWC) in self.WCG.iter_mut().zip(&constraint.WCG) { + sWC.extend(cWC.iter().map(|(i, weight)| (*i, -*weight))); + } + for (sWC, cWC) in self.WCH.iter_mut().zip(&constraint.WCH) { + sWC.extend(cWC.iter().map(|(i, weight)| (*i, -*weight))); + } + self.WV.extend(constraint.WV.iter().map(|(i, weight)| (*i, -*weight))); + self.c -= constraint.c; + self + } +} + +impl Mul for LinComb { + type Output = Self; + + fn mul(mut self, scalar: F) -> Self { + for (_, weight) in self.WL.iter_mut() { + *weight *= scalar; + } + for (_, weight) in self.WR.iter_mut() { + *weight *= scalar; + } + for (_, weight) in self.WO.iter_mut() { + *weight *= scalar; + } + for WC in self.WCG.iter_mut() { + for (_, weight) in WC { + *weight *= scalar; + } + } + for WC in self.WCH.iter_mut() { + for (_, weight) in WC { + *weight *= scalar; + } + } + for (_, weight) in self.WV.iter_mut() { + *weight *= scalar; + } + self.c *= scalar; + self + } +} + +impl LinComb { + /// Create an empty linear combination. + pub fn empty() -> Self { + Self { + highest_a_index: None, + highest_c_index: None, + highest_v_index: None, + WL: vec![], + WR: vec![], + WO: vec![], + WCG: vec![], + WCH: vec![], + WV: vec![], + c: F::ZERO, + } + } + + /// Add a new instance of a term to this linear combination. 
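+  ///
+  /// Note the index within a `CG`/`CH` variable also raises the amount of terms required, as
+  /// the vector commitments share their length with the `a` vectors.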
+ pub fn term(mut self, scalar: F, constrainable: Variable) -> Self { + match constrainable { + Variable::aL(i) => { + self.highest_a_index = self.highest_a_index.max(Some(i)); + self.WL.push((i, scalar)) + } + Variable::aR(i) => { + self.highest_a_index = self.highest_a_index.max(Some(i)); + self.WR.push((i, scalar)) + } + Variable::aO(i) => { + self.highest_a_index = self.highest_a_index.max(Some(i)); + self.WO.push((i, scalar)) + } + Variable::CG { commitment: i, index: j } => { + self.highest_c_index = self.highest_c_index.max(Some(i)); + self.highest_a_index = self.highest_a_index.max(Some(j)); + while self.WCG.len() <= i { + self.WCG.push(vec![]); + } + self.WCG[i].push((j, scalar)) + } + Variable::CH { commitment: i, index: j } => { + self.highest_c_index = self.highest_c_index.max(Some(i)); + self.highest_a_index = self.highest_a_index.max(Some(j)); + while self.WCH.len() <= i { + self.WCH.push(vec![]); + } + self.WCH[i].push((j, scalar)) + } + Variable::V(i) => { + self.highest_v_index = self.highest_v_index.max(Some(i)); + self.WV.push((i, scalar)); + } + }; + self + } + + /// Add to the constant c. + pub fn constant(mut self, scalar: F) -> Self { + self.c += scalar; + self + } + + /// View the current weights for aL. + pub fn WL(&self) -> &[(usize, F)] { + &self.WL + } + + /// View the current weights for aR. + pub fn WR(&self) -> &[(usize, F)] { + &self.WR + } + + /// View the current weights for aO. + pub fn WO(&self) -> &[(usize, F)] { + &self.WO + } + + /// View the current weights for CG. + pub fn WCG(&self) -> &[Vec<(usize, F)>] { + &self.WCG + } + + /// View the current weights for CH. + pub fn WCH(&self) -> &[Vec<(usize, F)>] { + &self.WCH + } + + /// View the current weights for V. + pub fn WV(&self) -> &[(usize, F)] { + &self.WV + } + + /// View the current constant. + pub fn c(&self) -> F { + self.c + } +} + +pub(crate) fn accumulate_vector( + accumulator: &mut ScalarVector, + values: &[(usize, F)], + weight: F, +) { + for (i, coeff) in values { + accumulator[*i] += *coeff * weight; + } +} diff --git a/crypto/evrf/generalized-bulletproofs/src/point_vector.rs b/crypto/evrf/generalized-bulletproofs/src/point_vector.rs new file mode 100644 index 00000000..82fad519 --- /dev/null +++ b/crypto/evrf/generalized-bulletproofs/src/point_vector.rs @@ -0,0 +1,121 @@ +use core::ops::{Index, IndexMut}; + +use zeroize::Zeroize; + +use ciphersuite::Ciphersuite; + +#[cfg(test)] +use multiexp::multiexp; + +use crate::ScalarVector; + +/// A point vector struct with the functionality necessary for Bulletproofs. +/// +/// The math operations for this panic upon any invalid operation, such as if vectors of different +/// lengths are added. The full extent of invalidity is not fully defined. Only field access is +/// guaranteed to have a safe, public API. 
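+/// Internally, lengths are still sanity checked with `debug_assert`s before use.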
+#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
+pub struct PointVector<C: Ciphersuite>(pub(crate) Vec<C::G>);
+
+impl<C: Ciphersuite> Index<usize> for PointVector<C> {
+  type Output = C::G;
+  fn index(&self, index: usize) -> &C::G {
+    &self.0[index]
+  }
+}
+
+impl<C: Ciphersuite> IndexMut<usize> for PointVector<C> {
+  fn index_mut(&mut self, index: usize) -> &mut C::G {
+    &mut self.0[index]
+  }
+}
+
+impl<C: Ciphersuite> PointVector<C> {
+  /*
+  pub(crate) fn add(&self, point: impl AsRef<C::G>) -> Self {
+    let mut res = self.clone();
+    for val in res.0.iter_mut() {
+      *val += point.as_ref();
+    }
+    res
+  }
+  pub(crate) fn sub(&self, point: impl AsRef<C::G>) -> Self {
+    let mut res = self.clone();
+    for val in res.0.iter_mut() {
+      *val -= point.as_ref();
+    }
+    res
+  }
+
+  pub(crate) fn mul(&self, scalar: impl core::borrow::Borrow<C::F>) -> Self {
+    let mut res = self.clone();
+    for val in res.0.iter_mut() {
+      *val *= scalar.borrow();
+    }
+    res
+  }
+
+  pub(crate) fn add_vec(&self, vector: &Self) -> Self {
+    debug_assert_eq!(self.len(), vector.len());
+    let mut res = self.clone();
+    for (i, val) in res.0.iter_mut().enumerate() {
+      *val += vector.0[i];
+    }
+    res
+  }
+
+  pub(crate) fn sub_vec(&self, vector: &Self) -> Self {
+    debug_assert_eq!(self.len(), vector.len());
+    let mut res = self.clone();
+    for (i, val) in res.0.iter_mut().enumerate() {
+      *val -= vector.0[i];
+    }
+    res
+  }
+  */
+
+  pub(crate) fn mul_vec(&self, vector: &ScalarVector<C::F>) -> Self {
+    debug_assert_eq!(self.len(), vector.len());
+    let mut res = self.clone();
+    for (i, val) in res.0.iter_mut().enumerate() {
+      *val *= vector.0[i];
+    }
+    res
+  }
+
+  #[cfg(test)]
+  pub(crate) fn multiexp(&self, vector: &crate::ScalarVector<C::F>) -> C::G {
+    debug_assert_eq!(self.len(), vector.len());
+    let mut res = Vec::with_capacity(self.len());
+    for (point, scalar) in self.0.iter().copied().zip(vector.0.iter().copied()) {
+      res.push((scalar, point));
+    }
+    multiexp(&res)
+  }
+
+  /*
+  pub(crate) fn multiexp_vartime(&self, vector: &ScalarVector<C::F>) -> C::G {
+    debug_assert_eq!(self.len(), vector.len());
+    let mut res = Vec::with_capacity(self.len());
+    for (point, scalar) in self.0.iter().copied().zip(vector.0.iter().copied()) {
+      res.push((scalar, point));
+    }
+    multiexp_vartime(&res)
+  }
+
+  pub(crate) fn sum(&self) -> C::G {
+    self.0.iter().sum()
+  }
+  */
+
+  pub(crate) fn len(&self) -> usize {
+    self.0.len()
+  }
+
+  pub(crate) fn split(mut self) -> (Self, Self) {
+    assert!(self.len() > 1);
+    let r = self.0.split_off(self.0.len() / 2);
+    debug_assert_eq!(self.len(), r.len());
+    (self, PointVector(r))
+  }
+}
diff --git a/crypto/evrf/generalized-bulletproofs/src/scalar_vector.rs b/crypto/evrf/generalized-bulletproofs/src/scalar_vector.rs
new file mode 100644
index 00000000..a9cf4365
--- /dev/null
+++ b/crypto/evrf/generalized-bulletproofs/src/scalar_vector.rs
@@ -0,0 +1,146 @@
+use core::ops::{Index, IndexMut, Add, Sub, Mul};
+
+use zeroize::Zeroize;
+
+use ciphersuite::group::ff::PrimeField;
+
+/// A scalar vector struct with the functionality necessary for Bulletproofs.
+///
+/// The math operations for this panic upon any invalid operation, such as if vectors of different
+/// lengths are added. The full extent of invalidity is not fully defined. Only `new`, `len`,
+/// and field access is guaranteed to have a safe, public API.
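+///
+/// An illustrative sketch (not present in the original patch): given
+/// `a = ScalarVector::from(vec![x, y])` and `b = ScalarVector::from(vec![z, w])`, `a * &b` is
+/// the element-wise (Hadamard) product `[x * z, y * w]`, while multiplying vectors of differing
+/// lengths panics.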
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub struct ScalarVector<F: PrimeField>(pub(crate) Vec<F>);
+
+impl<F: PrimeField + Zeroize> Zeroize for ScalarVector<F> {
+  fn zeroize(&mut self) {
+    self.0.zeroize()
+  }
+}
+
+impl<F: PrimeField> Index<usize> for ScalarVector<F> {
+  type Output = F;
+  fn index(&self, index: usize) -> &F {
+    &self.0[index]
+  }
+}
+impl<F: PrimeField> IndexMut<usize> for ScalarVector<F> {
+  fn index_mut(&mut self, index: usize) -> &mut F {
+    &mut self.0[index]
+  }
+}
+
+impl<F: PrimeField> Add<F> for ScalarVector<F> {
+  type Output = ScalarVector<F>;
+  fn add(mut self, scalar: F) -> Self {
+    for s in &mut self.0 {
+      *s += scalar;
+    }
+    self
+  }
+}
+impl<F: PrimeField> Sub<F> for ScalarVector<F> {
+  type Output = ScalarVector<F>;
+  fn sub(mut self, scalar: F) -> Self {
+    for s in &mut self.0 {
+      *s -= scalar;
+    }
+    self
+  }
+}
+impl<F: PrimeField> Mul<F> for ScalarVector<F> {
+  type Output = ScalarVector<F>;
+  fn mul(mut self, scalar: F) -> Self {
+    for s in &mut self.0 {
+      *s *= scalar;
+    }
+    self
+  }
+}
+
+impl<F: PrimeField> Add<&ScalarVector<F>> for ScalarVector<F> {
+  type Output = ScalarVector<F>;
+  fn add(mut self, other: &ScalarVector<F>) -> Self {
+    assert_eq!(self.len(), other.len());
+    for (s, o) in self.0.iter_mut().zip(other.0.iter()) {
+      *s += o;
+    }
+    self
+  }
+}
+impl<F: PrimeField> Sub<&ScalarVector<F>> for ScalarVector<F> {
+  type Output = ScalarVector<F>;
+  fn sub(mut self, other: &ScalarVector<F>) -> Self {
+    assert_eq!(self.len(), other.len());
+    for (s, o) in self.0.iter_mut().zip(other.0.iter()) {
+      *s -= o;
+    }
+    self
+  }
+}
+impl<F: PrimeField> Mul<&ScalarVector<F>> for ScalarVector<F> {
+  type Output = ScalarVector<F>;
+  fn mul(mut self, other: &ScalarVector<F>) -> Self {
+    assert_eq!(self.len(), other.len());
+    for (s, o) in self.0.iter_mut().zip(other.0.iter()) {
+      *s *= o;
+    }
+    self
+  }
+}
+
+impl<F: PrimeField> ScalarVector<F> {
+  /// Create a new scalar vector, initialized with `len` zero scalars.
+  pub fn new(len: usize) -> Self {
+    ScalarVector(vec![F::ZERO; len])
+  }
+
+  pub(crate) fn powers(x: F, len: usize) -> Self {
+    assert!(len != 0);
+
+    let mut res = Vec::with_capacity(len);
+    res.push(F::ONE);
+    res.push(x);
+    for i in 2 .. len {
+      res.push(res[i - 1] * x);
+    }
+    res.truncate(len);
+    ScalarVector(res)
+  }
+
+  /// The length of this scalar vector.
+  #[allow(clippy::len_without_is_empty)]
+  pub fn len(&self) -> usize {
+    self.0.len()
+  }
+
+  /*
+  pub(crate) fn sum(mut self) -> F {
+    self.0.drain(..).sum()
+  }
+  */
+
+  pub(crate) fn inner_product<'a, V: Iterator<Item = &'a F>>(&self, vector: V) -> F {
+    let mut count = 0;
+    let mut res = F::ZERO;
+    for (a, b) in self.0.iter().zip(vector) {
+      res += *a * b;
+      count += 1;
+    }
+    debug_assert_eq!(self.len(), count);
+    res
+  }
+
+  pub(crate) fn split(mut self) -> (Self, Self) {
+    assert!(self.len() > 1);
+    let r = self.0.split_off(self.0.len() / 2);
+    debug_assert_eq!(self.len(), r.len());
+    (self, ScalarVector(r))
+  }
+}
+
+impl<F: PrimeField> From<Vec<F>> for ScalarVector<F> {
+  fn from(vec: Vec<F>) -> Self {
+    Self(vec)
+  }
+}
diff --git a/crypto/evrf/generalized-bulletproofs/src/tests/arithmetic_circuit_proof.rs b/crypto/evrf/generalized-bulletproofs/src/tests/arithmetic_circuit_proof.rs
new file mode 100644
index 00000000..588a6ae6
--- /dev/null
+++ b/crypto/evrf/generalized-bulletproofs/src/tests/arithmetic_circuit_proof.rs
@@ -0,0 +1,250 @@
+use rand_core::{RngCore, OsRng};
+
+use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
+
+use crate::{
+  ScalarVector, PedersenCommitment, PedersenVectorCommitment,
+  transcript::*,
+  arithmetic_circuit_proof::{
+    Variable, LinComb, ArithmeticCircuitStatement, ArithmeticCircuitWitness,
+  },
+  tests::generators,
+};
+
+#[test]
+fn test_zero_arithmetic_circuit() {
+  let generators = generators(1);
+
+  let value = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
+  let gamma = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
+  let commitment = (generators.g() * value) + (generators.h() * gamma);
+  let V = vec![commitment];
+
+  let aL = ScalarVector::<<Ristretto as Ciphersuite>::F>(vec![<Ristretto as Ciphersuite>::F::ZERO]);
+  let aR = aL.clone();
+
+  let mut transcript = Transcript::new([0; 32]);
+  let commitments = transcript.write_commitments(vec![], V);
+  let statement = ArithmeticCircuitStatement::<Ristretto>::new(
+    generators.reduce(1).unwrap(),
+    vec![],
+    commitments.clone(),
+  )
+  .unwrap();
+  let witness = ArithmeticCircuitWitness::<Ristretto>::new(
+    aL,
+    aR,
+    vec![],
+    vec![PedersenCommitment { value, mask: gamma }],
+  )
+  .unwrap();
+
+  let proof = {
+    statement.clone().prove(&mut OsRng, &mut transcript, witness).unwrap();
+    transcript.complete()
+  };
+  let mut verifier = generators.batch_verifier();
+
+  let mut transcript = VerifierTranscript::new([0; 32], &proof);
+  let verifier_commitments = transcript.read_commitments(0, 1);
+  assert_eq!(commitments, verifier_commitments.unwrap());
+  statement.verify(&mut OsRng, &mut verifier, &mut transcript).unwrap();
+  assert!(generators.verify(verifier));
+}
+
+#[test]
+fn test_vector_commitment_arithmetic_circuit() {
+  let generators = generators(2);
+  let reduced = generators.reduce(2).unwrap();
+
+  let v1 = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
+  let v2 = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
+  let v3 = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
+  let v4 = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
+  let gamma = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
+  let commitment = (reduced.g_bold(0) * v1) +
+    (reduced.g_bold(1) * v2) +
+    (reduced.h_bold(0) * v3) +
+    (reduced.h_bold(1) * v4) +
+    (generators.h() * gamma);
+  let V = vec![];
+  let C = vec![commitment];
+
+  let zero_vec =
+    || ScalarVector::<<Ristretto as Ciphersuite>::F>(vec![<Ristretto as Ciphersuite>::F::ZERO]);
+
+  let aL = zero_vec();
+  let aR = zero_vec();
+
+  let mut transcript = Transcript::new([0; 32]);
+  let commitments = transcript.write_commitments(C, V);
+  let statement = ArithmeticCircuitStatement::<Ristretto>::new(
+    reduced,
+    vec![LinComb::empty()
+      .term(<Ristretto as Ciphersuite>::F::ONE, Variable::CG { commitment: 0, index: 0 })
+      .term(<Ristretto as Ciphersuite>::F::from(2u64), Variable::CG { commitment: 0, index: 1 })
+      .term(<Ristretto as Ciphersuite>::F::from(3u64), Variable::CH
{ commitment: 0, index: 0 }) + .term(::F::from(4u64), Variable::CH { commitment: 0, index: 1 }) + .constant(-(v1 + (v2 + v2) + (v3 + v3 + v3) + (v4 + v4 + v4 + v4)))], + commitments.clone(), + ) + .unwrap(); + let witness = ArithmeticCircuitWitness::::new( + aL, + aR, + vec![PedersenVectorCommitment { + g_values: ScalarVector(vec![v1, v2]), + h_values: ScalarVector(vec![v3, v4]), + mask: gamma, + }], + vec![], + ) + .unwrap(); + + let proof = { + statement.clone().prove(&mut OsRng, &mut transcript, witness).unwrap(); + transcript.complete() + }; + let mut verifier = generators.batch_verifier(); + + let mut transcript = VerifierTranscript::new([0; 32], &proof); + let verifier_commmitments = transcript.read_commitments(1, 0); + assert_eq!(commitments, verifier_commmitments.unwrap()); + statement.verify(&mut OsRng, &mut verifier, &mut transcript).unwrap(); + assert!(generators.verify(verifier)); +} + +#[test] +fn fuzz_test_arithmetic_circuit() { + let generators = generators(32); + + for i in 0 .. 100 { + dbg!(i); + + // Create aL, aR, aO + let mut aL = ScalarVector(vec![]); + let mut aR = ScalarVector(vec![]); + while aL.len() < ((OsRng.next_u64() % 8) + 1).try_into().unwrap() { + aL.0.push(::F::random(&mut OsRng)); + } + while aR.len() < aL.len() { + aR.0.push(::F::random(&mut OsRng)); + } + let aO = aL.clone() * &aR; + + // Create C + let mut C = vec![]; + while C.len() < (OsRng.next_u64() % 16).try_into().unwrap() { + let mut g_values = ScalarVector(vec![]); + while g_values.0.len() < ((OsRng.next_u64() % 8) + 1).try_into().unwrap() { + g_values.0.push(::F::random(&mut OsRng)); + } + let mut h_values = ScalarVector(vec![]); + while h_values.0.len() < ((OsRng.next_u64() % 8) + 1).try_into().unwrap() { + h_values.0.push(::F::random(&mut OsRng)); + } + C.push(PedersenVectorCommitment { + g_values, + h_values, + mask: ::F::random(&mut OsRng), + }); + } + + // Create V + let mut V = vec![]; + while V.len() < (OsRng.next_u64() % 4).try_into().unwrap() { + V.push(PedersenCommitment { + value: ::F::random(&mut OsRng), + mask: ::F::random(&mut OsRng), + }); + } + + // Generate random constraints + let mut constraints = vec![]; + for _ in 0 .. (OsRng.next_u64() % 8).try_into().unwrap() { + let mut eval = ::F::ZERO; + let mut constraint = LinComb::empty(); + + for _ in 0 .. (OsRng.next_u64() % 4) { + let index = usize::try_from(OsRng.next_u64()).unwrap() % aL.len(); + let weight = ::F::random(&mut OsRng); + constraint = constraint.term(weight, Variable::aL(index)); + eval += weight * aL[index]; + } + + for _ in 0 .. (OsRng.next_u64() % 4) { + let index = usize::try_from(OsRng.next_u64()).unwrap() % aR.len(); + let weight = ::F::random(&mut OsRng); + constraint = constraint.term(weight, Variable::aR(index)); + eval += weight * aR[index]; + } + + for _ in 0 .. (OsRng.next_u64() % 4) { + let index = usize::try_from(OsRng.next_u64()).unwrap() % aO.len(); + let weight = ::F::random(&mut OsRng); + constraint = constraint.term(weight, Variable::aO(index)); + eval += weight * aO[index]; + } + + for (commitment, C) in C.iter().enumerate() { + for _ in 0 .. (OsRng.next_u64() % 4) { + let index = usize::try_from(OsRng.next_u64()).unwrap() % C.g_values.len(); + let weight = ::F::random(&mut OsRng); + constraint = constraint.term(weight, Variable::CG { commitment, index }); + eval += weight * C.g_values[index]; + } + + for _ in 0 .. 
(OsRng.next_u64() % 4) { + let index = usize::try_from(OsRng.next_u64()).unwrap() % C.h_values.len(); + let weight = ::F::random(&mut OsRng); + constraint = constraint.term(weight, Variable::CH { commitment, index }); + eval += weight * C.h_values[index]; + } + } + + if !V.is_empty() { + for _ in 0 .. (OsRng.next_u64() % 4) { + let index = usize::try_from(OsRng.next_u64()).unwrap() % V.len(); + let weight = ::F::random(&mut OsRng); + constraint = constraint.term(weight, Variable::V(index)); + eval += weight * V[index].value; + } + } + + constraint = constraint.constant(-eval); + + constraints.push(constraint); + } + + let mut transcript = Transcript::new([0; 32]); + let commitments = transcript.write_commitments( + C.iter() + .map(|C| { + C.commit(generators.g_bold_slice(), generators.h_bold_slice(), generators.h()).unwrap() + }) + .collect(), + V.iter().map(|V| V.commit(generators.g(), generators.h())).collect(), + ); + + let statement = ArithmeticCircuitStatement::::new( + generators.reduce(16).unwrap(), + constraints, + commitments.clone(), + ) + .unwrap(); + + let witness = ArithmeticCircuitWitness::::new(aL, aR, C.clone(), V.clone()).unwrap(); + + let proof = { + statement.clone().prove(&mut OsRng, &mut transcript, witness).unwrap(); + transcript.complete() + }; + let mut verifier = generators.batch_verifier(); + + let mut transcript = VerifierTranscript::new([0; 32], &proof); + let verifier_commmitments = transcript.read_commitments(C.len(), V.len()); + assert_eq!(commitments, verifier_commmitments.unwrap()); + statement.verify(&mut OsRng, &mut verifier, &mut transcript).unwrap(); + assert!(generators.verify(verifier)); + } +} diff --git a/crypto/evrf/generalized-bulletproofs/src/tests/inner_product.rs b/crypto/evrf/generalized-bulletproofs/src/tests/inner_product.rs new file mode 100644 index 00000000..49b5fc32 --- /dev/null +++ b/crypto/evrf/generalized-bulletproofs/src/tests/inner_product.rs @@ -0,0 +1,113 @@ +// The inner product relation is P = sum(g_bold * a, h_bold * b, g * (a * b)) + +use rand_core::OsRng; + +use ciphersuite::{ + group::{ff::Field, Group}, + Ciphersuite, Ristretto, +}; + +use crate::{ + ScalarVector, PointVector, + transcript::*, + inner_product::{P, IpStatement, IpWitness}, + tests::generators, +}; + +#[test] +fn test_zero_inner_product() { + let P = ::G::identity(); + + let generators = generators::(1); + let reduced = generators.reduce(1).unwrap(); + let witness = IpWitness::::new( + ScalarVector::<::F>::new(1), + ScalarVector::<::F>::new(1), + ) + .unwrap(); + + let proof = { + let mut transcript = Transcript::new([0; 32]); + IpStatement::::new( + reduced, + ScalarVector(vec![::F::ONE; 1]), + ::F::ONE, + P::Prover(P), + ) + .unwrap() + .clone() + .prove(&mut transcript, witness) + .unwrap(); + transcript.complete() + }; + + let mut verifier = generators.batch_verifier(); + IpStatement::::new( + reduced, + ScalarVector(vec![::F::ONE; 1]), + ::F::ONE, + P::Verifier { verifier_weight: ::F::ONE }, + ) + .unwrap() + .verify(&mut verifier, &mut VerifierTranscript::new([0; 32], &proof)) + .unwrap(); + assert!(generators.verify(verifier)); +} + +#[test] +fn test_inner_product() { + // P = sum(g_bold * a, h_bold * b) + let generators = generators::(32); + let mut verifier = generators.batch_verifier(); + for i in [1, 2, 4, 8, 16, 32] { + let generators = generators.reduce(i).unwrap(); + let g = generators.g(); + assert_eq!(generators.len(), i); + let mut g_bold = vec![]; + let mut h_bold = vec![]; + for i in 0 .. 
i {
+      g_bold.push(generators.g_bold(i));
+      h_bold.push(generators.h_bold(i));
+    }
+    let g_bold = PointVector::<Ristretto>(g_bold);
+    let h_bold = PointVector::<Ristretto>(h_bold);
+
+    let mut a = ScalarVector::<<Ristretto as Ciphersuite>::F>::new(i);
+    let mut b = ScalarVector::<<Ristretto as Ciphersuite>::F>::new(i);
+
+    for i in 0 .. i {
+      a[i] = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
+      b[i] = <Ristretto as Ciphersuite>::F::random(&mut OsRng);
+    }
+
+    let P = g_bold.multiexp(&a) + h_bold.multiexp(&b) + (g * a.inner_product(b.0.iter()));
+
+    let witness = IpWitness::<Ristretto>::new(a, b).unwrap();
+
+    let proof = {
+      let mut transcript = Transcript::new([0; 32]);
+      IpStatement::<Ristretto>::new(
+        generators,
+        ScalarVector(vec![<Ristretto as Ciphersuite>::F::ONE; i]),
+        <Ristretto as Ciphersuite>::F::ONE,
+        P::Prover(P),
+      )
+      .unwrap()
+      .prove(&mut transcript, witness)
+      .unwrap();
+      transcript.complete()
+    };
+
+    verifier.additional.push((<Ristretto as Ciphersuite>::F::ONE, P));
+    IpStatement::<Ristretto>::new(
+      generators,
+      ScalarVector(vec![<Ristretto as Ciphersuite>::F::ONE; i]),
+      <Ristretto as Ciphersuite>::F::ONE,
+      P::Verifier { verifier_weight: <Ristretto as Ciphersuite>::F::ONE },
+    )
+    .unwrap()
+    .verify(&mut verifier, &mut VerifierTranscript::new([0; 32], &proof))
+    .unwrap();
+  }
+  assert!(generators.verify(verifier));
+}
diff --git a/crypto/evrf/generalized-bulletproofs/src/tests/mod.rs b/crypto/evrf/generalized-bulletproofs/src/tests/mod.rs
new file mode 100644
index 00000000..1b64d378
--- /dev/null
+++ b/crypto/evrf/generalized-bulletproofs/src/tests/mod.rs
@@ -0,0 +1,27 @@
+use rand_core::OsRng;
+
+use ciphersuite::{group::Group, Ciphersuite};
+
+use crate::{Generators, padded_pow_of_2};
+
+#[cfg(test)]
+mod inner_product;
+
+#[cfg(test)]
+mod arithmetic_circuit_proof;
+
+/// Generate a set of generators for testing purposes.
+///
+/// This should not be considered secure.
+pub fn generators<C: Ciphersuite>(n: usize) -> Generators<C> {
+  assert_eq!(padded_pow_of_2(n), n, "amount of generators wasn't a power of 2");
+
+  let gens = || {
+    let mut res = Vec::with_capacity(n);
+    for _ in 0 .. n {
+      res.push(C::G::random(&mut OsRng));
+    }
+    res
+  };
+  Generators::new(C::G::random(&mut OsRng), C::G::random(&mut OsRng), gens(), gens()).unwrap()
+}
diff --git a/crypto/evrf/generalized-bulletproofs/src/transcript.rs b/crypto/evrf/generalized-bulletproofs/src/transcript.rs
new file mode 100644
index 00000000..75ef35c4
--- /dev/null
+++ b/crypto/evrf/generalized-bulletproofs/src/transcript.rs
@@ -0,0 +1,188 @@
+use std::io;
+
+use blake2::{Digest, Blake2b512};
+
+use ciphersuite::{
+  group::{ff::PrimeField, GroupEncoding},
+  Ciphersuite,
+};
+
+use crate::PointVector;
+
+const SCALAR: u8 = 0;
+const POINT: u8 = 1;
+const CHALLENGE: u8 = 2;
+
+fn challenge<F: PrimeField>(digest: &mut Blake2b512) -> F {
+  // Panic if this is such a wide field, we won't successfully perform a reduction into an unbiased
+  // scalar
+  debug_assert!((F::NUM_BITS + 128) < 512);
+
+  digest.update([CHALLENGE]);
+  let chl = digest.clone().finalize();
+
+  let mut res = F::ZERO;
+  for (i, mut byte) in chl.iter().cloned().enumerate() {
+    for j in 0 .. 8 {
+      let lsb = byte & 1;
+      let mut bit = F::from(u64::from(lsb));
+      for _ in 0 .. ((i * 8) + j) {
+        bit = bit.double();
+      }
+      res += bit;
+
+      byte >>= 1;
+    }
+  }
+
+  // Negligible probability
+  if bool::from(res.is_zero()) {
+    panic!("zero challenge");
+  }
+
+  res
+}
+
+/// Commitments written to/read from a transcript.
+// We use a dedicated type for this to coerce the caller into transcripting the commitments as
+// expected.
+#[cfg_attr(test, derive(Clone, PartialEq, Debug))]
+pub struct Commitments<C: Ciphersuite> {
+  pub(crate) C: PointVector<C>,
+  pub(crate) V: PointVector<C>,
+}
+
+impl<C: Ciphersuite> Commitments<C> {
+  /// The vector commitments.
+  pub fn C(&self) -> &[C::G] {
+    &self.C.0
+  }
+  /// The non-vector commitments.
+  pub fn V(&self) -> &[C::G] {
+    &self.V.0
+  }
+}
+
+/// A transcript for proving proofs.
+pub struct Transcript {
+  digest: Blake2b512,
+  transcript: Vec<u8>,
+}
+
+/*
+  We define our proofs as Vec<u8> and derive our transcripts from the values we deserialize from
+  them. This format assumes the order of the values read, their size, and their quantity are
+  constant to the context.
+*/
+impl Transcript {
+  /// Create a new transcript off some context.
+  pub fn new(context: [u8; 32]) -> Self {
+    let mut digest = Blake2b512::new();
+    digest.update(context);
+    Self { digest, transcript: Vec::with_capacity(1024) }
+  }
+
+  /// Push a scalar onto the transcript.
+  pub fn push_scalar(&mut self, scalar: impl PrimeField) {
+    self.digest.update([SCALAR]);
+    let bytes = scalar.to_repr();
+    self.digest.update(bytes);
+    self.transcript.extend(bytes.as_ref());
+  }
+
+  /// Push a point onto the transcript.
+  pub fn push_point(&mut self, point: impl GroupEncoding) {
+    self.digest.update([POINT]);
+    let bytes = point.to_bytes();
+    self.digest.update(bytes);
+    self.transcript.extend(bytes.as_ref());
+  }
+
+  /// Write the Pedersen (vector) commitments to this transcript.
+  pub fn write_commitments<C: Ciphersuite>(
+    &mut self,
+    C: Vec<C::G>,
+    V: Vec<C::G>,
+  ) -> Commitments<C> {
+    for C in &C {
+      self.push_point(*C);
+    }
+    for V in &V {
+      self.push_point(*V);
+    }
+    Commitments { C: PointVector(C), V: PointVector(V) }
+  }
+
+  /// Sample a challenge.
+  pub fn challenge<F: PrimeField>(&mut self) -> F {
+    challenge(&mut self.digest)
+  }
+
+  /// Complete a transcript, yielding the fully serialized proof.
+  pub fn complete(self) -> Vec<u8> {
+    self.transcript
+  }
+}
+
+/// A transcript for verifying proofs.
+pub struct VerifierTranscript<'a> {
+  digest: Blake2b512,
+  transcript: &'a [u8],
+}
+
+impl<'a> VerifierTranscript<'a> {
+  /// Create a new transcript to verify a proof with.
+  pub fn new(context: [u8; 32], proof: &'a [u8]) -> Self {
+    let mut digest = Blake2b512::new();
+    digest.update(context);
+    Self { digest, transcript: proof }
+  }
+
+  /// Read a scalar from the transcript.
+  pub fn read_scalar<C: Ciphersuite>(&mut self) -> io::Result<C::F> {
+    let scalar = C::read_F(&mut self.transcript)?;
+    self.digest.update([SCALAR]);
+    let bytes = scalar.to_repr();
+    self.digest.update(bytes);
+    Ok(scalar)
+  }
+
+  /// Read a point from the transcript.
+  pub fn read_point<C: Ciphersuite>(&mut self) -> io::Result<C::G> {
+    let point = C::read_G(&mut self.transcript)?;
+    self.digest.update([POINT]);
+    let bytes = point.to_bytes();
+    self.digest.update(bytes);
+    Ok(point)
+  }
+
+  /// Read the Pedersen (Vector) Commitments from the transcript.
+  ///
+  /// The lengths of the vectors are not transcripted.
+  #[allow(clippy::type_complexity)]
+  pub fn read_commitments<C: Ciphersuite>(
+    &mut self,
+    C: usize,
+    V: usize,
+  ) -> io::Result<Commitments<C>> {
+    let mut C_vec = Vec::with_capacity(C);
+    for _ in 0 .. C {
+      C_vec.push(self.read_point::<C>()?);
+    }
+    let mut V_vec = Vec::with_capacity(V);
+    for _ in 0 .. V {
+      V_vec.push(self.read_point::<C>()?);
+    }
+    Ok(Commitments { C: PointVector(C_vec), V: PointVector(V_vec) })
+  }
+
+  /// Sample a challenge.
+  pub fn challenge<F: PrimeField>(&mut self) -> F {
+    challenge(&mut self.digest)
+  }
+
+  /// Complete the transcript, returning the advanced slice.
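+  ///
+  /// The returned slice contains the bytes of the proof which were not yet read. Callers may
+  /// check it's empty to reject proofs with trailing data.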
+ pub fn complete(self) -> &'a [u8] { + self.transcript + } +} diff --git a/crypto/evrf/secq256k1/Cargo.toml b/crypto/evrf/secq256k1/Cargo.toml new file mode 100644 index 00000000..c363ca4f --- /dev/null +++ b/crypto/evrf/secq256k1/Cargo.toml @@ -0,0 +1,39 @@ +[package] +name = "secq256k1" +version = "0.1.0" +description = "An implementation of the curve secp256k1 cycles with" +license = "MIT" +repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/secq256k1" +authors = ["Luke Parker "] +keywords = ["secp256k1", "secq256k1", "group"] +edition = "2021" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[dependencies] +rustversion = "1" +hex-literal = { version = "0.4", default-features = false } + +rand_core = { version = "0.6", default-features = false, features = ["std"] } + +zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] } +subtle = { version = "^2.4", default-features = false, features = ["std"] } + +generic-array = { version = "0.14", default-features = false } +crypto-bigint = { version = "0.5", default-features = false, features = ["zeroize"] } + +k256 = { version = "0.13", default-features = false, features = ["arithmetic"] } + +blake2 = { version = "0.10", default-features = false, features = ["std"] } +ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] } +ec-divisors = { path = "../divisors" } +generalized-bulletproofs-ec-gadgets = { path = "../ec-gadgets" } + +[dev-dependencies] +hex = "0.4" + +rand_core = { version = "0.6", features = ["std"] } + +ff-group-tests = { path = "../../ff-group-tests" } diff --git a/crypto/evrf/secq256k1/LICENSE b/crypto/evrf/secq256k1/LICENSE new file mode 100644 index 00000000..91d893c1 --- /dev/null +++ b/crypto/evrf/secq256k1/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2022-2024 Luke Parker + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/crypto/evrf/secq256k1/README.md b/crypto/evrf/secq256k1/README.md new file mode 100644 index 00000000..b20ee31f --- /dev/null +++ b/crypto/evrf/secq256k1/README.md @@ -0,0 +1,5 @@ +# secq256k1 + +An implementation of the curve secp256k1 cycles with. + +Scalars and field elements are encoded in their big-endian formats. 
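+
+As a quick sketch of this convention (assuming only the `ff::PrimeField` API re-exported through
+`ciphersuite`, plus this crate's `Scalar`):
+
+```rust
+use ciphersuite::group::ff::PrimeField;
+use secq256k1::Scalar;
+
+let two = Scalar::from(2u64);
+let repr = two.to_repr();
+// Big-endian: the value occupies the trailing byte of the 32-byte representation.
+assert_eq!(repr[31], 2);
+assert_eq!(repr[0], 0);
+assert_eq!(Scalar::from_repr(repr).unwrap(), two);
+```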
diff --git a/crypto/evrf/secq256k1/src/backend.rs b/crypto/evrf/secq256k1/src/backend.rs new file mode 100644 index 00000000..6f8653c8 --- /dev/null +++ b/crypto/evrf/secq256k1/src/backend.rs @@ -0,0 +1,295 @@ +use zeroize::Zeroize; + +// Use black_box when possible +#[rustversion::since(1.66)] +use core::hint::black_box; +#[rustversion::before(1.66)] +fn black_box(val: T) -> T { + val +} + +pub(crate) fn u8_from_bool(bit_ref: &mut bool) -> u8 { + let bit_ref = black_box(bit_ref); + + let mut bit = black_box(*bit_ref); + let res = black_box(bit as u8); + bit.zeroize(); + debug_assert!((res | 1) == 1); + + bit_ref.zeroize(); + res +} + +macro_rules! math_op { + ( + $Value: ident, + $Other: ident, + $Op: ident, + $op_fn: ident, + $Assign: ident, + $assign_fn: ident, + $function: expr + ) => { + impl $Op<$Other> for $Value { + type Output = $Value; + fn $op_fn(self, other: $Other) -> Self::Output { + Self($function(self.0, other.0)) + } + } + impl $Assign<$Other> for $Value { + fn $assign_fn(&mut self, other: $Other) { + self.0 = $function(self.0, other.0); + } + } + impl<'a> $Op<&'a $Other> for $Value { + type Output = $Value; + fn $op_fn(self, other: &'a $Other) -> Self::Output { + Self($function(self.0, other.0)) + } + } + impl<'a> $Assign<&'a $Other> for $Value { + fn $assign_fn(&mut self, other: &'a $Other) { + self.0 = $function(self.0, other.0); + } + } + }; +} + +macro_rules! from_wrapper { + ($wrapper: ident, $inner: ident, $uint: ident) => { + impl From<$uint> for $wrapper { + fn from(a: $uint) -> $wrapper { + Self(Residue::new(&$inner::from(a))) + } + } + }; +} + +macro_rules! field { + ( + $FieldName: ident, + $ResidueType: ident, + + $MODULUS_STR: ident, + $MODULUS: ident, + $WIDE_MODULUS: ident, + + $NUM_BITS: literal, + $MULTIPLICATIVE_GENERATOR: literal, + $S: literal, + $ROOT_OF_UNITY: literal, + $DELTA: literal, + ) => { + use core::{ + ops::{DerefMut, Add, AddAssign, Neg, Sub, SubAssign, Mul, MulAssign}, + iter::{Sum, Product}, + }; + + use subtle::{Choice, CtOption, ConstantTimeEq, ConstantTimeLess, ConditionallySelectable}; + use rand_core::RngCore; + + use crypto_bigint::{Integer, NonZero, Encoding, impl_modulus}; + + use ciphersuite::group::ff::{ + Field, PrimeField, FieldBits, PrimeFieldBits, helpers::sqrt_ratio_generic, + }; + + use $crate::backend::u8_from_bool; + + fn reduce(x: U512) -> U256 { + U256::from_le_slice(&x.rem(&NonZero::new($WIDE_MODULUS).unwrap()).to_le_bytes()[.. 
32]) + } + + impl ConstantTimeEq for $FieldName { + fn ct_eq(&self, other: &Self) -> Choice { + self.0.ct_eq(&other.0) + } + } + + impl ConditionallySelectable for $FieldName { + fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self { + $FieldName(Residue::conditional_select(&a.0, &b.0, choice)) + } + } + + math_op!($FieldName, $FieldName, Add, add, AddAssign, add_assign, |x: $ResidueType, y| x + .add(&y)); + math_op!($FieldName, $FieldName, Sub, sub, SubAssign, sub_assign, |x: $ResidueType, y| x + .sub(&y)); + math_op!($FieldName, $FieldName, Mul, mul, MulAssign, mul_assign, |x: $ResidueType, y| x + .mul(&y)); + + from_wrapper!($FieldName, U256, u8); + from_wrapper!($FieldName, U256, u16); + from_wrapper!($FieldName, U256, u32); + from_wrapper!($FieldName, U256, u64); + from_wrapper!($FieldName, U256, u128); + + impl Neg for $FieldName { + type Output = $FieldName; + fn neg(self) -> $FieldName { + Self(self.0.neg()) + } + } + + impl<'a> Neg for &'a $FieldName { + type Output = $FieldName; + fn neg(self) -> Self::Output { + (*self).neg() + } + } + + impl $FieldName { + /// Perform an exponentation. + pub fn pow(&self, other: $FieldName) -> $FieldName { + let mut table = [Self(Residue::ONE); 16]; + table[1] = *self; + for i in 2 .. 16 { + table[i] = table[i - 1] * self; + } + + let mut res = Self(Residue::ONE); + let mut bits = 0; + for (i, mut bit) in other.to_le_bits().iter_mut().rev().enumerate() { + bits <<= 1; + let mut bit = u8_from_bool(bit.deref_mut()); + bits |= bit; + bit.zeroize(); + + if ((i + 1) % 4) == 0 { + if i != 3 { + for _ in 0 .. 4 { + res *= res; + } + } + + let mut factor = table[0]; + for (j, candidate) in table[1 ..].iter().enumerate() { + let j = j + 1; + factor = Self::conditional_select(&factor, &candidate, usize::from(bits).ct_eq(&j)); + } + res *= factor; + bits = 0; + } + } + res + } + } + + impl Field for $FieldName { + const ZERO: Self = Self(Residue::ZERO); + const ONE: Self = Self(Residue::ONE); + + fn random(mut rng: impl RngCore) -> Self { + let mut bytes = [0; 64]; + rng.fill_bytes(&mut bytes); + $FieldName(Residue::new(&reduce(U512::from_be_slice(bytes.as_ref())))) + } + + fn square(&self) -> Self { + Self(self.0.square()) + } + fn double(&self) -> Self { + *self + self + } + + fn invert(&self) -> CtOption { + let res = self.0.invert(); + CtOption::new(Self(res.0), res.1.into()) + } + + fn sqrt(&self) -> CtOption { + // (p + 1) // 4, as valid since p % 4 == 3 + let mod_plus_one_div_four = $MODULUS.saturating_add(&U256::ONE).wrapping_div(&(4u8.into())); + let res = self.pow(Self($ResidueType::new_checked(&mod_plus_one_div_four).unwrap())); + CtOption::new(res, res.square().ct_eq(self)) + } + + fn sqrt_ratio(num: &Self, div: &Self) -> (Choice, Self) { + sqrt_ratio_generic(num, div) + } + } + + impl PrimeField for $FieldName { + type Repr = [u8; 32]; + + const MODULUS: &'static str = $MODULUS_STR; + + const NUM_BITS: u32 = $NUM_BITS; + const CAPACITY: u32 = $NUM_BITS - 1; + + const TWO_INV: Self = $FieldName($ResidueType::new(&U256::from_u8(2)).invert().0); + + const MULTIPLICATIVE_GENERATOR: Self = + Self(Residue::new(&U256::from_u8($MULTIPLICATIVE_GENERATOR))); + const S: u32 = $S; + + const ROOT_OF_UNITY: Self = $FieldName(Residue::new(&U256::from_be_hex($ROOT_OF_UNITY))); + const ROOT_OF_UNITY_INV: Self = Self(Self::ROOT_OF_UNITY.0.invert().0); + + const DELTA: Self = $FieldName(Residue::new(&U256::from_be_hex($DELTA))); + + fn from_repr(bytes: Self::Repr) -> CtOption { + let res = U256::from_be_slice(&bytes); + 
CtOption::new($FieldName(Residue::new(&res)), res.ct_lt(&$MODULUS)) + } + fn to_repr(&self) -> Self::Repr { + let mut repr = [0; 32]; + repr.copy_from_slice(&self.0.retrieve().to_be_bytes()); + repr + } + + fn is_odd(&self) -> Choice { + self.0.retrieve().is_odd() + } + } + + impl PrimeFieldBits for $FieldName { + type ReprBits = [u8; 32]; + + fn to_le_bits(&self) -> FieldBits { + let mut repr = [0; 32]; + repr.copy_from_slice(&self.0.retrieve().to_le_bytes()); + repr.into() + } + + fn char_le_bits() -> FieldBits { + let mut repr = [0; 32]; + repr.copy_from_slice(&MODULUS.to_le_bytes()); + repr.into() + } + } + + impl Sum<$FieldName> for $FieldName { + fn sum>(iter: I) -> $FieldName { + let mut res = $FieldName::ZERO; + for item in iter { + res += item; + } + res + } + } + + impl<'a> Sum<&'a $FieldName> for $FieldName { + fn sum>(iter: I) -> $FieldName { + iter.cloned().sum() + } + } + + impl Product<$FieldName> for $FieldName { + fn product>(iter: I) -> $FieldName { + let mut res = $FieldName::ONE; + for item in iter { + res *= item; + } + res + } + } + + impl<'a> Product<&'a $FieldName> for $FieldName { + fn product>(iter: I) -> $FieldName { + iter.cloned().product() + } + } + }; +} diff --git a/crypto/evrf/secq256k1/src/lib.rs b/crypto/evrf/secq256k1/src/lib.rs new file mode 100644 index 00000000..b59078af --- /dev/null +++ b/crypto/evrf/secq256k1/src/lib.rs @@ -0,0 +1,47 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] + +use generic_array::typenum::{Sum, Diff, Quot, U, U1, U2}; +use ciphersuite::group::{ff::PrimeField, Group}; + +#[macro_use] +mod backend; + +mod scalar; +pub use scalar::Scalar; + +pub use k256::Scalar as FieldElement; + +mod point; +pub use point::Point; + +/// Ciphersuite for Secq256k1. +/// +/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition +/// between the two. This means `dst: b"abc", data: b"def"`, will produce the same scalar as +/// `dst: "abcdef", data: b""`. Please use carefully, not letting dsts be substrings of each other. 
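+///
+/// For example (an illustrative sketch, not part of the original patch), both of the following
+/// hash the byte string `b"abcdef"` and so yield the same scalar:
+/// ```ignore
+/// use ciphersuite::Ciphersuite;
+/// use secq256k1::Secq256k1;
+///
+/// assert_eq!(Secq256k1::hash_to_F(b"abc", b"def"), Secq256k1::hash_to_F(b"abcdef", b""));
+/// ```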
+#[derive(Clone, Copy, PartialEq, Eq, Debug, zeroize::Zeroize)] +pub struct Secq256k1; +impl ciphersuite::Ciphersuite for Secq256k1 { + type F = Scalar; + type G = Point; + type H = blake2::Blake2b512; + + const ID: &'static [u8] = b"secq256k1"; + + fn generator() -> Self::G { + Point::generator() + } + + fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F { + use blake2::Digest; + Scalar::wide_reduce(Self::H::digest([dst, data].concat()).as_slice().try_into().unwrap()) + } +} + +impl generalized_bulletproofs_ec_gadgets::DiscreteLogParameters for Secq256k1 { + type ScalarBits = U<{ Scalar::NUM_BITS as usize }>; + type XCoefficients = Quot, U2>; + type XCoefficientsMinusOne = Diff; + type YxCoefficients = Diff, U1>, U2>, U2>; +} diff --git a/crypto/evrf/secq256k1/src/point.rs b/crypto/evrf/secq256k1/src/point.rs new file mode 100644 index 00000000..384b68c9 --- /dev/null +++ b/crypto/evrf/secq256k1/src/point.rs @@ -0,0 +1,414 @@ +use core::{ + ops::{DerefMut, Add, AddAssign, Neg, Sub, SubAssign, Mul, MulAssign}, + iter::Sum, +}; + +use rand_core::RngCore; + +use zeroize::Zeroize; +use subtle::{Choice, CtOption, ConstantTimeEq, ConditionallySelectable, ConditionallyNegatable}; + +use generic_array::{typenum::U33, GenericArray}; + +use ciphersuite::group::{ + ff::{Field, PrimeField, PrimeFieldBits}, + Group, GroupEncoding, + prime::PrimeGroup, +}; + +use crate::{backend::u8_from_bool, Scalar, FieldElement}; + +fn recover_y(x: FieldElement) -> CtOption { + // x**3 + B since a = 0 + ((x.square() * x) + FieldElement::from(7u64)).sqrt() +} + +/// Point. +#[derive(Clone, Copy, Debug, Zeroize)] +#[repr(C)] +pub struct Point { + x: FieldElement, // / Z + y: FieldElement, // / Z + z: FieldElement, +} + +impl ConstantTimeEq for Point { + fn ct_eq(&self, other: &Self) -> Choice { + let x1 = self.x * other.z; + let x2 = other.x * self.z; + + let y1 = self.y * other.z; + let y2 = other.y * self.z; + + (self.x.is_zero() & other.x.is_zero()) | (x1.ct_eq(&x2) & y1.ct_eq(&y2)) + } +} + +impl PartialEq for Point { + fn eq(&self, other: &Point) -> bool { + self.ct_eq(other).into() + } +} + +impl Eq for Point {} + +impl ConditionallySelectable for Point { + fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self { + Point { + x: FieldElement::conditional_select(&a.x, &b.x, choice), + y: FieldElement::conditional_select(&a.y, &b.y, choice), + z: FieldElement::conditional_select(&a.z, &b.z, choice), + } + } +} + +impl Add for Point { + type Output = Point; + #[allow(non_snake_case)] + fn add(self, other: Self) -> Self { + // add-2015-rcb + + let a = FieldElement::ZERO; + let B = FieldElement::from(7u64); + let b3 = B + B + B; + + let X1 = self.x; + let Y1 = self.y; + let Z1 = self.z; + let X2 = other.x; + let Y2 = other.y; + let Z2 = other.z; + + let t0 = X1 * X2; + let t1 = Y1 * Y2; + let t2 = Z1 * Z2; + let t3 = X1 + Y1; + let t4 = X2 + Y2; + let t3 = t3 * t4; + let t4 = t0 + t1; + let t3 = t3 - t4; + let t4 = X1 + Z1; + let t5 = X2 + Z2; + let t4 = t4 * t5; + let t5 = t0 + t2; + let t4 = t4 - t5; + let t5 = Y1 + Z1; + let X3 = Y2 + Z2; + let t5 = t5 * X3; + let X3 = t1 + t2; + let t5 = t5 - X3; + let Z3 = a * t4; + let X3 = b3 * t2; + let Z3 = X3 + Z3; + let X3 = t1 - Z3; + let Z3 = t1 + Z3; + let Y3 = X3 * Z3; + let t1 = t0 + t0; + let t1 = t1 + t0; + let t2 = a * t2; + let t4 = b3 * t4; + let t1 = t1 + t2; + let t2 = t0 - t2; + let t2 = a * t2; + let t4 = t4 + t2; + let t0 = t1 * t4; + let Y3 = Y3 + t0; + let t0 = t5 * t4; + let X3 = t3 * X3; + let X3 = X3 - t0; + let t0 = t3 * t1; + let Z3 = t5 * Z3; 
+ let Z3 = Z3 + t0; + Point { x: X3, y: Y3, z: Z3 } + } +} + +impl AddAssign for Point { + fn add_assign(&mut self, other: Point) { + *self = *self + other; + } +} + +impl Add<&Point> for Point { + type Output = Point; + fn add(self, other: &Point) -> Point { + self + *other + } +} + +impl AddAssign<&Point> for Point { + fn add_assign(&mut self, other: &Point) { + *self += *other; + } +} + +impl Neg for Point { + type Output = Point; + fn neg(self) -> Self { + Point { x: self.x, y: -self.y, z: self.z } + } +} + +impl Sub for Point { + type Output = Point; + #[allow(clippy::suspicious_arithmetic_impl)] + fn sub(self, other: Self) -> Self { + self + other.neg() + } +} + +impl SubAssign for Point { + fn sub_assign(&mut self, other: Point) { + *self = *self - other; + } +} + +impl Sub<&Point> for Point { + type Output = Point; + fn sub(self, other: &Point) -> Point { + self - *other + } +} + +impl SubAssign<&Point> for Point { + fn sub_assign(&mut self, other: &Point) { + *self -= *other; + } +} + +impl Group for Point { + type Scalar = Scalar; + fn random(mut rng: impl RngCore) -> Self { + loop { + let mut bytes = GenericArray::default(); + rng.fill_bytes(bytes.as_mut()); + let opt = Self::from_bytes(&bytes); + if opt.is_some().into() { + return opt.unwrap(); + } + } + } + fn identity() -> Self { + Point { x: FieldElement::ZERO, y: FieldElement::ONE, z: FieldElement::ZERO } + } + fn generator() -> Self { + Point { + x: FieldElement::from_repr( + hex_literal::hex!("0000000000000000000000000000000000000000000000000000000000000001") + .into(), + ) + .unwrap(), + y: FieldElement::from_repr( + hex_literal::hex!("0C7C97045A2074634909ABDF82C9BD0248916189041F2AF0C1B800D1FFC278C0") + .into(), + ) + .unwrap(), + z: FieldElement::ONE, + } + } + fn is_identity(&self) -> Choice { + self.z.ct_eq(&FieldElement::ZERO) + } + #[allow(non_snake_case)] + fn double(&self) -> Self { + // dbl-2007-bl + + let a = FieldElement::ZERO; + + let X1 = self.x; + let Y1 = self.y; + let Z1 = self.z; + + let XX = X1 * X1; + let ZZ = Z1 * Z1; + let w = (a * ZZ) + XX.double() + XX; + let s = (Y1 * Z1).double(); + let ss = s * s; + let sss = s * ss; + let R = Y1 * s; + let RR = R * R; + let B = X1 + R; + let B = (B * B) - XX - RR; + let h = (w * w) - B.double(); + let X3 = h * s; + let Y3 = w * (B - h) - RR.double(); + let Z3 = sss; + + let res = Self { x: X3, y: Y3, z: Z3 }; + // If self is identity, res will not be well-formed + // Accordingly, we return self if self was the identity + Self::conditional_select(&res, self, self.is_identity()) + } +} + +impl Sum for Point { + fn sum>(iter: I) -> Point { + let mut res = Self::identity(); + for i in iter { + res += i; + } + res + } +} + +impl<'a> Sum<&'a Point> for Point { + fn sum>(iter: I) -> Point { + Point::sum(iter.cloned()) + } +} + +impl Mul for Point { + type Output = Point; + fn mul(self, mut other: Scalar) -> Point { + // Precompute the optimal amount that's a multiple of 2 + let mut table = [Point::identity(); 16]; + table[1] = self; + for i in 2 .. 16 { + table[i] = table[i - 1] + self; + } + + let mut res = Self::identity(); + let mut bits = 0; + for (i, mut bit) in other.to_le_bits().iter_mut().rev().enumerate() { + bits <<= 1; + let mut bit = u8_from_bool(bit.deref_mut()); + bits |= bit; + bit.zeroize(); + + if ((i + 1) % 4) == 0 { + if i != 3 { + for _ in 0 .. 
4 { + res = res.double(); + } + } + + let mut term = table[0]; + for (j, candidate) in table[1 ..].iter().enumerate() { + let j = j + 1; + term = Self::conditional_select(&term, candidate, usize::from(bits).ct_eq(&j)); + } + res += term; + bits = 0; + } + } + other.zeroize(); + res + } +} + +impl MulAssign for Point { + fn mul_assign(&mut self, other: Scalar) { + *self = *self * other; + } +} + +impl Mul<&Scalar> for Point { + type Output = Point; + fn mul(self, other: &Scalar) -> Point { + self * *other + } +} + +impl MulAssign<&Scalar> for Point { + fn mul_assign(&mut self, other: &Scalar) { + *self *= *other; + } +} + +impl GroupEncoding for Point { + type Repr = GenericArray; + + fn from_bytes(bytes: &Self::Repr) -> CtOption { + // Extract and clear the sign bit + let sign = Choice::from(bytes[0] & 1); + + // Parse x, recover y + FieldElement::from_repr(*GenericArray::from_slice(&bytes[1 ..])).and_then(|x| { + let is_identity = x.is_zero(); + + let y = recover_y(x).map(|mut y| { + y.conditional_negate(y.is_odd().ct_eq(&!sign)); + y + }); + + // If this the identity, set y to 1 + let y = + CtOption::conditional_select(&y, &CtOption::new(FieldElement::ONE, 1.into()), is_identity); + // Create the point if we have a y solution + let point = y.map(|y| Point { x, y, z: FieldElement::ONE }); + + let not_negative_zero = !(is_identity & sign); + // Only return the point if it isn't -0 and the sign byte wasn't malleated + CtOption::conditional_select( + &CtOption::new(Point::identity(), 0.into()), + &point, + not_negative_zero & ((bytes[0] & 1).ct_eq(&bytes[0])), + ) + }) + } + + fn from_bytes_unchecked(bytes: &Self::Repr) -> CtOption { + Point::from_bytes(bytes) + } + + fn to_bytes(&self) -> Self::Repr { + let Some(z) = Option::::from(self.z.invert()) else { + return *GenericArray::from_slice(&[0; 33]); + }; + let x = self.x * z; + let y = self.y * z; + + let mut res = *GenericArray::from_slice(&[0; 33]); + res[1 ..].as_mut().copy_from_slice(&x.to_repr()); + + // The following conditional select normalizes the sign to 0 when x is 0 + let y_sign = u8::conditional_select(&y.is_odd().unwrap_u8(), &0, x.ct_eq(&FieldElement::ZERO)); + res[0] |= y_sign; + res + } +} + +impl PrimeGroup for Point {} + +impl ec_divisors::DivisorCurve for Point { + type FieldElement = FieldElement; + + fn a() -> Self::FieldElement { + FieldElement::from(0u64) + } + fn b() -> Self::FieldElement { + FieldElement::from(7u64) + } + + fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)> { + let z: Self::FieldElement = Option::from(point.z.invert())?; + Some((point.x * z, point.y * z)) + } +} + +#[test] +fn test_curve() { + ff_group_tests::group::test_prime_group_bits::<_, Point>(&mut rand_core::OsRng); +} + +#[test] +fn generator() { + assert_eq!( + Point::generator(), + Point::from_bytes(GenericArray::from_slice(&hex_literal::hex!( + "000000000000000000000000000000000000000000000000000000000000000001" + ))) + .unwrap() + ); +} + +#[test] +fn zero_x_is_invalid() { + assert!(Option::::from(recover_y(FieldElement::ZERO)).is_none()); +} + +// Checks random won't infinitely loop +#[test] +fn random() { + Point::random(&mut rand_core::OsRng); +} diff --git a/crypto/evrf/secq256k1/src/scalar.rs b/crypto/evrf/secq256k1/src/scalar.rs new file mode 100644 index 00000000..1bc930a2 --- /dev/null +++ b/crypto/evrf/secq256k1/src/scalar.rs @@ -0,0 +1,52 @@ +use zeroize::{DefaultIsZeroes, Zeroize}; + +use crypto_bigint::{ + U256, U512, + modular::constant_mod::{ResidueParams, Residue}, +}; + +const MODULUS_STR: &str = 
"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F"; + +impl_modulus!(SecQ, U256, MODULUS_STR); +type ResidueType = Residue; + +/// The Scalar field of secq256k1. +/// +/// This is equivalent to the field secp256k1 is defined over. +#[derive(Clone, Copy, PartialEq, Eq, Default, Debug)] +#[repr(C)] +pub struct Scalar(pub(crate) ResidueType); + +impl DefaultIsZeroes for Scalar {} + +pub(crate) const MODULUS: U256 = U256::from_be_hex(MODULUS_STR); + +const WIDE_MODULUS: U512 = U512::from_be_hex(concat!( + "0000000000000000000000000000000000000000000000000000000000000000", + "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F", +)); + +field!( + Scalar, + ResidueType, + MODULUS_STR, + MODULUS, + WIDE_MODULUS, + 256, + 3, + 1, + "fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2e", + "0000000000000000000000000000000000000000000000000000000000000009", +); + +impl Scalar { + /// Perform a wide reduction, presumably to obtain a non-biased Scalar field element. + pub fn wide_reduce(bytes: [u8; 64]) -> Scalar { + Scalar(Residue::new(&reduce(U512::from_le_slice(bytes.as_ref())))) + } +} + +#[test] +fn test_scalar_field() { + ff_group_tests::prime_field::test_prime_field_bits::<_, Scalar>(&mut rand_core::OsRng); +} diff --git a/crypto/frost/src/tests/vectors.rs b/crypto/frost/src/tests/vectors.rs index 7be6478a..dc0453a1 100644 --- a/crypto/frost/src/tests/vectors.rs +++ b/crypto/frost/src/tests/vectors.rs @@ -122,6 +122,7 @@ fn vectors_to_multisig_keys(vectors: &Vectors) -> HashMap for ClsagMultisig { .append_message(b"key_image_share", addendum.key_image_share.compress().to_bytes()); // Accumulate the interpolated share - let interpolated_key_image_share = - addendum.key_image_share * lagrange::(l, view.included()); + let interpolated_key_image_share = addendum.key_image_share * + view + .interpolation_factor(l) + .ok_or(FrostError::InternalError("processing addendum of non-participant"))?; *self.image.as_mut().unwrap() += interpolated_key_image_share; self diff --git a/networks/monero/wallet/src/send/multisig.rs b/networks/monero/wallet/src/send/multisig.rs index b3d58ba5..d60c5a33 100644 --- a/networks/monero/wallet/src/send/multisig.rs +++ b/networks/monero/wallet/src/send/multisig.rs @@ -14,7 +14,6 @@ use transcript::{Transcript, RecommendedTranscript}; use frost::{ curve::Ed25519, Participant, FrostError, ThresholdKeys, - dkg::lagrange, sign::{ Preprocess, CachedPreprocess, SignatureShare, PreprocessMachine, SignMachine, SignatureMachine, AlgorithmMachine, AlgorithmSignMachine, AlgorithmSignatureMachine, @@ -34,7 +33,7 @@ use crate::send::{SendError, SignableTransaction, key_image_sort}; pub struct TransactionMachine { signable: SignableTransaction, - i: Participant, + keys: ThresholdKeys, // The key image generator, and the scalar offset from the spend key key_image_generators_and_offsets: Vec<(EdwardsPoint, Scalar)>, @@ -45,7 +44,7 @@ pub struct TransactionMachine { pub struct TransactionSignMachine { signable: SignableTransaction, - i: Participant, + keys: ThresholdKeys, key_image_generators_and_offsets: Vec<(EdwardsPoint, Scalar)>, clsags: Vec<(ClsagMultisigMaskSender, AlgorithmSignMachine)>, @@ -61,7 +60,7 @@ pub struct TransactionSignatureMachine { impl SignableTransaction { /// Create a FROST signing machine out of this signable transaction. 
- pub fn multisig(self, keys: &ThresholdKeys) -> Result { + pub fn multisig(self, keys: ThresholdKeys) -> Result { let mut clsags = vec![]; let mut key_image_generators_and_offsets = vec![]; @@ -85,12 +84,7 @@ impl SignableTransaction { clsags.push((clsag_mask_send, AlgorithmMachine::new(clsag, offset))); } - Ok(TransactionMachine { - signable: self, - i: keys.params().i(), - key_image_generators_and_offsets, - clsags, - }) + Ok(TransactionMachine { signable: self, keys, key_image_generators_and_offsets, clsags }) } } @@ -120,7 +114,7 @@ impl PreprocessMachine for TransactionMachine { TransactionSignMachine { signable: self.signable, - i: self.i, + keys: self.keys, key_image_generators_and_offsets: self.key_image_generators_and_offsets, clsags, @@ -173,12 +167,12 @@ impl SignMachine for TransactionSignMachine { // We do not need to be included here, yet this set of signers has yet to be validated // We explicitly remove ourselves to ensure we aren't included twice, if we were redundantly // included - commitments.remove(&self.i); + commitments.remove(&self.keys.params().i()); // Find out who's included let mut included = commitments.keys().copied().collect::>(); // This push won't duplicate due to the above removal - included.push(self.i); + included.push(self.keys.params().i()); // unstable sort may reorder elements of equal order // Given our lack of duplicates, we should have no elements of equal order included.sort_unstable(); @@ -192,12 +186,15 @@ impl SignMachine for TransactionSignMachine { } // Convert the serialized nonces commitments to a parallelized Vec + let view = self.keys.view(included.clone()).map_err(|_| { + FrostError::InvalidSigningSet("couldn't form an interpolated view of the key") + })?; let mut commitments = (0 .. self.clsags.len()) .map(|c| { included .iter() .map(|l| { - let preprocess = if *l == self.i { + let preprocess = if *l == self.keys.params().i() { self.our_preprocess[c].clone() } else { commitments.get_mut(l).ok_or(FrostError::MissingParticipant(*l))?[c].clone() @@ -206,7 +203,7 @@ impl SignMachine for TransactionSignMachine { // While here, calculate the key image as needed to call sign // The CLSAG algorithm will independently calculate the key image/verify these shares key_images[c] += - preprocess.addendum.key_image_share().0 * lagrange::(*l, &included).0; + preprocess.addendum.key_image_share().0 * view.interpolation_factor(*l).unwrap().0; Ok((*l, preprocess)) }) @@ -217,7 +214,7 @@ impl SignMachine for TransactionSignMachine { // The above inserted our own preprocess into these maps (which is unnecessary) // Remove it now for map in &mut commitments { - map.remove(&self.i); + map.remove(&self.keys.params().i()); } // The actual TX will have sorted its inputs by key image diff --git a/networks/monero/wallet/tests/runner/mod.rs b/networks/monero/wallet/tests/runner/mod.rs index b83f939a..5678ba1b 100644 --- a/networks/monero/wallet/tests/runner/mod.rs +++ b/networks/monero/wallet/tests/runner/mod.rs @@ -285,7 +285,7 @@ macro_rules! 
test { { let mut machines = HashMap::new(); for i in (1 ..= THRESHOLD).map(|i| Participant::new(i).unwrap()) { - machines.insert(i, tx.clone().multisig(&keys[&i]).unwrap()); + machines.insert(i, tx.clone().multisig(keys[&i].clone()).unwrap()); } frost::tests::sign_without_caching(&mut OsRng, machines, &[]) diff --git a/orchestration/Cargo.toml b/orchestration/Cargo.toml index fca38066..a70e9936 100644 --- a/orchestration/Cargo.toml +++ b/orchestration/Cargo.toml @@ -24,6 +24,8 @@ rand_chacha = { version = "0.3", default-features = false, features = ["std"] } transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std", "recommended"] } ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] } +embedwards25519 = { path = "../crypto/evrf/embedwards25519" } +secq256k1 = { path = "../crypto/evrf/secq256k1" } zalloc = { path = "../common/zalloc" } diff --git a/orchestration/src/main.rs b/orchestration/src/main.rs index 4655be01..7afec67d 100644 --- a/orchestration/src/main.rs +++ b/orchestration/src/main.rs @@ -25,6 +25,8 @@ use ciphersuite::{ }, Ciphersuite, Ristretto, }; +use embedwards25519::Embedwards25519; +use secq256k1::Secq256k1; mod mimalloc; use mimalloc::mimalloc; @@ -267,6 +269,55 @@ fn infrastructure_keys(network: Network) -> InfrastructureKeys { ]) } +struct EmbeddedCurveKeys { + embedwards25519: (Zeroizing>, Vec), + secq256k1: (Zeroizing>, Vec), +} + +fn embedded_curve_keys(network: Network) -> EmbeddedCurveKeys { + // Generate entropy for the embedded curve keys + + let entropy = { + let path = home::home_dir() + .unwrap() + .join(".serai") + .join(network.label()) + .join("embedded_curve_keys_entropy"); + // Check if there's existing entropy + if let Ok(entropy) = fs::read(&path).map(Zeroizing::new) { + assert_eq!(entropy.len(), 32, "entropy saved to disk wasn't 32 bytes"); + let mut res = Zeroizing::new([0; 32]); + res.copy_from_slice(entropy.as_ref()); + res + } else { + // If there isn't, generate fresh entropy + let mut res = Zeroizing::new([0; 32]); + OsRng.fill_bytes(res.as_mut()); + fs::write(&path, &res).unwrap(); + res + } + }; + + let mut transcript = + RecommendedTranscript::new(b"Serai Orchestrator Embedded Curve Keys Transcript"); + transcript.append_message(b"network", network.label().as_bytes()); + transcript.append_message(b"entropy", entropy); + let mut rng = ChaCha20Rng::from_seed(transcript.rng_seed(b"embedded_curve_keys")); + + EmbeddedCurveKeys { + embedwards25519: { + let key = Zeroizing::new(::F::random(&mut rng)); + let pub_key = Embedwards25519::generator() * key.deref(); + (Zeroizing::new(key.to_repr().as_slice().to_vec()), pub_key.to_bytes().to_vec()) + }, + secq256k1: { + let key = Zeroizing::new(::F::random(&mut rng)); + let pub_key = Secq256k1::generator() * key.deref(); + (Zeroizing::new(key.to_repr().as_slice().to_vec()), pub_key.to_bytes().to_vec()) + }, + } +} + fn dockerfiles(network: Network) { let orchestration_path = orchestration_path(network); @@ -294,18 +345,15 @@ fn dockerfiles(network: Network) { monero_key.1, ); - let new_entropy = || { - let mut res = Zeroizing::new([0; 32]); - OsRng.fill_bytes(res.as_mut()); - res - }; + let embedded_curve_keys = embedded_curve_keys(network); processor( &orchestration_path, network, "bitcoin", coordinator_key.1, bitcoin_key.0, - new_entropy(), + embedded_curve_keys.embedwards25519.0.clone(), + embedded_curve_keys.secq256k1.0.clone(), ); processor( &orchestration_path, @@ -313,9 +361,18 @@ 
fn dockerfiles(network: Network) { "ethereum", coordinator_key.1, ethereum_key.0, - new_entropy(), + embedded_curve_keys.embedwards25519.0.clone(), + embedded_curve_keys.secq256k1.0.clone(), + ); + processor( + &orchestration_path, + network, + "monero", + coordinator_key.1, + monero_key.0, + embedded_curve_keys.embedwards25519.0.clone(), + embedded_curve_keys.embedwards25519.0.clone(), ); - processor(&orchestration_path, network, "monero", coordinator_key.1, monero_key.0, new_entropy()); let serai_key = { let serai_key = Zeroizing::new( @@ -346,6 +403,7 @@ fn key_gen(network: Network) { let _ = fs::create_dir_all(&serai_dir); fs::write(key_file, key.to_repr()).expect("couldn't write key"); + // TODO: Move embedded curve key gen here, and print them println!( "Public Key: {}", hex::encode((::generator() * key).to_bytes()) diff --git a/orchestration/src/processor.rs b/orchestration/src/processor.rs index cefe6455..3387c4ed 100644 --- a/orchestration/src/processor.rs +++ b/orchestration/src/processor.rs @@ -12,8 +12,9 @@ pub fn processor( network: Network, coin: &'static str, _coordinator_key: ::G, - coin_key: Zeroizing<::F>, - entropy: Zeroizing<[u8; 32]>, + processor_key: Zeroizing<::F>, + substrate_evrf_key: Zeroizing>, + network_evrf_key: Zeroizing>, ) { let setup = mimalloc(Os::Debian).to_string() + &build_serai_service( @@ -53,8 +54,9 @@ RUN apt install -y ca-certificates let mut env_vars = vec![ ("MESSAGE_QUEUE_RPC", format!("serai-{}-message-queue", network.label())), - ("MESSAGE_QUEUE_KEY", hex::encode(coin_key.to_repr())), - ("ENTROPY", hex::encode(entropy.as_ref())), + ("MESSAGE_QUEUE_KEY", hex::encode(processor_key.to_repr())), + ("SUBSTRATE_EVRF_KEY", hex::encode(substrate_evrf_key)), + ("NETWORK_EVRF_KEY", hex::encode(network_evrf_key)), ("NETWORK", coin.to_string()), ("NETWORK_RPC_LOGIN", format!("{RPC_USER}:{RPC_PASS}")), ("NETWORK_RPC_HOSTNAME", hostname), diff --git a/processor/Cargo.toml b/processor/Cargo.toml index 9d29bc7c..fa2f643c 100644 --- a/processor/Cargo.toml +++ b/processor/Cargo.toml @@ -36,7 +36,10 @@ serde_json = { version = "1", default-features = false, features = ["std"] } # Cryptography ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] } +blake2 = { version = "0.10", default-features = false, features = ["std"] } transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std"] } +ec-divisors = { package = "ec-divisors", path = "../crypto/evrf/divisors", default-features = false } +dkg = { package = "dkg", path = "../crypto/dkg", default-features = false, features = ["std", "evrf-ristretto"] } frost = { package = "modular-frost", path = "../crypto/frost", default-features = false, features = ["ristretto"] } frost-schnorrkel = { path = "../crypto/schnorrkel", default-features = false } @@ -81,12 +84,12 @@ dockertest = "0.5" serai-docker-tests = { path = "../tests/docker" } [features] -secp256k1 = ["k256", "frost/secp256k1"] +secp256k1 = ["k256", "dkg/evrf-secp256k1", "frost/secp256k1"] bitcoin = ["dep:secp256k1", "secp256k1", "bitcoin-serai", "serai-client/bitcoin"] ethereum = ["secp256k1", "ethereum-serai/tests"] -ed25519 = ["dalek-ff-group", "frost/ed25519"] +ed25519 = ["dalek-ff-group", "dkg/evrf-ed25519", "frost/ed25519"] monero = ["ed25519", "monero-simple-request-rpc", "monero-wallet", "serai-client/monero"] binaries = ["env_logger", "serai-env", "message-queue"] diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs 
index 22360a1a..98af97ce 100644
--- a/processor/messages/src/lib.rs
+++ b/processor/messages/src/lib.rs
@@ -3,7 +3,7 @@ use std::collections::HashMap;
 use scale::{Encode, Decode};
 use borsh::{BorshSerialize, BorshDeserialize};
 
-use dkg::{Participant, ThresholdParams};
+use dkg::Participant;
 
 use serai_primitives::BlockHash;
 use in_instructions_primitives::{Batch, SignedBatch};
@@ -19,41 +19,31 @@ pub struct SubstrateContext {
 pub mod key_gen {
   use super::*;
 
-  #[derive(
-    Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode, BorshSerialize, BorshDeserialize,
-  )]
-  pub struct KeyGenId {
-    pub session: Session,
-    pub attempt: u32,
-  }
-
-  #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+  #[derive(Clone, PartialEq, Eq, BorshSerialize, BorshDeserialize)]
   pub enum CoordinatorMessage {
     // Instructs the Processor to begin the key generation process.
     // TODO: Should this be moved under Substrate?
-    GenerateKey {
-      id: KeyGenId,
-      params: ThresholdParams,
-      shares: u16,
-    },
-    // Received commitments for the specified key generation protocol.
-    Commitments {
-      id: KeyGenId,
-      commitments: HashMap<Participant, Vec<u8>>,
-    },
-    // Received shares for the specified key generation protocol.
-    Shares {
-      id: KeyGenId,
-      shares: Vec<HashMap<Participant, Vec<u8>>>,
-    },
-    /// Instruction to verify a blame accusation.
-    VerifyBlame {
-      id: KeyGenId,
-      accuser: Participant,
-      accused: Participant,
-      share: Vec<u8>,
-      blame: Option<Vec<u8>>,
-    },
+    GenerateKey { session: Session, threshold: u16, evrf_public_keys: Vec<([u8; 32], Vec<u8>)> },
+    // Received participations for the specified key generation protocol.
+    Participation { session: Session, participant: Participant, participation: Vec<u8> },
+  }
+
+  impl core::fmt::Debug for CoordinatorMessage {
+    fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
+      match self {
+        CoordinatorMessage::GenerateKey { session, threshold, evrf_public_keys } => fmt
+          .debug_struct("CoordinatorMessage::GenerateKey")
+          .field("session", &session)
+          .field("threshold", &threshold)
+          .field("evrf_public_keys.len()", &evrf_public_keys.len())
+          .finish_non_exhaustive(),
+        CoordinatorMessage::Participation { session, participant, .. } => fmt
+          .debug_struct("CoordinatorMessage::Participation")
+          .field("session", &session)
+          .field("participant", &participant)
+          .finish_non_exhaustive(),
+      }
+    }
   }
 
   impl CoordinatorMessage {
@@ -62,42 +52,34 @@ pub mod key_gen {
     }
   }
 
-  #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+  #[derive(Clone, PartialEq, Eq, BorshSerialize, BorshDeserialize)]
   pub enum ProcessorMessage {
-    // Created commitments for the specified key generation protocol.
-    Commitments {
-      id: KeyGenId,
-      commitments: Vec<Vec<u8>>,
-    },
-    // Participant published invalid commitments.
-    InvalidCommitments {
-      id: KeyGenId,
-      faulty: Participant,
-    },
-    // Created shares for the specified key generation protocol.
-    Shares {
-      id: KeyGenId,
-      shares: Vec<HashMap<Participant, Vec<u8>>>,
-    },
-    // Participant published an invalid share.
-    #[rustfmt::skip]
-    InvalidShare {
-      id: KeyGenId,
-      accuser: Participant,
-      faulty: Participant,
-      blame: Option<Vec<u8>>,
-    },
+    // Participated in the specified key generation protocol.
+    Participation { session: Session, participation: Vec<u8> },
     // Resulting keys from the specified key generation protocol.
-    GeneratedKeyPair {
-      id: KeyGenId,
-      substrate_key: [u8; 32],
-      network_key: Vec<u8>,
-    },
+    GeneratedKeyPair { session: Session, substrate_key: [u8; 32], network_key: Vec<u8> },
     // Blame this participant.
-    Blame {
-      id: KeyGenId,
-      participant: Participant,
-    },
+    Blame { session: Session, participant: Participant },
+  }
+
+  impl core::fmt::Debug for ProcessorMessage {
+    fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
+      match self {
+        ProcessorMessage::Participation { session, .. } => fmt
+          .debug_struct("ProcessorMessage::Participation")
+          .field("session", &session)
+          .finish_non_exhaustive(),
+        ProcessorMessage::GeneratedKeyPair { session, .. } => fmt
+          .debug_struct("ProcessorMessage::GeneratedKeyPair")
+          .field("session", &session)
+          .finish_non_exhaustive(),
+        ProcessorMessage::Blame { session, participant } => fmt
+          .debug_struct("ProcessorMessage::Blame")
+          .field("session", &session)
+          .field("participant", &participant)
+          .finish_non_exhaustive(),
+      }
+    }
   }
 }
 
@@ -328,16 +310,19 @@ impl CoordinatorMessage {
   pub fn intent(&self) -> Vec<u8> {
     match self {
       CoordinatorMessage::KeyGen(msg) => {
-        // Unique since key gen ID embeds the session and attempt
         let (sub, id) = match msg {
-          key_gen::CoordinatorMessage::GenerateKey { id, .. } => (0, id),
-          key_gen::CoordinatorMessage::Commitments { id, .. } => (1, id),
-          key_gen::CoordinatorMessage::Shares { id, .. } => (2, id),
-          key_gen::CoordinatorMessage::VerifyBlame { id, .. } => (3, id),
+          // Unique since we only have one attempt per session
+          key_gen::CoordinatorMessage::GenerateKey { session, .. } => {
+            (0, borsh::to_vec(session).unwrap())
+          }
+          // Unique since one participation per participant per session
+          key_gen::CoordinatorMessage::Participation { session, participant, .. } => {
+            (1, borsh::to_vec(&(session, participant)).unwrap())
+          }
         };
 
         let mut res = vec![COORDINATOR_UID, TYPE_KEY_GEN_UID, sub];
-        res.extend(&id.encode());
+        res.extend(&id);
         res
       }
       CoordinatorMessage::Sign(msg) => {
@@ -400,17 +385,21 @@ impl ProcessorMessage {
     match self {
       ProcessorMessage::KeyGen(msg) => {
         let (sub, id) = match msg {
-          // Unique since KeyGenId
-          key_gen::ProcessorMessage::Commitments { id, .. } => (0, id),
-          key_gen::ProcessorMessage::InvalidCommitments { id, .. } => (1, id),
-          key_gen::ProcessorMessage::Shares { id, .. } => (2, id),
-          key_gen::ProcessorMessage::InvalidShare { id, .. } => (3, id),
-          key_gen::ProcessorMessage::GeneratedKeyPair { id, .. } => (4, id),
-          key_gen::ProcessorMessage::Blame { id, .. } => (5, id),
+          // Unique since we only have one participation per session (due to no re-attempts)
+          key_gen::ProcessorMessage::Participation { session, .. } => {
+            (0, borsh::to_vec(session).unwrap())
+          }
+          key_gen::ProcessorMessage::GeneratedKeyPair { session, ..
} => { + (1, borsh::to_vec(session).unwrap()) + } + // Unique since we only blame a participant once (as this is fatal) + key_gen::ProcessorMessage::Blame { session, participant } => { + (2, borsh::to_vec(&(session, participant)).unwrap()) + } }; let mut res = vec![PROCESSOR_UID, TYPE_KEY_GEN_UID, sub]; - res.extend(&id.encode()); + res.extend(&id); res } ProcessorMessage::Sign(msg) => { diff --git a/processor/src/key_gen.rs b/processor/src/key_gen.rs index 297db194..a059c350 100644 --- a/processor/src/key_gen.rs +++ b/processor/src/key_gen.rs @@ -1,18 +1,20 @@ -use std::collections::HashMap; +use std::{ + io, + collections::{HashSet, HashMap}, +}; use zeroize::Zeroizing; -use rand_core::SeedableRng; +use rand_core::{RngCore, SeedableRng, OsRng}; use rand_chacha::ChaCha20Rng; +use blake2::{Digest, Blake2s256}; use transcript::{Transcript, RecommendedTranscript}; -use ciphersuite::group::GroupEncoding; -use frost::{ - curve::{Ciphersuite, Ristretto}, - dkg::{ - DkgError, Participant, ThresholdParams, ThresholdCore, ThresholdKeys, encryption::*, pedpop::*, - }, +use ciphersuite::{ + group::{Group, GroupEncoding}, + Ciphersuite, Ristretto, }; +use dkg::{Participant, ThresholdCore, ThresholdKeys, evrf::*}; use log::info; @@ -21,6 +23,48 @@ use messages::key_gen::*; use crate::{Get, DbTxn, Db, create_db, networks::Network}; +mod generators { + use core::any::{TypeId, Any}; + use std::{ + sync::{LazyLock, Mutex}, + collections::HashMap, + }; + + use frost::dkg::evrf::*; + + use serai_client::validator_sets::primitives::MAX_KEY_SHARES_PER_SET; + + /// A cache of the generators used by the eVRF DKG. + /// + /// This performs a lookup of the Ciphersuite to its generators. Since the Ciphersuite is a + /// generic, this takes advantage of `Any`. This static is isolated in a module to ensure + /// correctness can be evaluated solely by reviewing these few lines of code. + /// + /// This is arguably over-engineered as of right now, as we only need generators for Ristretto + /// and N::Curve. By having this HashMap, we enable de-duplication of the Ristretto == N::Curve + /// case, and we automatically support the n-curve case (rather than hard-coding to the 2-curve + /// case). 
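+  ///
+  /// Illustrative usage (here with `Ristretto`; any supported `EvrfCurve` works the same way,
+  /// and the exact generic signature is abbreviated):
+  /// `let generators: &'static EvrfGenerators<Ristretto> = generators::<Ristretto>();`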
+ static GENERATORS: LazyLock>> = + LazyLock::new(|| Mutex::new(HashMap::new())); + + pub(crate) fn generators() -> &'static EvrfGenerators { + GENERATORS + .lock() + .unwrap() + .entry(TypeId::of::()) + .or_insert_with(|| { + // If we haven't prior needed generators for this Ciphersuite, generate new ones + Box::leak(Box::new(EvrfGenerators::::new( + ((MAX_KEY_SHARES_PER_SET * 2 / 3) + 1).try_into().unwrap(), + MAX_KEY_SHARES_PER_SET.try_into().unwrap(), + ))) + }) + .downcast_ref() + .unwrap() + } +} +use generators::generators; + #[derive(Debug)] pub struct KeyConfirmed { pub substrate_keys: Vec>, @@ -29,15 +73,18 @@ pub struct KeyConfirmed { create_db!( KeyGenDb { - ParamsDb: (session: &Session, attempt: u32) -> (ThresholdParams, u16), - // Not scoped to the set since that'd have latter attempts overwrite former - // A former attempt may become the finalized attempt, even if it doesn't in a timely manner - // Overwriting its commitments would be accordingly poor - CommitmentsDb: (key: &KeyGenId) -> HashMap>, - GeneratedKeysDb: (session: &Session, substrate_key: &[u8; 32], network_key: &[u8]) -> Vec, - // These do assume a key is only used once across sets, which holds true so long as a single - // participant is honest in their execution of the protocol - KeysDb: (network_key: &[u8]) -> Vec, + ParamsDb: (session: &Session) -> (u16, Vec<[u8; 32]>, Vec>), + ParticipationDb: (session: &Session) -> ( + HashMap>, + HashMap>, + ), + // GeneratedKeysDb, KeysDb use `()` for their value as we manually serialize their values + // TODO: Don't do that + GeneratedKeysDb: (session: &Session) -> (), + // These do assume a key is only used once across sets, which holds true if the threshold is + // honest + // TODO: Remove this assumption + KeysDb: (network_key: &[u8]) -> (), SessionDb: (network_key: &[u8]) -> Session, NetworkKeyDb: (session: Session) -> Vec, } @@ -65,8 +112,8 @@ impl GeneratedKeysDb { fn save_keys( txn: &mut impl DbTxn, - id: &KeyGenId, - substrate_keys: &[ThresholdCore], + session: &Session, + substrate_keys: &[ThresholdKeys], network_keys: &[ThresholdKeys], ) { let mut keys = Zeroizing::new(vec![]); @@ -74,14 +121,7 @@ impl GeneratedKeysDb { keys.extend(substrate_keys.serialize().as_slice()); keys.extend(network_keys.serialize().as_slice()); } - txn.put( - Self::key( - &id.session, - &substrate_keys[0].group_key().to_bytes(), - network_keys[0].group_key().to_bytes().as_ref(), - ), - keys, - ); + txn.put(Self::key(session), keys); } } @@ -91,11 +131,8 @@ impl KeysDb { session: Session, key_pair: &KeyPair, ) -> (Vec>, Vec>) { - let (keys_vec, keys) = GeneratedKeysDb::read_keys::( - txn, - &GeneratedKeysDb::key(&session, &key_pair.0 .0, key_pair.1.as_ref()), - ) - .unwrap(); + let (keys_vec, keys) = + GeneratedKeysDb::read_keys::(txn, &GeneratedKeysDb::key(&session)).unwrap(); assert_eq!(key_pair.0 .0, keys.0[0].group_key().to_bytes()); assert_eq!( { @@ -130,32 +167,105 @@ impl KeysDb { } } -type SecretShareMachines = - Vec<(SecretShareMachine, SecretShareMachine<::Curve>)>; -type KeyMachines = Vec<(KeyMachine, KeyMachine<::Curve>)>; +/* + On the Serai blockchain, users specify their public keys on the embedded curves. Substrate does + not have the libraries for the embedded curves and is unable to evaluate if the keys are valid + or not. + + We could add the libraries for the embedded curves to the blockchain, yet this would be a + non-trivial scope for what's effectively an embedded context. It'd also permanently bind our + consensus to these arbitrary curves. 
We would have the benefit of being able to also require PoKs + for the keys, ensuring no one uses someone else's key (creating oddities there). Since someone + who uses someone else's key can't actually participate, all it does in effect is give more key + shares to the holder of the private key, and make us unable to rely on eVRF keys as a secure way + to index validators (hence the usage of `Participant` throughout the messages here). + + We could remove invalid keys from the DKG, yet this would create a view of the DKG only the + processor (which does have the embedded curves) has. We'd need to reconcile it with the view of + the DKG which does include all keys (even the invalid keys). + + The easiest solution is to keep the views consistent by replacing invalid keys with valid keys + (which no one has the private key for). This keeps the view consistent. This does prevent those + who posted invalid keys from participating, and receiving their keys, which is the understood and + declared effect of them posting invalid keys. Since at least `t` people must honestly participate + for the DKG to complete, and since their honest participation means they had valid keys, we do + ensure at least `t` people participated and the DKG result can be reconstructed. + + We do lose fault tolerance, yet only by losing those faulty. Accordingly, this is accepted. + + Returns the coerced keys and faulty participants. +*/ +fn coerce_keys( + key_bytes: &[impl AsRef<[u8]>], +) -> (Vec<::G>, Vec) { + fn evrf_key(key: &[u8]) -> Option<::G> { + let mut repr = <::G as GroupEncoding>::Repr::default(); + if repr.as_ref().len() != key.len() { + None?; + } + repr.as_mut().copy_from_slice(key); + let point = Option::<::G>::from(<_>::from_bytes(&repr))?; + if bool::from(point.is_identity()) { + None?; + } + Some(point) + } + + let mut keys = Vec::with_capacity(key_bytes.len()); + let mut faulty = vec![]; + for (i, key) in key_bytes.iter().enumerate() { + let i = Participant::new( + 1 + u16::try_from(i).expect("performing a key gen with more than u16::MAX participants"), + ) + .unwrap(); + keys.push(match evrf_key::(key.as_ref()) { + Some(key) => key, + None => { + // Mark this participant faulty + faulty.push(i); + + // Generate a random key + let mut rng = ChaCha20Rng::from_seed(Blake2s256::digest(key).into()); + loop { + let mut repr = <::G as GroupEncoding>::Repr::default(); + rng.fill_bytes(repr.as_mut()); + if let Some(key) = + Option::<::G>::from(<_>::from_bytes(&repr)) + { + break key; + } + } + } + }); + } + + (keys, faulty) +} #[derive(Debug)] pub struct KeyGen { db: D, - entropy: Zeroizing<[u8; 32]>, - - active_commit: HashMap, Vec>)>, - #[allow(clippy::type_complexity)] - active_share: HashMap, Vec>>)>, + substrate_evrf_private_key: + Zeroizing<<::EmbeddedCurve as Ciphersuite>::F>, + network_evrf_private_key: Zeroizing<<::EmbeddedCurve as Ciphersuite>::F>, } impl KeyGen { #[allow(clippy::new_ret_no_self)] - pub fn new(db: D, entropy: Zeroizing<[u8; 32]>) -> KeyGen { - KeyGen { db, entropy, active_commit: HashMap::new(), active_share: HashMap::new() } + pub fn new( + db: D, + substrate_evrf_private_key: Zeroizing< + <::EmbeddedCurve as Ciphersuite>::F, + >, + network_evrf_private_key: Zeroizing<<::EmbeddedCurve as Ciphersuite>::F>, + ) -> KeyGen { + KeyGen { db, substrate_evrf_private_key, network_evrf_private_key } } pub fn in_set(&self, session: &Session) -> bool { // We determine if we're in set using if we have the parameters for a session's key generation - // The usage of 0 for the attempt is valid so 
long as we aren't malicious and accordingly - // aren't fatally slashed - // TODO: Revisit once we do DKG removals for being offline - ParamsDb::get(&self.db, session, 0).is_some() + // We only have these if we were told to generate a key for this session + ParamsDb::get(&self.db, session).is_some() } #[allow(clippy::type_complexity)] @@ -179,406 +289,351 @@ impl KeyGen { &mut self, txn: &mut D::Transaction<'_>, msg: CoordinatorMessage, - ) -> ProcessorMessage { - const SUBSTRATE_KEY_CONTEXT: &str = "substrate"; - const NETWORK_KEY_CONTEXT: &str = "network"; - let context = |id: &KeyGenId, key| { + ) -> Vec { + const SUBSTRATE_KEY_CONTEXT: &[u8] = b"substrate"; + const NETWORK_KEY_CONTEXT: &[u8] = b"network"; + fn context(session: Session, key_context: &[u8]) -> [u8; 32] { // TODO2: Also embed the chain ID/genesis block - format!( - "Serai Key Gen. Session: {:?}, Network: {:?}, Attempt: {}, Key: {}", - id.session, - N::NETWORK, - id.attempt, - key, - ) - }; - - let rng = |label, id: KeyGenId| { - let mut transcript = RecommendedTranscript::new(label); - transcript.append_message(b"entropy", &self.entropy); - transcript.append_message(b"context", context(&id, "rng")); - ChaCha20Rng::from_seed(transcript.rng_seed(b"rng")) - }; - let coefficients_rng = |id| rng(b"Key Gen Coefficients", id); - let secret_shares_rng = |id| rng(b"Key Gen Secret Shares", id); - let share_rng = |id| rng(b"Key Gen Share", id); - - let key_gen_machines = |id, params: ThresholdParams, shares| { - let mut rng = coefficients_rng(id); - let mut machines = vec![]; - let mut commitments = vec![]; - for s in 0 .. shares { - let params = ThresholdParams::new( - params.t(), - params.n(), - Participant::new(u16::from(params.i()) + s).unwrap(), - ) - .unwrap(); - let substrate = KeyGenMachine::new(params, context(&id, SUBSTRATE_KEY_CONTEXT)) - .generate_coefficients(&mut rng); - let network = KeyGenMachine::new(params, context(&id, NETWORK_KEY_CONTEXT)) - .generate_coefficients(&mut rng); - machines.push((substrate.0, network.0)); - let mut serialized = vec![]; - substrate.1.write(&mut serialized).unwrap(); - network.1.write(&mut serialized).unwrap(); - commitments.push(serialized); - } - (machines, commitments) - }; - - let secret_share_machines = |id, - params: ThresholdParams, - machines: SecretShareMachines, - commitments: HashMap>| - -> Result<_, ProcessorMessage> { - let mut rng = secret_shares_rng(id); - - #[allow(clippy::type_complexity)] - fn handle_machine( - rng: &mut ChaCha20Rng, - id: KeyGenId, - machine: SecretShareMachine, - commitments: HashMap>>, - ) -> Result< - (KeyMachine, HashMap>>), - ProcessorMessage, - > { - match machine.generate_secret_shares(rng, commitments) { - Ok(res) => Ok(res), - Err(e) => match e { - DkgError::ZeroParameter(_, _) | - DkgError::InvalidThreshold(_, _) | - DkgError::InvalidParticipant(_, _) | - DkgError::InvalidSigningSet | - DkgError::InvalidShare { .. } => unreachable!("{e:?}"), - DkgError::InvalidParticipantQuantity(_, _) | - DkgError::DuplicatedParticipant(_) | - DkgError::MissingParticipant(_) => { - panic!("coordinator sent invalid DKG commitments: {e:?}") - } - DkgError::InvalidCommitments(i) => { - Err(ProcessorMessage::InvalidCommitments { id, faulty: i })? 
- } - }, - } - } - - let mut substrate_commitments = HashMap::new(); - let mut network_commitments = HashMap::new(); - for i in 1 ..= params.n() { - let i = Participant::new(i).unwrap(); - let mut commitments = commitments[&i].as_slice(); - substrate_commitments.insert( - i, - EncryptionKeyMessage::>::read(&mut commitments, params) - .map_err(|_| ProcessorMessage::InvalidCommitments { id, faulty: i })?, - ); - network_commitments.insert( - i, - EncryptionKeyMessage::>::read(&mut commitments, params) - .map_err(|_| ProcessorMessage::InvalidCommitments { id, faulty: i })?, - ); - if !commitments.is_empty() { - // Malicious Participant included extra bytes in their commitments - // (a potential DoS attack) - Err(ProcessorMessage::InvalidCommitments { id, faulty: i })?; - } - } - - let mut key_machines = vec![]; - let mut shares = vec![]; - for (m, (substrate_machine, network_machine)) in machines.into_iter().enumerate() { - let actual_i = Participant::new(u16::from(params.i()) + u16::try_from(m).unwrap()).unwrap(); - - let mut substrate_commitments = substrate_commitments.clone(); - substrate_commitments.remove(&actual_i); - let (substrate_machine, mut substrate_shares) = - handle_machine::(&mut rng, id, substrate_machine, substrate_commitments)?; - - let mut network_commitments = network_commitments.clone(); - network_commitments.remove(&actual_i); - let (network_machine, network_shares) = - handle_machine(&mut rng, id, network_machine, network_commitments.clone())?; - - key_machines.push((substrate_machine, network_machine)); - - let mut these_shares: HashMap<_, _> = - substrate_shares.drain().map(|(i, share)| (i, share.serialize())).collect(); - for (i, share) in &mut these_shares { - share.extend(network_shares[i].serialize()); - } - shares.push(these_shares); - } - Ok((key_machines, shares)) - }; + let mut transcript = RecommendedTranscript::new(b"Serai eVRF Key Gen"); + transcript.append_message(b"network", N::ID); + transcript.append_message(b"session", session.0.to_le_bytes()); + transcript.append_message(b"key", key_context); + (&(&transcript.challenge(b"context"))[.. 32]).try_into().unwrap() + } match msg { - CoordinatorMessage::GenerateKey { id, params, shares } => { - info!("Generating new key. ID: {id:?} Params: {params:?} Shares: {shares}"); + CoordinatorMessage::GenerateKey { session, threshold, evrf_public_keys } => { + info!("Generating new key. 
Session: {session:?}"); - // Remove old attempts - if self.active_commit.remove(&id.session).is_none() && - self.active_share.remove(&id.session).is_none() + // Unzip the vector of eVRF keys + let substrate_evrf_public_keys = + evrf_public_keys.iter().map(|(key, _)| *key).collect::>(); + let network_evrf_public_keys = + evrf_public_keys.into_iter().map(|(_, key)| key).collect::>(); + + let mut participation = Vec::with_capacity(2048); + let mut faulty = HashSet::new(); + + // Participate for both Substrate and the network + fn participate( + context: [u8; 32], + threshold: u16, + evrf_public_keys: &[impl AsRef<[u8]>], + evrf_private_key: &Zeroizing<::F>, + faulty: &mut HashSet, + output: &mut impl io::Write, + ) { + let (coerced_keys, faulty_is) = coerce_keys::(evrf_public_keys); + for faulty_i in faulty_is { + faulty.insert(faulty_i); + } + let participation = EvrfDkg::::participate( + &mut OsRng, + generators(), + context, + threshold, + &coerced_keys, + evrf_private_key, + ); + participation.unwrap().write(output).unwrap(); + } + participate::( + context::(session, SUBSTRATE_KEY_CONTEXT), + threshold, + &substrate_evrf_public_keys, + &self.substrate_evrf_private_key, + &mut faulty, + &mut participation, + ); + participate::( + context::(session, NETWORK_KEY_CONTEXT), + threshold, + &network_evrf_public_keys, + &self.network_evrf_private_key, + &mut faulty, + &mut participation, + ); + + // Save the params + ParamsDb::set( + txn, + &session, + &(threshold, substrate_evrf_public_keys, network_evrf_public_keys), + ); + + // Send back our Participation and all faulty parties + let mut faulty = faulty.into_iter().collect::>(); + faulty.sort(); + + let mut res = Vec::with_capacity(faulty.len() + 1); + for faulty in faulty { + res.push(ProcessorMessage::Blame { session, participant: faulty }); + } + res.push(ProcessorMessage::Participation { session, participation }); + + res + } + + CoordinatorMessage::Participation { session, participant, participation } => { + info!("received participation from {:?} for {:?}", participant, session); + + let (threshold, substrate_evrf_public_keys, network_evrf_public_keys) = + ParamsDb::get(txn, &session).unwrap(); + + let n = substrate_evrf_public_keys + .len() + .try_into() + .expect("performing a key gen with more than u16::MAX participants"); + + // Read these `Participation`s + // If they fail basic sanity checks, fail fast + let (substrate_participation, network_participation) = { + let network_participation_start_pos = { + let mut participation = participation.as_slice(); + let start_len = participation.len(); + + let blame = vec![ProcessorMessage::Blame { session, participant }]; + let Ok(substrate_participation) = + Participation::::read(&mut participation, n) + else { + return blame; + }; + let len_at_network_participation_start_pos = participation.len(); + let Ok(network_participation) = Participation::::read(&mut participation, n) + else { + return blame; + }; + + // If they added random noise after their participations, they're faulty + // This prevents DoS by causing a slash upon such spam + if !participation.is_empty() { + return blame; + } + + // If we've already generated these keys, we don't actually need to save these + // participations and continue. 
We solely have to verify them, as to identify malicious + // participants and prevent DoSs, before returning + if txn.get(GeneratedKeysDb::key(&session)).is_some() { + info!("already finished generating a key for {:?}", session); + + match EvrfDkg::::verify( + &mut OsRng, + generators(), + context::(session, SUBSTRATE_KEY_CONTEXT), + threshold, + // Ignores the list of participants who were faulty, as they were prior blamed + &coerce_keys::(&substrate_evrf_public_keys).0, + &HashMap::from([(participant, substrate_participation)]), + ) + .unwrap() + { + VerifyResult::Valid(_) | VerifyResult::NotEnoughParticipants => {} + VerifyResult::Invalid(faulty) => { + assert_eq!(faulty, vec![participant]); + return vec![ProcessorMessage::Blame { session, participant }]; + } + } + + match EvrfDkg::::verify( + &mut OsRng, + generators(), + context::(session, NETWORK_KEY_CONTEXT), + threshold, + // Ignores the list of participants who were faulty, as they were prior blamed + &coerce_keys::(&network_evrf_public_keys).0, + &HashMap::from([(participant, network_participation)]), + ) + .unwrap() + { + VerifyResult::Valid(_) | VerifyResult::NotEnoughParticipants => return vec![], + VerifyResult::Invalid(faulty) => { + assert_eq!(faulty, vec![participant]); + return vec![ProcessorMessage::Blame { session, participant }]; + } + } + } + + // Return the position the network participation starts at + start_len - len_at_network_participation_start_pos + }; + + // Instead of re-serializing the `Participation`s we read, we just use the relevant + // sections of the existing byte buffer + ( + participation[.. network_participation_start_pos].to_vec(), + participation[network_participation_start_pos ..].to_vec(), + ) + }; + + // Since these are valid `Participation`s, save them + let (mut substrate_participations, mut network_participations) = + ParticipationDb::get(txn, &session) + .unwrap_or((HashMap::with_capacity(1), HashMap::with_capacity(1))); + assert!( + substrate_participations.insert(participant, substrate_participation).is_none() && + network_participations.insert(participant, network_participation).is_none(), + "received participation for someone multiple times" + ); + ParticipationDb::set( + txn, + &session, + &(substrate_participations.clone(), network_participations.clone()), + ); + + // This block is taken from the eVRF DKG itself to evaluate the amount participating { - // If we haven't handled this session before, save the params - ParamsDb::set(txn, &id.session, id.attempt, &(params, shares)); - } + let mut participating_weight = 0; + // This uses the Substrate maps as the maps are kept in synchrony + let mut evrf_public_keys_mut = substrate_evrf_public_keys.clone(); + for i in substrate_participations.keys() { + let evrf_public_key = substrate_evrf_public_keys[usize::from(u16::from(*i)) - 1]; - let (machines, commitments) = key_gen_machines(id, params, shares); - self.active_commit.insert(id.session, (machines, commitments.clone())); + // Remove this key from the Vec to prevent double-counting + /* + Double-counting would be a risk if multiple participants shared an eVRF public key + and participated. This code does still allow such participants (in order to let + participants be weighted), and any one of them participating will count as all + participating. This is fine as any one such participant will be able to decrypt + the shares for themselves and all other participants, so this is still a key + generated by an amount of participants who could simply reconstruct the key. 
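+
+            Illustrative example: if validators A and B register the same eVRF public key
+            and A participates, the `retain` below removes that key twice in one pass, so
+            `count` is 2 and B's weight is credited as participating alongside A's.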
+ */ + let start_len = evrf_public_keys_mut.len(); + evrf_public_keys_mut.retain(|key| *key != evrf_public_key); + let end_len = evrf_public_keys_mut.len(); + let count = start_len - end_len; - ProcessorMessage::Commitments { id, commitments } - } - - CoordinatorMessage::Commitments { id, mut commitments } => { - info!("Received commitments for {:?}", id); - - if self.active_share.contains_key(&id.session) { - // We should've been told of a new attempt before receiving commitments again - // The coordinator is either missing messages or repeating itself - // Either way, it's faulty - panic!("commitments when already handled commitments"); - } - - let (params, share_quantity) = ParamsDb::get(txn, &id.session, id.attempt).unwrap(); - - // Unwrap the machines, rebuilding them if we didn't have them in our cache - // We won't if the processor rebooted - // This *may* be inconsistent if we receive a KeyGen for attempt x, then commitments for - // attempt y - // The coordinator is trusted to be proper in this regard - let (prior, our_commitments) = self - .active_commit - .remove(&id.session) - .unwrap_or_else(|| key_gen_machines(id, params, share_quantity)); - - for (i, our_commitments) in our_commitments.into_iter().enumerate() { - assert!(commitments - .insert( - Participant::new(u16::from(params.i()) + u16::try_from(i).unwrap()).unwrap(), - our_commitments, - ) - .is_none()); - } - - CommitmentsDb::set(txn, &id, &commitments); - - match secret_share_machines(id, params, prior, commitments) { - Ok((machines, shares)) => { - self.active_share.insert(id.session, (machines, shares.clone())); - ProcessorMessage::Shares { id, shares } + participating_weight += count; } - Err(e) => e, - } - } - - CoordinatorMessage::Shares { id, shares } => { - info!("Received shares for {:?}", id); - - let (params, share_quantity) = ParamsDb::get(txn, &id.session, id.attempt).unwrap(); - - // Same commentary on inconsistency as above exists - let (machines, our_shares) = self.active_share.remove(&id.session).unwrap_or_else(|| { - let prior = key_gen_machines(id, params, share_quantity).0; - let (machines, shares) = - secret_share_machines(id, params, prior, CommitmentsDb::get(txn, &id).unwrap()) - .expect("got Shares for a key gen which faulted"); - (machines, shares) - }); - - let mut rng = share_rng(id); - - fn handle_machine( - rng: &mut ChaCha20Rng, - id: KeyGenId, - // These are the params of our first share, not this machine's shares - params: ThresholdParams, - m: usize, - machine: KeyMachine, - shares_ref: &mut HashMap, - ) -> Result, ProcessorMessage> { - let params = ThresholdParams::new( - params.t(), - params.n(), - Participant::new(u16::from(params.i()) + u16::try_from(m).unwrap()).unwrap(), - ) - .unwrap(); - - // Parse the shares - let mut shares = HashMap::new(); - for i in 1 ..= params.n() { - let i = Participant::new(i).unwrap(); - let Some(share) = shares_ref.get_mut(&i) else { continue }; - shares.insert( - i, - EncryptedMessage::>::read(share, params).map_err(|_| { - ProcessorMessage::InvalidShare { id, accuser: params.i(), faulty: i, blame: None } - })?, - ); + if participating_weight < usize::from(threshold) { + return vec![]; } - - Ok( - (match machine.calculate_share(rng, shares) { - Ok(res) => res, - Err(e) => match e { - DkgError::ZeroParameter(_, _) | - DkgError::InvalidThreshold(_, _) | - DkgError::InvalidParticipant(_, _) | - DkgError::InvalidSigningSet | - DkgError::InvalidCommitments(_) => unreachable!("{e:?}"), - DkgError::InvalidParticipantQuantity(_, _) | - 
DkgError::DuplicatedParticipant(_) | - DkgError::MissingParticipant(_) => { - panic!("coordinator sent invalid DKG shares: {e:?}") - } - DkgError::InvalidShare { participant, blame } => { - Err(ProcessorMessage::InvalidShare { - id, - accuser: params.i(), - faulty: participant, - blame: Some(blame.map(|blame| blame.serialize())).flatten(), - })? - } - }, - }) - .complete(), - ) } - let mut substrate_keys = vec![]; - let mut network_keys = vec![]; - for (m, machines) in machines.into_iter().enumerate() { - let mut shares_ref: HashMap = - shares[m].iter().map(|(i, shares)| (*i, shares.as_ref())).collect(); - for (i, our_shares) in our_shares.iter().enumerate() { - if m != i { - assert!(shares_ref - .insert( - Participant::new(u16::from(params.i()) + u16::try_from(i).unwrap()).unwrap(), - our_shares - [&Participant::new(u16::from(params.i()) + u16::try_from(m).unwrap()).unwrap()] - .as_ref(), - ) - .is_none()); - } - } - - let these_substrate_keys = - match handle_machine(&mut rng, id, params, m, machines.0, &mut shares_ref) { - Ok(keys) => keys, - Err(msg) => return msg, - }; - let these_network_keys = - match handle_machine(&mut rng, id, params, m, machines.1, &mut shares_ref) { - Ok(keys) => keys, - Err(msg) => return msg, - }; - - for i in 1 ..= params.n() { - let i = Participant::new(i).unwrap(); - let Some(shares) = shares_ref.get(&i) else { continue }; - if !shares.is_empty() { - return ProcessorMessage::InvalidShare { - id, - accuser: these_substrate_keys.params().i(), - faulty: i, - blame: None, - }; - } - } - - let mut these_network_keys = ThresholdKeys::new(these_network_keys); - N::tweak_keys(&mut these_network_keys); - - substrate_keys.push(these_substrate_keys); - network_keys.push(these_network_keys); - } - - let mut generated_substrate_key = None; - let mut generated_network_key = None; - for keys in substrate_keys.iter().zip(&network_keys) { - if generated_substrate_key.is_none() { - generated_substrate_key = Some(keys.0.group_key()); - generated_network_key = Some(keys.1.group_key()); + // If we now have the threshold participating, verify their `Participation`s + fn verify_dkg( + txn: &mut impl DbTxn, + session: Session, + true_if_substrate_false_if_network: bool, + threshold: u16, + evrf_public_keys: &[impl AsRef<[u8]>], + substrate_participations: &mut HashMap>, + network_participations: &mut HashMap>, + ) -> Result, Vec> { + // Parse the `Participation`s + let participations = (if true_if_substrate_false_if_network { + &*substrate_participations } else { - assert_eq!(generated_substrate_key, Some(keys.0.group_key())); - assert_eq!(generated_network_key, Some(keys.1.group_key())); + &*network_participations + }) + .iter() + .map(|(key, participation)| { + ( + *key, + Participation::read( + &mut participation.as_slice(), + evrf_public_keys.len().try_into().unwrap(), + ) + .expect("prior read participation was invalid"), + ) + }) + .collect(); + + // Actually call verify on the DKG + match EvrfDkg::::verify( + &mut OsRng, + generators(), + context::( + session, + if true_if_substrate_false_if_network { + SUBSTRATE_KEY_CONTEXT + } else { + NETWORK_KEY_CONTEXT + }, + ), + threshold, + // Ignores the list of participants who were faulty, as they were prior blamed + &coerce_keys::(evrf_public_keys).0, + &participations, + ) + .unwrap() + { + // If the DKG was valid, return it + VerifyResult::Valid(dkg) => Ok(dkg), + // This DKG had faulty participants, so create blame messages for them + VerifyResult::Invalid(faulty) => { + let mut blames = vec![]; + for participant in faulty { 
+ // Remove from both maps for simplicity's sake + // There's no point in having one DKG complete yet not the other + assert!(substrate_participations.remove(&participant).is_some()); + assert!(network_participations.remove(&participant).is_some()); + blames.push(ProcessorMessage::Blame { session, participant }); + } + // Since we removed `Participation`s, write the updated versions to the database + ParticipationDb::set( + txn, + &session, + &(substrate_participations.clone(), network_participations.clone()), + ); + Err(blames)? + } + VerifyResult::NotEnoughParticipants => { + // This is the first DKG, and we checked we were at the threshold OR + // This is the second DKG, as the first had no invalid participants, so we're still + // at the threshold + panic!("not enough participants despite checking we were at the threshold") + } } } - GeneratedKeysDb::save_keys::(txn, &id, &substrate_keys, &network_keys); + let substrate_dkg = match verify_dkg::( + txn, + session, + true, + threshold, + &substrate_evrf_public_keys, + &mut substrate_participations, + &mut network_participations, + ) { + Ok(dkg) => dkg, + // If we had any blames, immediately return them as necessary for the safety of + // `verify_dkg` (it assumes we don't call it again upon prior errors) + Err(blames) => return blames, + }; - ProcessorMessage::GeneratedKeyPair { - id, - substrate_key: generated_substrate_key.unwrap().to_bytes(), + let network_dkg = match verify_dkg::( + txn, + session, + false, + threshold, + &network_evrf_public_keys, + &mut substrate_participations, + &mut network_participations, + ) { + Ok(dkg) => dkg, + Err(blames) => return blames, + }; + + // Get our keys from each DKG + // TODO: Some of these keys may be decrypted by us, yet not actually meant for us, if + // another validator set our eVRF public key as their eVRF public key. We either need to + // ensure the coordinator tracks amount of shares we're supposed to have by the eVRF public + // keys OR explicitly reduce to the keys we're supposed to have based on our `i` index. 
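+        // (Each `keys` call below yields one `ThresholdKeys` per key share decryptable by the
+        // supplied eVRF private key, hence the `Vec`s.)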
+ let substrate_keys = substrate_dkg.keys(&self.substrate_evrf_private_key); + let mut network_keys = network_dkg.keys(&self.network_evrf_private_key); + // Tweak the keys for the network + for network_keys in &mut network_keys { + N::tweak_keys(network_keys); + } + GeneratedKeysDb::save_keys::(txn, &session, &substrate_keys, &network_keys); + + // Since no one we verified was invalid, and we had the threshold, yield the new keys + vec![ProcessorMessage::GeneratedKeyPair { + session, + substrate_key: substrate_keys[0].group_key().to_bytes(), // TODO: This can be made more efficient since tweaked keys may be a subset of keys - network_key: generated_network_key.unwrap().to_bytes().as_ref().to_vec(), - } - } - - CoordinatorMessage::VerifyBlame { id, accuser, accused, share, blame } => { - let params = ParamsDb::get(txn, &id.session, id.attempt).unwrap().0; - - let mut share_ref = share.as_slice(); - let Ok(substrate_share) = EncryptedMessage::< - Ristretto, - SecretShare<::F>, - >::read(&mut share_ref, params) else { - return ProcessorMessage::Blame { id, participant: accused }; - }; - let Ok(network_share) = EncryptedMessage::< - N::Curve, - SecretShare<::F>, - >::read(&mut share_ref, params) else { - return ProcessorMessage::Blame { id, participant: accused }; - }; - if !share_ref.is_empty() { - return ProcessorMessage::Blame { id, participant: accused }; - } - - let mut substrate_commitment_msgs = HashMap::new(); - let mut network_commitment_msgs = HashMap::new(); - let commitments = CommitmentsDb::get(txn, &id).unwrap(); - for (i, commitments) in commitments { - let mut commitments = commitments.as_slice(); - substrate_commitment_msgs - .insert(i, EncryptionKeyMessage::<_, _>::read(&mut commitments, params).unwrap()); - network_commitment_msgs - .insert(i, EncryptionKeyMessage::<_, _>::read(&mut commitments, params).unwrap()); - } - - // There is a mild DoS here where someone with a valid blame bloats it to the maximum size - // Given the ambiguity, and limited potential to DoS (this being called means *someone* is - // getting fatally slashed) voids the need to ensure blame is minimal - let substrate_blame = - blame.clone().and_then(|blame| EncryptionKeyProof::read(&mut blame.as_slice()).ok()); - let network_blame = - blame.clone().and_then(|blame| EncryptionKeyProof::read(&mut blame.as_slice()).ok()); - - let substrate_blame = AdditionalBlameMachine::new( - &mut rand_core::OsRng, - context(&id, SUBSTRATE_KEY_CONTEXT), - params.n(), - substrate_commitment_msgs, - ) - .unwrap() - .blame(accuser, accused, substrate_share, substrate_blame); - let network_blame = AdditionalBlameMachine::new( - &mut rand_core::OsRng, - context(&id, NETWORK_KEY_CONTEXT), - params.n(), - network_commitment_msgs, - ) - .unwrap() - .blame(accuser, accused, network_share, network_blame); - - // If the accused was blamed for either, mark them as at fault - if (substrate_blame == accused) || (network_blame == accused) { - return ProcessorMessage::Blame { id, participant: accused }; - } - - ProcessorMessage::Blame { id, participant: accuser } + network_key: network_keys[0].group_key().to_bytes().as_ref().to_vec(), + }] } } } diff --git a/processor/src/main.rs b/processor/src/main.rs index e0d97aa6..2d05ad4d 100644 --- a/processor/src/main.rs +++ b/processor/src/main.rs @@ -2,8 +2,11 @@ use std::{time::Duration, collections::HashMap}; use zeroize::{Zeroize, Zeroizing}; -use transcript::{Transcript, RecommendedTranscript}; -use ciphersuite::{group::GroupEncoding, Ciphersuite}; +use ciphersuite::{ + 
group::{ff::PrimeField, GroupEncoding}, + Ciphersuite, Ristretto, +}; +use dkg::evrf::EvrfCurve; use log::{info, warn}; use tokio::time::sleep; @@ -128,7 +131,7 @@ struct TributaryMutable { `Burn`s. Substrate also decides when to move to a new multisig, hence why this entire object is - Substate-mutable. + Substrate-mutable. Since MultisigManager should always be verifiable, and the Tributary is temporal, MultisigManager being entirely SubstrateMutable shows proper data pipe-lining. @@ -224,7 +227,9 @@ async fn handle_coordinator_msg( match msg.msg.clone() { CoordinatorMessage::KeyGen(msg) => { - coordinator.send(tributary_mutable.key_gen.handle(txn, msg)).await; + for msg in tributary_mutable.key_gen.handle(txn, msg) { + coordinator.send(msg).await; + } } CoordinatorMessage::Sign(msg) => { @@ -485,41 +490,31 @@ async fn boot( network: &N, coordinator: &mut Co, ) -> (D, TributaryMutable, SubstrateMutable) { - let mut entropy_transcript = { - let entropy = Zeroizing::new(env::var("ENTROPY").expect("entropy wasn't specified")); - if entropy.len() != 64 { - panic!("entropy isn't the right length"); + fn read_key_from_env(label: &'static str) -> Zeroizing { + let key_hex = + Zeroizing::new(env::var(label).unwrap_or_else(|| panic!("{label} wasn't provided"))); + let bytes = Zeroizing::new( + hex::decode(key_hex).unwrap_or_else(|_| panic!("{label} wasn't a valid hex string")), + ); + + let mut repr = ::Repr::default(); + if repr.as_ref().len() != bytes.len() { + panic!("{label} wasn't the correct length"); } - let mut bytes = - Zeroizing::new(hex::decode(entropy).map_err(|_| ()).expect("entropy wasn't hex-formatted")); - if bytes.len() != 32 { - bytes.zeroize(); - panic!("entropy wasn't 32 bytes"); - } - let mut entropy = Zeroizing::new([0; 32]); - let entropy_mut: &mut [u8] = entropy.as_mut(); - entropy_mut.copy_from_slice(bytes.as_ref()); - - let mut transcript = RecommendedTranscript::new(b"Serai Processor Entropy"); - transcript.append_message(b"entropy", entropy); - transcript - }; - - // TODO: Save a hash of the entropy to the DB and make sure the entropy didn't change - - let mut entropy = |label| { - let mut challenge = entropy_transcript.challenge(label); - let mut res = Zeroizing::new([0; 32]); - let res_mut: &mut [u8] = res.as_mut(); - res_mut.copy_from_slice(&challenge[.. 32]); - challenge.zeroize(); + repr.as_mut().copy_from_slice(bytes.as_slice()); + let res = Zeroizing::new( + Option::from(::from_repr(repr)) + .unwrap_or_else(|| panic!("{label} wasn't a valid scalar")), + ); + repr.as_mut().zeroize(); res - }; + } - // We don't need to re-issue GenerateKey orders because the coordinator is expected to - // schedule/notify us of new attempts - // TODO: Is this above comment still true? Not at all due to the planned lack of DKG timeouts? 
- let key_gen = KeyGen::::new(raw_db.clone(), entropy(b"key-gen_entropy")); + let key_gen = KeyGen::::new( + raw_db.clone(), + read_key_from_env::<::EmbeddedCurve>("SUBSTRATE_EVRF_KEY"), + read_key_from_env::<::EmbeddedCurve>("NETWORK_EVRF_KEY"), + ); let (multisig_manager, current_keys, actively_signing) = MultisigManager::new(raw_db, network).await; diff --git a/processor/src/networks/mod.rs b/processor/src/networks/mod.rs index ee3cd24a..81838ae1 100644 --- a/processor/src/networks/mod.rs +++ b/processor/src/networks/mod.rs @@ -5,6 +5,7 @@ use async_trait::async_trait; use thiserror::Error; use frost::{ + dkg::evrf::EvrfCurve, curve::{Ciphersuite, Curve}, ThresholdKeys, sign::PreprocessMachine, @@ -240,9 +241,11 @@ pub struct PreparedSend { } #[async_trait] +#[rustfmt::skip] pub trait Network: 'static + Send + Sync + Clone + PartialEq + Debug { /// The elliptic curve used for this network. - type Curve: Curve; + type Curve: Curve + + EvrfCurve::F>>>; /// The type representing the transaction for this network. type Transaction: Transaction; // TODO: Review use of diff --git a/processor/src/networks/monero.rs b/processor/src/networks/monero.rs index 154702fe..6ffa29df 100644 --- a/processor/src/networks/monero.rs +++ b/processor/src/networks/monero.rs @@ -663,7 +663,7 @@ impl Network for Monero { keys: ThresholdKeys, transaction: SignableTransaction, ) -> Result { - match transaction.0.clone().multisig(&keys) { + match transaction.0.clone().multisig(keys) { Ok(machine) => Ok(machine), Err(e) => panic!("failed to create a multisig machine for TX: {e}"), } diff --git a/processor/src/tests/key_gen.rs b/processor/src/tests/key_gen.rs index 047e006a..43f0de05 100644 --- a/processor/src/tests/key_gen.rs +++ b/processor/src/tests/key_gen.rs @@ -2,10 +2,13 @@ use std::collections::HashMap; use zeroize::Zeroizing; -use rand_core::{RngCore, OsRng}; +use rand_core::OsRng; -use ciphersuite::group::GroupEncoding; -use frost::{Participant, ThresholdParams, tests::clone_without}; +use ciphersuite::{ + group::{ff::Field, GroupEncoding}, + Ciphersuite, Ristretto, +}; +use dkg::{Participant, ThresholdParams, evrf::*}; use serai_db::{DbTxn, Db, MemDb}; @@ -18,113 +21,102 @@ use crate::{ key_gen::{KeyConfirmed, KeyGen}, }; -const ID: KeyGenId = KeyGenId { session: Session(1), attempt: 3 }; +const SESSION: Session = Session(1); pub fn test_key_gen() { - let mut entropies = HashMap::new(); let mut dbs = HashMap::new(); + let mut substrate_evrf_keys = HashMap::new(); + let mut network_evrf_keys = HashMap::new(); + let mut evrf_public_keys = vec![]; let mut key_gens = HashMap::new(); for i in 1 ..= 5 { - let mut entropy = Zeroizing::new([0; 32]); - OsRng.fill_bytes(entropy.as_mut()); - entropies.insert(i, entropy); let db = MemDb::new(); dbs.insert(i, db.clone()); - key_gens.insert(i, KeyGen::::new(db, entropies[&i].clone())); + + let substrate_evrf_key = Zeroizing::new( + <::EmbeddedCurve as Ciphersuite>::F::random(&mut OsRng), + ); + substrate_evrf_keys.insert(i, substrate_evrf_key.clone()); + let network_evrf_key = Zeroizing::new( + <::EmbeddedCurve as Ciphersuite>::F::random(&mut OsRng), + ); + network_evrf_keys.insert(i, network_evrf_key.clone()); + + evrf_public_keys.push(( + (<::EmbeddedCurve as Ciphersuite>::generator() * *substrate_evrf_key) + .to_bytes(), + (<::EmbeddedCurve as Ciphersuite>::generator() * *network_evrf_key) + .to_bytes() + .as_ref() + .to_vec(), + )); + key_gens + .insert(i, KeyGen::::new(db, substrate_evrf_key.clone(), network_evrf_key.clone())); } - let mut all_commitments = 
HashMap::new(); + let mut participations = HashMap::new(); for i in 1 ..= 5 { let key_gen = key_gens.get_mut(&i).unwrap(); let mut txn = dbs.get_mut(&i).unwrap().txn(); - if let ProcessorMessage::Commitments { id, mut commitments } = key_gen.handle( + let mut msgs = key_gen.handle( &mut txn, CoordinatorMessage::GenerateKey { - id: ID, - params: ThresholdParams::new(3, 5, Participant::new(u16::try_from(i).unwrap()).unwrap()) - .unwrap(), - shares: 1, + session: SESSION, + threshold: 3, + evrf_public_keys: evrf_public_keys.clone(), }, - ) { - assert_eq!(id, ID); - assert_eq!(commitments.len(), 1); - all_commitments - .insert(Participant::new(u16::try_from(i).unwrap()).unwrap(), commitments.swap_remove(0)); - } else { - panic!("didn't get commitments back"); - } + ); + assert_eq!(msgs.len(), 1); + let ProcessorMessage::Participation { session, participation } = msgs.swap_remove(0) else { + panic!("didn't get a participation") + }; + assert_eq!(session, SESSION); + participations.insert(i, participation); txn.commit(); } - // 1 is rebuilt on every step - // 2 is rebuilt here - // 3 ... are rebuilt once, one at each of the following steps - let rebuild = |key_gens: &mut HashMap<_, _>, dbs: &HashMap<_, MemDb>, i| { - key_gens.remove(&i); - key_gens.insert(i, KeyGen::::new(dbs[&i].clone(), entropies[&i].clone())); - }; - rebuild(&mut key_gens, &dbs, 1); - rebuild(&mut key_gens, &dbs, 2); - - let mut all_shares = HashMap::new(); - for i in 1 ..= 5 { - let key_gen = key_gens.get_mut(&i).unwrap(); - let mut txn = dbs.get_mut(&i).unwrap().txn(); - let i = Participant::new(u16::try_from(i).unwrap()).unwrap(); - if let ProcessorMessage::Shares { id, mut shares } = key_gen.handle( - &mut txn, - CoordinatorMessage::Commitments { id: ID, commitments: clone_without(&all_commitments, &i) }, - ) { - assert_eq!(id, ID); - assert_eq!(shares.len(), 1); - all_shares.insert(i, shares.swap_remove(0)); - } else { - panic!("didn't get shares back"); - } - txn.commit(); - } - - // Rebuild 1 and 3 - rebuild(&mut key_gens, &dbs, 1); - rebuild(&mut key_gens, &dbs, 3); - let mut res = None; for i in 1 ..= 5 { let key_gen = key_gens.get_mut(&i).unwrap(); let mut txn = dbs.get_mut(&i).unwrap().txn(); - let i = Participant::new(u16::try_from(i).unwrap()).unwrap(); - if let ProcessorMessage::GeneratedKeyPair { id, substrate_key, network_key } = key_gen.handle( - &mut txn, - CoordinatorMessage::Shares { - id: ID, - shares: vec![all_shares - .iter() - .filter_map(|(l, shares)| if i == *l { None } else { Some((*l, shares[&i].clone())) }) - .collect()], - }, - ) { - assert_eq!(id, ID); - if res.is_none() { - res = Some((substrate_key, network_key.clone())); + for j in 1 ..= 5 { + let mut msgs = key_gen.handle( + &mut txn, + CoordinatorMessage::Participation { + session: SESSION, + participant: Participant::new(u16::try_from(j).unwrap()).unwrap(), + participation: participations[&j].clone(), + }, + ); + if j != 3 { + assert!(msgs.is_empty()); + } + if j == 3 { + assert_eq!(msgs.len(), 1); + let ProcessorMessage::GeneratedKeyPair { session, substrate_key, network_key } = + msgs.swap_remove(0) + else { + panic!("didn't get a generated key pair") + }; + assert_eq!(session, SESSION); + + if res.is_none() { + res = Some((substrate_key, network_key.clone())); + } + assert_eq!(res.as_ref().unwrap(), &(substrate_key, network_key)); } - assert_eq!(res.as_ref().unwrap(), &(substrate_key, network_key)); - } else { - panic!("didn't get key back"); } + txn.commit(); } let res = res.unwrap(); - // Rebuild 1 and 4 - rebuild(&mut key_gens, &dbs, 
1); - rebuild(&mut key_gens, &dbs, 4); - for i in 1 ..= 5 { let key_gen = key_gens.get_mut(&i).unwrap(); let mut txn = dbs.get_mut(&i).unwrap().txn(); let KeyConfirmed { mut substrate_keys, mut network_keys } = key_gen.confirm( &mut txn, - ID.session, + SESSION, &KeyPair(sr25519::Public(res.0), res.1.clone().try_into().unwrap()), ); txn.commit(); diff --git a/spec/DKG Exclusions.md b/spec/DKG Exclusions.md deleted file mode 100644 index 1677da8a..00000000 --- a/spec/DKG Exclusions.md +++ /dev/null @@ -1,23 +0,0 @@ -Upon an issue with the DKG, the honest validators must remove the malicious -validators. Ideally, a threshold signature would be used, yet that would require -a threshold key (which would require authentication by a MuSig signature). A -MuSig signature which specifies the signing set (or rather, the excluded -signers) achieves the most efficiency. - -While that resolves the on-chain behavior, the Tributary also has to perform -exclusion. This has the following forms: - -1) Rejecting further transactions (required) -2) Rejecting further participation in Tendermint - -With regards to rejecting further participation in Tendermint, it's *ideal* to -remove the validator from the list of validators. Each validator removed from -participation, yet not from the list of validators, increases the likelihood of -the network failing to form consensus. - -With regards to the economic security, an honest 67% may remove a faulty -(explicitly or simply offline) 33%, letting 67% of the remaining 67% (4/9ths) -take control of the associated private keys. In such a case, the malicious -parties are defined as the 4/9ths of validators with access to the private key -and the 33% removed (who together form >67% of the originally intended -validator set and have presumably provided enough stake to cover losses). diff --git a/spec/cryptography/Distributed Key Generation.md b/spec/cryptography/Distributed Key Generation.md index fae5ff90..d0f209c1 100644 --- a/spec/cryptography/Distributed Key Generation.md +++ b/spec/cryptography/Distributed Key Generation.md @@ -1,35 +1,7 @@ # Distributed Key Generation -Serai uses a modification of Pedersen's Distributed Key Generation, which is -actually Feldman's Verifiable Secret Sharing Scheme run by every participant, as -described in the FROST paper. The modification included in FROST was to include -a Schnorr Proof of Knowledge for coefficient zero, preventing rogue key attacks. -This results in a two-round protocol. - -### Encryption - -In order to protect the secret shares during communication, the `dkg` library -establishes a public key for encryption at the start of a given protocol. -Every encrypted message (such as the secret shares) then includes a per-message -encryption key. These two keys are used in an Elliptic-curve Diffie-Hellman -handshake to derive a shared key. This shared key is then hashed to obtain a key -and IV for use in a ChaCha20 stream cipher instance, which is xor'd against a -message to encrypt it. - -### Blame - -Since each message has a distinct key attached, and accordingly a distinct -shared key, it's possible to reveal the shared key for a specific message -without revealing any other message's decryption keys. This is utilized when a -participant misbehaves. A participant who receives an invalid encrypted message -publishes its key, able to without concern for side effects, With the key -published, all participants can decrypt the message in order to decide blame. 
-
-While key reuse by a participant is considered as them revealing the messages
-themselves, and therefore out of scope, there is an attack where a malicious
-adversary claims another participant's encryption key. They'll fail to encrypt
-their message, and the recipient will issue a blame statement. This blame
-statement, intended to reveal the malicious adversary, also reveals the message
-by the participant whose keys were co-opted. To resolve this, a
-proof-of-possession is also included with encrypted messages, ensuring only
-those actually with per-message keys can claim to use them.
+Serai uses a modification of the one-round Distributed Key Generation described
+in the [eVRF](https://eprint.iacr.org/2024/397) paper. We only require a
+threshold to participate, sacrificing unbiasedness for robustness, and implement
+a verifiable encryption scheme such that anyone can verify a ciphertext
+encrypts the expected secret share.
diff --git a/spec/processor/Processor.md b/spec/processor/Processor.md
index ca8cf428..55d3baf3 100644
--- a/spec/processor/Processor.md
+++ b/spec/processor/Processor.md
@@ -9,29 +9,23 @@ This document primarily discusses its flow with regards to the coordinator.
 
 ### Generate Key
 
 On `key_gen::CoordinatorMessage::GenerateKey`, the processor begins a pair of
-instances of the distributed key generation protocol specified in the FROST
-paper.
+instances of the distributed key generation protocol.
 
-The first instance is for a key to use on the external network. The second
-instance is for a Ristretto public key used to publish data to the Serai
-blockchain. This pair of FROST DKG instances is considered a single instance of
-Serai's overall key generation protocol.
+The first instance is for a Ristretto public key used to publish data to the
+Serai blockchain. The second instance is for a key to use on the external
+network. This pair of DKG instances is considered a single instance of Serai's
+overall DKG protocol.
 
-The commitments for both protocols are sent to the coordinator in a single
-`key_gen::ProcessorMessage::Commitments`.
+The participations in both protocols are sent to the coordinator in
+`key_gen::ProcessorMessage::Participation` messages, individually, as they come
+in.
 
-### Key Gen Commitments
+### Key Gen Participations
 
-On `key_gen::CoordinatorMessage::Commitments`, the processor continues the
-specified key generation instance. The secret shares for each fellow
-participant are sent to the coordinator in a
-`key_gen::ProcessorMessage::Shares`.
-
-#### Key Gen Shares
-
-On `key_gen::CoordinatorMessage::Shares`, the processor completes the specified
-key generation instance. The generated key pair is sent to the coordinator in a
-`key_gen::ProcessorMessage::GeneratedKeyPair`.
+On `key_gen::CoordinatorMessage::Participation`, the processor stores the
+contained participation after verifying it is well-formed. Once it receives `t`
+honest participations, the processor completes the DKG and sends the generated
+key pair to the coordinator in a `key_gen::ProcessorMessage::GeneratedKeyPair`.
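+
+A rough sketch of this flow (illustrative only, simplified from
+`processor/src/key_gen.rs`; names, generics, and error handling abbreviated):
+
+```rust
+use dkg::evrf::*; // EvrfDkg, VerifyResult
+
+// On GenerateKey: produce and send our participation for a DKG instance.
+let participation = EvrfDkg::<Ristretto>::participate(
+  &mut OsRng, generators, context, threshold, &evrf_public_keys, &evrf_private_key,
+)?;
+// Serialized and sent as ProcessorMessage::Participation { session, participation }.
+
+// On each Participation: store it, and once a threshold of participants have
+// been stored, verify the stored set together.
+match EvrfDkg::<Ristretto>::verify(
+  &mut OsRng, generators, context, threshold, &evrf_public_keys, &participations,
+)? {
+  // Sufficient valid participations: derive our ThresholdKeys and send
+  // ProcessorMessage::GeneratedKeyPair.
+  VerifyResult::Valid(dkg) => { /* ... */ }
+  // Send ProcessorMessage::Blame for each faulty participant.
+  VerifyResult::Invalid(faulty) => { /* ... */ }
+  // Wait for further participations.
+  VerifyResult::NotEnoughParticipants => { /* ... */ }
+}
+```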
### Confirm Key Pair diff --git a/substrate/abi/Cargo.toml b/substrate/abi/Cargo.toml index 072f7460..ea26485f 100644 --- a/substrate/abi/Cargo.toml +++ b/substrate/abi/Cargo.toml @@ -16,8 +16,10 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] } -scale-info = { version = "2", default-features = false, features = ["derive"] } +bitvec = { version = "1", default-features = false, features = ["alloc", "serde"] } + +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive", "bit-vec"] } +scale-info = { version = "2", default-features = false, features = ["derive", "bit-vec"] } borsh = { version = "1", default-features = false, features = ["derive", "de_strict_order"], optional = true } serde = { version = "1", default-features = false, features = ["derive", "alloc"], optional = true } @@ -40,6 +42,8 @@ serai-signals-primitives = { path = "../signals/primitives", version = "0.1", de [features] std = [ + "bitvec/std", + "scale/std", "scale-info/std", diff --git a/substrate/abi/src/validator_sets.rs b/substrate/abi/src/validator_sets.rs index 1e1e3359..7a7bdc00 100644 --- a/substrate/abi/src/validator_sets.rs +++ b/substrate/abi/src/validator_sets.rs @@ -11,10 +11,14 @@ use serai_validator_sets_primitives::*; pub enum Call { set_keys { network: NetworkId, - removed_participants: BoundedVec>, key_pair: KeyPair, + signature_participants: bitvec::vec::BitVec, signature: Signature, }, + set_embedded_elliptic_curve_key { + embedded_elliptic_curve: EmbeddedEllipticCurve, + key: BoundedVec>, + }, report_slashes { network: NetworkId, slashes: BoundedVec<(SeraiAddress, u32), ConstU32<{ MAX_KEY_SHARES_PER_SET / 3 }>>, diff --git a/substrate/client/Cargo.toml b/substrate/client/Cargo.toml index 629312c0..e653c9af 100644 --- a/substrate/client/Cargo.toml +++ b/substrate/client/Cargo.toml @@ -20,6 +20,8 @@ workspace = true zeroize = "^1.5" thiserror = { version = "1", optional = true } +bitvec = { version = "1", default-features = false, features = ["alloc", "serde"] } + hex = "0.4" scale = { package = "parity-scale-codec", version = "3" } serde = { version = "1", features = ["derive"], optional = true } diff --git a/substrate/client/src/serai/validator_sets.rs b/substrate/client/src/serai/validator_sets.rs index ec67bae0..89990406 100644 --- a/substrate/client/src/serai/validator_sets.rs +++ b/substrate/client/src/serai/validator_sets.rs @@ -1,13 +1,14 @@ use scale::Encode; use sp_core::sr25519::{Public, Signature}; +use sp_runtime::BoundedVec; use serai_abi::primitives::Amount; pub use serai_abi::validator_sets::primitives; -use primitives::{Session, ValidatorSet, KeyPair}; +use primitives::{MAX_KEY_LEN, Session, ValidatorSet, KeyPair}; use crate::{ - primitives::{NetworkId, SeraiAddress}, + primitives::{EmbeddedEllipticCurve, NetworkId, SeraiAddress}, Transaction, Serai, TemporalSerai, SeraiError, }; @@ -107,6 +108,21 @@ impl<'a> SeraiValidatorSets<'a> { self.0.storage(PALLET, "CurrentSession", network).await } + pub async fn embedded_elliptic_curve_key( + &self, + validator: Public, + embedded_elliptic_curve: EmbeddedEllipticCurve, + ) -> Result>, SeraiError> { + self + .0 + .storage( + PALLET, + "EmbeddedEllipticCurveKeys", + (sp_core::hashing::blake2_128(&validator.encode()), validator, embedded_elliptic_curve), + ) + .await + } + pub async fn participants( &self, network: NetworkId, @@ -188,21 +204,30 @@ impl<'a> 
SeraiValidatorSets<'a> { pub fn set_keys( network: NetworkId, - removed_participants: sp_runtime::BoundedVec< - SeraiAddress, - sp_core::ConstU32<{ primitives::MAX_KEY_SHARES_PER_SET / 3 }>, - >, key_pair: KeyPair, + signature_participants: bitvec::vec::BitVec, signature: Signature, ) -> Transaction { Serai::unsigned(serai_abi::Call::ValidatorSets(serai_abi::validator_sets::Call::set_keys { network, - removed_participants, key_pair, + signature_participants, signature, })) } + pub fn set_embedded_elliptic_curve_key( + embedded_elliptic_curve: EmbeddedEllipticCurve, + key: BoundedVec>, + ) -> serai_abi::Call { + serai_abi::Call::ValidatorSets( + serai_abi::validator_sets::Call::set_embedded_elliptic_curve_key { + embedded_elliptic_curve, + key, + }, + ) + } + pub fn allocate(network: NetworkId, amount: Amount) -> serai_abi::Call { serai_abi::Call::ValidatorSets(serai_abi::validator_sets::Call::allocate { network, amount }) } diff --git a/substrate/client/tests/common/validator_sets.rs b/substrate/client/tests/common/validator_sets.rs index 3238501a..c3b66c0d 100644 --- a/substrate/client/tests/common/validator_sets.rs +++ b/substrate/client/tests/common/validator_sets.rs @@ -5,6 +5,8 @@ use zeroize::Zeroizing; use rand_core::OsRng; use sp_core::{ + ConstU32, + bounded_vec::BoundedVec, sr25519::{Pair, Signature}, Pair as PairTrait, }; @@ -14,8 +16,9 @@ use frost::dkg::musig::musig; use schnorrkel::Schnorrkel; use serai_client::{ + primitives::EmbeddedEllipticCurve, validator_sets::{ - primitives::{ValidatorSet, KeyPair, musig_context, set_keys_message}, + primitives::{MAX_KEY_LEN, ValidatorSet, KeyPair, musig_context, set_keys_message}, ValidatorSetsEvent, }, Amount, Serai, SeraiValidatorSets, @@ -58,7 +61,7 @@ pub async fn set_keys( let sig = frost::tests::sign_without_caching( &mut OsRng, frost::tests::algorithm_machines(&mut OsRng, &Schnorrkel::new(b"substrate"), &musig_keys), - &set_keys_message(&set, &[], &key_pair), + &set_keys_message(&set, &key_pair), ); // Set the key pair @@ -66,8 +69,8 @@ pub async fn set_keys( serai, &SeraiValidatorSets::set_keys( set.network, - vec![].try_into().unwrap(), key_pair.clone(), + bitvec::bitvec!(u8, bitvec::prelude::Lsb0; 1; musig_keys.len()), Signature(sig.to_bytes()), ), ) @@ -82,6 +85,24 @@ pub async fn set_keys( block } +#[allow(dead_code)] +pub async fn set_embedded_elliptic_curve_key( + serai: &Serai, + pair: &Pair, + embedded_elliptic_curve: EmbeddedEllipticCurve, + key: BoundedVec>, + nonce: u32, +) -> [u8; 32] { + // get the call + let tx = serai.sign( + pair, + SeraiValidatorSets::set_embedded_elliptic_curve_key(embedded_elliptic_curve, key), + nonce, + 0, + ); + publish_tx(serai, &tx).await +} + #[allow(dead_code)] pub async fn allocate_stake( serai: &Serai, diff --git a/substrate/client/tests/validator_sets.rs b/substrate/client/tests/validator_sets.rs index c2c6c509..a2ccf22b 100644 --- a/substrate/client/tests/validator_sets.rs +++ b/substrate/client/tests/validator_sets.rs @@ -7,7 +7,8 @@ use sp_core::{ use serai_client::{ primitives::{ - NETWORKS, NetworkId, BlockHash, insecure_pair_from_name, FAST_EPOCH_DURATION, TARGET_BLOCK_TIME, + FAST_EPOCH_DURATION, TARGET_BLOCK_TIME, NETWORKS, EmbeddedEllipticCurve, NetworkId, BlockHash, + insecure_pair_from_name, }, validator_sets::{ primitives::{Session, ValidatorSet, KeyPair}, @@ -23,7 +24,7 @@ use serai_client::{ mod common; use common::{ tx::publish_tx, - validator_sets::{allocate_stake, deallocate_stake, set_keys}, + validator_sets::{set_embedded_elliptic_curve_key, allocate_stake, 
deallocate_stake, set_keys}, }; fn get_random_key_pair() -> KeyPair { @@ -223,12 +224,39 @@ async fn validator_set_rotation() { // add 1 participant let last_participant = accounts[4].clone(); + + // If this is the first iteration, set embedded elliptic curve keys + if i == 0 { + for (i, embedded_elliptic_curve) in + [EmbeddedEllipticCurve::Embedwards25519, EmbeddedEllipticCurve::Secq256k1] + .into_iter() + .enumerate() + { + set_embedded_elliptic_curve_key( + &serai, + &last_participant, + embedded_elliptic_curve, + vec![ + 0; + match embedded_elliptic_curve { + EmbeddedEllipticCurve::Embedwards25519 => 32, + EmbeddedEllipticCurve::Secq256k1 => 33, + } + ] + .try_into() + .unwrap(), + i.try_into().unwrap(), + ) + .await; + } + } + let hash = allocate_stake( &serai, network, key_shares[&network], &last_participant, - i.try_into().unwrap(), + (2 + i).try_into().unwrap(), ) .await; participants.push(last_participant.public()); diff --git a/substrate/node/Cargo.toml b/substrate/node/Cargo.toml index 0e551c72..5da8ce85 100644 --- a/substrate/node/Cargo.toml +++ b/substrate/node/Cargo.toml @@ -27,6 +27,10 @@ log = "0.4" schnorrkel = "0.11" +ciphersuite = { path = "../../crypto/ciphersuite" } +embedwards25519 = { path = "../../crypto/evrf/embedwards25519" } +secq256k1 = { path = "../../crypto/evrf/secq256k1" } + libp2p = "0.52" sp-core = { git = "https://github.com/serai-dex/substrate" } diff --git a/substrate/node/src/chain_spec.rs b/substrate/node/src/chain_spec.rs index e67674cc..ddc501b9 100644 --- a/substrate/node/src/chain_spec.rs +++ b/substrate/node/src/chain_spec.rs @@ -1,13 +1,17 @@ use core::marker::PhantomData; -use std::collections::HashSet; -use sp_core::{Decode, Pair as PairTrait, sr25519::Public}; +use sp_core::Pair as PairTrait; use sc_service::ChainType; +use ciphersuite::{group::GroupEncoding, Ciphersuite}; +use embedwards25519::Embedwards25519; +use secq256k1::Secq256k1; + use serai_runtime::{ - primitives::*, WASM_BINARY, BABE_GENESIS_EPOCH_CONFIG, RuntimeGenesisConfig, SystemConfig, - CoinsConfig, ValidatorSetsConfig, SignalsConfig, BabeConfig, GrandpaConfig, EmissionsConfig, + primitives::*, validator_sets::AllEmbeddedEllipticCurveKeysAtGenesis, WASM_BINARY, + BABE_GENESIS_EPOCH_CONFIG, RuntimeGenesisConfig, SystemConfig, CoinsConfig, ValidatorSetsConfig, + SignalsConfig, BabeConfig, GrandpaConfig, EmissionsConfig, }; pub type ChainSpec = sc_service::GenericChainSpec; @@ -16,6 +20,11 @@ fn account_from_name(name: &'static str) -> PublicKey { insecure_pair_from_name(name).public() } +fn insecure_arbitrary_public_key_from_name(name: &'static str) -> Vec { + let key = insecure_arbitrary_key_from_name::(name); + (C::generator() * key).to_bytes().as_ref().to_vec() +} + fn wasm_binary() -> Vec { // TODO: Accept a config of runtime path const WASM_PATH: &str = "/runtime/serai.wasm"; @@ -32,7 +41,21 @@ fn devnet_genesis( validators: &[&'static str], endowed_accounts: Vec, ) -> RuntimeGenesisConfig { - let validators = validators.iter().map(|name| account_from_name(name)).collect::>(); + let validators = validators + .iter() + .map(|name| { + ( + account_from_name(name), + AllEmbeddedEllipticCurveKeysAtGenesis { + embedwards25519: insecure_arbitrary_public_key_from_name::(name) + .try_into() + .unwrap(), + secq256k1: insecure_arbitrary_public_key_from_name::(name).try_into().unwrap(), + }, + ) + }) + .collect::>(); + RuntimeGenesisConfig { system: SystemConfig { code: wasm_binary.to_vec(), _config: PhantomData }, @@ -68,21 +91,22 @@ fn devnet_genesis( NetworkId::Monero => 
(NetworkId::Monero, Amount(100_000 * 10_u64.pow(8))),
         })
         .collect(),
-      participants: validators.clone(),
+      participants: validators.iter().map(|(validator, _)| *validator).collect(),
     },
     signals: SignalsConfig::default(),
     babe: BabeConfig {
-      authorities: validators.iter().map(|validator| ((*validator).into(), 1)).collect(),
+      authorities: validators.iter().map(|validator| (validator.0.into(), 1)).collect(),
       epoch_config: Some(BABE_GENESIS_EPOCH_CONFIG),
       _config: PhantomData,
     },
     grandpa: GrandpaConfig {
-      authorities: validators.into_iter().map(|validator| (validator.into(), 1)).collect(),
+      authorities: validators.into_iter().map(|validator| (validator.0.into(), 1)).collect(),
       _config: PhantomData,
     },
   }
 }
 
+/*
 fn testnet_genesis(wasm_binary: &[u8], validators: Vec<&'static str>) -> RuntimeGenesisConfig {
   let validators = validators
     .into_iter()
@@ -140,6 +164,7 @@ fn testnet_genesis(wasm_binary: &[u8], validators: Vec<&'static str>) -> Runtime
     },
   }
 }
+*/
 
 pub fn development_config() -> ChainSpec {
   let wasm_binary = wasm_binary();
@@ -218,7 +243,7 @@ pub fn local_config() -> ChainSpec {
 }
 
 pub fn testnet_config() -> ChainSpec {
-  let wasm_binary = wasm_binary();
+  // let wasm_binary = wasm_binary();
 
   ChainSpec::from_genesis(
     // Name
@@ -227,7 +252,7 @@ pub fn testnet_config() -> ChainSpec {
     "testnet-2",
     ChainType::Live,
     move || {
-      let _ = testnet_genesis(&wasm_binary, vec![]);
+      // let _ = testnet_genesis(&wasm_binary, vec![])
       todo!()
     },
     // Bootnodes
diff --git a/substrate/primitives/Cargo.toml b/substrate/primitives/Cargo.toml
index 0e1e8f38..4a495b53 100644
--- a/substrate/primitives/Cargo.toml
+++ b/substrate/primitives/Cargo.toml
@@ -18,6 +18,8 @@ workspace = true
 [dependencies]
 zeroize = { version = "^1.5", features = ["derive"], optional = true }
 
+ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, optional = true }
+
 scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
 scale-info = { version = "2", default-features = false, features = ["derive"] }
 
@@ -35,7 +37,7 @@ frame-support = { git = "https://github.com/serai-dex/substrate", default-featur
 rand_core = { version = "0.6", default-features = false, features = ["getrandom"] }
 
 [features]
-std = ["zeroize", "scale/std", "borsh?/std", "serde?/std", "scale-info/std", "sp-core/std", "sp-runtime/std", "frame-support/std"]
+std = ["zeroize", "ciphersuite/std", "scale/std", "borsh?/std", "serde?/std", "scale-info/std", "sp-core/std", "sp-runtime/std", "frame-support/std"]
 borsh = ["dep:borsh"]
 serde = ["dep:serde"]
 default = ["std"]
diff --git a/substrate/primitives/src/account.rs b/substrate/primitives/src/account.rs
index 77877a14..5c77c28f 100644
--- a/substrate/primitives/src/account.rs
+++ b/substrate/primitives/src/account.rs
@@ -90,11 +90,22 @@ impl std::fmt::Display for SeraiAddress {
   }
 }
 
+/// Create a Substrate key pair by a name.
+///
+/// This should never be considered to have a secure private key. It has effectively no entropy.
 #[cfg(feature = "std")]
 pub fn insecure_pair_from_name(name: &str) -> Pair {
   Pair::from_string(&format!("//{name}"), None).unwrap()
 }
 
+/// Create a private key for an arbitrary ciphersuite by a name.
+///
+/// This key should never be considered a secure private key. It has effectively no entropy.
+#[cfg(feature = "std")] +pub fn insecure_arbitrary_key_from_name(name: &str) -> C::F { + C::hash_to_F(b"insecure arbitrary key", name.as_bytes()) +} + pub struct AccountLookup; impl Lookup for AccountLookup { type Source = SeraiAddress; diff --git a/substrate/primitives/src/networks.rs b/substrate/primitives/src/networks.rs index 1213378c..db396fb5 100644 --- a/substrate/primitives/src/networks.rs +++ b/substrate/primitives/src/networks.rs @@ -14,6 +14,16 @@ use sp_core::{ConstU32, bounded::BoundedVec}; #[cfg(feature = "borsh")] use crate::{borsh_serialize_bounded_vec, borsh_deserialize_bounded_vec}; +/// Identifier for an embedded elliptic curve. +#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode, MaxEncodedLen, TypeInfo)] +#[cfg_attr(feature = "std", derive(Zeroize))] +#[cfg_attr(feature = "borsh", derive(BorshSerialize, BorshDeserialize))] +#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] +pub enum EmbeddedEllipticCurve { + Embedwards25519, + Secq256k1, +} + /// The type used to identify networks. #[derive( Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode, PartialOrd, Ord, MaxEncodedLen, TypeInfo, @@ -28,6 +38,23 @@ pub enum NetworkId { Monero, } impl NetworkId { + /// The embedded elliptic curve actively used for this network. + /// + /// This is guaranteed to return `[]`, `[Embedwards25519]`, or + /// `[Embedwards25519, *network specific curve*]`. + pub fn embedded_elliptic_curves(&self) -> &'static [EmbeddedEllipticCurve] { + match self { + // We don't use any embedded elliptic curves for Serai as we don't perform a DKG for Serai + Self::Serai => &[], + // We need to generate a Ristretto key for oraclizing and a Secp256k1 key for the network + Self::Bitcoin | Self::Ethereum => { + &[EmbeddedEllipticCurve::Embedwards25519, EmbeddedEllipticCurve::Secq256k1] + } + // Since the oraclizing key curve is the same as the network's curve, we only need it + Self::Monero => &[EmbeddedEllipticCurve::Embedwards25519], + } + } + pub fn coins(&self) -> &'static [Coin] { match self { Self::Serai => &[Coin::Serai], diff --git a/substrate/runtime/src/abi.rs b/substrate/runtime/src/abi.rs index 48b4a6c7..107389c1 100644 --- a/substrate/runtime/src/abi.rs +++ b/substrate/runtime/src/abi.rs @@ -92,18 +92,22 @@ impl From for RuntimeCall { Call::ValidatorSets(vs) => match vs { serai_abi::validator_sets::Call::set_keys { network, - removed_participants, key_pair, + signature_participants, signature, } => RuntimeCall::ValidatorSets(validator_sets::Call::set_keys { network, - removed_participants: <_>::try_from( - removed_participants.into_iter().map(PublicKey::from).collect::>(), - ) - .unwrap(), key_pair, + signature_participants, signature, }), + serai_abi::validator_sets::Call::set_embedded_elliptic_curve_key { + embedded_elliptic_curve, + key, + } => RuntimeCall::ValidatorSets(validator_sets::Call::set_embedded_elliptic_curve_key { + embedded_elliptic_curve, + key, + }), serai_abi::validator_sets::Call::report_slashes { network, slashes, signature } => { RuntimeCall::ValidatorSets(validator_sets::Call::report_slashes { network, @@ -282,17 +286,20 @@ impl TryInto for RuntimeCall { _ => Err(())?, }), RuntimeCall::ValidatorSets(call) => Call::ValidatorSets(match call { - validator_sets::Call::set_keys { network, removed_participants, key_pair, signature } => { + validator_sets::Call::set_keys { network, key_pair, signature_participants, signature } => { serai_abi::validator_sets::Call::set_keys { network, - removed_participants: <_>::try_from( - 
removed_participants.into_iter().map(SeraiAddress::from).collect::>(), - ) - .unwrap(), key_pair, + signature_participants, signature, } } + validator_sets::Call::set_embedded_elliptic_curve_key { embedded_elliptic_curve, key } => { + serai_abi::validator_sets::Call::set_embedded_elliptic_curve_key { + embedded_elliptic_curve, + key, + } + } validator_sets::Call::report_slashes { network, slashes, signature } => { serai_abi::validator_sets::Call::report_slashes { network, diff --git a/substrate/validator-sets/pallet/Cargo.toml b/substrate/validator-sets/pallet/Cargo.toml index dd67d1bc..e6f559e1 100644 --- a/substrate/validator-sets/pallet/Cargo.toml +++ b/substrate/validator-sets/pallet/Cargo.toml @@ -12,17 +12,16 @@ rust-version = "1.74" all-features = true rustdoc-args = ["--cfg", "docsrs"] -[package.metadata.cargo-machete] -ignored = ["scale", "scale-info"] - [lints] workspace = true [dependencies] -hashbrown = { version = "0.14", default-features = false, features = ["ahash", "inline-more"] } +bitvec = { version = "1", default-features = false, features = ["alloc", "serde"] } -scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] } -scale-info = { version = "2", default-features = false, features = ["derive"] } +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive", "bit-vec"] } +scale-info = { version = "2", default-features = false, features = ["derive", "bit-vec"] } + +serde = { version = "1", default-features = false, features = ["derive", "alloc"] } sp-core = { git = "https://github.com/serai-dex/substrate", default-features = false } sp-io = { git = "https://github.com/serai-dex/substrate", default-features = false } @@ -46,6 +45,8 @@ dex-pallet = { package = "serai-dex-pallet", path = "../../dex/pallet", default- [features] std = [ + "bitvec/std", + "scale/std", "scale-info/std", diff --git a/substrate/validator-sets/pallet/src/lib.rs b/substrate/validator-sets/pallet/src/lib.rs index c2ba80a9..655c6722 100644 --- a/substrate/validator-sets/pallet/src/lib.rs +++ b/substrate/validator-sets/pallet/src/lib.rs @@ -83,6 +83,12 @@ pub mod pallet { type ShouldEndSession: ShouldEndSession>; } + #[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, serde::Serialize, serde::Deserialize)] + pub struct AllEmbeddedEllipticCurveKeysAtGenesis { + pub embedwards25519: BoundedVec>, + pub secq256k1: BoundedVec>, + } + #[pallet::genesis_config] #[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)] pub struct GenesisConfig { @@ -92,7 +98,7 @@ pub mod pallet { /// This stake cannot be withdrawn however as there's no actual stake behind it. pub networks: Vec<(NetworkId, Amount)>, /// List of participants to place in the initial validator sets. - pub participants: Vec, + pub participants: Vec<(T::AccountId, AllEmbeddedEllipticCurveKeysAtGenesis)>, } impl Default for GenesisConfig { @@ -191,6 +197,18 @@ pub mod pallet { } } + /// A key on an embedded elliptic curve. + #[pallet::storage] + pub type EmbeddedEllipticCurveKeys = StorageDoubleMap< + _, + Blake2_128Concat, + Public, + Identity, + EmbeddedEllipticCurve, + BoundedVec>, + OptionQuery, + >; + /// The total stake allocated to this network by the active set of validators. #[pallet::storage] #[pallet::getter(fn total_allocated_stake)] @@ -426,6 +444,14 @@ pub mod pallet { pub enum Error { /// Validator Set doesn't exist. NonExistentValidatorSet, + /// An invalid embedded elliptic curve key was specified. 
+ /// + /// This error not being raised does not mean the key was valid. Solely that it wasn't detected + /// by this pallet as invalid. + InvalidEmbeddedEllipticCurveKey, + /// Trying to perform an operation requiring an embedded elliptic curve key, without an + /// embedded elliptic curve key. + MissingEmbeddedEllipticCurveKey, /// Not enough allocation to obtain a key share in the set. InsufficientAllocation, /// Trying to deallocate more than allocated. @@ -469,10 +495,20 @@ pub mod pallet { fn build(&self) { for (id, stake) in self.networks.clone() { AllocationPerKeyShare::::set(id, Some(stake)); - for participant in self.participants.clone() { - if Pallet::::set_allocation(id, participant, stake) { + for participant in &self.participants { + if Pallet::::set_allocation(id, participant.0, stake) { panic!("participants contained duplicates"); } + EmbeddedEllipticCurveKeys::::set( + participant.0, + EmbeddedEllipticCurve::Embedwards25519, + Some(participant.1.embedwards25519.clone()), + ); + EmbeddedEllipticCurveKeys::::set( + participant.0, + EmbeddedEllipticCurve::Secq256k1, + Some(participant.1.secq256k1.clone()), + ); } Pallet::::new_set(id); } @@ -941,14 +977,15 @@ pub mod pallet { pub fn set_keys( origin: OriginFor, network: NetworkId, - removed_participants: BoundedVec>, key_pair: KeyPair, + signature_participants: bitvec::vec::BitVec, signature: Signature, ) -> DispatchResult { ensure_none(origin)?; // signature isn't checked as this is an unsigned transaction, and validate_unsigned // (called by pre_dispatch) checks it + let _ = signature_participants; let _ = signature; let session = Self::session(network).unwrap(); @@ -963,15 +1000,6 @@ pub mod pallet { Self::set_total_allocated_stake(network); } - // This does not remove from TotalAllocatedStake or InSet in order to: - // 1) Not decrease the stake present in this set. This means removed participants are - // still liable for the economic security of the external network. This prevents - // a decided set, which is economically secure, from falling below the threshold. - // 2) Not allow parties removed to immediately deallocate, per commentary on deallocation - // scheduling (https://github.com/serai-dex/serai/issues/394). - for removed in removed_participants { - Self::deposit_event(Event::ParticipantRemoved { set, removed }); - } Self::deposit_event(Event::KeyGen { set, key_pair }); Ok(()) @@ -1004,8 +1032,42 @@ pub mod pallet { #[pallet::call_index(2)] #[pallet::weight(0)] // TODO + pub fn set_embedded_elliptic_curve_key( + origin: OriginFor, + embedded_elliptic_curve: EmbeddedEllipticCurve, + key: BoundedVec>, + ) -> DispatchResult { + let validator = ensure_signed(origin)?; + + // We don't have the curve formulas, nor the BigInt arithmetic, necessary here to validate + // these keys. Instead, we solely check the key lengths. Validators are responsible to not + // provide invalid keys. + let expected_len = match embedded_elliptic_curve { + EmbeddedEllipticCurve::Embedwards25519 => 32, + EmbeddedEllipticCurve::Secq256k1 => 33, + }; + if key.len() != expected_len { + Err(Error::::InvalidEmbeddedEllipticCurveKey)?; + } + + // This does allow overwriting an existing key which... is unlikely to be done? 
+ // Yet it isn't an issue as we'll fix to the key as of any set's declaration (uncaring to if + // it's distinct at the latest block) + EmbeddedEllipticCurveKeys::::set(validator, embedded_elliptic_curve, Some(key)); + Ok(()) + } + + #[pallet::call_index(3)] + #[pallet::weight(0)] // TODO pub fn allocate(origin: OriginFor, network: NetworkId, amount: Amount) -> DispatchResult { let validator = ensure_signed(origin)?; + // If this network utilizes embedded elliptic curve(s), require the validator to have set the + // appropriate key(s) + for embedded_elliptic_curve in network.embedded_elliptic_curves() { + if !EmbeddedEllipticCurveKeys::::contains_key(validator, *embedded_elliptic_curve) { + Err(Error::::MissingEmbeddedEllipticCurveKey)?; + } + } Coins::::transfer_internal( validator, Self::account(), @@ -1014,7 +1076,7 @@ pub mod pallet { Self::increase_allocation(network, validator, amount, false) } - #[pallet::call_index(3)] + #[pallet::call_index(4)] #[pallet::weight(0)] // TODO pub fn deallocate(origin: OriginFor, network: NetworkId, amount: Amount) -> DispatchResult { let account = ensure_signed(origin)?; @@ -1031,7 +1093,7 @@ pub mod pallet { Ok(()) } - #[pallet::call_index(4)] + #[pallet::call_index(5)] #[pallet::weight((0, DispatchClass::Operational))] // TODO pub fn claim_deallocation( origin: OriginFor, @@ -1059,7 +1121,7 @@ pub mod pallet { fn validate_unsigned(_: TransactionSource, call: &Self::Call) -> TransactionValidity { // Match to be exhaustive match call { - Call::set_keys { network, ref removed_participants, ref key_pair, ref signature } => { + Call::set_keys { network, ref key_pair, ref signature_participants, ref signature } => { let network = *network; // Don't allow the Serai set to set_keys, as they have no reason to do so @@ -1083,30 +1145,24 @@ pub mod pallet { // session on this assumption assert_eq!(Pallet::::latest_decided_session(network), Some(current_session)); - // This does not slash the removed participants as that'll be done at the end of the - // set's lifetime - let mut removed = hashbrown::HashSet::new(); - for participant in removed_participants { - // Confirm this wasn't duplicated - if removed.contains(&participant.0) { - Err(InvalidTransaction::Custom(2))?; - } - removed.insert(participant.0); - } - let participants = Participants::::get(network).expect("session existed without participants"); + // Check the bitvec is of the proper length + if participants.len() != signature_participants.len() { + Err(InvalidTransaction::Custom(2))?; + } + let mut all_key_shares = 0; let mut signers = vec![]; let mut signing_key_shares = 0; - for participant in participants { + for (participant, in_use) in participants.into_iter().zip(signature_participants) { let participant = participant.0; let shares = InSet::::get(network, participant) .expect("participant from Participants wasn't InSet"); all_key_shares += shares; - if removed.contains(&participant.0) { + if !in_use { continue; } @@ -1124,9 +1180,7 @@ pub mod pallet { // Verify the signature with the MuSig key of the signers // We theoretically don't need set_keys_message to bind to removed_participants, as the // key we're signing with effectively already does so, yet there's no reason not to - if !musig_key(set, &signers) - .verify(&set_keys_message(&set, removed_participants, key_pair), signature) - { + if !musig_key(set, &signers).verify(&set_keys_message(&set, key_pair), signature) { Err(InvalidTransaction::BadProof)?; } @@ -1159,9 +1213,10 @@ pub mod pallet { .propagate(true) .build() } - 
Call::allocate { .. } | Call::deallocate { .. } | Call::claim_deallocation { .. } => { - Err(InvalidTransaction::Call)? - } + Call::set_embedded_elliptic_curve_key { .. } | + Call::allocate { .. } | + Call::deallocate { .. } | + Call::claim_deallocation { .. } => Err(InvalidTransaction::Call)?, Call::__Ignore(_, _) => unreachable!(), } } diff --git a/substrate/validator-sets/primitives/src/lib.rs b/substrate/validator-sets/primitives/src/lib.rs index c900b0a9..90d58c37 100644 --- a/substrate/validator-sets/primitives/src/lib.rs +++ b/substrate/validator-sets/primitives/src/lib.rs @@ -99,12 +99,8 @@ pub fn musig_key(set: ValidatorSet, set_keys: &[Public]) -> Public { } /// The message for the set_keys signature. -pub fn set_keys_message( - set: &ValidatorSet, - removed_participants: &[Public], - key_pair: &KeyPair, -) -> Vec { - (b"ValidatorSets-set_keys", set, removed_participants, key_pair).encode() +pub fn set_keys_message(set: &ValidatorSet, key_pair: &KeyPair) -> Vec { + (b"ValidatorSets-set_keys", set, key_pair).encode() } pub fn report_slashes_message(set: &ValidatorSet, slashes: &[(Public, u32)]) -> Vec { diff --git a/tests/coordinator/Cargo.toml b/tests/coordinator/Cargo.toml index 89b168c0..ca7a10d6 100644 --- a/tests/coordinator/Cargo.toml +++ b/tests/coordinator/Cargo.toml @@ -24,7 +24,11 @@ zeroize = { version = "1", default-features = false } rand_core = { version = "0.6", default-features = false } blake2 = "0.10" + ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["ristretto", "secp256k1"] } +embedwards25519 = { path = "../../crypto/evrf/embedwards25519" } +secq256k1 = { path = "../../crypto/evrf/secq256k1" } + schnorrkel = "0.11" dkg = { path = "../../crypto/dkg", default-features = false, features = ["tests"] } diff --git a/tests/coordinator/src/lib.rs b/tests/coordinator/src/lib.rs index c364128c..fe2a0a4f 100644 --- a/tests/coordinator/src/lib.rs +++ b/tests/coordinator/src/lib.rs @@ -18,6 +18,8 @@ use ciphersuite::{ group::{ff::PrimeField, GroupEncoding}, Ciphersuite, Ristretto, }; +use embedwards25519::Embedwards25519; +use secq256k1::Secq256k1; use serai_client::primitives::NetworkId; @@ -118,6 +120,8 @@ pub struct Processor { queue_for_sending: MessageQueue, abort_handle: Option>, + evrf_public_keys: ([u8; 32], Vec), + substrate_key: Arc::F>>>>, } @@ -131,7 +135,7 @@ impl Drop for Processor { impl Processor { pub async fn new( - raw_i: u8, + name: &'static str, network: NetworkId, ops: &DockerOperations, handles: Handles, @@ -168,7 +172,11 @@ impl Processor { let (msg_send, msg_recv) = mpsc::unbounded_channel(); + use serai_client::primitives::insecure_arbitrary_key_from_name; let substrate_key = Arc::new(AsyncMutex::new(None)); + let embedwards25519_evrf_key = (Embedwards25519::generator() * + insecure_arbitrary_key_from_name::(name)) + .to_bytes(); let mut res = Processor { network, @@ -183,6 +191,21 @@ impl Processor { msgs: msg_recv, abort_handle: None, + evrf_public_keys: ( + embedwards25519_evrf_key, + match network { + NetworkId::Serai => panic!("mock processor for the serai network"), + NetworkId::Bitcoin | NetworkId::Ethereum => { + let key = (Secq256k1::generator() * + insecure_arbitrary_key_from_name::(name)) + .to_bytes(); + let key: &[u8] = key.as_ref(); + key.to_vec() + } + NetworkId::Monero => embedwards25519_evrf_key.to_vec(), + }, + ), + substrate_key: substrate_key.clone(), }; @@ -256,10 +279,12 @@ impl Processor { if current_cosign.is_none() || (current_cosign.as_ref().unwrap().block != block) { 
*current_cosign = Some(new_cosign);
             }
+            let mut preprocess = [0; 64];
+            preprocess[.. name.len()].copy_from_slice(name.as_ref());
             send_message(
               messages::coordinator::ProcessorMessage::CosignPreprocess {
                 id: id.clone(),
-                preprocesses: vec![[raw_i; 64]],
+                preprocesses: vec![preprocess],
               }
               .into(),
             )
@@ -270,12 +295,11 @@
           ) => {
             // TODO: Assert the ID matches CURRENT_COSIGN
             // TODO: Verify the received preprocesses
+            let mut share = [0; 32];
+            share[.. name.len()].copy_from_slice(name.as_bytes());
             send_message(
-              messages::coordinator::ProcessorMessage::SubstrateShare {
-                id,
-                shares: vec![[raw_i; 32]],
-              }
-              .into(),
+              messages::coordinator::ProcessorMessage::SubstrateShare { id, shares: vec![share] }
+                .into(),
             )
             .await;
           }
@@ -327,6 +351,14 @@ impl Processor {
     res
   }
 
+  pub fn network(&self) -> NetworkId {
+    self.network
+  }
+
+  pub fn evrf_public_keys(&self) -> ([u8; 32], Vec<u8>) {
+    self.evrf_public_keys.clone()
+  }
+
   pub async fn serai(&self) -> Serai {
     Serai::new(self.serai_rpc.clone()).await.unwrap()
   }
diff --git a/tests/coordinator/src/tests/key_gen.rs b/tests/coordinator/src/tests/key_gen.rs
index 8ea14cbc..1ec31776 100644
--- a/tests/coordinator/src/tests/key_gen.rs
+++ b/tests/coordinator/src/tests/key_gen.rs
@@ -1,7 +1,4 @@
-use std::{
-  time::{Duration, SystemTime},
-  collections::HashMap,
-};
+use std::time::{Duration, SystemTime};
 
 use zeroize::Zeroizing;
 use rand_core::OsRng;
@@ -10,14 +7,14 @@ use ciphersuite::{
   group::{ff::Field, GroupEncoding},
   Ciphersuite, Ristretto, Secp256k1,
 };
-use dkg::ThresholdParams;
+use dkg::Participant;
 
 use serai_client::{
   primitives::NetworkId,
   Public,
   validator_sets::primitives::{Session, ValidatorSet, KeyPair},
 };
 
-use messages::{key_gen::KeyGenId, CoordinatorMessage};
+use messages::CoordinatorMessage;
 
 use crate::tests::*;
 
@@ -29,16 +26,28 @@ pub async fn key_gen(
   let mut participant_is = vec![];
   let set = ValidatorSet { session, network: NetworkId::Bitcoin };
-  let id = KeyGenId { session: set.session, attempt: 0 };
 
-  for (i, processor) in processors.iter_mut().enumerate() {
+  // This is distinct from the result of evrf_public_keys for each processor, as there'll be some
+  // ordering algorithm on-chain which won't match our ordering
+  let mut evrf_public_keys_as_on_chain = None;
+  for processor in processors.iter_mut() {
+    // Receive GenerateKey
     let msg = processor.recv_message().await;
     match &msg {
       CoordinatorMessage::KeyGen(messages::key_gen::CoordinatorMessage::GenerateKey {
-        params,
+        evrf_public_keys,
         ..
}) => { - participant_is.push(params.i()); + if evrf_public_keys_as_on_chain.is_none() { + evrf_public_keys_as_on_chain = Some(evrf_public_keys.clone()); + } + assert_eq!(evrf_public_keys_as_on_chain.as_ref().unwrap(), evrf_public_keys); + let i = evrf_public_keys + .iter() + .position(|public_keys| *public_keys == processor.evrf_public_keys()) + .unwrap(); + let i = Participant::new(1 + u16::try_from(i).unwrap()).unwrap(); + participant_is.push(i); } _ => panic!("unexpected message: {msg:?}"), } @@ -46,63 +55,43 @@ pub async fn key_gen( assert_eq!( msg, CoordinatorMessage::KeyGen(messages::key_gen::CoordinatorMessage::GenerateKey { - id, - params: ThresholdParams::new( - u16::try_from(((coordinators * 2) / 3) + 1).unwrap(), - u16::try_from(coordinators).unwrap(), - participant_is[i], - ) - .unwrap(), - shares: 1, + session, + threshold: u16::try_from(((coordinators * 2) / 3) + 1).unwrap(), + evrf_public_keys: evrf_public_keys_as_on_chain.clone().unwrap(), }) ); - - processor - .send_message(messages::key_gen::ProcessorMessage::Commitments { - id, - commitments: vec![vec![u8::try_from(u16::from(participant_is[i])).unwrap()]], - }) - .await; } - wait_for_tributary().await; - for (i, processor) in processors.iter_mut().enumerate() { - let mut commitments = (0 .. u8::try_from(coordinators).unwrap()) - .map(|l| { - ( - participant_is[usize::from(l)], - vec![u8::try_from(u16::from(participant_is[usize::from(l)])).unwrap()], - ) + for i in 0 .. coordinators { + // Send Participation + processors[i] + .send_message(messages::key_gen::ProcessorMessage::Participation { + session, + participation: vec![u8::try_from(u16::from(participant_is[i])).unwrap()], }) - .collect::>(); - commitments.remove(&participant_is[i]); - assert_eq!( - processor.recv_message().await, - CoordinatorMessage::KeyGen(messages::key_gen::CoordinatorMessage::Commitments { - id, - commitments, - }) - ); - - // Recipient it's for -> (Sender i, Recipient i) - let mut shares = (0 .. u8::try_from(coordinators).unwrap()) - .map(|l| { - ( - participant_is[usize::from(l)], - vec![ - u8::try_from(u16::from(participant_is[i])).unwrap(), - u8::try_from(u16::from(participant_is[usize::from(l)])).unwrap(), - ], - ) - }) - .collect::>(); - - shares.remove(&participant_is[i]); - processor - .send_message(messages::key_gen::ProcessorMessage::Shares { id, shares: vec![shares] }) .await; + + // Sleep so this participation gets included + for _ in 0 .. 2 { + wait_for_tributary().await; + } + + // Have every other processor recv this message too + for processor in processors.iter_mut() { + assert_eq!( + processor.recv_message().await, + messages::CoordinatorMessage::KeyGen( + messages::key_gen::CoordinatorMessage::Participation { + session, + participant: participant_is[i], + participation: vec![u8::try_from(u16::from(participant_is[i])).unwrap()], + } + ) + ); + } } + // Now that we've received all participations, publish the key pair let substrate_priv_key = Zeroizing::new(::F::random(&mut OsRng)); let substrate_key = (::generator() * *substrate_priv_key).to_bytes(); @@ -112,40 +101,24 @@ pub async fn key_gen( let serai = processors[0].serai().await; let mut last_serai_block = serai.latest_finalized_block().await.unwrap().number(); - wait_for_tributary().await; - for (i, processor) in processors.iter_mut().enumerate() { - let i = participant_is[i]; - assert_eq!( - processor.recv_message().await, - CoordinatorMessage::KeyGen(messages::key_gen::CoordinatorMessage::Shares { - id, - shares: { - let mut shares = (0 .. 
u8::try_from(coordinators).unwrap())
-          .map(|l| {
-            (
-              participant_is[usize::from(l)],
-              vec![
-                u8::try_from(u16::from(participant_is[usize::from(l)])).unwrap(),
-                u8::try_from(u16::from(i)).unwrap(),
-              ],
-            )
-          })
-          .collect::<HashMap<_, _>>();
-        shares.remove(&i);
-        vec![shares]
-      },
-    })
-  );
+  for processor in processors.iter_mut() {
     processor
       .send_message(messages::key_gen::ProcessorMessage::GeneratedKeyPair {
-        id,
+        session,
         substrate_key,
         network_key: network_key.clone(),
       })
       .await;
   }
 
-  // Sleeps for longer since we need to wait for a Substrate block as well
+  // Wait for the Nonces TXs to go around
+  wait_for_tributary().await;
+  // Wait for the Share TXs to go around
+  wait_for_tributary().await;
+
+  // And now we're waiting for the TX to be published onto Serai
+
+  // We need to wait for a finalized Substrate block as well, so this waits for up to 20 blocks
   'outer: for _ in 0 .. 20 {
     tokio::time::sleep(Duration::from_secs(6)).await;
     if std::env::var("GITHUB_CI") == Ok("true".to_string()) {
diff --git a/tests/coordinator/src/tests/mod.rs b/tests/coordinator/src/tests/mod.rs
index ef67b0ac..0b46cd81 100644
--- a/tests/coordinator/src/tests/mod.rs
+++ b/tests/coordinator/src/tests/mod.rs
@@ -41,6 +41,18 @@ impl) -> F> Test
   }
 }
 
+fn name(i: usize) -> &'static str {
+  match i {
+    0 => "Alice",
+    1 => "Bob",
+    2 => "Charlie",
+    3 => "Dave",
+    4 => "Eve",
+    5 => "Ferdie",
+    _ => panic!("needed a 7th name for a serai node"),
+  }
+}
+
 pub(crate) async fn new_test(test_body: impl TestBody, fast_epoch: bool) {
   let mut unique_id_lock = UNIQUE_ID.get_or_init(|| Mutex::new(0)).lock().await;
 
@@ -50,15 +62,7 @@ pub(crate) async fn new_test(test_body: impl TestBody, fast_epoch: bool) {
   // Spawn one extra coordinator which isn't in-set
   #[allow(clippy::range_plus_one)]
   for i in 0 ..
(COORDINATORS + 1) { - let name = match i { - 0 => "Alice", - 1 => "Bob", - 2 => "Charlie", - 3 => "Dave", - 4 => "Eve", - 5 => "Ferdie", - _ => panic!("needed a 7th name for a serai node"), - }; + let name = name(i); let serai_composition = serai_composition(name, fast_epoch); let (processor_key, message_queue_keys, message_queue_composition) = @@ -196,14 +200,7 @@ pub(crate) async fn new_test(test_body: impl TestBody, fast_epoch: bool) { let mut processors: Vec = vec![]; for (i, (handles, key)) in coordinators.iter().enumerate() { processors.push( - Processor::new( - i.try_into().unwrap(), - NetworkId::Bitcoin, - &outer_ops, - handles.clone(), - *key, - ) - .await, + Processor::new(name(i), NetworkId::Bitcoin, &outer_ops, handles.clone(), *key).await, ); } diff --git a/tests/coordinator/src/tests/rotation.rs b/tests/coordinator/src/tests/rotation.rs index 1ebeec16..507b0536 100644 --- a/tests/coordinator/src/tests/rotation.rs +++ b/tests/coordinator/src/tests/rotation.rs @@ -3,7 +3,7 @@ use tokio::time::{sleep, Duration}; use ciphersuite::Secp256k1; use serai_client::{ - primitives::{insecure_pair_from_name, NetworkId}, + primitives::{EmbeddedEllipticCurve, NetworkId, insecure_pair_from_name}, validator_sets::{ self, primitives::{Session, ValidatorSet}, @@ -55,6 +55,27 @@ async fn publish_tx(serai: &Serai, tx: &Transaction) -> [u8; 32] { } } +#[allow(dead_code)] +async fn set_embedded_elliptic_curve_key( + serai: &Serai, + curve: EmbeddedEllipticCurve, + key: Vec, + pair: &Pair, + nonce: u32, +) -> [u8; 32] { + // get the call + let tx = serai.sign( + pair, + validator_sets::SeraiValidatorSets::set_embedded_elliptic_curve_key( + curve, + key.try_into().unwrap(), + ), + nonce, + 0, + ); + publish_tx(serai, &tx).await +} + #[allow(dead_code)] async fn allocate_stake( serai: &Serai, @@ -132,13 +153,29 @@ async fn set_rotation_test() { // excluded participant let pair5 = insecure_pair_from_name("Eve"); - let network = NetworkId::Bitcoin; + let network = excluded.network(); let amount = Amount(1_000_000 * 10_u64.pow(8)); let serai = processors[0].serai().await; // allocate now for the last participant so that it is guaranteed to be included into session // 1 set. This doesn't affect the genesis set at all since that is a predetermined set. 
- allocate_stake(&serai, network, amount, &pair5, 0).await; + set_embedded_elliptic_curve_key( + &serai, + EmbeddedEllipticCurve::Embedwards25519, + excluded.evrf_public_keys().0.to_vec(), + &pair5, + 0, + ) + .await; + set_embedded_elliptic_curve_key( + &serai, + *excluded.network().embedded_elliptic_curves().last().unwrap(), + excluded.evrf_public_keys().1.clone(), + &pair5, + 1, + ) + .await; + allocate_stake(&serai, network, amount, &pair5, 2).await; // genesis keygen let _ = key_gen::(&mut processors, Session(0)).await; diff --git a/tests/full-stack/src/tests/mod.rs b/tests/full-stack/src/tests/mod.rs index 7d92070e..a288ff05 100644 --- a/tests/full-stack/src/tests/mod.rs +++ b/tests/full-stack/src/tests/mod.rs @@ -57,14 +57,24 @@ pub(crate) async fn new_test(test_body: impl TestBody) { let (coord_key, message_queue_keys, message_queue_composition) = message_queue_instance(); let (bitcoin_composition, bitcoin_port) = network_instance(NetworkId::Bitcoin); - let mut bitcoin_processor_composition = - processor_instance(NetworkId::Bitcoin, bitcoin_port, message_queue_keys[&NetworkId::Bitcoin]); + let mut bitcoin_processor_composition = processor_instance( + name, + NetworkId::Bitcoin, + bitcoin_port, + message_queue_keys[&NetworkId::Bitcoin], + ) + .0; assert_eq!(bitcoin_processor_composition.len(), 1); let bitcoin_processor_composition = bitcoin_processor_composition.swap_remove(0); let (monero_composition, monero_port) = network_instance(NetworkId::Monero); - let mut monero_processor_composition = - processor_instance(NetworkId::Monero, monero_port, message_queue_keys[&NetworkId::Monero]); + let mut monero_processor_composition = processor_instance( + name, + NetworkId::Monero, + monero_port, + message_queue_keys[&NetworkId::Monero], + ) + .0; assert_eq!(monero_processor_composition.len(), 1); let monero_processor_composition = monero_processor_composition.swap_remove(0); diff --git a/tests/processor/Cargo.toml b/tests/processor/Cargo.toml index 8817b0c9..f06e4741 100644 --- a/tests/processor/Cargo.toml +++ b/tests/processor/Cargo.toml @@ -23,8 +23,8 @@ zeroize = { version = "1", default-features = false } rand_core = { version = "0.6", default-features = false, features = ["getrandom"] } curve25519-dalek = "4" -ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["secp256k1", "ristretto"] } -dkg = { path = "../../crypto/dkg", default-features = false, features = ["tests"] } +ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["secp256k1", "ed25519", "ristretto"] } +dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] } bitcoin-serai = { path = "../../networks/bitcoin" } diff --git a/tests/processor/src/lib.rs b/tests/processor/src/lib.rs index ec607a55..d57ec290 100644 --- a/tests/processor/src/lib.rs +++ b/tests/processor/src/lib.rs @@ -3,11 +3,14 @@ use std::sync::{OnceLock, Mutex}; use zeroize::Zeroizing; -use rand_core::{RngCore, OsRng}; -use ciphersuite::{group::ff::PrimeField, Ciphersuite, Ristretto}; +use ciphersuite::{ + group::{ff::PrimeField, GroupEncoding}, + Ciphersuite, Secp256k1, Ed25519, Ristretto, +}; +use dkg::evrf::*; -use serai_client::primitives::NetworkId; +use serai_client::primitives::{NetworkId, insecure_arbitrary_key_from_name}; use messages::{ProcessorMessage, CoordinatorMessage}; use serai_message_queue::{Service, Metadata, client::MessageQueue}; @@ -24,13 +27,42 @@ mod tests; static UNIQUE_ID: OnceLock> = OnceLock::new(); +#[allow(dead_code)] +#[derive(Clone)] 
+pub struct EvrfPublicKeys { + substrate: [u8; 32], + network: Vec, +} + pub fn processor_instance( + name: &str, network: NetworkId, port: u32, message_queue_key: ::F, -) -> Vec { - let mut entropy = [0; 32]; - OsRng.fill_bytes(&mut entropy); +) -> (Vec, EvrfPublicKeys) { + let substrate_evrf_key = + insecure_arbitrary_key_from_name::<::EmbeddedCurve>(name); + let substrate_evrf_pub_key = + (::EmbeddedCurve::generator() * substrate_evrf_key).to_bytes(); + let substrate_evrf_key = substrate_evrf_key.to_repr(); + + let (network_evrf_key, network_evrf_pub_key) = match network { + NetworkId::Serai => panic!("starting a processor for Serai"), + NetworkId::Bitcoin | NetworkId::Ethereum => { + let evrf_key = + insecure_arbitrary_key_from_name::<::EmbeddedCurve>(name); + let pub_key = + (::EmbeddedCurve::generator() * evrf_key).to_bytes().to_vec(); + (evrf_key.to_repr(), pub_key) + } + NetworkId::Monero => { + let evrf_key = + insecure_arbitrary_key_from_name::<::EmbeddedCurve>(name); + let pub_key = + (::EmbeddedCurve::generator() * evrf_key).to_bytes().to_vec(); + (evrf_key.to_repr(), pub_key) + } + }; let network_str = match network { NetworkId::Serai => panic!("starting a processor for Serai"), @@ -47,7 +79,8 @@ pub fn processor_instance( .replace_env( [ ("MESSAGE_QUEUE_KEY".to_string(), hex::encode(message_queue_key.to_repr())), - ("ENTROPY".to_string(), hex::encode(entropy)), + ("SUBSTRATE_EVRF_KEY".to_string(), hex::encode(substrate_evrf_key)), + ("NETWORK_EVRF_KEY".to_string(), hex::encode(network_evrf_key)), ("NETWORK".to_string(), network_str.to_string()), ("NETWORK_RPC_LOGIN".to_string(), format!("{RPC_USER}:{RPC_PASS}")), ("NETWORK_RPC_PORT".to_string(), port.to_string()), @@ -75,21 +108,27 @@ pub fn processor_instance( ); } - res + (res, EvrfPublicKeys { substrate: substrate_evrf_pub_key, network: network_evrf_pub_key }) +} + +pub struct ProcessorKeys { + coordinator: ::F, + evrf: EvrfPublicKeys, } pub type Handles = (String, String, String, String); pub fn processor_stack( + name: &str, network: NetworkId, network_hostname_override: Option, -) -> (Handles, ::F, Vec) { +) -> (Handles, ProcessorKeys, Vec) { let (network_composition, network_rpc_port) = network_instance(network); let (coord_key, message_queue_keys, message_queue_composition) = serai_message_queue_tests::instance(); - let mut processor_compositions = - processor_instance(network, network_rpc_port, message_queue_keys[&network]); + let (mut processor_compositions, evrf_keys) = + processor_instance(name, network, network_rpc_port, message_queue_keys[&network]); // Give every item in this stack a unique ID // Uses a Mutex as we can't generate a 8-byte random ID without hitting hostname length limits @@ -155,7 +194,7 @@ pub fn processor_stack( handles[2].clone(), handles.get(3).cloned().unwrap_or(String::new()), ), - coord_key, + ProcessorKeys { coordinator: coord_key, evrf: evrf_keys }, compositions, ) } @@ -170,6 +209,8 @@ pub struct Coordinator { processor_handle: String, relayer_handle: String, + evrf_keys: EvrfPublicKeys, + next_send_id: u64, next_recv_id: u64, queue: MessageQueue, @@ -180,7 +221,7 @@ impl Coordinator { network: NetworkId, ops: &DockerOperations, handles: Handles, - coord_key: ::F, + keys: ProcessorKeys, ) -> Coordinator { let rpc = ops.handle(&handles.1).host_port(2287).unwrap(); let rpc = rpc.0.to_string() + ":" + &rpc.1.to_string(); @@ -193,9 +234,11 @@ impl Coordinator { processor_handle: handles.2, relayer_handle: handles.3, + evrf_keys: keys.evrf, + next_send_id: 0, next_recv_id: 0, - queue: 
MessageQueue::new(Service::Coordinator, rpc, Zeroizing::new(coord_key)), + queue: MessageQueue::new(Service::Coordinator, rpc, Zeroizing::new(keys.coordinator)), }; // Sleep for up to a minute in case the external network's RPC has yet to start @@ -302,6 +345,11 @@ impl Coordinator { res } + /// Get the eVRF keys for the associated processor. + pub fn evrf_keys(&self) -> EvrfPublicKeys { + self.evrf_keys.clone() + } + /// Send a message to a processor as its coordinator. pub async fn send_message(&mut self, msg: impl Into) { let msg: CoordinatorMessage = msg.into(); diff --git a/tests/processor/src/networks.rs b/tests/processor/src/networks.rs index 32563c9f..bed741e5 100644 --- a/tests/processor/src/networks.rs +++ b/tests/processor/src/networks.rs @@ -451,7 +451,7 @@ impl Wallet { ); } - let to_spend_key = decompress_point(<[u8; 32]>::try_from(to.as_ref()).unwrap()).unwrap(); + let to_spend_key = decompress_point(<[u8; 32]>::try_from(to.as_slice()).unwrap()).unwrap(); let to_view_key = additional_key::(0); let to_addr = Address::new( Network::Mainnet, diff --git a/tests/processor/src/tests/batch.rs b/tests/processor/src/tests/batch.rs index 6170270a..b85f43cf 100644 --- a/tests/processor/src/tests/batch.rs +++ b/tests/processor/src/tests/batch.rs @@ -3,6 +3,8 @@ use std::{ time::{SystemTime, Duration}, }; +use rand_core::{RngCore, OsRng}; + use dkg::{Participant, tests::clone_without}; use messages::{coordinator::*, SubstrateContext}; diff --git a/tests/processor/src/tests/key_gen.rs b/tests/processor/src/tests/key_gen.rs index 7dea0bfd..ee69086b 100644 --- a/tests/processor/src/tests/key_gen.rs +++ b/tests/processor/src/tests/key_gen.rs @@ -1,30 +1,24 @@ -use std::{collections::HashMap, time::SystemTime}; +use std::time::SystemTime; -use dkg::{Participant, ThresholdParams, tests::clone_without}; +use dkg::Participant; use serai_client::{ primitives::{NetworkId, BlockHash, PublicKey}, validator_sets::primitives::{Session, KeyPair}, }; -use messages::{SubstrateContext, key_gen::KeyGenId, CoordinatorMessage, ProcessorMessage}; +use messages::{SubstrateContext, CoordinatorMessage, ProcessorMessage}; use crate::{*, tests::*}; pub(crate) async fn key_gen(coordinators: &mut [Coordinator]) -> KeyPair { // Perform an interaction with all processors via their coordinators - async fn interact_with_all< - FS: Fn(Participant) -> messages::key_gen::CoordinatorMessage, - FR: FnMut(Participant, messages::key_gen::ProcessorMessage), - >( + async fn interact_with_all( coordinators: &mut [Coordinator], - message: FS, mut recv: FR, ) { for (i, coordinator) in coordinators.iter_mut().enumerate() { let participant = Participant::new(u16::try_from(i + 1).unwrap()).unwrap(); - coordinator.send_message(CoordinatorMessage::KeyGen(message(participant))).await; - match coordinator.recv_message().await { ProcessorMessage::KeyGen(msg) => recv(participant, msg), _ => panic!("processor didn't return KeyGen message"), @@ -33,85 +27,69 @@ pub(crate) async fn key_gen(coordinators: &mut [Coordinator]) -> KeyPair { } // Order a key gen - let id = KeyGenId { session: Session(0), attempt: 0 }; + let session = Session(0); - let mut commitments = HashMap::new(); - interact_with_all( - coordinators, - |participant| messages::key_gen::CoordinatorMessage::GenerateKey { - id, - params: ThresholdParams::new( - u16::try_from(THRESHOLD).unwrap(), - u16::try_from(COORDINATORS).unwrap(), + let mut evrf_public_keys = vec![]; + for coordinator in &*coordinators { + let keys = coordinator.evrf_keys(); + 
evrf_public_keys.push((keys.substrate, keys.network));
+  }
+
+  let mut participations = vec![];
+  for coordinator in &mut *coordinators {
+    coordinator
+      .send_message(CoordinatorMessage::KeyGen(
+        messages::key_gen::CoordinatorMessage::GenerateKey {
+          session,
+          threshold: u16::try_from(THRESHOLD).unwrap(),
+          evrf_public_keys: evrf_public_keys.clone(),
+        },
+      ))
+      .await;
+  }
+  // This takes forever on debug, which is what we use in these tests
+  let ci_scaling_factor =
+    1 + u64::from(u8::from(std::env::var("GITHUB_CI") == Ok("true".to_string())));
+  tokio::time::sleep(core::time::Duration::from_secs(600 * ci_scaling_factor)).await;
+  interact_with_all(coordinators, |participant, msg| match msg {
+    messages::key_gen::ProcessorMessage::Participation { session: this_session, participation } => {
+      assert_eq!(this_session, session);
+      participations.push(messages::key_gen::CoordinatorMessage::Participation {
+        session,
         participant,
-      )
-      .unwrap(),
-      shares: 1,
-    },
-    |participant, msg| match msg {
-      messages::key_gen::ProcessorMessage::Commitments {
-        id: this_id,
-        commitments: mut these_commitments,
-      } => {
-        assert_eq!(this_id, id);
-        assert_eq!(these_commitments.len(), 1);
-        commitments.insert(participant, these_commitments.swap_remove(0));
-      }
-      _ => panic!("processor didn't return Commitments in response to GenerateKey"),
-    },
-  )
+        participation,
+      });
+    }
+    _ => panic!("processor didn't return Participation in response to GenerateKey"),
+  })
   .await;
 
-  // Send the commitments to all parties
-  let mut shares = HashMap::new();
-  interact_with_all(
-    coordinators,
-    |participant| messages::key_gen::CoordinatorMessage::Commitments {
-      id,
-      commitments: clone_without(&commitments, &participant),
-    },
-    |participant, msg| match msg {
-      messages::key_gen::ProcessorMessage::Shares { id: this_id, shares: mut these_shares } => {
-        assert_eq!(this_id, id);
-        assert_eq!(these_shares.len(), 1);
-        shares.insert(participant, these_shares.swap_remove(0));
-      }
-      _ => panic!("processor didn't return Shares in response to GenerateKey"),
-    },
-  )
-  .await;
-
-  // Send the shares
+  // Send the participations
   let mut substrate_key = None;
   let mut network_key = None;
-  interact_with_all(
-    coordinators,
-    |participant| messages::key_gen::CoordinatorMessage::Shares {
-      id,
-      shares: vec![shares
-        .iter()
-        .filter_map(|(this_participant, shares)| {
-          shares.get(&participant).cloned().map(|share| (*this_participant, share))
-        })
-        .collect()],
-    },
-    |_, msg| match msg {
-      messages::key_gen::ProcessorMessage::GeneratedKeyPair {
-        id: this_id,
-        substrate_key: this_substrate_key,
-        network_key: this_network_key,
-      } => {
-        assert_eq!(this_id, id);
-        if substrate_key.is_none() {
-          substrate_key = Some(this_substrate_key);
-          network_key = Some(this_network_key.clone());
-        }
-        assert_eq!(substrate_key.unwrap(), this_substrate_key);
-        assert_eq!(network_key.as_ref().unwrap(), &this_network_key);
+  for participation in participations {
+    for coordinator in &mut *coordinators {
+      coordinator.send_message(participation.clone()).await;
+    }
+  }
+  // This also takes a while on debug
+  tokio::time::sleep(core::time::Duration::from_secs(240 * ci_scaling_factor)).await;
+  interact_with_all(coordinators, |_, msg| match msg {
+    messages::key_gen::ProcessorMessage::GeneratedKeyPair {
+      session: this_session,
+      substrate_key: this_substrate_key,
+      network_key: this_network_key,
+    } => {
+      assert_eq!(this_session, session);
+      if substrate_key.is_none() {
+        substrate_key = Some(this_substrate_key);
+        network_key = Some(this_network_key.clone());
+      }
-      _ => panic!("processor didn't return GeneratedKeyPair in response to GenerateKey"),
-    },
-  )
+      assert_eq!(substrate_key.unwrap(), this_substrate_key);
+      assert_eq!(network_key.as_ref().unwrap(), &this_network_key);
+    }
+    _ => panic!("processor didn't return GeneratedKeyPair in response to all Participations"),
+  })
   .await;
 
   // Confirm the key pair
@@ -132,7 +110,7 @@ pub(crate) async fn key_gen(coordinators: &mut [Coordinator]) -> KeyPair {
     .send_message(CoordinatorMessage::Substrate(
       messages::substrate::CoordinatorMessage::ConfirmKeyPair {
         context,
-        session: id.session,
+        session,
         key_pair: key_pair.clone(),
       },
     ))
diff --git a/tests/processor/src/tests/mod.rs b/tests/processor/src/tests/mod.rs
index afda97d5..668506b1 100644
--- a/tests/processor/src/tests/mod.rs
+++ b/tests/processor/src/tests/mod.rs
@@ -1,5 +1,3 @@
-use ciphersuite::{Ciphersuite, Ristretto};
-
 use serai_client::primitives::NetworkId;
 
 use dockertest::DockerTest;
@@ -17,18 +15,21 @@ mod send;
 pub(crate) const COORDINATORS: usize = 4;
 pub(crate) const THRESHOLD: usize = ((COORDINATORS * 2) / 3) + 1;
 
-fn new_test(network: NetworkId) -> (Vec<(Handles, <Ristretto as Ciphersuite>::F)>, DockerTest) {
+fn new_test(network: NetworkId) -> (Vec<(Handles, ProcessorKeys)>, DockerTest) {
   let mut coordinators = vec![];
   let mut test = DockerTest::new().with_network(dockertest::Network::Isolated);
   let mut eth_handle = None;
-  for _ in 0 .. COORDINATORS {
-    let (handles, coord_key, compositions) = processor_stack(network, eth_handle.clone());
+  for i in 0 .. COORDINATORS {
+    // Uses the counter `i` as this has no relation to any other system, and while Substrate has
+    // hard-coded names for itself, these tests don't spawn any Substrate node
+    let (handles, keys, compositions) =
+      processor_stack(&i.to_string(), network, eth_handle.clone());
     // TODO: Remove this once https://github.com/foundry-rs/foundry/issues/7955
     // This has all processors share an Ethereum node until we can sync controlled nodes
     if network == NetworkId::Ethereum {
       eth_handle = eth_handle.or_else(|| Some(handles.0.clone()));
     }
-    coordinators.push((handles, coord_key));
+    coordinators.push((handles, keys));
     for composition in compositions {
       test.provide_container(composition);
     }
diff --git a/tests/processor/src/tests/send.rs b/tests/processor/src/tests/send.rs
index 62e80c09..8dfb5353 100644
--- a/tests/processor/src/tests/send.rs
+++ b/tests/processor/src/tests/send.rs
@@ -3,6 +3,8 @@ use std::{
   time::{SystemTime, Duration},
 };
 
+use rand_core::{RngCore, OsRng};
+
 use dkg::{Participant, tests::clone_without};
 
 use messages::{sign::SignId, SubstrateContext};

From f3b91bd44f321853c38386f5ad83d64a821c636c Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Fri, 16 Aug 2024 14:51:31 -0400
Subject: [PATCH 002/368] Smash key-gen into independent crate

---
 .github/workflows/tests.yml                 |  1 +
 Cargo.toml                                  |  1 +
 deny.toml                                   |  1 +
 processor/key-gen/Cargo.toml                | 97 +++++++++++++++++++
 processor/key-gen/LICENSE                   | 15 +++
 .../{src/key_gen.rs => key-gen/src/lib.rs}  |  0
 6 files changed, 115 insertions(+)
 create mode 100644 processor/key-gen/Cargo.toml
 create mode 100644 processor/key-gen/LICENSE
 rename processor/{src/key_gen.rs => key-gen/src/lib.rs} (100%)

diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
index 05c25972..fabfaba9 100644
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -39,6 +39,7 @@ jobs:
           GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
             -p serai-message-queue \
             -p serai-processor-messages \
+            -p
serai-processor-key-gen \ -p serai-processor \ -p tendermint-machine \ -p tributary-chain \ diff --git a/Cargo.toml b/Cargo.toml index bce4ebe3..f0bdd6a8 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -70,6 +70,7 @@ members = [ "message-queue", "processor/messages", + "processor/key-gen", "processor", "coordinator/tributary/tendermint", diff --git a/deny.toml b/deny.toml index e5c72f0c..0d82cb8a 100644 --- a/deny.toml +++ b/deny.toml @@ -46,6 +46,7 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-message-queue" }, { allow = ["AGPL-3.0"], name = "serai-processor-messages" }, + { allow = ["AGPL-3.0"], name = "serai-processor-key-gen" }, { allow = ["AGPL-3.0"], name = "serai-processor" }, { allow = ["AGPL-3.0"], name = "tributary-chain" }, diff --git a/processor/key-gen/Cargo.toml b/processor/key-gen/Cargo.toml new file mode 100644 index 00000000..ed6e7383 --- /dev/null +++ b/processor/key-gen/Cargo.toml @@ -0,0 +1,97 @@ +[package] +name = "serai-processor-key-gen" +version = "0.1.0" +description = "Key generation for the Serai processor" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/key-gen" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +# Macros +async-trait = { version = "0.1", default-features = false } +zeroize = { version = "1", default-features = false, features = ["std"] } +thiserror = { version = "1", default-features = false } + +# Libs +rand_core = { version = "0.6", default-features = false, features = ["std", "getrandom"] } +rand_chacha = { version = "0.3", default-features = false, features = ["std"] } + +# Encoders +const-hex = { version = "1", default-features = false } +hex = { version = "0.4", default-features = false, features = ["std"] } +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } +serde_json = { version = "1", default-features = false, features = ["std"] } + +# Cryptography +ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] } + +blake2 = { version = "0.10", default-features = false, features = ["std"] } +transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std"] } +ec-divisors = { package = "ec-divisors", path = "../crypto/evrf/divisors", default-features = false } +dkg = { package = "dkg", path = "../crypto/dkg", default-features = false, features = ["std", "evrf-ristretto"] } +frost = { package = "modular-frost", path = "../crypto/frost", default-features = false, features = ["ristretto"] } +frost-schnorrkel = { path = "../crypto/schnorrkel", default-features = false } + +# Bitcoin/Ethereum +k256 = { version = "^0.13.1", default-features = false, features = ["std"], optional = true } + +# Bitcoin +secp256k1 = { version = "0.29", default-features = false, features = ["std", "global-context", "rand-std"], optional = true } +bitcoin-serai = { path = "../networks/bitcoin", default-features = false, features = ["std"], optional = true } + +# Ethereum +ethereum-serai = { path = "../networks/ethereum", default-features = false, optional = true } + +# Monero +dalek-ff-group = { path = "../crypto/dalek-ff-group", default-features = false, features = ["std"], optional = true } 
+monero-simple-request-rpc = { path = "../networks/monero/rpc/simple-request", default-features = false, optional = true } +monero-wallet = { path = "../networks/monero/wallet", default-features = false, features = ["std", "multisig", "compile-time-generators"], optional = true } + +# Application +log = { version = "0.4", default-features = false, features = ["std"] } +env_logger = { version = "0.10", default-features = false, features = ["humantime"], optional = true } +tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } + +zalloc = { path = "../common/zalloc" } +serai-db = { path = "../common/db" } +serai-env = { path = "../common/env", optional = true } +# TODO: Replace with direct usage of primitives +serai-client = { path = "../substrate/client", default-features = false, features = ["serai"] } + +messages = { package = "serai-processor-messages", path = "./messages" } + +message-queue = { package = "serai-message-queue", path = "../message-queue", optional = true } + +[dev-dependencies] +frost = { package = "modular-frost", path = "../crypto/frost", features = ["tests"] } + +sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] } + +ethereum-serai = { path = "../networks/ethereum", default-features = false, features = ["tests"] } + +dockertest = "0.4" +serai-docker-tests = { path = "../tests/docker" } + +[features] +secp256k1 = ["k256", "dkg/evrf-secp256k1", "frost/secp256k1"] +bitcoin = ["dep:secp256k1", "secp256k1", "bitcoin-serai", "serai-client/bitcoin"] + +ethereum = ["secp256k1", "ethereum-serai/tests"] + +ed25519 = ["dalek-ff-group", "dkg/evrf-ed25519", "frost/ed25519"] +monero = ["ed25519", "monero-simple-request-rpc", "monero-wallet", "serai-client/monero"] + +binaries = ["env_logger", "serai-env", "message-queue"] +parity-db = ["serai-db/parity-db"] +rocksdb = ["serai-db/rocksdb"] diff --git a/processor/key-gen/LICENSE b/processor/key-gen/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/processor/key-gen/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/processor/src/key_gen.rs b/processor/key-gen/src/lib.rs similarity index 100% rename from processor/src/key_gen.rs rename to processor/key-gen/src/lib.rs From 2f29c91d3068c411d74859b33af2f0284dd69118 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 16 Aug 2024 17:01:45 -0400 Subject: [PATCH 003/368] Smash key-gen out of processor Resolves some bad assumptions made regarding keys being unique or not. 
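As a rough illustration of the new boundary, a network binds itself to this
crate via the `KeyGenParams` trait. The sketch below is hypothetical: the
`BitcoinKeyGenParams` type is invented for the example, `Secp256k1` assumes
dkg's `evrf-secp256k1` feature for its `EvrfCurve` impl, and the actual tweak
logic is elided.

    use ciphersuite::Secp256k1;
    use dkg::ThresholdKeys;
    use serai_processor_key_gen::KeyGenParams;

    struct BitcoinKeyGenParams;
    impl KeyGenParams for BitcoinKeyGenParams {
      const ID: &'static str = "Bitcoin";
      type ExternalNetworkCurve = Secp256k1;
      fn tweak_keys(_keys: &mut ThresholdKeys<Self::ExternalNetworkCurve>) {
        // A network would make its key spendable here (e.g. BIP-340 parity)
      }
      // `encode_key` is left to its default, the canonical `to_bytes` encoding
    }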
--- Cargo.lock | 21 +- processor/Cargo.toml | 1 - processor/key-gen/Cargo.toml | 80 ++----- processor/key-gen/README.md | 8 + processor/key-gen/src/db.rs | 144 ++++++++++++ processor/key-gen/src/generators.rs | 38 +++ processor/key-gen/src/lib.rs | 351 +++++++++------------------- processor/src/lib.rs | 2 +- processor/src/main.rs | 2 +- 9 files changed, 335 insertions(+), 312 deletions(-) create mode 100644 processor/key-gen/README.md create mode 100644 processor/key-gen/src/db.rs create mode 100644 processor/key-gen/src/generators.rs diff --git a/Cargo.lock b/Cargo.lock index ff21fe66..62952da0 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8564,7 +8564,6 @@ version = "0.1.0" dependencies = [ "async-trait", "bitcoin-serai", - "blake2", "borsh", "ciphersuite", "const-hex", @@ -8600,6 +8599,26 @@ dependencies = [ "zeroize", ] +[[package]] +name = "serai-processor-key-gen" +version = "0.1.0" +dependencies = [ + "blake2", + "borsh", + "ciphersuite", + "dkg", + "ec-divisors", + "flexible-transcript", + "log", + "parity-scale-codec", + "rand_chacha", + "rand_core", + "serai-db", + "serai-processor-messages", + "serai-validator-sets-primitives", + "zeroize", +] + [[package]] name = "serai-processor-messages" version = "0.1.0" diff --git a/processor/Cargo.toml b/processor/Cargo.toml index fa2f643c..2d386f2d 100644 --- a/processor/Cargo.toml +++ b/processor/Cargo.toml @@ -36,7 +36,6 @@ serde_json = { version = "1", default-features = false, features = ["std"] } # Cryptography ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] } -blake2 = { version = "0.10", default-features = false, features = ["std"] } transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std"] } ec-divisors = { package = "ec-divisors", path = "../crypto/evrf/divisors", default-features = false } dkg = { package = "dkg", path = "../crypto/dkg", default-features = false, features = ["std", "evrf-ristretto"] } diff --git a/processor/key-gen/Cargo.toml b/processor/key-gen/Cargo.toml index ed6e7383..f1f00564 100644 --- a/processor/key-gen/Cargo.toml +++ b/processor/key-gen/Cargo.toml @@ -13,85 +13,35 @@ publish = false all-features = true rustdoc-args = ["--cfg", "docsrs"] +[package.metadata.cargo-machete] +ignored = ["scale"] + [lints] workspace = true [dependencies] # Macros -async-trait = { version = "0.1", default-features = false } zeroize = { version = "1", default-features = false, features = ["std"] } -thiserror = { version = "1", default-features = false } # Libs rand_core = { version = "0.6", default-features = false, features = ["std", "getrandom"] } rand_chacha = { version = "0.3", default-features = false, features = ["std"] } +# Cryptography +blake2 = { version = "0.10", default-features = false, features = ["std"] } +transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std"] } +ec-divisors = { package = "ec-divisors", path = "../../crypto/evrf/divisors", default-features = false } +ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] } +dkg = { package = "dkg", path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-ristretto"] } + +# Substrate +serai-validator-sets-primitives = { path = "../../substrate/validator-sets/primitives", default-features = false, features = ["std"] } + # Encoders -const-hex = { version = "1", default-features = false } -hex = { 
version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } -serde_json = { version = "1", default-features = false, features = ["std"] } - -# Cryptography -ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] } - -blake2 = { version = "0.10", default-features = false, features = ["std"] } -transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std"] } -ec-divisors = { package = "ec-divisors", path = "../crypto/evrf/divisors", default-features = false } -dkg = { package = "dkg", path = "../crypto/dkg", default-features = false, features = ["std", "evrf-ristretto"] } -frost = { package = "modular-frost", path = "../crypto/frost", default-features = false, features = ["ristretto"] } -frost-schnorrkel = { path = "../crypto/schnorrkel", default-features = false } - -# Bitcoin/Ethereum -k256 = { version = "^0.13.1", default-features = false, features = ["std"], optional = true } - -# Bitcoin -secp256k1 = { version = "0.29", default-features = false, features = ["std", "global-context", "rand-std"], optional = true } -bitcoin-serai = { path = "../networks/bitcoin", default-features = false, features = ["std"], optional = true } - -# Ethereum -ethereum-serai = { path = "../networks/ethereum", default-features = false, optional = true } - -# Monero -dalek-ff-group = { path = "../crypto/dalek-ff-group", default-features = false, features = ["std"], optional = true } -monero-simple-request-rpc = { path = "../networks/monero/rpc/simple-request", default-features = false, optional = true } -monero-wallet = { path = "../networks/monero/wallet", default-features = false, features = ["std", "multisig", "compile-time-generators"], optional = true } # Application log = { version = "0.4", default-features = false, features = ["std"] } -env_logger = { version = "0.10", default-features = false, features = ["humantime"], optional = true } -tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } - -zalloc = { path = "../common/zalloc" } -serai-db = { path = "../common/db" } -serai-env = { path = "../common/env", optional = true } -# TODO: Replace with direct usage of primitives -serai-client = { path = "../substrate/client", default-features = false, features = ["serai"] } - -messages = { package = "serai-processor-messages", path = "./messages" } - -message-queue = { package = "serai-message-queue", path = "../message-queue", optional = true } - -[dev-dependencies] -frost = { package = "modular-frost", path = "../crypto/frost", features = ["tests"] } - -sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] } - -ethereum-serai = { path = "../networks/ethereum", default-features = false, features = ["tests"] } - -dockertest = "0.4" -serai-docker-tests = { path = "../tests/docker" } - -[features] -secp256k1 = ["k256", "dkg/evrf-secp256k1", "frost/secp256k1"] -bitcoin = ["dep:secp256k1", "secp256k1", "bitcoin-serai", "serai-client/bitcoin"] - -ethereum = ["secp256k1", "ethereum-serai/tests"] - -ed25519 = ["dalek-ff-group", "dkg/evrf-ed25519", "frost/ed25519"] -monero = ["ed25519", "monero-simple-request-rpc", "monero-wallet", "serai-client/monero"] - -binaries = ["env_logger", 
"serai-env", "message-queue"] -parity-db = ["serai-db/parity-db"] -rocksdb = ["serai-db/rocksdb"] +serai-db = { path = "../../common/db" } +messages = { package = "serai-processor-messages", path = "../messages" } diff --git a/processor/key-gen/README.md b/processor/key-gen/README.md new file mode 100644 index 00000000..c28357ba --- /dev/null +++ b/processor/key-gen/README.md @@ -0,0 +1,8 @@ +# Key Generation + +This library implements the Distributed Key Generation (DKG) for the Serai +protocol. Two invocations of the eVRF-based DKG are performed, one for Ristretto +(to have a key to oraclize values onto the Serai blockchain with) and one for +the external network's curve. + +This library is interacted with via the `serai-processor-messages::key_gen` API. diff --git a/processor/key-gen/src/db.rs b/processor/key-gen/src/db.rs new file mode 100644 index 00000000..d597cb7e --- /dev/null +++ b/processor/key-gen/src/db.rs @@ -0,0 +1,144 @@ +use core::marker::PhantomData; +use std::collections::HashMap; + +use zeroize::Zeroizing; + +use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; +use dkg::{Participant, ThresholdCore, ThresholdKeys, evrf::EvrfCurve}; + +use serai_validator_sets_primitives::Session; + +use borsh::{BorshSerialize, BorshDeserialize}; +use serai_db::{Get, DbTxn, create_db}; + +use crate::KeyGenParams; + +pub(crate) struct Params { + pub(crate) t: u16, + pub(crate) n: u16, + pub(crate) substrate_evrf_public_keys: + Vec<<::EmbeddedCurve as Ciphersuite>::G>, + pub(crate) network_evrf_public_keys: + Vec<<::EmbeddedCurve as Ciphersuite>::G>, +} + +#[derive(BorshSerialize, BorshDeserialize)] +struct RawParams { + t: u16, + substrate_evrf_public_keys: Vec<[u8; 32]>, + network_evrf_public_keys: Vec>, +} + +#[derive(BorshSerialize, BorshDeserialize)] +pub(crate) struct Participations { + pub(crate) substrate_participations: HashMap>, + pub(crate) network_participations: HashMap>, +} + +create_db!( + KeyGenDb { + ParamsDb: (session: &Session) -> RawParams, + ParticipationsDb: (session: &Session) -> Participations, + KeySharesDb: (session: &Session) -> Vec, + } +); + +pub(crate) struct KeyGenDb(PhantomData
<P>);
+impl<P: KeyGenParams> KeyGenDb<P> {
+  pub(crate) fn set_params(txn: &mut impl DbTxn, session: Session, params: Params<P>
) { + assert_eq!(params.substrate_evrf_public_keys.len(), params.network_evrf_public_keys.len()); + + ParamsDb::set( + txn, + &session, + &RawParams { + t: params.t, + substrate_evrf_public_keys: params + .substrate_evrf_public_keys + .into_iter() + .map(|key| key.to_bytes()) + .collect(), + network_evrf_public_keys: params + .network_evrf_public_keys + .into_iter() + .map(|key| key.to_bytes().as_ref().to_vec()) + .collect(), + }, + ) + } + + pub(crate) fn params(getter: &impl Get, session: Session) -> Option> { + ParamsDb::get(getter, &session).map(|params| Params { + t: params.t, + n: params + .network_evrf_public_keys + .len() + .try_into() + .expect("amount of keys exceeded the amount allowed during a DKG"), + substrate_evrf_public_keys: params + .substrate_evrf_public_keys + .into_iter() + .map(|key| { + <::EmbeddedCurve as Ciphersuite>::read_G(&mut key.as_slice()) + .unwrap() + }) + .collect(), + network_evrf_public_keys: params + .network_evrf_public_keys + .into_iter() + .map(|key| { + <::EmbeddedCurve as Ciphersuite>::read_G::<&[u8]>( + &mut key.as_ref(), + ) + .unwrap() + }) + .collect(), + }) + } + + pub(crate) fn set_participations( + txn: &mut impl DbTxn, + session: Session, + participations: &Participations, + ) { + ParticipationsDb::set(txn, &session, participations) + } + pub(crate) fn participations(getter: &impl Get, session: Session) -> Option { + ParticipationsDb::get(getter, &session) + } + + pub(crate) fn set_key_shares( + txn: &mut impl DbTxn, + session: Session, + substrate_keys: &[ThresholdKeys], + network_keys: &[ThresholdKeys], + ) { + assert_eq!(substrate_keys.len(), network_keys.len()); + + let mut keys = Zeroizing::new(vec![]); + for (substrate_keys, network_keys) in substrate_keys.iter().zip(network_keys) { + keys.extend(substrate_keys.serialize().as_slice()); + keys.extend(network_keys.serialize().as_slice()); + } + KeySharesDb::set(txn, &session, &keys); + } + + #[allow(clippy::type_complexity)] + pub(crate) fn key_shares( + getter: &impl Get, + session: Session, + ) -> Option<(Vec>, Vec>)> { + let keys = KeySharesDb::get(getter, &session)?; + let mut keys: &[u8] = keys.as_ref(); + + let mut substrate_keys = vec![]; + let mut network_keys = vec![]; + while !keys.is_empty() { + substrate_keys.push(ThresholdKeys::new(ThresholdCore::read(&mut keys).unwrap())); + let mut these_network_keys = ThresholdKeys::new(ThresholdCore::read(&mut keys).unwrap()); + P::tweak_keys(&mut these_network_keys); + network_keys.push(these_network_keys); + } + Some((substrate_keys, network_keys)) + } +} diff --git a/processor/key-gen/src/generators.rs b/processor/key-gen/src/generators.rs new file mode 100644 index 00000000..3570ca6e --- /dev/null +++ b/processor/key-gen/src/generators.rs @@ -0,0 +1,38 @@ +use core::any::{TypeId, Any}; +use std::{ + sync::{LazyLock, Mutex}, + collections::HashMap, +}; + +use dkg::evrf::*; + +use serai_validator_sets_primitives::MAX_KEY_SHARES_PER_SET; + +/// A cache of the generators used by the eVRF DKG. +/// +/// This performs a lookup of the Ciphersuite to its generators. Since the Ciphersuite is a +/// generic, this takes advantage of `Any`. This static is isolated in a module to ensure +/// correctness can be evaluated solely by reviewing these few lines of code. +/// +/// This is arguably over-engineered as of right now, as we only need generators for Ristretto +/// and N::Curve. 
By having this HashMap, we enable de-duplication of the Ristretto == N::Curve +/// case, and we automatically support the n-curve case (rather than hard-coding to the 2-curve +/// case). +static GENERATORS: LazyLock>> = + LazyLock::new(|| Mutex::new(HashMap::new())); + +pub(crate) fn generators() -> &'static EvrfGenerators { + GENERATORS + .lock() + .unwrap() + .entry(TypeId::of::()) + .or_insert_with(|| { + // If we haven't prior needed generators for this Ciphersuite, generate new ones + Box::leak(Box::new(EvrfGenerators::::new( + ((MAX_KEY_SHARES_PER_SET * 2 / 3) + 1).try_into().unwrap(), + MAX_KEY_SHARES_PER_SET.try_into().unwrap(), + ))) + }) + .downcast_ref() + .unwrap() +} diff --git a/processor/key-gen/src/lib.rs b/processor/key-gen/src/lib.rs index a059c350..8d4e911f 100644 --- a/processor/key-gen/src/lib.rs +++ b/processor/key-gen/src/lib.rs @@ -1,7 +1,8 @@ -use std::{ - io, - collections::{HashSet, HashMap}, -}; +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +use std::{io, collections::HashMap}; use zeroize::Zeroizing; @@ -14,156 +15,41 @@ use ciphersuite::{ group::{Group, GroupEncoding}, Ciphersuite, Ristretto, }; -use dkg::{Participant, ThresholdCore, ThresholdKeys, evrf::*}; +use dkg::{Participant, ThresholdKeys, evrf::*}; use log::info; -use serai_client::validator_sets::primitives::{Session, KeyPair}; +use serai_validator_sets_primitives::Session; use messages::key_gen::*; -use crate::{Get, DbTxn, Db, create_db, networks::Network}; +use serai_db::{DbTxn, Db}; -mod generators { - use core::any::{TypeId, Any}; - use std::{ - sync::{LazyLock, Mutex}, - collections::HashMap, - }; - - use frost::dkg::evrf::*; - - use serai_client::validator_sets::primitives::MAX_KEY_SHARES_PER_SET; - - /// A cache of the generators used by the eVRF DKG. - /// - /// This performs a lookup of the Ciphersuite to its generators. Since the Ciphersuite is a - /// generic, this takes advantage of `Any`. This static is isolated in a module to ensure - /// correctness can be evaluated solely by reviewing these few lines of code. - /// - /// This is arguably over-engineered as of right now, as we only need generators for Ristretto - /// and N::Curve. By having this HashMap, we enable de-duplication of the Ristretto == N::Curve - /// case, and we automatically support the n-curve case (rather than hard-coding to the 2-curve - /// case). 
- static GENERATORS: LazyLock>> = - LazyLock::new(|| Mutex::new(HashMap::new())); - - pub(crate) fn generators() -> &'static EvrfGenerators { - GENERATORS - .lock() - .unwrap() - .entry(TypeId::of::()) - .or_insert_with(|| { - // If we haven't prior needed generators for this Ciphersuite, generate new ones - Box::leak(Box::new(EvrfGenerators::::new( - ((MAX_KEY_SHARES_PER_SET * 2 / 3) + 1).try_into().unwrap(), - MAX_KEY_SHARES_PER_SET.try_into().unwrap(), - ))) - }) - .downcast_ref() - .unwrap() - } -} +mod generators; use generators::generators; -#[derive(Debug)] -pub struct KeyConfirmed { - pub substrate_keys: Vec>, - pub network_keys: Vec>, -} +mod db; +use db::{Params, Participations, KeyGenDb}; -create_db!( - KeyGenDb { - ParamsDb: (session: &Session) -> (u16, Vec<[u8; 32]>, Vec>), - ParticipationDb: (session: &Session) -> ( - HashMap>, - HashMap>, - ), - // GeneratedKeysDb, KeysDb use `()` for their value as we manually serialize their values - // TODO: Don't do that - GeneratedKeysDb: (session: &Session) -> (), - // These do assume a key is only used once across sets, which holds true if the threshold is - // honest - // TODO: Remove this assumption - KeysDb: (network_key: &[u8]) -> (), - SessionDb: (network_key: &[u8]) -> Session, - NetworkKeyDb: (session: Session) -> Vec, - } -); +/// Parameters for a key generation. +pub trait KeyGenParams { + /// The ID for this instantiation. + const ID: &'static str; -impl GeneratedKeysDb { - #[allow(clippy::type_complexity)] - fn read_keys( - getter: &impl Get, - key: &[u8], - ) -> Option<(Vec, (Vec>, Vec>))> { - let keys_vec = getter.get(key)?; - let mut keys_ref: &[u8] = keys_vec.as_ref(); + /// The curve used for the external network. + type ExternalNetworkCurve: EvrfCurve< + EmbeddedCurve: Ciphersuite< + G: ec_divisors::DivisorCurve::F>, + >, + >; - let mut substrate_keys = vec![]; - let mut network_keys = vec![]; - while !keys_ref.is_empty() { - substrate_keys.push(ThresholdKeys::new(ThresholdCore::read(&mut keys_ref).unwrap())); - let mut these_network_keys = ThresholdKeys::new(ThresholdCore::read(&mut keys_ref).unwrap()); - N::tweak_keys(&mut these_network_keys); - network_keys.push(these_network_keys); - } - Some((keys_vec, (substrate_keys, network_keys))) - } + /// Tweaks keys as necessary/beneficial. 
+ fn tweak_keys(keys: &mut ThresholdKeys); - fn save_keys( - txn: &mut impl DbTxn, - session: &Session, - substrate_keys: &[ThresholdKeys], - network_keys: &[ThresholdKeys], - ) { - let mut keys = Zeroizing::new(vec![]); - for (substrate_keys, network_keys) in substrate_keys.iter().zip(network_keys) { - keys.extend(substrate_keys.serialize().as_slice()); - keys.extend(network_keys.serialize().as_slice()); - } - txn.put(Self::key(session), keys); - } -} - -impl KeysDb { - fn confirm_keys( - txn: &mut impl DbTxn, - session: Session, - key_pair: &KeyPair, - ) -> (Vec>, Vec>) { - let (keys_vec, keys) = - GeneratedKeysDb::read_keys::(txn, &GeneratedKeysDb::key(&session)).unwrap(); - assert_eq!(key_pair.0 .0, keys.0[0].group_key().to_bytes()); - assert_eq!( - { - let network_key: &[u8] = key_pair.1.as_ref(); - network_key - }, - keys.1[0].group_key().to_bytes().as_ref(), - ); - txn.put(Self::key(key_pair.1.as_ref()), keys_vec); - NetworkKeyDb::set(txn, session, &key_pair.1.clone().into_inner()); - SessionDb::set(txn, key_pair.1.as_ref(), &session); - keys - } - - #[allow(clippy::type_complexity)] - fn keys( - getter: &impl Get, - network_key: &::G, - ) -> Option<(Session, (Vec>, Vec>))> { - let res = - GeneratedKeysDb::read_keys::(getter, &Self::key(network_key.to_bytes().as_ref()))?.1; - assert_eq!(&res.1[0].group_key(), network_key); - Some((SessionDb::get(getter, network_key.to_bytes().as_ref()).unwrap(), res)) - } - - pub fn substrate_keys_by_session( - getter: &impl Get, - session: Session, - ) -> Option>> { - let network_key = NetworkKeyDb::get(getter, session)?; - Some(GeneratedKeysDb::read_keys::(getter, &Self::key(&network_key))?.1 .0) + /// Encode keys as optimal. + /// + /// A default implementation is provided which calls the traditional `to_bytes`. + fn encode_key(key: ::G) -> Vec { + key.to_bytes().as_ref().to_vec() } } @@ -242,49 +128,44 @@ fn coerce_keys( (keys, faulty) } +/// An instance of the Serai key generation protocol. #[derive(Debug)] -pub struct KeyGen { +pub struct KeyGen { db: D, substrate_evrf_private_key: Zeroizing<<::EmbeddedCurve as Ciphersuite>::F>, - network_evrf_private_key: Zeroizing<<::EmbeddedCurve as Ciphersuite>::F>, + network_evrf_private_key: + Zeroizing<<::EmbeddedCurve as Ciphersuite>::F>, } -impl KeyGen { +impl KeyGen { + /// Create a new key generation instance. #[allow(clippy::new_ret_no_self)] pub fn new( db: D, substrate_evrf_private_key: Zeroizing< <::EmbeddedCurve as Ciphersuite>::F, >, - network_evrf_private_key: Zeroizing<<::EmbeddedCurve as Ciphersuite>::F>, - ) -> KeyGen { + network_evrf_private_key: Zeroizing< + <::EmbeddedCurve as Ciphersuite>::F, + >, + ) -> KeyGen { KeyGen { db, substrate_evrf_private_key, network_evrf_private_key } } - pub fn in_set(&self, session: &Session) -> bool { - // We determine if we're in set using if we have the parameters for a session's key generation - // We only have these if we were told to generate a key for this session - ParamsDb::get(&self.db, session).is_some() - } - + /// Fetch the key shares for a specific session. 
#[allow(clippy::type_complexity)] - pub fn keys( - &self, - key: &::G, - ) -> Option<(Session, (Vec>, Vec>))> { - // This is safe, despite not having a txn, since it's a static value - // It doesn't change over time/in relation to other operations - KeysDb::keys::(&self.db, key) - } - - pub fn substrate_keys_by_session( + pub fn key_shares( &self, session: Session, - ) -> Option>> { - KeysDb::substrate_keys_by_session::(&self.db, session) + ) -> Option<(Vec>, Vec>)> { + // This is safe, despite not having a txn, since it's a static value + // It doesn't change over time/in relation to other operations + // It is solely set or unset + KeyGenDb::
<P>
::key_shares(&self.db, session) } + /// Handle a message from the coordinator. pub fn handle( &mut self, txn: &mut D::Transaction<'_>, @@ -292,10 +173,10 @@ impl KeyGen { ) -> Vec { const SUBSTRATE_KEY_CONTEXT: &[u8] = b"substrate"; const NETWORK_KEY_CONTEXT: &[u8] = b"network"; - fn context(session: Session, key_context: &[u8]) -> [u8; 32] { + fn context(session: Session, key_context: &[u8]) -> [u8; 32] { // TODO2: Also embed the chain ID/genesis block let mut transcript = RecommendedTranscript::new(b"Serai eVRF Key Gen"); - transcript.append_message(b"network", N::ID); + transcript.append_message(b"network", P::ID.as_bytes()); transcript.append_message(b"session", session.0.to_le_bytes()); transcript.append_message(b"key", key_context); (&(&transcript.challenge(b"context"))[.. 32]).try_into().unwrap() @@ -308,64 +189,68 @@ impl KeyGen { // Unzip the vector of eVRF keys let substrate_evrf_public_keys = evrf_public_keys.iter().map(|(key, _)| *key).collect::>(); + let (substrate_evrf_public_keys, mut faulty) = + coerce_keys::(&substrate_evrf_public_keys); + let network_evrf_public_keys = evrf_public_keys.into_iter().map(|(_, key)| key).collect::>(); - - let mut participation = Vec::with_capacity(2048); - let mut faulty = HashSet::new(); + let (network_evrf_public_keys, additional_faulty) = + coerce_keys::(&network_evrf_public_keys); + faulty.extend(additional_faulty); // Participate for both Substrate and the network fn participate( context: [u8; 32], threshold: u16, - evrf_public_keys: &[impl AsRef<[u8]>], + evrf_public_keys: &[::G], evrf_private_key: &Zeroizing<::F>, - faulty: &mut HashSet, output: &mut impl io::Write, ) { - let (coerced_keys, faulty_is) = coerce_keys::(evrf_public_keys); - for faulty_i in faulty_is { - faulty.insert(faulty_i); - } let participation = EvrfDkg::::participate( &mut OsRng, generators(), context, threshold, - &coerced_keys, + evrf_public_keys, evrf_private_key, ); participation.unwrap().write(output).unwrap(); } + + let mut participation = Vec::with_capacity(2048); participate::( - context::(session, SUBSTRATE_KEY_CONTEXT), + context::
<P>
(session, SUBSTRATE_KEY_CONTEXT), threshold, &substrate_evrf_public_keys, &self.substrate_evrf_private_key, - &mut faulty, &mut participation, ); - participate::( - context::(session, NETWORK_KEY_CONTEXT), + participate::( + context::
<P>
(session, NETWORK_KEY_CONTEXT), threshold, &network_evrf_public_keys, &self.network_evrf_private_key, - &mut faulty, &mut participation, ); // Save the params - ParamsDb::set( + KeyGenDb::
<P>
::set_params( txn, - &session, - &(threshold, substrate_evrf_public_keys, network_evrf_public_keys), + session, + Params { + t: threshold, + n: substrate_evrf_public_keys + .len() + .try_into() + .expect("amount of keys exceeded the amount allowed during a DKG"), + substrate_evrf_public_keys, + network_evrf_public_keys, + }, ); // Send back our Participation and all faulty parties - let mut faulty = faulty.into_iter().collect::>(); - faulty.sort(); - let mut res = Vec::with_capacity(faulty.len() + 1); + faulty.sort_unstable(); for faulty in faulty { res.push(ProcessorMessage::Blame { session, participant: faulty }); } @@ -377,13 +262,8 @@ impl KeyGen { CoordinatorMessage::Participation { session, participant, participation } => { info!("received participation from {:?} for {:?}", participant, session); - let (threshold, substrate_evrf_public_keys, network_evrf_public_keys) = - ParamsDb::get(txn, &session).unwrap(); - - let n = substrate_evrf_public_keys - .len() - .try_into() - .expect("performing a key gen with more than u16::MAX participants"); + let Params { t: threshold, n, substrate_evrf_public_keys, network_evrf_public_keys } = + KeyGenDb::
<P>
::params(txn, session).unwrap(); // Read these `Participation`s // If they fail basic sanity checks, fail fast @@ -399,7 +279,8 @@ impl KeyGen { return blame; }; let len_at_network_participation_start_pos = participation.len(); - let Ok(network_participation) = Participation::::read(&mut participation, n) + let Ok(network_participation) = + Participation::::read(&mut participation, n) else { return blame; }; @@ -413,16 +294,15 @@ impl KeyGen { // If we've already generated these keys, we don't actually need to save these // participations and continue. We solely have to verify them, as to identify malicious // participants and prevent DoSs, before returning - if txn.get(GeneratedKeysDb::key(&session)).is_some() { + if self.key_shares(session).is_some() { info!("already finished generating a key for {:?}", session); match EvrfDkg::::verify( &mut OsRng, generators(), - context::(session, SUBSTRATE_KEY_CONTEXT), + context::
<P>
(session, SUBSTRATE_KEY_CONTEXT), threshold, - // Ignores the list of participants who were faulty, as they were prior blamed - &coerce_keys::(&substrate_evrf_public_keys).0, + &substrate_evrf_public_keys, &HashMap::from([(participant, substrate_participation)]), ) .unwrap() @@ -434,13 +314,12 @@ impl KeyGen { } } - match EvrfDkg::::verify( + match EvrfDkg::::verify( &mut OsRng, generators(), - context::(session, NETWORK_KEY_CONTEXT), + context::
<P>
(session, NETWORK_KEY_CONTEXT), threshold, - // Ignores the list of participants who were faulty, as they were prior blamed - &coerce_keys::(&network_evrf_public_keys).0, + &network_evrf_public_keys, &HashMap::from([(participant, network_participation)]), ) .unwrap() @@ -467,17 +346,22 @@ impl KeyGen { // Since these are valid `Participation`s, save them let (mut substrate_participations, mut network_participations) = - ParticipationDb::get(txn, &session) - .unwrap_or((HashMap::with_capacity(1), HashMap::with_capacity(1))); + KeyGenDb::
<P>
::participations(txn, session).map_or_else( + || (HashMap::with_capacity(1), HashMap::with_capacity(1)), + |p| (p.substrate_participations, p.network_participations), + ); assert!( substrate_participations.insert(participant, substrate_participation).is_none() && network_participations.insert(participant, network_participation).is_none(), "received participation for someone multiple times" ); - ParticipationDb::set( + KeyGenDb::
<P>
::set_participations( txn, - &session, - &(substrate_participations.clone(), network_participations.clone()), + session, + &Participations { + substrate_participations: substrate_participations.clone(), + network_participations: network_participations.clone(), + }, ); // This block is taken from the eVRF DKG itself to evaluate the amount participating @@ -510,12 +394,12 @@ impl KeyGen { } // If we now have the threshold participating, verify their `Participation`s - fn verify_dkg( + fn verify_dkg( txn: &mut impl DbTxn, session: Session, true_if_substrate_false_if_network: bool, threshold: u16, - evrf_public_keys: &[impl AsRef<[u8]>], + evrf_public_keys: &[::G], substrate_participations: &mut HashMap>, network_participations: &mut HashMap>, ) -> Result, Vec> { @@ -542,7 +426,7 @@ impl KeyGen { match EvrfDkg::::verify( &mut OsRng, generators(), - context::( + context::
<P>
( session, if true_if_substrate_false_if_network { SUBSTRATE_KEY_CONTEXT @@ -551,8 +435,7 @@ impl KeyGen { }, ), threshold, - // Ignores the list of participants who were faulty, as they were prior blamed - &coerce_keys::(evrf_public_keys).0, + evrf_public_keys, &participations, ) .unwrap() @@ -570,10 +453,13 @@ impl KeyGen { blames.push(ProcessorMessage::Blame { session, participant }); } // Since we removed `Participation`s, write the updated versions to the database - ParticipationDb::set( + KeyGenDb::
<P>
::set_participations( txn, - &session, - &(substrate_participations.clone(), network_participations.clone()), + session, + &Participations { + substrate_participations: substrate_participations.clone(), + network_participations: network_participations.clone(), + }, ); Err(blames)? } @@ -586,7 +472,7 @@ impl KeyGen { } } - let substrate_dkg = match verify_dkg::( + let substrate_dkg = match verify_dkg::( txn, session, true, @@ -601,7 +487,7 @@ impl KeyGen { Err(blames) => return blames, }; - let network_dkg = match verify_dkg::( + let network_dkg = match verify_dkg::( txn, session, false, @@ -623,38 +509,17 @@ impl KeyGen { let mut network_keys = network_dkg.keys(&self.network_evrf_private_key); // Tweak the keys for the network for network_keys in &mut network_keys { - N::tweak_keys(network_keys); + P::tweak_keys(network_keys); } - GeneratedKeysDb::save_keys::(txn, &session, &substrate_keys, &network_keys); + KeyGenDb::
<P>
::set_key_shares(txn, session, &substrate_keys, &network_keys); // Since no one we verified was invalid, and we had the threshold, yield the new keys vec![ProcessorMessage::GeneratedKeyPair { session, substrate_key: substrate_keys[0].group_key().to_bytes(), - // TODO: This can be made more efficient since tweaked keys may be a subset of keys - network_key: network_keys[0].group_key().to_bytes().as_ref().to_vec(), + network_key: P::encode_key(network_keys[0].group_key()), }] } } } - - // This should only be called if we're participating, hence taking our instance - #[allow(clippy::unused_self)] - pub fn confirm( - &mut self, - txn: &mut D::Transaction<'_>, - session: Session, - key_pair: &KeyPair, - ) -> KeyConfirmed { - info!( - "Confirmed key pair {} {} for {:?}", - hex::encode(key_pair.0), - hex::encode(&key_pair.1), - session, - ); - - let (substrate_keys, network_keys) = KeysDb::confirm_keys::(txn, session, key_pair); - - KeyConfirmed { substrate_keys, network_keys } - } } diff --git a/processor/src/lib.rs b/processor/src/lib.rs index 19f67508..bbff33f6 100644 --- a/processor/src/lib.rs +++ b/processor/src/lib.rs @@ -6,7 +6,7 @@ pub use plan::*; mod db; pub(crate) use db::*; -mod key_gen; +use serai_processor_key_gen as key_gen; pub mod networks; pub(crate) mod multisigs; diff --git a/processor/src/main.rs b/processor/src/main.rs index 2d05ad4d..49406aaf 100644 --- a/processor/src/main.rs +++ b/processor/src/main.rs @@ -48,7 +48,7 @@ pub use db::*; mod coordinator; pub use coordinator::*; -mod key_gen; +use serai_processor_key_gen as key_gen; use key_gen::{SessionDb, KeyConfirmed, KeyGen}; mod signer; From 1e8a9ec5bd944b965c8106c6ec43b469649d2a4f Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 18 Aug 2024 22:43:13 -0400 Subject: [PATCH 004/368] Smash out the signer Abstract, to be done for the transactions, the batches, the cosigns, the slash reports, everything. It has a minimal API itself, intending to be as clear as possible. 
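Roughly, a consumer drives a signing protocol with this API as follows. The
sketch is illustrative: `machines`, `id`, and the `next_message`/`send`/
`publish` helpers are assumed plumbing, not items from this crate.

    // One AttemptManager per session, with one registration per protocol
    let mut attempt_manager = AttemptManager::new(session, start_i);
    attempt_manager.register(id, machines);
    loop {
      match attempt_manager.handle(next_message()) {
        Response::Messages(msgs) => {
          for msg in msgs {
            send(msg);
          }
        }
        Response::Signature(signature) => {
          publish(signature);
          // Never re-register this ID after retiring it
          attempt_manager.retire(id);
          break;
        }
      }
    }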
--- .github/workflows/tests.yml | 1 + Cargo.lock | 13 + Cargo.toml | 1 + deny.toml | 1 + processor/LICENSE | 2 +- processor/frost-attempt-manager/Cargo.toml | 29 ++ processor/frost-attempt-manager/LICENSE | 15 ++ processor/frost-attempt-manager/README.md | 6 + .../frost-attempt-manager/src/individual.rs | 251 ++++++++++++++++++ processor/frost-attempt-manager/src/lib.rs | 92 +++++++ processor/key-gen/src/lib.rs | 8 +- processor/messages/src/lib.rs | 62 ++--- 12 files changed, 442 insertions(+), 39 deletions(-) create mode 100644 processor/frost-attempt-manager/Cargo.toml create mode 100644 processor/frost-attempt-manager/LICENSE create mode 100644 processor/frost-attempt-manager/README.md create mode 100644 processor/frost-attempt-manager/src/individual.rs create mode 100644 processor/frost-attempt-manager/src/lib.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index fabfaba9..5aa3d234 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -40,6 +40,7 @@ jobs: -p serai-message-queue \ -p serai-processor-messages \ -p serai-processor-key-gen \ + -p serai-processor-frost-attempt-manager \ -p serai-processor \ -p tendermint-machine \ -p tributary-chain \ diff --git a/Cargo.lock b/Cargo.lock index 62952da0..3de56915 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8599,6 +8599,19 @@ dependencies = [ "zeroize", ] +[[package]] +name = "serai-processor-frost-attempt-manager" +version = "0.1.0" +dependencies = [ + "hex", + "log", + "modular-frost", + "rand_core", + "serai-db", + "serai-processor-messages", + "serai-validator-sets-primitives", +] + [[package]] name = "serai-processor-key-gen" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index f0bdd6a8..ddfaf1f2 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -71,6 +71,7 @@ members = [ "processor/messages", "processor/key-gen", + "processor/frost-attempt-manager", "processor", "coordinator/tributary/tendermint", diff --git a/deny.toml b/deny.toml index 0d82cb8a..ea61fcc1 100644 --- a/deny.toml +++ b/deny.toml @@ -47,6 +47,7 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-processor-messages" }, { allow = ["AGPL-3.0"], name = "serai-processor-key-gen" }, + { allow = ["AGPL-3.0"], name = "serai-processor-frost-attempt-manager" }, { allow = ["AGPL-3.0"], name = "serai-processor" }, { allow = ["AGPL-3.0"], name = "tributary-chain" }, diff --git a/processor/LICENSE b/processor/LICENSE index c425427c..41d5a261 100644 --- a/processor/LICENSE +++ b/processor/LICENSE @@ -1,6 +1,6 @@ AGPL-3.0-only license -Copyright (c) 2022-2023 Luke Parker +Copyright (c) 2022-2024 Luke Parker This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License Version 3 as diff --git a/processor/frost-attempt-manager/Cargo.toml b/processor/frost-attempt-manager/Cargo.toml new file mode 100644 index 00000000..7a9abe01 --- /dev/null +++ b/processor/frost-attempt-manager/Cargo.toml @@ -0,0 +1,29 @@ +[package] +name = "serai-processor-frost-attempt-manager" +version = "0.1.0" +description = "A manager of multiple attempts of FROST signing protocols" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/frost-attempt-manager" +authors = ["Luke Parker "] +keywords = ["frost", "multisig", "threshold"] +edition = "2021" +rust-version = "1.79" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +rand_core = { version = "0.6", 
default-features = false, features = ["std", "getrandom"] } + +frost = { package = "modular-frost", path = "../../crypto/frost", version = "^0.8.1", default-features = false } + +serai-validator-sets-primitives = { path = "../../substrate/validator-sets/primitives", default-features = false, features = ["std"] } + +hex = { version = "0.4", default-features = false, features = ["std"] } +log = { version = "0.4", default-features = false, features = ["std"] } +serai-db = { path = "../../common/db" } +messages = { package = "serai-processor-messages", path = "../messages" } diff --git a/processor/frost-attempt-manager/LICENSE b/processor/frost-attempt-manager/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/processor/frost-attempt-manager/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/processor/frost-attempt-manager/README.md b/processor/frost-attempt-manager/README.md new file mode 100644 index 00000000..c7b0be25 --- /dev/null +++ b/processor/frost-attempt-manager/README.md @@ -0,0 +1,6 @@ +# FROST Attempt Manager + +A library for helper structures to manage various attempts of a FROST signing +protocol. + +This library is interacted with via the `serai-processor-messages::sign` API. diff --git a/processor/frost-attempt-manager/src/individual.rs b/processor/frost-attempt-manager/src/individual.rs new file mode 100644 index 00000000..f64ad453 --- /dev/null +++ b/processor/frost-attempt-manager/src/individual.rs @@ -0,0 +1,251 @@ +use std::collections::HashMap; + +use rand_core::OsRng; + +use frost::{ + Participant, FrostError, + sign::{Writable, PreprocessMachine, SignMachine, SignatureMachine}, +}; + +use serai_validator_sets_primitives::Session; + +use messages::sign::{SignId, ProcessorMessage}; + +/// An instance of a signing protocol with re-attempts handled internally. +#[allow(clippy::type_complexity)] +pub(crate) struct SigningProtocol { + // The session this signing protocol is being conducted by. + session: Session, + // The `i` of our first, or starting, set of key shares we will be signing with. + // The key shares we sign with are expected to be continguous from this position. + start_i: Participant, + // The ID of this signing protocol. + id: [u8; 32], + // This accepts a vector of `root` machines in order to support signing with multiple key shares. + root: Vec, + preprocessed: HashMap, HashMap>)>, + // Here, we drop to a single machine as we only need one to complete the signature. + shared: HashMap< + u32, + ( + >::SignatureMachine, + HashMap>, + ), + >, +} + +impl SigningProtocol { + /// Create a new signing protocol. + pub(crate) fn new(session: Session, start_i: Participant, id: [u8; 32], root: Vec) -> Self { + log::info!("starting signing protocol {}", hex::encode(id)); + + Self { + session, + start_i, + id, + root, + preprocessed: HashMap::with_capacity(1), + shared: HashMap::with_capacity(1), + } + } + + /// Start a new attempt of the signing protocol. 
+  ///
+  /// Returns the (serialized) preprocesses for the attempt.
+  pub(crate) fn attempt(&mut self, attempt: u32) -> Vec<ProcessorMessage> {
+    /*
+      We'd get slashed as malicious if we:
+      1) Preprocessed
+      2) Rebooted
+      3) On reboot, preprocessed again, sending new preprocesses which would be deduplicated by
+         the message-queue
+      4) Got sent preprocesses
+      5) Sent a share based on our new preprocesses, yet with everyone else expecting it to be
+         based on our old preprocesses
+
+      We avoid this by saving to the DB that we preprocessed before sending our preprocesses, and
+      only keeping our preprocesses for this instance of the processor. Accordingly, on reboot, we
+      will flag the prior preprocess and not send new preprocesses.
+
+      We also won't send the share we were supposed to, unfortunately, yet caching/reloading the
+      preprocess has enough safety issues it isn't worth the headache.
+    */
+    // TODO
+
+    log::debug!("attempting a new instance of signing protocol {}", hex::encode(self.id));
+
+    let mut our_preprocesses = HashMap::with_capacity(self.root.len());
+    let mut preprocessed = Vec::with_capacity(self.root.len());
+    let mut preprocesses = Vec::with_capacity(self.root.len());
+    for (i, machine) in self.root.iter().enumerate() {
+      let (machine, preprocess) = machine.clone().preprocess(&mut OsRng);
+      preprocessed.push(machine);
+
+      let mut this_preprocess = Vec::with_capacity(64);
+      preprocess.write(&mut this_preprocess).unwrap();
+
+      our_preprocesses.insert(
+        Participant::new(
+          u16::from(self.start_i) + u16::try_from(i).expect("signing with 2**16 machines"),
+        )
+        .expect("start_i + i exceeded the valid indexes for a Participant"),
+        this_preprocess.clone(),
+      );
+      preprocesses.push(this_preprocess);
+    }
+    assert!(self.preprocessed.insert(attempt, (preprocessed, our_preprocesses)).is_none());
+
+    vec![ProcessorMessage::Preprocesses {
+      id: SignId { session: self.session, id: self.id, attempt },
+      preprocesses,
+    }]
+  }
+
+  /// Handle preprocesses for the signing protocol.
+  ///
+  /// Returns the (serialized) shares for the attempt.
+ pub(crate) fn preprocesses( + &mut self, + attempt: u32, + serialized_preprocesses: HashMap>, + ) -> Vec { + log::debug!("handling preprocesses for signing protocol {}", hex::encode(self.id)); + + let Some((machines, our_serialized_preprocesses)) = self.preprocessed.remove(&attempt) else { + return vec![]; + }; + + let mut msgs = Vec::with_capacity(1); + + let mut preprocesses = + HashMap::with_capacity(serialized_preprocesses.len() + our_serialized_preprocesses.len()); + for (i, serialized_preprocess) in + serialized_preprocesses.into_iter().chain(our_serialized_preprocesses) + { + let mut serialized_preprocess = serialized_preprocess.as_slice(); + let Ok(preprocess) = machines[0].read_preprocess(&mut serialized_preprocess) else { + msgs.push(ProcessorMessage::InvalidParticipant { session: self.session, participant: i }); + continue; + }; + if !serialized_preprocess.is_empty() { + msgs.push(ProcessorMessage::InvalidParticipant { session: self.session, participant: i }); + continue; + } + preprocesses.insert(i, preprocess); + } + // We throw out our preprocessed machines here, despite the fact they haven't been invalidated + // We could reuse them with a new set of valid preprocesses + // https://github.com/serai-dex/serai/issues/588 + if !msgs.is_empty() { + return msgs; + } + + let mut our_shares = HashMap::with_capacity(self.root.len()); + let mut shared = Vec::with_capacity(machines.len()); + let mut shares = Vec::with_capacity(machines.len()); + for (i, machine) in machines.into_iter().enumerate() { + let i = Participant::new( + u16::from(self.start_i) + u16::try_from(i).expect("signing with 2**16 machines"), + ) + .expect("start_i + i exceeded the valid indexes for a Participant"); + + let mut preprocesses = preprocesses.clone(); + assert!(preprocesses.remove(&i).is_some()); + + // TODO: Replace this with `()`, which requires making the message type part of the trait + let (machine, share) = match machine.sign(preprocesses, &[]) { + Ok((machine, share)) => (machine, share), + Err(e) => match e { + FrostError::InternalError(_) | + FrostError::InvalidParticipant(_, _) | + FrostError::InvalidSigningSet(_) | + FrostError::InvalidParticipantQuantity(_, _) | + FrostError::DuplicatedParticipant(_) | + FrostError::MissingParticipant(_) | + FrostError::InvalidShare(_) => { + panic!("FROST had an error which shouldn't be reachable: {e:?}"); + } + FrostError::InvalidPreprocess(i) => { + msgs + .push(ProcessorMessage::InvalidParticipant { session: self.session, participant: i }); + return msgs; + } + }, + }; + shared.push(machine); + + let mut this_share = Vec::with_capacity(32); + share.write(&mut this_share).unwrap(); + + our_shares.insert(i, this_share.clone()); + shares.push(this_share); + } + + assert!(self.shared.insert(attempt, (shared.swap_remove(0), our_shares)).is_none()); + log::debug!( + "successfully handled preprocesses for signing protocol {}, sending shares", + hex::encode(self.id) + ); + msgs.push(ProcessorMessage::Shares { + id: SignId { session: self.session, id: self.id, attempt }, + shares, + }); + msgs + } + + /// Process shares for the signing protocol. + /// + /// Returns the signature produced by the protocol. + pub(crate) fn shares( + &mut self, + attempt: u32, + serialized_shares: HashMap>, + ) -> Result> { + log::debug!("handling shares for signing protocol {}", hex::encode(self.id)); + + let Some((machine, our_serialized_shares)) = self.shared.remove(&attempt) else { Err(vec![])? 
}; + + let mut msgs = Vec::with_capacity(1); + + let mut shares = HashMap::with_capacity(serialized_shares.len() + our_serialized_shares.len()); + for (i, serialized_share) in our_serialized_shares.into_iter().chain(serialized_shares) { + let mut serialized_share = serialized_share.as_slice(); + let Ok(share) = machine.read_share(&mut serialized_share) else { + msgs.push(ProcessorMessage::InvalidParticipant { session: self.session, participant: i }); + continue; + }; + if !serialized_share.is_empty() { + msgs.push(ProcessorMessage::InvalidParticipant { session: self.session, participant: i }); + continue; + } + shares.insert(i, share); + } + if !msgs.is_empty() { + Err(msgs)?; + } + + assert!(shares.remove(&self.start_i).is_some()); + + let signature = match machine.complete(shares) { + Ok(signature) => signature, + Err(e) => match e { + FrostError::InternalError(_) | + FrostError::InvalidParticipant(_, _) | + FrostError::InvalidSigningSet(_) | + FrostError::InvalidParticipantQuantity(_, _) | + FrostError::DuplicatedParticipant(_) | + FrostError::MissingParticipant(_) | + FrostError::InvalidPreprocess(_) => { + panic!("FROST had an error which shouldn't be reachable: {e:?}"); + } + FrostError::InvalidShare(i) => { + Err(vec![ProcessorMessage::InvalidParticipant { session: self.session, participant: i }])? + } + }, + }; + + log::info!("finished signing for protocol {}", hex::encode(self.id)); + + Ok(signature) + } +} diff --git a/processor/frost-attempt-manager/src/lib.rs b/processor/frost-attempt-manager/src/lib.rs new file mode 100644 index 00000000..e7e51d30 --- /dev/null +++ b/processor/frost-attempt-manager/src/lib.rs @@ -0,0 +1,92 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +use std::collections::HashMap; + +use frost::{Participant, sign::PreprocessMachine}; + +use serai_validator_sets_primitives::Session; + +use messages::sign::{ProcessorMessage, CoordinatorMessage}; + +mod individual; +use individual::SigningProtocol; + +/// A response to handling a message from the coordinator. +pub enum Response { + /// Messages to send to the coordinator. + Messages(Vec), + /// A produced signature. + Signature(M::Signature), +} + +/// A manager of attempts for a variety of signing protocols. +pub struct AttemptManager { + session: Session, + start_i: Participant, + active: HashMap<[u8; 32], SigningProtocol>, +} + +impl AttemptManager { + /// Create a new attempt manager. + pub fn new(session: Session, start_i: Participant) -> Self { + AttemptManager { session, start_i, active: HashMap::new() } + } + + /// Register a signing protocol to attempt. + pub fn register(&mut self, id: [u8; 32], machines: Vec) { + self.active.insert(id, SigningProtocol::new(self.session, self.start_i, id, machines)); + } + + /// Retire a signing protocol. + /// + /// This frees all memory used for it and means no further messages will be handled for it. + /// This does not stop the protocol from being re-registered and further worked on (with + /// undefined behavior) then. The higher-level context must never call `register` again with this + /// ID. + // TODO: Also have the DB for this SigningProtocol cleaned up here. + pub fn retire(&mut self, id: [u8; 32]) { + log::info!("retiring signing protocol {}", hex::encode(id)); + self.active.remove(&id); + } + + /// Handle a message for a signing protocol. 
+ pub fn handle(&mut self, msg: CoordinatorMessage) -> Response { + match msg { + CoordinatorMessage::Preprocesses { id, preprocesses } => { + let Some(protocol) = self.active.get_mut(&id.id) else { + log::trace!( + "handling preprocesses for signing protocol {}, which we're not actively running", + hex::encode(id.id) + ); + return Response::Messages(vec![]); + }; + Response::Messages(protocol.preprocesses(id.attempt, preprocesses)) + } + CoordinatorMessage::Shares { id, shares } => { + let Some(protocol) = self.active.get_mut(&id.id) else { + log::trace!( + "handling shares for signing protocol {}, which we're not actively running", + hex::encode(id.id) + ); + return Response::Messages(vec![]); + }; + match protocol.shares(id.attempt, shares) { + Ok(signature) => Response::Signature(signature), + Err(messages) => Response::Messages(messages), + } + } + CoordinatorMessage::Reattempt { id } => { + let Some(protocol) = self.active.get_mut(&id.id) else { + log::trace!( + "reattempting signing protocol {}, which we're not actively running", + hex::encode(id.id) + ); + return Response::Messages(vec![]); + }; + Response::Messages(protocol.attempt(id.attempt)) + } + } + } +} diff --git a/processor/key-gen/src/lib.rs b/processor/key-gen/src/lib.rs index 8d4e911f..3d8c3552 100644 --- a/processor/key-gen/src/lib.rs +++ b/processor/key-gen/src/lib.rs @@ -17,8 +17,6 @@ use ciphersuite::{ }; use dkg::{Participant, ThresholdKeys, evrf::*}; -use log::info; - use serai_validator_sets_primitives::Session; use messages::key_gen::*; @@ -184,7 +182,7 @@ impl KeyGen { match msg { CoordinatorMessage::GenerateKey { session, threshold, evrf_public_keys } => { - info!("Generating new key. Session: {session:?}"); + log::info!("Generating new key. Session: {session:?}"); // Unzip the vector of eVRF keys let substrate_evrf_public_keys = @@ -260,7 +258,7 @@ impl KeyGen { } CoordinatorMessage::Participation { session, participant, participation } => { - info!("received participation from {:?} for {:?}", participant, session); + log::info!("received participation from {:?} for {:?}", participant, session); let Params { t: threshold, n, substrate_evrf_public_keys, network_evrf_public_keys } = KeyGenDb::
<P>
::params(txn, session).unwrap(); @@ -295,7 +293,7 @@ impl KeyGen { // participations and continue. We solely have to verify them, as to identify malicious // participants and prevent DoSs, before returning if self.key_shares(session).is_some() { - info!("already finished generating a key for {:?}", session); + log::info!("already finished generating a key for {:?}", session); match EvrfDkg::::verify( &mut OsRng, diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index 98af97ce..096fddb9 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -22,7 +22,6 @@ pub mod key_gen { #[derive(Clone, PartialEq, Eq, BorshSerialize, BorshDeserialize)] pub enum CoordinatorMessage { // Instructs the Processor to begin the key generation process. - // TODO: Should this be moved under Substrate? GenerateKey { session: Session, threshold: u16, evrf_public_keys: Vec<([u8; 32], Vec)> }, // Received participations for the specified key generation protocol. Participation { session: Session, participant: Participant, participation: Vec }, @@ -93,6 +92,8 @@ pub mod sign { pub attempt: u32, } + // TODO: Make this generic to the ID once we introduce topics into the message-queue and remove + // the global ProcessorMessage/CoordinatorMessage #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum CoordinatorMessage { // Received preprocesses for the specified signing protocol. @@ -101,8 +102,10 @@ pub mod sign { Shares { id: SignId, shares: HashMap> }, // Re-attempt a signing protocol. Reattempt { id: SignId }, + /* TODO // Completed a signing protocol already. Completed { session: Session, id: [u8; 32], tx: Vec }, + */ } impl CoordinatorMessage { @@ -114,8 +117,8 @@ pub mod sign { match self { CoordinatorMessage::Preprocesses { id, .. } | CoordinatorMessage::Shares { id, .. } | - CoordinatorMessage::Reattempt { id } => id.session, - CoordinatorMessage::Completed { session, .. } => *session, + CoordinatorMessage::Reattempt { id, .. } => id.session, + // TODO CoordinatorMessage::Completed { session, .. } => *session, } } } @@ -123,13 +126,13 @@ pub mod sign { #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum ProcessorMessage { // Participant sent an invalid message during the sign protocol. - InvalidParticipant { id: SignId, participant: Participant }, - // Created preprocess for the specified signing protocol. - Preprocess { id: SignId, preprocesses: Vec> }, - // Signed share for the specified signing protocol. - Share { id: SignId, shares: Vec> }, + InvalidParticipant { session: Session, participant: Participant }, + // Created preprocesses for the specified signing protocol. + Preprocesses { id: SignId, preprocesses: Vec> }, + // Signed shares for the specified signing protocol. + Shares { id: SignId, shares: Vec> }, // Completed a signing protocol already. - Completed { session: Session, id: [u8; 32], tx: Vec }, + // TODO Completed { session: Session, id: [u8; 32], tx: Vec }, } } @@ -165,10 +168,6 @@ pub mod coordinator { pub enum CoordinatorMessage { CosignSubstrateBlock { id: SubstrateSignId, block_number: u64 }, SignSlashReport { id: SubstrateSignId, report: Vec<([u8; 32], u32)> }, - SubstratePreprocesses { id: SubstrateSignId, preprocesses: HashMap }, - SubstrateShares { id: SubstrateSignId, shares: HashMap }, - // Re-attempt a batch signing protocol. 
- BatchReattempt { id: SubstrateSignId }, } impl CoordinatorMessage { @@ -192,9 +191,9 @@ pub mod coordinator { SubstrateBlockAck { block: u64, plans: Vec }, InvalidParticipant { id: SubstrateSignId, participant: Participant }, CosignPreprocess { id: SubstrateSignId, preprocesses: Vec<[u8; 64]> }, + // TODO: Remove BatchPreprocess? Why does this take a BlockHash here and not in its + // SubstrateSignId? BatchPreprocess { id: SubstrateSignId, block: BlockHash, preprocesses: Vec<[u8; 64]> }, - SlashReportPreprocess { id: SubstrateSignId, preprocesses: Vec<[u8; 64]> }, - SubstrateShare { id: SubstrateSignId, shares: Vec<[u8; 32]> }, // TODO: Make these signatures [u8; 64]? CosignedBlock { block_number: u64, block: [u8; 32], signature: Vec }, SignedSlashReport { session: Session, signature: Vec }, @@ -327,19 +326,19 @@ impl CoordinatorMessage { } CoordinatorMessage::Sign(msg) => { let (sub, id) = match msg { - // Unique since SignId includes a hash of the network, and specific transaction info - sign::CoordinatorMessage::Preprocesses { id, .. } => (0, id.encode()), - sign::CoordinatorMessage::Shares { id, .. } => (1, id.encode()), - sign::CoordinatorMessage::Reattempt { id } => (2, id.encode()), + // Unique since SignId + sign::CoordinatorMessage::Preprocesses { id, .. } => (0, id), + sign::CoordinatorMessage::Shares { id, .. } => (1, id), + sign::CoordinatorMessage::Reattempt { id, .. } => (2, id), // The coordinator should report all reported completions to the processor // Accordingly, the intent is a combination of plan ID and actual TX // While transaction alone may suffice, that doesn't cover cross-chain TX ID conflicts, // which are possible - sign::CoordinatorMessage::Completed { id, tx, .. } => (3, (id, tx).encode()), + // TODO sign::CoordinatorMessage::Completed { id, tx, .. } => (3, (id, tx).encode()), }; let mut res = vec![COORDINATOR_UID, TYPE_SIGN_UID, sub]; - res.extend(&id); + res.extend(id.encode()); res } CoordinatorMessage::Coordinator(msg) => { @@ -349,10 +348,6 @@ impl CoordinatorMessage { // Unique since there's only one of these per session/attempt, and ID is inclusive to // both coordinator::CoordinatorMessage::SignSlashReport { id, .. } => (1, id.encode()), - // Unique since this embeds the batch ID (including its network) and attempt - coordinator::CoordinatorMessage::SubstratePreprocesses { id, .. } => (2, id.encode()), - coordinator::CoordinatorMessage::SubstrateShares { id, .. } => (3, id.encode()), - coordinator::CoordinatorMessage::BatchReattempt { id, .. } => (4, id.encode()), }; let mut res = vec![COORDINATOR_UID, TYPE_COORDINATOR_UID, sub]; @@ -404,12 +399,15 @@ impl ProcessorMessage { } ProcessorMessage::Sign(msg) => { let (sub, id) = match msg { + // Unique since we'll only fatally slash a participant once + sign::ProcessorMessage::InvalidParticipant { session, participant } => { + (0, (session, u16::from(*participant)).encode()) + } // Unique since SignId - sign::ProcessorMessage::InvalidParticipant { id, .. } => (0, id.encode()), - sign::ProcessorMessage::Preprocess { id, .. } => (1, id.encode()), - sign::ProcessorMessage::Share { id, .. } => (2, id.encode()), + sign::ProcessorMessage::Preprocesses { id, .. } => (1, id.encode()), + sign::ProcessorMessage::Shares { id, .. } => (2, id.encode()), // Unique since a processor will only sign a TX once - sign::ProcessorMessage::Completed { id, .. } => (3, id.to_vec()), + // TODO sign::ProcessorMessage::Completed { id, ..
} => (3, id.to_vec()), }; let mut res = vec![PROCESSOR_UID, TYPE_SIGN_UID, sub]; @@ -423,11 +421,9 @@ impl ProcessorMessage { coordinator::ProcessorMessage::InvalidParticipant { id, .. } => (1, id.encode()), coordinator::ProcessorMessage::CosignPreprocess { id, .. } => (2, id.encode()), coordinator::ProcessorMessage::BatchPreprocess { id, .. } => (3, id.encode()), - coordinator::ProcessorMessage::SlashReportPreprocess { id, .. } => (4, id.encode()), - coordinator::ProcessorMessage::SubstrateShare { id, .. } => (5, id.encode()), // Unique since only one instance of a signature matters - coordinator::ProcessorMessage::CosignedBlock { block, .. } => (6, block.encode()), - coordinator::ProcessorMessage::SignedSlashReport { .. } => (7, vec![]), + coordinator::ProcessorMessage::CosignedBlock { block, .. } => (4, block.encode()), + coordinator::ProcessorMessage::SignedSlashReport { .. } => (5, vec![]), }; let mut res = vec![PROCESSOR_UID, TYPE_COORDINATOR_UID, sub]; From 2f3bd7a02a07ddfcd488f91864f2388f40ee2312 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 19 Aug 2024 00:41:18 -0400 Subject: [PATCH 005/368] Cleanup DB handling a bit in key-gen/attempt-manager --- Cargo.lock | 2 + processor/frost-attempt-manager/Cargo.toml | 4 ++ .../frost-attempt-manager/src/individual.rs | 38 +++++++++++++++++-- processor/frost-attempt-manager/src/lib.rs | 31 ++++++++++----- processor/key-gen/src/db.rs | 21 +++++----- processor/key-gen/src/lib.rs | 8 ++-- 6 files changed, 77 insertions(+), 27 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 3de56915..f5e1151d 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8603,9 +8603,11 @@ dependencies = [ name = "serai-processor-frost-attempt-manager" version = "0.1.0" dependencies = [ + "borsh", "hex", "log", "modular-frost", + "parity-scale-codec", "rand_core", "serai-db", "serai-processor-messages", diff --git a/processor/frost-attempt-manager/Cargo.toml b/processor/frost-attempt-manager/Cargo.toml index 7a9abe01..01c1e4c5 100644 --- a/processor/frost-attempt-manager/Cargo.toml +++ b/processor/frost-attempt-manager/Cargo.toml @@ -25,5 +25,9 @@ serai-validator-sets-primitives = { path = "../../substrate/validator-sets/primi hex = { version = "0.4", default-features = false, features = ["std"] } log = { version = "0.4", default-features = false, features = ["std"] } + +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } serai-db = { path = "../../common/db" } + messages = { package = "serai-processor-messages", path = "../messages" } diff --git a/processor/frost-attempt-manager/src/individual.rs b/processor/frost-attempt-manager/src/individual.rs index f64ad453..d7f4eec0 100644 --- a/processor/frost-attempt-manager/src/individual.rs +++ b/processor/frost-attempt-manager/src/individual.rs @@ -9,11 +9,19 @@ use frost::{ use serai_validator_sets_primitives::Session; +use serai_db::{Get, DbTxn, Db, create_db}; use messages::sign::{SignId, ProcessorMessage}; +create_db!( + FrostAttemptManager { + Attempted: (id: [u8; 32]) -> u32, + } +); + /// An instance of a signing protocol with re-attempts handled internally. #[allow(clippy::type_complexity)] -pub(crate) struct SigningProtocol { +pub(crate) struct SigningProtocol { + db: D, // The session this signing protocol is being conducted by. session: Session, // The `i` of our first, or starting, set of key shares we will be signing with. 
@@ -34,12 +42,19 @@ pub(crate) struct SigningProtocol { >, } -impl SigningProtocol { +impl SigningProtocol { /// Create a new signing protocol. - pub(crate) fn new(session: Session, start_i: Participant, id: [u8; 32], root: Vec) -> Self { + pub(crate) fn new( + db: D, + session: Session, + start_i: Participant, + id: [u8; 32], + root: Vec, + ) -> Self { log::info!("starting signing protocol {}", hex::encode(id)); Self { + db, session, start_i, id, @@ -70,7 +85,15 @@ impl SigningProtocol { We also won't send the share we were supposed to, unfortunately, yet caching/reloading the preprocess has enough safety issues it isn't worth the headache. */ - // TODO + { + let mut txn = self.db.txn(); + let prior_attempted = Attempted::get(&txn, self.id); + if Some(attempt) <= prior_attempted { + return vec![]; + } + Attempted::set(&mut txn, self.id, &attempt); + txn.commit(); + } log::debug!("attempting a new instance of signing protocol {}", hex::encode(self.id)); @@ -248,4 +271,11 @@ impl SigningProtocol { Ok(signature) } + + /// Clean up the database entries for a specified signing protocol. + pub(crate) fn cleanup(db: &mut D, id: [u8; 32]) { + let mut txn = db.txn(); + Attempted::del(&mut txn, id); + txn.commit(); + } } diff --git a/processor/frost-attempt-manager/src/lib.rs b/processor/frost-attempt-manager/src/lib.rs index e7e51d30..cd8452fa 100644 --- a/processor/frost-attempt-manager/src/lib.rs +++ b/processor/frost-attempt-manager/src/lib.rs @@ -8,6 +8,7 @@ use frost::{Participant, sign::PreprocessMachine}; use serai_validator_sets_primitives::Session; +use serai_db::Db; use messages::sign::{ProcessorMessage, CoordinatorMessage}; mod individual; @@ -22,21 +23,28 @@ pub enum Response { } /// A manager of attempts for a variety of signing protocols. -pub struct AttemptManager { +pub struct AttemptManager { + db: D, session: Session, start_i: Participant, - active: HashMap<[u8; 32], SigningProtocol>, + active: HashMap<[u8; 32], SigningProtocol>, } -impl AttemptManager { +impl AttemptManager { /// Create a new attempt manager. - pub fn new(session: Session, start_i: Participant) -> Self { - AttemptManager { session, start_i, active: HashMap::new() } + pub fn new(db: D, session: Session, start_i: Participant) -> Self { + AttemptManager { db, session, start_i, active: HashMap::new() } } /// Register a signing protocol to attempt. - pub fn register(&mut self, id: [u8; 32], machines: Vec) { - self.active.insert(id, SigningProtocol::new(self.session, self.start_i, id, machines)); + /// + /// This ID must be unique across all sessions, attempt managers, protocols, etc. + pub fn register(&mut self, id: [u8; 32], machines: Vec) -> Vec { + let mut protocol = + SigningProtocol::new(self.db.clone(), self.session, self.start_i, id, machines); + let messages = protocol.attempt(0); + self.active.insert(id, protocol); + messages } /// Retire a signing protocol. @@ -45,10 +53,13 @@ impl AttemptManager { /// This does not stop the protocol from being re-registered and further worked on (with /// undefined behavior) afterwards. The higher-level context must never call `register` again with this /// ID. - // TODO: Also have the DB for this SigningProtocol cleaned up here.
pub fn retire(&mut self, id: [u8; 32]) { - log::info!("retiring signing protocol {}", hex::encode(id)); - self.active.remove(&id); + if self.active.remove(&id).is_none() { + log::info!("retiring protocol {}, which we didn't register/already retired", hex::encode(id)); + } else { + log::info!("retired signing protocol {}", hex::encode(id)); + } + SigningProtocol::::cleanup(&mut self.db, id); } /// Handle a message for a signing protocol. diff --git a/processor/key-gen/src/db.rs b/processor/key-gen/src/db.rs index d597cb7e..e82b84a5 100644 --- a/processor/key-gen/src/db.rs +++ b/processor/key-gen/src/db.rs @@ -36,10 +36,10 @@ pub(crate) struct Participations { } create_db!( - KeyGenDb { - ParamsDb: (session: &Session) -> RawParams, - ParticipationsDb: (session: &Session) -> Participations, - KeySharesDb: (session: &Session) -> Vec, + KeyGen { + Params: (session: &Session) -> RawParams, + Participations: (session: &Session) -> Participations, + KeyShares: (session: &Session) -> Vec, } ); @@ -48,7 +48,7 @@ impl KeyGenDb

{ pub(crate) fn set_params(txn: &mut impl DbTxn, session: Session, params: Params

) { assert_eq!(params.substrate_evrf_public_keys.len(), params.network_evrf_public_keys.len()); - ParamsDb::set( + Params::set( txn, &session, &RawParams { @@ -68,7 +68,7 @@ impl KeyGenDb

{ } pub(crate) fn params(getter: &impl Get, session: Session) -> Option> { - ParamsDb::get(getter, &session).map(|params| Params { + Params::get(getter, &session).map(|params| Params { t: params.t, n: params .network_evrf_public_keys @@ -101,12 +101,13 @@ impl KeyGenDb

{ session: Session, participations: &Participations, ) { - ParticipationsDb::set(txn, &session, participations) + Participations::set(txn, &session, participations) } pub(crate) fn participations(getter: &impl Get, session: Session) -> Option { - ParticipationsDb::get(getter, &session) + Participations::get(getter, &session) } + // Set the key shares for a session. pub(crate) fn set_key_shares( txn: &mut impl DbTxn, session: Session, @@ -120,7 +121,7 @@ impl KeyGenDb

{ keys.extend(substrate_keys.serialize().as_slice()); keys.extend(network_keys.serialize().as_slice()); } - KeySharesDb::set(txn, &session, &keys); + KeyShares::set(txn, &session, &keys); } #[allow(clippy::type_complexity)] @@ -128,7 +129,7 @@ impl KeyGenDb

{ getter: &impl Get, session: Session, ) -> Option<(Vec>, Vec>)> { - let keys = KeySharesDb::get(getter, &session)?; + let keys = KeyShares::get(getter, &session)?; let mut keys: &[u8] = keys.as_ref(); let mut substrate_keys = vec![]; diff --git a/processor/key-gen/src/lib.rs b/processor/key-gen/src/lib.rs index 3d8c3552..60753412 100644 --- a/processor/key-gen/src/lib.rs +++ b/processor/key-gen/src/lib.rs @@ -182,7 +182,7 @@ impl KeyGen { match msg { CoordinatorMessage::GenerateKey { session, threshold, evrf_public_keys } => { - log::info!("Generating new key. Session: {session:?}"); + log::info!("generating new key, session: {session:?}"); // Unzip the vector of eVRF keys let substrate_evrf_public_keys = @@ -258,7 +258,7 @@ impl KeyGen { } CoordinatorMessage::Participation { session, participant, participation } => { - log::info!("received participation from {:?} for {:?}", participant, session); + log::debug!("received participation from {:?} for {:?}", participant, session); let Params { t: threshold, n, substrate_evrf_public_keys, network_evrf_public_keys } = KeyGenDb::

::params(txn, session).unwrap(); @@ -293,7 +293,7 @@ impl KeyGen { // participations and continue. We solely have to verify them, as to identify malicious // participants and prevent DoSs, before returning if self.key_shares(session).is_some() { - log::info!("already finished generating a key for {:?}", session); + log::debug!("already finished generating a key for {:?}", session); match EvrfDkg::::verify( &mut OsRng, @@ -511,6 +511,8 @@ impl KeyGen { } KeyGenDb::

::set_key_shares(txn, session, &substrate_keys, &network_keys); + log::info!("generated key, session: {session:?}"); + // Since no one we verified was invalid, and we had the threshold, yield the new keys vec![ProcessorMessage::GeneratedKeyPair { session, From e843b4a2a058942271e6799676aa1be2ce1e6205 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 19 Aug 2024 00:42:38 -0400 Subject: [PATCH 006/368] Move scanner.rs to scanner/lib.rs --- .github/workflows/tests.yml | 1 + Cargo.toml | 1 + processor/scanner/Cargo.toml | 33 +++++++++++++++++++ processor/scanner/LICENSE | 15 +++++++++ processor/scanner/README.md | 12 +++++++ .../scanner.rs => scanner/src/lib.rs} | 0 6 files changed, 62 insertions(+) create mode 100644 processor/scanner/Cargo.toml create mode 100644 processor/scanner/LICENSE create mode 100644 processor/scanner/README.md rename processor/{src/multisigs/scanner.rs => scanner/src/lib.rs} (100%) diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 5aa3d234..385d54c4 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -41,6 +41,7 @@ jobs: -p serai-processor-messages \ -p serai-processor-key-gen \ -p serai-processor-frost-attempt-manager \ + -p serai-processor-scanner \ -p serai-processor \ -p tendermint-machine \ -p tributary-chain \ diff --git a/Cargo.toml b/Cargo.toml index ddfaf1f2..8d6d9416 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -72,6 +72,7 @@ members = [ "processor/messages", "processor/key-gen", "processor/frost-attempt-manager", + "processor/scanner", "processor", "coordinator/tributary/tendermint", diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml new file mode 100644 index 00000000..f3b5ad37 --- /dev/null +++ b/processor/scanner/Cargo.toml @@ -0,0 +1,33 @@ +[package] +name = "serai-processor-scanner" +version = "0.1.0" +description = "Scanner of abstract blockchains for Serai" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/scanner" +authors = ["Luke Parker "] +keywords = ["frost", "multisig", "threshold"] +edition = "2021" +rust-version = "1.79" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +rand_core = { version = "0.6", default-features = false, features = ["std", "getrandom"] } + +frost = { package = "modular-frost", path = "../../crypto/frost", version = "^0.8.1", default-features = false } + +serai-validator-sets-primitives = { path = "../../substrate/validator-sets/primitives", default-features = false, features = ["std"] } + +hex = { version = "0.4", default-features = false, features = ["std"] } +log = { version = "0.4", default-features = false, features = ["std"] } + +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } +serai-db = { path = "../../common/db" } + +messages = { package = "serai-processor-messages", path = "../messages" } diff --git a/processor/scanner/LICENSE b/processor/scanner/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/processor/scanner/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. 
+ +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/processor/scanner/README.md b/processor/scanner/README.md new file mode 100644 index 00000000..f6c6ccc6 --- /dev/null +++ b/processor/scanner/README.md @@ -0,0 +1,12 @@ +# Scanner + +A scanner of arbitrary blockchains for Serai. + +This scanner has two distinct roles: + +1) Scanning blocks for received outputs contained within them +2) Scanning blocks for the completion of eventualities + +While these can be optimized into a single structure, they are written as two +distinct structures (with the associated overhead) for clarity and simplicity +reasons. diff --git a/processor/src/multisigs/scanner.rs b/processor/scanner/src/lib.rs similarity index 100% rename from processor/src/multisigs/scanner.rs rename to processor/scanner/src/lib.rs From 57a0ba966bcb967def941aaaa1202fed1f53ebe9 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 19 Aug 2024 17:33:57 -0400 Subject: [PATCH 007/368] Extend serai-db with support for generic keys/values --- common/db/src/create_db.rs | 34 ++++++++++++++++++++++++++-------- 1 file changed, 26 insertions(+), 8 deletions(-) diff --git a/common/db/src/create_db.rs b/common/db/src/create_db.rs index abd86e46..7be1e1c8 100644 --- a/common/db/src/create_db.rs +++ b/common/db/src/create_db.rs @@ -38,13 +38,18 @@ pub fn serai_db_key( #[macro_export] macro_rules! create_db { ($db_name: ident { - $($field_name: ident: ($($arg: ident: $arg_type: ty),*) -> $field_type: ty$(,)?)* + $( + $field_name: ident: + $(<$($generic_name: tt: $generic_type: tt),+>)?( + $($arg: ident: $arg_type: ty),* + ) -> $field_type: ty$(,)? + )* }) => { $( #[derive(Clone, Debug)] pub(crate) struct $field_name; impl $field_name { - pub(crate) fn key($($arg: $arg_type),*) -> Vec { + pub(crate) fn key$(<$($generic_name: $generic_type),+>)?($($arg: $arg_type),*) -> Vec { use scale::Encode; $crate::serai_db_key( stringify!($db_name).as_bytes(), @@ -52,18 +57,31 @@ macro_rules! 
create_db { ($($arg),*).encode() ) } - pub(crate) fn set(txn: &mut impl DbTxn $(, $arg: $arg_type)*, data: &$field_type) { - let key = $field_name::key($($arg),*); + pub(crate) fn set$(<$($generic_name: $generic_type),+>)?( + txn: &mut impl DbTxn + $(, $arg: $arg_type)*, + data: &$field_type + ) { + let key = $field_name::key$(::<$($generic_name),+>)?($($arg),*); txn.put(&key, borsh::to_vec(data).unwrap()); } - pub(crate) fn get(getter: &impl Get, $($arg: $arg_type),*) -> Option<$field_type> { - getter.get($field_name::key($($arg),*)).map(|data| { + pub(crate) fn get$(<$($generic_name: $generic_type),+>)?( + getter: &impl Get, + $($arg: $arg_type),* + ) -> Option<$field_type> { + getter.get($field_name::key$(::<$($generic_name),+>)?($($arg),*)).map(|data| { borsh::from_slice(data.as_ref()).unwrap() }) } + // Returns a PhantomData of all generic types so if the generic was only used in the value, + // not the keys, this doesn't have unused generic types #[allow(dead_code)] - pub(crate) fn del(txn: &mut impl DbTxn $(, $arg: $arg_type)*) { - txn.del(&$field_name::key($($arg),*)) + pub(crate) fn del$(<$($generic_name: $generic_type),+>)?( + txn: &mut impl DbTxn + $(, $arg: $arg_type)* + ) -> core::marker::PhantomData<($($($generic_name),+)?)> { + txn.del(&$field_name::key$(::<$($generic_name),+>)?($($arg),*)); + core::marker::PhantomData } } )* From 8763ef23ed6d42dd99a46d5f452b8748b7cfc68c Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 20 Aug 2024 11:57:56 -0400 Subject: [PATCH 008/368] Definition and delineation of tasks within the scanner Also defines primitives for the processor. --- .github/workflows/tests.yml | 1 + Cargo.lock | 28 ++++ Cargo.toml | 2 + processor/frost-attempt-manager/Cargo.toml | 3 + processor/primitives/Cargo.toml | 27 +++ processor/primitives/LICENSE | 15 ++ processor/primitives/README.md | 3 + processor/primitives/src/lib.rs | 167 +++++++++++++++++++ processor/scanner/Cargo.toml | 20 ++- processor/scanner/src/db.rs | 162 ++++++++++++++++++ processor/scanner/src/eventuality.rs | 0 processor/scanner/src/index.rs | 72 ++++++++ processor/scanner/src/lib.rs | 183 ++++++++++----------- processor/scanner/src/scan.rs | 73 ++++++++ processor/src/multisigs/mod.rs | 2 + 15 files changed, 653 insertions(+), 105 deletions(-) create mode 100644 processor/primitives/Cargo.toml create mode 100644 processor/primitives/LICENSE create mode 100644 processor/primitives/README.md create mode 100644 processor/primitives/src/lib.rs create mode 100644 processor/scanner/src/db.rs create mode 100644 processor/scanner/src/eventuality.rs create mode 100644 processor/scanner/src/index.rs create mode 100644 processor/scanner/src/scan.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 385d54c4..5032676f 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -41,6 +41,7 @@ jobs: -p serai-processor-messages \ -p serai-processor-key-gen \ -p serai-processor-frost-attempt-manager \ + -p serai-processor-primitives \ -p serai-processor-scanner \ -p serai-processor \ -p tendermint-machine \ diff --git a/Cargo.lock b/Cargo.lock index f5e1151d..230ed22f 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8647,6 +8647,34 @@ dependencies = [ "serai-validator-sets-primitives", ] +[[package]] +name = "serai-processor-primitives" +version = "0.1.0" +dependencies = [ + "async-trait", + "borsh", + "group", + "parity-scale-codec", + "serai-primitives", +] + +[[package]] +name = "serai-processor-scanner" +version = "0.1.0" +dependencies = [ + "async-trait", + 
"borsh", + "group", + "hex", + "log", + "parity-scale-codec", + "serai-db", + "serai-processor-messages", + "serai-processor-primitives", + "thiserror", + "tokio", +] + [[package]] name = "serai-processor-tests" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index 8d6d9416..7ad08a51 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -72,6 +72,8 @@ members = [ "processor/messages", "processor/key-gen", "processor/frost-attempt-manager", + + "processor/primitives", "processor/scanner", "processor", diff --git a/processor/frost-attempt-manager/Cargo.toml b/processor/frost-attempt-manager/Cargo.toml index 01c1e4c5..a01acf0f 100644 --- a/processor/frost-attempt-manager/Cargo.toml +++ b/processor/frost-attempt-manager/Cargo.toml @@ -13,6 +13,9 @@ rust-version = "1.79" all-features = true rustdoc-args = ["--cfg", "docsrs"] +[package.metadata.cargo-machete] +ignored = ["borsh", "scale"] + [lints] workspace = true diff --git a/processor/primitives/Cargo.toml b/processor/primitives/Cargo.toml new file mode 100644 index 00000000..dd59c0a8 --- /dev/null +++ b/processor/primitives/Cargo.toml @@ -0,0 +1,27 @@ +[package] +name = "serai-processor-primitives" +version = "0.1.0" +description = "Primitives for the Serai processor" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/primitives" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +async-trait = { version = "0.1", default-features = false } + +group = { version = "0.13", default-features = false } + +serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } + +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } diff --git a/processor/primitives/LICENSE b/processor/primitives/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/processor/primitives/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/processor/primitives/README.md b/processor/primitives/README.md new file mode 100644 index 00000000..d616993c --- /dev/null +++ b/processor/primitives/README.md @@ -0,0 +1,3 @@ +# Primitives + +Primitive types/traits/structs used by the Processor. 
diff --git a/processor/primitives/src/lib.rs b/processor/primitives/src/lib.rs new file mode 100644 index 00000000..535dd14f --- /dev/null +++ b/processor/primitives/src/lib.rs @@ -0,0 +1,167 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +use core::fmt::Debug; +use std::io; + +use group::GroupEncoding; + +use serai_primitives::Balance; + +use scale::{Encode, Decode}; +use borsh::{BorshSerialize, BorshDeserialize}; + +/// An ID for an output/transaction/block/etc. +/// +/// IDs don't need to implement `Copy`, enabling `[u8; 33]`, `[u8; 64]` to be used. IDs are still +/// bound to being of a constant-size, where `Default::default()` returns an instance of such size +/// (making `Vec` invalid as an `Id`). +pub trait Id: + Send + + Sync + + Clone + + Default + + PartialEq + + AsRef<[u8]> + + AsMut<[u8]> + + Debug + + Encode + + Decode + + BorshSerialize + + BorshDeserialize +{ +} +impl Id for [u8; N] where [u8; N]: Default {} + +/// The type of the output. +#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] +pub enum OutputType { + /// An output received to the address external payments use. + /// + /// This is reported to Substrate in a `Batch`. + External, + + /// A branch output. + /// + /// Given a known output set, and a known series of outbound transactions, we should be able to + /// form a completely deterministic schedule S. The issue is when S has TXs which spend prior TXs + /// in S (which is needed for our logarithmic scheduling). In order to have the descendant TX, + /// say S[1], build off S[0], we need to observe when S[0] is included on-chain. + /// + /// We cannot. + /// + /// Monero (and other privacy coins) do not expose their UTXO graphs. Even if we know how to + /// create S[0], and the actual payment info behind it, we cannot observe it on the blockchain + /// unless we participated in creating it. Locking the entire schedule, when we cannot sign for + /// the entire schedule at once, to a single signing set isn't feasible. + /// + /// While any member of the active signing set can provide data enabling other signers to + /// participate, it's several KB of data which we then have to code communication for. + /// The other option is to simply not observe S[0]. Instead, observe a TX with an identical + /// output to the one in S[0] we intended to use for S[1]. It's either from S[0], or Eve, a + /// malicious actor, has sent us a forged TX which is... equally as usable? So who cares? + /// + /// The only issue is if we have multiple outputs on-chain with identical amounts and purposes. + /// Accordingly, when the scheduler makes a plan for when a specific output is available, it + /// shouldn't set that plan. It should *push* that plan to a queue of plans to perform when + /// instances of that output occur. + Branch, + + /// A change output. + /// + /// This should be added to the available UTXO pool with no further action taken. It does not + /// need to be reported (though we do still need synchrony on the block it's in). There's no + /// explicit expectation for the usage of this output at time of recipience. + Change, + + /// A forwarded output from the prior multisig. + /// + /// This is distinguished for technical reasons around detecting when a multisig should be + /// retired. 
+ Forwarded, +} + +impl OutputType { + fn write(&self, writer: &mut W) -> io::Result<()> { + writer.write_all(&[match self { + OutputType::External => 0, + OutputType::Branch => 1, + OutputType::Change => 2, + OutputType::Forwarded => 3, + }]) + } + + fn read(reader: &mut R) -> io::Result { + let mut byte = [0; 1]; + reader.read_exact(&mut byte)?; + Ok(match byte[0] { + 0 => OutputType::External, + 1 => OutputType::Branch, + 2 => OutputType::Change, + 3 => OutputType::Forwarded, + _ => Err(io::Error::other("invalid OutputType"))?, + }) + } +} + +/// A received output. +pub trait ReceivedOutput: + Send + Sync + Sized + Clone + PartialEq + Eq + Debug +{ + /// The type used to identify this output. + type Id: 'static + Id; + + /// The type of this output. + fn kind(&self) -> OutputType; + + /// The ID of this output. + fn id(&self) -> Self::Id; + /// The key this output was received by. + fn key(&self) -> K; + + /// The presumed origin for this output. + /// + /// This is used as the address to refund coins to if we can't handle the output as desired + /// (unless overridden). + fn presumed_origin(&self) -> Option; + + /// The balance associated with this output. + fn balance(&self) -> Balance; + /// The arbitrary data (presumably an InInstruction) associated with this output. + fn data(&self) -> &[u8]; + + /// Write this output. + fn write(&self, writer: &mut W) -> io::Result<()>; + /// Read an output. + fn read(reader: &mut R) -> io::Result; +} + +/// A block from an external network. +#[async_trait::async_trait] +pub trait Block: Send + Sync + Sized + Clone + Debug { + /// The type used to identify blocks. + type Id: 'static + Id; + /// The ID of this block. + fn id(&self) -> Self::Id; + /// The ID of the parent block. + fn parent(&self) -> Self::Id; +} + +/// A wrapper for a group element which implements the borsh traits. 
+#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub struct BorshG<G: GroupEncoding>(pub G); +impl<G: GroupEncoding> BorshSerialize for BorshG<G> { + fn serialize<W: borsh::io::Write>(&self, writer: &mut W) -> borsh::io::Result<()> { + writer.write_all(self.0.to_bytes().as_ref()) + } +} +impl<G: GroupEncoding> BorshDeserialize for BorshG<G> { + fn deserialize_reader<R: borsh::io::Read>(reader: &mut R) -> borsh::io::Result<Self> { + let mut repr = G::Repr::default(); + reader.read_exact(repr.as_mut())?; + Ok(Self( + Option::<G>::from(G::from_bytes(&repr)).ok_or(borsh::io::Error::other("invalid point"))?, + )) + } +} diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index f3b5ad37..670581d9 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -17,17 +17,23 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -rand_core = { version = "0.6", default-features = false, features = ["std", "getrandom"] } - -frost = { package = "modular-frost", path = "../../crypto/frost", version = "^0.8.1", default-features = false } - -serai-validator-sets-primitives = { path = "../../substrate/validator-sets/primitives", default-features = false, features = ["std"] } +# Macros +async-trait = { version = "0.1", default-features = false } +thiserror = { version = "1", default-features = false } +# Encoders hex = { version = "0.4", default-features = false, features = ["std"] } -log = { version = "0.4", default-features = false, features = ["std"] } - scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +# Cryptography +group = { version = "0.13", default-features = false } + +# Application +log = { version = "0.4", default-features = false, features = ["std"] } +tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } + serai-db = { path = "../../common/db" } messages = { package = "serai-processor-messages", path = "../messages" } +primitives = { package = "serai-processor-primitives", path = "../primitives" } diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs new file mode 100644 index 00000000..8bd7d944 --- /dev/null +++ b/processor/scanner/src/db.rs @@ -0,0 +1,162 @@ +use core::marker::PhantomData; + +use group::GroupEncoding; + +use borsh::{BorshSerialize, BorshDeserialize}; +use serai_db::{Get, DbTxn, create_db}; + +use primitives::{Id, Block, BorshG}; + +use crate::ScannerFeed; + +// The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. +trait Borshy: BorshSerialize + BorshDeserialize {} +impl<T: BorshSerialize + BorshDeserialize> Borshy for T {} + +#[derive(BorshSerialize, BorshDeserialize)] +struct SeraiKey<K: Borshy> { + activation_block_number: u64, + retirement_block_number: Option<u64>, + key: K, +} + +create_db!( + Scanner { + BlockId: <I: Borshy>(number: u64) -> I, + BlockNumber: <I: Borshy>(id: I) -> u64, + + ActiveKeys: <K: Borshy>() -> Vec<SeraiKey<K>>, + + // The latest finalized block to appear on the blockchain + LatestFinalizedBlock: () -> u64, + // The latest block which it's safe to scan (dependent on what Serai has acknowledged scanning) + LatestScannableBlock: () -> u64, + // The next block to scan for received outputs + NextToScanForOutputsBlock: () -> u64, + // The next block to check for resolving eventualities + NextToCheckForEventualitiesBlock: () -> u64, + + // If a block was notable + /* + A block is notable if one of three conditions is met: + + 1) We activated a key within this block. + 2) We retired a key within this block.
+ 3) We received outputs within this block. + + The first two conditions, and the reasoning for them, are extensively documented in + `spec/processor/Multisig Rotation.md`. The third is obvious (as any block we receive outputs + in needs synchrony so that we can spend the received outputs). + + Whether a block is notable is saved here by either the scan for received outputs task or the + check for eventuality completion task. Once a block has been processed by both, the reporting + task will report any notable blocks. Finally, the task which sets the latest block safe to scan + makes its decision based on the notable blocks and the acknowledged blocks. + */ + // This collapses from `bool` to `()`, with the value being set for true and unset for false + NotableBlock: (number: u64) -> (), + } +); + +pub(crate) struct ScannerDb<S: ScannerFeed>(PhantomData<S>); +impl<S: ScannerFeed> ScannerDb<S> { + pub(crate) fn set_block(txn: &mut impl DbTxn, number: u64, id: <S::Block as Block>::Id) { + BlockId::set(txn, number, &id); + BlockNumber::set(txn, id, &number); + } + pub(crate) fn block_id(getter: &impl Get, number: u64) -> Option<<S::Block as Block>::Id> { + BlockId::get(getter, number) + } + pub(crate) fn block_number(getter: &impl Get, id: <S::Block as Block>::Id) -> Option<u64> { + BlockNumber::get(getter, id) + } + + // activation_block_number is inclusive, so the key will be scanned for starting at the specified + // block + pub(crate) fn queue_key(txn: &mut impl DbTxn, activation_block_number: u64, key: S::Key) { + let mut keys: Vec<SeraiKey<BorshG<S::Key>>> = ActiveKeys::get(txn).unwrap_or(vec![]); + for key_i in &keys { + if key == key_i.key.0 { + panic!("queueing a key which was already queued"); + } + } + keys.push(SeraiKey { + activation_block_number, + retirement_block_number: None, + key: BorshG(key), + }); + ActiveKeys::set(txn, &keys); + } + // retirement_block_number is inclusive, so the key will no longer be scanned for as of the + // specified block + pub(crate) fn retire_key(txn: &mut impl DbTxn, retirement_block_number: u64, key: S::Key) { + let mut keys: Vec<SeraiKey<BorshG<S::Key>>> = + ActiveKeys::get(txn).expect("retiring key yet no active keys"); + + assert!(keys.len() > 1, "retiring our only key"); + for i in 0 ..
keys.len() { + if key == keys[i].key.0 { + keys[i].retirement_block_number = Some(retirement_block_number); + ActiveKeys::set(txn, &keys); + return; + } + + // This is not the key in question, but since it's older, it already should've been queued + // for retirement + assert!( + keys[i].retirement_block_number.is_some(), + "older key wasn't retired before newer key" + ); + } + panic!("retiring key yet not present in keys") + } + pub(crate) fn keys(getter: &impl Get) -> Option<Vec<SeraiKey<BorshG<S::Key>>>> { + ActiveKeys::get(getter) + } + + pub(crate) fn set_start_block( + txn: &mut impl DbTxn, + start_block: u64, + id: <S::Block as Block>::Id, + ) { + Self::set_block(txn, start_block, id); + LatestFinalizedBlock::set(txn, &start_block); + LatestScannableBlock::set(txn, &start_block); + NextToScanForOutputsBlock::set(txn, &start_block); + NextToCheckForEventualitiesBlock::set(txn, &start_block); + } + + pub(crate) fn set_latest_finalized_block(txn: &mut impl DbTxn, latest_finalized_block: u64) { + LatestFinalizedBlock::set(txn, &latest_finalized_block); + } + pub(crate) fn latest_finalized_block(getter: &impl Get) -> Option<u64> { + LatestFinalizedBlock::get(getter) + } + + pub(crate) fn set_latest_scannable_block(txn: &mut impl DbTxn, latest_scannable_block: u64) { + LatestScannableBlock::set(txn, &latest_scannable_block); + } + pub(crate) fn latest_scannable_block(getter: &impl Get) -> Option<u64> { + LatestScannableBlock::get(getter) + } + + pub(crate) fn set_next_to_scan_for_outputs_block( + txn: &mut impl DbTxn, + next_to_scan_for_outputs_block: u64, + ) { + NextToScanForOutputsBlock::set(txn, &next_to_scan_for_outputs_block); + } + pub(crate) fn next_to_scan_for_outputs_block(getter: &impl Get) -> Option<u64> { + NextToScanForOutputsBlock::get(getter) + } + + pub(crate) fn set_next_to_check_for_eventualities_block( + txn: &mut impl DbTxn, + next_to_check_for_eventualities_block: u64, + ) { + NextToCheckForEventualitiesBlock::set(txn, &next_to_check_for_eventualities_block); + } + pub(crate) fn next_to_check_for_eventualities_block(getter: &impl Get) -> Option<u64> { + NextToCheckForEventualitiesBlock::get(getter) + } +} diff --git a/processor/scanner/src/eventuality.rs b/processor/scanner/src/eventuality.rs new file mode 100644 index 00000000..e69de29b diff --git a/processor/scanner/src/index.rs b/processor/scanner/src/index.rs new file mode 100644 index 00000000..66477cdb --- /dev/null +++ b/processor/scanner/src/index.rs @@ -0,0 +1,72 @@ +use serai_db::{Db, DbTxn}; + +use primitives::{Id, Block}; + +// TODO: Localize to IndexDb? +use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; + +/* + This processor should build its own index of the blockchain, yet only for finalized blocks which + are safe to process. For Proof of Work blockchains, which only have probabilistic finality, these + are the set of sufficiently confirmed blocks. For blockchains with finality, these are the + finalized blocks. + + This task finds the finalized blocks, verifies they're contiguous, and saves their IDs.
+*/ +struct IndexFinalizedTask { + db: D, + feed: S, +} + +#[async_trait::async_trait] +impl ContinuallyRan for IndexFinalizedTask { + async fn run_instance(&mut self) -> Result<(), String> { + // Fetch the latest finalized block + let our_latest_finalized = ScannerDb::::latest_finalized_block(&self.db) + .expect("IndexTask run before writing the start block"); + let latest_finalized = match self.feed.latest_finalized_block_number().await { + Ok(latest_finalized) => latest_finalized, + Err(e) => Err(format!("couldn't fetch the latest finalized block number: {e:?}"))?, + }; + + // Index the hashes of all blocks until the latest finalized block + for b in (our_latest_finalized + 1) ..= latest_finalized { + let block = match self.feed.block_by_number(b).await { + Ok(block) => block, + Err(e) => Err(format!("couldn't fetch block {b}: {e:?}"))?, + }; + + // Check this descends from our indexed chain + { + let expected_parent = + ScannerDb::::block_id(&self.db, b - 1).expect("didn't have the ID of the prior block"); + if block.parent() != expected_parent { + panic!( + "current finalized block (#{b}, {}) doesn't build off finalized block (#{}, {})", + hex::encode(block.parent()), + b - 1, + hex::encode(expected_parent) + ); + } + } + + // Update the latest finalized block + let mut txn = self.db.txn(); + ScannerDb::::set_block(&mut txn, b, block.id()); + ScannerDb::::set_latest_finalized_block(&mut txn, b); + txn.commit(); + } + + Ok(()) + } +} + +/* + The processor can't index the blockchain unilaterally. It needs to develop a totally ordered view + of the blockchain. That requires consensus with other validators on when certain keys are set to + activate (and retire). We solve this by only scanning `n` blocks ahead of the last agreed upon + block, then waiting for Serai to acknowledge the block. This lets us safely schedule events after + this `n` block window (as demonstrated/proven with `mini`). + + TODO +*/ diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 1b25e108..736a62b9 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -1,25 +1,91 @@ -use core::marker::PhantomData; -use std::{ - sync::Arc, - io::Read, - time::Duration, - collections::{VecDeque, HashSet, HashMap}, -}; +use core::fmt::Debug; -use ciphersuite::group::GroupEncoding; -use frost::curve::Ciphersuite; +use primitives::{ReceivedOutput, Block}; -use log::{info, debug, warn}; -use tokio::{ - sync::{RwLockReadGuard, RwLockWriteGuard, RwLock, mpsc}, - time::sleep, -}; +mod db; +mod index; -use crate::{ - Get, DbTxn, Db, - networks::{Output, Transaction, Eventuality, EventualitiesTracker, Block, Network}, -}; +/// A feed usable to scan a blockchain. +/// +/// This defines the primitive types used, along with various getters necessary for indexing. +#[async_trait::async_trait] +pub trait ScannerFeed: Send + Sync { + /// The type of the key used to receive coins on this blockchain. + type Key: group::Group + group::GroupEncoding; + /// The type of the address used to specify who to send coins to on this blockchain. + type Address; + + /// The type representing a received (and spendable) output. + type Output: ReceivedOutput; + + /// The representation of a block for this blockchain. + /// + /// A block is defined as a consensus event associated with a set of transactions. It is not + /// necessary to literally define it as whatever the external network defines as a block. 
For + /// external networks which finalize block(s), this block type should be a representation of all + /// transactions within a finalization event. + type Block: Block; + + /// An error encountered when fetching data from the blockchain. + /// + /// This MUST be an ephemeral error. Retrying fetching data from the blockchain MUST eventually + /// resolve without manual intervention. + type EphemeralError: Debug; + + /// Fetch the number of the latest finalized block. + /// + /// The block number is its zero-indexed position within a linear view of the external network's + /// consensus. The genesis block accordingly has block number 0. + async fn latest_finalized_block_number(&self) -> Result<u64, Self::EphemeralError>; + + /// Fetch a block by its number. + async fn block_by_number(&self, number: u64) -> Result<Self::Block, Self::EphemeralError>; + + /// Scan a block for its outputs. + async fn scan_for_outputs( + &self, + block: &Self::Block, + key: Self::Key, + ) -> Result<Vec<Self::Output>, Self::EphemeralError>; +} + +#[async_trait::async_trait] +pub(crate) trait ContinuallyRan: Sized { + async fn run_instance(&mut self) -> Result<(), String>; + + async fn continually_run(mut self) { + // The default number of seconds to sleep before running the task again + let default_sleep_before_next_task = 5; + // The current number of seconds to sleep before running the task again + // We increment this upon errors in order to not flood the logs with errors + let mut current_sleep_before_next_task = default_sleep_before_next_task; + let increase_sleep_before_next_task = |current_sleep_before_next_task: &mut u64| { + let new_sleep = *current_sleep_before_next_task + default_sleep_before_next_task; + // Set a limit of sleeping for two minutes + *current_sleep_before_next_task = new_sleep.min(120); + }; + + loop { + match self.run_instance().await { + Ok(()) => { + // Upon a successful (error-free) loop iteration, reset the amount of time we sleep + current_sleep_before_next_task = default_sleep_before_next_task; + } + Err(e) => { + log::debug!("{}", e); + increase_sleep_before_next_task(&mut current_sleep_before_next_task); + } + } + + // Don't run the task again for another few seconds + // This is at the start of the loop so we can continue without skipping this delay + tokio::time::sleep(core::time::Duration::from_secs(current_sleep_before_next_task)).await; + } + } +} + +/* #[derive(Clone, Debug)] pub enum ScannerEvent { // Block scanned @@ -44,86 +110,6 @@ pub type ScannerEventChannel = mpsc::UnboundedReceiver>; #[derive(Clone, Debug)] struct ScannerDb(PhantomData, PhantomData); impl ScannerDb { - fn scanner_key(dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec { - D::key(b"SCANNER", dst, key) - } - - fn block_key(number: usize) -> Vec { - Self::scanner_key(b"block_id", u64::try_from(number).unwrap().to_le_bytes()) - } - fn block_number_key(id: &>::Id) -> Vec { - Self::scanner_key(b"block_number", id) - } - fn save_block(txn: &mut D::Transaction<'_>, number: usize, id: &>::Id) { - txn.put(Self::block_number_key(id), u64::try_from(number).unwrap().to_le_bytes()); - txn.put(Self::block_key(number), id); - } - fn block(getter: &G, number: usize) -> Option<>::Id> { - getter.get(Self::block_key(number)).map(|id| { - let mut res = >::Id::default(); - res.as_mut().copy_from_slice(&id); - res - }) - } - fn block_number(getter: &G, id: &>::Id) -> Option { - getter - .get(Self::block_number_key(id)) - .map(|number| u64::from_le_bytes(number.try_into().unwrap()).try_into().unwrap()) - } - - fn keys_key() -> Vec { - Self::scanner_key(b"keys", b"") - } - fn register_key( - txn: &mut D::Transaction<'_>,
activation_number: usize, - key: ::G, - ) { - let mut keys = txn.get(Self::keys_key()).unwrap_or(vec![]); - - let key_bytes = key.to_bytes(); - - let key_len = key_bytes.as_ref().len(); - assert_eq!(keys.len() % (8 + key_len), 0); - - // Sanity check this key isn't already present - let mut i = 0; - while i < keys.len() { - if &keys[(i + 8) .. ((i + 8) + key_len)] == key_bytes.as_ref() { - panic!("adding {} as a key yet it was already present", hex::encode(key_bytes)); - } - i += 8 + key_len; - } - - keys.extend(u64::try_from(activation_number).unwrap().to_le_bytes()); - keys.extend(key_bytes.as_ref()); - txn.put(Self::keys_key(), keys); - } - fn keys(getter: &G) -> Vec<(usize, ::G)> { - let bytes_vec = getter.get(Self::keys_key()).unwrap_or(vec![]); - let mut bytes: &[u8] = bytes_vec.as_ref(); - - // Assumes keys will be 32 bytes when calculating the capacity - // If keys are larger, this may allocate more memory than needed - // If keys are smaller, this may require additional allocations - // Either are fine - let mut res = Vec::with_capacity(bytes.len() / (8 + 32)); - while !bytes.is_empty() { - let mut activation_number = [0; 8]; - bytes.read_exact(&mut activation_number).unwrap(); - let activation_number = u64::from_le_bytes(activation_number).try_into().unwrap(); - - res.push((activation_number, N::Curve::read_G(&mut bytes).unwrap())); - } - res - } - fn retire_key(txn: &mut D::Transaction<'_>) { - let keys = Self::keys(txn); - assert_eq!(keys.len(), 2); - txn.del(Self::keys_key()); - Self::register_key(txn, keys[1].0, keys[1].1); - } - fn seen_key(id: &>::Id) -> Vec { Self::scanner_key(b"seen", id) } @@ -737,3 +723,4 @@ impl Scanner { } } } +*/ diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs new file mode 100644 index 00000000..6f784a7e --- /dev/null +++ b/processor/scanner/src/scan.rs @@ -0,0 +1,73 @@ +use serai_db::{Db, DbTxn}; + +use primitives::{Id, Block}; + +// TODO: Localize to ScanDb? 
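The `continually_run` loop in `lib.rs` above uses additive backoff: with the default of five seconds, consecutive errors sleep for 5, 10, 15, ... seconds, capped at two minutes, and any success resets the delay. A standalone sketch of that schedule (a hypothetical helper, not this crate's API):

fn next_sleep(current: u64, errored: bool) -> u64 {
  const DEFAULT_SLEEP: u64 = 5;
  if errored {
    // Additive increase, limited to sleeping for two minutes
    (current + DEFAULT_SLEEP).min(120)
  } else {
    // Any successful iteration resets the backoff
    DEFAULT_SLEEP
  }
}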
+use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; + +struct ScanForOutputsTask<D: Db, S: ScannerFeed> { + db: D, + feed: S, +} + +#[async_trait::async_trait] +impl<D: Db, S: ScannerFeed> ContinuallyRan for ScanForOutputsTask<D, S> { + async fn run_instance(&mut self) -> Result<(), String> { + // Fetch the safe to scan block + let latest_scannable = ScannerDb::<S>::latest_scannable_block(&self.db).expect("ScanForOutputsTask run before writing the start block"); + // Fetch the next block to scan + let next_to_scan = ScannerDb::<S>::next_to_scan_for_outputs_block(&self.db).expect("ScanForOutputsTask run before writing the start block"); + + for b in next_to_scan ..= latest_scannable { + let block = match self.feed.block_by_number(b).await { + Ok(block) => block, + Err(e) => Err(format!("couldn't fetch block {b}: {e:?}"))?, + }; + + // Check the ID of this block is the expected ID + { + let expected = ScannerDb::<S>::block_id(&self.db, b).expect("scannable block didn't have its ID saved"); + if block.id() != expected { + panic!("finalized chain reorganized from {} to {} at {}", hex::encode(expected), hex::encode(block.id()), b); + } + } + + log::info!("scanning block: {} ({b})", hex::encode(block.id())); + + let mut keys = ScannerDb::<S>::keys(&self.db).expect("scanning for a blockchain without any keys set"); + // Remove all the retired keys + while let Some(retire_at) = keys[0].retirement_block_number { + if retire_at <= b { + keys.remove(0); + } else { + break; + } + } + assert!(keys.len() <= 2); + + // Scan for each key + for key in &keys { + // If this key has yet to activate, skip it + if key.activation_block_number > b { + continue; + } + + let mut outputs = vec![]; + let scanned = match self.feed.scan_for_outputs(&block, key.key.0).await { + Ok(scanned) => scanned, + Err(e) => Err(format!("couldn't scan block {b}: {e:?}"))?, + }; + for output in scanned { + assert_eq!(output.key(), key.key.0); + // TODO: Check for dust + outputs.push(output); + } + } + + let mut txn = self.db.txn(); + // Update the latest scanned block + ScannerDb::<S>::set_next_to_scan_for_outputs_block(&mut txn, b + 1); + // TODO: If this had outputs, yield them and mark this block notable + /* + A block is notable if it's an activation, had outputs, or a retirement block. + */ + txn.commit(); + } + + Ok(()) + } +} diff --git a/processor/src/multisigs/mod.rs b/processor/src/multisigs/mod.rs index 12f01715..92ea0271 100644 --- a/processor/src/multisigs/mod.rs +++ b/processor/src/multisigs/mod.rs @@ -18,10 +18,12 @@ use log::{info, error}; use tokio::time::sleep; +/* TODO #[cfg(not(test))] mod scanner; #[cfg(test)] pub mod scanner; +*/ use scanner::{ScannerEvent, ScannerHandle, Scanner}; From a2717d73f08fb286d5a46a640b3964a0ae518ba1 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 20 Aug 2024 16:24:18 -0400 Subject: [PATCH 009/368] Flesh out new scanner a bit more Adds the task to mark blocks safe to scan, and outlines the task to report blocks.
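The run-now plumbing this commit introduces composes tasks into a small dependency graph. A wiring sketch, under the assumption that `index_task` and `scan_task` are values implementing `ContinuallyRan` constructed elsewhere (both names are hypothetical, and this fragment is illustrative rather than code from the patch):

// When the index task makes progress, immediately run the scan task rather than
// leaving it to wait out its timer
let (index_handle, index_recipient) = RunNowHandle::new();
let (scan_handle, scan_recipient) = RunNowHandle::new();
tokio::spawn(index_task.continually_run(index_recipient, vec![scan_handle]));
tokio::spawn(scan_task.continually_run(scan_recipient, vec![]));
// External events may also prompt an immediate iteration of the index task
index_handle.run_now();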
--- processor/scanner/src/db.rs | 60 +++++++++++++++++++++-- processor/scanner/src/eventuality.rs | 1 + processor/scanner/src/index.rs | 27 +++++----- processor/scanner/src/lib.rs | 65 ++++++++++++++++++++++--- processor/scanner/src/report.rs | 50 +++++++++++++++++++ processor/scanner/src/safe.rs | 73 ++++++++++++++++++++++++++++ processor/scanner/src/scan.rs | 15 +++--- 7 files changed, 259 insertions(+), 32 deletions(-) create mode 100644 processor/scanner/src/report.rs create mode 100644 processor/scanner/src/safe.rs diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 8bd7d944..073d5d42 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -1,11 +1,9 @@ use core::marker::PhantomData; -use group::GroupEncoding; - use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db}; -use primitives::{Id, Block, BorshG}; +use primitives::{Id, ReceivedOutput, Block, BorshG}; use crate::ScannerFeed; @@ -14,7 +12,7 @@ trait Borshy: BorshSerialize + BorshDeserialize {} impl Borshy for T {} #[derive(BorshSerialize, BorshDeserialize)] -struct SeraiKey { +pub(crate) struct SeraiKey { activation_block_number: u64, retirement_block_number: Option, key: K, @@ -35,6 +33,10 @@ create_db!( NextToScanForOutputsBlock: () -> u64, // The next block to check for resolving eventualities NextToCheckForEventualitiesBlock: () -> u64, + // The next block to potentially report + NextToPotentiallyReportBlock: () -> u64, + // The highest acknowledged block + HighestAcknowledgedBlock: () -> u64, // If a block was notable /* @@ -55,6 +57,8 @@ create_db!( */ // This collapses from `bool` to `()`, using if the value was set for true and false otherwise NotableBlock: (number: u64) -> (), + + SerializedOutputs: (block_number: u64) -> Vec, } ); @@ -74,6 +78,10 @@ impl ScannerDb { // activation_block_number is inclusive, so the key will be scanned for starting at the specified // block pub(crate) fn queue_key(txn: &mut impl DbTxn, activation_block_number: u64, key: S::Key) { + // Set this block as notable + NotableBlock::set(txn, activation_block_number, &()); + + // Push the key let mut keys: Vec>> = ActiveKeys::get(txn).unwrap_or(vec![]); for key_i in &keys { if key == key_i.key.0 { @@ -124,6 +132,7 @@ impl ScannerDb { LatestScannableBlock::set(txn, &start_block); NextToScanForOutputsBlock::set(txn, &start_block); NextToCheckForEventualitiesBlock::set(txn, &start_block); + NextToPotentiallyReportBlock::set(txn, &start_block); } pub(crate) fn set_latest_finalized_block(txn: &mut impl DbTxn, latest_finalized_block: u64) { @@ -159,4 +168,47 @@ impl ScannerDb { pub(crate) fn next_to_check_for_eventualities_block(getter: &impl Get) -> Option { NextToCheckForEventualitiesBlock::get(getter) } + + pub(crate) fn set_next_to_potentially_report_block( + txn: &mut impl DbTxn, + next_to_potentially_report_block: u64, + ) { + NextToPotentiallyReportBlock::set(txn, &next_to_potentially_report_block); + } + pub(crate) fn next_to_potentially_report_block(getter: &impl Get) -> Option { + NextToPotentiallyReportBlock::get(getter) + } + + pub(crate) fn set_highest_acknowledged_block( + txn: &mut impl DbTxn, + highest_acknowledged_block: u64, + ) { + HighestAcknowledgedBlock::set(txn, &highest_acknowledged_block); + } + pub(crate) fn highest_acknowledged_block(getter: &impl Get) -> Option { + HighestAcknowledgedBlock::get(getter) + } + + pub(crate) fn set_outputs( + txn: &mut impl DbTxn, + block_number: u64, + outputs: Vec>, + ) { + if outputs.is_empty() { + return; + } + + // 
Set this block as notable + NotableBlock::set(txn, block_number, &()); + + let mut buf = Vec::with_capacity(outputs.len() * 128); + for output in outputs { + output.write(&mut buf).unwrap(); + } + SerializedOutputs::set(txn, block_number, &buf); + } + + pub(crate) fn is_notable_block(getter: &impl Get, number: u64) -> bool { + NotableBlock::get(getter, number).is_some() + } } diff --git a/processor/scanner/src/eventuality.rs b/processor/scanner/src/eventuality.rs index e69de29b..70b786d1 100644 --- a/processor/scanner/src/eventuality.rs +++ b/processor/scanner/src/eventuality.rs @@ -0,0 +1 @@ +// TODO diff --git a/processor/scanner/src/index.rs b/processor/scanner/src/index.rs index 66477cdb..7967d5df 100644 --- a/processor/scanner/src/index.rs +++ b/processor/scanner/src/index.rs @@ -20,7 +20,7 @@ struct IndexFinalizedTask { #[async_trait::async_trait] impl ContinuallyRan for IndexFinalizedTask { - async fn run_instance(&mut self) -> Result<(), String> { + async fn run_iteration(&mut self) -> Result { // Fetch the latest finalized block let our_latest_finalized = ScannerDb::::latest_finalized_block(&self.db) .expect("IndexTask run before writing the start block"); @@ -29,6 +29,18 @@ impl ContinuallyRan for IndexFinalizedTask { Err(e) => Err(format!("couldn't fetch the latest finalized block number: {e:?}"))?, }; + if latest_finalized < our_latest_finalized { + // Explicitly log this as an error as returned ephemeral errors are logged with debug + // This doesn't panic as the node should sync along our indexed chain, and if it doesn't, + // we'll panic at that point in time + log::error!( + "node is out of sync, latest finalized {} is behind our indexed {}", + latest_finalized, + our_latest_finalized + ); + Err("node is out of sync".to_string())?; + } + // Index the hashes of all blocks until the latest finalized block for b in (our_latest_finalized + 1) ..= latest_finalized { let block = match self.feed.block_by_number(b).await { @@ -57,16 +69,7 @@ impl ContinuallyRan for IndexFinalizedTask { txn.commit(); } - Ok(()) + // Have dependents run if we updated the latest finalized block + Ok(our_latest_finalized != latest_finalized) } } - -/* - The processor can't index the blockchain unilaterally. It needs to develop a totally ordered view - of the blockchain. That requires consensus with other validators on when certain keys are set to - activate (and retire). We solve this by only scanning `n` blocks ahead of the last agreed upon - block, then waiting for Serai to acknowledge the block. This lets us safely schedule events after - this `n` block window (as demonstrated/proven with `mini`). - - TODO -*/ diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 736a62b9..04dcf824 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -1,4 +1,6 @@ -use core::fmt::Debug; +use core::{fmt::Debug, time::Duration}; + +use tokio::sync::mpsc; use primitives::{ReceivedOutput, Block}; @@ -50,11 +52,50 @@ pub trait ScannerFeed: Send + Sync { ) -> Result; } +/// A handle to immediately run an iteration of a task. +#[derive(Clone)] +pub(crate) struct RunNowHandle(mpsc::Sender<()>); +/// An instruction recipient to immediately run an iteration of a task. +pub(crate) struct RunNowRecipient(mpsc::Receiver<()>); + +impl RunNowHandle { + /// Create a new run-now handle to be assigned to a task. 
+ pub(crate) fn new() -> (Self, RunNowRecipient) { + // Uses a capacity of 1 as any call to run as soon as possible satisfies all calls to run as + // soon as possible + let (send, recv) = mpsc::channel(1); + (Self(send), RunNowRecipient(recv)) + } + + /// Tell the task to run now (and not whenever its next iteration on a timer is). + /// + /// Panics if the task has been dropped. + pub(crate) fn run_now(&self) { + #[allow(clippy::match_same_arms)] + match self.0.try_send(()) { + Ok(()) => {} + // NOP on full, as this task will already be run as soon as possible + Err(mpsc::error::TrySendError::Full(())) => {} + Err(mpsc::error::TrySendError::Closed(())) => { + panic!("task was unexpectedly closed when calling run_now") + } + } + } +} + #[async_trait::async_trait] pub(crate) trait ContinuallyRan: Sized { - async fn run_instance(&mut self) -> Result<(), String>; + /// Run an iteration of the task. + /// + /// If this returns `true`, all dependents of the task will immediately have a new iteration run + /// (without waiting for whatever timer they were already on). + async fn run_iteration(&mut self) -> Result<bool, String>; - async fn continually_run(mut self) { + /// Continually run the task. + /// + /// This takes a channel which can have a message sent over it to immediately trigger a new run + /// of an iteration. + async fn continually_run(mut self, mut run_now: RunNowRecipient, dependents: Vec<RunNowHandle>) { // The default number of seconds to sleep before running the task again let default_sleep_before_next_task = 5; // The current number of seconds to sleep before running the task again @@ -67,10 +108,16 @@ }; loop { - match self.run_instance().await { - Ok(()) => { + match self.run_iteration().await { + Ok(run_dependents) => { // Upon a successful (error-free) loop iteration, reset the amount of time we sleep current_sleep_before_next_task = default_sleep_before_next_task; + + if run_dependents { + for dependent in &dependents { + dependent.run_now(); + } + } } Err(e) => { log::debug!("{}", e); @@ -78,9 +125,11 @@ } } - // Don't run the task again for another few seconds - // This is at the start of the loop so we can continue without skipping this delay - tokio::time::sleep(core::time::Duration::from_secs(current_sleep_before_next_task)).await; + // Don't run the task again for another few seconds UNLESS told to run now + tokio::select! { + () = tokio::time::sleep(Duration::from_secs(current_sleep_before_next_task)) => {}, + msg = run_now.0.recv() => assert_eq!(msg, Some(()), "run now handle was dropped"), + } } } } diff --git a/processor/scanner/src/report.rs new file mode 100644 index 00000000..4d378b9c --- /dev/null +++ b/processor/scanner/src/report.rs @@ -0,0 +1,50 @@ +/* + We only report blocks once both tasks, scanning for received outputs and eventualities, have + processed the block. This ensures we've performed all necessary operations. +*/ + +use serai_db::{Db, DbTxn}; + +use primitives::{Id, Block}; + +// TODO: Localize to ReportDb?
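A minimal sketch of how the task API above might be wired together (illustrative only; `index_task` and `scan_task` are assumed values of types implementing `ContinuallyRan`):

  // Each task is assigned a handle/recipient pair. The handles of dependent
  // tasks are given to the task they depend on, so an iteration which makes
  // progress immediately triggers its dependents.
  let (index_handle, index_recipient) = RunNowHandle::new();
  let (scan_handle, scan_recipient) = RunNowHandle::new();
  // The scan task is a dependent of the index task
  tokio::spawn(index_task.continually_run(index_recipient, vec![scan_handle.clone()]));
  tokio::spawn(scan_task.continually_run(scan_recipient, vec![]));
  // An external event may also force an immediate iteration
  scan_handle.run_now();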
+use crate::{db::ScannerDb, ScannerFeed}; + +struct ReportTask { + db: D, + feed: S, +} + +#[async_trait::async_trait] +impl ContinuallyRan for ReportTask { + async fn run_iteration(&mut self) -> Result { + let highest_reportable = { + // Fetch the latest scanned and latest checked block + let next_to_scan = ScannerDb::::next_to_scan_for_outputs_block(&self.db).expect("ReportTask run before writing the start block"); + let next_to_check = ScannerDb::::next_to_check_for_eventualities_block(&self.db).expect("ReportTask run before writing the start block"); + // If we haven't done any work, return + if (next_to_scan == 0) || (next_to_check == 0) { + return Ok(false); + } + let last_scanned = next_to_scan - 1; + let last_checked = next_to_check - 1; + last_scanned.min(last_checked) + }; + + let next_to_potentially_report = ScannerDb::::next_block_to_potentially_report(&self.db).expect("ReportTask run before writing the start block"); + + for b in next_to_potentially_report ..= highest_reportable { + if ScannerDb::::is_block_notable(b) { + todo!("TODO: Make Batches, which requires handling Forwarded within this crate"); + } + + let mut txn = self.db.txn(); + // Update the next to potentially report block + ScannerDb::::set_next_to_potentially_report_block(&mut txn, b + 1); + txn.commit(); + } + + // Run dependents if we decided to report any blocks + Ok(next_to_potentially_report <= highest_reportable) + } +} diff --git a/processor/scanner/src/safe.rs b/processor/scanner/src/safe.rs new file mode 100644 index 00000000..a5de448d --- /dev/null +++ b/processor/scanner/src/safe.rs @@ -0,0 +1,73 @@ +use core::marker::PhantomData; + +use serai_db::{Db, DbTxn}; + +use primitives::{Id, Block}; + +// TODO: Localize to SafeDb? +use crate::{db::ScannerDb, ScannerFeed}; + +/* + We mark blocks safe to scan when they're no more than `(CONFIRMATIONS - 1)` blocks after the + oldest notable block still pending acknowledgement (creating a window of length `CONFIRMATIONS` + when including the block pending acknowledgement). This means that if all known notable blocks + have been acknowledged, and a stretch of non-notable blocks occurs, they'll automatically be + marked safe to scan (since they come before the next oldest notable block still pending + acknowledgement). + + This design lets Serai safely schedule events `CONFIRMATIONS` blocks after the latest + acknowledged block. For an exhaustive proof of this, please see `mini`. 
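As a worked example (numbers assumed for illustration): with `CONFIRMATIONS = 10`, if the oldest notable block still pending acknowledgement is block 100, blocks 101 ..= 109 are safe to scan, forming the window of length 10 when block 100 itself is included. Once block 100 is acknowledged, the window slides forward to the next oldest notable block still pending acknowledgement, or freely across any stretch of non-notable blocks.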
+*/ +struct SafeToScanTask { + db: D, + _S: PhantomData, +} + +#[async_trait::async_trait] +impl ContinuallyRan for SafeToScanTask { + async fn run_iteration(&mut self) -> Result { + // First, we fetch the highest acknowledged block + let Some(highest_acknowledged_block) = ScannerDb::::highest_acknowledged_block(&self.db) else { + // If no blocks have been acknowledged, we don't mark any safe + // Once the start block (implicitly safe) has been acknowledged, we proceed from there + return Ok(false); + }; + + let latest_block_known_if_pending_acknowledgement = { + // The next block to potentially report comes after all blocks we've decided to report or not + // If we've decided to report (or not report) a block, we know if it needs acknowledgement + // (and accordingly is pending acknowledgement) + // Accordingly, the block immediately before this is the latest block with a known status + ScannerDb::::next_block_to_potentially_report(&self.db).expect("SafeToScanTask run before writing the start block") - 1 + }; + + let mut oldest_pending_acknowledgement = None; + for b in (highest_acknowledged_block + 1) ..= latest_block_known_if_pending_acknowledgement { + // If the block isn't notable, immediately flag it as acknowledged + if !ScannerDb::::is_block_notable(b) { + let mut txn = self.db.txn(); + ScannerDb::::set_highest_acknowledged_block(&mut txn, b); + txn.commit(); + continue; + } + + oldest_pending_acknowledgement = Some(b); + break; + } + + // `oldest_pending_acknowledgement` is now the oldest block pending acknowledgement or `None` + // If it's `None`, then we were able to implicitly acknowledge all blocks within this span + // Since the safe block is `(CONFIRMATIONS - 1)` blocks after the oldest block still pending + // acknowledgement, and the oldest block still pending acknowledgement is in the future, + // we know the safe block to scan to is + // `>= latest_block_known_if_pending_acknowledgement + (CONFIRMATIONS - 1)` + let oldest_pending_acknowledgement = oldest_pending_acknowledgement.unwrap_or(latest_block_known_if_pending_acknowledgement); + + // Update the latest scannable block + let mut txn = self.db.txn(); + ScannerDb::::set_latest_scannable_block(oldest_pending_acknowledgement + (CONFIRMATIONS - 1)); + txn.commit(); + + Ok(next_to_potentially_report <= highest_reportable) + } +} diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index 6f784a7e..b96486d4 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -12,7 +12,7 @@ struct ScanForOutputsTask { #[async_trait::async_trait] impl ContinuallyRan for ScanForOutputsTask { - async fn run_instance(&mut self) -> Result<(), String> { + async fn run_iteration(&mut self) -> Result { // Fetch the safe to scan block let latest_scannable = ScannerDb::::latest_scannable_block(&self.db).expect("ScanForOutputsTask run before writing the start block"); // Fetch the next block to scan @@ -43,6 +43,7 @@ impl ContinuallyRan for ScanForOutputsTask { } assert!(keys.len() <= 2); + let mut outputs = vec![]; // Scan for each key for key in keys { // If this key has yet to active, skip it @@ -50,7 +51,6 @@ impl ContinuallyRan for ScanForOutputsTask { continue; } - let mut outputs = vec![]; for output in network.scan_for_outputs(&block, key).awaits { assert_eq!(output.key(), key); // TODO: Check for dust @@ -59,15 +59,14 @@ impl ContinuallyRan for ScanForOutputsTask { } let mut txn = self.db.txn(); - // Update the latest scanned block + // Save the outputs + ScannerDb::::set_outputs(&mut txn, 
b, outputs); + // Update the next to scan block ScannerDb::::set_next_to_scan_for_outputs_block(&mut txn, b + 1); - // TODO: If this had outputs, yield them and mark this block notable - /* - A block is notable if it's an activation, had outputs, or a retirement block. - */ txn.commit(); } - Ok(()) + // Run dependents if we successfully scanned any blocks + Ok(next_to_scan <= latest_scannable) } } From 2b47feafed43a0b0ae90d5b31cd98ed0679d7550 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 20 Aug 2024 16:39:03 -0400 Subject: [PATCH 010/368] Correct misc compilation errors --- processor/scanner/src/db.rs | 8 ++++---- processor/scanner/src/lib.rs | 21 +++++++++++++++++++-- processor/scanner/src/report.rs | 13 ++++++++----- processor/scanner/src/safe.rs | 23 ++++++++++++++++------- processor/scanner/src/scan.rs | 32 +++++++++++++++++++++++--------- 5 files changed, 70 insertions(+), 27 deletions(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 073d5d42..c7cbd253 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -13,9 +13,9 @@ impl Borshy for T {} #[derive(BorshSerialize, BorshDeserialize)] pub(crate) struct SeraiKey { - activation_block_number: u64, - retirement_block_number: Option, - key: K, + pub(crate) activation_block_number: u64, + pub(crate) retirement_block_number: Option, + pub(crate) key: K, } create_db!( @@ -208,7 +208,7 @@ impl ScannerDb { SerializedOutputs::set(txn, block_number, &buf); } - pub(crate) fn is_notable_block(getter: &impl Get, number: u64) -> bool { + pub(crate) fn is_block_notable(getter: &impl Get, number: u64) -> bool { NotableBlock::get(getter, number).is_some() } } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 04dcf824..a6f3e899 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -6,12 +6,21 @@ use primitives::{ReceivedOutput, Block}; mod db; mod index; +mod scan; +mod eventuality; +mod report; +mod safe; /// A feed usable to scan a blockchain. /// /// This defines the primitive types used, along with various getters necessary for indexing. #[async_trait::async_trait] pub trait ScannerFeed: Send + Sync { + /// The amount of confirmations required for a block to be finalized. + /// + /// This value must be at least `1`. + const CONFIRMATIONS: u64; + /// The type of the key used to receive coins on this blockchain. type Key: group::Group + group::GroupEncoding; @@ -35,11 +44,19 @@ pub trait ScannerFeed: Send + Sync { /// resolve without manual intervention. type EphemeralError: Debug; + /// Fetch the number of the latest block. + /// + /// The block number is its zero-indexed position within a linear view of the external network's + /// consensus. The genesis block accordingly has block number 0. + async fn latest_block_number(&self) -> Result; + /// Fetch the number of the latest finalized block. /// /// The block number is its zero-indexed position within a linear view of the external network's /// consensus. The genesis block accordingly has block number 0. - async fn latest_finalized_block_number(&self) -> Result; + async fn latest_finalized_block_number(&self) -> Result { + Ok(self.latest_block_number().await? - Self::CONFIRMATIONS) + } /// Fetch a block by its number. async fn block_by_number(&self, number: u64) -> Result; @@ -49,7 +66,7 @@ pub trait ScannerFeed: Send + Sync { &self, block: &Self::Block, key: Self::Key, - ) -> Result; + ) -> Result, Self::EphemeralError>; } /// A handle to immediately run an iteration of a task. 
diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs index 4d378b9c..5c57a3f5 100644 --- a/processor/scanner/src/report.rs +++ b/processor/scanner/src/report.rs @@ -8,7 +8,7 @@ use serai_db::{Db, DbTxn}; use primitives::{Id, Block}; // TODO: Localize to ReportDb? -use crate::{db::ScannerDb, ScannerFeed}; +use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; struct ReportTask { db: D, @@ -20,8 +20,10 @@ impl ContinuallyRan for ReportTask { async fn run_iteration(&mut self) -> Result { let highest_reportable = { // Fetch the latest scanned and latest checked block - let next_to_scan = ScannerDb::::next_to_scan_for_outputs_block(&self.db).expect("ReportTask run before writing the start block"); - let next_to_check = ScannerDb::::next_to_check_for_eventualities_block(&self.db).expect("ReportTask run before writing the start block"); + let next_to_scan = ScannerDb::::next_to_scan_for_outputs_block(&self.db) + .expect("ReportTask run before writing the start block"); + let next_to_check = ScannerDb::::next_to_check_for_eventualities_block(&self.db) + .expect("ReportTask run before writing the start block"); // If we haven't done any work, return if (next_to_scan == 0) || (next_to_check == 0) { return Ok(false); @@ -31,10 +33,11 @@ impl ContinuallyRan for ReportTask { last_scanned.min(last_checked) }; - let next_to_potentially_report = ScannerDb::::next_block_to_potentially_report(&self.db).expect("ReportTask run before writing the start block"); + let next_to_potentially_report = ScannerDb::::next_to_potentially_report_block(&self.db) + .expect("ReportTask run before writing the start block"); for b in next_to_potentially_report ..= highest_reportable { - if ScannerDb::::is_block_notable(b) { + if ScannerDb::::is_block_notable(&self.db, b) { todo!("TODO: Make Batches, which requires handling Forwarded within this crate"); } diff --git a/processor/scanner/src/safe.rs b/processor/scanner/src/safe.rs index a5de448d..a0b4f547 100644 --- a/processor/scanner/src/safe.rs +++ b/processor/scanner/src/safe.rs @@ -5,7 +5,7 @@ use serai_db::{Db, DbTxn}; use primitives::{Id, Block}; // TODO: Localize to SafeDb? 
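For intuition on the reporting bound above, a concrete instance (numbers assumed): if `next_to_scan = 101` and `next_to_check = 95`, then `last_scanned = 100`, `last_checked = 94`, and `highest_reportable = min(100, 94) = 94`; a block is only reported once both the scan task and the eventuality task have fully processed it.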
-use crate::{db::ScannerDb, ScannerFeed}; +use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; /* We mark blocks safe to scan when they're no more than `(CONFIRMATIONS - 1)` blocks after the @@ -27,7 +27,8 @@ struct SafeToScanTask { impl ContinuallyRan for SafeToScanTask { async fn run_iteration(&mut self) -> Result { // First, we fetch the highest acknowledged block - let Some(highest_acknowledged_block) = ScannerDb::::highest_acknowledged_block(&self.db) else { + let Some(highest_acknowledged_block) = ScannerDb::::highest_acknowledged_block(&self.db) + else { // If no blocks have been acknowledged, we don't mark any safe // Once the start block (implicitly safe) has been acknowledged, we proceed from there return Ok(false); @@ -38,13 +39,15 @@ impl ContinuallyRan for SafeToScanTask { // If we've decided to report (or not report) a block, we know if it needs acknowledgement // (and accordingly is pending acknowledgement) // Accordingly, the block immediately before this is the latest block with a known status - ScannerDb::::next_block_to_potentially_report(&self.db).expect("SafeToScanTask run before writing the start block") - 1 + ScannerDb::::next_to_potentially_report_block(&self.db) + .expect("SafeToScanTask run before writing the start block") - + 1 }; let mut oldest_pending_acknowledgement = None; for b in (highest_acknowledged_block + 1) ..= latest_block_known_if_pending_acknowledgement { // If the block isn't notable, immediately flag it as acknowledged - if !ScannerDb::::is_block_notable(b) { + if !ScannerDb::::is_block_notable(&self.db, b) { let mut txn = self.db.txn(); ScannerDb::::set_highest_acknowledged_block(&mut txn, b); txn.commit(); @@ -61,13 +64,19 @@ impl ContinuallyRan for SafeToScanTask { // acknowledgement, and the oldest block still pending acknowledgement is in the future, // we know the safe block to scan to is // `>= latest_block_known_if_pending_acknowledgement + (CONFIRMATIONS - 1)` - let oldest_pending_acknowledgement = oldest_pending_acknowledgement.unwrap_or(latest_block_known_if_pending_acknowledgement); + let oldest_pending_acknowledgement = + oldest_pending_acknowledgement.unwrap_or(latest_block_known_if_pending_acknowledgement); + + let old_safe_block = ScannerDb::::latest_scannable_block(&self.db) + .expect("SafeToScanTask run before writing the start block"); + let new_safe_block = oldest_pending_acknowledgement + + (S::CONFIRMATIONS.checked_sub(1).expect("CONFIRMATIONS wasn't at least 1")); // Update the latest scannable block let mut txn = self.db.txn(); - ScannerDb::::set_latest_scannable_block(oldest_pending_acknowledgement + (CONFIRMATIONS - 1)); + ScannerDb::::set_latest_scannable_block(&mut txn, new_safe_block); txn.commit(); - Ok(next_to_potentially_report <= highest_reportable) + Ok(old_safe_block != new_safe_block) } } diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index b96486d4..92165002 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -1,9 +1,9 @@ use serai_db::{Db, DbTxn}; -use primitives::{Id, Block}; +use primitives::{Id, ReceivedOutput, Block}; // TODO: Localize to ScanDb? 
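For the safe-to-scan computation fixed above, a concrete instance (numbers assumed): with `S::CONFIRMATIONS = 10` and the oldest block pending acknowledgement at height 100, `new_safe_block = 100 + 9 = 109`, and the iteration only reports progress when that value differs from `old_safe_block`.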
-use crate::{db::ScannerDb, ScannerFeed}; +use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; struct ScanForOutputsTask { db: D, @@ -14,9 +14,11 @@ struct ScanForOutputsTask { impl ContinuallyRan for ScanForOutputsTask { async fn run_iteration(&mut self) -> Result { // Fetch the safe to scan block - let latest_scannable = ScannerDb::::latest_scannable_block(&self.db).expect("ScanForOutputsTask run before writing the start block"); + let latest_scannable = ScannerDb::::latest_scannable_block(&self.db) + .expect("ScanForOutputsTask run before writing the start block"); // Fetch the next block to scan - let next_to_scan = ScannerDb::::next_to_scan_for_outputs_block(&self.db).expect("ScanForOutputsTask run before writing the start block"); + let next_to_scan = ScannerDb::::next_to_scan_for_outputs_block(&self.db) + .expect("ScanForOutputsTask run before writing the start block"); for b in next_to_scan ..= latest_scannable { let block = match self.feed.block_by_number(b).await { @@ -26,15 +28,22 @@ impl ContinuallyRan for ScanForOutputsTask { // Check the ID of this block is the expected ID { - let expected = ScannerDb::::block_id(b).expect("scannable block didn't have its ID saved"); + let expected = + ScannerDb::::block_id(&self.db, b).expect("scannable block didn't have its ID saved"); if block.id() != expected { - panic!("finalized chain reorganized from {} to {} at {}", hex::encode(expected), hex::encode(block.id()), b); + panic!( + "finalized chain reorganized from {} to {} at {}", + hex::encode(expected), + hex::encode(block.id()), + b + ); } } log::info!("scanning block: {} ({b})", hex::encode(block.id())); - let keys = ScannerDb::::keys(&self.db).expect("scanning for a blockchain without any keys set"); + let mut keys = + ScannerDb::::keys(&self.db).expect("scanning for a blockchain without any keys set"); // Remove all the retired keys while let Some(retire_at) = keys[0].retirement_block_number { if retire_at <= b { @@ -51,8 +60,13 @@ impl ContinuallyRan for ScanForOutputsTask { continue; } - for output in network.scan_for_outputs(&block, key).awaits { - assert_eq!(output.key(), key); + for output in self + .feed + .scan_for_outputs(&block, key.key.0) + .await + .map_err(|e| format!("failed to scan block {b}: {e:?}"))? + { + assert_eq!(output.key(), key.key.0); // TODO: Check for dust outputs.push(output); } From 951872b026735df604c8e556768db97f02be4a0f Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 20 Aug 2024 16:51:58 -0400 Subject: [PATCH 011/368] Differentiate BlockHeader from Block --- processor/primitives/src/lib.rs | 32 ++++++++++++++++++++++++++++---- processor/scanner/src/db.rs | 30 +++++++++++------------------- processor/scanner/src/index.rs | 4 ++-- processor/scanner/src/lib.rs | 28 +++++++++++----------------- processor/scanner/src/scan.rs | 7 +------ 5 files changed, 53 insertions(+), 48 deletions(-) diff --git a/processor/primitives/src/lib.rs b/processor/primitives/src/lib.rs index 535dd14f..45f02571 100644 --- a/processor/primitives/src/lib.rs +++ b/processor/primitives/src/lib.rs @@ -5,7 +5,7 @@ use core::fmt::Debug; use std::io; -use group::GroupEncoding; +use group::{Group, GroupEncoding}; use serai_primitives::Balance; @@ -137,9 +137,8 @@ pub trait ReceivedOutput: fn read(reader: &mut R) -> io::Result; } -/// A block from an external network. -#[async_trait::async_trait] -pub trait Block: Send + Sync + Sized + Clone + Debug { +/// A block header from an external network. 
+pub trait BlockHeader: Send + Sync + Sized + Clone + Debug { /// The type used to identify blocks. type Id: 'static + Id; /// The ID of this block. @@ -148,6 +147,31 @@ pub trait Block: Send + Sync + Sized + Clone + Debug { fn parent(&self) -> Self::Id; } +/// A block from an external network. +/// +/// A block is defined as a consensus event associated with a set of transactions. It is not +/// necessary to literally define it as whatever the external network defines as a block. For +/// external networks which finalize block(s), this block type should be a representation of all +/// transactions within a period finalization (whether block or epoch). +#[async_trait::async_trait] +pub trait Block: Send + Sync + Sized + Clone + Debug { + /// The type used for this block's header. + type Header: BlockHeader; + + /// The type used to represent keys on this external network. + type Key: Group + GroupEncoding; + /// The type used to represent addresses on this external network. + type Address; + /// The type used to represent received outputs on this external network. + type Output: ReceivedOutput; + + /// The ID of this block. + fn id(&self) -> ::Id; + + /// Scan all outputs within this block to find the outputs spendable by this key. + fn scan_for_outputs(&self, key: Self::Key) -> Vec; +} + /// A wrapper for a group element which implements the borsh traits. #[derive(Clone, Copy, PartialEq, Eq, Debug)] pub struct BorshG(pub G); diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index c7cbd253..0edfad97 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -5,7 +5,7 @@ use serai_db::{Get, DbTxn, create_db}; use primitives::{Id, ReceivedOutput, Block, BorshG}; -use crate::ScannerFeed; +use crate::{ScannerFeed, BlockIdFor, KeyFor, OutputFor}; // The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. 
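The comment above refers to a marker-trait workaround; written with its generics explicit, the pattern is:

  // A marker trait with a blanket impl collapses the two-trait bound into a
  // single name the `create_db!` macro can accept as a bound.
  trait Borshy: BorshSerialize + BorshDeserialize {}
  impl<T: BorshSerialize + BorshDeserialize> Borshy for T {}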
trait Borshy: BorshSerialize + BorshDeserialize {} @@ -64,25 +64,25 @@ create_db!( pub(crate) struct ScannerDb(PhantomData); impl ScannerDb { - pub(crate) fn set_block(txn: &mut impl DbTxn, number: u64, id: ::Id) { + pub(crate) fn set_block(txn: &mut impl DbTxn, number: u64, id: BlockIdFor) { BlockId::set(txn, number, &id); BlockNumber::set(txn, id, &number); } - pub(crate) fn block_id(getter: &impl Get, number: u64) -> Option<::Id> { + pub(crate) fn block_id(getter: &impl Get, number: u64) -> Option> { BlockId::get(getter, number) } - pub(crate) fn block_number(getter: &impl Get, id: ::Id) -> Option { + pub(crate) fn block_number(getter: &impl Get, id: BlockIdFor) -> Option { BlockNumber::get(getter, id) } // activation_block_number is inclusive, so the key will be scanned for starting at the specified // block - pub(crate) fn queue_key(txn: &mut impl DbTxn, activation_block_number: u64, key: S::Key) { + pub(crate) fn queue_key(txn: &mut impl DbTxn, activation_block_number: u64, key: KeyFor) { // Set this block as notable NotableBlock::set(txn, activation_block_number, &()); // Push the key - let mut keys: Vec>> = ActiveKeys::get(txn).unwrap_or(vec![]); + let mut keys: Vec>>> = ActiveKeys::get(txn).unwrap_or(vec![]); for key_i in &keys { if key == key_i.key.0 { panic!("queueing a key prior queued"); @@ -97,8 +97,8 @@ impl ScannerDb { } // retirement_block_number is inclusive, so the key will no longer be scanned for as of the // specified block - pub(crate) fn retire_key(txn: &mut impl DbTxn, retirement_block_number: u64, key: S::Key) { - let mut keys: Vec>> = + pub(crate) fn retire_key(txn: &mut impl DbTxn, retirement_block_number: u64, key: KeyFor) { + let mut keys: Vec>>> = ActiveKeys::get(txn).expect("retiring key yet no active keys"); assert!(keys.len() > 1, "retiring our only key"); @@ -118,15 +118,11 @@ impl ScannerDb { } panic!("retiring key yet not present in keys") } - pub(crate) fn keys(getter: &impl Get) -> Option>>> { + pub(crate) fn keys(getter: &impl Get) -> Option>>>> { ActiveKeys::get(getter) } - pub(crate) fn set_start_block( - txn: &mut impl DbTxn, - start_block: u64, - id: ::Id, - ) { + pub(crate) fn set_start_block(txn: &mut impl DbTxn, start_block: u64, id: BlockIdFor) { Self::set_block(txn, start_block, id); LatestFinalizedBlock::set(txn, &start_block); LatestScannableBlock::set(txn, &start_block); @@ -189,11 +185,7 @@ impl ScannerDb { HighestAcknowledgedBlock::get(getter) } - pub(crate) fn set_outputs( - txn: &mut impl DbTxn, - block_number: u64, - outputs: Vec>, - ) { + pub(crate) fn set_outputs(txn: &mut impl DbTxn, block_number: u64, outputs: Vec>) { if outputs.is_empty() { return; } diff --git a/processor/scanner/src/index.rs b/processor/scanner/src/index.rs index 7967d5df..de68522e 100644 --- a/processor/scanner/src/index.rs +++ b/processor/scanner/src/index.rs @@ -1,6 +1,6 @@ use serai_db::{Db, DbTxn}; -use primitives::{Id, Block}; +use primitives::{Id, BlockHeader}; // TODO: Localize to IndexDb? 
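The `BlockIdFor`, `KeyFor`, and `OutputFor` aliases used in this hunk are defined in lib.rs later in this patch; with their generics written out, they read roughly as:

  // Projections through the feed's block type, so the DB layer can name the
  // block ID, key, and output types without extra parameters.
  type BlockIdFor<S> = <<<S as ScannerFeed>::Block as Block>::Header as BlockHeader>::Id;
  type KeyFor<S> = <<S as ScannerFeed>::Block as Block>::Key;
  type OutputFor<S> = <<S as ScannerFeed>::Block as Block>::Output;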
use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; @@ -43,7 +43,7 @@ impl ContinuallyRan for IndexFinalizedTask { // Index the hashes of all blocks until the latest finalized block for b in (our_latest_finalized + 1) ..= latest_finalized { - let block = match self.feed.block_by_number(b).await { + let block = match self.feed.block_header_by_number(b).await { Ok(block) => block, Err(e) => Err(format!("couldn't fetch block {b}: {e:?}"))?, }; diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index a6f3e899..addebb60 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -2,7 +2,7 @@ use core::{fmt::Debug, time::Duration}; use tokio::sync::mpsc; -use primitives::{ReceivedOutput, Block}; +use primitives::{ReceivedOutput, BlockHeader, Block}; mod db; mod index; @@ -21,15 +21,6 @@ pub trait ScannerFeed: Send + Sync { /// This value must be at least `1`. const CONFIRMATIONS: u64; - /// The type of the key used to receive coins on this blockchain. - type Key: group::Group + group::GroupEncoding; - - /// The type of the address used to specify who to send coins to on this blockchain. - type Address; - - /// The type representing a received (and spendable) output. - type Output: ReceivedOutput; - /// The representation of a block for this blockchain. /// /// A block is defined as a consensus event associated with a set of transactions. It is not @@ -58,17 +49,20 @@ pub trait ScannerFeed: Send + Sync { Ok(self.latest_block_number().await? - Self::CONFIRMATIONS) } + /// Fetch a block header by its number. + async fn block_header_by_number( + &self, + number: u64, + ) -> Result<::Header, Self::EphemeralError>; + /// Fetch a block by its number. async fn block_by_number(&self, number: u64) -> Result; - - /// Scan a block for its outputs. - async fn scan_for_outputs( - &self, - block: &Self::Block, - key: Self::Key, - ) -> Result, Self::EphemeralError>; } +type BlockIdFor = <<::Block as Block>::Header as BlockHeader>::Id; +type KeyFor = <::Block as Block>::Key; +type OutputFor = <::Block as Block>::Output; + /// A handle to immediately run an iteration of a task. #[derive(Clone)] pub(crate) struct RunNowHandle(mpsc::Sender<()>); diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index 92165002..6743d950 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -60,12 +60,7 @@ impl ContinuallyRan for ScanForOutputsTask { continue; } - for output in self - .feed - .scan_for_outputs(&block, key.key.0) - .await - .map_err(|e| format!("failed to scan block {b}: {e:?}"))? 
- { + for output in block.scan_for_outputs(key.key.0) { assert_eq!(output.key(), key.key.0); // TODO: Check for dust outputs.push(output); From 155ad48f4cbe5dee2de99797d846158870c031a6 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 20 Aug 2024 18:20:28 -0400 Subject: [PATCH 012/368] Handle dust --- Cargo.lock | 1 + processor/primitives/src/lib.rs | 36 ++++++++++++++++----------------- processor/scanner/Cargo.toml | 2 ++ processor/scanner/src/lib.rs | 15 ++++++++++++++ processor/scanner/src/scan.rs | 20 +++++++++++++++++- 5 files changed, 55 insertions(+), 19 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 230ed22f..e3e6f378 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8669,6 +8669,7 @@ dependencies = [ "log", "parity-scale-codec", "serai-db", + "serai-primitives", "serai-processor-messages", "serai-processor-primitives", "thiserror", diff --git a/processor/primitives/src/lib.rs b/processor/primitives/src/lib.rs index 45f02571..744aae47 100644 --- a/processor/primitives/src/lib.rs +++ b/processor/primitives/src/lib.rs @@ -34,6 +34,24 @@ pub trait Id: } impl Id for [u8; N] where [u8; N]: Default {} +/// A wrapper for a group element which implements the borsh traits. +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub struct BorshG(pub G); +impl BorshSerialize for BorshG { + fn serialize(&self, writer: &mut W) -> borsh::io::Result<()> { + writer.write_all(self.0.to_bytes().as_ref()) + } +} +impl BorshDeserialize for BorshG { + fn deserialize_reader(reader: &mut R) -> borsh::io::Result { + let mut repr = G::Repr::default(); + reader.read_exact(repr.as_mut())?; + Ok(Self( + Option::::from(G::from_bytes(&repr)).ok_or(borsh::io::Error::other("invalid point"))?, + )) + } +} + /// The type of the output. #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub enum OutputType { @@ -171,21 +189,3 @@ pub trait Block: Send + Sync + Sized + Clone + Debug { /// Scan all outputs within this block to find the outputs spendable by this key. fn scan_for_outputs(&self, key: Self::Key) -> Vec; } - -/// A wrapper for a group element which implements the borsh traits. 
-#[derive(Clone, Copy, PartialEq, Eq, Debug)] -pub struct BorshG(pub G); -impl BorshSerialize for BorshG { - fn serialize(&self, writer: &mut W) -> borsh::io::Result<()> { - writer.write_all(self.0.to_bytes().as_ref()) - } -} -impl BorshDeserialize for BorshG { - fn deserialize_reader(reader: &mut R) -> borsh::io::Result { - let mut repr = G::Repr::default(); - reader.read_exact(repr.as_mut())?; - Ok(Self( - Option::::from(G::from_bytes(&repr)).ok_or(borsh::io::Error::other("invalid point"))?, - )) - } -} diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index 670581d9..82de4de1 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -35,5 +35,7 @@ tokio = { version = "1", default-features = false, features = ["rt-multi-thread" serai-db = { path = "../../common/db" } +serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } + messages = { package = "serai-processor-messages", path = "../messages" } primitives = { package = "serai-processor-primitives", path = "../primitives" } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index addebb60..02c88599 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -2,6 +2,7 @@ use core::{fmt::Debug, time::Duration}; use tokio::sync::mpsc; +use serai_primitives::{Coin, Amount}; use primitives::{ReceivedOutput, BlockHeader, Block}; mod db; @@ -57,6 +58,20 @@ pub trait ScannerFeed: Send + Sync { /// Fetch a block by its number. async fn block_by_number(&self, number: u64) -> Result; + + /// The cost to aggregate an input as of the specified block. + /// + /// This is defined as the transaction fee for a 2-input, 1-output transaction. + async fn cost_to_aggregate( + &self, + coin: Coin, + block_number: u64, + ) -> Result; + + /// The dust threshold for the specified coin. + /// + /// This should be a value worth handling at a human level. 
+ fn dust(&self, coin: Coin) -> Amount; } type BlockIdFor = <<::Block as Block>::Header as BlockHeader>::Id; diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index 6743d950..6058c7da 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -62,7 +62,25 @@ impl ContinuallyRan for ScanForOutputsTask { for output in block.scan_for_outputs(key.key.0) { assert_eq!(output.key(), key.key.0); - // TODO: Check for dust + + // Check this isn't dust + { + let mut balance = output.balance(); + // First, subtract 2 * the cost to aggregate, as detailed in + // `spec/processor/UTXO Management.md` + // TODO: Cache this + let cost_to_aggregate = + self.feed.cost_to_aggregate(balance.coin, b).await.map_err(|e| { + format!("couldn't fetch cost to aggregate {:?} at {b}: {e:?}", balance.coin) + })?; + balance.amount.0 -= 2 * cost_to_aggregate.0; + + // Now, check it's still past the dust threshold + if balance.amount.0 < self.feed.dust(balance.coin).0 { + continue; + } + } + outputs.push(output); } } From 74d3075dae7493beab3551c147bf72cb9c633369 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 20 Aug 2024 19:37:47 -0400 Subject: [PATCH 013/368] Document expectations on Eventuality task and correct code determining the block safe to scan/report --- processor/scanner/src/db.rs | 22 +--- processor/scanner/src/eventuality.rs | 49 ++++++++ processor/scanner/src/lib.rs | 176 +-------------------------- processor/scanner/src/report.rs | 14 ++- processor/scanner/src/safe.rs | 82 ------------- 5 files changed, 65 insertions(+), 278 deletions(-) delete mode 100644 processor/scanner/src/safe.rs diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 0edfad97..e92435bc 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -27,16 +27,12 @@ create_db!( // The latest finalized block to appear of a blockchain LatestFinalizedBlock: () -> u64, - // The latest block which it's safe to scan (dependent on what Serai has acknowledged scanning) - LatestScannableBlock: () -> u64, // The next block to scan for received outputs NextToScanForOutputsBlock: () -> u64, // The next block to check for resolving eventualities NextToCheckForEventualitiesBlock: () -> u64, // The next block to potentially report NextToPotentiallyReportBlock: () -> u64, - // The highest acknowledged block - HighestAcknowledgedBlock: () -> u64, // If a block was notable /* @@ -125,7 +121,6 @@ impl ScannerDb { pub(crate) fn set_start_block(txn: &mut impl DbTxn, start_block: u64, id: BlockIdFor) { Self::set_block(txn, start_block, id); LatestFinalizedBlock::set(txn, &start_block); - LatestScannableBlock::set(txn, &start_block); NextToScanForOutputsBlock::set(txn, &start_block); NextToCheckForEventualitiesBlock::set(txn, &start_block); NextToPotentiallyReportBlock::set(txn, &start_block); @@ -138,11 +133,10 @@ impl ScannerDb { LatestFinalizedBlock::get(getter) } - pub(crate) fn set_latest_scannable_block(txn: &mut impl DbTxn, latest_scannable_block: u64) { - LatestScannableBlock::set(txn, &latest_scannable_block); - } pub(crate) fn latest_scannable_block(getter: &impl Get) -> Option { - LatestScannableBlock::get(getter) + // This is whatever block we've checked the Eventualities of, plus the window length + // See `eventuality.rs` for more info + NextToCheckForEventualitiesBlock::get(getter).map(|b| b + S::WINDOW_LENGTH) } pub(crate) fn set_next_to_scan_for_outputs_block( @@ -175,16 +169,6 @@ impl ScannerDb { NextToPotentiallyReportBlock::get(getter) } - pub(crate) fn 
set_highest_acknowledged_block( - txn: &mut impl DbTxn, - highest_acknowledged_block: u64, - ) { - HighestAcknowledgedBlock::set(txn, &highest_acknowledged_block); - } - pub(crate) fn highest_acknowledged_block(getter: &impl Get) -> Option { - HighestAcknowledgedBlock::get(getter) - } - pub(crate) fn set_outputs(txn: &mut impl DbTxn, block_number: u64, outputs: Vec>) { if outputs.is_empty() { return; diff --git a/processor/scanner/src/eventuality.rs b/processor/scanner/src/eventuality.rs index 70b786d1..38f1d112 100644 --- a/processor/scanner/src/eventuality.rs +++ b/processor/scanner/src/eventuality.rs @@ -1 +1,50 @@ // TODO + +/* + Note: The following assumes there's some value, `CONFIRMATIONS`, and the finalized block we + operate on is `CONFIRMATIONS` blocks deep. This is true for Proof-of-Work chains yet not the API + actively used here. + + When we scan a block, we receive outputs. When this block is acknowledged, we accumulate those + outputs into some scheduler, potentially causing certain transactions to begin their signing + protocol. + + Despite only scanning blocks with `CONFIRMATIONS`, we cannot assume that these transactions (in + their signed form) will only appear after `CONFIRMATIONS`. For `CONFIRMATIONS = 10`, the scanned + block's number being `1`, the blockchain will have blocks with numbers `0 ..= 10`. While this + implies the earliest the transaction will appear is when the block number is `11`, which is + `1 + CONFIRMATIONS` (the number of the scanned block, plus the confirmations), this isn't + guaranteed. + + A reorganization could occur which causes all unconfirmed blocks to be replaced, with the new + blockchain having the signed transaction present immediately. + + This means that in order to detect Eventuality completions, we can only check block `b+1` once + we've acknowledged block `b`, accumulated its outputs, triggered any transactions, and prepared + for their Eventualities. This is important as both the completion of Eventualities, and the scan + process, may cause a block to be considered notable (where notable blocks must be perfectly + ordered). + + We do not want to fully serialize the scan flow solely because the Eventuality flow must be. If + the time to scan, acknowledge, and intake a block ever exceeded the block time, we'd form a + backlog. + + The solution is to form a window of blocks we can scan/acknowledge/intake, safely, such that we + only form a backlog if the latency for a block exceeds the duration of the entire window (the + amount of blocks in the window * the block time). + + By considering the block an Eventuality resolves not as the block it does, yet the block a window + later, we enable the following flow: + + - The scanner scans within its window, submitting blocks for acknowledgement. + - We have the blocks acknowledged (the consensus protocol handling this in parallel). + - The scanner checks for Eventualities completed following acknowledged blocks. + - If all Eventualities for a retiring multisig have been cleared, the notable block is one window + later. + - The start of the window shifts to the last block we've checked for Eventualities. This means + the end of the window is the block we just set as notable, and yes, once that's scanned we can + successfully publish a batch for it in a canonical fashion. + + This forms a backlog only if the latency of scanning, acknowledgement, and intake (including + checking Eventualities) exceeds the window duration (the desired property). 
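A worked example (numbers assumed for illustration): take a window length of 10 blocks, with Eventualities checked through block 100, making 101 the next block to check. The scanner may proceed through block 111 (101 plus the window length), and an Eventuality resolving on-chain in block 101 is attributed to block 111, the end of the window. Notable blocks remain perfectly ordered while scanning and acknowledgement continue in parallel, and a backlog only forms if handling a single block takes longer than the window's duration.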
+*/ diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 02c88599..5f51e7d0 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -10,17 +10,17 @@ mod index; mod scan; mod eventuality; mod report; -mod safe; /// A feed usable to scan a blockchain. /// /// This defines the primitive types used, along with various getters necessary for indexing. #[async_trait::async_trait] pub trait ScannerFeed: Send + Sync { - /// The amount of confirmations required for a block to be finalized. + /// The amount of blocks to process in parallel. /// - /// This value must be at least `1`. - const CONFIRMATIONS: u64; + /// This value must be at least `1`. This value should be the worst-case latency to handle a + /// block divided by the expected block time. + const WINDOW_LENGTH: u64; /// The representation of a block for this blockchain. /// @@ -36,19 +36,11 @@ pub trait ScannerFeed: Send + Sync { /// resolve without manual intervention. type EphemeralError: Debug; - /// Fetch the number of the latest block. - /// - /// The block number is its zero-indexed position within a linear view of the external network's - /// consensus. The genesis block accordingly has block number 0. - async fn latest_block_number(&self) -> Result; - /// Fetch the number of the latest finalized block. /// /// The block number is its zero-indexed position within a linear view of the external network's /// consensus. The genesis block accordingly has block number 0. - async fn latest_finalized_block_number(&self) -> Result { - Ok(self.latest_block_number().await? - Self::CONFIRMATIONS) - } + async fn latest_finalized_block_number(&self) -> Result; /// Fetch a block header by its number. async fn block_header_by_number( @@ -262,77 +254,7 @@ impl ScannerDb { } } -/// The Scanner emits events relating to the blockchain, notably received outputs. -/// -/// It WILL NOT fail to emit an event, even if it reboots at selected moments. -/// -/// It MAY fire the same event multiple times. -#[derive(Debug)] -pub struct Scanner { - _db: PhantomData, - - keys: Vec<(usize, ::G)>, - - eventualities: HashMap, EventualitiesTracker>, - - ram_scanned: Option, - ram_outputs: HashSet>, - - need_ack: VecDeque, - - events: mpsc::UnboundedSender>, -} - -#[derive(Clone, Debug)] -struct ScannerHold { - scanner: Arc>>>, -} -impl ScannerHold { - async fn read(&self) -> RwLockReadGuard<'_, Option>> { - loop { - let lock = self.scanner.read().await; - if lock.is_none() { - drop(lock); - tokio::task::yield_now().await; - continue; - } - return lock; - } - } - async fn write(&self) -> RwLockWriteGuard<'_, Option>> { - loop { - let lock = self.scanner.write().await; - if lock.is_none() { - drop(lock); - tokio::task::yield_now().await; - continue; - } - return lock; - } - } - // This is safe to not check if something else already acquired the Scanner as the only caller is - // sequential. - async fn long_term_acquire(&self) -> Scanner { - self.scanner.write().await.take().unwrap() - } - async fn restore(&self, scanner: Scanner) { - let _ = self.scanner.write().await.insert(scanner); - } -} - -#[derive(Debug)] -pub struct ScannerHandle { - scanner: ScannerHold, - held_scanner: Option>, - pub events: ScannerEventChannel, - pub multisig_completed: mpsc::UnboundedSender, -} - impl ScannerHandle { - pub async fn ram_scanned(&self) -> usize { - self.scanner.read().await.as_ref().unwrap().ram_scanned.unwrap_or(0) - } - /// Register a key to scan for. 
pub async fn register_key( &mut self, @@ -363,17 +285,6 @@ impl ScannerHandle { scanner.eventualities.insert(key.to_bytes().as_ref().to_vec(), EventualitiesTracker::new()); } - pub fn db_scanned(getter: &G) -> Option { - ScannerDb::::latest_scanned_block(getter) - } - - // This perform a database read which isn't safe with regards to if the value is set or not - // It may be set, when it isn't expected to be set, or not set, when it is expected to be set - // Since the value is static, if it's set, it's correctly set - pub fn block_number(getter: &G, id: &>::Id) -> Option { - ScannerDb::::block_number(getter, id) - } - /// Acknowledge having handled a block. /// /// Creates a lock over the Scanner, preventing its independent scanning operations until @@ -447,7 +358,6 @@ impl Scanner { network: N, db: D, ) -> (ScannerHandle, Vec<(usize, ::G)>) { - let (events_send, events_recv) = mpsc::unbounded_channel(); let (multisig_completed_send, multisig_completed_recv) = mpsc::unbounded_channel(); let keys = ScannerDb::::keys(&db); @@ -455,44 +365,6 @@ impl Scanner { for key in &keys { eventualities.insert(key.1.to_bytes().as_ref().to_vec(), EventualitiesTracker::new()); } - - let ram_scanned = ScannerDb::::latest_scanned_block(&db); - - let scanner = ScannerHold { - scanner: Arc::new(RwLock::new(Some(Scanner { - _db: PhantomData, - - keys: keys.clone(), - - eventualities, - - ram_scanned, - ram_outputs: HashSet::new(), - - need_ack: VecDeque::new(), - - events: events_send, - }))), - }; - tokio::spawn(Scanner::run(db, network, scanner.clone(), multisig_completed_recv)); - - ( - ScannerHandle { - scanner, - held_scanner: None, - events: events_recv, - multisig_completed: multisig_completed_send, - }, - keys, - ) - } - - fn emit(&mut self, event: ScannerEvent) -> bool { - if self.events.send(event).is_err() { - info!("Scanner handler was dropped. Shutting down?"); - return false; - } - true } // An async function, to be spawned on a task, to discover and report outputs @@ -576,30 +448,6 @@ impl Scanner { info!("scanning block: {} ({block_being_scanned})", hex::encode(&block_id)); - // These DB calls are safe, despite not having a txn, since they're static values - // There's no issue if they're written in advance of expected (such as on reboot) - // They're also only expected here - if let Some(id) = ScannerDb::::block(&db, block_being_scanned) { - if id != block_id { - panic!("reorg'd from finalized {} to {}", hex::encode(id), hex::encode(block_id)); - } - } else { - // TODO: Move this to an unwrap - if let Some(id) = ScannerDb::::block(&db, block_being_scanned.saturating_sub(1)) { - if id != block.parent() { - panic!( - "block {} doesn't build off expected parent {}", - hex::encode(block_id), - hex::encode(id), - ); - } - } - - let mut txn = db.txn(); - ScannerDb::::save_block(&mut txn, block_being_scanned, &block_id); - txn.commit(); - } - // Scan new blocks // TODO: This lock acquisition may be long-lived... 
let mut scanner_lock = scanner_hold.write().await; @@ -617,16 +465,6 @@ impl Scanner { has_activation = true; } - let key_vec = key.to_bytes().as_ref().to_vec(); - - // TODO: These lines are the ones which will cause a really long-lived lock acquisition - for output in network.get_outputs(&block, key).await { - assert_eq!(output.key(), key); - if output.balance().amount.0 >= N::DUST { - outputs.push(output); - } - } - for (id, (block_number, tx, completion)) in network .get_eventuality_completions(scanner.eventualities.get_mut(&key_vec).unwrap(), &block) .await @@ -778,10 +616,6 @@ impl Scanner { let retired = scanner.keys.remove(0).1; scanner.eventualities.remove(retired.to_bytes().as_ref()); } - - // Update ram_scanned - scanner.ram_scanned = Some(block_being_scanned); - drop(scanner_lock); // If we sent a Block event, once again check multisig_completed if sent_block && diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs index 5c57a3f5..34a59617 100644 --- a/processor/scanner/src/report.rs +++ b/processor/scanner/src/report.rs @@ -19,18 +19,20 @@ struct ReportTask { impl ContinuallyRan for ReportTask { async fn run_iteration(&mut self) -> Result { let highest_reportable = { - // Fetch the latest scanned and latest checked block + // Fetch the next to scan block let next_to_scan = ScannerDb::::next_to_scan_for_outputs_block(&self.db) .expect("ReportTask run before writing the start block"); - let next_to_check = ScannerDb::::next_to_check_for_eventualities_block(&self.db) - .expect("ReportTask run before writing the start block"); // If we haven't done any work, return - if (next_to_scan == 0) || (next_to_check == 0) { + if next_to_scan == 0 { return Ok(false); } + // The last scanned block is the block prior to this + #[allow(clippy::let_and_return)] let last_scanned = next_to_scan - 1; - let last_checked = next_to_check - 1; - last_scanned.min(last_checked) + // The last scanned block is the highest reportable block as we only scan blocks within a + // window where it's safe to immediately report the block + // See `eventuality.rs` for more info + last_scanned }; let next_to_potentially_report = ScannerDb::::next_to_potentially_report_block(&self.db) diff --git a/processor/scanner/src/safe.rs b/processor/scanner/src/safe.rs deleted file mode 100644 index a0b4f547..00000000 --- a/processor/scanner/src/safe.rs +++ /dev/null @@ -1,82 +0,0 @@ -use core::marker::PhantomData; - -use serai_db::{Db, DbTxn}; - -use primitives::{Id, Block}; - -// TODO: Localize to SafeDb? -use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; - -/* - We mark blocks safe to scan when they're no more than `(CONFIRMATIONS - 1)` blocks after the - oldest notable block still pending acknowledgement (creating a window of length `CONFIRMATIONS` - when including the block pending acknowledgement). This means that if all known notable blocks - have been acknowledged, and a stretch of non-notable blocks occurs, they'll automatically be - marked safe to scan (since they come before the next oldest notable block still pending - acknowledgement). - - This design lets Serai safely schedule events `CONFIRMATIONS` blocks after the latest - acknowledged block. For an exhaustive proof of this, please see `mini`. 
-*/ -struct SafeToScanTask { - db: D, - _S: PhantomData, -} - -#[async_trait::async_trait] -impl ContinuallyRan for SafeToScanTask { - async fn run_iteration(&mut self) -> Result { - // First, we fetch the highest acknowledged block - let Some(highest_acknowledged_block) = ScannerDb::::highest_acknowledged_block(&self.db) - else { - // If no blocks have been acknowledged, we don't mark any safe - // Once the start block (implicitly safe) has been acknowledged, we proceed from there - return Ok(false); - }; - - let latest_block_known_if_pending_acknowledgement = { - // The next block to potentially report comes after all blocks we've decided to report or not - // If we've decided to report (or not report) a block, we know if it needs acknowledgement - // (and accordingly is pending acknowledgement) - // Accordingly, the block immediately before this is the latest block with a known status - ScannerDb::::next_to_potentially_report_block(&self.db) - .expect("SafeToScanTask run before writing the start block") - - 1 - }; - - let mut oldest_pending_acknowledgement = None; - for b in (highest_acknowledged_block + 1) ..= latest_block_known_if_pending_acknowledgement { - // If the block isn't notable, immediately flag it as acknowledged - if !ScannerDb::::is_block_notable(&self.db, b) { - let mut txn = self.db.txn(); - ScannerDb::::set_highest_acknowledged_block(&mut txn, b); - txn.commit(); - continue; - } - - oldest_pending_acknowledgement = Some(b); - break; - } - - // `oldest_pending_acknowledgement` is now the oldest block pending acknowledgement or `None` - // If it's `None`, then we were able to implicitly acknowledge all blocks within this span - // Since the safe block is `(CONFIRMATIONS - 1)` blocks after the oldest block still pending - // acknowledgement, and the oldest block still pending acknowledgement is in the future, - // we know the safe block to scan to is - // `>= latest_block_known_if_pending_acknowledgement + (CONFIRMATIONS - 1)` - let oldest_pending_acknowledgement = - oldest_pending_acknowledgement.unwrap_or(latest_block_known_if_pending_acknowledgement); - - let old_safe_block = ScannerDb::::latest_scannable_block(&self.db) - .expect("SafeToScanTask run before writing the start block"); - let new_safe_block = oldest_pending_acknowledgement + - (S::CONFIRMATIONS.checked_sub(1).expect("CONFIRMATIONS wasn't at least 1")); - - // Update the latest scannable block - let mut txn = self.db.txn(); - ScannerDb::::set_latest_scannable_block(&mut txn, new_safe_block); - txn.commit(); - - Ok(old_safe_block != new_safe_block) - } -} From 4e296787993138ba6fbeebc5048204c133a24dcb Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 21 Aug 2024 18:41:51 -0400 Subject: [PATCH 014/368] Add bounds for the eventuality task --- processor/scanner/src/db.rs | 20 +++++++- processor/scanner/src/eventuality.rs | 75 +++++++++++++++++++++++++++- 2 files changed, 93 insertions(+), 2 deletions(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index e92435bc..70e34315 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -33,6 +33,8 @@ create_db!( NextToCheckForEventualitiesBlock: () -> u64, // The next block to potentially report NextToPotentiallyReportBlock: () -> u64, + // Highest acknowledged block + HighestAcknowledgedBlock: () -> u64, // If a block was notable /* @@ -122,7 +124,9 @@ impl ScannerDb { Self::set_block(txn, start_block, id); LatestFinalizedBlock::set(txn, &start_block); NextToScanForOutputsBlock::set(txn, &start_block); - 
NextToCheckForEventualitiesBlock::set(txn, &start_block); + // We can receive outputs in this block, but any descending transactions will be in the next + // block. This, with the check on-set, creates a bound that this value in the DB is non-zero. + NextToCheckForEventualitiesBlock::set(txn, &(start_block + 1)); NextToPotentiallyReportBlock::set(txn, &start_block); } @@ -153,6 +157,10 @@ impl ScannerDb { txn: &mut impl DbTxn, next_to_check_for_eventualities_block: u64, ) { + assert!( + next_to_check_for_eventualities_block != 0, + "next to check for eventualities block was 0 when it's bound non-zero" + ); NextToCheckForEventualitiesBlock::set(txn, &next_to_check_for_eventualities_block); } pub(crate) fn next_to_check_for_eventualities_block(getter: &impl Get) -> Option { @@ -169,6 +177,16 @@ impl ScannerDb { NextToPotentiallyReportBlock::get(getter) } + pub(crate) fn set_highest_acknowledged_block( + txn: &mut impl DbTxn, + highest_acknowledged_block: u64, + ) { + HighestAcknowledgedBlock::set(txn, &highest_acknowledged_block); + } + pub(crate) fn highest_acknowledged_block(getter: &impl Get) -> Option { + HighestAcknowledgedBlock::get(getter) + } + pub(crate) fn set_outputs(txn: &mut impl DbTxn, block_number: u64, outputs: Vec>) { if outputs.is_empty() { return; diff --git a/processor/scanner/src/eventuality.rs b/processor/scanner/src/eventuality.rs index 38f1d112..37892aa8 100644 --- a/processor/scanner/src/eventuality.rs +++ b/processor/scanner/src/eventuality.rs @@ -1,4 +1,9 @@ -// TODO +use serai_db::{Db, DbTxn}; + +use primitives::{Id, ReceivedOutput, Block}; + +// TODO: Localize to EventualityDb? +use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; /* Note: The following assumes there's some value, `CONFIRMATIONS`, and the finalized block we @@ -48,3 +53,71 @@ This forms a backlog only if the latency of scanning, acknowledgement, and intake (including checking Eventualities) exceeds the window duration (the desired property). */ +struct EventualityTask { + db: D, + feed: S, +} + +#[async_trait::async_trait] +impl ContinuallyRan for EventualityTask { + async fn run_iteration(&mut self) -> Result { + /* + The set of Eventualities only increase when a block is acknowledged. Accordingly, we can only + iterate up to (and including) the block currently pending acknowledgement. "including" is + because even if block `b` causes new Eventualities, they'll only potentially resolve in block + `b + 1`. + + We only know blocks will need acknowledgement *for sure* if they were scanned. The only other + causes are key activation and retirement (both scheduled outside the scan window). This makes + the exclusive upper bound the *next block to scan*. + */ + let exclusive_upper_bound = { + // Fetch the next to scan block + let next_to_scan = ScannerDb::::next_to_scan_for_outputs_block(&self.db) + .expect("EventualityTask run before writing the start block"); + // If we haven't done any work, return + if next_to_scan == 0 { + return Ok(false); + } + next_to_scan + }; + + // Fetch the highest acknowledged block + let highest_acknowledged = ScannerDb::::highest_acknowledged_block(&self.db) + .expect("EventualityTask run before writing the start block"); + + // Fetch the next block to check + let next_to_check = ScannerDb::::next_to_check_for_eventualities_block(&self.db) + .expect("EventualityTask run before writing the start block"); + + // Check all blocks + let mut iterated = false; + for b in next_to_check .. 
exclusive_upper_bound { + // If the prior block was notable *and* not acknowledged, break + // This is so if it caused any Eventualities (which may resolve this block), we have them + { + // This `- 1` is safe as next to check is bound to be non-zero + // This is possible since even if we receive coins in block 0, any transactions we'd make + // would resolve in block 1 (the first block we'll check under this non-zero rule) + let prior_block = b - 1; + if ScannerDb::::is_block_notable(&self.db, prior_block) && + (prior_block > highest_acknowledged) + { + break; + } + } + + iterated = true; + + todo!("TODO"); + + let mut txn = self.db.txn(); + // Update the next to check block + ScannerDb::::set_next_to_check_for_eventualities_block(&mut txn, next_to_check); + txn.commit(); + } + + // Run dependents if we successfully checked any blocks + Ok(iterated) + } +} From f2ee4daf430721e0a637d4b7db4271c040dc48e9 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 22 Aug 2024 01:27:57 -0400 Subject: [PATCH 015/368] Add Eventuality back to processor primitives Also splits crate into modules. --- processor/primitives/src/block.rs | 40 +++++++ processor/primitives/src/eventuality.rs | 31 +++++ processor/primitives/src/lib.rs | 152 ++---------------------- processor/primitives/src/output.rs | 113 ++++++++++++++++++ 4 files changed, 194 insertions(+), 142 deletions(-) create mode 100644 processor/primitives/src/block.rs create mode 100644 processor/primitives/src/eventuality.rs create mode 100644 processor/primitives/src/output.rs diff --git a/processor/primitives/src/block.rs b/processor/primitives/src/block.rs new file mode 100644 index 00000000..22f0b998 --- /dev/null +++ b/processor/primitives/src/block.rs @@ -0,0 +1,40 @@ +use core::fmt::Debug; + +use group::{Group, GroupEncoding}; + +use crate::{Id, ReceivedOutput}; + +/// A block header from an external network. +pub trait BlockHeader: Send + Sync + Sized + Clone + Debug { + /// The type used to identify blocks. + type Id: 'static + Id; + /// The ID of this block. + fn id(&self) -> Self::Id; + /// The ID of the parent block. + fn parent(&self) -> Self::Id; +} + +/// A block from an external network. +/// +/// A block is defined as a consensus event associated with a set of transactions. It is not +/// necessary to literally define it as whatever the external network defines as a block. For +/// external networks which finalize block(s), this block type should be a representation of all +/// transactions within a period finalization (whether block or epoch). +#[async_trait::async_trait] +pub trait Block: Send + Sync + Sized + Clone + Debug { + /// The type used for this block's header. + type Header: BlockHeader; + + /// The type used to represent keys on this external network. + type Key: Group + GroupEncoding; + /// The type used to represent addresses on this external network. + type Address; + /// The type used to represent received outputs on this external network. + type Output: ReceivedOutput; + + /// The ID of this block. + fn id(&self) -> ::Id; + + /// Scan all outputs within this block to find the outputs spendable by this key. + fn scan_for_outputs(&self, key: Self::Key) -> Vec; +} diff --git a/processor/primitives/src/eventuality.rs b/processor/primitives/src/eventuality.rs new file mode 100644 index 00000000..6e16637d --- /dev/null +++ b/processor/primitives/src/eventuality.rs @@ -0,0 +1,31 @@ +use std::collections::HashMap; +use std::io; + +/// A description of a transaction which will eventually happen. 
+pub trait Eventuality: Sized + Send + Sync { + /// A unique byte sequence which can be used to identify potentially resolving transactions. + /// + /// Both a transaction and an Eventuality are expected to be able to yield lookup sequences. + /// Lookup sequences MUST be unique to the Eventuality and identical to any transaction's which + /// satisfies this Eventuality. Transactions which don't satisfy this Eventuality MAY also have + /// an identical lookup sequence. + /// + /// This is used to find the Eventuality a transaction MAY resolve so we don't have to check all + /// transactions against all Eventualities. Once the potential resolved Eventuality is + /// identified, the full check is performed. + fn lookup(&self) -> Vec; + + /// Read an Eventuality. + fn read(reader: &mut R) -> io::Result; + /// Serialize an Eventuality to a `Vec`. + fn serialize(&self) -> Vec; +} + +/// A tracker of unresolved Eventualities. +#[derive(Debug)] +pub struct EventualityTracker { + /// The active Eventualities. + /// + /// These are keyed by their lookups. + pub active_eventualities: HashMap, E>, +} diff --git a/processor/primitives/src/lib.rs b/processor/primitives/src/lib.rs index 744aae47..dc64facf 100644 --- a/processor/primitives/src/lib.rs +++ b/processor/primitives/src/lib.rs @@ -3,15 +3,21 @@ #![deny(missing_docs)] use core::fmt::Debug; -use std::io; -use group::{Group, GroupEncoding}; - -use serai_primitives::Balance; +use group::GroupEncoding; use scale::{Encode, Decode}; use borsh::{BorshSerialize, BorshDeserialize}; +mod output; +pub use output::*; + +mod eventuality; +pub use eventuality::*; + +mod block; +pub use block::*; + /// An ID for an output/transaction/block/etc. /// /// IDs don't need to implement `Copy`, enabling `[u8; 33]`, `[u8; 64]` to be used. IDs are still @@ -51,141 +57,3 @@ impl BorshDeserialize for BorshG { )) } } - -/// The type of the output. -#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] -pub enum OutputType { - /// An output received to the address external payments use. - /// - /// This is reported to Substrate in a `Batch`. - External, - - /// A branch output. - /// - /// Given a known output set, and a known series of outbound transactions, we should be able to - /// form a completely deterministic schedule S. The issue is when S has TXs which spend prior TXs - /// in S (which is needed for our logarithmic scheduling). In order to have the descendant TX, - /// say S[1], build off S[0], we need to observe when S[0] is included on-chain. - /// - /// We cannot. - /// - /// Monero (and other privacy coins) do not expose their UTXO graphs. Even if we know how to - /// create S[0], and the actual payment info behind it, we cannot observe it on the blockchain - /// unless we participated in creating it. Locking the entire schedule, when we cannot sign for - /// the entire schedule at once, to a single signing set isn't feasible. - /// - /// While any member of the active signing set can provide data enabling other signers to - /// participate, it's several KB of data which we then have to code communication for. - /// The other option is to simply not observe S[0]. Instead, observe a TX with an identical - /// output to the one in S[0] we intended to use for S[1]. It's either from S[0], or Eve, a - /// malicious actor, has sent us a forged TX which is... equally as usable? So who cares? - /// - /// The only issue is if we have multiple outputs on-chain with identical amounts and purposes. 
- /// Accordingly, when the scheduler makes a plan for when a specific output is available, it - /// shouldn't set that plan. It should *push* that plan to a queue of plans to perform when - /// instances of that output occur. - Branch, - - /// A change output. - /// - /// This should be added to the available UTXO pool with no further action taken. It does not - /// need to be reported (though we do still need synchrony on the block it's in). There's no - /// explicit expectation for the usage of this output at time of recipience. - Change, - - /// A forwarded output from the prior multisig. - /// - /// This is distinguished for technical reasons around detecting when a multisig should be - /// retired. - Forwarded, -} - -impl OutputType { - fn write(&self, writer: &mut W) -> io::Result<()> { - writer.write_all(&[match self { - OutputType::External => 0, - OutputType::Branch => 1, - OutputType::Change => 2, - OutputType::Forwarded => 3, - }]) - } - - fn read(reader: &mut R) -> io::Result { - let mut byte = [0; 1]; - reader.read_exact(&mut byte)?; - Ok(match byte[0] { - 0 => OutputType::External, - 1 => OutputType::Branch, - 2 => OutputType::Change, - 3 => OutputType::Forwarded, - _ => Err(io::Error::other("invalid OutputType"))?, - }) - } -} - -/// A received output. -pub trait ReceivedOutput: - Send + Sync + Sized + Clone + PartialEq + Eq + Debug -{ - /// The type used to identify this output. - type Id: 'static + Id; - - /// The type of this output. - fn kind(&self) -> OutputType; - - /// The ID of this output. - fn id(&self) -> Self::Id; - /// The key this output was received by. - fn key(&self) -> K; - - /// The presumed origin for this output. - /// - /// This is used as the address to refund coins to if we can't handle the output as desired - /// (unless overridden). - fn presumed_origin(&self) -> Option; - - /// The balance associated with this output. - fn balance(&self) -> Balance; - /// The arbitrary data (presumably an InInstruction) associated with this output. - fn data(&self) -> &[u8]; - - /// Write this output. - fn write(&self, writer: &mut W) -> io::Result<()>; - /// Read an output. - fn read(reader: &mut R) -> io::Result; -} - -/// A block header from an external network. -pub trait BlockHeader: Send + Sync + Sized + Clone + Debug { - /// The type used to identify blocks. - type Id: 'static + Id; - /// The ID of this block. - fn id(&self) -> Self::Id; - /// The ID of the parent block. - fn parent(&self) -> Self::Id; -} - -/// A block from an external network. -/// -/// A block is defined as a consensus event associated with a set of transactions. It is not -/// necessary to literally define it as whatever the external network defines as a block. For -/// external networks which finalize block(s), this block type should be a representation of all -/// transactions within a period finalization (whether block or epoch). -#[async_trait::async_trait] -pub trait Block: Send + Sync + Sized + Clone + Debug { - /// The type used for this block's header. - type Header: BlockHeader; - - /// The type used to represent keys on this external network. - type Key: Group + GroupEncoding; - /// The type used to represent addresses on this external network. - type Address; - /// The type used to represent received outputs on this external network. - type Output: ReceivedOutput; - - /// The ID of this block. - fn id(&self) -> ::Id; - - /// Scan all outputs within this block to find the outputs spendable by this key. 
- fn scan_for_outputs(&self, key: Self::Key) -> Vec; -} diff --git a/processor/primitives/src/output.rs b/processor/primitives/src/output.rs new file mode 100644 index 00000000..1dd186aa --- /dev/null +++ b/processor/primitives/src/output.rs @@ -0,0 +1,113 @@ +use core::fmt::Debug; +use std::io; + +use group::GroupEncoding; + +use serai_primitives::Balance; + +use crate::Id; + +/// The type of the output. +#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] +pub enum OutputType { + /// An output received to the address external payments use. + /// + /// This is reported to Substrate in a `Batch`. + External, + + /// A branch output. + /// + /// Given a known output set, and a known series of outbound transactions, we should be able to + /// form a completely deterministic schedule S. The issue is when S has TXs which spend prior TXs + /// in S (which is needed for our logarithmic scheduling). In order to have the descendant TX, + /// say S[1], build off S[0], we need to observe when S[0] is included on-chain. + /// + /// We cannot. + /// + /// Monero (and other privacy coins) do not expose their UTXO graphs. Even if we know how to + /// create S[0], and the actual payment info behind it, we cannot observe it on the blockchain + /// unless we participated in creating it. Locking the entire schedule, when we cannot sign for + /// the entire schedule at once, to a single signing set isn't feasible. + /// + /// While any member of the active signing set can provide data enabling other signers to + /// participate, it's several KB of data which we then have to code communication for. + /// The other option is to simply not observe S[0]. Instead, observe a TX with an identical + /// output to the one in S[0] we intended to use for S[1]. It's either from S[0], or Eve, a + /// malicious actor, has sent us a forged TX which is... equally as usable? So who cares? + /// + /// The only issue is if we have multiple outputs on-chain with identical amounts and purposes. + /// Accordingly, when the scheduler makes a plan for when a specific output is available, it + /// shouldn't set that plan. It should *push* that plan to a queue of plans to perform when + /// instances of that output occur. + Branch, + + /// A change output. + /// + /// This should be added to the available UTXO pool with no further action taken. It does not + /// need to be reported (though we do still need synchrony on the block it's in). There's no + /// explicit expectation for the usage of this output at time of recipience. + Change, + + /// A forwarded output from the prior multisig. + /// + /// This is distinguished for technical reasons around detecting when a multisig should be + /// retired. + Forwarded, +} + +impl OutputType { + /// Write the OutputType. + pub fn write(&self, writer: &mut W) -> io::Result<()> { + writer.write_all(&[match self { + OutputType::External => 0, + OutputType::Branch => 1, + OutputType::Change => 2, + OutputType::Forwarded => 3, + }]) + } + + /// Read an OutputType. + pub fn read(reader: &mut R) -> io::Result { + let mut byte = [0; 1]; + reader.read_exact(&mut byte)?; + Ok(match byte[0] { + 0 => OutputType::External, + 1 => OutputType::Branch, + 2 => OutputType::Change, + 3 => OutputType::Forwarded, + _ => Err(io::Error::other("invalid OutputType"))?, + }) + } +} + +/// A received output. +pub trait ReceivedOutput: + Send + Sync + Sized + Clone + PartialEq + Eq + Debug +{ + /// The type used to identify this output. + type Id: 'static + Id; + + /// The type of this output. 
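+  ///
+  /// Note the `write`/`read` pair above fixes each variant's encoding to a single byte:
+  /// `External` is 0, `Branch` is 1, `Change` is 2, and `Forwarded` is 3.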
+ fn kind(&self) -> OutputType; + + /// The ID of this output. + fn id(&self) -> Self::Id; + /// The key this output was received by. + fn key(&self) -> K; + + /// The presumed origin for this output. + /// + /// This is used as the address to refund coins to if we can't handle the output as desired + /// (unless overridden). + fn presumed_origin(&self) -> Option; + + /// The balance associated with this output. + fn balance(&self) -> Balance; + /// The arbitrary data (presumably an InInstruction) associated with this output. + fn data(&self) -> &[u8]; + + /// Write this output. + fn write(&self, writer: &mut W) -> io::Result<()>; + /// Read an output. + fn read(reader: &mut R) -> io::Result; +} From bc0cc5a754cdac6e18ef30e6b51e6803adffbc66 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 23 Aug 2024 20:30:06 -0400 Subject: [PATCH 016/368] Decide flow between scan/eventuality/report Scan now only handles External outputs, with an associated essay going over why. Scan directly creates the InInstruction (prior planned to be done in Report), and Eventuality is declared to end up yielding the outputs. That will require making the Eventuality flow two-stage. One stage to evaluate existing Eventualities and yield outputs, and one stage to incorporate new Eventualities before advancing the scan window. --- processor/scanner/src/db.rs | 102 ++++++++++++------ processor/scanner/src/eventuality.rs | 2 + processor/scanner/src/lib.rs | 155 ++++++++------------------- processor/scanner/src/lifetime.rs | 96 +++++++++++++++++ processor/scanner/src/report.rs | 14 +++ processor/scanner/src/scan.rs | 148 +++++++++++++++++++++---- 6 files changed, 350 insertions(+), 167 deletions(-) create mode 100644 processor/scanner/src/lifetime.rs diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 70e34315..2e462712 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -5,25 +5,44 @@ use serai_db::{Get, DbTxn, create_db}; use primitives::{Id, ReceivedOutput, Block, BorshG}; -use crate::{ScannerFeed, BlockIdFor, KeyFor, OutputFor}; +use crate::{lifetime::LifetimeStage, ScannerFeed, BlockIdFor, KeyFor, OutputFor}; // The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. 
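// `Borshy` is blanket-implemented for every eligible type, letting this single named trait
// stand in for the pair of Borsh bounds on the generic parameters used below.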
trait Borshy: BorshSerialize + BorshDeserialize {} impl Borshy for T {} #[derive(BorshSerialize, BorshDeserialize)] -pub(crate) struct SeraiKey { - pub(crate) activation_block_number: u64, - pub(crate) retirement_block_number: Option, +struct SeraiKeyDbEntry { + activation_block_number: u64, + key: K, +} + +pub(crate) struct SeraiKey { + pub(crate) stage: LifetimeStage, pub(crate) key: K, } +pub(crate) struct OutputWithInInstruction> { + output: O, + refund_address: A, + in_instruction: InInstructionWithBalance, +} + +impl> OutputWithInInstruction { + fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + self.output.write(writer)?; + // TODO self.refund_address.write(writer)?; + self.in_instruction.encode_to(writer); + Ok(()) + } +} + create_db!( Scanner { BlockId: (number: u64) -> I, BlockNumber: (id: I) -> u64, - ActiveKeys: () -> Vec>, + ActiveKeys: () -> Vec>, // The latest finalized block to appear of a blockchain LatestFinalizedBlock: () -> u64, @@ -80,48 +99,60 @@ impl ScannerDb { NotableBlock::set(txn, activation_block_number, &()); // Push the key - let mut keys: Vec>>> = ActiveKeys::get(txn).unwrap_or(vec![]); + let mut keys: Vec>>> = ActiveKeys::get(txn).unwrap_or(vec![]); for key_i in &keys { if key == key_i.key.0 { panic!("queueing a key prior queued"); } } - keys.push(SeraiKey { - activation_block_number, - retirement_block_number: None, - key: BorshG(key), - }); + keys.push(SeraiKeyDbEntry { activation_block_number, key: BorshG(key) }); ActiveKeys::set(txn, &keys); } - // retirement_block_number is inclusive, so the key will no longer be scanned for as of the - // specified block - pub(crate) fn retire_key(txn: &mut impl DbTxn, retirement_block_number: u64, key: KeyFor) { - let mut keys: Vec>>> = + // TODO: This will be called from the Eventuality task yet this field is read by the scan task + // We need to write the argument for its safety + pub(crate) fn retire_key(txn: &mut impl DbTxn, key: KeyFor) { + let mut keys: Vec>>> = ActiveKeys::get(txn).expect("retiring key yet no active keys"); assert!(keys.len() > 1, "retiring our only key"); - for i in 0 .. keys.len() { - if key == keys[i].key.0 { - keys[i].retirement_block_number = Some(retirement_block_number); - ActiveKeys::set(txn, &keys); - return; - } - - // This is not the key in question, but since it's older, it already should've been queued - // for retirement - assert!( - keys[i].retirement_block_number.is_some(), - "older key wasn't retired before newer key" - ); - } - panic!("retiring key yet not present in keys") + assert_eq!(keys[0].key.0, key, "not retiring the oldest key"); + keys.remove(0); + ActiveKeys::set(txn, &keys); } - pub(crate) fn keys(getter: &impl Get) -> Option>>>> { - ActiveKeys::get(getter) + pub(crate) fn active_keys_as_of_next_to_scan_for_outputs_block( + getter: &impl Get, + ) -> Option>>> { + // We don't take this as an argument as we don't keep all historical keys in memory + // If we've scanned block 1,000,000, we can't answer the active keys as of block 0 + let block_number = Self::next_to_scan_for_outputs_block(getter)?; + + let raw_keys: Vec>>> = ActiveKeys::get(getter)?; + let mut keys = Vec::with_capacity(2); + for i in 0 .. 
raw_keys.len() { + if block_number < raw_keys[i].activation_block_number { + continue; + } + keys.push(SeraiKey { + key: raw_keys[i].key.0, + stage: LifetimeStage::calculate::( + block_number, + raw_keys[i].activation_block_number, + raw_keys.get(i + 1).map(|key| key.activation_block_number), + ), + }); + } + assert!(keys.len() <= 2); + Some(keys) } pub(crate) fn set_start_block(txn: &mut impl DbTxn, start_block: u64, id: BlockIdFor) { + assert!( + LatestFinalizedBlock::get(txn).is_none(), + "setting start block but prior set start block" + ); + Self::set_block(txn, start_block, id); + LatestFinalizedBlock::set(txn, &start_block); NextToScanForOutputsBlock::set(txn, &start_block); // We can receive outputs in this block, but any descending transactions will be in the next @@ -138,9 +169,10 @@ impl ScannerDb { } pub(crate) fn latest_scannable_block(getter: &impl Get) -> Option { - // This is whatever block we've checked the Eventualities of, plus the window length + // We can only scan up to whatever block we've checked the Eventualities of, plus the window + // length. Since this returns an inclusive bound, we need to subtract 1 // See `eventuality.rs` for more info - NextToCheckForEventualitiesBlock::get(getter).map(|b| b + S::WINDOW_LENGTH) + NextToCheckForEventualitiesBlock::get(getter).map(|b| b + S::WINDOW_LENGTH - 1) } pub(crate) fn set_next_to_scan_for_outputs_block( @@ -187,7 +219,7 @@ impl ScannerDb { HighestAcknowledgedBlock::get(getter) } - pub(crate) fn set_outputs(txn: &mut impl DbTxn, block_number: u64, outputs: Vec>) { + pub(crate) fn set_in_instructions(txn: &mut impl DbTxn, block_number: u64, outputs: Vec, AddressFor, OutputFor>>) { if outputs.is_empty() { return; } diff --git a/processor/scanner/src/eventuality.rs b/processor/scanner/src/eventuality.rs index 37892aa8..cb91ca42 100644 --- a/processor/scanner/src/eventuality.rs +++ b/processor/scanner/src/eventuality.rs @@ -109,6 +109,8 @@ impl ContinuallyRan for EventualityTask { iterated = true; + // TODO: Not only check/clear eventualities, if this eventuality forwarded an output, queue + // it to be reported in however many blocks todo!("TODO"); let mut txn = self.db.txn(); diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 5f51e7d0..04366dff 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -5,10 +5,18 @@ use tokio::sync::mpsc; use serai_primitives::{Coin, Amount}; use primitives::{ReceivedOutput, BlockHeader, Block}; +// Logic for deciding where in its lifetime a multisig is. +mod lifetime; + +// Database schema definition and associated functions. mod db; +// Task to index the blockchain, ensuring we don't reorganize finalized blocks. mod index; +// Scans blocks for received coins. mod scan; +/// Check blocks for transactions expected to eventually occur. mod eventuality; +/// Task which reports `Batch`s to Substrate. mod report; /// A feed usable to scan a blockchain. @@ -16,12 +24,22 @@ mod report; /// This defines the primitive types used, along with various getters necessary for indexing. #[async_trait::async_trait] pub trait ScannerFeed: Send + Sync { + /// The amount of confirmations a block must have to be considered finalized. + /// + /// This value must be at least `1`. + const CONFIRMATIONS: u64; + /// The amount of blocks to process in parallel. /// /// This value must be at least `1`. This value should be the worst-case latency to handle a /// block divided by the expected block time. 
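+  ///
+  /// As a purely illustrative example (these numbers aren't normative): with a worst-case
+  /// handling latency of ten minutes and a two-minute block time, this would be 10 / 2 = 5.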
const WINDOW_LENGTH: u64; + /// The amount of blocks which will occur in 10 minutes (approximate). + /// + /// This value must be at least `1`. + const TEN_MINUTES: u64; + /// The representation of a block for this blockchain. /// /// A block is defined as a consensus event associated with a set of transactions. It is not @@ -152,6 +170,32 @@ pub(crate) trait ContinuallyRan: Sized { } } +/// A representation of a scanner. +pub struct Scanner; +impl Scanner { + /// Create a new scanner. + /// + /// This will begin its execution, spawning several asynchronous tasks. + // TODO: Take start_time and binary search here? + pub fn new(start_block: u64) -> Self { + todo!("TODO") + } + + /// Acknowledge a block. + /// + /// This means this block was ordered on Serai in relation to `Burn` events, and all validators + /// have achieved synchrony on it. + pub fn acknowledge_block( + &mut self, + block_number: u64, + key_to_activate: Option<()>, + forwarded_outputs: Vec<()>, + eventualities_created: Vec<()>, + ) { + todo!("TODO") + } +} + /* #[derive(Clone, Debug)] pub enum ScannerEvent { @@ -172,8 +216,6 @@ pub enum ScannerEvent { ), } -pub type ScannerEventChannel = mpsc::UnboundedReceiver>; - #[derive(Clone, Debug)] struct ScannerDb(PhantomData, PhantomData); impl ScannerDb { @@ -184,38 +226,6 @@ impl ScannerDb { getter.get(Self::seen_key(id)).is_some() } - fn outputs_key(block: &>::Id) -> Vec { - Self::scanner_key(b"outputs", block.as_ref()) - } - fn save_outputs( - txn: &mut D::Transaction<'_>, - block: &>::Id, - outputs: &[N::Output], - ) { - let mut bytes = Vec::with_capacity(outputs.len() * 64); - for output in outputs { - output.write(&mut bytes).unwrap(); - } - txn.put(Self::outputs_key(block), bytes); - } - fn outputs( - txn: &D::Transaction<'_>, - block: &>::Id, - ) -> Option> { - let bytes_vec = txn.get(Self::outputs_key(block))?; - let mut bytes: &[u8] = bytes_vec.as_ref(); - - let mut res = vec![]; - while !bytes.is_empty() { - res.push(N::Output::read(&mut bytes).unwrap()); - } - Some(res) - } - - fn scanned_block_key() -> Vec { - Self::scanner_key(b"scanned_block", []) - } - fn save_scanned_block(txn: &mut D::Transaction<'_>, block: usize) -> Vec { let id = Self::block(txn, block); // It may be None for the first key rotated to let outputs = @@ -255,36 +265,6 @@ impl ScannerDb { } impl ScannerHandle { - /// Register a key to scan for. - pub async fn register_key( - &mut self, - txn: &mut D::Transaction<'_>, - activation_number: usize, - key: ::G, - ) { - info!("Registering key {} in scanner at {activation_number}", hex::encode(key.to_bytes())); - - let mut scanner_lock = self.scanner.write().await; - let scanner = scanner_lock.as_mut().unwrap(); - assert!( - activation_number > scanner.ram_scanned.unwrap_or(0), - "activation block of new keys was already scanned", - ); - - if scanner.keys.is_empty() { - assert!(scanner.ram_scanned.is_none()); - scanner.ram_scanned = Some(activation_number); - assert!(ScannerDb::::save_scanned_block(txn, activation_number).is_empty()); - } - - ScannerDb::::register_key(txn, activation_number, key); - scanner.keys.push((activation_number, key)); - #[cfg(not(test))] // TODO: A test violates this. Improve the test with a better flow - assert!(scanner.keys.len() <= 2); - - scanner.eventualities.insert(key.to_bytes().as_ref().to_vec(), EventualitiesTracker::new()); - } - /// Acknowledge having handled a block. 
/// /// Creates a lock over the Scanner, preventing its independent scanning operations until @@ -375,53 +355,6 @@ impl Scanner { mut multisig_completed: mpsc::UnboundedReceiver, ) { loop { - let (ram_scanned, latest_block_to_scan) = { - // Sleep 5 seconds to prevent hammering the node/scanner lock - sleep(Duration::from_secs(5)).await; - - let ram_scanned = { - let scanner_lock = scanner_hold.read().await; - let scanner = scanner_lock.as_ref().unwrap(); - - // If we're not scanning for keys yet, wait until we are - if scanner.keys.is_empty() { - continue; - } - - let ram_scanned = scanner.ram_scanned.unwrap(); - // If a Batch has taken too long to be published, start waiting until it is before - // continuing scanning - // Solves a race condition around multisig rotation, documented in the relevant doc - // and demonstrated with mini - if let Some(needing_ack) = scanner.need_ack.front() { - let next = ram_scanned + 1; - let limit = needing_ack + N::CONFIRMATIONS; - assert!(next <= limit); - if next == limit { - continue; - } - }; - - ram_scanned - }; - - ( - ram_scanned, - loop { - break match network.get_latest_block_number().await { - // Only scan confirmed blocks, which we consider effectively finalized - // CONFIRMATIONS - 1 as whatever's in the latest block already has 1 confirm - Ok(latest) => latest.saturating_sub(N::CONFIRMATIONS.saturating_sub(1)), - Err(_) => { - warn!("couldn't get latest block number"); - sleep(Duration::from_secs(60)).await; - continue; - } - }; - }, - ) - }; - for block_being_scanned in (ram_scanned + 1) ..= latest_block_to_scan { // Redo the checks for if we're too far ahead { diff --git a/processor/scanner/src/lifetime.rs b/processor/scanner/src/lifetime.rs new file mode 100644 index 00000000..62ee91c3 --- /dev/null +++ b/processor/scanner/src/lifetime.rs @@ -0,0 +1,96 @@ +use crate::ScannerFeed; + +/// An enum representing the stage of a multisig within its lifetime. +/// +/// This corresponds to `spec/processor/Multisig Rotation.md`, which details steps 1-8 of the +/// rotation process. Steps 7-8 regard a multisig which isn't retiring yet retired, and +/// accordingly, no longer exists, so they are not modelled here (as this only models active +/// multisigs. Inactive multisigs aren't represented in the first place). +pub(crate) enum LifetimeStage { + /// A new multisig, once active, shouldn't actually start receiving coins until several blocks + /// later. If any UI is premature in sending to this multisig, we delay to report the outputs to + /// prevent some DoS concerns. + /// + /// This represents steps 1-3 for a new multisig. + ActiveYetNotReporting, + /// Active with all outputs being reported on-chain. + /// + /// This represents step 4 onwards for a new multisig. + Active, + /// Retiring with all outputs being reported on-chain. + /// + /// This represents step 4 for a retiring multisig. + UsingNewForChange, + /// Retiring with outputs being forwarded, reported on-chain once forwarded. + /// + /// This represents step 5 for a retiring multisig. + Forwarding, + /// Retiring with only existing obligations being handled. + /// + /// This represents step 6 for a retiring multisig. + /// + /// Steps 7 and 8 are represented by the retiring multisig no longer existing, and these states + /// are only for multisigs which actively exist. + Finishing, +} + +impl LifetimeStage { + /// Get the stage of its lifetime this multisig is in based on when the next multisig's key + /// activates. 
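+  ///
+  /// As an illustrative timeline (all numbers hypothetical): with `CONFIRMATIONS = 10` and
+  /// `TEN_MINUTES = 30`, a sole key activating at block 1000 is `ActiveYetNotReporting` for
+  /// blocks `1000 .. 1040` (1000 + 10 + 30) and `Active` from block 1040 onwards, until a
+  /// successor key is queued.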
+  ///
+  /// Panics if the multisig being calculated for isn't actually active and a variety of other
+  /// insane cases.
+  pub(crate) fn calculate<S: ScannerFeed>(
+    block_number: u64,
+    activation_block_number: u64,
+    next_keys_activation_block_number: Option<u64>,
+  ) -> Self {
+    assert!(
+      block_number >= activation_block_number,
+      "calculating lifetime stage for an inactive multisig"
+    );
+    // This is exclusive, not inclusive, since we want a CONFIRMATIONS + 10 minutes window and the
+    // activation block itself is the first block within this window
+    let active_yet_not_reporting_end_block =
+      activation_block_number + S::CONFIRMATIONS + S::TEN_MINUTES;
+    if block_number < active_yet_not_reporting_end_block {
+      return LifetimeStage::ActiveYetNotReporting;
+    }
+
+    let Some(next_keys_activation_block_number) = next_keys_activation_block_number else {
+      // If there is no next multisig, this is the active multisig
+      return LifetimeStage::Active;
+    };
+
+    assert!(
+      next_keys_activation_block_number > active_yet_not_reporting_end_block,
+      "next set of keys activated before this multisig activated"
+    );
+
+    // If the new multisig is still having its activation block finalized on-chain, this multisig
+    // is still active (step 3)
+    let new_active_yet_not_reporting_end_block =
+      next_keys_activation_block_number + S::CONFIRMATIONS + S::TEN_MINUTES;
+    if block_number < new_active_yet_not_reporting_end_block {
+      return LifetimeStage::Active;
+    }
+
+    // Step 4 details a further CONFIRMATIONS
+    let new_active_and_used_for_change_end_block =
+      new_active_yet_not_reporting_end_block + S::CONFIRMATIONS;
+    if block_number < new_active_and_used_for_change_end_block {
+      return LifetimeStage::UsingNewForChange;
+    }
+
+    // Step 5 details a further 6 hours
+    // 6 hours = 6 * 60 minutes = 6 * 6 * 10 minutes
+    let new_active_and_forwarded_to_end_block =
+      new_active_and_used_for_change_end_block + (6 * 6 * S::TEN_MINUTES);
+    if block_number < new_active_and_forwarded_to_end_block {
+      return LifetimeStage::Forwarding;
+    }
+
+    // Step 6
+    LifetimeStage::Finishing
+  }
+}
diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs
index 34a59617..b2220895 100644
--- a/processor/scanner/src/report.rs
+++ b/processor/scanner/src/report.rs
@@ -40,6 +40,20 @@ impl<D: Db, S: ScannerFeed> ContinuallyRan for ReportTask<D, S> {
 
     for b in next_to_potentially_report ..= highest_reportable {
       if ScannerDb::<S>::is_block_notable(&self.db, b) {
+        let outputs = todo!("TODO");
+        let in_instructions_to_report = vec![];
+        for output in outputs {
+          match output.kind() {
+            // These do get reported since the scanner eliminates any which shouldn't be reported
+            OutputType::External => todo!("TODO"),
+            // These do not get reported in Batches
+            OutputType::Branch | OutputType::Change => {}
+            // These now get reported if they're legitimately forwarded
+            OutputType::Forwarded => {
+              todo!("TODO")
+            }
+          }
+        }
         todo!("TODO: Make Batches, which requires handling Forwarded within this crate");
       }
 
diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs
index 6058c7da..137f708a 100644
--- a/processor/scanner/src/scan.rs
+++ b/processor/scanner/src/scan.rs
@@ -5,6 +5,51 @@ use primitives::{Id, ReceivedOutput, Block};
 // TODO: Localize to ScanDb?
 use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan};
 
+// Construct an InInstruction from an external output.
+//
+// Also returns the address to refund the coins to upon error.
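+//
+// To summarize the cases below: `(_, Some(instruction))` means the output carried a valid
+// instruction to process, `(Some(addr), None)` means the coins should be refunded to `addr`,
+// and `(None, None)` means there's nothing to do besides accumulate the output.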
+fn in_instruction_from_output( + output: &impl ReceivedOutput, +) -> (Option, Option) { + assert_eq!(output.kind(), OutputType::External); + + let presumed_origin = output.presumed_origin(); + + let mut data = output.data(); + let max_data_len = usize::try_from(MAX_DATA_LEN).unwrap(); + if data.len() > max_data_len { + error!( + "data in output {} exceeded MAX_DATA_LEN ({MAX_DATA_LEN}): {}. skipping", + hex::encode(output.id()), + data.len(), + ); + return (presumed_origin, None); + } + + let shorthand = match Shorthand::decode(&mut data) { + Ok(shorthand) => shorthand, + Err(e) => { + info!("data in output {} wasn't valid shorthand: {e:?}", hex::encode(output.id())); + return (presumed_origin, None); + } + }; + let instruction = match RefundableInInstruction::try_from(shorthand) { + Ok(instruction) => instruction, + Err(e) => { + info!( + "shorthand in output {} wasn't convertible to a RefundableInInstruction: {e:?}", + hex::encode(output.id()) + ); + return (presumed_origin, None); + } + }; + + ( + instruction.origin.and_then(|addr| A::try_from(addr).ok()).or(presumed_origin), + Some(instruction.instruction), + ) +} + struct ScanForOutputsTask { db: D, feed: S, @@ -42,29 +87,79 @@ impl ContinuallyRan for ScanForOutputsTask { log::info!("scanning block: {} ({b})", hex::encode(block.id())); - let mut keys = - ScannerDb::::keys(&self.db).expect("scanning for a blockchain without any keys set"); - // Remove all the retired keys - while let Some(retire_at) = keys[0].retirement_block_number { - if retire_at <= b { - keys.remove(0); - } - } - assert!(keys.len() <= 2); + assert_eq!(ScannerDb::::next_to_scan_for_outputs_block(&self.db).unwrap(), b); + let mut keys = ScannerDb::::active_keys_as_of_next_to_scan_for_outputs_block(&self.db) + .expect("scanning for a blockchain without any keys set"); - let mut outputs = vec![]; + let mut in_instructions = vec![]; // Scan for each key for key in keys { - // If this key has yet to active, skip it - if key.activation_block_number > b { - continue; - } + for output in block.scan_for_outputs(key.key) { + assert_eq!(output.key(), key.key); - for output in block.scan_for_outputs(key.key.0) { - assert_eq!(output.key(), key.key.0); + /* + The scan task runs ahead of time, obtaining ordering on the external network's blocks + with relation to events on the Serai network. This is done via publishing a Batch which + contains the InInstructions from External outputs. Accordingly, the scan process only + has to yield External outputs. + + It'd appear to make sense to scan for all outputs, and after scanning for all outputs, + yield all outputs. The issue is we can't identify outputs we created here. We can only + identify the outputs we receive and their *declared intention*. + + We only want to handle Change/Branch/Forwarded outputs we made ourselves. For + Forwarded, the reasoning is obvious (retiring multisigs should only downsize, yet + accepting new outputs solely because they claim to be Forwarded would increase the size + of the multisig). For Change/Branch, it's because such outputs which aren't ours are + pointless. They wouldn't hurt to accumulate though. + + The issue is they would hurt to accumulate. We want to filter outputs which are less + than their cost to aggregate, a variable itself variable to the current blockchain. We + can filter such outputs here, yet if we drop a Change output, we create an insolvency. + We'd need to track the loss and offset it later. That means we can't filter such + outputs, as we expect any Change output we make. 
+
+          The issue is the Change outputs we don't make. Someone can create an output declaring
+          to be Change, yet not actually Change. If we don't filter it, it'd be queued for
+          accumulation, yet it may cost more to accumulate than it's worth.
+
+          The solution is to let the Eventuality task, which does know if we made an output or
+          not (or rather, if a transaction is identical to a transaction which should exist
+          regarding effects) decide to keep/yield the outputs which we should only keep if we
+          made them (as Serai itself should not make worthless outputs, so we can assume they're
+          worthwhile, and even if they're not economically, they are technically).
+
+          The alternative, where we drop outputs here with a generic filter rule and then report
+          back the insolvency created, still doesn't work, as we'd only be creating an insolvency
+          if the output was actually made by us (and not simply someone else sending in). We can
+          have the Eventuality task report the insolvency, yet that requires the scanner be
+          responsible for such filter logic. It's more flexible, and has a cleaner API,
+          to do so at a higher level.
+        */
+          if output.kind() != OutputType::External {
+            continue;
+          }
+
+          // Drop External outputs if they're to a multisig which won't report them
+          // This means we should report any External output we save to disk here
+          #[allow(clippy::match_same_arms)]
+          match key.stage {
+            // TODO: Delay External outputs
+            LifetimeStage::ActiveYetNotReporting => todo!("TODO"),
+            // We should report External outputs in these cases
+            LifetimeStage::Active | LifetimeStage::UsingNewForChange => {}
+            // We should report External outputs only once forwarded, where they'll appear as
+            // OutputType::Forwarded
+            LifetimeStage::Forwarding => todo!("TODO"),
+            // We should drop these as we should not be handling new External outputs at this
+            // time
+            LifetimeStage::Finishing => {
+              continue;
+            }
+          }
 
           // Check this isn't dust
-          {
+          let balance_to_use = {
            let mut balance = output.balance();
            // First, subtract 2 * the cost to aggregate, as detailed in
            // `spec/processor/UTXO Management.md`
@@ -79,15 +174,26 @@ impl<D: Db, S: ScannerFeed> ContinuallyRan for ScanForOutputsTask<D, S> {
            if balance.amount.0 < self.feed.dust(balance.coin).0 {
              continue;
            }
-          }
+          };
 
-          outputs.push(output);
+          // Decode and save the InInstruction/refund addr for this output
+          match in_instruction_from_output::(output) {
+            (refund_addr, Some(instruction)) => {
+              let instruction = InInstructionWithBalance { instruction, balance: balance_to_use };
+              // TODO: Make a proper struct out of this
+              in_instructions.push((output.id(), refund_addr, instruction));
+              todo!("TODO: Save to be reported")
+            }
+            (Some(refund_addr), None) => todo!("TODO: Queue refund"),
+            // Since we didn't receive an instruction nor can we refund this, accumulate it
+            (None, None) => {}
+          }
         }
       }
 
       let mut txn = self.db.txn();
-      // Save the outputs
-      ScannerDb::<S>::set_outputs(&mut txn, b, outputs);
+      // Save the in instructions
+      ScannerDb::<S>::set_in_instructions(&mut txn, b, in_instructions);
       // Update the next to scan block
       ScannerDb::<S>::set_next_to_scan_for_outputs_block(&mut txn, b + 1);
       txn.commit();

From ce805c8cc860e2598c24c98561cd5f8bf57d9252 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Fri, 23 Aug 2024 21:21:02 -0400
Subject: [PATCH 017/368] Correct compilation errors

---
 Cargo.lock                         |  1 +
 processor/primitives/src/block.rs  |  4 +--
 processor/primitives/src/output.rs |  7 ++--
 processor/scanner/Cargo.toml       |  1 +
 processor/scanner/src/db.rs        | 27 ++++++++++-----
 processor/scanner/src/lib.rs       | 16 +++++----
processor/scanner/src/report.rs | 18 ++-------- processor/scanner/src/scan.rs | 53 ++++++++++++++++++++---------- 8 files changed, 77 insertions(+), 50 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index e3e6f378..f887bd8c 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8669,6 +8669,7 @@ dependencies = [ "log", "parity-scale-codec", "serai-db", + "serai-in-instructions-primitives", "serai-primitives", "serai-processor-messages", "serai-processor-primitives", diff --git a/processor/primitives/src/block.rs b/processor/primitives/src/block.rs index 22f0b998..1fc92c3a 100644 --- a/processor/primitives/src/block.rs +++ b/processor/primitives/src/block.rs @@ -2,7 +2,7 @@ use core::fmt::Debug; use group::{Group, GroupEncoding}; -use crate::{Id, ReceivedOutput}; +use crate::{Id, Address, ReceivedOutput}; /// A block header from an external network. pub trait BlockHeader: Send + Sync + Sized + Clone + Debug { @@ -28,7 +28,7 @@ pub trait Block: Send + Sync + Sized + Clone + Debug { /// The type used to represent keys on this external network. type Key: Group + GroupEncoding; /// The type used to represent addresses on this external network. - type Address; + type Address: Address; /// The type used to represent received outputs on this external network. type Output: ReceivedOutput; diff --git a/processor/primitives/src/output.rs b/processor/primitives/src/output.rs index 1dd186aa..2b96d229 100644 --- a/processor/primitives/src/output.rs +++ b/processor/primitives/src/output.rs @@ -3,10 +3,13 @@ use std::io; use group::GroupEncoding; -use serai_primitives::Balance; +use serai_primitives::{ExternalAddress, Balance}; use crate::Id; +/// An address on the external network. +pub trait Address: Send + Sync + TryFrom {} + /// The type of the output. #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub enum OutputType { @@ -81,7 +84,7 @@ impl OutputType { } /// A received output. -pub trait ReceivedOutput: +pub trait ReceivedOutput: Send + Sync + Sized + Clone + PartialEq + Eq + Debug { /// The type used to identify this output. diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index 82de4de1..a16b55f2 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -36,6 +36,7 @@ tokio = { version = "1", default-features = false, features = ["rt-multi-thread" serai-db = { path = "../../common/db" } serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } +serai-in-instructions-primitives = { path = "../../substrate/in-instructions/primitives", default-features = false, features = ["std"] } messages = { package = "serai-processor-messages", path = "../messages" } primitives = { package = "serai-processor-primitives", path = "../primitives" } diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 2e462712..7eb276ce 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -1,11 +1,17 @@ use core::marker::PhantomData; +use std::io; +use group::GroupEncoding; + +use scale::{Encode, Decode}; use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db}; +use serai_in_instructions_primitives::InInstructionWithBalance; + use primitives::{Id, ReceivedOutput, Block, BorshG}; -use crate::{lifetime::LifetimeStage, ScannerFeed, BlockIdFor, KeyFor, OutputFor}; +use crate::{lifetime::LifetimeStage, ScannerFeed, BlockIdFor, KeyFor, AddressFor, OutputFor}; // The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. 
trait Borshy: BorshSerialize + BorshDeserialize {} @@ -22,16 +28,16 @@ pub(crate) struct SeraiKey { pub(crate) key: K, } -pub(crate) struct OutputWithInInstruction> { - output: O, - refund_address: A, - in_instruction: InInstructionWithBalance, +pub(crate) struct OutputWithInInstruction { + pub(crate) output: OutputFor, + pub(crate) return_address: Option>, + pub(crate) in_instruction: InInstructionWithBalance, } -impl> OutputWithInInstruction { +impl OutputWithInInstruction { fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { self.output.write(writer)?; - // TODO self.refund_address.write(writer)?; + // TODO self.return_address.write(writer)?; self.in_instruction.encode_to(writer); Ok(()) } @@ -172,6 +178,7 @@ impl ScannerDb { // We can only scan up to whatever block we've checked the Eventualities of, plus the window // length. Since this returns an inclusive bound, we need to subtract 1 // See `eventuality.rs` for more info + // TODO: Adjust based on register eventualities NextToCheckForEventualitiesBlock::get(getter).map(|b| b + S::WINDOW_LENGTH - 1) } @@ -219,7 +226,11 @@ impl ScannerDb { HighestAcknowledgedBlock::get(getter) } - pub(crate) fn set_in_instructions(txn: &mut impl DbTxn, block_number: u64, outputs: Vec, AddressFor, OutputFor>>) { + pub(crate) fn set_in_instructions( + txn: &mut impl DbTxn, + block_number: u64, + outputs: Vec>, + ) { if outputs.is_empty() { return; } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 04366dff..7bd8cc2e 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -1,4 +1,4 @@ -use core::{fmt::Debug, time::Duration}; +use core::{marker::PhantomData, fmt::Debug, time::Duration}; use tokio::sync::mpsc; @@ -86,6 +86,7 @@ pub trait ScannerFeed: Send + Sync { type BlockIdFor = <<::Block as Block>::Header as BlockHeader>::Id; type KeyFor = <::Block as Block>::Key; +type AddressFor = <::Block as Block>::Address; type OutputFor = <::Block as Block>::Output; /// A handle to immediately run an iteration of a task. @@ -171,8 +172,8 @@ pub(crate) trait ContinuallyRan: Sized { } /// A representation of a scanner. -pub struct Scanner; -impl Scanner { +pub struct Scanner(PhantomData); +impl Scanner { /// Create a new scanner. /// /// This will begin its execution, spawning several asynchronous tasks. @@ -189,9 +190,12 @@ impl Scanner { &mut self, block_number: u64, key_to_activate: Option<()>, - forwarded_outputs: Vec<()>, - eventualities_created: Vec<()>, - ) { + ) -> Vec> { + todo!("TODO") + } + + /// Register the Eventualities caused by a block. + pub fn register_eventualities(&mut self, block_number: u64, eventualities: Vec<()>) { todo!("TODO") } } diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs index b2220895..3c22556c 100644 --- a/processor/scanner/src/report.rs +++ b/processor/scanner/src/report.rs @@ -5,7 +5,7 @@ use serai_db::{Db, DbTxn}; -use primitives::{Id, Block}; +use primitives::{Id, OutputType, Block}; // TODO: Localize to ReportDb? 
use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; @@ -40,20 +40,8 @@ impl ContinuallyRan for ReportTask { for b in next_to_potentially_report ..= highest_reportable { if ScannerDb::::is_block_notable(&self.db, b) { - let outputs = todo!("TODO"); - let in_instructions_to_report = vec![]; - for output in outputs { - match output.kind() { - // These do get reported since the scanner eliminates any which shouldn't be reported - OutputType::External => todo!("TODO"), - // These do not get reported in Batches - OutputType::Branch | OutputType::Change => {} - // These now get reported if they're legitimately forwarded - OutputType::Forwarded => { - todo!("TODO") - } - } - } + let in_instructions = todo!("TODO"); + // TODO: Also pull the InInstructions from forwarding todo!("TODO: Make Batches, which requires handling Forwarded within this crate"); } diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index 137f708a..13332586 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -1,16 +1,28 @@ +use group::GroupEncoding; + +use scale::{Encode, Decode}; use serai_db::{Db, DbTxn}; -use primitives::{Id, ReceivedOutput, Block}; +use serai_primitives::{MAX_DATA_LEN, ExternalAddress}; +use serai_in_instructions_primitives::{ + Shorthand, RefundableInInstruction, InInstruction, InInstructionWithBalance, +}; + +use primitives::{Id, OutputType, ReceivedOutput, Block}; // TODO: Localize to ScanDb? -use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; +use crate::{ + lifetime::LifetimeStage, + db::{OutputWithInInstruction, ScannerDb}, + ScannerFeed, AddressFor, OutputFor, ContinuallyRan, +}; // Construct an InInstruction from an external output. // -// Also returns the address to refund the coins to upon error. -fn in_instruction_from_output( - output: &impl ReceivedOutput, -) -> (Option, Option) { +// Also returns the address to return the coins to upon error. +fn in_instruction_from_output( + output: &OutputFor, +) -> (Option>, Option) { assert_eq!(output.kind(), OutputType::External); let presumed_origin = output.presumed_origin(); @@ -18,7 +30,7 @@ fn in_instruction_from_output( let mut data = output.data(); let max_data_len = usize::try_from(MAX_DATA_LEN).unwrap(); if data.len() > max_data_len { - error!( + log::info!( "data in output {} exceeded MAX_DATA_LEN ({MAX_DATA_LEN}): {}. 
skipping", hex::encode(output.id()), data.len(), @@ -29,14 +41,14 @@ fn in_instruction_from_output( let shorthand = match Shorthand::decode(&mut data) { Ok(shorthand) => shorthand, Err(e) => { - info!("data in output {} wasn't valid shorthand: {e:?}", hex::encode(output.id())); + log::info!("data in output {} wasn't valid shorthand: {e:?}", hex::encode(output.id())); return (presumed_origin, None); } }; let instruction = match RefundableInInstruction::try_from(shorthand) { Ok(instruction) => instruction, Err(e) => { - info!( + log::info!( "shorthand in output {} wasn't convertible to a RefundableInInstruction: {e:?}", hex::encode(output.id()) ); @@ -45,7 +57,7 @@ fn in_instruction_from_output( }; ( - instruction.origin.and_then(|addr| A::try_from(addr).ok()).or(presumed_origin), + instruction.origin.and_then(|addr| AddressFor::::try_from(addr).ok()).or(presumed_origin), Some(instruction.instruction), ) } @@ -174,18 +186,25 @@ impl ContinuallyRan for ScanForOutputsTask { if balance.amount.0 < self.feed.dust(balance.coin).0 { continue; } + + balance }; - // Decode and save the InInstruction/refund addr for this output - match in_instruction_from_output::(output) { - (refund_addr, Some(instruction)) => { - let instruction = InInstructionWithBalance { instruction, balance: balance_to_use }; + // Decode and save the InInstruction/return addr for this output + match in_instruction_from_output::(&output) { + (return_address, Some(instruction)) => { + let in_instruction = + InInstructionWithBalance { instruction, balance: balance_to_use }; // TODO: Make a proper struct out of this - in_instructions.push((output.id(), refund_addr, instruction)); + in_instructions.push(OutputWithInInstruction { + output, + return_address, + in_instruction, + }); todo!("TODO: Save to be reported") } - (Some(refund_addr), None) => todo!("TODO: Queue refund"), - // Since we didn't receive an instruction nor can we refund this, accumulate it + (Some(return_addr), None) => todo!("TODO: Queue return"), + // Since we didn't receive an instruction nor can we return this, accumulate it (None, None) => {} } } From fd12cc0213acb228d2f94a9d2abc3924ee68386d Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 23 Aug 2024 22:09:54 -0400 Subject: [PATCH 018/368] Finish scan task --- processor/scanner/src/db.rs | 62 +++++++++++++++++++--- processor/scanner/src/lib.rs | 2 + processor/scanner/src/lifetime.rs | 22 ++++---- processor/scanner/src/scan.rs | 87 ++++++++++++++++++------------- 4 files changed, 122 insertions(+), 51 deletions(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 7eb276ce..fa2db781 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -24,8 +24,9 @@ struct SeraiKeyDbEntry { } pub(crate) struct SeraiKey { - pub(crate) stage: LifetimeStage, pub(crate) key: K, + pub(crate) stage: LifetimeStage, + pub(crate) block_at_which_reporting_starts: u64, } pub(crate) struct OutputWithInInstruction { @@ -81,6 +82,9 @@ create_db!( // This collapses from `bool` to `()`, using if the value was set for true and false otherwise NotableBlock: (number: u64) -> (), + SerializedQueuedOutputs: (block_number: u64) -> Vec, + SerializedForwardedOutputsIndex: (block_number: u64) -> Vec, + SerializedForwardedOutput: (output_id: &[u8]) -> Vec, SerializedOutputs: (block_number: u64) -> Vec, } ); @@ -138,14 +142,13 @@ impl ScannerDb { if block_number < raw_keys[i].activation_block_number { continue; } - keys.push(SeraiKey { - key: raw_keys[i].key.0, - stage: LifetimeStage::calculate::( + 
let (stage, block_at_which_reporting_starts) = + LifetimeStage::calculate_stage_and_reporting_start_block::( block_number, raw_keys[i].activation_block_number, raw_keys.get(i + 1).map(|key| key.activation_block_number), - ), - }); + ); + keys.push(SeraiKey { key: raw_keys[i].key.0, stage, block_at_which_reporting_starts }); } assert!(keys.len() <= 2); Some(keys) @@ -226,6 +229,53 @@ impl ScannerDb { HighestAcknowledgedBlock::get(getter) } + pub(crate) fn take_queued_outputs( + txn: &mut impl DbTxn, + block_number: u64, + ) -> Vec> { + todo!("TODO") + } + + pub(crate) fn queue_return( + txn: &mut impl DbTxn, + block_queued_from: u64, + return_addr: AddressFor, + output: OutputFor, + ) { + todo!("TODO") + } + + pub(crate) fn queue_output_until_block( + txn: &mut impl DbTxn, + queue_for_block: u64, + output: &OutputWithInInstruction, + ) { + let mut outputs = + SerializedQueuedOutputs::get(txn, queue_for_block).unwrap_or(Vec::with_capacity(128)); + output.write(&mut outputs).unwrap(); + SerializedQueuedOutputs::set(txn, queue_for_block, &outputs); + } + + pub(crate) fn save_output_being_forwarded( + txn: &mut impl DbTxn, + block_forwarded_from: u64, + output: &OutputWithInInstruction, + ) { + let mut buf = Vec::with_capacity(128); + output.write(&mut buf).unwrap(); + + let id = output.output.id(); + + // Save this to an index so we can later fetch all outputs to forward + let mut forwarded_outputs = SerializedForwardedOutputsIndex::get(txn, block_forwarded_from) + .unwrap_or(Vec::with_capacity(32)); + forwarded_outputs.extend(id.as_ref()); + SerializedForwardedOutputsIndex::set(txn, block_forwarded_from, &forwarded_outputs); + + // Save the output itself + SerializedForwardedOutput::set(txn, id.as_ref(), &buf); + } + pub(crate) fn set_in_instructions( txn: &mut impl DbTxn, block_number: u64, diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 7bd8cc2e..0a26f177 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -195,6 +195,8 @@ impl Scanner { } /// Register the Eventualities caused by a block. + // TODO: Replace this with a callback returned by acknowledge_block which panics if it's not + // called yet dropped pub fn register_eventualities(&mut self, block_number: u64, eventualities: Vec<()>) { todo!("TODO") } diff --git a/processor/scanner/src/lifetime.rs b/processor/scanner/src/lifetime.rs index 62ee91c3..6d189bca 100644 --- a/processor/scanner/src/lifetime.rs +++ b/processor/scanner/src/lifetime.rs @@ -35,16 +35,16 @@ pub(crate) enum LifetimeStage { } impl LifetimeStage { - /// Get the stage of its lifetime this multisig is in based on when the next multisig's key - /// activates. + /// Get the stage of its lifetime this multisig is in, and the block at which we start reporting + /// outputs to it. /// /// Panics if the multisig being calculated for isn't actually active and a variety of other /// insane cases. 
-  pub(crate) fn calculate<S: ScannerFeed>(
+  pub(crate) fn calculate_stage_and_reporting_start_block<S: ScannerFeed>(
     block_number: u64,
     activation_block_number: u64,
     next_keys_activation_block_number: Option<u64>,
-  ) -> Self {
+  ) -> (Self, u64) {
     assert!(
       block_number >= activation_block_number,
       "calculating lifetime stage for an inactive multisig"
     );
@@ -53,13 +53,15 @@ impl LifetimeStage {
     // activation block itself is the first block within this window
     let active_yet_not_reporting_end_block =
       activation_block_number + S::CONFIRMATIONS + S::TEN_MINUTES;
+    // The exclusive end block is the inclusive start block
+    let reporting_start_block = active_yet_not_reporting_end_block;
     if block_number < active_yet_not_reporting_end_block {
-      return LifetimeStage::ActiveYetNotReporting;
+      return (LifetimeStage::ActiveYetNotReporting, reporting_start_block);
     }
 
     let Some(next_keys_activation_block_number) = next_keys_activation_block_number else {
       // If there is no next multisig, this is the active multisig
-      return LifetimeStage::Active;
+      return (LifetimeStage::Active, reporting_start_block);
     };
 
     assert!(
@@ -72,14 +74,14 @@ impl LifetimeStage {
     // is still active (step 3)
     let new_active_yet_not_reporting_end_block =
       next_keys_activation_block_number + S::CONFIRMATIONS + S::TEN_MINUTES;
     if block_number < new_active_yet_not_reporting_end_block {
-      return LifetimeStage::Active;
+      return (LifetimeStage::Active, reporting_start_block);
     }
 
     // Step 4 details a further CONFIRMATIONS
     let new_active_and_used_for_change_end_block =
       new_active_yet_not_reporting_end_block + S::CONFIRMATIONS;
     if block_number < new_active_and_used_for_change_end_block {
-      return LifetimeStage::UsingNewForChange;
+      return (LifetimeStage::UsingNewForChange, reporting_start_block);
     }
 
     // Step 5 details a further 6 hours
@@ -87,10 +89,10 @@ impl LifetimeStage {
     let new_active_and_forwarded_to_end_block =
       new_active_and_used_for_change_end_block + (6 * 6 * S::TEN_MINUTES);
     if block_number < new_active_and_forwarded_to_end_block {
-      return LifetimeStage::Forwarding;
+      return (LifetimeStage::Forwarding, reporting_start_block);
     }
 
     // Step 6
-    LifetimeStage::Finishing
+    (LifetimeStage::Finishing, reporting_start_block)
   }
 }
diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs
index 13332586..e35eb749 100644
--- a/processor/scanner/src/scan.rs
+++ b/processor/scanner/src/scan.rs
@@ -103,7 +103,10 @@ impl<D: Db, S: ScannerFeed> ContinuallyRan for ScanForOutputsTask<D, S> {
       let mut keys = ScannerDb::<S>::active_keys_as_of_next_to_scan_for_outputs_block(&self.db)
         .expect("scanning for a blockchain without any keys set");
 
-      let mut in_instructions = vec![];
+      let mut txn = self.db.txn();
+
+      let mut in_instructions = ScannerDb::<S>::take_queued_outputs(&mut txn, b);
+
       // Scan for each key
       for key in keys {
         for output in block.scan_for_outputs(key.key) {
@@ -152,24 +155,6 @@ impl<D: Db, S: ScannerFeed> ContinuallyRan for ScanForOutputsTask<D, S> {
             continue;
           }
 
-          // Drop External outputs if they're to a multisig which won't report them
-          // This means we should report any External output we save to disk here
-          #[allow(clippy::match_same_arms)]
-          match key.stage {
-            // TODO: Delay External outputs
-            LifetimeStage::ActiveYetNotReporting => todo!("TODO"),
-            // We should report External outputs in these cases
-            LifetimeStage::Active | LifetimeStage::UsingNewForChange => {}
-            // We should report External outputs only once forwarded, where they'll appear as
-            // OutputType::Forwarded
-            LifetimeStage::Forwarding => todo!("TODO"),
-            // We should drop these as we should not be handling new External outputs at this
-            // time
-            LifetimeStage::Finishing => {
-
continue; - } - } - // Check this isn't dust let balance_to_use = { let mut balance = output.balance(); @@ -190,27 +175,59 @@ impl ContinuallyRan for ScanForOutputsTask { balance }; - // Decode and save the InInstruction/return addr for this output - match in_instruction_from_output::(&output) { - (return_address, Some(instruction)) => { - let in_instruction = - InInstructionWithBalance { instruction, balance: balance_to_use }; - // TODO: Make a proper struct out of this - in_instructions.push(OutputWithInInstruction { - output, - return_address, - in_instruction, - }); - todo!("TODO: Save to be reported") + // Fetch the InInstruction/return addr for this output + let output_with_in_instruction = match in_instruction_from_output::(&output) { + (return_address, Some(instruction)) => OutputWithInInstruction { + output, + return_address, + in_instruction: InInstructionWithBalance { instruction, balance: balance_to_use }, + }, + (Some(return_addr), None) => { + // Since there was no instruction here, return this since we parsed a return address + ScannerDb::::queue_return(&mut txn, b, return_addr, output); + continue; + } + // Since we didn't receive an instruction nor can we return this, move on + (None, None) => continue, + }; + + // Drop External outputs if they're to a multisig which won't report them + // This means we should report any External output we save to disk here + #[allow(clippy::match_same_arms)] + match key.stage { + // This multisig isn't yet reporting its External outputs to avoid a DoS + // Queue the output to be reported when this multisig starts reporting + LifetimeStage::ActiveYetNotReporting => { + ScannerDb::::queue_output_until_block( + &mut txn, + key.block_at_which_reporting_starts, + &output_with_in_instruction, + ); + continue; + } + // We should report External outputs in these cases + LifetimeStage::Active | LifetimeStage::UsingNewForChange => {} + // We should report External outputs only once forwarded, where they'll appear as + // OutputType::Forwarded. 
We save them now for when they appear + LifetimeStage::Forwarding => { + // When the forwarded output appears, we can see which Plan it's associated with and + // from there recover this output + ScannerDb::::save_output_being_forwarded(&mut txn, &output_with_in_instruction); + continue; + } + // We should drop these as we should not be handling new External outputs at this + // time + LifetimeStage::Finishing => { + continue; } - (Some(return_addr), None) => todo!("TODO: Queue return"), - // Since we didn't receive an instruction nor can we return this, accumulate it - (None, None) => {} } + // Ensures we didn't miss a `continue` above + assert!(matches!(key.stage, LifetimeStage::Active | LifetimeStage::UsingNewForChange)); + + in_instructions.push(output_with_in_instruction); } } - let mut txn = self.db.txn(); // Save the in instructions ScannerDb::::set_in_instructions(&mut txn, b, in_instructions); // Update the next to scan block From d5d1fc3eea493c11feae279025de3d60b0db6ad1 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 23 Aug 2024 22:29:15 -0400 Subject: [PATCH 019/368] Flesh out report task --- processor/primitives/src/block.rs | 11 +++--- processor/scanner/src/db.rs | 31 +++++++++------- processor/scanner/src/lib.rs | 6 ++-- processor/scanner/src/report.rs | 59 +++++++++++++++++++++++++------ processor/scanner/src/scan.rs | 2 +- 5 files changed, 79 insertions(+), 30 deletions(-) diff --git a/processor/primitives/src/block.rs b/processor/primitives/src/block.rs index 1fc92c3a..77e7e816 100644 --- a/processor/primitives/src/block.rs +++ b/processor/primitives/src/block.rs @@ -6,12 +6,13 @@ use crate::{Id, Address, ReceivedOutput}; /// A block header from an external network. pub trait BlockHeader: Send + Sync + Sized + Clone + Debug { - /// The type used to identify blocks. - type Id: 'static + Id; /// The ID of this block. - fn id(&self) -> Self::Id; + /// + /// This is fixed to 32-bytes and is expected to be cryptographically binding with 128-bit + /// security. This is not required to be the ID used natively by the external network. + fn id(&self) -> [u8; 32]; /// The ID of the parent block. - fn parent(&self) -> Self::Id; + fn parent(&self) -> [u8; 32]; } /// A block from an external network. @@ -33,7 +34,7 @@ pub trait Block: Send + Sync + Sized + Clone + Debug { type Output: ReceivedOutput; /// The ID of this block. - fn id(&self) -> ::Id; + fn id(&self) -> [u8; 32]; /// Scan all outputs within this block to find the outputs spendable by this key. fn scan_for_outputs(&self, key: Self::Key) -> Vec; diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index fa2db781..cccbe5f6 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -11,7 +11,7 @@ use serai_in_instructions_primitives::InInstructionWithBalance; use primitives::{Id, ReceivedOutput, Block, BorshG}; -use crate::{lifetime::LifetimeStage, ScannerFeed, BlockIdFor, KeyFor, AddressFor, OutputFor}; +use crate::{lifetime::LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor}; // The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. 
trait Borshy: BorshSerialize + BorshDeserialize {} @@ -46,8 +46,8 @@ impl OutputWithInInstruction { create_db!( Scanner { - BlockId: (number: u64) -> I, - BlockNumber: (id: I) -> u64, + BlockId: (number: u64) -> [u8; 32], + BlockNumber: (id: [u8; 32]) -> u64, ActiveKeys: () -> Vec>, @@ -91,14 +91,14 @@ create_db!( pub(crate) struct ScannerDb(PhantomData); impl ScannerDb { - pub(crate) fn set_block(txn: &mut impl DbTxn, number: u64, id: BlockIdFor) { + pub(crate) fn set_block(txn: &mut impl DbTxn, number: u64, id: [u8; 32]) { BlockId::set(txn, number, &id); BlockNumber::set(txn, id, &number); } - pub(crate) fn block_id(getter: &impl Get, number: u64) -> Option> { + pub(crate) fn block_id(getter: &impl Get, number: u64) -> Option<[u8; 32]> { BlockId::get(getter, number) } - pub(crate) fn block_number(getter: &impl Get, id: BlockIdFor) -> Option { + pub(crate) fn block_number(getter: &impl Get, id: [u8; 32]) -> Option { BlockNumber::get(getter, id) } @@ -154,7 +154,7 @@ impl ScannerDb { Some(keys) } - pub(crate) fn set_start_block(txn: &mut impl DbTxn, start_block: u64, id: BlockIdFor) { + pub(crate) fn set_start_block(txn: &mut impl DbTxn, start_block: u64, id: [u8; 32]) { assert!( LatestFinalizedBlock::get(txn).is_none(), "setting start block but prior set start block" @@ -276,18 +276,18 @@ impl ScannerDb { SerializedForwardedOutput::set(txn, id.as_ref(), &buf); } + // TODO: Use a DbChannel here, and send the instructions to the report task and the outputs to + // the eventuality task? That way this cleans up after itself pub(crate) fn set_in_instructions( txn: &mut impl DbTxn, block_number: u64, outputs: Vec>, ) { - if outputs.is_empty() { - return; + if !outputs.is_empty() { + // Set this block as notable + NotableBlock::set(txn, block_number, &()); } - // Set this block as notable - NotableBlock::set(txn, block_number, &()); - let mut buf = Vec::with_capacity(outputs.len() * 128); for output in outputs { output.write(&mut buf).unwrap(); @@ -295,6 +295,13 @@ impl ScannerDb { SerializedOutputs::set(txn, block_number, &buf); } + pub(crate) fn in_instructions( + getter: &impl Get, + block_number: u64, + ) -> Option>> { + todo!("TODO") + } + pub(crate) fn is_block_notable(getter: &impl Get, number: u64) -> bool { NotableBlock::get(getter, number).is_some() } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 0a26f177..5b5f6fe2 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -2,7 +2,7 @@ use core::{marker::PhantomData, fmt::Debug, time::Duration}; use tokio::sync::mpsc; -use serai_primitives::{Coin, Amount}; +use serai_primitives::{NetworkId, Coin, Amount}; use primitives::{ReceivedOutput, BlockHeader, Block}; // Logic for deciding where in its lifetime a multisig is. @@ -24,6 +24,9 @@ mod report; /// This defines the primitive types used, along with various getters necessary for indexing. #[async_trait::async_trait] pub trait ScannerFeed: Send + Sync { + /// The ID of the network being scanned for. + const NETWORK: NetworkId; + /// The amount of confirmations a block must have to be considered finalized. /// /// This value must be at least `1`. 
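The `Batch` construction in the report task above is a greedy packing: each InInstruction is pushed onto the current `Batch`, and if the SCALE-encoded size then exceeds `MAX_BATCH_SIZE`, the instruction is popped back off and moved into a fresh `Batch` under a newly acquired ID. The following is a minimal, self-contained sketch of just that packing logic; the `Batch` shape, `encoded_len`, the placeholder size bound, and the `next_id` callback are simplified stand-ins for the actual SCALE types and `acquire_batch_id`, not the crate's API:

```rust
// Sketch of the greedy batch-splitting used by the report task, under simplified types.
const MAX_BATCH_SIZE: usize = 25_600; // placeholder bound, not the actual constant

struct Batch {
  id: u32,
  instructions: Vec<Vec<u8>>,
}

impl Batch {
  // Stand-in for `batch.encode().len()` on the real SCALE-encodable Batch
  fn encoded_len(&self) -> usize {
    8 + self.instructions.iter().map(Vec::len).sum::<usize>()
  }
}

fn make_batches(instructions: Vec<Vec<u8>>, mut next_id: impl FnMut() -> u32) -> Vec<Batch> {
  // Start with an empty batch
  let mut batches = vec![Batch { id: next_id(), instructions: vec![] }];
  for instruction in instructions {
    let batch = batches.last_mut().unwrap();
    batch.instructions.push(instruction);

    // If this made the batch over-size, pop the instruction back out and open a new batch
    // with it (mirroring the task's behavior; a single over-size instruction still gets its
    // own batch)
    if batch.encoded_len() > MAX_BATCH_SIZE {
      let instruction = batch.instructions.pop().unwrap();
      batches.push(Batch { id: next_id(), instructions: vec![instruction] });
    }
  }
  batches
}
```

Assuming `next_id` is monotonic, as the DB-backed ID acquisition appears to be, the Batches for a block form a contiguous ID range in instruction order.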
@@ -84,7 +87,6 @@ pub trait ScannerFeed: Send + Sync { fn dust(&self, coin: Coin) -> Amount; } -type BlockIdFor = <<::Block as Block>::Header as BlockHeader>::Id; type KeyFor = <::Block as Block>::Key; type AddressFor = <::Block as Block>::Address; type OutputFor = <::Block as Block>::Output; diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs index 3c22556c..17cdca35 100644 --- a/processor/scanner/src/report.rs +++ b/processor/scanner/src/report.rs @@ -1,15 +1,20 @@ -/* - We only report blocks once both tasks, scanning for received ouputs and eventualities, have - processed the block. This ensures we've performed all ncessary options. -*/ - +use scale::Encode; use serai_db::{Db, DbTxn}; +use serai_primitives::BlockHash; +use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; use primitives::{Id, OutputType, Block}; // TODO: Localize to ReportDb? use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; +/* + This task produces Batches for notable blocks, with all InInstructions, in an ordered fashion. + + We only report blocks once both tasks, scanning for received outputs and checking for resolved + Eventualities, have processed the block. This ensures we know if this block is notable, and have + the InInstructions for it. +*/ struct ReportTask { db: D, feed: S, @@ -39,15 +44,49 @@ impl ContinuallyRan for ReportTask { .expect("ReportTask run before writing the start block"); for b in next_to_potentially_report ..= highest_reportable { - if ScannerDb::::is_block_notable(&self.db, b) { - let in_instructions = todo!("TODO"); - // TODO: Also pull the InInstructions from forwarding - todo!("TODO: Make Batches, which requires handling Forwarded within this crate"); + let mut txn = self.db.txn(); + + if ScannerDb::::is_block_notable(&txn, b) { + let in_instructions = ScannerDb::::in_instructions(&txn, b) + .expect("reporting block which didn't set its InInstructions"); + + let network = S::NETWORK; + let block_hash = + ScannerDb::::block_id(&txn, b).expect("reporting block we didn't save the ID for"); + let mut batch_id = ScannerDb::::acquire_batch_id(txn); + + // start with empty batch + let mut batches = + vec![Batch { network, id: batch_id, block: BlockHash(block_hash), instructions: vec![] }]; + + for instruction in in_instructions { + let batch = batches.last_mut().unwrap(); + batch.instructions.push(instruction.in_instruction); + + // check if batch is over-size + if batch.encode().len() > MAX_BATCH_SIZE { + // pop the last instruction so it's back in size + let instruction = batch.instructions.pop().unwrap(); + + // bump the id for the new batch + batch_id = ScannerDb::::acquire_batch_id(txn); + + // make a new batch with this instruction included + batches.push(Batch { + network, + id: batch_id, + block: BlockHash(block_hash), + instructions: vec![instruction], + }); + } + } + + todo!("TODO: Set/emit batches"); } - let mut txn = self.db.txn(); // Update the next to potentially report block ScannerDb::::set_next_to_potentially_report_block(&mut txn, b + 1); + txn.commit(); } diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index e35eb749..365f0f14 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -212,7 +212,7 @@ impl ContinuallyRan for ScanForOutputsTask { LifetimeStage::Forwarding => { // When the forwarded output appears, we can see which Plan it's associated with and // from there recover this output - ScannerDb::::save_output_being_forwarded(&mut txn, &output_with_in_instruction); + 
ScannerDb::::save_output_being_forwarded(&mut txn, b, &output_with_in_instruction); continue; } // We should drop these as we should not be handling new External outputs at this From 945f31dfc700be8a48b8995df7b7993909e51e11 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 24 Aug 2024 17:30:02 -0400 Subject: [PATCH 020/368] Have the scan flag blocks with change/branch/forwarded as notable --- processor/scanner/src/db.rs | 5 +++++ processor/scanner/src/report.rs | 1 + processor/scanner/src/scan.rs | 8 ++++++++ 3 files changed, 14 insertions(+) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index cccbe5f6..0710ae30 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -120,6 +120,7 @@ impl ScannerDb { } // TODO: This will be called from the Eventuality task yet this field is read by the scan task // We need to write the argument for its safety + // TODO: retire_key needs to set the notable block pub(crate) fn retire_key(txn: &mut impl DbTxn, key: KeyFor) { let mut keys: Vec>>> = ActiveKeys::get(txn).expect("retiring key yet no active keys"); @@ -276,6 +277,10 @@ impl ScannerDb { SerializedForwardedOutput::set(txn, id.as_ref(), &buf); } + pub(crate) fn flag_notable(txn: &mut impl DbTxn, block_number: u64) { + NotableBlock::set(txn, block_number, &()); + } + // TODO: Use a DbChannel here, and send the instructions to the report task and the outputs to // the eventuality task? That way this cleans up after itself pub(crate) fn set_in_instructions( diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs index 17cdca35..37ef8874 100644 --- a/processor/scanner/src/report.rs +++ b/processor/scanner/src/report.rs @@ -46,6 +46,7 @@ impl ContinuallyRan for ReportTask { for b in next_to_potentially_report ..= highest_reportable { let mut txn = self.db.txn(); + // If this block is notable, create the Batch(s) for it if ScannerDb::::is_block_notable(&txn, b) { let in_instructions = ScannerDb::::in_instructions(&txn, b) .expect("reporting block which didn't set its InInstructions"); diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index 365f0f14..8c8e07b3 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -152,6 +152,14 @@ impl ContinuallyRan for ScanForOutputsTask { to do so at a higher level. 
 */
         if output.kind() != OutputType::External {
+          // While we don't report these outputs, we still need consensus on this block and
+          // accordingly still need to set it as notable
+          let balance = outputs.balance();
+          // We ensure it's over the dust limit to prevent people sending 1 satoshi from causing
+          // an invocation of a consensus/signing protocol
+          if balance.amount.0 >= self.feed.dust(balance.coin).0 {
+            ScannerDb::<S>::flag_notable(&mut txn, b);
+          }
           continue;
         }

From 379780a3c9e2187a617552501d97ba789ecaacb7 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sat, 24 Aug 2024 23:43:31 -0400
Subject: [PATCH 021/368] Flesh out eventuality task

---
 processor/primitives/src/block.rs | 30 ++++++-
 processor/primitives/src/eventuality.rs | 11 ++-
 processor/primitives/src/lib.rs | 4 +-
 processor/primitives/src/output.rs | 4 +
 processor/scanner/src/db.rs | 6 +-
 processor/scanner/src/eventuality.rs | 103 ++++++++++++++++++++++--
 processor/scanner/src/index.rs | 2 +-
 processor/scanner/src/lib.rs | 11 ++-
 processor/scanner/src/lifetime.rs | 1 +
 processor/scanner/src/report.rs | 19 ++++-
 processor/scanner/src/scan.rs | 12 ++-
 11 files changed, 174 insertions(+), 29 deletions(-)

diff --git a/processor/primitives/src/block.rs b/processor/primitives/src/block.rs
index 77e7e816..5ca2acec 100644
--- a/processor/primitives/src/block.rs
+++ b/processor/primitives/src/block.rs
@@ -1,8 +1,9 @@
 use core::fmt::Debug;
+use std::collections::HashMap;
 
 use group::{Group, GroupEncoding};
 
-use crate::{Id, Address, ReceivedOutput};
+use crate::{Id, Address, ReceivedOutput, Eventuality, EventualityTracker};
 
 /// A block header from an external network.
 pub trait BlockHeader: Send + Sync + Sized + Clone + Debug {
@@ -15,6 +16,12 @@ pub trait BlockHeader: Send + Sync + Sized + Clone + Debug {
   fn parent(&self) -> [u8; 32];
 }
 
+/// A transaction from an external network.
+pub trait Transaction: Send + Sync + Sized {
+  /// The type used to identify transactions on this external network.
+  type Id: Id;
+}
+
 /// A block from an external network.
 ///
 /// A block is defined as a consensus event associated with a set of transactions. It is not
@@ -30,12 +37,31 @@ pub trait Block: Send + Sync + Sized + Clone + Debug {
   type Key: Group + GroupEncoding;
   /// The type used to represent addresses on this external network.
   type Address: Address;
+  /// The type used to represent transactions on this external network.
+  type Transaction: Transaction;
   /// The type used to represent received outputs on this external network.
-  type Output: ReceivedOutput<Self::Key, Self::Address>;
+  type Output: ReceivedOutput<
+    Self::Key,
+    Self::Address,
+    TransactionId = <Self::Transaction as Transaction>::Id,
+  >;
+  /// The type used to represent an Eventuality for a transaction on this external network.
+  type Eventuality: Eventuality<
+    OutputId = <Self::Output as ReceivedOutput<Self::Key, Self::Address>>::Id,
+  >;
 
   /// The ID of this block.
   fn id(&self) -> [u8; 32];
 
   /// Scan all outputs within this block to find the outputs spendable by this key.
   fn scan_for_outputs(&self, key: Self::Key) -> Vec<Self::Output>;
+
+  /// Check if this block resolved any Eventualities.
+  ///
+  /// Returns the resolved Eventualities, indexed by the ID of the transactions which resolved
+  /// them.
+ fn check_for_eventuality_resolutions( + &self, + eventualities: &mut EventualityTracker, + ) -> HashMap<::Id, Self::Eventuality>; } diff --git a/processor/primitives/src/eventuality.rs b/processor/primitives/src/eventuality.rs index 6e16637d..7203031b 100644 --- a/processor/primitives/src/eventuality.rs +++ b/processor/primitives/src/eventuality.rs @@ -1,8 +1,12 @@ -use std::collections::HashMap; -use std::io; +use std::{io, collections::HashMap}; + +use crate::Id; /// A description of a transaction which will eventually happen. pub trait Eventuality: Sized + Send + Sync { + /// The type used to identify a received output. + type OutputId: Id; + /// A unique byte sequence which can be used to identify potentially resolving transactions. /// /// Both a transaction and an Eventuality are expected to be able to yield lookup sequences. @@ -15,6 +19,9 @@ pub trait Eventuality: Sized + Send + Sync { /// identified, the full check is performed. fn lookup(&self) -> Vec; + /// The output this plan forwarded. + fn forwarded_output(&self) -> Option; + /// Read an Eventuality. fn read(reader: &mut R) -> io::Result; /// Serialize an Eventuality to a `Vec`. diff --git a/processor/primitives/src/lib.rs b/processor/primitives/src/lib.rs index dc64facf..f796a13a 100644 --- a/processor/primitives/src/lib.rs +++ b/processor/primitives/src/lib.rs @@ -2,7 +2,7 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use core::fmt::Debug; +use core::{hash::Hash, fmt::Debug}; use group::GroupEncoding; @@ -29,6 +29,8 @@ pub trait Id: + Clone + Default + PartialEq + + Eq + + Hash + AsRef<[u8]> + AsMut<[u8]> + Debug diff --git a/processor/primitives/src/output.rs b/processor/primitives/src/output.rs index 2b96d229..152a59e0 100644 --- a/processor/primitives/src/output.rs +++ b/processor/primitives/src/output.rs @@ -89,12 +89,16 @@ pub trait ReceivedOutput: { /// The type used to identify this output. type Id: 'static + Id; + /// The type used to identify the transaction which created this output. + type TransactionId: 'static + Id; /// The type of this output. fn kind(&self) -> OutputType; /// The ID of this output. fn id(&self) -> Self::Id; + /// The ID of the transaction which created this output. + fn transaction_id(&self) -> Self::TransactionId; /// The key this output was received by. fn key(&self) -> K; diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 0710ae30..09807a09 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -1,15 +1,13 @@ use core::marker::PhantomData; use std::io; -use group::GroupEncoding; - -use scale::{Encode, Decode}; +use scale::Encode; use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db}; use serai_in_instructions_primitives::InInstructionWithBalance; -use primitives::{Id, ReceivedOutput, Block, BorshG}; +use primitives::{ReceivedOutput, BorshG}; use crate::{lifetime::LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor}; diff --git a/processor/scanner/src/eventuality.rs b/processor/scanner/src/eventuality.rs index cb91ca42..b223fd79 100644 --- a/processor/scanner/src/eventuality.rs +++ b/processor/scanner/src/eventuality.rs @@ -1,9 +1,9 @@ use serai_db::{Db, DbTxn}; -use primitives::{Id, ReceivedOutput, Block}; +use primitives::{OutputType, ReceivedOutput, Block}; // TODO: Localize to EventualityDb? 
-use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan};
+use crate::{lifetime::LifetimeStage, db::ScannerDb, ScannerFeed, ContinuallyRan};
 
 /*
   Note: The following assumes there's some value, `CONFIRMATIONS`, and the finalized block we
@@ -109,12 +109,105 @@ impl<D: Db, S: ScannerFeed> ContinuallyRan for EventualityTask<D, S> {
 
       iterated = true;
 
-      // TODO: Not only check/clear eventualities, if this eventuality forwarded an output, queue
-      // it to be reported in however many blocks
-      todo!("TODO");
+      // TODO: Add a helper to fetch an indexed block, de-duplicate with scan
+      let block = match self.feed.block_by_number(b).await {
+        Ok(block) => block,
+        Err(e) => Err(format!("couldn't fetch block {b}: {e:?}"))?,
+      };
+
+      // Check the ID of this block is the expected ID
+      {
+        let expected =
+          ScannerDb::<S>::block_id(&self.db, b).expect("scannable block didn't have its ID saved");
+        if block.id() != expected {
+          panic!(
+            "finalized chain reorganized from {} to {} at {}",
+            hex::encode(expected),
+            hex::encode(block.id()),
+            b
+          );
+        }
+      }
+
+      log::info!("checking eventuality completions in block: {} ({b})", hex::encode(block.id()));
+
+      /*
+        This is proper as the keys for the next to scan block (at most `WINDOW_LENGTH` ahead,
+        which is `<= CONFIRMATIONS`) will be the keys to use here.
+
+        If we had added a new key (which hasn't actually activated by the block we're currently
+        working on), it won't have any Eventualities for at least `CONFIRMATIONS` blocks (so it'd
+        have no impact here).
+
+        As for retiring a key, that's done on this task's timeline. We ensure we don't bork the
+        scanner by officially retiring the key `WINDOW_LENGTH` blocks in the future (ensuring the
+        scanner never has a malleable view of the keys).
+      */
+      // TODO: Ensure the add key/remove key DB fns are called by the same task to prevent issues
+      // there
+      // TODO: On register eventuality, assert the above timeline assumptions
+      let mut keys = ScannerDb::<S>::active_keys_as_of_next_to_scan_for_outputs_block(&self.db)
+        .expect("scanning for a blockchain without any keys set");
 
       let mut txn = self.db.txn();
+
+      // Fetch the External outputs we reported, and therefore should yield after handling this
+      // block
+      let mut outputs = ScannerDb::<S>::in_instructions(&txn, b)
+        .expect("handling eventualities/outputs for block which didn't set its InInstructions")
+        .into_iter()
+        .map(|output| output.output)
+        .collect::<Vec<_>>();
+
+      for key in keys {
+        let completed_eventualities = {
+          let mut eventualities = ScannerDb::<S>::eventualities(&txn, key.key);
+          let completed_eventualities = block.check_for_eventuality_resolutions(&mut eventualities);
+          ScannerDb::<S>::set_eventualities(&mut txn, eventualities);
+          completed_eventualities
+        };
+
+        // Fetch all non-External outputs
+        let mut non_external_outputs = block.scan_for_outputs(key.key);
+        non_external_outputs.retain(|output| output.kind() != OutputType::External);
+        // Drop any outputs less than the dust limit
+        non_external_outputs.retain(|output| {
+          let balance = output.balance();
+          balance.amount.0 >= self.feed.dust(balance.coin).0
+        });
+
+        /*
+          Now that we have all non-External outputs, we filter them to be only the outputs which
+          are from transactions which resolve our own Eventualities *if* the multisig is retiring.
+          This implements step 6 of `spec/processor/Multisig Rotation.md`.
+
+          We may receive a Change output. The only issue with accumulating this would be if it
+          extends the multisig's lifetime (by increasing the amount of outputs yet to be
+          forwarded). By checking it's one we made, either:
+          1) It's a legitimate Change output to be forwarded
+          2) It's a Change output created by a user burning coins (specifying the Change address),
+             which can only be created while the multisig is actively handling `Burn`s (therefore
+             ensuring this multisig cannot be kept alive ad-infinitum)
+
+          The commentary on Change outputs also applies to Branch/Forwarded. They'll presumably get
+          ignored if not usable however.
+        */
+        if key.stage == LifetimeStage::Finishing {
+          non_external_outputs
+            .retain(|output| completed_eventualities.contains_key(&output.transaction_id()));
+        }
+
+        // Now, we iterate over all Forwarded outputs and queue their InInstructions
+        todo!("TODO");
+
+        // Accumulate all of these outputs
+        outputs.extend(non_external_outputs);
+      }
+
+      let outputs_to_return = ScannerDb::<S>::take_queued_returns(&mut txn, b);
+
       // Update the next to check block
+      // TODO: Two-stage process
       ScannerDb::<S>::set_next_to_check_for_eventualities_block(&mut txn, next_to_check);
       txn.commit();
     }
diff --git a/processor/scanner/src/index.rs b/processor/scanner/src/index.rs
index de68522e..b5c4fd0f 100644
--- a/processor/scanner/src/index.rs
+++ b/processor/scanner/src/index.rs
@@ -1,6 +1,6 @@
 use serai_db::{Db, DbTxn};
 
-use primitives::{Id, BlockHeader};
+use primitives::BlockHeader;
 
 // TODO: Localize to IndexDb?
 use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan};
diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs
index 5b5f6fe2..b683a4b7 100644
--- a/processor/scanner/src/lib.rs
+++ b/processor/scanner/src/lib.rs
@@ -3,7 +3,7 @@ use core::{marker::PhantomData, fmt::Debug, time::Duration};
 use tokio::sync::mpsc;
 
 use serai_primitives::{NetworkId, Coin, Amount};
-use primitives::{ReceivedOutput, BlockHeader, Block};
+use primitives::Block;
 
 // Logic for deciding where in its lifetime a multisig is.
 mod lifetime;
@@ -34,8 +34,8 @@ pub trait ScannerFeed: Send + Sync {
 
   /// The amount of blocks to process in parallel.
   ///
-  /// This value must be at least `1`. This value should be the worst-case latency to handle a
-  /// block divided by the expected block time.
+  /// This must be at least `1`. This must be less than or equal to `CONFIRMATIONS`. This value
+  /// should be the worst-case latency to handle a block divided by the expected block time.
   const WINDOW_LENGTH: u64;
 
   /// The amount of blocks which will occur in 10 minutes (approximate).
@@ -83,7 +83,8 @@ pub trait ScannerFeed: Send + Sync {
 
   /// The dust threshold for the specified coin.
   ///
-  /// This should be a value worth handling at a human level.
+  /// This MUST be constant. Serai MUST NOT create internal outputs worth less than this. This
+  /// SHOULD be a value worth handling at a human level.
   fn dust(&self, coin: Coin) -> Amount;
 }
@@ -188,6 +189,8 @@ impl Scanner {
   ///
   /// This means this block was ordered on Serai in relation to `Burn` events, and all validators
   /// have achieved synchrony on it.
+  // TODO: If we're acknowledge block `b`, the Eventuality task was already eligible to check it
+  // for Eventualities. We need this to block until the Eventuality task has actually checked it.
   pub fn acknowledge_block(
     &mut self,
     block_number: u64,
diff --git a/processor/scanner/src/lifetime.rs b/processor/scanner/src/lifetime.rs
index 6d189bca..09df7a37 100644
--- a/processor/scanner/src/lifetime.rs
+++ b/processor/scanner/src/lifetime.rs
@@ -6,6 +6,7 @@ use crate::ScannerFeed;
 /// rotation process.
Steps 7-8 regard a multisig which isn't retiring yet retired, and /// accordingly, no longer exists, so they are not modelled here (as this only models active /// multisigs. Inactive multisigs aren't represented in the first place). +#[derive(PartialEq)] pub(crate) enum LifetimeStage { /// A new multisig, once active, shouldn't actually start receiving coins until several blocks /// later. If any UI is premature in sending to this multisig, we delay to report the outputs to diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs index 37ef8874..2c35d0f5 100644 --- a/processor/scanner/src/report.rs +++ b/processor/scanner/src/report.rs @@ -3,7 +3,8 @@ use serai_db::{Db, DbTxn}; use serai_primitives::BlockHash; use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; -use primitives::{Id, OutputType, Block}; + +use primitives::ReceivedOutput; // TODO: Localize to ReportDb? use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; @@ -48,8 +49,20 @@ impl ContinuallyRan for ReportTask { // If this block is notable, create the Batch(s) for it if ScannerDb::::is_block_notable(&txn, b) { - let in_instructions = ScannerDb::::in_instructions(&txn, b) - .expect("reporting block which didn't set its InInstructions"); + let in_instructions = { + let mut in_instructions = ScannerDb::::in_instructions(&txn, b) + .expect("reporting block which didn't set its InInstructions"); + // Sort these before reporting them in case anything we did is non-deterministic/to have + // a well-defined order (not implicit to however we got this result, enabling different + // methods to be used in the future) + in_instructions.sort_by(|a, b| { + use core::cmp::{Ordering, Ord}; + let res = a.output.id().as_ref().cmp(&b.output.id().as_ref()); + assert!(res != Ordering::Equal); + res + }); + in_instructions + }; let network = S::NETWORK; let block_hash = diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index 8c8e07b3..2bfb112f 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -1,14 +1,12 @@ -use group::GroupEncoding; - -use scale::{Encode, Decode}; +use scale::Decode; use serai_db::{Db, DbTxn}; -use serai_primitives::{MAX_DATA_LEN, ExternalAddress}; +use serai_primitives::MAX_DATA_LEN; use serai_in_instructions_primitives::{ Shorthand, RefundableInInstruction, InInstruction, InInstructionWithBalance, }; -use primitives::{Id, OutputType, ReceivedOutput, Block}; +use primitives::{OutputType, ReceivedOutput, Block}; // TODO: Localize to ScanDb? 
use crate::{ @@ -100,7 +98,7 @@ impl ContinuallyRan for ScanForOutputsTask { log::info!("scanning block: {} ({b})", hex::encode(block.id())); assert_eq!(ScannerDb::::next_to_scan_for_outputs_block(&self.db).unwrap(), b); - let mut keys = ScannerDb::::active_keys_as_of_next_to_scan_for_outputs_block(&self.db) + let keys = ScannerDb::::active_keys_as_of_next_to_scan_for_outputs_block(&self.db) .expect("scanning for a blockchain without any keys set"); let mut txn = self.db.txn(); @@ -154,7 +152,7 @@ impl ContinuallyRan for ScanForOutputsTask { if output.kind() != OutputType::External { // While we don't report these outputs, we still need consensus on this block and // accordingly still need to set it as notable - let balance = outputs.balance(); + let balance = output.balance(); // We ensure it's over the dust limit to prevent people sending 1 satoshi from causing // an invocation of a consensus/signing protocol if balance.amount.0 >= self.feed.dust(balance.coin).0 { From 2fcd9530ddb014a1f48eb93d9f2db2b42fcb2f99 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 26 Aug 2024 22:49:57 -0400 Subject: [PATCH 022/368] Add a callback to accumulate outputs and return the new Eventualities --- processor/scanner/src/eventuality.rs | 30 +++++++++++++++++++++++----- processor/scanner/src/index.rs | 2 +- processor/scanner/src/lib.rs | 29 +++++++++++++++++++-------- processor/scanner/src/report.rs | 2 +- processor/scanner/src/scan.rs | 2 +- 5 files changed, 49 insertions(+), 16 deletions(-) diff --git a/processor/scanner/src/eventuality.rs b/processor/scanner/src/eventuality.rs index b223fd79..83ab4eba 100644 --- a/processor/scanner/src/eventuality.rs +++ b/processor/scanner/src/eventuality.rs @@ -1,9 +1,11 @@ -use serai_db::{Db, DbTxn}; +use group::GroupEncoding; + +use serai_db::{DbTxn, Db}; use primitives::{OutputType, ReceivedOutput, Block}; // TODO: Localize to EventualityDb? -use crate::{lifetime::LifetimeStage, db::ScannerDb, ScannerFeed, ContinuallyRan}; +use crate::{lifetime::LifetimeStage, db::ScannerDb, ScannerFeed, KeyFor, Scheduler, ContinuallyRan}; /* Note: The following assumes there's some value, `CONFIRMATIONS`, and the finalized block we @@ -53,13 +55,14 @@ use crate::{lifetime::LifetimeStage, db::ScannerDb, ScannerFeed, ContinuallyRan} This forms a backlog only if the latency of scanning, acknowledgement, and intake (including checking Eventualities) exceeds the window duration (the desired property). */ -struct EventualityTask { +struct EventualityTask> { db: D, feed: S, + scheduler: Sch, } #[async_trait::async_trait] -impl ContinuallyRan for EventualityTask { +impl> ContinuallyRan for EventualityTask { async fn run_iteration(&mut self) -> Result { /* The set of Eventualities only increase when a block is acknowledged. 
Accordingly, we can only @@ -168,6 +171,7 @@ impl ContinuallyRan for EventualityTask { }; // Fetch all non-External outputs + // TODO: Have a scan_for_outputs_ext which sorts for us let mut non_external_outputs = block.scan_for_outputs(key.key); non_external_outputs.retain(|output| output.kind() != OutputType::External); // Drop any outputs less than the dust limit @@ -206,8 +210,24 @@ impl ContinuallyRan for EventualityTask { let outputs_to_return = ScannerDb::::take_queued_returns(&mut txn, b); + let new_eventualities = + self.scheduler.accumulate_outputs_and_return_outputs(&mut txn, outputs, outputs_to_return); + for (key, new_eventualities) in new_eventualities { + let key = { + let mut key_repr = as GroupEncoding>::Repr::default(); + assert_eq!(key.len(), key_repr.as_ref().len()); + key_repr.as_mut().copy_from_slice(&key); + KeyFor::::from_bytes(&key_repr).unwrap() + }; + + let mut eventualities = ScannerDb::::eventualities(&txn, key.key); + for new_eventuality in new_eventualities { + eventualities.active_eventualities.insert(new_eventuality.lookup(), new_eventuality); + } + ScannerDb::::set_eventualities(&mut txn, eventualities); + } + // Update the next to check block - // TODO: Two-stage process ScannerDb::::set_next_to_check_for_eventualities_block(&mut txn, next_to_check); txn.commit(); } diff --git a/processor/scanner/src/index.rs b/processor/scanner/src/index.rs index b5c4fd0f..1d278015 100644 --- a/processor/scanner/src/index.rs +++ b/processor/scanner/src/index.rs @@ -1,4 +1,4 @@ -use serai_db::{Db, DbTxn}; +use serai_db::{DbTxn, Db}; use primitives::BlockHeader; diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index b683a4b7..7919f006 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -1,8 +1,12 @@ use core::{marker::PhantomData, fmt::Debug, time::Duration}; +use std::collections::HashMap; use tokio::sync::mpsc; +use serai_db::DbTxn; + use serai_primitives::{NetworkId, Coin, Amount}; + use primitives::Block; // Logic for deciding where in its lifetime a multisig is. @@ -91,6 +95,21 @@ pub trait ScannerFeed: Send + Sync { type KeyFor = <::Block as Block>::Key; type AddressFor = <::Block as Block>::Address; type OutputFor = <::Block as Block>::Output; +type EventualityFor = <::Block as Block>::Eventuality; + +/// The object responsible for accumulating outputs and planning new transactions. +pub trait Scheduler { + /// Accumulate outputs into the scheduler, yielding the Eventualities now to be scanned for. + /// + /// The `Vec` used as the key in the returned HashMap should be the encoded key these + /// Eventualities are for. + fn accumulate_outputs_and_return_outputs( + &mut self, + txn: &mut impl DbTxn, + outputs: Vec>, + outputs_to_return: Vec>, + ) -> HashMap, Vec>>; +} /// A handle to immediately run an iteration of a task. #[derive(Clone)] @@ -189,8 +208,9 @@ impl Scanner { /// /// This means this block was ordered on Serai in relation to `Burn` events, and all validators /// have achieved synchrony on it. - // TODO: If we're acknowledge block `b`, the Eventuality task was already eligible to check it + // TODO: If we're acknowledging block `b`, the Eventuality task was already eligible to check it // for Eventualities. We need this to block until the Eventuality task has actually checked it. + // TODO: Does the prior TODO hold with how the callback is now handled? 
pub fn acknowledge_block( &mut self, block_number: u64, @@ -198,13 +218,6 @@ impl Scanner { ) -> Vec> { todo!("TODO") } - - /// Register the Eventualities caused by a block. - // TODO: Replace this with a callback returned by acknowledge_block which panics if it's not - // called yet dropped - pub fn register_eventualities(&mut self, block_number: u64, eventualities: Vec<()>) { - todo!("TODO") - } } /* diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs index 2c35d0f5..ec87845f 100644 --- a/processor/scanner/src/report.rs +++ b/processor/scanner/src/report.rs @@ -1,5 +1,5 @@ use scale::Encode; -use serai_db::{Db, DbTxn}; +use serai_db::{DbTxn, Db}; use serai_primitives::BlockHash; use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index 2bfb112f..cd010d7c 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -1,5 +1,5 @@ use scale::Decode; -use serai_db::{Db, DbTxn}; +use serai_db::{DbTxn, Db}; use serai_primitives::MAX_DATA_LEN; use serai_in_instructions_primitives::{ From b65dbacd6abb23279b52d9d40f1c1fad5d1d2774 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 26 Aug 2024 22:57:28 -0400 Subject: [PATCH 023/368] Move ContinuallyRan into primitives I'm unsure where else it'll be used within the processor, yet it's generally useful and I don't want to make a dedicated crate yet. --- Cargo.lock | 2 + processor/primitives/Cargo.toml | 3 ++ processor/primitives/src/lib.rs | 3 ++ processor/primitives/src/task.rs | 93 ++++++++++++++++++++++++++++++++ processor/scanner/src/lib.rs | 84 +---------------------------- 5 files changed, 102 insertions(+), 83 deletions(-) create mode 100644 processor/primitives/src/task.rs diff --git a/Cargo.lock b/Cargo.lock index f887bd8c..4cc54e15 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8654,8 +8654,10 @@ dependencies = [ "async-trait", "borsh", "group", + "log", "parity-scale-codec", "serai-primitives", + "tokio", ] [[package]] diff --git a/processor/primitives/Cargo.toml b/processor/primitives/Cargo.toml index dd59c0a8..9427a604 100644 --- a/processor/primitives/Cargo.toml +++ b/processor/primitives/Cargo.toml @@ -25,3 +25,6 @@ serai-primitives = { path = "../../substrate/primitives", default-features = fal scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +log = { version = "0.4", default-features = false, features = ["std"] } +tokio = { version = "1", default-features = false, features = ["macros", "sync", "time"] } diff --git a/processor/primitives/src/lib.rs b/processor/primitives/src/lib.rs index f796a13a..b0b7ae04 100644 --- a/processor/primitives/src/lib.rs +++ b/processor/primitives/src/lib.rs @@ -9,6 +9,9 @@ use group::GroupEncoding; use scale::{Encode, Decode}; use borsh::{BorshSerialize, BorshDeserialize}; +/// A module for task-related structs and functionality. +pub mod task; + mod output; pub use output::*; diff --git a/processor/primitives/src/task.rs b/processor/primitives/src/task.rs new file mode 100644 index 00000000..a7d6153c --- /dev/null +++ b/processor/primitives/src/task.rs @@ -0,0 +1,93 @@ +use core::time::Duration; + +use tokio::sync::mpsc; + +/// A handle to immediately run an iteration of a task. +#[derive(Clone)] +pub struct RunNowHandle(mpsc::Sender<()>); +/// An instruction recipient to immediately run an iteration of a task. 
+pub struct RunNowRecipient(mpsc::Receiver<()>);
+
+impl RunNowHandle {
+  /// Create a new run-now handle to be assigned to a task.
+  pub fn new() -> (Self, RunNowRecipient) {
+    // Uses a capacity of 1 as any call to run as soon as possible satisfies all calls to run as
+    // soon as possible
+    let (send, recv) = mpsc::channel(1);
+    (Self(send), RunNowRecipient(recv))
+  }
+
+  /// Tell the task to run now (and not whenever its next iteration on a timer is).
+  ///
+  /// Panics if the task has been dropped.
+  pub fn run_now(&self) {
+    #[allow(clippy::match_same_arms)]
+    match self.0.try_send(()) {
+      Ok(()) => {}
+      // NOP on full, as this task will already be run as soon as possible
+      Err(mpsc::error::TrySendError::Full(())) => {}
+      Err(mpsc::error::TrySendError::Closed(())) => {
+        panic!("task was unexpectedly closed when calling run_now")
+      }
+    }
+  }
+}
+
+/// A task to be continually run.
+#[async_trait::async_trait]
+pub trait ContinuallyRan: Sized {
+  /// The amount of seconds before this task should be polled again.
+  const DELAY_BETWEEN_ITERATIONS: u64 = 5;
+  /// The maximum amount of seconds before this task should be run again.
+  ///
+  /// Upon error, the amount of time waited will be linearly increased until this limit.
+  const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 120;
+
+  /// Run an iteration of the task.
+  ///
+  /// If this returns `true`, all dependents of the task will immediately have a new iteration run
+  /// (without waiting for whatever timer they were already on).
+  async fn run_iteration(&mut self) -> Result<bool, String>;
+
+  /// Continually run the task.
+  ///
+  /// This takes a channel which can have a message sent over it to immediately trigger a new run
+  /// of an iteration.
+  async fn continually_run(mut self, mut run_now: RunNowRecipient, dependents: Vec<RunNowHandle>) {
+    // The default number of seconds to sleep before running the task again
+    let default_sleep_before_next_task = Self::DELAY_BETWEEN_ITERATIONS;
+    // The current number of seconds to sleep before running the task again
+    // We increment this upon errors in order to not flood the logs with errors
+    let mut current_sleep_before_next_task = default_sleep_before_next_task;
+    let increase_sleep_before_next_task = |current_sleep_before_next_task: &mut u64| {
+      let new_sleep = *current_sleep_before_next_task + default_sleep_before_next_task;
+      // Set a limit of sleeping for two minutes
+      *current_sleep_before_next_task = new_sleep.min(Self::MAX_DELAY_BETWEEN_ITERATIONS);
+    };
+
+    loop {
+      match self.run_iteration().await {
+        Ok(run_dependents) => {
+          // Upon a successful (error-free) loop iteration, reset the amount of time we sleep
+          current_sleep_before_next_task = default_sleep_before_next_task;
+
+          if run_dependents {
+            for dependent in &dependents {
+              dependent.run_now();
+            }
+          }
+        }
+        Err(e) => {
+          log::debug!("{}", e);
+          increase_sleep_before_next_task(&mut current_sleep_before_next_task);
+        }
+      }
+
+      // Don't run the task again for another few seconds UNLESS told to run now
+      tokio::select!
{ + () = tokio::time::sleep(Duration::from_secs(current_sleep_before_next_task)) => {}, + msg = run_now.0.recv() => assert_eq!(msg, Some(()), "run now handle was dropped"), + } + } + } +} diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 7919f006..822acb27 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -7,7 +7,7 @@ use serai_db::DbTxn; use serai_primitives::{NetworkId, Coin, Amount}; -use primitives::Block; +use primitives::{task::*, Block}; // Logic for deciding where in its lifetime a multisig is. mod lifetime; @@ -111,88 +111,6 @@ pub trait Scheduler { ) -> HashMap, Vec>>; } -/// A handle to immediately run an iteration of a task. -#[derive(Clone)] -pub(crate) struct RunNowHandle(mpsc::Sender<()>); -/// An instruction recipient to immediately run an iteration of a task. -pub(crate) struct RunNowRecipient(mpsc::Receiver<()>); - -impl RunNowHandle { - /// Create a new run-now handle to be assigned to a task. - pub(crate) fn new() -> (Self, RunNowRecipient) { - // Uses a capacity of 1 as any call to run as soon as possible satisfies all calls to run as - // soon as possible - let (send, recv) = mpsc::channel(1); - (Self(send), RunNowRecipient(recv)) - } - - /// Tell the task to run now (and not whenever its next iteration on a timer is). - /// - /// Panics if the task has been dropped. - pub(crate) fn run_now(&self) { - #[allow(clippy::match_same_arms)] - match self.0.try_send(()) { - Ok(()) => {} - // NOP on full, as this task will already be ran as soon as possible - Err(mpsc::error::TrySendError::Full(())) => {} - Err(mpsc::error::TrySendError::Closed(())) => { - panic!("task was unexpectedly closed when calling run_now") - } - } - } -} - -#[async_trait::async_trait] -pub(crate) trait ContinuallyRan: Sized { - /// Run an iteration of the task. - /// - /// If this returns `true`, all dependents of the task will immediately have a new iteration ran - /// (without waiting for whatever timer they were already on). - async fn run_iteration(&mut self) -> Result; - - /// Continually run the task. - /// - /// This returns a channel which can have a message set to immediately trigger a new run of an - /// iteration. - async fn continually_run(mut self, mut run_now: RunNowRecipient, dependents: Vec) { - // The default number of seconds to sleep before running the task again - let default_sleep_before_next_task = 5; - // The current number of seconds to sleep before running the task again - // We increment this upon errors in order to not flood the logs with errors - let mut current_sleep_before_next_task = default_sleep_before_next_task; - let increase_sleep_before_next_task = |current_sleep_before_next_task: &mut u64| { - let new_sleep = *current_sleep_before_next_task + default_sleep_before_next_task; - // Set a limit of sleeping for two minutes - *current_sleep_before_next_task = new_sleep.max(120); - }; - - loop { - match self.run_iteration().await { - Ok(run_dependents) => { - // Upon a successful (error-free) loop iteration, reset the amount of time we sleep - current_sleep_before_next_task = default_sleep_before_next_task; - - if run_dependents { - for dependent in &dependents { - dependent.run_now(); - } - } - } - Err(e) => { - log::debug!("{}", e); - increase_sleep_before_next_task(&mut current_sleep_before_next_task); - } - } - - // Don't run the task again for another few seconds UNLESS told to run now - tokio::select! 
{ - () = tokio::time::sleep(Duration::from_secs(current_sleep_before_next_task)) => {}, - msg = run_now.0.recv() => assert_eq!(msg, Some(()), "run now handle was dropped"), - } - } - } -} - /// A representation of a scanner. pub struct Scanner(PhantomData); impl Scanner { From 7e7184082251e241bb13690ddb0b937d426857dc Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 26 Aug 2024 23:15:19 -0400 Subject: [PATCH 024/368] Add helper methods Has fetched blocks checked to be the indexed blocks. Has scanned outputs be sorted, meaning they aren't subject to implicit order/may be non-deterministic (such as if handled by a threadpool). --- processor/primitives/src/block.rs | 4 ++- processor/scanner/src/eventuality.rs | 32 ++++------------- processor/scanner/src/index.rs | 2 +- processor/scanner/src/lib.rs | 52 ++++++++++++++++++++++++++-- processor/scanner/src/scan.rs | 23 ++---------- 5 files changed, 63 insertions(+), 50 deletions(-) diff --git a/processor/primitives/src/block.rs b/processor/primitives/src/block.rs index 5ca2acec..6f603ab2 100644 --- a/processor/primitives/src/block.rs +++ b/processor/primitives/src/block.rs @@ -54,7 +54,9 @@ pub trait Block: Send + Sync + Sized + Clone + Debug { fn id(&self) -> [u8; 32]; /// Scan all outputs within this block to find the outputs spendable by this key. - fn scan_for_outputs(&self, key: Self::Key) -> Vec; + /// + /// No assumption on the order of the returned outputs is made. + fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec; /// Check if this block resolved any Eventualities. /// diff --git a/processor/scanner/src/eventuality.rs b/processor/scanner/src/eventuality.rs index 83ab4eba..8fc18246 100644 --- a/processor/scanner/src/eventuality.rs +++ b/processor/scanner/src/eventuality.rs @@ -5,13 +5,11 @@ use serai_db::{DbTxn, Db}; use primitives::{OutputType, ReceivedOutput, Block}; // TODO: Localize to EventualityDb? -use crate::{lifetime::LifetimeStage, db::ScannerDb, ScannerFeed, KeyFor, Scheduler, ContinuallyRan}; +use crate::{ + lifetime::LifetimeStage, db::ScannerDb, BlockExt, ScannerFeed, KeyFor, Scheduler, ContinuallyRan, +}; /* - Note: The following assumes there's some value, `CONFIRMATIONS`, and the finalized block we - operate on is `CONFIRMATIONS` blocks deep. This is true for Proof-of-Work chains yet not the API - actively used here. - When we scan a block, we receive outputs. When this block is acknowledged, we accumulate those outputs into some scheduler, potentially causing certain transactions to begin their signing protocol. 
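The `RunNowHandle` introduced above relies on a capacity-1 channel to coalesce wake-ups: one queued `()` satisfies every subsequent request to run as soon as possible, so a `Full` error from `try_send` is itself a success. A condensed, runnable sketch of just that pattern (the spawned task stands in for `ContinuallyRan::continually_run`; this assumes tokio's `rt`, `macros`, `sync`, and `time` features):

```rust
// Sketch of the coalescing wake-up used by RunNowHandle: a capacity-1 channel where
// Full means a wake-up is already pending, which satisfies this request too.
use std::time::Duration;
use tokio::sync::mpsc;

#[tokio::main(flavor = "current_thread")]
async fn main() {
  let (handle, mut recipient) = mpsc::channel::<()>(1);

  let task = tokio::spawn(async move {
    // An iteration of the task would run here; it then waits on a timer OR a wake-up
    tokio::select! {
      () = tokio::time::sleep(Duration::from_secs(5)) => {},
      msg = recipient.recv() => assert_eq!(msg, Some(()), "all handles were dropped"),
    }
  });

  // Both of these requests are satisfied by a single wake-up of the task
  handle.try_send(()).unwrap();
  match handle.try_send(()) {
    // Full means an iteration is already scheduled to run as soon as possible
    Ok(()) | Err(mpsc::error::TrySendError::Full(())) => {}
    Err(mpsc::error::TrySendError::Closed(())) => panic!("task dropped its recipient"),
  }

  task.await.unwrap();
}
```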
@@ -112,25 +110,7 @@ impl> ContinuallyRan for EventualityTas iterated = true; - // TODO: Add a helper to fetch an indexed block, de-duplicate with scan - let block = match self.feed.block_by_number(b).await { - Ok(block) => block, - Err(e) => Err(format!("couldn't fetch block {b}: {e:?}"))?, - }; - - // Check the ID of this block is the expected ID - { - let expected = - ScannerDb::::block_id(&self.db, b).expect("scannable block didn't have its ID saved"); - if block.id() != expected { - panic!( - "finalized chain reorganized from {} to {} at {}", - hex::encode(expected), - hex::encode(block.id()), - b - ); - } - } + let block = self.feed.block_by_number(b).await?; log::info!("checking eventuality completions in block: {} ({b})", hex::encode(block.id())); @@ -171,7 +151,6 @@ impl> ContinuallyRan for EventualityTas }; // Fetch all non-External outputs - // TODO: Have a scan_for_outputs_ext which sorts for us let mut non_external_outputs = block.scan_for_outputs(key.key); non_external_outputs.retain(|output| output.kind() != OutputType::External); // Drop any outputs less than the dust limit @@ -210,6 +189,7 @@ impl> ContinuallyRan for EventualityTas let outputs_to_return = ScannerDb::::take_queued_returns(&mut txn, b); + // TODO: This also has to intake Burns let new_eventualities = self.scheduler.accumulate_outputs_and_return_outputs(&mut txn, outputs, outputs_to_return); for (key, new_eventualities) in new_eventualities { @@ -220,7 +200,7 @@ impl> ContinuallyRan for EventualityTas KeyFor::::from_bytes(&key_repr).unwrap() }; - let mut eventualities = ScannerDb::::eventualities(&txn, key.key); + let mut eventualities = ScannerDb::::eventualities(&txn, key); for new_eventuality in new_eventualities { eventualities.active_eventualities.insert(new_eventuality.lookup(), new_eventuality); } diff --git a/processor/scanner/src/index.rs b/processor/scanner/src/index.rs index 1d278015..e3c5c6ac 100644 --- a/processor/scanner/src/index.rs +++ b/processor/scanner/src/index.rs @@ -43,7 +43,7 @@ impl ContinuallyRan for IndexFinalizedTask { // Index the hashes of all blocks until the latest finalized block for b in (our_latest_finalized + 1) ..= latest_finalized { - let block = match self.feed.block_header_by_number(b).await { + let block = match self.feed.unchecked_block_header_by_number(b).await { Ok(block) => block, Err(e) => Err(format!("couldn't fetch block {b}: {e:?}"))?, }; diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 822acb27..5b41301e 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -23,6 +23,23 @@ mod eventuality; /// Task which reports `Batch`s to Substrate. mod report; +/// Extension traits around Block. +pub(crate) trait BlockExt: Block { + fn scan_for_outputs(&self, key: Self::Key) -> Vec; +} +impl BlockExt for B { + fn scan_for_outputs(&self, key: Self::Key) -> Vec { + let mut outputs = self.scan_for_outputs_unordered(); + outputs.sort_by(|a, b| { + use core::cmp::{Ordering, Ord}; + let res = a.id().as_ref().cmp(&b.id().as_ref()); + assert!(res != Ordering::Equal, "scanned two outputs within a block with the same ID"); + res + }); + outputs + } +} + /// A feed usable to scan a blockchain. /// /// This defines the primitive types used, along with various getters necessary for indexing. @@ -68,13 +85,44 @@ pub trait ScannerFeed: Send + Sync { async fn latest_finalized_block_number(&self) -> Result; /// Fetch a block header by its number. 
- async fn block_header_by_number( + /// + /// This does not check the returned BlockHeader is the header for the block we indexed. + async fn unchecked_block_header_by_number( &self, number: u64, ) -> Result<::Header, Self::EphemeralError>; /// Fetch a block by its number. - async fn block_by_number(&self, number: u64) -> Result; + /// + /// This does not check the returned Block is the block we indexed. + async fn unchecked_block_by_number( + &self, + number: u64, + ) -> Result; + + /// Fetch a block by its number. + /// + /// Panics if the block requested wasn't indexed. + async fn block_by_number(&self, getter: &impl Get, number: u64) -> Result { + let block = match self.unchecked_block_by_number(number).await { + Ok(block) => block, + Err(e) => Err(format!("couldn't fetch block {number}: {e:?}"))?, + }; + + // Check the ID of this block is the expected ID + { + let expected = + ScannerDb::::block_id(&self.db, number).expect("requested a block which wasn't indexed"); + if block.id() != expected { + panic!( + "finalized chain reorganized from {} to {} at {}", + hex::encode(expected), + hex::encode(block.id()), + number, + ); + } + } + } /// The cost to aggregate an input as of the specified block. /// diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index cd010d7c..ddc1110e 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -12,7 +12,7 @@ use primitives::{OutputType, ReceivedOutput, Block}; use crate::{ lifetime::LifetimeStage, db::{OutputWithInInstruction, ScannerDb}, - ScannerFeed, AddressFor, OutputFor, ContinuallyRan, + BlockExt, ScannerFeed, AddressFor, OutputFor, ContinuallyRan, }; // Construct an InInstruction from an external output. @@ -76,24 +76,7 @@ impl ContinuallyRan for ScanForOutputsTask { .expect("ScanForOutputsTask run before writing the start block"); for b in next_to_scan ..= latest_scannable { - let block = match self.feed.block_by_number(b).await { - Ok(block) => block, - Err(e) => Err(format!("couldn't fetch block {b}: {e:?}"))?, - }; - - // Check the ID of this block is the expected ID - { - let expected = - ScannerDb::::block_id(&self.db, b).expect("scannable block didn't have its ID saved"); - if block.id() != expected { - panic!( - "finalized chain reorganized from {} to {} at {}", - hex::encode(expected), - hex::encode(block.id()), - b - ); - } - } + let block = self.feed.block_by_number(b).await?; log::info!("scanning block: {} ({b})", hex::encode(block.id())); @@ -143,7 +126,7 @@ impl ContinuallyRan for ScanForOutputsTask { worthwhile, and even if they're not economically, they are technically). The alternative, we drop outputs here with a generic filter rule and then report back - the insolvency created, still doesn't work as we'd only be creating if an insolvency if + the insolvency created, still doesn't work as we'd only be creating an insolvency if the output was actually made by us (and not simply someone else sending in). We can have the Eventuality task report the insolvency, yet that requires the scanner be responsible for such filter logic. 
It's more flexible, and has a cleaner API, From 66f342805159ba603af35ba081685a5c3cbfebf0 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 26 Aug 2024 23:18:31 -0400 Subject: [PATCH 025/368] Make index a folder, not a file --- processor/scanner/src/{index.rs => index/mod.rs} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename processor/scanner/src/{index.rs => index/mod.rs} (100%) diff --git a/processor/scanner/src/index.rs b/processor/scanner/src/index/mod.rs similarity index 100% rename from processor/scanner/src/index.rs rename to processor/scanner/src/index/mod.rs From 1e8f4e6156c438ecd9df90ce71fd86364c2bfd61 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 26 Aug 2024 23:24:49 -0400 Subject: [PATCH 026/368] Make a dedicated IndexDb --- processor/scanner/src/db.rs | 29 +++---------------------- processor/scanner/src/index/db.rs | 34 ++++++++++++++++++++++++++++++ processor/scanner/src/index/mod.rs | 16 ++++++++------ processor/scanner/src/lib.rs | 2 +- 4 files changed, 47 insertions(+), 34 deletions(-) create mode 100644 processor/scanner/src/index/db.rs diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 09807a09..4ac6bada 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -44,13 +44,8 @@ impl OutputWithInInstruction { create_db!( Scanner { - BlockId: (number: u64) -> [u8; 32], - BlockNumber: (id: [u8; 32]) -> u64, - ActiveKeys: () -> Vec>, - // The latest finalized block to appear of a blockchain - LatestFinalizedBlock: () -> u64, // The next block to scan for received outputs NextToScanForOutputsBlock: () -> u64, // The next block to check for resolving eventualities @@ -89,17 +84,6 @@ create_db!( pub(crate) struct ScannerDb(PhantomData); impl ScannerDb { - pub(crate) fn set_block(txn: &mut impl DbTxn, number: u64, id: [u8; 32]) { - BlockId::set(txn, number, &id); - BlockNumber::set(txn, id, &number); - } - pub(crate) fn block_id(getter: &impl Get, number: u64) -> Option<[u8; 32]> { - BlockId::get(getter, number) - } - pub(crate) fn block_number(getter: &impl Get, id: [u8; 32]) -> Option { - BlockNumber::get(getter, id) - } - // activation_block_number is inclusive, so the key will be scanned for starting at the specified // block pub(crate) fn queue_key(txn: &mut impl DbTxn, activation_block_number: u64, key: KeyFor) { @@ -155,13 +139,13 @@ impl ScannerDb { pub(crate) fn set_start_block(txn: &mut impl DbTxn, start_block: u64, id: [u8; 32]) { assert!( - LatestFinalizedBlock::get(txn).is_none(), + NextToScanForOutputsBlock::get(txn).is_none(), "setting start block but prior set start block" ); - Self::set_block(txn, start_block, id); + crate::index::IndexDb::set_block(txn, start_block, id); + crate::index::IndexDb::set_latest_finalized_block(txn, start_block); - LatestFinalizedBlock::set(txn, &start_block); NextToScanForOutputsBlock::set(txn, &start_block); // We can receive outputs in this block, but any descending transactions will be in the next // block. This, with the check on-set, creates a bound that this value in the DB is non-zero. 
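The dedicated `IndexDb` this patch carves out maintains a bidirectional mapping, and its `set_block` writes both directions in one call so number-to-ID and ID-to-number lookups can never disagree (which is what lets `set_start_block` seed the index with a single call). A minimal in-memory sketch of that invariant, with plain `HashMap`s standing in for the `create_db!`-generated tables; the monotonicity assert is an illustrative assumption, not something the actual code enforces:

```rust
// In-memory sketch of IndexDb's paired lookups; HashMaps stand in for the DB tables.
use std::collections::HashMap;

#[derive(Default)]
struct Index {
  block_id: HashMap<u64, [u8; 32]>,     // block number -> block ID
  block_number: HashMap<[u8; 32], u64>, // block ID -> block number
  latest_finalized: Option<u64>,
}

impl Index {
  // Mirrors IndexDb::set_block: both directions are written together, keeping them in sync
  fn set_block(&mut self, number: u64, id: [u8; 32]) {
    self.block_id.insert(number, id);
    self.block_number.insert(id, number);
  }

  fn set_latest_finalized_block(&mut self, number: u64) {
    // Assumption for illustration: the finalized tip only moves forward
    assert!(self.latest_finalized.map_or(true, |prior| prior <= number));
    self.latest_finalized = Some(number);
  }
}

fn main() {
  let mut index = Index::default();
  // As in set_start_block: seed the index with the start block, then mark it finalized
  index.set_block(100, [1; 32]);
  index.set_latest_finalized_block(100);
  assert_eq!(index.block_number[&[1; 32]], 100);
}
```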
@@ -169,13 +153,6 @@ impl ScannerDb { NextToPotentiallyReportBlock::set(txn, &start_block); } - pub(crate) fn set_latest_finalized_block(txn: &mut impl DbTxn, latest_finalized_block: u64) { - LatestFinalizedBlock::set(txn, &latest_finalized_block); - } - pub(crate) fn latest_finalized_block(getter: &impl Get) -> Option { - LatestFinalizedBlock::get(getter) - } - pub(crate) fn latest_scannable_block(getter: &impl Get) -> Option { // We can only scan up to whatever block we've checked the Eventualities of, plus the window // length. Since this returns an inclusive bound, we need to subtract 1 diff --git a/processor/scanner/src/index/db.rs b/processor/scanner/src/index/db.rs new file mode 100644 index 00000000..a46d6fa6 --- /dev/null +++ b/processor/scanner/src/index/db.rs @@ -0,0 +1,34 @@ +use serai_db::{Get, DbTxn, create_db}; + +create_db!( + ScannerIndex { + // A lookup of a block's number to its ID + BlockId: (number: u64) -> [u8; 32], + // A lookup of a block's ID to its number + BlockNumber: (id: [u8; 32]) -> u64, + + // The latest finalized block to appear on the blockchain + LatestFinalizedBlock: () -> u64, + } +); + +pub(crate) struct IndexDb; +impl IndexDb { + pub(crate) fn set_block(txn: &mut impl DbTxn, number: u64, id: [u8; 32]) { + BlockId::set(txn, number, &id); + BlockNumber::set(txn, id, &number); + } + pub(crate) fn block_id(getter: &impl Get, number: u64) -> Option<[u8; 32]> { + BlockId::get(getter, number) + } + pub(crate) fn block_number(getter: &impl Get, id: [u8; 32]) -> Option { + BlockNumber::get(getter, id) + } + + pub(crate) fn set_latest_finalized_block(txn: &mut impl DbTxn, latest_finalized_block: u64) { + LatestFinalizedBlock::set(txn, &latest_finalized_block); + } + pub(crate) fn latest_finalized_block(getter: &impl Get) -> Option { + LatestFinalizedBlock::get(getter) + } +} diff --git a/processor/scanner/src/index/mod.rs b/processor/scanner/src/index/mod.rs index e3c5c6ac..07801650 100644 --- a/processor/scanner/src/index/mod.rs +++ b/processor/scanner/src/index/mod.rs @@ -1,9 +1,11 @@ use serai_db::{DbTxn, Db}; -use primitives::BlockHeader; +use primitives::{task::ContinuallyRan, BlockHeader}; -// TODO: Localize to IndexDb? 
-use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; +use crate::ScannerFeed; + +mod db; +pub(crate) use db::IndexDb; /* This processor should build its own index of the blockchain, yet only for finalized blocks which @@ -22,7 +24,7 @@ struct IndexFinalizedTask { impl ContinuallyRan for IndexFinalizedTask { async fn run_iteration(&mut self) -> Result { // Fetch the latest finalized block - let our_latest_finalized = ScannerDb::::latest_finalized_block(&self.db) + let our_latest_finalized = IndexDb::latest_finalized_block(&self.db) .expect("IndexTask run before writing the start block"); let latest_finalized = match self.feed.latest_finalized_block_number().await { Ok(latest_finalized) => latest_finalized, @@ -51,7 +53,7 @@ impl ContinuallyRan for IndexFinalizedTask { // Check this descends from our indexed chain { let expected_parent = - ScannerDb::::block_id(&self.db, b - 1).expect("didn't have the ID of the prior block"); + IndexDb::block_id(&self.db, b - 1).expect("didn't have the ID of the prior block"); if block.parent() != expected_parent { panic!( "current finalized block (#{b}, {}) doesn't build off finalized block (#{}, {})", @@ -64,8 +66,8 @@ impl ContinuallyRan for IndexFinalizedTask { // Update the latest finalized block let mut txn = self.db.txn(); - ScannerDb::::set_block(&mut txn, b, block.id()); - ScannerDb::::set_latest_finalized_block(&mut txn, b); + IndexDb::set_block(&mut txn, b, block.id()); + IndexDb::set_latest_finalized_block(&mut txn, b); txn.commit(); } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 5b41301e..d38c2ec3 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -112,7 +112,7 @@ pub trait ScannerFeed: Send + Sync { // Check the ID of this block is the expected ID { let expected = - ScannerDb::::block_id(&self.db, number).expect("requested a block which wasn't indexed"); + crate::index::IndexDb::block_id(&self.db, number).expect("requested a block which wasn't indexed"); if block.id() != expected { panic!( "finalized chain reorganized from {} to {} at {}", From 33e0c85f34569e64223c36440ff85d8b88192ea3 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 26 Aug 2024 23:25:30 -0400 Subject: [PATCH 027/368] Make Eventuality a folder, not a file --- processor/scanner/src/{eventuality.rs => eventuality/mod.rs} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename processor/scanner/src/{eventuality.rs => eventuality/mod.rs} (100%) diff --git a/processor/scanner/src/eventuality.rs b/processor/scanner/src/eventuality/mod.rs similarity index 100% rename from processor/scanner/src/eventuality.rs rename to processor/scanner/src/eventuality/mod.rs From 9ab8ba021544a174b3da7ebfce90163688dbc927 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 27 Aug 2024 00:23:15 -0400 Subject: [PATCH 028/368] Add dedicated Eventuality DB and stub missing fns --- processor/scanner/src/db.rs | 16 +++++++++-- processor/scanner/src/eventuality/db.rs | 36 ++++++++++++++++++++++++ processor/scanner/src/eventuality/mod.rs | 15 ++++++---- processor/scanner/src/lib.rs | 26 +++++++++-------- processor/scanner/src/report.rs | 10 +++---- processor/scanner/src/scan.rs | 4 +-- 6 files changed, 81 insertions(+), 26 deletions(-) create mode 100644 processor/scanner/src/eventuality/db.rs diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 4ac6bada..18511222 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -215,8 +215,8 @@ impl ScannerDb { pub(crate) fn queue_return( txn: &mut 
impl DbTxn, block_queued_from: u64, - return_addr: AddressFor, - output: OutputFor, + return_addr: &AddressFor, + output: &OutputFor, ) { todo!("TODO") } @@ -253,6 +253,10 @@ impl ScannerDb { } pub(crate) fn flag_notable(txn: &mut impl DbTxn, block_number: u64) { + assert!( + NextToPotentiallyReportBlock::get(txn).unwrap() <= block_number, + "already potentially reported a block we're only now flagging as notable" + ); NotableBlock::set(txn, block_number, &()); } @@ -285,4 +289,12 @@ impl ScannerDb { pub(crate) fn is_block_notable(getter: &impl Get, number: u64) -> bool { NotableBlock::get(getter, number).is_some() } + + pub(crate) fn take_queued_returns(txn: &mut impl DbTxn, block_number: u64) -> Vec> { + todo!("TODO") + } + + pub(crate) fn acquire_batch_id(txn: &mut impl DbTxn) -> u32 { + todo!("TODO") + } } diff --git a/processor/scanner/src/eventuality/db.rs b/processor/scanner/src/eventuality/db.rs new file mode 100644 index 00000000..e379532d --- /dev/null +++ b/processor/scanner/src/eventuality/db.rs @@ -0,0 +1,36 @@ +use core::marker::PhantomData; + +use borsh::{BorshSerialize, BorshDeserialize}; +use serai_db::{Get, DbTxn, create_db}; + +use primitives::EventualityTracker; + +use crate::{ScannerFeed, KeyFor, EventualityFor}; + +// The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. +trait Borshy: BorshSerialize + BorshDeserialize {} +impl Borshy for T {} + +create_db!( + ScannerEventuality { + SerializedEventualities: () -> Vec, + } +); + +pub(crate) struct EventualityDb(PhantomData); +impl EventualityDb { + pub(crate) fn set_eventualities( + txn: &mut impl DbTxn, + key: KeyFor, + eventualities: &EventualityTracker>, + ) { + todo!("TODO") + } + + pub(crate) fn eventualities( + getter: &impl Get, + key: KeyFor, + ) -> EventualityTracker> { + todo!("TODO") + } +} diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 8fc18246..3d70d650 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -2,13 +2,16 @@ use group::GroupEncoding; use serai_db::{DbTxn, Db}; -use primitives::{OutputType, ReceivedOutput, Block}; +use primitives::{OutputType, ReceivedOutput, Eventuality, Block}; // TODO: Localize to EventualityDb? use crate::{ lifetime::LifetimeStage, db::ScannerDb, BlockExt, ScannerFeed, KeyFor, Scheduler, ContinuallyRan, }; +mod db; +use db::EventualityDb; + /* When we scan a block, we receive outputs. 
When this block is acknowledged, we accumulate those outputs into some scheduler, potentially causing certain transactions to begin their signing @@ -110,7 +113,7 @@ impl> ContinuallyRan for EventualityTas iterated = true; - let block = self.feed.block_by_number(b).await?; + let block = self.feed.block_by_number(&self.db, b).await?; log::info!("checking eventuality completions in block: {} ({b})", hex::encode(block.id())); @@ -144,9 +147,9 @@ impl> ContinuallyRan for EventualityTas for key in keys { let completed_eventualities = { - let mut eventualities = ScannerDb::::eventualities(&txn, key.key); + let mut eventualities = EventualityDb::::eventualities(&txn, key.key); let completed_eventualities = block.check_for_eventuality_resolutions(&mut eventualities); - ScannerDb::::set_eventualities(&mut txn, eventualities); + EventualityDb::::set_eventualities(&mut txn, key.key, &eventualities); completed_eventualities }; @@ -200,11 +203,11 @@ impl> ContinuallyRan for EventualityTas KeyFor::::from_bytes(&key_repr).unwrap() }; - let mut eventualities = ScannerDb::::eventualities(&txn, key); + let mut eventualities = EventualityDb::::eventualities(&txn, key); for new_eventuality in new_eventualities { eventualities.active_eventualities.insert(new_eventuality.lookup(), new_eventuality); } - ScannerDb::::set_eventualities(&mut txn, eventualities); + EventualityDb::::set_eventualities(&mut txn, key, &eventualities); } // Update the next to check block diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index d38c2ec3..fb6599b7 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -1,13 +1,11 @@ -use core::{marker::PhantomData, fmt::Debug, time::Duration}; +use core::{marker::PhantomData, fmt::Debug}; use std::collections::HashMap; -use tokio::sync::mpsc; - -use serai_db::DbTxn; +use serai_db::{Get, DbTxn}; use serai_primitives::{NetworkId, Coin, Amount}; -use primitives::{task::*, Block}; +use primitives::{task::*, ReceivedOutput, Block}; // Logic for deciding where in its lifetime a multisig is. mod lifetime; @@ -29,10 +27,10 @@ pub(crate) trait BlockExt: Block { } impl BlockExt for B { fn scan_for_outputs(&self, key: Self::Key) -> Vec { - let mut outputs = self.scan_for_outputs_unordered(); + let mut outputs = self.scan_for_outputs_unordered(key); outputs.sort_by(|a, b| { use core::cmp::{Ordering, Ord}; - let res = a.id().as_ref().cmp(&b.id().as_ref()); + let res = a.id().as_ref().cmp(b.id().as_ref()); assert!(res != Ordering::Equal, "scanned two outputs within a block with the same ID"); res }); @@ -103,7 +101,11 @@ pub trait ScannerFeed: Send + Sync { /// Fetch a block by its number. /// /// Panics if the block requested wasn't indexed. 
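  ///
  /// For illustration, callers in this crate pass the task's own database
  /// handle: `self.feed.block_by_number(&self.db, b).await?`. Requiring
  /// `Send + Sync` on the getter (below) keeps the future returned by this
  /// async fn `Send`, since the reference is held across `.await` points.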
- async fn block_by_number(&self, getter: &impl Get, number: u64) -> Result { + async fn block_by_number( + &self, + getter: &(impl Send + Sync + Get), + number: u64, + ) -> Result { let block = match self.unchecked_block_by_number(number).await { Ok(block) => block, Err(e) => Err(format!("couldn't fetch block {number}: {e:?}"))?, @@ -111,8 +113,8 @@ pub trait ScannerFeed: Send + Sync { // Check the ID of this block is the expected ID { - let expected = - crate::index::IndexDb::block_id(&self.db, number).expect("requested a block which wasn't indexed"); + let expected = crate::index::IndexDb::block_id(getter, number) + .expect("requested a block which wasn't indexed"); if block.id() != expected { panic!( "finalized chain reorganized from {} to {} at {}", @@ -122,6 +124,8 @@ pub trait ScannerFeed: Send + Sync { ); } } + + Ok(block) } /// The cost to aggregate an input as of the specified block. @@ -146,7 +150,7 @@ type OutputFor = <::Block as Block>::Output; type EventualityFor = <::Block as Block>::Eventuality; /// The object responsible for accumulating outputs and planning new transactions. -pub trait Scheduler { +pub trait Scheduler: Send { /// Accumulate outputs into the scheduler, yielding the Eventualities now to be scanned for. /// /// The `Vec` used as the key in the returned HashMap should be the encoded key these diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs index ec87845f..8f37d7a6 100644 --- a/processor/scanner/src/report.rs +++ b/processor/scanner/src/report.rs @@ -7,7 +7,7 @@ use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; use primitives::ReceivedOutput; // TODO: Localize to ReportDb? -use crate::{db::ScannerDb, ScannerFeed, ContinuallyRan}; +use crate::{db::ScannerDb, index::IndexDb, ScannerFeed, ContinuallyRan}; /* This task produces Batches for notable blocks, with all InInstructions, in an ordered fashion. 
@@ -57,7 +57,7 @@ impl ContinuallyRan for ReportTask { // methods to be used in the future) in_instructions.sort_by(|a, b| { use core::cmp::{Ordering, Ord}; - let res = a.output.id().as_ref().cmp(&b.output.id().as_ref()); + let res = a.output.id().as_ref().cmp(b.output.id().as_ref()); assert!(res != Ordering::Equal); res }); @@ -66,8 +66,8 @@ impl ContinuallyRan for ReportTask { let network = S::NETWORK; let block_hash = - ScannerDb::::block_id(&txn, b).expect("reporting block we didn't save the ID for"); - let mut batch_id = ScannerDb::::acquire_batch_id(txn); + IndexDb::block_id(&txn, b).expect("reporting block we didn't save the ID for"); + let mut batch_id = ScannerDb::::acquire_batch_id(&mut txn); // start with empty batch let mut batches = @@ -83,7 +83,7 @@ impl ContinuallyRan for ReportTask { let instruction = batch.instructions.pop().unwrap(); // bump the id for the new batch - batch_id = ScannerDb::::acquire_batch_id(txn); + batch_id = ScannerDb::::acquire_batch_id(&mut txn); // make a new batch with this instruction included batches.push(Batch { diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index ddc1110e..7e59c92d 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -76,7 +76,7 @@ impl ContinuallyRan for ScanForOutputsTask { .expect("ScanForOutputsTask run before writing the start block"); for b in next_to_scan ..= latest_scannable { - let block = self.feed.block_by_number(b).await?; + let block = self.feed.block_by_number(&self.db, b).await?; log::info!("scanning block: {} ({b})", hex::encode(block.id())); @@ -173,7 +173,7 @@ impl ContinuallyRan for ScanForOutputsTask { }, (Some(return_addr), None) => { // Since there was no instruction here, return this since we parsed a return address - ScannerDb::::queue_return(&mut txn, b, return_addr, output); + ScannerDb::::queue_return(&mut txn, b, &return_addr, &output); continue; } // Since we didn't receive an instruction nor can we return this, move on From 2bddf002226d0e92d3abc6fa39eca1bab177a10d Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 27 Aug 2024 00:44:11 -0400 Subject: [PATCH 029/368] Don't expose IndexDb throughout the crate --- processor/primitives/src/task.rs | 2 +- processor/scanner/src/db.rs | 3 --- processor/scanner/src/index/mod.rs | 40 ++++++++++++++++++++++++++++-- processor/scanner/src/lib.rs | 3 +-- processor/scanner/src/report.rs | 5 ++-- 5 files changed, 42 insertions(+), 11 deletions(-) diff --git a/processor/primitives/src/task.rs b/processor/primitives/src/task.rs index a7d6153c..94a576a0 100644 --- a/processor/primitives/src/task.rs +++ b/processor/primitives/src/task.rs @@ -78,7 +78,7 @@ pub trait ContinuallyRan: Sized { } } Err(e) => { - log::debug!("{}", e); + log::warn!("{}", e); increase_sleep_before_next_task(&mut current_sleep_before_next_task); } } diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 18511222..42086681 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -143,9 +143,6 @@ impl ScannerDb { "setting start block but prior set start block" ); - crate::index::IndexDb::set_block(txn, start_block, id); - crate::index::IndexDb::set_latest_finalized_block(txn, start_block); - NextToScanForOutputsBlock::set(txn, &start_block); // We can receive outputs in this block, but any descending transactions will be in the next // block. This, with the check on-set, creates a bound that this value in the DB is non-zero. 
diff --git a/processor/scanner/src/index/mod.rs b/processor/scanner/src/index/mod.rs index 07801650..7c70eedc 100644 --- a/processor/scanner/src/index/mod.rs +++ b/processor/scanner/src/index/mod.rs @@ -1,11 +1,17 @@ -use serai_db::{DbTxn, Db}; +use serai_db::{Get, DbTxn, Db}; use primitives::{task::ContinuallyRan, BlockHeader}; use crate::ScannerFeed; mod db; -pub(crate) use db::IndexDb; +use db::IndexDb; + +/// Panics if an unindexed block's ID is requested. +pub(crate) fn block_id(getter: &impl Get, block_number: u64) -> [u8; 32] { + IndexDb::block_id(getter, block_number) + .unwrap_or_else(|| panic!("requested block ID for unindexed block {block_number}")) +} /* This processor should build its own index of the blockchain, yet only for finalized blocks which @@ -20,6 +26,36 @@ struct IndexFinalizedTask { feed: S, } +impl IndexFinalizedTask { + pub(crate) async fn new(mut db: D, feed: S, start_block: u64) -> Self { + if IndexDb::block_id(&db, start_block).is_none() { + // Fetch the block for its ID + let block = { + let mut delay = Self::DELAY_BETWEEN_ITERATIONS; + loop { + match feed.unchecked_block_header_by_number(start_block).await { + Ok(block) => break block, + Err(e) => { + log::warn!("IndexFinalizedTask couldn't fetch start block {start_block}: {e:?}"); + tokio::time::sleep(core::time::Duration::from_secs(delay)).await; + delay += Self::DELAY_BETWEEN_ITERATIONS; + delay = delay.min(Self::MAX_DELAY_BETWEEN_ITERATIONS); + } + }; + } + }; + + // Initialize the DB + let mut txn = db.txn(); + IndexDb::set_block(&mut txn, start_block, block.id()); + IndexDb::set_latest_finalized_block(&mut txn, start_block); + txn.commit(); + } + + Self { db, feed } + } +} + #[async_trait::async_trait] impl ContinuallyRan for IndexFinalizedTask { async fn run_iteration(&mut self) -> Result { diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index fb6599b7..a29f1069 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -113,8 +113,7 @@ pub trait ScannerFeed: Send + Sync { // Check the ID of this block is the expected ID { - let expected = crate::index::IndexDb::block_id(getter, number) - .expect("requested a block which wasn't indexed"); + let expected = crate::index::block_id(getter, number); if block.id() != expected { panic!( "finalized chain reorganized from {} to {} at {}", diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs index 8f37d7a6..f2caf692 100644 --- a/processor/scanner/src/report.rs +++ b/processor/scanner/src/report.rs @@ -7,7 +7,7 @@ use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; use primitives::ReceivedOutput; // TODO: Localize to ReportDb? -use crate::{db::ScannerDb, index::IndexDb, ScannerFeed, ContinuallyRan}; +use crate::{db::ScannerDb, index, ScannerFeed, ContinuallyRan}; /* This task produces Batches for notable blocks, with all InInstructions, in an ordered fashion. 
@@ -65,8 +65,7 @@ impl ContinuallyRan for ReportTask { }; let network = S::NETWORK; - let block_hash = - IndexDb::block_id(&txn, b).expect("reporting block we didn't save the ID for"); + let block_hash = index::block_id(&txn, b); let mut batch_id = ScannerDb::::acquire_batch_id(&mut txn); // start with empty batch From 6196642beb6846afce8e60f64b5e8aa868845416 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 27 Aug 2024 01:54:49 -0400 Subject: [PATCH 030/368] Add a DbChannel between scan and eventuality task --- processor/scanner/src/db.rs | 131 ++++++++++++++++------- processor/scanner/src/eventuality/mod.rs | 47 +++++--- processor/scanner/src/lib.rs | 20 +++- processor/scanner/src/scan.rs | 31 ++++-- 4 files changed, 168 insertions(+), 61 deletions(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 42086681..3ea41161 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -3,13 +3,13 @@ use std::io; use scale::Encode; use borsh::{BorshSerialize, BorshDeserialize}; -use serai_db::{Get, DbTxn, create_db}; +use serai_db::{Get, DbTxn, create_db, db_channel}; use serai_in_instructions_primitives::InInstructionWithBalance; use primitives::{ReceivedOutput, BorshG}; -use crate::{lifetime::LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor}; +use crate::{lifetime::LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, Return}; // The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. trait Borshy: BorshSerialize + BorshDeserialize {} @@ -76,8 +76,6 @@ create_db!( NotableBlock: (number: u64) -> (), SerializedQueuedOutputs: (block_number: u64) -> Vec, - SerializedForwardedOutputsIndex: (block_number: u64) -> Vec, - SerializedForwardedOutput: (output_id: &[u8]) -> Vec, SerializedOutputs: (block_number: u64) -> Vec, } ); @@ -209,15 +207,6 @@ impl ScannerDb { todo!("TODO") } - pub(crate) fn queue_return( - txn: &mut impl DbTxn, - block_queued_from: u64, - return_addr: &AddressFor, - output: &OutputFor, - ) { - todo!("TODO") - } - pub(crate) fn queue_output_until_block( txn: &mut impl DbTxn, queue_for_block: u64, @@ -229,26 +218,6 @@ impl ScannerDb { SerializedQueuedOutputs::set(txn, queue_for_block, &outputs); } - pub(crate) fn save_output_being_forwarded( - txn: &mut impl DbTxn, - block_forwarded_from: u64, - output: &OutputWithInInstruction, - ) { - let mut buf = Vec::with_capacity(128); - output.write(&mut buf).unwrap(); - - let id = output.output.id(); - - // Save this to an index so we can later fetch all outputs to forward - let mut forwarded_outputs = SerializedForwardedOutputsIndex::get(txn, block_forwarded_from) - .unwrap_or(Vec::with_capacity(32)); - forwarded_outputs.extend(id.as_ref()); - SerializedForwardedOutputsIndex::set(txn, block_forwarded_from, &forwarded_outputs); - - // Save the output itself - SerializedForwardedOutput::set(txn, id.as_ref(), &buf); - } - pub(crate) fn flag_notable(txn: &mut impl DbTxn, block_number: u64) { assert!( NextToPotentiallyReportBlock::get(txn).unwrap() <= block_number, @@ -287,11 +256,99 @@ impl ScannerDb { NotableBlock::get(getter, number).is_some() } - pub(crate) fn take_queued_returns(txn: &mut impl DbTxn, block_number: u64) -> Vec> { - todo!("TODO") - } - pub(crate) fn acquire_batch_id(txn: &mut impl DbTxn) -> u32 { todo!("TODO") } + + pub(crate) fn return_address_and_in_instruction_for_forwarded_output( + getter: &impl Get, + output: & as ReceivedOutput, AddressFor>>::Id, + ) -> Option<(Option>, InInstructionWithBalance)> { + todo!("TODO") + } +} + 
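// An illustrative sketch of the channel pattern introduced below, assuming
// the `db_channel!` macro generates `send`/`try_recv` with the signatures
// used in this file. The scan task sends within its transaction, the
// eventuality task receives within its own, and the database itself thereby
// acts as the durable, ordered queue between the two tasks:
//
//   // In the scan task, inside its txn:
//   ScannedBlock::send(txn, (), &scan_data);
//   // In the eventuality task, inside a later txn:
//   let data = ScannedBlock::try_recv(txn, ())
//     .expect("receiving data for a scanned block not yet sent");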
+/// The data produced by scanning a block. +/// +/// This is the sender's version which includes the forwarded outputs with their InInstructions, +/// which need to be saved to the database for later retrieval. +pub(crate) struct SenderScanData { + /// The block number. + pub(crate) block_number: u64, + /// The received outputs which should be accumulated into the scheduler. + pub(crate) received_external_outputs: Vec>, + /// The outputs which need to be forwarded. + pub(crate) forwards: Vec>, + /// The outputs which need to be returned. + pub(crate) returns: Vec>, +} + +/// The data produced by scanning a block. +/// +/// This is the receiver's version which doesn't include the forwarded outputs' InInstructions, as +/// the Eventuality task doesn't need it to process this block. +pub(crate) struct ReceiverScanData { + /// The block number. + pub(crate) block_number: u64, + /// The received outputs which should be accumulated into the scheduler. + pub(crate) received_external_outputs: Vec>, + /// The outputs which need to be forwarded. + pub(crate) forwards: Vec>, + /// The outputs which need to be returned. + pub(crate) returns: Vec>, +} + +#[derive(BorshSerialize, BorshDeserialize)] +pub(crate) struct SerializedScanData { + pub(crate) block_number: u64, + pub(crate) data: Vec, +} + +db_channel! { + ScannerScanEventuality { + ScannedBlock: (empty_key: ()) -> SerializedScanData, + } +} + +pub(crate) struct ScanToEventualityDb(PhantomData); +impl ScanToEventualityDb { + pub(crate) fn send_scan_data(txn: &mut impl DbTxn, block_number: u64, data: &SenderScanData) { + /* + SerializedForwardedOutputsIndex: (block_number: u64) -> Vec, + SerializedForwardedOutput: (output_id: &[u8]) -> Vec, + + pub(crate) fn save_output_being_forwarded( + txn: &mut impl DbTxn, + block_forwarded_from: u64, + output: &OutputWithInInstruction, + ) { + let mut buf = Vec::with_capacity(128); + output.write(&mut buf).unwrap(); + + let id = output.output.id(); + + // Save this to an index so we can later fetch all outputs to forward + let mut forwarded_outputs = SerializedForwardedOutputsIndex::get(txn, block_forwarded_from) + .unwrap_or(Vec::with_capacity(32)); + forwarded_outputs.extend(id.as_ref()); + SerializedForwardedOutputsIndex::set(txn, block_forwarded_from, &forwarded_outputs); + + // Save the output itself + SerializedForwardedOutput::set(txn, id.as_ref(), &buf); + } + */ + + ScannedBlock::send(txn, (), todo!("TODO")); + } + pub(crate) fn recv_scan_data(txn: &mut impl DbTxn, block_number: u64) -> ReceiverScanData { + let data = + ScannedBlock::try_recv(txn, ()).expect("receiving data for a scanned block not yet sent"); + assert_eq!( + block_number, data.block_number, + "received data for a scanned block distinct than expected" + ); + let data = &data.data; + + todo!("TODO") + } } diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 3d70d650..4f5fbe63 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -2,11 +2,13 @@ use group::GroupEncoding; use serai_db::{DbTxn, Db}; -use primitives::{OutputType, ReceivedOutput, Eventuality, Block}; +use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Eventuality, Block}; // TODO: Localize to EventualityDb? 
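// Note, for illustration: `SenderScanData` (in db.rs) carries forwards as
// full `OutputWithInInstruction`s, so each InInstruction can be recovered
// once its forward resolves, whereas `ReceiverScanData` carries only the
// bare outputs, which is all this task needs to process a block.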
use crate::{ - lifetime::LifetimeStage, db::ScannerDb, BlockExt, ScannerFeed, KeyFor, Scheduler, ContinuallyRan, + lifetime::LifetimeStage, + db::{OutputWithInInstruction, ReceiverScanData, ScannerDb, ScanToEventualityDb}, + BlockExt, ScannerFeed, KeyFor, SchedulerUpdate, Scheduler, }; mod db; @@ -137,13 +139,12 @@ impl> ContinuallyRan for EventualityTas let mut txn = self.db.txn(); - // Fetch the External outputs we reported, and therefore should yield after handling this - // block - let mut outputs = ScannerDb::::in_instructions(&txn, b) - .expect("handling eventualities/outputs for block which didn't set its InInstructions") - .into_iter() - .map(|output| output.output) - .collect::>(); + // Fetch the data from the scanner + let scan_data = ScanToEventualityDb::recv_scan_data(&mut txn, b); + assert_eq!(scan_data.block_number, b); + let ReceiverScanData { block_number: _, received_external_outputs, forwards, returns } = + scan_data; + let mut outputs = received_external_outputs; for key in keys { let completed_eventualities = { @@ -184,17 +185,37 @@ impl> ContinuallyRan for EventualityTas } // Now, we iterate over all Forwarded outputs and queue their InInstructions - todo!("TODO"); + for output in + non_external_outputs.iter().filter(|output| output.kind() == OutputType::Forwarded) + { + let Some(eventuality) = completed_eventualities.get(&output.transaction_id()) else { + // Output sent to the forwarding address yet not actually forwarded + continue; + }; + let Some(forwarded) = eventuality.forwarded_output() else { + // This was a TX made by us, yet someone burned to the forwarding address + continue; + }; + + let (return_address, in_instruction) = + ScannerDb::::return_address_and_in_instruction_for_forwarded_output( + &txn, &forwarded, + ) + .expect("forwarded an output yet didn't save its InInstruction to the DB"); + ScannerDb::::queue_output_until_block( + &mut txn, + b + S::WINDOW_LENGTH, + &OutputWithInInstruction { output: output.clone(), return_address, in_instruction }, + ); + } // Accumulate all of these outputs outputs.extend(non_external_outputs); } - let outputs_to_return = ScannerDb::::take_queued_returns(&mut txn, b); - // TODO: This also has to intake Burns let new_eventualities = - self.scheduler.accumulate_outputs_and_return_outputs(&mut txn, outputs, outputs_to_return); + self.scheduler.update(&mut txn, SchedulerUpdate { outputs, forwards, returns }); for (key, new_eventualities) in new_eventualities { let key = { let mut key_repr = as GroupEncoding>::Repr::default(); diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index a29f1069..ef295471 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -148,17 +148,29 @@ type AddressFor = <::Block as Block>::Address; type OutputFor = <::Block as Block>::Output; type EventualityFor = <::Block as Block>::Eventuality; +/// A return to occur. +pub struct Return { + address: AddressFor, + output: OutputFor, +} + +/// An update for the scheduler. +pub struct SchedulerUpdate { + outputs: Vec>, + forwards: Vec>, + returns: Vec>, +} + /// The object responsible for accumulating outputs and planning new transactions. pub trait Scheduler: Send { /// Accumulate outputs into the scheduler, yielding the Eventualities now to be scanned for. /// - /// The `Vec` used as the key in the returned HashMap should be the encoded key these + /// The `Vec` used as the key in the returned HashMap should be the encoded key the /// Eventualities are for. 
- fn accumulate_outputs_and_return_outputs( + fn update( &mut self, txn: &mut impl DbTxn, - outputs: Vec>, - outputs_to_return: Vec>, + update: SchedulerUpdate, ) -> HashMap, Vec>>; } diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index 7e59c92d..d8312e3b 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -11,8 +11,8 @@ use primitives::{OutputType, ReceivedOutput, Block}; // TODO: Localize to ScanDb? use crate::{ lifetime::LifetimeStage, - db::{OutputWithInInstruction, ScannerDb}, - BlockExt, ScannerFeed, AddressFor, OutputFor, ContinuallyRan, + db::{OutputWithInInstruction, SenderScanData, ScannerDb, ScanToEventualityDb}, + BlockExt, ScannerFeed, AddressFor, OutputFor, Return, ContinuallyRan, }; // Construct an InInstruction from an external output. @@ -86,6 +86,12 @@ impl ContinuallyRan for ScanForOutputsTask { let mut txn = self.db.txn(); + let mut scan_data = SenderScanData { + block_number: b, + received_external_outputs: vec![], + forwards: vec![], + returns: vec![], + }; let mut in_instructions = ScannerDb::::take_queued_outputs(&mut txn, b); // Scan for each key @@ -171,13 +177,21 @@ impl ContinuallyRan for ScanForOutputsTask { return_address, in_instruction: InInstructionWithBalance { instruction, balance: balance_to_use }, }, - (Some(return_addr), None) => { + (Some(address), None) => { // Since there was no instruction here, return this since we parsed a return address - ScannerDb::::queue_return(&mut txn, b, &return_addr, &output); + if key.stage != LifetimeStage::Finishing { + scan_data.returns.push(Return { address, output }); + } + continue; + } + // Since we didn't receive an instruction nor can we return this, queue this for + // accumulation and move on + (None, None) => { + if key.stage != LifetimeStage::Finishing { + scan_data.received_external_outputs.push(output); + } continue; } - // Since we didn't receive an instruction nor can we return this, move on - (None, None) => continue, }; // Drop External outputs if they're to a multisig which won't report them @@ -201,7 +215,7 @@ impl ContinuallyRan for ScanForOutputsTask { LifetimeStage::Forwarding => { // When the forwarded output appears, we can see which Plan it's associated with and // from there recover this output - ScannerDb::::save_output_being_forwarded(&mut txn, b, &output_with_in_instruction); + scan_data.forwards.push(output_with_in_instruction); continue; } // We should drop these as we should not be handling new External outputs at this @@ -213,10 +227,13 @@ impl ContinuallyRan for ScanForOutputsTask { // Ensures we didn't miss a `continue` above assert!(matches!(key.stage, LifetimeStage::Active | LifetimeStage::UsingNewForChange)); + scan_data.received_external_outputs.push(output_with_in_instruction.output.clone()); in_instructions.push(output_with_in_instruction); } } + // Save the outputs to return + ScanToEventualityDb::::send_scan_data(&mut txn, b, &scan_data); // Save the in instructions ScannerDb::::set_in_instructions(&mut txn, b, in_instructions); // Update the next to scan block From 75251f04b4602452319af76003438699c33cb959 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 27 Aug 2024 02:14:59 -0400 Subject: [PATCH 031/368] Use a channel for the InInstructions It's still unclear how we'll handle refunding failed InInstructions at this time. Presumably, extending the InInstruction channel with the associated output ID? 
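One hypothetical shape for that extension (the type below is illustrative, not
part of this patch; it mirrors the Borsh-serialized channel types this patch
adds to db.rs):

    use borsh::{BorshSerialize, BorshDeserialize};
    use serai_in_instructions_primitives::InInstructionWithBalance;

    /// Hypothetical: an InInstruction paired with the ID of the output it
    /// was parsed from, letting a failed instruction later be refunded to
    /// that output's return address.
    #[derive(BorshSerialize, BorshDeserialize)]
    struct InInstructionWithOutputId {
      in_instruction: InInstructionWithBalance,
      // Kept as bytes since the concrete output ID type depends on the
      // ScannerFeed (a simplification for this sketch).
      output_id: Vec<u8>,
    }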
--- processor/scanner/src/db.rs | 67 ++++++++++++++++++++------------- processor/scanner/src/report.rs | 35 +++++++---------- processor/scanner/src/scan.rs | 30 ++++++++++++--- 3 files changed, 79 insertions(+), 53 deletions(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 3ea41161..b4d7c27b 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -226,32 +226,6 @@ impl ScannerDb { NotableBlock::set(txn, block_number, &()); } - // TODO: Use a DbChannel here, and send the instructions to the report task and the outputs to - // the eventuality task? That way this cleans up after itself - pub(crate) fn set_in_instructions( - txn: &mut impl DbTxn, - block_number: u64, - outputs: Vec>, - ) { - if !outputs.is_empty() { - // Set this block as notable - NotableBlock::set(txn, block_number, &()); - } - - let mut buf = Vec::with_capacity(outputs.len() * 128); - for output in outputs { - output.write(&mut buf).unwrap(); - } - SerializedOutputs::set(txn, block_number, &buf); - } - - pub(crate) fn in_instructions( - getter: &impl Get, - block_number: u64, - ) -> Option>> { - todo!("TODO") - } - pub(crate) fn is_block_notable(getter: &impl Get, number: u64) -> bool { NotableBlock::get(getter, number).is_some() } @@ -352,3 +326,44 @@ impl ScanToEventualityDb { todo!("TODO") } } + +#[derive(BorshSerialize, BorshDeserialize)] +pub(crate) struct BlockBoundInInstructions { + pub(crate) block_number: u64, + pub(crate) in_instructions: Vec, +} + +db_channel! { + ScannerScanReport { + InInstructions: (empty_key: ()) -> BlockBoundInInstructions, + } +} + +pub(crate) struct ScanToReportDb(PhantomData); +impl ScanToReportDb { + pub(crate) fn send_in_instructions( + txn: &mut impl DbTxn, + block_number: u64, + in_instructions: Vec, + ) { + if !in_instructions.is_empty() { + // Set this block as notable + NotableBlock::set(txn, block_number, &()); + } + + InInstructions::send(txn, (), &BlockBoundInInstructions { block_number, in_instructions }); + } + + pub(crate) fn recv_in_instructions( + txn: &mut impl DbTxn, + block_number: u64, + ) -> Vec { + let data = InInstructions::try_recv(txn, ()) + .expect("receiving InInstructions for a scanned block not yet sent"); + assert_eq!( + block_number, data.block_number, + "received InInstructions for a scanned block distinct than expected" + ); + data.in_instructions + } +} diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs index f2caf692..39a72106 100644 --- a/processor/scanner/src/report.rs +++ b/processor/scanner/src/report.rs @@ -4,10 +4,11 @@ use serai_db::{DbTxn, Db}; use serai_primitives::BlockHash; use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; -use primitives::ReceivedOutput; - -// TODO: Localize to ReportDb? -use crate::{db::ScannerDb, index, ScannerFeed, ContinuallyRan}; +// TODO: Localize to Report? +use crate::{ + db::{ScannerDb, ScanToReportDb}, + index, ScannerFeed, ContinuallyRan, +}; /* This task produces Batches for notable blocks, with all InInstructions, in an ordered fashion. 
@@ -47,23 +48,15 @@ impl ContinuallyRan for ReportTask { for b in next_to_potentially_report ..= highest_reportable { let mut txn = self.db.txn(); + // Receive the InInstructions for this block + // We always do this as we can't trivially tell if we should recv InInstructions before we do + let in_instructions = ScanToReportDb::::recv_in_instructions(&mut txn, b); + let notable = ScannerDb::::is_block_notable(&txn, b); + if !notable { + assert!(in_instructions.is_empty(), "block wasn't notable yet had InInstructions"); + } // If this block is notable, create the Batch(s) for it - if ScannerDb::::is_block_notable(&txn, b) { - let in_instructions = { - let mut in_instructions = ScannerDb::::in_instructions(&txn, b) - .expect("reporting block which didn't set its InInstructions"); - // Sort these before reporting them in case anything we did is non-deterministic/to have - // a well-defined order (not implicit to however we got this result, enabling different - // methods to be used in the future) - in_instructions.sort_by(|a, b| { - use core::cmp::{Ordering, Ord}; - let res = a.output.id().as_ref().cmp(b.output.id().as_ref()); - assert!(res != Ordering::Equal); - res - }); - in_instructions - }; - + if notable { let network = S::NETWORK; let block_hash = index::block_id(&txn, b); let mut batch_id = ScannerDb::::acquire_batch_id(&mut txn); @@ -74,7 +67,7 @@ impl ContinuallyRan for ReportTask { for instruction in in_instructions { let batch = batches.last_mut().unwrap(); - batch.instructions.push(instruction.in_instruction); + batch.instructions.push(instruction); // check if batch is over-size if batch.encode().len() > MAX_BATCH_SIZE { diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index d8312e3b..861a9725 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -11,7 +11,7 @@ use primitives::{OutputType, ReceivedOutput, Block}; // TODO: Localize to ScanDb? 
use crate::{ lifetime::LifetimeStage, - db::{OutputWithInInstruction, SenderScanData, ScannerDb, ScanToEventualityDb}, + db::{OutputWithInInstruction, SenderScanData, ScannerDb, ScanToReportDb, ScanToEventualityDb}, BlockExt, ScannerFeed, AddressFor, OutputFor, Return, ContinuallyRan, }; @@ -92,7 +92,25 @@ impl ContinuallyRan for ScanForOutputsTask { forwards: vec![], returns: vec![], }; - let mut in_instructions = ScannerDb::::take_queued_outputs(&mut txn, b); + let mut in_instructions = vec![]; + + let queued_outputs = { + let mut queued_outputs = ScannerDb::::take_queued_outputs(&mut txn, b); + + // Sort the queued outputs in case they weren't queued in a deterministic fashion + queued_outputs.sort_by(|a, b| { + use core::cmp::{Ordering, Ord}; + let res = a.output.id().as_ref().cmp(b.output.id().as_ref()); + assert!(res != Ordering::Equal); + res + }); + + queued_outputs + }; + for queued_output in queued_outputs { + scan_data.received_external_outputs.push(queued_output.output); + in_instructions.push(queued_output.in_instruction); + } // Scan for each key for key in keys { @@ -228,14 +246,14 @@ impl ContinuallyRan for ScanForOutputsTask { assert!(matches!(key.stage, LifetimeStage::Active | LifetimeStage::UsingNewForChange)); scan_data.received_external_outputs.push(output_with_in_instruction.output.clone()); - in_instructions.push(output_with_in_instruction); + in_instructions.push(output_with_in_instruction.in_instruction); } } - // Save the outputs to return + // Send the scan data to the eventuality task ScanToEventualityDb::::send_scan_data(&mut txn, b, &scan_data); - // Save the in instructions - ScannerDb::::set_in_instructions(&mut txn, b, in_instructions); + // Send the in instructions to the report task + ScanToReportDb::::send_in_instructions(&mut txn, b, in_instructions); // Update the next to scan block ScannerDb::::set_next_to_scan_for_outputs_block(&mut txn, b + 1); txn.commit(); From 9cebdf7c68ea4c91c968472c9232a1adfca3d77f Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 27 Aug 2024 02:21:22 -0400 Subject: [PATCH 032/368] Add sorts for safety even upon non-determinism --- processor/scanner/src/eventuality/mod.rs | 9 ++++++--- processor/scanner/src/lib.rs | 21 ++++++++++++++------- processor/scanner/src/scan.rs | 13 +++---------- 3 files changed, 23 insertions(+), 20 deletions(-) diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 4f5fbe63..20e24112 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -8,7 +8,7 @@ use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Eventuality, use crate::{ lifetime::LifetimeStage, db::{OutputWithInInstruction, ReceiverScanData, ScannerDb, ScanToEventualityDb}, - BlockExt, ScannerFeed, KeyFor, SchedulerUpdate, Scheduler, + BlockExt, ScannerFeed, KeyFor, SchedulerUpdate, Scheduler, sort_outputs, }; mod db; @@ -214,8 +214,11 @@ impl> ContinuallyRan for EventualityTas } // TODO: This also has to intake Burns - let new_eventualities = - self.scheduler.update(&mut txn, SchedulerUpdate { outputs, forwards, returns }); + let mut scheduler_update = SchedulerUpdate { outputs, forwards, returns }; + scheduler_update.outputs.sort_by(sort_outputs); + scheduler_update.forwards.sort_by(sort_outputs); + scheduler_update.returns.sort_by(|a, b| sort_outputs(&a.output, &b.output)); + let new_eventualities = self.scheduler.update(&mut txn, scheduler_update); for (key, new_eventualities) in new_eventualities { let key = { let mut 
key_repr = as GroupEncoding>::Repr::default(); diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index ef295471..d245e255 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -1,11 +1,13 @@ use core::{marker::PhantomData, fmt::Debug}; use std::collections::HashMap; +use group::GroupEncoding; + use serai_db::{Get, DbTxn}; use serai_primitives::{NetworkId, Coin, Amount}; -use primitives::{task::*, ReceivedOutput, Block}; +use primitives::{task::*, Address, ReceivedOutput, Block}; // Logic for deciding where in its lifetime a multisig is. mod lifetime; @@ -21,6 +23,16 @@ mod eventuality; /// Task which reports `Batch`s to Substrate. mod report; +pub(crate) fn sort_outputs>( + a: &O, + b: &O, +) -> core::cmp::Ordering { + use core::cmp::{Ordering, Ord}; + let res = a.id().as_ref().cmp(b.id().as_ref()); + assert!(res != Ordering::Equal, "two outputs within a collection had the same ID"); + res +} + /// Extension traits around Block. pub(crate) trait BlockExt: Block { fn scan_for_outputs(&self, key: Self::Key) -> Vec; @@ -28,12 +40,7 @@ pub(crate) trait BlockExt: Block { impl BlockExt for B { fn scan_for_outputs(&self, key: Self::Key) -> Vec { let mut outputs = self.scan_for_outputs_unordered(key); - outputs.sort_by(|a, b| { - use core::cmp::{Ordering, Ord}; - let res = a.id().as_ref().cmp(b.id().as_ref()); - assert!(res != Ordering::Equal, "scanned two outputs within a block with the same ID"); - res - }); + outputs.sort_by(sort_outputs); outputs } } diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index 861a9725..8617ec18 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -6,13 +6,13 @@ use serai_in_instructions_primitives::{ Shorthand, RefundableInInstruction, InInstruction, InInstructionWithBalance, }; -use primitives::{OutputType, ReceivedOutput, Block}; +use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Block}; // TODO: Localize to ScanDb? use crate::{ lifetime::LifetimeStage, db::{OutputWithInInstruction, SenderScanData, ScannerDb, ScanToReportDb, ScanToEventualityDb}, - BlockExt, ScannerFeed, AddressFor, OutputFor, Return, ContinuallyRan, + BlockExt, ScannerFeed, AddressFor, OutputFor, Return, sort_outputs, }; // Construct an InInstruction from an external output. 
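// Note, for illustration: `sort_outputs` (added to lib.rs above) imposes a
// canonical order on any nondeterministically-gathered collection of outputs
// by comparing their IDs, panicking if two share an ID, e.g.:
//
//   queued_outputs.sort_by(|a, b| sort_outputs(&a.output, &b.output));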
@@ -96,15 +96,8 @@ impl ContinuallyRan for ScanForOutputsTask { let queued_outputs = { let mut queued_outputs = ScannerDb::::take_queued_outputs(&mut txn, b); - // Sort the queued outputs in case they weren't queued in a deterministic fashion - queued_outputs.sort_by(|a, b| { - use core::cmp::{Ordering, Ord}; - let res = a.output.id().as_ref().cmp(b.output.id().as_ref()); - assert!(res != Ordering::Equal); - res - }); - + queued_outputs.sort_by(|a, b| sort_outputs(&a.output, &b.output)); queued_outputs }; for queued_output in queued_outputs { From a771fbe1c6ee23ce4213e6931ae9bd8bf099b2ba Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 27 Aug 2024 16:43:50 -0400 Subject: [PATCH 033/368] Logs, documentation, misc --- processor/scanner/src/db.rs | 31 ++- processor/scanner/src/eventuality/mod.rs | 31 ++- processor/scanner/src/lib.rs | 313 ++--------------------- processor/scanner/src/scan.rs | 2 +- 4 files changed, 69 insertions(+), 308 deletions(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index b4d7c27b..d53bf7c7 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -82,25 +82,34 @@ create_db!( pub(crate) struct ScannerDb(PhantomData); impl ScannerDb { - // activation_block_number is inclusive, so the key will be scanned for starting at the specified - // block + /// Queue a key. + /// + /// Keys may be queued whenever, so long as they're scheduled to activate `WINDOW_LENGTH` blocks + /// after the next block acknowledged after they've been set. There is no requirement that any + /// prior keys have had their processing completed (meaning what should be a length-2 vector may + /// be a length-n vector). + /// + /// A new key MUST NOT be queued to activate a block preceding the finishing of the key prior to + /// its prior. There MUST only be two keys active at one time. + /// + /// activation_block_number is inclusive, so the key will be scanned for starting at the + /// specified block. pub(crate) fn queue_key(txn: &mut impl DbTxn, activation_block_number: u64, key: KeyFor) { // Set this block as notable NotableBlock::set(txn, activation_block_number, &()); + // TODO: Panic if we've ever seen this key before + // Push the key let mut keys: Vec>>> = ActiveKeys::get(txn).unwrap_or(vec![]); - for key_i in &keys { - if key == key_i.key.0 { - panic!("queueing a key prior queued"); - } - } keys.push(SeraiKeyDbEntry { activation_block_number, key: BorshG(key) }); ActiveKeys::set(txn, &keys); } + /// Retire a key. + /// + /// The key retired must be the oldest key. There must be another key actively tracked. // TODO: This will be called from the Eventuality task yet this field is read by the scan task // We need to write the argument for its safety - // TODO: retire_key needs to set the notable block pub(crate) fn retire_key(txn: &mut impl DbTxn, key: KeyFor) { let mut keys: Vec>>> = ActiveKeys::get(txn).expect("retiring key yet no active keys"); @@ -110,6 +119,9 @@ impl ScannerDb { keys.remove(0); ActiveKeys::set(txn, &keys); } + /// Fetch the active keys, as of the next-to-scan-for-outputs Block. + /// + /// This means the scan task should scan for all keys returned by this. 
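  ///
  /// Worked example (illustrative): if key A activates at block 100 and key
  /// B at block 1,000, scanning block 999 yields [A], scanning block 1,000
  /// yields [A, B], and once A is retired at or before the next-to-scan
  /// block, only [B] remains. Per the invariants on `queue_key`, a third
  /// simultaneously active key never exists.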
pub(crate) fn active_keys_as_of_next_to_scan_for_outputs_block( getter: &impl Get, ) -> Option>>> { @@ -131,7 +143,7 @@ impl ScannerDb { ); keys.push(SeraiKey { key: raw_keys[i].key.0, stage, block_at_which_reporting_starts }); } - assert!(keys.len() <= 2); + assert!(keys.len() <= 2, "more than two keys active"); Some(keys) } @@ -152,7 +164,6 @@ impl ScannerDb { // We can only scan up to whatever block we've checked the Eventualities of, plus the window // length. Since this returns an inclusive bound, we need to subtract 1 // See `eventuality.rs` for more info - // TODO: Adjust based on register eventualities NextToCheckForEventualitiesBlock::get(getter).map(|b| b + S::WINDOW_LENGTH - 1) } diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 20e24112..3a472ce2 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -117,7 +117,7 @@ impl> ContinuallyRan for EventualityTas let block = self.feed.block_by_number(&self.db, b).await?; - log::info!("checking eventuality completions in block: {} ({b})", hex::encode(block.id())); + log::debug!("checking eventuality completions in block: {} ({b})", hex::encode(block.id())); /* This is proper as the keys for the next to scan block (at most `WINDOW_LENGTH` ahead, @@ -147,13 +147,21 @@ impl> ContinuallyRan for EventualityTas let mut outputs = received_external_outputs; for key in keys { - let completed_eventualities = { + let (eventualities_is_empty, completed_eventualities) = { let mut eventualities = EventualityDb::::eventualities(&txn, key.key); let completed_eventualities = block.check_for_eventuality_resolutions(&mut eventualities); EventualityDb::::set_eventualities(&mut txn, key.key, &eventualities); - completed_eventualities + (eventualities.active_eventualities.is_empty(), completed_eventualities) }; + for (tx, completed_eventuality) in completed_eventualities { + log::info!( + "eventuality {} resolved by {}", + hex::encode(completed_eventuality.id()), + hex::encode(tx.as_ref()) + ); + } + // Fetch all non-External outputs let mut non_external_outputs = block.scan_for_outputs(key.key); non_external_outputs.retain(|output| output.kind() != OutputType::External); @@ -213,7 +221,6 @@ impl> ContinuallyRan for EventualityTas outputs.extend(non_external_outputs); } - // TODO: This also has to intake Burns let mut scheduler_update = SchedulerUpdate { outputs, forwards, returns }; scheduler_update.outputs.sort_by(sort_outputs); scheduler_update.forwards.sort_by(sort_outputs); @@ -234,6 +241,22 @@ impl> ContinuallyRan for EventualityTas EventualityDb::::set_eventualities(&mut txn, key, &eventualities); } + for key in keys { + if key.stage == LifetimeStage::Finishing { + let eventualities = EventualityDb::::eventualities(&txn, key.key); + if eventualities.active_eventualities.is_empty() { + log::info!( + "key {} has finished and is being retired", + hex::encode(key.key.to_bytes().as_ref()) + ); + + ScannerDb::::flag_notable(&mut txn, b + S::WINDOW_LENGTH); + // TODO: Retire the key + todo!("TODO") + } + } + } + // Update the next to check block ScannerDb::::set_next_to_check_for_eventualities_block(&mut txn, next_to_check); txn.commit(); diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index d245e255..2d19207f 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -196,38 +196,34 @@ impl Scanner { /// /// This means this block was ordered on Serai in relation to `Burn` events, and all validators /// have 
achieved synchrony on it. - // TODO: If we're acknowledging block `b`, the Eventuality task was already eligible to check it - // for Eventualities. We need this to block until the Eventuality task has actually checked it. - // TODO: Does the prior TODO hold with how the callback is now handled? pub fn acknowledge_block( &mut self, + txn: &mut impl DbTxn, block_number: u64, - key_to_activate: Option<()>, - ) -> Vec> { + key_to_activate: Option>, + ) { + log::info!("acknowledging block {block_number}"); + assert!( + ScannerDb::::is_block_notable(txn, block_number), + "acknowledging a block which wasn't notable" + ); + ScannerDb::::set_highest_acknowledged_block(txn, block_number); + ScannerDb::::queue_key(txn, block_number + S::WINDOW_LENGTH); + } + + /// Queue Burns. + /// + /// The scanner only updates the scheduler with new outputs upon acknowledging a block. We can + /// safely queue Burns so long as they're only actually added once we've handled the outputs from + /// the block acknowledged prior to their queueing. + pub fn queue_burns(&mut self, txn: &mut impl DbTxn, burns: Vec<()>) { + let queue_as_of = ScannerDb::::highest_acknowledged_block(txn) + .expect("queueing Burns yet never acknowledged a block"); todo!("TODO") } } /* -#[derive(Clone, Debug)] -pub enum ScannerEvent { - // Block scanned - Block { - is_retirement_block: bool, - block: >::Id, - outputs: Vec, - }, - // Eventuality completion found on-chain - // TODO: Move this from a tuple - Completed( - Vec, - usize, - [u8; 32], - >::Id, - ::Completion, - ), -} - #[derive(Clone, Debug)] struct ScannerDb(PhantomData, PhantomData); impl ScannerDb { @@ -258,182 +254,8 @@ impl ScannerDb { .get(Self::scanned_block_key()) .map(|bytes| u64::from_le_bytes(bytes.try_into().unwrap()).try_into().unwrap()) } - - fn retirement_block_key(key: &::G) -> Vec { - Self::scanner_key(b"retirement_block", key.to_bytes()) - } - fn save_retirement_block( - txn: &mut D::Transaction<'_>, - key: &::G, - block: usize, - ) { - txn.put(Self::retirement_block_key(key), u64::try_from(block).unwrap().to_le_bytes()); - } - fn retirement_block(getter: &G, key: &::G) -> Option { - getter - .get(Self::retirement_block_key(key)) - .map(|bytes| usize::try_from(u64::from_le_bytes(bytes.try_into().unwrap())).unwrap()) - } } -impl ScannerHandle { - /// Acknowledge having handled a block. - /// - /// Creates a lock over the Scanner, preventing its independent scanning operations until - /// released. - /// - /// This must only be called on blocks which have been scanned in-memory. 
- pub async fn ack_block( - &mut self, - txn: &mut D::Transaction<'_>, - id: >::Id, - ) -> (bool, Vec) { - debug!("block {} acknowledged", hex::encode(&id)); - - let mut scanner = self.scanner.long_term_acquire().await; - - // Get the number for this block - let number = ScannerDb::::block_number(txn, &id) - .expect("main loop trying to operate on data we haven't scanned"); - log::trace!("block {} was {number}", hex::encode(&id)); - - let outputs = ScannerDb::::save_scanned_block(txn, number); - // This has a race condition if we try to ack a block we scanned on a prior boot, and we have - // yet to scan it on this boot - assert!(number <= scanner.ram_scanned.unwrap()); - for output in &outputs { - assert!(scanner.ram_outputs.remove(output.id().as_ref())); - } - - assert_eq!(scanner.need_ack.pop_front().unwrap(), number); - - self.held_scanner = Some(scanner); - - // Load the key from the DB, as it will have already been removed from RAM if retired - let key = ScannerDb::::keys(txn)[0].1; - let is_retirement_block = ScannerDb::::retirement_block(txn, &key) == Some(number); - if is_retirement_block { - ScannerDb::::retire_key(txn); - } - (is_retirement_block, outputs) - } - - pub async fn register_eventuality( - &mut self, - key: &[u8], - block_number: usize, - id: [u8; 32], - eventuality: N::Eventuality, - ) { - let mut lock; - // We won't use held_scanner if we're re-registering on boot - (if let Some(scanner) = self.held_scanner.as_mut() { - scanner - } else { - lock = Some(self.scanner.write().await); - lock.as_mut().unwrap().as_mut().unwrap() - }) - .eventualities - .get_mut(key) - .unwrap() - .register(block_number, id, eventuality) - } - - pub async fn release_lock(&mut self) { - self.scanner.restore(self.held_scanner.take().unwrap()).await - } -} - -impl Scanner { - #[allow(clippy::type_complexity, clippy::new_ret_no_self)] - pub fn new( - network: N, - db: D, - ) -> (ScannerHandle, Vec<(usize, ::G)>) { - let (multisig_completed_send, multisig_completed_recv) = mpsc::unbounded_channel(); - - let keys = ScannerDb::::keys(&db); - let mut eventualities = HashMap::new(); - for key in &keys { - eventualities.insert(key.1.to_bytes().as_ref().to_vec(), EventualitiesTracker::new()); - } - } - - // An async function, to be spawned on a task, to discover and report outputs - async fn run( - mut db: D, - network: N, - scanner_hold: ScannerHold, - mut multisig_completed: mpsc::UnboundedReceiver, - ) { - loop { - for block_being_scanned in (ram_scanned + 1) ..= latest_block_to_scan { - // Redo the checks for if we're too far ahead - { - let needing_ack = { - let scanner_lock = scanner_hold.read().await; - let scanner = scanner_lock.as_ref().unwrap(); - scanner.need_ack.front().copied() - }; - - if let Some(needing_ack) = needing_ack { - let limit = needing_ack + N::CONFIRMATIONS; - assert!(block_being_scanned <= limit); - if block_being_scanned == limit { - break; - } - } - } - - let Ok(block) = network.get_block(block_being_scanned).await else { - warn!("couldn't get block {block_being_scanned}"); - break; - }; - let block_id = block.id(); - - info!("scanning block: {} ({block_being_scanned})", hex::encode(&block_id)); - - // Scan new blocks - // TODO: This lock acquisition may be long-lived... 
- let mut scanner_lock = scanner_hold.write().await; - let scanner = scanner_lock.as_mut().unwrap(); - - let mut has_activation = false; - let mut outputs = vec![]; - let mut completion_block_numbers = vec![]; - for (activation_number, key) in scanner.keys.clone() { - if activation_number > block_being_scanned { - continue; - } - - if activation_number == block_being_scanned { - has_activation = true; - } - - for (id, (block_number, tx, completion)) in network - .get_eventuality_completions(scanner.eventualities.get_mut(&key_vec).unwrap(), &block) - .await - { - info!( - "eventuality {} resolved by {}, as found on chain", - hex::encode(id), - hex::encode(tx.as_ref()) - ); - - completion_block_numbers.push(block_number); - // This must be before the mission of ScannerEvent::Block, per commentary in mod.rs - if !scanner.emit(ScannerEvent::Completed( - key_vec.clone(), - block_number, - id, - tx, - completion, - )) { - return; - } - } - } - // Panic if we've already seen these outputs for output in &outputs { let id = output.id(); @@ -482,99 +304,4 @@ impl Scanner { } scanner.ram_outputs.insert(id); } - - // We could remove this, if instead of doing the first block which passed - // requirements + CONFIRMATIONS, we simply emitted an event for every block where - // `number % CONFIRMATIONS == 0` (once at the final stage for the existing multisig) - // There's no need at this point, yet the latter may be more suitable for modeling... - async fn check_multisig_completed( - db: &mut D, - multisig_completed: &mut mpsc::UnboundedReceiver, - block_number: usize, - ) -> bool { - match multisig_completed.recv().await { - None => { - info!("Scanner handler was dropped. Shutting down?"); - false - } - Some(completed) => { - // Set the retirement block as block_number + CONFIRMATIONS - if completed { - let mut txn = db.txn(); - // The retiring key is the earliest one still around - let retiring_key = ScannerDb::::keys(&txn)[0].1; - // This value is static w.r.t. 
the key - ScannerDb::::save_retirement_block( - &mut txn, - &retiring_key, - block_number + N::CONFIRMATIONS, - ); - txn.commit(); - } - true - } - } - } - - drop(scanner_lock); - // Now that we've dropped the Scanner lock, we need to handle the multisig_completed - // channel before we decide if this block should be fired or not - // (holding the Scanner risks a deadlock) - for block_number in completion_block_numbers { - if !check_multisig_completed::(&mut db, &mut multisig_completed, block_number).await - { - return; - }; - } - - // Reacquire the scanner - let mut scanner_lock = scanner_hold.write().await; - let scanner = scanner_lock.as_mut().unwrap(); - - // Only emit an event if any of the following is true: - // - This is an activation block - // - This is a retirement block - // - There's outputs - // as only those blocks are meaningful and warrant obtaining synchrony over - let is_retirement_block = - ScannerDb::::retirement_block(&db, &scanner.keys[0].1) == Some(block_being_scanned); - let sent_block = if has_activation || is_retirement_block || (!outputs.is_empty()) { - // Save the outputs to disk - let mut txn = db.txn(); - ScannerDb::::save_outputs(&mut txn, &block_id, &outputs); - txn.commit(); - - // Send all outputs - if !scanner.emit(ScannerEvent::Block { is_retirement_block, block: block_id, outputs }) { - return; - } - - // Since we're creating a Batch, mark it as needing ack - scanner.need_ack.push_back(block_being_scanned); - true - } else { - false - }; - - // Remove it from memory - if is_retirement_block { - let retired = scanner.keys.remove(0).1; - scanner.eventualities.remove(retired.to_bytes().as_ref()); - } - drop(scanner_lock); - // If we sent a Block event, once again check multisig_completed - if sent_block && - (!check_multisig_completed::( - &mut db, - &mut multisig_completed, - block_being_scanned, - ) - .await) - { - return; - } - } - } - } -} */ diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index 8617ec18..f176680e 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -245,7 +245,7 @@ impl ContinuallyRan for ScanForOutputsTask { // Send the scan data to the eventuality task ScanToEventualityDb::::send_scan_data(&mut txn, b, &scan_data); - // Send the in instructions to the report task + // Send the InInstructions to the report task ScanToReportDb::::send_in_instructions(&mut txn, b, in_instructions); // Update the next to scan block ScannerDb::::set_next_to_scan_for_outputs_block(&mut txn, b + 1); From 7c1025dbcbce97134fc5ce6a2cb9ea4017c920bc Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 28 Aug 2024 18:43:40 -0400 Subject: [PATCH 034/368] Implement key retiry --- processor/primitives/src/lib.rs | 13 ++- processor/scanner/src/db.rs | 113 ++++++++++++++--------- processor/scanner/src/eventuality/db.rs | 17 ++++ processor/scanner/src/eventuality/mod.rs | 82 ++++++++++------ processor/scanner/src/lib.rs | 10 +- processor/scanner/src/scan.rs | 17 ++-- 6 files changed, 165 insertions(+), 87 deletions(-) diff --git a/processor/primitives/src/lib.rs b/processor/primitives/src/lib.rs index b0b7ae04..7a8be219 100644 --- a/processor/primitives/src/lib.rs +++ b/processor/primitives/src/lib.rs @@ -45,15 +45,20 @@ pub trait Id: } impl Id for [u8; N] where [u8; N]: Default {} -/// A wrapper for a group element which implements the borsh traits. +/// A wrapper for a group element which implements the scale/borsh traits. 
#[derive(Clone, Copy, PartialEq, Eq, Debug)] -pub struct BorshG(pub G); -impl BorshSerialize for BorshG { +pub struct EncodableG(pub G); +impl Encode for EncodableG { + fn using_encoded R>(&self, f: F) -> R { + f(self.0.to_bytes().as_ref()) + } +} +impl BorshSerialize for EncodableG { fn serialize(&self, writer: &mut W) -> borsh::io::Result<()> { writer.write_all(self.0.to_bytes().as_ref()) } } -impl BorshDeserialize for BorshG { +impl BorshDeserialize for EncodableG { fn deserialize_reader(reader: &mut R) -> borsh::io::Result { let mut repr = G::Repr::default(); reader.read_exact(repr.as_mut())?; diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index d53bf7c7..a37e05f4 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -7,7 +7,7 @@ use serai_db::{Get, DbTxn, create_db, db_channel}; use serai_in_instructions_primitives::InInstructionWithBalance; -use primitives::{ReceivedOutput, BorshG}; +use primitives::{ReceivedOutput, EncodableG}; use crate::{lifetime::LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, Return}; @@ -24,6 +24,7 @@ struct SeraiKeyDbEntry { pub(crate) struct SeraiKey { pub(crate) key: K, pub(crate) stage: LifetimeStage, + pub(crate) activation_block_number: u64, pub(crate) block_at_which_reporting_starts: u64, } @@ -45,11 +46,10 @@ impl OutputWithInInstruction { create_db!( Scanner { ActiveKeys: () -> Vec>, + RetireAt: (key: K) -> u64, // The next block to scan for received outputs NextToScanForOutputsBlock: () -> u64, - // The next block to check for resolving eventualities - NextToCheckForEventualitiesBlock: () -> u64, // The next block to potentially report NextToPotentiallyReportBlock: () -> u64, // Highest acknowledged block @@ -95,29 +95,52 @@ impl ScannerDb { /// activation_block_number is inclusive, so the key will be scanned for starting at the /// specified block. pub(crate) fn queue_key(txn: &mut impl DbTxn, activation_block_number: u64, key: KeyFor) { - // Set this block as notable + // Set the block which has a key activate as notable NotableBlock::set(txn, activation_block_number, &()); // TODO: Panic if we've ever seen this key before // Push the key - let mut keys: Vec>>> = ActiveKeys::get(txn).unwrap_or(vec![]); - keys.push(SeraiKeyDbEntry { activation_block_number, key: BorshG(key) }); + let mut keys: Vec>>> = + ActiveKeys::get(txn).unwrap_or(vec![]); + keys.push(SeraiKeyDbEntry { activation_block_number, key: EncodableG(key) }); ActiveKeys::set(txn, &keys); } /// Retire a key. /// /// The key retired must be the oldest key. There must be another key actively tracked. 
-  // TODO: This will be called from the Eventuality task yet this field is read by the scan task
-  // We need to write the argument for its safety
-  pub(crate) fn retire_key(txn: &mut impl DbTxn, key: KeyFor<S>) {
-    let mut keys: Vec<SeraiKeyDbEntry<BorshG<KeyFor<S>>>> =
+  pub(crate) fn retire_key(txn: &mut impl DbTxn, at_block: u64, key: KeyFor<S>) {
+    // Set the block which has a key retire as notable
+    NotableBlock::set(txn, at_block, &());
+
+    let keys: Vec<SeraiKeyDbEntry<EncodableG<KeyFor<S>>>> =
       ActiveKeys::get(txn).expect("retiring key yet no active keys");
 
     assert!(keys.len() > 1, "retiring our only key");
     assert_eq!(keys[0].key.0, key, "not retiring the oldest key");
-    keys.remove(0);
-    ActiveKeys::set(txn, &keys);
+
+    RetireAt::set(txn, EncodableG(key), &at_block);
+  }
+  pub(crate) fn tidy_keys(txn: &mut impl DbTxn) {
+    let mut keys: Vec<SeraiKeyDbEntry<EncodableG<KeyFor<S>>>> =
+      ActiveKeys::get(txn).expect("retiring key yet no active keys");
+    let Some(key) = keys.first() else { return };
+
+    // Get the block we're scanning for next
+    let block_number = Self::next_to_scan_for_outputs_block(txn).expect(
+      "tidying keys despite never setting the next to scan for block (done on initialization)",
+    );
+    // If this key is scheduled for retirement...
+    if let Some(retire_at) = RetireAt::get(txn, key.key) {
+      // And is retired by/at this block...
+      if retire_at <= block_number {
+        // Remove it from the list of keys
+        let key = keys.remove(0);
+        ActiveKeys::set(txn, &keys);
+        // Also clean up the retirement block
+        RetireAt::del(txn, key.key);
+      }
+    }
   }
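To make the two-phase handshake above concrete: `retire_key` (run by the eventuality task) only marks the oldest key for retirement, while `tidy_keys` (run by the scan task) actually drops it once the next-to-scan block reaches the marked height. A minimal sketch of that flow, with in-memory stand-ins for the DB-backed `ActiveKeys` and `RetireAt` entries:

```rust
use std::collections::HashMap;

struct Keys {
  // (activation block number, key), oldest first
  active: Vec<(u64, [u8; 32])>,
  // key -> block at/after which it may be dropped
  retire_at: HashMap<[u8; 32], u64>,
}

impl Keys {
  // Mark the oldest key for retirement without yet removing it
  fn retire_key(&mut self, at_block: u64, key: [u8; 32]) {
    assert!(self.active.len() > 1, "retiring our only key");
    assert_eq!(self.active[0].1, key, "not retiring the oldest key");
    self.retire_at.insert(key, at_block);
  }
  // Drop the oldest key once the scanner has advanced to its retirement block
  fn tidy_keys(&mut self, next_to_scan: u64) {
    let Some(&(_, key)) = self.active.first() else { return };
    if self.retire_at.get(&key).is_some_and(|&at| at <= next_to_scan) {
      self.active.remove(0);
      self.retire_at.remove(&key);
    }
  }
}
```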
 
   /// Fetch the active keys, as of the next-to-scan-for-outputs Block.
   ///
@@ -129,9 +152,16 @@ impl<S: ScannerFeed> ScannerDb<S> {
     // If we've scanned block 1,000,000, we can't answer the active keys as of block 0
     let block_number = Self::next_to_scan_for_outputs_block(getter)?;
 
-    let raw_keys: Vec<SeraiKeyDbEntry<BorshG<KeyFor<S>>>> = ActiveKeys::get(getter)?;
+    let raw_keys: Vec<SeraiKeyDbEntry<EncodableG<KeyFor<S>>>> = ActiveKeys::get(getter)?;
     let mut keys = Vec::with_capacity(2);
     for i in 0 .. raw_keys.len() {
+      // Ensure this key isn't retired
+      if let Some(retire_at) = RetireAt::get(getter, raw_keys[i].key) {
+        if retire_at <= block_number {
+          continue;
+        }
+      }
+      // Ensure this key isn't yet to activate
       if block_number < raw_keys[i].activation_block_number {
         continue;
       }
@@ -141,7 +171,12 @@ impl<S: ScannerFeed> ScannerDb<S> {
         raw_keys[i].activation_block_number,
         raw_keys.get(i + 1).map(|key| key.activation_block_number),
       );
-      keys.push(SeraiKey { key: raw_keys[i].key.0, stage, block_at_which_reporting_starts });
+      keys.push(SeraiKey {
+        key: raw_keys[i].key.0,
+        stage,
+        activation_block_number: raw_keys[i].activation_block_number,
+        block_at_which_reporting_starts,
+      });
     }
     assert!(keys.len() <= 2, "more than two keys active");
     Some(keys)
@@ -154,19 +189,9 @@ impl<S: ScannerFeed> ScannerDb<S> {
     );
 
     NextToScanForOutputsBlock::set(txn, &start_block);
-    // We can receive outputs in this block, but any descending transactions will be in the next
-    // block. This, with the check on-set, creates a bound that this value in the DB is non-zero.
-    NextToCheckForEventualitiesBlock::set(txn, &(start_block + 1));
     NextToPotentiallyReportBlock::set(txn, &start_block);
   }
 
-  pub(crate) fn latest_scannable_block(getter: &impl Get) -> Option<u64> {
-    // We can only scan up to whatever block we've checked the Eventualities of, plus the window
-    // length. Since this returns an inclusive bound, we need to subtract 1
-    // See `eventuality.rs` for more info
-    NextToCheckForEventualitiesBlock::get(getter).map(|b| b + S::WINDOW_LENGTH - 1)
-  }
-
   pub(crate) fn set_next_to_scan_for_outputs_block(
     txn: &mut impl DbTxn,
     next_to_scan_for_outputs_block: u64,
@@ -177,20 +202,6 @@ impl<S: ScannerFeed> ScannerDb<S> {
     NextToScanForOutputsBlock::get(getter)
   }
 
-  pub(crate) fn set_next_to_check_for_eventualities_block(
-    txn: &mut impl DbTxn,
-    next_to_check_for_eventualities_block: u64,
-  ) {
-    assert!(
-      next_to_check_for_eventualities_block != 0,
-      "next to check for eventualities block was 0 when it's bound non-zero"
-    );
-    NextToCheckForEventualitiesBlock::set(txn, &next_to_check_for_eventualities_block);
-  }
-  pub(crate) fn next_to_check_for_eventualities_block(getter: &impl Get) -> Option<u64> {
-    NextToCheckForEventualitiesBlock::get(getter)
-  }
-
   pub(crate) fn set_next_to_potentially_report_block(
     txn: &mut impl DbTxn,
     next_to_potentially_report_block: u64,
@@ -229,7 +240,15 @@ impl<S: ScannerFeed> ScannerDb<S> {
     SerializedQueuedOutputs::set(txn, queue_for_block, &outputs);
   }
 
-  pub(crate) fn flag_notable(txn: &mut impl DbTxn, block_number: u64) {
+  /*
+    This is so verbosely named as the DB itself already flags upon external outputs. Specifically,
+    if any block yields External outputs to accumulate, we flag it as notable.
+
+    There is the slight edge case where some External outputs are queued for accumulation later.
+    We consider those outputs received as of the block they're queued to (maintaining the policy
+    that any block in which we receive outputs is notable).
+  */
+  pub(crate) fn flag_notable_due_to_non_external_output(txn: &mut impl DbTxn, block_number: u64) {
     assert!(
       NextToPotentiallyReportBlock::get(txn).unwrap() <= block_number,
       "already potentially reported a block we're only now flagging as notable"
     );
@@ -298,6 +317,17 @@ db_channel!
{ pub(crate) struct ScanToEventualityDb(PhantomData); impl ScanToEventualityDb { pub(crate) fn send_scan_data(txn: &mut impl DbTxn, block_number: u64, data: &SenderScanData) { + // If we received an External output to accumulate, or have an External output to forward + // (meaning we received an External output), or have an External output to return (again + // meaning we received an External output), set this block as notable due to receiving outputs + // The non-External output case is covered with `flag_notable_due_to_non_external_output` + if !(data.received_external_outputs.is_empty() && + data.forwards.is_empty() && + data.returns.is_empty()) + { + NotableBlock::set(txn, block_number, &()); + } + /* SerializedForwardedOutputsIndex: (block_number: u64) -> Vec, SerializedForwardedOutput: (output_id: &[u8]) -> Vec, @@ -357,11 +387,6 @@ impl ScanToReportDb { block_number: u64, in_instructions: Vec, ) { - if !in_instructions.is_empty() { - // Set this block as notable - NotableBlock::set(txn, block_number, &()); - } - InInstructions::send(txn, (), &BlockBoundInInstructions { block_number, in_instructions }); } diff --git a/processor/scanner/src/eventuality/db.rs b/processor/scanner/src/eventuality/db.rs index e379532d..baed33c4 100644 --- a/processor/scanner/src/eventuality/db.rs +++ b/processor/scanner/src/eventuality/db.rs @@ -13,12 +13,29 @@ impl Borshy for T {} create_db!( ScannerEventuality { + // The next block to check for resolving eventualities + NextToCheckForEventualitiesBlock: () -> u64, + SerializedEventualities: () -> Vec, } ); pub(crate) struct EventualityDb(PhantomData); impl EventualityDb { + pub(crate) fn set_next_to_check_for_eventualities_block( + txn: &mut impl DbTxn, + next_to_check_for_eventualities_block: u64, + ) { + assert!( + next_to_check_for_eventualities_block != 0, + "next-to-check-for-eventualities block was 0 when it's bound non-zero" + ); + NextToCheckForEventualitiesBlock::set(txn, &next_to_check_for_eventualities_block); + } + pub(crate) fn next_to_check_for_eventualities_block(getter: &impl Get) -> Option { + NextToCheckForEventualitiesBlock::get(getter) + } + pub(crate) fn set_eventualities( txn: &mut impl DbTxn, key: KeyFor, diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 3a472ce2..f682bf36 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -1,6 +1,6 @@ use group::GroupEncoding; -use serai_db::{DbTxn, Db}; +use serai_db::{Get, DbTxn, Db}; use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Eventuality, Block}; @@ -14,6 +14,16 @@ use crate::{ mod db; use db::EventualityDb; +/// The latest scannable block, which is determined by this task. +/// +/// This task decides when a key retires, which impacts the scan task. Accordingly, the scanner is +/// only allowed to scan `S::WINDOW_LENGTH - 1` blocks ahead so we can safely schedule keys to +/// retire `S::WINDOW_LENGTH` blocks out. +pub(crate) fn latest_scannable_block(getter: &impl Get) -> Option { + EventualityDb::::next_to_check_for_eventualities_block(getter) + .map(|b| b + S::WINDOW_LENGTH - 1) +} + /* When we scan a block, we receive outputs. 
When this block is acknowledged, we accumulate those outputs into some scheduler, potentially causing certain transactions to begin their signing @@ -64,6 +74,21 @@ struct EventualityTask> { scheduler: Sch, } +impl> EventualityTask { + pub(crate) fn new(mut db: D, feed: S, scheduler: Sch, start_block: u64) -> Self { + if EventualityDb::::next_to_check_for_eventualities_block(&db).is_none() { + // Initialize the DB + let mut txn = db.txn(); + // We can receive outputs in `start_block`, but any descending transactions will be in the + // next block + EventualityDb::::set_next_to_check_for_eventualities_block(&mut txn, start_block + 1); + txn.commit(); + } + + Self { db, feed, scheduler } + } +} + #[async_trait::async_trait] impl> ContinuallyRan for EventualityTask { async fn run_iteration(&mut self) -> Result { @@ -93,7 +118,7 @@ impl> ContinuallyRan for EventualityTas .expect("EventualityTask run before writing the start block"); // Fetch the next block to check - let next_to_check = ScannerDb::::next_to_check_for_eventualities_block(&self.db) + let next_to_check = EventualityDb::::next_to_check_for_eventualities_block(&self.db) .expect("EventualityTask run before writing the start block"); // Check all blocks @@ -121,21 +146,19 @@ impl> ContinuallyRan for EventualityTas /* This is proper as the keys for the next to scan block (at most `WINDOW_LENGTH` ahead, - which is `<= CONFIRMATIONS`) will be the keys to use here. + which is `<= CONFIRMATIONS`) will be the keys to use here, with only minor edge cases. - If we had added a new key (which hasn't actually actived by the block we're currently - working on), it won't have any Eventualities for at least `CONFIRMATIONS` blocks (so it'd - have no impact here). + This may include a key which has yet to activate by our perception. We can simply drop + those. - As for retiring a key, that's done on this task's timeline. We ensure we don't bork the - scanner by officially retiring the key `WINDOW_LENGTH` blocks in the future (ensuring the - scanner never has a malleable view of the keys). + This may not include a key which has retired by the next-to-scan block. This task is the + one which decides when to retire a key, and when it marks a key to be retired, it is done + with it. Accordingly, it's not an issue if such a key was dropped. 
 */
-      // TODO: Ensure the add key/remove key DB fns are called by the same task to prevent issues
-      // there
-      // TODO: On register eventuality, assert the above timeline assumptions
       let mut keys = ScannerDb::<S>::active_keys_as_of_next_to_scan_for_outputs_block(&self.db)
         .expect("scanning for a blockchain without any keys set");
+      // Since the next-to-scan block is ahead of us, drop keys which have yet to actually activate
+      keys.retain(|key| key.activation_block_number <= b);
 
       let mut txn = self.db.txn();
 
@@ -146,20 +169,16 @@ impl<D: Db, S: ScannerFeed, Sch: Scheduler<S>> ContinuallyRan for EventualityTas
         scan_data;
       let mut outputs = received_external_outputs;
 
-      for key in keys {
-        let (eventualities_is_empty, completed_eventualities) = {
+      for key in &keys {
+        let completed_eventualities = {
           let mut eventualities = EventualityDb::<S>::eventualities(&txn, key.key);
           let completed_eventualities = block.check_for_eventuality_resolutions(&mut eventualities);
           EventualityDb::<S>::set_eventualities(&mut txn, key.key, &eventualities);
-          (eventualities.active_eventualities.is_empty(), completed_eventualities)
+          completed_eventualities
         };
 
-        for (tx, completed_eventuality) in completed_eventualities {
-          log::info!(
-            "eventuality {} resolved by {}",
-            hex::encode(completed_eventuality.id()),
-            hex::encode(tx.as_ref())
-          );
+        for tx in completed_eventualities.keys() {
+          log::info!("eventuality resolved by {}", hex::encode(tx.as_ref()));
         }
 
         // Fetch all non-External outputs
@@ -221,10 +240,12 @@ impl<D: Db, S: ScannerFeed, Sch: Scheduler<S>> ContinuallyRan for EventualityTas
         outputs.extend(non_external_outputs);
       }
 
+      // Update the scheduler
       let mut scheduler_update = SchedulerUpdate { outputs, forwards, returns };
       scheduler_update.outputs.sort_by(sort_outputs);
       scheduler_update.forwards.sort_by(sort_outputs);
       scheduler_update.returns.sort_by(|a, b| sort_outputs(&a.output, &b.output));
+      // Intake the new Eventualities
       let new_eventualities = self.scheduler.update(&mut txn, scheduler_update);
       for (key, new_eventualities) in new_eventualities {
         let key = {
@@ -234,6 +255,11 @@ impl<D: Db, S: ScannerFeed, Sch: Scheduler<S>> ContinuallyRan for EventualityTas
           KeyFor::<S>::from_bytes(&key_repr).unwrap()
         };
 
+        keys
+          .iter()
+          .find(|serai_key| serai_key.key == key)
+          .expect("queueing eventuality for key which isn't active");
+
         let mut eventualities = EventualityDb::<S>::eventualities(&txn, key);
         for new_eventuality in new_eventualities {
           eventualities.active_eventualities.insert(new_eventuality.lookup(), new_eventuality);
@@ -241,24 +267,26 @@ impl<D: Db, S: ScannerFeed, Sch: Scheduler<S>> ContinuallyRan for EventualityTas
         EventualityDb::<S>::set_eventualities(&mut txn, key, &eventualities);
       }
 
-      for key in keys {
+      // Now that we've taken in any Eventualities caused, check if we're retiring any keys
+      for key in &keys {
        if key.stage == LifetimeStage::Finishing {
          let eventualities = EventualityDb::<S>::eventualities(&txn, key.key);
+          // TODO: This assumes the Scheduler is empty
          if eventualities.active_eventualities.is_empty() {
            log::info!(
              "key {} has finished and is being retired",
              hex::encode(key.key.to_bytes().as_ref())
            );
-            ScannerDb::<S>::flag_notable(&mut txn, b + S::WINDOW_LENGTH);
-            // TODO: Retire the key
-            todo!("TODO")
+            // Retire this key `WINDOW_LENGTH` blocks in the future to ensure the scan task never
+            // has a malleable view of the keys.
+ ScannerDb::::retire_key(&mut txn, b + S::WINDOW_LENGTH, key.key); } } } - // Update the next to check block - ScannerDb::::set_next_to_check_for_eventualities_block(&mut txn, next_to_check); + // Update the next-to-check block + EventualityDb::::set_next_to_check_for_eventualities_block(&mut txn, next_to_check); txn.commit(); } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 2d19207f..b363faa1 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -14,6 +14,7 @@ mod lifetime; // Database schema definition and associated functions. mod db; +use db::ScannerDb; // Task to index the blockchain, ensuring we don't reorganize finalized blocks. mod index; // Scans blocks for received coins. @@ -208,7 +209,9 @@ impl Scanner { "acknowledging a block which wasn't notable" ); ScannerDb::::set_highest_acknowledged_block(txn, block_number); - ScannerDb::::queue_key(txn, block_number + S::WINDOW_LENGTH); + if let Some(key_to_activate) = key_to_activate { + ScannerDb::::queue_key(txn, block_number + S::WINDOW_LENGTH, key_to_activate); + } } /// Queue Burns. @@ -249,11 +252,6 @@ impl ScannerDb { // Return this block's outputs so they can be pruned from the RAM cache outputs } - fn latest_scanned_block(getter: &G) -> Option { - getter - .get(Self::scanned_block_key()) - .map(|bytes| u64::from_le_bytes(bytes.try_into().unwrap()).try_into().unwrap()) - } } // Panic if we've already seen these outputs diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan.rs index f176680e..201f64a1 100644 --- a/processor/scanner/src/scan.rs +++ b/processor/scanner/src/scan.rs @@ -13,6 +13,7 @@ use crate::{ lifetime::LifetimeStage, db::{OutputWithInInstruction, SenderScanData, ScannerDb, ScanToReportDb, ScanToEventualityDb}, BlockExt, ScannerFeed, AddressFor, OutputFor, Return, sort_outputs, + eventuality::latest_scannable_block, }; // Construct an InInstruction from an external output. @@ -69,7 +70,7 @@ struct ScanForOutputsTask { impl ContinuallyRan for ScanForOutputsTask { async fn run_iteration(&mut self) -> Result { // Fetch the safe to scan block - let latest_scannable = ScannerDb::::latest_scannable_block(&self.db) + let latest_scannable = latest_scannable_block::(&self.db) .expect("ScanForOutputsTask run before writing the start block"); // Fetch the next block to scan let next_to_scan = ScannerDb::::next_to_scan_for_outputs_block(&self.db) @@ -80,12 +81,16 @@ impl ContinuallyRan for ScanForOutputsTask { log::info!("scanning block: {} ({b})", hex::encode(block.id())); - assert_eq!(ScannerDb::::next_to_scan_for_outputs_block(&self.db).unwrap(), b); - let keys = ScannerDb::::active_keys_as_of_next_to_scan_for_outputs_block(&self.db) - .expect("scanning for a blockchain without any keys set"); - let mut txn = self.db.txn(); + assert_eq!(ScannerDb::::next_to_scan_for_outputs_block(&txn).unwrap(), b); + + // Tidy the keys, then fetch them + // We don't have to tidy them here, we just have to somewhere, so why not here? 
+ ScannerDb::::tidy_keys(&mut txn); + let keys = ScannerDb::::active_keys_as_of_next_to_scan_for_outputs_block(&txn) + .expect("scanning for a blockchain without any keys set"); + let mut scan_data = SenderScanData { block_number: b, received_external_outputs: vec![], @@ -156,7 +161,7 @@ impl ContinuallyRan for ScanForOutputsTask { // We ensure it's over the dust limit to prevent people sending 1 satoshi from causing // an invocation of a consensus/signing protocol if balance.amount.0 >= self.feed.dust(balance.coin).0 { - ScannerDb::::flag_notable(&mut txn, b); + ScannerDb::::flag_notable_due_to_non_external_output(&mut txn, b); } continue; } From 77ef25416b8f347ef8fc2fd99e39a8194928c4b7 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 28 Aug 2024 18:46:39 -0400 Subject: [PATCH 035/368] Make scan.rs a folder, not a file --- processor/scanner/src/{scan.rs => scan/mod.rs} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename processor/scanner/src/{scan.rs => scan/mod.rs} (100%) diff --git a/processor/scanner/src/scan.rs b/processor/scanner/src/scan/mod.rs similarity index 100% rename from processor/scanner/src/scan.rs rename to processor/scanner/src/scan/mod.rs From fdfe520f9da1fc189083a0fc9372a5c8583dd931 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 28 Aug 2024 19:00:02 -0400 Subject: [PATCH 036/368] Add ScanDb --- processor/scanner/src/db.rs | 50 +++----------------- processor/scanner/src/eventuality/mod.rs | 5 +- processor/scanner/src/report.rs | 6 ++- processor/scanner/src/scan/db.rs | 59 ++++++++++++++++++++++++ processor/scanner/src/scan/mod.rs | 46 +++++++++++++++--- 5 files changed, 113 insertions(+), 53 deletions(-) create mode 100644 processor/scanner/src/scan/db.rs diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index a37e05f4..e3e31c38 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -9,7 +9,10 @@ use serai_in_instructions_primitives::InInstructionWithBalance; use primitives::{ReceivedOutput, EncodableG}; -use crate::{lifetime::LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, Return}; +use crate::{ + lifetime::LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, Return, + scan::next_to_scan_for_outputs_block, +}; // The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. 
trait Borshy: BorshSerialize + BorshDeserialize {} @@ -35,7 +38,7 @@ pub(crate) struct OutputWithInInstruction { } impl OutputWithInInstruction { - fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + pub(crate) fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { self.output.write(writer)?; // TODO self.return_address.write(writer)?; self.in_instruction.encode_to(writer); @@ -48,8 +51,6 @@ create_db!( ActiveKeys: () -> Vec>, RetireAt: (key: K) -> u64, - // The next block to scan for received outputs - NextToScanForOutputsBlock: () -> u64, // The next block to potentially report NextToPotentiallyReportBlock: () -> u64, // Highest acknowledged block @@ -74,9 +75,6 @@ create_db!( */ // This collapses from `bool` to `()`, using if the value was set for true and false otherwise NotableBlock: (number: u64) -> (), - - SerializedQueuedOutputs: (block_number: u64) -> Vec, - SerializedOutputs: (block_number: u64) -> Vec, } ); @@ -127,7 +125,7 @@ impl ScannerDb { let Some(key) = keys.first() else { return }; // Get the block we're scanning for next - let block_number = Self::next_to_scan_for_outputs_block(txn).expect( + let block_number = next_to_scan_for_outputs_block::(txn).expect( "tidying keys despite never setting the next to scan for block (done on initialization)", ); // If this key is scheduled for retiry... @@ -150,7 +148,7 @@ impl ScannerDb { ) -> Option>>> { // We don't take this as an argument as we don't keep all historical keys in memory // If we've scanned block 1,000,000, we can't answer the active keys as of block 0 - let block_number = Self::next_to_scan_for_outputs_block(getter)?; + let block_number = next_to_scan_for_outputs_block::(getter)?; let raw_keys: Vec>>> = ActiveKeys::get(getter)?; let mut keys = Vec::with_capacity(2); @@ -183,25 +181,9 @@ impl ScannerDb { } pub(crate) fn set_start_block(txn: &mut impl DbTxn, start_block: u64, id: [u8; 32]) { - assert!( - NextToScanForOutputsBlock::get(txn).is_none(), - "setting start block but prior set start block" - ); - - NextToScanForOutputsBlock::set(txn, &start_block); NextToPotentiallyReportBlock::set(txn, &start_block); } - pub(crate) fn set_next_to_scan_for_outputs_block( - txn: &mut impl DbTxn, - next_to_scan_for_outputs_block: u64, - ) { - NextToScanForOutputsBlock::set(txn, &next_to_scan_for_outputs_block); - } - pub(crate) fn next_to_scan_for_outputs_block(getter: &impl Get) -> Option { - NextToScanForOutputsBlock::get(getter) - } - pub(crate) fn set_next_to_potentially_report_block( txn: &mut impl DbTxn, next_to_potentially_report_block: u64, @@ -222,24 +204,6 @@ impl ScannerDb { HighestAcknowledgedBlock::get(getter) } - pub(crate) fn take_queued_outputs( - txn: &mut impl DbTxn, - block_number: u64, - ) -> Vec> { - todo!("TODO") - } - - pub(crate) fn queue_output_until_block( - txn: &mut impl DbTxn, - queue_for_block: u64, - output: &OutputWithInInstruction, - ) { - let mut outputs = - SerializedQueuedOutputs::get(txn, queue_for_block).unwrap_or(Vec::with_capacity(128)); - output.write(&mut outputs).unwrap(); - SerializedQueuedOutputs::set(txn, queue_for_block, &outputs); - } - /* This is so verbosely named as the DB itself already flags upon external outputs. Specifically, if any block yields External outputs to accumulate, we flag it as notable. 
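The commit's intent, in miniature: scan-task state (the next-to-scan block, the queued outputs just removed above) leaves the shared `ScannerDb` for a module-private `ScanDb`, with other tasks limited to a narrow free-function facade. A sketch of that shape, using a thread-local stub in place of serai-db storage (names hypothetical):

```rust
mod scan {
  use std::cell::Cell;

  thread_local! {
    // Stand-in for the DB-backed NextToScanForOutputsBlock entry
    static NEXT_TO_SCAN: Cell<Option<u64>> = Cell::new(None);
  }

  // Only the scan task's own code writes this...
  pub(crate) fn set_next_to_scan_for_outputs_block(block: u64) {
    NEXT_TO_SCAN.with(|cell| cell.set(Some(block)));
  }

  // ...while the report and eventuality tasks read it through this sole accessor
  pub(crate) fn next_to_scan_for_outputs_block() -> Option<u64> {
    NEXT_TO_SCAN.with(|cell| cell.get())
  }
}
```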
diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index f682bf36..a29e5301 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -9,6 +9,7 @@ use crate::{ lifetime::LifetimeStage, db::{OutputWithInInstruction, ReceiverScanData, ScannerDb, ScanToEventualityDb}, BlockExt, ScannerFeed, KeyFor, SchedulerUpdate, Scheduler, sort_outputs, + scan::{next_to_scan_for_outputs_block, queue_output_until_block}, }; mod db; @@ -104,7 +105,7 @@ impl> ContinuallyRan for EventualityTas */ let exclusive_upper_bound = { // Fetch the next to scan block - let next_to_scan = ScannerDb::::next_to_scan_for_outputs_block(&self.db) + let next_to_scan = next_to_scan_for_outputs_block::(&self.db) .expect("EventualityTask run before writing the start block"); // If we haven't done any work, return if next_to_scan == 0 { @@ -229,7 +230,7 @@ impl> ContinuallyRan for EventualityTas &txn, &forwarded, ) .expect("forwarded an output yet didn't save its InInstruction to the DB"); - ScannerDb::::queue_output_until_block( + queue_output_until_block::( &mut txn, b + S::WINDOW_LENGTH, &OutputWithInInstruction { output: output.clone(), return_address, in_instruction }, diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report.rs index 39a72106..f69459f0 100644 --- a/processor/scanner/src/report.rs +++ b/processor/scanner/src/report.rs @@ -7,7 +7,9 @@ use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; // TODO: Localize to Report? use crate::{ db::{ScannerDb, ScanToReportDb}, - index, ScannerFeed, ContinuallyRan, + index, + scan::next_to_scan_for_outputs_block, + ScannerFeed, ContinuallyRan, }; /* @@ -27,7 +29,7 @@ impl ContinuallyRan for ReportTask { async fn run_iteration(&mut self) -> Result { let highest_reportable = { // Fetch the next to scan block - let next_to_scan = ScannerDb::::next_to_scan_for_outputs_block(&self.db) + let next_to_scan = next_to_scan_for_outputs_block::(&self.db) .expect("ReportTask run before writing the start block"); // If we haven't done any work, return if next_to_scan == 0 { diff --git a/processor/scanner/src/scan/db.rs b/processor/scanner/src/scan/db.rs new file mode 100644 index 00000000..905e10be --- /dev/null +++ b/processor/scanner/src/scan/db.rs @@ -0,0 +1,59 @@ +use core::marker::PhantomData; +use std::io; + +use scale::Encode; +use borsh::{BorshSerialize, BorshDeserialize}; +use serai_db::{Get, DbTxn, create_db}; + +use serai_in_instructions_primitives::InInstructionWithBalance; + +use primitives::{EncodableG, ReceivedOutput, EventualityTracker}; + +use crate::{ + lifetime::LifetimeStage, db::OutputWithInInstruction, ScannerFeed, KeyFor, AddressFor, OutputFor, + EventualityFor, Return, scan::next_to_scan_for_outputs_block, +}; + +// The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. 
+trait Borshy: BorshSerialize + BorshDeserialize {} +impl Borshy for T {} + +create_db!( + ScannerScan { + // The next block to scan for received outputs + NextToScanForOutputsBlock: () -> u64, + + SerializedQueuedOutputs: (block_number: u64) -> Vec, + } +); + +pub(crate) struct ScanDb(PhantomData); +impl ScanDb { + pub(crate) fn set_next_to_scan_for_outputs_block( + txn: &mut impl DbTxn, + next_to_scan_for_outputs_block: u64, + ) { + NextToScanForOutputsBlock::set(txn, &next_to_scan_for_outputs_block); + } + pub(crate) fn next_to_scan_for_outputs_block(getter: &impl Get) -> Option { + NextToScanForOutputsBlock::get(getter) + } + + pub(crate) fn take_queued_outputs( + txn: &mut impl DbTxn, + block_number: u64, + ) -> Vec> { + todo!("TODO") + } + + pub(crate) fn queue_output_until_block( + txn: &mut impl DbTxn, + queue_for_block: u64, + output: &OutputWithInInstruction, + ) { + let mut outputs = + SerializedQueuedOutputs::get(txn, queue_for_block).unwrap_or(Vec::with_capacity(128)); + output.write(&mut outputs).unwrap(); + SerializedQueuedOutputs::set(txn, queue_for_block, &outputs); + } +} diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 201f64a1..1f143809 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -1,5 +1,5 @@ use scale::Decode; -use serai_db::{DbTxn, Db}; +use serai_db::{Get, DbTxn, Db}; use serai_primitives::MAX_DATA_LEN; use serai_in_instructions_primitives::{ @@ -16,6 +16,27 @@ use crate::{ eventuality::latest_scannable_block, }; +mod db; +use db::ScanDb; + +pub(crate) fn next_to_scan_for_outputs_block(getter: &impl Get) -> Option { + ScanDb::::next_to_scan_for_outputs_block(getter) +} + +pub(crate) fn queue_output_until_block( + txn: &mut impl DbTxn, + queue_for_block: u64, + output: &OutputWithInInstruction, +) { + assert!( + queue_for_block >= + next_to_scan_for_outputs_block::(txn) + .expect("queueing an output despite no next-to-scan-for-outputs block"), + "queueing an output for a block already scanned" + ); + ScanDb::::queue_output_until_block(txn, queue_for_block, output) +} + // Construct an InInstruction from an external output. // // Also returns the address to return the coins to upon error. 
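Both `queue_output_until_block` and `take_queued_outputs` (still a `todo!` here, filled in by a later commit) rely on one encoding: entries are appended to a single per-block byte buffer, and drained by deserializing until the slice is empty. The same pattern, reduced to fixed-width `u32` entries purely for illustration:

```rust
fn queue(buffer: &mut Vec<u8>, entry: u32) {
  buffer.extend_from_slice(&entry.to_le_bytes());
}

fn take_all(mut buffer: &[u8]) -> Vec<u32> {
  let mut entries = Vec::with_capacity(buffer.len() / 4);
  while !buffer.is_empty() {
    // Each entry knows its own width, so we can split it off the front
    let (bytes, rest) = buffer.split_at(4);
    entries.push(u32::from_le_bytes(bytes.try_into().unwrap()));
    buffer = rest;
  }
  entries
}
```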
@@ -66,6 +87,19 @@ struct ScanForOutputsTask { feed: S, } +impl ScanForOutputsTask { + pub(crate) fn new(mut db: D, feed: S, start_block: u64) -> Self { + if ScanDb::::next_to_scan_for_outputs_block(&db).is_none() { + // Initialize the DB + let mut txn = db.txn(); + ScanDb::::set_next_to_scan_for_outputs_block(&mut txn, start_block); + txn.commit(); + } + + Self { db, feed } + } +} + #[async_trait::async_trait] impl ContinuallyRan for ScanForOutputsTask { async fn run_iteration(&mut self) -> Result { @@ -73,7 +107,7 @@ impl ContinuallyRan for ScanForOutputsTask { let latest_scannable = latest_scannable_block::(&self.db) .expect("ScanForOutputsTask run before writing the start block"); // Fetch the next block to scan - let next_to_scan = ScannerDb::::next_to_scan_for_outputs_block(&self.db) + let next_to_scan = ScanDb::::next_to_scan_for_outputs_block(&self.db) .expect("ScanForOutputsTask run before writing the start block"); for b in next_to_scan ..= latest_scannable { @@ -83,7 +117,7 @@ impl ContinuallyRan for ScanForOutputsTask { let mut txn = self.db.txn(); - assert_eq!(ScannerDb::::next_to_scan_for_outputs_block(&txn).unwrap(), b); + assert_eq!(ScanDb::::next_to_scan_for_outputs_block(&txn).unwrap(), b); // Tidy the keys, then fetch them // We don't have to tidy them here, we just have to somewhere, so why not here? @@ -100,7 +134,7 @@ impl ContinuallyRan for ScanForOutputsTask { let mut in_instructions = vec![]; let queued_outputs = { - let mut queued_outputs = ScannerDb::::take_queued_outputs(&mut txn, b); + let mut queued_outputs = ScanDb::::take_queued_outputs(&mut txn, b); // Sort the queued outputs in case they weren't queued in a deterministic fashion queued_outputs.sort_by(|a, b| sort_outputs(&a.output, &b.output)); queued_outputs @@ -217,7 +251,7 @@ impl ContinuallyRan for ScanForOutputsTask { // This multisig isn't yet reporting its External outputs to avoid a DoS // Queue the output to be reported when this multisig starts reporting LifetimeStage::ActiveYetNotReporting => { - ScannerDb::::queue_output_until_block( + ScanDb::::queue_output_until_block( &mut txn, key.block_at_which_reporting_starts, &output_with_in_instruction, @@ -253,7 +287,7 @@ impl ContinuallyRan for ScanForOutputsTask { // Send the InInstructions to the report task ScanToReportDb::::send_in_instructions(&mut txn, b, in_instructions); // Update the next to scan block - ScannerDb::::set_next_to_scan_for_outputs_block(&mut txn, b + 1); + ScanDb::::set_next_to_scan_for_outputs_block(&mut txn, b + 1); txn.commit(); } From 7cc07d64d102a67591d98c0c91192fa96d19232b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 28 Aug 2024 19:37:44 -0400 Subject: [PATCH 037/368] Make report.rs a folder, not a file --- processor/scanner/src/{report.rs => report/mod.rs} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename processor/scanner/src/{report.rs => report/mod.rs} (100%) diff --git a/processor/scanner/src/report.rs b/processor/scanner/src/report/mod.rs similarity index 100% rename from processor/scanner/src/report.rs rename to processor/scanner/src/report/mod.rs From 65f3f485174a156119a17536bdb32a5938affa55 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 28 Aug 2024 19:58:28 -0400 Subject: [PATCH 038/368] Add ReportDb --- processor/scanner/src/db.rs | 24 +++--------------- processor/scanner/src/eventuality/mod.rs | 18 +++++++------- processor/scanner/src/report/db.rs | 27 +++++++++++++++++++++ processor/scanner/src/report/mod.rs | 31 ++++++++++++++++++------ processor/scanner/src/scan/mod.rs | 4 ++- 5 
files changed, 65 insertions(+), 39 deletions(-) create mode 100644 processor/scanner/src/report/db.rs diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index e3e31c38..7a2d68a9 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -47,7 +47,7 @@ impl OutputWithInInstruction { } create_db!( - Scanner { + ScannerGlobal { ActiveKeys: () -> Vec>, RetireAt: (key: K) -> u64, @@ -78,8 +78,8 @@ create_db!( } ); -pub(crate) struct ScannerDb(PhantomData); -impl ScannerDb { +pub(crate) struct ScannerGlobalDb(PhantomData); +impl ScannerGlobalDb { /// Queue a key. /// /// Keys may be queued whenever, so long as they're scheduled to activate `WINDOW_LENGTH` blocks @@ -180,20 +180,6 @@ impl ScannerDb { Some(keys) } - pub(crate) fn set_start_block(txn: &mut impl DbTxn, start_block: u64, id: [u8; 32]) { - NextToPotentiallyReportBlock::set(txn, &start_block); - } - - pub(crate) fn set_next_to_potentially_report_block( - txn: &mut impl DbTxn, - next_to_potentially_report_block: u64, - ) { - NextToPotentiallyReportBlock::set(txn, &next_to_potentially_report_block); - } - pub(crate) fn next_to_potentially_report_block(getter: &impl Get) -> Option { - NextToPotentiallyReportBlock::get(getter) - } - pub(crate) fn set_highest_acknowledged_block( txn: &mut impl DbTxn, highest_acknowledged_block: u64, @@ -224,10 +210,6 @@ impl ScannerDb { NotableBlock::get(getter, number).is_some() } - pub(crate) fn acquire_batch_id(txn: &mut impl DbTxn) -> u32 { - todo!("TODO") - } - pub(crate) fn return_address_and_in_instruction_for_forwarded_output( getter: &impl Get, output: & as ReceivedOutput, AddressFor>>::Id, diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index a29e5301..e10aab54 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -4,10 +4,9 @@ use serai_db::{Get, DbTxn, Db}; use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Eventuality, Block}; -// TODO: Localize to EventualityDb? use crate::{ lifetime::LifetimeStage, - db::{OutputWithInInstruction, ReceiverScanData, ScannerDb, ScanToEventualityDb}, + db::{OutputWithInInstruction, ReceiverScanData, ScannerGlobalDb, ScanToEventualityDb}, BlockExt, ScannerFeed, KeyFor, SchedulerUpdate, Scheduler, sort_outputs, scan::{next_to_scan_for_outputs_block, queue_output_until_block}, }; @@ -69,7 +68,7 @@ pub(crate) fn latest_scannable_block(getter: &impl Get) -> Optio This forms a backlog only if the latency of scanning, acknowledgement, and intake (including checking Eventualities) exceeds the window duration (the desired property). 
*/ -struct EventualityTask> { +pub(crate) struct EventualityTask> { db: D, feed: S, scheduler: Sch, @@ -115,7 +114,7 @@ impl> ContinuallyRan for EventualityTas }; // Fetch the highest acknowledged block - let highest_acknowledged = ScannerDb::::highest_acknowledged_block(&self.db) + let highest_acknowledged = ScannerGlobalDb::::highest_acknowledged_block(&self.db) .expect("EventualityTask run before writing the start block"); // Fetch the next block to check @@ -132,7 +131,7 @@ impl> ContinuallyRan for EventualityTas // This is possible since even if we receive coins in block 0, any transactions we'd make // would resolve in block 1 (the first block we'll check under this non-zero rule) let prior_block = b - 1; - if ScannerDb::::is_block_notable(&self.db, prior_block) && + if ScannerGlobalDb::::is_block_notable(&self.db, prior_block) && (prior_block > highest_acknowledged) { break; @@ -156,8 +155,9 @@ impl> ContinuallyRan for EventualityTas one which decides when to retire a key, and when it marks a key to be retired, it is done with it. Accordingly, it's not an issue if such a key was dropped. */ - let mut keys = ScannerDb::::active_keys_as_of_next_to_scan_for_outputs_block(&self.db) - .expect("scanning for a blockchain without any keys set"); + let mut keys = + ScannerGlobalDb::::active_keys_as_of_next_to_scan_for_outputs_block(&self.db) + .expect("scanning for a blockchain without any keys set"); // Since the next-to-scan block is ahead of us, drop keys which have yet to actually activate keys.retain(|key| b <= key.activation_block_number); @@ -226,7 +226,7 @@ impl> ContinuallyRan for EventualityTas }; let (return_address, in_instruction) = - ScannerDb::::return_address_and_in_instruction_for_forwarded_output( + ScannerGlobalDb::::return_address_and_in_instruction_for_forwarded_output( &txn, &forwarded, ) .expect("forwarded an output yet didn't save its InInstruction to the DB"); @@ -281,7 +281,7 @@ impl> ContinuallyRan for EventualityTas // Retire this key `WINDOW_LENGTH` blocks in the future to ensure the scan task never // has a malleable view of the keys. - ScannerDb::::retire_key(&mut txn, b + S::WINDOW_LENGTH, key.key); + ScannerGlobalDb::::retire_key(&mut txn, b + S::WINDOW_LENGTH, key.key); } } } diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs new file mode 100644 index 00000000..cca2148e --- /dev/null +++ b/processor/scanner/src/report/db.rs @@ -0,0 +1,27 @@ +use core::marker::PhantomData; + +use serai_db::{Get, DbTxn, Db, create_db}; + +create_db!( + ScannerReport { + // The next block to potentially report + NextToPotentiallyReportBlock: () -> u64, + } +); + +pub(crate) struct ReportDb; +impl ReportDb { + pub(crate) fn set_next_to_potentially_report_block( + txn: &mut impl DbTxn, + next_to_potentially_report_block: u64, + ) { + NextToPotentiallyReportBlock::set(txn, &next_to_potentially_report_block); + } + pub(crate) fn next_to_potentially_report_block(getter: &impl Get) -> Option { + NextToPotentiallyReportBlock::get(getter) + } + + pub(crate) fn acquire_batch_id(txn: &mut impl DbTxn) -> u32 { + todo!("TODO") + } +} diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index f69459f0..95bbbbd2 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -4,14 +4,16 @@ use serai_db::{DbTxn, Db}; use serai_primitives::BlockHash; use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; -// TODO: Localize to Report? 
use crate::{ - db::{ScannerDb, ScanToReportDb}, + db::{ScannerGlobalDb, ScanToReportDb}, index, scan::next_to_scan_for_outputs_block, ScannerFeed, ContinuallyRan, }; +mod db; +use db::ReportDb; + /* This task produces Batches for notable blocks, with all InInstructions, in an ordered fashion. @@ -19,11 +21,24 @@ use crate::{ Eventualities, have processed the block. This ensures we know if this block is notable, and have the InInstructions for it. */ -struct ReportTask { +pub(crate) struct ReportTask { db: D, feed: S, } +impl ReportTask { + pub(crate) fn new(mut db: D, feed: S, start_block: u64) -> Self { + if ReportDb::next_to_potentially_report_block(&db).is_none() { + // Initialize the DB + let mut txn = db.txn(); + ReportDb::set_next_to_potentially_report_block(&mut txn, start_block); + txn.commit(); + } + + Self { db, feed } + } +} + #[async_trait::async_trait] impl ContinuallyRan for ReportTask { async fn run_iteration(&mut self) -> Result { @@ -44,7 +59,7 @@ impl ContinuallyRan for ReportTask { last_scanned }; - let next_to_potentially_report = ScannerDb::::next_to_potentially_report_block(&self.db) + let next_to_potentially_report = ReportDb::next_to_potentially_report_block(&self.db) .expect("ReportTask run before writing the start block"); for b in next_to_potentially_report ..= highest_reportable { @@ -53,7 +68,7 @@ impl ContinuallyRan for ReportTask { // Receive the InInstructions for this block // We always do this as we can't trivially tell if we should recv InInstructions before we do let in_instructions = ScanToReportDb::::recv_in_instructions(&mut txn, b); - let notable = ScannerDb::::is_block_notable(&txn, b); + let notable = ScannerGlobalDb::::is_block_notable(&txn, b); if !notable { assert!(in_instructions.is_empty(), "block wasn't notable yet had InInstructions"); } @@ -61,7 +76,7 @@ impl ContinuallyRan for ReportTask { if notable { let network = S::NETWORK; let block_hash = index::block_id(&txn, b); - let mut batch_id = ScannerDb::::acquire_batch_id(&mut txn); + let mut batch_id = ReportDb::acquire_batch_id(&mut txn); // start with empty batch let mut batches = @@ -77,7 +92,7 @@ impl ContinuallyRan for ReportTask { let instruction = batch.instructions.pop().unwrap(); // bump the id for the new batch - batch_id = ScannerDb::::acquire_batch_id(&mut txn); + batch_id = ReportDb::acquire_batch_id(&mut txn); // make a new batch with this instruction included batches.push(Batch { @@ -93,7 +108,7 @@ impl ContinuallyRan for ReportTask { } // Update the next to potentially report block - ScannerDb::::set_next_to_potentially_report_block(&mut txn, b + 1); + ReportDb::set_next_to_potentially_report_block(&mut txn, b + 1); txn.commit(); } diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 1f143809..54f9bd77 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -8,7 +8,6 @@ use serai_in_instructions_primitives::{ use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Block}; -// TODO: Localize to ScanDb? use crate::{ lifetime::LifetimeStage, db::{OutputWithInInstruction, SenderScanData, ScannerDb, ScanToReportDb, ScanToEventualityDb}, @@ -28,6 +27,9 @@ pub(crate) fn queue_output_until_block( queue_for_block: u64, output: &OutputWithInInstruction, ) { + // This isn't a perfect assertion as by the time this txn commits, we may have already started + // scanning this block. 
That doesn't change that it should never trip, as we queue outside the window
+  // we'll scan
   assert!(
     queue_for_block >=
       next_to_scan_for_outputs_block::<S>(txn)

From 738636c238f793ee94a60b88a63f08f2f7903246 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Wed, 28 Aug 2024 20:16:06 -0400
Subject: [PATCH 039/368] Have Scanner::new spawn tasks

---
 processor/scanner/src/db.rs         |  2 +-
 processor/scanner/src/index/mod.rs  |  8 ++--
 processor/scanner/src/lib.rs        | 61 ++++++++++++++++++++++-------
 processor/scanner/src/report/db.rs  |  4 +-
 processor/scanner/src/report/mod.rs |  9 +++--
 processor/scanner/src/scan/db.rs    | 16 +-------
 processor/scanner/src/scan/mod.rs   | 26 ++++++------
 7 files changed, 73 insertions(+), 53 deletions(-)

diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs
index 7a2d68a9..59af768f 100644
--- a/processor/scanner/src/db.rs
+++ b/processor/scanner/src/db.rs
@@ -7,7 +7,7 @@ use serai_db::{Get, DbTxn, create_db, db_channel};
 
 use serai_in_instructions_primitives::InInstructionWithBalance;
 
-use primitives::{ReceivedOutput, EncodableG};
+use primitives::{EncodableG, ReceivedOutput};
 
 use crate::{
   lifetime::LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, Return,
diff --git a/processor/scanner/src/index/mod.rs b/processor/scanner/src/index/mod.rs
index 7c70eedc..930ce55a 100644
--- a/processor/scanner/src/index/mod.rs
+++ b/processor/scanner/src/index/mod.rs
@@ -21,12 +21,12 @@ pub(crate) fn block_id(getter: &impl Get, block_number: u64) -> [u8; 32] {
 
   This task finds the finalized blocks, verifies they're contiguous, and saves their IDs.
 */
-struct IndexFinalizedTask<D: Db, S: ScannerFeed> {
+pub(crate) struct IndexTask<D: Db, S: ScannerFeed> {
   db: D,
   feed: S,
 }
 
-impl<D: Db, S: ScannerFeed> IndexFinalizedTask<D, S> {
+impl<D: Db, S: ScannerFeed> IndexTask<D, S> {
   pub(crate) async fn new(mut db: D, feed: S, start_block: u64) -> Self {
     if IndexDb::block_id(&db, start_block).is_none() {
       // Fetch the block for its ID
@@ -36,7 +36,7 @@ impl<D: Db, S: ScannerFeed> IndexFinalizedTask<D, S> {
         match feed.unchecked_block_header_by_number(start_block).await {
           Ok(block) => break block,
           Err(e) => {
-            log::warn!("IndexFinalizedTask couldn't fetch start block {start_block}: {e:?}");
+            log::warn!("IndexTask couldn't fetch start block {start_block}: {e:?}");
             tokio::time::sleep(core::time::Duration::from_secs(delay)).await;
             delay += Self::DELAY_BETWEEN_ITERATIONS;
             delay = delay.min(Self::MAX_DELAY_BETWEEN_ITERATIONS);
@@ -57,7 +57,7 @@ impl<D: Db, S: ScannerFeed> IndexFinalizedTask<D, S> {
 }
 
 #[async_trait::async_trait]
-impl<D: Db, S: ScannerFeed> ContinuallyRan for IndexFinalizedTask<D, S> {
+impl<D: Db, S: ScannerFeed> ContinuallyRan for IndexTask<D, S> {
   async fn run_iteration(&mut self) -> Result<bool, String> {
     // Fetch the latest finalized block
     let our_latest_finalized = IndexDb::latest_finalized_block(&self.db)
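`IndexTask::new` (just renamed from `IndexFinalizedTask`) bootstraps by retrying the start-block fetch with a linearly growing, capped delay. That retry shape in isolation, as a sketch — the constants and the synchronous closure are illustrative, not the crate's actual `DELAY_BETWEEN_ITERATIONS` machinery:

```rust
async fn retry_with_backoff<T, E: std::fmt::Debug>(
  mut attempt: impl FnMut() -> Result<T, E>,
) -> T {
  const STEP_SECS: u64 = 5;
  const MAX_SECS: u64 = 120;
  let mut delay = STEP_SECS;
  loop {
    match attempt() {
      Ok(value) => break value,
      Err(e) => {
        log::warn!("couldn't fetch, retrying in {delay}s: {e:?}");
        tokio::time::sleep(core::time::Duration::from_secs(delay)).await;
        // Grow the delay linearly, up to the cap
        delay = (delay + STEP_SECS).min(MAX_SECS);
      }
    }
  }
}
```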
diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs
index b363faa1..3515da05 100644
--- a/processor/scanner/src/lib.rs
+++ b/processor/scanner/src/lib.rs
@@ -3,7 +3,7 @@ use std::collections::HashMap;
 
 use group::GroupEncoding;
 
-use serai_db::{Get, DbTxn};
+use serai_db::{Get, DbTxn, Db};
 
 use serai_primitives::{NetworkId, Coin, Amount};
 
@@ -14,7 +14,7 @@ mod lifetime;
 
 // Database schema definition and associated functions.
 mod db;
-use db::ScannerDb;
+use db::ScannerGlobalDb;
 // Task to index the blockchain, ensuring we don't reorganize finalized blocks.
 mod index;
 // Scans blocks for received coins.
@@ -50,7 +50,7 @@ impl<B: Block> BlockExt for B {
 ///
 /// This defines the primitive types used, along with various getters necessary for indexing.
 #[async_trait::async_trait]
-pub trait ScannerFeed: Send + Sync {
+pub trait ScannerFeed: 'static + Send + Sync + Clone {
   /// The ID of the network being scanned for.
   const NETWORK: NetworkId;
 
@@ -170,7 +170,7 @@ pub struct SchedulerUpdate {
 }
 
 /// The object responsible for accumulating outputs and planning new transactions.
-pub trait Scheduler<S: ScannerFeed>: Send {
+pub trait Scheduler<S: ScannerFeed>: 'static + Send {
   /// Accumulate outputs into the scheduler, yielding the Eventualities now to be scanned for.
   ///
   /// The `Vec<u8>` used as the key in the returned HashMap should be the encoded key the
@@ -183,14 +183,38 @@ pub trait Scheduler<S: ScannerFeed>: Send {
 }
 
 /// A representation of a scanner.
-pub struct Scanner<S: ScannerFeed>(PhantomData<S>);
+#[allow(non_snake_case)]
+pub struct Scanner<S: ScannerFeed> {
+  eventuality_handle: RunNowHandle,
+  _S: PhantomData<S>,
+}
 impl<S: ScannerFeed> Scanner<S> {
   /// Create a new scanner.
   ///
   /// This will begin its execution, spawning several asynchronous tasks.
   // TODO: Take start_time and binary search here?
-  pub fn new(start_block: u64) -> Self {
-    todo!("TODO")
+  pub async fn new(db: impl Db, feed: S, scheduler: impl Scheduler<S>, start_block: u64) -> Self {
+    let index_task = index::IndexTask::new(db.clone(), feed.clone(), start_block).await;
+    let scan_task = scan::ScanTask::new(db.clone(), feed.clone(), start_block);
+    let report_task = report::ReportTask::<_, S>::new(db.clone(), start_block);
+    let eventuality_task = eventuality::EventualityTask::new(db, feed, scheduler, start_block);
+
+    let (_index_handle, index_run) = RunNowHandle::new();
+    let (scan_handle, scan_run) = RunNowHandle::new();
+    let (report_handle, report_run) = RunNowHandle::new();
+    let (eventuality_handle, eventuality_run) = RunNowHandle::new();
+
+    // Upon indexing a new block, scan it
+    tokio::spawn(index_task.continually_run(index_run, vec![scan_handle.clone()]));
+    // Upon scanning a block, report it
+    tokio::spawn(scan_task.continually_run(scan_run, vec![report_handle]));
+    // Upon reporting a block, we do nothing
+    tokio::spawn(report_task.continually_run(report_run, vec![]));
+    // Upon handling the Eventualities in a block, we run the scan task as we've advanced the
+    // window it's allowed to scan
+    tokio::spawn(eventuality_task.continually_run(eventuality_run, vec![scan_handle]));
+
+    Self { eventuality_handle, _S: PhantomData }
+  }
 
   /// Acknowledge a block.
@@ -199,19 +223,26 @@ impl<S: ScannerFeed> Scanner<S> {
   /// have achieved synchrony on it.
   pub fn acknowledge_block(
     &mut self,
-    txn: &mut impl DbTxn,
+    mut txn: impl DbTxn,
     block_number: u64,
     key_to_activate: Option<KeyFor<S>>,
   ) {
     log::info!("acknowledging block {block_number}");
     assert!(
-      ScannerDb::<S>::is_block_notable(txn, block_number),
+      ScannerGlobalDb::<S>::is_block_notable(&txn, block_number),
       "acknowledging a block which wasn't notable"
     );
-    ScannerDb::<S>::set_highest_acknowledged_block(txn, block_number);
+    ScannerGlobalDb::<S>::set_highest_acknowledged_block(&mut txn, block_number);
     if let Some(key_to_activate) = key_to_activate {
-      ScannerDb::<S>::queue_key(txn, block_number + S::WINDOW_LENGTH, key_to_activate);
+      ScannerGlobalDb::<S>::queue_key(&mut txn, block_number + S::WINDOW_LENGTH, key_to_activate);
     }
+
+    // Commit the txn
+    txn.commit();
+    // Run the Eventuality task since we've advanced it
+    // We couldn't successfully do this if that txn was still floating around, uncommitted
+    // The execution of this task won't actually have more work until the txn is committed
+    self.eventuality_handle.run_now();
   }
 
   /// Queue Burns.
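For orientation, this is roughly how a network integration would now drive the scanner. A sketch only: `run_scanner` and its parameters are hypothetical, standing in for whatever a concrete integration provides as its `Db`, `ScannerFeed`, and `Scheduler` implementors:

```rust
async fn run_scanner<D: Db, S: ScannerFeed, Sch: Scheduler<S>>(
  mut db: D,
  feed: S,
  scheduler: Sch,
  start_block: u64,
) {
  // Spawns the index/scan/report/eventuality tasks internally
  let mut scanner = Scanner::new(db.clone(), feed, scheduler, start_block).await;

  // Later, once Serai achieves synchrony over some *notable* block, acknowledge it;
  // the call commits the txn itself, then immediately re-runs the eventuality task
  let block_number = start_block; // placeholder: must be a block flagged notable
  scanner.acknowledge_block(db.txn(), block_number, None);
}
```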
@@ -220,7 +251,7 @@ impl Scanner { /// safely queue Burns so long as they're only actually added once we've handled the outputs from /// the block acknowledged prior to their queueing. pub fn queue_burns(&mut self, txn: &mut impl DbTxn, burns: Vec<()>) { - let queue_as_of = ScannerDb::::highest_acknowledged_block(txn) + let queue_as_of = ScannerGlobalDb::::highest_acknowledged_block(txn) .expect("queueing Burns yet never acknowledged a block"); todo!("TODO") } @@ -228,8 +259,8 @@ impl Scanner { /* #[derive(Clone, Debug)] -struct ScannerDb(PhantomData, PhantomData); -impl ScannerDb { +struct ScannerGlobalDb(PhantomData, PhantomData); +impl ScannerGlobalDb { fn seen_key(id: &>::Id) -> Vec { Self::scanner_key(b"seen", id) } @@ -295,7 +326,7 @@ impl ScannerDb { TODO2: Only update ram_outputs after committing the TXN in question. */ - let seen = ScannerDb::::seen(&db, &id); + let seen = ScannerGlobalDb::::seen(&db, &id); let id = id.as_ref().to_vec(); if seen || scanner.ram_outputs.contains(&id) { panic!("scanned an output multiple times"); diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs index cca2148e..745aa772 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/report/db.rs @@ -1,6 +1,4 @@ -use core::marker::PhantomData; - -use serai_db::{Get, DbTxn, Db, create_db}; +use serai_db::{Get, DbTxn, create_db}; create_db!( ScannerReport { diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index 95bbbbd2..18f842e2 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -1,3 +1,5 @@ +use core::marker::PhantomData; + use scale::Encode; use serai_db::{DbTxn, Db}; @@ -21,13 +23,14 @@ use db::ReportDb; Eventualities, have processed the block. This ensures we know if this block is notable, and have the InInstructions for it. */ +#[allow(non_snake_case)] pub(crate) struct ReportTask { db: D, - feed: S, + _S: PhantomData, } impl ReportTask { - pub(crate) fn new(mut db: D, feed: S, start_block: u64) -> Self { + pub(crate) fn new(mut db: D, start_block: u64) -> Self { if ReportDb::next_to_potentially_report_block(&db).is_none() { // Initialize the DB let mut txn = db.txn(); @@ -35,7 +38,7 @@ impl ReportTask { txn.commit(); } - Self { db, feed } + Self { db, _S: PhantomData } } } diff --git a/processor/scanner/src/scan/db.rs b/processor/scanner/src/scan/db.rs index 905e10be..9b98150f 100644 --- a/processor/scanner/src/scan/db.rs +++ b/processor/scanner/src/scan/db.rs @@ -1,22 +1,8 @@ use core::marker::PhantomData; -use std::io; -use scale::Encode; -use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db}; -use serai_in_instructions_primitives::InInstructionWithBalance; - -use primitives::{EncodableG, ReceivedOutput, EventualityTracker}; - -use crate::{ - lifetime::LifetimeStage, db::OutputWithInInstruction, ScannerFeed, KeyFor, AddressFor, OutputFor, - EventualityFor, Return, scan::next_to_scan_for_outputs_block, -}; - -// The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. 
-trait Borshy: BorshSerialize + BorshDeserialize {} -impl Borshy for T {} +use crate::{db::OutputWithInInstruction, ScannerFeed}; create_db!( ScannerScan { diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 54f9bd77..b427b535 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -10,7 +10,9 @@ use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Block}; use crate::{ lifetime::LifetimeStage, - db::{OutputWithInInstruction, SenderScanData, ScannerDb, ScanToReportDb, ScanToEventualityDb}, + db::{ + OutputWithInInstruction, SenderScanData, ScannerGlobalDb, ScanToReportDb, ScanToEventualityDb, + }, BlockExt, ScannerFeed, AddressFor, OutputFor, Return, sort_outputs, eventuality::latest_scannable_block, }; @@ -84,12 +86,12 @@ fn in_instruction_from_output( ) } -struct ScanForOutputsTask { +pub(crate) struct ScanTask { db: D, feed: S, } -impl ScanForOutputsTask { +impl ScanTask { pub(crate) fn new(mut db: D, feed: S, start_block: u64) -> Self { if ScanDb::::next_to_scan_for_outputs_block(&db).is_none() { // Initialize the DB @@ -103,14 +105,14 @@ impl ScanForOutputsTask { } #[async_trait::async_trait] -impl ContinuallyRan for ScanForOutputsTask { +impl ContinuallyRan for ScanTask { async fn run_iteration(&mut self) -> Result { // Fetch the safe to scan block - let latest_scannable = latest_scannable_block::(&self.db) - .expect("ScanForOutputsTask run before writing the start block"); + let latest_scannable = + latest_scannable_block::(&self.db).expect("ScanTask run before writing the start block"); // Fetch the next block to scan let next_to_scan = ScanDb::::next_to_scan_for_outputs_block(&self.db) - .expect("ScanForOutputsTask run before writing the start block"); + .expect("ScanTask run before writing the start block"); for b in next_to_scan ..= latest_scannable { let block = self.feed.block_by_number(&self.db, b).await?; @@ -123,8 +125,8 @@ impl ContinuallyRan for ScanForOutputsTask { // Tidy the keys, then fetch them // We don't have to tidy them here, we just have to somewhere, so why not here? 
- ScannerDb::::tidy_keys(&mut txn); - let keys = ScannerDb::::active_keys_as_of_next_to_scan_for_outputs_block(&txn) + ScannerGlobalDb::::tidy_keys(&mut txn); + let keys = ScannerGlobalDb::::active_keys_as_of_next_to_scan_for_outputs_block(&txn) .expect("scanning for a blockchain without any keys set"); let mut scan_data = SenderScanData { @@ -197,7 +199,7 @@ impl ContinuallyRan for ScanForOutputsTask { // We ensure it's over the dust limit to prevent people sending 1 satoshi from causing // an invocation of a consensus/signing protocol if balance.amount.0 >= self.feed.dust(balance.coin).0 { - ScannerDb::::flag_notable_due_to_non_external_output(&mut txn, b); + ScannerGlobalDb::::flag_notable_due_to_non_external_output(&mut txn, b); } continue; } @@ -284,10 +286,10 @@ impl ContinuallyRan for ScanForOutputsTask { } } - // Send the scan data to the eventuality task - ScanToEventualityDb::::send_scan_data(&mut txn, b, &scan_data); // Send the InInstructions to the report task ScanToReportDb::::send_in_instructions(&mut txn, b, in_instructions); + // Send the scan data to the eventuality task + ScanToEventualityDb::::send_scan_data(&mut txn, b, &scan_data); // Update the next to scan block ScanDb::::set_next_to_scan_for_outputs_block(&mut txn, b + 1); txn.commit(); From 04a971a024074d24ac2c54aa864a609849743d6d Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 28 Aug 2024 23:31:31 -0400 Subject: [PATCH 040/368] Fill in various DB functions --- processor/primitives/src/eventuality.rs | 19 ++++++++++++++--- processor/primitives/src/output.rs | 7 +++++- processor/scanner/src/db.rs | 27 +++++++++++++++++++++--- processor/scanner/src/eventuality/db.rs | 26 +++++++++++++++-------- processor/scanner/src/eventuality/mod.rs | 2 +- processor/scanner/src/report/db.rs | 6 +++++- processor/scanner/src/scan/db.rs | 9 +++++++- 7 files changed, 77 insertions(+), 19 deletions(-) diff --git a/processor/primitives/src/eventuality.rs b/processor/primitives/src/eventuality.rs index 7203031b..eb6cda9c 100644 --- a/processor/primitives/src/eventuality.rs +++ b/processor/primitives/src/eventuality.rs @@ -23,9 +23,9 @@ pub trait Eventuality: Sized + Send + Sync { fn forwarded_output(&self) -> Option; /// Read an Eventuality. - fn read(reader: &mut R) -> io::Result; - /// Serialize an Eventuality to a `Vec`. - fn serialize(&self) -> Vec; + fn read(reader: &mut impl io::Read) -> io::Result; + /// Write an Eventuality. + fn write(&self, writer: &mut impl io::Write) -> io::Result<()>; } /// A tracker of unresolved Eventualities. @@ -36,3 +36,16 @@ pub struct EventualityTracker { /// These are keyed by their lookups. pub active_eventualities: HashMap, E>, } + +impl Default for EventualityTracker { + fn default() -> Self { + EventualityTracker { active_eventualities: HashMap::new() } + } +} + +impl EventualityTracker { + /// Insert an Eventuality into the tracker. + pub fn insert(&mut self, eventuality: E) { + self.active_eventualities.insert(eventuality.lookup(), eventuality); + } +} diff --git a/processor/primitives/src/output.rs b/processor/primitives/src/output.rs index 152a59e0..777b2c52 100644 --- a/processor/primitives/src/output.rs +++ b/processor/primitives/src/output.rs @@ -8,7 +8,12 @@ use serai_primitives::{ExternalAddress, Balance}; use crate::Id; /// An address on the external network. -pub trait Address: Send + Sync + TryFrom {} +pub trait Address: Send + Sync + TryFrom { + /// Write this address. + fn write(&self, writer: &mut impl io::Write) -> io::Result<()>; + /// Read an address. 
+ fn read(reader: &mut impl io::Read) -> io::Result; +} /// The type of the output. #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 59af768f..810859a6 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -1,13 +1,13 @@ use core::marker::PhantomData; use std::io; -use scale::Encode; +use scale::{Encode, Decode, IoReader}; use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db, db_channel}; use serai_in_instructions_primitives::InInstructionWithBalance; -use primitives::{EncodableG, ReceivedOutput}; +use primitives::{EncodableG, Address, ReceivedOutput}; use crate::{ lifetime::LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, Return, @@ -38,9 +38,30 @@ pub(crate) struct OutputWithInInstruction { } impl OutputWithInInstruction { + pub(crate) fn read(reader: &mut impl io::Read) -> io::Result { + let output = OutputFor::::read(reader)?; + let return_address = { + let mut opt = [0xff]; + reader.read_exact(&mut opt)?; + assert!((opt[0] == 0) || (opt[0] == 1)); + if opt[0] == 0 { + None + } else { + Some(AddressFor::::read(reader)?) + } + }; + let in_instruction = + InInstructionWithBalance::decode(&mut IoReader(reader)).map_err(io::Error::other)?; + Ok(Self { output, return_address, in_instruction }) + } pub(crate) fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { self.output.write(writer)?; - // TODO self.return_address.write(writer)?; + if let Some(return_address) = &self.return_address { + writer.write_all(&[1])?; + return_address.write(writer)?; + } else { + writer.write_all(&[0])?; + } self.in_instruction.encode_to(writer); Ok(()) } diff --git a/processor/scanner/src/eventuality/db.rs b/processor/scanner/src/eventuality/db.rs index baed33c4..c5a07b04 100644 --- a/processor/scanner/src/eventuality/db.rs +++ b/processor/scanner/src/eventuality/db.rs @@ -1,22 +1,18 @@ use core::marker::PhantomData; -use borsh::{BorshSerialize, BorshDeserialize}; +use scale::Encode; use serai_db::{Get, DbTxn, create_db}; -use primitives::EventualityTracker; +use primitives::{EncodableG, Eventuality, EventualityTracker}; use crate::{ScannerFeed, KeyFor, EventualityFor}; -// The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. 
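The read/write pair for `OutputWithInInstruction` above encodes the optional return address with a single presence byte: 0 for `None`, 1 for `Some` followed by the value. A standalone sketch of that convention; the helper names are hypothetical, and where the patch asserts on a malformed byte, this sketch returns an error instead:

use std::io;

fn write_option<W: io::Write, T>(
  writer: &mut W,
  value: Option<&T>,
  write_value: impl FnOnce(&mut W, &T) -> io::Result<()>,
) -> io::Result<()> {
  match value {
    Some(value) => {
      writer.write_all(&[1])?;
      write_value(writer, value)
    }
    None => writer.write_all(&[0]),
  }
}

fn read_option<R: io::Read, T>(
  reader: &mut R,
  read_value: impl FnOnce(&mut R) -> io::Result<T>,
) -> io::Result<Option<T>> {
  let mut opt = [0xff];
  reader.read_exact(&mut opt)?;
  // Reject any byte other than the two defined values
  if !((opt[0] == 0) || (opt[0] == 1)) {
    Err(io::Error::other("invalid Option encoding"))?;
  }
  Ok(if opt[0] == 1 { Some(read_value(reader)?) } else { None })
}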
-trait Borshy: BorshSerialize + BorshDeserialize {} -impl Borshy for T {} - create_db!( ScannerEventuality { // The next block to check for resolving eventualities NextToCheckForEventualitiesBlock: () -> u64, - SerializedEventualities: () -> Vec, + SerializedEventualities: (key: K) -> Vec, } ); @@ -41,13 +37,25 @@ impl EventualityDb { key: KeyFor, eventualities: &EventualityTracker>, ) { - todo!("TODO") + let mut serialized = Vec::with_capacity(eventualities.active_eventualities.len() * 128); + for eventuality in eventualities.active_eventualities.values() { + eventuality.write(&mut serialized).unwrap(); + } + SerializedEventualities::set(txn, EncodableG(key), &serialized); } pub(crate) fn eventualities( getter: &impl Get, key: KeyFor, ) -> EventualityTracker> { - todo!("TODO") + let serialized = SerializedEventualities::get(getter, EncodableG(key)).unwrap_or(vec![]); + let mut serialized = serialized.as_slice(); + + let mut res = EventualityTracker::default(); + while !serialized.is_empty() { + let eventuality = EventualityFor::::read(&mut serialized).unwrap(); + res.insert(eventuality); + } + res } } diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index e10aab54..b5dc3dd9 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -263,7 +263,7 @@ impl> ContinuallyRan for EventualityTas let mut eventualities = EventualityDb::::eventualities(&txn, key); for new_eventuality in new_eventualities { - eventualities.active_eventualities.insert(new_eventuality.lookup(), new_eventuality); + eventualities.insert(new_eventuality); } EventualityDb::::set_eventualities(&mut txn, key, &eventualities); } diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs index 745aa772..2fd98d4b 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/report/db.rs @@ -4,6 +4,8 @@ create_db!( ScannerReport { // The next block to potentially report NextToPotentiallyReportBlock: () -> u64, + // The next Batch ID to use + NextBatchId: () -> u32, } ); @@ -20,6 +22,8 @@ impl ReportDb { } pub(crate) fn acquire_batch_id(txn: &mut impl DbTxn) -> u32 { - todo!("TODO") + let id = NextBatchId::get(txn).unwrap_or(0); + NextBatchId::set(txn, &(id + 1)); + id } } diff --git a/processor/scanner/src/scan/db.rs b/processor/scanner/src/scan/db.rs index 9b98150f..6df84df1 100644 --- a/processor/scanner/src/scan/db.rs +++ b/processor/scanner/src/scan/db.rs @@ -29,7 +29,14 @@ impl ScanDb { txn: &mut impl DbTxn, block_number: u64, ) -> Vec> { - todo!("TODO") + let serialized = SerializedQueuedOutputs::get(txn, block_number).unwrap_or(vec![]); + let mut serialized = serialized.as_slice(); + + let mut res = Vec::with_capacity(serialized.len() / 128); + while !serialized.is_empty() { + res.push(OutputWithInInstruction::::read(&mut serialized).unwrap()); + } + res } pub(crate) fn queue_output_until_block( From 612c67c537f07103a7e78f26bb93ce914f19adc1 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 28 Aug 2024 23:45:17 -0400 Subject: [PATCH 041/368] Cache the cost to aggregate --- processor/scanner/src/lib.rs | 2 +- processor/scanner/src/scan/mod.rs | 24 ++++++++++++++++++++---- 2 files changed, 21 insertions(+), 5 deletions(-) diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 3515da05..d8a29951 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -141,7 +141,7 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { async fn 
cost_to_aggregate( &self, coin: Coin, - block_number: u64, + reference_block: &Self::Block, ) -> Result; /// The dust threshold for the specified coin. diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index b427b535..59d0f197 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -1,3 +1,5 @@ +use std::collections::HashMap; + use scale::Decode; use serai_db::{Get, DbTxn, Db}; @@ -129,14 +131,17 @@ impl ContinuallyRan for ScanTask { let keys = ScannerGlobalDb::::active_keys_as_of_next_to_scan_for_outputs_block(&txn) .expect("scanning for a blockchain without any keys set"); + // The scan data for this block let mut scan_data = SenderScanData { block_number: b, received_external_outputs: vec![], forwards: vec![], returns: vec![], }; + // The InInstructions for this block let mut in_instructions = vec![]; + // The outputs queued for this block let queued_outputs = { let mut queued_outputs = ScanDb::::take_queued_outputs(&mut txn, b); // Sort the queued outputs in case they weren't queued in a deterministic fashion @@ -148,6 +153,11 @@ impl ContinuallyRan for ScanTask { in_instructions.push(queued_output.in_instruction); } + // We subtract the cost to aggregate from some outputs we scan + // This cost is fetched with an asynchronous function which may be non-trivial + // We cache the result of this function here to avoid calling it multiple times + let mut costs_to_aggregate = HashMap::with_capacity(1); + // Scan for each key for key in keys { for output in block.scan_for_outputs(key.key) { @@ -207,13 +217,19 @@ impl ContinuallyRan for ScanTask { // Check this isn't dust let balance_to_use = { let mut balance = output.balance(); + // First, subtract 2 * the cost to aggregate, as detailed in // `spec/processor/UTXO Management.md` - // TODO: Cache this - let cost_to_aggregate = - self.feed.cost_to_aggregate(balance.coin, b).await.map_err(|e| { + + // We cache this, so if it isn't yet cached, insert it into the cache + if let std::collections::hash_map::Entry::Vacant(e) = + costs_to_aggregate.entry(balance.coin) + { + e.insert(self.feed.cost_to_aggregate(balance.coin, &block).await.map_err(|e| { format!("couldn't fetch cost to aggregate {:?} at {b}: {e:?}", balance.coin) - })?; + })?); + } + let cost_to_aggregate = costs_to_aggregate[&balance.coin]; balance.amount.0 -= 2 * cost_to_aggregate.0; // Now, check it's still past the dust threshold From 8ac501028de44ac968b99aea9c750226468fc7b6 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 29 Aug 2024 00:01:31 -0400 Subject: [PATCH 042/368] Add API to publish Batches with This doesn't have to be abstract, we can generate the message and use the message-queue API, yet this should help with testing. --- processor/scanner/src/lib.rs | 27 ++++++++++++++++++++++++--- processor/scanner/src/report/mod.rs | 22 +++++++++++++++------- 2 files changed, 39 insertions(+), 10 deletions(-) diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index d8a29951..3e828fcb 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -6,6 +6,7 @@ use group::GroupEncoding; use serai_db::{Get, DbTxn, Db}; use serai_primitives::{NetworkId, Coin, Amount}; +use serai_in_instructions_primitives::Batch; use primitives::{task::*, Address, ReceivedOutput, Block}; @@ -81,7 +82,7 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { /// An error encountered when fetching data from the blockchain. /// /// This MUST be an ephemeral error. 
Retrying fetching data from the blockchain MUST eventually - /// resolve without manual intervention. + /// resolve without manual intervention/changing the arguments. type EphemeralError: Debug; /// Fetch the number of the latest finalized block. @@ -156,6 +157,20 @@ type AddressFor = <::Block as Block>::Address; type OutputFor = <::Block as Block>::Output; type EventualityFor = <::Block as Block>::Eventuality; +#[async_trait::async_trait] +pub trait BatchPublisher: 'static + Send + Sync { + /// An error encountered when publishing the Batch. + /// + /// This MUST be an ephemeral error. Retrying publication MUST eventually resolve without manual + /// intervention/changing the arguments. + type EphemeralError: Debug; + + /// Publish a Batch. + /// + /// This function must be safe to call with the same Batch multiple times. + async fn publish_batch(&mut self, batch: Batch) -> Result<(), Self::EphemeralError>; +} + /// A return to occur. pub struct Return { address: AddressFor, @@ -193,10 +208,16 @@ impl Scanner { /// /// This will begin its execution, spawning several asynchronous tasks. // TODO: Take start_time and binary search here? - pub async fn new(db: impl Db, feed: S, scheduler: impl Scheduler, start_block: u64) -> Self { + pub async fn new( + db: impl Db, + feed: S, + batch_publisher: impl BatchPublisher, + scheduler: impl Scheduler, + start_block: u64, + ) -> Self { let index_task = index::IndexTask::new(db.clone(), feed.clone(), start_block).await; let scan_task = scan::ScanTask::new(db.clone(), feed.clone(), start_block); - let report_task = report::ReportTask::<_, S>::new(db.clone(), start_block); + let report_task = report::ReportTask::<_, S, _>::new(db.clone(), batch_publisher, start_block); let eventuality_task = eventuality::EventualityTask::new(db, feed, scheduler, start_block); let (_index_handle, index_run) = RunNowHandle::new(); diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index 18f842e2..b789ea58 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -6,11 +6,12 @@ use serai_db::{DbTxn, Db}; use serai_primitives::BlockHash; use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; +use primitives::task::ContinuallyRan; use crate::{ db::{ScannerGlobalDb, ScanToReportDb}, index, scan::next_to_scan_for_outputs_block, - ScannerFeed, ContinuallyRan, + ScannerFeed, BatchPublisher, }; mod db; @@ -24,13 +25,14 @@ use db::ReportDb; the InInstructions for it. 
*/ #[allow(non_snake_case)] -pub(crate) struct ReportTask { +pub(crate) struct ReportTask { db: D, + batch_publisher: B, _S: PhantomData, } -impl ReportTask { - pub(crate) fn new(mut db: D, start_block: u64) -> Self { +impl ReportTask { + pub(crate) fn new(mut db: D, batch_publisher: B, start_block: u64) -> Self { if ReportDb::next_to_potentially_report_block(&db).is_none() { // Initialize the DB let mut txn = db.txn(); @@ -38,12 +40,12 @@ impl ReportTask { txn.commit(); } - Self { db, _S: PhantomData } + Self { db, batch_publisher, _S: PhantomData } } } #[async_trait::async_trait] -impl ContinuallyRan for ReportTask { +impl ContinuallyRan for ReportTask { async fn run_iteration(&mut self) -> Result { let highest_reportable = { // Fetch the next to scan block @@ -107,7 +109,13 @@ impl ContinuallyRan for ReportTask { } } - todo!("TODO: Set/emit batches"); + for batch in batches { + self + .batch_publisher + .publish_batch(batch) + .await + .map_err(|e| format!("failed to publish batch: {e:?}"))?; + } } // Update the next to potentially report block From f9d02d43c211e463a3087c98460b7f12b22ad5ac Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 29 Aug 2024 12:45:47 -0400 Subject: [PATCH 043/368] Route burns through the scanner --- Cargo.lock | 1 + processor/scanner/Cargo.toml | 1 + processor/scanner/src/db.rs | 31 +++++- processor/scanner/src/eventuality/db.rs | 16 ++- processor/scanner/src/eventuality/mod.rs | 124 ++++++++++++++++++----- processor/scanner/src/lib.rs | 89 ++++++++++++++-- 6 files changed, 223 insertions(+), 39 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 4cc54e15..2a9de4b9 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8670,6 +8670,7 @@ dependencies = [ "hex", "log", "parity-scale-codec", + "serai-coins-primitives", "serai-db", "serai-in-instructions-primitives", "serai-primitives", diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index a16b55f2..e7cdef97 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -37,6 +37,7 @@ serai-db = { path = "../../common/db" } serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } serai-in-instructions-primitives = { path = "../../substrate/in-instructions/primitives", default-features = false, features = ["std"] } +serai-coins-primitives = { path = "../../substrate/coins/primitives", default-features = false, features = ["std"] } messages = { package = "serai-processor-messages", path = "../messages" } primitives = { package = "serai-processor-primitives", path = "../primitives" } diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 810859a6..a6272eeb 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -6,6 +6,7 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db, db_channel}; use serai_in_instructions_primitives::InInstructionWithBalance; +use serai_coins_primitives::OutInstructionWithBalance; use primitives::{EncodableG, Address, ReceivedOutput}; @@ -336,9 +337,9 @@ impl ScanToEventualityDb { } #[derive(BorshSerialize, BorshDeserialize)] -pub(crate) struct BlockBoundInInstructions { - pub(crate) block_number: u64, - pub(crate) in_instructions: Vec, +struct BlockBoundInInstructions { + block_number: u64, + in_instructions: Vec, } db_channel! { @@ -370,3 +371,27 @@ impl ScanToReportDb { data.in_instructions } } + +db_channel! 
{ + ScannerSubstrateEventuality { + Burns: (acknowledged_block: u64) -> Vec, + } +} + +pub(crate) struct SubstrateToEventualityDb; +impl SubstrateToEventualityDb { + pub(crate) fn send_burns( + txn: &mut impl DbTxn, + acknowledged_block: u64, + burns: &Vec, + ) { + Burns::send(txn, acknowledged_block, burns); + } + + pub(crate) fn try_recv_burns( + txn: &mut impl DbTxn, + acknowledged_block: u64, + ) -> Option> { + Burns::try_recv(txn, acknowledged_block) + } +} diff --git a/processor/scanner/src/eventuality/db.rs b/processor/scanner/src/eventuality/db.rs index c5a07b04..f810ba2f 100644 --- a/processor/scanner/src/eventuality/db.rs +++ b/processor/scanner/src/eventuality/db.rs @@ -11,6 +11,8 @@ create_db!( ScannerEventuality { // The next block to check for resolving eventualities NextToCheckForEventualitiesBlock: () -> u64, + // The latest block this task has handled which was notable + LatestHandledNotableBlock: () -> u64, SerializedEventualities: (key: K) -> Vec, } @@ -22,16 +24,22 @@ impl EventualityDb { txn: &mut impl DbTxn, next_to_check_for_eventualities_block: u64, ) { - assert!( - next_to_check_for_eventualities_block != 0, - "next-to-check-for-eventualities block was 0 when it's bound non-zero" - ); NextToCheckForEventualitiesBlock::set(txn, &next_to_check_for_eventualities_block); } pub(crate) fn next_to_check_for_eventualities_block(getter: &impl Get) -> Option { NextToCheckForEventualitiesBlock::get(getter) } + pub(crate) fn set_latest_handled_notable_block( + txn: &mut impl DbTxn, + latest_handled_notable_block: u64, + ) { + LatestHandledNotableBlock::set(txn, &latest_handled_notable_block); + } + pub(crate) fn latest_handled_notable_block(getter: &impl Get) -> Option { + LatestHandledNotableBlock::get(getter) + } + pub(crate) fn set_eventualities( txn: &mut impl DbTxn, key: KeyFor, diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index b5dc3dd9..38176ed4 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -6,7 +6,10 @@ use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Eventuality, use crate::{ lifetime::LifetimeStage, - db::{OutputWithInInstruction, ReceiverScanData, ScannerGlobalDb, ScanToEventualityDb}, + db::{ + OutputWithInInstruction, ReceiverScanData, ScannerGlobalDb, SubstrateToEventualityDb, + ScanToEventualityDb, + }, BlockExt, ScannerFeed, KeyFor, SchedulerUpdate, Scheduler, sort_outputs, scan::{next_to_scan_for_outputs_block, queue_output_until_block}, }; @@ -20,6 +23,7 @@ use db::EventualityDb; /// only allowed to scan `S::WINDOW_LENGTH - 1` blocks ahead so we can safely schedule keys to /// retire `S::WINDOW_LENGTH` blocks out. pub(crate) fn latest_scannable_block(getter: &impl Get) -> Option { + assert!(S::WINDOW_LENGTH > 0); EventualityDb::::next_to_check_for_eventualities_block(getter) .map(|b| b + S::WINDOW_LENGTH - 1) } @@ -79,24 +83,81 @@ impl> EventualityTask { if EventualityDb::::next_to_check_for_eventualities_block(&db).is_none() { // Initialize the DB let mut txn = db.txn(); - // We can receive outputs in `start_block`, but any descending transactions will be in the - // next block - EventualityDb::::set_next_to_check_for_eventualities_block(&mut txn, start_block + 1); + EventualityDb::::set_next_to_check_for_eventualities_block(&mut txn, start_block); txn.commit(); } Self { db, feed, scheduler } } + + // Returns a boolean of if we intaked any Burns. 
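The `db_channel!` entry above gives the Eventuality task a persistent FIFO of Burns, keyed by the acknowledged block they reference. A sketch of its semantics with in-memory types; the real channel is persisted by serai_db and drained under a transaction, as `intake_burns` below does:

use std::collections::{HashMap, VecDeque};

// One FIFO per acknowledged block number; `send` appends and `try_recv` pops
struct BurnsChannelSketch<T> {
  queues: HashMap<u64, VecDeque<T>>,
}

impl<T> BurnsChannelSketch<T> {
  fn send(&mut self, acknowledged_block: u64, burns: T) {
    self.queues.entry(acknowledged_block).or_default().push_back(burns);
  }
  fn try_recv(&mut self, acknowledged_block: u64) -> Option<T> {
    self.queues.get_mut(&acknowledged_block).and_then(VecDeque::pop_front)
  }
}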
+ fn intake_burns(&mut self) -> bool { + let mut intaked_any = false; + + // If we've handled a notable block, we may have Burns being queued with it as the reference + if let Some(latest_handled_notable_block) = + EventualityDb::::latest_handled_notable_block(&self.db) + { + let mut txn = self.db.txn(); + // Drain the entire channel + while let Some(burns) = + SubstrateToEventualityDb::try_recv_burns(&mut txn, latest_handled_notable_block) + { + intaked_any = true; + + let new_eventualities = self.scheduler.fulfill(&mut txn, burns); + + // TODO: De-duplicate this with below instance via a helper function + for (key, new_eventualities) in new_eventualities { + let key = { + let mut key_repr = as GroupEncoding>::Repr::default(); + assert_eq!(key.len(), key_repr.as_ref().len()); + key_repr.as_mut().copy_from_slice(&key); + KeyFor::::from_bytes(&key_repr).unwrap() + }; + + let mut eventualities = EventualityDb::::eventualities(&txn, key); + for new_eventuality in new_eventualities { + eventualities.insert(new_eventuality); + } + EventualityDb::::set_eventualities(&mut txn, key, &eventualities); + } + } + txn.commit(); + } + + intaked_any + } } #[async_trait::async_trait] impl> ContinuallyRan for EventualityTask { async fn run_iteration(&mut self) -> Result { + // Fetch the highest acknowledged block + let Some(highest_acknowledged) = ScannerGlobalDb::::highest_acknowledged_block(&self.db) + else { + // If we've never acknowledged a block, return + return Ok(false); + }; + + // A boolean of whether we've made any progress to return at the end of the function + let mut made_progress = false; + + // Start by intaking any Burns we have sitting around + made_progress |= self.intake_burns(); + /* - The set of Eventualities only increase when a block is acknowledged. Accordingly, we can only - iterate up to (and including) the block currently pending acknowledgement. "including" is - because even if block `b` causes new Eventualities, they'll only potentially resolve in block - `b + 1`. + Eventualities increase upon one of two cases: + + 1) We're fulfilling Burns + 2) We acknowledged a block + + We can't know the processor has intaked all Burns it should have when we process block `b`. + We solve this by executing a consensus protocol whenever a resolution for an Eventuality + created to fulfill Burns occurs. Accordingly, we force ourselves to obtain synchrony on such + blocks (and all preceding Burns). + + This means we can only iterate up to the block currently pending acknowledgement. We only know blocks will need acknowledgement *for sure* if they were scanned. The only other causes are key activation and retirement (both scheduled outside the scan window). This makes @@ -113,32 +174,38 @@ impl> ContinuallyRan for EventualityTas next_to_scan }; - // Fetch the highest acknowledged block - let highest_acknowledged = ScannerGlobalDb::::highest_acknowledged_block(&self.db) - .expect("EventualityTask run before writing the start block"); - // Fetch the next block to check let next_to_check = EventualityDb::::next_to_check_for_eventualities_block(&self.db) .expect("EventualityTask run before writing the start block"); // Check all blocks - let mut iterated = false; for b in next_to_check ..
exclusive_upper_bound { - // If the prior block was notable *and* not acknowledged, break - // This is so if it caused any Eventualities (which may resolve this block), we have them - { - // This `- 1` is safe as next to check is bound to be non-zero - // This is possible since even if we receive coins in block 0, any transactions we'd make - // would resolve in block 1 (the first block we'll check under this non-zero rule) - let prior_block = b - 1; - if ScannerGlobalDb::::is_block_notable(&self.db, prior_block) && - (prior_block > highest_acknowledged) - { + let is_block_notable = ScannerGlobalDb::::is_block_notable(&self.db, b); + if is_block_notable { + /* + If this block is notable *and* not acknowledged, break. + + This is so if Burns queued prior to this block's acknowledgement caused any Eventualities + (which may resolve this block), we have them. If it wasn't for that, it'd be so if this + block's acknowledgement caused any Eventualities, we have them, though those would only + potentially resolve in the next block (letting us scan this block without delay). + */ + if b > highest_acknowledged { break; } + + // Since this block is notable, ensure we've intaked all the Burns preceding it + // We can know with certainty that the channel is fully populated at this time since we've + // acknowledged a newer block (so we've handled the state up to this point and new state + // will be for the newer block) + #[allow(unused_assignments)] + { + made_progress |= self.intake_burns(); + } } - iterated = true; + // Since we're handling this block, we are making progress + made_progress = true; let block = self.feed.block_by_number(&self.db, b).await?; @@ -186,6 +253,7 @@ impl> ContinuallyRan for EventualityTas let mut non_external_outputs = block.scan_for_outputs(key.key); non_external_outputs.retain(|output| output.kind() != OutputType::External); // Drop any outputs less than the dust limit + // TODO: Either further filter to outputs we made or also check cost_to_aggregate non_external_outputs.retain(|output| { let balance = output.balance(); balance.amount.0 >= self.feed.dust(balance.coin).0 @@ -288,10 +356,16 @@ impl> ContinuallyRan for EventualityTas // Update the next-to-check block EventualityDb::::set_next_to_check_for_eventualities_block(&mut txn, next_to_check); + + // If this block was notable, update the latest-handled notable block + if is_block_notable { + EventualityDb::::set_latest_handled_notable_block(&mut txn, b); + } + txn.commit(); } // Run dependents if we successfully checked any blocks - Ok(iterated) + Ok(made_progress) } } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 3e828fcb..27395d79 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -7,6 +7,7 @@ use serai_db::{Get, DbTxn, Db}; use serai_primitives::{NetworkId, Coin, Amount}; use serai_in_instructions_primitives::Batch; +use serai_coins_primitives::OutInstructionWithBalance; use primitives::{task::*, Address, ReceivedOutput, Block}; @@ -15,7 +16,7 @@ mod lifetime; // Database schema definition and associated functions. mod db; -use db::ScannerGlobalDb; +use db::{ScannerGlobalDb, SubstrateToEventualityDb}; // Task to index the blockchain, ensuring we don't reorganize finalized blocks. mod index; // Scans blocks for received coins. @@ -147,7 +148,7 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { /// The dust threshold for the specified coin. /// - /// This MUST be constant. Serai MJUST NOT create internal outputs worth less than this. 
This + /// This MUST be constant. Serai MUST NOT create internal outputs worth less than this. This /// SHOULD be a value worth handling at a human level. fn dust(&self, coin: Coin) -> Amount; } @@ -195,6 +196,40 @@ pub trait Scheduler: 'static + Send { txn: &mut impl DbTxn, update: SchedulerUpdate, ) -> HashMap, Vec>>; + + /// Fulfill a series of payments, yielding the Eventualities now to be scanned for. + /// + /// Any Eventualities returned by this function must include an output-to-self (such as a Branch + /// or Change), unless they descend from a transaction returned by this function which satisfies + /// that requirement. + /// + /// The `Vec` used as the key in the returned HashMap should be the encoded key the + /// Eventualities are for. + /* + We need an output-to-self so we can detect a block with an Eventuality completion with regards + to Burns, forcing us to ensure we have accumulated all the Burns we should by the time we + handle that block. We explicitly don't require children have this requirement as by detecting + the first resolution, we ensure we'll accumulate the Burns (therefore becoming aware of the + childrens' Eventualities, enabling recognizing their resolutions). + + This carve out enables the following: + + ------------------ Fulfillment TX ---------------------- + | Primary Output | ---------------> | New Primary Output | + ------------------ | ---------------------- + | + | ------------------------------ + |------> | Branching Output for Burns | + ------------------------------ + + Without wasting pointless Change outputs on every transaction (as there's a single parent which + has an output-to-self). + */ + fn fulfill( + &mut self, + txn: &mut impl DbTxn, + payments: Vec, + ) -> HashMap, Vec>>; } /// A representation of a scanner. @@ -242,6 +277,8 @@ impl Scanner { /// /// This means this block was ordered on Serai in relation to `Burn` events, and all validators /// have achieved synchrony on it. + /// + /// The calls to this function must be ordered with regards to `queue_burns`. pub fn acknowledge_block( &mut self, mut txn: impl DbTxn, @@ -249,10 +286,23 @@ impl Scanner { key_to_activate: Option>, ) { log::info!("acknowledging block {block_number}"); + assert!( ScannerGlobalDb::::is_block_notable(&txn, block_number), "acknowledging a block which wasn't notable" ); + if let Some(prior_highest_acknowledged_block) = + ScannerGlobalDb::::highest_acknowledged_block(&txn) + { + assert!(block_number > prior_highest_acknowledged_block, "acknowledging blocks out-of-order"); + for b in (prior_highest_acknowledged_block + 1) .. (block_number - 1) { + assert!( + !ScannerGlobalDb::::is_block_notable(&txn, b), + "skipped acknowledging a block which was notable" + ); + } + } + ScannerGlobalDb::::set_highest_acknowledged_block(&mut txn, block_number); if let Some(key_to_activate) = key_to_activate { ScannerGlobalDb::::queue_key(&mut txn, block_number + S::WINDOW_LENGTH, key_to_activate); @@ -268,13 +318,38 @@ impl Scanner { /// Queue Burns. /// - /// The scanner only updates the scheduler with new outputs upon acknowledging a block. We can - /// safely queue Burns so long as they're only actually added once we've handled the outputs from - /// the block acknowledged prior to their queueing. - pub fn queue_burns(&mut self, txn: &mut impl DbTxn, burns: Vec<()>) { + /// The scanner only updates the scheduler with new outputs upon acknowledging a block. The + /// ability to fulfill Burns, and therefore their order, is dependent on the current output + /// state. 
This immediately sets a bound that this function is ordered with regards to + /// `acknowledge_block`. + /* + The fact Burns can be queued during any Substrate block is problematic. The scanner is allowed + to scan anything within the window set by the Eventuality task. The Eventuality task is allowed + to handle all blocks until it reaches a block needing acknowledgement. + + This means we may queue Burns when the latest acknowledged block is 1, yet we've already + scanned 101. Such Burns may complete back in block 2, and we simply wouldn't have noticed due + to not having yet generated the Eventualities. + + We solve this by mandating all transactions made as the result of an Eventuality include a + output-to-self worth at least `N::DUST`. If that occurs, the scanner will force a consensus + protocol on block 2. Accordingly, we won't scan all the way to block 101 (missing the + resolution of the Eventuality) as we'll obtain synchrony on block 2 and all Burns queued prior + to it. + + Another option would be to re-check historical blocks, yet this would potentially redo an + unbounded amount of work. It would also not allow us to safely detect if received outputs were + in fact the result of Eventualities or not. + + Another option would be to schedule Burns after the next-acknowledged block, yet this would add + latency and likely practically require we add regularly scheduled notable blocks (which may be + unnecessary). + */ + pub fn queue_burns(&mut self, txn: &mut impl DbTxn, burns: &Vec) { let queue_as_of = ScannerGlobalDb::::highest_acknowledged_block(txn) .expect("queueing Burns yet never acknowledged a block"); - todo!("TODO") + + SubstrateToEventualityDb::send_burns(txn, queue_as_of, burns) } } From 9f4b28e5ae83d20d6eef554d72001a408ef3556e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 29 Aug 2024 12:49:35 -0400 Subject: [PATCH 044/368] Clarify output-to-self to output-to-Serai There's only the requirement it's to an active key which is being reported for. --- processor/scanner/src/lib.rs | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 27395d79..77bed7fc 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -199,14 +199,14 @@ pub trait Scheduler: 'static + Send { /// Fulfill a series of payments, yielding the Eventualities now to be scanned for. /// - /// Any Eventualities returned by this function must include an output-to-self (such as a Branch + /// Any Eventualities returned by this function must include an output-to-Serai (such as a Branch /// or Change), unless they descend from a transaction returned by this function which satisfies /// that requirement. /// /// The `Vec` used as the key in the returned HashMap should be the encoded key the /// Eventualities are for. /* - We need an output-to-self so we can detect a block with an Eventuality completion with regards + We need an output-to-Serai so we can detect a block with an Eventuality completion with regards to Burns, forcing us to ensure we have accumulated all the Burns we should by the time we handle that block. We explicitly don't require children have this requirement as by detecting the first resolution, we ensure we'll accumulate the Burns (therefore becoming aware of the @@ -223,7 +223,7 @@ pub trait Scheduler: 'static + Send { ------------------------------ Without wasting pointless Change outputs on every transaction (as there's a single parent which - has an output-to-self). 
+ has an output-to-Serai, the new primary output). */ fn fulfill( &mut self, @@ -332,7 +332,7 @@ impl Scanner { to not having yet generated the Eventualities. We solve this by mandating all transactions made as the result of an Eventuality include a - output-to-self worth at least `N::DUST`. If that occurs, the scanner will force a consensus + output-to-Serai worth at least `DUST`. If that occurs, the scanner will force a consensus protocol on block 2. Accordingly, we won't scan all the way to block 101 (missing the resolution of the Eventuality) as we'll obtain synchrony on block 2 and all Burns queued prior to it. From 920303e1b4c60ee298c920db530a3a8cc92019a0 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 29 Aug 2024 14:57:43 -0400 Subject: [PATCH 045/368] Add helper to intake Eventualities --- processor/scanner/src/eventuality/mod.rs | 80 ++++++++++++------------ processor/scanner/src/lib.rs | 1 - 2 files changed, 40 insertions(+), 41 deletions(-) diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 38176ed4..7b5e3eed 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -1,3 +1,5 @@ +use std::collections::HashMap; + use group::GroupEncoding; use serai_db::{Get, DbTxn, Db}; @@ -10,7 +12,7 @@ use crate::{ OutputWithInInstruction, ReceiverScanData, ScannerGlobalDb, SubstrateToEventualityDb, ScanToEventualityDb, }, - BlockExt, ScannerFeed, KeyFor, SchedulerUpdate, Scheduler, sort_outputs, + BlockExt, ScannerFeed, KeyFor, EventualityFor, SchedulerUpdate, Scheduler, sort_outputs, scan::{next_to_scan_for_outputs_block, queue_output_until_block}, }; @@ -28,6 +30,29 @@ pub(crate) fn latest_scannable_block(getter: &impl Get) -> Optio .map(|b| b + S::WINDOW_LENGTH - 1) } +/// Intake a set of Eventualities into the DB. +/// +/// The HashMap is keyed by the key these Eventualities are for. +fn intake_eventualities( + txn: &mut impl DbTxn, + to_intake: HashMap, Vec>>, +) { + for (key, new_eventualities) in to_intake { + let key = { + let mut key_repr = as GroupEncoding>::Repr::default(); + assert_eq!(key.len(), key_repr.as_ref().len()); + key_repr.as_mut().copy_from_slice(&key); + KeyFor::::from_bytes(&key_repr).unwrap() + }; + + let mut eventualities = EventualityDb::::eventualities(txn, key); + for new_eventuality in new_eventualities { + eventualities.insert(new_eventuality); + } + EventualityDb::::set_eventualities(txn, key, &eventualities); + } +} + /* When we scan a block, we receive outputs. 
When this block is acknowledged, we accumulate those outputs into some scheduler, potentially causing certain transactions to begin their signing @@ -106,22 +131,7 @@ impl> EventualityTask { intaked_any = true; let new_eventualities = self.scheduler.fulfill(&mut txn, burns); - - // TODO: De-duplicate this with below instance via a helper function - for (key, new_eventualities) in new_eventualities { - let key = { - let mut key_repr = as GroupEncoding>::Repr::default(); - assert_eq!(key.len(), key_repr.as_ref().len()); - key_repr.as_mut().copy_from_slice(&key); - KeyFor::::from_bytes(&key_repr).unwrap() - }; - - let mut eventualities = EventualityDb::::eventualities(&txn, key); - for new_eventuality in new_eventualities { - eventualities.insert(new_eventuality); - } - EventualityDb::::set_eventualities(&mut txn, key, &eventualities); - } + intake_eventualities::(&mut txn, new_eventualities); } txn.commit(); } @@ -310,30 +320,20 @@ impl> ContinuallyRan for EventualityTas } // Update the scheduler - let mut scheduler_update = SchedulerUpdate { outputs, forwards, returns }; - scheduler_update.outputs.sort_by(sort_outputs); - scheduler_update.forwards.sort_by(sort_outputs); - scheduler_update.returns.sort_by(|a, b| sort_outputs(&a.output, &b.output)); - // Intake the new Eventualities - let new_eventualities = self.scheduler.update(&mut txn, scheduler_update); - for (key, new_eventualities) in new_eventualities { - let key = { - let mut key_repr = as GroupEncoding>::Repr::default(); - assert_eq!(key.len(), key_repr.as_ref().len()); - key_repr.as_mut().copy_from_slice(&key); - KeyFor::::from_bytes(&key_repr).unwrap() - }; - - keys - .iter() - .find(|serai_key| serai_key.key == key) - .expect("queueing eventuality for key which isn't active"); - - let mut eventualities = EventualityDb::::eventualities(&txn, key); - for new_eventuality in new_eventualities { - eventualities.insert(new_eventuality); + { + let mut scheduler_update = SchedulerUpdate { outputs, forwards, returns }; + scheduler_update.outputs.sort_by(sort_outputs); + scheduler_update.forwards.sort_by(sort_outputs); + scheduler_update.returns.sort_by(|a, b| sort_outputs(&a.output, &b.output)); + // Intake the new Eventualities + let new_eventualities = self.scheduler.update(&mut txn, scheduler_update); + for key in new_eventualities.keys() { + keys + .iter() + .find(|serai_key| serai_key.key.to_bytes().as_ref() == key.as_slice()) + .expect("intaking Eventuality for key which isn't active"); } - EventualityDb::::set_eventualities(&mut txn, key, &eventualities); + intake_eventualities::(&mut txn, new_eventualities); } // Now that we've intaked any Eventualities caused, check if we're retiring any keys diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 77bed7fc..5f7e44a2 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -242,7 +242,6 @@ impl Scanner { /// Create a new scanner. /// /// This will begin its execution, spawning several asynchronous tasks. - // TODO: Take start_time and binary search here? 
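With `Scanner::new` now taking a `BatchPublisher`, a test can inject one which simply records everything handed to it. A sketch of such a mock; the `MemoryBatchPublisher` name is hypothetical, while `Batch` is the type from serai-in-instructions-primitives:

use serai_in_instructions_primitives::Batch;

#[derive(Default)]
struct MemoryBatchPublisher {
  published: Vec<Batch>,
}

#[async_trait::async_trait]
impl BatchPublisher for MemoryBatchPublisher {
  type EphemeralError = std::convert::Infallible;
  async fn publish_batch(&mut self, batch: Batch) -> Result<(), Self::EphemeralError> {
    // Publishing the same Batch multiple times is safe; it's simply recorded again
    self.published.push(batch);
    Ok(())
  }
}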
pub async fn new( db: impl Db, feed: S, From 8db76ed67c4e60eb59e98a49b7f880911be65baa Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 29 Aug 2024 15:26:04 -0400 Subject: [PATCH 046/368] Add key management to the scheduler --- processor/scanner/src/eventuality/db.rs | 23 ++++++++++++++++- processor/scanner/src/eventuality/mod.rs | 14 +++++++++- processor/scanner/src/lib.rs | 33 +++++++++++++++++++----- 3 files changed, 62 insertions(+), 8 deletions(-) diff --git a/processor/scanner/src/eventuality/db.rs b/processor/scanner/src/eventuality/db.rs index f810ba2f..da8a3024 100644 --- a/processor/scanner/src/eventuality/db.rs +++ b/processor/scanner/src/eventuality/db.rs @@ -1,12 +1,17 @@ use core::marker::PhantomData; use scale::Encode; +use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db}; use primitives::{EncodableG, Eventuality, EventualityTracker}; use crate::{ScannerFeed, KeyFor, EventualityFor}; +// The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. +trait Borshy: BorshSerialize + BorshDeserialize {} +impl Borshy for T {} + create_db!( ScannerEventuality { // The next block to check for resolving eventualities @@ -15,6 +20,8 @@ create_db!( LatestHandledNotableBlock: () -> u64, SerializedEventualities: (key: K) -> Vec, + + RetiredKey: (block_number: u64) -> K, } ); @@ -51,7 +58,6 @@ impl EventualityDb { } SerializedEventualities::set(txn, EncodableG(key), &serialized); } - pub(crate) fn eventualities( getter: &impl Get, key: KeyFor, @@ -66,4 +72,19 @@ impl EventualityDb { } res } + + pub(crate) fn retire_key(txn: &mut impl DbTxn, block_number: u64, key: KeyFor) { + assert!( + RetiredKey::get::>>(txn, block_number).is_none(), + "retiring multiple keys within the same block" + ); + RetiredKey::set(txn, block_number, &EncodableG(key)); + } + pub(crate) fn take_retired_key(txn: &mut impl DbTxn, block_number: u64) -> Option> { + let res = RetiredKey::get::>>(txn, block_number).map(|res| res.0); + if res.is_some() { + RetiredKey::del::>>(txn, block_number); + } + res + } } diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 7b5e3eed..c5f93789 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -248,6 +248,11 @@ impl> ContinuallyRan for EventualityTas let mut outputs = received_external_outputs; for key in &keys { + // If this is the key's activation block, activate it + if key.activation_block_number == b { + self.scheduler.activate_key(&mut txn, key.key); + } + let completed_eventualities = { let mut eventualities = EventualityDb::::eventualities(&txn, key.key); let completed_eventualities = block.check_for_eventuality_resolutions(&mut eventualities); @@ -349,11 +354,18 @@ impl> ContinuallyRan for EventualityTas // Retire this key `WINDOW_LENGTH` blocks in the future to ensure the scan task never // has a malleable view of the keys. 
- ScannerGlobalDb::::retire_key(&mut txn, b + S::WINDOW_LENGTH, key.key); + let retire_at = b + S::WINDOW_LENGTH; + ScannerGlobalDb::::retire_key(&mut txn, retire_at, key.key); + EventualityDb::::retire_key(&mut txn, retire_at, key.key); } } } + // If we retired any key at this block, retire it within the scheduler + if let Some(key) = EventualityDb::::take_retired_key(&mut txn, b) { + self.scheduler.retire_key(&mut txn, key); + } + // Update the next-to-check block EventualityDb::::set_next_to_check_for_eventualities_block(&mut txn, next_to_check); diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 5f7e44a2..d90ca08e 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -137,6 +137,12 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { Ok(block) } + /// The dust threshold for the specified coin. + /// + /// This MUST be constant. Serai MUST NOT create internal outputs worth less than this. This + /// SHOULD be a value worth handling at a human level. + fn dust(&self, coin: Coin) -> Amount; + /// The cost to aggregate an input as of the specified block. /// /// This is defined as the transaction fee for a 2-input, 1-output transaction. @@ -145,12 +151,6 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { coin: Coin, reference_block: &Self::Block, ) -> Result; - - /// The dust threshold for the specified coin. - /// - /// This MUST be constant. Serai MUST NOT create internal outputs worth less than this. This - /// SHOULD be a value worth handling at a human level. - fn dust(&self, coin: Coin) -> Amount; } type KeyFor = <::Block as Block>::Key; @@ -187,6 +187,27 @@ pub struct SchedulerUpdate { /// The object responsible for accumulating outputs and planning new transactions. pub trait Scheduler: 'static + Send { + /// Activate a key. + /// + /// This SHOULD setup any necessary database structures. This SHOULD NOT cause the new key to + /// be used as the primary key. The multisig rotation time clearly establishes its steps. + fn activate_key(&mut self, txn: &mut impl DbTxn, key: KeyFor); + + /// Flush all outputs within a retiring key to the new key. + /// + /// When a key is activated, the existing multisig should retain its outputs and utility for a + /// certain time period. With `flush_key`, all outputs should be directed towards fulfilling some + /// obligation or the `new_key`. Every output MUST be connected to an Eventuality. If a key no + /// longer has active Eventualities, it MUST be able to be retired. + // TODO: Call this + fn flush_key(&mut self, txn: &mut impl DbTxn, retiring_key: KeyFor, new_key: KeyFor); + + /// Retire a key as it'll no longer be used. + /// + /// Any key retired MUST NOT still have outputs associated with it. This SHOULD be a NOP other + /// than any assertions and database cleanup. + fn retire_key(&mut self, txn: &mut impl DbTxn, key: KeyFor); + /// Accumulate outputs into the scheduler, yielding the Eventualities now to be scanned for. 
/// /// The `Vec` used as the key in the returned HashMap should be the encoded key the From 4f6d91037ef6781801349238e30e8382e254a83a Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 29 Aug 2024 16:27:00 -0400 Subject: [PATCH 047/368] Call flush_key --- processor/scanner/src/db.rs | 36 +++++++++++++++++--- processor/scanner/src/eventuality/mod.rs | 9 ++++- processor/scanner/src/lifetime.rs | 43 +++++++++++++++--------- spec/processor/Multisig Rotation.md | 3 +- 4 files changed, 68 insertions(+), 23 deletions(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index a6272eeb..20aa2999 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -11,7 +11,8 @@ use serai_coins_primitives::OutInstructionWithBalance; use primitives::{EncodableG, Address, ReceivedOutput}; use crate::{ - lifetime::LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, Return, + lifetime::{LifetimeStage, Lifetime}, + ScannerFeed, KeyFor, AddressFor, OutputFor, Return, scan::next_to_scan_for_outputs_block, }; @@ -30,6 +31,7 @@ pub(crate) struct SeraiKey { pub(crate) stage: LifetimeStage, pub(crate) activation_block_number: u64, pub(crate) block_at_which_reporting_starts: u64, + pub(crate) block_at_which_forwarding_starts: Option, } pub(crate) struct OutputWithInInstruction { @@ -82,7 +84,7 @@ create_db!( /* A block is notable if one of three conditions are met: - 1) We activated a key within this block. + 1) We activated a key within this block (or explicitly forward to an activated key). 2) We retired a key within this block. 3) We received outputs within this block. @@ -120,9 +122,32 @@ impl ScannerGlobalDb { // TODO: Panic if we've ever seen this key before - // Push the key + // Fetch the existing keys let mut keys: Vec>>> = ActiveKeys::get(txn).unwrap_or(vec![]); + + // If this new key retires a key, mark the block at which forwarding explicitly occurs notable + // This lets us obtain synchrony over the transactions we'll make to accomplish this + if let Some(key_retired_by_this) = keys.last() { + NotableBlock::set( + txn, + Lifetime::calculate::( + // The 'current block number' used for this calculation + activation_block_number, + // The activation block of the key we're getting the lifetime of + key_retired_by_this.activation_block_number, + // The activation block of the key which will retire this key + Some(activation_block_number), + ) + .block_at_which_forwarding_starts + .expect( + "didn't calculate the block forwarding starts at despite passing the next key's info", + ), + &(), + ); + } + + // Push and save the next key keys.push(SeraiKeyDbEntry { activation_block_number, key: EncodableG(key) }); ActiveKeys::set(txn, &keys); } @@ -185,8 +210,8 @@ impl ScannerGlobalDb { if block_number < raw_keys[i].activation_block_number { continue; } - let (stage, block_at_which_reporting_starts) = - LifetimeStage::calculate_stage_and_reporting_start_block::( + let Lifetime { stage, block_at_which_reporting_starts, block_at_which_forwarding_starts } = + Lifetime::calculate::( block_number, raw_keys[i].activation_block_number, raw_keys.get(i + 1).map(|key| key.activation_block_number), @@ -196,6 +221,7 @@ impl ScannerGlobalDb { stage, activation_block_number: raw_keys[i].activation_block_number, block_at_which_reporting_starts, + block_at_which_forwarding_starts, }); } assert!(keys.len() <= 2, "more than two keys active"); diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index c5f93789..002131cc 100644 --- 
a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -341,8 +341,15 @@ impl> ContinuallyRan for EventualityTas intake_eventualities::(&mut txn, new_eventualities); } - // Now that we've intaked any Eventualities caused, check if we're retiring any keys for key in &keys { + // If this is the block at which forwarding starts for this key, flush it + // We do this after we issue the above update for any efficiencies gained by doing so + if key.block_at_which_forwarding_starts == Some(b) { + assert!(key.key != keys.last().unwrap().key); + self.scheduler.flush_key(&mut txn, key.key, keys.last().unwrap().key); + } + + // Now that we've intaked any Eventualities caused, check if we're retiring any keys if key.stage == LifetimeStage::Finishing { let eventualities = EventualityDb::::eventualities(&txn, key.key); // TODO: This assumes the Scheduler is empty diff --git a/processor/scanner/src/lifetime.rs b/processor/scanner/src/lifetime.rs index 09df7a37..e15c0f55 100644 --- a/processor/scanner/src/lifetime.rs +++ b/processor/scanner/src/lifetime.rs @@ -35,17 +35,25 @@ pub(crate) enum LifetimeStage { Finishing, } -impl LifetimeStage { - /// Get the stage of its lifetime this multisig is in, and the block at which we start reporting - /// outputs to it. +/// The lifetime of the multisig, including various block numbers. +pub(crate) struct Lifetime { + pub(crate) stage: LifetimeStage, + pub(crate) block_at_which_reporting_starts: u64, + // This is only Some if the next key's activation block number is passed to calculate, and the + // stage is at least `LifetimeStage::Active.` + pub(crate) block_at_which_forwarding_starts: Option, +} + +impl Lifetime { + /// Get the lifetime of this multisig. /// /// Panics if the multisig being calculated for isn't actually active and a variety of other /// insane cases. 
- pub(crate) fn calculate_stage_and_reporting_start_block( + pub(crate) fn calculate( block_number: u64, activation_block_number: u64, next_keys_activation_block_number: Option, - ) -> (Self, u64) { + ) -> Self { assert!( activation_block_number >= block_number, "calculating lifetime stage for an inactive multisig" @@ -55,14 +63,14 @@ impl LifetimeStage { let active_yet_not_reporting_end_block = activation_block_number + S::CONFIRMATIONS + S::TEN_MINUTES; // The exclusive end block is the inclusive start block - let reporting_start_block = active_yet_not_reporting_end_block; + let block_at_which_reporting_starts = active_yet_not_reporting_end_block; if block_number < active_yet_not_reporting_end_block { - return (LifetimeStage::ActiveYetNotReporting, reporting_start_block); + return Lifetime { stage: LifetimeStage::ActiveYetNotReporting, block_at_which_reporting_starts, block_at_which_forwarding_starts: None }; } let Some(next_keys_activation_block_number) = next_keys_activation_block_number else { // If there is no next multisig, this is the active multisig - return (LifetimeStage::Active, reporting_start_block); + return Lifetime { stage: LifetimeStage::Active, block_at_which_reporting_starts, block_at_which_forwarding_starts: None }; }; assert!( @@ -70,19 +78,22 @@ impl LifetimeStage { "next set of keys activated before this multisig activated" ); - // If the new multisig is still having its activation block finalized on-chain, this multisig - // is still active (step 3) let new_active_yet_not_reporting_end_block = next_keys_activation_block_number + S::CONFIRMATIONS + S::TEN_MINUTES; + let new_active_and_used_for_change_end_block = + new_active_yet_not_reporting_end_block + S::CONFIRMATIONS; + // The exclusive end block is the inclusive start block + let block_at_which_forwarding_starts = Some(new_active_and_used_for_change_end_block); + + // If the new multisig is still having its activation block finalized on-chain, this multisig + // is still active (step 3) if block_number < new_active_yet_not_reporting_end_block { - return (LifetimeStage::Active, reporting_start_block); + return Lifetime { stage: LifetimeStage::Active, block_at_which_reporting_starts, block_at_which_forwarding_starts }; } // Step 4 details a further CONFIRMATIONS - let new_active_and_used_for_change_end_block = - new_active_yet_not_reporting_end_block + S::CONFIRMATIONS; if block_number < new_active_and_used_for_change_end_block { - return (LifetimeStage::UsingNewForChange, reporting_start_block); + return Lifetime { stage: LifetimeStage::UsingNewForChange, block_at_which_reporting_starts, block_at_which_forwarding_starts }; } // Step 5 details a further 6 hours @@ -90,10 +101,10 @@ impl LifetimeStage { let new_active_and_forwarded_to_end_block = new_active_and_used_for_change_end_block + (6 * 6 * S::TEN_MINUTES); if block_number < new_active_and_forwarded_to_end_block { - return (LifetimeStage::Forwarding, reporting_start_block); + return Lifetime { stage: LifetimeStage::Forwarding, block_at_which_reporting_starts, block_at_which_forwarding_starts }; } // Step 6 - (LifetimeStage::Finishing, reporting_start_block) + Lifetime { stage: LifetimeStage::Finishing, block_at_which_reporting_starts, block_at_which_forwarding_starts } } } diff --git a/spec/processor/Multisig Rotation.md b/spec/processor/Multisig Rotation.md index ff5c3d28..916ce56b 100644 --- a/spec/processor/Multisig Rotation.md +++ b/spec/processor/Multisig Rotation.md @@ -102,7 +102,8 @@ The following timeline is established: 5) For the next 6 hours, 
all non-`Branch` outputs received are immediately forwarded to the new multisig. Only external transactions to the new multisig - are included in `Batch`s. + are included in `Batch`s. Any outputs not yet transferred as change are + explicitly transferred. The new multisig infers the `InInstruction`, and refund address, for forwarded `External` outputs via reading what they were for the original From 2ca7fccb08545dfb868ef3f5d83f8cfa0a367e7c Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 29 Aug 2024 17:37:45 -0400 Subject: [PATCH 048/368] Pass the lifetime information to the scheduler Enables it to decide which keys to use for fulfillment/change. --- processor/scanner/src/eventuality/db.rs | 22 ------- processor/scanner/src/eventuality/mod.rs | 80 +++++++++++++++--------- processor/scanner/src/lib.rs | 14 ++++- processor/scanner/src/lifetime.rs | 40 +++++++++--- 4 files changed, 95 insertions(+), 61 deletions(-) diff --git a/processor/scanner/src/eventuality/db.rs b/processor/scanner/src/eventuality/db.rs index da8a3024..2bd02025 100644 --- a/processor/scanner/src/eventuality/db.rs +++ b/processor/scanner/src/eventuality/db.rs @@ -1,17 +1,12 @@ use core::marker::PhantomData; use scale::Encode; -use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db}; use primitives::{EncodableG, Eventuality, EventualityTracker}; use crate::{ScannerFeed, KeyFor, EventualityFor}; -// The DB macro doesn't support `BorshSerialize + BorshDeserialize` as a bound, hence this. -trait Borshy: BorshSerialize + BorshDeserialize {} -impl Borshy for T {} - create_db!( ScannerEventuality { // The next block to check for resolving eventualities @@ -20,8 +15,6 @@ create_db!( LatestHandledNotableBlock: () -> u64, SerializedEventualities: (key: K) -> Vec, - - RetiredKey: (block_number: u64) -> K, } ); @@ -72,19 +65,4 @@ impl EventualityDb { } res } - - pub(crate) fn retire_key(txn: &mut impl DbTxn, block_number: u64, key: KeyFor) { - assert!( - RetiredKey::get::>>(txn, block_number).is_none(), - "retiring multiple keys within the same block" - ); - RetiredKey::set(txn, block_number, &EncodableG(key)); - } - pub(crate) fn take_retired_key(txn: &mut impl DbTxn, block_number: u64) -> Option> { - let res = RetiredKey::get::>>(txn, block_number).map(|res| res.0); - if res.is_some() { - RetiredKey::del::>>(txn, block_number); - } - res - } } diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 002131cc..400c5690 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -9,7 +9,7 @@ use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Eventuality, use crate::{ lifetime::LifetimeStage, db::{ - OutputWithInInstruction, ReceiverScanData, ScannerGlobalDb, SubstrateToEventualityDb, + SeraiKey, OutputWithInInstruction, ReceiverScanData, ScannerGlobalDb, SubstrateToEventualityDb, ScanToEventualityDb, }, BlockExt, ScannerFeed, KeyFor, EventualityFor, SchedulerUpdate, Scheduler, sort_outputs, @@ -115,6 +115,34 @@ impl> EventualityTask { Self { db, feed, scheduler } } + fn keys_and_keys_with_stages( + &self, + block_number: u64, + ) -> (Vec>>, Vec<(KeyFor, LifetimeStage)>) { + /* + This is proper as the keys for the next-to-scan block (at most `WINDOW_LENGTH` ahead, + which is `<= CONFIRMATIONS`) will be the keys to use here, with only minor edge cases. + + This may include a key which has yet to activate by our perception. We can simply drop + those. 
+ + This may not include a key which has retired by the next-to-scan block. This task is the + one which decides when to retire a key, and when it marks a key to be retired, it is done + with it. Accordingly, it's not an issue if such a key was dropped. + + This also may include a key we've retired which has yet to officially retire. That's fine as + we'll do nothing with it, and the Scheduler traits document this behavior. + */ + assert!(S::WINDOW_LENGTH <= S::CONFIRMATIONS); + let mut keys = ScannerGlobalDb::::active_keys_as_of_next_to_scan_for_outputs_block(&self.db) + .expect("scanning for a blockchain without any keys set"); + // Since the next-to-scan block is ahead of us, drop keys which have yet to actually activate + keys.retain(|key| block_number <= key.activation_block_number); + let keys_with_stages = keys.iter().map(|key| (key.key, key.stage)).collect::>(); + + (keys, keys_with_stages) + } + // Returns a boolean of if we intaked any Burns. fn intake_burns(&mut self) -> bool { let mut intaked_any = false; @@ -123,6 +151,11 @@ impl> EventualityTask { if let Some(latest_handled_notable_block) = EventualityDb::::latest_handled_notable_block(&self.db) { + // We always intake Burns per this block as it's the block we have consensus on + // We would have a consensus failure if some thought the change should be the old key and + // others the new key + let (_keys, keys_with_stages) = self.keys_and_keys_with_stages(latest_handled_notable_block); + let mut txn = self.db.txn(); // Drain the entire channel while let Some(burns) = @@ -130,7 +163,7 @@ impl> EventualityTask { { intaked_any = true; - let new_eventualities = self.scheduler.fulfill(&mut txn, burns); + let new_eventualities = self.scheduler.fulfill(&mut txn, &keys_with_stages, burns); intake_eventualities::(&mut txn, new_eventualities); } txn.commit(); @@ -154,6 +187,7 @@ impl> ContinuallyRan for EventualityTas let mut made_progress = false; // Start by intaking any Burns we have sitting around + // It's important we run this regardless of if we have a new block to handle made_progress |= self.intake_burns(); /* @@ -206,8 +240,8 @@ impl> ContinuallyRan for EventualityTas // Since this block is notable, ensure we've intaked all the Burns preceding it // We can know with certainty that the channel is fully populated at this time since we've - // acknowledged a newer block (so we've handled the state up to this point and new state - // will be for the newer block) + // acknowledged a newer block (so we've handled the state up to this point and any new + // state will be for the newer block) #[allow(unused_assignments)] { made_progress |= self.intake_burns(); @@ -221,22 +255,7 @@ impl> ContinuallyRan for EventualityTas log::debug!("checking eventuality completions in block: {} ({b})", hex::encode(block.id())); - /* - This is proper as the keys for the next to scan block (at most `WINDOW_LENGTH` ahead, - which is `<= CONFIRMATIONS`) will be the keys to use here, with only minor edge cases. - - This may include a key which has yet to activate by our perception. We can simply drop - those. - - This may not include a key which has retired by the next-to-scan block. This task is the - one which decides when to retire a key, and when it marks a key to be retired, it is done - with it. Accordingly, it's not an issue if such a key was dropped. 
- */ - let mut keys = - ScannerGlobalDb::::active_keys_as_of_next_to_scan_for_outputs_block(&self.db) - .expect("scanning for a blockchain without any keys set"); - // Since the next-to-scan block is ahead of us, drop keys which have yet to actually activate - keys.retain(|key| b <= key.activation_block_number); + let (keys, keys_with_stages) = self.keys_and_keys_with_stages(b); let mut txn = self.db.txn(); @@ -331,7 +350,8 @@ impl> ContinuallyRan for EventualityTas scheduler_update.forwards.sort_by(sort_outputs); scheduler_update.returns.sort_by(|a, b| sort_outputs(&a.output, &b.output)); // Intake the new Eventualities - let new_eventualities = self.scheduler.update(&mut txn, scheduler_update); + let new_eventualities = + self.scheduler.update(&mut txn, &keys_with_stages, scheduler_update); for key in new_eventualities.keys() { keys .iter() @@ -345,7 +365,10 @@ impl> ContinuallyRan for EventualityTas // If this is the block at which forwarding starts for this key, flush it // We do this after we issue the above update for any efficiencies gained by doing so if key.block_at_which_forwarding_starts == Some(b) { - assert!(key.key != keys.last().unwrap().key); + assert!( + key.key != keys.last().unwrap().key, + "key which was forwarding was the last key (which has no key after it to forward to)" + ); self.scheduler.flush_key(&mut txn, key.key, keys.last().unwrap().key); } @@ -361,18 +384,15 @@ impl> ContinuallyRan for EventualityTas // Retire this key `WINDOW_LENGTH` blocks in the future to ensure the scan task never // has a malleable view of the keys. - let retire_at = b + S::WINDOW_LENGTH; - ScannerGlobalDb::::retire_key(&mut txn, retire_at, key.key); - EventualityDb::::retire_key(&mut txn, retire_at, key.key); + ScannerGlobalDb::::retire_key(&mut txn, b + S::WINDOW_LENGTH, key.key); + + // We tell the scheduler to retire it now as we're done with it, and this fn doesn't + // require it be called with a canonical order + self.scheduler.retire_key(&mut txn, key.key); } } } - // If we retired any key at this block, retire it within the scheduler - if let Some(key) = EventualityDb::::take_retired_key(&mut txn, b) { - self.scheduler.retire_key(&mut txn, key); - } - // Update the next-to-check block EventualityDb::::set_next_to_check_for_eventualities_block(&mut txn, next_to_check); diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index d90ca08e..2cbae096 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -13,6 +13,7 @@ use primitives::{task::*, Address, ReceivedOutput, Block}; // Logic for deciding where in its lifetime a multisig is. mod lifetime; +pub use lifetime::LifetimeStage; // Database schema definition and associated functions. mod db; @@ -205,16 +206,22 @@ pub trait Scheduler: 'static + Send { /// Retire a key as it'll no longer be used. /// /// Any key retired MUST NOT still have outputs associated with it. This SHOULD be a NOP other - /// than any assertions and database cleanup. + /// than any assertions and database cleanup. This MUST NOT be expected to be called in a fashion + /// ordered to any other calls. fn retire_key(&mut self, txn: &mut impl DbTxn, key: KeyFor); /// Accumulate outputs into the scheduler, yielding the Eventualities now to be scanned for. /// + /// `active_keys` is the list of active keys, potentially including a key for which we've already + /// called `retire_key` on. If so, its stage will be `Finishing` and no further operations will + /// be expected for it. Nonetheless, it may be present. 
+ /// /// The `Vec` used as the key in the returned HashMap should be the encoded key the /// Eventualities are for. fn update( &mut self, txn: &mut impl DbTxn, + active_keys: &[(KeyFor, LifetimeStage)], update: SchedulerUpdate, ) -> HashMap, Vec>>; @@ -224,6 +231,10 @@ pub trait Scheduler: 'static + Send { /// or Change), unless they descend from a transaction returned by this function which satisfies /// that requirement. /// + /// `active_keys` is the list of active keys, potentially including a key for which we've already + /// called `retire_key` on. If so, its stage will be `Finishing` and no further operations will + /// be expected for it. Nonetheless, it may be present. + /// /// The `Vec` used as the key in the returned HashMap should be the encoded key the /// Eventualities are for. /* @@ -249,6 +260,7 @@ pub trait Scheduler: 'static + Send { fn fulfill( &mut self, txn: &mut impl DbTxn, + active_keys: &[(KeyFor, LifetimeStage)], payments: Vec, ) -> HashMap, Vec>>; } diff --git a/processor/scanner/src/lifetime.rs b/processor/scanner/src/lifetime.rs index e15c0f55..bef6af8b 100644 --- a/processor/scanner/src/lifetime.rs +++ b/processor/scanner/src/lifetime.rs @@ -6,8 +6,8 @@ use crate::ScannerFeed; /// rotation process. Steps 7-8 regard a multisig which isn't retiring yet retired, and /// accordingly, no longer exists, so they are not modelled here (as this only models active /// multisigs. Inactive multisigs aren't represented in the first place). -#[derive(PartialEq)] -pub(crate) enum LifetimeStage { +#[derive(Clone, Copy, PartialEq)] +pub enum LifetimeStage { /// A new multisig, once active, shouldn't actually start receiving coins until several blocks /// later. If any UI is premature in sending to this multisig, we delay to report the outputs to /// prevent some DoS concerns. 
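To illustrate the `active_keys` parameter documented above, a minimal sketch of how a scheduler could choose its fulfillment and change keys. This is a sketch under assumptions, not the actual schedulers' policy: it assumes the slice is ordered oldest key first and uses `u8` as a stand-in for the actual key type.

  #[derive(Clone, Copy, PartialEq)]
  enum LifetimeStage {
    ActiveYetNotReporting,
    Active,
    UsingNewForChange,
    Forwarding,
    Finishing,
  }

  // Fulfill from the oldest key which isn't Finishing (a Finishing key may already have
  // had retire_key called on it, so we don't assume it's usable), and direct change to
  // the newest key once the oldest key's stage says the new key is usable.
  fn fulfillment_and_change(active_keys: &[(u8, LifetimeStage)]) -> (u8, u8) {
    let (fulfillment, _) = *active_keys
      .iter()
      .find(|(_, stage)| *stage != LifetimeStage::Finishing)
      .expect("no usable keys");
    let change = match active_keys[0].1 {
      LifetimeStage::ActiveYetNotReporting | LifetimeStage::Active => active_keys[0].0,
      LifetimeStage::UsingNewForChange | LifetimeStage::Forwarding | LifetimeStage::Finishing => {
        active_keys.last().unwrap().0
      }
    };
    (fulfillment, change)
  }
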
@@ -65,12 +65,20 @@ impl Lifetime { // The exclusive end block is the inclusive start block let block_at_which_reporting_starts = active_yet_not_reporting_end_block; if block_number < active_yet_not_reporting_end_block { - return Lifetime { stage: LifetimeStage::ActiveYetNotReporting, block_at_which_reporting_starts, block_at_which_forwarding_starts: None }; + return Lifetime { + stage: LifetimeStage::ActiveYetNotReporting, + block_at_which_reporting_starts, + block_at_which_forwarding_starts: None, + }; } let Some(next_keys_activation_block_number) = next_keys_activation_block_number else { // If there is no next multisig, this is the active multisig - return Lifetime { stage: LifetimeStage::Active, block_at_which_reporting_starts, block_at_which_forwarding_starts: None }; + return Lifetime { + stage: LifetimeStage::Active, + block_at_which_reporting_starts, + block_at_which_forwarding_starts: None, + }; }; assert!( @@ -88,12 +96,20 @@ impl Lifetime { // If the new multisig is still having its activation block finalized on-chain, this multisig // is still active (step 3) if block_number < new_active_yet_not_reporting_end_block { - return Lifetime { stage: LifetimeStage::Active, block_at_which_reporting_starts, block_at_which_forwarding_starts }; + return Lifetime { + stage: LifetimeStage::Active, + block_at_which_reporting_starts, + block_at_which_forwarding_starts, + }; } // Step 4 details a further CONFIRMATIONS if block_number < new_active_and_used_for_change_end_block { - return Lifetime { stage: LifetimeStage::UsingNewForChange, block_at_which_reporting_starts, block_at_which_forwarding_starts }; + return Lifetime { + stage: LifetimeStage::UsingNewForChange, + block_at_which_reporting_starts, + block_at_which_forwarding_starts, + }; } // Step 5 details a further 6 hours @@ -101,10 +117,18 @@ impl Lifetime { let new_active_and_forwarded_to_end_block = new_active_and_used_for_change_end_block + (6 * 6 * S::TEN_MINUTES); if block_number < new_active_and_forwarded_to_end_block { - return Lifetime { stage: LifetimeStage::Forwarding, block_at_which_reporting_starts, block_at_which_forwarding_starts }; + return Lifetime { + stage: LifetimeStage::Forwarding, + block_at_which_reporting_starts, + block_at_which_forwarding_starts, + }; } // Step 6 - Lifetime { stage: LifetimeStage::Finishing, block_at_which_reporting_starts, block_at_which_forwarding_starts } + Lifetime { + stage: LifetimeStage::Finishing, + block_at_which_reporting_starts, + block_at_which_forwarding_starts, + } } } From a8b9b7bad326dbdd868bef0f93855093a25e432e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 29 Aug 2024 21:35:22 -0400 Subject: [PATCH 049/368] Add sanity checks we haven't prior reported an InInstruction for/accumulated an output --- processor/scanner/src/eventuality/db.rs | 19 +++++- processor/scanner/src/eventuality/mod.rs | 20 +++++- processor/scanner/src/lib.rs | 79 ------------------------ processor/scanner/src/scan/db.rs | 20 +++++- processor/scanner/src/scan/mod.rs | 30 ++++++++- 5 files changed, 80 insertions(+), 88 deletions(-) diff --git a/processor/scanner/src/eventuality/db.rs b/processor/scanner/src/eventuality/db.rs index 2bd02025..3e5088d1 100644 --- a/processor/scanner/src/eventuality/db.rs +++ b/processor/scanner/src/eventuality/db.rs @@ -3,9 +3,9 @@ use core::marker::PhantomData; use scale::Encode; use serai_db::{Get, DbTxn, create_db}; -use primitives::{EncodableG, Eventuality, EventualityTracker}; +use primitives::{EncodableG, ReceivedOutput, Eventuality, EventualityTracker}; -use 
crate::{ScannerFeed, KeyFor, EventualityFor}; +use crate::{ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor}; create_db!( ScannerEventuality { @@ -15,6 +15,8 @@ create_db!( LatestHandledNotableBlock: () -> u64, SerializedEventualities: (key: K) -> Vec, + + AccumulatedOutput: (id: &[u8]) -> (), } ); @@ -65,4 +67,17 @@ impl EventualityDb { } res } + + pub(crate) fn prior_accumulated_output( + getter: &impl Get, + id: & as ReceivedOutput, AddressFor>>::Id, + ) -> bool { + AccumulatedOutput::get(getter, id.as_ref()).is_some() + } + pub(crate) fn accumulated_output( + txn: &mut impl DbTxn, + id: & as ReceivedOutput, AddressFor>>::Id, + ) { + AccumulatedOutput::set(txn, id.as_ref(), &()); + } } diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 400c5690..43f6b784 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -12,7 +12,8 @@ use crate::{ SeraiKey, OutputWithInInstruction, ReceiverScanData, ScannerGlobalDb, SubstrateToEventualityDb, ScanToEventualityDb, }, - BlockExt, ScannerFeed, KeyFor, EventualityFor, SchedulerUpdate, Scheduler, sort_outputs, + BlockExt, ScannerFeed, KeyFor, OutputFor, EventualityFor, SchedulerUpdate, Scheduler, + sort_outputs, scan::{next_to_scan_for_outputs_block, queue_output_until_block}, }; @@ -349,6 +350,22 @@ impl> ContinuallyRan for EventualityTas scheduler_update.outputs.sort_by(sort_outputs); scheduler_update.forwards.sort_by(sort_outputs); scheduler_update.returns.sort_by(|a, b| sort_outputs(&a.output, &b.output)); + + // Sanity check we've never accumulated these outputs before + { + let a: core::slice::Iter<'_, OutputFor> = scheduler_update.outputs.iter(); + let b: core::slice::Iter<'_, OutputFor> = scheduler_update.forwards.iter(); + let c = scheduler_update.returns.iter().map(|output_to_return| &output_to_return.output); + + for output in a.chain(b).chain(c) { + assert!( + !EventualityDb::::prior_accumulated_output(&txn, &output.id()), + "prior accumulated an output with this ID" + ); + EventualityDb::::accumulated_output(&mut txn, &output.id()); + } + } + // Intake the new Eventualities let new_eventualities = self.scheduler.update(&mut txn, &keys_with_stages, scheduler_update); @@ -375,7 +392,6 @@ impl> ContinuallyRan for EventualityTas // Now that we've intaked any Eventualities caused, check if we're retiring any keys if key.stage == LifetimeStage::Finishing { let eventualities = EventualityDb::::eventualities(&txn, key.key); - // TODO: This assumes the Scheduler is empty if eventualities.active_eventualities.is_empty() { log::info!( "key {} has finished and is being retired", diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 2cbae096..7c6466ff 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -200,7 +200,6 @@ pub trait Scheduler: 'static + Send { /// certain time period. With `flush_key`, all outputs should be directed towards fulfilling some /// obligation or the `new_key`. Every output MUST be connected to an Eventuality. If a key no /// longer has active Eventualities, it MUST be able to be retired. - // TODO: Call this fn flush_key(&mut self, txn: &mut impl DbTxn, retiring_key: KeyFor, new_key: KeyFor); /// Retire a key as it'll no longer be used. 
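The `AccumulatedOutput` table added above is a set-once flag whose only purpose is to make double accumulation loud. A self-contained sketch of that pattern, with an in-memory set standing in for the `create_db!`-generated table:

  use std::collections::HashSet;

  // Records every output ID as it's accumulated. Accumulating the same ID twice means
  // either a local logic error or a maliciously constructed chain, so we halt loudly
  // instead of silently double-counting.
  struct AccumulatedOutputs {
    seen: HashSet<Vec<u8>>,
  }

  impl AccumulatedOutputs {
    fn new() -> Self {
      Self { seen: HashSet::new() }
    }
    fn accumulate(&mut self, id: &[u8]) {
      // `insert` returns false if the value was already present
      assert!(self.seen.insert(id.to_vec()), "prior accumulated an output with this ID");
    }
  }
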
@@ -384,81 +383,3 @@ impl Scanner { SubstrateToEventualityDb::send_burns(txn, queue_as_of, burns) } } - -/* -#[derive(Clone, Debug)] -struct ScannerGlobalDb(PhantomData, PhantomData); -impl ScannerGlobalDb { - fn seen_key(id: &>::Id) -> Vec { - Self::scanner_key(b"seen", id) - } - fn seen(getter: &G, id: &>::Id) -> bool { - getter.get(Self::seen_key(id)).is_some() - } - - fn save_scanned_block(txn: &mut D::Transaction<'_>, block: usize) -> Vec { - let id = Self::block(txn, block); // It may be None for the first key rotated to - let outputs = - if let Some(id) = id.as_ref() { Self::outputs(txn, id).unwrap_or(vec![]) } else { vec![] }; - - // Mark all the outputs from this block as seen - for output in &outputs { - txn.put(Self::seen_key(&output.id()), b""); - } - - txn.put(Self::scanned_block_key(), u64::try_from(block).unwrap().to_le_bytes()); - - // Return this block's outputs so they can be pruned from the RAM cache - outputs - } -} - - // Panic if we've already seen these outputs - for output in &outputs { - let id = output.id(); - info!( - "block {} had output {} worth {:?}", - hex::encode(&block_id), - hex::encode(&id), - output.balance(), - ); - - // On Bitcoin, the output ID should be unique for a given chain - // On Monero, it's trivial to make an output sharing an ID with another - // We should only scan outputs with valid IDs however, which will be unique - - /* - The safety of this code must satisfy the following conditions: - 1) seen is not set for the first occurrence - 2) seen is set for any future occurrence - - seen is only written to after this code completes. Accordingly, it cannot be set - before the first occurrence UNLESSS it's set, yet the last scanned block isn't. - They are both written in the same database transaction, preventing this. - - As for future occurrences, the RAM entry ensures they're handled properly even if - the database has yet to be set. - - On reboot, which will clear the RAM, if seen wasn't set, neither was latest scanned - block. Accordingly, this will scan from some prior block, re-populating the RAM. - - If seen was set, then this will be successfully read. - - There's also no concern ram_outputs was pruned, yet seen wasn't set, as pruning - from ram_outputs will acquire a write lock (preventing this code from acquiring - its own write lock and running), and during its holding of the write lock, it - commits the transaction setting seen and the latest scanned block. - - This last case isn't true. Committing seen/latest_scanned_block happens after - relinquishing the write lock. - - TODO2: Only update ram_outputs after committing the TXN in question. 
- */ - let seen = ScannerGlobalDb::::seen(&db, &id); - let id = id.as_ref().to_vec(); - if seen || scanner.ram_outputs.contains(&id) { - panic!("scanned an output multiple times"); - } - scanner.ram_outputs.insert(id); - } -*/ diff --git a/processor/scanner/src/scan/db.rs b/processor/scanner/src/scan/db.rs index 6df84df1..44023bc8 100644 --- a/processor/scanner/src/scan/db.rs +++ b/processor/scanner/src/scan/db.rs @@ -2,7 +2,9 @@ use core::marker::PhantomData; use serai_db::{Get, DbTxn, create_db}; -use crate::{db::OutputWithInInstruction, ScannerFeed}; +use primitives::ReceivedOutput; + +use crate::{db::OutputWithInInstruction, ScannerFeed, KeyFor, AddressFor, OutputFor}; create_db!( ScannerScan { @@ -10,6 +12,8 @@ create_db!( NextToScanForOutputsBlock: () -> u64, SerializedQueuedOutputs: (block_number: u64) -> Vec, + + ReportedInInstructionForOutput: (id: &[u8]) -> (), } ); @@ -38,7 +42,6 @@ impl ScanDb { } res } - pub(crate) fn queue_output_until_block( txn: &mut impl DbTxn, queue_for_block: u64, @@ -49,4 +52,17 @@ impl ScanDb { output.write(&mut outputs).unwrap(); SerializedQueuedOutputs::set(txn, queue_for_block, &outputs); } + + pub(crate) fn prior_reported_in_instruction_for_output( + getter: &impl Get, + id: & as ReceivedOutput, AddressFor>>::Id, + ) -> bool { + ReportedInInstructionForOutput::get(getter, id.as_ref()).is_some() + } + pub(crate) fn reported_in_instruction_for_output( + txn: &mut impl DbTxn, + id: & as ReceivedOutput, AddressFor>>::Id, + ) { + ReportedInInstructionForOutput::set(txn, id.as_ref(), &()); + } } diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 59d0f197..f76adb00 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -149,8 +149,8 @@ impl ContinuallyRan for ScanTask { queued_outputs }; for queued_output in queued_outputs { + in_instructions.push((queued_output.output.id(), queued_output.in_instruction)); scan_data.received_external_outputs.push(queued_output.output); - in_instructions.push(queued_output.in_instruction); } // We subtract the cost to aggregate from some outputs we scan @@ -297,13 +297,37 @@ impl ContinuallyRan for ScanTask { // Ensures we didn't miss a `continue` above assert!(matches!(key.stage, LifetimeStage::Active | LifetimeStage::UsingNewForChange)); - scan_data.received_external_outputs.push(output_with_in_instruction.output.clone()); - in_instructions.push(output_with_in_instruction.in_instruction); + in_instructions.push(( + output_with_in_instruction.output.id(), + output_with_in_instruction.in_instruction, + )); + scan_data.received_external_outputs.push(output_with_in_instruction.output); } } + // Sort the InInstructions by the output ID + in_instructions.sort_by(|(output_id_a, _), (output_id_b, _)| { + use core::cmp::{Ordering, Ord}; + let res = output_id_a.as_ref().cmp(output_id_b.as_ref()); + assert!(res != Ordering::Equal, "two outputs within a collection had the same ID"); + res + }); + // Check we haven't prior reported an InInstruction for this output + // This is a sanity check which is intended to prevent multiple instances of sriXYZ on-chain + // due to a single output + for (id, _) in &in_instructions { + assert!( + !ScanDb::::prior_reported_in_instruction_for_output(&txn, id), + "prior reported an InInstruction for an output with this ID" + ); + ScanDb::::reported_in_instruction_for_output(&mut txn, id); + } + // Reformat the InInstructions to just the InInstructions + let in_instructions = + in_instructions.into_iter().map(|(_id, 
in_instruction)| in_instruction).collect::>(); // Send the InInstructions to the report task ScanToReportDb::::send_in_instructions(&mut txn, b, in_instructions); + // Send the scan data to the eventuality task ScanToEventualityDb::::send_scan_data(&mut txn, b, &scan_data); // Update the next to scan block From 7266e7f7eaf66cd3c247e6b08bf02bfdca52fde2 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 29 Aug 2024 21:47:25 -0400 Subject: [PATCH 050/368] Add note on why LifetimeStage is monotonic --- processor/scanner/src/eventuality/mod.rs | 46 +++++++++++++++++------- 1 file changed, 33 insertions(+), 13 deletions(-) diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 43f6b784..7db188de 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -351,31 +351,51 @@ impl> ContinuallyRan for EventualityTas scheduler_update.forwards.sort_by(sort_outputs); scheduler_update.returns.sort_by(|a, b| sort_outputs(&a.output, &b.output)); - // Sanity check we've never accumulated these outputs before - { + let empty = { let a: core::slice::Iter<'_, OutputFor> = scheduler_update.outputs.iter(); let b: core::slice::Iter<'_, OutputFor> = scheduler_update.forwards.iter(); let c = scheduler_update.returns.iter().map(|output_to_return| &output_to_return.output); + let mut all_outputs = a.chain(b).chain(c).peekable(); - for output in a.chain(b).chain(c) { + // If we received any output, sanity check this block is notable + let empty = all_outputs.peek().is_none(); + if !empty { + assert!(is_block_notable, "accumulating output(s) in non-notable block"); + } + + // Sanity check we've never accumulated these outputs before + for output in all_outputs { assert!( !EventualityDb::::prior_accumulated_output(&txn, &output.id()), "prior accumulated an output with this ID" ); EventualityDb::::accumulated_output(&mut txn, &output.id()); } - } - // Intake the new Eventualities - let new_eventualities = - self.scheduler.update(&mut txn, &keys_with_stages, scheduler_update); - for key in new_eventualities.keys() { - keys - .iter() - .find(|serai_key| serai_key.key.to_bytes().as_ref() == key.as_slice()) - .expect("intaking Eventuality for key which isn't active"); + empty + }; + + if !empty { + // Accumulate the outputs + /* + This uses the `keys_with_stages` for the current block, yet this block is notable. + Accordingly, all future intaked Burns will use at least this block when determining + what LifetimeStage a key is. That makes the LifetimeStage monotonically incremented. If + this block wasn't notable, we'd potentially intake Burns with the LifetimeStage + determined off an earlier block than this (enabling an earlier LifetimeStage to be used + after a later one was already used). 
+          */
+          let new_eventualities =
+            self.scheduler.update(&mut txn, &keys_with_stages, scheduler_update);
+          // Intake the new Eventualities
+          for key in new_eventualities.keys() {
+            keys
+              .iter()
+              .find(|serai_key| serai_key.key.to_bytes().as_ref() == key.as_slice())
+              .expect("intaking Eventuality for key which isn't active");
+          }
+          intake_eventualities::<S>(&mut txn, new_eventualities);
        }
-        intake_eventualities::<S>(&mut txn, new_eventualities);
      }

      for key in &keys {

From e26da1ec347ef93895523c73082db552c1a256f3 Mon Sep 17 00:00:00 2001
From: Luke Parker 
Date: Thu, 29 Aug 2024 21:58:56 -0400
Subject: [PATCH 051/368] Have the Eventuality task drop outputs which aren't ours and aren't worth it to aggregate

We could drop these entirely, yet there's some degree of utility to be able to add coins to Serai in this manner.

---
 processor/scanner/src/eventuality/mod.rs | 28 ++++++++++++++++++++++--
 processor/scanner/src/scan/mod.rs        |  5 ++++-
 2 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs
index 7db188de..9068769b 100644
--- a/processor/scanner/src/eventuality/mod.rs
+++ b/processor/scanner/src/eventuality/mod.rs
@@ -1,4 +1,4 @@
-use std::collections::HashMap;
+use std::collections::{HashSet, HashMap};

 use group::GroupEncoding;

@@ -288,7 +288,6 @@ impl<D: Db, S: ScannerFeed, Sch: Scheduler<S>> ContinuallyRan for EventualityTas
        let mut non_external_outputs = block.scan_for_outputs(key.key);
        non_external_outputs.retain(|output| output.kind() != OutputType::External);
        // Drop any outputs less than the dust limit
-       // TODO: Either further filter to outputs we made or also check cost_to_aggregate
        non_external_outputs.retain(|output| {
          let balance = output.balance();
          balance.amount.0 >= self.feed.dust(balance.coin).0
@@ -315,6 +314,31 @@ impl<D: Db, S: ScannerFeed, Sch: Scheduler<S>> ContinuallyRan for EventualityTas
            .retain(|output| completed_eventualities.contains_key(&output.transaction_id()));
        }

+       // Finally, for non-External outputs we didn't make, we check they're worth more than the
+       // cost to aggregate them to avoid some profitable spam attacks by malicious miners
+       {
+         // Fetch and cache the costs to aggregate as this call may be expensive
+         let coins =
+           non_external_outputs.iter().map(|output| output.balance().coin).collect::<HashSet<_>>();
+         let mut costs_to_aggregate = HashMap::new();
+         for coin in coins {
+           costs_to_aggregate.insert(
+             coin,
+             self.feed.cost_to_aggregate(coin, &block).await.map_err(|e| {
+               format!("EventualityTask couldn't fetch cost to aggregate {coin:?} at {b}: {e:?}")
+             })?,
+           );
+         }
+
+         // Only retain our outputs/outputs sufficiently worthwhile
+         non_external_outputs.retain(|output| {
+           completed_eventualities.contains_key(&output.transaction_id()) || {
+             let balance = output.balance();
+             balance.amount.0 >= (2 * costs_to_aggregate[&balance.coin].0)
+           }
+         });
+       }
+
        // Now, we iterate over all Forwarded outputs and queue their InInstructions
        for output in
          non_external_outputs.iter().filter(|output| output.kind() == OutputType::Forwarded)
diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs
index f76adb00..405861ba 100644
--- a/processor/scanner/src/scan/mod.rs
+++ b/processor/scanner/src/scan/mod.rs
@@ -226,7 +226,10 @@ impl<D: Db, S: ScannerFeed> ContinuallyRan for ScanTask<D, S> {
            costs_to_aggregate.entry(balance.coin)
          {
            e.insert(self.feed.cost_to_aggregate(balance.coin, &block).await.map_err(|e| {
-             format!("couldn't fetch cost to aggregate {:?} at {b}: {e:?}", balance.coin)
+             format!(
+               "ScanTask couldn't fetch cost to aggregate {:?} at {b}: {e:?}",
+               balance.coin
+ ) })?); } let cost_to_aggregate = costs_to_aggregate[&balance.coin]; From 41a74cb513774a7009d9ace52cae828e12b62f90 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 29 Aug 2024 23:47:43 -0400 Subject: [PATCH 052/368] Check a queued key has never been queued before Re-queueing should only happen with a malicious supermajority and breaks indexing by the key. --- processor/scanner/src/db.rs | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 20aa2999..6630c0a3 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -72,6 +72,8 @@ impl OutputWithInInstruction { create_db!( ScannerGlobal { + QueuedKey: (key: K) -> (), + ActiveKeys: () -> Vec>, RetireAt: (key: K) -> u64, @@ -120,7 +122,10 @@ impl ScannerGlobalDb { // Set the block which has a key activate as notable NotableBlock::set(txn, activation_block_number, &()); - // TODO: Panic if we've ever seen this key before + // Check this key has never been queued before + // This should only happen if a malicious supermajority collude, and breaks indexing by the key + assert!(QueuedKey::get(txn, EncodableG(key)).is_none(), "key being queued was prior queued"); + QueuedKey::set(txn, EncodableG(key), &()); // Fetch the existing keys let mut keys: Vec>>> = From 775824f3739915fc0271543b93efd26d91510012 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 30 Aug 2024 00:11:00 -0400 Subject: [PATCH 053/368] Impl ScanData serialization in the DB --- processor/scanner/src/db.rs | 84 ++++++++++++++++++++++++++++++------ processor/scanner/src/lib.rs | 15 ++++++- 2 files changed, 85 insertions(+), 14 deletions(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 6630c0a3..698bf546 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -1,5 +1,5 @@ use core::marker::PhantomData; -use std::io; +use std::io::{self, Read, Write}; use scale::{Encode, Decode, IoReader}; use borsh::{BorshSerialize, BorshDeserialize}; @@ -301,15 +301,9 @@ pub(crate) struct ReceiverScanData { pub(crate) returns: Vec>, } -#[derive(BorshSerialize, BorshDeserialize)] -pub(crate) struct SerializedScanData { - pub(crate) block_number: u64, - pub(crate) data: Vec, -} - db_channel! 
{ ScannerScanEventuality { - ScannedBlock: (empty_key: ()) -> SerializedScanData, + ScannedBlock: (empty_key: ()) -> Vec, } } @@ -328,6 +322,8 @@ impl ScanToEventualityDb { } /* + TODO + SerializedForwardedOutputsIndex: (block_number: u64) -> Vec, SerializedForwardedOutput: (output_id: &[u8]) -> Vec, @@ -352,18 +348,80 @@ impl ScanToEventualityDb { } */ - ScannedBlock::send(txn, (), todo!("TODO")); + let mut buf = vec![]; + buf.write_all(&data.block_number.to_le_bytes()).unwrap(); + buf + .write_all(&u32::try_from(data.received_external_outputs.len()).unwrap().to_le_bytes()) + .unwrap(); + for output in &data.received_external_outputs { + output.write(&mut buf).unwrap(); + } + buf.write_all(&u32::try_from(data.forwards.len()).unwrap().to_le_bytes()).unwrap(); + for output_with_in_instruction in &data.forwards { + // Only write the output, as we saved the InInstruction above as needed + output_with_in_instruction.output.write(&mut buf).unwrap(); + } + buf.write_all(&u32::try_from(data.returns.len()).unwrap().to_le_bytes()).unwrap(); + for output in &data.returns { + output.write(&mut buf).unwrap(); + } + ScannedBlock::send(txn, (), &buf); } - pub(crate) fn recv_scan_data(txn: &mut impl DbTxn, block_number: u64) -> ReceiverScanData { + pub(crate) fn recv_scan_data( + txn: &mut impl DbTxn, + expected_block_number: u64, + ) -> ReceiverScanData { let data = ScannedBlock::try_recv(txn, ()).expect("receiving data for a scanned block not yet sent"); + let mut data = data.as_slice(); + + let block_number = { + let mut block_number = [0; 8]; + data.read_exact(&mut block_number).unwrap(); + u64::from_le_bytes(block_number) + }; assert_eq!( - block_number, data.block_number, + block_number, expected_block_number, "received data for a scanned block distinct than expected" ); - let data = &data.data; - todo!("TODO") + let received_external_outputs = { + let mut len = [0; 4]; + data.read_exact(&mut len).unwrap(); + let len = usize::try_from(u32::from_le_bytes(len)).unwrap(); + + let mut received_external_outputs = Vec::with_capacity(len); + for _ in 0 .. len { + received_external_outputs.push(OutputFor::::read(&mut data).unwrap()); + } + received_external_outputs + }; + + let forwards = { + let mut len = [0; 4]; + data.read_exact(&mut len).unwrap(); + let len = usize::try_from(u32::from_le_bytes(len)).unwrap(); + + let mut forwards = Vec::with_capacity(len); + for _ in 0 .. len { + forwards.push(OutputFor::::read(&mut data).unwrap()); + } + forwards + }; + + let returns = { + let mut len = [0; 4]; + data.read_exact(&mut len).unwrap(); + let len = usize::try_from(u32::from_le_bytes(len)).unwrap(); + + let mut returns = Vec::with_capacity(len); + for _ in 0 .. 
len { + returns.push(Return::::read(&mut data).unwrap()); + } + returns + }; + + ReceiverScanData { block_number, received_external_outputs, forwards, returns } } } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 7c6466ff..927fc145 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -1,5 +1,5 @@ use core::{marker::PhantomData, fmt::Debug}; -use std::collections::HashMap; +use std::{io, collections::HashMap}; use group::GroupEncoding; @@ -179,6 +179,19 @@ pub struct Return { output: OutputFor, } +impl Return { + pub(crate) fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + self.address.write(writer)?; + self.output.write(writer) + } + + pub(crate) fn read(reader: &mut impl io::Read) -> io::Result { + let address = AddressFor::::read(reader)?; + let output = OutputFor::::read(reader)?; + Ok(Return { address, output }) + } +} + /// An update for the scheduler. pub struct SchedulerUpdate { outputs: Vec>, From d429a0bae6178bb4d408f452ecf83f08fbff7490 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 30 Aug 2024 00:11:31 -0400 Subject: [PATCH 054/368] Remove unused ID -> number lookup --- processor/scanner/src/index/db.rs | 6 ------ 1 file changed, 6 deletions(-) diff --git a/processor/scanner/src/index/db.rs b/processor/scanner/src/index/db.rs index a46d6fa6..9254f9bc 100644 --- a/processor/scanner/src/index/db.rs +++ b/processor/scanner/src/index/db.rs @@ -4,8 +4,6 @@ create_db!( ScannerIndex { // A lookup of a block's number to its ID BlockId: (number: u64) -> [u8; 32], - // A lookup of a block's ID to its number - BlockNumber: (id: [u8; 32]) -> u64, // The latest finalized block to appear on the blockchain LatestFinalizedBlock: () -> u64, @@ -16,14 +14,10 @@ pub(crate) struct IndexDb; impl IndexDb { pub(crate) fn set_block(txn: &mut impl DbTxn, number: u64, id: [u8; 32]) { BlockId::set(txn, number, &id); - BlockNumber::set(txn, id, &number); } pub(crate) fn block_id(getter: &impl Get, number: u64) -> Option<[u8; 32]> { BlockId::get(getter, number) } - pub(crate) fn block_number(getter: &impl Get, id: [u8; 32]) -> Option { - BlockNumber::get(getter, id) - } pub(crate) fn set_latest_finalized_block(txn: &mut impl DbTxn, latest_finalized_block: u64) { LatestFinalizedBlock::set(txn, &latest_finalized_block); From 5999f5d65a003afca675166fe12605f7f19ad68b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 30 Aug 2024 00:20:34 -0400 Subject: [PATCH 055/368] Route the DB w.r.t. 
forwarded outputs' information --- processor/scanner/src/db.rs | 47 +++++++++++------------- processor/scanner/src/eventuality/mod.rs | 1 + 2 files changed, 23 insertions(+), 25 deletions(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 698bf546..cc86afeb 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -101,6 +101,8 @@ create_db!( */ // This collapses from `bool` to `()`, using if the value was set for true and false otherwise NotableBlock: (number: u64) -> (), + + SerializedForwardedOutput: (id: &[u8]) -> Vec, } ); @@ -267,7 +269,15 @@ impl ScannerGlobalDb { getter: &impl Get, output: & as ReceivedOutput, AddressFor>>::Id, ) -> Option<(Option>, InInstructionWithBalance)> { - todo!("TODO") + let buf = SerializedForwardedOutput::get(getter, output.as_ref())?; + let mut buf = buf.as_slice(); + + let mut opt = [0xff]; + buf.read_exact(&mut opt).unwrap(); + assert!((opt[0] == 0) || (opt[0] == 1)); + + let address = (opt[0] == 1).then(|| AddressFor::::read(&mut buf).unwrap()); + Some((address, InInstructionWithBalance::decode(&mut IoReader(buf)).unwrap())) } } @@ -321,32 +331,19 @@ impl ScanToEventualityDb { NotableBlock::set(txn, block_number, &()); } - /* - TODO + // Save all the forwarded outputs' data + for forward in &data.forwards { + let mut buf = vec![]; + if let Some(address) = &forward.return_address { + buf.write_all(&[1]).unwrap(); + address.write(&mut buf).unwrap(); + } else { + buf.write_all(&[0]).unwrap(); + } + forward.in_instruction.encode_to(&mut buf); - SerializedForwardedOutputsIndex: (block_number: u64) -> Vec, - SerializedForwardedOutput: (output_id: &[u8]) -> Vec, - - pub(crate) fn save_output_being_forwarded( - txn: &mut impl DbTxn, - block_forwarded_from: u64, - output: &OutputWithInInstruction, - ) { - let mut buf = Vec::with_capacity(128); - output.write(&mut buf).unwrap(); - - let id = output.output.id(); - - // Save this to an index so we can later fetch all outputs to forward - let mut forwarded_outputs = SerializedForwardedOutputsIndex::get(txn, block_forwarded_from) - .unwrap_or(Vec::with_capacity(32)); - forwarded_outputs.extend(id.as_ref()); - SerializedForwardedOutputsIndex::set(txn, block_forwarded_from, &forwarded_outputs); - - // Save the output itself - SerializedForwardedOutput::set(txn, id.as_ref(), &buf); + SerializedForwardedOutput::set(txn, forward.output.id().as_ref(), &buf); } - */ let mut buf = vec![]; buf.write_all(&data.block_number.to_le_bytes()).unwrap(); diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 9068769b..3be7f3ce 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -116,6 +116,7 @@ impl> EventualityTask { Self { db, feed, scheduler } } + #[allow(clippy::type_complexity)] fn keys_and_keys_with_stages( &self, block_number: u64, From 76cbe6cf1edcba3967c65d31769abcc04ed54dfe Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 30 Aug 2024 01:19:29 -0400 Subject: [PATCH 056/368] Have acknowledge_block take in the results of the InInstructions executed If any failed, the scanner now creates a Burn for the return. 
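To make the failure-to-Burn mapping concrete, a minimal sketch of what this commit describes. `Balance`, `ReturnInformation`, and `OutInstructionWithBalance` here are simplified stand-ins for the actual primitives (with `String` for the address type), not the real definitions:

  struct Balance {
    coin: u8,
    amount: u64,
  }
  struct ReturnInformation {
    address: String,
    balance: Balance,
  }
  struct OutInstructionWithBalance {
    address: String,
    balance: Balance,
  }

  // For each failed InInstruction with saved return information, create a Burn returning
  // the balance. Successes, and failures without return information, yield nothing.
  fn burns_for_failures(
    in_instruction_succeededs: Vec<bool>,
    return_information: Vec<Option<ReturnInformation>>,
  ) -> Vec<OutInstructionWithBalance> {
    assert_eq!(in_instruction_succeededs.len(), return_information.len());
    let mut burns = vec![];
    for (succeeded, return_information) in
      in_instruction_succeededs.into_iter().zip(return_information)
    {
      if succeeded {
        continue;
      }
      if let Some(ReturnInformation { address, balance }) = return_information {
        burns.push(OutInstructionWithBalance { address, balance });
      }
    }
    burns
  }
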
--- processor/primitives/src/output.rs | 2 +- processor/scanner/src/db.rs | 59 +++++++++++++++++++++++----- processor/scanner/src/lib.rs | 43 ++++++++++++++++++-- processor/scanner/src/report/db.rs | 61 ++++++++++++++++++++++++++++- processor/scanner/src/report/mod.rs | 51 ++++++++++++++++++------ processor/scanner/src/scan/mod.rs | 18 +++++++-- 6 files changed, 203 insertions(+), 31 deletions(-) diff --git a/processor/primitives/src/output.rs b/processor/primitives/src/output.rs index 777b2c52..9a300940 100644 --- a/processor/primitives/src/output.rs +++ b/processor/primitives/src/output.rs @@ -8,7 +8,7 @@ use serai_primitives::{ExternalAddress, Balance}; use crate::Id; /// An address on the external network. -pub trait Address: Send + Sync + TryFrom { +pub trait Address: Send + Sync + Into + TryFrom { /// Write this address. fn write(&self, writer: &mut impl io::Write) -> io::Result<()>; /// Read an address. diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index cc86afeb..f45d2966 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -47,11 +47,7 @@ impl OutputWithInInstruction { let mut opt = [0xff]; reader.read_exact(&mut opt)?; assert!((opt[0] == 0) || (opt[0] == 1)); - if opt[0] == 0 { - None - } else { - Some(AddressFor::::read(reader)?) - } + (opt[0] == 1).then(|| AddressFor::::read(reader)).transpose()? }; let in_instruction = InInstructionWithBalance::decode(&mut IoReader(reader)).map_err(io::Error::other)?; @@ -422,10 +418,39 @@ impl ScanToEventualityDb { } } +pub(crate) struct Returnable { + pub(crate) return_address: Option>, + pub(crate) in_instruction: InInstructionWithBalance, +} + +impl Returnable { + fn read(reader: &mut impl io::Read) -> io::Result { + let mut opt = [0xff]; + reader.read_exact(&mut opt).unwrap(); + assert!((opt[0] == 0) || (opt[0] == 1)); + + let return_address = (opt[0] == 1).then(|| AddressFor::::read(reader)).transpose()?; + + let in_instruction = + InInstructionWithBalance::decode(&mut IoReader(reader)).map_err(io::Error::other)?; + Ok(Returnable { return_address, in_instruction }) + } + fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + if let Some(return_address) = &self.return_address { + writer.write_all(&[1])?; + return_address.write(writer)?; + } else { + writer.write_all(&[0])?; + } + self.in_instruction.encode_to(writer); + Ok(()) + } +} + #[derive(BorshSerialize, BorshDeserialize)] struct BlockBoundInInstructions { block_number: u64, - in_instructions: Vec, + returnable_in_instructions: Vec, } db_channel! 
{ @@ -439,22 +464,36 @@ impl ScanToReportDb { pub(crate) fn send_in_instructions( txn: &mut impl DbTxn, block_number: u64, - in_instructions: Vec, + returnable_in_instructions: &[Returnable], ) { - InInstructions::send(txn, (), &BlockBoundInInstructions { block_number, in_instructions }); + let mut buf = vec![]; + for returnable_in_instruction in returnable_in_instructions { + returnable_in_instruction.write(&mut buf).unwrap(); + } + InInstructions::send( + txn, + (), + &BlockBoundInInstructions { block_number, returnable_in_instructions: buf }, + ); } pub(crate) fn recv_in_instructions( txn: &mut impl DbTxn, block_number: u64, - ) -> Vec { + ) -> Vec> { let data = InInstructions::try_recv(txn, ()) .expect("receiving InInstructions for a scanned block not yet sent"); assert_eq!( block_number, data.block_number, "received InInstructions for a scanned block distinct than expected" ); - data.in_instructions + let mut buf = data.returnable_in_instructions.as_slice(); + + let mut returnable_in_instructions = vec![]; + while !buf.is_empty() { + returnable_in_instructions.push(Returnable::read(&mut buf).unwrap()); + } + returnable_in_instructions } } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 927fc145..93ed961d 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -7,7 +7,7 @@ use serai_db::{Get, DbTxn, Db}; use serai_primitives::{NetworkId, Coin, Amount}; use serai_in_instructions_primitives::Batch; -use serai_coins_primitives::OutInstructionWithBalance; +use serai_coins_primitives::{OutInstruction, OutInstructionWithBalance}; use primitives::{task::*, Address, ReceivedOutput, Block}; @@ -327,6 +327,8 @@ impl Scanner { &mut self, mut txn: impl DbTxn, block_number: u64, + batch_id: u32, + in_instruction_succeededs: Vec, key_to_activate: Option>, ) { log::info!("acknowledging block {block_number}"); @@ -338,8 +340,12 @@ impl Scanner { if let Some(prior_highest_acknowledged_block) = ScannerGlobalDb::::highest_acknowledged_block(&txn) { - assert!(block_number > prior_highest_acknowledged_block, "acknowledging blocks out-of-order"); - for b in (prior_highest_acknowledged_block + 1) .. (block_number - 1) { + // If a single block produced multiple Batches, the block number won't increment + assert!( + block_number >= prior_highest_acknowledged_block, + "acknowledging blocks out-of-order" + ); + for b in (prior_highest_acknowledged_block + 1) .. 
block_number { assert!( !ScannerGlobalDb::::is_block_notable(&txn, b), "skipped acknowledging a block which was notable" @@ -352,6 +358,37 @@ impl Scanner { ScannerGlobalDb::::queue_key(&mut txn, block_number + S::WINDOW_LENGTH, key_to_activate); } + // Return the balances for any InInstructions which failed to execute + { + let return_information = report::take_return_information::(&mut txn, batch_id) + .expect("didn't save the return information for Batch we published"); + assert_eq!( + in_instruction_succeededs.len(), + return_information.len(), + "amount of InInstruction succeededs differed from amount of return information saved" + ); + + // We map these into standard Burns + let mut returns = vec![]; + for (succeeded, return_information) in + in_instruction_succeededs.into_iter().zip(return_information) + { + if succeeded { + continue; + } + + if let Some(report::ReturnInformation { address, balance }) = return_information { + returns.push(OutInstructionWithBalance { + instruction: OutInstruction { address: address.into(), data: None }, + balance, + }); + } + } + // We send them as stemming from this block + // TODO: These should be handled with any Burns from this block + SubstrateToEventualityDb::send_burns(&mut txn, block_number, &returns); + } + // Commit the txn txn.commit(); // Run the Eventuality task since we've advanced it diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs index 2fd98d4b..4c96a360 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/report/db.rs @@ -1,16 +1,34 @@ +use core::marker::PhantomData; +use std::io::{Read, Write}; + +use scale::{Encode, Decode, IoReader}; use serai_db::{Get, DbTxn, create_db}; +use serai_primitives::Balance; + +use primitives::Address; + +use crate::{ScannerFeed, AddressFor}; + create_db!( ScannerReport { // The next block to potentially report NextToPotentiallyReportBlock: () -> u64, // The next Batch ID to use NextBatchId: () -> u32, + + // The return addresses for the InInstructions within a Batch + SerializedReturnAddresses: (batch: u32) -> Vec, } ); -pub(crate) struct ReportDb; -impl ReportDb { +pub(crate) struct ReturnInformation { + pub(crate) address: AddressFor, + pub(crate) balance: Balance, +} + +pub(crate) struct ReportDb(PhantomData); +impl ReportDb { pub(crate) fn set_next_to_potentially_report_block( txn: &mut impl DbTxn, next_to_potentially_report_block: u64, @@ -26,4 +44,43 @@ impl ReportDb { NextBatchId::set(txn, &(id + 1)); id } + + pub(crate) fn save_return_information( + txn: &mut impl DbTxn, + id: u32, + return_information: &Vec>>, + ) { + let mut buf = Vec::with_capacity(return_information.len() * (32 + 1 + 8)); + for return_information in return_information { + if let Some(ReturnInformation { address, balance }) = return_information { + buf.write_all(&[1]).unwrap(); + address.write(&mut buf).unwrap(); + balance.encode_to(&mut buf); + } else { + buf.write_all(&[0]).unwrap(); + } + } + SerializedReturnAddresses::set(txn, id, &buf); + } + pub(crate) fn take_return_information( + txn: &mut impl DbTxn, + id: u32, + ) -> Option>>> { + let buf = SerializedReturnAddresses::get(txn, id)?; + let mut buf = buf.as_slice(); + + let mut res = Vec::with_capacity(buf.len() / (32 + 1 + 8)); + while !buf.is_empty() { + let mut opt = [0xff]; + buf.read_exact(&mut opt).unwrap(); + assert!((opt[0] == 0) || (opt[0] == 1)); + + res.push((opt[0] == 1).then(|| { + let address = AddressFor::::read(&mut buf).unwrap(); + let balance = Balance::decode(&mut IoReader(&mut 
buf)).unwrap();
+        ReturnInformation { address, balance }
+      }));
+    }
+    Some(res)
+  }
+}
diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs
index b789ea58..8ac2c06b 100644
--- a/processor/scanner/src/report/mod.rs
+++ b/processor/scanner/src/report/mod.rs
@@ -8,15 +8,23 @@ use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch};
 use primitives::task::ContinuallyRan;

 use crate::{
-  db::{ScannerGlobalDb, ScanToReportDb},
+  db::{Returnable, ScannerGlobalDb, ScanToReportDb},
   index,
   scan::next_to_scan_for_outputs_block,
   ScannerFeed, BatchPublisher,
 };

 mod db;
+pub(crate) use db::ReturnInformation;
 use db::ReportDb;

+pub(crate) fn take_return_information<S: ScannerFeed>(
+  txn: &mut impl DbTxn,
+  id: u32,
+) -> Option<Vec<Option<ReturnInformation<S>>>> {
+  ReportDb::<S>::take_return_information(txn, id)
+}
+
 /*
   This task produces Batches for notable blocks, with all InInstructions, in an ordered fashion.

@@ -33,10 +41,10 @@ pub(crate) struct ReportTask<D: Db, S: ScannerFeed, B: BatchPublisher> {

 impl<D: Db, S: ScannerFeed, B: BatchPublisher> ReportTask<D, S, B> {
   pub(crate) fn new(mut db: D, batch_publisher: B, start_block: u64) -> Self {
-    if ReportDb::next_to_potentially_report_block(&db).is_none() {
+    if ReportDb::<S>::next_to_potentially_report_block(&db).is_none() {
       // Initialize the DB
       let mut txn = db.txn();
-      ReportDb::set_next_to_potentially_report_block(&mut txn, start_block);
+      ReportDb::<S>::set_next_to_potentially_report_block(&mut txn, start_block);
       txn.commit();
     }

@@ -64,7 +72,7 @@ impl<D: Db, S: ScannerFeed, B: BatchPublisher> ContinuallyRan for ReportTask<D,
-    let next_to_potentially_report = ReportDb::next_to_potentially_report_block(&self.db)
+    let next_to_potentially_report = ReportDb::<S>::next_to_potentially_report_block(&self.db)
       .expect("ReportTask run before writing the start block");

     for b in next_to_potentially_report ..= highest_reportable {
@@ -81,32 +89,53 @@ impl<D: Db, S: ScannerFeed, B: BatchPublisher> ContinuallyRan for ReportTask<D,
-        let mut batch_id = ReportDb::acquire_batch_id(&mut txn);
+        let mut batch_id = ReportDb::<S>::acquire_batch_id(&mut txn);

         // start with empty batch
         let mut batches =
           vec![Batch { network, id: batch_id, block: BlockHash(block_hash), instructions: vec![] }];
+        // We also track the return information for the InInstructions within a Batch in case they
+        // error
+        let mut return_information = vec![vec![]];
+
+        for Returnable { return_address, in_instruction } in in_instructions {
+          let balance = in_instruction.balance;

-        for instruction in in_instructions {
           let batch = batches.last_mut().unwrap();
-          batch.instructions.push(instruction);
+          batch.instructions.push(in_instruction);

           // check if batch is over-size
           if batch.encode().len() > MAX_BATCH_SIZE {
             // pop the last instruction so it's back in size
-            let instruction = batch.instructions.pop().unwrap();
+            let in_instruction = batch.instructions.pop().unwrap();

             // bump the id for the new batch
-            batch_id = ReportDb::acquire_batch_id(&mut txn);
+            batch_id = ReportDb::<S>::acquire_batch_id(&mut txn);

             // make a new batch with this instruction included
             batches.push(Batch {
               network,
               id: batch_id,
               block: BlockHash(block_hash),
-              instructions: vec![instruction],
+              instructions: vec![in_instruction],
             });
+            // Since we're allocating a new batch, allocate a new set of return addresses for it
+            return_information.push(vec![]);
           }
+
+          // For the set of return addresses for the InInstructions for the batch we just pushed
+          // onto, push this InInstruction's return addresses
+          return_information
+            .last_mut()
+            .unwrap()
+            .push(return_address.map(|address| ReturnInformation { address, balance }));
+        }
+
+        // Save the return addresses to the database
+        assert_eq!(batches.len(), return_information.len());
+        for (batch, return_information) in batches.iter().zip(&return_information) {
+          assert_eq!(batch.instructions.len(), return_information.len());
+          ReportDb::<S>::save_return_information(&mut txn, batch.id, return_information);
+        }
for batch in batches { @@ -119,7 +148,7 @@ impl ContinuallyRan for ReportTask::set_next_to_potentially_report_block(&mut txn, b + 1); txn.commit(); } diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 405861ba..4d6ca16e 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -13,7 +13,8 @@ use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Block}; use crate::{ lifetime::LifetimeStage, db::{ - OutputWithInInstruction, SenderScanData, ScannerGlobalDb, ScanToReportDb, ScanToEventualityDb, + OutputWithInInstruction, Returnable, SenderScanData, ScannerGlobalDb, ScanToReportDb, + ScanToEventualityDb, }, BlockExt, ScannerFeed, AddressFor, OutputFor, Return, sort_outputs, eventuality::latest_scannable_block, @@ -149,7 +150,13 @@ impl ContinuallyRan for ScanTask { queued_outputs }; for queued_output in queued_outputs { - in_instructions.push((queued_output.output.id(), queued_output.in_instruction)); + in_instructions.push(( + queued_output.output.id(), + Returnable { + return_address: queued_output.return_address, + in_instruction: queued_output.in_instruction, + }, + )); scan_data.received_external_outputs.push(queued_output.output); } @@ -302,7 +309,10 @@ impl ContinuallyRan for ScanTask { in_instructions.push(( output_with_in_instruction.output.id(), - output_with_in_instruction.in_instruction, + Returnable { + return_address: output_with_in_instruction.return_address, + in_instruction: output_with_in_instruction.in_instruction, + }, )); scan_data.received_external_outputs.push(output_with_in_instruction.output); } @@ -329,7 +339,7 @@ impl ContinuallyRan for ScanTask { let in_instructions = in_instructions.into_iter().map(|(_id, in_instruction)| in_instruction).collect::>(); // Send the InInstructions to the report task - ScanToReportDb::::send_in_instructions(&mut txn, b, in_instructions); + ScanToReportDb::::send_in_instructions(&mut txn, b, &in_instructions); // Send the scan data to the eventuality task ScanToEventualityDb::::send_scan_data(&mut txn, b, &scan_data); From f21838e0d5d6aa497e4d69ea725215654d177383 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 30 Aug 2024 01:33:40 -0400 Subject: [PATCH 057/368] Replace acknowledge_block with acknowledge_batch --- processor/scanner/src/lib.rs | 44 +++++++++++++++++++++-------- processor/scanner/src/report/db.rs | 13 ++++++++- processor/scanner/src/report/mod.rs | 11 ++++++-- 3 files changed, 53 insertions(+), 15 deletions(-) diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 93ed961d..f92002d6 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -317,21 +317,33 @@ impl Scanner { Self { eventuality_handle, _S: PhantomData } } - /// Acknowledge a block. + /// Acknowledge a Batch having been published on Serai. /// - /// This means this block was ordered on Serai in relation to `Burn` events, and all validators - /// have achieved synchrony on it. + /// This means the specified Batch was ordered on Serai in relation to Burn events, and all + /// validators have achieved synchrony on it. + /// + /// `in_instruction_succeededs` is the result of executing each InInstruction within this batch, + /// true if it succeeded and false if it did not (and did not cause any state changes on Serai). + /// + /// `burns` is a list of Burns to queue with the acknowledgement of this Batch for efficiency's + /// sake. 
Any Burns passed here MUST NOT be passed into any other call of `acknowledge_batch` nor + /// `queue_burns`. Doing so will cause them to be executed multiple times. /// /// The calls to this function must be ordered with regards to `queue_burns`. - pub fn acknowledge_block( + pub fn acknowledge_batch( &mut self, mut txn: impl DbTxn, - block_number: u64, batch_id: u32, in_instruction_succeededs: Vec, + mut burns: Vec, key_to_activate: Option>, ) { - log::info!("acknowledging block {block_number}"); + log::info!("acknowledging batch {batch_id}"); + + // TODO: We need to take all of these arguments and send them to a task + // Then, when we do have this block number, we need to execute this function + let block_number = report::take_block_number_for_batch::(&mut txn, batch_id) + .expect("didn't have the block number for a Batch"); assert!( ScannerGlobalDb::::is_block_notable(&txn, block_number), @@ -369,7 +381,6 @@ impl Scanner { ); // We map these into standard Burns - let mut returns = vec![]; for (succeeded, return_information) in in_instruction_succeededs.into_iter().zip(return_information) { @@ -378,15 +389,18 @@ impl Scanner { } if let Some(report::ReturnInformation { address, balance }) = return_information { - returns.push(OutInstructionWithBalance { + burns.push(OutInstructionWithBalance { instruction: OutInstruction { address: address.into(), data: None }, balance, }); } } - // We send them as stemming from this block - // TODO: These should be handled with any Burns from this block - SubstrateToEventualityDb::send_burns(&mut txn, block_number, &returns); + } + + if !burns.is_empty() { + // We send these Burns as stemming from this block we just acknowledged + // This causes them to be acted on after we accumulate the outputs from this block + SubstrateToEventualityDb::send_burns(&mut txn, block_number, &burns); } // Commit the txn @@ -402,7 +416,9 @@ impl Scanner { /// The scanner only updates the scheduler with new outputs upon acknowledging a block. The /// ability to fulfill Burns, and therefore their order, is dependent on the current output /// state. This immediately sets a bound that this function is ordered with regards to - /// `acknowledge_block`. + /// `acknowledge_batch`. + /// + /// The Burns specified here MUST NOT also be passed to `acknowledge_batch`. /* The fact Burns can be queued during any Substrate block is problematic. The scanner is allowed to scan anything within the window set by the Eventuality task. The Eventuality task is allowed @@ -427,6 +443,10 @@ impl Scanner { unnecessary). 
*/ pub fn queue_burns(&mut self, txn: &mut impl DbTxn, burns: &Vec) { + if burns.is_empty() { + return; + } + let queue_as_of = ScannerGlobalDb::::highest_acknowledged_block(txn) .expect("queueing Burns yet never acknowledged a block"); diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs index 4c96a360..baff6635 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/report/db.rs @@ -17,6 +17,9 @@ create_db!( // The next Batch ID to use NextBatchId: () -> u32, + // The block number which caused a batch + BlockNumberForBatch: (batch: u32) -> u64, + // The return addresses for the InInstructions within a Batch SerializedReturnAddresses: (batch: u32) -> Vec, } @@ -39,12 +42,19 @@ impl ReportDb { NextToPotentiallyReportBlock::get(getter) } - pub(crate) fn acquire_batch_id(txn: &mut impl DbTxn) -> u32 { + pub(crate) fn acquire_batch_id(txn: &mut impl DbTxn, block_number: u64) -> u32 { let id = NextBatchId::get(txn).unwrap_or(0); NextBatchId::set(txn, &(id + 1)); + BlockNumberForBatch::set(txn, id, &block_number); id } + pub(crate) fn take_block_number_for_batch(txn: &mut impl DbTxn, id: u32) -> Option { + let block_number = BlockNumberForBatch::get(txn, id)?; + BlockNumberForBatch::del(txn, id); + Some(block_number) + } + pub(crate) fn save_return_information( txn: &mut impl DbTxn, id: u32, @@ -67,6 +77,7 @@ impl ReportDb { id: u32, ) -> Option>>> { let buf = SerializedReturnAddresses::get(txn, id)?; + SerializedReturnAddresses::del(txn, id); let mut buf = buf.as_slice(); let mut res = Vec::with_capacity(buf.len() / (32 + 1 + 8)); diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index 8ac2c06b..ba851713 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -25,6 +25,13 @@ pub(crate) fn take_return_information( ReportDb::::take_return_information(txn, id) } +pub(crate) fn take_block_number_for_batch( + txn: &mut impl DbTxn, + id: u32, +) -> Option { + ReportDb::::take_block_number_for_batch(txn, id) +} + /* This task produces Batches for notable blocks, with all InInstructions, in an ordered fashion. @@ -89,7 +96,7 @@ impl ContinuallyRan for ReportTask::acquire_batch_id(&mut txn); + let mut batch_id = ReportDb::::acquire_batch_id(&mut txn, b); // start with empty batch let mut batches = @@ -110,7 +117,7 @@ impl ContinuallyRan for ReportTask::acquire_batch_id(&mut txn); + batch_id = ReportDb::::acquire_batch_id(&mut txn, b); // make a new batch with this instruction included batches.push(Batch { From 13b74195f7b5ec8161b140c42ef4dc719b0eab49 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 30 Aug 2024 02:27:22 -0400 Subject: [PATCH 058/368] Don't have `acknowledge_batch` immediately run `acknowledge_batch` can only be run if we know what the Batch should be. If we don't know what the Batch should be, we have to block until we do. Specifically, we need the block number associated with the Batch. Instead of blocking over the Scanner API, the Scanner API now solely queues actions. A new task intakes those actions once we can. This ensures we can intake the entire Substrate chain, even if our daemon for the external network is stalled at its genesis block. All of this for the block number alone seems ridiculous. To go from the block hash in the Batch to the block number without this task, we'd at least need the index task to be up to date (still requiring blocking or an API returning ephemeral errors). 
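A rough sketch of the queue-then-intake pattern this describes. The `Action` enum here is hypothetical, a stand-in for the entries the Scanner API now persists (the real code writes them to a DB channel so they survive restarts):

  use std::collections::VecDeque;

  // Hypothetical in-memory stand-in for the persisted action queue.
  enum Action {
    AcknowledgeBatch { batch_id: u32, in_instruction_succeededs: Vec<bool> },
    QueueBurns { serialized_burns: Vec<u8> },
  }

  struct SubstrateTask {
    queue: VecDeque<Action>,
  }

  impl SubstrateTask {
    // Stand-in; the real lookup reads the block number saved when the Batch was created
    fn block_number_for_batch(&self, _batch_id: u32) -> Option<u64> {
      None
    }

    // Drain queued actions in order, stopping at the first which can't be handled yet.
    // An AcknowledgeBatch whose block number is still unknown blocks everything after it,
    // preserving the ordering the Scanner API promises.
    fn run_once(&mut self) {
      while let Some(action) = self.queue.front() {
        match action {
          Action::AcknowledgeBatch { batch_id, .. } => {
            if self.block_number_for_batch(*batch_id).is_none() {
              // Not indexed yet; we'll be run again upon further progress
              return;
            }
            // ... acknowledge the Batch, queueing any Burns/returns for the block
          }
          Action::QueueBurns { .. } => {
            // ... forward the Burns to the Eventuality task
          }
        }
        self.queue.pop_front();
      }
    }
  }
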
--- processor/scanner/src/lib.rs | 111 +++++------------ processor/scanner/src/substrate/db.rs | 89 ++++++++++++++ processor/scanner/src/substrate/mod.rs | 162 +++++++++++++++++++++++++ 3 files changed, 282 insertions(+), 80 deletions(-) create mode 100644 processor/scanner/src/substrate/db.rs create mode 100644 processor/scanner/src/substrate/mod.rs diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index f92002d6..53bb9030 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -7,7 +7,7 @@ use serai_db::{Get, DbTxn, Db}; use serai_primitives::{NetworkId, Coin, Amount}; use serai_in_instructions_primitives::Batch; -use serai_coins_primitives::{OutInstruction, OutInstructionWithBalance}; +use serai_coins_primitives::OutInstructionWithBalance; use primitives::{task::*, Address, ReceivedOutput, Block}; @@ -17,15 +17,16 @@ pub use lifetime::LifetimeStage; // Database schema definition and associated functions. mod db; -use db::{ScannerGlobalDb, SubstrateToEventualityDb}; // Task to index the blockchain, ensuring we don't reorganize finalized blocks. mod index; // Scans blocks for received coins. mod scan; +/// Task which reports Batches to Substrate. +mod report; +/// Task which handles events from Substrate once we can. +mod substrate; /// Check blocks for transactions expected to eventually occur. mod eventuality; -/// Task which reports `Batch`s to Substrate. -mod report; pub(crate) fn sort_outputs>( a: &O, @@ -280,7 +281,7 @@ pub trait Scheduler: 'static + Send { /// A representation of a scanner. #[allow(non_snake_case)] pub struct Scanner { - eventuality_handle: RunNowHandle, + substrate_handle: RunNowHandle, _S: PhantomData, } impl Scanner { @@ -297,24 +298,29 @@ impl Scanner { let index_task = index::IndexTask::new(db.clone(), feed.clone(), start_block).await; let scan_task = scan::ScanTask::new(db.clone(), feed.clone(), start_block); let report_task = report::ReportTask::<_, S, _>::new(db.clone(), batch_publisher, start_block); + let substrate_task = substrate::SubstrateTask::<_, S>::new(db.clone()); let eventuality_task = eventuality::EventualityTask::new(db, feed, scheduler, start_block); let (_index_handle, index_run) = RunNowHandle::new(); let (scan_handle, scan_run) = RunNowHandle::new(); let (report_handle, report_run) = RunNowHandle::new(); + let (substrate_handle, substrate_run) = RunNowHandle::new(); let (eventuality_handle, eventuality_run) = RunNowHandle::new(); // Upon indexing a new block, scan it tokio::spawn(index_task.continually_run(index_run, vec![scan_handle.clone()])); // Upon scanning a block, report it tokio::spawn(scan_task.continually_run(scan_run, vec![report_handle])); - // Upon reporting a block, we do nothing + // Upon reporting a block, we do nothing (as the burden is on Substrate which won't be + // immediately ready) tokio::spawn(report_task.continually_run(report_run, vec![])); + // Upon handling an event from Substrate, we run the Eventuality task (as it's what's affected) + tokio::spawn(substrate_task.continually_run(substrate_run, vec![eventuality_handle])); // Upon handling the Eventualities in a block, we run the scan task as we've advanced the // window its allowed to scan tokio::spawn(eventuality_task.continually_run(eventuality_run, vec![scan_handle])); - Self { eventuality_handle, _S: PhantomData } + Self { substrate_handle, _S: PhantomData } } /// Acknowledge a Batch having been published on Serai. 
@@ -335,80 +341,23 @@ impl Scanner { mut txn: impl DbTxn, batch_id: u32, in_instruction_succeededs: Vec, - mut burns: Vec, + burns: Vec, key_to_activate: Option>, ) { log::info!("acknowledging batch {batch_id}"); - // TODO: We need to take all of these arguments and send them to a task - // Then, when we do have this block number, we need to execute this function - let block_number = report::take_block_number_for_batch::(&mut txn, batch_id) - .expect("didn't have the block number for a Batch"); - - assert!( - ScannerGlobalDb::::is_block_notable(&txn, block_number), - "acknowledging a block which wasn't notable" + // Queue acknowledging this block via the Substrate task + substrate::queue_acknowledge_batch::( + &mut txn, + batch_id, + in_instruction_succeededs, + burns, + key_to_activate, ); - if let Some(prior_highest_acknowledged_block) = - ScannerGlobalDb::::highest_acknowledged_block(&txn) - { - // If a single block produced multiple Batches, the block number won't increment - assert!( - block_number >= prior_highest_acknowledged_block, - "acknowledging blocks out-of-order" - ); - for b in (prior_highest_acknowledged_block + 1) .. block_number { - assert!( - !ScannerGlobalDb::::is_block_notable(&txn, b), - "skipped acknowledging a block which was notable" - ); - } - } - - ScannerGlobalDb::::set_highest_acknowledged_block(&mut txn, block_number); - if let Some(key_to_activate) = key_to_activate { - ScannerGlobalDb::::queue_key(&mut txn, block_number + S::WINDOW_LENGTH, key_to_activate); - } - - // Return the balances for any InInstructions which failed to execute - { - let return_information = report::take_return_information::(&mut txn, batch_id) - .expect("didn't save the return information for Batch we published"); - assert_eq!( - in_instruction_succeededs.len(), - return_information.len(), - "amount of InInstruction succeededs differed from amount of return information saved" - ); - - // We map these into standard Burns - for (succeeded, return_information) in - in_instruction_succeededs.into_iter().zip(return_information) - { - if succeeded { - continue; - } - - if let Some(report::ReturnInformation { address, balance }) = return_information { - burns.push(OutInstructionWithBalance { - instruction: OutInstruction { address: address.into(), data: None }, - balance, - }); - } - } - } - - if !burns.is_empty() { - // We send these Burns as stemming from this block we just acknowledged - // This causes them to be acted on after we accumulate the outputs from this block - SubstrateToEventualityDb::send_burns(&mut txn, block_number, &burns); - } - - // Commit the txn + // Commit this txn so this data is flushed txn.commit(); - // Run the Eventuality task since we've advanced it - // We couldn't successfully do this if that txn was still floating around, uncommitted - // The execution of this task won't actually have more work until the txn is committed - self.eventuality_handle.run_now(); + // Then run the Substrate task + self.substrate_handle.run_now(); } /// Queue Burns. @@ -442,14 +391,16 @@ impl Scanner { latency and likely practically require we add regularly scheduled notable blocks (which may be unnecessary). 
*/ - pub fn queue_burns(&mut self, txn: &mut impl DbTxn, burns: &Vec) { + pub fn queue_burns(&mut self, mut txn: impl DbTxn, burns: Vec) { if burns.is_empty() { return; } - let queue_as_of = ScannerGlobalDb::::highest_acknowledged_block(txn) - .expect("queueing Burns yet never acknowledged a block"); - - SubstrateToEventualityDb::send_burns(txn, queue_as_of, burns) + // Queue queueing these burns via the Substrate task + substrate::queue_queue_burns::(&mut txn, burns); + // Commit this txn so this data is flushed + txn.commit(); + // Then run the Substrate task + self.substrate_handle.run_now(); } } diff --git a/processor/scanner/src/substrate/db.rs b/processor/scanner/src/substrate/db.rs new file mode 100644 index 00000000..697897c2 --- /dev/null +++ b/processor/scanner/src/substrate/db.rs @@ -0,0 +1,89 @@ +use core::marker::PhantomData; + +use group::GroupEncoding; + +use borsh::{BorshSerialize, BorshDeserialize}; +use serai_db::{Get, DbTxn, create_db, db_channel}; + +use serai_coins_primitives::OutInstructionWithBalance; + +use crate::{ScannerFeed, KeyFor}; + +#[derive(BorshSerialize, BorshDeserialize)] +struct AcknowledgeBatchEncodable { + batch_id: u32, + in_instruction_succeededs: Vec, + burns: Vec, + key_to_activate: Option>, +} + +#[derive(BorshSerialize, BorshDeserialize)] +enum ActionEncodable { + AcknowledgeBatch(AcknowledgeBatchEncodable), + QueueBurns(Vec), +} + +pub(crate) struct AcknowledgeBatch { + pub(crate) batch_id: u32, + pub(crate) in_instruction_succeededs: Vec, + pub(crate) burns: Vec, + pub(crate) key_to_activate: Option>, +} + +pub(crate) enum Action { + AcknowledgeBatch(AcknowledgeBatch), + QueueBurns(Vec), +} + +db_channel!( + ScannerSubstrate { + Actions: (empty_key: ()) -> ActionEncodable, + } +); + +pub(crate) struct SubstrateDb(PhantomData); +impl SubstrateDb { + pub(crate) fn queue_acknowledge_batch( + txn: &mut impl DbTxn, + batch_id: u32, + in_instruction_succeededs: Vec, + burns: Vec, + key_to_activate: Option>, + ) { + Actions::send( + txn, + (), + &ActionEncodable::AcknowledgeBatch(AcknowledgeBatchEncodable { + batch_id, + in_instruction_succeededs, + burns, + key_to_activate: key_to_activate.map(|key| key.to_bytes().as_ref().to_vec()), + }), + ); + } + pub(crate) fn queue_queue_burns(txn: &mut impl DbTxn, burns: Vec) { + Actions::send(txn, (), &ActionEncodable::QueueBurns(burns)); + } + + pub(crate) fn next_action(txn: &mut impl DbTxn) -> Option> { + let action_encodable = Actions::try_recv(txn, ())?; + Some(match action_encodable { + ActionEncodable::AcknowledgeBatch(AcknowledgeBatchEncodable { + batch_id, + in_instruction_succeededs, + burns, + key_to_activate, + }) => Action::AcknowledgeBatch(AcknowledgeBatch { + batch_id, + in_instruction_succeededs, + burns, + key_to_activate: key_to_activate.map(|key| { + let mut repr = as GroupEncoding>::Repr::default(); + repr.as_mut().copy_from_slice(&key); + KeyFor::::from_bytes(&repr).unwrap() + }), + }), + ActionEncodable::QueueBurns(burns) => Action::QueueBurns(burns), + }) + } +} diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs new file mode 100644 index 00000000..4feb85d5 --- /dev/null +++ b/processor/scanner/src/substrate/mod.rs @@ -0,0 +1,162 @@ +use core::marker::PhantomData; + +use serai_db::{DbTxn, Db}; + +use serai_coins_primitives::{OutInstruction, OutInstructionWithBalance}; + +use primitives::task::ContinuallyRan; +use crate::{ + db::{ScannerGlobalDb, SubstrateToEventualityDb}, + report, ScannerFeed, KeyFor, +}; + +mod db; +use db::*; + +pub(crate) fn 
queue_acknowledge_batch( + txn: &mut impl DbTxn, + batch_id: u32, + in_instruction_succeededs: Vec, + burns: Vec, + key_to_activate: Option>, +) { + SubstrateDb::::queue_acknowledge_batch( + txn, + batch_id, + in_instruction_succeededs, + burns, + key_to_activate, + ) +} +pub(crate) fn queue_queue_burns( + txn: &mut impl DbTxn, + burns: Vec, +) { + SubstrateDb::::queue_queue_burns(txn, burns) +} + +/* + When Serai acknowledges a Batch, we can only handle it once we've scanned the chain and generated + the same Batch ourselves. This takes the `acknowledge_batch`, `queue_burns` arguments and sits on + them until we're able to process them. +*/ +#[allow(non_snake_case)] +pub(crate) struct SubstrateTask { + db: D, + _S: PhantomData, +} + +impl SubstrateTask { + pub(crate) fn new(db: D) -> Self { + Self { db, _S: PhantomData } + } +} + +#[async_trait::async_trait] +impl ContinuallyRan for SubstrateTask { + async fn run_iteration(&mut self) -> Result { + let mut made_progress = false; + loop { + // Fetch the next action to handle + let mut txn = self.db.txn(); + let Some(action) = SubstrateDb::::next_action(&mut txn) else { + drop(txn); + return Ok(made_progress); + }; + + match action { + Action::AcknowledgeBatch(AcknowledgeBatch { + batch_id, + in_instruction_succeededs, + mut burns, + key_to_activate, + }) => { + // Check if we have the information for this batch + let Some(block_number) = report::take_block_number_for_batch::(&mut txn, batch_id) + else { + // If we don't, drop this txn (restoring the action to the database) + drop(txn); + return Ok(made_progress); + }; + + // Mark we made progress and handle this + made_progress = true; + + assert!( + ScannerGlobalDb::::is_block_notable(&txn, block_number), + "acknowledging a block which wasn't notable" + ); + if let Some(prior_highest_acknowledged_block) = + ScannerGlobalDb::::highest_acknowledged_block(&txn) + { + // If a single block produced multiple Batches, the block number won't increment + assert!( + block_number >= prior_highest_acknowledged_block, + "acknowledging blocks out-of-order" + ); + for b in (prior_highest_acknowledged_block + 1) .. 
block_number { + assert!( + !ScannerGlobalDb::::is_block_notable(&txn, b), + "skipped acknowledging a block which was notable" + ); + } + } + + ScannerGlobalDb::::set_highest_acknowledged_block(&mut txn, block_number); + if let Some(key_to_activate) = key_to_activate { + ScannerGlobalDb::::queue_key( + &mut txn, + block_number + S::WINDOW_LENGTH, + key_to_activate, + ); + } + + // Return the balances for any InInstructions which failed to execute + { + let return_information = report::take_return_information::(&mut txn, batch_id) + .expect("didn't save the return information for Batch we published"); + assert_eq!( + in_instruction_succeededs.len(), + return_information.len(), + "amount of InInstruction succeededs differed from amount of return information saved" + ); + + // We map these into standard Burns + for (succeeded, return_information) in + in_instruction_succeededs.into_iter().zip(return_information) + { + if succeeded { + continue; + } + + if let Some(report::ReturnInformation { address, balance }) = return_information { + burns.push(OutInstructionWithBalance { + instruction: OutInstruction { address: address.into(), data: None }, + balance, + }); + } + } + } + + if !burns.is_empty() { + // We send these Burns as stemming from this block we just acknowledged + // This causes them to be acted on after we accumulate the outputs from this block + SubstrateToEventualityDb::send_burns(&mut txn, block_number, &burns); + } + } + + Action::QueueBurns(burns) => { + // We can instantly handle this so long as we've handled all prior actions + made_progress = true; + + let queue_as_of = ScannerGlobalDb::::highest_acknowledged_block(&txn) + .expect("queueing Burns yet never acknowledged a block"); + + SubstrateToEventualityDb::send_burns(&mut txn, queue_as_of, &burns); + } + } + + txn.commit(); + } + } +} From fc765bb9e0477a7b9a4f7eed5c0de1accc5f58f4 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 30 Aug 2024 19:51:53 -0400 Subject: [PATCH 059/368] Add crate for the transaction-chaining Scheduler --- .github/workflows/tests.yml | 1 + Cargo.toml | 1 + deny.toml | 3 +++ .../scheduler/transaction-chaining/Cargo.toml | 22 +++++++++++++++++++ .../scheduler/transaction-chaining/LICENSE | 15 +++++++++++++ .../scheduler/transaction-chaining/README.md | 19 ++++++++++++++++ .../scheduler/transaction-chaining/src/lib.rs | 3 +++ 7 files changed, 64 insertions(+) create mode 100644 processor/scheduler/transaction-chaining/Cargo.toml create mode 100644 processor/scheduler/transaction-chaining/LICENSE create mode 100644 processor/scheduler/transaction-chaining/README.md create mode 100644 processor/scheduler/transaction-chaining/src/lib.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 5032676f..070c5b58 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -42,6 +42,7 @@ jobs: -p serai-processor-key-gen \ -p serai-processor-frost-attempt-manager \ -p serai-processor-primitives \ + -p serai-processor-transaction-chaining-scheduler \ -p serai-processor-scanner \ -p serai-processor \ -p tendermint-machine \ diff --git a/Cargo.toml b/Cargo.toml index 7ad08a51..27e5e562 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -74,6 +74,7 @@ members = [ "processor/frost-attempt-manager", "processor/primitives", + "processor/scheduler/transaction-chaining", "processor/scanner", "processor", diff --git a/deny.toml b/deny.toml index ea61fcc1..7531f3b7 100644 --- a/deny.toml +++ b/deny.toml @@ -48,6 +48,9 @@ exceptions = [ { allow = ["AGPL-3.0"], name = 
"serai-processor-messages" }, { allow = ["AGPL-3.0"], name = "serai-processor-key-gen" }, { allow = ["AGPL-3.0"], name = "serai-processor-frost-attempt-manager" }, + + { allow = ["AGPL-3.0"], name = "serai-processor-transaction-chaining-scheduler" }, + { allow = ["AGPL-3.0"], name = "serai-processor-scanner" }, { allow = ["AGPL-3.0"], name = "serai-processor" }, { allow = ["AGPL-3.0"], name = "tributary-chain" }, diff --git a/processor/scheduler/transaction-chaining/Cargo.toml b/processor/scheduler/transaction-chaining/Cargo.toml new file mode 100644 index 00000000..360da6c5 --- /dev/null +++ b/processor/scheduler/transaction-chaining/Cargo.toml @@ -0,0 +1,22 @@ +[package] +name = "serai-processor-transaction-chaining-scheduler" +version = "0.1.0" +description = "Scheduler for networks with transaction chaining for the Serai processor" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/scheduler/transaction-chaining" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[package.metadata.cargo-machete] +ignored = ["scale"] + +[lints] +workspace = true + +[dependencies] diff --git a/processor/scheduler/transaction-chaining/LICENSE b/processor/scheduler/transaction-chaining/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/processor/scheduler/transaction-chaining/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/processor/scheduler/transaction-chaining/README.md b/processor/scheduler/transaction-chaining/README.md new file mode 100644 index 00000000..0788ff53 --- /dev/null +++ b/processor/scheduler/transaction-chaining/README.md @@ -0,0 +1,19 @@ +# Transaction Chaining Scheduler + +A scheduler of transactions for networks premised on the UTXO model which +support transaction chaining. Transaction chaining refers to the ability to +obtain an identifier for an output within a transaction not yet signed usable +to build and sign a transaction spending it. + +### Design + +The scheduler is designed to achieve fulfillment of all expected payments with +an `O(1)` delay (regardless of prior scheduler state), `O(log n)` time, and +`O(n)` computational complexity. + +Due to the ability to chain transactions, we can immediately plan/sign dependent +transactions. For the time/computational complexity, we use a tree to fulfill +payments. This quickly gives us the ability to make as many outputs as necessary +(regardless of per-transaction output limits) and only has the latency of +including a chain of `O(log n)` transactions on-chain. The only computational +overhead is in creating the transactions which are branches in the tree. 
diff --git a/processor/scheduler/transaction-chaining/src/lib.rs b/processor/scheduler/transaction-chaining/src/lib.rs new file mode 100644 index 00000000..3639aa04 --- /dev/null +++ b/processor/scheduler/transaction-chaining/src/lib.rs @@ -0,0 +1,3 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] From bd277e7032131cc988601b8b9b6662c1fed1d193 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 1 Sep 2024 00:01:01 -0400 Subject: [PATCH 060/368] Add processor/scheduler/utxo/primitives Includes the necessary signing functions and the fee amortization logic. Moves transaction-chaining to utxo/transaction-chaining. --- .github/workflows/tests.yml | 1 + Cargo.lock | 14 ++ Cargo.toml | 3 +- deny.toml | 1 + .../scheduler/utxo/primitives/Cargo.toml | 25 +++ processor/scheduler/utxo/primitives/LICENSE | 15 ++ processor/scheduler/utxo/primitives/README.md | 3 + .../scheduler/utxo/primitives/src/lib.rs | 179 ++++++++++++++++++ .../transaction-chaining/Cargo.toml | 0 .../{ => utxo}/transaction-chaining/LICENSE | 0 .../{ => utxo}/transaction-chaining/README.md | 0 .../transaction-chaining/src/lib.rs | 0 12 files changed, 240 insertions(+), 1 deletion(-) create mode 100644 processor/scheduler/utxo/primitives/Cargo.toml create mode 100644 processor/scheduler/utxo/primitives/LICENSE create mode 100644 processor/scheduler/utxo/primitives/README.md create mode 100644 processor/scheduler/utxo/primitives/src/lib.rs rename processor/scheduler/{ => utxo}/transaction-chaining/Cargo.toml (100%) rename processor/scheduler/{ => utxo}/transaction-chaining/LICENSE (100%) rename processor/scheduler/{ => utxo}/transaction-chaining/README.md (100%) rename processor/scheduler/{ => utxo}/transaction-chaining/src/lib.rs (100%) diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 070c5b58..33f2e852 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -42,6 +42,7 @@ jobs: -p serai-processor-key-gen \ -p serai-processor-frost-attempt-manager \ -p serai-processor-primitives \ + -p serai-processor-utxo-scheduler-primitives \ -p serai-processor-transaction-chaining-scheduler \ -p serai-processor-scanner \ -p serai-processor \ diff --git a/Cargo.lock b/Cargo.lock index 2a9de4b9..935e95d8 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8709,6 +8709,20 @@ dependencies = [ "zeroize", ] +[[package]] +name = "serai-processor-transaction-chaining-scheduler" +version = "0.1.0" + +[[package]] +name = "serai-processor-utxo-scheduler-primitives" +version = "0.1.0" +dependencies = [ + "async-trait", + "serai-primitives", + "serai-processor-primitives", + "serai-processor-scanner", +] + [[package]] name = "serai-reproducible-runtime-tests" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index 27e5e562..17435713 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -74,7 +74,8 @@ members = [ "processor/frost-attempt-manager", "processor/primitives", - "processor/scheduler/transaction-chaining", + "processor/scheduler/utxo/primitives", + "processor/scheduler/utxo/transaction-chaining", "processor/scanner", "processor", diff --git a/deny.toml b/deny.toml index 7531f3b7..fb616244 100644 --- a/deny.toml +++ b/deny.toml @@ -49,6 +49,7 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-processor-key-gen" }, { allow = ["AGPL-3.0"], name = "serai-processor-frost-attempt-manager" }, + { allow = ["AGPL-3.0"], name = "serai-processor-utxo-primitives" }, { allow = ["AGPL-3.0"], name = "serai-processor-transaction-chaining-scheduler" }, { allow 
= ["AGPL-3.0"], name = "serai-processor-scanner" }, { allow = ["AGPL-3.0"], name = "serai-processor" }, diff --git a/processor/scheduler/utxo/primitives/Cargo.toml b/processor/scheduler/utxo/primitives/Cargo.toml new file mode 100644 index 00000000..01d3db7d --- /dev/null +++ b/processor/scheduler/utxo/primitives/Cargo.toml @@ -0,0 +1,25 @@ +[package] +name = "serai-processor-utxo-scheduler-primitives" +version = "0.1.0" +description = "Primitives for UTXO schedulers for the Serai processor" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/scheduler/utxo/primitives" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +async-trait = { version = "0.1", default-features = false } + +serai-primitives = { path = "../../../../substrate/primitives", default-features = false, features = ["std"] } + +primitives = { package = "serai-processor-primitives", path = "../../../primitives" } +scanner = { package = "serai-processor-scanner", path = "../../../scanner" } diff --git a/processor/scheduler/utxo/primitives/LICENSE b/processor/scheduler/utxo/primitives/LICENSE new file mode 100644 index 00000000..e091b149 --- /dev/null +++ b/processor/scheduler/utxo/primitives/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/processor/scheduler/utxo/primitives/README.md b/processor/scheduler/utxo/primitives/README.md new file mode 100644 index 00000000..81bc954a --- /dev/null +++ b/processor/scheduler/utxo/primitives/README.md @@ -0,0 +1,3 @@ +# UTXO Scheduler Primitives + +Primitives for UTXO schedulers. diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs new file mode 100644 index 00000000..61dd9d88 --- /dev/null +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -0,0 +1,179 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +use core::fmt::Debug; + +use serai_primitives::{Coin, Amount}; + +use primitives::ReceivedOutput; +use scanner::{Payment, ScannerFeed, AddressFor, OutputFor}; + +/// An object able to plan a transaction. +#[async_trait::async_trait] +pub trait TransactionPlanner { + /// An error encountered when determining the fee rate. + /// + /// This MUST be an ephemeral error. Retrying fetching data from the blockchain MUST eventually + /// resolve without manual intervention/changing the arguments. + type EphemeralError: Debug; + + /// The type representing a fee rate to use for transactions. + type FeeRate: Clone + Copy; + + /// The type representing a planned transaction. + type PlannedTransaction; + + /// Obtain the fee rate to pay. + /// + /// This must be constant to the finalized block referenced by this block number and the coin. 
+ async fn fee_rate( + &self, + block_number: u64, + coin: Coin, + ) -> Result; + + /// Calculate the fee for a transaction with this structure. + /// + /// The fee rate, inputs, and payments, will all be for the same coin. The returned fee is + /// denominated in this coin. + fn calculate_fee( + &self, + block_number: u64, + fee_rate: Self::FeeRate, + inputs: Vec>, + payments: Vec>, + change: Option>, + ) -> Amount; + + /// Plan a transaction. + /// + /// This must only require the same fee as would be returned by `calculate_fee`. The caller is + /// trusted to maintain `sum(inputs) - sum(payments) >= if change.is_some() { DUST } else { 0 }`. + /// + /// `change` will always be an address belonging to the Serai network. + fn plan( + &self, + block_number: u64, + fee_rate: Self::FeeRate, + inputs: Vec>, + payments: Vec>, + change: Option>, + ) -> Self::PlannedTransaction; + + /// Obtain a PlannedTransaction via amortizing the fee over the payments. + /// + /// `operating_costs` is accrued to if Serai faces the burden of a fee or drops inputs not worth + /// accumulating. `operating_costs` will be amortized along with this transaction's fee as + /// possible. Please see `spec/processor/UTXO Management.md` for more information. + /// + /// Returns `None` if the fee exceeded the inputs, or `Some` otherwise. + fn plan_transaction_with_fee_amortization( + &self, + operating_costs: &mut u64, + block_number: u64, + fee_rate: Self::FeeRate, + inputs: Vec>, + mut payments: Vec>, + change: Option>, + ) -> Option { + // Sanity checks + { + assert!(!inputs.is_empty()); + assert!((!payments.is_empty()) || change.is_some()); + let coin = inputs.first().unwrap().balance().coin; + for input in &inputs { + assert_eq!(coin, input.balance().coin); + } + for payment in &payments { + assert_eq!(coin, payment.balance().coin); + } + assert!( + (inputs.iter().map(|input| input.balance().amount.0).sum::() + *operating_costs) >= + payments.iter().map(|payment| payment.balance().amount.0).sum::(), + "attempted to fulfill payments without a sufficient input set" + ); + } + + let coin = inputs.first().unwrap().balance().coin; + + // Amortization + { + // Sort payments from high amount to low amount + payments.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0).reverse()); + + let mut fee = self + .calculate_fee(block_number, fee_rate, inputs.clone(), payments.clone(), change.clone()) + .0; + let mut amortized = 0; + while !payments.is_empty() { + // We need to pay the fee, and any accrued operating costs, minus what we've already + // amortized + let adjusted_fee = (*operating_costs + fee).saturating_sub(amortized); + + /* + Ideally, we wouldn't use a ceil div yet would be accurate about it. Any remainder could + be amortized over the largest outputs, which wouldn't be relevant here as we only work + with the smallest output. The issue is the theoretical edge case where all outputs have + the same value and are of the minimum value. In that case, none would be able to have the + remainder amortized as it'd cause them to need to be dropped. Using a ceil div avoids + this.
+ */ + let per_payment_fee = adjusted_fee.div_ceil(u64::try_from(payments.len()).unwrap()); + // Pop the last payment if it can't pay the fee, remaining above the dust limit as it does + if payments.last().unwrap().balance().amount.0 <= (per_payment_fee + S::dust(coin).0) { + amortized += payments.pop().unwrap().balance().amount.0; + // Recalculate the fee and try again + fee = self + .calculate_fee(block_number, fee_rate, inputs.clone(), payments.clone(), change.clone()) + .0; + continue; + } + // Break since all of these payments shouldn't be dropped + break; + } + + // If we couldn't amortize the fee over the payments, check if we even have enough to pay it + if payments.is_empty() { + // If we don't have a change output, we simply return here + // We no longer have anything to do here, nor any expectations + if change.is_none() { + None?; + } + + let inputs = inputs.iter().map(|input| input.balance().amount.0).sum::(); + // Checks not just if we can pay for it, yet that the would-be change output is at least + // dust + if inputs < (fee + S::dust(coin).0) { + // Write off these inputs + *operating_costs += inputs; + // Yet also claw back the payments we dropped, as we only lost the change + // The dropped payments will be worth less than the inputs + operating_costs we started + // with, so this shouldn't use `saturating_sub` + *operating_costs -= amortized; + None?; + } + } else { + // Since we have payments which can pay the fee we ended up with, amortize it + let adjusted_fee = (*operating_costs + fee).saturating_sub(amortized); + let per_payment_base_fee = adjusted_fee / u64::try_from(payments.len()).unwrap(); + let payments_paying_one_atomic_unit_more = + usize::try_from(adjusted_fee % u64::try_from(payments.len()).unwrap()).unwrap(); + + for (i, payment) in payments.iter_mut().enumerate() { + let per_payment_fee = + per_payment_base_fee + u64::from(u8::from(i < payments_paying_one_atomic_unit_more)); + payment.balance().amount.0 -= per_payment_fee; + amortized += per_payment_fee; + } + assert!(amortized >= (*operating_costs + fee)); + } + + // Update the amount of operating costs + *operating_costs = (*operating_costs + fee).saturating_sub(amortized); + } + + // Because we amortized, or accrued as operating costs, the fee, make the transaction + Some(self.plan(block_number, fee_rate, inputs, payments, change)) + } +} diff --git a/processor/scheduler/transaction-chaining/Cargo.toml b/processor/scheduler/utxo/transaction-chaining/Cargo.toml similarity index 100% rename from processor/scheduler/transaction-chaining/Cargo.toml rename to processor/scheduler/utxo/transaction-chaining/Cargo.toml diff --git a/processor/scheduler/transaction-chaining/LICENSE b/processor/scheduler/utxo/transaction-chaining/LICENSE similarity index 100% rename from processor/scheduler/transaction-chaining/LICENSE rename to processor/scheduler/utxo/transaction-chaining/LICENSE diff --git a/processor/scheduler/transaction-chaining/README.md b/processor/scheduler/utxo/transaction-chaining/README.md similarity index 100% rename from processor/scheduler/transaction-chaining/README.md rename to processor/scheduler/utxo/transaction-chaining/README.md diff --git a/processor/scheduler/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs similarity index 100% rename from processor/scheduler/transaction-chaining/src/lib.rs rename to processor/scheduler/utxo/transaction-chaining/src/lib.rs From 6deb60513c8b352b2c9a2830ecca241edbeff02d Mon Sep 17 00:00:00 2001 From: Luke Parker
Date: Sun, 1 Sep 2024 00:05:08 -0400 Subject: [PATCH 061/368] Expand primitives/scanner with niceties needed for the scheduler --- processor/primitives/src/output.rs | 2 +- processor/scanner/src/eventuality/mod.rs | 10 ++-- processor/scanner/src/lib.rs | 67 +++++++++++++++++++++--- processor/scanner/src/scan/mod.rs | 4 +- processor/scanner/src/substrate/mod.rs | 5 ++ 5 files changed, 75 insertions(+), 13 deletions(-) diff --git a/processor/primitives/src/output.rs b/processor/primitives/src/output.rs index 9a300940..d59b4fd0 100644 --- a/processor/primitives/src/output.rs +++ b/processor/primitives/src/output.rs @@ -8,7 +8,7 @@ use serai_primitives::{ExternalAddress, Balance}; use crate::Id; /// An address on the external network. -pub trait Address: Send + Sync + Into + TryFrom { +pub trait Address: Send + Sync + Clone + Into + TryFrom { /// Write this address. fn write(&self, writer: &mut impl io::Write) -> io::Result<()>; /// Read an address. diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 3be7f3ce..bfc879ea 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -12,7 +12,7 @@ use crate::{ SeraiKey, OutputWithInInstruction, ReceiverScanData, ScannerGlobalDb, SubstrateToEventualityDb, ScanToEventualityDb, }, - BlockExt, ScannerFeed, KeyFor, OutputFor, EventualityFor, SchedulerUpdate, Scheduler, + BlockExt, ScannerFeed, KeyFor, OutputFor, EventualityFor, Payment, SchedulerUpdate, Scheduler, sort_outputs, scan::{next_to_scan_for_outputs_block, queue_output_until_block}, }; @@ -165,7 +165,11 @@ impl> EventualityTask { { intaked_any = true; - let new_eventualities = self.scheduler.fulfill(&mut txn, &keys_with_stages, burns); + let new_eventualities = self.scheduler.fulfill( + &mut txn, + &keys_with_stages, + burns.into_iter().filter_map(|burn| Payment::try_from(burn).ok()).collect(), + ); intake_eventualities::(&mut txn, new_eventualities); } txn.commit(); @@ -291,7 +295,7 @@ impl> ContinuallyRan for EventualityTas // Drop any outputs less than the dust limit non_external_outputs.retain(|output| { let balance = output.balance(); - balance.amount.0 >= self.feed.dust(balance.coin).0 + balance.amount.0 >= S::dust(balance.coin).0 }); /* diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 53bb9030..80cf96be 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -5,7 +5,7 @@ use group::GroupEncoding; use serai_db::{Get, DbTxn, Db}; -use serai_primitives::{NetworkId, Coin, Amount}; +use serai_primitives::{NetworkId, Coin, Amount, Balance, Data}; use serai_in_instructions_primitives::Batch; use serai_coins_primitives::OutInstructionWithBalance; @@ -143,7 +143,7 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { /// /// This MUST be constant. Serai MUST NOT create internal outputs worth less than this. This /// SHOULD be a value worth handling at a human level. - fn dust(&self, coin: Coin) -> Amount; + fn dust(coin: Coin) -> Amount; /// The cost to aggregate an input as of the specified block. /// @@ -155,10 +155,14 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { ) -> Result; } -type KeyFor = <::Block as Block>::Key; -type AddressFor = <::Block as Block>::Address; -type OutputFor = <::Block as Block>::Output; -type EventualityFor = <::Block as Block>::Eventuality; +/// The key type for this ScannerFeed. +pub type KeyFor = <::Block as Block>::Key; +/// The address type for this ScannerFeed. 
+pub type AddressFor = <::Block as Block>::Address; +/// The output type for this ScannerFeed. +pub type OutputFor = <::Block as Block>::Output; +/// The eventuality type for this ScannerFeed. +pub type EventualityFor = <::Block as Block>::Eventuality; #[async_trait::async_trait] pub trait BatchPublisher: 'static + Send + Sync { @@ -200,6 +204,55 @@ pub struct SchedulerUpdate { returns: Vec>, } +impl SchedulerUpdate { + /// The outputs to accumulate. + pub fn outputs(&self) -> &[OutputFor] { + &self.outputs + } + /// The outputs to forward to the latest multisig. + pub fn forwards(&self) -> &[OutputFor] { + &self.forwards + } + /// The outputs to return. + pub fn returns(&self) -> &[Return] { + &self.returns + } +} + +/// A payment to fulfill. +#[derive(Clone)] +pub struct Payment { + address: AddressFor, + balance: Balance, + data: Option>, +} + +impl TryFrom for Payment { + type Error = (); + fn try_from(out_instruction_with_balance: OutInstructionWithBalance) -> Result { + Ok(Payment { + address: out_instruction_with_balance.instruction.address.try_into().map_err(|_| ())?, + balance: out_instruction_with_balance.balance, + data: out_instruction_with_balance.instruction.data.map(Data::consume), + }) + } +} + +impl Payment { + /// The address to pay. + pub fn address(&self) -> &AddressFor { + &self.address + } + /// The balance to transfer. + pub fn balance(&self) -> Balance { + self.balance + } + /// The data to associate with this payment. + pub fn data(&self) -> &Option> { + &self.data + } +} + /// The object responsible for accumulating outputs and planning new transactions. pub trait Scheduler: 'static + Send { /// Activate a key. @@ -274,7 +327,7 @@ pub trait Scheduler: 'static + Send { &mut self, txn: &mut impl DbTxn, active_keys: &[(KeyFor, LifetimeStage)], - payments: Vec, + payments: Vec>, ) -> HashMap, Vec>>; } diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 4d6ca16e..51671dc6 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -215,7 +215,7 @@ impl ContinuallyRan for ScanTask { let balance = output.balance(); // We ensure it's over the dust limit to prevent people sending 1 satoshi from causing // an invocation of a consensus/signing protocol - if balance.amount.0 >= self.feed.dust(balance.coin).0 { + if balance.amount.0 >= S::dust(balance.coin).0 { ScannerGlobalDb::::flag_notable_due_to_non_external_output(&mut txn, b); } continue; @@ -243,7 +243,7 @@ impl ContinuallyRan for ScanTask { balance.amount.0 -= 2 * cost_to_aggregate.0; // Now, check it's still past the dust threshold - if balance.amount.0 < self.feed.dust(balance.coin).0 { + if balance.amount.0 < S::dust(balance.coin).0 { continue; } diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index 4feb85d5..d67be9dc 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -138,6 +138,11 @@ impl ContinuallyRan for SubstrateTask { } } + // Drop burns less than the dust + let burns = burns + .into_iter() + .filter(|burn| burn.balance.amount.0 >= S::dust(burn.balance.coin).0) + .collect::>(); if !burns.is_empty() { // We send these Burns as stemming from this block we just acknowledged // This causes them to be acted on after we accumulate the outputs from this block From c88ebe985eed23ed97727594ba0c51477238c9bf Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 1 Sep 2024 01:55:04 -0400 Subject: [PATCH 062/368] Outline of the transaction-chaining scheduler --- 
Cargo.lock | 14 +- processor/primitives/Cargo.toml | 1 + processor/primitives/src/lib.rs | 3 + processor/primitives/src/payment.rs | 42 +++++ processor/scanner/Cargo.toml | 4 +- processor/scanner/src/eventuality/mod.rs | 9 +- processor/scanner/src/lib.rs | 50 ++---- processor/scanner/src/lifetime.rs | 2 +- .../scheduler/utxo/primitives/src/lib.rs | 50 +++--- .../utxo/transaction-chaining/Cargo.toml | 19 ++- .../utxo/transaction-chaining/LICENSE | 2 +- .../utxo/transaction-chaining/src/db.rs | 49 ++++++ .../utxo/transaction-chaining/src/lib.rs | 148 ++++++++++++++++++ 13 files changed, 321 insertions(+), 72 deletions(-) create mode 100644 processor/primitives/src/payment.rs create mode 100644 processor/scheduler/utxo/transaction-chaining/src/db.rs diff --git a/Cargo.lock b/Cargo.lock index 935e95d8..7512f35c 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8656,6 +8656,7 @@ dependencies = [ "group", "log", "parity-scale-codec", + "serai-coins-primitives", "serai-primitives", "tokio", ] @@ -8674,9 +8675,7 @@ dependencies = [ "serai-db", "serai-in-instructions-primitives", "serai-primitives", - "serai-processor-messages", "serai-processor-primitives", - "thiserror", "tokio", ] @@ -8712,6 +8711,17 @@ dependencies = [ [[package]] name = "serai-processor-transaction-chaining-scheduler" version = "0.1.0" +dependencies = [ + "borsh", + "group", + "parity-scale-codec", + "serai-coins-primitives", + "serai-db", + "serai-primitives", + "serai-processor-primitives", + "serai-processor-scanner", + "serai-processor-utxo-scheduler-primitives", +] [[package]] name = "serai-processor-utxo-scheduler-primitives" diff --git a/processor/primitives/Cargo.toml b/processor/primitives/Cargo.toml index 9427a604..dd1b74ea 100644 --- a/processor/primitives/Cargo.toml +++ b/processor/primitives/Cargo.toml @@ -22,6 +22,7 @@ async-trait = { version = "0.1", default-features = false } group = { version = "0.13", default-features = false } serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } +serai-coins-primitives = { path = "../../substrate/coins/primitives", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } diff --git a/processor/primitives/src/lib.rs b/processor/primitives/src/lib.rs index 7a8be219..4e45fa8f 100644 --- a/processor/primitives/src/lib.rs +++ b/processor/primitives/src/lib.rs @@ -21,6 +21,9 @@ pub use eventuality::*; mod block; pub use block::*; +mod payment; +pub use payment::*; + /// An ID for an output/transaction/block/etc. /// /// IDs don't need to implement `Copy`, enabling `[u8; 33]`, `[u8; 64]` to be used. IDs are still diff --git a/processor/primitives/src/payment.rs b/processor/primitives/src/payment.rs new file mode 100644 index 00000000..1bbb0604 --- /dev/null +++ b/processor/primitives/src/payment.rs @@ -0,0 +1,42 @@ +use serai_primitives::{Balance, Data}; +use serai_coins_primitives::OutInstructionWithBalance; + +use crate::Address; + +/// A payment to fulfill. 
+#[derive(Clone)] +pub struct Payment { + address: A, + balance: Balance, + data: Option>, +} + +impl TryFrom for Payment { + type Error = (); + fn try_from(out_instruction_with_balance: OutInstructionWithBalance) -> Result { + Ok(Payment { + address: out_instruction_with_balance.instruction.address.try_into().map_err(|_| ())?, + balance: out_instruction_with_balance.balance, + data: out_instruction_with_balance.instruction.data.map(Data::consume), + }) + } +} + +impl Payment { + /// Create a new Payment. + pub fn new(address: A, balance: Balance, data: Option>) -> Self { + Payment { address, balance, data } + } + /// The address to pay. + pub fn address(&self) -> &A { + &self.address + } + /// The balance to transfer. + pub fn balance(&self) -> Balance { + self.balance + } + /// The data to associate with this payment. + pub fn data(&self) -> &Option> { + &self.data + } +} diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index e7cdef97..c2dc31fe 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -19,7 +19,6 @@ workspace = true [dependencies] # Macros async-trait = { version = "0.1", default-features = false } -thiserror = { version = "1", default-features = false } # Encoders hex = { version = "0.4", default-features = false, features = ["std"] } @@ -37,7 +36,6 @@ serai-db = { path = "../../common/db" } serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } serai-in-instructions-primitives = { path = "../../substrate/in-instructions/primitives", default-features = false, features = ["std"] } -serai-coins-primitives = { path = "../../substrate/coins/primitives", default-features = false, features = ["std"] } +serai-coins-primitives = { path = "../../substrate/coins/primitives", default-features = false, features = ["std", "borsh"] } -messages = { package = "serai-processor-messages", path = "../messages" } primitives = { package = "serai-processor-primitives", path = "../primitives" } diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index bfc879ea..83ec50ab 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -4,7 +4,7 @@ use group::GroupEncoding; use serai_db::{Get, DbTxn, Db}; -use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Eventuality, Block}; +use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Eventuality, Block, Payment}; use crate::{ lifetime::LifetimeStage, @@ -12,7 +12,7 @@ use crate::{ SeraiKey, OutputWithInInstruction, ReceiverScanData, ScannerGlobalDb, SubstrateToEventualityDb, ScanToEventualityDb, }, - BlockExt, ScannerFeed, KeyFor, OutputFor, EventualityFor, Payment, SchedulerUpdate, Scheduler, + BlockExt, ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, SchedulerUpdate, Scheduler, sort_outputs, scan::{next_to_scan_for_outputs_block, queue_output_until_block}, }; @@ -168,7 +168,10 @@ impl> EventualityTask { let new_eventualities = self.scheduler.fulfill( &mut txn, &keys_with_stages, - burns.into_iter().filter_map(|burn| Payment::try_from(burn).ok()).collect(), + burns + .into_iter() + .filter_map(|burn| Payment::>::try_from(burn).ok()) + .collect(), ); intake_eventualities::(&mut txn, new_eventualities); } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 80cf96be..4d33d0d0 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -5,11 +5,11 @@ use group::GroupEncoding; use 
serai_db::{Get, DbTxn, Db}; -use serai_primitives::{NetworkId, Coin, Amount, Balance, Data}; +use serai_primitives::{NetworkId, Coin, Amount}; use serai_in_instructions_primitives::Batch; use serai_coins_primitives::OutInstructionWithBalance; -use primitives::{task::*, Address, ReceivedOutput, Block}; +use primitives::{task::*, Address, ReceivedOutput, Block, Payment}; // Logic for deciding where in its lifetime a multisig is. mod lifetime; @@ -195,6 +195,16 @@ impl Return { let output = OutputFor::::read(reader)?; Ok(Return { address, output }) } + + /// The address to return the output to. + pub fn address(&self) -> &AddressFor { + &self.address + } + + /// The output to return. + pub fn output(&self) -> &OutputFor { + &self.output + } } /// An update for the scheduler. @@ -219,40 +229,6 @@ impl SchedulerUpdate { } } -/// A payment to fulfill. -#[derive(Clone)] -pub struct Payment { - address: AddressFor, - balance: Balance, - data: Option>, -} - -impl TryFrom for Payment { - type Error = (); - fn try_from(out_instruction_with_balance: OutInstructionWithBalance) -> Result { - Ok(Payment { - address: out_instruction_with_balance.instruction.address.try_into().map_err(|_| ())?, - balance: out_instruction_with_balance.balance, - data: out_instruction_with_balance.instruction.data.map(Data::consume), - }) - } -} - -impl Payment { - /// The address to pay. - pub fn address(&self) -> &AddressFor { - &self.address - } - /// The balance to transfer. - pub fn balance(&self) -> Balance { - self.balance - } - /// The data to associate with this payment. - pub fn data(&self) -> &Option> { - &self.data - } -} - /// The object responsible for accumulating outputs and planning new transactions. pub trait Scheduler: 'static + Send { /// Activate a key. @@ -327,7 +303,7 @@ pub trait Scheduler: 'static + Send { &mut self, txn: &mut impl DbTxn, active_keys: &[(KeyFor, LifetimeStage)], - payments: Vec>, + payments: Vec>>, ) -> HashMap, Vec>>; } diff --git a/processor/scanner/src/lifetime.rs b/processor/scanner/src/lifetime.rs index bef6af8b..e07f5f42 100644 --- a/processor/scanner/src/lifetime.rs +++ b/processor/scanner/src/lifetime.rs @@ -6,7 +6,7 @@ use crate::ScannerFeed; /// rotation process. Steps 7-8 regard a multisig which isn't retiring yet retired, and /// accordingly, no longer exists, so they are not modelled here (as this only models active /// multisigs. Inactive multisigs aren't represented in the first place). -#[derive(Clone, Copy, PartialEq)] +#[derive(Clone, Copy, PartialEq, Debug)] pub enum LifetimeStage { /// A new multisig, once active, shouldn't actually start receiving coins until several blocks /// later. If any UI is premature in sending to this multisig, we delay to report the outputs to diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs index 61dd9d88..2c6da97b 100644 --- a/processor/scheduler/utxo/primitives/src/lib.rs +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -6,12 +6,12 @@ use core::fmt::Debug; use serai_primitives::{Coin, Amount}; -use primitives::ReceivedOutput; -use scanner::{Payment, ScannerFeed, AddressFor, OutputFor}; +use primitives::{ReceivedOutput, Payment}; +use scanner::{ScannerFeed, KeyFor, AddressFor, OutputFor}; /// An object able to plan a transaction. #[async_trait::async_trait] -pub trait TransactionPlanner { +pub trait TransactionPlanner: 'static + Send + Sync { /// An error encountered when determining the fee rate. /// /// This MUST be an ephemeral error. 
Retrying fetching data from the blockchain MUST eventually @@ -33,17 +33,22 @@ pub trait TransactionPlanner { coin: Coin, ) -> Result; + /// The branch address for this key of Serai's. + fn branch_address(key: KeyFor) -> AddressFor; + /// The change address for this key of Serai's. + fn change_address(key: KeyFor) -> AddressFor; + /// The forwarding address for this key of Serai's. + fn forwarding_address(key: KeyFor) -> AddressFor; + /// Calculate the fee for a transaction with this structure. /// /// The fee rate, inputs, and payments, will all be for the same coin. The returned fee is /// denominated in this coin. fn calculate_fee( - &self, - block_number: u64, fee_rate: Self::FeeRate, inputs: Vec>, - payments: Vec>, - change: Option>, + payments: Vec>>, + change: Option>, ) -> Amount; /// Plan a transaction. @@ -53,12 +58,10 @@ pub trait TransactionPlanner { /// /// `change` will always be an address belonging to the Serai network. fn plan( - &self, - block_number: u64, fee_rate: Self::FeeRate, inputs: Vec>, - payments: Vec>, - change: Option>, + payments: Vec>>, + change: Option>, ) -> Self::PlannedTransaction; /// Obtain a PlannedTransaction via amortizing the fee over the payments. @@ -69,13 +72,11 @@ pub trait TransactionPlanner { /// /// Returns `None` if the fee exceeded the inputs, or `Some` otherwise. fn plan_transaction_with_fee_amortization( - &self, operating_costs: &mut u64, - block_number: u64, fee_rate: Self::FeeRate, inputs: Vec>, - mut payments: Vec>, - change: Option>, + mut payments: Vec>>, + mut change: Option>, ) -> Option { // Sanity checks { @@ -102,9 +103,7 @@ pub trait TransactionPlanner { // Sort payments from high amount to low amount payments.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0).reverse()); - let mut fee = self - .calculate_fee(block_number, fee_rate, inputs.clone(), payments.clone(), change.clone()) - .0; + let mut fee = Self::calculate_fee(fee_rate, inputs.clone(), payments.clone(), change).0; let mut amortized = 0; while !payments.is_empty() { // We need to pay the fee, and any accrued operating costs, minus what we've already @@ -124,9 +123,7 @@ pub trait TransactionPlanner { if payments.last().unwrap().balance().amount.0 <= (per_payment_fee + S::dust(coin).0) { amortized += payments.pop().unwrap().balance().amount.0; // Recalculate the fee and try again - fee = self - .calculate_fee(block_number, fee_rate, inputs.clone(), payments.clone(), change.clone()) - .0; + fee = Self::calculate_fee(fee_rate, inputs.clone(), payments.clone(), change).0; continue; } // Break since all of these payments shouldn't be dropped @@ -167,6 +164,15 @@ pub trait TransactionPlanner { amortized += per_payment_fee; } assert!(amortized >= (*operating_costs + fee)); + + // If the change is less than the dust, drop it + let would_be_change = inputs.iter().map(|input| input.balance().amount.0).sum::() - + payments.iter().map(|payment| payment.balance().amount.0).sum::() - + fee; + if would_be_change < S::dust(coin).0 { + change = None; + *operating_costs += would_be_change; + } } // Update the amount of operating costs @@ -174,6 +180,6 @@ pub trait TransactionPlanner { } // Because we amortized, or accrued as operating costs, the fee, make the transaction - Some(self.plan(block_number, fee_rate, inputs, payments, change)) + Some(Self::plan(fee_rate, inputs, payments, change)) } } diff --git a/processor/scheduler/utxo/transaction-chaining/Cargo.toml b/processor/scheduler/utxo/transaction-chaining/Cargo.toml index 360da6c5..d54d0f85 100644 ---
a/processor/scheduler/utxo/transaction-chaining/Cargo.toml +++ b/processor/scheduler/utxo/transaction-chaining/Cargo.toml @@ -1,9 +1,9 @@ [package] name = "serai-processor-transaction-chaining-scheduler" version = "0.1.0" -description = "Scheduler for networks with transaction chaining for the Serai processor" +description = "Scheduler for UTXO networks with transaction chaining for the Serai processor" license = "AGPL-3.0-only" -repository = "https://github.com/serai-dex/serai/tree/develop/processor/scheduler/transaction-chaining" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/scheduler/utxo/transaction-chaining" authors = ["Luke Parker "] keywords = [] edition = "2021" @@ -14,9 +14,22 @@ all-features = true rustdoc-args = ["--cfg", "docsrs"] [package.metadata.cargo-machete] -ignored = ["scale"] +ignored = ["scale", "borsh"] [lints] workspace = true [dependencies] +group = { version = "0.13", default-features = false } + +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +serai-primitives = { path = "../../../../substrate/primitives", default-features = false, features = ["std"] } +serai-coins-primitives = { path = "../../../../substrate/coins/primitives", default-features = false, features = ["std"] } + +serai-db = { path = "../../../../common/db" } + +primitives = { package = "serai-processor-primitives", path = "../../../primitives" } +scheduler-primitives = { package = "serai-processor-utxo-scheduler-primitives", path = "../primitives" } +scanner = { package = "serai-processor-scanner", path = "../../../scanner" } diff --git a/processor/scheduler/utxo/transaction-chaining/LICENSE b/processor/scheduler/utxo/transaction-chaining/LICENSE index 41d5a261..e091b149 100644 --- a/processor/scheduler/utxo/transaction-chaining/LICENSE +++ b/processor/scheduler/utxo/transaction-chaining/LICENSE @@ -1,6 +1,6 @@ AGPL-3.0-only license -Copyright (c) 2022-2024 Luke Parker +Copyright (c) 2024 Luke Parker This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License Version 3 as diff --git a/processor/scheduler/utxo/transaction-chaining/src/db.rs b/processor/scheduler/utxo/transaction-chaining/src/db.rs new file mode 100644 index 00000000..20c574e9 --- /dev/null +++ b/processor/scheduler/utxo/transaction-chaining/src/db.rs @@ -0,0 +1,49 @@ +use core::marker::PhantomData; + +use group::GroupEncoding; + +use serai_primitives::Coin; + +use serai_db::{Get, DbTxn, create_db}; + +use primitives::ReceivedOutput; +use scanner::{ScannerFeed, KeyFor, OutputFor}; + +create_db! 
{ + TransactionChainingScheduler { + SerializedOutputs: (key: &[u8], coin: Coin) -> Vec, + } +} + +pub(crate) struct Db(PhantomData); +impl Db { + pub(crate) fn outputs( + getter: &impl Get, + key: KeyFor, + coin: Coin, + ) -> Option>> { + let buf = SerializedOutputs::get(getter, key.to_bytes().as_ref(), coin)?; + let mut buf = buf.as_slice(); + + let mut res = Vec::with_capacity(buf.len() / 128); + while !buf.is_empty() { + res.push(OutputFor::::read(&mut buf).unwrap()); + } + Some(res) + } + pub(crate) fn set_outputs( + txn: &mut impl DbTxn, + key: KeyFor, + coin: Coin, + outputs: &[OutputFor], + ) { + let mut buf = Vec::with_capacity(outputs.len() * 128); + for output in outputs { + output.write(&mut buf).unwrap(); + } + SerializedOutputs::set(txn, key.to_bytes().as_ref(), coin, &buf); + } + pub(crate) fn del_outputs(txn: &mut impl DbTxn, key: KeyFor, coin: Coin) { + SerializedOutputs::del(txn, key.to_bytes().as_ref(), coin); + } +} diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index 3639aa04..63635696 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -1,3 +1,151 @@ #![cfg_attr(docsrs, feature(doc_auto_cfg))] #![doc = include_str!("../README.md")] #![deny(missing_docs)] + +use core::marker::PhantomData; +use std::collections::HashMap; + +use serai_primitives::Coin; + +use serai_db::DbTxn; + +use primitives::{ReceivedOutput, Payment}; +use scanner::{ + LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, SchedulerUpdate, + Scheduler as SchedulerTrait, +}; +use scheduler_primitives::*; + +mod db; +use db::Db; + +/// A planned transaction. +pub struct PlannedTransaction { + /// The signable transaction. + signable: T, + /// The outputs we'll receive from this. + effected_received_outputs: OutputFor, + /// The Eventuality to watch for. + eventuality: EventualityFor, +} + +/// A scheduler of transactions for networks premised on the UTXO model which support +/// transaction chaining. +pub struct Scheduler< + S: ScannerFeed, + T, + P: TransactionPlanner>, +>(PhantomData, PhantomData, PhantomData

); + +impl>> + Scheduler +{ + fn accumulate_outputs(txn: &mut impl DbTxn, key: KeyFor, outputs: &[OutputFor]) { + // Accumulate them in memory + let mut outputs_by_coin = HashMap::with_capacity(1); + for output in outputs.iter().filter(|output| output.key() == key) { + let coin = output.balance().coin; + if let std::collections::hash_map::Entry::Vacant(e) = outputs_by_coin.entry(coin) { + e.insert(Db::::outputs(txn, key, coin).unwrap()); + } + outputs_by_coin.get_mut(&coin).unwrap().push(output.clone()); + } + + // Flush them to the database + for (coin, outputs) in outputs_by_coin { + Db::::set_outputs(txn, key, coin, &outputs); + } + } +} + +impl< + S: ScannerFeed, + T: 'static + Send + Sync, + P: TransactionPlanner>, + > SchedulerTrait for Scheduler +{ + fn activate_key(&mut self, txn: &mut impl DbTxn, key: KeyFor) { + for coin in S::NETWORK.coins() { + Db::::set_outputs(txn, key, *coin, &vec![]); + } + } + + fn flush_key(&mut self, txn: &mut impl DbTxn, retiring_key: KeyFor, new_key: KeyFor) { + todo!("TODO") + } + + fn retire_key(&mut self, txn: &mut impl DbTxn, key: KeyFor) { + for coin in S::NETWORK.coins() { + assert!(Db::::outputs(txn, key, *coin).is_none()); + Db::::del_outputs(txn, key, *coin); + } + } + + fn update( + &mut self, + txn: &mut impl DbTxn, + active_keys: &[(KeyFor, LifetimeStage)], + update: SchedulerUpdate, + ) -> HashMap, Vec>> { + // Accumulate all the outputs + for key in active_keys { + Self::accumulate_outputs(txn, key.0, update.outputs()); + } + + let mut fee_rates: HashMap = todo!("TODO"); + + // Create the transactions for the forwards/burns + { + let mut planned_txs = vec![]; + for forward in update.forwards() { + let forward_to_key = active_keys.last().unwrap(); + assert_eq!(forward_to_key.1, LifetimeStage::Active); + + let Some(plan) = P::plan_transaction_with_fee_amortization( + // This uses 0 for the operating costs as we don't incur any here + &mut 0, + fee_rates[&forward.balance().coin], + vec![forward.clone()], + vec![Payment::new(P::forwarding_address(forward_to_key.0), forward.balance(), None)], + None, + ) else { + continue; + }; + planned_txs.push(plan); + } + for to_return in update.returns() { + let out_instruction = + Payment::new(to_return.address().clone(), to_return.output().balance(), None); + let Some(plan) = P::plan_transaction_with_fee_amortization( + // This uses 0 for the operating costs as we don't incur any here + &mut 0, + fee_rates[&out_instruction.balance().coin], + vec![to_return.output().clone()], + vec![out_instruction], + None, + ) else { + continue; + }; + planned_txs.push(plan); + } + + // TODO: Send the transactions off for signing + // TODO: Return the eventualities + todo!("TODO") + } + } + + fn fulfill( + &mut self, + txn: &mut impl DbTxn, + active_keys: &[(KeyFor, LifetimeStage)], + payments: Vec>>, + ) -> HashMap, Vec>> { + // TODO: Find the key to use for fulfillment + // TODO: Sort outputs and payments by amount + // TODO: For as long as we don't have sufficiently aggregated inputs to handle all payments, + // aggregate + // TODO: Create the tree for the payments + todo!("TODO") + } +} From fadc88d2ad6deed80dfa39acc398a25a75aa9d26 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 2 Sep 2024 16:09:52 -0400 Subject: [PATCH 063/368] Add scheduler-primitives The main benefit is whatever scheduler is in use, we now have a single API to receive TXs to sign (which is of value to the TX signer crate we'll inevitably build). 
--- .github/workflows/tests.yml | 3 +- Cargo.lock | 12 ++++- Cargo.toml | 3 +- deny.toml | 5 +- processor/scanner/src/lib.rs | 8 +++- processor/scheduler/primitives/Cargo.toml | 25 ++++++++++ processor/scheduler/primitives/LICENSE | 15 ++++++ processor/scheduler/primitives/README.md | 3 ++ processor/scheduler/primitives/src/lib.rs | 48 +++++++++++++++++++ .../utxo/transaction-chaining/Cargo.toml | 4 +- .../utxo/transaction-chaining/src/db.rs | 26 +++++++++- .../utxo/transaction-chaining/src/lib.rs | 42 +++++++++++----- 12 files changed, 173 insertions(+), 21 deletions(-) create mode 100644 processor/scheduler/primitives/Cargo.toml create mode 100644 processor/scheduler/primitives/LICENSE create mode 100644 processor/scheduler/primitives/README.md create mode 100644 processor/scheduler/primitives/src/lib.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 33f2e852..ca0bd4f5 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -42,9 +42,10 @@ jobs: -p serai-processor-key-gen \ -p serai-processor-frost-attempt-manager \ -p serai-processor-primitives \ + -p serai-processor-scanner \ + -p serai-processor-scheduler-primitives \ -p serai-processor-utxo-scheduler-primitives \ -p serai-processor-transaction-chaining-scheduler \ - -p serai-processor-scanner \ -p serai-processor \ -p tendermint-machine \ -p tributary-chain \ diff --git a/Cargo.lock b/Cargo.lock index 7512f35c..6e7ced07 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8679,6 +8679,16 @@ dependencies = [ "tokio", ] +[[package]] +name = "serai-processor-scheduler-primitives" +version = "0.1.0" +dependencies = [ + "borsh", + "group", + "parity-scale-codec", + "serai-db", +] + [[package]] name = "serai-processor-tests" version = "0.1.0" @@ -8715,11 +8725,11 @@ dependencies = [ "borsh", "group", "parity-scale-codec", - "serai-coins-primitives", "serai-db", "serai-primitives", "serai-processor-primitives", "serai-processor-scanner", + "serai-processor-scheduler-primitives", "serai-processor-utxo-scheduler-primitives", ] diff --git a/Cargo.toml b/Cargo.toml index 17435713..b61cde68 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -74,9 +74,10 @@ members = [ "processor/frost-attempt-manager", "processor/primitives", + "processor/scanner", + "processor/scheduler/primitives", "processor/scheduler/utxo/primitives", "processor/scheduler/utxo/transaction-chaining", - "processor/scanner", "processor", "coordinator/tributary/tendermint", diff --git a/deny.toml b/deny.toml index fb616244..2ca0ca50 100644 --- a/deny.toml +++ b/deny.toml @@ -49,9 +49,10 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-processor-key-gen" }, { allow = ["AGPL-3.0"], name = "serai-processor-frost-attempt-manager" }, - { allow = ["AGPL-3.0"], name = "serai-processor-utxo-primitives" }, - { allow = ["AGPL-3.0"], name = "serai-processor-transaction-chaining-scheduler" }, { allow = ["AGPL-3.0"], name = "serai-processor-scanner" }, + { allow = ["AGPL-3.0"], name = "serai-processor-scheduler-primitives" }, + { allow = ["AGPL-3.0"], name = "serai-processor-utxo-scheduler-primitives" }, + { allow = ["AGPL-3.0"], name = "serai-processor-transaction-chaining-scheduler" }, { allow = ["AGPL-3.0"], name = "serai-processor" }, { allow = ["AGPL-3.0"], name = "tributary-chain" }, diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 4d33d0d0..d894f819 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -241,8 +241,12 @@ pub trait Scheduler: 'static + Send { /// /// When a key is 
activated, the existing multisig should retain its outputs and utility for a /// certain time period. With `flush_key`, all outputs should be directed towards fulfilling some - /// obligation or the `new_key`. Every output MUST be connected to an Eventuality. If a key no - /// longer has active Eventualities, it MUST be able to be retired. + /// obligation or the `new_key`. Every output held by the retiring key MUST be connected to an + /// Eventuality. If a key no longer has active Eventualities, it MUST be able to be retired + /// without losing any coins. + /// + /// If the retiring key has any unfulfilled payments associated with it, those MUST be made + /// the responsibility of the new key. fn flush_key(&mut self, txn: &mut impl DbTxn, retiring_key: KeyFor, new_key: KeyFor); /// Retire a key as it'll no longer be used. diff --git a/processor/scheduler/primitives/Cargo.toml b/processor/scheduler/primitives/Cargo.toml new file mode 100644 index 00000000..31d73853 --- /dev/null +++ b/processor/scheduler/primitives/Cargo.toml @@ -0,0 +1,25 @@ +[package] +name = "serai-processor-scheduler-primitives" +version = "0.1.0" +description = "Primitives for schedulers for the Serai processor" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/scheduler/primitives" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +group = { version = "0.13", default-features = false } + +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +serai-db = { path = "../../../common/db" } diff --git a/processor/scheduler/primitives/LICENSE b/processor/scheduler/primitives/LICENSE new file mode 100644 index 00000000..e091b149 --- /dev/null +++ b/processor/scheduler/primitives/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/processor/scheduler/primitives/README.md b/processor/scheduler/primitives/README.md new file mode 100644 index 00000000..6e81249d --- /dev/null +++ b/processor/scheduler/primitives/README.md @@ -0,0 +1,3 @@ +# Scheduler Primitives + +Primitives for schedulers. diff --git a/processor/scheduler/primitives/src/lib.rs b/processor/scheduler/primitives/src/lib.rs new file mode 100644 index 00000000..97a00c03 --- /dev/null +++ b/processor/scheduler/primitives/src/lib.rs @@ -0,0 +1,48 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +use core::marker::PhantomData; +use std::io; + +use group::GroupEncoding; + +use serai_db::DbTxn; + +/// A signable transaction. +pub trait SignableTransaction: 'static + Sized + Send + Sync { + /// Read a `SignableTransaction`. 
+ fn read(reader: &mut impl io::Read) -> io::Result; + /// Write a `SignableTransaction`. + fn write(&self, writer: &mut impl io::Write) -> io::Result<()>; +} + +mod db { + use serai_db::{Get, DbTxn, create_db, db_channel}; + + db_channel! { + SchedulerPrimitives { + TransactionsToSign: (key: &[u8]) -> Vec, + } + } +} + +/// The transactions to sign, as scheduled by a Scheduler. +pub struct TransactionsToSign(PhantomData); +impl TransactionsToSign { + /// Send a transaction to sign. + pub fn send(txn: &mut impl DbTxn, key: &impl GroupEncoding, tx: &T) { + let mut buf = Vec::with_capacity(128); + tx.write(&mut buf).unwrap(); + db::TransactionsToSign::send(txn, key.to_bytes().as_ref(), &buf); + } + + /// Try to receive a transaction to sign. + pub fn try_recv(txn: &mut impl DbTxn, key: &impl GroupEncoding) -> Option { + let tx = db::TransactionsToSign::try_recv(txn, key.to_bytes().as_ref())?; + let mut tx = tx.as_slice(); + let res = T::read(&mut tx).unwrap(); + assert!(tx.is_empty()); + Some(res) + } +} diff --git a/processor/scheduler/utxo/transaction-chaining/Cargo.toml b/processor/scheduler/utxo/transaction-chaining/Cargo.toml index d54d0f85..a6b12128 100644 --- a/processor/scheduler/utxo/transaction-chaining/Cargo.toml +++ b/processor/scheduler/utxo/transaction-chaining/Cargo.toml @@ -26,10 +26,10 @@ scale = { package = "parity-scale-codec", version = "3", default-features = fals borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } serai-primitives = { path = "../../../../substrate/primitives", default-features = false, features = ["std"] } -serai-coins-primitives = { path = "../../../../substrate/coins/primitives", default-features = false, features = ["std"] } serai-db = { path = "../../../../common/db" } primitives = { package = "serai-processor-primitives", path = "../../../primitives" } -scheduler-primitives = { package = "serai-processor-utxo-scheduler-primitives", path = "../primitives" } +scheduler-primitives = { package = "serai-processor-scheduler-primitives", path = "../../primitives" } +utxo-scheduler-primitives = { package = "serai-processor-utxo-scheduler-primitives", path = "../primitives" } scanner = { package = "serai-processor-scanner", path = "../../../scanner" } diff --git a/processor/scheduler/utxo/transaction-chaining/src/db.rs b/processor/scheduler/utxo/transaction-chaining/src/db.rs index 20c574e9..f6de26d1 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/db.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/db.rs @@ -2,7 +2,7 @@ use core::marker::PhantomData; use group::GroupEncoding; -use serai_primitives::Coin; +use serai_primitives::{Coin, Amount}; use serai_db::{Get, DbTxn, create_db}; @@ -11,12 +11,23 @@ use scanner::{ScannerFeed, KeyFor, OutputFor}; create_db! 
{ TransactionChainingScheduler { + OperatingCosts: (coin: Coin) -> Amount, SerializedOutputs: (key: &[u8], coin: Coin) -> Vec, + // We should be immediately able to schedule the fulfillment of payments, yet this may not be + // possible if we're in the middle of a multisig rotation (as our output set will be split) + SerializedQueuedPayments: (key: &[u8]) > Vec, } } pub(crate) struct Db(PhantomData); impl Db { + pub(crate) fn operating_costs(getter: &impl Get, coin: Coin) -> Amount { + OperatingCosts::get(getter, coin).unwrap_or(Amount(0)) + } + pub(crate) fn set_operating_costs(txn: &mut impl DbTxn, coin: Coin, amount: Amount) { + OperatingCosts::set(txn, coin, &amount) + } + pub(crate) fn outputs( getter: &impl Get, key: KeyFor, @@ -46,4 +57,17 @@ impl Db { pub(crate) fn del_outputs(txn: &mut impl DbTxn, key: KeyFor, coin: Coin) { SerializedOutputs::del(txn, key.to_bytes().as_ref(), coin); } + + pub(crate) fn queued_payments( + getter: &impl Get, + key: KeyFor, + ) -> Option>> { + todo!("TODO") + } + pub(crate) fn set_queued_payments(txn: &mut impl DbTxn, key: KeyFor, queued: Vec>) { + todo!("TODO") + } + pub(crate) fn del_outputs(txn: &mut impl DbTxn, key: KeyFor) { + SerializedQueuedPayments::del(txn, key.to_bytes().as_ref()); + } } diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index 63635696..8f21e9d6 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -5,6 +5,8 @@ use core::marker::PhantomData; use std::collections::HashMap; +use group::GroupEncoding; + use serai_primitives::Coin; use serai_db::DbTxn; @@ -15,6 +17,7 @@ use scanner::{ Scheduler as SchedulerTrait, }; use scheduler_primitives::*; +use utxo_scheduler_primitives::*; mod db; use db::Db; @@ -25,7 +28,7 @@ pub struct PlannedTransaction { signable: T, /// The outputs we'll receive from this. effected_received_outputs: OutputFor, - /// The Evtnuality to watch for. + /// The Eventuality to watch for. 
eventuality: EventualityFor, } @@ -60,13 +63,13 @@ impl>, > SchedulerTrait for Scheduler { fn activate_key(&mut self, txn: &mut impl DbTxn, key: KeyFor) { for coin in S::NETWORK.coins() { - Db::::set_outputs(txn, key, *coin, &vec![]); + Db::::set_outputs(txn, key, *coin, &[]); } } @@ -98,22 +101,27 @@ impl< { let mut planned_txs = vec![]; for forward in update.forwards() { - let forward_to_key = active_keys.last().unwrap(); - assert_eq!(forward_to_key.1, LifetimeStage::Active); + let key = forward.key(); + + assert_eq!(active_keys.len(), 2); + assert_eq!(active_keys[0].1, LifetimeStage::Forwarding); + assert_eq!(active_keys[1].1, LifetimeStage::Active); + let forward_to_key = active_keys[1].0; let Some(plan) = P::plan_transaction_with_fee_amortization( // This uses 0 for the operating costs as we don't incur any here &mut 0, fee_rates[&forward.balance().coin], vec![forward.clone()], - vec![Payment::new(P::forwarding_address(forward_to_key.0), forward.balance(), None)], + vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance(), None)], None, ) else { continue; }; - planned_txs.push(plan); + planned_txs.push((key, plan)); } for to_return in update.returns() { + let key = to_return.output().key(); let out_instruction = Payment::new(to_return.address().clone(), to_return.output().balance(), None); let Some(plan) = P::plan_transaction_with_fee_amortization( @@ -126,12 +134,24 @@ impl< ) else { continue; }; - planned_txs.push(plan); + planned_txs.push((key, plan)); } - // TODO: Send the transactions off for signing - // TODO: Return the eventualities - todo!("TODO") + let mut eventualities = HashMap::new(); + for (key, planned_tx) in planned_txs { + // Send the transactions off for signing + TransactionsToSign::::send(txn, &key, &planned_tx.signable); + + // Insert the eventualities into the result + eventualities + .entry(key.to_bytes().as_ref().to_vec()) + .or_insert(Vec::with_capacity(1)) + .push(planned_tx.eventuality); + } + + // TODO: Fulfill any payments we prior couldn't + + eventualities } } From f11a6b4ff17b9155fcf1d0b3228d2cbdcbd0f846 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 2 Sep 2024 22:31:15 -0400 Subject: [PATCH 064/368] Better document the forwarded output flow --- processor/primitives/src/eventuality.rs | 7 +++++-- processor/scanner/src/eventuality/mod.rs | 15 ++++++++++----- processor/scanner/src/lib.rs | 10 ++++++++++ 3 files changed, 25 insertions(+), 7 deletions(-) diff --git a/processor/primitives/src/eventuality.rs b/processor/primitives/src/eventuality.rs index eb6cda9c..6a52194d 100644 --- a/processor/primitives/src/eventuality.rs +++ b/processor/primitives/src/eventuality.rs @@ -19,8 +19,11 @@ pub trait Eventuality: Sized + Send + Sync { /// identified, the full check is performed. fn lookup(&self) -> Vec; - /// The output this plan forwarded. - fn forwarded_output(&self) -> Option; + /// The output the resolution of this Eventuality was supposed to spend. + /// + /// If the resolution of this Eventuality has multiple inputs, there is no singular spent output + /// so this MUST return None. + fn singular_spent_output(&self) -> Option; /// Read an Eventuality. 
fn read(reader: &mut impl io::Read) -> io::Result; diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 83ec50ab..98d278d9 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -352,19 +352,24 @@ impl> ContinuallyRan for EventualityTas non_external_outputs.iter().filter(|output| output.kind() == OutputType::Forwarded) { let Some(eventuality) = completed_eventualities.get(&output.transaction_id()) else { - // Output sent to the forwarding address yet not actually forwarded + // Output sent to the forwarding address yet not one we made continue; }; - let Some(forwarded) = eventuality.forwarded_output() else { - // This was a TX made by us, yet someone burned to the forwarding address + let Some(forwarded) = eventuality.singular_spent_output() else { + // This was a TX made by us, yet someone burned to the forwarding address as it doesn't + // follow the structure of forwarding transactions continue; }; - let (return_address, in_instruction) = + let Some((return_address, in_instruction)) = ScannerGlobalDb::::return_address_and_in_instruction_for_forwarded_output( &txn, &forwarded, ) - .expect("forwarded an output yet didn't save its InInstruction to the DB"); + else { + // This was a TX made by us, coincidentally with the necessary structure, yet wasn't + // forwarding an output + continue; + }; queue_output_until_block::( &mut txn, b + S::WINDOW_LENGTH, diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index d894f819..539bd4a7 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -216,14 +216,24 @@ pub struct SchedulerUpdate { impl SchedulerUpdate { /// The outputs to accumulate. + /// + /// These MUST be accumulated. pub fn outputs(&self) -> &[OutputFor] { &self.outputs } + /// The outputs to forward to the latest multisig. + /// + /// These MUST be forwarded in a 1-input 1-output transaction or dropped (if the fees are too + /// high to make the forwarding transaction). pub fn forwards(&self) -> &[OutputFor] { &self.forwards } + /// The outputs to return. + /// + /// These SHOULD be returned as specified (potentially in batch). They MAY be dropped if the fees + /// are too high to make the return transaction. pub fn returns(&self) -> &[Return] { &self.returns } From 3c787e005f3dc5c1502e326173d8409ad79f3b71 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 3 Sep 2024 01:04:43 -0400 Subject: [PATCH 065/368] Fix bug in the scanner regarding forwarded output amounts We'd report the amount originally received, minus 2x the cost to aggregate, regardless the amount successfully forwarded. We should've reduced to the amount successfully forwarded, if it was smaller, in case the cost to forward exceeded the aggregation cost. 
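A worked example with illustrative amounts: an output of 1000 is received and the cost to aggregate is 100, so the InInstruction is saved with an amount of 1000 - (2 * 100) = 800. If the forwarding fee then leaves only 750 on-chain, reporting 800 would overstate the funds actually held. The fix below clamps the reported amount to min(800, 750) = 750, while leaving it at 800 whenever the forwarded amount is greater.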
--- processor/scanner/src/eventuality/mod.rs | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 98d278d9..5a7b4cca 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -361,7 +361,7 @@ impl> ContinuallyRan for EventualityTas continue; }; - let Some((return_address, in_instruction)) = + let Some((return_address, mut in_instruction)) = ScannerGlobalDb::::return_address_and_in_instruction_for_forwarded_output( &txn, &forwarded, ) @@ -370,6 +370,14 @@ impl> ContinuallyRan for EventualityTas // forwarding an output continue; }; + + // We use the original amount, minus twice the cost to aggregate + // If the fees we paid to forward this now (less than the cost to aggregate now, yet not + // necessarily the cost to aggregate historically) caused this amount to be less, reduce + // it accordingly + in_instruction.balance.amount.0 = + in_instruction.balance.amount.0.min(output.balance().amount.0); + queue_output_until_block::( &mut txn, b + S::WINDOW_LENGTH, From 75b47070027985c423b14610166720a5bd859a20 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 3 Sep 2024 01:41:51 -0400 Subject: [PATCH 066/368] Add input aggregation in the transaction-chaining scheduler Also handles some other misc in it. --- Cargo.lock | 1 + .../scheduler/utxo/primitives/Cargo.toml | 1 + .../scheduler/utxo/primitives/src/lib.rs | 23 +- .../utxo/transaction-chaining/Cargo.toml | 2 +- .../utxo/transaction-chaining/src/db.rs | 20 +- .../utxo/transaction-chaining/src/lib.rs | 291 ++++++++++++++---- 6 files changed, 268 insertions(+), 70 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 6e7ced07..dd1cc19e 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8741,6 +8741,7 @@ dependencies = [ "serai-primitives", "serai-processor-primitives", "serai-processor-scanner", + "serai-processor-scheduler-primitives", ] [[package]] diff --git a/processor/scheduler/utxo/primitives/Cargo.toml b/processor/scheduler/utxo/primitives/Cargo.toml index 01d3db7d..4f2499f9 100644 --- a/processor/scheduler/utxo/primitives/Cargo.toml +++ b/processor/scheduler/utxo/primitives/Cargo.toml @@ -23,3 +23,4 @@ serai-primitives = { path = "../../../../substrate/primitives", default-features primitives = { package = "serai-processor-primitives", path = "../../../primitives" } scanner = { package = "serai-processor-scanner", path = "../../../scanner" } +scheduler-primitives = { package = "serai-processor-scheduler-primitives", path = "../../primitives" } diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs index 2c6da97b..f3e220b0 100644 --- a/processor/scheduler/utxo/primitives/src/lib.rs +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -7,11 +7,22 @@ use core::fmt::Debug; use serai_primitives::{Coin, Amount}; use primitives::{ReceivedOutput, Payment}; -use scanner::{ScannerFeed, KeyFor, AddressFor, OutputFor}; +use scanner::{ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor}; +use scheduler_primitives::*; + +/// A planned transaction. +pub struct PlannedTransaction { + /// The signable transaction. + pub signable: ST, + /// The Eventuality to watch for. + pub eventuality: EventualityFor, + /// The auxilliary data for this transaction. + pub auxilliary: A, +} /// An object able to plan a transaction. 
#[async_trait::async_trait] -pub trait TransactionPlanner: 'static + Send + Sync { +pub trait TransactionPlanner: 'static + Send + Sync { /// An error encountered when determining the fee rate. /// /// This MUST be an ephemeral error. Retrying fetching data from the blockchain MUST eventually @@ -21,8 +32,8 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// The type representing a fee rate to use for transactions. type FeeRate: Clone + Copy; - /// The type representing a planned transaction. - type PlannedTransaction; + /// The type representing a signable transaction. + type SignableTransaction: SignableTransaction; /// Obtain the fee rate to pay. /// @@ -62,7 +73,7 @@ pub trait TransactionPlanner: 'static + Send + Sync { inputs: Vec>, payments: Vec>>, change: Option>, - ) -> Self::PlannedTransaction; + ) -> PlannedTransaction; /// Obtain a PlannedTransaction via amortizing the fee over the payments. /// @@ -77,7 +88,7 @@ pub trait TransactionPlanner: 'static + Send + Sync { inputs: Vec>, mut payments: Vec>>, mut change: Option>, - ) -> Option { + ) -> Option> { // Sanity checks { assert!(!inputs.is_empty()); diff --git a/processor/scheduler/utxo/transaction-chaining/Cargo.toml b/processor/scheduler/utxo/transaction-chaining/Cargo.toml index a6b12128..0b1eb155 100644 --- a/processor/scheduler/utxo/transaction-chaining/Cargo.toml +++ b/processor/scheduler/utxo/transaction-chaining/Cargo.toml @@ -30,6 +30,6 @@ serai-primitives = { path = "../../../../substrate/primitives", default-features serai-db = { path = "../../../../common/db" } primitives = { package = "serai-processor-primitives", path = "../../../primitives" } +scanner = { package = "serai-processor-scanner", path = "../../../scanner" } scheduler-primitives = { package = "serai-processor-scheduler-primitives", path = "../../primitives" } utxo-scheduler-primitives = { package = "serai-processor-utxo-scheduler-primitives", path = "../primitives" } -scanner = { package = "serai-processor-scanner", path = "../../../scanner" } diff --git a/processor/scheduler/utxo/transaction-chaining/src/db.rs b/processor/scheduler/utxo/transaction-chaining/src/db.rs index f6de26d1..7d800718 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/db.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/db.rs @@ -6,8 +6,8 @@ use serai_primitives::{Coin, Amount}; use serai_db::{Get, DbTxn, create_db}; -use primitives::ReceivedOutput; -use scanner::{ScannerFeed, KeyFor, OutputFor}; +use primitives::{Payment, ReceivedOutput}; +use scanner::{ScannerFeed, KeyFor, AddressFor, OutputFor}; create_db! { TransactionChainingScheduler { @@ -15,7 +15,7 @@ create_db! 
{ SerializedOutputs: (key: &[u8], coin: Coin) -> Vec, // We should be immediately able to schedule the fulfillment of payments, yet this may not be // possible if we're in the middle of a multisig rotation (as our output set will be split) - SerializedQueuedPayments: (key: &[u8]) > Vec, + SerializedQueuedPayments: (key: &[u8], coin: Coin) -> Vec, } } @@ -61,13 +61,19 @@ impl Db { pub(crate) fn queued_payments( getter: &impl Get, key: KeyFor, - ) -> Option>> { + coin: Coin, + ) -> Option>>> { todo!("TODO") } - pub(crate) fn set_queued_payments(txn: &mut impl DbTxn, key: KeyFor, queued: Vec>) { + pub(crate) fn set_queued_payments( + txn: &mut impl DbTxn, + key: KeyFor, + coin: Coin, + queued: &Vec>>, + ) { todo!("TODO") } - pub(crate) fn del_outputs(txn: &mut impl DbTxn, key: KeyFor) { - SerializedQueuedPayments::del(txn, key.to_bytes().as_ref()); + pub(crate) fn del_queued_payments(txn: &mut impl DbTxn, key: KeyFor, coin: Coin) { + SerializedQueuedPayments::del(txn, key.to_bytes().as_ref(), coin); } } diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index 8f21e9d6..9e552c13 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -7,11 +7,11 @@ use std::collections::HashMap; use group::GroupEncoding; -use serai_primitives::Coin; +use serai_primitives::{Coin, Amount}; use serai_db::DbTxn; -use primitives::{ReceivedOutput, Payment}; +use primitives::{OutputType, ReceivedOutput, Payment}; use scanner::{ LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, SchedulerUpdate, Scheduler as SchedulerTrait, @@ -22,65 +22,205 @@ use utxo_scheduler_primitives::*; mod db; use db::Db; -/// A planned transaction. -pub struct PlannedTransaction { - /// The signable transaction. - signable: T, - /// The outputs we'll receive from this. - effected_received_outputs: OutputFor, - /// The Eventuality to watch for. - eventuality: EventualityFor, -} +/// The outputs which will be effected by a PlannedTransaction and received by Serai. +pub struct EffectedReceivedOutputs(Vec>); /// A scheduler of transactions for networks premised on the UTXO model which support /// transaction chaining. -pub struct Scheduler< - S: ScannerFeed, - T, - P: TransactionPlanner>, ->(PhantomData, PhantomData, PhantomData

);
+pub struct Scheduler<S: ScannerFeed, P: TransactionPlanner<S, EffectedReceivedOutputs<S>>>(
+  PhantomData<S>,
+  PhantomData<P>
, +); -impl>> - Scheduler -{ - fn accumulate_outputs(txn: &mut impl DbTxn, key: KeyFor, outputs: &[OutputFor]) { - // Accumulate them in memory - let mut outputs_by_coin = HashMap::with_capacity(1); - for output in outputs.iter().filter(|output| output.key() == key) { - let coin = output.balance().coin; - if let std::collections::hash_map::Entry::Vacant(e) = outputs_by_coin.entry(coin) { - e.insert(Db::::outputs(txn, key, coin).unwrap()); +impl>> Scheduler { + fn handle_queued_payments( + &mut self, + txn: &mut impl DbTxn, + active_keys: &[(KeyFor, LifetimeStage)], + key: KeyFor, + ) -> Vec> { + let mut eventualities = vec![]; + + for coin in S::NETWORK.coins() { + // Fetch our operating costs and all our outputs + let mut operating_costs = Db::::operating_costs(txn, *coin).0; + let mut outputs = Db::::outputs(txn, key, *coin).unwrap(); + + // Fetch the queued payments + let mut payments = Db::::queued_payments(txn, key, *coin).unwrap(); + if payments.is_empty() { + continue; } - outputs_by_coin.get_mut(&coin).unwrap().push(output.clone()); + + // If this is our only key, our outputs and operating costs should be greater than the + // payments' value + if active_keys.len() == 1 { + // The available amount of fulfill is the amount we have plus the amount we'll reduce by + // An alternative formulation would be `outputs >= (payments - operating costs)`, but + // that'd risk underflow + let available = + operating_costs + outputs.iter().map(|output| output.balance().amount.0).sum::(); + assert!(available >= payments.iter().map(|payment| payment.balance().amount.0).sum::()); + } + + let amount_of_payments_that_can_be_handled = + |operating_costs: u64, outputs: &[_], payments: &[_]| { + let value_available = + operating_costs + outputs.iter().map(|output| output.balance().amount.0).sum::(); + + let mut can_handle = 0; + let mut value_used = 0; + for payment in payments { + value_used += payment.balance().amount.0; + if value_available < value_used { + break; + } + can_handle += 1; + } + + can_handle + }; + + // Find the set of payments we should fulfill at this time + { + // Drop to just the payments we currently have the outputs for + { + let can_handle = + amount_of_payments_that_can_be_handled(operating_costs, &outputs, &payments); + let remaining_payments = payments.drain(can_handle ..).collect::>(); + // Restore the rest to the database + Db::::set_queued_payments(txn, key, *coin, &remaining_payments); + } + let payments_value = payments.iter().map(|payment| payment.balance().amount.0).sum::(); + + // If these payments are worth less than the operating costs, immediately drop them + if payments_value <= operating_costs { + operating_costs -= payments_value; + Db::::set_operating_costs(txn, *coin, Amount(operating_costs)); + return vec![]; + } + + // We explicitly sort AFTER deciding which payments to handle so we always handle the + // oldest queued payments first (preventing any from eternally being shuffled to the back + // of the line) + payments.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0)); + } + assert!(!payments.is_empty()); + + // Find the smallest set of outputs usable to fulfill these outputs + // Size is determined by the largest output, not quantity nor aggregate value + { + // We start by sorting low to high + outputs.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0)); + + let value_needed = + payments.iter().map(|payment| payment.balance().amount.0).sum::() - operating_costs; + + let mut needed = 0; + let mut value_present = 0; + for output in 
&outputs { + needed += 1; + value_present += output.balance().amount.0; + if value_present >= value_needed { + break; + } + } + + // Drain, and save back to the DB, the unnecessary outputs + let remaining_outputs = outputs.drain(needed ..).collect::>(); + Db::::set_outputs(txn, key, *coin, &remaining_outputs); + } + assert!(!outputs.is_empty()); + + // We now have the current operating costs, the outputs we're using, and the payments + // The database has the unused outputs/unfilfillable payments + // Actually plan/send off the transactions + + // While our set of outputs exceed the input limit, aggregate them + while outputs.len() > MAX_INPUTS { + let outputs_chunk = outputs.drain(.. MAX_INPUTS).collect::>(); + + // While we're aggregating these outputs, handle any payments we can + let payments_chunk = loop { + let can_handle = + amount_of_payments_that_can_be_handled(operating_costs, &outputs, &payments); + let payments_chunk = payments.drain(.. can_handle.min(MAX_OUTPUTS)).collect::>(); + + let payments_value = + payments_chunk.iter().map(|payment| payment.balance().amount.0).sum::(); + if payments_value <= operating_costs { + operating_costs -= payments_value; + continue; + } + break payments_chunk; + }; + + let Some(planned) = P::plan_transaction_with_fee_amortization( + &mut operating_costs, + fee_rates[coin], + outputs_chunk, + payments_chunk, + // We always use our key for the change here since we may need this change output to + // finish fulfilling these payments + Some(key), + ) else { + // We amortized all payments, and even when just trying to make the change output, these + // inputs couldn't afford their own aggregation and were written off + continue; + }; + + // Send the transactions off for signing + TransactionsToSign::::send(txn, &key, &planned.signable); + + // Push the Eventualities onto the result + eventualities.push(planned.eventuality); + + let mut effected_received_outputs = planned.auxilliary.0; + // Only handle Change so if someone burns to an External address, we don't use it here + // when the scanner will tell us to return it (without accumulating it) + effected_received_outputs.retain(|output| output.kind() == OutputType::Change); + outputs.append(&mut effected_received_outputs); + } + + // Now that we have an aggregated set of inputs, create the tree for payments + todo!("TODO"); } - // Flush them to the database - for (coin, outputs) in outputs_by_coin { - Db::::set_outputs(txn, key, coin, &outputs); - } + eventualities } } -impl< - S: ScannerFeed, - T: 'static + Send + Sync + SignableTransaction, - P: TransactionPlanner>, - > SchedulerTrait for Scheduler +impl>> SchedulerTrait + for Scheduler { fn activate_key(&mut self, txn: &mut impl DbTxn, key: KeyFor) { for coin in S::NETWORK.coins() { + assert!(Db::::outputs(txn, key, *coin).is_none()); Db::::set_outputs(txn, key, *coin, &[]); + assert!(Db::::queued_payments(txn, key, *coin).is_none()); + Db::::set_queued_payments(txn, key, *coin, &vec![]); } } fn flush_key(&mut self, txn: &mut impl DbTxn, retiring_key: KeyFor, new_key: KeyFor) { - todo!("TODO") + for coin in S::NETWORK.coins() { + let still_queued = Db::::queued_payments(txn, retiring_key, *coin).unwrap(); + let mut new_queued = Db::::queued_payments(txn, new_key, *coin).unwrap(); + + let mut queued = still_queued; + queued.append(&mut new_queued); + + Db::::set_queued_payments(txn, retiring_key, *coin, &vec![]); + Db::::set_queued_payments(txn, new_key, *coin, &queued); + } } fn retire_key(&mut self, txn: &mut impl DbTxn, key: KeyFor) { for coin 
in S::NETWORK.coins() { - assert!(Db::::outputs(txn, key, *coin).is_none()); + assert!(Db::::outputs(txn, key, *coin).unwrap().is_empty()); Db::::del_outputs(txn, key, *coin); + assert!(Db::::queued_payments(txn, key, *coin).unwrap().is_empty()); + Db::::del_queued_payments(txn, key, *coin); } } @@ -91,12 +231,41 @@ impl< update: SchedulerUpdate, ) -> HashMap, Vec>> { // Accumulate all the outputs - for key in active_keys { - Self::accumulate_outputs(txn, key.0, update.outputs()); + for (key, _) in active_keys { + // Accumulate them in memory + let mut outputs_by_coin = HashMap::with_capacity(1); + for output in update.outputs().iter().filter(|output| output.key() == *key) { + match output.kind() { + OutputType::External | OutputType::Forwarded => {}, + // TODO: Only accumulate these if we haven't already, but do accumulate if not + OutputType::Branch | OutputType::Change => todo!("TODO"), + } + let coin = output.balance().coin; + if let std::collections::hash_map::Entry::Vacant(e) = outputs_by_coin.entry(coin) { + e.insert(Db::::outputs(txn, *key, coin).unwrap()); + } + outputs_by_coin.get_mut(&coin).unwrap().push(output.clone()); + } + + // Flush them to the database + for (coin, outputs) in outputs_by_coin { + Db::::set_outputs(txn, *key, coin, &outputs); + } } let mut fee_rates: HashMap = todo!("TODO"); + // Fulfill the payments we prior couldn't + let mut eventualities = HashMap::new(); + for (key, _stage) in active_keys { + eventualities.insert( + key.to_bytes().as_ref().to_vec(), + self.handle_queued_payments(txn, active_keys, *key), + ); + } + + // TODO: If this key has been flushed, forward all outputs + // Create the transactions for the forwards/burns { let mut planned_txs = vec![]; @@ -137,20 +306,14 @@ impl< planned_txs.push((key, plan)); } - let mut eventualities = HashMap::new(); for (key, planned_tx) in planned_txs { // Send the transactions off for signing - TransactionsToSign::::send(txn, &key, &planned_tx.signable); + TransactionsToSign::::send(txn, &key, &planned_tx.signable); - // Insert the eventualities into the result - eventualities - .entry(key.to_bytes().as_ref().to_vec()) - .or_insert(Vec::with_capacity(1)) - .push(planned_tx.eventuality); + // Insert the Eventualities into the result + eventualities[key.to_bytes().as_ref()].push(planned_tx.eventuality); } - // TODO: Fulfill any payments we prior couldn't - eventualities } } @@ -159,13 +322,29 @@ impl< &mut self, txn: &mut impl DbTxn, active_keys: &[(KeyFor, LifetimeStage)], - payments: Vec>>, + mut payments: Vec>>, ) -> HashMap, Vec>> { - // TODO: Find the key to use for fulfillment - // TODO: Sort outputs and payments by amount - // TODO: For as long as we don't have sufficiently aggregated inputs to handle all payments, - // aggregate - // TODO: Create the tree for the payments - todo!("TODO") + // Find the key to filfill these payments with + let fulfillment_key = match active_keys[0].1 { + LifetimeStage::ActiveYetNotReporting => { + panic!("expected to fulfill payments despite not reporting for the oldest key") + } + LifetimeStage::Active | LifetimeStage::UsingNewForChange => active_keys[0].0, + LifetimeStage::Forwarding | LifetimeStage::Finishing => active_keys[1].0, + }; + + // Queue the payments for this key + for coin in S::NETWORK.coins() { + let mut queued_payments = Db::::queued_payments(txn, fulfillment_key, *coin).unwrap(); + queued_payments + .extend(payments.iter().filter(|payment| payment.balance().coin == *coin).cloned()); + Db::::set_queued_payments(txn, fulfillment_key, *coin, 
&queued_payments); + } + + // Handle the queued payments + HashMap::from([( + fulfillment_key.to_bytes().as_ref().to_vec(), + self.handle_queued_payments(txn, active_keys, fulfillment_key), + )]) } } From ebef38d93bef96253659072282240ac4d3e4bebd Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 3 Sep 2024 16:42:47 -0400 Subject: [PATCH 067/368] Ensure the transaction-chaining scheduler doesn't accumulate the same output multiple times --- .../utxo/transaction-chaining/src/db.rs | 16 ++++++++++++++++ .../utxo/transaction-chaining/src/lib.rs | 17 +++++++++++++---- 2 files changed, 29 insertions(+), 4 deletions(-) diff --git a/processor/scheduler/utxo/transaction-chaining/src/db.rs b/processor/scheduler/utxo/transaction-chaining/src/db.rs index 7d800718..d629480f 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/db.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/db.rs @@ -13,6 +13,7 @@ create_db! { TransactionChainingScheduler { OperatingCosts: (coin: Coin) -> Amount, SerializedOutputs: (key: &[u8], coin: Coin) -> Vec, + AlreadyAccumulatedOutput: (id: &[u8]) -> (), // We should be immediately able to schedule the fulfillment of payments, yet this may not be // possible if we're in the middle of a multisig rotation (as our output set will be split) SerializedQueuedPayments: (key: &[u8], coin: Coin) -> Vec, @@ -58,6 +59,21 @@ impl Db { SerializedOutputs::del(txn, key.to_bytes().as_ref(), coin); } + pub(crate) fn set_already_accumulated_output( + txn: &mut impl DbTxn, + output: as ReceivedOutput, AddressFor>>::Id, + ) { + AlreadyAccumulatedOutput::set(txn, output.as_ref(), &()); + } + pub(crate) fn take_if_already_accumulated_output( + txn: &mut impl DbTxn, + output: as ReceivedOutput, AddressFor>>::Id, + ) -> bool { + let res = AlreadyAccumulatedOutput::get(txn, output.as_ref()).is_some(); + AlreadyAccumulatedOutput::del(txn, output.as_ref()); + res + } + pub(crate) fn queued_payments( getter: &impl Get, key: KeyFor, diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index 9e552c13..f74e2c2c 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -60,7 +60,9 @@ impl>> Sched // that'd risk underflow let available = operating_costs + outputs.iter().map(|output| output.balance().amount.0).sum::(); - assert!(available >= payments.iter().map(|payment| payment.balance().amount.0).sum::()); + assert!( + available >= payments.iter().map(|payment| payment.balance().amount.0).sum::() + ); } let amount_of_payments_that_can_be_handled = @@ -179,6 +181,9 @@ impl>> Sched // Only handle Change so if someone burns to an External address, we don't use it here // when the scanner will tell us to return it (without accumulating it) effected_received_outputs.retain(|output| output.kind() == OutputType::Change); + for output in &effected_received_outputs { + Db::::set_already_accumulated_output(txn, output.id()); + } outputs.append(&mut effected_received_outputs); } @@ -236,9 +241,13 @@ impl>> Sched let mut outputs_by_coin = HashMap::with_capacity(1); for output in update.outputs().iter().filter(|output| output.key() == *key) { match output.kind() { - OutputType::External | OutputType::Forwarded => {}, - // TODO: Only accumulate these if we haven't already, but do accumulate if not - OutputType::Branch | OutputType::Change => todo!("TODO"), + OutputType::External | OutputType::Forwarded => {} + // Only accumulate these if we haven't 
already + OutputType::Branch | OutputType::Change => { + if Db::::take_if_already_accumulated_output(txn, output.id()) { + continue; + } + } } let coin = output.balance().coin; if let std::collections::hash_map::Entry::Vacant(e) = outputs_by_coin.entry(coin) { From 0601d477898ca01a4b87027bf9cf1d2bdb434ff2 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 3 Sep 2024 18:51:27 -0400 Subject: [PATCH 068/368] Work on the tree logic in the transaction-chaining scheduler --- .../scheduler/utxo/primitives/src/lib.rs | 26 +- .../utxo/transaction-chaining/src/lib.rs | 276 +++++++++++------- 2 files changed, 193 insertions(+), 109 deletions(-) diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs index f3e220b0..af8b985f 100644 --- a/processor/scheduler/utxo/primitives/src/lib.rs +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -79,7 +79,8 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// /// `operating_costs` is accrued to if Serai faces the burden of a fee or drops inputs not worth /// accumulating. `operating_costs` will be amortized along with this transaction's fee as - /// possible. Please see `spec/processor/UTXO Management.md` for more information. + /// possible, if there is a change output. Please see `spec/processor/UTXO Management.md` for + /// more information. /// /// Returns `None` if the fee exceeded the inputs, or `Some` otherwise. fn plan_transaction_with_fee_amortization( @@ -89,6 +90,12 @@ pub trait TransactionPlanner: 'static + Send + Sync { mut payments: Vec>>, mut change: Option>, ) -> Option> { + // If there's no change output, we can't recoup any operating costs we would amortize + // We also don't have any losses if the inputs are written off/the change output is reduced + let mut operating_costs_if_no_change = 0; + let operating_costs_in_effect = + if change.is_none() { &mut operating_costs_if_no_change } else { operating_costs }; + // Sanity checks { assert!(!inputs.is_empty()); @@ -101,7 +108,8 @@ pub trait TransactionPlanner: 'static + Send + Sync { assert_eq!(coin, payment.balance().coin); } assert!( - (inputs.iter().map(|input| input.balance().amount.0).sum::() + *operating_costs) >= + (inputs.iter().map(|input| input.balance().amount.0).sum::() + + *operating_costs_in_effect) >= payments.iter().map(|payment| payment.balance().amount.0).sum::(), "attempted to fulfill payments without a sufficient input set" ); @@ -119,7 +127,7 @@ pub trait TransactionPlanner: 'static + Send + Sync { while !payments.is_empty() { // We need to pay the fee, and any accrued operating costs, minus what we've already // amortized - let adjusted_fee = (*operating_costs + fee).saturating_sub(amortized); + let adjusted_fee = (*operating_costs_in_effect + fee).saturating_sub(amortized); /* Ideally, we wouldn't use a ceil div yet would be accurate about it. 
Any remainder could @@ -154,16 +162,16 @@ pub trait TransactionPlanner: 'static + Send + Sync { // dust if inputs < (fee + S::dust(coin).0) { // Write off these inputs - *operating_costs += inputs; + *operating_costs_in_effect += inputs; // Yet also claw back the payments we dropped, as we only lost the change // The dropped payments will be worth less than the inputs + operating_costs we started // with, so this shouldn't use `saturating_sub` - *operating_costs -= amortized; + *operating_costs_in_effect -= amortized; None?; } } else { // Since we have payments which can pay the fee we ended up with, amortize it - let adjusted_fee = (*operating_costs + fee).saturating_sub(amortized); + let adjusted_fee = (*operating_costs_in_effect + fee).saturating_sub(amortized); let per_payment_base_fee = adjusted_fee / u64::try_from(payments.len()).unwrap(); let payments_paying_one_atomic_unit_more = usize::try_from(adjusted_fee % u64::try_from(payments.len()).unwrap()).unwrap(); @@ -174,7 +182,7 @@ pub trait TransactionPlanner: 'static + Send + Sync { payment.balance().amount.0 -= per_payment_fee; amortized += per_payment_fee; } - assert!(amortized >= (*operating_costs + fee)); + assert!(amortized >= (*operating_costs_in_effect + fee)); // If the change is less than the dust, drop it let would_be_change = inputs.iter().map(|input| input.balance().amount.0).sum::() - @@ -182,12 +190,12 @@ pub trait TransactionPlanner: 'static + Send + Sync { fee; if would_be_change < S::dust(coin).0 { change = None; - *operating_costs += would_be_change; + *operating_costs_in_effect += would_be_change; } } // Update the amount of operating costs - *operating_costs = (*operating_costs + fee).saturating_sub(amortized); + *operating_costs_in_effect = (*operating_costs_in_effect + fee).saturating_sub(amortized); } // Because we amortized, or accrued as operating costs, the fee, make the transaction diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index f74e2c2c..8e567e14 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -7,7 +7,7 @@ use std::collections::HashMap; use group::GroupEncoding; -use serai_primitives::{Coin, Amount}; +use serai_primitives::{Coin, Amount, Balance}; use serai_db::DbTxn; @@ -41,12 +41,56 @@ impl>> Sched ) -> Vec> { let mut eventualities = vec![]; + let mut accumulate_outputs = |txn, outputs: Vec>| { + let mut outputs_by_key = HashMap::new(); + for output in outputs { + Db::::set_already_accumulated_output(txn, output.id()); + let coin = output.balance().coin; + outputs_by_key + .entry((output.key().to_bytes().as_ref().to_vec(), coin)) + .or_insert_with(|| (output.key(), Db::::outputs(txn, output.key(), coin).unwrap())) + .1 + .push(output); + } + for ((_key_vec, coin), (key, outputs)) in outputs_by_key { + Db::::set_outputs(txn, key, coin, &outputs); + } + }; + for coin in S::NETWORK.coins() { // Fetch our operating costs and all our outputs let mut operating_costs = Db::::operating_costs(txn, *coin).0; let mut outputs = Db::::outputs(txn, key, *coin).unwrap(); - // Fetch the queued payments + // If we have more than the maximum amount of inputs, aggregate until we don't + { + while outputs.len() > MAX_INPUTS { + let Some(planned) = P::plan_transaction_with_fee_amortization( + &mut operating_costs, + fee_rates[coin], + outputs.drain(.. 
MAX_INPUTS).collect::>(), + vec![], + Some(key_for_change), + ) else { + // We amortized all payments, and even when just trying to make the change output, these + // inputs couldn't afford their own aggregation and were written off + Db::::set_operating_costs(txn, *coin, Amount(operating_costs)); + continue; + }; + + // Send the transactions off for signing + TransactionsToSign::::send(txn, &key, &planned.signable); + // Push the Eventualities onto the result + eventualities.push(planned.eventuality); + // Accumulate the outputs + Db::set_outputs(txn, key, *coin, &outputs); + accumulate_outputs(txn, planned.auxilliary.0); + outputs = Db::outputs(txn, key, *coin).unwrap(); + } + Db::::set_operating_costs(txn, *coin, Amount(operating_costs)); + } + + // Now, handle the payments let mut payments = Db::::queued_payments(txn, key, *coin).unwrap(); if payments.is_empty() { continue; @@ -55,21 +99,24 @@ impl>> Sched // If this is our only key, our outputs and operating costs should be greater than the // payments' value if active_keys.len() == 1 { - // The available amount of fulfill is the amount we have plus the amount we'll reduce by + // The available amount to fulfill is the amount we have plus the amount we'll reduce by // An alternative formulation would be `outputs >= (payments - operating costs)`, but // that'd risk underflow - let available = + let value_available = operating_costs + outputs.iter().map(|output| output.balance().amount.0).sum::(); + assert!( - available >= payments.iter().map(|payment| payment.balance().amount.0).sum::() + value_available >= payments.iter().map(|payment| payment.balance().amount.0).sum::() ); } - let amount_of_payments_that_can_be_handled = - |operating_costs: u64, outputs: &[_], payments: &[_]| { - let value_available = - operating_costs + outputs.iter().map(|output| output.balance().amount.0).sum::(); + // Find the set of payments we should fulfill at this time + loop { + let value_available = + operating_costs + outputs.iter().map(|output| output.balance().amount.0).sum::(); + // Drop to just the payments we currently have the outputs for + { let mut can_handle = 0; let mut value_used = 0; for payment in payments { @@ -80,15 +127,6 @@ impl>> Sched can_handle += 1; } - can_handle - }; - - // Find the set of payments we should fulfill at this time - { - // Drop to just the payments we currently have the outputs for - { - let can_handle = - amount_of_payments_that_can_be_handled(operating_costs, &outputs, &payments); let remaining_payments = payments.drain(can_handle ..).collect::>(); // Restore the rest to the database Db::::set_queued_payments(txn, key, *coin, &remaining_payments); @@ -99,96 +137,132 @@ impl>> Sched if payments_value <= operating_costs { operating_costs -= payments_value; Db::::set_operating_costs(txn, *coin, Amount(operating_costs)); - return vec![]; - } - // We explicitly sort AFTER deciding which payments to handle so we always handle the - // oldest queued payments first (preventing any from eternally being shuffled to the back - // of the line) - payments.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0)); - } - assert!(!payments.is_empty()); - - // Find the smallest set of outputs usable to fulfill these outputs - // Size is determined by the largest output, not quantity nor aggregate value - { - // We start by sorting low to high - outputs.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0)); - - let value_needed = - payments.iter().map(|payment| payment.balance().amount.0).sum::() - operating_costs; - - 
let mut needed = 0; - let mut value_present = 0; - for output in &outputs { - needed += 1; - value_present += output.balance().amount.0; - if value_present >= value_needed { + // Reset payments to the queued payments + payments = Db::::queued_payments(txn, key, *coin).unwrap(); + // If there's no more payments, stop looking for which payments we should fulfill + if payments.is_empty() { break; } - } - // Drain, and save back to the DB, the unnecessary outputs - let remaining_outputs = outputs.drain(needed ..).collect::>(); - Db::::set_outputs(txn, key, *coin, &remaining_outputs); - } - assert!(!outputs.is_empty()); - - // We now have the current operating costs, the outputs we're using, and the payments - // The database has the unused outputs/unfilfillable payments - // Actually plan/send off the transactions - - // While our set of outputs exceed the input limit, aggregate them - while outputs.len() > MAX_INPUTS { - let outputs_chunk = outputs.drain(.. MAX_INPUTS).collect::>(); - - // While we're aggregating these outputs, handle any payments we can - let payments_chunk = loop { - let can_handle = - amount_of_payments_that_can_be_handled(operating_costs, &outputs, &payments); - let payments_chunk = payments.drain(.. can_handle.min(MAX_OUTPUTS)).collect::>(); - - let payments_value = - payments_chunk.iter().map(|payment| payment.balance().amount.0).sum::(); - if payments_value <= operating_costs { - operating_costs -= payments_value; - continue; - } - break payments_chunk; - }; - - let Some(planned) = P::plan_transaction_with_fee_amortization( - &mut operating_costs, - fee_rates[coin], - outputs_chunk, - payments_chunk, - // We always use our key for the change here since we may need this change output to - // finish fulfilling these payments - Some(key), - ) else { - // We amortized all payments, and even when just trying to make the change output, these - // inputs couldn't afford their own aggregation and were written off + // Find which of these we should handle continue; - }; - - // Send the transactions off for signing - TransactionsToSign::::send(txn, &key, &planned.signable); - - // Push the Eventualities onto the result - eventualities.push(planned.eventuality); - - let mut effected_received_outputs = planned.auxilliary.0; - // Only handle Change so if someone burns to an External address, we don't use it here - // when the scanner will tell us to return it (without accumulating it) - effected_received_outputs.retain(|output| output.kind() == OutputType::Change); - for output in &effected_received_outputs { - Db::::set_already_accumulated_output(txn, output.id()); } - outputs.append(&mut effected_received_outputs); + + break; + } + if payments.is_empty() { + continue; } - // Now that we have an aggregated set of inputs, create the tree for payments - todo!("TODO"); + // Create a tree to fulfill all of the payments + struct TreeTransaction { + payments: Vec>>, + children: Vec>, + value: u64, + } + let mut tree_transactions = vec![]; + for payments in payments.chunks(MAX_OUTPUTS) { + let value = payments.iter().map(|payment| payment.balance().amount.0).sum::(); + tree_transactions.push(TreeTransaction:: { + payments: payments.to_vec(), + children: vec![], + value, + }); + } + // While we haven't calculated a tree root, or the tree root doesn't support a change output, + // keep working + while (tree_transactions.len() != 1) || (tree_transactions[0].payments.len() == MAX_OUTPUTS) { + let mut next_tree_transactions = vec![]; + for children in tree_transactions.chunks(MAX_OUTPUTS) { + 
let payments = children + .iter() + .map(|child| { + Payment::new( + P::branch_address(key), + Balance { coin: *coin, amount: Amount(child.value) }, + None, + ) + }) + .collect(); + let value = children.iter().map(|child| child.value).sum(); + next_tree_transactions.push(TreeTransaction { + payments, + children: children.to_vec(), + value, + }); + } + tree_transactions = next_tree_transactions; + } + assert_eq!(tree_transactions.len(), 1); + assert!((tree_transactions.payments.len() + 1) <= MAX_OUTPUTS); + + // Create the transaction for the root of the tree + let Some(planned) = P::plan_transaction_with_fee_amortization( + &mut operating_costs, + fee_rates[coin], + outputs, + tree_transactions.payments, + Some(key_for_change), + ) else { + Db::::set_operating_costs(txn, *coin, Amount(operating_costs)); + continue; + }; + TransactionsToSign::::send(txn, &key, &planned.signable); + eventualities.push(planned.eventuality); + + // We accumulate the change output, but consume the branches here + accumulate_outputs( + txn, + planned + .auxilliary + .0 + .iter() + .filter(|output| output.kind() == OutputType::Change) + .cloned() + .collect(), + ); + // Filter the outputs to the change outputs + let mut branch_outputs = planned.auxilliary.0; + branch_outputs.retain(|output| output.kind() == OutputType::Branch); + + // This is recursive, yet only recurses with logarithmic depth + let execute_tree_transaction = |branch_outputs, children| { + assert_eq!(branch_outputs.len(), children.len()); + + // Sort the branch outputs by their value + branch_outputs.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0)); + // Find the child for each branch output + // This is only done within a transaction, not across the layer, so we don't have branches + // created in transactions with less outputs (and therefore less fees) jump places with + // other branches + children.sort_by(|a, b| a.value.cmp(&b.value)); + + for (branch_output, child) in branch_outputs.into_iter().zip(children) { + assert_eq!(branch_output.kind(), OutputType::Branch); + Db::::set_already_accumulated_output(txn, branch_output.id()); + + let Some(planned) = P::plan_transaction_with_fee_amortization( + // Uses 0 as there's no operating costs to incur/amortize here + &mut 0, + fee_rates[coin], + vec![branch_output], + child.payments, + None, + ) else { + // This Branch isn't viable, so drop it (and its children) + continue; + }; + TransactionsToSign::::send(txn, &key, &planned.signable); + eventualities.push(planned.eventuality); + if !child.children.is_empty() { + execute_tree_transaction(planned.auxilliary.0, child.children); + } + } + }; + if !tree_transaction.children.is_empty() { + execute_tree_transaction(branch_outputs, tree_transaction.children); + } } eventualities @@ -288,6 +362,7 @@ impl>> Sched let Some(plan) = P::plan_transaction_with_fee_amortization( // This uses 0 for the operating costs as we don't incur any here + // If the output can't pay for itself to be forwarded, we simply drop it &mut 0, fee_rates[&forward.balance().coin], vec![forward.clone()], @@ -304,6 +379,7 @@ impl>> Sched Payment::new(to_return.address().clone(), to_return.output().balance(), None); let Some(plan) = P::plan_transaction_with_fee_amortization( // This uses 0 for the operating costs as we don't incur any here + // If the output can't pay for itself to be returned, we simply drop it &mut 0, fee_rates[&out_instruction.balance().coin], vec![to_return.output().clone()], From 8ff019265ff5a87069aa8e346ab87e8926e20371 Mon Sep 17 00:00:00 2001 
From: Luke Parker Date: Tue, 3 Sep 2024 19:33:38 -0400 Subject: [PATCH 069/368] Near-complete version of the tree algorithm in the transaction-chaining scheduler --- processor/scanner/src/lib.rs | 2 + .../scheduler/utxo/primitives/src/lib.rs | 5 + .../utxo/transaction-chaining/src/lib.rs | 177 +++++++++++++----- 3 files changed, 138 insertions(+), 46 deletions(-) diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 539bd4a7..1818fbf0 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -277,6 +277,7 @@ pub trait Scheduler: 'static + Send { fn update( &mut self, txn: &mut impl DbTxn, + block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], update: SchedulerUpdate, ) -> HashMap, Vec>>; @@ -316,6 +317,7 @@ pub trait Scheduler: 'static + Send { fn fulfill( &mut self, txn: &mut impl DbTxn, + block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], payments: Vec>>, ) -> HashMap, Vec>>; diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs index af8b985f..356192ee 100644 --- a/processor/scheduler/utxo/primitives/src/lib.rs +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -35,6 +35,11 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// The type representing a signable transaction. type SignableTransaction: SignableTransaction; + /// The maximum amount of inputs allowed in a transaction. + const MAX_INPUTS: usize; + /// The maximum amount of outputs allowed in a transaction, including the change output. + const MAX_OUTPUTS: usize; + /// Obtain the fee rate to pay. /// /// This must be constant to the finalized block referenced by this block number and the coin. diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index 8e567e14..31c70c1e 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -37,6 +37,7 @@ impl>> Sched &mut self, txn: &mut impl DbTxn, active_keys: &[(KeyFor, LifetimeStage)], + fee_rates: &HashMap, key: KeyFor, ) -> Vec> { let mut eventualities = vec![]; @@ -64,11 +65,11 @@ impl>> Sched // If we have more than the maximum amount of inputs, aggregate until we don't { - while outputs.len() > MAX_INPUTS { + while outputs.len() > P::MAX_INPUTS { let Some(planned) = P::plan_transaction_with_fee_amortization( &mut operating_costs, fee_rates[coin], - outputs.drain(.. MAX_INPUTS).collect::>(), + outputs.drain(.. 
P::MAX_INPUTS).collect::>(), vec![], Some(key_for_change), ) else { @@ -156,13 +157,14 @@ impl>> Sched } // Create a tree to fulfill all of the payments + #[derive(Clone)] struct TreeTransaction { payments: Vec>>, children: Vec>, value: u64, } let mut tree_transactions = vec![]; - for payments in payments.chunks(MAX_OUTPUTS) { + for payments in payments.chunks(P::MAX_OUTPUTS) { let value = payments.iter().map(|payment| payment.balance().amount.0).sum::(); tree_transactions.push(TreeTransaction:: { payments: payments.to_vec(), @@ -172,9 +174,21 @@ impl>> Sched } // While we haven't calculated a tree root, or the tree root doesn't support a change output, // keep working - while (tree_transactions.len() != 1) || (tree_transactions[0].payments.len() == MAX_OUTPUTS) { + while (tree_transactions.len() != 1) || + (tree_transactions[0].payments.len() == P::MAX_OUTPUTS) + { let mut next_tree_transactions = vec![]; - for children in tree_transactions.chunks(MAX_OUTPUTS) { + for children in tree_transactions.chunks(P::MAX_OUTPUTS) { + // If this is the last chunk, and it doesn't need to accumulated, continue + if (children.len() < P::MAX_OUTPUTS) && + ((next_tree_transactions.len() + children.len()) < P::MAX_OUTPUTS) + { + for child in children { + next_tree_transactions.push(child.clone()); + } + continue; + } + let payments = children .iter() .map(|child| { @@ -194,15 +208,111 @@ impl>> Sched } tree_transactions = next_tree_transactions; } + + // This is recursive, yet only recurses with logarithmic depth + fn execute_tree_transaction< + S: ScannerFeed, + P: TransactionPlanner>, + >( + txn: &mut impl DbTxn, + fee_rate: P::FeeRate, + eventualities: &mut Vec>, + key: KeyFor, + mut branch_outputs: Vec>, + mut children: Vec>, + ) { + assert_eq!(branch_outputs.len(), children.len()); + + // Sort the branch outputs by their value + branch_outputs.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0)); + // Find the child for each branch output + // This is only done within a transaction, not across the layer, so we don't have branches + // created in transactions with less outputs (and therefore less fees) jump places with + // other branches + children.sort_by(|a, b| a.value.cmp(&b.value)); + + for (branch_output, mut child) in branch_outputs.into_iter().zip(children) { + assert_eq!(branch_output.kind(), OutputType::Branch); + Db::::set_already_accumulated_output(txn, branch_output.id()); + + // We need to compensate for the value of this output being less than the value of the + // payments + { + let fee_to_amortize = child.value - branch_output.balance().amount.0; + let mut amortized = 0; + 'outer: while (!child.payments.is_empty()) && (amortized < fee_to_amortize) { + let adjusted_fee = fee_to_amortize - amortized; + let payments_len = u64::try_from(child.payments.len()).unwrap(); + let per_payment_fee_check = adjusted_fee.div_ceil(payments_len); + + let mut i = 0; + while i < child.payments.len() { + let amount = child.payments[i].balance().amount.0; + if amount <= per_payment_fee_check { + child.payments.swap_remove(i); + child.children.swap_remove(i); + amortized += amount; + continue 'outer; + } + i += 1; + } + + // Since all payments can pay the fee, deduct accordingly + for (i, payment) in child.payments.iter_mut().enumerate() { + let Balance { coin, amount } = payment.balance(); + let mut amount = amount.0; + amount -= adjusted_fee / payments_len; + if i < usize::try_from(adjusted_fee % payments_len).unwrap() { + amount -= 1; + } + + *payment = Payment::new( + 
payment.address().clone(), + Balance { coin, amount: Amount(amount) }, + None, + ); + } + } + if child.payments.is_empty() { + continue; + } + } + + let Some(planned) = P::plan_transaction_with_fee_amortization( + // Uses 0 as there's no operating costs to incur/amortize here + &mut 0, + fee_rate, + vec![branch_output], + child.payments, + None, + ) else { + // This Branch isn't viable, so drop it (and its children) + continue; + }; + TransactionsToSign::::send(txn, &key, &planned.signable); + eventualities.push(planned.eventuality); + if !child.children.is_empty() { + execute_tree_transaction::( + txn, + fee_rate, + eventualities, + key, + planned.auxilliary.0, + child.children, + ); + } + } + } + assert_eq!(tree_transactions.len(), 1); - assert!((tree_transactions.payments.len() + 1) <= MAX_OUTPUTS); + assert!((tree_transactions[0].payments.len() + 1) <= P::MAX_OUTPUTS); // Create the transaction for the root of the tree let Some(planned) = P::plan_transaction_with_fee_amortization( &mut operating_costs, fee_rates[coin], outputs, - tree_transactions.payments, + tree_transactions[0].payments, Some(key_for_change), ) else { Db::::set_operating_costs(txn, *coin, Amount(operating_costs)); @@ -226,42 +336,15 @@ impl>> Sched let mut branch_outputs = planned.auxilliary.0; branch_outputs.retain(|output| output.kind() == OutputType::Branch); - // This is recursive, yet only recurses with logarithmic depth - let execute_tree_transaction = |branch_outputs, children| { - assert_eq!(branch_outputs.len(), children.len()); - - // Sort the branch outputs by their value - branch_outputs.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0)); - // Find the child for each branch output - // This is only done within a transaction, not across the layer, so we don't have branches - // created in transactions with less outputs (and therefore less fees) jump places with - // other branches - children.sort_by(|a, b| a.value.cmp(&b.value)); - - for (branch_output, child) in branch_outputs.into_iter().zip(children) { - assert_eq!(branch_output.kind(), OutputType::Branch); - Db::::set_already_accumulated_output(txn, branch_output.id()); - - let Some(planned) = P::plan_transaction_with_fee_amortization( - // Uses 0 as there's no operating costs to incur/amortize here - &mut 0, - fee_rates[coin], - vec![branch_output], - child.payments, - None, - ) else { - // This Branch isn't viable, so drop it (and its children) - continue; - }; - TransactionsToSign::::send(txn, &key, &planned.signable); - eventualities.push(planned.eventuality); - if !child.children.is_empty() { - execute_tree_transaction(planned.auxilliary.0, child.children); - } - } - }; - if !tree_transaction.children.is_empty() { - execute_tree_transaction(branch_outputs, tree_transaction.children); + if !tree_transactions[0].children.is_empty() { + execute_tree_transaction::( + txn, + fee_rates[coin], + &mut eventualities, + key, + branch_outputs, + tree_transactions[0].children, + ); } } @@ -306,6 +389,7 @@ impl>> Sched fn update( &mut self, txn: &mut impl DbTxn, + block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], update: SchedulerUpdate, ) -> HashMap, Vec>> { @@ -336,14 +420,14 @@ impl>> Sched } } - let mut fee_rates: HashMap = todo!("TODO"); + let fee_rates = block.fee_rates(); // Fulfill the payments we prior couldn't let mut eventualities = HashMap::new(); for (key, _stage) in active_keys { eventualities.insert( key.to_bytes().as_ref().to_vec(), - self.handle_queued_payments(txn, active_keys, *key), + self.handle_queued_payments(txn, 
active_keys, fee_rates, *key), ); } @@ -406,6 +490,7 @@ impl>> Sched fn fulfill( &mut self, txn: &mut impl DbTxn, + block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], mut payments: Vec>>, ) -> HashMap, Vec>> { @@ -429,7 +514,7 @@ impl>> Sched // Handle the queued payments HashMap::from([( fulfillment_key.to_bytes().as_ref().to_vec(), - self.handle_queued_payments(txn, active_keys, fulfillment_key), + self.handle_queued_payments(txn, active_keys, block.fee_rates(), fulfillment_key), )]) } } From 653ead1e8c46cc0b28b9e4613e5c19bf2715cd52 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 4 Sep 2024 01:44:21 -0400 Subject: [PATCH 070/368] Finish the tree logic in the transaction-chaining scheduler Also completes the DB functions, makes Scheduler never instantiated, and ensures tree roots have change outputs. --- processor/primitives/src/payment.rs | 21 + processor/scanner/src/eventuality/mod.rs | 28 +- processor/scanner/src/lib.rs | 21 +- .../scheduler/utxo/primitives/src/lib.rs | 18 +- .../utxo/transaction-chaining/src/db.rs | 21 +- .../utxo/transaction-chaining/src/lib.rs | 736 ++++++++++-------- 6 files changed, 477 insertions(+), 368 deletions(-) diff --git a/processor/primitives/src/payment.rs b/processor/primitives/src/payment.rs index 1bbb0604..bf3c918c 100644 --- a/processor/primitives/src/payment.rs +++ b/processor/primitives/src/payment.rs @@ -1,3 +1,7 @@ +use std::io; + +use scale::{Encode, Decode, IoReader}; + use serai_primitives::{Balance, Data}; use serai_coins_primitives::OutInstructionWithBalance; @@ -27,6 +31,7 @@ impl Payment { pub fn new(address: A, balance: Balance, data: Option>) -> Self { Payment { address, balance, data } } + /// The address to pay. pub fn address(&self) -> &A { &self.address @@ -39,4 +44,20 @@ impl Payment { pub fn data(&self) -> &Option> { &self.data } + + /// Read a Payment. + pub fn read(reader: &mut impl io::Read) -> io::Result { + let address = A::read(reader)?; + let reader = &mut IoReader(reader); + let balance = Balance::decode(reader).map_err(io::Error::other)?; + let data = Option::>::decode(reader).map_err(io::Error::other)?; + Ok(Self { address, balance, data }) + } + /// Write the Payment. + pub fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + self.address.write(writer).unwrap(); + self.balance.encode_to(writer); + self.data.encode_to(writer); + Ok(()) + } } diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 5a7b4cca..84670f79 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -1,3 +1,4 @@ +use core::marker::PhantomData; use std::collections::{HashSet, HashMap}; use group::GroupEncoding; @@ -101,11 +102,11 @@ fn intake_eventualities( pub(crate) struct EventualityTask> { db: D, feed: S, - scheduler: Sch, + scheduler: PhantomData, } impl> EventualityTask { - pub(crate) fn new(mut db: D, feed: S, scheduler: Sch, start_block: u64) -> Self { + pub(crate) fn new(mut db: D, feed: S, start_block: u64) -> Self { if EventualityDb::::next_to_check_for_eventualities_block(&db).is_none() { // Initialize the DB let mut txn = db.txn(); @@ -113,7 +114,7 @@ impl> EventualityTask { txn.commit(); } - Self { db, feed, scheduler } + Self { db, feed, scheduler: PhantomData } } #[allow(clippy::type_complexity)] @@ -146,7 +147,7 @@ impl> EventualityTask { } // Returns a boolean of if we intaked any Burns. 
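// As a side note on the Payment::read/write pair added above: the address uses
// its own format while the remaining fields stream as SCALE, with IoReader
// bridging io::Read into scale::Input. A minimal, self-contained sketch of the
// same pattern (toy struct, address elided; only parity-scale-codec assumed):
use scale::{Encode, Decode, IoReader};

#[derive(Debug, PartialEq, Encode, Decode)]
struct ExamplePayment {
  balance: u64,
  data: Option<Vec<u8>>,
}

fn round_trip(payment: &ExamplePayment) -> ExamplePayment {
  let mut buf = vec![];
  payment.encode_to(&mut buf);
  // No length prefix is needed; SCALE knows each field's framing
  ExamplePayment::decode(&mut IoReader(buf.as_slice())).unwrap()
}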
- fn intake_burns(&mut self) -> bool { + async fn intake_burns(&mut self) -> Result { let mut intaked_any = false; // If we've handled an notable block, we may have Burns being queued with it as the reference @@ -158,6 +159,8 @@ impl> EventualityTask { // others the new key let (_keys, keys_with_stages) = self.keys_and_keys_with_stages(latest_handled_notable_block); + let block = self.feed.block_by_number(&self.db, latest_handled_notable_block).await?; + let mut txn = self.db.txn(); // Drain the entire channel while let Some(burns) = @@ -165,8 +168,9 @@ impl> EventualityTask { { intaked_any = true; - let new_eventualities = self.scheduler.fulfill( + let new_eventualities = Sch::fulfill( &mut txn, + &block, &keys_with_stages, burns .into_iter() @@ -178,7 +182,7 @@ impl> EventualityTask { txn.commit(); } - intaked_any + Ok(intaked_any) } } @@ -197,7 +201,7 @@ impl> ContinuallyRan for EventualityTas // Start by intaking any Burns we have sitting around // It's important we run this regardless of if we have a new block to handle - made_progress |= self.intake_burns(); + made_progress |= self.intake_burns().await?; /* Eventualities increase upon one of two cases: @@ -253,7 +257,7 @@ impl> ContinuallyRan for EventualityTas // state will be for the newer block) #[allow(unused_assignments)] { - made_progress |= self.intake_burns(); + made_progress |= self.intake_burns().await?; } } @@ -278,7 +282,7 @@ impl> ContinuallyRan for EventualityTas for key in &keys { // If this is the key's activation block, activate it if key.activation_block_number == b { - self.scheduler.activate_key(&mut txn, key.key); + Sch::activate_key(&mut txn, key.key); } let completed_eventualities = { @@ -431,7 +435,7 @@ impl> ContinuallyRan for EventualityTas after a later one was already used). */ let new_eventualities = - self.scheduler.update(&mut txn, &keys_with_stages, scheduler_update); + Sch::update(&mut txn, &block, &keys_with_stages, scheduler_update); // Intake the new Eventualities for key in new_eventualities.keys() { keys @@ -451,7 +455,7 @@ impl> ContinuallyRan for EventualityTas key.key != keys.last().unwrap().key, "key which was forwarding was the last key (which has no key after it to forward to)" ); - self.scheduler.flush_key(&mut txn, key.key, keys.last().unwrap().key); + Sch::flush_key(&mut txn, &block, key.key, keys.last().unwrap().key); } // Now that we've intaked any Eventualities caused, check if we're retiring any keys @@ -469,7 +473,7 @@ impl> ContinuallyRan for EventualityTas // We tell the scheduler to retire it now as we're done with it, and this fn doesn't // require it be called with a canonical order - self.scheduler.retire_key(&mut txn, key.key); + Sch::retire_key(&mut txn, key.key); } } } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 1818fbf0..8ecb731f 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -163,6 +163,8 @@ pub type AddressFor = <::Block as Block>::Address; pub type OutputFor = <::Block as Block>::Output; /// The eventuality type for this ScannerFeed. pub type EventualityFor = <::Block as Block>::Eventuality; +/// The block type for this ScannerFeed. +pub type BlockFor = ::Block; #[async_trait::async_trait] pub trait BatchPublisher: 'static + Send + Sync { @@ -245,7 +247,7 @@ pub trait Scheduler: 'static + Send { /// /// This SHOULD setup any necessary database structures. This SHOULD NOT cause the new key to /// be used as the primary key. The multisig rotation time clearly establishes its steps. 
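// The signature changes through here drop every &mut self receiver. A minimal,
// self-contained sketch (toy trait and key type, nothing from this crate) of
// the resulting pattern: the scheduler is pure logic over the database, so the
// task only carries PhantomData of it and dispatches statically.
use core::marker::PhantomData;

trait Scheduler {
  fn activate_key(key: [u8; 32]);
}

struct Task<Sch: Scheduler> {
  // Never instantiated, only dispatched to
  scheduler: PhantomData<Sch>,
}

impl<Sch: Scheduler> Task<Sch> {
  fn on_activation(&self, key: [u8; 32]) {
    // Call via the type, not an instance
    Sch::activate_key(key);
  }
}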
- fn activate_key(&mut self, txn: &mut impl DbTxn, key: KeyFor); + fn activate_key(txn: &mut impl DbTxn, key: KeyFor); /// Flush all outputs within a retiring key to the new key. /// @@ -257,14 +259,20 @@ pub trait Scheduler: 'static + Send { /// /// If the retiring key has any unfulfilled payments associated with it, those MUST be made /// the responsibility of the new key. - fn flush_key(&mut self, txn: &mut impl DbTxn, retiring_key: KeyFor, new_key: KeyFor); + // TODO: This needs to return a HashMap for the eventualities + fn flush_key( + txn: &mut impl DbTxn, + block: &BlockFor, + retiring_key: KeyFor, + new_key: KeyFor, + ); /// Retire a key as it'll no longer be used. /// /// Any key retired MUST NOT still have outputs associated with it. This SHOULD be a NOP other /// than any assertions and database cleanup. This MUST NOT be expected to be called in a fashion /// ordered to any other calls. - fn retire_key(&mut self, txn: &mut impl DbTxn, key: KeyFor); + fn retire_key(txn: &mut impl DbTxn, key: KeyFor); /// Accumulate outputs into the scheduler, yielding the Eventualities now to be scanned for. /// @@ -275,7 +283,6 @@ pub trait Scheduler: 'static + Send { /// The `Vec` used as the key in the returned HashMap should be the encoded key the /// Eventualities are for. fn update( - &mut self, txn: &mut impl DbTxn, block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], @@ -315,7 +322,6 @@ pub trait Scheduler: 'static + Send { has an output-to-Serai, the new primary output). */ fn fulfill( - &mut self, txn: &mut impl DbTxn, block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], @@ -333,18 +339,17 @@ impl Scanner { /// Create a new scanner. /// /// This will begin its execution, spawning several asynchronous tasks. - pub async fn new( + pub async fn new>( db: impl Db, feed: S, batch_publisher: impl BatchPublisher, - scheduler: impl Scheduler, start_block: u64, ) -> Self { let index_task = index::IndexTask::new(db.clone(), feed.clone(), start_block).await; let scan_task = scan::ScanTask::new(db.clone(), feed.clone(), start_block); let report_task = report::ReportTask::<_, S, _>::new(db.clone(), batch_publisher, start_block); let substrate_task = substrate::SubstrateTask::<_, S>::new(db.clone()); - let eventuality_task = eventuality::EventualityTask::new(db, feed, scheduler, start_block); + let eventuality_task = eventuality::EventualityTask::<_, _, Sch>::new(db, feed, start_block); let (_index_handle, index_run) = RunNowHandle::new(); let (scan_handle, scan_run) = RunNowHandle::new(); diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs index 356192ee..81d5ebd7 100644 --- a/processor/scheduler/utxo/primitives/src/lib.rs +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -2,12 +2,10 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use core::fmt::Debug; - use serai_primitives::{Coin, Amount}; use primitives::{ReceivedOutput, Payment}; -use scanner::{ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor}; +use scanner::{ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, BlockFor}; use scheduler_primitives::*; /// A planned transaction. @@ -23,12 +21,6 @@ pub struct PlannedTransaction { /// An object able to plan a transaction. #[async_trait::async_trait] pub trait TransactionPlanner: 'static + Send + Sync { - /// An error encountered when determining the fee rate. - /// - /// This MUST be an ephemeral error. 
Retrying fetching data from the blockchain MUST eventually - /// resolve without manual intervention/changing the arguments. - type EphemeralError: Debug; - /// The type representing a fee rate to use for transactions. type FeeRate: Clone + Copy; @@ -42,12 +34,8 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// Obtain the fee rate to pay. /// - /// This must be constant to the finalized block referenced by this block number and the coin. - async fn fee_rate( - &self, - block_number: u64, - coin: Coin, - ) -> Result; + /// This must be constant to the block and coin. + fn fee_rate(block: &BlockFor, coin: Coin) -> Self::FeeRate; /// The branch address for this key of Serai's. fn branch_address(key: KeyFor) -> AddressFor; diff --git a/processor/scheduler/utxo/transaction-chaining/src/db.rs b/processor/scheduler/utxo/transaction-chaining/src/db.rs index d629480f..697f1009 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/db.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/db.rs @@ -61,13 +61,13 @@ impl Db { pub(crate) fn set_already_accumulated_output( txn: &mut impl DbTxn, - output: as ReceivedOutput, AddressFor>>::Id, + output: & as ReceivedOutput, AddressFor>>::Id, ) { AlreadyAccumulatedOutput::set(txn, output.as_ref(), &()); } pub(crate) fn take_if_already_accumulated_output( txn: &mut impl DbTxn, - output: as ReceivedOutput, AddressFor>>::Id, + output: & as ReceivedOutput, AddressFor>>::Id, ) -> bool { let res = AlreadyAccumulatedOutput::get(txn, output.as_ref()).is_some(); AlreadyAccumulatedOutput::del(txn, output.as_ref()); @@ -79,15 +79,26 @@ impl Db { key: KeyFor, coin: Coin, ) -> Option>>> { - todo!("TODO") + let buf = SerializedQueuedPayments::get(getter, key.to_bytes().as_ref(), coin)?; + let mut buf = buf.as_slice(); + + let mut res = Vec::with_capacity(buf.len() / 128); + while !buf.is_empty() { + res.push(Payment::read(&mut buf).unwrap()); + } + Some(res) } pub(crate) fn set_queued_payments( txn: &mut impl DbTxn, key: KeyFor, coin: Coin, - queued: &Vec>>, + queued: &[Payment>], ) { - todo!("TODO") + let mut buf = Vec::with_capacity(queued.len() * 128); + for queued in queued { + queued.write(&mut buf).unwrap(); + } + SerializedQueuedPayments::set(txn, key.to_bytes().as_ref(), coin, &buf); } pub(crate) fn del_queued_payments(txn: &mut impl DbTxn, key: KeyFor, coin: Coin) { SerializedQueuedPayments::del(txn, key.to_bytes().as_ref(), coin); diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index 31c70c1e..7359a87c 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -13,8 +13,8 @@ use serai_db::DbTxn; use primitives::{OutputType, ReceivedOutput, Payment}; use scanner::{ - LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, SchedulerUpdate, - Scheduler as SchedulerTrait, + LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, BlockFor, + SchedulerUpdate, Scheduler as SchedulerTrait, }; use scheduler_primitives::*; use utxo_scheduler_primitives::*; @@ -22,6 +22,114 @@ use utxo_scheduler_primitives::*; mod db; use db::Db; +#[derive(Clone)] +enum TreeTransaction { + Leaves { payments: Vec>>, value: u64 }, + Branch { children: Vec, value: u64 }, +} +impl TreeTransaction { + fn children(&self) -> usize { + match self { + Self::Leaves { payments, .. } => payments.len(), + Self::Branch { children, .. 
} => children.len(), + } + } + fn value(&self) -> u64 { + match self { + Self::Leaves { value, .. } | Self::Branch { value, .. } => *value, + } + } + fn payments( + &self, + coin: Coin, + branch_address: &AddressFor, + input_value: u64, + ) -> Option>>> { + // Fetch the amounts for the payments we'll make + let mut amounts: Vec<_> = match self { + Self::Leaves { payments, .. } => { + payments.iter().map(|payment| Some(payment.balance().amount.0)).collect() + } + Self::Branch { children, .. } => children.iter().map(|child| Some(child.value())).collect(), + }; + + // We need to reduce them so their sum is our input value + assert!(input_value <= self.value()); + let amount_to_amortize = self.value() - input_value; + + // If any payments won't survive the reduction, set them to None + let mut amortized = 0; + 'outer: while amounts.iter().any(Option::is_some) && (amortized < amount_to_amortize) { + let adjusted_fee = amount_to_amortize - amortized; + let amounts_len = + u64::try_from(amounts.iter().filter(|amount| amount.is_some()).count()).unwrap(); + let per_payment_fee_check = adjusted_fee.div_ceil(amounts_len); + + // Check each amount to see if it's not viable + let mut i = 0; + while i < amounts.len() { + if let Some(amount) = amounts[i] { + if amount.saturating_sub(per_payment_fee_check) < S::dust(coin).0 { + amounts[i] = None; + amortized += amount; + // If this amount wasn't viable, re-run with the new fee/amortization amounts + continue 'outer; + } + } + i += 1; + } + + // Now that we have the payments which will survive, reduce them + for (i, amount) in amounts.iter_mut().enumerate() { + if let Some(amount) = amount { + *amount -= adjusted_fee / amounts_len; + if i < usize::try_from(adjusted_fee % amounts_len).unwrap() { + *amount -= 1; + } + } + } + break; + } + + // Now that we have the reduced amounts, create the payments + let payments: Vec<_> = match self { + Self::Leaves { payments, .. } => { + payments + .iter() + .zip(amounts) + .filter_map(|(payment, amount)| { + amount.map(|amount| { + // The existing payment, with the new amount + Payment::new( + payment.address().clone(), + Balance { coin, amount: Amount(amount) }, + payment.data().clone(), + ) + }) + }) + .collect() + } + Self::Branch { .. } => { + amounts + .into_iter() + .filter_map(|amount| { + amount.map(|amount| { + // A branch output with the new amount + Payment::new(branch_address.clone(), Balance { coin, amount: Amount(amount) }, None) + }) + }) + .collect() + } + }; + + // Use None for vec![] so we never actually use vec![] + if payments.is_empty() { + None?; + } + Some(payments) + } +} + /// The outputs which will be effected by a PlannedTransaction and received by Serai. 
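// A self-contained rendering (plain u64 amounts, an explicit nonzero dust
// argument) of the amortization loop in TreeTransaction::payments above. Any
// payment which can't pay its share of the fee and still clear dust is
// dropped, its value counting towards the fee; the fee is then deducted
// evenly, with the remainder spread one unit at a time.
fn amortize_fee(mut amounts: Vec<Option<u64>>, input_value: u64, dust: u64) -> Vec<Option<u64>> {
  let value = amounts.iter().flatten().sum::<u64>();
  assert!(input_value <= value);
  let fee = value - input_value;

  let mut amortized = 0;
  'outer: while amounts.iter().any(Option::is_some) && (amortized < fee) {
    let adjusted_fee = fee - amortized;
    let len = u64::try_from(amounts.iter().filter(|amount| amount.is_some()).count()).unwrap();
    let per_payment_fee_check = adjusted_fee.div_ceil(len);

    // Drop the first payment which wouldn't survive, then re-run with the new fee
    for amount in &mut amounts {
      if let Some(value) = *amount {
        if value.saturating_sub(per_payment_fee_check) < dust {
          *amount = None;
          amortized += value;
          continue 'outer;
        }
      }
    }

    // Every surviving payment can pay its share
    for (i, amount) in amounts.iter_mut().flatten().enumerate() {
      *amount -= adjusted_fee / len;
      if u64::try_from(i).unwrap() < (adjusted_fee % len) {
        *amount -= 1;
      }
    }
    break;
  }
  amounts
}
// e.g. amortize_fee(vec![Some(100), Some(300)], 360, 10) == vec![Some(80), Some(280)]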
pub struct EffectedReceivedOutputs(Vec>); @@ -33,319 +141,315 @@ pub struct Scheduler>> Scheduler { - fn handle_queued_payments( - &mut self, + fn accumulate_outputs(txn: &mut impl DbTxn, outputs: Vec>, from_scanner: bool) { + let mut outputs_by_key = HashMap::new(); + for output in outputs { + if !from_scanner { + // Since this isn't being reported by the scanner, flag it so when the scanner does report + // it, we don't accumulate it again + Db::::set_already_accumulated_output(txn, &output.id()); + } else if Db::::take_if_already_accumulated_output(txn, &output.id()) { + continue; + } + + let coin = output.balance().coin; + outputs_by_key + // Index by key and coin + .entry((output.key().to_bytes().as_ref().to_vec(), coin)) + // If we haven't accumulated here prior, read the outputs from the database + .or_insert_with(|| (output.key(), Db::::outputs(txn, output.key(), coin).unwrap())) + .1 + .push(output); + } + // Write the outputs back to the database + for ((_key_vec, coin), (key, outputs)) in outputs_by_key { + Db::::set_outputs(txn, key, coin, &outputs); + } + } + + fn aggregate_inputs( + txn: &mut impl DbTxn, + block: &BlockFor, + key_for_change: KeyFor, + key: KeyFor, + coin: Coin, + ) -> Vec> { + let mut eventualities = vec![]; + + let mut operating_costs = Db::::operating_costs(txn, coin).0; + let mut outputs = Db::::outputs(txn, key, coin).unwrap(); + while outputs.len() > P::MAX_INPUTS { + let to_aggregate = outputs.drain(.. P::MAX_INPUTS).collect::>(); + Db::::set_outputs(txn, key, coin, &outputs); + + let Some(planned) = P::plan_transaction_with_fee_amortization( + &mut operating_costs, + P::fee_rate(block, coin), + to_aggregate, + vec![], + Some(key_for_change), + ) else { + continue; + }; + + TransactionsToSign::::send(txn, &key, &planned.signable); + eventualities.push(planned.eventuality); + Self::accumulate_outputs(txn, planned.auxilliary.0, false); + + // Reload the outputs for the next loop iteration + outputs = Db::::outputs(txn, key, coin).unwrap(); + } + + Db::::set_operating_costs(txn, coin, Amount(operating_costs)); + eventualities + } + + fn fulfillable_payments( + txn: &mut impl DbTxn, + operating_costs: &mut u64, + key: KeyFor, + coin: Coin, + value_of_outputs: u64, + ) -> Vec>> { + // Fetch all payments for this key + let mut payments = Db::::queued_payments(txn, key, coin).unwrap(); + if payments.is_empty() { + return vec![]; + } + + loop { + // inputs must be >= (payments - operating costs) + // Accordingly, (inputs + operating costs) must be >= payments + let value_fulfillable = value_of_outputs + *operating_costs; + + // Drop to just the payments we can currently fulfill + { + let mut can_handle = 0; + let mut value_used = 0; + for payment in &payments { + value_used += payment.balance().amount.0; + if value_fulfillable < value_used { + break; + } + can_handle += 1; + } + + let remaining_payments = payments.drain(can_handle ..).collect::>(); + // Restore the rest to the database + Db::::set_queued_payments(txn, key, coin, &remaining_payments); + } + + // If these payments are worth less than the operating costs, immediately drop them + let payments_value = payments.iter().map(|payment| payment.balance().amount.0).sum::(); + if payments_value <= *operating_costs { + *operating_costs -= payments_value; + Db::::set_operating_costs(txn, coin, Amount(*operating_costs)); + + // Reset payments to the queued payments + payments = Db::::queued_payments(txn, key, coin).unwrap(); + // If there's no more payments, stop looking for which payments we should fulfill + 
if payments.is_empty() { + return vec![]; + } + // Find which of these we should handle + continue; + } + + return payments; + } + } + + fn step( txn: &mut impl DbTxn, active_keys: &[(KeyFor, LifetimeStage)], - fee_rates: &HashMap, + block: &BlockFor, key: KeyFor, ) -> Vec> { let mut eventualities = vec![]; - let mut accumulate_outputs = |txn, outputs: Vec>| { - let mut outputs_by_key = HashMap::new(); - for output in outputs { - Db::::set_already_accumulated_output(txn, output.id()); - let coin = output.balance().coin; - outputs_by_key - .entry((output.key().to_bytes().as_ref().to_vec(), coin)) - .or_insert_with(|| (output.key(), Db::::outputs(txn, output.key(), coin).unwrap())) - .1 - .push(output); + let key_for_change = match active_keys[0].1 { + LifetimeStage::ActiveYetNotReporting => { + panic!("expected to fulfill payments despite not reporting for the oldest key") } - for ((_key_vec, coin), (key, outputs)) in outputs_by_key { - Db::::set_outputs(txn, key, coin, &outputs); + LifetimeStage::Active => active_keys[0].0, + LifetimeStage::UsingNewForChange | LifetimeStage::Forwarding | LifetimeStage::Finishing => { + active_keys[1].0 } }; + let branch_address = P::branch_address(key); - for coin in S::NETWORK.coins() { - // Fetch our operating costs and all our outputs - let mut operating_costs = Db::::operating_costs(txn, *coin).0; - let mut outputs = Db::::outputs(txn, key, *coin).unwrap(); + 'coin: for coin in S::NETWORK.coins() { + let coin = *coin; - // If we have more than the maximum amount of inputs, aggregate until we don't - { - while outputs.len() > P::MAX_INPUTS { + // Perform any input aggregation we should + eventualities.append(&mut Self::aggregate_inputs(txn, block, key_for_change, key, coin)); + + // Fetch the operating costs/outputs + let mut operating_costs = Db::::operating_costs(txn, coin).0; + let outputs = Db::::outputs(txn, key, coin).unwrap(); + + // Fetch the fulfillable payments + let payments = Self::fulfillable_payments( + txn, + &mut operating_costs, + key, + coin, + outputs.iter().map(|output| output.balance().amount.0).sum(), + ); + if payments.is_empty() { + continue; + } + + // If this is our only key, we should be able to fulfill all payments + // Else, we'd be insolvent + if active_keys.len() == 1 { + assert!(Db::::queued_payments(txn, key, coin).unwrap().is_empty()); + } + + // Create a tree to fulfillthe payments + // This variable is for the current layer of the tree being built + let mut tree = Vec::with_capacity(payments.len().div_ceil(P::MAX_OUTPUTS)); + + // Push the branches for the leaves (the payments out) + for payments in payments.chunks(P::MAX_OUTPUTS) { + let value = payments.iter().map(|payment| payment.balance().amount.0).sum::(); + tree.push(TreeTransaction::::Leaves { payments: payments.to_vec(), value }); + } + + // While we haven't calculated a tree root, or the tree root doesn't support a change output, + // keep working + while (tree.len() != 1) || (tree[0].children() == P::MAX_OUTPUTS) { + let mut branch_layer = vec![]; + for children in tree.chunks(P::MAX_OUTPUTS) { + branch_layer.push(TreeTransaction::::Branch { + children: children.to_vec(), + value: children.iter().map(TreeTransaction::value).sum(), + }); + } + tree = branch_layer; + } + assert_eq!(tree.len(), 1); + assert!((tree[0].children() + 1) <= P::MAX_OUTPUTS); + + // Create the transaction for the root of the tree + let mut branch_outputs = { + // Try creating this transaction twice, once with a change output and once with increased + // operating costs to ensure a 
change output (as necessary to meet the requirements of the + // scanner API) + let mut planned_outer = None; + for i in 0 .. 2 { let Some(planned) = P::plan_transaction_with_fee_amortization( &mut operating_costs, - fee_rates[coin], - outputs.drain(.. P::MAX_INPUTS).collect::>(), - vec![], + P::fee_rate(block, coin), + outputs.clone(), + tree[0] + .payments(coin, &branch_address, tree[0].value()) + .expect("payments were dropped despite providing an input of the needed value"), Some(key_for_change), ) else { - // We amortized all payments, and even when just trying to make the change output, these - // inputs couldn't afford their own aggregation and were written off - Db::::set_operating_costs(txn, *coin, Amount(operating_costs)); + // This should trip on the first iteration or not at all + assert_eq!(i, 0); + // This doesn't have inputs even worth aggregating so drop the entire tree + Db::::set_operating_costs(txn, coin, Amount(operating_costs)); + continue 'coin; + }; + + // If this doesn't have a change output, increase operating costs and try again + if !planned.auxilliary.0.iter().any(|output| output.kind() == OutputType::Change) { + /* + Since we'll create a change output if it's worth at least dust, amortizing dust from + the payments should solve this. If the new transaction can't afford those operating + costs, then the payments should be amortized out, causing there to be a change or no + transaction at all. + */ + operating_costs += S::dust(coin).0; + continue; + } + + // Since this had a change output, move forward with it + planned_outer = Some(planned); + break; + } + let Some(mut planned) = planned_outer else { + panic!("couldn't create a tree root with a change output") + }; + Db::::set_operating_costs(txn, coin, Amount(operating_costs)); + TransactionsToSign::::send(txn, &key, &planned.signable); + eventualities.push(planned.eventuality); + + // We accumulate the change output, but not the branches as we'll consume them momentarily + Self::accumulate_outputs( + txn, + planned + .auxilliary + .0 + .iter() + .filter(|output| output.kind() == OutputType::Change) + .cloned() + .collect(), + false, + ); + planned.auxilliary.0.retain(|output| output.kind() == OutputType::Branch); + planned.auxilliary.0 + }; + + // Now execute each layer of the tree + tree = match tree.remove(0) { + TreeTransaction::Leaves { .. } => vec![], + TreeTransaction::Branch { children, .. 
} => children, + }; + while !tree.is_empty() { + // Sort the branch outputs by their value + branch_outputs.sort_by_key(|a| a.balance().amount.0); + // Sort the transactions we should create by their value so they share an order with the + // branch outputs + tree.sort_by_key(TreeTransaction::value); + + // If we dropped any Branch outputs, drop the associated children + tree.truncate(branch_outputs.len()); + assert_eq!(branch_outputs.len(), tree.len()); + + let branch_outputs_for_this_layer = branch_outputs; + let this_layer = tree; + branch_outputs = vec![]; + tree = vec![]; + + for (branch_output, tx) in branch_outputs_for_this_layer.into_iter().zip(this_layer) { + assert_eq!(branch_output.kind(), OutputType::Branch); + + let Some(payments) = tx.payments(coin, &branch_address, branch_output.balance().amount.0) + else { + // If this output has become too small to satisfy this branch, drop it continue; }; - // Send the transactions off for signing - TransactionsToSign::::send(txn, &key, &planned.signable); - // Push the Eventualities onto the result - eventualities.push(planned.eventuality); - // Accumulate the outputs - Db::set_outputs(txn, key, *coin, &outputs); - accumulate_outputs(txn, planned.auxilliary.0); - outputs = Db::outputs(txn, key, *coin).unwrap(); - } - Db::::set_operating_costs(txn, *coin, Amount(operating_costs)); - } - - // Now, handle the payments - let mut payments = Db::::queued_payments(txn, key, *coin).unwrap(); - if payments.is_empty() { - continue; - } - - // If this is our only key, our outputs and operating costs should be greater than the - // payments' value - if active_keys.len() == 1 { - // The available amount to fulfill is the amount we have plus the amount we'll reduce by - // An alternative formulation would be `outputs >= (payments - operating costs)`, but - // that'd risk underflow - let value_available = - operating_costs + outputs.iter().map(|output| output.balance().amount.0).sum::(); - - assert!( - value_available >= payments.iter().map(|payment| payment.balance().amount.0).sum::() - ); - } - - // Find the set of payments we should fulfill at this time - loop { - let value_available = - operating_costs + outputs.iter().map(|output| output.balance().amount.0).sum::(); - - // Drop to just the payments we currently have the outputs for - { - let mut can_handle = 0; - let mut value_used = 0; - for payment in payments { - value_used += payment.balance().amount.0; - if value_available < value_used { - break; - } - can_handle += 1; - } - - let remaining_payments = payments.drain(can_handle ..).collect::>(); - // Restore the rest to the database - Db::::set_queued_payments(txn, key, *coin, &remaining_payments); - } - let payments_value = payments.iter().map(|payment| payment.balance().amount.0).sum::(); - - // If these payments are worth less than the operating costs, immediately drop them - if payments_value <= operating_costs { - operating_costs -= payments_value; - Db::::set_operating_costs(txn, *coin, Amount(operating_costs)); - - // Reset payments to the queued payments - payments = Db::::queued_payments(txn, key, *coin).unwrap(); - // If there's no more payments, stop looking for which payments we should fulfill - if payments.is_empty() { - break; - } - - // Find which of these we should handle - continue; - } - - break; - } - if payments.is_empty() { - continue; - } - - // Create a tree to fulfill all of the payments - #[derive(Clone)] - struct TreeTransaction { - payments: Vec>>, - children: Vec>, - value: u64, - } - let mut tree_transactions = 
vec![]; - for payments in payments.chunks(P::MAX_OUTPUTS) { - let value = payments.iter().map(|payment| payment.balance().amount.0).sum::(); - tree_transactions.push(TreeTransaction:: { - payments: payments.to_vec(), - children: vec![], - value, - }); - } - // While we haven't calculated a tree root, or the tree root doesn't support a change output, - // keep working - while (tree_transactions.len() != 1) || - (tree_transactions[0].payments.len() == P::MAX_OUTPUTS) - { - let mut next_tree_transactions = vec![]; - for children in tree_transactions.chunks(P::MAX_OUTPUTS) { - // If this is the last chunk, and it doesn't need to accumulated, continue - if (children.len() < P::MAX_OUTPUTS) && - ((next_tree_transactions.len() + children.len()) < P::MAX_OUTPUTS) - { - for child in children { - next_tree_transactions.push(child.clone()); - } - continue; - } - - let payments = children - .iter() - .map(|child| { - Payment::new( - P::branch_address(key), - Balance { coin: *coin, amount: Amount(child.value) }, - None, - ) - }) - .collect(); - let value = children.iter().map(|child| child.value).sum(); - next_tree_transactions.push(TreeTransaction { - payments, - children: children.to_vec(), - value, - }); - } - tree_transactions = next_tree_transactions; - } - - // This is recursive, yet only recurses with logarithmic depth - fn execute_tree_transaction< - S: ScannerFeed, - P: TransactionPlanner>, - >( - txn: &mut impl DbTxn, - fee_rate: P::FeeRate, - eventualities: &mut Vec>, - key: KeyFor, - mut branch_outputs: Vec>, - mut children: Vec>, - ) { - assert_eq!(branch_outputs.len(), children.len()); - - // Sort the branch outputs by their value - branch_outputs.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0)); - // Find the child for each branch output - // This is only done within a transaction, not across the layer, so we don't have branches - // created in transactions with less outputs (and therefore less fees) jump places with - // other branches - children.sort_by(|a, b| a.value.cmp(&b.value)); - - for (branch_output, mut child) in branch_outputs.into_iter().zip(children) { - assert_eq!(branch_output.kind(), OutputType::Branch); - Db::::set_already_accumulated_output(txn, branch_output.id()); - - // We need to compensate for the value of this output being less than the value of the - // payments - { - let fee_to_amortize = child.value - branch_output.balance().amount.0; - let mut amortized = 0; - 'outer: while (!child.payments.is_empty()) && (amortized < fee_to_amortize) { - let adjusted_fee = fee_to_amortize - amortized; - let payments_len = u64::try_from(child.payments.len()).unwrap(); - let per_payment_fee_check = adjusted_fee.div_ceil(payments_len); - - let mut i = 0; - while i < child.payments.len() { - let amount = child.payments[i].balance().amount.0; - if amount <= per_payment_fee_check { - child.payments.swap_remove(i); - child.children.swap_remove(i); - amortized += amount; - continue 'outer; - } - i += 1; - } - - // Since all payments can pay the fee, deduct accordingly - for (i, payment) in child.payments.iter_mut().enumerate() { - let Balance { coin, amount } = payment.balance(); - let mut amount = amount.0; - amount -= adjusted_fee / payments_len; - if i < usize::try_from(adjusted_fee % payments_len).unwrap() { - amount -= 1; - } - - *payment = Payment::new( - payment.address().clone(), - Balance { coin, amount: Amount(amount) }, - None, - ); - } - } - if child.payments.is_empty() { - continue; - } - } - - let Some(planned) = P::plan_transaction_with_fee_amortization( 
+ let branch_output_id = branch_output.id(); + let Some(mut planned) = P::plan_transaction_with_fee_amortization( // Uses 0 as there's no operating costs to incur/amortize here &mut 0, - fee_rate, + P::fee_rate(block, coin), vec![branch_output], - child.payments, + payments, None, ) else { // This Branch isn't viable, so drop it (and its children) continue; }; + // Since we've made a TX spending this output, don't accumulate it later + Db::::set_already_accumulated_output(txn, &branch_output_id); TransactionsToSign::::send(txn, &key, &planned.signable); eventualities.push(planned.eventuality); - if !child.children.is_empty() { - execute_tree_transaction::( - txn, - fee_rate, - eventualities, - key, - planned.auxilliary.0, - child.children, - ); + + match tx { + TreeTransaction::Leaves { .. } => {} + // If this was a branch, handle its children + TreeTransaction::Branch { mut children, .. } => { + branch_outputs.append(&mut planned.auxilliary.0); + tree.append(&mut children); + } } } } - - assert_eq!(tree_transactions.len(), 1); - assert!((tree_transactions[0].payments.len() + 1) <= P::MAX_OUTPUTS); - - // Create the transaction for the root of the tree - let Some(planned) = P::plan_transaction_with_fee_amortization( - &mut operating_costs, - fee_rates[coin], - outputs, - tree_transactions[0].payments, - Some(key_for_change), - ) else { - Db::::set_operating_costs(txn, *coin, Amount(operating_costs)); - continue; - }; - TransactionsToSign::::send(txn, &key, &planned.signable); - eventualities.push(planned.eventuality); - - // We accumulate the change output, but consume the branches here - accumulate_outputs( - txn, - planned - .auxilliary - .0 - .iter() - .filter(|output| output.kind() == OutputType::Change) - .cloned() - .collect(), - ); - // Filter the outputs to the change outputs - let mut branch_outputs = planned.auxilliary.0; - branch_outputs.retain(|output| output.kind() == OutputType::Branch); - - if !tree_transactions[0].children.is_empty() { - execute_tree_transaction::( - txn, - fee_rates[coin], - &mut eventualities, - key, - branch_outputs, - tree_transactions[0].children, - ); - } } eventualities @@ -355,16 +459,21 @@ impl>> Sched impl>> SchedulerTrait for Scheduler { - fn activate_key(&mut self, txn: &mut impl DbTxn, key: KeyFor) { + fn activate_key(txn: &mut impl DbTxn, key: KeyFor) { for coin in S::NETWORK.coins() { assert!(Db::::outputs(txn, key, *coin).is_none()); Db::::set_outputs(txn, key, *coin, &[]); assert!(Db::::queued_payments(txn, key, *coin).is_none()); - Db::::set_queued_payments(txn, key, *coin, &vec![]); + Db::::set_queued_payments(txn, key, *coin, &[]); } } - fn flush_key(&mut self, txn: &mut impl DbTxn, retiring_key: KeyFor, new_key: KeyFor) { + fn flush_key( + txn: &mut impl DbTxn, + _block: &BlockFor, + retiring_key: KeyFor, + new_key: KeyFor, + ) { for coin in S::NETWORK.coins() { let still_queued = Db::::queued_payments(txn, retiring_key, *coin).unwrap(); let mut new_queued = Db::::queued_payments(txn, new_key, *coin).unwrap(); @@ -372,12 +481,14 @@ impl>> Sched let mut queued = still_queued; queued.append(&mut new_queued); - Db::::set_queued_payments(txn, retiring_key, *coin, &vec![]); + Db::::set_queued_payments(txn, retiring_key, *coin, &[]); Db::::set_queued_payments(txn, new_key, *coin, &queued); + + // TODO: Forward all existing outputs } } - fn retire_key(&mut self, txn: &mut impl DbTxn, key: KeyFor) { + fn retire_key(txn: &mut impl DbTxn, key: KeyFor) { for coin in S::NETWORK.coins() { assert!(Db::::outputs(txn, key, *coin).unwrap().is_empty()); 
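// For reference, step() above opens by deriving the change key from the oldest
// key's lifetime stage. A self-contained sketch of that selection (keys as u8,
// LifetimeStage as a local enum):
#[derive(Clone, Copy)]
enum Stage {
  ActiveYetNotReporting,
  Active,
  UsingNewForChange,
  Forwarding,
  Finishing,
}

fn key_for_change(active_keys: &[(u8, Stage)]) -> u8 {
  match active_keys[0].1 {
    // Payments shouldn't be fulfilled before the oldest key reports its blocks
    Stage::ActiveYetNotReporting => {
      panic!("expected to fulfill payments despite not reporting for the oldest key")
    }
    // While solely active, the oldest key takes its own change
    Stage::Active => active_keys[0].0,
    // Once rotation has begun, change flows to the newer key
    Stage::UsingNewForChange | Stage::Forwarding | Stage::Finishing => active_keys[1].0,
  }
}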
Db::::del_outputs(txn, key, *coin); @@ -387,48 +498,18 @@ impl>> Sched } fn update( - &mut self, txn: &mut impl DbTxn, block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], update: SchedulerUpdate, ) -> HashMap, Vec>> { - // Accumulate all the outputs - for (key, _) in active_keys { - // Accumulate them in memory - let mut outputs_by_coin = HashMap::with_capacity(1); - for output in update.outputs().iter().filter(|output| output.key() == *key) { - match output.kind() { - OutputType::External | OutputType::Forwarded => {} - // Only accumulate these if we haven't already - OutputType::Branch | OutputType::Change => { - if Db::::take_if_already_accumulated_output(txn, output.id()) { - continue; - } - } - } - let coin = output.balance().coin; - if let std::collections::hash_map::Entry::Vacant(e) = outputs_by_coin.entry(coin) { - e.insert(Db::::outputs(txn, *key, coin).unwrap()); - } - outputs_by_coin.get_mut(&coin).unwrap().push(output.clone()); - } - - // Flush them to the database - for (coin, outputs) in outputs_by_coin { - Db::::set_outputs(txn, *key, coin, &outputs); - } - } - - let fee_rates = block.fee_rates(); + Self::accumulate_outputs(txn, update.outputs().to_vec(), true); // Fulfill the payments we prior couldn't let mut eventualities = HashMap::new(); for (key, _stage) in active_keys { - eventualities.insert( - key.to_bytes().as_ref().to_vec(), - self.handle_queued_payments(txn, active_keys, fee_rates, *key), - ); + eventualities + .insert(key.to_bytes().as_ref().to_vec(), Self::step(txn, active_keys, block, *key)); } // TODO: If this key has been flushed, forward all outputs @@ -448,7 +529,7 @@ impl>> Sched // This uses 0 for the operating costs as we don't incur any here // If the output can't pay for itself to be forwarded, we simply drop it &mut 0, - fee_rates[&forward.balance().coin], + P::fee_rate(block, forward.balance().coin), vec![forward.clone()], vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance(), None)], None, @@ -465,7 +546,7 @@ impl>> Sched // This uses 0 for the operating costs as we don't incur any here // If the output can't pay for itself to be returned, we simply drop it &mut 0, - fee_rates[&out_instruction.balance().coin], + P::fee_rate(block, out_instruction.balance().coin), vec![to_return.output().clone()], vec![out_instruction], None, @@ -480,7 +561,7 @@ impl>> Sched TransactionsToSign::::send(txn, &key, &planned_tx.signable); // Insert the Eventualities into the result - eventualities[key.to_bytes().as_ref()].push(planned_tx.eventuality); + eventualities.get_mut(key.to_bytes().as_ref()).unwrap().push(planned_tx.eventuality); } eventualities @@ -488,11 +569,10 @@ impl>> Sched } fn fulfill( - &mut self, txn: &mut impl DbTxn, block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], - mut payments: Vec>>, + payments: Vec>>, ) -> HashMap, Vec>> { // Find the key to filfill these payments with let fulfillment_key = match active_keys[0].1 { @@ -514,7 +594,7 @@ impl>> Sched // Handle the queued payments HashMap::from([( fulfillment_key.to_bytes().as_ref().to_vec(), - self.handle_queued_payments(txn, active_keys, block.fee_rates(), fulfillment_key), + Self::step(txn, active_keys, block, fulfillment_key), )]) } } From 0c1aec29bb8b88f97dd345ab9cc732898eaa4e9b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 4 Sep 2024 02:06:21 -0400 Subject: [PATCH 071/368] Finish routing output flushing Completes the transaction-chaining scheduler. 
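The flushing being routed here can be pictured with a minimal, self-contained model (u64 output values, a fixed MAX_INPUTS, fees ignored): the retiring key's outputs are aggregated until a single transaction can spend them all, and that final transaction sweeps the remainder to the new key, as flush_outputs below does with real planning and fee amortization.

const MAX_INPUTS: usize = 3;

// Returns the input set of each planned transaction; every transaction but the
// last pays its sum back to the retiring key as change, while the last sweeps
// everything to the new key
fn plan_flush(mut outputs: Vec<u64>) -> Vec<Vec<u64>> {
  let mut transactions = vec![];
  while outputs.len() > MAX_INPUTS {
    let aggregated: Vec<u64> = outputs.drain(.. MAX_INPUTS).collect();
    // The change output re-enters the pool of spendable outputs
    outputs.push(aggregated.iter().sum());
    transactions.push(aggregated);
  }
  if !outputs.is_empty() {
    transactions.push(outputs);
  }
  transactions
}

In the actual scheduler, each planned transaction is additionally sent to the signers and its Eventuality returned for the scanner to watch.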
--- processor/scanner/src/eventuality/mod.rs | 4 +- processor/scanner/src/lib.rs | 3 +- .../utxo/transaction-chaining/src/lib.rs | 79 ++++++++++++++++--- 3 files changed, 73 insertions(+), 13 deletions(-) diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 84670f79..6db60b71 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -455,7 +455,9 @@ impl> ContinuallyRan for EventualityTas key.key != keys.last().unwrap().key, "key which was forwarding was the last key (which has no key after it to forward to)" ); - Sch::flush_key(&mut txn, &block, key.key, keys.last().unwrap().key); + let new_eventualities = + Sch::flush_key(&mut txn, &block, key.key, keys.last().unwrap().key); + intake_eventualities::(&mut txn, new_eventualities); } // Now that we've intaked any Eventualities caused, check if we're retiring any keys diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 8ecb731f..ecefb9a8 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -259,13 +259,12 @@ pub trait Scheduler: 'static + Send { /// /// If the retiring key has any unfulfilled payments associated with it, those MUST be made /// the responsibility of the new key. - // TODO: This needs to return a HashMap for the eventualities fn flush_key( txn: &mut impl DbTxn, block: &BlockFor, retiring_key: KeyFor, new_key: KeyFor, - ); + ) -> HashMap, Vec>>; /// Retire a key as it'll no longer be used. /// diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index 7359a87c..321d4b60 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -454,6 +454,42 @@ impl>> Sched eventualities } + + fn flush_outputs( + txn: &mut impl DbTxn, + eventualities: &mut HashMap, Vec>>, + block: &BlockFor, + from: KeyFor, + to: KeyFor, + coin: Coin, + ) { + let from_bytes = from.to_bytes().as_ref().to_vec(); + // Ensure our inputs are aggregated + eventualities + .entry(from_bytes.clone()) + .or_insert(vec![]) + .append(&mut Self::aggregate_inputs(txn, block, to, from, coin)); + + // Now that our inputs are aggregated, transfer all of them to the new key + let mut operating_costs = Db::::operating_costs(txn, coin).0; + let outputs = Db::::outputs(txn, from, coin).unwrap(); + if outputs.is_empty() { + return; + } + let planned = P::plan_transaction_with_fee_amortization( + &mut operating_costs, + P::fee_rate(block, coin), + outputs, + vec![], + Some(to), + ); + Db::::set_operating_costs(txn, coin, Amount(operating_costs)); + let Some(planned) = planned else { return }; + + TransactionsToSign::::send(txn, &from, &planned.signable); + eventualities.get_mut(&from_bytes).unwrap().push(planned.eventuality); + Self::accumulate_outputs(txn, planned.auxilliary.0, false); + } } impl>> SchedulerTrait @@ -470,22 +506,28 @@ impl>> Sched fn flush_key( txn: &mut impl DbTxn, - _block: &BlockFor, + block: &BlockFor, retiring_key: KeyFor, new_key: KeyFor, - ) { + ) -> HashMap, Vec>> { + let mut eventualities = HashMap::new(); for coin in S::NETWORK.coins() { - let still_queued = Db::::queued_payments(txn, retiring_key, *coin).unwrap(); - let mut new_queued = Db::::queued_payments(txn, new_key, *coin).unwrap(); + // Move the payments to the new key + { + let still_queued = Db::::queued_payments(txn, retiring_key, *coin).unwrap(); + let mut new_queued = Db::::queued_payments(txn, 
new_key, *coin).unwrap(); - let mut queued = still_queued; - queued.append(&mut new_queued); + let mut queued = still_queued; + queued.append(&mut new_queued); - Db::::set_queued_payments(txn, retiring_key, *coin, &[]); - Db::::set_queued_payments(txn, new_key, *coin, &queued); + Db::::set_queued_payments(txn, retiring_key, *coin, &[]); + Db::::set_queued_payments(txn, new_key, *coin, &queued); + } - // TODO: Forward all existing outputs + // Move the outputs to the new key + Self::flush_outputs(txn, &mut eventualities, block, retiring_key, new_key, *coin); } + eventualities } fn retire_key(txn: &mut impl DbTxn, key: KeyFor) { @@ -512,7 +554,24 @@ impl>> Sched .insert(key.to_bytes().as_ref().to_vec(), Self::step(txn, active_keys, block, *key)); } - // TODO: If this key has been flushed, forward all outputs + // If this key has been flushed, forward all outputs + match active_keys[0].1 { + LifetimeStage::ActiveYetNotReporting | + LifetimeStage::Active | + LifetimeStage::UsingNewForChange => {} + LifetimeStage::Forwarding | LifetimeStage::Finishing => { + for coin in S::NETWORK.coins() { + Self::flush_outputs( + txn, + &mut eventualities, + block, + active_keys[0].0, + active_keys[1].0, + *coin, + ); + } + } + } // Create the transactions for the forwards/burns { From 6e9cb740224beff9535d8230304ddbac44cbf36b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 4 Sep 2024 03:54:12 -0400 Subject: [PATCH 072/368] Add non-transaction-chaining scheduler --- .github/workflows/tests.yml | 1 + Cargo.lock | 16 + Cargo.toml | 1 + deny.toml | 1 + processor/primitives/src/output.rs | 12 +- processor/primitives/src/payment.rs | 3 +- processor/scheduler/primitives/Cargo.toml | 3 + .../scheduler/utxo/primitives/Cargo.toml | 2 + .../scheduler/utxo/primitives/src/lib.rs | 69 ++- .../scheduler/utxo/primitives/src/tree.rs | 146 +++++ processor/scheduler/utxo/standard/Cargo.toml | 35 ++ processor/scheduler/utxo/standard/LICENSE | 15 + processor/scheduler/utxo/standard/README.md | 17 + processor/scheduler/utxo/standard/src/db.rs | 113 ++++ processor/scheduler/utxo/standard/src/lib.rs | 508 ++++++++++++++++++ .../utxo/transaction-chaining/README.md | 2 +- .../utxo/transaction-chaining/src/lib.rs | 152 +----- 17 files changed, 951 insertions(+), 145 deletions(-) create mode 100644 processor/scheduler/utxo/primitives/src/tree.rs create mode 100644 processor/scheduler/utxo/standard/Cargo.toml create mode 100644 processor/scheduler/utxo/standard/LICENSE create mode 100644 processor/scheduler/utxo/standard/README.md create mode 100644 processor/scheduler/utxo/standard/src/db.rs create mode 100644 processor/scheduler/utxo/standard/src/lib.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index ca0bd4f5..a6260579 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -45,6 +45,7 @@ jobs: -p serai-processor-scanner \ -p serai-processor-scheduler-primitives \ -p serai-processor-utxo-scheduler-primitives \ + -p serai-processor-utxo-scheduler \ -p serai-processor-transaction-chaining-scheduler \ -p serai-processor \ -p tendermint-machine \ diff --git a/Cargo.lock b/Cargo.lock index dd1cc19e..b3fa4e36 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8733,11 +8733,27 @@ dependencies = [ "serai-processor-utxo-scheduler-primitives", ] +[[package]] +name = "serai-processor-utxo-scheduler" +version = "0.1.0" +dependencies = [ + "borsh", + "group", + "parity-scale-codec", + "serai-db", + "serai-primitives", + "serai-processor-primitives", + "serai-processor-scanner", + 
"serai-processor-scheduler-primitives", + "serai-processor-utxo-scheduler-primitives", +] + [[package]] name = "serai-processor-utxo-scheduler-primitives" version = "0.1.0" dependencies = [ "async-trait", + "borsh", "serai-primitives", "serai-processor-primitives", "serai-processor-scanner", diff --git a/Cargo.toml b/Cargo.toml index b61cde68..a2d86c82 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -77,6 +77,7 @@ members = [ "processor/scanner", "processor/scheduler/primitives", "processor/scheduler/utxo/primitives", + "processor/scheduler/utxo/standard", "processor/scheduler/utxo/transaction-chaining", "processor", diff --git a/deny.toml b/deny.toml index 2ca0ca50..16d3cbea 100644 --- a/deny.toml +++ b/deny.toml @@ -52,6 +52,7 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-processor-scanner" }, { allow = ["AGPL-3.0"], name = "serai-processor-scheduler-primitives" }, { allow = ["AGPL-3.0"], name = "serai-processor-utxo-scheduler-primitives" }, + { allow = ["AGPL-3.0"], name = "serai-processor-standard-scheduler" }, { allow = ["AGPL-3.0"], name = "serai-processor-transaction-chaining-scheduler" }, { allow = ["AGPL-3.0"], name = "serai-processor" }, diff --git a/processor/primitives/src/output.rs b/processor/primitives/src/output.rs index d59b4fd0..cbfe59f3 100644 --- a/processor/primitives/src/output.rs +++ b/processor/primitives/src/output.rs @@ -3,12 +3,22 @@ use std::io; use group::GroupEncoding; +use borsh::{BorshSerialize, BorshDeserialize}; + use serai_primitives::{ExternalAddress, Balance}; use crate::Id; /// An address on the external network. -pub trait Address: Send + Sync + Clone + Into + TryFrom { +pub trait Address: + Send + + Sync + + Clone + + Into + + TryFrom + + BorshSerialize + + BorshDeserialize +{ /// Write this address. fn write(&self, writer: &mut impl io::Write) -> io::Result<()>; /// Read an address. diff --git a/processor/primitives/src/payment.rs b/processor/primitives/src/payment.rs index bf3c918c..67a5bbad 100644 --- a/processor/primitives/src/payment.rs +++ b/processor/primitives/src/payment.rs @@ -1,6 +1,7 @@ use std::io; use scale::{Encode, Decode, IoReader}; +use borsh::{BorshSerialize, BorshDeserialize}; use serai_primitives::{Balance, Data}; use serai_coins_primitives::OutInstructionWithBalance; @@ -8,7 +9,7 @@ use serai_coins_primitives::OutInstructionWithBalance; use crate::Address; /// A payment to fulfill. 
-#[derive(Clone)] +#[derive(Clone, BorshSerialize, BorshDeserialize)] pub struct Payment { address: A, balance: Balance, diff --git a/processor/scheduler/primitives/Cargo.toml b/processor/scheduler/primitives/Cargo.toml index 31d73853..cdf12cbb 100644 --- a/processor/scheduler/primitives/Cargo.toml +++ b/processor/scheduler/primitives/Cargo.toml @@ -13,6 +13,9 @@ publish = false all-features = true rustdoc-args = ["--cfg", "docsrs"] +[package.metadata.cargo-machete] +ignored = ["scale", "borsh"] + [lints] workspace = true diff --git a/processor/scheduler/utxo/primitives/Cargo.toml b/processor/scheduler/utxo/primitives/Cargo.toml index 4f2499f9..85935ae0 100644 --- a/processor/scheduler/utxo/primitives/Cargo.toml +++ b/processor/scheduler/utxo/primitives/Cargo.toml @@ -19,6 +19,8 @@ workspace = true [dependencies] async-trait = { version = "0.1", default-features = false } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + serai-primitives = { path = "../../../../substrate/primitives", default-features = false, features = ["std"] } primitives = { package = "serai-processor-primitives", path = "../../../primitives" } diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs index 81d5ebd7..274eb2a4 100644 --- a/processor/scheduler/utxo/primitives/src/lib.rs +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -8,6 +8,9 @@ use primitives::{ReceivedOutput, Payment}; use scanner::{ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, BlockFor}; use scheduler_primitives::*; +mod tree; +pub use tree::*; + /// A planned transaction. pub struct PlannedTransaction { /// The signable transaction. @@ -18,6 +21,23 @@ pub struct PlannedTransaction { pub auxilliary: A, } +/// A planned transaction which was created via amortizing the fee. +pub struct AmortizePlannedTransaction { + /// The amounts the included payments were worth. + /// + /// If the payments passed as an argument are sorted from highest to lowest valued, these `n` + /// amounts will be for the first `n` payments. + pub effected_payments: Vec, + /// Whether or not the planned transaction had a change output. + pub has_change: bool, + /// The signable transaction. + pub signable: ST, + /// The Eventuality to watch for. + pub eventuality: EventualityFor, + /// The auxilliary data for this transaction. + pub auxilliary: A, +} + /// An object able to plan a transaction. #[async_trait::async_trait] pub trait TransactionPlanner: 'static + Send + Sync { @@ -60,7 +80,8 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// This must only require the same fee as would be returned by `calculate_fee`. The caller is /// trusted to maintain `sum(inputs) - sum(payments) >= if change.is_some() { DUST } else { 0 }`. /// - /// `change` will always be an address belonging to the Serai network. + /// `change` will always be an address belonging to the Serai network. If it is `Some`, a change + /// output must be created. 
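+  ///
+  /// A hypothetical sketch of this contract, with illustrative numbers only:
+  /// ```ignore
+  /// // Given inputs worth 100, payments worth 90, and a fee of 5:
+  /// // - change == Some(addr): a change output worth 5 MUST be created
+  /// // - change == None: the remaining 5 is effectively burnt to the fee
+  /// ```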
fn plan( fee_rate: Self::FeeRate, inputs: Vec>, @@ -82,7 +103,7 @@ pub trait TransactionPlanner: 'static + Send + Sync { inputs: Vec>, mut payments: Vec>>, mut change: Option>, - ) -> Option> { + ) -> Option> { // If there's no change output, we can't recoup any operating costs we would amortize // We also don't have any losses if the inputs are written off/the change output is reduced let mut operating_costs_if_no_change = 0; @@ -192,6 +213,48 @@ pub trait TransactionPlanner: 'static + Send + Sync { } // Because we amortized, or accrued as operating costs, the fee, make the transaction - Some(Self::plan(fee_rate, inputs, payments, change)) + let effected_payments = payments.iter().map(|payment| payment.balance().amount).collect(); + let has_change = change.is_some(); + let PlannedTransaction { signable, eventuality, auxilliary } = + Self::plan(fee_rate, inputs, payments, change); + Some(AmortizePlannedTransaction { + effected_payments, + has_change, + signable, + eventuality, + auxilliary, + }) + } + + /// Create a tree to fulfill a set of payments. + /// + /// Returns a `TreeTransaction` whose children (and arbitrary children of children) fulfill all + /// these payments. This tree root will be able to be made with a change output. + fn tree(payments: &[Payment>]) -> TreeTransaction> { + // This variable is for the current layer of the tree being built + let mut tree = Vec::with_capacity(payments.len().div_ceil(Self::MAX_OUTPUTS)); + + // Push the branches for the leaves (the payments out) + for payments in payments.chunks(Self::MAX_OUTPUTS) { + let value = payments.iter().map(|payment| payment.balance().amount.0).sum::(); + tree.push(TreeTransaction::>::Leaves { payments: payments.to_vec(), value }); + } + + // While we haven't calculated a tree root, or the tree root doesn't support a change output, + // keep working + while (tree.len() != 1) || (tree[0].children() == Self::MAX_OUTPUTS) { + let mut branch_layer = vec![]; + for children in tree.chunks(Self::MAX_OUTPUTS) { + branch_layer.push(TreeTransaction::>::Branch { + children: children.to_vec(), + value: children.iter().map(TreeTransaction::value).sum(), + }); + } + tree = branch_layer; + } + assert_eq!(tree.len(), 1); + let tree_root = tree.remove(0); + assert!((tree_root.children() + 1) <= Self::MAX_OUTPUTS); + tree_root } } diff --git a/processor/scheduler/utxo/primitives/src/tree.rs b/processor/scheduler/utxo/primitives/src/tree.rs new file mode 100644 index 00000000..b52f3ba3 --- /dev/null +++ b/processor/scheduler/utxo/primitives/src/tree.rs @@ -0,0 +1,146 @@ +use borsh::{BorshSerialize, BorshDeserialize}; + +use serai_primitives::{Coin, Amount, Balance}; + +use primitives::{Address, Payment}; +use scanner::ScannerFeed; + +/// A transaction within a tree to fulfill payments. +#[derive(Clone, BorshSerialize, BorshDeserialize)] +pub enum TreeTransaction { + /// A transaction for the leaves (payments) of the tree. + Leaves { + /// The payments within this transaction. + payments: Vec>, + /// The sum value of the payments. + value: u64, + }, + /// A transaction for the branches of the tree. + Branch { + /// The child transactions. + children: Vec, + /// The sum value of the child transactions. + value: u64, + }, +} +impl TreeTransaction { + /// How many children this transaction has. + /// + /// A child is defined as any dependent, whether payment or transaction. + pub fn children(&self) -> usize { + match self { + Self::Leaves { payments, .. } => payments.len(), + Self::Branch { children, .. 
} => children.len(), + } + } + + /// The value this transaction wants to spend. + pub fn value(&self) -> u64 { + match self { + Self::Leaves { value, .. } | Self::Branch { value, .. } => *value, + } + } + + /// The payments to make to enable this transaction's children. + /// + /// A child is defined as any dependent, whether payment or transaction. + /// + /// The input value given to this transaction MUST be less than or equal to the desired value. + /// The difference will be amortized over all dependents. + /// + /// Returns None if no payments should be made. Returns Some containing a non-empty Vec if any + /// payments should be made. + pub fn payments( + &self, + coin: Coin, + branch_address: &A, + input_value: u64, + ) -> Option>> { + // Fetch the amounts for the payments we'll make + let mut amounts: Vec<_> = match self { + Self::Leaves { payments, .. } => payments + .iter() + .map(|payment| { + assert_eq!(payment.balance().coin, coin); + Some(payment.balance().amount.0) + }) + .collect(), + Self::Branch { children, .. } => children.iter().map(|child| Some(child.value())).collect(), + }; + + // We need to reduce them so their sum is our input value + assert!(input_value <= self.value()); + let amount_to_amortize = self.value() - input_value; + + // If any payments won't survive the reduction, set them to None + let mut amortized = 0; + 'outer: while amounts.iter().any(Option::is_some) && (amortized < amount_to_amortize) { + let adjusted_fee = amount_to_amortize - amortized; + let amounts_len = + u64::try_from(amounts.iter().filter(|amount| amount.is_some()).count()).unwrap(); + let per_payment_fee_check = adjusted_fee.div_ceil(amounts_len); + + // Check each amount to see if it's not viable + let mut i = 0; + while i < amounts.len() { + if let Some(amount) = amounts[i] { + if amount.saturating_sub(per_payment_fee_check) < S::dust(coin).0 { + amounts[i] = None; + amortized += amount; + // If this amount wasn't viable, re-run with the new fee/amortization amounts + continue 'outer; + } + } + i += 1; + } + + // Now that we have the payments which will survive, reduce them + for (i, amount) in amounts.iter_mut().enumerate() { + if let Some(amount) = amount { + *amount -= adjusted_fee / amounts_len; + if i < usize::try_from(adjusted_fee % amounts_len).unwrap() { + *amount -= 1; + } + } + } + break; + } + + // Now that we have the reduced amounts, create the payments + let payments: Vec<_> = match self { + Self::Leaves { payments, .. } => { + payments + .iter() + .zip(amounts) + .filter_map(|(payment, amount)| { + amount.map(|amount| { + // The existing payment, with the new amount + Payment::new( + payment.address().clone(), + Balance { coin, amount: Amount(amount) }, + payment.data().clone(), + ) + }) + }) + .collect() + } + Self::Branch { .. 
} => { + amounts + .into_iter() + .filter_map(|amount| { + amount.map(|amount| { + // A branch output with the new amount + Payment::new(branch_address.clone(), Balance { coin, amount: Amount(amount) }, None) + }) + }) + .collect() + } + }; + + // Use None for vec![] so we never actually use vec![] + if payments.is_empty() { + None?; + } + Some(payments) + } +} diff --git a/processor/scheduler/utxo/standard/Cargo.toml b/processor/scheduler/utxo/standard/Cargo.toml new file mode 100644 index 00000000..d6c16161 --- /dev/null +++ b/processor/scheduler/utxo/standard/Cargo.toml @@ -0,0 +1,35 @@ +[package] +name = "serai-processor-utxo-scheduler" +version = "0.1.0" +description = "Scheduler for UTXO networks for the Serai processor" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/scheduler/utxo/standard" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[package.metadata.cargo-machete] +ignored = ["scale", "borsh"] + +[lints] +workspace = true + +[dependencies] +group = { version = "0.13", default-features = false } + +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +serai-primitives = { path = "../../../../substrate/primitives", default-features = false, features = ["std"] } + +serai-db = { path = "../../../../common/db" } + +primitives = { package = "serai-processor-primitives", path = "../../../primitives" } +scanner = { package = "serai-processor-scanner", path = "../../../scanner" } +scheduler-primitives = { package = "serai-processor-scheduler-primitives", path = "../../primitives" } +utxo-scheduler-primitives = { package = "serai-processor-utxo-scheduler-primitives", path = "../primitives" } diff --git a/processor/scheduler/utxo/standard/LICENSE b/processor/scheduler/utxo/standard/LICENSE new file mode 100644 index 00000000..e091b149 --- /dev/null +++ b/processor/scheduler/utxo/standard/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/processor/scheduler/utxo/standard/README.md b/processor/scheduler/utxo/standard/README.md new file mode 100644 index 00000000..8e5360f0 --- /dev/null +++ b/processor/scheduler/utxo/standard/README.md @@ -0,0 +1,17 @@ +# UTXO Scheduler + +A scheduler of transactions for networks premised on the UTXO model. + +### Design + +The scheduler is designed to achieve fulfillment of all expected payments with +an `O(1)` delay (regardless of prior scheduler state), `O(log n)` time, and +`O(log(n) + n)` computational complexity. + +For the time/computational complexity, we use a tree to fulfill payments. 
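+(For example, with a per-transaction limit of 16 outputs, an illustrative
+figure rather than a constant of this crate, one root transaction funds 16
+branch transactions which each make 16 payments, fulfilling 256 payments with
+a tree of depth two.)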
+This quickly gives us the ability to make as many outputs as necessary +(regardless of per-transaction output limits) and only has the latency of +including a chain of `O(log n)` transactions on-chain. The only computational +overhead is in creating the transactions which are branches in the tree. +Since we split off the root of the tree from a master output, the delay to start +fulfillment is the delay for the master output to re-appear on-chain. diff --git a/processor/scheduler/utxo/standard/src/db.rs b/processor/scheduler/utxo/standard/src/db.rs new file mode 100644 index 00000000..00761595 --- /dev/null +++ b/processor/scheduler/utxo/standard/src/db.rs @@ -0,0 +1,113 @@ +use core::marker::PhantomData; + +use group::GroupEncoding; + +use serai_primitives::{Coin, Amount, Balance}; + +use borsh::BorshDeserialize; +use serai_db::{Get, DbTxn, create_db, db_channel}; + +use primitives::{Payment, ReceivedOutput}; +use utxo_scheduler_primitives::TreeTransaction; +use scanner::{ScannerFeed, KeyFor, AddressFor, OutputFor}; + +create_db! { + UtxoScheduler { + OperatingCosts: (coin: Coin) -> Amount, + SerializedOutputs: (key: &[u8], coin: Coin) -> Vec, + SerializedQueuedPayments: (key: &[u8], coin: Coin) -> Vec, + } +} + +db_channel! { + UtxoScheduler { + PendingBranch: (key: &[u8], balance: Balance) -> Vec, + } +} + +pub(crate) struct Db(PhantomData); +impl Db { + pub(crate) fn operating_costs(getter: &impl Get, coin: Coin) -> Amount { + OperatingCosts::get(getter, coin).unwrap_or(Amount(0)) + } + pub(crate) fn set_operating_costs(txn: &mut impl DbTxn, coin: Coin, amount: Amount) { + OperatingCosts::set(txn, coin, &amount) + } + + pub(crate) fn outputs( + getter: &impl Get, + key: KeyFor, + coin: Coin, + ) -> Option>> { + let buf = SerializedOutputs::get(getter, key.to_bytes().as_ref(), coin)?; + let mut buf = buf.as_slice(); + + let mut res = Vec::with_capacity(buf.len() / 128); + while !buf.is_empty() { + res.push(OutputFor::::read(&mut buf).unwrap()); + } + Some(res) + } + pub(crate) fn set_outputs( + txn: &mut impl DbTxn, + key: KeyFor, + coin: Coin, + outputs: &[OutputFor], + ) { + let mut buf = Vec::with_capacity(outputs.len() * 128); + for output in outputs { + output.write(&mut buf).unwrap(); + } + SerializedOutputs::set(txn, key.to_bytes().as_ref(), coin, &buf); + } + pub(crate) fn del_outputs(txn: &mut impl DbTxn, key: KeyFor, coin: Coin) { + SerializedOutputs::del(txn, key.to_bytes().as_ref(), coin); + } + + pub(crate) fn queued_payments( + getter: &impl Get, + key: KeyFor, + coin: Coin, + ) -> Option>>> { + let buf = SerializedQueuedPayments::get(getter, key.to_bytes().as_ref(), coin)?; + let mut buf = buf.as_slice(); + + let mut res = Vec::with_capacity(buf.len() / 128); + while !buf.is_empty() { + res.push(Payment::read(&mut buf).unwrap()); + } + Some(res) + } + pub(crate) fn set_queued_payments( + txn: &mut impl DbTxn, + key: KeyFor, + coin: Coin, + queued: &[Payment>], + ) { + let mut buf = Vec::with_capacity(queued.len() * 128); + for queued in queued { + queued.write(&mut buf).unwrap(); + } + SerializedQueuedPayments::set(txn, key.to_bytes().as_ref(), coin, &buf); + } + pub(crate) fn del_queued_payments(txn: &mut impl DbTxn, key: KeyFor, coin: Coin) { + SerializedQueuedPayments::del(txn, key.to_bytes().as_ref(), coin); + } + + pub(crate) fn queue_pending_branch( + txn: &mut impl DbTxn, + key: KeyFor, + balance: Balance, + child: &TreeTransaction>, + ) { + PendingBranch::send(txn, key.to_bytes().as_ref(), balance, &borsh::to_vec(child).unwrap()) + } + pub(crate) fn 
take_pending_branch( + txn: &mut impl DbTxn, + key: KeyFor, + balance: Balance, + ) -> Option>> { + PendingBranch::try_recv(txn, key.to_bytes().as_ref(), balance) + .map(|bytes| TreeTransaction::>::deserialize(&mut bytes.as_slice()).unwrap()) + } +} diff --git a/processor/scheduler/utxo/standard/src/lib.rs b/processor/scheduler/utxo/standard/src/lib.rs new file mode 100644 index 00000000..f69ca54b --- /dev/null +++ b/processor/scheduler/utxo/standard/src/lib.rs @@ -0,0 +1,508 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +use core::marker::PhantomData; +use std::collections::HashMap; + +use group::GroupEncoding; + +use serai_primitives::{Coin, Amount, Balance}; + +use serai_db::DbTxn; + +use primitives::{ReceivedOutput, Payment}; +use scanner::{ + LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, BlockFor, + SchedulerUpdate, Scheduler as SchedulerTrait, +}; +use scheduler_primitives::*; +use utxo_scheduler_primitives::*; + +mod db; +use db::Db; + +/// A scheduler of transactions for networks premised on the UTXO model. +pub struct Scheduler>(PhantomData, PhantomData

); + +impl> Scheduler { + fn aggregate_inputs( + txn: &mut impl DbTxn, + block: &BlockFor, + key_for_change: KeyFor, + key: KeyFor, + coin: Coin, + ) -> Vec> { + let mut eventualities = vec![]; + + let mut operating_costs = Db::::operating_costs(txn, coin).0; + let mut outputs = Db::::outputs(txn, key, coin).unwrap(); + outputs.sort_by_key(|output| output.balance().amount.0); + while outputs.len() > P::MAX_INPUTS { + let to_aggregate = outputs.drain(.. P::MAX_INPUTS).collect::>(); + + let Some(planned) = P::plan_transaction_with_fee_amortization( + &mut operating_costs, + P::fee_rate(block, coin), + to_aggregate, + vec![], + Some(key_for_change), + ) else { + continue; + }; + + TransactionsToSign::::send(txn, &key, &planned.signable); + eventualities.push(planned.eventuality); + } + + Db::::set_outputs(txn, key, coin, &outputs); + Db::::set_operating_costs(txn, coin, Amount(operating_costs)); + eventualities + } + + fn fulfillable_payments( + txn: &mut impl DbTxn, + operating_costs: &mut u64, + key: KeyFor, + coin: Coin, + value_of_outputs: u64, + ) -> Vec>> { + // Fetch all payments for this key + let mut payments = Db::::queued_payments(txn, key, coin).unwrap(); + if payments.is_empty() { + return vec![]; + } + + loop { + // inputs must be >= (payments - operating costs) + // Accordingly, (inputs + operating costs) must be >= payments + let value_fulfillable = value_of_outputs + *operating_costs; + + // Drop to just the payments we can currently fulfill + { + let mut can_handle = 0; + let mut value_used = 0; + for payment in &payments { + value_used += payment.balance().amount.0; + if value_fulfillable < value_used { + break; + } + can_handle += 1; + } + + let remaining_payments = payments.drain(can_handle ..).collect::>(); + // Restore the rest to the database + Db::::set_queued_payments(txn, key, coin, &remaining_payments); + } + + // If these payments are worth less than the operating costs, immediately drop them + let payments_value = payments.iter().map(|payment| payment.balance().amount.0).sum::(); + if payments_value <= *operating_costs { + *operating_costs -= payments_value; + Db::::set_operating_costs(txn, coin, Amount(*operating_costs)); + + // Reset payments to the queued payments + payments = Db::::queued_payments(txn, key, coin).unwrap(); + // If there's no more payments, stop looking for which payments we should fulfill + if payments.is_empty() { + return vec![]; + } + // Find which of these we should handle + continue; + } + + return payments; + } + } + + fn queue_branches( + txn: &mut impl DbTxn, + key: KeyFor, + coin: Coin, + effected_payments: Vec, + tx: TreeTransaction>, + ) { + match tx { + TreeTransaction::Leaves { .. } => {} + TreeTransaction::Branch { mut children, .. } => { + children.sort_by_key(TreeTransaction::value); + children.reverse(); + + /* + This may only be a subset of payments but it'll be the originally-highest-valued + payments. `zip` will truncate to the first children which will be the highest-valued + children thanks to our sort. 
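+
+          E.g., if the children are valued [10, 7, 3] yet amortization dropped the
+          lowest-valued payment, `effected_payments` may be [9, 6]; `zip` then
+          pairs 9 with the 10-valued child and 6 with the 7-valued child,
+          discarding the 3-valued child (and, with it, that child's children).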
+ */ + for (amount, child) in effected_payments.into_iter().zip(children) { + Db::::queue_pending_branch(txn, key, Balance { coin, amount }, &child); + } + } + } + } + + fn handle_branch( + txn: &mut impl DbTxn, + block: &BlockFor, + eventualities: &mut Vec>, + output: OutputFor, + tx: TreeTransaction>, + ) -> bool { + let key = output.key(); + let coin = output.balance().coin; + let Some(payments) = tx.payments::(coin, &P::branch_address(key), output.balance().amount.0) + else { + // If this output has become too small to satisfy this branch, drop it + return false; + }; + + let Some(planned) = P::plan_transaction_with_fee_amortization( + // Uses 0 as there's no operating costs to incur/amortize here + &mut 0, + P::fee_rate(block, coin), + vec![output], + payments, + None, + ) else { + // This Branch isn't viable, so drop it (and its children) + return false; + }; + + TransactionsToSign::::send(txn, &key, &planned.signable); + eventualities.push(planned.eventuality); + + Self::queue_branches(txn, key, coin, planned.effected_payments, tx); + + true + } + + fn step( + txn: &mut impl DbTxn, + active_keys: &[(KeyFor, LifetimeStage)], + block: &BlockFor, + key: KeyFor, + ) -> Vec> { + let mut eventualities = vec![]; + + let key_for_change = match active_keys[0].1 { + LifetimeStage::ActiveYetNotReporting => { + panic!("expected to fulfill payments despite not reporting for the oldest key") + } + LifetimeStage::Active => active_keys[0].0, + LifetimeStage::UsingNewForChange | LifetimeStage::Forwarding | LifetimeStage::Finishing => { + active_keys[1].0 + } + }; + let branch_address = P::branch_address(key); + + 'coin: for coin in S::NETWORK.coins() { + let coin = *coin; + + // Perform any input aggregation we should + eventualities.append(&mut Self::aggregate_inputs(txn, block, key_for_change, key, coin)); + + // Fetch the operating costs/outputs + let mut operating_costs = Db::::operating_costs(txn, coin).0; + let outputs = Db::::outputs(txn, key, coin).unwrap(); + + // Fetch the fulfillable payments + let payments = Self::fulfillable_payments( + txn, + &mut operating_costs, + key, + coin, + outputs.iter().map(|output| output.balance().amount.0).sum(), + ); + if payments.is_empty() { + continue; + } + + // Create a tree to fulfill the payments + let mut tree = vec![P::tree(&payments)]; + + // Create the transaction for the root of the tree + // Try creating this transaction twice, once with a change output and once with increased + // operating costs to ensure a change output (as necessary to meet the requirements of the + // scanner API) + let mut planned_outer = None; + for i in 0 .. 2 { + let Some(planned) = P::plan_transaction_with_fee_amortization( + &mut operating_costs, + P::fee_rate(block, coin), + outputs.clone(), + tree[0] + .payments::(coin, &branch_address, tree[0].value()) + .expect("payments were dropped despite providing an input of the needed value"), + Some(key_for_change), + ) else { + // This should trip on the first iteration or not at all + assert_eq!(i, 0); + // This doesn't have inputs even worth aggregating so drop the entire tree + Db::::set_operating_costs(txn, coin, Amount(operating_costs)); + continue 'coin; + }; + + // If this doesn't have a change output, increase operating costs and try again + if !planned.has_change { + /* + Since we'll create a change output if it's worth at least dust, amortizing dust from + the payments should solve this. 
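+
+            (E.g., with a dust of 50, the retry amortizes an extra 50 from the
+            payments; if they can bear it, at least 50 in surplus value then
+            exists to fund the change output.)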
If the new transaction can't afford those operating + costs, then the payments should be amortized out, causing there to be a change or no + transaction at all. + */ + operating_costs += S::dust(coin).0; + continue; + } + + // Since this had a change output, move forward with it + planned_outer = Some(planned); + break; + } + let Some(planned) = planned_outer else { + panic!("couldn't create a tree root with a change output") + }; + Db::::set_operating_costs(txn, coin, Amount(operating_costs)); + TransactionsToSign::::send(txn, &key, &planned.signable); + eventualities.push(planned.eventuality); + + // Now save the next layer of the tree to the database + // We'll execute it when it appears + Self::queue_branches(txn, key, coin, planned.effected_payments, tree.remove(0)); + } + + eventualities + } + + fn flush_outputs( + txn: &mut impl DbTxn, + eventualities: &mut HashMap, Vec>>, + block: &BlockFor, + from: KeyFor, + to: KeyFor, + coin: Coin, + ) { + let from_bytes = from.to_bytes().as_ref().to_vec(); + // Ensure our inputs are aggregated + eventualities + .entry(from_bytes.clone()) + .or_insert(vec![]) + .append(&mut Self::aggregate_inputs(txn, block, to, from, coin)); + + // Now that our inputs are aggregated, transfer all of them to the new key + let mut operating_costs = Db::::operating_costs(txn, coin).0; + let outputs = Db::::outputs(txn, from, coin).unwrap(); + if outputs.is_empty() { + return; + } + let planned = P::plan_transaction_with_fee_amortization( + &mut operating_costs, + P::fee_rate(block, coin), + outputs, + vec![], + Some(to), + ); + Db::::set_operating_costs(txn, coin, Amount(operating_costs)); + let Some(planned) = planned else { return }; + + TransactionsToSign::::send(txn, &from, &planned.signable); + eventualities.get_mut(&from_bytes).unwrap().push(planned.eventuality); + } +} + +impl> SchedulerTrait for Scheduler { + fn activate_key(txn: &mut impl DbTxn, key: KeyFor) { + for coin in S::NETWORK.coins() { + assert!(Db::::outputs(txn, key, *coin).is_none()); + Db::::set_outputs(txn, key, *coin, &[]); + assert!(Db::::queued_payments(txn, key, *coin).is_none()); + Db::::set_queued_payments(txn, key, *coin, &[]); + } + } + + fn flush_key( + txn: &mut impl DbTxn, + block: &BlockFor, + retiring_key: KeyFor, + new_key: KeyFor, + ) -> HashMap, Vec>> { + let mut eventualities = HashMap::new(); + for coin in S::NETWORK.coins() { + // Move the payments to the new key + { + let still_queued = Db::::queued_payments(txn, retiring_key, *coin).unwrap(); + let mut new_queued = Db::::queued_payments(txn, new_key, *coin).unwrap(); + + let mut queued = still_queued; + queued.append(&mut new_queued); + + Db::::set_queued_payments(txn, retiring_key, *coin, &[]); + Db::::set_queued_payments(txn, new_key, *coin, &queued); + } + + // Move the outputs to the new key + Self::flush_outputs(txn, &mut eventualities, block, retiring_key, new_key, *coin); + } + eventualities + } + + fn retire_key(txn: &mut impl DbTxn, key: KeyFor) { + for coin in S::NETWORK.coins() { + assert!(Db::::outputs(txn, key, *coin).unwrap().is_empty()); + Db::::del_outputs(txn, key, *coin); + assert!(Db::::queued_payments(txn, key, *coin).unwrap().is_empty()); + Db::::del_queued_payments(txn, key, *coin); + } + } + + fn update( + txn: &mut impl DbTxn, + block: &BlockFor, + active_keys: &[(KeyFor, LifetimeStage)], + update: SchedulerUpdate, + ) -> HashMap, Vec>> { + let mut eventualities = HashMap::new(); + + // Accumulate the new outputs + { + let mut outputs_by_key = HashMap::new(); + for output in update.outputs() { + 
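+        // Each output either funds a pending branch transaction (checked just
+        // below) or is accumulated as a spendable output under its key and coin.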
// If this aligns for a branch, handle it
+        if let Some(branch) = Db::<S>::take_pending_branch(txn, output.key(), output.balance()) {
+          if Self::handle_branch(
+            txn,
+            block,
+            eventualities.entry(output.key().to_bytes().as_ref().to_vec()).or_insert(vec![]),
+            output.clone(),
+            branch,
+          ) {
+            // If we could use it for a branch, we do and move on
+            // Else, we let it be accumulated by the standard accumulation code
+            continue;
+          }
+        }
+
+        let coin = output.balance().coin;
+        outputs_by_key
+          // Index by key and coin
+          .entry((output.key().to_bytes().as_ref().to_vec(), coin))
+          // If we haven't accumulated here prior, read the outputs from the database
+          .or_insert_with(|| (output.key(), Db::<S>::outputs(txn, output.key(), coin).unwrap()))
+          .1
+          .push(output.clone());
+      }
+      // Write the outputs back to the database
+      for ((_key_vec, coin), (key, outputs)) in outputs_by_key {
+        Db::<S>::set_outputs(txn, key, coin, &outputs);
+      }
+    }
+
+    // Fulfill the payments we prior couldn't
+    for (key, _stage) in active_keys {
+      eventualities
+        .entry(key.to_bytes().as_ref().to_vec())
+        .or_insert(vec![])
+        .append(&mut Self::step(txn, active_keys, block, *key));
+    }
+
+    // If this key has been flushed, forward all outputs
+    match active_keys[0].1 {
+      LifetimeStage::ActiveYetNotReporting |
+      LifetimeStage::Active |
+      LifetimeStage::UsingNewForChange => {}
+      LifetimeStage::Forwarding | LifetimeStage::Finishing => {
+        for coin in S::NETWORK.coins() {
+          Self::flush_outputs(
+            txn,
+            &mut eventualities,
+            block,
+            active_keys[0].0,
+            active_keys[1].0,
+            *coin,
+          );
+        }
+      }
+    }
+
+    // Create the transactions for the forwards/burns
+    {
+      let mut planned_txs = vec![];
+      for forward in update.forwards() {
+        let key = forward.key();
+
+        assert_eq!(active_keys.len(), 2);
+        assert_eq!(active_keys[0].1, LifetimeStage::Forwarding);
+        assert_eq!(active_keys[1].1, LifetimeStage::Active);
+        let forward_to_key = active_keys[1].0;
+
+        let Some(plan) = P::plan_transaction_with_fee_amortization(
+          // This uses 0 for the operating costs as we don't incur any here
+          // If the output can't pay for itself to be forwarded, we simply drop it
+          &mut 0,
+          P::fee_rate(block, forward.balance().coin),
+          vec![forward.clone()],
+          vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance(), None)],
+          None,
+        ) else {
+          continue;
+        };
+        planned_txs.push((key, plan));
+      }
+      for to_return in update.returns() {
+        let key = to_return.output().key();
+        let out_instruction =
+          Payment::new(to_return.address().clone(), to_return.output().balance(), None);
+        let Some(plan) = P::plan_transaction_with_fee_amortization(
+          // This uses 0 for the operating costs as we don't incur any here
+          // If the output can't pay for itself to be returned, we simply drop it
+          &mut 0,
+          P::fee_rate(block, out_instruction.balance().coin),
+          vec![to_return.output().clone()],
+          vec![out_instruction],
+          None,
+        ) else {
+          continue;
+        };
+        planned_txs.push((key, plan));
+      }
+
+      for (key, planned_tx) in planned_txs {
+        // Send the transactions off for signing
+        TransactionsToSign::<P::SignableTransaction>::send(txn, &key, &planned_tx.signable);
+
+        // Insert the Eventualities into the result
+        eventualities.get_mut(key.to_bytes().as_ref()).unwrap().push(planned_tx.eventuality);
+      }
+
+      eventualities
+    }
+  }
+
+  fn fulfill(
+    txn: &mut impl DbTxn,
+    block: &BlockFor<S>,
+    active_keys: &[(KeyFor<S>, LifetimeStage)],
+    payments: Vec<Payment<AddressFor<S>>>,
+  ) -> HashMap<Vec<u8>, Vec<EventualityFor<S>>> {
+    // Find the key to fulfill these payments with
+    let fulfillment_key = match active_keys[0].1 {
+      LifetimeStage::ActiveYetNotReporting =>
{ + panic!("expected to fulfill payments despite not reporting for the oldest key") + } + LifetimeStage::Active | LifetimeStage::UsingNewForChange => active_keys[0].0, + LifetimeStage::Forwarding | LifetimeStage::Finishing => active_keys[1].0, + }; + + // Queue the payments for this key + for coin in S::NETWORK.coins() { + let mut queued_payments = Db::::queued_payments(txn, fulfillment_key, *coin).unwrap(); + queued_payments + .extend(payments.iter().filter(|payment| payment.balance().coin == *coin).cloned()); + Db::::set_queued_payments(txn, fulfillment_key, *coin, &queued_payments); + } + + // Handle the queued payments + HashMap::from([( + fulfillment_key.to_bytes().as_ref().to_vec(), + Self::step(txn, active_keys, block, fulfillment_key), + )]) + } +} diff --git a/processor/scheduler/utxo/transaction-chaining/README.md b/processor/scheduler/utxo/transaction-chaining/README.md index 0788ff53..a129b669 100644 --- a/processor/scheduler/utxo/transaction-chaining/README.md +++ b/processor/scheduler/utxo/transaction-chaining/README.md @@ -9,7 +9,7 @@ to build and sign a transaction spending it. The scheduler is designed to achieve fulfillment of all expected payments with an `O(1)` delay (regardless of prior scheduler state), `O(log n)` time, and -`O(n)` computational complexity. +`O(log(n) + n)` computational complexity. Due to the ability to chain transactions, we can immediately plan/sign dependent transactions. For the time/computational complexity, we use a tree to fulfill diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index 321d4b60..9a4ed2eb 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -7,7 +7,7 @@ use std::collections::HashMap; use group::GroupEncoding; -use serai_primitives::{Coin, Amount, Balance}; +use serai_primitives::{Coin, Amount}; use serai_db::DbTxn; @@ -22,114 +22,6 @@ use utxo_scheduler_primitives::*; mod db; use db::Db; -#[derive(Clone)] -enum TreeTransaction { - Leaves { payments: Vec>>, value: u64 }, - Branch { children: Vec, value: u64 }, -} -impl TreeTransaction { - fn children(&self) -> usize { - match self { - Self::Leaves { payments, .. } => payments.len(), - Self::Branch { children, .. } => children.len(), - } - } - fn value(&self) -> u64 { - match self { - Self::Leaves { value, .. } | Self::Branch { value, .. } => *value, - } - } - fn payments( - &self, - coin: Coin, - branch_address: &AddressFor, - input_value: u64, - ) -> Option>>> { - // Fetch the amounts for the payments we'll make - let mut amounts: Vec<_> = match self { - Self::Leaves { payments, .. } => { - payments.iter().map(|payment| Some(payment.balance().amount.0)).collect() - } - Self::Branch { children, .. 
} => children.iter().map(|child| Some(child.value())).collect(), - }; - - // We need to reduce them so their sum is our input value - assert!(input_value <= self.value()); - let amount_to_amortize = self.value() - input_value; - - // If any payments won't survive the reduction, set them to None - let mut amortized = 0; - 'outer: while amounts.iter().any(Option::is_some) && (amortized < amount_to_amortize) { - let adjusted_fee = amount_to_amortize - amortized; - let amounts_len = - u64::try_from(amounts.iter().filter(|amount| amount.is_some()).count()).unwrap(); - let per_payment_fee_check = adjusted_fee.div_ceil(amounts_len); - - // Check each amount to see if it's not viable - let mut i = 0; - while i < amounts.len() { - if let Some(amount) = amounts[i] { - if amount.saturating_sub(per_payment_fee_check) < S::dust(coin).0 { - amounts[i] = None; - amortized += amount; - // If this amount wasn't viable, re-run with the new fee/amortization amounts - continue 'outer; - } - } - i += 1; - } - - // Now that we have the payments which will survive, reduce them - for (i, amount) in amounts.iter_mut().enumerate() { - if let Some(amount) = amount { - *amount -= adjusted_fee / amounts_len; - if i < usize::try_from(adjusted_fee % amounts_len).unwrap() { - *amount -= 1; - } - } - } - break; - } - - // Now that we have the reduced amounts, create the payments - let payments: Vec<_> = match self { - Self::Leaves { payments, .. } => { - payments - .iter() - .zip(amounts) - .filter_map(|(payment, amount)| { - amount.map(|amount| { - // The existing payment, with the new amount - Payment::new( - payment.address().clone(), - Balance { coin, amount: Amount(amount) }, - payment.data().clone(), - ) - }) - }) - .collect() - } - Self::Branch { .. } => { - amounts - .into_iter() - .filter_map(|amount| { - amount.map(|amount| { - // A branch output with the new amount - Payment::new(branch_address.clone(), Balance { coin, amount: Amount(amount) }, None) - }) - }) - .collect() - } - }; - - // Use None for vec![] so we never actually use vec![] - if payments.is_empty() { - None?; - } - Some(payments) - } -} - /// The outputs which will be effected by a PlannedTransaction and received by Serai. 
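+///
+/// This is the `auxilliary` data the transaction-chaining scheduler relies on:
+/// since the outputs a planned transaction will create are known up-front, they
+/// can be accumulated and spent immediately, without waiting for the
+/// transaction to confirm on-chain.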
pub struct EffectedReceivedOutputs(Vec>); @@ -306,30 +198,8 @@ impl>> Sched assert!(Db::::queued_payments(txn, key, coin).unwrap().is_empty()); } - // Create a tree to fulfillthe payments - // This variable is for the current layer of the tree being built - let mut tree = Vec::with_capacity(payments.len().div_ceil(P::MAX_OUTPUTS)); - - // Push the branches for the leaves (the payments out) - for payments in payments.chunks(P::MAX_OUTPUTS) { - let value = payments.iter().map(|payment| payment.balance().amount.0).sum::(); - tree.push(TreeTransaction::::Leaves { payments: payments.to_vec(), value }); - } - - // While we haven't calculated a tree root, or the tree root doesn't support a change output, - // keep working - while (tree.len() != 1) || (tree[0].children() == P::MAX_OUTPUTS) { - let mut branch_layer = vec![]; - for children in tree.chunks(P::MAX_OUTPUTS) { - branch_layer.push(TreeTransaction::::Branch { - children: children.to_vec(), - value: children.iter().map(TreeTransaction::value).sum(), - }); - } - tree = branch_layer; - } - assert_eq!(tree.len(), 1); - assert!((tree[0].children() + 1) <= P::MAX_OUTPUTS); + // Create a tree to fulfill the payments + let mut tree = vec![P::tree(&payments)]; // Create the transaction for the root of the tree let mut branch_outputs = { @@ -343,7 +213,7 @@ impl>> Sched P::fee_rate(block, coin), outputs.clone(), tree[0] - .payments(coin, &branch_address, tree[0].value()) + .payments::(coin, &branch_address, tree[0].value()) .expect("payments were dropped despite providing an input of the needed value"), Some(key_for_change), ) else { @@ -355,7 +225,7 @@ impl>> Sched }; // If this doesn't have a change output, increase operating costs and try again - if !planned.auxilliary.0.iter().any(|output| output.kind() == OutputType::Change) { + if !planned.has_change { /* Since we'll create a change output if it's worth at least dust, amortizing dust from the payments should solve this. If the new transaction can't afford those operating @@ -399,11 +269,13 @@ impl>> Sched TreeTransaction::Branch { children, .. 
} => children, }; while !tree.is_empty() { - // Sort the branch outputs by their value + // Sort the branch outputs by their value (high to low) branch_outputs.sort_by_key(|a| a.balance().amount.0); + branch_outputs.reverse(); // Sort the transactions we should create by their value so they share an order with the // branch outputs tree.sort_by_key(TreeTransaction::value); + tree.reverse(); // If we dropped any Branch outputs, drop the associated children tree.truncate(branch_outputs.len()); @@ -417,7 +289,8 @@ impl>> Sched for (branch_output, tx) in branch_outputs_for_this_layer.into_iter().zip(this_layer) { assert_eq!(branch_output.kind(), OutputType::Branch); - let Some(payments) = tx.payments(coin, &branch_address, branch_output.balance().amount.0) + let Some(payments) = + tx.payments::(coin, &branch_address, branch_output.balance().amount.0) else { // If this output has become too small to satisfy this branch, drop it continue; @@ -550,8 +423,9 @@ impl>> Sched // Fulfill the payments we prior couldn't let mut eventualities = HashMap::new(); for (key, _stage) in active_keys { - eventualities - .insert(key.to_bytes().as_ref().to_vec(), Self::step(txn, active_keys, block, *key)); + assert!(eventualities + .insert(key.to_bytes().as_ref().to_vec(), Self::step(txn, active_keys, block, *key)) + .is_none()); } // If this key has been flushed, forward all outputs From 2da24506a2b3860f84c29dd89713de7ac60328e0 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 4 Sep 2024 17:03:20 -0400 Subject: [PATCH 073/368] Remove vast swaths of legacy code in the processor --- processor/messages/src/lib.rs | 14 - processor/src/multisigs/db.rs | 260 ----- processor/src/multisigs/mod.rs | 1064 +-------------------- processor/src/multisigs/scheduler/mod.rs | 96 -- processor/src/multisigs/scheduler/utxo.rs | 631 ------------ processor/src/plan.rs | 212 ---- 6 files changed, 1 insertion(+), 2276 deletions(-) delete mode 100644 processor/src/multisigs/db.rs delete mode 100644 processor/src/multisigs/scheduler/mod.rs delete mode 100644 processor/src/multisigs/scheduler/utxo.rs delete mode 100644 processor/src/plan.rs diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index 096fddb9..27d75d2e 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -102,10 +102,6 @@ pub mod sign { Shares { id: SignId, shares: HashMap> }, // Re-attempt a signing protocol. Reattempt { id: SignId }, - /* TODO - // Completed a signing protocol already. - Completed { session: Session, id: [u8; 32], tx: Vec }, - */ } impl CoordinatorMessage { @@ -118,7 +114,6 @@ pub mod sign { CoordinatorMessage::Preprocesses { id, .. } | CoordinatorMessage::Shares { id, .. } | CoordinatorMessage::Reattempt { id, .. } => id.session, - // TODO CoordinatorMessage::Completed { session, .. } => *session, } } } @@ -131,8 +126,6 @@ pub mod sign { Preprocesses { id: SignId, preprocesses: Vec> }, // Signed shares for the specified signing protocol. Shares { id: SignId, shares: Vec> }, - // Completed a signing protocol already. - // TODO Completed { session: Session, id: [u8; 32], tx: Vec }, } } @@ -330,11 +323,6 @@ impl CoordinatorMessage { sign::CoordinatorMessage::Preprocesses { id, .. } => (0, id), sign::CoordinatorMessage::Shares { id, .. } => (1, id), sign::CoordinatorMessage::Reattempt { id, .. 
} => (2, id), - // The coordinator should report all reported completions to the processor - // Accordingly, the intent is a combination of plan ID and actual TX - // While transaction alone may suffice, that doesn't cover cross-chain TX ID conflicts, - // which are possible - // TODO sign::CoordinatorMessage::Completed { id, tx, .. } => (3, (id, tx).encode()), }; let mut res = vec![COORDINATOR_UID, TYPE_SIGN_UID, sub]; @@ -406,8 +394,6 @@ impl ProcessorMessage { // Unique since SignId sign::ProcessorMessage::Preprocesses { id, .. } => (1, id.encode()), sign::ProcessorMessage::Shares { id, .. } => (2, id.encode()), - // Unique since a processor will only sign a TX once - // TODO sign::ProcessorMessage::Completed { id, .. } => (3, id.to_vec()), }; let mut res = vec![PROCESSOR_UID, TYPE_SIGN_UID, sub]; diff --git a/processor/src/multisigs/db.rs b/processor/src/multisigs/db.rs deleted file mode 100644 index 3d1d13bd..00000000 --- a/processor/src/multisigs/db.rs +++ /dev/null @@ -1,260 +0,0 @@ -use std::io; - -use ciphersuite::Ciphersuite; -pub use serai_db::*; - -use scale::{Encode, Decode}; -use serai_client::{primitives::Balance, in_instructions::primitives::InInstructionWithBalance}; - -use crate::{ - Get, Plan, - networks::{Output, Transaction, Network}, -}; - -#[derive(Clone, PartialEq, Eq, Debug)] -pub enum PlanFromScanning { - Refund(N::Output, N::Address), - Forward(N::Output), -} - -impl PlanFromScanning { - fn read(reader: &mut R) -> io::Result { - let mut kind = [0xff]; - reader.read_exact(&mut kind)?; - match kind[0] { - 0 => { - let output = N::Output::read(reader)?; - - let mut address_vec_len = [0; 4]; - reader.read_exact(&mut address_vec_len)?; - let mut address_vec = - vec![0; usize::try_from(u32::from_le_bytes(address_vec_len)).unwrap()]; - reader.read_exact(&mut address_vec)?; - let address = - N::Address::try_from(address_vec).map_err(|_| "invalid address saved to disk").unwrap(); - - Ok(PlanFromScanning::Refund(output, address)) - } - 1 => { - let output = N::Output::read(reader)?; - Ok(PlanFromScanning::Forward(output)) - } - _ => panic!("reading unrecognized PlanFromScanning"), - } - } - fn write(&self, writer: &mut W) -> io::Result<()> { - match self { - PlanFromScanning::Refund(output, address) => { - writer.write_all(&[0])?; - output.write(writer)?; - - let address_vec: Vec = - address.clone().try_into().map_err(|_| "invalid address being refunded to").unwrap(); - writer.write_all(&u32::try_from(address_vec.len()).unwrap().to_le_bytes())?; - writer.write_all(&address_vec) - } - PlanFromScanning::Forward(output) => { - writer.write_all(&[1])?; - output.write(writer) - } - } - } -} - -create_db!( - MultisigsDb { - NextBatchDb: () -> u32, - PlanDb: (id: &[u8]) -> Vec, - PlansFromScanningDb: (block_number: u64) -> Vec, - OperatingCostsDb: () -> u64, - ResolvedDb: (tx: &[u8]) -> [u8; 32], - SigningDb: (key: &[u8]) -> Vec, - ForwardedOutputDb: (balance: Balance) -> Vec, - DelayedOutputDb: () -> Vec - } -); - -impl PlanDb { - pub fn save_active_plan( - txn: &mut impl DbTxn, - key: &[u8], - block_number: usize, - plan: &Plan, - operating_costs_at_time: u64, - ) { - let id = plan.id(); - - { - let mut signing = SigningDb::get(txn, key).unwrap_or_default(); - - // If we've already noted we're signing this, return - assert_eq!(signing.len() % 32, 0); - for i in 0 .. (signing.len() / 32) { - if signing[(i * 32) .. 
((i + 1) * 32)] == id { - return; - } - } - - signing.extend(&id); - SigningDb::set(txn, key, &signing); - } - - { - let mut buf = block_number.to_le_bytes().to_vec(); - plan.write(&mut buf).unwrap(); - buf.extend(&operating_costs_at_time.to_le_bytes()); - Self::set(txn, &id, &buf); - } - } - - pub fn active_plans(getter: &impl Get, key: &[u8]) -> Vec<(u64, Plan, u64)> { - let signing = SigningDb::get(getter, key).unwrap_or_default(); - let mut res = vec![]; - - assert_eq!(signing.len() % 32, 0); - for i in 0 .. (signing.len() / 32) { - let id = &signing[(i * 32) .. ((i + 1) * 32)]; - let buf = Self::get(getter, id).unwrap(); - - let block_number = u64::from_le_bytes(buf[.. 8].try_into().unwrap()); - let plan = Plan::::read::<&[u8]>(&mut &buf[8 ..]).unwrap(); - assert_eq!(id, &plan.id()); - let operating_costs = u64::from_le_bytes(buf[(buf.len() - 8) ..].try_into().unwrap()); - res.push((block_number, plan, operating_costs)); - } - res - } - - pub fn plan_by_key_with_self_change( - getter: &impl Get, - key: ::G, - id: [u8; 32], - ) -> bool { - let plan = Plan::::read::<&[u8]>(&mut &Self::get(getter, &id).unwrap()[8 ..]).unwrap(); - assert_eq!(plan.id(), id); - if let Some(change) = N::change_address(plan.key) { - (key == plan.key) && (Some(change) == plan.change) - } else { - false - } - } -} - -impl OperatingCostsDb { - pub fn take_operating_costs(txn: &mut impl DbTxn) -> u64 { - let existing = Self::get(txn).unwrap_or_default(); - txn.del(Self::key()); - existing - } - pub fn set_operating_costs(txn: &mut impl DbTxn, amount: u64) { - if amount != 0 { - Self::set(txn, &amount); - } - } -} - -impl ResolvedDb { - pub fn resolve_plan( - txn: &mut impl DbTxn, - key: &[u8], - plan: [u8; 32], - resolution: &>::Id, - ) { - let mut signing = SigningDb::get(txn, key).unwrap_or_default(); - assert_eq!(signing.len() % 32, 0); - - let mut found = false; - for i in 0 .. (signing.len() / 32) { - let start = i * 32; - let end = i + 32; - if signing[start .. end] == plan { - found = true; - signing = [&signing[.. 
start], &signing[end ..]].concat(); - break; - } - } - - if !found { - log::warn!("told to finish signing {} yet wasn't actively signing it", hex::encode(plan)); - } - SigningDb::set(txn, key, &signing); - Self::set(txn, resolution.as_ref(), &plan); - } -} - -impl PlansFromScanningDb { - pub fn set_plans_from_scanning( - txn: &mut impl DbTxn, - block_number: usize, - plans: Vec>, - ) { - let mut buf = vec![]; - for plan in plans { - plan.write(&mut buf).unwrap(); - } - Self::set(txn, block_number.try_into().unwrap(), &buf); - } - - pub fn take_plans_from_scanning( - txn: &mut impl DbTxn, - block_number: usize, - ) -> Option>> { - let block_number = u64::try_from(block_number).unwrap(); - let res = Self::get(txn, block_number).map(|plans| { - let mut plans_ref = plans.as_slice(); - let mut res = vec![]; - while !plans_ref.is_empty() { - res.push(PlanFromScanning::::read(&mut plans_ref).unwrap()); - } - res - }); - if res.is_some() { - txn.del(Self::key(block_number)); - } - res - } -} - -impl ForwardedOutputDb { - pub fn save_forwarded_output(txn: &mut impl DbTxn, instruction: &InInstructionWithBalance) { - let mut existing = Self::get(txn, instruction.balance).unwrap_or_default(); - existing.extend(instruction.encode()); - Self::set(txn, instruction.balance, &existing); - } - - pub fn take_forwarded_output( - txn: &mut impl DbTxn, - balance: Balance, - ) -> Option { - let outputs = Self::get(txn, balance)?; - let mut outputs_ref = outputs.as_slice(); - let res = InInstructionWithBalance::decode(&mut outputs_ref).unwrap(); - assert!(outputs_ref.len() < outputs.len()); - if outputs_ref.is_empty() { - txn.del(Self::key(balance)); - } else { - Self::set(txn, balance, &outputs); - } - Some(res) - } -} - -impl DelayedOutputDb { - pub fn save_delayed_output(txn: &mut impl DbTxn, instruction: &InInstructionWithBalance) { - let mut existing = Self::get(txn).unwrap_or_default(); - existing.extend(instruction.encode()); - Self::set(txn, &existing); - } - - pub fn take_delayed_outputs(txn: &mut impl DbTxn) -> Vec { - let Some(outputs) = Self::get(txn) else { return vec![] }; - txn.del(Self::key()); - - let mut outputs_ref = outputs.as_slice(); - let mut res = vec![]; - while !outputs_ref.is_empty() { - res.push(InInstructionWithBalance::decode(&mut outputs_ref).unwrap()); - } - res - } -} diff --git a/processor/src/multisigs/mod.rs b/processor/src/multisigs/mod.rs index 92ea0271..c20a922c 100644 --- a/processor/src/multisigs/mod.rs +++ b/processor/src/multisigs/mod.rs @@ -1,1070 +1,8 @@ -use core::time::Duration; -use std::collections::HashSet; - -use ciphersuite::{group::GroupEncoding, Ciphersuite}; - -use scale::{Encode, Decode}; -use messages::SubstrateContext; - -use serai_client::{ - primitives::{MAX_DATA_LEN, ExternalAddress, BlockHash, Data}, - in_instructions::primitives::{ - InInstructionWithBalance, Batch, RefundableInInstruction, Shorthand, MAX_BATCH_SIZE, - }, - coins::primitives::{OutInstruction, OutInstructionWithBalance}, -}; - -use log::{info, error}; - -use tokio::time::sleep; - -/* TODO -#[cfg(not(test))] -mod scanner; -#[cfg(test)] -pub mod scanner; -*/ - -use scanner::{ScannerEvent, ScannerHandle, Scanner}; - -mod db; -use db::*; - -pub(crate) mod scheduler; -use scheduler::Scheduler; - -use crate::{ - Get, Db, Payment, Plan, - networks::{OutputType, Output, SignableTransaction, Eventuality, Block, PreparedSend, Network}, -}; - -// InInstructionWithBalance from an external output -fn instruction_from_output( - output: &N::Output, -) -> (Option, Option) { - assert_eq!(output.kind(), 
OutputType::External); - - let presumed_origin = output.presumed_origin().map(|address| { - ExternalAddress::new( - address - .try_into() - .map_err(|_| ()) - .expect("presumed origin couldn't be converted to a Vec"), - ) - .expect("presumed origin exceeded address limits") - }); - - let mut data = output.data(); - let max_data_len = usize::try_from(MAX_DATA_LEN).unwrap(); - if data.len() > max_data_len { - error!( - "data in output {} exceeded MAX_DATA_LEN ({MAX_DATA_LEN}): {}. skipping", - hex::encode(output.id()), - data.len(), - ); - return (presumed_origin, None); - } - - let shorthand = match Shorthand::decode(&mut data) { - Ok(shorthand) => shorthand, - Err(e) => { - info!("data in output {} wasn't valid shorthand: {e:?}", hex::encode(output.id())); - return (presumed_origin, None); - } - }; - let instruction = match RefundableInInstruction::try_from(shorthand) { - Ok(instruction) => instruction, - Err(e) => { - info!( - "shorthand in output {} wasn't convertible to a RefundableInInstruction: {e:?}", - hex::encode(output.id()) - ); - return (presumed_origin, None); - } - }; - - let mut balance = output.balance(); - // Deduct twice the cost to aggregate to prevent economic attacks by malicious miners against - // other users - balance.amount.0 -= 2 * N::COST_TO_AGGREGATE; - - ( - instruction.origin.or(presumed_origin), - Some(InInstructionWithBalance { instruction: instruction.instruction, balance }), - ) -} - -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -enum RotationStep { - // Use the existing multisig for all actions (steps 1-3) - UseExisting, - // Use the new multisig as change (step 4) - NewAsChange, - // The existing multisig is expected to solely forward transactions at this point (step 5) - ForwardFromExisting, - // The existing multisig is expected to finish its own transactions and do nothing more - // (step 6) - ClosingExisting, -} - -// This explicitly shouldn't take the database as we prepare Plans we won't execute for fee -// estimates -async fn prepare_send( - network: &N, - block_number: usize, - plan: Plan, - operating_costs: u64, -) -> PreparedSend { - loop { - match network.prepare_send(block_number, plan.clone(), operating_costs).await { - Ok(prepared) => { - return prepared; - } - Err(e) => { - error!("couldn't prepare a send for plan {}: {e}", hex::encode(plan.id())); - // The processor is either trying to create an invalid TX (fatal) or the node went - // offline - // The former requires a patch, the latter is a connection issue - // If the latter, this is an appropriate sleep. 
If the former, we should panic, yet - // this won't flood the console ad infinitum - sleep(Duration::from_secs(60)).await; - } - } - } -} - -pub struct MultisigViewer { - activation_block: usize, - key: ::G, - scheduler: N::Scheduler, -} - #[allow(clippy::type_complexity)] #[derive(Clone, Debug)] pub enum MultisigEvent { // Batches to publish Batches(Option<(::G, ::G)>, Vec), // Eventuality completion found on-chain - Completed(Vec, [u8; 32], ::Completion), -} - -pub struct MultisigManager { - scanner: ScannerHandle, - existing: Option>, - new: Option>, -} - -impl MultisigManager { - pub async fn new( - raw_db: &D, - network: &N, - ) -> ( - Self, - Vec<::G>, - Vec<(Plan, N::SignableTransaction, N::Eventuality)>, - ) { - // The scanner has no long-standing orders to re-issue - let (mut scanner, current_keys) = Scanner::new(network.clone(), raw_db.clone()); - - let mut schedulers = vec![]; - - assert!(current_keys.len() <= 2); - let mut actively_signing = vec![]; - for (_, key) in ¤t_keys { - schedulers.push(N::Scheduler::from_db(raw_db, *key, N::NETWORK).unwrap()); - - // Load any TXs being actively signed - let key = key.to_bytes(); - for (block_number, plan, operating_costs) in PlanDb::active_plans::(raw_db, key.as_ref()) { - let block_number = block_number.try_into().unwrap(); - - let id = plan.id(); - info!("reloading plan {}: {:?}", hex::encode(id), plan); - - let key_bytes = plan.key.to_bytes(); - - let Some((tx, eventuality)) = - prepare_send(network, block_number, plan.clone(), operating_costs).await.tx - else { - panic!("previously created transaction is no longer being created") - }; - - scanner - .register_eventuality(key_bytes.as_ref(), block_number, id, eventuality.clone()) - .await; - actively_signing.push((plan, tx, eventuality)); - } - } - - ( - MultisigManager { - scanner, - existing: current_keys.first().copied().map(|(activation_block, key)| MultisigViewer { - activation_block, - key, - scheduler: schedulers.remove(0), - }), - new: current_keys.get(1).copied().map(|(activation_block, key)| MultisigViewer { - activation_block, - key, - scheduler: schedulers.remove(0), - }), - }, - current_keys.into_iter().map(|(_, key)| key).collect(), - actively_signing, - ) - } - - /// Returns the block number for a block hash, if it's known and all keys have scanned the block. - // This is guaranteed to atomically increment so long as no new keys are added to the scanner - // which activate at a block before the currently highest scanned block. This is prevented by - // the processor waiting for `Batch` inclusion before scanning too far ahead, and activation only - // happening after the "too far ahead" window. 
- pub async fn block_number( - &self, - getter: &G, - hash: &>::Id, - ) -> Option { - let latest = ScannerHandle::::block_number(getter, hash)?; - - // While the scanner has cemented this block, that doesn't mean it's been scanned for all - // keys - // ram_scanned will return the lowest scanned block number out of all keys - if latest > self.scanner.ram_scanned().await { - return None; - } - Some(latest) - } - - pub async fn add_key( - &mut self, - txn: &mut D::Transaction<'_>, - activation_block: usize, - external_key: ::G, - ) { - self.scanner.register_key(txn, activation_block, external_key).await; - let viewer = Some(MultisigViewer { - activation_block, - key: external_key, - scheduler: N::Scheduler::new::(txn, external_key, N::NETWORK), - }); - - if self.existing.is_none() { - self.existing = viewer; - return; - } - self.new = viewer; - } - - fn current_rotation_step(&self, block_number: usize) -> RotationStep { - let Some(new) = self.new.as_ref() else { return RotationStep::UseExisting }; - - // Period numbering here has no meaning other than these are the time values useful here, and - // the order they're calculated in. They have no reference/shared marker with anything else - - // ESTIMATED_BLOCK_TIME_IN_SECONDS is fine to use here. While inaccurate, it shouldn't be - // drastically off, and even if it is, it's a hiccup to latency handling only possible when - // rotating. The error rate wouldn't be acceptable if it was allowed to accumulate over time, - // yet rotation occurs on Serai's clock, disconnecting any errors here from any prior. - - // N::CONFIRMATIONS + 10 minutes - let period_1_start = new.activation_block + - N::CONFIRMATIONS + - (10usize * 60).div_ceil(N::ESTIMATED_BLOCK_TIME_IN_SECONDS); - - // N::CONFIRMATIONS - let period_2_start = period_1_start + N::CONFIRMATIONS; - - // 6 hours after period 2 - // Also ensure 6 hours is greater than the amount of CONFIRMATIONS, for sanity purposes - let period_3_start = - period_2_start + ((6 * 60 * 60) / N::ESTIMATED_BLOCK_TIME_IN_SECONDS).max(N::CONFIRMATIONS); - - if block_number < period_1_start { - RotationStep::UseExisting - } else if block_number < period_2_start { - RotationStep::NewAsChange - } else if block_number < period_3_start { - RotationStep::ForwardFromExisting - } else { - RotationStep::ClosingExisting - } - } - - // Convert new Burns to Payments. - // - // Also moves payments from the old Scheduler to the new multisig if the step calls for it. 
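- // (Burns whose addresses fail to parse as this network's address type are dropped here, - // with no payment created for them.)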
- fn burns_to_payments( - &mut self, - txn: &mut D::Transaction<'_>, - step: RotationStep, - burns: Vec, - ) -> (Vec>, Vec>) { - let mut payments = vec![]; - for out in burns { - let OutInstructionWithBalance { instruction: OutInstruction { address, data }, balance } = - out; - assert_eq!(balance.coin.network(), N::NETWORK); - - if let Ok(address) = N::Address::try_from(address.consume()) { - payments.push(Payment { address, data: data.map(Data::consume), balance }); - } - } - - let payments = payments; - match step { - RotationStep::UseExisting | RotationStep::NewAsChange => (payments, vec![]), - RotationStep::ForwardFromExisting | RotationStep::ClosingExisting => { - // Consume any payments the prior scheduler was unable to complete - // This should only actually matter once - let mut new_payments = self.existing.as_mut().unwrap().scheduler.consume_payments::(txn); - // Add the new payments - new_payments.extend(payments); - (vec![], new_payments) - } - } - } - - fn split_outputs_by_key(&self, outputs: Vec) -> (Vec, Vec) { - let mut existing_outputs = Vec::with_capacity(outputs.len()); - let mut new_outputs = vec![]; - - let existing_key = self.existing.as_ref().unwrap().key; - let new_key = self.new.as_ref().map(|new| new.key); - for output in outputs { - if output.key() == existing_key { - existing_outputs.push(output); - } else { - assert_eq!(Some(output.key()), new_key); - new_outputs.push(output); - } - } - - (existing_outputs, new_outputs) - } - - fn refund_plan( - scheduler: &mut N::Scheduler, - txn: &mut D::Transaction<'_>, - output: N::Output, - refund_to: N::Address, - ) -> Plan { - log::info!("creating refund plan for {}", hex::encode(output.id())); - assert_eq!(output.kind(), OutputType::External); - scheduler.refund_plan::(txn, output, refund_to) - } - - // Returns the plan for forwarding if one is needed. - // Returns None if one is not needed to forward this output. - fn forward_plan(&mut self, txn: &mut D::Transaction<'_>, output: &N::Output) -> Option> { - log::info!("creating forwarding plan for {}", hex::encode(output.id())); - let res = self.existing.as_mut().unwrap().scheduler.forward_plan::( - txn, - output.clone(), - self.new.as_ref().expect("forwarding plan yet no new multisig").key, - ); - if res.is_none() { - log::info!("no forwarding plan was necessary for {}", hex::encode(output.id())); - } - res - } - - // Filter newly received outputs due to the step being RotationStep::ClosingExisting. - // - // Returns the Plans for the `Branch`s which should be created off outputs which passed the - // filter. - fn filter_outputs_due_to_closing( - &mut self, - txn: &mut D::Transaction<'_>, - existing_outputs: &mut Vec, - ) -> Vec> { - /* - The document says to only handle outputs we created. We don't know what outputs we - created. We do have an ordered view of equivalent outputs however, and can assume the - first (and likely only) ones are the ones we created. - - Accordingly, only handling outputs we created should be definable as only handling - outputs from the resolution of Eventualities. - - This isn't feasible. It requires knowing what Eventualities were completed in this block, - when we handle this block, which we don't know without fully serialized scanning + Batch - publication. - - Take the following scenario: - 1) A network uses 10 confirmations. Block x is scanned, meaning x+9a exists. - 2) 67% of nodes process x, create, sign, and publish a TX, creating an Eventuality. - 3) A reorganization to a shorter chain occurs, including the published TX in x+1b. 
- 4) The 33% of nodes which are latent will be allowed to scan x+1b as soon as x+10b - exists. They won't wait for Serai to include the Batch for x until they try to scan - x+10b. - 5) These latent nodes will handle x+1b, post-create an Eventuality, post-learn x+1b - contained resolutions, changing how x+1b should've been interpreted. - - We either have to: - A) Fully serialize scanning (removing the ability to utilize throughput to allow higher - latency, at least while the step is `ClosingExisting`). - B) Create Eventualities immediately, which we can't do as then both the external - network's clock AND Serai's clock can trigger Eventualities, removing ordering. - We'd need to shift entirely to the external network's clock, only handling Burns - outside the parallelization window (which would be extremely latent). - C) Use a different mechanism to determine if we created an output. - D) Re-define which outputs are still to be handled after the 6 hour period expires, such - that the multisig's lifetime cannot be further extended yet it does fulfill its - responsibility. - - External outputs to the existing multisig will be: - - Scanned before the rotation and unused (as used External outputs become Change) - - Forwarded immediately upon scanning - - Not scanned before the cut off time (and accordingly dropped) - - For the first case, since they're scanned before the rotation and unused, they'll be - forwarded with all other available outputs (since they'll be available when scanned). - - Change outputs will be: - - Scanned before the rotation and forwarded with all other available outputs - - Forwarded immediately upon scanning - - Not scanned before the cut off time, requiring an extension exclusive to these outputs - - The important thing to note about honest Change outputs to the existing multisig is that - they'll only be created within `CONFIRMATIONS+1` blocks of the activation block. Also - important to note is that there's another explicit window of `CONFIRMATIONS` before the - 6 hour window. - - Eventualities are not guaranteed to be known before we scan the block containing their - resolution. They are guaranteed to be known within `CONFIRMATIONS-1` blocks however, due - to the limitation on how far we'll scan ahead. - - This means we will know of all Eventualities related to Change outputs we need to forward - before the 6 hour period begins (as forwarding outputs will not create any Change outputs - to the existing multisig). - - This means a definition of complete can be defined as: - 1) Handled all Branch outputs - 2) Forwarded all External outputs received before the end of 6 hour window - 3) Forwarded the results of all Eventualities with Change, which will have been created - before the 6 hour window - - How can we track and ensure this without needing to check if an output is from the - resolution of an Eventuality? - - 1) We only create Branch outputs before the 6 hour window starts. These are guaranteed - to appear within `CONFIRMATIONS` blocks. They will exist with arbitrary depth however, - meaning that upon completion they will spawn several more Eventualities. The further - created Eventualities re-risk being present after the 6 hour period ends. - - We can: - 1) Build a queue for Branch outputs, delaying their handling until relevant - Eventualities are guaranteed to be present. 
- - This solution would theoretically work for all outputs and allow collapsing this - problem to simply: - - > Accordingly, only handling outputs we created should be definable as only - handling outputs from the resolution of Eventualities. - - 2) Create all Eventualities under a Branch at time of Branch creation. - This idea fails as Plans are tightly bound to outputs. - - 3) Don't track Branch outputs by Eventualities, yet by the amount of Branch outputs - remaining. Any Branch output received, of a useful amount, is assumed to be our - own and handled. All other Branch outputs, even if they're the completion of some - Eventuality, are dropped. - - This avoids needing any additional queue, avoiding additional pipelining/latency. - - 2) External outputs are self-evident. We simply stop handling them at the cut-off point, - and only start checking after `CONFIRMATIONS` blocks if all Eventualities are - complete. - - 3) Since all Change Eventualities will be known prior to the 6 hour window's beginning, - we can safely check if a received Change output is the resolution of an Eventuality. - We only need to forward it if so. Forwarding it simply requires checking if - Eventualities are complete after `CONFIRMATIONS` blocks, same as for straggling - External outputs. - */ - - let mut plans = vec![]; - existing_outputs.retain(|output| { - match output.kind() { - OutputType::External | OutputType::Forwarded => false, - OutputType::Branch => { - let scheduler = &mut self.existing.as_mut().unwrap().scheduler; - // There *would* be a race condition here due to the fact we only mark a `Branch` output - // as needed when we process the block (and handle scheduling), yet actual `Branch` - // outputs may appear as soon as the next block (and we scan the next block before we - // process the prior block) - // - // Unlike Eventuality checking, which happens on scanning and is therefore asynchronous, - // all scheduling (and this check against the scheduler) happens on processing, which is - // synchronous - // - // While we could move Eventuality checking into the block processing, removing its - // asynchronicity, we could only check data the Scanner deems important. The Scanner won't - // deem important Eventuality resolutions which don't create an output to Serai unless - // it knows of the Eventuality. Accordingly, at best we could have a split role (the - // Scanner noting completion of Eventualities which don't have relevant outputs, the - // processing noting completion of ones which do) - // - // This is unnecessary, due to the current flow around Eventuality resolutions and the - // current bounds naturally found being sufficiently amenable, yet notable for the future - if scheduler.can_use_branch(output.balance()) { - // We could simply call can_use_branch, yet it'd have an edge case where if we receive - // two outputs for 100, and we could use one such output, we'd handle both. - // - // Individually schedule each output once confirming they're usable in order to avoid - // this. - let mut plan = scheduler.schedule::( - txn, - vec![output.clone()], - vec![], - self.new.as_ref().unwrap().key, - false, - ); - assert_eq!(plan.len(), 1); - let plan = plan.remove(0); - plans.push(plan); - } - false - } - OutputType::Change => { - // If the TX containing this output resolved an Eventuality... - if let Some(plan) = ResolvedDb::get(txn, output.tx_id().as_ref()) { - // And the Eventuality had change...
- // We need this check as Eventualities have a race condition and can't be relied - // on, as extensively detailed above. Eventualities explicitly with change do have - // a safe timing window however - if PlanDb::plan_by_key_with_self_change::( - txn, - // Pass the key so the DB checks the Plan's key is this multisig's, preventing a - // potential issue where the new multisig creates a Plan with change *and a - // payment to the existing multisig's change address* - self.existing.as_ref().unwrap().key, - plan, - ) { - // Then this is an honest change output we need to forward - // (or it's a payment to the change address in the same transaction as an honest - // change output, which is fine to let slip in) - return true; - } - } - false - } - } - }); - plans - } - - // Returns the Plans caused by a block being acknowledged. - // - // Will rotate keys if the block acknowledged is the retirement block. - async fn plans_from_block( - &mut self, - txn: &mut D::Transaction<'_>, - block_number: usize, - block_id: >::Id, - step: &mut RotationStep, - burns: Vec, - ) -> (bool, Vec>, HashSet<[u8; 32]>) { - let (mut existing_payments, mut new_payments) = self.burns_to_payments(txn, *step, burns); - - let mut plans = vec![]; - let mut plans_from_scanning = HashSet::new(); - - // We now have to acknowledge the block being acknowledged, if it's new - // It won't be if this block's `InInstruction`s were split into multiple `Batch`s - let (acquired_lock, (mut existing_outputs, new_outputs)) = { - let (acquired_lock, mut outputs) = if ScannerHandle::::db_scanned(txn) - .expect("published a Batch despite never scanning a block") < - block_number - { - // Load plans created when we scanned the block - let scanning_plans = - PlansFromScanningDb::take_plans_from_scanning::(txn, block_number).unwrap(); - // Expand into actual plans - plans = scanning_plans - .into_iter() - .map(|plan| match plan { - PlanFromScanning::Refund(output, refund_to) => { - let existing = self.existing.as_mut().unwrap(); - if output.key() == existing.key { - Self::refund_plan(&mut existing.scheduler, txn, output, refund_to) - } else { - let new = self - .new - .as_mut() - .expect("new multisig didn't exist yet output wasn't for existing multisig"); - assert_eq!(output.key(), new.key, "output wasn't for existing nor new multisig"); - Self::refund_plan(&mut new.scheduler, txn, output, refund_to) - } - } - PlanFromScanning::Forward(output) => self - .forward_plan(txn, &output) - .expect("supposed to forward an output yet no forwarding plan"), - }) - .collect(); - - for plan in &plans { - plans_from_scanning.insert(plan.id()); - } - - let (is_retirement_block, outputs) = self.scanner.ack_block(txn, block_id.clone()).await; - if is_retirement_block { - let existing = self.existing.take().unwrap(); - assert!(existing.scheduler.empty()); - self.existing = self.new.take(); - *step = RotationStep::UseExisting; - assert!(existing_payments.is_empty()); - existing_payments = new_payments; - new_payments = vec![]; - } - (true, outputs) - } else { - (false, vec![]) - }; - - // Remove all outputs already present in plans - let mut output_set = HashSet::new(); - for plan in &plans { - for input in &plan.inputs { - output_set.insert(input.id().as_ref().to_vec()); - } - } - outputs.retain(|output| !output_set.remove(output.id().as_ref())); - assert_eq!(output_set.len(), 0); - - (acquired_lock, self.split_outputs_by_key(outputs)) - }; - - // If we're closing the existing multisig, filter its outputs down - if *step == RotationStep::ClosingExisting { -
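// Filtering both drops outputs this multisig should no longer handle and schedules Plans - // for any usable Branch outputs, per the policy detailed above -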
plans.extend(self.filter_outputs_due_to_closing(txn, &mut existing_outputs)); - } - - // Now that we've done all our filtering, schedule the existing multisig's outputs - plans.extend({ - let existing = self.existing.as_mut().unwrap(); - let existing_key = existing.key; - self.existing.as_mut().unwrap().scheduler.schedule::( - txn, - existing_outputs, - existing_payments, - match *step { - RotationStep::UseExisting => existing_key, - RotationStep::NewAsChange | - RotationStep::ForwardFromExisting | - RotationStep::ClosingExisting => self.new.as_ref().unwrap().key, - }, - match *step { - RotationStep::UseExisting | RotationStep::NewAsChange => false, - RotationStep::ForwardFromExisting | RotationStep::ClosingExisting => true, - }, - ) - }); - - for plan in &plans { - // This first equality should 'never meaningfully' be false - // All created plans so far are by the existing multisig EXCEPT: - // A) If we created a refund plan from the new multisig (yet that wouldn't have change) - // B) The existing Scheduler returned a Plan for the new key (yet that happens with the SC - // scheduler, yet that doesn't have change) - // Despite being 'unnecessary' now, it's better to explicitly ensure and be robust - if plan.key == self.existing.as_ref().unwrap().key { - if let Some(change) = N::change_address(plan.key) { - if plan.change == Some(change) { - // Assert these (self-change) are only created during the expected step - match *step { - RotationStep::UseExisting => {} - RotationStep::NewAsChange | - RotationStep::ForwardFromExisting | - RotationStep::ClosingExisting => panic!("change was set to self despite rotating"), - } - } - } - } - } - - // Schedule the new multisig's outputs too - if let Some(new) = self.new.as_mut() { - plans.extend(new.scheduler.schedule::(txn, new_outputs, new_payments, new.key, false)); - } - - (acquired_lock, plans, plans_from_scanning) - } - - /// Handle a SubstrateBlock event, building the relevant Plans. 
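- /// - /// Returns whether the Scanner lock was acquired, alongside the now-signable transactions - /// (each with its multisig key, Plan ID, and Eventuality).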
- pub async fn substrate_block( - &mut self, - txn: &mut D::Transaction<'_>, - network: &N, - context: SubstrateContext, - burns: Vec, - ) -> (bool, Vec<(::G, [u8; 32], N::SignableTransaction, N::Eventuality)>) - { - let mut block_id = >::Id::default(); - block_id.as_mut().copy_from_slice(context.network_latest_finalized_block.as_ref()); - let block_number = ScannerHandle::::block_number(txn, &block_id) - .expect("SubstrateBlock with context we haven't synced"); - - // Determine what step of rotation we're currently in - let mut step = self.current_rotation_step(block_number); - - // Get the Plans from this block - let (acquired_lock, plans, plans_from_scanning) = - self.plans_from_block(txn, block_number, block_id, &mut step, burns).await; - - let res = { - let mut res = Vec::with_capacity(plans.len()); - - for plan in plans { - let id = plan.id(); - info!("preparing plan {}: {:?}", hex::encode(id), plan); - - let key = plan.key; - let key_bytes = key.to_bytes(); - - let (tx, post_fee_branches) = { - let running_operating_costs = OperatingCostsDb::take_operating_costs(txn); - - PlanDb::save_active_plan::( - txn, - key_bytes.as_ref(), - block_number, - &plan, - running_operating_costs, - ); - - // If this Plan is from the scanner handler below, don't take the opportunity to amortize - // operating costs - // It operates with limited context, and on a different clock, making it unable to react - // to operating costs - // Despite this, in order to properly save forwarded outputs' instructions, it needs to - // know the actual value forwarded outputs will be created with - // Including operating costs prevents that - let from_scanning = plans_from_scanning.contains(&plan.id()); - let to_use_operating_costs = if from_scanning { 0 } else { running_operating_costs }; - - let PreparedSend { tx, post_fee_branches, mut operating_costs } = - prepare_send(network, block_number, plan, to_use_operating_costs).await; - - // Restore running_operating_costs to operating_costs - if from_scanning { - // If we're forwarding (or refunding) this output, operating_costs should still be 0 - // Either this TX wasn't created, causing no operating costs, or it was yet it'd be - // amortized - assert_eq!(operating_costs, 0); - - operating_costs += running_operating_costs; - } - - OperatingCostsDb::set_operating_costs(txn, operating_costs); - - (tx, post_fee_branches) - }; - - for branch in post_fee_branches { - let existing = self.existing.as_mut().unwrap(); - let to_use = if key == existing.key { - existing - } else { - let new = self - .new - .as_mut() - .expect("plan wasn't for existing multisig yet there wasn't a new multisig"); - assert_eq!(key, new.key); - new - }; - - to_use.scheduler.created_output::(txn, branch.expected, branch.actual); - } - - if let Some((tx, eventuality)) = tx { - // The main function we return to will send an event to the coordinator which must be - // fired before these registered Eventualities have their Completions fired - // Safety is derived from a mutable lock on the Scanner being preserved, preventing - // scanning (and detection of Eventuality resolutions) before it's released - // It's only released by the main function after it does what it will - self - .scanner - .register_eventuality(key_bytes.as_ref(), block_number, id, eventuality.clone()) - .await; - - res.push((key, id, tx, eventuality)); - } - - // TODO: If the TX is None, restore its inputs to the scheduler for efficiency's sake - // If this TODO is removed, also reduce the operating costs - } - res - };
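- // Return whether we hold the Scanner lock with the prepared transactions; the caller - // releases the lock (via release_scanner_lock) once it's done with them -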
(acquired_lock, res) - } - - pub async fn release_scanner_lock(&mut self) { - self.scanner.release_lock().await; - } - - pub async fn scanner_event_to_multisig_event( - &self, - txn: &mut D::Transaction<'_>, - network: &N, - msg: ScannerEvent, - ) -> MultisigEvent { - let (block_number, event) = match msg { - ScannerEvent::Block { is_retirement_block, block, mut outputs } => { - // Since the Scanner is asynchronous, the following is a concern for race conditions - // We safely know the step of a block since keys are declared, and the Scanner is safe - // with respect to the declaration of keys - // Accordingly, the following calls regarding new keys and step should be safe - let block_number = ScannerHandle::::block_number(txn, &block) - .expect("didn't have the block number for a block we just scanned"); - let step = self.current_rotation_step(block_number); - - // Instructions created from this block - let mut instructions = vec![]; - - // If any of these outputs were forwarded, create their instruction now - for output in &outputs { - if output.kind() != OutputType::Forwarded { - continue; - } - - if let Some(instruction) = ForwardedOutputDb::take_forwarded_output(txn, output.balance()) - { - instructions.push(instruction); - } - } - - // If the remaining outputs aren't externally received funds, don't handle them as - // instructions - outputs.retain(|output| output.kind() == OutputType::External); - - // These plans are of limited context. They're only allowed the outputs newly received - // within this block and are intended to handle forwarding transactions/refunds - let mut plans = vec![]; - - // If the old multisig is explicitly only supposed to forward, create all such plans now - if step == RotationStep::ForwardFromExisting { - let mut i = 0; - while i < outputs.len() { - let output = &outputs[i]; - let plans = &mut plans; - let txn = &mut *txn; - - #[allow(clippy::redundant_closure_call)] - let should_retain = (|| async move { - // If this output doesn't belong to the existing multisig, it shouldn't be forwarded - if output.key() != self.existing.as_ref().unwrap().key { - return true; - } - - let plans_at_start = plans.len(); - let (refund_to, instruction) = instruction_from_output::(output); - if let Some(mut instruction) = instruction { - let Some(shimmed_plan) = N::Scheduler::shim_forward_plan( - output.clone(), - self.new.as_ref().expect("forwarding from existing yet no new multisig").key, - ) else { - // If this network doesn't need forwarding, report the output now - return true; - }; - plans.push(PlanFromScanning::::Forward(output.clone())); - - // Set the instruction for this output to be returned - // We need to set it under the amount it's forwarded with, so prepare its forwarding - // TX to determine the fees involved - let PreparedSend { tx, post_fee_branches: _, operating_costs } = - prepare_send(network, block_number, shimmed_plan, 0).await; - // operating_costs should not increase in a forwarding TX - assert_eq!(operating_costs, 0); - - // If this actually forwarded any coins, save the output as forwarded - // If this didn't create a TX, we don't bother saving the output as forwarded - // The fact we already created and pushed a plan still using this output will cause - // it to not be retained here, and later the plan will be dropped as this did here, - // letting it die out - if let Some(tx) = &tx { - instruction.balance.amount.0 -= tx.0.fee(); - - /* - Sending a Plan, with arbitrary data proxying the InInstruction, would require - adding a flow for networks 
which drop their data to still embed arbitrary data. - It'd also have edge cases causing failures (we'd need to manually provide the - origin if it was implied, which may exceed the encoding limit). - - Instead, we save the InInstruction as we scan this output. Then, when the - output is successfully forwarded, we simply read it from the local database. - This also saves the costs of embedding arbitrary data. - - Since we can't rely on the Eventuality system to detect if it's a forwarded - transaction, due to the asynchronicity of the Eventuality system, we instead - interpret a Forwarded output which has an amount associated with an - InInstruction which was forwarded as having been forwarded. - */ - ForwardedOutputDb::save_forwarded_output(txn, &instruction); - } - } else if let Some(refund_to) = refund_to { - if let Ok(refund_to) = refund_to.consume().try_into() { - // Build a dedicated Plan refunding this - plans.push(PlanFromScanning::Refund(output.clone(), refund_to)); - } - } - - // Only keep if we didn't make a Plan consuming it - plans_at_start == plans.len() - })() - .await; - if should_retain { - i += 1; - continue; - } - outputs.remove(i); - } - } - - for output in outputs { - // If this is an External transaction to the existing multisig, and we're either solely - // forwarding or closing the existing multisig, drop it - // In the forwarding case, we'll report it once it hits the new multisig - if (match step { - RotationStep::UseExisting | RotationStep::NewAsChange => false, - RotationStep::ForwardFromExisting | RotationStep::ClosingExisting => true, - }) && (output.key() == self.existing.as_ref().unwrap().key) - { - continue; - } - - let (refund_to, instruction) = instruction_from_output::(&output); - let Some(instruction) = instruction else { - if let Some(refund_to) = refund_to { - if let Ok(refund_to) = refund_to.consume().try_into() { - plans.push(PlanFromScanning::Refund(output.clone(), refund_to)); - } - } - continue; - }; - - // Delay External outputs received by the new multisig earlier than expected - if Some(output.key()) == self.new.as_ref().map(|new| new.key) { - match step { - RotationStep::UseExisting => { - DelayedOutputDb::save_delayed_output(txn, &instruction); - continue; - } - RotationStep::NewAsChange | - RotationStep::ForwardFromExisting | - RotationStep::ClosingExisting => {} - } - } - - instructions.push(instruction); - } - - // Save the plans created while scanning - // TODO: Should we combine all of these plans to reduce the fees incurred from their - // execution? They're refunds and forwards. Neither should need isolated Plans/Eventualities.
- PlansFromScanningDb::set_plans_from_scanning(txn, block_number, plans); - - // If any outputs were delayed, append them into this block - match step { - RotationStep::UseExisting => {} - RotationStep::NewAsChange | - RotationStep::ForwardFromExisting | - RotationStep::ClosingExisting => { - instructions.extend(DelayedOutputDb::take_delayed_outputs(txn)); - } - } - - let mut block_hash = [0; 32]; - block_hash.copy_from_slice(block.as_ref()); - let mut batch_id = NextBatchDb::get(txn).unwrap_or_default(); - - // start with empty batch - let mut batches = vec![Batch { - network: N::NETWORK, - id: batch_id, - block: BlockHash(block_hash), - instructions: vec![], - }]; - - for instruction in instructions { - let batch = batches.last_mut().unwrap(); - batch.instructions.push(instruction); - - // check if batch is over-size - if batch.encode().len() > MAX_BATCH_SIZE { - // pop the last instruction so it's back in size - let instruction = batch.instructions.pop().unwrap(); - - // bump the id for the new batch - batch_id += 1; - - // make a new batch with this instruction included - batches.push(Batch { - network: N::NETWORK, - id: batch_id, - block: BlockHash(block_hash), - instructions: vec![instruction], - }); - } - } - - // Save the next batch ID - NextBatchDb::set(txn, &(batch_id + 1)); - - ( - block_number, - MultisigEvent::Batches( - if is_retirement_block { - Some((self.existing.as_ref().unwrap().key, self.new.as_ref().unwrap().key)) - } else { - None - }, - batches, - ), - ) - } - - // This must be emitted before ScannerEvent::Block for all completions of known Eventualities - // within the block. Unknown Eventualities may have their Completed events emitted after - // ScannerEvent::Block however. - ScannerEvent::Completed(key, block_number, id, tx_id, completion) => { - ResolvedDb::resolve_plan::(txn, &key, id, &tx_id); - (block_number, MultisigEvent::Completed(key, id, completion)) - } - }; - - // If we either received a Block event (which will be the trigger when we have no - // Plans/Eventualities leading into ClosingExisting), or we received the last Completed for - // this multisig, set its retirement block - let existing = self.existing.as_ref().unwrap(); - - // This multisig is closing - let closing = self.current_rotation_step(block_number) == RotationStep::ClosingExisting; - // There's nothing left in its Scheduler. This call is safe as: - // 1) When ClosingExisting, all outputs should've been already forwarded, preventing - // new UTXOs from accumulating. - // 2) No new payments should be issued. - // 3) While there may be plans, they'll be dropped to create Eventualities. - // If this Eventuality is resolved, the Plan has already been dropped. - // 4) If this Eventuality will trigger a Plan, it'll still be in the plans HashMap. - let scheduler_is_empty = closing && existing.scheduler.empty(); - // Nothing is still being signed - let no_active_plans = scheduler_is_empty && - PlanDb::active_plans::(txn, existing.key.to_bytes().as_ref()).is_empty(); - - self - .scanner - .multisig_completed - // The above explicitly included their predecessor to ensure short-circuiting, yet their - // names aren't defined as an aggregate check. 
Still including all three here ensures all are - // used in the final value - .send(closing && scheduler_is_empty && no_active_plans) - .unwrap(); - - event - } - - pub async fn next_scanner_event(&mut self) -> ScannerEvent { - self.scanner.events.recv().await.unwrap() - } + Completed(Vec, [u8; 32], (reader: &mut R) -> io::Result; - fn write(&self, writer: &mut W) -> io::Result<()>; -} - -impl SchedulerAddendum for () { - fn read(_: &mut R) -> io::Result { - Ok(()) - } - fn write(&self, _: &mut W) -> io::Result<()> { - Ok(()) - } -} - -pub trait Scheduler: Sized + Clone + PartialEq + Debug { - type Addendum: SchedulerAddendum; - - /// Check if this Scheduler is empty. - fn empty(&self) -> bool; - - /// Create a new Scheduler. - fn new( - txn: &mut D::Transaction<'_>, - key: ::G, - network: NetworkId, - ) -> Self; - - /// Load a Scheduler from the DB. - fn from_db( - db: &D, - key: ::G, - network: NetworkId, - ) -> io::Result; - - /// Check if a branch is usable. - fn can_use_branch(&self, balance: Balance) -> bool; - - /// Schedule a series of outputs/payments. - fn schedule( - &mut self, - txn: &mut D::Transaction<'_>, - utxos: Vec, - payments: Vec>, - // TODO: Tighten this to multisig_for_any_change - key_for_any_change: ::G, - force_spend: bool, - ) -> Vec>; - - /// Consume all payments still pending within this Scheduler, without scheduling them. - fn consume_payments(&mut self, txn: &mut D::Transaction<'_>) -> Vec>; - - /// Note a branch output as having been created, with the amount it was actually created with, - /// or not having been created due to being too small. - fn created_output( - &mut self, - txn: &mut D::Transaction<'_>, - expected: u64, - actual: Option, - ); - - /// Refund a specific output. - fn refund_plan( - &mut self, - txn: &mut D::Transaction<'_>, - output: N::Output, - refund_to: N::Address, - ) -> Plan; - - /// Shim the forwarding Plan as necessary to obtain a fee estimate. - /// - /// If this Scheduler is for a Network which requires forwarding, this must return Some with a - /// plan with identical fee behavior. If forwarding isn't necessary, returns None. - fn shim_forward_plan(output: N::Output, to: ::G) -> Option>; - - /// Forward a specific output to the new multisig. - /// - /// Returns None if no forwarding is necessary. Must return Some if forwarding is necessary. - fn forward_plan( - &mut self, - txn: &mut D::Transaction<'_>, - output: N::Output, - to: ::G, - ) -> Option>; -} diff --git a/processor/src/multisigs/scheduler/utxo.rs b/processor/src/multisigs/scheduler/utxo.rs deleted file mode 100644 index 1865cab9..00000000 --- a/processor/src/multisigs/scheduler/utxo.rs +++ /dev/null @@ -1,631 +0,0 @@ -use std::{ - io::{self, Read}, - collections::{VecDeque, HashMap}, -}; - -use ciphersuite::{group::GroupEncoding, Ciphersuite}; - -use serai_client::primitives::{NetworkId, Coin, Amount, Balance}; - -use crate::{ - DbTxn, Db, Payment, Plan, - networks::{OutputType, Output, Network, UtxoNetwork}, - multisigs::scheduler::Scheduler as SchedulerTrait, -}; - -/// Deterministic output/payment manager. -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct Scheduler { - key: ::G, - coin: Coin, - - // Serai, when it has more outputs expected than it can handle in a single transaction, will - // schedule the outputs to be handled later. 
Immediately, it just creates additional outputs - // which will eventually handle those outputs - // - // These maps map output amounts, which we'll receive in the future, to the payments they should - // be used on - // - // When those output amounts appear, their payments should be scheduled - // The Vec is for all payments that should be done per output instance - // The VecDeque allows multiple sets of payments with the same sum amount to properly co-exist - // - // queued_plans are for outputs which we will create, yet when created, will have their amount - // reduced by the fee it cost to be created. The Scheduler will then be told what amount the - // output actually has, and it'll be moved into plans - queued_plans: HashMap>>>, - plans: HashMap>>>, - - // UTXOs available - utxos: Vec, - - // Payments awaiting scheduling due to the output availability problem - payments: VecDeque>, -} - -fn scheduler_key(key: &G) -> Vec { - D::key(b"SCHEDULER", b"scheduler", key.to_bytes()) -} - -impl> Scheduler { - pub fn empty(&self) -> bool { - self.queued_plans.is_empty() && - self.plans.is_empty() && - self.utxos.is_empty() && - self.payments.is_empty() - } - - fn read( - key: ::G, - coin: Coin, - reader: &mut R, - ) -> io::Result { - let mut read_plans = || -> io::Result<_> { - let mut all_plans = HashMap::new(); - let mut all_plans_len = [0; 4]; - reader.read_exact(&mut all_plans_len)?; - for _ in 0 .. u32::from_le_bytes(all_plans_len) { - let mut amount = [0; 8]; - reader.read_exact(&mut amount)?; - let amount = u64::from_le_bytes(amount); - - let mut plans = VecDeque::new(); - let mut plans_len = [0; 4]; - reader.read_exact(&mut plans_len)?; - for _ in 0 .. u32::from_le_bytes(plans_len) { - let mut payments = vec![]; - let mut payments_len = [0; 4]; - reader.read_exact(&mut payments_len)?; - - for _ in 0 .. u32::from_le_bytes(payments_len) { - payments.push(Payment::read(reader)?); - } - plans.push_back(payments); - } - all_plans.insert(amount, plans); - } - Ok(all_plans) - }; - let queued_plans = read_plans()?; - let plans = read_plans()?; - - let mut utxos = vec![]; - let mut utxos_len = [0; 4]; - reader.read_exact(&mut utxos_len)?; - for _ in 0 .. u32::from_le_bytes(utxos_len) { - utxos.push(N::Output::read(reader)?); - } - - let mut payments = VecDeque::new(); - let mut payments_len = [0; 4]; - reader.read_exact(&mut payments_len)?; - for _ in 0 ..
u32::from_le_bytes(payments_len) { - payments.push_back(Payment::read(reader)?); - } - - Ok(Scheduler { key, coin, queued_plans, plans, utxos, payments }) - } - - // TODO2: Get rid of this - // We reserialize the entire scheduler on any mutation to save it to the DB which is horrible - // We should have an incremental solution - fn serialize(&self) -> Vec { - let mut res = Vec::with_capacity(4096); - - let mut write_plans = |plans: &HashMap>>>| { - res.extend(u32::try_from(plans.len()).unwrap().to_le_bytes()); - for (amount, list_of_plans) in plans { - res.extend(amount.to_le_bytes()); - res.extend(u32::try_from(list_of_plans.len()).unwrap().to_le_bytes()); - for plan in list_of_plans { - res.extend(u32::try_from(plan.len()).unwrap().to_le_bytes()); - for payment in plan { - payment.write(&mut res).unwrap(); - } - } - } - }; - write_plans(&self.queued_plans); - write_plans(&self.plans); - - res.extend(u32::try_from(self.utxos.len()).unwrap().to_le_bytes()); - for utxo in &self.utxos { - utxo.write(&mut res).unwrap(); - } - - res.extend(u32::try_from(self.payments.len()).unwrap().to_le_bytes()); - for payment in &self.payments { - payment.write(&mut res).unwrap(); - } - - debug_assert_eq!(&Self::read(self.key, self.coin, &mut res.as_slice()).unwrap(), self); - res - } - - pub fn new( - txn: &mut D::Transaction<'_>, - key: ::G, - network: NetworkId, - ) -> Self { - assert!(N::branch_address(key).is_some()); - assert!(N::change_address(key).is_some()); - assert!(N::forward_address(key).is_some()); - - let coin = { - let coins = network.coins(); - assert_eq!(coins.len(), 1); - coins[0] - }; - - let res = Scheduler { - key, - coin, - queued_plans: HashMap::new(), - plans: HashMap::new(), - utxos: vec![], - payments: VecDeque::new(), - }; - // Save it to disk so from_db won't panic if we don't mutate it before rebooting - txn.put(scheduler_key::(&res.key), res.serialize()); - res - } - - pub fn from_db( - db: &D, - key: ::G, - network: NetworkId, - ) -> io::Result { - let coin = { - let coins = network.coins(); - assert_eq!(coins.len(), 1); - coins[0] - }; - - let scheduler = db.get(scheduler_key::(&key)).unwrap_or_else(|| { - panic!("loading scheduler from DB without scheduler for {}", hex::encode(key.to_bytes())) - }); - let mut reader_slice = scheduler.as_slice(); - let reader = &mut reader_slice; - - Self::read(key, coin, reader) - } - - pub fn can_use_branch(&self, balance: Balance) -> bool { - assert_eq!(balance.coin, self.coin); - self.plans.contains_key(&balance.amount.0) - } - - fn execute( - &mut self, - inputs: Vec, - mut payments: Vec>, - key_for_any_change: ::G, - ) -> Plan { - let mut change = false; - let mut max = N::MAX_OUTPUTS; - - let payment_amounts = |payments: &Vec>| { - payments.iter().map(|payment| payment.balance.amount.0).sum::() - }; - - // Requires a change output - if inputs.iter().map(|output| output.balance().amount.0).sum::() != - payment_amounts(&payments) - { - change = true; - max -= 1; - } - - let mut add_plan = |payments| { - let amount = payment_amounts(&payments); - self.queued_plans.entry(amount).or_insert(VecDeque::new()).push_back(payments); - amount - }; - - let branch_address = N::branch_address(self.key).unwrap(); - - // If we have more payments than we can handle in a single TX, create plans for them - // TODO2: This isn't perfect. 
For 258 outputs, and a MAX_OUTPUTS of 16, this will create: - // 15 branches of 16 leaves - // 1 branch of: - // - 1 branch of 16 leaves - // - 2 leaves - // If this was perfect, the heaviest branch would have 1 branch of 3 leaves and 15 leaves - while payments.len() > max { - // The resulting TX will have the remaining payments and a new branch payment - let to_remove = (payments.len() + 1) - N::MAX_OUTPUTS; - // Don't remove more than possible - let to_remove = to_remove.min(N::MAX_OUTPUTS); - - // Create the plan - let removed = payments.drain((payments.len() - to_remove) ..).collect::>(); - assert_eq!(removed.len(), to_remove); - let amount = add_plan(removed); - - // Create the payment for the plan - // Push it to the front so it's not moved into a branch until all lower-depth items are - payments.insert( - 0, - Payment { - address: branch_address.clone(), - data: None, - balance: Balance { coin: self.coin, amount: Amount(amount) }, - }, - ); - } - - Plan { - key: self.key, - inputs, - payments, - change: Some(N::change_address(key_for_any_change).unwrap()).filter(|_| change), - scheduler_addendum: (), - } - } - - fn add_outputs( - &mut self, - mut utxos: Vec, - key_for_any_change: ::G, - ) -> Vec> { - log::info!("adding {} outputs", utxos.len()); - - let mut txs = vec![]; - - for utxo in utxos.drain(..) { - if utxo.kind() == OutputType::Branch { - let amount = utxo.balance().amount.0; - if let Some(plans) = self.plans.get_mut(&amount) { - // Execute the first set of payments possible with an output of this amount - let payments = plans.pop_front().unwrap(); - // They won't be equal if we dropped payments due to being dust - assert!(amount >= payments.iter().map(|payment| payment.balance.amount.0).sum::()); - - // If we've grabbed the last plan for this output amount, remove it from the map - if plans.is_empty() { - self.plans.remove(&amount); - } - - // Create a TX for these payments - txs.push(self.execute(vec![utxo], payments, key_for_any_change)); - continue; - } - } - - self.utxos.push(utxo); - } - - log::info!("{} planned TXs have had their required inputs confirmed", txs.len()); - txs - } - - // Schedule a series of outputs/payments. - pub fn schedule( - &mut self, - txn: &mut D::Transaction<'_>, - utxos: Vec, - mut payments: Vec>, - key_for_any_change: ::G, - force_spend: bool, - ) -> Vec> { - for utxo in &utxos { - assert_eq!(utxo.balance().coin, self.coin); - } - for payment in &payments { - assert_eq!(payment.balance.coin, self.coin); - } - - // Drop payments to our own branch address - /* - created_output will be called any time we send to a branch address. If it's called, and it - wasn't expecting to be called, that's almost certainly an error. The only way to guarantee - this however is to only have us send to a branch address when creating a branch, hence the - dropping of pointless payments. - - This is not comprehensive as a payment may still be made to another active multisig's branch - address, depending on timing. This is safe as the issue only occurs when a multisig sends to - its *own* branch address, since created_output is called on the signer's Scheduler. 
- */ - { - let branch_address = N::branch_address(self.key).unwrap(); - payments = - payments.drain(..).filter(|payment| payment.address != branch_address).collect::>(); - } - - let mut plans = self.add_outputs(utxos, key_for_any_change); - - log::info!("scheduling {} new payments", payments.len()); - - // Add all new payments to the list of pending payments - self.payments.extend(payments); - let payments_at_start = self.payments.len(); - log::info!("{} payments are now scheduled", payments_at_start); - - // If we don't have UTXOs available, don't try to continue - if self.utxos.is_empty() { - log::info!("no utxos currently available"); - return plans; - } - - // Sort UTXOs so the highest valued ones are first - self.utxos.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0).reverse()); - - // We always want to aggregate our UTXOs into a single UTXO in the name of simplicity - // We may have more UTXOs than will fit into a TX though - // We use the most valuable UTXOs to handle our current payments, and we return aggregation TXs - // for the rest of the inputs - // Since we do multiple aggregation TXs at once, this will execute in logarithmic time - let utxos = self.utxos.drain(..).collect::>(); - let mut utxo_chunks = - utxos.chunks(N::MAX_INPUTS).map(<[::Output]>::to_vec).collect::>(); - - // Use the first chunk for any scheduled payments, since it has the most value - let utxos = utxo_chunks.remove(0); - - // If the last chunk exists and only has one output, don't try aggregating it - // Set it to be restored to UTXO set - let mut to_restore = None; - if let Some(mut chunk) = utxo_chunks.pop() { - if chunk.len() == 1 { - to_restore = Some(chunk.pop().unwrap()); - } else { - utxo_chunks.push(chunk); - } - } - - for chunk in utxo_chunks.drain(..) { - log::debug!("aggregating a chunk of {} inputs", chunk.len()); - plans.push(Plan { - key: self.key, - inputs: chunk, - payments: vec![], - change: Some(N::change_address(key_for_any_change).unwrap()), - scheduler_addendum: (), - }) - } - - // We want to use all possible UTXOs for all possible payments - let mut balance = utxos.iter().map(|output| output.balance().amount.0).sum::(); - - // If we can't fulfill the next payment, we have encountered an instance of the UTXO - // availability problem - // This shows up in networks like Monero, where because we spent outputs, our change has yet to - // re-appear. 
Since it has yet to re-appear, we only operate with a balance which is a subset - // of our total balance - // Despite this, we may be ordered to fulfill a payment which is our total balance - // The solution is to wait for the temporarily unavailable change outputs to re-appear, - // granting us access to our full balance - let mut executing = vec![]; - while !self.payments.is_empty() { - let amount = self.payments[0].balance.amount.0; - if balance.checked_sub(amount).is_some() { - balance -= amount; - executing.push(self.payments.pop_front().unwrap()); - } else { - // Doesn't check if other payments would fit into the current batch as doing so may never - // let enough inputs become simultaneously available to enable handling of payments[0] - break; - } - } - - // Now that we have the list of payments we can successfully handle right now, create the TX - // for them - if !executing.is_empty() { - plans.push(self.execute(utxos, executing, key_for_any_change)); - } else { - // If we don't have any payments to execute, save these UTXOs for later - self.utxos.extend(utxos); - } - - // If we're instructed to force a spend, do so - // This is used when an old multisig is retiring and we want to always transfer outputs to the - // new one, regardless of whether we currently have payments - if force_spend && (!self.utxos.is_empty()) { - assert!(self.utxos.len() <= N::MAX_INPUTS); - plans.push(Plan { - key: self.key, - inputs: self.utxos.drain(..).collect::>(), - payments: vec![], - change: Some(N::change_address(key_for_any_change).unwrap()), - scheduler_addendum: (), - }); - } - - // If there's a UTXO to restore, restore it - // This is done now as, if there is a to_restore output and it was inserted into self.utxos - // earlier, self.utxos.len() may become `N::MAX_INPUTS + 1` - // The prior block requires the len to be `<= N::MAX_INPUTS` - if let Some(to_restore) = to_restore { - self.utxos.push(to_restore); - } - - txn.put(scheduler_key::(&self.key), self.serialize()); - - log::info!( - "created {} plans containing {} payments to sign, with {} payments pending scheduling", - plans.len(), - payments_at_start - self.payments.len(), - self.payments.len(), - ); - plans - } - - pub fn consume_payments(&mut self, txn: &mut D::Transaction<'_>) -> Vec> { - let res: Vec<_> = self.payments.drain(..).collect(); - if !res.is_empty() { - txn.put(scheduler_key::(&self.key), self.serialize()); - } - res - } - - // Note a branch output as having been created, with the amount it was actually created with, - // or not having been created due to being too small - pub fn created_output( - &mut self, - txn: &mut D::Transaction<'_>, - expected: u64, - actual: Option, - ) { - log::debug!("output expected to have {} had {:?} after fees", expected, actual); - - // Get the payments this output is expected to handle - let queued = self.queued_plans.get_mut(&expected).unwrap(); - let mut payments = queued.pop_front().unwrap(); - assert_eq!(expected, payments.iter().map(|payment| payment.balance.amount.0).sum::()); - // If this was the last set of payments at this amount, remove it - if queued.is_empty() { - self.queued_plans.remove(&expected); - } - - // If we didn't actually create this output, return, dropping the child payments - let Some(actual) = actual else { return }; - - // Amortize the fee amongst all payments underneath this branch - { - let mut to_amortize = expected - actual; - // If the payments are worth less than this fee we need to amortize, return, dropping them - if payments.iter().map(|payment|
payment.balance.amount.0).sum::() < to_amortize { - return; - } - while to_amortize != 0 { - let payments_len = u64::try_from(payments.len()).unwrap(); - let per_payment = to_amortize / payments_len; - let mut overage = to_amortize % payments_len; - - for payment in &mut payments { - let to_subtract = per_payment + overage; - // Only subtract the overage once - overage = 0; - - let subtractable = payment.balance.amount.0.min(to_subtract); - to_amortize -= subtractable; - payment.balance.amount.0 -= subtractable; - } - } - } - - // Drop payments now below the dust threshold - let payments = payments - .into_iter() - .filter(|payment| payment.balance.amount.0 >= N::DUST) - .collect::>(); - // Sanity check this was done properly - assert!(actual >= payments.iter().map(|payment| payment.balance.amount.0).sum::()); - - // If there's no payments left, return - if payments.is_empty() { - return; - } - - self.plans.entry(actual).or_insert(VecDeque::new()).push_back(payments); - - // TODO2: This shows how ridiculous the serialize function is - txn.put(scheduler_key::(&self.key), self.serialize()); - } -} - -impl> SchedulerTrait for Scheduler { - type Addendum = (); - - /// Check if this Scheduler is empty. - fn empty(&self) -> bool { - Scheduler::empty(self) - } - - /// Create a new Scheduler. - fn new( - txn: &mut D::Transaction<'_>, - key: ::G, - network: NetworkId, - ) -> Self { - Scheduler::new::(txn, key, network) - } - - /// Load a Scheduler from the DB. - fn from_db( - db: &D, - key: ::G, - network: NetworkId, - ) -> io::Result { - Scheduler::from_db::(db, key, network) - } - - /// Check if a branch is usable. - fn can_use_branch(&self, balance: Balance) -> bool { - Scheduler::can_use_branch(self, balance) - } - - /// Schedule a series of outputs/payments. - fn schedule( - &mut self, - txn: &mut D::Transaction<'_>, - utxos: Vec, - payments: Vec>, - key_for_any_change: ::G, - force_spend: bool, - ) -> Vec> { - Scheduler::schedule::(self, txn, utxos, payments, key_for_any_change, force_spend) - } - - /// Consume all payments still pending within this Scheduler, without scheduling them. - fn consume_payments(&mut self, txn: &mut D::Transaction<'_>) -> Vec> { - Scheduler::consume_payments::(self, txn) - } - - /// Note a branch output as having been created, with the amount it was actually created with, - /// or not having been created due to being too small. - // TODO: Move this to Balance. 
- fn created_output( - &mut self, - txn: &mut D::Transaction<'_>, - expected: u64, - actual: Option, - ) { - Scheduler::created_output::(self, txn, expected, actual) - } - - fn refund_plan( - &mut self, - _: &mut D::Transaction<'_>, - output: N::Output, - refund_to: N::Address, - ) -> Plan { - let output_id = output.id().as_ref().to_vec(); - let res = Plan { - key: output.key(), - // Uses a payment as this will still be successfully sent due to fee amortization, - // and because change is currently always a Serai key - payments: vec![Payment { address: refund_to, data: None, balance: output.balance() }], - inputs: vec![output], - change: None, - scheduler_addendum: (), - }; - log::info!("refund plan for {} has ID {}", hex::encode(output_id), hex::encode(res.id())); - res - } - - fn shim_forward_plan(output: N::Output, to: ::G) -> Option> { - Some(Plan { - key: output.key(), - payments: vec![Payment { - address: N::forward_address(to).unwrap(), - data: None, - balance: output.balance(), - }], - inputs: vec![output], - change: None, - scheduler_addendum: (), - }) - } - - fn forward_plan( - &mut self, - _: &mut D::Transaction<'_>, - output: N::Output, - to: ::G, - ) -> Option> { - assert_eq!(self.key, output.key()); - // Call shim as shim returns the actual - Self::shim_forward_plan(output, to) - } -} diff --git a/processor/src/plan.rs b/processor/src/plan.rs deleted file mode 100644 index 58a8a5e1..00000000 --- a/processor/src/plan.rs +++ /dev/null @@ -1,212 +0,0 @@ -use std::io; - -use scale::{Encode, Decode}; - -use transcript::{Transcript, RecommendedTranscript}; -use ciphersuite::group::GroupEncoding; -use frost::curve::Ciphersuite; - -use serai_client::primitives::Balance; - -use crate::{ - networks::{Output, Network}, - multisigs::scheduler::{SchedulerAddendum, Scheduler}, -}; - -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct Payment { - pub address: N::Address, - pub data: Option>, - pub balance: Balance, -} - -impl Payment { - pub fn transcript(&self, transcript: &mut T) { - transcript.domain_separate(b"payment"); - transcript.append_message(b"address", self.address.to_string().as_bytes()); - if let Some(data) = self.data.as_ref() { - transcript.append_message(b"data", data); - } - transcript.append_message(b"coin", self.balance.coin.encode()); - transcript.append_message(b"amount", self.balance.amount.0.to_le_bytes()); - } - - pub fn write(&self, writer: &mut W) -> io::Result<()> { - // TODO: Don't allow creating Payments with an Address which can't be serialized - let address: Vec = self - .address - .clone() - .try_into() - .map_err(|_| io::Error::other("address couldn't be serialized"))?; - writer.write_all(&u32::try_from(address.len()).unwrap().to_le_bytes())?; - writer.write_all(&address)?; - - writer.write_all(&[u8::from(self.data.is_some())])?; - if let Some(data) = &self.data { - writer.write_all(&u32::try_from(data.len()).unwrap().to_le_bytes())?; - writer.write_all(data)?; - } - - writer.write_all(&self.balance.encode()) - } - - pub fn read(reader: &mut R) -> io::Result { - let mut buf = [0; 4]; - reader.read_exact(&mut buf)?; - let mut address = vec![0; usize::try_from(u32::from_le_bytes(buf)).unwrap()]; - reader.read_exact(&mut address)?; - let address = N::Address::try_from(address).map_err(|_| io::Error::other("invalid address"))?; - - let mut buf = [0; 1]; - reader.read_exact(&mut buf)?; - let data = if buf[0] == 1 { - let mut buf = [0; 4]; - reader.read_exact(&mut buf)?; - let mut data = vec![0; usize::try_from(u32::from_le_bytes(buf)).unwrap()]; - 
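// Fill the length-prefixed buffer just allocated with the payment's data -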
reader.read_exact(&mut data)?; - Some(data) - } else { - None - }; - - let balance = Balance::decode(&mut scale::IoReader(reader)) - .map_err(|_| io::Error::other("invalid balance"))?; - - Ok(Payment { address, data, balance }) - } -} - -#[derive(Clone, PartialEq)] -pub struct Plan { - pub key: ::G, - pub inputs: Vec, - /// The payments this Plan is intended to create. - /// - /// This should only contain payments leaving Serai. While it is acceptable for users to enter - /// Serai's address(es) as the payment address, as that'll be handled by anything which expects - /// certain properties, Serai as a system MUST NOT use payments for internal transfers. Doing - /// so will cause a reduction in their value by the TX fee/operating costs, creating an - /// incomplete transfer. - pub payments: Vec>, - /// The change this Plan should use. - /// - /// This MUST contain a Serai address. Operating costs may be deducted from the payments in this - /// Plan on the premise that the change address is Serai's, and accordingly, Serai will recoup - /// the operating costs. - // - // TODO: Consider moving to ::G? - pub change: Option, - /// The scheduler's additional data. - pub scheduler_addendum: >::Addendum, -} -impl core::fmt::Debug for Plan { - fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> { - fmt - .debug_struct("Plan") - .field("key", &hex::encode(self.key.to_bytes())) - .field("inputs", &self.inputs) - .field("payments", &self.payments) - .field("change", &self.change.as_ref().map(ToString::to_string)) - .field("scheduler_addendum", &self.scheduler_addendum) - .finish() - } -} - -impl Plan { - pub fn transcript(&self) -> RecommendedTranscript { - let mut transcript = RecommendedTranscript::new(b"Serai Processor Plan ID"); - transcript.domain_separate(b"meta"); - transcript.append_message(b"network", N::ID); - transcript.append_message(b"key", self.key.to_bytes()); - - transcript.domain_separate(b"inputs"); - for input in &self.inputs { - transcript.append_message(b"input", input.id()); - } - - transcript.domain_separate(b"payments"); - for payment in &self.payments { - payment.transcript(&mut transcript); - } - - if let Some(change) = &self.change { - transcript.append_message(b"change", change.to_string()); - } - - let mut addendum_bytes = vec![]; - self.scheduler_addendum.write(&mut addendum_bytes).unwrap(); - transcript.append_message(b"scheduler_addendum", addendum_bytes); - - transcript - } - - pub fn id(&self) -> [u8; 32] { - let challenge = self.transcript().challenge(b"id"); - let mut res = [0; 32]; - res.copy_from_slice(&challenge[.. 32]); - res - } - - pub fn write(&self, writer: &mut W) -> io::Result<()> { - writer.write_all(self.key.to_bytes().as_ref())?; - - writer.write_all(&u32::try_from(self.inputs.len()).unwrap().to_le_bytes())?; - for input in &self.inputs { - input.write(writer)?; - } - - writer.write_all(&u32::try_from(self.payments.len()).unwrap().to_le_bytes())?; - for payment in &self.payments { - payment.write(writer)?; - } - - // TODO: Have Plan construction fail if change cannot be serialized - let change = if let Some(change) = &self.change { - change.clone().try_into().map_err(|_| { - io::Error::other(format!( - "an address we said to use as change couldn't be converted to a Vec: {}", - change.to_string(), - )) - })? 
- } else { - vec![] - }; - assert!(serai_client::primitives::MAX_ADDRESS_LEN <= u8::MAX.into()); - writer.write_all(&[u8::try_from(change.len()).unwrap()])?; - writer.write_all(&change)?; - self.scheduler_addendum.write(writer) - } - - pub fn read(reader: &mut R) -> io::Result { - let key = N::Curve::read_G(reader)?; - - let mut inputs = vec![]; - let mut buf = [0; 4]; - reader.read_exact(&mut buf)?; - for _ in 0 .. u32::from_le_bytes(buf) { - inputs.push(N::Output::read(reader)?); - } - - let mut payments = vec![]; - reader.read_exact(&mut buf)?; - for _ in 0 .. u32::from_le_bytes(buf) { - payments.push(Payment::::read(reader)?); - } - - let mut len = [0; 1]; - reader.read_exact(&mut len)?; - let mut change = vec![0; usize::from(len[0])]; - reader.read_exact(&mut change)?; - let change = - if change.is_empty() { - None - } else { - Some(N::Address::try_from(change).map_err(|_| { - io::Error::other("couldn't deserialize an Address serialized into a Plan") - })?) - }; - - let scheduler_addendum = >::Addendum::read(reader)?; - Ok(Plan { key, inputs, payments, change, scheduler_addendum }) - } -} From d570c1d277980abbaebc4a81af5a703206cc45e8 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 4 Sep 2024 17:29:48 -0400 Subject: [PATCH 074/368] Move additional_key.rs to serai-processor-view-keys I don't love this. I wanted to simply add this function to `processor/key-gen`, but then anyone who wants a view key needs to pull in Bulletproofs which is a mess of code. They'd also be subject to an AGPL licensed library. This is so small it should be a primitive elsewhere, yet there is no primitives library eligible. Maybe serai-client since that has the code to make transactions to Serai (and will have this as a dependency)? Except then the processor has to import serai-client when this rewrite removed it as a dependency. 
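
For illustration, a minimal sketch of how a downstream user might derive view
keys with this crate. `Ristretto` is just an example ciphersuite (and assumes
the `ristretto` feature of `ciphersuite` is enabled); any `Ciphersuite` works:

  use ciphersuite::{Ciphersuite, Ristretto};
  use serai_processor_view_keys::view_key;

  fn example() {
    // Each index yields an independent, globally consistent view key,
    // so a single context may hold multiple keys
    let first = view_key::<Ristretto>(0);
    let second = view_key::<Ristretto>(1);
    assert!(first != second);
  }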
--- .github/workflows/tests.yml | 1 + Cargo.lock | 7 +++++++ Cargo.toml | 1 + processor/src/additional_key.rs | 14 -------------- processor/src/lib.rs | 15 --------------- processor/src/main.rs | 6 ------ processor/src/multisigs/mod.rs | 2 +- processor/view-keys/Cargo.toml | 19 +++++++++++++++++++ processor/view-keys/LICENSE | 21 +++++++++++++++++++++ processor/view-keys/README.md | 6 ++++++ processor/view-keys/src/lib.rs | 13 +++++++++++++ 11 files changed, 69 insertions(+), 36 deletions(-) delete mode 100644 processor/src/additional_key.rs delete mode 100644 processor/src/lib.rs create mode 100644 processor/view-keys/Cargo.toml create mode 100644 processor/view-keys/LICENSE create mode 100644 processor/view-keys/README.md create mode 100644 processor/view-keys/src/lib.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index a6260579..1c37eb55 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -40,6 +40,7 @@ jobs: -p serai-message-queue \ -p serai-processor-messages \ -p serai-processor-key-gen \ + -p serai-processor-view-keys \ -p serai-processor-frost-attempt-manager \ -p serai-processor-primitives \ -p serai-processor-scanner \ diff --git a/Cargo.lock b/Cargo.lock index b3fa4e36..8662be6f 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8760,6 +8760,13 @@ dependencies = [ "serai-processor-scheduler-primitives", ] +[[package]] +name = "serai-processor-view-keys" +version = "0.1.0" +dependencies = [ + "ciphersuite", +] + [[package]] name = "serai-reproducible-runtime-tests" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index a2d86c82..eb98c263 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -71,6 +71,7 @@ members = [ "processor/messages", "processor/key-gen", + "processor/view-keys", "processor/frost-attempt-manager", "processor/primitives", diff --git a/processor/src/additional_key.rs b/processor/src/additional_key.rs deleted file mode 100644 index f875950d..00000000 --- a/processor/src/additional_key.rs +++ /dev/null @@ -1,14 +0,0 @@ -use ciphersuite::Ciphersuite; - -use crate::networks::Network; - -// Generate a static additional key for a given chain in a globally consistent manner -// Doesn't consider the current group key to increase the simplicity of verifying Serai's status -// Takes an index, k, to support protocols which use multiple secondary keys -// Presumably a view key -pub fn additional_key(k: u64) -> ::F { - ::hash_to_F( - b"Serai DEX Additional Key", - &[N::ID.as_bytes(), &k.to_le_bytes()].concat(), - ) -} diff --git a/processor/src/lib.rs b/processor/src/lib.rs deleted file mode 100644 index bbff33f6..00000000 --- a/processor/src/lib.rs +++ /dev/null @@ -1,15 +0,0 @@ -#![allow(dead_code)] - -mod plan; -pub use plan::*; - -mod db; -pub(crate) use db::*; - -use serai_processor_key_gen as key_gen; - -pub mod networks; -pub(crate) mod multisigs; - -mod additional_key; -pub use additional_key::additional_key; diff --git a/processor/src/main.rs b/processor/src/main.rs index 49406aaf..10406729 100644 --- a/processor/src/main.rs +++ b/processor/src/main.rs @@ -27,9 +27,6 @@ use serai_env as env; use message_queue::{Service, client::MessageQueue}; -mod plan; -pub use plan::*; - mod networks; use networks::{Block, Network}; #[cfg(feature = "bitcoin")] @@ -39,9 +36,6 @@ use networks::Ethereum; #[cfg(feature = "monero")] use networks::Monero; -mod additional_key; -pub use additional_key::additional_key; - mod db; pub use db::*; diff --git a/processor/src/multisigs/mod.rs b/processor/src/multisigs/mod.rs index c20a922c..1c4adabf 
100644
--- a/processor/src/multisigs/mod.rs
+++ b/processor/src/multisigs/mod.rs
@@ -4,5 +4,5 @@ pub enum MultisigEvent {
 // Batches to publish
 Batches(Option<(<N::Curve as Ciphersuite>::G, <N::Curve as Ciphersuite>::G)>, Vec<Batch>),
 // Eventuality completion found on-chain
- Completed(Vec<u8>, [u8; 32], <N::Eventuality as Eventuality>::Completion),
+ Completed(Vec<u8>, [u8; 32], N::Eventuality),
 }
diff --git a/processor/view-keys/Cargo.toml b/processor/view-keys/Cargo.toml
new file mode 100644
index 00000000..6fdd9134
--- /dev/null
+++ b/processor/view-keys/Cargo.toml
@@ -0,0 +1,19 @@
+[package]
+name = "serai-processor-view-keys"
+version = "0.1.0"
+description = "View keys for the Serai processor"
+license = "MIT"
+repository = "https://github.com/serai-dex/serai/tree/develop/processor/view-keys"
+authors = ["Luke Parker "]
+keywords = []
+edition = "2021"
+
+[package.metadata.docs.rs]
+all-features = true
+rustdoc-args = ["--cfg", "docsrs"]
+
+[lints]
+workspace = true
+
+[dependencies]
+ciphersuite = { version = "0.4", path = "../../crypto/ciphersuite", default-features = false, features = ["std"] }
diff --git a/processor/view-keys/LICENSE b/processor/view-keys/LICENSE
new file mode 100644
index 00000000..91d893c1
--- /dev/null
+++ b/processor/view-keys/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2022-2024 Luke Parker
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/processor/view-keys/README.md b/processor/view-keys/README.md
new file mode 100644
index 00000000..4354eed6
--- /dev/null
+++ b/processor/view-keys/README.md
@@ -0,0 +1,6 @@
+# Serai Processor View Keys
+
+View keys for the Serai processor.
+
+This is a MIT-licensed library made available for anyone to generate Serai's
+view keys, as necessary for auditing reasons and for sending coins to Serai.
diff --git a/processor/view-keys/src/lib.rs b/processor/view-keys/src/lib.rs
new file mode 100644
index 00000000..c0d4c68e
--- /dev/null
+++ b/processor/view-keys/src/lib.rs
@@ -0,0 +1,13 @@
+#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![doc = include_str!("../README.md")]
+#![deny(missing_docs)]
+
+use ciphersuite::Ciphersuite;
+
+/// Generate a view key for usage within Serai.
+///
+/// `k` is the index of the key to generate (enabling generating multiple view keys within a
+/// single context).
+pub fn view_key<C: Ciphersuite>(k: u64) -> C::F {
+  C::hash_to_F(b"Serai DEX View Key", &k.to_le_bytes())
+}
From b50b8899180e7442a8100eba209fc051d168500f Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Wed, 4 Sep 2024 22:39:41 -0400
Subject: [PATCH 075/368] Split processor into bitcoin-processor,
 ethereum-processor, monero-processor

---
 .github/workflows/tests.yml | 4 +-
 Cargo.toml | 5 +-
 deny.toml | 6 +-
 processor/Cargo.toml | 96 ---
 processor/README.md | 6 +-
 processor/bitcoin/Cargo.toml | 46 ++
 processor/{ => bitcoin}/LICENSE | 0
 processor/bitcoin/README.md | 1 +
 .../bitcoin.rs => bitcoin/src/lib.rs} | 4 +
 processor/ethereum/Cargo.toml | 45 ++
 processor/ethereum/LICENSE | 15 +
 processor/ethereum/README.md | 1 +
 .../ethereum.rs => ethereum/src/lib.rs} | 4 +
 processor/monero/Cargo.toml | 46 ++
 processor/monero/LICENSE | 15 +
 processor/monero/README.md | 1 +
 .../networks/monero.rs => monero/src/lib.rs} | 4 +
 processor/scanner/src/lib.rs | 4 +
 .../scheduler/utxo/primitives/src/lib.rs | 1 +
 processor/src/networks/mod.rs | 658 ------------------
 tests/full-stack/Cargo.toml | 2 +-
 tests/processor/Cargo.toml | 2 +-
 22 files changed, 204 insertions(+), 762 deletions(-)
 delete mode 100644 processor/Cargo.toml
 create mode 100644 processor/bitcoin/Cargo.toml
 rename processor/{ => bitcoin}/LICENSE (100%)
 create mode 100644 processor/bitcoin/README.md
 rename processor/{src/networks/bitcoin.rs => bitcoin/src/lib.rs} (99%)
 create mode 100644 processor/ethereum/Cargo.toml
 create mode 100644 processor/ethereum/LICENSE
 create mode 100644 processor/ethereum/README.md
 rename processor/{src/networks/ethereum.rs => ethereum/src/lib.rs} (99%)
 create mode 100644 processor/monero/Cargo.toml
 create mode 100644 processor/monero/LICENSE
 create mode 100644 processor/monero/README.md
 rename processor/{src/networks/monero.rs => monero/src/lib.rs} (99%)
 delete mode 100644 processor/src/networks/mod.rs
diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
index 1c37eb55..a572dcf9 100644
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -48,7 +48,9 @@ jobs:
 -p serai-processor-utxo-scheduler-primitives \
 -p serai-processor-utxo-scheduler \
 -p serai-processor-transaction-chaining-scheduler \
- -p serai-processor \
+ -p serai-bitcoin-processor \
+ -p serai-ethereum-processor \
+ -p serai-monero-processor \
 -p tendermint-machine \
 -p tributary-chain \
 -p serai-coordinator \
diff --git a/Cargo.toml b/Cargo.toml
index eb98c263..3ec76f59 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -70,6 +70,7 @@ members = [
 "message-queue",
 "processor/messages",
+ "processor/key-gen",
 "processor/view-keys",
 "processor/frost-attempt-manager",
@@ -80,7 +81,9 @@ members = [
 "processor/scheduler/utxo/primitives",
 "processor/scheduler/utxo/standard",
 "processor/scheduler/utxo/transaction-chaining",
- "processor",
+ "processor/bitcoin",
+ "processor/ethereum",
+ "processor/monero",
 "coordinator/tributary/tendermint",
 "coordinator/tributary",
diff --git a/deny.toml b/deny.toml
index 16d3cbea..8fbb8fc9 100644
--- a/deny.toml
+++ b/deny.toml
@@ -46,6 +46,7 @@ exceptions = [
 { allow = ["AGPL-3.0"], name = "serai-message-queue" },
 { allow = ["AGPL-3.0"], name = "serai-processor-messages" },
+ { allow = ["AGPL-3.0"], name = "serai-processor-key-gen" },
 { allow = ["AGPL-3.0"], name = "serai-processor-frost-attempt-manager" },
@@ -54,7 +55,10 @@ exceptions = [
 { allow = ["AGPL-3.0"], name = "serai-processor-utxo-scheduler-primitives" },
 { allow = ["AGPL-3.0"], name = "serai-processor-standard-scheduler" },
 { allow = ["AGPL-3.0"],
name = "serai-processor-transaction-chaining-scheduler" }, - { allow = ["AGPL-3.0"], name = "serai-processor" }, + + { allow = ["AGPL-3.0"], name = "serai-bitcoin-processor" }, + { allow = ["AGPL-3.0"], name = "serai-ethereum-processor" }, + { allow = ["AGPL-3.0"], name = "serai-monero-processor" }, { allow = ["AGPL-3.0"], name = "tributary-chain" }, { allow = ["AGPL-3.0"], name = "serai-coordinator" }, diff --git a/processor/Cargo.toml b/processor/Cargo.toml deleted file mode 100644 index 2d386f2d..00000000 --- a/processor/Cargo.toml +++ /dev/null @@ -1,96 +0,0 @@ -[package] -name = "serai-processor" -version = "0.1.0" -description = "Multichain processor premised on canonicity to reach distributed consensus automatically" -license = "AGPL-3.0-only" -repository = "https://github.com/serai-dex/serai/tree/develop/processor" -authors = ["Luke Parker "] -keywords = [] -edition = "2021" -publish = false - -[package.metadata.docs.rs] -all-features = true -rustdoc-args = ["--cfg", "docsrs"] - -[lints] -workspace = true - -[dependencies] -# Macros -async-trait = { version = "0.1", default-features = false } -zeroize = { version = "1", default-features = false, features = ["std"] } -thiserror = { version = "1", default-features = false } - -# Libs -rand_core = { version = "0.6", default-features = false, features = ["std", "getrandom"] } -rand_chacha = { version = "0.3", default-features = false, features = ["std"] } - -# Encoders -const-hex = { version = "1", default-features = false } -hex = { version = "0.4", default-features = false, features = ["std"] } -scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } -borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } -serde_json = { version = "1", default-features = false, features = ["std"] } - -# Cryptography -ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] } - -transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std"] } -ec-divisors = { package = "ec-divisors", path = "../crypto/evrf/divisors", default-features = false } -dkg = { package = "dkg", path = "../crypto/dkg", default-features = false, features = ["std", "evrf-ristretto"] } -frost = { package = "modular-frost", path = "../crypto/frost", default-features = false, features = ["ristretto"] } -frost-schnorrkel = { path = "../crypto/schnorrkel", default-features = false } - -# Bitcoin/Ethereum -k256 = { version = "^0.13.1", default-features = false, features = ["std"], optional = true } - -# Bitcoin -secp256k1 = { version = "0.29", default-features = false, features = ["std", "global-context", "rand-std"], optional = true } -bitcoin-serai = { path = "../networks/bitcoin", default-features = false, features = ["std"], optional = true } - -# Ethereum -ethereum-serai = { path = "../networks/ethereum", default-features = false, optional = true } - -# Monero -dalek-ff-group = { path = "../crypto/dalek-ff-group", default-features = false, features = ["std"], optional = true } -monero-simple-request-rpc = { path = "../networks/monero/rpc/simple-request", default-features = false, optional = true } -monero-wallet = { path = "../networks/monero/wallet", default-features = false, features = ["std", "multisig", "compile-time-generators"], optional = true } - -# Application -log = { version = "0.4", default-features = false, features = ["std"] } -env_logger = { version = "0.10", 
default-features = false, features = ["humantime"], optional = true }
-tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] }
-
-zalloc = { path = "../common/zalloc" }
-serai-db = { path = "../common/db" }
-serai-env = { path = "../common/env", optional = true }
-# TODO: Replace with direct usage of primitives
-serai-client = { path = "../substrate/client", default-features = false, features = ["serai"] }
-
-messages = { package = "serai-processor-messages", path = "./messages" }
-
-message-queue = { package = "serai-message-queue", path = "../message-queue", optional = true }
-
-[dev-dependencies]
-frost = { package = "modular-frost", path = "../crypto/frost", features = ["tests"] }
-
-sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] }
-
-ethereum-serai = { path = "../networks/ethereum", default-features = false, features = ["tests"] }
-
-dockertest = "0.5"
-serai-docker-tests = { path = "../tests/docker" }
-
-[features]
-secp256k1 = ["k256", "dkg/evrf-secp256k1", "frost/secp256k1"]
-bitcoin = ["dep:secp256k1", "secp256k1", "bitcoin-serai", "serai-client/bitcoin"]
-
-ethereum = ["secp256k1", "ethereum-serai/tests"]
-
-ed25519 = ["dalek-ff-group", "dkg/evrf-ed25519", "frost/ed25519"]
-monero = ["ed25519", "monero-simple-request-rpc", "monero-wallet", "serai-client/monero"]
-
-binaries = ["env_logger", "serai-env", "message-queue"]
-parity-db = ["serai-db/parity-db"]
-rocksdb = ["serai-db/rocksdb"]
diff --git a/processor/README.md b/processor/README.md
index 37d11e0d..e942f557 100644
--- a/processor/README.md
+++ b/processor/README.md
@@ -1,5 +1,5 @@
 # Processor
-The Serai processor scans a specified external network, communicating with the
-coordinator. For details on its exact messaging flow, and overall policies,
-please view `docs/processor`.
+The Serai processors, built from the libraries here, scan an external network
+and report the indexed data to the coordinator. For details on their exact
+messaging flow, and overall policies, please view `docs/processor`.
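
As a reading aid for the `processor/src/networks/mod.rs` deletion later in this
patch: its `prepare_send` amortizes the transaction fee across a plan's
payments. A minimal standalone sketch of that approach (the `amortize_fee` name
and signature are illustrative, not the author's API; the real code also tracks
operating costs and branch outputs):

  /// Subtract `fee` from `payments`, spreading it evenly and charging any
  /// remainder (the overage) to the first payment only. Assumes `payments`
  /// is non-empty and its total covers `fee`, as the caller verifies first.
  fn amortize_fee(payments: &mut [u64], fee: u64, dust: u64) {
    let mut remaining = fee;
    // Loop in case some payments lacked the value to cover their share
    while remaining != 0 {
      let len = u64::try_from(payments.len()).unwrap();
      let per_payment = remaining / len;
      let mut overage = remaining % len;
      for payment in payments.iter_mut() {
        let share = per_payment + overage;
        // Only charge the overage once
        overage = 0;
        let subtractable = (*payment).min(share);
        remaining -= subtractable;
        *payment -= subtractable;
      }
    }
    // Zero any payment now below the dust threshold so the caller drops it
    for payment in payments.iter_mut() {
      if *payment < dust {
        *payment = 0;
      }
    }
  }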
diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml new file mode 100644 index 00000000..a5749542 --- /dev/null +++ b/processor/bitcoin/Cargo.toml @@ -0,0 +1,46 @@ +[package] +name = "serai-bitcoin-processor" +version = "0.1.0" +description = "Serai Bitcoin Processor" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/bitcoin" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +async-trait = { version = "0.1", default-features = false } + +const-hex = { version = "1", default-features = false } +hex = { version = "0.4", default-features = false, features = ["std"] } +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } +serde_json = { version = "1", default-features = false, features = ["std"] } + +k256 = { version = "^0.13.1", default-features = false, features = ["std"] } +secp256k1 = { version = "0.29", default-features = false, features = ["std", "global-context", "rand-std"] } +bitcoin-serai = { path = "../../networks/bitcoin", default-features = false, features = ["std"] } + +log = { version = "0.4", default-features = false, features = ["std"] } +env_logger = { version = "0.10", default-features = false, features = ["humantime"] } +tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } + +zalloc = { path = "../../common/zalloc" } +serai-db = { path = "../../common/db" } +serai-env = { path = "../../common/env" } + +messages = { package = "serai-processor-messages", path = "../messages" } + +message-queue = { package = "serai-message-queue", path = "../../message-queue" } + +[features] +parity-db = ["serai-db/parity-db"] +rocksdb = ["serai-db/rocksdb"] diff --git a/processor/LICENSE b/processor/bitcoin/LICENSE similarity index 100% rename from processor/LICENSE rename to processor/bitcoin/LICENSE diff --git a/processor/bitcoin/README.md b/processor/bitcoin/README.md new file mode 100644 index 00000000..79d1cedd --- /dev/null +++ b/processor/bitcoin/README.md @@ -0,0 +1 @@ +# Serai Bitcoin Processor diff --git a/processor/src/networks/bitcoin.rs b/processor/bitcoin/src/lib.rs similarity index 99% rename from processor/src/networks/bitcoin.rs rename to processor/bitcoin/src/lib.rs index 43cad1c7..bccdc286 100644 --- a/processor/src/networks/bitcoin.rs +++ b/processor/bitcoin/src/lib.rs @@ -1,3 +1,7 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + use std::{sync::OnceLock, time::Duration, io, collections::HashMap}; use async_trait::async_trait; diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml new file mode 100644 index 00000000..eff47af9 --- /dev/null +++ b/processor/ethereum/Cargo.toml @@ -0,0 +1,45 @@ +[package] +name = "serai-ethereum-processor" +version = "0.1.0" +description = "Serai Ethereum Processor" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +async-trait = { version = "0.1", 
default-features = false } + +const-hex = { version = "1", default-features = false } +hex = { version = "0.4", default-features = false, features = ["std"] } +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } +serde_json = { version = "1", default-features = false, features = ["std"] } + +k256 = { version = "^0.13.1", default-features = false, features = ["std"] } +ethereum-serai = { path = "../../networks/ethereum", default-features = false, optional = true } + +log = { version = "0.4", default-features = false, features = ["std"] } +env_logger = { version = "0.10", default-features = false, features = ["humantime"] } +tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } + +zalloc = { path = "../../common/zalloc" } +serai-db = { path = "../../common/db" } +serai-env = { path = "../../common/env" } + +messages = { package = "serai-processor-messages", path = "../messages" } + +message-queue = { package = "serai-message-queue", path = "../../message-queue" } + +[features] +parity-db = ["serai-db/parity-db"] +rocksdb = ["serai-db/rocksdb"] diff --git a/processor/ethereum/LICENSE b/processor/ethereum/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/processor/ethereum/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . 
diff --git a/processor/ethereum/README.md b/processor/ethereum/README.md new file mode 100644 index 00000000..5301c64b --- /dev/null +++ b/processor/ethereum/README.md @@ -0,0 +1 @@ +# Serai Ethereum Processor diff --git a/processor/src/networks/ethereum.rs b/processor/ethereum/src/lib.rs similarity index 99% rename from processor/src/networks/ethereum.rs rename to processor/ethereum/src/lib.rs index 3545f34a..99d04203 100644 --- a/processor/src/networks/ethereum.rs +++ b/processor/ethereum/src/lib.rs @@ -1,3 +1,7 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + use core::{fmt, time::Duration}; use std::{ sync::Arc, diff --git a/processor/monero/Cargo.toml b/processor/monero/Cargo.toml new file mode 100644 index 00000000..e71472e4 --- /dev/null +++ b/processor/monero/Cargo.toml @@ -0,0 +1,46 @@ +[package] +name = "serai-monero-processor" +version = "0.1.0" +description = "Serai Monero Processor" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/monero" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +async-trait = { version = "0.1", default-features = false } + +const-hex = { version = "1", default-features = false } +hex = { version = "0.4", default-features = false, features = ["std"] } +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } +serde_json = { version = "1", default-features = false, features = ["std"] } + +dalek-ff-group = { path = "../../crypto/dalek-ff-group", default-features = false, features = ["std"], optional = true } +monero-simple-request-rpc = { path = "../../networks/monero/rpc/simple-request", default-features = false, optional = true } +monero-wallet = { path = "../../networks/monero/wallet", default-features = false, features = ["std", "multisig", "compile-time-generators"], optional = true } + +log = { version = "0.4", default-features = false, features = ["std"] } +env_logger = { version = "0.10", default-features = false, features = ["humantime"] } +tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } + +zalloc = { path = "../../common/zalloc" } +serai-db = { path = "../../common/db" } +serai-env = { path = "../../common/env" } + +messages = { package = "serai-processor-messages", path = "../messages" } + +message-queue = { package = "serai-message-queue", path = "../../message-queue" } + +[features] +parity-db = ["serai-db/parity-db"] +rocksdb = ["serai-db/rocksdb"] diff --git a/processor/monero/LICENSE b/processor/monero/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/processor/monero/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. 
+ +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/processor/monero/README.md b/processor/monero/README.md new file mode 100644 index 00000000..564c83a0 --- /dev/null +++ b/processor/monero/README.md @@ -0,0 +1 @@ +# Serai Monero Processor diff --git a/processor/src/networks/monero.rs b/processor/monero/src/lib.rs similarity index 99% rename from processor/src/networks/monero.rs rename to processor/monero/src/lib.rs index 6ffa29df..8786bef3 100644 --- a/processor/src/networks/monero.rs +++ b/processor/monero/src/lib.rs @@ -1,3 +1,7 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + use std::{time::Duration, collections::HashMap, io}; use async_trait::async_trait; diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index ecefb9a8..17feefbe 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -1,3 +1,7 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + use core::{marker::PhantomData, fmt::Debug}; use std::{io, collections::HashMap}; diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs index 274eb2a4..2f51e9e0 100644 --- a/processor/scheduler/utxo/primitives/src/lib.rs +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -97,6 +97,7 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// more information. /// /// Returns `None` if the fee exceeded the inputs, or `Some` otherwise. + // TODO: Enum for Change of None, Some, Mandatory fn plan_transaction_with_fee_amortization( operating_costs: &mut u64, fee_rate: Self::FeeRate, diff --git a/processor/src/networks/mod.rs b/processor/src/networks/mod.rs deleted file mode 100644 index 81838ae1..00000000 --- a/processor/src/networks/mod.rs +++ /dev/null @@ -1,658 +0,0 @@ -use core::{fmt::Debug, time::Duration}; -use std::{io, collections::HashMap}; - -use async_trait::async_trait; -use thiserror::Error; - -use frost::{ - dkg::evrf::EvrfCurve, - curve::{Ciphersuite, Curve}, - ThresholdKeys, - sign::PreprocessMachine, -}; - -use serai_client::primitives::{NetworkId, Balance}; - -use log::error; - -use tokio::time::sleep; - -#[cfg(feature = "bitcoin")] -pub mod bitcoin; -#[cfg(feature = "bitcoin")] -pub use self::bitcoin::Bitcoin; - -#[cfg(feature = "ethereum")] -pub mod ethereum; -#[cfg(feature = "ethereum")] -pub use ethereum::Ethereum; - -#[cfg(feature = "monero")] -pub mod monero; -#[cfg(feature = "monero")] -pub use monero::Monero; - -use crate::{Payment, Plan, multisigs::scheduler::Scheduler}; - -#[derive(Clone, Copy, Error, Debug)] -pub enum NetworkError { - #[error("failed to connect to network daemon")] - ConnectionError, -} - -pub trait Id: - Send + Sync + Clone + Default + PartialEq + AsRef<[u8]> + AsMut<[u8]> + Debug -{ -} -impl + AsMut<[u8]> + Debug> Id for I {} - -#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] -pub enum OutputType { - // Needs to be processed/sent up to Substrate - External, - - // Given a known output set, and a known series of outbound transactions, we should be able to - // form a completely deterministic schedule S. The issue is when S has TXs which spend prior TXs - // in S (which is needed for our logarithmic scheduling). In order to have the descendant TX, say - // S[1], build off S[0], we need to observe when S[0] is included on-chain. - // - // We cannot. 
- // - // Monero (and other privacy coins) do not expose their UTXO graphs. Even if we know how to - // create S[0], and the actual payment info behind it, we cannot observe it on the blockchain - // unless we participated in creating it. Locking the entire schedule, when we cannot sign for - // the entire schedule at once, to a single signing set isn't feasible. - // - // While any member of the active signing set can provide data enabling other signers to - // participate, it's several KB of data which we then have to code communication for. - // The other option is to simply not observe S[0]. Instead, observe a TX with an identical output - // to the one in S[0] we intended to use for S[1]. It's either from S[0], or Eve, a malicious - // actor, has sent us a forged TX which is... equally as usable? so who cares? - // - // The only issue is if we have multiple outputs on-chain with identical amounts and purposes. - // Accordingly, when the scheduler makes a plan for when a specific output is available, it - // shouldn't write that plan. It should *push* that plan to a queue of plans to perform when - // instances of that output occur. - Branch, - - // Should be added to the available UTXO pool with no further action - Change, - - // Forwarded output from the prior multisig - Forwarded, -} - -impl OutputType { - fn write(&self, writer: &mut W) -> io::Result<()> { - writer.write_all(&[match self { - OutputType::External => 0, - OutputType::Branch => 1, - OutputType::Change => 2, - OutputType::Forwarded => 3, - }]) - } - - fn read(reader: &mut R) -> io::Result { - let mut byte = [0; 1]; - reader.read_exact(&mut byte)?; - Ok(match byte[0] { - 0 => OutputType::External, - 1 => OutputType::Branch, - 2 => OutputType::Change, - 3 => OutputType::Forwarded, - _ => Err(io::Error::other("invalid OutputType"))?, - }) - } -} - -pub trait Output: Send + Sync + Sized + Clone + PartialEq + Eq + Debug { - type Id: 'static + Id; - - fn kind(&self) -> OutputType; - - fn id(&self) -> Self::Id; - fn tx_id(&self) -> >::Id; // TODO: Review use of - fn key(&self) -> ::G; - - fn presumed_origin(&self) -> Option; - - fn balance(&self) -> Balance; - fn data(&self) -> &[u8]; - - fn write(&self, writer: &mut W) -> io::Result<()>; - fn read(reader: &mut R) -> io::Result; -} - -#[async_trait] -pub trait Transaction: Send + Sync + Sized + Clone + PartialEq + Debug { - type Id: 'static + Id; - fn id(&self) -> Self::Id; - // TODO: Move to Balance - #[cfg(test)] - async fn fee(&self, network: &N) -> u64; -} - -pub trait SignableTransaction: Send + Sync + Clone + Debug { - // TODO: Move to Balance - fn fee(&self) -> u64; -} - -pub trait Eventuality: Send + Sync + Clone + PartialEq + Debug { - type Claim: Send + Sync + Clone + PartialEq + Default + AsRef<[u8]> + AsMut<[u8]> + Debug; - type Completion: Send + Sync + Clone + PartialEq + Debug; - - fn lookup(&self) -> Vec; - - fn read(reader: &mut R) -> io::Result; - fn serialize(&self) -> Vec; - - fn claim(completion: &Self::Completion) -> Self::Claim; - - // TODO: Make a dedicated Completion trait - fn serialize_completion(completion: &Self::Completion) -> Vec; - fn read_completion(reader: &mut R) -> io::Result; -} - -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct EventualitiesTracker { - // Lookup property (input, nonce, TX extra...) 
-> (plan ID, eventuality) - map: HashMap, ([u8; 32], E)>, - // Block number we've scanned these eventualities too - block_number: usize, -} - -impl EventualitiesTracker { - pub fn new() -> Self { - EventualitiesTracker { map: HashMap::new(), block_number: usize::MAX } - } - - pub fn register(&mut self, block_number: usize, id: [u8; 32], eventuality: E) { - log::info!("registering eventuality for {}", hex::encode(id)); - - let lookup = eventuality.lookup(); - if self.map.contains_key(&lookup) { - panic!("registering an eventuality multiple times or lookup collision"); - } - self.map.insert(lookup, (id, eventuality)); - // If our self tracker already went past this block number, set it back - self.block_number = self.block_number.min(block_number); - } - - pub fn drop(&mut self, id: [u8; 32]) { - // O(n) due to the lack of a reverse lookup - let mut found_key = None; - for (key, value) in &self.map { - if value.0 == id { - found_key = Some(key.clone()); - break; - } - } - - if let Some(key) = found_key { - self.map.remove(&key); - } - } -} - -impl Default for EventualitiesTracker { - fn default() -> Self { - Self::new() - } -} - -#[async_trait] -pub trait Block: Send + Sync + Sized + Clone + Debug { - // This is currently bounded to being 32 bytes. - type Id: 'static + Id; - fn id(&self) -> Self::Id; - fn parent(&self) -> Self::Id; - /// The monotonic network time at this block. - /// - /// This call is presumed to be expensive and should only be called sparingly. - async fn time(&self, rpc: &N) -> u64; -} - -// The post-fee value of an expected branch. -pub struct PostFeeBranch { - pub expected: u64, - pub actual: Option, -} - -// Return the PostFeeBranches needed when dropping a transaction -fn drop_branches( - key: ::G, - payments: &[Payment], -) -> Vec { - let mut branch_outputs = vec![]; - for payment in payments { - if Some(&payment.address) == N::branch_address(key).as_ref() { - branch_outputs.push(PostFeeBranch { expected: payment.balance.amount.0, actual: None }); - } - } - branch_outputs -} - -pub struct PreparedSend { - /// None for the transaction if the SignableTransaction was dropped due to lack of value. - pub tx: Option<(N::SignableTransaction, N::Eventuality)>, - pub post_fee_branches: Vec, - /// The updated operating costs after preparing this transaction. - pub operating_costs: u64, -} - -#[async_trait] -#[rustfmt::skip] -pub trait Network: 'static + Send + Sync + Clone + PartialEq + Debug { - /// The elliptic curve used for this network. - type Curve: Curve - + EvrfCurve::F>>>; - - /// The type representing the transaction for this network. - type Transaction: Transaction; // TODO: Review use of - /// The type representing the block for this network. - type Block: Block; - - /// The type containing all information on a scanned output. - // This is almost certainly distinct from the network's native output type. - type Output: Output; - /// The type containing all information on a planned transaction, waiting to be signed. - type SignableTransaction: SignableTransaction; - /// The type containing all information to check if a plan was completed. - /// - /// This must be binding to both the outputs expected and the plan ID. - type Eventuality: Eventuality; - /// The FROST machine to sign a transaction. - type TransactionMachine: PreprocessMachine< - Signature = ::Completion, - >; - - /// The scheduler for this network. - type Scheduler: Scheduler; - - /// The type representing an address. 
- // This should NOT be a String, yet a tailored type representing an efficient binary encoding, - // as detailed in the integration documentation. - type Address: Send - + Sync - + Clone - + PartialEq - + Eq - + Debug - + ToString - + TryInto> - + TryFrom>; - - /// Network ID for this network. - const NETWORK: NetworkId; - /// String ID for this network. - const ID: &'static str; - /// The estimated amount of time a block will take. - const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize; - /// The amount of confirmations required to consider a block 'final'. - const CONFIRMATIONS: usize; - /// The maximum amount of outputs which will fit in a TX. - /// This should be equal to MAX_INPUTS unless one is specifically limited. - /// A TX with MAX_INPUTS and MAX_OUTPUTS must not exceed the max size. - const MAX_OUTPUTS: usize; - - /// Minimum output value which will be handled. - /// - /// For any received output, there's the cost to spend the output. This value MUST exceed the - /// cost to spend said output, and should by a notable margin (not just 2x, yet an order of - /// magnitude). - // TODO: Dust needs to be diversified per Coin - const DUST: u64; - - /// The cost to perform input aggregation with a 2-input 1-output TX. - const COST_TO_AGGREGATE: u64; - - /// Tweak keys for this network. - fn tweak_keys(key: &mut ThresholdKeys); - - /// Address for the given group key to receive external coins to. - #[cfg(test)] - async fn external_address(&self, key: ::G) -> Self::Address; - /// Address for the given group key to use for scheduled branches. - fn branch_address(key: ::G) -> Option; - /// Address for the given group key to use for change. - fn change_address(key: ::G) -> Option; - /// Address for forwarded outputs from prior multisigs. - /// - /// forward_address must only return None if explicit forwarding isn't necessary. - fn forward_address(key: ::G) -> Option; - - /// Get the latest block's number. - async fn get_latest_block_number(&self) -> Result; - /// Get a block by its number. - async fn get_block(&self, number: usize) -> Result; - - /// Get the latest block's number, retrying until success. - async fn get_latest_block_number_with_retries(&self) -> usize { - loop { - match self.get_latest_block_number().await { - Ok(number) => { - return number; - } - Err(e) => { - error!( - "couldn't get the latest block number in the with retry get_latest_block_number: {e:?}", - ); - sleep(Duration::from_secs(10)).await; - } - } - } - } - - /// Get a block, retrying until success. - async fn get_block_with_retries(&self, block_number: usize) -> Self::Block { - loop { - match self.get_block(block_number).await { - Ok(block) => { - return block; - } - Err(e) => { - error!("couldn't get block {block_number} in the with retry get_block: {:?}", e); - sleep(Duration::from_secs(10)).await; - } - } - } - } - - /// Get the outputs within a block for a specific key. - async fn get_outputs( - &self, - block: &Self::Block, - key: ::G, - ) -> Vec; - - /// Get the registered eventualities completed within this block, and any prior blocks which - /// registered eventualities may have been completed in. - /// - /// This may panic if not fed a block greater than the tracker's block number. - /// - /// Plan ID -> (block number, TX ID, completion) - // TODO: get_eventuality_completions_internal + provided get_eventuality_completions for common - // code - // TODO: Consider having this return the Transaction + the Completion? - // Or Transaction with extract_completion? 
- async fn get_eventuality_completions( - &self, - eventualities: &mut EventualitiesTracker, - block: &Self::Block, - ) -> HashMap< - [u8; 32], - ( - usize, - >::Id, - ::Completion, - ), - >; - - /// Returns the needed fee to fulfill this Plan at this fee rate. - /// - /// Returns None if this Plan isn't fulfillable (such as when the fee exceeds the input value). - async fn needed_fee( - &self, - block_number: usize, - inputs: &[Self::Output], - payments: &[Payment], - change: &Option, - ) -> Result, NetworkError>; - - /// Create a SignableTransaction for the given Plan. - /// - /// The expected flow is: - /// 1) Call needed_fee - /// 2) If the Plan is fulfillable, amortize the fee - /// 3) Call signable_transaction *which MUST NOT return None if the above was done properly* - /// - /// This takes a destructured Plan as some of these arguments are malleated from the original - /// Plan. - // TODO: Explicit AmortizedPlan? - #[allow(clippy::too_many_arguments)] - async fn signable_transaction( - &self, - block_number: usize, - plan_id: &[u8; 32], - key: ::G, - inputs: &[Self::Output], - payments: &[Payment], - change: &Option, - scheduler_addendum: &>::Addendum, - ) -> Result, NetworkError>; - - /// Prepare a SignableTransaction for a transaction. - /// - /// This must not persist anything as we will prepare Plans we never intend to execute. - async fn prepare_send( - &self, - block_number: usize, - plan: Plan, - operating_costs: u64, - ) -> Result, NetworkError> { - // Sanity check this has at least one output planned - assert!((!plan.payments.is_empty()) || plan.change.is_some()); - - let plan_id = plan.id(); - let Plan { key, inputs, mut payments, change, scheduler_addendum } = plan; - let theoretical_change_amount = if change.is_some() { - inputs.iter().map(|input| input.balance().amount.0).sum::() - - payments.iter().map(|payment| payment.balance.amount.0).sum::() - } else { - 0 - }; - - let Some(tx_fee) = self.needed_fee(block_number, &inputs, &payments, &change).await? else { - // This Plan is not fulfillable - // TODO: Have Plan explicitly distinguish payments and branches in two separate Vecs? 
- return Ok(PreparedSend { - tx: None, - // Have all of its branches dropped - post_fee_branches: drop_branches(key, &payments), - // This plan expects a change output valued at sum(inputs) - sum(outputs) - // Since we can no longer create this change output, it becomes an operating cost - // TODO: Look at input restoration to reduce this operating cost - operating_costs: operating_costs + - if change.is_some() { theoretical_change_amount } else { 0 }, - }); - }; - - // Amortize the fee over the plan's payments - let (post_fee_branches, mut operating_costs) = (|| { - // If we're creating a change output, letting us recoup coins, amortize the operating costs - // as well - let total_fee = tx_fee + if change.is_some() { operating_costs } else { 0 }; - - let original_outputs = payments.iter().map(|payment| payment.balance.amount.0).sum::(); - // If this isn't enough for the total fee, drop and move on - if original_outputs < total_fee { - let mut remaining_operating_costs = operating_costs; - if change.is_some() { - // Operating costs increase by the TX fee - remaining_operating_costs += tx_fee; - // Yet decrease by the payments we managed to drop - remaining_operating_costs = remaining_operating_costs.saturating_sub(original_outputs); - } - return (drop_branches(key, &payments), remaining_operating_costs); - } - - let initial_payment_amounts = - payments.iter().map(|payment| payment.balance.amount.0).collect::>(); - - // Amortize the transaction fee across outputs - let mut remaining_fee = total_fee; - // Run as many times as needed until we can successfully subtract this fee - while remaining_fee != 0 { - // This shouldn't be a / by 0 as these payments have enough value to cover the fee - let this_iter_fee = remaining_fee / u64::try_from(payments.len()).unwrap(); - let mut overage = remaining_fee % u64::try_from(payments.len()).unwrap(); - for payment in &mut payments { - let this_payment_fee = this_iter_fee + overage; - // Only subtract the overage once - overage = 0; - - let subtractable = payment.balance.amount.0.min(this_payment_fee); - remaining_fee -= subtractable; - payment.balance.amount.0 -= subtractable; - } - } - - // If any payment is now below the dust threshold, set its value to 0 so it'll be dropped - for payment in &mut payments { - if payment.balance.amount.0 < Self::DUST { - payment.balance.amount.0 = 0; - } - } - - // Note the branch outputs' new values - let mut branch_outputs = vec![]; - for (initial_amount, payment) in initial_payment_amounts.into_iter().zip(&payments) { - if Some(&payment.address) == Self::branch_address(key).as_ref() { - branch_outputs.push(PostFeeBranch { - expected: initial_amount, - actual: if payment.balance.amount.0 == 0 { - None - } else { - Some(payment.balance.amount.0) - }, - }); - } - } - - // Drop payments now worth 0 - payments = payments - .drain(..) 
- .filter(|payment| { - if payment.balance.amount.0 != 0 { - true - } else { - log::debug!("dropping dust payment from plan {}", hex::encode(plan_id)); - false - } - }) - .collect(); - - // Sanity check the fee was successfully amortized - let new_outputs = payments.iter().map(|payment| payment.balance.amount.0).sum::(); - assert!((new_outputs + total_fee) <= original_outputs); - - ( - branch_outputs, - if change.is_none() { - // If the change is None, this had no effect on the operating costs - operating_costs - } else { - // Since the change is some, and we successfully amortized, the operating costs were - // recouped - 0 - }, - ) - })(); - - let Some(tx) = self - .signable_transaction( - block_number, - &plan_id, - key, - &inputs, - &payments, - &change, - &scheduler_addendum, - ) - .await? - else { - panic!( - "{}. {}: {}, {}: {:?}, {}: {:?}, {}: {:?}, {}: {}, {}: {:?}", - "signable_transaction returned None for a TX we prior successfully calculated the fee for", - "id", - hex::encode(plan_id), - "inputs", - inputs, - "post-amortization payments", - payments, - "change", - change, - "successfully amoritized fee", - tx_fee, - "scheduler's addendum", - scheduler_addendum, - ) - }; - - if change.is_some() { - let on_chain_expected_change = - inputs.iter().map(|input| input.balance().amount.0).sum::() - - payments.iter().map(|payment| payment.balance.amount.0).sum::() - - tx_fee; - // If the change value is less than the dust threshold, it becomes an operating cost - // This may be slightly inaccurate as dropping payments may reduce the fee, raising the - // change above dust - // That's fine since it'd have to be in a very precarious state AND then it's over-eager in - // tabulating costs - if on_chain_expected_change < Self::DUST { - operating_costs += theoretical_change_amount; - } - } - - Ok(PreparedSend { tx: Some(tx), post_fee_branches, operating_costs }) - } - - /// Attempt to sign a SignableTransaction. - async fn attempt_sign( - &self, - keys: ThresholdKeys, - transaction: Self::SignableTransaction, - ) -> Result; - - /// Publish a completion. - async fn publish_completion( - &self, - completion: &::Completion, - ) -> Result<(), NetworkError>; - - /// Confirm a plan was completed by the specified transaction, per our bounds. - /// - /// Returns Err if there was an error with the confirmation methodology. - /// Returns Ok(None) if this is not a valid completion. - /// Returns Ok(Some(_)) with the completion if it's valid. - async fn confirm_completion( - &self, - eventuality: &Self::Eventuality, - claim: &::Claim, - ) -> Result::Completion>, NetworkError>; - - /// Get a block's number by its ID. - #[cfg(test)] - async fn get_block_number(&self, id: &>::Id) -> usize; - - /// Check an Eventuality is fulfilled by a claim. - #[cfg(test)] - async fn check_eventuality_by_claim( - &self, - eventuality: &Self::Eventuality, - claim: &::Claim, - ) -> bool; - - /// Get a transaction by the Eventuality it completes. - #[cfg(test)] - async fn get_transaction_by_eventuality( - &self, - block: usize, - eventuality: &Self::Eventuality, - ) -> Self::Transaction; - - #[cfg(test)] - async fn mine_block(&self); - - /// Sends to the specified address. - /// Additionally mines enough blocks so that the TX is past the confirmation depth. - #[cfg(test)] - async fn test_send(&self, key: Self::Address) -> Self::Block; -} - -pub trait UtxoNetwork: Network { - /// The maximum amount of inputs which will fit in a TX. - /// This should be equal to MAX_OUTPUTS unless one is specifically limited. 
- /// A TX with MAX_INPUTS and MAX_OUTPUTS must not exceed the max size. - const MAX_INPUTS: usize; -} diff --git a/tests/full-stack/Cargo.toml b/tests/full-stack/Cargo.toml index 12af01bd..a9dbdc63 100644 --- a/tests/full-stack/Cargo.toml +++ b/tests/full-stack/Cargo.toml @@ -34,7 +34,7 @@ scale = { package = "parity-scale-codec", version = "3" } serde = "1" serde_json = "1" -processor = { package = "serai-processor", path = "../../processor", features = ["bitcoin", "monero"] } +# processor = { package = "serai-processor", path = "../../processor", features = ["bitcoin", "monero"] } serai-client = { path = "../../substrate/client", features = ["serai"] } diff --git a/tests/processor/Cargo.toml b/tests/processor/Cargo.toml index f06e4741..13299b93 100644 --- a/tests/processor/Cargo.toml +++ b/tests/processor/Cargo.toml @@ -46,7 +46,7 @@ serde_json = { version = "1", default-features = false } tokio = { version = "1", features = ["time"] } -processor = { package = "serai-processor", path = "../../processor", features = ["bitcoin", "ethereum", "monero"] } +# processor = { package = "serai-processor", path = "../../processor", features = ["bitcoin", "ethereum", "monero"] } dockertest = "0.5" serai-docker-tests = { path = "../docker" } From 83806538552b439fac01a68c003aa12fb54ca8b1 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 4 Sep 2024 22:50:02 -0400 Subject: [PATCH 076/368] Add empty serai-processor-signers library This will replace the signers still in the monolithic Processor binary. --- .github/workflows/tests.yml | 1 + Cargo.toml | 2 ++ deny.toml | 1 + processor/frost-attempt-manager/Cargo.toml | 2 +- processor/frost-attempt-manager/README.md | 2 +- processor/key-gen/README.md | 2 +- processor/scanner/Cargo.toml | 4 ++-- processor/signers/Cargo.toml | 22 ++++++++++++++++++++++ processor/signers/LICENSE | 15 +++++++++++++++ processor/signers/README.md | 6 ++++++ processor/signers/src/cosigner.rs | 0 processor/signers/src/lib.rs | 0 processor/signers/src/substrate.rs | 0 processor/signers/src/transaction.rs | 0 14 files changed, 52 insertions(+), 5 deletions(-) create mode 100644 processor/signers/Cargo.toml create mode 100644 processor/signers/LICENSE create mode 100644 processor/signers/README.md create mode 100644 processor/signers/src/cosigner.rs create mode 100644 processor/signers/src/lib.rs create mode 100644 processor/signers/src/substrate.rs create mode 100644 processor/signers/src/transaction.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index a572dcf9..edd219f9 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -48,6 +48,7 @@ jobs: -p serai-processor-utxo-scheduler-primitives \ -p serai-processor-utxo-scheduler \ -p serai-processor-transaction-chaining-scheduler \ + -p serai-processor-signers \ -p serai-bitcoin-processor \ -p serai-ethereum-processor \ -p serai-monero-processor \ diff --git a/Cargo.toml b/Cargo.toml index 3ec76f59..25e6c25d 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -81,6 +81,8 @@ members = [ "processor/scheduler/utxo/primitives", "processor/scheduler/utxo/standard", "processor/scheduler/utxo/transaction-chaining", + "processor/signers", + "processor/bitcoin", "processor/ethereum", "processor/monero", diff --git a/deny.toml b/deny.toml index 8fbb8fc9..ef195411 100644 --- a/deny.toml +++ b/deny.toml @@ -55,6 +55,7 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-processor-utxo-scheduler-primitives" }, { allow = ["AGPL-3.0"], name = "serai-processor-standard-scheduler" }, { allow = 
["AGPL-3.0"], name = "serai-processor-transaction-chaining-scheduler" }, + { allow = ["AGPL-3.0"], name = "serai-processor-signers" }, { allow = ["AGPL-3.0"], name = "serai-bitcoin-processor" }, { allow = ["AGPL-3.0"], name = "serai-ethereum-processor" }, diff --git a/processor/frost-attempt-manager/Cargo.toml b/processor/frost-attempt-manager/Cargo.toml index a01acf0f..67bd8bb6 100644 --- a/processor/frost-attempt-manager/Cargo.toml +++ b/processor/frost-attempt-manager/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/processor/frost-at authors = ["Luke Parker "] keywords = ["frost", "multisig", "threshold"] edition = "2021" -rust-version = "1.79" +publish = false [package.metadata.docs.rs] all-features = true diff --git a/processor/frost-attempt-manager/README.md b/processor/frost-attempt-manager/README.md index c7b0be25..08a61398 100644 --- a/processor/frost-attempt-manager/README.md +++ b/processor/frost-attempt-manager/README.md @@ -3,4 +3,4 @@ A library for helper structures to manage various attempts of a FROST signing protocol. -This library is interacted with via the `serai-processor-messages::sign` API. +This library is interacted with via the `serai_processor_messages::sign` API. diff --git a/processor/key-gen/README.md b/processor/key-gen/README.md index c28357ba..566d1035 100644 --- a/processor/key-gen/README.md +++ b/processor/key-gen/README.md @@ -5,4 +5,4 @@ protocol. Two invocations of the eVRF-based DKG are performed, one for Ristretto (to have a key to oraclize values onto the Serai blockchain with) and one for the external network's curve. -This library is interacted with via the `serai-processor-messages::key_gen` API. +This library is interacted with via the `serai_processor_messages::key_gen` API. diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index c2dc31fe..a3e6a9ba 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -5,9 +5,9 @@ description = "Scanner of abstract blockchains for Serai" license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/processor/scanner" authors = ["Luke Parker "] -keywords = ["frost", "multisig", "threshold"] +keywords = [] edition = "2021" -rust-version = "1.79" +publish = false [package.metadata.docs.rs] all-features = true diff --git a/processor/signers/Cargo.toml b/processor/signers/Cargo.toml new file mode 100644 index 00000000..70248960 --- /dev/null +++ b/processor/signers/Cargo.toml @@ -0,0 +1,22 @@ +[package] +name = "serai-processor-signers" +version = "0.1.0" +description = "Signers for the Serai processor" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/signers" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[package.metadata.cargo-machete] +ignored = ["borsh", "scale"] + +[lints] +workspace = true + +[dependencies] diff --git a/processor/signers/LICENSE b/processor/signers/LICENSE new file mode 100644 index 00000000..e091b149 --- /dev/null +++ b/processor/signers/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. 
+ +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see <https://www.gnu.org/licenses/>. diff --git a/processor/signers/README.md b/processor/signers/README.md new file mode 100644 index 00000000..b6eddd56 --- /dev/null +++ b/processor/signers/README.md @@ -0,0 +1,6 @@ +# Processor Signers + +Implementations of the three signers used by a processor (the transaction signer, +the Substrate signer, and the cosigner). + +This library is interacted with via the `serai_processor_messages::sign` API. diff --git a/processor/signers/src/cosigner.rs b/processor/signers/src/cosigner.rs new file mode 100644 index 00000000..e69de29b diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs new file mode 100644 index 00000000..e69de29b diff --git a/processor/signers/src/substrate.rs b/processor/signers/src/substrate.rs new file mode 100644 index 00000000..e69de29b diff --git a/processor/signers/src/transaction.rs b/processor/signers/src/transaction.rs new file mode 100644 index 00000000..e69de29b From b62fc3a1fa44b458762ba5f1af7a671bf491f80e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 5 Sep 2024 14:42:06 -0400 Subject: [PATCH 077/368] Minor work on the transaction signing task --- processor/frost-attempt-manager/src/lib.rs | 4 +- processor/primitives/src/eventuality.rs | 5 ++ processor/scanner/Cargo.toml | 1 + processor/scanner/src/lib.rs | 3 + processor/scheduler/primitives/src/lib.rs | 17 ++++- processor/scheduler/utxo/standard/src/lib.rs | 2 + .../utxo/transaction-chaining/src/lib.rs | 2 + processor/signers/Cargo.toml | 10 +++ processor/signers/src/cosigner.rs | 0 processor/signers/src/lib.rs | 56 +++++++++++++++ processor/signers/src/substrate.rs | 0 processor/signers/src/transaction.rs | 0 processor/signers/src/transaction/db.rs | 1 + processor/signers/src/transaction/mod.rs | 70 +++++++++++++++++++ 14 files changed, 169 insertions(+), 2 deletions(-) delete mode 100644 processor/signers/src/cosigner.rs delete mode 100644 processor/signers/src/substrate.rs delete mode 100644 processor/signers/src/transaction.rs create mode 100644 processor/signers/src/transaction/db.rs create mode 100644 processor/signers/src/transaction/mod.rs diff --git a/processor/frost-attempt-manager/src/lib.rs b/processor/frost-attempt-manager/src/lib.rs index cd8452fa..c4d1708d 100644 --- a/processor/frost-attempt-manager/src/lib.rs +++ b/processor/frost-attempt-manager/src/lib.rs @@ -32,6 +32,8 @@ pub struct AttemptManager { impl AttemptManager { /// Create a new attempt manager. + /// + /// This will not restore any signing sessions from the database. Those must be re-registered. pub fn new(db: D, session: Session, start_i: Participant) -> Self { AttemptManager { db, session, start_i, active: HashMap::new() } } @@ -52,7 +54,7 @@ impl AttemptManager { /// This frees all memory used for it and means no further messages will be handled for it. /// This does not stop the protocol from being re-registered and further worked on (with /// undefined behavior) then. The higher-level context must never call `register` again with this - /// ID. + /// ID accordingly.
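// A sketch of the lifecycle callers are expected to follow (the `machines` and
// message plumbing here are illustrative, not part of this patch):
//
//   let mut manager = AttemptManager::new(db, session, start_i);
//   let preprocesses = manager.register(id, machines);
//   // forward `preprocesses` to the coordinator, then feed its replies back in
//   match manager.handle(msg) {
//     Response::Messages(msgs) => { /* send these to the coordinator */ }
//     Response::Signature(signature) => { /* publish, and later call retire(id) */ }
//   }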
pub fn retire(&mut self, id: [u8; 32]) { if self.active.remove(&id).is_none() { log::info!("retiring protocol {}, which we didn't register/already retired", hex::encode(id)); diff --git a/processor/primitives/src/eventuality.rs b/processor/primitives/src/eventuality.rs index 6a52194d..80337824 100644 --- a/processor/primitives/src/eventuality.rs +++ b/processor/primitives/src/eventuality.rs @@ -7,6 +7,11 @@ pub trait Eventuality: Sized + Send + Sync { /// The type used to identify a received output. type OutputId: Id; + /// The ID of the transaction this Eventuality is for. + /// + /// This is an internal ID arbitrarily definable so long as it's unique. + fn id(&self) -> [u8; 32]; + /// A unique byte sequence which can be used to identify potentially resolving transactions. /// /// Both a transaction and an Eventuality are expected to be able to yield lookup sequences. diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index a3e6a9ba..2a3e7e0a 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -39,3 +39,4 @@ serai-in-instructions-primitives = { path = "../../substrate/in-instructions/pri serai-coins-primitives = { path = "../../substrate/coins/primitives", default-features = false, features = ["std", "borsh"] } primitives = { package = "serai-processor-primitives", path = "../primitives" } +scheduler-primitives = { package = "serai-processor-scheduler-primitives", path = "../scheduler/primitives" } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 17feefbe..5573e484 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -247,6 +247,9 @@ impl SchedulerUpdate { /// The object responsible for accumulating outputs and planning new transactions. pub trait Scheduler: 'static + Send { + /// The type for a signable transaction. + type SignableTransaction: scheduler_primitives::SignableTransaction; + /// Activate a key. /// /// This SHOULD setup any necessary database structures. This SHOULD NOT cause the new key to diff --git a/processor/scheduler/primitives/src/lib.rs b/processor/scheduler/primitives/src/lib.rs index 97a00c03..b3bf525c 100644 --- a/processor/scheduler/primitives/src/lib.rs +++ b/processor/scheduler/primitives/src/lib.rs @@ -10,11 +10,26 @@ use group::GroupEncoding; use serai_db::DbTxn; /// A signable transaction. -pub trait SignableTransaction: 'static + Sized + Send + Sync { +pub trait SignableTransaction: 'static + Sized + Send + Sync + Clone { + /// The ciphersuite used to sign this transaction. + type Ciphersuite: Cuphersuite; + /// The preprocess machine for the signing protocol for this transaction. + type PreprocessMachine: PreprocessMachine; + /// Read a `SignableTransaction`. fn read(reader: &mut impl io::Read) -> io::Result; /// Write a `SignableTransaction`. fn write(&self, writer: &mut impl io::Write) -> io::Result<()>; + + /// The ID for this transaction. + /// + /// This is an internal ID arbitrarily definable so long as it's unique. + /// + /// This same ID MUST be returned by the Eventuality for this transaction. + fn id(&self) -> [u8; 32]; + + /// Sign this transaction. 
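// (Callers holding multiple key shares are expected to clone the transaction
// and call `sign` once per `ThresholdKeys`, producing one preprocess machine
// per share; the transaction task later in this patch does exactly that.)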
+ fn sign(self, keys: ThresholdKeys) -> Self::PreprocessMachine; } mod db { diff --git a/processor/scheduler/utxo/standard/src/lib.rs b/processor/scheduler/utxo/standard/src/lib.rs index f69ca54b..10e40f15 100644 --- a/processor/scheduler/utxo/standard/src/lib.rs +++ b/processor/scheduler/utxo/standard/src/lib.rs @@ -309,6 +309,8 @@ impl> Scheduler { } impl> SchedulerTrait for Scheduler { + type SignableTransaction = P::SignableTransaction; + fn activate_key(txn: &mut impl DbTxn, key: KeyFor) { for coin in S::NETWORK.coins() { assert!(Db::::outputs(txn, key, *coin).is_none()); diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index 9a4ed2eb..d11e4ac2 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -368,6 +368,8 @@ impl>> Sched impl>> SchedulerTrait for Scheduler { + type SignableTransaction = P::SignableTransaction; + fn activate_key(txn: &mut impl DbTxn, key: KeyFor) { for coin in S::NETWORK.coins() { assert!(Db::::outputs(txn, key, *coin).is_none()); diff --git a/processor/signers/Cargo.toml b/processor/signers/Cargo.toml index 70248960..007c814c 100644 --- a/processor/signers/Cargo.toml +++ b/processor/signers/Cargo.toml @@ -20,3 +20,13 @@ ignored = ["borsh", "scale"] workspace = true [dependencies] +group = { version = "0.13", default-features = false } + +log = { version = "0.4", default-features = false, features = ["std"] } +tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } + +primitives = { package = "serai-processor-primitives", path = "../primitives" } +scanner = { package = "serai-processor-scanner", path = "../scanner" } +scheduler = { package = "serai-scheduler-primitives", path = "../scheduler/primitives" } + +frost-attempt-manager = { package = "serai-processor-frost-attempt-manager", path = "../frost-attempt-manager" } diff --git a/processor/signers/src/cosigner.rs b/processor/signers/src/cosigner.rs deleted file mode 100644 index e69de29b..00000000 diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index e69de29b..c221ca4c 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -0,0 +1,56 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +mod transaction; + +/* +// The signers used by a Processor, key-scoped. +struct KeySigners { + transaction: AttemptManager, + substrate: AttemptManager>, + cosigner: AttemptManager>, +} + +/// The signers used by a protocol. +pub struct Signers(HashMap, KeySigners>); + +impl Signers { + /// Create a new set of signers. + pub fn new(db: D) -> Self { + // TODO: Load the registered keys + // TODO: Load the transactions being signed + // TODO: Load the batches being signed + todo!("TODO") + } + + /// Register a transaction to sign. + pub fn sign_transaction(&mut self) -> Vec { + todo!("TODO") + } + /// Mark a transaction as signed. + pub fn signed_transaction(&mut self) { todo!("TODO") } + + /// Register a batch to sign. + pub fn sign_batch(&mut self, key: KeyFor, batch: Batch) -> Vec { + todo!("TODO") + } + /// Mark a batch as signed. + pub fn signed_batch(&mut self, batch: u32) { todo!("TODO") } + + /// Register a slash report to sign. + pub fn sign_slash_report(&mut self) -> Vec { + todo!("TODO") + } + /// Mark a slash report as signed. 
+ pub fn signed_slash_report(&mut self) { todo!("TODO") } + + /// Start a cosigning protocol. + pub fn cosign(&mut self) { todo!("TODO") } + + /// Handle a message for a signing protocol. + pub fn handle(&mut self, msg: CoordinatorMessage) -> Vec { + todo!("TODO") + } +} +*/ diff --git a/processor/signers/src/substrate.rs b/processor/signers/src/substrate.rs deleted file mode 100644 index e69de29b..00000000 diff --git a/processor/signers/src/transaction.rs b/processor/signers/src/transaction.rs deleted file mode 100644 index e69de29b..00000000 diff --git a/processor/signers/src/transaction/db.rs b/processor/signers/src/transaction/db.rs new file mode 100644 index 00000000..8b137891 --- /dev/null +++ b/processor/signers/src/transaction/db.rs @@ -0,0 +1 @@ + diff --git a/processor/signers/src/transaction/mod.rs b/processor/signers/src/transaction/mod.rs new file mode 100644 index 00000000..ba1487cb --- /dev/null +++ b/processor/signers/src/transaction/mod.rs @@ -0,0 +1,70 @@ +use serai_db::{Get, DbTxn, Db}; + +use primitives::task::ContinuallyRan; +use scanner::ScannerFeed; +use scheduler::TransactionsToSign; + +mod db; +use db::IndexDb; + +// Fetches transactions to sign and signs them. +pub(crate) struct TransactionTask { + db: D, + keys: ThresholdKeys<::Ciphersuite>, + attempt_manager: + AttemptManager::PreprocessMachine>, +} + +impl TransactionTask { + pub(crate) async fn new( + db: D, + keys: ThresholdKeys<::Ciphersuite>, + ) -> Self { + Self { db, keys, attempt_manager: AttemptManager::new() } + } +} + +#[async_trait::async_trait] +impl ContinuallyRan for TransactionTask { + async fn run_iteration(&mut self) -> Result { + let mut iterated = false; + + // Check for new transactions to sign + loop { + let mut txn = self.db.txn(); + let Some(tx) = TransactionsToSign::try_recv(&mut txn, self.key) else { break }; + iterated = true; + + let mut machines = Vec::with_capacity(self.keys.len()); + for keys in &self.keys { + machines.push(tx.clone().sign(keys.clone())); + } + let messages = self.attempt_manager.register(tx.id(), machines); + todo!("TODO"); + txn.commit(); + } + + // Check for completed Eventualities (meaning we should no longer sign for these transactions) + loop { + let mut txn = self.db.txn(); + let Some(tx) = CompletedEventualities::try_recv(&mut txn, self.key) else { break }; + iterated = true; + + self.attempt_manager.retire(tx); + txn.commit(); + } + + loop { + let mut txn = self.db.txn(); + let Some(msg) = TransactionSignMessages::try_recv(&mut txn, self.key) else { break }; + iterated = true; + + match self.attempt_manager.handle(msg) { + Response::Messages(messages) => todo!("TODO"), + Response::Signature(signature) => todo!("TODO"), + } + } + + Ok(iterated) + } +} From a353f9e2daf5a8834f60c23b503cdfbb21c89b80 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 6 Sep 2024 03:20:38 -0400 Subject: [PATCH 078/368] Further work on transaction signing --- Cargo.lock | 135 ++++++++++++------ .../frost-attempt-manager/src/individual.rs | 7 +- processor/frost-attempt-manager/src/lib.rs | 4 + processor/primitives/src/eventuality.rs | 2 +- processor/scanner/src/db.rs | 23 +++ processor/scanner/src/eventuality/mod.rs | 11 +- processor/scanner/src/lib.rs | 5 + processor/scheduler/primitives/Cargo.toml | 3 +- processor/scheduler/primitives/src/lib.rs | 15 +- processor/signers/Cargo.toml | 14 +- processor/signers/src/db.rs | 27 ++++ processor/signers/src/lib.rs | 30 ++++ processor/signers/src/transaction/mod.rs | 97 ++++++++++--- 13 files changed, 299 insertions(+), 74 deletions(-) 
create mode 100644 processor/signers/src/db.rs diff --git a/Cargo.lock b/Cargo.lock index 8662be6f..768191b4 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8120,6 +8120,29 @@ dependencies = [ "sp-runtime", ] +[[package]] +name = "serai-bitcoin-processor" +version = "0.1.0" +dependencies = [ + "async-trait", + "bitcoin-serai", + "borsh", + "const-hex", + "env_logger", + "hex", + "k256", + "log", + "parity-scale-codec", + "secp256k1", + "serai-db", + "serai-env", + "serai-message-queue", + "serai-processor-messages", + "serde_json", + "tokio", + "zalloc", +] + [[package]] name = "serai-client" version = "0.1.0" @@ -8315,6 +8338,28 @@ dependencies = [ name = "serai-env" version = "0.1.0" +[[package]] +name = "serai-ethereum-processor" +version = "0.1.0" +dependencies = [ + "async-trait", + "borsh", + "const-hex", + "env_logger", + "ethereum-serai", + "hex", + "k256", + "log", + "parity-scale-codec", + "serai-db", + "serai-env", + "serai-message-queue", + "serai-processor-messages", + "serde_json", + "tokio", + "zalloc", +] + [[package]] name = "serai-ethereum-relayer" version = "0.1.0" @@ -8343,7 +8388,6 @@ dependencies = [ "serai-coordinator-tests", "serai-docker-tests", "serai-message-queue-tests", - "serai-processor", "serai-processor-tests", "serde", "serde_json", @@ -8459,6 +8503,29 @@ dependencies = [ "zeroize", ] +[[package]] +name = "serai-monero-processor" +version = "0.1.0" +dependencies = [ + "async-trait", + "borsh", + "const-hex", + "dalek-ff-group", + "env_logger", + "hex", + "log", + "monero-simple-request-rpc", + "monero-wallet", + "parity-scale-codec", + "serai-db", + "serai-env", + "serai-message-queue", + "serai-processor-messages", + "serde_json", + "tokio", + "zalloc", +] + [[package]] name = "serai-no-std-tests" version = "0.1.0" @@ -8558,47 +8625,6 @@ dependencies = [ "zeroize", ] -[[package]] -name = "serai-processor" -version = "0.1.0" -dependencies = [ - "async-trait", - "bitcoin-serai", - "borsh", - "ciphersuite", - "const-hex", - "dalek-ff-group", - "dkg", - "dockertest", - "ec-divisors", - "env_logger", - "ethereum-serai", - "flexible-transcript", - "frost-schnorrkel", - "hex", - "k256", - "log", - "modular-frost", - "monero-simple-request-rpc", - "monero-wallet", - "parity-scale-codec", - "rand_chacha", - "rand_core", - "secp256k1", - "serai-client", - "serai-db", - "serai-docker-tests", - "serai-env", - "serai-message-queue", - "serai-processor-messages", - "serde_json", - "sp-application-crypto", - "thiserror", - "tokio", - "zalloc", - "zeroize", -] - [[package]] name = "serai-processor-frost-attempt-manager" version = "0.1.0" @@ -8676,6 +8702,7 @@ dependencies = [ "serai-in-instructions-primitives", "serai-primitives", "serai-processor-primitives", + "serai-processor-scheduler-primitives", "tokio", ] @@ -8684,11 +8711,32 @@ name = "serai-processor-scheduler-primitives" version = "0.1.0" dependencies = [ "borsh", - "group", + "ciphersuite", + "modular-frost", "parity-scale-codec", "serai-db", ] +[[package]] +name = "serai-processor-signers" +version = "0.1.0" +dependencies = [ + "async-trait", + "borsh", + "ciphersuite", + "log", + "modular-frost", + "parity-scale-codec", + "serai-db", + "serai-processor-frost-attempt-manager", + "serai-processor-messages", + "serai-processor-primitives", + "serai-processor-scanner", + "serai-processor-scheduler-primitives", + "serai-validator-sets-primitives", + "tokio", +] + [[package]] name = "serai-processor-tests" version = "0.1.0" @@ -8711,7 +8759,6 @@ dependencies = [ "serai-docker-tests", "serai-message-queue", 
"serai-message-queue-tests", - "serai-processor", "serai-processor-messages", "serde_json", "tokio", diff --git a/processor/frost-attempt-manager/src/individual.rs b/processor/frost-attempt-manager/src/individual.rs index d7f4eec0..049731c6 100644 --- a/processor/frost-attempt-manager/src/individual.rs +++ b/processor/frost-attempt-manager/src/individual.rs @@ -80,10 +80,15 @@ impl SigningProtocol { We avoid this by saving to the DB we preprocessed before sending our preprocessed, and only keeping our preprocesses for this instance of the processor. Accordingly, on reboot, we will - flag the prior preprocess and not send new preprocesses. + flag the prior preprocess and not send new preprocesses. This does require our own DB + transaction (to ensure we save to the DB we preprocessed before yielding the preprocess + messages). We also won't send the share we were supposed to, unfortunately, yet caching/reloading the preprocess has enough safety issues it isn't worth the headache. + + Since we bind a signing attempt to the lifetime of the application, we're also safe against + nonce reuse (as the state machines enforce single-use and we never reuse a preprocess). */ { let mut txn = self.db.txn(); diff --git a/processor/frost-attempt-manager/src/lib.rs b/processor/frost-attempt-manager/src/lib.rs index c4d1708d..2ce46784 100644 --- a/processor/frost-attempt-manager/src/lib.rs +++ b/processor/frost-attempt-manager/src/lib.rs @@ -65,6 +65,10 @@ impl AttemptManager { } /// Handle a message for a signing protocol. + /// + /// Handling a message multiple times is safe and will cause subsequent calls to return + /// `Response::Messages(vec![])`. Handling a message for a signing protocol which isn't being + /// worked on (potentially due to rebooting) will also return `Response::Messages(vec![])`. pub fn handle(&mut self, msg: CoordinatorMessage) -> Response { match msg { CoordinatorMessage::Preprocesses { id, preprocesses } => { diff --git a/processor/primitives/src/eventuality.rs b/processor/primitives/src/eventuality.rs index 80337824..f68ceeae 100644 --- a/processor/primitives/src/eventuality.rs +++ b/processor/primitives/src/eventuality.rs @@ -7,7 +7,7 @@ pub trait Eventuality: Sized + Send + Sync { /// The type used to identify a received output. type OutputId: Id; - /// The ID of the transaction this Eventuality is for. + /// The ID of the SignableTransaction this Eventuality is for. /// /// This is an internal ID arbitrarily definable so long as it's unique. fn id(&self) -> [u8; 32]; diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index f45d2966..246e5f46 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -520,3 +520,26 @@ impl SubstrateToEventualityDb { Burns::try_recv(txn, acknowledged_block) } } + +mod _completed_eventualities { + use serai_db::{Get, DbTxn, create_db, db_channel}; + + db_channel! { + ScannerPublic { + CompletedEventualities: (empty_key: ()) -> [u8; 32], + } + } +} + +/// The IDs of completed Eventualities found on-chain, within a finalized block. +pub struct CompletedEventualities(PhantomData); +impl CompletedEventualities { + pub(crate) fn send(txn: &mut impl DbTxn, id: [u8; 32]) { + _completed_eventualities::CompletedEventualities::send(txn, (), &id); + } + + /// Receive the ID of a completed Eventuality. 
+ pub fn try_recv(txn: &mut impl DbTxn) -> Option<[u8; 32]> { + _completed_eventualities::CompletedEventualities::try_recv(txn, ()) + } +} diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 6db60b71..7dadbe55 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -14,7 +14,7 @@ use crate::{ ScanToEventualityDb, }, BlockExt, ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, SchedulerUpdate, Scheduler, - sort_outputs, + CompletedEventualities, sort_outputs, scan::{next_to_scan_for_outputs_block, queue_output_until_block}, }; @@ -292,8 +292,13 @@ impl> ContinuallyRan for EventualityTas completed_eventualities }; - for tx in completed_eventualities.keys() { - log::info!("eventuality resolved by {}", hex::encode(tx.as_ref())); + for (tx, eventuality) in &completed_eventualities { + log::info!( + "eventuality {} resolved by {}", + hex::encode(eventuality.id()), + hex::encode(tx.as_ref()) + ); + CompletedEventualities::::send(&mut txn, eventuality.id()); } // Fetch all non-External outputs diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 5573e484..7c699e9c 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -21,6 +21,7 @@ pub use lifetime::LifetimeStage; // Database schema definition and associated functions. mod db; +pub use db::CompletedEventualities; // Task to index the blockchain, ensuring we don't reorganize finalized blocks. mod index; // Scans blocks for received coins. @@ -170,6 +171,10 @@ pub type EventualityFor = <::Block as Block>::Eventuality; /// The block type for this ScannerFeed. pub type BlockFor = ::Block; +/// An object usable to publish a Batch. +// This will presumably be the Batch signer defined in `serai-processor-signers` or a test shim. +// It could also be some app-layer database for the purpose of verifying the Batches published to +// Serai. #[async_trait::async_trait] pub trait BatchPublisher: 'static + Send + Sync { /// An error encountered when publishing the Batch. diff --git a/processor/scheduler/primitives/Cargo.toml b/processor/scheduler/primitives/Cargo.toml index cdf12cbb..f847300a 100644 --- a/processor/scheduler/primitives/Cargo.toml +++ b/processor/scheduler/primitives/Cargo.toml @@ -20,7 +20,8 @@ ignored = ["scale", "borsh"] workspace = true [dependencies] -group = { version = "0.13", default-features = false } +ciphersuite = { path = "../../../crypto/ciphersuite", default-features = false, features = ["std"] } +frost = { package = "modular-frost", path = "../../../crypto/frost", default-features = false } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } diff --git a/processor/scheduler/primitives/src/lib.rs b/processor/scheduler/primitives/src/lib.rs index b3bf525c..4de4f67a 100644 --- a/processor/scheduler/primitives/src/lib.rs +++ b/processor/scheduler/primitives/src/lib.rs @@ -5,16 +5,25 @@ use core::marker::PhantomData; use std::io; -use group::GroupEncoding; +use ciphersuite::{group::GroupEncoding, Ciphersuite}; +use frost::{dkg::ThresholdKeys, sign::PreprocessMachine}; use serai_db::DbTxn; +/// A transaction. +pub trait Transaction: Sized { + /// Read a `Transaction`. + fn read(reader: &mut impl io::Read) -> io::Result; + /// Write a `Transaction`. 
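// (Implementors are expected to keep `read` and `write` as exact inverses: the
// signers deserialize with `read` and then assert the buffer was fully
// consumed, so any trailing or missing bytes will panic at runtime.)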
+ fn write(&self, writer: &mut impl io::Write) -> io::Result<()>; +} + /// A signable transaction. pub trait SignableTransaction: 'static + Sized + Send + Sync + Clone { /// The ciphersuite used to sign this transaction. - type Ciphersuite: Cuphersuite; + type Ciphersuite: Ciphersuite; /// The preprocess machine for the signing protocol for this transaction. - type PreprocessMachine: PreprocessMachine; + type PreprocessMachine: Clone + PreprocessMachine; /// Read a `SignableTransaction`. fn read(reader: &mut impl io::Read) -> io::Result; diff --git a/processor/signers/Cargo.toml b/processor/signers/Cargo.toml index 007c814c..06d64da2 100644 --- a/processor/signers/Cargo.toml +++ b/processor/signers/Cargo.toml @@ -20,13 +20,23 @@ ignored = ["borsh", "scale"] workspace = true [dependencies] -group = { version = "0.13", default-features = false } +async-trait = { version = "0.1", default-features = false } +ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] } +frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } + +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +serai-validator-sets-primitives = { path = "../../substrate/validator-sets/primitives", default-features = false, features = ["std"] } + +serai-db = { path = "../../common/db" } log = { version = "0.4", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } +messages = { package = "serai-processor-messages", path = "../messages" } primitives = { package = "serai-processor-primitives", path = "../primitives" } scanner = { package = "serai-processor-scanner", path = "../scanner" } -scheduler = { package = "serai-scheduler-primitives", path = "../scheduler/primitives" } +scheduler = { package = "serai-processor-scheduler-primitives", path = "../scheduler/primitives" } frost-attempt-manager = { package = "serai-processor-frost-attempt-manager", path = "../frost-attempt-manager" } diff --git a/processor/signers/src/db.rs b/processor/signers/src/db.rs new file mode 100644 index 00000000..5ba5f7d4 --- /dev/null +++ b/processor/signers/src/db.rs @@ -0,0 +1,27 @@ +use serai_validator_sets_primitives::Session; + +use serai_db::{Get, DbTxn, create_db, db_channel}; + +use messages::sign::{ProcessorMessage, CoordinatorMessage}; + +db_channel! { + SignersGlobal { + // CompletedEventualities needs to be handled by each signer, meaning we need to turn its + // effective spsc into a spmc. We do this by duplicating its message for all keys we're + // signing for. 
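// (The intended fan-out: a routing task drains the scanner's global
// `CompletedEventualities` channel and re-sends each ID here once per session
// being signed for, turning the effective spsc into the spmc described above.)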
+ // TODO: Populate from CompletedEventualities + CompletedEventualitiesForEachKey: (session: Session) -> [u8; 32], + + CoordinatorToTransactionSignerMessages: (session: Session) -> CoordinatorMessage, + TransactionSignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, + + CoordinatorToBatchSignerMessages: (session: Session) -> CoordinatorMessage, + BatchSignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, + + CoordinatorToSlashReportSignerMessages: (session: Session) -> CoordinatorMessage, + SlashReportSignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, + + CoordinatorToCosignerMessages: (session: Session) -> CoordinatorMessage, + CosignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, + } +} diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index c221ca4c..7453f4b6 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -2,8 +2,38 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] +use core::fmt::Debug; + +use frost::sign::PreprocessMachine; + +use scheduler::SignableTransaction; + +pub(crate) mod db; + mod transaction; +/// An object capable of publishing a transaction. +#[async_trait::async_trait] +pub trait TransactionPublisher: 'static + Send + Sync { + /// An error encountered when publishing a transaction. + /// + /// This MUST be an ephemeral error. Retrying publication MUST eventually resolve without manual + /// intervention/changing the arguments. + /// + /// The transaction already being present in the mempool/on-chain SHOULD NOT be considered an + /// error. + type EphemeralError: Debug; + + /// Publish a transaction. + /// + /// This will be called multiple times, with the same transaction, until the transaction is + /// confirmed on-chain. + async fn publish( + &self, + tx: ::Signature, + ) -> Result<(), Self::EphemeralError>; +} + /* // The signers used by a Processor, key-scoped. struct KeySigners { diff --git a/processor/signers/src/transaction/mod.rs b/processor/signers/src/transaction/mod.rs index ba1487cb..4ed573f4 100644 --- a/processor/signers/src/transaction/mod.rs +++ b/processor/signers/src/transaction/mod.rs @@ -1,68 +1,127 @@ -use serai_db::{Get, DbTxn, Db}; +use frost::dkg::ThresholdKeys; + +use serai_validator_sets_primitives::Session; + +use serai_db::{DbTxn, Db}; use primitives::task::ContinuallyRan; -use scanner::ScannerFeed; -use scheduler::TransactionsToSign; +use scheduler::{SignableTransaction, TransactionsToSign}; +use scanner::{ScannerFeed, Scheduler}; + +use frost_attempt_manager::*; + +use crate::{ + db::{ + CoordinatorToTransactionSignerMessages, TransactionSignerToCoordinatorMessages, + CompletedEventualitiesForEachKey, + }, + TransactionPublisher, +}; mod db; -use db::IndexDb; // Fetches transactions to sign and signs them. 
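// A sketch of what a `TransactionPublisher` implementor may look like, assuming
// some network RPC handle (`Rpc`, `RpcError`, and `send_raw_transaction` are
// illustrative names, not from this patch):
//
//   #[async_trait::async_trait]
//   impl TransactionPublisher for Rpc {
//     type EphemeralError = RpcError;
//     async fn publish(&self, tx: Tx) -> Result<(), RpcError> {
//       // per the documentation above, an already-known transaction is a success
//       match self.send_raw_transaction(&tx).await {
//         Ok(()) | Err(RpcError::AlreadyPresent) => Ok(()),
//         Err(e) => Err(e),
//       }
//     }
//   }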
-pub(crate) struct TransactionTask { +pub(crate) struct TransactionTask< + D: Db, + S: ScannerFeed, + Sch: Scheduler, + P: TransactionPublisher, +> { db: D, - keys: ThresholdKeys<::Ciphersuite>, + session: Session, + keys: Vec::Ciphersuite>>, attempt_manager: AttemptManager::PreprocessMachine>, + publisher: P, } -impl TransactionTask { - pub(crate) async fn new( +impl, P: TransactionPublisher> + TransactionTask +{ + pub(crate) fn new( db: D, - keys: ThresholdKeys<::Ciphersuite>, + session: Session, + keys: Vec::Ciphersuite>>, + publisher: P, ) -> Self { - Self { db, keys, attempt_manager: AttemptManager::new() } + let attempt_manager = AttemptManager::new( + db.clone(), + session, + keys.first().expect("creating a transaction signer with 0 keys").params().i(), + ); + Self { db, session, keys, attempt_manager, publisher } } } #[async_trait::async_trait] -impl ContinuallyRan for TransactionTask { +impl, P: TransactionPublisher> + ContinuallyRan for TransactionTask +{ async fn run_iteration(&mut self) -> Result { let mut iterated = false; // Check for new transactions to sign loop { let mut txn = self.db.txn(); - let Some(tx) = TransactionsToSign::try_recv(&mut txn, self.key) else { break }; + let Some(tx) = TransactionsToSign::::try_recv( + &mut txn, + &self.keys[0].group_key(), + ) else { + break; + }; iterated = true; let mut machines = Vec::with_capacity(self.keys.len()); for keys in &self.keys { machines.push(tx.clone().sign(keys.clone())); } - let messages = self.attempt_manager.register(tx.id(), machines); - todo!("TODO"); + for msg in self.attempt_manager.register(tx.id(), machines) { + TransactionSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + } txn.commit(); } // Check for completed Eventualities (meaning we should no longer sign for these transactions) loop { let mut txn = self.db.txn(); - let Some(tx) = CompletedEventualities::try_recv(&mut txn, self.key) else { break }; + let Some(id) = CompletedEventualitiesForEachKey::try_recv(&mut txn, self.session) else { + break; + }; iterated = true; - self.attempt_manager.retire(tx); + self.attempt_manager.retire(id); + // TODO: Stop rebroadcasting this transaction txn.commit(); } + // Handle any messages sent to us loop { let mut txn = self.db.txn(); - let Some(msg) = TransactionSignMessages::try_recv(&mut txn, self.key) else { break }; + let Some(msg) = CoordinatorToTransactionSignerMessages::try_recv(&mut txn, self.session) + else { + break; + }; iterated = true; match self.attempt_manager.handle(msg) { - Response::Messages(messages) => todo!("TODO"), - Response::Signature(signature) => todo!("TODO"), + Response::Messages(msgs) => { + for msg in msgs { + TransactionSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + } + } + Response::Signature(signed_tx) => { + // TODO: Save this TX to the DB + // TODO: Attempt publication every minute + // TODO: On boot, reload all TXs to rebroadcast + self + .publisher + .publish(signed_tx) + .await + .map_err(|e| format!("couldn't publish transaction: {e:?}"))?; + } } + + txn.commit(); } Ok(iterated) From 100c80be9ff91fb371a458aab5628c4362dee166 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 6 Sep 2024 04:15:02 -0400 Subject: [PATCH 079/368] Finish transaction signing task with TX rebroadcast code --- .../frost-attempt-manager/src/individual.rs | 6 +- processor/frost-attempt-manager/src/lib.rs | 15 ++- processor/scheduler/primitives/src/lib.rs | 2 +- processor/signers/src/lib.rs | 6 +- processor/signers/src/transaction/db.rs | 10 ++ 
processor/signers/src/transaction/mod.rs | 93 +++++++++++++++++-- 6 files changed, 109 insertions(+), 23 deletions(-) diff --git a/processor/frost-attempt-manager/src/individual.rs b/processor/frost-attempt-manager/src/individual.rs index 049731c6..2591b582 100644 --- a/processor/frost-attempt-manager/src/individual.rs +++ b/processor/frost-attempt-manager/src/individual.rs @@ -278,9 +278,7 @@ impl SigningProtocol { } /// Cleanup the database entries for a specified signing protocol. - pub(crate) fn cleanup(db: &mut D, id: [u8; 32]) { - let mut txn = db.txn(); - Attempted::del(&mut txn, id); - txn.commit(); + pub(crate) fn cleanup(txn: &mut impl DbTxn, id: [u8; 32]) { + Attempted::del(txn, id); } } diff --git a/processor/frost-attempt-manager/src/lib.rs b/processor/frost-attempt-manager/src/lib.rs index 2ce46784..6666ffac 100644 --- a/processor/frost-attempt-manager/src/lib.rs +++ b/processor/frost-attempt-manager/src/lib.rs @@ -8,7 +8,7 @@ use frost::{Participant, sign::PreprocessMachine}; use serai_validator_sets_primitives::Session; -use serai_db::Db; +use serai_db::{DbTxn, Db}; use messages::sign::{ProcessorMessage, CoordinatorMessage}; mod individual; @@ -19,7 +19,12 @@ pub enum Response { /// Messages to send to the coordinator. Messages(Vec), /// A produced signature. - Signature(M::Signature), + Signature { + /// The ID of the protocol this is for. + id: [u8; 32], + /// The signature. + signature: M::Signature, + }, } /// A manager of attempts for a variety of signing protocols. @@ -55,13 +60,13 @@ impl AttemptManager { /// This does not stop the protocol from being re-registered and further worked on (with /// undefined behavior) then. The higher-level context must never call `register` again with this /// ID accordingly. - pub fn retire(&mut self, id: [u8; 32]) { + pub fn retire(&mut self, txn: &mut impl DbTxn, id: [u8; 32]) { if self.active.remove(&id).is_none() { log::info!("retiring protocol {}, which we didn't register/already retired", hex::encode(id)); } else { log::info!("retired signing protocol {}", hex::encode(id)); } - SigningProtocol::::cleanup(&mut self.db, id); + SigningProtocol::::cleanup(txn, id); } /// Handle a message for a signing protocol. @@ -90,7 +95,7 @@ impl AttemptManager { return Response::Messages(vec![]); }; match protocol.shares(id.attempt, shares) { - Ok(signature) => Response::Signature(signature), + Ok(signature) => Response::Signature { id: id.id, signature }, Err(messages) => Response::Messages(messages), } } diff --git a/processor/scheduler/primitives/src/lib.rs b/processor/scheduler/primitives/src/lib.rs index 4de4f67a..cef10d35 100644 --- a/processor/scheduler/primitives/src/lib.rs +++ b/processor/scheduler/primitives/src/lib.rs @@ -23,7 +23,7 @@ pub trait SignableTransaction: 'static + Sized + Send + Sync + Clone { /// The ciphersuite used to sign this transaction. type Ciphersuite: Ciphersuite; /// The preprocess machine for the signing protocol for this transaction. - type PreprocessMachine: Clone + PreprocessMachine; + type PreprocessMachine: Clone + PreprocessMachine; /// Read a `SignableTransaction`. fn read(reader: &mut impl io::Read) -> io::Result; diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index 7453f4b6..eb09440d 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -19,15 +19,15 @@ pub trait TransactionPublisher: 'static + Send + Sync { /// /// This MUST be an ephemeral error. Retrying publication MUST eventually resolve without manual /// intervention/changing the arguments. 
- /// - /// The transaction already being present in the mempool/on-chain SHOULD NOT be considered an - /// error. type EphemeralError: Debug; /// Publish a transaction. /// /// This will be called multiple times, with the same transaction, until the transaction is /// confirmed on-chain. + /// + /// The transaction already being present in the mempool/on-chain MUST NOT be considered an + /// error. async fn publish( &self, tx: ::Signature, diff --git a/processor/signers/src/transaction/db.rs b/processor/signers/src/transaction/db.rs index 8b137891..b77d38c7 100644 --- a/processor/signers/src/transaction/db.rs +++ b/processor/signers/src/transaction/db.rs @@ -1 +1,11 @@ +use serai_validator_sets_primitives::Session; +use serai_db::{Get, DbTxn, create_db}; + +create_db! { + TransactionSigner { + ActiveSigningProtocols: (session: Session) -> Vec<[u8; 32]>, + SerializedSignableTransactions: (id: [u8; 32]) -> Vec, + SerializedTransactions: (id: [u8; 32]) -> Vec, + } +} diff --git a/processor/signers/src/transaction/mod.rs b/processor/signers/src/transaction/mod.rs index 4ed573f4..85a6a0ab 100644 --- a/processor/signers/src/transaction/mod.rs +++ b/processor/signers/src/transaction/mod.rs @@ -1,11 +1,13 @@ -use frost::dkg::ThresholdKeys; +use std::{collections::HashSet, time::{Duration, Instant}}; + +use frost::{dkg::ThresholdKeys, sign::PreprocessMachine}; use serai_validator_sets_primitives::Session; use serai_db::{DbTxn, Db}; use primitives::task::ContinuallyRan; -use scheduler::{SignableTransaction, TransactionsToSign}; +use scheduler::{Transaction, SignableTransaction, TransactionsToSign}; use scanner::{ScannerFeed, Scheduler}; use frost_attempt_manager::*; @@ -19,6 +21,13 @@ use crate::{ }; mod db; +use db::*; + +type TransactionFor = < + < + + >::SignableTransaction as SignableTransaction>::PreprocessMachine as PreprocessMachine +>::Signature; // Fetches transactions to sign and signs them. 
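// (The tables created above back the reboot story: `ActiveSigningProtocols`
// enumerates the in-flight protocols per session, `SerializedSignableTransactions`
// lets `new` re-register them on boot, and `SerializedTransactions` holds
// fully-signed transactions for periodic rebroadcast.)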
pub(crate) struct TransactionTask< @@ -28,11 +37,16 @@ pub(crate) struct TransactionTask< P: TransactionPublisher, > { db: D, + publisher: P, + session: Session, keys: Vec::Ciphersuite>>, + + active_signing_protocols: HashSet<[u8; 32]>, attempt_manager: AttemptManager::PreprocessMachine>, - publisher: P, + + last_publication: Instant, } impl, P: TransactionPublisher> @@ -40,16 +54,35 @@ impl, P: TransactionPublisher::Ciphersuite>>, - publisher: P, ) -> Self { - let attempt_manager = AttemptManager::new( + let mut active_signing_protocols = HashSet::new(); + let mut attempt_manager = AttemptManager::new( db.clone(), session, keys.first().expect("creating a transaction signer with 0 keys").params().i(), ); - Self { db, session, keys, attempt_manager, publisher } + + // Re-register all active signing protocols + for tx in ActiveSigningProtocols::get(&db, session).unwrap_or(vec![]) { + active_signing_protocols.insert(tx); + + let signable_transaction_buf = SerializedSignableTransactions::get(&db, tx).unwrap(); + let mut signable_transaction_buf = signable_transaction_buf.as_slice(); + let signable_transaction = >::SignableTransaction::read(&mut signable_transaction_buf).unwrap(); + assert!(signable_transaction_buf.is_empty()); + assert_eq!(signable_transaction.id(), tx); + + let mut machines = Vec::with_capacity(keys.len()); + for keys in &keys { + machines.push(signable_transaction.clone().sign(keys.clone())); + } + attempt_manager.register(tx, machines); + } + + Self { db, publisher, session, keys, active_signing_protocols, attempt_manager, last_publication: Instant::now() } } } @@ -71,6 +104,15 @@ impl, P: TransactionPublisher, P: TransactionPublisher, P: TransactionPublisher, P: TransactionPublisher { - // TODO: Save this TX to the DB + Response::Signature { id, signature: signed_tx } => { + // Save this transaction to the database + { + let mut buf = Vec::with_capacity(256); + signed_tx.write(&mut buf).unwrap(); + SerializedTransactions::set(&mut txn, id, &buf); + } + // TODO: Attempt publication every minute - // TODO: On boot, reload all TXs to rebroadcast self .publisher .publish(signed_tx) @@ -124,6 +182,21 @@ impl, P: TransactionPublisher Duration::from_secs(5 * 60) { + for tx in &self.active_signing_protocols { + let Some(tx_buf) = SerializedTransactions::get(&self.db, *tx) else { continue }; + let mut tx_buf = tx_buf.as_slice(); + let tx = TransactionFor::::read(&mut tx_buf).unwrap(); + assert!(tx_buf.is_empty()); + + self.publisher.publish(tx).await.map_err(|e| format!("couldn't re-broadcast transactions: {e:?}"))?; + } + + self.last_publication = Instant::now(); + } + Ok(iterated) } } From 8f848b1abc5f3556b130586c080625bbeacab691 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 6 Sep 2024 17:33:02 -0400 Subject: [PATCH 080/368] Tidy transaction signing task --- processor/signers/src/transaction/mod.rs | 58 +++++++++++++++++------- 1 file changed, 41 insertions(+), 17 deletions(-) diff --git a/processor/signers/src/transaction/mod.rs b/processor/signers/src/transaction/mod.rs index 85a6a0ab..b638eac0 100644 --- a/processor/signers/src/transaction/mod.rs +++ b/processor/signers/src/transaction/mod.rs @@ -1,4 +1,7 @@ -use std::{collections::HashSet, time::{Duration, Instant}}; +use std::{ + collections::HashSet, + time::{Duration, Instant}, +}; use frost::{dkg::ThresholdKeys, sign::PreprocessMachine}; @@ -71,7 +74,8 @@ impl, P: TransactionPublisher>::SignableTransaction::read(&mut signable_transaction_buf).unwrap(); + let signable_transaction = + 
>::SignableTransaction::read(&mut signable_transaction_buf).unwrap(); assert!(signable_transaction_buf.is_empty()); assert_eq!(signable_transaction.id(), tx); @@ -82,7 +86,15 @@ impl, P: TransactionPublisher, P: TransactionPublisher, P: TransactionPublisher, P: TransactionPublisher, P: TransactionPublisher::read(&mut tx_buf).unwrap(); assert!(tx_buf.is_empty()); - self.publisher.publish(tx).await.map_err(|e| format!("couldn't re-broadcast transactions: {e:?}"))?; + self + .publisher + .publish(tx) + .await + .map_err(|e| format!("couldn't re-broadcast transactions: {e:?}"))?; } self.last_publication = Instant::now(); From 59ff944152b7b4f8e34fad75d6f81513bf4ec7db Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 7 Sep 2024 03:33:26 -0400 Subject: [PATCH 081/368] Work on the higher-level signers API --- Cargo.lock | 1 + processor/signers/Cargo.toml | 1 + processor/signers/src/db.rs | 8 ++ processor/signers/src/lib.rs | 137 +++++++++++++++++++++-- processor/signers/src/transaction/mod.rs | 38 +++---- 5 files changed, 153 insertions(+), 32 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 768191b4..b960db4d 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8735,6 +8735,7 @@ dependencies = [ "serai-processor-scheduler-primitives", "serai-validator-sets-primitives", "tokio", + "zeroize", ] [[package]] diff --git a/processor/signers/Cargo.toml b/processor/signers/Cargo.toml index 06d64da2..3a96c043 100644 --- a/processor/signers/Cargo.toml +++ b/processor/signers/Cargo.toml @@ -21,6 +21,7 @@ workspace = true [dependencies] async-trait = { version = "0.1", default-features = false } +zeroize = { version = "1", default-features = false, features = ["std"] } ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] } frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } diff --git a/processor/signers/src/db.rs b/processor/signers/src/db.rs index 5ba5f7d4..9975cbda 100644 --- a/processor/signers/src/db.rs +++ b/processor/signers/src/db.rs @@ -4,6 +4,14 @@ use serai_db::{Get, DbTxn, create_db, db_channel}; use messages::sign::{ProcessorMessage, CoordinatorMessage}; +create_db! { + SignersGlobal { + RegisteredKeys: () -> Vec, + SerializedKeys: (session: Session) -> Vec, + LatestRetiredSession: () -> Session, + } +} + db_channel! { SignersGlobal { // CompletedEventualities needs to be handled by each signer, meaning we need to turn its diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index eb09440d..9bc2459d 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -2,11 +2,18 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use core::fmt::Debug; +use core::{fmt::Debug, marker::PhantomData}; -use frost::sign::PreprocessMachine; +use zeroize::Zeroizing; -use scheduler::SignableTransaction; +use serai_validator_sets_primitives::Session; + +use ciphersuite::{group::GroupEncoding, Ristretto}; +use frost::dkg::{ThresholdCore, ThresholdKeys}; + +use serai_db::{DbTxn, Db}; + +use scheduler::{Transaction, SignableTransaction, TransactionsToSign}; pub(crate) mod db; @@ -14,7 +21,7 @@ mod transaction; /// An object capable of publishing a transaction. #[async_trait::async_trait] -pub trait TransactionPublisher: 'static + Send + Sync { +pub trait TransactionPublisher: 'static + Send + Sync { /// An error encountered when publishing a transaction. /// /// This MUST be an ephemeral error. 
Retrying publication MUST eventually resolve without manual @@ -28,10 +35,124 @@ pub trait TransactionPublisher: 'static + Send + Sync { /// /// The transaction already being present in the mempool/on-chain MUST NOT be considered an /// error. - async fn publish( - &self, - tx: ::Signature, - ) -> Result<(), Self::EphemeralError>; + async fn publish(&self, tx: T) -> Result<(), Self::EphemeralError>; +} + +/// The signers used by a processor. +pub struct Signers(PhantomData); + +/* + This is completely outside of consensus, so the worst that can happen is: + + 1) Leakage of a private key, hence the usage of frost-attempt-manager which has an API to ensure + that doesn't happen + 2) The database isn't perfectly cleaned up (leaving some bytes on disk wasted) + 3) The state isn't perfectly cleaned up (leaving some bytes in RAM wasted) + + The last two are notably possible via a series of race conditions. For example, if an Eventuality + completion comes in *before* we registered a key, the signer will hold the signing protocol in + memory until the session is retired entirely. +*/ +impl Signers { + /// Initialize the signers. + /// + /// This will spawn tasks for any historically registered keys. + pub fn new(db: impl Db) -> Self { + for session in db::RegisteredKeys::get(&db).unwrap_or(vec![]) { + let buf = db::SerializedKeys::get(&db, session).unwrap(); + let mut buf = buf.as_slice(); + + let mut substrate_keys = vec![]; + let mut external_keys = vec![]; + while !buf.is_empty() { + substrate_keys + .push(ThresholdKeys::from(ThresholdCore::::read(&mut buf).unwrap())); + external_keys + .push(ThresholdKeys::from(ThresholdCore::::read(&mut buf).unwrap())); + } + + todo!("TODO") + } + + todo!("TODO") + } + + /// Register a set of keys to sign with. + /// + /// If this session (or a session after it) has already been retired, this is a NOP. + pub fn register_keys( + &mut self, + txn: &mut impl DbTxn, + session: Session, + substrate_keys: Vec>, + network_keys: Vec>, + ) { + if Some(session.0) <= db::LatestRetiredSession::get(txn).map(|session| session.0) { + return; + } + + { + let mut sessions = db::RegisteredKeys::get(txn).unwrap_or_else(|| Vec::with_capacity(1)); + sessions.push(session); + db::RegisteredKeys::set(txn, &sessions); + } + + { + let mut buf = Zeroizing::new(Vec::with_capacity(2 * substrate_keys.len() * 128)); + for (substrate_keys, network_keys) in substrate_keys.into_iter().zip(network_keys) { + buf.extend(&*substrate_keys.serialize()); + buf.extend(&*network_keys.serialize()); + } + db::SerializedKeys::set(txn, session, &buf); + } + } + + /// Retire the signers for a session. + /// + /// This MUST be called in order, for every session (even if we didn't register keys for this + /// session). 
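// An illustrative call pattern, per the documentation here and on
// `register_keys` (the keys and txn are assumed to come from the DKG and the
// surrounding database context):
//
//   signers.register_keys(&mut txn, Session(0), substrate_keys, network_keys);
//   // ... the session signs until its keys are rotated out ...
//   signers.retire_session(&mut txn, Session(0), &external_key);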
+ pub fn retire_session( + &mut self, + txn: &mut impl DbTxn, + session: Session, + external_key: &impl GroupEncoding, + ) { + // Update the latest retired session + { + let next_to_retire = + db::LatestRetiredSession::get(txn).map_or(Session(0), |session| Session(session.0 + 1)); + assert_eq!(session, next_to_retire); + db::LatestRetiredSession::set(txn, &session); + } + + // Kill the tasks + todo!("TODO"); + + // Update RegisteredKeys/SerializedKeys + if let Some(registered) = db::RegisteredKeys::get(txn) { + db::RegisteredKeys::set( + txn, + ®istered.into_iter().filter(|session_i| *session_i != session).collect(), + ); + } + db::SerializedKeys::del(txn, session); + + // Drain the transactions to sign + // Presumably, TransactionsToSign will be fully populated before retiry occurs, making this + // perfect in not leaving any pending blobs behind + while TransactionsToSign::::try_recv(txn, external_key).is_some() {} + + // Drain our DB channels + while db::CompletedEventualitiesForEachKey::try_recv(txn, session).is_some() {} + while db::CoordinatorToTransactionSignerMessages::try_recv(txn, session).is_some() {} + while db::TransactionSignerToCoordinatorMessages::try_recv(txn, session).is_some() {} + while db::CoordinatorToBatchSignerMessages::try_recv(txn, session).is_some() {} + while db::BatchSignerToCoordinatorMessages::try_recv(txn, session).is_some() {} + while db::CoordinatorToSlashReportSignerMessages::try_recv(txn, session).is_some() {} + while db::SlashReportSignerToCoordinatorMessages::try_recv(txn, session).is_some() {} + while db::CoordinatorToCosignerMessages::try_recv(txn, session).is_some() {} + while db::CosignerToCoordinatorMessages::try_recv(txn, session).is_some() {} + } } /* diff --git a/processor/signers/src/transaction/mod.rs b/processor/signers/src/transaction/mod.rs index b638eac0..8fdf8145 100644 --- a/processor/signers/src/transaction/mod.rs +++ b/processor/signers/src/transaction/mod.rs @@ -11,7 +11,6 @@ use serai_db::{DbTxn, Db}; use primitives::task::ContinuallyRan; use scheduler::{Transaction, SignableTransaction, TransactionsToSign}; -use scanner::{ScannerFeed, Scheduler}; use frost_attempt_manager::*; @@ -26,40 +25,35 @@ use crate::{ mod db; use db::*; -type TransactionFor = < - < - - >::SignableTransaction as SignableTransaction>::PreprocessMachine as PreprocessMachine ->::Signature; +type TransactionFor = + <::PreprocessMachine as PreprocessMachine>::Signature; // Fetches transactions to sign and signs them. 
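// (A note on the `TransactionFor` alias above: for these signing machines, the
// "signature" yielded by the FROST protocol is the fully-signed transaction
// itself, which is why the publishable transaction type is defined as the
// `PreprocessMachine`'s `Signature` type.)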
pub(crate) struct TransactionTask< D: Db, - S: ScannerFeed, - Sch: Scheduler, - P: TransactionPublisher, + ST: SignableTransaction, + P: TransactionPublisher>, > { db: D, publisher: P, session: Session, - keys: Vec::Ciphersuite>>, + keys: Vec>, active_signing_protocols: HashSet<[u8; 32]>, - attempt_manager: - AttemptManager::PreprocessMachine>, + attempt_manager: AttemptManager::PreprocessMachine>, last_publication: Instant, } -impl, P: TransactionPublisher> - TransactionTask +impl>> + TransactionTask { pub(crate) fn new( db: D, publisher: P, session: Session, - keys: Vec::Ciphersuite>>, + keys: Vec>, ) -> Self { let mut active_signing_protocols = HashSet::new(); let mut attempt_manager = AttemptManager::new( @@ -74,8 +68,7 @@ impl, P: TransactionPublisher>::SignableTransaction::read(&mut signable_transaction_buf).unwrap(); + let signable_transaction = ST::read(&mut signable_transaction_buf).unwrap(); assert!(signable_transaction_buf.is_empty()); assert_eq!(signable_transaction.id(), tx); @@ -99,8 +92,8 @@ impl, P: TransactionPublisher, P: TransactionPublisher> - ContinuallyRan for TransactionTask +impl>> ContinuallyRan + for TransactionTask { async fn run_iteration(&mut self) -> Result { let mut iterated = false; @@ -108,10 +101,7 @@ impl, P: TransactionPublisher::try_recv( - &mut txn, - &self.keys[0].group_key(), - ) else { + let Some(tx) = TransactionsToSign::::try_recv(&mut txn, &self.keys[0].group_key()) else { break; }; iterated = true; @@ -208,7 +198,7 @@ impl, P: TransactionPublisher::read(&mut tx_buf).unwrap(); + let tx = TransactionFor::::read(&mut tx_buf).unwrap(); assert!(tx_buf.is_empty()); self From 7484eadbbb026cf6d8d92e049b481b6161e3298b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 8 Sep 2024 00:30:55 -0400 Subject: [PATCH 082/368] Expand task management These extensions are necessary for the signers task management. --- processor/primitives/src/task.rs | 89 ++++++++++++++++++++++++++------ processor/scanner/src/lib.rs | 12 ++--- processor/signers/src/db.rs | 1 + processor/signers/src/lib.rs | 30 +++++++++-- 4 files changed, 105 insertions(+), 27 deletions(-) diff --git a/processor/primitives/src/task.rs b/processor/primitives/src/task.rs index 94a576a0..a40fb9ff 100644 --- a/processor/primitives/src/task.rs +++ b/processor/primitives/src/task.rs @@ -1,28 +1,54 @@ use core::time::Duration; +use std::sync::Arc; -use tokio::sync::mpsc; +use tokio::sync::{mpsc, oneshot, Mutex}; -/// A handle to immediately run an iteration of a task. +enum Closed { + NotClosed(Option>), + Closed, +} + +/// A handle for a task. #[derive(Clone)] -pub struct RunNowHandle(mpsc::Sender<()>); -/// An instruction recipient to immediately run an iteration of a task. -pub struct RunNowRecipient(mpsc::Receiver<()>); +pub struct TaskHandle { + run_now: mpsc::Sender<()>, + close: mpsc::Sender<()>, + closed: Arc>, +} +/// A task's internal structures. +pub struct Task { + run_now: mpsc::Receiver<()>, + close: mpsc::Receiver<()>, + closed: oneshot::Sender<()>, +} -impl RunNowHandle { - /// Create a new run-now handle to be assigned to a task. - pub fn new() -> (Self, RunNowRecipient) { +impl Task { + /// Create a new task definition. 
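// A sketch of the intended usage (the task being run and the dependents are
// illustrative):
//
//   let (task, handle) = Task::new();
//   tokio::spawn(my_task.continually_run(task, vec![dependent_handle]));
//   handle.run_now();     // trigger an immediate iteration
//   handle.close().await; // ... and eventually, shut the task down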
+ pub fn new() -> (Self, TaskHandle) { // Uses a capacity of 1 as any call to run as soon as possible satisfies all calls to run as // soon as possible - let (send, recv) = mpsc::channel(1); - (Self(send), RunNowRecipient(recv)) + let (run_now_send, run_now_recv) = mpsc::channel(1); + // And any call to close satisfies all calls to close + let (close_send, close_recv) = mpsc::channel(1); + let (closed_send, closed_recv) = oneshot::channel(); + ( + Self { run_now: run_now_recv, close: close_recv, closed: closed_send }, + TaskHandle { + run_now: run_now_send, + close: close_send, + closed: Arc::new(Mutex::new(Closed::NotClosed(Some(closed_recv)))), + }, + ) } +} +impl TaskHandle { /// Tell the task to run now (and not whenever its next iteration on a timer is). /// /// Panics if the task has been dropped. pub fn run_now(&self) { #[allow(clippy::match_same_arms)] - match self.0.try_send(()) { + match self.run_now.try_send(()) { Ok(()) => {} // NOP on full, as this task will already be run as soon as possible Err(mpsc::error::TrySendError::Full(())) => {} @@ -31,6 +57,24 @@ impl RunNowHandle { } } } + + /// Close the task. + /// + /// Returns once the task shuts down after it finishes its current iteration (which may be of + /// unbounded time). + pub async fn close(self) { + // If another instance of the handle called this, don't error + let _ = self.close.send(()).await; + // Wait until we receive the closed message + let mut closed = self.closed.lock().await; + match &mut *closed { + Closed::NotClosed(ref mut recv) => { + assert_eq!(recv.take().unwrap().await, Ok(()), "continually ran task dropped itself?"); + *closed = Closed::Closed; + } + Closed::Closed => {} + } + } } /// A task to be continually ran. @@ -50,10 +94,7 @@ pub trait ContinuallyRan: Sized { async fn run_iteration(&mut self) -> Result; /// Continually run the task. - /// - /// This returns a channel which can have a message set to immediately trigger a new run of an - /// iteration. - async fn continually_run(mut self, mut run_now: RunNowRecipient, dependents: Vec) { + async fn continually_run(mut self, mut task: Task, dependents: Vec) { // The default number of seconds to sleep before running the task again let default_sleep_before_next_task = Self::DELAY_BETWEEN_ITERATIONS; // The current number of seconds to sleep before running the task again @@ -66,6 +107,15 @@ pub trait ContinuallyRan: Sized { }; loop { + // If we were told to close/all handles were dropped, drop it + { + let should_close = task.close.try_recv(); + match should_close { + Ok(()) | Err(mpsc::error::TryRecvError::Disconnected) => break, + Err(mpsc::error::TryRecvError::Empty) => {} + } + } + match self.run_iteration().await { Ok(run_dependents) => { // Upon a successful (error-free) loop iteration, reset the amount of time we sleep @@ -86,8 +136,15 @@ pub trait ContinuallyRan: Sized { // Don't run the task again for another few seconds UNLESS told to run now tokio::select!
{ () = tokio::time::sleep(Duration::from_secs(current_sleep_before_next_task)) => {}, - msg = run_now.0.recv() => assert_eq!(msg, Some(()), "run now handle was dropped"), + msg = task.run_now.recv() => { + // Check if this is firing because the handle was dropped + if msg.is_none() { + break; + } + }, } } + + task.closed.send(()).unwrap(); } } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 7c699e9c..6403605d 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -343,7 +343,7 @@ pub trait Scheduler: 'static + Send { /// A representation of a scanner. #[allow(non_snake_case)] pub struct Scanner { - substrate_handle: RunNowHandle, + substrate_handle: TaskHandle, _S: PhantomData, } impl Scanner { @@ -362,11 +362,11 @@ impl Scanner { let substrate_task = substrate::SubstrateTask::<_, S>::new(db.clone()); let eventuality_task = eventuality::EventualityTask::<_, _, Sch>::new(db, feed, start_block); - let (_index_handle, index_run) = RunNowHandle::new(); - let (scan_handle, scan_run) = RunNowHandle::new(); - let (report_handle, report_run) = RunNowHandle::new(); - let (substrate_handle, substrate_run) = RunNowHandle::new(); - let (eventuality_handle, eventuality_run) = RunNowHandle::new(); + let (index_run, _index_handle) = Task::new(); + let (scan_run, scan_handle) = Task::new(); + let (report_run, report_handle) = Task::new(); + let (substrate_run, substrate_handle) = Task::new(); + let (eventuality_run, eventuality_handle) = Task::new(); // Upon indexing a new block, scan it tokio::spawn(index_task.continually_run(index_run, vec![scan_handle.clone()])); diff --git a/processor/signers/src/db.rs b/processor/signers/src/db.rs index 9975cbda..ec9b879c 100644 --- a/processor/signers/src/db.rs +++ b/processor/signers/src/db.rs @@ -9,6 +9,7 @@ create_db! { RegisteredKeys: () -> Vec, SerializedKeys: (session: Session) -> Vec, LatestRetiredSession: () -> Session, + ToCleanup: () -> Vec<(Session, Vec)>, } } diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index 9bc2459d..72fe2d17 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -3,6 +3,7 @@ #![deny(missing_docs)] use core::{fmt::Debug, marker::PhantomData}; +use std::collections::HashMap; use zeroize::Zeroizing; @@ -13,6 +14,7 @@ use frost::dkg::{ThresholdCore, ThresholdKeys}; use serai_db::{DbTxn, Db}; +use primitives::task::TaskHandle; use scheduler::{Transaction, SignableTransaction, TransactionsToSign}; pub(crate) mod db; @@ -39,7 +41,10 @@ pub trait TransactionPublisher: 'static + Send + Sync { } /// The signers used by a processor. -pub struct Signers(PhantomData); +pub struct Signers { + tasks: HashMap>, + _ST: PhantomData, +} /* This is completely outside of consensus, so the worst that can happen is: @@ -58,6 +63,8 @@ impl Signers { /// /// This will spawn tasks for any historically registered keys. pub fn new(db: impl Db) -> Self { + let mut tasks = HashMap::new(); + for session in db::RegisteredKeys::get(&db).unwrap_or(vec![]) { let buf = db::SerializedKeys::get(&db, session).unwrap(); let mut buf = buf.as_slice(); @@ -74,7 +81,7 @@ impl Signers { todo!("TODO") } - todo!("TODO") + Self { tasks, _ST: PhantomData } } /// Register a set of keys to sign with. 
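  ///
  /// An illustrative call (not from this patch; the method name `register_keys` and the
  /// surrounding variables are assumed):
  /// ```ignore
  /// let mut txn = db.txn();
  /// signers.register_keys(&mut txn, session, substrate_keys, network_keys);
  /// txn.commit();
  /// ```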
@@ -87,6 +94,7 @@ impl Signers { substrate_keys: Vec>, network_keys: Vec>, ) { + // Don't register already retired keys if Some(session.0) <= db::LatestRetiredSession::get(txn).map(|session| session.0) { return; } @@ -125,9 +133,6 @@ impl Signers { db::LatestRetiredSession::set(txn, &session); } - // Kill the tasks - todo!("TODO"); - // Update RegisteredKeys/SerializedKeys if let Some(registered) = db::RegisteredKeys::get(txn) { db::RegisteredKeys::set( @@ -137,6 +142,20 @@ impl Signers { } db::SerializedKeys::del(txn, session); + // Queue the session for clean up + let mut to_cleanup = db::ToCleanup::get(txn).unwrap_or(vec![]); + to_cleanup.push((session, external_key.to_bytes().as_ref().to_vec())); + db::ToCleanup::set(txn, &to_cleanup); + + // TODO: Handle all of the following cleanup on a task + /* + // Kill the tasks + if let Some(tasks) = self.tasks.remove(&session) { + for task in tasks { + task.close().await; + } + } + // Drain the transactions to sign // Presumably, TransactionsToSign will be fully populated before retiry occurs, making this // perfect in not leaving any pending blobs behind @@ -152,6 +171,7 @@ impl Signers { while db::SlashReportSignerToCoordinatorMessages::try_recv(txn, session).is_some() {} while db::CoordinatorToCosignerMessages::try_recv(txn, session).is_some() {} while db::CosignerToCoordinatorMessages::try_recv(txn, session).is_some() {} + */ } } From f07ec7bee05a646ce40896c56b5bc80c06c97d0a Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 8 Sep 2024 22:13:42 -0400 Subject: [PATCH 083/368] Route the coordinator, fix race conditions in the signers library --- Cargo.lock | 2 +- processor/frost-attempt-manager/Cargo.toml | 1 - .../frost-attempt-manager/src/individual.rs | 24 +- processor/frost-attempt-manager/src/lib.rs | 26 +-- processor/messages/Cargo.toml | 2 + processor/messages/src/lib.rs | 36 ++- processor/primitives/src/block.rs | 2 + processor/scanner/src/db.rs | 15 +- processor/scanner/src/eventuality/mod.rs | 2 +- processor/scanner/src/lib.rs | 20 +- processor/scheduler/primitives/src/lib.rs | 4 + processor/signers/src/coordinator.rs | 98 +++++++++ processor/signers/src/db.rs | 14 +- processor/signers/src/lib.rs | 206 +++++++++++------- processor/signers/src/transaction/mod.rs | 76 ++++--- 15 files changed, 356 insertions(+), 172 deletions(-) create mode 100644 processor/signers/src/coordinator.rs diff --git a/Cargo.lock b/Cargo.lock index b960db4d..d6b0e3de 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8630,7 +8630,6 @@ name = "serai-processor-frost-attempt-manager" version = "0.1.0" dependencies = [ "borsh", - "hex", "log", "modular-frost", "parity-scale-codec", @@ -8666,6 +8665,7 @@ version = "0.1.0" dependencies = [ "borsh", "dkg", + "hex", "parity-scale-codec", "serai-coins-primitives", "serai-in-instructions-primitives", diff --git a/processor/frost-attempt-manager/Cargo.toml b/processor/frost-attempt-manager/Cargo.toml index 67bd8bb6..ad8d2a4c 100644 --- a/processor/frost-attempt-manager/Cargo.toml +++ b/processor/frost-attempt-manager/Cargo.toml @@ -26,7 +26,6 @@ frost = { package = "modular-frost", path = "../../crypto/frost", version = "^0. 
serai-validator-sets-primitives = { path = "../../substrate/validator-sets/primitives", default-features = false, features = ["std"] }

-hex = { version = "0.4", default-features = false, features = ["std"] }
 log = { version = "0.4", default-features = false, features = ["std"] }

 scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] }
diff --git a/processor/frost-attempt-manager/src/individual.rs b/processor/frost-attempt-manager/src/individual.rs
index 2591b582..6a8b3352 100644
--- a/processor/frost-attempt-manager/src/individual.rs
+++ b/processor/frost-attempt-manager/src/individual.rs
@@ -10,11 +10,11 @@ use frost::{
 use serai_validator_sets_primitives::Session;

 use serai_db::{Get, DbTxn, Db, create_db};
-use messages::sign::{SignId, ProcessorMessage};
+use messages::sign::{VariantSignId, SignId, ProcessorMessage};

 create_db!(
   FrostAttemptManager {
-    Attempted: (id: [u8; 32]) -> u32,
+    Attempted: (id: VariantSignId) -> u32,
   }
 );

@@ -28,7 +28,7 @@ pub(crate) struct SigningProtocol<D: Db, M: PreprocessMachine> {
   // The key shares we sign with are expected to be contiguous from this position.
   start_i: Participant,
   // The ID of this signing protocol.
-  id: [u8; 32],
+  id: VariantSignId,
   // This accepts a vector of `root` machines in order to support signing with multiple key shares.
   root: Vec<M>,
   preprocessed: HashMap<u32, (Vec<M::SignMachine>, HashMap<Participant, Vec<u8>>)>,
@@ -48,10 +48,10 @@ impl<D: Db, M: PreprocessMachine> SigningProtocol<D, M> {
     db: D,
     session: Session,
     start_i: Participant,
-    id: [u8; 32],
+    id: VariantSignId,
     root: Vec<M>,
   ) -> Self {
-    log::info!("starting signing protocol {}", hex::encode(id));
+    log::info!("starting signing protocol {id:?}");

     Self {
       db,
@@ -100,7 +100,7 @@ impl<D: Db, M: PreprocessMachine> SigningProtocol<D, M> {
       txn.commit();
     }

-    log::debug!("attempting a new instance of signing protocol {}", hex::encode(self.id));
+    log::debug!("attempting a new instance of signing protocol {:?}", self.id);

     let mut our_preprocesses = HashMap::with_capacity(self.root.len());
     let mut preprocessed = Vec::with_capacity(self.root.len());
@@ -137,7 +137,7 @@ impl<D: Db, M: PreprocessMachine> SigningProtocol<D, M> {
     attempt: u32,
     serialized_preprocesses: HashMap<Participant, Vec<u8>>,
   ) -> Vec<ProcessorMessage> {
-    log::debug!("handling preprocesses for signing protocol {}", hex::encode(self.id));
+    log::debug!("handling preprocesses for signing protocol {:?}", self.id);

     let Some((machines, our_serialized_preprocesses)) = self.preprocessed.remove(&attempt) else {
       return vec![];
     };
@@ -211,8 +211,8 @@ impl<D: Db, M: PreprocessMachine> SigningProtocol<D, M> {
     assert!(self.shared.insert(attempt, (shared.swap_remove(0), our_shares)).is_none());

     log::debug!(
-      "successfully handled preprocesses for signing protocol {}, sending shares",
-      hex::encode(self.id)
+      "successfully handled preprocesses for signing protocol {:?}, sending shares",
+      self.id,
     );
     msgs.push(ProcessorMessage::Shares {
       id: SignId { session: self.session, id: self.id, attempt },
@@ -229,7 +229,7 @@ impl<D: Db, M: PreprocessMachine> SigningProtocol<D, M> {
     attempt: u32,
     serialized_shares: HashMap<Participant, Vec<u8>>,
   ) -> Result<M::Signature, Vec<ProcessorMessage>> {
-    log::debug!("handling shares for signing protocol {}", hex::encode(self.id));
+    log::debug!("handling shares for signing protocol {:?}", self.id);

     let Some((machine, our_serialized_shares)) = self.shared.remove(&attempt) else { Err(vec![])? };
@@ -272,13 +272,13 @@ impl<D: Db, M: PreprocessMachine> SigningProtocol<D, M> {
       },
     };

-    log::info!("finished signing for protocol {}", hex::encode(self.id));
+    log::info!("finished signing for protocol {:?}", self.id);

     Ok(signature)
   }

   /// Cleanup the database entries for a specified signing protocol.
-  pub(crate) fn cleanup(txn: &mut impl DbTxn, id: [u8; 32]) {
+  pub(crate) fn cleanup(txn: &mut impl DbTxn, id: VariantSignId) {
     Attempted::del(txn, id);
   }
 }
diff --git a/processor/frost-attempt-manager/src/lib.rs b/processor/frost-attempt-manager/src/lib.rs
index 6666ffac..db8b0861 100644
--- a/processor/frost-attempt-manager/src/lib.rs
+++ b/processor/frost-attempt-manager/src/lib.rs
@@ -9,7 +9,7 @@
 use frost::{Participant, sign::PreprocessMachine};
 use serai_validator_sets_primitives::Session;

 use serai_db::{DbTxn, Db};
-use messages::sign::{ProcessorMessage, CoordinatorMessage};
+use messages::sign::{VariantSignId, ProcessorMessage, CoordinatorMessage};

 mod individual;
 use individual::SigningProtocol;
@@ -21,7 +21,7 @@ pub enum Response<M: PreprocessMachine> {
   /// A produced signature.
   Signature {
     /// The ID of the protocol this is for.
-    id: [u8; 32],
+    id: VariantSignId,
     /// The signature.
     signature: M::Signature,
   },
@@ -32,7 +32,7 @@ pub struct AttemptManager<D: Db, M: PreprocessMachine> {
   db: D,
   session: Session,
   start_i: Participant,
-  active: HashMap<[u8; 32], SigningProtocol<D, M>>,
+  active: HashMap<VariantSignId, SigningProtocol<D, M>>,
 }

 impl<D: Db, M: PreprocessMachine> AttemptManager<D, M> {
@@ -46,7 +46,7 @@ impl<D: Db, M: PreprocessMachine> AttemptManager<D, M> {
   /// Register a signing protocol to attempt.
   ///
   /// This ID must be unique across all sessions, attempt managers, protocols, etc.
-  pub fn register(&mut self, id: [u8; 32], machines: Vec<M>) -> Vec<ProcessorMessage> {
+  pub fn register(&mut self, id: VariantSignId, machines: Vec<M>) -> Vec<ProcessorMessage> {
     let mut protocol =
       SigningProtocol::new(self.db.clone(), self.session, self.start_i, id, machines);
     let messages = protocol.attempt(0);
@@ -60,11 +60,11 @@ impl<D: Db, M: PreprocessMachine> AttemptManager<D, M> {
   /// This does not stop the protocol from being re-registered and further worked on (with
   /// undefined behavior) then. The higher-level context must never call `register` again with this
   /// ID accordingly.
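   ///
   /// An illustrative lifecycle (not from this patch; `attempt_manager`, `txn`, `id`, and
   /// `machines` are assumed):
   /// ```ignore
   /// let msgs = attempt_manager.register(id, machines);
   /// // ... route preprocesses/shares through the manager's message handler until it
   /// // yields a signature ...
   /// attempt_manager.retire(&mut txn, id);
   /// ```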
- pub fn retire(&mut self, txn: &mut impl DbTxn, id: [u8; 32]) { + pub fn retire(&mut self, txn: &mut impl DbTxn, id: VariantSignId) { if self.active.remove(&id).is_none() { - log::info!("retiring protocol {}, which we didn't register/already retired", hex::encode(id)); + log::info!("retiring protocol {id:?}, which we didn't register/already retired"); } else { - log::info!("retired signing protocol {}", hex::encode(id)); + log::info!("retired signing protocol {id:?}"); } SigningProtocol::::cleanup(txn, id); } @@ -79,8 +79,8 @@ impl AttemptManager { CoordinatorMessage::Preprocesses { id, preprocesses } => { let Some(protocol) = self.active.get_mut(&id.id) else { log::trace!( - "handling preprocesses for signing protocol {}, which we're not actively running", - hex::encode(id.id) + "handling preprocesses for signing protocol {:?}, which we're not actively running", + id.id, ); return Response::Messages(vec![]); }; @@ -89,8 +89,8 @@ impl AttemptManager { CoordinatorMessage::Shares { id, shares } => { let Some(protocol) = self.active.get_mut(&id.id) else { log::trace!( - "handling shares for signing protocol {}, which we're not actively running", - hex::encode(id.id) + "handling shares for signing protocol {:?}, which we're not actively running", + id.id, ); return Response::Messages(vec![]); }; @@ -102,8 +102,8 @@ impl AttemptManager { CoordinatorMessage::Reattempt { id } => { let Some(protocol) = self.active.get_mut(&id.id) else { log::trace!( - "reattempting signing protocol {}, which we're not actively running", - hex::encode(id.id) + "reattempting signing protocol {:?}, which we're not actively running", + id.id, ); return Response::Messages(vec![]); }; diff --git a/processor/messages/Cargo.toml b/processor/messages/Cargo.toml index 0eba999d..dbadd9db 100644 --- a/processor/messages/Cargo.toml +++ b/processor/messages/Cargo.toml @@ -17,6 +17,8 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] +hex = { version = "0.4", default-features = false, features = ["std"] } + scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index 27d75d2e..ef907f97 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -1,3 +1,4 @@ +use core::fmt; use std::collections::HashMap; use scale::{Encode, Decode}; @@ -85,10 +86,37 @@ pub mod key_gen { pub mod sign { use super::*; - #[derive(Clone, PartialEq, Eq, Hash, Debug, Encode, Decode, BorshSerialize, BorshDeserialize)] + #[derive(Clone, Copy, PartialEq, Eq, Hash, Encode, Decode, BorshSerialize, BorshDeserialize)] + pub enum VariantSignId { + Cosign([u8; 32]), + Batch(u32), + SlashReport([u8; 32]), + Transaction([u8; 32]), + } + impl fmt::Debug for VariantSignId { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> { + match self { + Self::Cosign(cosign) => { + f.debug_struct("VariantSignId::Cosign").field("0", &hex::encode(cosign)).finish() + } + Self::Batch(batch) => f.debug_struct("VariantSignId::Batch").field("0", &batch).finish(), + Self::SlashReport(slash_report) => f + .debug_struct("VariantSignId::SlashReport") + .field("0", &hex::encode(slash_report)) + .finish(), + Self::Transaction(tx) => { + f.debug_struct("VariantSignId::Transaction").field("0", &hex::encode(tx)).finish() + } + } + } + } + + #[derive( + Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode, 
BorshSerialize, BorshDeserialize,
+  )]
   pub struct SignId {
     pub session: Session,
-    pub id: [u8; 32],
+    pub id: VariantSignId,
     pub attempt: u32,
   }
@@ -109,11 +137,11 @@ pub mod sign {
       None
     }

-    pub fn session(&self) -> Session {
+    pub fn sign_id(&self) -> &SignId {
       match self {
         CoordinatorMessage::Preprocesses { id, .. } |
         CoordinatorMessage::Shares { id, .. } |
-        CoordinatorMessage::Reattempt { id, .. } => id.session,
+        CoordinatorMessage::Reattempt { id, .. } => id,
       }
     }
   }
diff --git a/processor/primitives/src/block.rs b/processor/primitives/src/block.rs
index 6f603ab2..89dff54f 100644
--- a/processor/primitives/src/block.rs
+++ b/processor/primitives/src/block.rs
@@ -60,6 +60,8 @@ pub trait Block: Send + Sync + Sized + Clone + Debug {
   /// Check if this block resolved any Eventualities.
   ///
+  /// This MUST mutate `eventualities` to no longer contain the resolved Eventualities.
+  ///
   /// Returns the resolved Eventualities, indexed by the ID of the transactions which resolved
   /// them.
   fn check_for_eventuality_resolutions(
diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs
index 246e5f46..f72fa202 100644
--- a/processor/scanner/src/db.rs
+++ b/processor/scanner/src/db.rs
@@ -1,6 +1,7 @@
 use core::marker::PhantomData;
 use std::io::{self, Read, Write};

+use group::GroupEncoding;
 use scale::{Encode, Decode, IoReader};
 use borsh::{BorshSerialize, BorshDeserialize};
 use serai_db::{Get, DbTxn, create_db, db_channel};
@@ -526,20 +527,20 @@ mod _completed_eventualities {
   db_channel! {
     ScannerPublic {
-      CompletedEventualities: (empty_key: ()) -> [u8; 32],
+      CompletedEventualities: (key: &[u8]) -> [u8; 32],
     }
   }
 }

 /// The IDs of completed Eventualities found on-chain, within a finalized block.
-pub struct CompletedEventualities<S: ScannerFeed>(PhantomData<S>);
-impl<S: ScannerFeed> CompletedEventualities<S> {
-  pub(crate) fn send(txn: &mut impl DbTxn, id: [u8; 32]) {
-    _completed_eventualities::CompletedEventualities::send(txn, (), &id);
+pub struct CompletedEventualities<K: GroupEncoding>(PhantomData<K>);
+impl<K: GroupEncoding> CompletedEventualities<K> {
+  pub(crate) fn send(txn: &mut impl DbTxn, key: &K, id: [u8; 32]) {
+    _completed_eventualities::CompletedEventualities::send(txn, key.to_bytes().as_ref(), &id);
   }

   /// Receive the ID of a completed Eventuality.
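   ///
   /// An illustrative draining loop (not from this patch; `txn` and `key` are assumed):
   /// ```ignore
   /// while let Some(id) = CompletedEventualities::try_recv(&mut txn, &key) {
   ///   // the Eventuality with this ID resolved on-chain; stop signing its transaction
   /// }
   /// ```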
- pub fn try_recv(txn: &mut impl DbTxn) -> Option<[u8; 32]> { - _completed_eventualities::CompletedEventualities::try_recv(txn, ()) + pub fn try_recv(txn: &mut impl DbTxn, key: &K) -> Option<[u8; 32]> { + _completed_eventualities::CompletedEventualities::try_recv(txn, key.to_bytes().as_ref()) } } diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 7dadbe55..be5b4555 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -298,7 +298,7 @@ impl> ContinuallyRan for EventualityTas hex::encode(eventuality.id()), hex::encode(tx.as_ref()) ); - CompletedEventualities::::send(&mut txn, eventuality.id()); + CompletedEventualities::send(&mut txn, &key.key, eventuality.id()); } // Fetch all non-External outputs diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 6403605d..3323c6ff 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -362,24 +362,24 @@ impl Scanner { let substrate_task = substrate::SubstrateTask::<_, S>::new(db.clone()); let eventuality_task = eventuality::EventualityTask::<_, _, Sch>::new(db, feed, start_block); - let (index_run, _index_handle) = Task::new(); - let (scan_run, scan_handle) = Task::new(); - let (report_run, report_handle) = Task::new(); - let (substrate_run, substrate_handle) = Task::new(); - let (eventuality_run, eventuality_handle) = Task::new(); + let (index_task_def, _index_handle) = Task::new(); + let (scan_task_def, scan_handle) = Task::new(); + let (report_task_def, report_handle) = Task::new(); + let (substrate_task_def, substrate_handle) = Task::new(); + let (eventuality_task_def, eventuality_handle) = Task::new(); // Upon indexing a new block, scan it - tokio::spawn(index_task.continually_run(index_run, vec![scan_handle.clone()])); + tokio::spawn(index_task.continually_run(index_task_def, vec![scan_handle.clone()])); // Upon scanning a block, report it - tokio::spawn(scan_task.continually_run(scan_run, vec![report_handle])); + tokio::spawn(scan_task.continually_run(scan_task_def, vec![report_handle])); // Upon reporting a block, we do nothing (as the burden is on Substrate which won't be // immediately ready) - tokio::spawn(report_task.continually_run(report_run, vec![])); + tokio::spawn(report_task.continually_run(report_task_def, vec![])); // Upon handling an event from Substrate, we run the Eventuality task (as it's what's affected) - tokio::spawn(substrate_task.continually_run(substrate_run, vec![eventuality_handle])); + tokio::spawn(substrate_task.continually_run(substrate_task_def, vec![eventuality_handle])); // Upon handling the Eventualities in a block, we run the scan task as we've advanced the // window its allowed to scan - tokio::spawn(eventuality_task.continually_run(eventuality_run, vec![scan_handle])); + tokio::spawn(eventuality_task.continually_run(eventuality_task_def, vec![scan_handle])); Self { substrate_handle, _S: PhantomData } } diff --git a/processor/scheduler/primitives/src/lib.rs b/processor/scheduler/primitives/src/lib.rs index cef10d35..f146027d 100644 --- a/processor/scheduler/primitives/src/lib.rs +++ b/processor/scheduler/primitives/src/lib.rs @@ -41,6 +41,10 @@ pub trait SignableTransaction: 'static + Sized + Send + Sync + Clone { fn sign(self, keys: ThresholdKeys) -> Self::PreprocessMachine; } +/// The transaction type for a SignableTransaction. 
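+///
+/// An illustrative use (not from this patch; `MySignable` is an assumed implementor).
+/// `TransactionFor<MySignable>` resolves to the `Signature` type yielded by `MySignable`'s
+/// `PreprocessMachine`, i.e. the fully signed, publishable transaction:
+/// ```ignore
+/// fn queue_publication(tx: TransactionFor<MySignable>) { /* hand off to the publisher */ }
+/// ```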
+pub type TransactionFor = + <::PreprocessMachine as PreprocessMachine>::Signature; + mod db { use serai_db::{Get, DbTxn, create_db, db_channel}; diff --git a/processor/signers/src/coordinator.rs b/processor/signers/src/coordinator.rs new file mode 100644 index 00000000..43dcc571 --- /dev/null +++ b/processor/signers/src/coordinator.rs @@ -0,0 +1,98 @@ +use serai_db::{DbTxn, Db}; + +use primitives::task::ContinuallyRan; + +use crate::{ + db::{ + RegisteredKeys, CosignerToCoordinatorMessages, BatchSignerToCoordinatorMessages, + SlashReportSignerToCoordinatorMessages, TransactionSignerToCoordinatorMessages, + }, + Coordinator, +}; + +// Fetches messages to send the coordinator and sends them. +pub(crate) struct CoordinatorTask { + db: D, + coordinator: C, +} + +impl CoordinatorTask { + pub(crate) fn new(db: D, coordinator: C) -> Self { + Self { db, coordinator } + } +} + +#[async_trait::async_trait] +impl ContinuallyRan for CoordinatorTask { + async fn run_iteration(&mut self) -> Result { + let mut iterated = false; + + for session in RegisteredKeys::get(&self.db).unwrap_or(vec![]) { + loop { + let mut txn = self.db.txn(); + let Some(msg) = CosignerToCoordinatorMessages::try_recv(&mut txn, session) else { + break; + }; + iterated = true; + + self + .coordinator + .send(msg) + .await + .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; + + txn.commit(); + } + + loop { + let mut txn = self.db.txn(); + let Some(msg) = BatchSignerToCoordinatorMessages::try_recv(&mut txn, session) else { + break; + }; + iterated = true; + + self + .coordinator + .send(msg) + .await + .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; + + txn.commit(); + } + + loop { + let mut txn = self.db.txn(); + let Some(msg) = SlashReportSignerToCoordinatorMessages::try_recv(&mut txn, session) else { + break; + }; + iterated = true; + + self + .coordinator + .send(msg) + .await + .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; + + txn.commit(); + } + + loop { + let mut txn = self.db.txn(); + let Some(msg) = TransactionSignerToCoordinatorMessages::try_recv(&mut txn, session) else { + break; + }; + iterated = true; + + self + .coordinator + .send(msg) + .await + .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; + + txn.commit(); + } + } + + Ok(iterated) + } +} diff --git a/processor/signers/src/db.rs b/processor/signers/src/db.rs index ec9b879c..ae62c947 100644 --- a/processor/signers/src/db.rs +++ b/processor/signers/src/db.rs @@ -15,14 +15,8 @@ create_db! { db_channel! { SignersGlobal { - // CompletedEventualities needs to be handled by each signer, meaning we need to turn its - // effective spsc into a spmc. We do this by duplicating its message for all keys we're - // signing for. - // TODO: Populate from CompletedEventualities - CompletedEventualitiesForEachKey: (session: Session) -> [u8; 32], - - CoordinatorToTransactionSignerMessages: (session: Session) -> CoordinatorMessage, - TransactionSignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, + CoordinatorToCosignerMessages: (session: Session) -> CoordinatorMessage, + CosignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, CoordinatorToBatchSignerMessages: (session: Session) -> CoordinatorMessage, BatchSignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, @@ -30,7 +24,7 @@ db_channel! 
{ CoordinatorToSlashReportSignerMessages: (session: Session) -> CoordinatorMessage, SlashReportSignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, - CoordinatorToCosignerMessages: (session: Session) -> CoordinatorMessage, - CosignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, + CoordinatorToTransactionSignerMessages: (session: Session) -> CoordinatorMessage, + TransactionSignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, } } diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index 72fe2d17..a53f2208 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -7,23 +7,42 @@ use std::collections::HashMap; use zeroize::Zeroizing; -use serai_validator_sets_primitives::Session; - -use ciphersuite::{group::GroupEncoding, Ristretto}; +use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; use frost::dkg::{ThresholdCore, ThresholdKeys}; +use serai_validator_sets_primitives::Session; + use serai_db::{DbTxn, Db}; -use primitives::task::TaskHandle; -use scheduler::{Transaction, SignableTransaction, TransactionsToSign}; +use messages::sign::{VariantSignId, ProcessorMessage, CoordinatorMessage}; + +use primitives::task::{Task, TaskHandle, ContinuallyRan}; +use scheduler::{Transaction, SignableTransaction, TransactionFor}; pub(crate) mod db; +mod coordinator; +use coordinator::CoordinatorTask; + mod transaction; +use transaction::TransactionTask; + +/// A connection to the Coordinator which messages can be published with. +#[async_trait::async_trait] +pub trait Coordinator: 'static + Send + Sync { + /// An error encountered when sending a message. + /// + /// This MUST be an ephemeral error. Retrying sending a message MUST eventually resolve without + /// manual intervention/changing the arguments. + type EphemeralError: Debug; + + /// Send a `messages::sign::ProcessorMessage`. + async fn send(&mut self, message: ProcessorMessage) -> Result<(), Self::EphemeralError>; +} /// An object capable of publishing a transaction. #[async_trait::async_trait] -pub trait TransactionPublisher: 'static + Send + Sync { +pub trait TransactionPublisher: 'static + Send + Sync + Clone { /// An error encountered when publishing a transaction. /// /// This MUST be an ephemeral error. Retrying publication MUST eventually resolve without manual @@ -40,9 +59,18 @@ pub trait TransactionPublisher: 'static + Send + Sync { async fn publish(&self, tx: T) -> Result<(), Self::EphemeralError>; } +struct Tasks { + cosigner: TaskHandle, + batch: TaskHandle, + slash_report: TaskHandle, + transaction: TaskHandle, +} + /// The signers used by a processor. +#[allow(non_snake_case)] pub struct Signers { - tasks: HashMap>, + coordinator_handle: TaskHandle, + tasks: HashMap, _ST: PhantomData, } @@ -62,9 +90,57 @@ impl Signers { /// Initialize the signers. /// /// This will spawn tasks for any historically registered keys. - pub fn new(db: impl Db) -> Self { + pub fn new( + mut db: impl Db, + coordinator: impl Coordinator, + publisher: &impl TransactionPublisher>, + ) -> Self { + /* + On boot, perform any database cleanup which was queued. + + We don't do this cleanup at time of dropping the task as we'd need to wait an unbounded + amount of time for the task to stop (requiring an async task), then we'd have to drain the + channels (which would be on a distinct DB transaction and risk not occurring if we rebooted + while waiting for the task to stop). This is the easiest way to handle this. 
+ */ + { + let mut txn = db.txn(); + for (session, external_key_bytes) in db::ToCleanup::get(&txn).unwrap_or(vec![]) { + let mut external_key_bytes = external_key_bytes.as_slice(); + let external_key = + ::read_G(&mut external_key_bytes).unwrap(); + assert!(external_key_bytes.is_empty()); + + // Drain the transactions to sign + // TransactionsToSign will be fully populated by the scheduler before retiry occurs, making + // this perfect in not leaving any pending blobs behind + while scheduler::TransactionsToSign::::try_recv(&mut txn, &external_key).is_some() {} + + // Drain the completed Eventualities + // This will be fully populated by the scanner before retiry + while scanner::CompletedEventualities::try_recv(&mut txn, &external_key).is_some() {} + + // Drain our DB channels + while db::CoordinatorToCosignerMessages::try_recv(&mut txn, session).is_some() {} + while db::CosignerToCoordinatorMessages::try_recv(&mut txn, session).is_some() {} + while db::CoordinatorToBatchSignerMessages::try_recv(&mut txn, session).is_some() {} + while db::BatchSignerToCoordinatorMessages::try_recv(&mut txn, session).is_some() {} + while db::CoordinatorToSlashReportSignerMessages::try_recv(&mut txn, session).is_some() {} + while db::SlashReportSignerToCoordinatorMessages::try_recv(&mut txn, session).is_some() {} + while db::CoordinatorToTransactionSignerMessages::try_recv(&mut txn, session).is_some() {} + while db::TransactionSignerToCoordinatorMessages::try_recv(&mut txn, session).is_some() {} + } + db::ToCleanup::del(&mut txn); + txn.commit(); + } + let mut tasks = HashMap::new(); + let (coordinator_task, coordinator_handle) = Task::new(); + tokio::spawn( + CoordinatorTask::new(db.clone(), coordinator).continually_run(coordinator_task, vec![]), + ); + for session in db::RegisteredKeys::get(&db).unwrap_or(vec![]) { let buf = db::SerializedKeys::get(&db, session).unwrap(); let mut buf = buf.as_slice(); @@ -78,10 +154,23 @@ impl Signers { .push(ThresholdKeys::from(ThresholdCore::::read(&mut buf).unwrap())); } - todo!("TODO") + // TODO: Batch signer, cosigner, slash report signers + + let (transaction_task, transaction_handle) = Task::new(); + tokio::spawn( + TransactionTask::<_, ST, _>::new(db.clone(), publisher.clone(), session, external_keys) + .continually_run(transaction_task, vec![coordinator_handle.clone()]), + ); + + tasks.insert(session, Tasks { + cosigner: todo!("TODO"), + batch: todo!("TODO"), + slash_report: todo!("TODO"), + transaction: transaction_handle, + }); } - Self { tasks, _ST: PhantomData } + Self { coordinator_handle, tasks, _ST: PhantomData } } /// Register a set of keys to sign with. @@ -146,82 +235,31 @@ impl Signers { let mut to_cleanup = db::ToCleanup::get(txn).unwrap_or(vec![]); to_cleanup.push((session, external_key.to_bytes().as_ref().to_vec())); db::ToCleanup::set(txn, &to_cleanup); + } - // TODO: Handle all of the following cleanup on a task - /* - // Kill the tasks - if let Some(tasks) = self.tasks.remove(&session) { - for task in tasks { - task.close().await; + /// Queue handling a message. + /// + /// This is a cheap call and able to be done inline with a higher-level loop. 
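+  ///
+  /// An illustrative call site (not from this patch; `db`, `signers`, and `message` are
+  /// assumed):
+  /// ```ignore
+  /// let mut txn = db.txn();
+  /// signers.queue_message(&mut txn, &message);
+  /// txn.commit(); // the relevant signer task was also nudged via `run_now()`
+  /// ```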
+ pub fn queue_message(&mut self, txn: &mut impl DbTxn, message: &CoordinatorMessage) { + let sign_id = message.sign_id(); + let tasks = self.tasks.get(&sign_id.session); + match sign_id.id { + VariantSignId::Cosign(_) => { + db::CoordinatorToCosignerMessages::send(txn, sign_id.session, message); + if let Some(tasks) = tasks { tasks.cosigner.run_now(); } + } + VariantSignId::Batch(_) => { + db::CoordinatorToBatchSignerMessages::send(txn, sign_id.session, message); + if let Some(tasks) = tasks { tasks.batch.run_now(); } + } + VariantSignId::SlashReport(_) => { + db::CoordinatorToSlashReportSignerMessages::send(txn, sign_id.session, message); + if let Some(tasks) = tasks { tasks.slash_report.run_now(); } + } + VariantSignId::Transaction(_) => { + db::CoordinatorToTransactionSignerMessages::send(txn, sign_id.session, message); + if let Some(tasks) = tasks { tasks.transaction.run_now(); } } } - - // Drain the transactions to sign - // Presumably, TransactionsToSign will be fully populated before retiry occurs, making this - // perfect in not leaving any pending blobs behind - while TransactionsToSign::::try_recv(txn, external_key).is_some() {} - - // Drain our DB channels - while db::CompletedEventualitiesForEachKey::try_recv(txn, session).is_some() {} - while db::CoordinatorToTransactionSignerMessages::try_recv(txn, session).is_some() {} - while db::TransactionSignerToCoordinatorMessages::try_recv(txn, session).is_some() {} - while db::CoordinatorToBatchSignerMessages::try_recv(txn, session).is_some() {} - while db::BatchSignerToCoordinatorMessages::try_recv(txn, session).is_some() {} - while db::CoordinatorToSlashReportSignerMessages::try_recv(txn, session).is_some() {} - while db::SlashReportSignerToCoordinatorMessages::try_recv(txn, session).is_some() {} - while db::CoordinatorToCosignerMessages::try_recv(txn, session).is_some() {} - while db::CosignerToCoordinatorMessages::try_recv(txn, session).is_some() {} - */ } } - -/* -// The signers used by a Processor, key-scoped. -struct KeySigners { - transaction: AttemptManager, - substrate: AttemptManager>, - cosigner: AttemptManager>, -} - -/// The signers used by a protocol. -pub struct Signers(HashMap, KeySigners>); - -impl Signers { - /// Create a new set of signers. - pub fn new(db: D) -> Self { - // TODO: Load the registered keys - // TODO: Load the transactions being signed - // TODO: Load the batches being signed - todo!("TODO") - } - - /// Register a transaction to sign. - pub fn sign_transaction(&mut self) -> Vec { - todo!("TODO") - } - /// Mark a transaction as signed. - pub fn signed_transaction(&mut self) { todo!("TODO") } - - /// Register a batch to sign. - pub fn sign_batch(&mut self, key: KeyFor, batch: Batch) -> Vec { - todo!("TODO") - } - /// Mark a batch as signed. - pub fn signed_batch(&mut self, batch: u32) { todo!("TODO") } - - /// Register a slash report to sign. - pub fn sign_slash_report(&mut self) -> Vec { - todo!("TODO") - } - /// Mark a slash report as signed. - pub fn signed_slash_report(&mut self) { todo!("TODO") } - - /// Start a cosigning protocol. - pub fn cosign(&mut self) { todo!("TODO") } - - /// Handle a message for a signing protocol. 
- pub fn handle(&mut self, msg: CoordinatorMessage) -> Vec { - todo!("TODO") - } -} -*/ diff --git a/processor/signers/src/transaction/mod.rs b/processor/signers/src/transaction/mod.rs index 8fdf8145..be08cec2 100644 --- a/processor/signers/src/transaction/mod.rs +++ b/processor/signers/src/transaction/mod.rs @@ -3,31 +3,28 @@ use std::{ time::{Duration, Instant}, }; -use frost::{dkg::ThresholdKeys, sign::PreprocessMachine}; +use frost::dkg::ThresholdKeys; use serai_validator_sets_primitives::Session; use serai_db::{DbTxn, Db}; +use messages::sign::VariantSignId; + use primitives::task::ContinuallyRan; -use scheduler::{Transaction, SignableTransaction, TransactionsToSign}; +use scheduler::{Transaction, SignableTransaction, TransactionFor, TransactionsToSign}; +use scanner::CompletedEventualities; use frost_attempt_manager::*; use crate::{ - db::{ - CoordinatorToTransactionSignerMessages, TransactionSignerToCoordinatorMessages, - CompletedEventualitiesForEachKey, - }, + db::{CoordinatorToTransactionSignerMessages, TransactionSignerToCoordinatorMessages}, TransactionPublisher, }; mod db; use db::*; -type TransactionFor = - <::PreprocessMachine as PreprocessMachine>::Signature; - // Fetches transactions to sign and signs them. pub(crate) struct TransactionTask< D: Db, @@ -76,7 +73,7 @@ impl> for keys in &keys { machines.push(signable_transaction.clone().sign(keys.clone())); } - attempt_manager.register(tx, machines); + attempt_manager.register(VariantSignId::Transaction(tx), machines); } Self { @@ -123,7 +120,7 @@ impl> for keys in &self.keys { machines.push(tx.clone().sign(keys.clone())); } - for msg in self.attempt_manager.register(tx.id(), machines) { + for msg in self.attempt_manager.register(VariantSignId::Transaction(tx.id()), machines) { TransactionSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); } @@ -133,28 +130,42 @@ impl> // Check for completed Eventualities (meaning we should no longer sign for these transactions) loop { let mut txn = self.db.txn(); - let Some(id) = CompletedEventualitiesForEachKey::try_recv(&mut txn, self.session) else { + let Some(id) = CompletedEventualities::try_recv(&mut txn, &self.keys[0].group_key()) else { break; }; + + /* + We may have yet to register this signing protocol. + + While `TransactionsToSign` is populated before `CompletedEventualities`, we could + theoretically have `TransactionsToSign` populated with a new transaction _while iterating + over `CompletedEventualities`_, and then have `CompletedEventualities` populated. In that + edge case, we will see the completion notification before we see the transaction. + + In such a case, we break (dropping the txn, re-queueing the completion notification). On + the task's next iteration, we'll process the transaction from `TransactionsToSign` and be + able to make progress. 
+ */ + if !self.active_signing_protocols.remove(&id) { + break; + } iterated = true; - // This may or may not be an ID this key was responsible for - if self.active_signing_protocols.remove(&id) { - // Since it was, remove this as an active signing protocol - ActiveSigningProtocols::set( - &mut txn, - self.session, - &self.active_signing_protocols.iter().copied().collect(), - ); - // Clean up the database - SerializedSignableTransactions::del(&mut txn, id); - SerializedTransactions::del(&mut txn, id); + // Since it was, remove this as an active signing protocol + ActiveSigningProtocols::set( + &mut txn, + self.session, + &self.active_signing_protocols.iter().copied().collect(), + ); + // Clean up the database + SerializedSignableTransactions::del(&mut txn, id); + SerializedTransactions::del(&mut txn, id); + + // We retire with a txn so we either successfully flag this Eventuality as completed, and + // won't re-register it (making this retire safe), or we don't flag it, meaning we will + // re-register it, yet that's safe as we have yet to retire it + self.attempt_manager.retire(&mut txn, VariantSignId::Transaction(id)); - // We retire with a txn so we either successfully flag this Eventuality as completed, and - // won't re-register it (making this retire safe), or we don't flag it, meaning we will - // re-register it, yet that's safe as we have yet to retire it - self.attempt_manager.retire(&mut txn, id); - } txn.commit(); } @@ -178,7 +189,14 @@ impl> { let mut buf = Vec::with_capacity(256); signed_tx.write(&mut buf).unwrap(); - SerializedTransactions::set(&mut txn, id, &buf); + SerializedTransactions::set( + &mut txn, + match id { + VariantSignId::Transaction(id) => id, + _ => panic!("TransactionTask signed a non-transaction"), + }, + &buf, + ); } self From 4152bcacb250ddcf229bbce73a037cb9be0cb5cf Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 8 Sep 2024 23:42:18 -0400 Subject: [PATCH 084/368] Replace scanner's BatchPublisher with a pair of DB channels --- processor/scanner/Cargo.toml | 2 +- processor/scanner/src/db.rs | 67 ++++++++++++++++++++++---- processor/scanner/src/lib.rs | 34 +++++-------- processor/scanner/src/report/db.rs | 30 +++++++++++- processor/scanner/src/report/mod.rs | 54 ++++++++++++--------- processor/scanner/src/scan/mod.rs | 18 +++++-- processor/scanner/src/substrate/mod.rs | 8 ++- 7 files changed, 152 insertions(+), 61 deletions(-) diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index 2a3e7e0a..e3e08329 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -35,7 +35,7 @@ tokio = { version = "1", default-features = false, features = ["rt-multi-thread" serai-db = { path = "../../common/db" } serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } -serai-in-instructions-primitives = { path = "../../substrate/in-instructions/primitives", default-features = false, features = ["std"] } +serai-in-instructions-primitives = { path = "../../substrate/in-instructions/primitives", default-features = false, features = ["std", "borsh"] } serai-coins-primitives = { path = "../../substrate/coins/primitives", default-features = false, features = ["std", "borsh"] } primitives = { package = "serai-processor-primitives", path = "../primitives" } diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index f72fa202..f54ff8e1 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -2,11 +2,12 @@ use core::marker::PhantomData; use 
std::io::{self, Read, Write}; use group::GroupEncoding; + use scale::{Encode, Decode, IoReader}; use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db, db_channel}; -use serai_in_instructions_primitives::InInstructionWithBalance; +use serai_in_instructions_primitives::{InInstructionWithBalance, Batch}; use serai_coins_primitives::OutInstructionWithBalance; use primitives::{EncodableG, Address, ReceivedOutput}; @@ -105,6 +106,10 @@ create_db!( pub(crate) struct ScannerGlobalDb(PhantomData); impl ScannerGlobalDb { + pub(crate) fn has_any_key_been_queued(getter: &impl Get) -> bool { + ActiveKeys::get::>>(getter).is_some() + } + /// Queue a key. /// /// Keys may be queued whenever, so long as they're scheduled to activate `WINDOW_LENGTH` blocks @@ -460,15 +465,20 @@ db_channel! { } } +pub(crate) struct InInstructionData { + pub(crate) external_key_for_session_to_sign_batch: KeyFor, + pub(crate) returnable_in_instructions: Vec>, +} + pub(crate) struct ScanToReportDb(PhantomData); impl ScanToReportDb { pub(crate) fn send_in_instructions( txn: &mut impl DbTxn, block_number: u64, - returnable_in_instructions: &[Returnable], + data: &InInstructionData, ) { - let mut buf = vec![]; - for returnable_in_instruction in returnable_in_instructions { + let mut buf = data.external_key_for_session_to_sign_batch.to_bytes().as_ref().to_vec(); + for returnable_in_instruction in &data.returnable_in_instructions { returnable_in_instruction.write(&mut buf).unwrap(); } InInstructions::send( @@ -481,7 +491,7 @@ impl ScanToReportDb { pub(crate) fn recv_in_instructions( txn: &mut impl DbTxn, block_number: u64, - ) -> Vec> { + ) -> InInstructionData { let data = InInstructions::try_recv(txn, ()) .expect("receiving InInstructions for a scanned block not yet sent"); assert_eq!( @@ -490,11 +500,20 @@ impl ScanToReportDb { ); let mut buf = data.returnable_in_instructions.as_slice(); + let external_key_for_session_to_sign_batch = { + let mut external_key_for_session_to_sign_batch = + as GroupEncoding>::Repr::default(); + let key_len = external_key_for_session_to_sign_batch.as_ref().len(); + external_key_for_session_to_sign_batch.as_mut().copy_from_slice(&buf[.. key_len]); + buf = &buf[key_len ..]; + KeyFor::::from_bytes(&external_key_for_session_to_sign_batch).unwrap() + }; + let mut returnable_in_instructions = vec![]; while !buf.is_empty() { returnable_in_instructions.push(Returnable::read(&mut buf).unwrap()); } - returnable_in_instructions + InInstructionData { external_key_for_session_to_sign_batch, returnable_in_instructions } } } @@ -522,25 +541,55 @@ impl SubstrateToEventualityDb { } } -mod _completed_eventualities { +mod _public_db { + use serai_in_instructions_primitives::Batch; + use serai_db::{Get, DbTxn, create_db, db_channel}; db_channel! { ScannerPublic { + BatchToSign: (key: &[u8]) -> Batch, + AcknowledgedBatch: (key: &[u8]) -> u32, CompletedEventualities: (key: &[u8]) -> [u8; 32], } } } +/// The batches to sign and publish. +pub struct BatchToSign(PhantomData); +impl BatchToSign { + pub(crate) fn send(txn: &mut impl DbTxn, key: &K, batch: &Batch) { + _public_db::BatchToSign::send(txn, key.to_bytes().as_ref(), batch); + } + + /// Receive a batch to sign and publish. + pub fn try_recv(txn: &mut impl DbTxn, key: &K) -> Option { + _public_db::BatchToSign::try_recv(txn, key.to_bytes().as_ref()) + } +} + +/// The batches which were acknowledged on-chain. 
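+///
+/// An illustrative consumption loop (not from this patch; `txn` and `key` are assumed):
+/// ```ignore
+/// while let Some(id) = AcknowledgedBatch::try_recv(&mut txn, &key) {
+///   // the batch is on-chain; any signing protocol for it can be retired
+/// }
+/// ```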
+pub struct AcknowledgedBatch(PhantomData); +impl AcknowledgedBatch { + pub(crate) fn send(txn: &mut impl DbTxn, key: &K, batch: u32) { + _public_db::AcknowledgedBatch::send(txn, key.to_bytes().as_ref(), &batch); + } + + /// Receive the ID of a batch which was acknowledged. + pub fn try_recv(txn: &mut impl DbTxn, key: &K) -> Option { + _public_db::AcknowledgedBatch::try_recv(txn, key.to_bytes().as_ref()) + } +} + /// The IDs of completed Eventualities found on-chain, within a finalized block. pub struct CompletedEventualities(PhantomData); impl CompletedEventualities { pub(crate) fn send(txn: &mut impl DbTxn, key: &K, id: [u8; 32]) { - _completed_eventualities::CompletedEventualities::send(txn, key.to_bytes().as_ref(), &id); + _public_db::CompletedEventualities::send(txn, key.to_bytes().as_ref(), &id); } /// Receive the ID of a completed Eventuality. pub fn try_recv(txn: &mut impl DbTxn, key: &K) -> Option<[u8; 32]> { - _completed_eventualities::CompletedEventualities::try_recv(txn, key.to_bytes().as_ref()) + _public_db::CompletedEventualities::try_recv(txn, key.to_bytes().as_ref()) } } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 3323c6ff..bcd195ec 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -10,7 +10,6 @@ use group::GroupEncoding; use serai_db::{Get, DbTxn, Db}; use serai_primitives::{NetworkId, Coin, Amount}; -use serai_in_instructions_primitives::Batch; use serai_coins_primitives::OutInstructionWithBalance; use primitives::{task::*, Address, ReceivedOutput, Block, Payment}; @@ -21,7 +20,8 @@ pub use lifetime::LifetimeStage; // Database schema definition and associated functions. mod db; -pub use db::CompletedEventualities; +use db::ScannerGlobalDb; +pub use db::{BatchToSign, AcknowledgedBatch, CompletedEventualities}; // Task to index the blockchain, ensuring we don't reorganize finalized blocks. mod index; // Scans blocks for received coins. @@ -171,24 +171,6 @@ pub type EventualityFor = <::Block as Block>::Eventuality; /// The block type for this ScannerFeed. pub type BlockFor = ::Block; -/// An object usable to publish a Batch. -// This will presumably be the Batch signer defined in `serai-processor-signers` or a test shim. -// It could also be some app-layer database for the purpose of verifying the Batches published to -// Serai. -#[async_trait::async_trait] -pub trait BatchPublisher: 'static + Send + Sync { - /// An error encountered when publishing the Batch. - /// - /// This MUST be an ephemeral error. Retrying publication MUST eventually resolve without manual - /// intervention/changing the arguments. - type EphemeralError: Debug; - - /// Publish a Batch. - /// - /// This function must be safe to call with the same Batch multiple times. - async fn publish_batch(&mut self, batch: Batch) -> Result<(), Self::EphemeralError>; -} - /// A return to occur. pub struct Return { address: AddressFor, @@ -351,14 +333,20 @@ impl Scanner { /// /// This will begin its execution, spawning several asynchronous tasks. 
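  ///
  /// An illustrative construction (not from this patch; `MyScheduler`, `feed`, and the start
  /// values are assumed):
  /// ```ignore
  /// let scanner = Scanner::new::<MyScheduler>(db, feed, start_block, start_key).await;
  /// ```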
pub async fn new>( - db: impl Db, + mut db: impl Db, feed: S, - batch_publisher: impl BatchPublisher, start_block: u64, + start_key: KeyFor, ) -> Self { + if !ScannerGlobalDb::::has_any_key_been_queued(&db) { + let mut txn = db.txn(); + ScannerGlobalDb::::queue_key(&mut txn, start_block, start_key); + txn.commit(); + } + let index_task = index::IndexTask::new(db.clone(), feed.clone(), start_block).await; let scan_task = scan::ScanTask::new(db.clone(), feed.clone(), start_block); - let report_task = report::ReportTask::<_, S, _>::new(db.clone(), batch_publisher, start_block); + let report_task = report::ReportTask::<_, S>::new(db.clone(), start_block); let substrate_task = substrate::SubstrateTask::<_, S>::new(db.clone()); let eventuality_task = eventuality::EventualityTask::<_, _, Sch>::new(db, feed, start_block); diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs index baff6635..05239779 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/report/db.rs @@ -1,6 +1,8 @@ use core::marker::PhantomData; use std::io::{Read, Write}; +use group::GroupEncoding; + use scale::{Encode, Decode, IoReader}; use serai_db::{Get, DbTxn, create_db}; @@ -8,7 +10,7 @@ use serai_primitives::Balance; use primitives::Address; -use crate::{ScannerFeed, AddressFor}; +use crate::{ScannerFeed, KeyFor, AddressFor}; create_db!( ScannerReport { @@ -20,6 +22,9 @@ create_db!( // The block number which caused a batch BlockNumberForBatch: (batch: u32) -> u64, + // The external key for the session which should sign a batch + ExternalKeyForSessionToSignBatch: (batch: u32) -> Vec, + // The return addresses for the InInstructions within a Batch SerializedReturnAddresses: (batch: u32) -> Vec, } @@ -55,6 +60,29 @@ impl ReportDb { Some(block_number) } + pub(crate) fn save_external_key_for_session_to_sign_batch( + txn: &mut impl DbTxn, + id: u32, + external_key_for_session_to_sign_batch: &KeyFor, + ) { + ExternalKeyForSessionToSignBatch::set( + txn, + id, + &external_key_for_session_to_sign_batch.to_bytes().as_ref().to_vec(), + ); + } + + pub(crate) fn take_external_key_for_session_to_sign_batch( + txn: &mut impl DbTxn, + id: u32, + ) -> Option> { + ExternalKeyForSessionToSignBatch::get(txn, id).map(|key_vec| { + let mut key = as GroupEncoding>::Repr::default(); + key.as_mut().copy_from_slice(&key_vec); + KeyFor::::from_bytes(&key).unwrap() + }) + } + pub(crate) fn save_return_information( txn: &mut impl DbTxn, id: u32, diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index ba851713..f983d0e7 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -8,23 +8,16 @@ use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; use primitives::task::ContinuallyRan; use crate::{ - db::{Returnable, ScannerGlobalDb, ScanToReportDb}, + db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToReportDb, BatchToSign}, index, scan::next_to_scan_for_outputs_block, - ScannerFeed, BatchPublisher, + ScannerFeed, KeyFor, }; mod db; pub(crate) use db::ReturnInformation; use db::ReportDb; -pub(crate) fn take_return_information( - txn: &mut impl DbTxn, - id: u32, -) -> Option>>> { - ReportDb::::take_return_information(txn, id) -} - pub(crate) fn take_block_number_for_batch( txn: &mut impl DbTxn, id: u32, @@ -32,6 +25,20 @@ pub(crate) fn take_block_number_for_batch( ReportDb::::take_block_number_for_batch(txn, id) } +pub(crate) fn take_external_key_for_session_to_sign_batch( + txn: &mut impl DbTxn, + id: u32, +) -> 
Option> { + ReportDb::::take_external_key_for_session_to_sign_batch(txn, id) +} + +pub(crate) fn take_return_information( + txn: &mut impl DbTxn, + id: u32, +) -> Option>>> { + ReportDb::::take_return_information(txn, id) +} + /* This task produces Batches for notable blocks, with all InInstructions, in an ordered fashion. @@ -40,14 +47,13 @@ pub(crate) fn take_block_number_for_batch( the InInstructions for it. */ #[allow(non_snake_case)] -pub(crate) struct ReportTask { +pub(crate) struct ReportTask { db: D, - batch_publisher: B, _S: PhantomData, } -impl ReportTask { - pub(crate) fn new(mut db: D, batch_publisher: B, start_block: u64) -> Self { +impl ReportTask { + pub(crate) fn new(mut db: D, start_block: u64) -> Self { if ReportDb::::next_to_potentially_report_block(&db).is_none() { // Initialize the DB let mut txn = db.txn(); @@ -55,12 +61,12 @@ impl ReportTask { txn.commit(); } - Self { db, batch_publisher, _S: PhantomData } + Self { db, _S: PhantomData } } } #[async_trait::async_trait] -impl ContinuallyRan for ReportTask { +impl ContinuallyRan for ReportTask { async fn run_iteration(&mut self) -> Result { let highest_reportable = { // Fetch the next to scan block @@ -87,7 +93,10 @@ impl ContinuallyRan for ReportTask::recv_in_instructions(&mut txn, b); + let InInstructionData { + external_key_for_session_to_sign_batch, + returnable_in_instructions: in_instructions, + } = ScanToReportDb::::recv_in_instructions(&mut txn, b); let notable = ScannerGlobalDb::::is_block_notable(&txn, b); if !notable { assert!(in_instructions.is_empty(), "block wasn't notable yet had InInstructions"); @@ -138,19 +147,20 @@ impl ContinuallyRan for ReportTask::save_external_key_for_session_to_sign_batch( + &mut txn, + batch.id, + &external_key_for_session_to_sign_batch, + ); ReportDb::::save_return_information(&mut txn, batch.id, return_information); } for batch in batches { - self - .batch_publisher - .publish_batch(batch) - .await - .map_err(|e| format!("failed to publish batch: {e:?}"))?; + BatchToSign::send(&mut txn, &external_key_for_session_to_sign_batch, &batch); } } diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 51671dc6..91c97f60 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -13,8 +13,8 @@ use primitives::{task::ContinuallyRan, OutputType, ReceivedOutput, Block}; use crate::{ lifetime::LifetimeStage, db::{ - OutputWithInInstruction, Returnable, SenderScanData, ScannerGlobalDb, ScanToReportDb, - ScanToEventualityDb, + OutputWithInInstruction, Returnable, SenderScanData, ScannerGlobalDb, InInstructionData, + ScanToReportDb, ScanToEventualityDb, }, BlockExt, ScannerFeed, AddressFor, OutputFor, Return, sort_outputs, eventuality::latest_scannable_block, @@ -166,7 +166,7 @@ impl ContinuallyRan for ScanTask { let mut costs_to_aggregate = HashMap::with_capacity(1); // Scan for each key - for key in keys { + for key in &keys { for output in block.scan_for_outputs(key.key) { assert_eq!(output.key(), key.key); @@ -339,7 +339,17 @@ impl ContinuallyRan for ScanTask { let in_instructions = in_instructions.into_iter().map(|(_id, in_instruction)| in_instruction).collect::>(); // Send the InInstructions to the report task - ScanToReportDb::::send_in_instructions(&mut txn, b, &in_instructions); + // We need to also specify which key is responsible for signing the Batch for these, which + // will always be the oldest key (as the new key signing the Batch signifies handover + // acceptance) + ScanToReportDb::::send_in_instructions( + 
&mut txn, + b, + &InInstructionData { + external_key_for_session_to_sign_batch: keys[0].key, + returnable_in_instructions: in_instructions, + }, + ); // Send the scan data to the eventuality task ScanToEventualityDb::::send_scan_data(&mut txn, b, &scan_data); diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index d67be9dc..6f9cd86b 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -6,7 +6,7 @@ use serai_coins_primitives::{OutInstruction, OutInstructionWithBalance}; use primitives::task::ContinuallyRan; use crate::{ - db::{ScannerGlobalDb, SubstrateToEventualityDb}, + db::{ScannerGlobalDb, SubstrateToEventualityDb, AcknowledgedBatch}, report, ScannerFeed, KeyFor, }; @@ -79,6 +79,12 @@ impl ContinuallyRan for SubstrateTask { return Ok(made_progress); }; + { + let external_key_for_session_to_sign_batch = + report::take_external_key_for_session_to_sign_batch::(&mut txn, batch_id).unwrap(); + AcknowledgedBatch::send(&mut txn, &external_key_for_session_to_sign_batch, batch_id); + } + // Mark we made progress and handle this made_progress = true; From ed0221d8043ddc7d0b7cc81555ffaf38ed111321 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 9 Sep 2024 01:01:29 -0400 Subject: [PATCH 085/368] Add BatchSignerTask Uses a wrapper around AlgorithmMachine Schnorrkel to let the message be &[]. --- Cargo.lock | 3 + processor/scanner/src/db.rs | 20 +-- processor/scanner/src/lib.rs | 2 +- processor/scanner/src/report/mod.rs | 4 +- processor/scanner/src/substrate/mod.rs | 4 +- processor/signers/Cargo.toml | 3 + processor/signers/src/batch/db.rs | 13 ++ processor/signers/src/batch/mod.rs | 180 ++++++++++++++++++++ processor/signers/src/coordinator.rs | 2 + processor/signers/src/lib.rs | 80 ++++++--- processor/signers/src/transaction/mod.rs | 17 +- processor/signers/src/wrapped_schnorrkel.rs | 86 ++++++++++ processor/src/multisigs/mod.rs | 8 - 13 files changed, 371 insertions(+), 51 deletions(-) create mode 100644 processor/signers/src/batch/db.rs create mode 100644 processor/signers/src/batch/mod.rs create mode 100644 processor/signers/src/wrapped_schnorrkel.rs delete mode 100644 processor/src/multisigs/mod.rs diff --git a/Cargo.lock b/Cargo.lock index d6b0e3de..9db0bb74 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8724,10 +8724,13 @@ dependencies = [ "async-trait", "borsh", "ciphersuite", + "frost-schnorrkel", "log", "modular-frost", "parity-scale-codec", + "rand_core", "serai-db", + "serai-in-instructions-primitives", "serai-processor-frost-attempt-manager", "serai-processor-messages", "serai-processor-primitives", diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index f54ff8e1..3dd5a2e2 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -548,36 +548,36 @@ mod _public_db { db_channel! { ScannerPublic { - BatchToSign: (key: &[u8]) -> Batch, - AcknowledgedBatch: (key: &[u8]) -> u32, + BatchesToSign: (key: &[u8]) -> Batch, + AcknowledgedBatches: (key: &[u8]) -> u32, CompletedEventualities: (key: &[u8]) -> [u8; 32], } } } /// The batches to sign and publish. -pub struct BatchToSign(PhantomData); -impl BatchToSign { +pub struct BatchesToSign(PhantomData); +impl BatchesToSign { pub(crate) fn send(txn: &mut impl DbTxn, key: &K, batch: &Batch) { - _public_db::BatchToSign::send(txn, key.to_bytes().as_ref(), batch); + _public_db::BatchesToSign::send(txn, key.to_bytes().as_ref(), batch); } /// Receive a batch to sign and publish. 
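///
/// An illustrative polling loop (not from this patch; `txn` and `key` are assumed):
/// ```ignore
/// while let Some(batch) = BatchesToSign::try_recv(&mut txn, &key) {
///   // start a threshold signing protocol over `batch_message(&batch)`
/// }
/// ```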
pub fn try_recv(txn: &mut impl DbTxn, key: &K) -> Option { - _public_db::BatchToSign::try_recv(txn, key.to_bytes().as_ref()) + _public_db::BatchesToSign::try_recv(txn, key.to_bytes().as_ref()) } } /// The batches which were acknowledged on-chain. -pub struct AcknowledgedBatch(PhantomData); -impl AcknowledgedBatch { +pub struct AcknowledgedBatches(PhantomData); +impl AcknowledgedBatches { pub(crate) fn send(txn: &mut impl DbTxn, key: &K, batch: u32) { - _public_db::AcknowledgedBatch::send(txn, key.to_bytes().as_ref(), &batch); + _public_db::AcknowledgedBatches::send(txn, key.to_bytes().as_ref(), &batch); } /// Receive the ID of a batch which was acknowledged. pub fn try_recv(txn: &mut impl DbTxn, key: &K) -> Option { - _public_db::AcknowledgedBatch::try_recv(txn, key.to_bytes().as_ref()) + _public_db::AcknowledgedBatches::try_recv(txn, key.to_bytes().as_ref()) } } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index bcd195ec..e5b39cdd 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -21,7 +21,7 @@ pub use lifetime::LifetimeStage; // Database schema definition and associated functions. mod db; use db::ScannerGlobalDb; -pub use db::{BatchToSign, AcknowledgedBatch, CompletedEventualities}; +pub use db::{BatchesToSign, AcknowledgedBatches, CompletedEventualities}; // Task to index the blockchain, ensuring we don't reorganize finalized blocks. mod index; // Scans blocks for received coins. diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index f983d0e7..309b44aa 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -8,7 +8,7 @@ use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; use primitives::task::ContinuallyRan; use crate::{ - db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToReportDb, BatchToSign}, + db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToReportDb, BatchesToSign}, index, scan::next_to_scan_for_outputs_block, ScannerFeed, KeyFor, @@ -160,7 +160,7 @@ impl ContinuallyRan for ReportTask { } for batch in batches { - BatchToSign::send(&mut txn, &external_key_for_session_to_sign_batch, &batch); + BatchesToSign::send(&mut txn, &external_key_for_session_to_sign_batch, &batch); } } diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index 6f9cd86b..76961c37 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -6,7 +6,7 @@ use serai_coins_primitives::{OutInstruction, OutInstructionWithBalance}; use primitives::task::ContinuallyRan; use crate::{ - db::{ScannerGlobalDb, SubstrateToEventualityDb, AcknowledgedBatch}, + db::{ScannerGlobalDb, SubstrateToEventualityDb, AcknowledgedBatches}, report, ScannerFeed, KeyFor, }; @@ -82,7 +82,7 @@ impl ContinuallyRan for SubstrateTask { { let external_key_for_session_to_sign_batch = report::take_external_key_for_session_to_sign_batch::(&mut txn, batch_id).unwrap(); - AcknowledgedBatch::send(&mut txn, &external_key_for_session_to_sign_batch, batch_id); + AcknowledgedBatches::send(&mut txn, &external_key_for_session_to_sign_batch, batch_id); } // Mark we made progress and handle this diff --git a/processor/signers/Cargo.toml b/processor/signers/Cargo.toml index 3a96c043..91192a9e 100644 --- a/processor/signers/Cargo.toml +++ b/processor/signers/Cargo.toml @@ -21,15 +21,18 @@ workspace = true [dependencies] async-trait = { version = "0.1", default-features = false } +rand_core = { version = "0.6", 
default-features = false } zeroize = { version = "1", default-features = false, features = ["std"] } ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] } frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } +frost-schnorrkel = { path = "../../crypto/schnorrkel", default-features = false } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } serai-validator-sets-primitives = { path = "../../substrate/validator-sets/primitives", default-features = false, features = ["std"] } +serai-in-instructions-primitives = { path = "../../substrate/in-instructions/primitives", default-features = false, features = ["std"] } serai-db = { path = "../../common/db" } log = { version = "0.4", default-features = false, features = ["std"] } diff --git a/processor/signers/src/batch/db.rs b/processor/signers/src/batch/db.rs new file mode 100644 index 00000000..fec0a894 --- /dev/null +++ b/processor/signers/src/batch/db.rs @@ -0,0 +1,13 @@ +use serai_validator_sets_primitives::Session; +use serai_in_instructions_primitives::{Batch, SignedBatch}; + +use serai_db::{Get, DbTxn, create_db}; + +create_db! { + BatchSigner { + ActiveSigningProtocols: (session: Session) -> Vec, + Batches: (id: u32) -> Batch, + SignedBatches: (id: u32) -> SignedBatch, + LastAcknowledgedBatch: () -> u32, + } +} diff --git a/processor/signers/src/batch/mod.rs b/processor/signers/src/batch/mod.rs new file mode 100644 index 00000000..410ca378 --- /dev/null +++ b/processor/signers/src/batch/mod.rs @@ -0,0 +1,180 @@ +use std::collections::HashSet; + +use ciphersuite::{group::GroupEncoding, Ristretto}; +use frost::dkg::ThresholdKeys; + +use serai_validator_sets_primitives::Session; +use serai_in_instructions_primitives::{SignedBatch, batch_message}; + +use serai_db::{DbTxn, Db}; + +use messages::sign::VariantSignId; + +use primitives::task::ContinuallyRan; +use scanner::{BatchesToSign, AcknowledgedBatches}; + +use frost_attempt_manager::*; + +use crate::{ + db::{CoordinatorToBatchSignerMessages, BatchSignerToCoordinatorMessages}, + WrappedSchnorrkelMachine, +}; + +mod db; +use db::*; + +// Fetches batches to sign and signs them. 
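+// A rough sketch of the flow implemented by `run_iteration` below, assuming the channel and
+// attempt manager APIs used throughout this file:
+//
+//   while let Some(batch) = BatchesToSign::try_recv(&mut txn, &external_key) { /* register */ }
+//   while let Some(id) = AcknowledgedBatches::try_recv(&mut txn, &external_key) { /* retire */ }
+//   while let Some(msg) = CoordinatorToBatchSignerMessages::try_recv(&mut txn, session) { /* handle */ }
+//
+// Each loop iteration runs in its own transaction, so progress is committed per item.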
+pub(crate) struct BatchSignerTask<D: Db, E: GroupEncoding> {
+  db: D,
+
+  session: Session,
+  external_key: E,
+  keys: Vec<ThresholdKeys<Ristretto>>,
+
+  active_signing_protocols: HashSet<u32>,
+  attempt_manager: AttemptManager<D, WrappedSchnorrkelMachine>,
+}
+
+impl<D: Db, E: GroupEncoding> BatchSignerTask<D, E> {
+  pub(crate) fn new(
+    db: D,
+    session: Session,
+    external_key: E,
+    keys: Vec<ThresholdKeys<Ristretto>>,
+  ) -> Self {
+    let mut active_signing_protocols = HashSet::new();
+    let mut attempt_manager = AttemptManager::new(
+      db.clone(),
+      session,
+      keys.first().expect("creating a batch signer with 0 keys").params().i(),
+    );
+
+    // Re-register all active signing protocols
+    for id in ActiveSigningProtocols::get(&db, session).unwrap_or(vec![]) {
+      active_signing_protocols.insert(id);
+
+      let batch = Batches::get(&db, id).unwrap();
+      assert_eq!(batch.id, id);
+
+      let mut machines = Vec::with_capacity(keys.len());
+      for keys in &keys {
+        machines.push(WrappedSchnorrkelMachine::new(keys.clone(), batch_message(&batch)));
+      }
+      attempt_manager.register(VariantSignId::Batch(id), machines);
+    }
+
+    Self { db, session, external_key, keys, active_signing_protocols, attempt_manager }
+  }
+}
+
+#[async_trait::async_trait]
+impl<D: Db, E: Send + GroupEncoding> ContinuallyRan for BatchSignerTask<D, E> {
+  async fn run_iteration(&mut self) -> Result<bool, String> {
+    let mut iterated = false;
+
+    // Check for new batches to sign
+    loop {
+      let mut txn = self.db.txn();
+      let Some(batch) = BatchesToSign::try_recv(&mut txn, &self.external_key) else {
+        break;
+      };
+      iterated = true;
+
+      // Save this to the database as a Batch to sign
+      self.active_signing_protocols.insert(batch.id);
+      ActiveSigningProtocols::set(
+        &mut txn,
+        self.session,
+        &self.active_signing_protocols.iter().copied().collect(),
+      );
+      Batches::set(&mut txn, batch.id, &batch);
+
+      let mut machines = Vec::with_capacity(self.keys.len());
+      for keys in &self.keys {
+        machines.push(WrappedSchnorrkelMachine::new(keys.clone(), batch_message(&batch)));
+      }
+      for msg in self.attempt_manager.register(VariantSignId::Batch(batch.id), machines) {
+        BatchSignerToCoordinatorMessages::send(&mut txn, self.session, &msg);
+      }
+
+      txn.commit();
+    }
+
+    // Check for acknowledged Batches (meaning we should no longer sign for these Batches)
+    loop {
+      let mut txn = self.db.txn();
+      let Some(id) = AcknowledgedBatches::try_recv(&mut txn, &self.external_key) else {
+        break;
+      };
+
+      {
+        let last_acknowledged = LastAcknowledgedBatch::get(&txn);
+        if Some(id) > last_acknowledged {
+          LastAcknowledgedBatch::set(&mut txn, &id);
+        }
+      }
+
+      /*
+        We may have yet to register this signing protocol.
+
+        While `BatchesToSign` is populated before `AcknowledgedBatches`, we could theoretically
+        have `BatchesToSign` populated with a new batch _while iterating over
+        `AcknowledgedBatches`_, and then have `AcknowledgedBatches` populated. In that edge case,
+        we will see the acknowledgement notification before we see the Batch to sign.
+
+        In such a case, we break (dropping the txn, re-queueing the acknowledgement notification).
+        On the task's next iteration, we'll process the Batch from `BatchesToSign` and be
+        able to make progress.
+ */ + if !self.active_signing_protocols.remove(&id) { + break; + } + iterated = true; + + // Since it was, remove this as an active signing protocol + ActiveSigningProtocols::set( + &mut txn, + self.session, + &self.active_signing_protocols.iter().copied().collect(), + ); + // Clean up the database + Batches::del(&mut txn, id); + SignedBatches::del(&mut txn, id); + + // We retire with a txn so we either successfully flag this Batch as acknowledged, and + // won't re-register it (making this retire safe), or we don't flag it, meaning we will + // re-register it, yet that's safe as we have yet to retire it + self.attempt_manager.retire(&mut txn, VariantSignId::Batch(id)); + + txn.commit(); + } + + // Handle any messages sent to us + loop { + let mut txn = self.db.txn(); + let Some(msg) = CoordinatorToBatchSignerMessages::try_recv(&mut txn, self.session) else { + break; + }; + iterated = true; + + match self.attempt_manager.handle(msg) { + Response::Messages(msgs) => { + for msg in msgs { + BatchSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + } + } + Response::Signature { id, signature } => { + let VariantSignId::Batch(id) = id else { panic!("BatchSignerTask signed a non-Batch") }; + let batch = + Batches::get(&txn, id).expect("signed a Batch we didn't save to the database"); + let signed_batch = SignedBatch { batch, signature: signature.into() }; + SignedBatches::set(&mut txn, signed_batch.batch.id, &signed_batch); + } + } + + txn.commit(); + } + + Ok(iterated) + } +} diff --git a/processor/signers/src/coordinator.rs b/processor/signers/src/coordinator.rs index 43dcc571..c87dc4bb 100644 --- a/processor/signers/src/coordinator.rs +++ b/processor/signers/src/coordinator.rs @@ -93,6 +93,8 @@ impl ContinuallyRan for CoordinatorTask { } } + // TODO: For max(last acknowledged batch, last published batch) onwards, publish every batch + Ok(iterated) } } diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index a53f2208..def6ef16 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -11,6 +11,7 @@ use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; use frost::dkg::{ThresholdCore, ThresholdKeys}; use serai_validator_sets_primitives::Session; +use serai_in_instructions_primitives::SignedBatch; use serai_db::{DbTxn, Db}; @@ -19,25 +20,34 @@ use messages::sign::{VariantSignId, ProcessorMessage, CoordinatorMessage}; use primitives::task::{Task, TaskHandle, ContinuallyRan}; use scheduler::{Transaction, SignableTransaction, TransactionFor}; +mod wrapped_schnorrkel; +pub(crate) use wrapped_schnorrkel::WrappedSchnorrkelMachine; + pub(crate) mod db; mod coordinator; use coordinator::CoordinatorTask; +mod batch; +use batch::BatchSignerTask; + mod transaction; -use transaction::TransactionTask; +use transaction::TransactionSignerTask; /// A connection to the Coordinator which messages can be published with. #[async_trait::async_trait] pub trait Coordinator: 'static + Send + Sync { - /// An error encountered when sending a message. + /// An error encountered when interacting with a coordinator. /// - /// This MUST be an ephemeral error. Retrying sending a message MUST eventually resolve without + /// This MUST be an ephemeral error. Retrying an interaction MUST eventually resolve without /// manual intervention/changing the arguments. type EphemeralError: Debug; /// Send a `messages::sign::ProcessorMessage`. async fn send(&mut self, message: ProcessorMessage) -> Result<(), Self::EphemeralError>; + + /// Publish a `SignedBatch`. 
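+  ///
+  /// Presumably, implementors should treat publication as idempotent: as this call is retried
+  /// on error, and re-run after restarts, the same batch may be published more than once.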
+ async fn publish_batch(&mut self, batch: SignedBatch) -> Result<(), Self::EphemeralError>; } /// An object capable of publishing a transaction. @@ -111,13 +121,18 @@ impl Signers { ::read_G(&mut external_key_bytes).unwrap(); assert!(external_key_bytes.is_empty()); + // Drain the Batches to sign + // This will be fully populated by the scanner before retiry occurs, making this perfect + // in not leaving any pending blobs behind + while scanner::BatchesToSign::try_recv(&mut txn, &external_key).is_some() {} + // Drain the acknowledged batches to no longer sign + while scanner::AcknowledgedBatches::try_recv(&mut txn, &external_key).is_some() {} + // Drain the transactions to sign - // TransactionsToSign will be fully populated by the scheduler before retiry occurs, making - // this perfect in not leaving any pending blobs behind + // This will be fully populated by the scheduler before retiry while scheduler::TransactionsToSign::::try_recv(&mut txn, &external_key).is_some() {} // Drain the completed Eventualities - // This will be fully populated by the scanner before retiry while scanner::CompletedEventualities::try_recv(&mut txn, &external_key).is_some() {} // Drain our DB channels @@ -156,18 +171,37 @@ impl Signers { // TODO: Batch signer, cosigner, slash report signers - let (transaction_task, transaction_handle) = Task::new(); + let (batch_task, batch_handle) = Task::new(); tokio::spawn( - TransactionTask::<_, ST, _>::new(db.clone(), publisher.clone(), session, external_keys) - .continually_run(transaction_task, vec![coordinator_handle.clone()]), + BatchSignerTask::new( + db.clone(), + session, + external_keys[0].group_key(), + substrate_keys.clone(), + ) + .continually_run(batch_task, vec![coordinator_handle.clone()]), ); - tasks.insert(session, Tasks { - cosigner: todo!("TODO"), - batch: todo!("TODO"), - slash_report: todo!("TODO"), - transaction: transaction_handle, - }); + let (transaction_task, transaction_handle) = Task::new(); + tokio::spawn( + TransactionSignerTask::<_, ST, _>::new( + db.clone(), + publisher.clone(), + session, + external_keys, + ) + .continually_run(transaction_task, vec![coordinator_handle.clone()]), + ); + + tasks.insert( + session, + Tasks { + cosigner: todo!("TODO"), + batch: batch_handle, + slash_report: todo!("TODO"), + transaction: transaction_handle, + }, + ); } Self { coordinator_handle, tasks, _ST: PhantomData } @@ -246,19 +280,27 @@ impl Signers { match sign_id.id { VariantSignId::Cosign(_) => { db::CoordinatorToCosignerMessages::send(txn, sign_id.session, message); - if let Some(tasks) = tasks { tasks.cosigner.run_now(); } + if let Some(tasks) = tasks { + tasks.cosigner.run_now(); + } } VariantSignId::Batch(_) => { db::CoordinatorToBatchSignerMessages::send(txn, sign_id.session, message); - if let Some(tasks) = tasks { tasks.batch.run_now(); } + if let Some(tasks) = tasks { + tasks.batch.run_now(); + } } VariantSignId::SlashReport(_) => { db::CoordinatorToSlashReportSignerMessages::send(txn, sign_id.session, message); - if let Some(tasks) = tasks { tasks.slash_report.run_now(); } + if let Some(tasks) = tasks { + tasks.slash_report.run_now(); + } } VariantSignId::Transaction(_) => { db::CoordinatorToTransactionSignerMessages::send(txn, sign_id.session, message); - if let Some(tasks) = tasks { tasks.transaction.run_now(); } + if let Some(tasks) = tasks { + tasks.transaction.run_now(); + } } } } diff --git a/processor/signers/src/transaction/mod.rs b/processor/signers/src/transaction/mod.rs index be08cec2..9311eb32 100644 --- 
a/processor/signers/src/transaction/mod.rs
+++ b/processor/signers/src/transaction/mod.rs
@@ -26,7 +26,7 @@
 mod db;
 use db::*;

 // Fetches transactions to sign and signs them.
-pub(crate) struct TransactionTask<
+pub(crate) struct TransactionSignerTask<
   D: Db,
   ST: SignableTransaction,
   P: TransactionPublisher<TransactionFor<ST>>,
@@ -44,7 +44,7 @@ pub(crate) struct TransactionTask<
 }

 impl<D: Db, ST: SignableTransaction, P: TransactionPublisher<TransactionFor<ST>>>
-  TransactionTask<D, ST, P>
+  TransactionSignerTask<D, ST, P>
 {
   pub(crate) fn new(
     db: D,
@@ -90,7 +90,7 @@ impl<D: Db, ST: SignableTransaction, P: TransactionPublisher<TransactionFor<ST>
 #[async_trait::async_trait]
 impl<D: Db, ST: SignableTransaction, P: TransactionPublisher<TransactionFor<ST>>> ContinuallyRan
-  for TransactionTask<D, ST, P>
+  for TransactionSignerTask<D, ST, P>
 {
   async fn run_iteration(&mut self) -> Result<bool, String> {
     let mut iterated = false;
@@ -193,17 +193,16 @@ impl<D: Db, ST: SignableTransaction, P: TransactionPublisher<TransactionFor<ST>
           &mut txn,
           match id {
             VariantSignId::Transaction(id) => id,
-            _ => panic!("TransactionTask signed a non-transaction"),
+            _ => panic!("TransactionSignerTask signed a non-transaction"),
           },
           &buf,
         );
       }

-      self
-        .publisher
-        .publish(signed_tx)
-        .await
-        .map_err(|e| format!("couldn't publish transaction: {e:?}"))?;
+      match self.publisher.publish(signed_tx).await {
+        Ok(()) => {}
+        Err(e) => log::warn!("couldn't broadcast transaction: {e:?}"),
+      }
     }
   }

diff --git a/processor/signers/src/wrapped_schnorrkel.rs b/processor/signers/src/wrapped_schnorrkel.rs
new file mode 100644
index 00000000..d81eaa70
--- /dev/null
+++ b/processor/signers/src/wrapped_schnorrkel.rs
@@ -0,0 +1,86 @@
+use std::{
+  collections::HashMap,
+  io::{self, Read},
+};
+
+use rand_core::{RngCore, CryptoRng};
+
+use ciphersuite::Ristretto;
+use frost::{
+  dkg::{Participant, ThresholdKeys},
+  FrostError,
+  algorithm::Algorithm,
+  sign::*,
+};
+use frost_schnorrkel::Schnorrkel;
+
+// This wraps a Schnorrkel sign machine into one with a preset message.
+#[derive(Clone)]
+pub(crate) struct WrappedSchnorrkelMachine(ThresholdKeys<Ristretto>, Vec<u8>);
+impl WrappedSchnorrkelMachine {
+  pub(crate) fn new(keys: ThresholdKeys<Ristretto>, msg: Vec<u8>) -> Self {
+    Self(keys, msg)
+  }
+}
+
+pub(crate) struct WrappedSchnorrkelSignMachine(
+  <AlgorithmMachine<Ristretto, Schnorrkel> as PreprocessMachine>::SignMachine,
+  Vec<u8>,
+);
+
+type Signature = <AlgorithmMachine<Ristretto, Schnorrkel> as PreprocessMachine>::Signature;
+impl PreprocessMachine for WrappedSchnorrkelMachine {
+  type Preprocess = <AlgorithmMachine<Ristretto, Schnorrkel> as PreprocessMachine>::Preprocess;
+  type Signature = Signature;
+  type SignMachine = WrappedSchnorrkelSignMachine;
+
+  fn preprocess<R: RngCore + CryptoRng>(
+    self,
+    rng: &mut R,
+  ) -> (Self::SignMachine, Preprocess<Ristretto, <Schnorrkel as Algorithm<Ristretto>>::Addendum>)
+  {
+    let WrappedSchnorrkelMachine(keys, batch) = self;
+    let (machine, preprocess) =
+      AlgorithmMachine::new(Schnorrkel::new(b"substrate"), keys).preprocess(rng);
+    (WrappedSchnorrkelSignMachine(machine, batch), preprocess)
+  }
+}
+
+impl SignMachine<Signature> for WrappedSchnorrkelSignMachine {
+  type Params = <AlgorithmSignMachine<Ristretto, Schnorrkel> as SignMachine<Signature>>::Params;
+  type Keys = <AlgorithmSignMachine<Ristretto, Schnorrkel> as SignMachine<Signature>>::Keys;
+  type Preprocess =
+    <AlgorithmSignMachine<Ristretto, Schnorrkel> as SignMachine<Signature>>::Preprocess;
+  type SignatureShare =
+    <AlgorithmSignMachine<Ristretto, Schnorrkel> as SignMachine<Signature>>::SignatureShare;
+  type SignatureMachine =
+    <AlgorithmSignMachine<Ristretto, Schnorrkel> as SignMachine<Signature>>::SignatureMachine;
+
+  fn cache(self) -> CachedPreprocess {
+    unimplemented!()
+  }
+
+  fn from_cache(
+    _algorithm: Schnorrkel,
+    _keys: ThresholdKeys<Ristretto>,
+    _cache: CachedPreprocess,
+  ) -> (Self, Self::Preprocess) {
+    unimplemented!()
+  }
+
+  fn read_preprocess<R: Read>(&self, reader: &mut R) -> io::Result<Self::Preprocess> {
+    self.0.read_preprocess(reader)
+  }
+
+  fn sign(
+    self,
+    preprocesses: HashMap<
+      Participant,
+      Preprocess<Ristretto, <Schnorrkel as Algorithm<Ristretto>>::Addendum>,
+    >,
+    msg: &[u8],
+  ) -> Result<(Self::SignatureMachine, SignatureShare<Ristretto>), FrostError> {
+    assert!(msg.is_empty());
+    self.0.sign(preprocesses, &self.1)
+  }
+}
diff --git a/processor/src/multisigs/mod.rs b/processor/src/multisigs/mod.rs
deleted file mode 100644
index
1c4adabf..00000000 --- a/processor/src/multisigs/mod.rs +++ /dev/null @@ -1,8 +0,0 @@ -#[allow(clippy::type_complexity)] -#[derive(Clone, Debug)] -pub enum MultisigEvent { - // Batches to publish - Batches(Option<(::G, ::G)>, Vec), - // Eventuality completion found on-chain - Completed(Vec, [u8; 32], N::Eventuality), -} From a3cb514400af976c317d06e89504eded38a3c1d9 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 9 Sep 2024 01:15:56 -0400 Subject: [PATCH 086/368] Have the coordinator task publish Batches --- processor/signers/src/batch/db.rs | 2 +- processor/signers/src/batch/mod.rs | 10 +++++++- processor/signers/src/coordinator/db.rs | 7 ++++++ .../{coordinator.rs => coordinator/mod.rs} | 23 ++++++++++++++++++- processor/signers/src/lib.rs | 2 +- processor/signers/src/transaction/db.rs | 2 +- 6 files changed, 41 insertions(+), 5 deletions(-) create mode 100644 processor/signers/src/coordinator/db.rs rename processor/signers/src/{coordinator.rs => coordinator/mod.rs} (75%) diff --git a/processor/signers/src/batch/db.rs b/processor/signers/src/batch/db.rs index fec0a894..a895e0bb 100644 --- a/processor/signers/src/batch/db.rs +++ b/processor/signers/src/batch/db.rs @@ -4,7 +4,7 @@ use serai_in_instructions_primitives::{Batch, SignedBatch}; use serai_db::{Get, DbTxn, create_db}; create_db! { - BatchSigner { + SignersBatch { ActiveSigningProtocols: (session: Session) -> Vec, Batches: (id: u32) -> Batch, SignedBatches: (id: u32) -> SignedBatch, diff --git a/processor/signers/src/batch/mod.rs b/processor/signers/src/batch/mod.rs index 410ca378..f08fb5e2 100644 --- a/processor/signers/src/batch/mod.rs +++ b/processor/signers/src/batch/mod.rs @@ -6,7 +6,7 @@ use frost::dkg::ThresholdKeys; use serai_validator_sets_primitives::Session; use serai_in_instructions_primitives::{SignedBatch, batch_message}; -use serai_db::{DbTxn, Db}; +use serai_db::{Get, DbTxn, Db}; use messages::sign::VariantSignId; @@ -23,6 +23,14 @@ use crate::{ mod db; use db::*; +pub(crate) fn last_acknowledged_batch(getter: &impl Get) -> Option { + LastAcknowledgedBatch::get(getter) +} + +pub(crate) fn signed_batch(getter: &impl Get, id: u32) -> Option { + SignedBatches::get(getter, id) +} + // Fetches batches to sign and signs them. pub(crate) struct BatchSignerTask { db: D, diff --git a/processor/signers/src/coordinator/db.rs b/processor/signers/src/coordinator/db.rs new file mode 100644 index 00000000..c8235ede --- /dev/null +++ b/processor/signers/src/coordinator/db.rs @@ -0,0 +1,7 @@ +use serai_db::{Get, DbTxn, create_db}; + +create_db! { + SignersCoordinator { + LastPublishedBatch: () -> u32, + } +} diff --git a/processor/signers/src/coordinator.rs b/processor/signers/src/coordinator/mod.rs similarity index 75% rename from processor/signers/src/coordinator.rs rename to processor/signers/src/coordinator/mod.rs index c87dc4bb..3255603d 100644 --- a/processor/signers/src/coordinator.rs +++ b/processor/signers/src/coordinator/mod.rs @@ -10,6 +10,8 @@ use crate::{ Coordinator, }; +mod db; + // Fetches messages to send the coordinator and sends them. 
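// It additionally publishes any signed Batches, from the last published Batch onwards (see the
// loop added to `run_iteration` below).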
pub(crate) struct CoordinatorTask { db: D, @@ -93,7 +95,26 @@ impl ContinuallyRan for CoordinatorTask { } } - // TODO: For max(last acknowledged batch, last published batch) onwards, publish every batch + // Publish the signed Batches + { + let mut txn = self.db.txn(); + // The last acknowledged Batch may exceed the last Batch we published if we didn't sign for + // the prior Batch(es) (and accordingly didn't publish them) + let last_batch = + crate::batch::last_acknowledged_batch(&txn).max(db::LastPublishedBatch::get(&txn)); + let mut next_batch = last_batch.map_or(0, |id| id + 1); + while let Some(batch) = crate::batch::signed_batch(&txn, next_batch) { + iterated = true; + db::LastPublishedBatch::set(&mut txn, &batch.batch.id); + self + .coordinator + .publish_batch(batch) + .await + .map_err(|e| format!("couldn't publish Batch: {e:?}"))?; + next_batch += 1; + } + txn.commit(); + } Ok(iterated) } diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index def6ef16..024badfa 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -169,7 +169,7 @@ impl Signers { .push(ThresholdKeys::from(ThresholdCore::::read(&mut buf).unwrap())); } - // TODO: Batch signer, cosigner, slash report signers + // TODO: Cosigner and slash report signers let (batch_task, batch_handle) = Task::new(); tokio::spawn( diff --git a/processor/signers/src/transaction/db.rs b/processor/signers/src/transaction/db.rs index b77d38c7..a91881e7 100644 --- a/processor/signers/src/transaction/db.rs +++ b/processor/signers/src/transaction/db.rs @@ -3,7 +3,7 @@ use serai_validator_sets_primitives::Session; use serai_db::{Get, DbTxn, create_db}; create_db! { - TransactionSigner { + SignersTransaction { ActiveSigningProtocols: (session: Session) -> Vec<[u8; 32]>, SerializedSignableTransactions: (id: [u8; 32]) -> Vec, SerializedTransactions: (id: [u8; 32]) -> Vec, From 0078858c1cba2e9b8c2a0e1735ba75b5dc8fee3a Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 9 Sep 2024 03:06:37 -0400 Subject: [PATCH 087/368] Tidy messages, publish all Batches to the coordinator Prior, we published SignedBatches, yet Batches are necessary for auditing purposes. --- processor/messages/src/lib.rs | 143 ++++++----------------- processor/scanner/src/db.rs | 31 ++++- processor/scanner/src/report/mod.rs | 3 +- processor/scanner/src/substrate/mod.rs | 15 +-- processor/signers/src/coordinator/mod.rs | 16 ++- processor/signers/src/lib.rs | 5 +- 6 files changed, 89 insertions(+), 124 deletions(-) diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index ef907f97..4a191b68 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -46,12 +46,6 @@ pub mod key_gen { } } - impl CoordinatorMessage { - pub fn required_block(&self) -> Option { - None - } - } - #[derive(Clone, PartialEq, Eq, BorshSerialize, BorshDeserialize)] pub enum ProcessorMessage { // Participated in the specified key generation protocol. @@ -133,10 +127,6 @@ pub mod sign { } impl CoordinatorMessage { - pub fn required_block(&self) -> Option { - None - } - pub fn sign_id(&self) -> &SignId { match self { CoordinatorMessage::Preprocesses { id, .. } | @@ -160,6 +150,7 @@ pub mod sign { pub mod coordinator { use super::*; + // TODO: Why does this not simply take the block hash? 
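+  // The resulting message is a length-prefixed domain-separation tag followed by the block
+  // number and block hash (the exact encoding of the inputs is per the elided body below).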
pub fn cosign_block_msg(block_number: u64, block: [u8; 32]) -> Vec { const DST: &[u8] = b"Cosign"; let mut res = vec![u8::try_from(DST.len()).unwrap()]; @@ -169,36 +160,10 @@ pub mod coordinator { res } - #[derive( - Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode, BorshSerialize, BorshDeserialize, - )] - pub enum SubstrateSignableId { - CosigningSubstrateBlock([u8; 32]), - Batch(u32), - SlashReport, - } - - #[derive(Clone, PartialEq, Eq, Hash, Debug, Encode, Decode, BorshSerialize, BorshDeserialize)] - pub struct SubstrateSignId { - pub session: Session, - pub id: SubstrateSignableId, - pub attempt: u32, - } - #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum CoordinatorMessage { - CosignSubstrateBlock { id: SubstrateSignId, block_number: u64 }, - SignSlashReport { id: SubstrateSignId, report: Vec<([u8; 32], u32)> }, - } - - impl CoordinatorMessage { - // The Coordinator will only send Batch messages once the Batch ID has been recognized - // The ID will only be recognized when the block is acknowledged by a super-majority of the - // network *and the local node* - // This synchrony obtained lets us ignore the synchrony requirement offered here - pub fn required_block(&self) -> Option { - None - } + CosignSubstrateBlock { session: Session, block_number: u64, block: [u8; 32] }, + SignSlashReport { session: Session, report: Vec<([u8; 32], u32)> }, } #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] @@ -209,14 +174,9 @@ pub mod coordinator { #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum ProcessorMessage { - SubstrateBlockAck { block: u64, plans: Vec }, - InvalidParticipant { id: SubstrateSignId, participant: Participant }, - CosignPreprocess { id: SubstrateSignId, preprocesses: Vec<[u8; 64]> }, - // TODO: Remove BatchPreprocess? Why does this take a BlockHash here and not in its - // SubstrateSignId? - BatchPreprocess { id: SubstrateSignId, block: BlockHash, preprocesses: Vec<[u8; 64]> }, - // TODO: Make these signatures [u8; 64]? CosignedBlock { block_number: u64, block: [u8; 32], signature: Vec }, + SignedBatch { batch: SignedBatch }, + SubstrateBlockAck { block: u64, plans: Vec }, SignedSlashReport { session: Session, signature: Vec }, } } @@ -226,33 +186,23 @@ pub mod substrate { #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum CoordinatorMessage { - ConfirmKeyPair { - context: SubstrateContext, - session: Session, - key_pair: KeyPair, - }, - SubstrateBlock { - context: SubstrateContext, + /// Keys set on the Serai network. + SetKeys { serai_time: u64, session: Session, key_pair: KeyPair }, + /// The data from a block which acknowledged a Batch. + BlockWithBatchAcknowledgement { block: u64, + batch_id: u32, + in_instruction_succeededs: Vec, burns: Vec, - batches: Vec, + key_to_activate: Option, }, - } - - impl CoordinatorMessage { - pub fn required_block(&self) -> Option { - let context = match self { - CoordinatorMessage::ConfirmKeyPair { context, .. } | - CoordinatorMessage::SubstrateBlock { context, .. } => context, - }; - Some(context.network_latest_finalized_block) - } + /// The data from a block which didn't acknowledge a Batch. 
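+    ///
+    /// With no Batch acknowledged, there are no InInstruction results to report nor key to
+    /// activate; solely the Burns remain to be handled.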
+ BlockWithoutBatchAcknowledgement { block: u64, burns: Vec }, } #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum ProcessorMessage { Batch { batch: Batch }, - SignedBatch { batch: SignedBatch }, } } @@ -279,24 +229,6 @@ impl_from!(sign, CoordinatorMessage, Sign); impl_from!(coordinator, CoordinatorMessage, Coordinator); impl_from!(substrate, CoordinatorMessage, Substrate); -impl CoordinatorMessage { - pub fn required_block(&self) -> Option { - let required = match self { - CoordinatorMessage::KeyGen(msg) => msg.required_block(), - CoordinatorMessage::Sign(msg) => msg.required_block(), - CoordinatorMessage::Coordinator(msg) => msg.required_block(), - CoordinatorMessage::Substrate(msg) => msg.required_block(), - }; - - // 0 is used when Serai hasn't acknowledged *any* block for this network, which also means - // there's no need to wait for the block in question - if required == Some(BlockHash([0; 32])) { - return None; - } - required - } -} - #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum ProcessorMessage { KeyGen(key_gen::ProcessorMessage), @@ -315,10 +247,10 @@ impl_from!(substrate, ProcessorMessage, Substrate); const COORDINATOR_UID: u8 = 0; const PROCESSOR_UID: u8 = 1; -const TYPE_KEY_GEN_UID: u8 = 2; -const TYPE_SIGN_UID: u8 = 3; -const TYPE_COORDINATOR_UID: u8 = 4; -const TYPE_SUBSTRATE_UID: u8 = 5; +const TYPE_KEY_GEN_UID: u8 = 0; +const TYPE_SIGN_UID: u8 = 1; +const TYPE_COORDINATOR_UID: u8 = 2; +const TYPE_SUBSTRATE_UID: u8 = 3; impl CoordinatorMessage { /// The intent for this message, which should be unique across the validator's entire system, @@ -359,11 +291,12 @@ impl CoordinatorMessage { } CoordinatorMessage::Coordinator(msg) => { let (sub, id) = match msg { - // Unique since this ID contains the hash of the block being cosigned - coordinator::CoordinatorMessage::CosignSubstrateBlock { id, .. } => (0, id.encode()), - // Unique since there's only one of these per session/attempt, and ID is inclusive to - // both - coordinator::CoordinatorMessage::SignSlashReport { id, .. } => (1, id.encode()), + // We only cosign a block once, and Reattempt is a separate message + coordinator::CoordinatorMessage::CosignSubstrateBlock { block_number, .. } => { + (0, block_number.encode()) + } + // We only sign one slash report, and Reattempt is a separate message + coordinator::CoordinatorMessage::SignSlashReport { session, .. } => (1, session.encode()), }; let mut res = vec![COORDINATOR_UID, TYPE_COORDINATOR_UID, sub]; @@ -372,9 +305,13 @@ impl CoordinatorMessage { } CoordinatorMessage::Substrate(msg) => { let (sub, id) = match msg { - // Unique since there's only one key pair for a session - substrate::CoordinatorMessage::ConfirmKeyPair { session, .. } => (0, session.encode()), - substrate::CoordinatorMessage::SubstrateBlock { block, .. } => (1, block.encode()), + substrate::CoordinatorMessage::SetKeys { session, .. } => (0, session.encode()), + substrate::CoordinatorMessage::BlockWithBatchAcknowledgement { block, .. } => { + (1, block.encode()) + } + substrate::CoordinatorMessage::BlockWithoutBatchAcknowledgement { block, .. } => { + (2, block.encode()) + } }; let mut res = vec![COORDINATOR_UID, TYPE_SUBSTRATE_UID, sub]; @@ -430,14 +367,10 @@ impl ProcessorMessage { } ProcessorMessage::Coordinator(msg) => { let (sub, id) = match msg { - coordinator::ProcessorMessage::SubstrateBlockAck { block, .. } => (0, block.encode()), - // Unique since SubstrateSignId - coordinator::ProcessorMessage::InvalidParticipant { id, .. 
} => (1, id.encode()), - coordinator::ProcessorMessage::CosignPreprocess { id, .. } => (2, id.encode()), - coordinator::ProcessorMessage::BatchPreprocess { id, .. } => (3, id.encode()), - // Unique since only one instance of a signature matters - coordinator::ProcessorMessage::CosignedBlock { block, .. } => (4, block.encode()), - coordinator::ProcessorMessage::SignedSlashReport { .. } => (5, vec![]), + coordinator::ProcessorMessage::CosignedBlock { block, .. } => (0, block.encode()), + coordinator::ProcessorMessage::SignedBatch { batch, .. } => (1, batch.batch.id.encode()), + coordinator::ProcessorMessage::SubstrateBlockAck { block, .. } => (2, block.encode()), + coordinator::ProcessorMessage::SignedSlashReport { session, .. } => (3, session.encode()), }; let mut res = vec![PROCESSOR_UID, TYPE_COORDINATOR_UID, sub]; @@ -446,11 +379,7 @@ impl ProcessorMessage { } ProcessorMessage::Substrate(msg) => { let (sub, id) = match msg { - // Unique since network and ID binding - substrate::ProcessorMessage::Batch { batch } => (0, (batch.network, batch.id).encode()), - substrate::ProcessorMessage::SignedBatch { batch, .. } => { - (1, (batch.batch.network, batch.batch.id).encode()) - } + substrate::ProcessorMessage::Batch { batch } => (0, batch.id.encode()), }; let mut res = vec![PROCESSOR_UID, TYPE_SUBSTRATE_UID, sub]; diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 3dd5a2e2..52a36419 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -525,12 +525,19 @@ db_channel! { pub(crate) struct SubstrateToEventualityDb; impl SubstrateToEventualityDb { - pub(crate) fn send_burns( + pub(crate) fn send_burns( txn: &mut impl DbTxn, acknowledged_block: u64, - burns: &Vec, + burns: Vec, ) { - Burns::send(txn, acknowledged_block, burns); + // Drop burns less than the dust + let burns = burns + .into_iter() + .filter(|burn| burn.balance.amount.0 >= S::dust(burn.balance.coin).0) + .collect::>(); + if !burns.is_empty() { + Burns::send(txn, acknowledged_block, &burns); + } } pub(crate) fn try_recv_burns( @@ -548,6 +555,7 @@ mod _public_db { db_channel! { ScannerPublic { + Batches: (empty_key: ()) -> Batch, BatchesToSign: (key: &[u8]) -> Batch, AcknowledgedBatches: (key: &[u8]) -> u32, CompletedEventualities: (key: &[u8]) -> [u8; 32], @@ -555,7 +563,24 @@ mod _public_db { } } +/// The batches to publish. +/// +/// This is used for auditing the Batches published to Serai. +pub struct Batches; +impl Batches { + pub(crate) fn send(txn: &mut impl DbTxn, batch: &Batch) { + _public_db::Batches::send(txn, (), batch); + } + + /// Receive a batch to publish. + pub fn try_recv(txn: &mut impl DbTxn) -> Option { + _public_db::Batches::try_recv(txn, ()) + } +} + /// The batches to sign and publish. +/// +/// This is used for publishing Batches onto Serai. 
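+///
+/// Unlike `Batches`, this channel is keyed by the external key for the session responsible for
+/// signing the batch.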
pub struct BatchesToSign(PhantomData); impl BatchesToSign { pub(crate) fn send(txn: &mut impl DbTxn, key: &K, batch: &Batch) { diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index 309b44aa..5fd2c7eb 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -8,7 +8,7 @@ use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; use primitives::task::ContinuallyRan; use crate::{ - db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToReportDb, BatchesToSign}, + db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToReportDb, Batches, BatchesToSign}, index, scan::next_to_scan_for_outputs_block, ScannerFeed, KeyFor, @@ -160,6 +160,7 @@ impl ContinuallyRan for ReportTask { } for batch in batches { + Batches::send(&mut txn, &batch); BatchesToSign::send(&mut txn, &external_key_for_session_to_sign_batch, &batch); } } diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index 76961c37..fc97daf3 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -144,16 +144,9 @@ impl ContinuallyRan for SubstrateTask { } } - // Drop burns less than the dust - let burns = burns - .into_iter() - .filter(|burn| burn.balance.amount.0 >= S::dust(burn.balance.coin).0) - .collect::>(); - if !burns.is_empty() { - // We send these Burns as stemming from this block we just acknowledged - // This causes them to be acted on after we accumulate the outputs from this block - SubstrateToEventualityDb::send_burns(&mut txn, block_number, &burns); - } + // We send these Burns as stemming from this block we just acknowledged + // This causes them to be acted on after we accumulate the outputs from this block + SubstrateToEventualityDb::send_burns::(&mut txn, block_number, burns); } Action::QueueBurns(burns) => { @@ -163,7 +156,7 @@ impl ContinuallyRan for SubstrateTask { let queue_as_of = ScannerGlobalDb::::highest_acknowledged_block(&txn) .expect("queueing Burns yet never acknowledged a block"); - SubstrateToEventualityDb::send_burns(&mut txn, queue_as_of, &burns); + SubstrateToEventualityDb::send_burns::(&mut txn, queue_as_of, burns); } } diff --git a/processor/signers/src/coordinator/mod.rs b/processor/signers/src/coordinator/mod.rs index 3255603d..77cdef59 100644 --- a/processor/signers/src/coordinator/mod.rs +++ b/processor/signers/src/coordinator/mod.rs @@ -95,6 +95,20 @@ impl ContinuallyRan for CoordinatorTask { } } + // Publish the Batches + { + let mut txn = self.db.txn(); + while let Some(batch) = scanner::Batches::try_recv(&mut txn) { + iterated = true; + self + .coordinator + .publish_batch(batch) + .await + .map_err(|e| format!("couldn't publish Batch: {e:?}"))?; + } + txn.commit(); + } + // Publish the signed Batches { let mut txn = self.db.txn(); @@ -108,7 +122,7 @@ impl ContinuallyRan for CoordinatorTask { db::LastPublishedBatch::set(&mut txn, &batch.batch.id); self .coordinator - .publish_batch(batch) + .publish_signed_batch(batch) .await .map_err(|e| format!("couldn't publish Batch: {e:?}"))?; next_batch += 1; diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index 024badfa..36e2db2e 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -46,8 +46,11 @@ pub trait Coordinator: 'static + Send + Sync { /// Send a `messages::sign::ProcessorMessage`. async fn send(&mut self, message: ProcessorMessage) -> Result<(), Self::EphemeralError>; + /// Publish a `Batch`. 
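+  ///
+  /// This is called for every Batch, not solely the Batches this session signs, as all Batches
+  /// are necessary for auditing purposes.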
+ async fn publish_batch(&mut self, batch: Batch) -> Result<(), Self::EphemeralError>; + /// Publish a `SignedBatch`. - async fn publish_batch(&mut self, batch: SignedBatch) -> Result<(), Self::EphemeralError>; + async fn publish_signed_batch(&mut self, batch: SignedBatch) -> Result<(), Self::EphemeralError>; } /// An object capable of publishing a transaction. From 3cc7b4949299211261f4d17bf2ba7a72fa7102ca Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 9 Sep 2024 03:23:55 -0400 Subject: [PATCH 088/368] Strongly type SlashReport, populate cosign/slash report tasks with work --- processor/messages/src/lib.rs | 4 +- processor/signers/src/db.rs | 5 ++- processor/signers/src/lib.rs | 41 ++++++++++++++++++- .../validator-sets/primitives/src/lib.rs | 20 ++++++++- 4 files changed, 64 insertions(+), 6 deletions(-) diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index 4a191b68..dc7f2939 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -9,7 +9,7 @@ use dkg::Participant; use serai_primitives::BlockHash; use in_instructions_primitives::{Batch, SignedBatch}; use coins_primitives::OutInstructionWithBalance; -use validator_sets_primitives::{Session, KeyPair}; +use validator_sets_primitives::{Session, KeyPair, Slash}; #[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub struct SubstrateContext { @@ -163,7 +163,7 @@ pub mod coordinator { #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum CoordinatorMessage { CosignSubstrateBlock { session: Session, block_number: u64, block: [u8; 32] }, - SignSlashReport { session: Session, report: Vec<([u8; 32], u32)> }, + SignSlashReport { session: Session, report: Vec }, } #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] diff --git a/processor/signers/src/db.rs b/processor/signers/src/db.rs index ae62c947..66894621 100644 --- a/processor/signers/src/db.rs +++ b/processor/signers/src/db.rs @@ -1,4 +1,4 @@ -use serai_validator_sets_primitives::Session; +use serai_validator_sets_primitives::{Session, Slash}; use serai_db::{Get, DbTxn, create_db, db_channel}; @@ -15,6 +15,9 @@ create_db! { db_channel! 
{ SignersGlobal { + Cosign: (session: Session) -> (u64, [u8; 32]), + SlashReport: (session: Session) -> Vec, + CoordinatorToCosignerMessages: (session: Session) -> CoordinatorMessage, CosignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index 36e2db2e..de456296 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -10,7 +10,7 @@ use zeroize::Zeroizing; use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; use frost::dkg::{ThresholdCore, ThresholdKeys}; -use serai_validator_sets_primitives::Session; +use serai_validator_sets_primitives::{Session, Slash}; use serai_in_instructions_primitives::SignedBatch; use serai_db::{DbTxn, Db}; @@ -139,6 +139,8 @@ impl Signers { while scanner::CompletedEventualities::try_recv(&mut txn, &external_key).is_some() {} // Drain our DB channels + while db::Cosign::try_recv(&mut txn, session).is_some() {} + while db::SlashReport::try_recv(&mut txn, session).is_some() {} while db::CoordinatorToCosignerMessages::try_recv(&mut txn, session).is_some() {} while db::CosignerToCoordinatorMessages::try_recv(&mut txn, session).is_some() {} while db::CoordinatorToBatchSignerMessages::try_recv(&mut txn, session).is_some() {} @@ -276,7 +278,7 @@ impl Signers { /// Queue handling a message. /// - /// This is a cheap call and able to be done inline with a higher-level loop. + /// This is a cheap call and able to be done inline from a higher-level loop. pub fn queue_message(&mut self, txn: &mut impl DbTxn, message: &CoordinatorMessage) { let sign_id = message.sign_id(); let tasks = self.tasks.get(&sign_id.session); @@ -307,4 +309,39 @@ impl Signers { } } } + + /// Cosign a block. + /// + /// This is a cheap call and able to be done inline from a higher-level loop. + pub fn cosign_block( + &mut self, + mut txn: impl DbTxn, + session: Session, + block_number: u64, + block: [u8; 32], + ) { + db::Cosign::send(&mut txn, session, &(block_number, block)); + txn.commit(); + + if let Some(tasks) = self.tasks.get(&session) { + tasks.cosign.run_now(); + } + } + + /// Sign a slash report. + /// + /// This is a cheap call and able to be done inline from a higher-level loop. 
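+  ///
+  /// The report is queued to the database, so it survives restarts until the slash report
+  /// signer consumes it.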
+ pub fn sign_slash_report( + &mut self, + mut txn: impl DbTxn, + session: Session, + slash_report: Vec, + ) { + db::SlashReport::send(&mut txn, session, &slash_report); + txn.commit(); + + if let Some(tasks) = self.tasks.get(&session) { + tasks.slash_report.run_now(); + } + } } diff --git a/substrate/validator-sets/primitives/src/lib.rs b/substrate/validator-sets/primitives/src/lib.rs index 90d58c37..341d211f 100644 --- a/substrate/validator-sets/primitives/src/lib.rs +++ b/substrate/validator-sets/primitives/src/lib.rs @@ -103,7 +103,25 @@ pub fn set_keys_message(set: &ValidatorSet, key_pair: &KeyPair) -> Vec { (b"ValidatorSets-set_keys", set, key_pair).encode() } -pub fn report_slashes_message(set: &ValidatorSet, slashes: &[(Public, u32)]) -> Vec { +#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, Decode, TypeInfo, MaxEncodedLen)] +#[cfg_attr(feature = "borsh", derive(BorshSerialize, BorshDeserialize))] +#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] +pub struct Slash { + #[cfg_attr( + feature = "borsh", + borsh( + serialize_with = "serai_primitives::borsh_serialize_public", + deserialize_with = "serai_primitives::borsh_deserialize_public" + ) + )] + key: Public, + points: u32, +} +#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, TypeInfo, MaxEncodedLen)] +#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] +pub struct SlashReport(pub BoundedVec>); + +pub fn report_slashes_message(set: &ValidatorSet, slashes: &SlashReport) -> Vec { (b"ValidatorSets-report_slashes", set, slashes).encode() } From 46c12c0e66e10b7a5a644f8f854f565fc8e9d666 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 9 Sep 2024 04:18:54 -0400 Subject: [PATCH 089/368] SlashReport signing and signature publication --- Cargo.lock | 1 + processor/messages/src/lib.rs | 13 ++- processor/scanner/src/lib.rs | 2 +- processor/signers/Cargo.toml | 1 + processor/signers/src/coordinator/mod.rs | 30 ++++-- processor/signers/src/db.rs | 1 + processor/signers/src/lib.rs | 66 +++++++++---- processor/signers/src/slash_report.rs | 120 +++++++++++++++++++++++ 8 files changed, 200 insertions(+), 34 deletions(-) create mode 100644 processor/signers/src/slash_report.rs diff --git a/Cargo.lock b/Cargo.lock index 9db0bb74..81e3d1de 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8731,6 +8731,7 @@ dependencies = [ "rand_core", "serai-db", "serai-in-instructions-primitives", + "serai-primitives", "serai-processor-frost-attempt-manager", "serai-processor-messages", "serai-processor-primitives", diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index dc7f2939..d9534293 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -7,9 +7,9 @@ use borsh::{BorshSerialize, BorshDeserialize}; use dkg::Participant; use serai_primitives::BlockHash; -use in_instructions_primitives::{Batch, SignedBatch}; -use coins_primitives::OutInstructionWithBalance; use validator_sets_primitives::{Session, KeyPair, Slash}; +use coins_primitives::OutInstructionWithBalance; +use in_instructions_primitives::{Batch, SignedBatch}; #[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub struct SubstrateContext { @@ -84,7 +84,7 @@ pub mod sign { pub enum VariantSignId { Cosign([u8; 32]), Batch(u32), - SlashReport([u8; 32]), + SlashReport(Session), Transaction([u8; 32]), } impl fmt::Debug for VariantSignId { @@ -94,10 +94,9 @@ pub mod sign { f.debug_struct("VariantSignId::Cosign").field("0", &hex::encode(cosign)).finish() } Self::Batch(batch) => 
f.debug_struct("VariantSignId::Batch").field("0", &batch).finish(), - Self::SlashReport(slash_report) => f - .debug_struct("VariantSignId::SlashReport") - .field("0", &hex::encode(slash_report)) - .finish(), + Self::SlashReport(session) => { + f.debug_struct("VariantSignId::SlashReport").field("0", &session).finish() + } Self::Transaction(tx) => { f.debug_struct("VariantSignId::Transaction").field("0", &hex::encode(tx)).finish() } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index e5b39cdd..5919ff7e 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -21,7 +21,7 @@ pub use lifetime::LifetimeStage; // Database schema definition and associated functions. mod db; use db::ScannerGlobalDb; -pub use db::{BatchesToSign, AcknowledgedBatches, CompletedEventualities}; +pub use db::{Batches, BatchesToSign, AcknowledgedBatches, CompletedEventualities}; // Task to index the blockchain, ensuring we don't reorganize finalized blocks. mod index; // Scans blocks for received coins. diff --git a/processor/signers/Cargo.toml b/processor/signers/Cargo.toml index 91192a9e..7b7ef098 100644 --- a/processor/signers/Cargo.toml +++ b/processor/signers/Cargo.toml @@ -31,6 +31,7 @@ frost-schnorrkel = { path = "../../crypto/schnorrkel", default-features = false scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } +serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } serai-validator-sets-primitives = { path = "../../substrate/validator-sets/primitives", default-features = false, features = ["std"] } serai-in-instructions-primitives = { path = "../../substrate/in-instructions/primitives", default-features = false, features = ["std"] } diff --git a/processor/signers/src/coordinator/mod.rs b/processor/signers/src/coordinator/mod.rs index 77cdef59..0b1ee467 100644 --- a/processor/signers/src/coordinator/mod.rs +++ b/processor/signers/src/coordinator/mod.rs @@ -1,14 +1,9 @@ +use scale::Decode; use serai_db::{DbTxn, Db}; use primitives::task::ContinuallyRan; -use crate::{ - db::{ - RegisteredKeys, CosignerToCoordinatorMessages, BatchSignerToCoordinatorMessages, - SlashReportSignerToCoordinatorMessages, TransactionSignerToCoordinatorMessages, - }, - Coordinator, -}; +use crate::{db::*, Coordinator}; mod db; @@ -30,6 +25,7 @@ impl ContinuallyRan for CoordinatorTask { let mut iterated = false; for session in RegisteredKeys::get(&self.db).unwrap_or(vec![]) { + // Publish the messages generated by this key's signers loop { let mut txn = self.db.txn(); let Some(msg) = CosignerToCoordinatorMessages::try_recv(&mut txn, session) else { @@ -93,6 +89,26 @@ impl ContinuallyRan for CoordinatorTask { txn.commit(); } + + // If this session signed its slash report, publish its signature + { + let mut txn = self.db.txn(); + if let Some(slash_report_signature) = SlashReportSignature::try_recv(&mut txn, session) { + iterated = true; + + self + .coordinator + .publish_slash_report_signature( + <_>::decode(&mut slash_report_signature.as_slice()).unwrap(), + ) + .await + .map_err(|e| { + format!("couldn't send slash report signature to the coordinator: {e:?}") + })?; + + txn.commit(); + } + } } // Publish the Batches diff --git a/processor/signers/src/db.rs b/processor/signers/src/db.rs index 66894621..ea022fca 100644 --- a/processor/signers/src/db.rs +++ b/processor/signers/src/db.rs @@ 
-17,6 +17,7 @@ db_channel! { SignersGlobal { Cosign: (session: Session) -> (u64, [u8; 32]), SlashReport: (session: Session) -> Vec, + SlashReportSignature: (session: Session) -> Vec, CoordinatorToCosignerMessages: (session: Session) -> CoordinatorMessage, CosignerToCoordinatorMessages: (session: Session) -> ProcessorMessage, diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index de456296..cc40ce25 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -10,8 +10,9 @@ use zeroize::Zeroizing; use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; use frost::dkg::{ThresholdCore, ThresholdKeys}; +use serai_primitives::Signature; use serai_validator_sets_primitives::{Session, Slash}; -use serai_in_instructions_primitives::SignedBatch; +use serai_in_instructions_primitives::{Batch, SignedBatch}; use serai_db::{DbTxn, Db}; @@ -19,6 +20,7 @@ use messages::sign::{VariantSignId, ProcessorMessage, CoordinatorMessage}; use primitives::task::{Task, TaskHandle, ContinuallyRan}; use scheduler::{Transaction, SignableTransaction, TransactionFor}; +use scanner::{ScannerFeed, Scheduler}; mod wrapped_schnorrkel; pub(crate) use wrapped_schnorrkel::WrappedSchnorrkelMachine; @@ -31,6 +33,9 @@ use coordinator::CoordinatorTask; mod batch; use batch::BatchSignerTask; +mod slash_report; +use slash_report::SlashReportSignerTask; + mod transaction; use transaction::TransactionSignerTask; @@ -51,6 +56,12 @@ pub trait Coordinator: 'static + Send + Sync { /// Publish a `SignedBatch`. async fn publish_signed_batch(&mut self, batch: SignedBatch) -> Result<(), Self::EphemeralError>; + + /// Publish a slash report's signature. + async fn publish_slash_report_signature( + &mut self, + signature: Signature, + ) -> Result<(), Self::EphemeralError>; } /// An object capable of publishing a transaction. @@ -81,12 +92,17 @@ struct Tasks { /// The signers used by a processor. #[allow(non_snake_case)] -pub struct Signers { +pub struct Signers> { coordinator_handle: TaskHandle, tasks: HashMap, - _ST: PhantomData, + _Sch: PhantomData, + _S: PhantomData, } +type CiphersuiteFor = + <>::SignableTransaction as SignableTransaction>::Ciphersuite; +type SignableTransactionFor = >::SignableTransaction; + /* This is completely outside of consensus, so the worst that can happen is: @@ -99,14 +115,14 @@ pub struct Signers { completion comes in *before* we registered a key, the signer will hold the signing protocol in memory until the session is retired entirely. */ -impl Signers { +impl> Signers { /// Initialize the signers. /// /// This will spawn tasks for any historically registered keys. pub fn new( mut db: impl Db, coordinator: impl Coordinator, - publisher: &impl TransactionPublisher>, + publisher: &impl TransactionPublisher>>, ) -> Self { /* On boot, perform any database cleanup which was queued. 
@@ -120,8 +136,7 @@ impl Signers { let mut txn = db.txn(); for (session, external_key_bytes) in db::ToCleanup::get(&txn).unwrap_or(vec![]) { let mut external_key_bytes = external_key_bytes.as_slice(); - let external_key = - ::read_G(&mut external_key_bytes).unwrap(); + let external_key = CiphersuiteFor::::read_G(&mut external_key_bytes).unwrap(); assert!(external_key_bytes.is_empty()); // Drain the Batches to sign @@ -133,7 +148,12 @@ impl Signers { // Drain the transactions to sign // This will be fully populated by the scheduler before retiry - while scheduler::TransactionsToSign::::try_recv(&mut txn, &external_key).is_some() {} + while scheduler::TransactionsToSign::>::try_recv( + &mut txn, + &external_key, + ) + .is_some() + {} // Drain the completed Eventualities while scanner::CompletedEventualities::try_recv(&mut txn, &external_key).is_some() {} @@ -170,11 +190,12 @@ impl Signers { while !buf.is_empty() { substrate_keys .push(ThresholdKeys::from(ThresholdCore::::read(&mut buf).unwrap())); - external_keys - .push(ThresholdKeys::from(ThresholdCore::::read(&mut buf).unwrap())); + external_keys.push(ThresholdKeys::from( + ThresholdCore::>::read(&mut buf).unwrap(), + )); } - // TODO: Cosigner and slash report signers + // TODO: Cosigner let (batch_task, batch_handle) = Task::new(); tokio::spawn( @@ -187,9 +208,15 @@ impl Signers { .continually_run(batch_task, vec![coordinator_handle.clone()]), ); + let (slash_report_task, slash_report_handle) = Task::new(); + tokio::spawn( + SlashReportSignerTask::<_, S>::new(db.clone(), session, substrate_keys.clone()) + .continually_run(slash_report_task, vec![coordinator_handle.clone()]), + ); + let (transaction_task, transaction_handle) = Task::new(); tokio::spawn( - TransactionSignerTask::<_, ST, _>::new( + TransactionSignerTask::<_, SignableTransactionFor, _>::new( db.clone(), publisher.clone(), session, @@ -203,13 +230,13 @@ impl Signers { Tasks { cosigner: todo!("TODO"), batch: batch_handle, - slash_report: todo!("TODO"), + slash_report: slash_report_handle, transaction: transaction_handle, }, ); } - Self { coordinator_handle, tasks, _ST: PhantomData } + Self { coordinator_handle, tasks, _Sch: PhantomData, _S: PhantomData } } /// Register a set of keys to sign with. @@ -220,7 +247,7 @@ impl Signers { txn: &mut impl DbTxn, session: Session, substrate_keys: Vec>, - network_keys: Vec>, + network_keys: Vec>>, ) { // Don't register already retired keys if Some(session.0) <= db::LatestRetiredSession::get(txn).map(|session| session.0) { @@ -246,7 +273,8 @@ impl Signers { /// Retire the signers for a session. /// /// This MUST be called in order, for every session (even if we didn't register keys for this - /// session). + /// session). This MUST only be called after slash report publication, or after that process + /// times out (not once the key is done with regards to the external network). 
pub fn retire_session( &mut self, txn: &mut impl DbTxn, @@ -324,7 +352,7 @@ impl Signers { txn.commit(); if let Some(tasks) = self.tasks.get(&session) { - tasks.cosign.run_now(); + tasks.cosigner.run_now(); } } @@ -335,9 +363,9 @@ impl Signers { &mut self, mut txn: impl DbTxn, session: Session, - slash_report: Vec, + slash_report: &Vec, ) { - db::SlashReport::send(&mut txn, session, &slash_report); + db::SlashReport::send(&mut txn, session, slash_report); txn.commit(); if let Some(tasks) = self.tasks.get(&session) { diff --git a/processor/signers/src/slash_report.rs b/processor/signers/src/slash_report.rs new file mode 100644 index 00000000..bdb6cdba --- /dev/null +++ b/processor/signers/src/slash_report.rs @@ -0,0 +1,120 @@ +use core::marker::PhantomData; + +use ciphersuite::Ristretto; +use frost::dkg::ThresholdKeys; + +use scale::Encode; +use serai_primitives::Signature; +use serai_validator_sets_primitives::{ + Session, ValidatorSet, SlashReport as SlashReportStruct, report_slashes_message, +}; + +use serai_db::{DbTxn, Db}; + +use messages::sign::VariantSignId; + +use primitives::task::ContinuallyRan; +use scanner::ScannerFeed; + +use frost_attempt_manager::*; + +use crate::{ + db::{ + SlashReport, SlashReportSignature, CoordinatorToSlashReportSignerMessages, + SlashReportSignerToCoordinatorMessages, + }, + WrappedSchnorrkelMachine, +}; + +// Fetches slash_reportes to sign and signs them. +#[allow(non_snake_case)] +pub(crate) struct SlashReportSignerTask { + db: D, + _S: PhantomData, + + session: Session, + keys: Vec>, + + has_slash_report: bool, + attempt_manager: AttemptManager, +} + +impl SlashReportSignerTask { + pub(crate) fn new(db: D, session: Session, keys: Vec>) -> Self { + let attempt_manager = AttemptManager::new( + db.clone(), + session, + keys.first().expect("creating a slash_report signer with 0 keys").params().i(), + ); + + Self { db, _S: PhantomData, session, keys, has_slash_report: false, attempt_manager } + } +} + +#[async_trait::async_trait] +impl ContinuallyRan for SlashReportSignerTask { + async fn run_iteration(&mut self) -> Result { + let mut iterated = false; + + // Check for the slash report to sign + if !self.has_slash_report { + let mut txn = self.db.txn(); + let Some(slash_report) = SlashReport::try_recv(&mut txn, self.session) else { + return Ok(false); + }; + // We only commit this upon successfully signing this slash report + drop(txn); + iterated = true; + + self.has_slash_report = true; + + let mut machines = Vec::with_capacity(self.keys.len()); + { + let message = report_slashes_message( + &ValidatorSet { network: S::NETWORK, session: self.session }, + &SlashReportStruct(slash_report.try_into().unwrap()), + ); + for keys in &self.keys { + machines.push(WrappedSchnorrkelMachine::new(keys.clone(), message.clone())); + } + } + let mut txn = self.db.txn(); + for msg in self.attempt_manager.register(VariantSignId::SlashReport(self.session), machines) { + SlashReportSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + } + txn.commit(); + } + + // Handle any messages sent to us + loop { + let mut txn = self.db.txn(); + let Some(msg) = CoordinatorToSlashReportSignerMessages::try_recv(&mut txn, self.session) + else { + break; + }; + iterated = true; + + match self.attempt_manager.handle(msg) { + Response::Messages(msgs) => { + for msg in msgs { + SlashReportSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + } + } + Response::Signature { id, signature } => { + let VariantSignId::SlashReport(session) = id else { + 
+            panic!("SlashReportSignerTask signed a non-SlashReport")
+          };
+          assert_eq!(session, self.session);
+          // Drain the channel
+          SlashReport::try_recv(&mut txn, self.session).unwrap();
+          // Send the signature
+          SlashReportSignature::send(&mut txn, session, &Signature::from(signature).encode());
+        }
+      }
+
+      txn.commit();
+    }
+
+    Ok(iterated)
+  }
+}

From 8aba71b9c4c4dbc7c221880d691acf2935102f2c Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Mon, 9 Sep 2024 16:20:04 -0400
Subject: [PATCH 090/368] Add CosignerTask to signers, completing it

---
 processor/messages/src/lib.rs            |   4 +-
 processor/signers/src/coordinator/mod.rs |  15 +++
 processor/signers/src/cosign/db.rs       |   9 ++
 processor/signers/src/cosign/mod.rs      | 122 ++++++++++++++++++
 processor/signers/src/db.rs              |   5 +-
 processor/signers/src/lib.rs             | 155 ++++++++++++++++-------
 processor/signers/src/slash_report.rs    |   4 +-
 7 files changed, 261 insertions(+), 53 deletions(-)
 create mode 100644 processor/signers/src/cosign/db.rs
 create mode 100644 processor/signers/src/cosign/mod.rs

diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs
index d9534293..998c7cea 100644
--- a/processor/messages/src/lib.rs
+++ b/processor/messages/src/lib.rs
@@ -82,7 +82,7 @@ pub mod sign {
   #[derive(Clone, Copy, PartialEq, Eq, Hash, Encode, Decode, BorshSerialize, BorshDeserialize)]
   pub enum VariantSignId {
-    Cosign([u8; 32]),
+    Cosign(u64),
     Batch(u32),
     SlashReport(Session),
     Transaction([u8; 32]),
@@ -91,7 +91,7 @@ pub mod sign {
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> {
       match self {
         Self::Cosign(cosign) => {
-          f.debug_struct("VariantSignId::Cosign").field("0", &hex::encode(cosign)).finish()
+          f.debug_struct("VariantSignId::Cosign").field("0", &cosign).finish()
         }
         Self::Batch(batch) => f.debug_struct("VariantSignId::Batch").field("0", &batch).finish(),
         Self::SlashReport(session) => {
diff --git a/processor/signers/src/coordinator/mod.rs b/processor/signers/src/coordinator/mod.rs
index 0b1ee467..a3163922 100644
--- a/processor/signers/src/coordinator/mod.rs
+++ b/processor/signers/src/coordinator/mod.rs
@@ -90,6 +90,21 @@ impl<D: Db, C: Coordinator> ContinuallyRan for CoordinatorTask<D, C> {
       txn.commit();
     }
 
+    // Publish the cosigns from this session
+    {
+      let mut txn = self.db.txn();
+      while let Some(((block_number, block_id), signature)) = Cosign::try_recv(&mut txn, session)
+      {
+        iterated = true;
+        self
+          .coordinator
+          .publish_cosign(block_number, block_id, <_>::decode(&mut signature.as_slice()).unwrap())
+          .await
+          .map_err(|e| format!("couldn't publish Cosign: {e:?}"))?;
+      }
+      txn.commit();
+    }
+
     // If this session signed its slash report, publish its signature
     {
       let mut txn = self.db.txn();
diff --git a/processor/signers/src/cosign/db.rs b/processor/signers/src/cosign/db.rs
new file mode 100644
index 00000000..01a42446
--- /dev/null
+++ b/processor/signers/src/cosign/db.rs
@@ -0,0 +1,9 @@
+use serai_validator_sets_primitives::Session;
+
+use serai_db::{Get, DbTxn, create_db};
+
+create_db! {
+  SignersCosigner {
+    LatestCosigned: (session: Session) -> u64,
+  }
+}
diff --git a/processor/signers/src/cosign/mod.rs b/processor/signers/src/cosign/mod.rs
new file mode 100644
index 00000000..41db8050
--- /dev/null
+++ b/processor/signers/src/cosign/mod.rs
@@ -0,0 +1,122 @@
+use ciphersuite::Ristretto;
+use frost::dkg::ThresholdKeys;
+
+use scale::Encode;
+use serai_primitives::Signature;
+use serai_validator_sets_primitives::Session;
+
+use serai_db::{DbTxn, Db};
+
+use messages::{sign::VariantSignId, coordinator::cosign_block_msg};
+
+use primitives::task::ContinuallyRan;
+
+use frost_attempt_manager::*;
+
+use crate::{
+  db::{ToCosign, Cosign, CoordinatorToCosignerMessages, CosignerToCoordinatorMessages},
+  WrappedSchnorrkelMachine,
+};
+
+mod db;
+use db::LatestCosigned;
+
+/// Fetches the latest cosign information and works on it.
+///
+/// Only the latest cosign attempt is kept. We don't work on historical attempts as later cosigns
+/// supersede them.
+#[allow(non_snake_case)]
+pub(crate) struct CosignerTask<D: Db> {
+  db: D,
+
+  session: Session,
+  keys: Vec<ThresholdKeys<Ristretto>>,
+
+  current_cosign: Option<(u64, [u8; 32])>,
+  attempt_manager: AttemptManager<D, WrappedSchnorrkelMachine>,
+}
+
+impl<D: Db> CosignerTask<D> {
+  pub(crate) fn new(db: D, session: Session, keys: Vec<ThresholdKeys<Ristretto>>) -> Self {
+    let attempt_manager = AttemptManager::new(
+      db.clone(),
+      session,
+      keys.first().expect("creating a cosigner with 0 keys").params().i(),
+    );
+
+    Self { db, session, keys, current_cosign: None, attempt_manager }
+  }
+}
+
+#[async_trait::async_trait]
+impl<D: Db> ContinuallyRan for CosignerTask<D> {
+  async fn run_iteration(&mut self) -> Result<bool, String> {
+    let mut iterated = false;
+
+    // Check the cosign to work on
+    {
+      let mut txn = self.db.txn();
+      if let Some(cosign) = ToCosign::get(&txn, self.session) {
+        // If this wasn't already signed for...
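+        // (`Option`'s derived ordering places `None` before any `Some`, so a session which has
+        // never cosigned takes this branch for its first cosign)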
+        if LatestCosigned::get(&txn, self.session) < Some(cosign.0) {
+          // If this isn't the cosign we're currently working on, meaning it's fresh
+          if self.current_cosign != Some(cosign) {
+            // Retire the current cosign
+            if let Some(current_cosign) = self.current_cosign {
+              assert!(current_cosign.0 < cosign.0);
+              self.attempt_manager.retire(&mut txn, VariantSignId::Cosign(current_cosign.0));
+            }
+
+            // Set the cosign being worked on
+            self.current_cosign = Some(cosign);
+
+            let mut machines = Vec::with_capacity(self.keys.len());
+            {
+              let message = cosign_block_msg(cosign.0, cosign.1);
+              for keys in &self.keys {
+                machines.push(WrappedSchnorrkelMachine::new(keys.clone(), message.clone()));
+              }
+            }
+            for msg in self.attempt_manager.register(VariantSignId::Cosign(cosign.0), machines) {
+              CosignerToCoordinatorMessages::send(&mut txn, self.session, &msg);
+            }
+
+            txn.commit();
+          }
+        }
+      }
+    }
+
+    // Handle any messages sent to us
+    loop {
+      let mut txn = self.db.txn();
+      let Some(msg) = CoordinatorToCosignerMessages::try_recv(&mut txn, self.session) else {
+        break;
+      };
+      iterated = true;
+
+      match self.attempt_manager.handle(msg) {
+        Response::Messages(msgs) => {
+          for msg in msgs {
+            CosignerToCoordinatorMessages::send(&mut txn, self.session, &msg);
+          }
+        }
+        Response::Signature { id, signature } => {
+          let VariantSignId::Cosign(block_number) = id else {
+            panic!("CosignerTask signed a non-Cosign")
+          };
+          assert_eq!(Some(block_number), self.current_cosign.map(|cosign| cosign.0));
+
+          let cosign = self.current_cosign.take().unwrap();
+          LatestCosigned::set(&mut txn, self.session, &cosign.0);
+          // Send the cosign
+          Cosign::send(&mut txn, self.session, &(cosign, Signature::from(signature).encode()));
+        }
+      }
+
+      txn.commit();
+    }
+
+    Ok(iterated)
+  }
+}
diff --git a/processor/signers/src/db.rs b/processor/signers/src/db.rs
index ea022fca..b4de78d9 100644
--- a/processor/signers/src/db.rs
+++ b/processor/signers/src/db.rs
@@ -10,12 +10,15 @@ create_db! {
     SerializedKeys: (session: Session) -> Vec<u8>,
     LatestRetiredSession: () -> Session,
     ToCleanup: () -> Vec<(Session, Vec<u8>)>,
+
+    ToCosign: (session: Session) -> (u64, [u8; 32]),
   }
 }
 
 db_channel! {
   SignersGlobal {
-    Cosign: (session: Session) -> (u64, [u8; 32]),
+    Cosign: (session: Session) -> ((u64, [u8; 32]), Vec<u8>),
+
     SlashReport: (session: Session) -> Vec<Slash>,
 
     SlashReportSignature: (session: Session) -> Vec<u8>,
diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs
index cc40ce25..881205f8 100644
--- a/processor/signers/src/lib.rs
+++ b/processor/signers/src/lib.rs
@@ -30,6 +30,9 @@ pub(crate) mod db;
 mod coordinator;
 use coordinator::CoordinatorTask;
 
+mod cosign;
+use cosign::CosignerTask;
+
 mod batch;
 use batch::BatchSignerTask;
 
@@ -51,6 +54,14 @@ pub trait Coordinator: 'static + Send + Sync {
   /// Send a `messages::sign::ProcessorMessage`.
   async fn send(&mut self, message: ProcessorMessage) -> Result<(), Self::EphemeralError>;
 
+  /// Publish a cosign.
+  async fn publish_cosign(
+    &mut self,
+    block_number: u64,
+    block_id: [u8; 32],
+    signature: Signature,
+  ) -> Result<(), Self::EphemeralError>;
+
   /// Publish a `Batch`.
   async fn publish_batch(&mut self, batch: Batch) -> Result<(), Self::EphemeralError>;
 
@@ -92,7 +103,14 @@ struct Tasks {
 
 /// The signers used by a processor.
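 ///
 /// A rough lifecycle sketch (the bindings, and any method names not visible in the hunks below,
 /// are illustrative assumptions rather than part of this patch):
 ///
 /// ```ignore
 /// let mut signers = Signers::<_, S, Sch, _>::new(db, coordinator, publisher);
 /// // Once a session's DKG completes:
 /// signers.register_keys(&mut txn, session, substrate_keys, external_keys);
 /// // After the session's slash report is published (or publication times out):
 /// signers.retire_session(&mut txn, session, &external_key);
 /// ```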
 #[allow(non_snake_case)]
-pub struct Signers<S: ScannerFeed, Sch: Scheduler<S>> {
+pub struct Signers<
+  D: Db,
+  S: ScannerFeed,
+  Sch: Scheduler<S>,
+  P: TransactionPublisher<TransactionFor<SignableTransactionFor<S, Sch>>>,
+> {
+  db: D,
+  publisher: P,
   coordinator_handle: TaskHandle,
   tasks: HashMap<Session, Tasks>,
   _Sch: PhantomData<Sch>,
@@ -115,15 +133,66 @@ type SignableTransactionFor<S, Sch> = <Sch as Scheduler<S>>::SignableTransaction
     completion comes in *before* we registered a key, the signer will hold the signing protocol
     in memory until the session is retired entirely.
 */
-impl<S: ScannerFeed, Sch: Scheduler<S>> Signers<S, Sch> {
+impl<
+    D: Db,
+    S: ScannerFeed,
+    Sch: Scheduler<S>,
+    P: TransactionPublisher<TransactionFor<SignableTransactionFor<S, Sch>>>,
+  > Signers<D, S, Sch, P>
+{
+  fn tasks(
+    db: D,
+    publisher: P,
+    coordinator_handle: TaskHandle,
+    session: Session,
+    substrate_keys: Vec<ThresholdKeys<Ristretto>>,
+    external_keys: Vec<ThresholdKeys<CiphersuiteFor<S>>>,
+  ) -> Tasks {
+    let (cosign_task, cosign_handle) = Task::new();
+    tokio::spawn(
+      CosignerTask::new(db.clone(), session, substrate_keys.clone())
+        .continually_run(cosign_task, vec![coordinator_handle.clone()]),
+    );
+
+    let (batch_task, batch_handle) = Task::new();
+    tokio::spawn(
+      BatchSignerTask::new(
+        db.clone(),
+        session,
+        external_keys[0].group_key(),
+        substrate_keys.clone(),
+      )
+      .continually_run(batch_task, vec![coordinator_handle.clone()]),
+    );
+
+    let (slash_report_task, slash_report_handle) = Task::new();
+    tokio::spawn(
+      SlashReportSignerTask::<_, S>::new(db.clone(), session, substrate_keys)
+        .continually_run(slash_report_task, vec![coordinator_handle.clone()]),
+    );
+
+    let (transaction_task, transaction_handle) = Task::new();
+    tokio::spawn(
+      TransactionSignerTask::<_, SignableTransactionFor<S, Sch>, _>::new(
+        db,
+        publisher,
+        session,
+        external_keys,
+      )
+      .continually_run(transaction_task, vec![coordinator_handle]),
+    );
+
+    Tasks {
+      cosigner: cosign_handle,
+      batch: batch_handle,
+      slash_report: slash_report_handle,
+      transaction: transaction_handle,
+    }
+  }
 
   /// Initialize the signers.
   ///
   /// This will spawn tasks for any historically registered keys.
-  pub fn new(
-    mut db: impl Db,
-    coordinator: impl Coordinator,
-    publisher: &impl TransactionPublisher<TransactionFor<SignableTransactionFor<S, Sch>>>,
-  ) -> Self {
+  pub fn new(mut db: D, coordinator: impl Coordinator, publisher: P) -> Self {
     /*
       On boot, perform any database cleanup which was queued.
@@ -158,6 +227,8 @@ impl<S: ScannerFeed, Sch: Scheduler<S>> Signers<S, Sch> {
       // Drain the completed Eventualities
       while scanner::CompletedEventualities::try_recv(&mut txn, &external_key).is_some() {}
 
+      // Delete the cosign this session should be working on
+      db::ToCosign::del(&mut txn, session);
       // Drain our DB channels
       while db::Cosign::try_recv(&mut txn, session).is_some() {}
       while db::SlashReport::try_recv(&mut txn, session).is_some() {}
@@ -195,48 +266,20 @@ impl<S: ScannerFeed, Sch: Scheduler<S>> Signers<S, Sch> {
         ));
       }
 
-      // TODO: Cosigner
-
-      let (batch_task, batch_handle) = Task::new();
-      tokio::spawn(
-        BatchSignerTask::new(
-          db.clone(),
-          session,
-          external_keys[0].group_key(),
-          substrate_keys.clone(),
-        )
-        .continually_run(batch_task, vec![coordinator_handle.clone()]),
-      );
-
-      let (slash_report_task, slash_report_handle) = Task::new();
-      tokio::spawn(
-        SlashReportSignerTask::<_, S>::new(db.clone(), session, substrate_keys.clone())
-          .continually_run(slash_report_task, vec![coordinator_handle.clone()]),
-      );
-
-      let (transaction_task, transaction_handle) = Task::new();
-      tokio::spawn(
-        TransactionSignerTask::<_, SignableTransactionFor<S, Sch>, _>::new(
-          db.clone(),
-          publisher.clone(),
-          session,
-          external_keys,
-        )
-        .continually_run(transaction_task, vec![coordinator_handle.clone()]),
-      );
-
       tasks.insert(
         session,
-        Tasks {
-          cosigner: todo!("TODO"),
-          batch: batch_handle,
-          slash_report: slash_report_handle,
-          transaction: transaction_handle,
-        },
+        Self::tasks(
+          db.clone(),
+          publisher.clone(),
+          coordinator_handle.clone(),
+          session,
+          substrate_keys,
+          external_keys,
+        ),
       );
     }
 
-    Self { coordinator_handle, tasks, _Sch: PhantomData, _S: PhantomData }
+    Self { db, publisher, coordinator_handle, tasks, _Sch: PhantomData, _S: PhantomData }
   }
 
   /// Register a set of keys to sign with.
@@ -247,7 +290,7 @@ impl<S: ScannerFeed, Sch: Scheduler<S>> Signers<S, Sch> {
     txn: &mut impl DbTxn,
     session: Session,
     substrate_keys: Vec<ThresholdKeys<Ristretto>>,
-    network_keys: Vec<ThresholdKeys<CiphersuiteFor<S>>>,
+    external_keys: Vec<ThresholdKeys<CiphersuiteFor<S>>>,
   ) {
     // Don't register already retired keys
     if Some(session.0) <= db::LatestRetiredSession::get(txn).map(|session| session.0) {
@@ -262,12 +305,25 @@ impl<S: ScannerFeed, Sch: Scheduler<S>> Signers<S, Sch> {
     {
       let mut buf = Zeroizing::new(Vec::with_capacity(2 * substrate_keys.len() * 128));
-      for (substrate_keys, network_keys) in substrate_keys.into_iter().zip(network_keys) {
+      for (substrate_keys, external_keys) in substrate_keys.iter().zip(&external_keys) {
         buf.extend(&*substrate_keys.serialize());
-        buf.extend(&*network_keys.serialize());
+        buf.extend(&*external_keys.serialize());
       }
       db::SerializedKeys::set(txn, session, &buf);
     }
+
+    // Spawn the tasks
+    self.tasks.insert(
+      session,
+      Self::tasks(
+        self.db.clone(),
+        self.publisher.clone(),
+        self.coordinator_handle.clone(),
+        session,
+        substrate_keys,
+        external_keys,
+      ),
+    );
   }
 
   /// Retire the signers for a session.
@@ -302,6 +358,9 @@ impl<S: ScannerFeed, Sch: Scheduler<S>> Signers<S, Sch> {
     let mut to_cleanup = db::ToCleanup::get(txn).unwrap_or(vec![]);
     to_cleanup.push((session, external_key.to_bytes().as_ref().to_vec()));
     db::ToCleanup::set(txn, &to_cleanup);
+
+    // Drop the task handles, which will cause the tasks to close
+    self.tasks.remove(&session);
   }
 
   /// Queue handling a message.
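   ///
   /// As sketched by the `CoordinatorTo*Messages` channels in `db.rs`, this persists the message
   /// to the relevant session's channel, to be drained by that session's signing task on its next
   /// iteration.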
@@ -348,7 +407,7 @@ impl> Signers { block_number: u64, block: [u8; 32], ) { - db::Cosign::send(&mut txn, session, &(block_number, block)); + db::ToCosign::set(&mut txn, session, &(block_number, block)); txn.commit(); if let Some(tasks) = self.tasks.get(&session) { diff --git a/processor/signers/src/slash_report.rs b/processor/signers/src/slash_report.rs index bdb6cdba..19a2523b 100644 --- a/processor/signers/src/slash_report.rs +++ b/processor/signers/src/slash_report.rs @@ -26,7 +26,7 @@ use crate::{ WrappedSchnorrkelMachine, }; -// Fetches slash_reportes to sign and signs them. +// Fetches slash reports to sign and signs them. #[allow(non_snake_case)] pub(crate) struct SlashReportSignerTask { db: D, @@ -44,7 +44,7 @@ impl SlashReportSignerTask { let attempt_manager = AttemptManager::new( db.clone(), session, - keys.first().expect("creating a slash_report signer with 0 keys").params().i(), + keys.first().expect("creating a slash report signer with 0 keys").params().i(), ); Self { db, _S: PhantomData, session, keys, has_slash_report: false, attempt_manager } From 0ccf71df1e92d790b4d68fcea4977ef820f28ad1 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 9 Sep 2024 16:51:30 -0400 Subject: [PATCH 091/368] Remove old signer impls --- processor/src/batch_signer.rs | 421 ----------------- processor/src/cosigner.rs | 296 ------------ processor/src/signer.rs | 654 --------------------------- processor/src/slash_report_signer.rs | 293 ------------ 4 files changed, 1664 deletions(-) delete mode 100644 processor/src/batch_signer.rs delete mode 100644 processor/src/cosigner.rs delete mode 100644 processor/src/signer.rs delete mode 100644 processor/src/slash_report_signer.rs diff --git a/processor/src/batch_signer.rs b/processor/src/batch_signer.rs deleted file mode 100644 index 41f50322..00000000 --- a/processor/src/batch_signer.rs +++ /dev/null @@ -1,421 +0,0 @@ -use core::{marker::PhantomData, fmt}; -use std::collections::HashMap; - -use rand_core::OsRng; - -use frost::{ - curve::Ristretto, - ThresholdKeys, FrostError, - algorithm::Algorithm, - sign::{ - Writable, PreprocessMachine, SignMachine, SignatureMachine, AlgorithmMachine, - AlgorithmSignMachine, AlgorithmSignatureMachine, - }, -}; -use frost_schnorrkel::Schnorrkel; - -use log::{info, debug, warn}; - -use serai_client::{ - primitives::{NetworkId, BlockHash}, - in_instructions::primitives::{Batch, SignedBatch, batch_message}, - validator_sets::primitives::Session, -}; - -use messages::coordinator::*; -use crate::{Get, DbTxn, Db, create_db}; - -create_db!( - BatchSignerDb { - CompletedDb: (id: u32) -> (), - AttemptDb: (id: u32, attempt: u32) -> (), - BatchDb: (block: BlockHash) -> SignedBatch - } -); - -type Preprocess = as PreprocessMachine>::Preprocess; -type SignatureShare = as SignMachine< - >::Signature, ->>::SignatureShare; - -pub struct BatchSigner { - db: PhantomData, - - network: NetworkId, - session: Session, - keys: Vec>, - - signable: HashMap, - attempt: HashMap, - #[allow(clippy::type_complexity)] - preprocessing: HashMap>, Vec)>, - #[allow(clippy::type_complexity)] - signing: HashMap, Vec)>, -} - -impl fmt::Debug for BatchSigner { - fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { - fmt - .debug_struct("BatchSigner") - .field("signable", &self.signable) - .field("attempt", &self.attempt) - .finish_non_exhaustive() - } -} - -impl BatchSigner { - pub fn new( - network: NetworkId, - session: Session, - keys: Vec>, - ) -> BatchSigner { - assert!(!keys.is_empty()); - BatchSigner { - db: PhantomData, - - network, - 
session, - keys, - - signable: HashMap::new(), - attempt: HashMap::new(), - preprocessing: HashMap::new(), - signing: HashMap::new(), - } - } - - fn verify_id(&self, id: &SubstrateSignId) -> Result<(Session, u32, u32), ()> { - let SubstrateSignId { session, id, attempt } = id; - let SubstrateSignableId::Batch(id) = id else { panic!("BatchSigner handed non-Batch") }; - - assert_eq!(session, &self.session); - - // Check the attempt lines up - match self.attempt.get(id) { - // If we don't have an attempt logged, it's because the coordinator is faulty OR because we - // rebooted OR we detected the signed batch on chain - // The latter is the expected flow for batches not actively being participated in - None => { - warn!("not attempting batch {id} #{attempt}"); - Err(())?; - } - Some(our_attempt) => { - if attempt != our_attempt { - warn!("sent signing data for batch {id} #{attempt} yet we have attempt #{our_attempt}"); - Err(())?; - } - } - } - - Ok((*session, *id, *attempt)) - } - - #[must_use] - fn attempt( - &mut self, - txn: &mut D::Transaction<'_>, - id: u32, - attempt: u32, - ) -> Option { - // See above commentary for why this doesn't emit SignedBatch - if CompletedDb::get(txn, id).is_some() { - return None; - } - - // Check if we're already working on this attempt - if let Some(curr_attempt) = self.attempt.get(&id) { - if curr_attempt >= &attempt { - warn!("told to attempt {id} #{attempt} yet we're already working on {curr_attempt}"); - return None; - } - } - - // Start this attempt - let block = if let Some(batch) = self.signable.get(&id) { - batch.block - } else { - warn!("told to attempt signing a batch we aren't currently signing for"); - return None; - }; - - // Delete any existing machines - self.preprocessing.remove(&id); - self.signing.remove(&id); - - // Update the attempt number - self.attempt.insert(id, attempt); - - info!("signing batch {id} #{attempt}"); - - // If we reboot mid-sign, the current design has us abort all signs and wait for latter - // attempts/new signing protocols - // This is distinct from the DKG which will continue DKG sessions, even on reboot - // This is because signing is tolerant of failures of up to 1/3rd of the group - // The DKG requires 100% participation - // While we could apply similar tricks as the DKG (a seeded RNG) to achieve support for - // reboots, it's not worth the complexity when messing up here leaks our secret share - // - // Despite this, on reboot, we'll get told of active signing items, and may be in this - // branch again for something we've already attempted - // - // Only run if this hasn't already been attempted - // TODO: This isn't complete as this txn may not be committed with the expected timing - if AttemptDb::get(txn, id, attempt).is_some() { - warn!( - "already attempted batch {id}, attempt #{attempt}. 
this is an error if we didn't reboot" - ); - return None; - } - AttemptDb::set(txn, id, attempt, &()); - - let mut machines = vec![]; - let mut preprocesses = vec![]; - let mut serialized_preprocesses = vec![]; - for keys in &self.keys { - // b"substrate" is a literal from sp-core - let machine = AlgorithmMachine::new(Schnorrkel::new(b"substrate"), keys.clone()); - - let (machine, preprocess) = machine.preprocess(&mut OsRng); - machines.push(machine); - serialized_preprocesses.push(preprocess.serialize().try_into().unwrap()); - preprocesses.push(preprocess); - } - self.preprocessing.insert(id, (machines, preprocesses)); - - let id = SubstrateSignId { session: self.session, id: SubstrateSignableId::Batch(id), attempt }; - - // Broadcast our preprocesses - Some(ProcessorMessage::BatchPreprocess { id, block, preprocesses: serialized_preprocesses }) - } - - #[must_use] - pub fn sign(&mut self, txn: &mut D::Transaction<'_>, batch: Batch) -> Option { - debug_assert_eq!(self.network, batch.network); - let id = batch.id; - if CompletedDb::get(txn, id).is_some() { - debug!("Sign batch order for ID we've already completed signing"); - // See batch_signed for commentary on why this simply returns - return None; - } - - self.signable.insert(id, batch); - self.attempt(txn, id, 0) - } - - #[must_use] - pub fn handle( - &mut self, - txn: &mut D::Transaction<'_>, - msg: CoordinatorMessage, - ) -> Option { - match msg { - CoordinatorMessage::CosignSubstrateBlock { .. } => { - panic!("BatchSigner passed CosignSubstrateBlock") - } - - CoordinatorMessage::SignSlashReport { .. } => { - panic!("Cosigner passed SignSlashReport") - } - - CoordinatorMessage::SubstratePreprocesses { id, preprocesses } => { - let (session, id, attempt) = self.verify_id(&id).ok()?; - - let substrate_sign_id = - SubstrateSignId { session, id: SubstrateSignableId::Batch(id), attempt }; - - let (machines, our_preprocesses) = match self.preprocessing.remove(&id) { - // Either rebooted or RPC error, or some invariant - None => { - warn!("not preprocessing for {id}. 
this is an error if we didn't reboot"); - return None; - } - Some(preprocess) => preprocess, - }; - - let mut parsed = HashMap::new(); - for l in { - let mut keys = preprocesses.keys().copied().collect::>(); - keys.sort(); - keys - } { - let mut preprocess_ref = preprocesses.get(&l).unwrap().as_slice(); - let Ok(res) = machines[0].read_preprocess(&mut preprocess_ref) else { - return Some( - (ProcessorMessage::InvalidParticipant { id: substrate_sign_id, participant: l }) - .into(), - ); - }; - if !preprocess_ref.is_empty() { - return Some( - (ProcessorMessage::InvalidParticipant { id: substrate_sign_id, participant: l }) - .into(), - ); - } - parsed.insert(l, res); - } - let preprocesses = parsed; - - // Only keep a single machine as we only need one to get the signature - let mut signature_machine = None; - let mut shares = vec![]; - let mut serialized_shares = vec![]; - for (m, machine) in machines.into_iter().enumerate() { - let mut preprocesses = preprocesses.clone(); - for (i, our_preprocess) in our_preprocesses.clone().into_iter().enumerate() { - if i != m { - assert!(preprocesses.insert(self.keys[i].params().i(), our_preprocess).is_none()); - } - } - - let (machine, share) = match machine - .sign(preprocesses, &batch_message(&self.signable[&id])) - { - Ok(res) => res, - Err(e) => match e { - FrostError::InternalError(_) | - FrostError::InvalidParticipant(_, _) | - FrostError::InvalidSigningSet(_) | - FrostError::InvalidParticipantQuantity(_, _) | - FrostError::DuplicatedParticipant(_) | - FrostError::MissingParticipant(_) => unreachable!(), - - FrostError::InvalidPreprocess(l) | FrostError::InvalidShare(l) => { - return Some( - (ProcessorMessage::InvalidParticipant { id: substrate_sign_id, participant: l }) - .into(), - ) - } - }, - }; - if m == 0 { - signature_machine = Some(machine); - } - - let mut share_bytes = [0; 32]; - share_bytes.copy_from_slice(&share.serialize()); - serialized_shares.push(share_bytes); - - shares.push(share); - } - self.signing.insert(id, (signature_machine.unwrap(), shares)); - - // Broadcast our shares - Some( - (ProcessorMessage::SubstrateShare { id: substrate_sign_id, shares: serialized_shares }) - .into(), - ) - } - - CoordinatorMessage::SubstrateShares { id, shares } => { - let (session, id, attempt) = self.verify_id(&id).ok()?; - - let substrate_sign_id = - SubstrateSignId { session, id: SubstrateSignableId::Batch(id), attempt }; - - let (machine, our_shares) = match self.signing.remove(&id) { - // Rebooted, RPC error, or some invariant - None => { - // If preprocessing has this ID, it means we were never sent the preprocess by the - // coordinator - if self.preprocessing.contains_key(&id) { - panic!("never preprocessed yet signing?"); - } - - warn!("not preprocessing for {id}. 
this is an error if we didn't reboot"); - return None; - } - Some(signing) => signing, - }; - - let mut parsed = HashMap::new(); - for l in { - let mut keys = shares.keys().copied().collect::>(); - keys.sort(); - keys - } { - let mut share_ref = shares.get(&l).unwrap().as_slice(); - let Ok(res) = machine.read_share(&mut share_ref) else { - return Some( - (ProcessorMessage::InvalidParticipant { id: substrate_sign_id, participant: l }) - .into(), - ); - }; - if !share_ref.is_empty() { - return Some( - (ProcessorMessage::InvalidParticipant { id: substrate_sign_id, participant: l }) - .into(), - ); - } - parsed.insert(l, res); - } - let mut shares = parsed; - - for (i, our_share) in our_shares.into_iter().enumerate().skip(1) { - assert!(shares.insert(self.keys[i].params().i(), our_share).is_none()); - } - - let sig = match machine.complete(shares) { - Ok(res) => res, - Err(e) => match e { - FrostError::InternalError(_) | - FrostError::InvalidParticipant(_, _) | - FrostError::InvalidSigningSet(_) | - FrostError::InvalidParticipantQuantity(_, _) | - FrostError::DuplicatedParticipant(_) | - FrostError::MissingParticipant(_) => unreachable!(), - - FrostError::InvalidPreprocess(l) | FrostError::InvalidShare(l) => { - return Some( - (ProcessorMessage::InvalidParticipant { id: substrate_sign_id, participant: l }) - .into(), - ) - } - }, - }; - - info!("signed batch {id} with attempt #{attempt}"); - - let batch = - SignedBatch { batch: self.signable.remove(&id).unwrap(), signature: sig.into() }; - - // Save the batch in case it's needed for recovery - BatchDb::set(txn, batch.batch.block, &batch); - CompletedDb::set(txn, id, &()); - - // Stop trying to sign for this batch - assert!(self.attempt.remove(&id).is_some()); - assert!(self.preprocessing.remove(&id).is_none()); - assert!(self.signing.remove(&id).is_none()); - - Some((messages::substrate::ProcessorMessage::SignedBatch { batch }).into()) - } - - CoordinatorMessage::BatchReattempt { id } => { - let SubstrateSignableId::Batch(batch_id) = id.id else { - panic!("BatchReattempt passed non-Batch ID") - }; - self.attempt(txn, batch_id, id.attempt).map(Into::into) - } - } - } - - pub fn batch_signed(&mut self, txn: &mut D::Transaction<'_>, id: u32) { - // Stop trying to sign for this batch - CompletedDb::set(txn, id, &()); - - self.signable.remove(&id); - self.attempt.remove(&id); - self.preprocessing.remove(&id); - self.signing.remove(&id); - - // This doesn't emit SignedBatch because it doesn't have access to the SignedBatch - // This function is expected to only be called once Substrate acknowledges this block, - // which means its batch must have been signed - // While a successive batch's signing would also cause this block to be acknowledged, Substrate - // guarantees a batch's ordered inclusion - - // This also doesn't return any messages since all mutation from the Batch being signed happens - // on the substrate::CoordinatorMessage::SubstrateBlock message (which SignedBatch is meant to - // end up triggering) - } -} diff --git a/processor/src/cosigner.rs b/processor/src/cosigner.rs deleted file mode 100644 index a9fb6ccc..00000000 --- a/processor/src/cosigner.rs +++ /dev/null @@ -1,296 +0,0 @@ -use core::fmt; -use std::collections::HashMap; - -use rand_core::OsRng; - -use frost::{ - curve::Ristretto, - ThresholdKeys, FrostError, - algorithm::Algorithm, - sign::{ - Writable, PreprocessMachine, SignMachine, SignatureMachine, AlgorithmMachine, - AlgorithmSignMachine, AlgorithmSignatureMachine, - }, -}; -use frost_schnorrkel::Schnorrkel; - -use 
log::{info, warn}; - -use serai_client::validator_sets::primitives::Session; - -use messages::coordinator::*; -use crate::{Get, DbTxn, create_db}; - -create_db! { - CosignerDb { - Completed: (id: [u8; 32]) -> (), - Attempt: (id: [u8; 32], attempt: u32) -> (), - } -} - -type Preprocess = as PreprocessMachine>::Preprocess; -type SignatureShare = as SignMachine< - >::Signature, ->>::SignatureShare; - -pub struct Cosigner { - session: Session, - keys: Vec>, - - block_number: u64, - id: [u8; 32], - attempt: u32, - #[allow(clippy::type_complexity)] - preprocessing: Option<(Vec>, Vec)>, - #[allow(clippy::type_complexity)] - signing: Option<(AlgorithmSignatureMachine, Vec)>, -} - -impl fmt::Debug for Cosigner { - fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { - fmt - .debug_struct("Cosigner") - .field("session", &self.session) - .field("block_number", &self.block_number) - .field("id", &self.id) - .field("attempt", &self.attempt) - .field("preprocessing", &self.preprocessing.is_some()) - .field("signing", &self.signing.is_some()) - .finish_non_exhaustive() - } -} - -impl Cosigner { - pub fn new( - txn: &mut impl DbTxn, - session: Session, - keys: Vec>, - block_number: u64, - id: [u8; 32], - attempt: u32, - ) -> Option<(Cosigner, ProcessorMessage)> { - assert!(!keys.is_empty()); - - if Completed::get(txn, id).is_some() { - return None; - } - - if Attempt::get(txn, id, attempt).is_some() { - warn!( - "already attempted cosigning {}, attempt #{}. this is an error if we didn't reboot", - hex::encode(id), - attempt, - ); - return None; - } - Attempt::set(txn, id, attempt, &()); - - info!("cosigning block {} with attempt #{}", hex::encode(id), attempt); - - let mut machines = vec![]; - let mut preprocesses = vec![]; - let mut serialized_preprocesses = vec![]; - for keys in &keys { - // b"substrate" is a literal from sp-core - let machine = AlgorithmMachine::new(Schnorrkel::new(b"substrate"), keys.clone()); - - let (machine, preprocess) = machine.preprocess(&mut OsRng); - machines.push(machine); - serialized_preprocesses.push(preprocess.serialize().try_into().unwrap()); - preprocesses.push(preprocess); - } - let preprocessing = Some((machines, preprocesses)); - - let substrate_sign_id = - SubstrateSignId { session, id: SubstrateSignableId::CosigningSubstrateBlock(id), attempt }; - - Some(( - Cosigner { session, keys, block_number, id, attempt, preprocessing, signing: None }, - ProcessorMessage::CosignPreprocess { - id: substrate_sign_id, - preprocesses: serialized_preprocesses, - }, - )) - } - - #[must_use] - pub fn handle( - &mut self, - txn: &mut impl DbTxn, - msg: CoordinatorMessage, - ) -> Option { - match msg { - CoordinatorMessage::CosignSubstrateBlock { .. } => { - panic!("Cosigner passed CosignSubstrateBlock") - } - - CoordinatorMessage::SignSlashReport { .. } => { - panic!("Cosigner passed SignSlashReport") - } - - CoordinatorMessage::SubstratePreprocesses { id, preprocesses } => { - assert_eq!(id.session, self.session); - let SubstrateSignableId::CosigningSubstrateBlock(block) = id.id else { - panic!("cosigner passed Batch") - }; - if block != self.id { - panic!("given preprocesses for a distinct block than cosigner is signing") - } - if id.attempt != self.attempt { - panic!("given preprocesses for a distinct attempt than cosigner is signing") - } - - let (machines, our_preprocesses) = match self.preprocessing.take() { - // Either rebooted or RPC error, or some invariant - None => { - warn!( - "not preprocessing for {}. 
this is an error if we didn't reboot", - hex::encode(block), - ); - return None; - } - Some(preprocess) => preprocess, - }; - - let mut parsed = HashMap::new(); - for l in { - let mut keys = preprocesses.keys().copied().collect::>(); - keys.sort(); - keys - } { - let mut preprocess_ref = preprocesses.get(&l).unwrap().as_slice(); - let Ok(res) = machines[0].read_preprocess(&mut preprocess_ref) else { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - }; - if !preprocess_ref.is_empty() { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - } - parsed.insert(l, res); - } - let preprocesses = parsed; - - // Only keep a single machine as we only need one to get the signature - let mut signature_machine = None; - let mut shares = vec![]; - let mut serialized_shares = vec![]; - for (m, machine) in machines.into_iter().enumerate() { - let mut preprocesses = preprocesses.clone(); - for (i, our_preprocess) in our_preprocesses.clone().into_iter().enumerate() { - if i != m { - assert!(preprocesses.insert(self.keys[i].params().i(), our_preprocess).is_none()); - } - } - - let (machine, share) = - match machine.sign(preprocesses, &cosign_block_msg(self.block_number, self.id)) { - Ok(res) => res, - Err(e) => match e { - FrostError::InternalError(_) | - FrostError::InvalidParticipant(_, _) | - FrostError::InvalidSigningSet(_) | - FrostError::InvalidParticipantQuantity(_, _) | - FrostError::DuplicatedParticipant(_) | - FrostError::MissingParticipant(_) => unreachable!(), - - FrostError::InvalidPreprocess(l) | FrostError::InvalidShare(l) => { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }) - } - }, - }; - if m == 0 { - signature_machine = Some(machine); - } - - let mut share_bytes = [0; 32]; - share_bytes.copy_from_slice(&share.serialize()); - serialized_shares.push(share_bytes); - - shares.push(share); - } - self.signing = Some((signature_machine.unwrap(), shares)); - - // Broadcast our shares - Some(ProcessorMessage::SubstrateShare { id, shares: serialized_shares }) - } - - CoordinatorMessage::SubstrateShares { id, shares } => { - assert_eq!(id.session, self.session); - let SubstrateSignableId::CosigningSubstrateBlock(block) = id.id else { - panic!("cosigner passed Batch") - }; - if block != self.id { - panic!("given preprocesses for a distinct block than cosigner is signing") - } - if id.attempt != self.attempt { - panic!("given preprocesses for a distinct attempt than cosigner is signing") - } - - let (machine, our_shares) = match self.signing.take() { - // Rebooted, RPC error, or some invariant - None => { - // If preprocessing has this ID, it means we were never sent the preprocess by the - // coordinator - if self.preprocessing.is_some() { - panic!("never preprocessed yet signing?"); - } - - warn!( - "not preprocessing for {}. 
this is an error if we didn't reboot", - hex::encode(block) - ); - return None; - } - Some(signing) => signing, - }; - - let mut parsed = HashMap::new(); - for l in { - let mut keys = shares.keys().copied().collect::>(); - keys.sort(); - keys - } { - let mut share_ref = shares.get(&l).unwrap().as_slice(); - let Ok(res) = machine.read_share(&mut share_ref) else { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - }; - if !share_ref.is_empty() { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - } - parsed.insert(l, res); - } - let mut shares = parsed; - - for (i, our_share) in our_shares.into_iter().enumerate().skip(1) { - assert!(shares.insert(self.keys[i].params().i(), our_share).is_none()); - } - - let sig = match machine.complete(shares) { - Ok(res) => res, - Err(e) => match e { - FrostError::InternalError(_) | - FrostError::InvalidParticipant(_, _) | - FrostError::InvalidSigningSet(_) | - FrostError::InvalidParticipantQuantity(_, _) | - FrostError::DuplicatedParticipant(_) | - FrostError::MissingParticipant(_) => unreachable!(), - - FrostError::InvalidPreprocess(l) | FrostError::InvalidShare(l) => { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }) - } - }, - }; - - info!("cosigned {} with attempt #{}", hex::encode(block), id.attempt); - - Completed::set(txn, block, &()); - - Some(ProcessorMessage::CosignedBlock { - block_number: self.block_number, - block, - signature: sig.to_bytes().to_vec(), - }) - } - CoordinatorMessage::BatchReattempt { .. } => panic!("BatchReattempt passed to Cosigner"), - } - } -} diff --git a/processor/src/signer.rs b/processor/src/signer.rs deleted file mode 100644 index cab0bceb..00000000 --- a/processor/src/signer.rs +++ /dev/null @@ -1,654 +0,0 @@ -use core::{marker::PhantomData, fmt}; -use std::collections::HashMap; - -use rand_core::OsRng; -use frost::{ - ThresholdKeys, FrostError, - sign::{Writable, PreprocessMachine, SignMachine, SignatureMachine}, -}; - -use log::{info, debug, warn, error}; - -use serai_client::validator_sets::primitives::Session; -use messages::sign::*; - -pub use serai_db::*; - -use crate::{ - Get, DbTxn, Db, - networks::{Eventuality, Network}, -}; - -create_db!( - SignerDb { - CompletionsDb: (id: [u8; 32]) -> Vec, - EventualityDb: (id: [u8; 32]) -> Vec, - AttemptDb: (id: &SignId) -> (), - CompletionDb: (claim: &[u8]) -> Vec, - ActiveSignsDb: () -> Vec<[u8; 32]>, - CompletedOnChainDb: (id: &[u8; 32]) -> (), - } -); - -impl ActiveSignsDb { - fn add_active_sign(txn: &mut impl DbTxn, id: &[u8; 32]) { - if CompletedOnChainDb::get(txn, id).is_some() { - return; - } - let mut active = ActiveSignsDb::get(txn).unwrap_or_default(); - active.push(*id); - ActiveSignsDb::set(txn, &active); - } -} - -impl CompletedOnChainDb { - fn complete_on_chain(txn: &mut impl DbTxn, id: &[u8; 32]) { - CompletedOnChainDb::set(txn, id, &()); - ActiveSignsDb::set( - txn, - &ActiveSignsDb::get(txn) - .unwrap_or_default() - .into_iter() - .filter(|active| active != id) - .collect::>(), - ); - } -} -impl CompletionsDb { - fn completions( - getter: &impl Get, - id: [u8; 32], - ) -> Vec<::Claim> { - let Some(completions) = Self::get(getter, id) else { return vec![] }; - - // If this was set yet is empty, it's because it's the encoding of a claim with a length of 0 - if completions.is_empty() { - let default = ::Claim::default(); - assert_eq!(default.as_ref().len(), 0); - return vec![default]; - } - - let mut completions_ref = completions.as_slice(); - let mut res = vec![]; - while 
!completions_ref.is_empty() { - let mut id = ::Claim::default(); - let id_len = id.as_ref().len(); - id.as_mut().copy_from_slice(&completions_ref[.. id_len]); - completions_ref = &completions_ref[id_len ..]; - res.push(id); - } - res - } - - fn complete( - txn: &mut impl DbTxn, - id: [u8; 32], - completion: &::Completion, - ) { - // Completions can be completed by multiple signatures - // Save every solution in order to be robust - CompletionDb::save_completion::(txn, completion); - - let claim = N::Eventuality::claim(completion); - let claim: &[u8] = claim.as_ref(); - - // If claim has a 0-byte encoding, the set key, even if empty, is the claim - if claim.is_empty() { - Self::set(txn, id, &vec![]); - return; - } - - let mut existing = Self::get(txn, id).unwrap_or_default(); - assert_eq!(existing.len() % claim.len(), 0); - - // Don't add this completion if it's already present - let mut i = 0; - while i < existing.len() { - if &existing[i .. (i + claim.len())] == claim { - return; - } - i += claim.len(); - } - - existing.extend(claim); - Self::set(txn, id, &existing); - } -} - -impl EventualityDb { - fn save_eventuality( - txn: &mut impl DbTxn, - id: [u8; 32], - eventuality: &N::Eventuality, - ) { - txn.put(Self::key(id), eventuality.serialize()); - } - - fn eventuality(getter: &impl Get, id: [u8; 32]) -> Option { - Some(N::Eventuality::read(&mut getter.get(Self::key(id))?.as_slice()).unwrap()) - } -} - -impl CompletionDb { - fn save_completion( - txn: &mut impl DbTxn, - completion: &::Completion, - ) { - let claim = N::Eventuality::claim(completion); - let claim: &[u8] = claim.as_ref(); - Self::set(txn, claim, &N::Eventuality::serialize_completion(completion)); - } - - fn completion( - getter: &impl Get, - claim: &::Claim, - ) -> Option<::Completion> { - Self::get(getter, claim.as_ref()) - .map(|completion| N::Eventuality::read_completion::<&[u8]>(&mut completion.as_ref()).unwrap()) - } -} - -type PreprocessFor = <::TransactionMachine as PreprocessMachine>::Preprocess; -type SignMachineFor = <::TransactionMachine as PreprocessMachine>::SignMachine; -type SignatureShareFor = as SignMachine< - <::Eventuality as Eventuality>::Completion, ->>::SignatureShare; -type SignatureMachineFor = as SignMachine< - <::Eventuality as Eventuality>::Completion, ->>::SignatureMachine; - -pub struct Signer { - db: PhantomData, - - network: N, - - session: Session, - keys: Vec>, - - signable: HashMap<[u8; 32], N::SignableTransaction>, - attempt: HashMap<[u8; 32], u32>, - #[allow(clippy::type_complexity)] - preprocessing: HashMap<[u8; 32], (Vec>, Vec>)>, - #[allow(clippy::type_complexity)] - signing: HashMap<[u8; 32], (SignatureMachineFor, Vec>)>, -} - -impl fmt::Debug for Signer { - fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { - fmt - .debug_struct("Signer") - .field("network", &self.network) - .field("signable", &self.signable) - .field("attempt", &self.attempt) - .finish_non_exhaustive() - } -} - -impl Signer { - /// Rebroadcast already signed TXs which haven't had their completions mined into a sufficiently - /// confirmed block. - pub async fn rebroadcast_task(db: D, network: N) { - log::info!("rebroadcasting transactions for plans whose completions yet to be confirmed..."); - loop { - for active in ActiveSignsDb::get(&db).unwrap_or_default() { - for claim in CompletionsDb::completions::(&db, active) { - log::info!("rebroadcasting completion with claim {}", hex::encode(claim.as_ref())); - // TODO: Don't drop the error entirely. 
Check for invariants - let _ = - network.publish_completion(&CompletionDb::completion::(&db, &claim).unwrap()).await; - } - } - // Only run every five minutes so we aren't frequently loading tens to hundreds of KB from - // the DB - tokio::time::sleep(core::time::Duration::from_secs(5 * 60)).await; - } - } - pub fn new(network: N, session: Session, keys: Vec>) -> Signer { - assert!(!keys.is_empty()); - Signer { - db: PhantomData, - - network, - - session, - keys, - - signable: HashMap::new(), - attempt: HashMap::new(), - preprocessing: HashMap::new(), - signing: HashMap::new(), - } - } - - fn verify_id(&self, id: &SignId) -> Result<(), ()> { - // Check the attempt lines up - match self.attempt.get(&id.id) { - // If we don't have an attempt logged, it's because the coordinator is faulty OR because we - // rebooted OR we detected the signed transaction on chain, so there's notable network - // latency/a malicious validator - None => { - warn!( - "not attempting {} #{}. this is an error if we didn't reboot", - hex::encode(id.id), - id.attempt - ); - Err(())?; - } - Some(attempt) => { - if attempt != &id.attempt { - warn!( - "sent signing data for {} #{} yet we have attempt #{}", - hex::encode(id.id), - id.attempt, - attempt - ); - Err(())?; - } - } - } - - Ok(()) - } - - #[must_use] - fn already_completed(txn: &mut D::Transaction<'_>, id: [u8; 32]) -> bool { - if !CompletionsDb::completions::(txn, id).is_empty() { - debug!( - "SignTransaction/Reattempt order for {}, which we've already completed signing", - hex::encode(id) - ); - - true - } else { - false - } - } - - #[must_use] - fn complete( - &mut self, - id: [u8; 32], - claim: &::Claim, - ) -> ProcessorMessage { - // Assert we're actively signing for this TX - assert!(self.signable.remove(&id).is_some(), "completed a TX we weren't signing for"); - assert!(self.attempt.remove(&id).is_some(), "attempt had an ID signable didn't have"); - // If we weren't selected to participate, we'll have a preprocess - self.preprocessing.remove(&id); - // If we were selected, the signature will only go through if we contributed a share - // Despite this, we then need to get everyone's shares, and we may get a completion before - // we get everyone's shares - // This would be if the coordinator fails and we find the eventuality completion on-chain - self.signing.remove(&id); - - // Emit the event for it - ProcessorMessage::Completed { session: self.session, id, tx: claim.as_ref().to_vec() } - } - - #[must_use] - pub fn completed( - &mut self, - txn: &mut D::Transaction<'_>, - id: [u8; 32], - completion: &::Completion, - ) -> Option { - let first_completion = !Self::already_completed(txn, id); - - // Save this completion to the DB - CompletedOnChainDb::complete_on_chain(txn, &id); - CompletionsDb::complete::(txn, id, completion); - - if first_completion { - Some(self.complete(id, &N::Eventuality::claim(completion))) - } else { - None - } - } - - /// Returns Some if the first completion. 
- // Doesn't use any loops/retries since we'll eventually get this from the Scanner anyways - #[must_use] - async fn claimed_eventuality_completion( - &mut self, - txn: &mut D::Transaction<'_>, - id: [u8; 32], - claim: &::Claim, - ) -> Option { - if let Some(eventuality) = EventualityDb::eventuality::(txn, id) { - match self.network.confirm_completion(&eventuality, claim).await { - Ok(Some(completion)) => { - info!( - "signer eventuality for {} resolved in {}", - hex::encode(id), - hex::encode(claim.as_ref()) - ); - - let first_completion = !Self::already_completed(txn, id); - - // Save this completion to the DB - CompletionsDb::complete::(txn, id, &completion); - - if first_completion { - return Some(self.complete(id, claim)); - } - } - Ok(None) => { - warn!( - "a validator claimed {} completed {} when it did not", - hex::encode(claim.as_ref()), - hex::encode(id), - ); - } - Err(_) => { - // Transaction hasn't hit our mempool/was dropped for a different signature - // The latter can happen given certain latency conditions/a single malicious signer - // In the case of a single malicious signer, they can drag multiple honest validators down - // with them, so we unfortunately can't slash on this case - warn!( - "a validator claimed {} completed {} yet we couldn't check that claim", - hex::encode(claim.as_ref()), - hex::encode(id), - ); - } - } - } else { - warn!( - "informed of completion {} for eventuality {}, when we didn't have that eventuality", - hex::encode(claim.as_ref()), - hex::encode(id), - ); - } - None - } - - #[must_use] - async fn attempt( - &mut self, - txn: &mut D::Transaction<'_>, - id: [u8; 32], - attempt: u32, - ) -> Option { - if Self::already_completed(txn, id) { - return None; - } - - // Check if we're already working on this attempt - if let Some(curr_attempt) = self.attempt.get(&id) { - if curr_attempt >= &attempt { - warn!( - "told to attempt {} #{} yet we're already working on {}", - hex::encode(id), - attempt, - curr_attempt - ); - return None; - } - } - - // Start this attempt - // Clone the TX so we don't have an immutable borrow preventing the below mutable actions - // (also because we do need an owned tx anyways) - let Some(tx) = self.signable.get(&id).cloned() else { - warn!("told to attempt a TX we aren't currently signing for"); - return None; - }; - - // Delete any existing machines - self.preprocessing.remove(&id); - self.signing.remove(&id); - - // Update the attempt number - self.attempt.insert(id, attempt); - - let id = SignId { session: self.session, id, attempt }; - - info!("signing for {} #{}", hex::encode(id.id), id.attempt); - - // If we reboot mid-sign, the current design has us abort all signs and wait for latter - // attempts/new signing protocols - // This is distinct from the DKG which will continue DKG sessions, even on reboot - // This is because signing is tolerant of failures of up to 1/3rd of the group - // The DKG requires 100% participation - // While we could apply similar tricks as the DKG (a seeded RNG) to achieve support for - // reboots, it's not worth the complexity when messing up here leaks our secret share - // - // Despite this, on reboot, we'll get told of active signing items, and may be in this - // branch again for something we've already attempted - // - // Only run if this hasn't already been attempted - // TODO: This isn't complete as this txn may not be committed with the expected timing - if AttemptDb::get(txn, &id).is_some() { - warn!( - "already attempted {} #{}. 
this is an error if we didn't reboot", - hex::encode(id.id), - id.attempt - ); - return None; - } - AttemptDb::set(txn, &id, &()); - - // Attempt to create the TX - let mut machines = vec![]; - let mut preprocesses = vec![]; - let mut serialized_preprocesses = vec![]; - for keys in &self.keys { - let machine = match self.network.attempt_sign(keys.clone(), tx.clone()).await { - Err(e) => { - error!("failed to attempt {}, #{}: {:?}", hex::encode(id.id), id.attempt, e); - return None; - } - Ok(machine) => machine, - }; - - let (machine, preprocess) = machine.preprocess(&mut OsRng); - machines.push(machine); - serialized_preprocesses.push(preprocess.serialize()); - preprocesses.push(preprocess); - } - - self.preprocessing.insert(id.id, (machines, preprocesses)); - - // Broadcast our preprocess - Some(ProcessorMessage::Preprocess { id, preprocesses: serialized_preprocesses }) - } - - #[must_use] - pub async fn sign_transaction( - &mut self, - txn: &mut D::Transaction<'_>, - id: [u8; 32], - tx: N::SignableTransaction, - eventuality: &N::Eventuality, - ) -> Option { - // The caller is expected to re-issue sign orders on reboot - // This is solely used by the rebroadcast task - ActiveSignsDb::add_active_sign(txn, &id); - - if Self::already_completed(txn, id) { - return None; - } - - EventualityDb::save_eventuality::(txn, id, eventuality); - - self.signable.insert(id, tx); - self.attempt(txn, id, 0).await - } - - #[must_use] - pub async fn handle( - &mut self, - txn: &mut D::Transaction<'_>, - msg: CoordinatorMessage, - ) -> Option { - match msg { - CoordinatorMessage::Preprocesses { id, preprocesses } => { - if self.verify_id(&id).is_err() { - return None; - } - - let (machines, our_preprocesses) = match self.preprocessing.remove(&id.id) { - // Either rebooted or RPC error, or some invariant - None => { - warn!( - "not preprocessing for {}. 
this is an error if we didn't reboot", - hex::encode(id.id) - ); - return None; - } - Some(machine) => machine, - }; - - let mut parsed = HashMap::new(); - for l in { - let mut keys = preprocesses.keys().copied().collect::>(); - keys.sort(); - keys - } { - let mut preprocess_ref = preprocesses.get(&l).unwrap().as_slice(); - let Ok(res) = machines[0].read_preprocess(&mut preprocess_ref) else { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - }; - if !preprocess_ref.is_empty() { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - } - parsed.insert(l, res); - } - let preprocesses = parsed; - - // Only keep a single machine as we only need one to get the signature - let mut signature_machine = None; - let mut shares = vec![]; - let mut serialized_shares = vec![]; - for (m, machine) in machines.into_iter().enumerate() { - let mut preprocesses = preprocesses.clone(); - for (i, our_preprocess) in our_preprocesses.clone().into_iter().enumerate() { - if i != m { - assert!(preprocesses.insert(self.keys[i].params().i(), our_preprocess).is_none()); - } - } - - // Use an empty message, as expected of TransactionMachines - let (machine, share) = match machine.sign(preprocesses, &[]) { - Ok(res) => res, - Err(e) => match e { - FrostError::InternalError(_) | - FrostError::InvalidParticipant(_, _) | - FrostError::InvalidSigningSet(_) | - FrostError::InvalidParticipantQuantity(_, _) | - FrostError::DuplicatedParticipant(_) | - FrostError::MissingParticipant(_) => unreachable!(), - - FrostError::InvalidPreprocess(l) | FrostError::InvalidShare(l) => { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }) - } - }, - }; - if m == 0 { - signature_machine = Some(machine); - } - serialized_shares.push(share.serialize()); - shares.push(share); - } - self.signing.insert(id.id, (signature_machine.unwrap(), shares)); - - // Broadcast our shares - Some(ProcessorMessage::Share { id, shares: serialized_shares }) - } - - CoordinatorMessage::Shares { id, shares } => { - if self.verify_id(&id).is_err() { - return None; - } - - let (machine, our_shares) = match self.signing.remove(&id.id) { - // Rebooted, RPC error, or some invariant - None => { - // If preprocessing has this ID, it means we were never sent the preprocess by the - // coordinator - if self.preprocessing.contains_key(&id.id) { - panic!("never preprocessed yet signing?"); - } - - warn!( - "not preprocessing for {}. 
this is an error if we didn't reboot", - hex::encode(id.id) - ); - return None; - } - Some(machine) => machine, - }; - - let mut parsed = HashMap::new(); - for l in { - let mut keys = shares.keys().copied().collect::>(); - keys.sort(); - keys - } { - let mut share_ref = shares.get(&l).unwrap().as_slice(); - let Ok(res) = machine.read_share(&mut share_ref) else { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - }; - if !share_ref.is_empty() { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - } - parsed.insert(l, res); - } - let mut shares = parsed; - - for (i, our_share) in our_shares.into_iter().enumerate().skip(1) { - assert!(shares.insert(self.keys[i].params().i(), our_share).is_none()); - } - - let completion = match machine.complete(shares) { - Ok(res) => res, - Err(e) => match e { - FrostError::InternalError(_) | - FrostError::InvalidParticipant(_, _) | - FrostError::InvalidSigningSet(_) | - FrostError::InvalidParticipantQuantity(_, _) | - FrostError::DuplicatedParticipant(_) | - FrostError::MissingParticipant(_) => unreachable!(), - - FrostError::InvalidPreprocess(l) | FrostError::InvalidShare(l) => { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }) - } - }, - }; - - // Save the completion in case it's needed for recovery - CompletionsDb::complete::(txn, id.id, &completion); - - // Publish it - if let Err(e) = self.network.publish_completion(&completion).await { - error!("couldn't publish completion for plan {}: {:?}", hex::encode(id.id), e); - } else { - info!("published completion for plan {}", hex::encode(id.id)); - } - - // Stop trying to sign for this TX - Some(self.complete(id.id, &N::Eventuality::claim(&completion))) - } - - CoordinatorMessage::Reattempt { id } => self.attempt(txn, id.id, id.attempt).await, - - CoordinatorMessage::Completed { session: _, id, tx: mut claim_vec } => { - let mut claim = ::Claim::default(); - if claim.as_ref().len() != claim_vec.len() { - let true_len = claim_vec.len(); - claim_vec.truncate(2 * claim.as_ref().len()); - warn!( - "a validator claimed {}... (actual length {}) completed {} yet {}", - hex::encode(&claim_vec), - true_len, - hex::encode(id), - "that's not a valid Claim", - ); - return None; - } - claim.as_mut().copy_from_slice(&claim_vec); - - self.claimed_eventuality_completion(txn, id, &claim).await - } - } - } -} diff --git a/processor/src/slash_report_signer.rs b/processor/src/slash_report_signer.rs deleted file mode 100644 index b7b2d55c..00000000 --- a/processor/src/slash_report_signer.rs +++ /dev/null @@ -1,293 +0,0 @@ -use core::fmt; -use std::collections::HashMap; - -use rand_core::OsRng; - -use frost::{ - curve::Ristretto, - ThresholdKeys, FrostError, - algorithm::Algorithm, - sign::{ - Writable, PreprocessMachine, SignMachine, SignatureMachine, AlgorithmMachine, - AlgorithmSignMachine, AlgorithmSignatureMachine, - }, -}; -use frost_schnorrkel::Schnorrkel; - -use log::{info, warn}; - -use serai_client::{ - Public, - primitives::NetworkId, - validator_sets::primitives::{Session, ValidatorSet, report_slashes_message}, -}; - -use messages::coordinator::*; -use crate::{Get, DbTxn, create_db}; - -create_db! 
{ - SlashReportSignerDb { - Completed: (session: Session) -> (), - Attempt: (session: Session, attempt: u32) -> (), - } -} - -type Preprocess = as PreprocessMachine>::Preprocess; -type SignatureShare = as SignMachine< - >::Signature, ->>::SignatureShare; - -pub struct SlashReportSigner { - network: NetworkId, - session: Session, - keys: Vec>, - report: Vec<([u8; 32], u32)>, - - attempt: u32, - #[allow(clippy::type_complexity)] - preprocessing: Option<(Vec>, Vec)>, - #[allow(clippy::type_complexity)] - signing: Option<(AlgorithmSignatureMachine, Vec)>, -} - -impl fmt::Debug for SlashReportSigner { - fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { - fmt - .debug_struct("SlashReportSigner") - .field("session", &self.session) - .field("report", &self.report) - .field("attempt", &self.attempt) - .field("preprocessing", &self.preprocessing.is_some()) - .field("signing", &self.signing.is_some()) - .finish_non_exhaustive() - } -} - -impl SlashReportSigner { - pub fn new( - txn: &mut impl DbTxn, - network: NetworkId, - session: Session, - keys: Vec>, - report: Vec<([u8; 32], u32)>, - attempt: u32, - ) -> Option<(SlashReportSigner, ProcessorMessage)> { - assert!(!keys.is_empty()); - - if Completed::get(txn, session).is_some() { - return None; - } - - if Attempt::get(txn, session, attempt).is_some() { - warn!( - "already attempted signing slash report for session {:?}, attempt #{}. {}", - session, attempt, "this is an error if we didn't reboot", - ); - return None; - } - Attempt::set(txn, session, attempt, &()); - - info!("signing slash report for session {:?} with attempt #{}", session, attempt); - - let mut machines = vec![]; - let mut preprocesses = vec![]; - let mut serialized_preprocesses = vec![]; - for keys in &keys { - // b"substrate" is a literal from sp-core - let machine = AlgorithmMachine::new(Schnorrkel::new(b"substrate"), keys.clone()); - - let (machine, preprocess) = machine.preprocess(&mut OsRng); - machines.push(machine); - serialized_preprocesses.push(preprocess.serialize().try_into().unwrap()); - preprocesses.push(preprocess); - } - let preprocessing = Some((machines, preprocesses)); - - let substrate_sign_id = - SubstrateSignId { session, id: SubstrateSignableId::SlashReport, attempt }; - - Some(( - SlashReportSigner { network, session, keys, report, attempt, preprocessing, signing: None }, - ProcessorMessage::SlashReportPreprocess { - id: substrate_sign_id, - preprocesses: serialized_preprocesses, - }, - )) - } - - #[must_use] - pub fn handle( - &mut self, - txn: &mut impl DbTxn, - msg: CoordinatorMessage, - ) -> Option { - match msg { - CoordinatorMessage::CosignSubstrateBlock { .. } => { - panic!("SlashReportSigner passed CosignSubstrateBlock") - } - - CoordinatorMessage::SignSlashReport { .. } => { - panic!("SlashReportSigner passed SignSlashReport") - } - - CoordinatorMessage::SubstratePreprocesses { id, preprocesses } => { - assert_eq!(id.session, self.session); - assert_eq!(id.id, SubstrateSignableId::SlashReport); - if id.attempt != self.attempt { - panic!("given preprocesses for a distinct attempt than SlashReportSigner is signing") - } - - let (machines, our_preprocesses) = match self.preprocessing.take() { - // Either rebooted or RPC error, or some invariant - None => { - warn!("not preprocessing. 
this is an error if we didn't reboot"); - return None; - } - Some(preprocess) => preprocess, - }; - - let mut parsed = HashMap::new(); - for l in { - let mut keys = preprocesses.keys().copied().collect::>(); - keys.sort(); - keys - } { - let mut preprocess_ref = preprocesses.get(&l).unwrap().as_slice(); - let Ok(res) = machines[0].read_preprocess(&mut preprocess_ref) else { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - }; - if !preprocess_ref.is_empty() { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - } - parsed.insert(l, res); - } - let preprocesses = parsed; - - // Only keep a single machine as we only need one to get the signature - let mut signature_machine = None; - let mut shares = vec![]; - let mut serialized_shares = vec![]; - for (m, machine) in machines.into_iter().enumerate() { - let mut preprocesses = preprocesses.clone(); - for (i, our_preprocess) in our_preprocesses.clone().into_iter().enumerate() { - if i != m { - assert!(preprocesses.insert(self.keys[i].params().i(), our_preprocess).is_none()); - } - } - - let (machine, share) = match machine.sign( - preprocesses, - &report_slashes_message( - &ValidatorSet { network: self.network, session: self.session }, - &self - .report - .clone() - .into_iter() - .map(|(validator, points)| (Public(validator), points)) - .collect::>(), - ), - ) { - Ok(res) => res, - Err(e) => match e { - FrostError::InternalError(_) | - FrostError::InvalidParticipant(_, _) | - FrostError::InvalidSigningSet(_) | - FrostError::InvalidParticipantQuantity(_, _) | - FrostError::DuplicatedParticipant(_) | - FrostError::MissingParticipant(_) => unreachable!(), - - FrostError::InvalidPreprocess(l) | FrostError::InvalidShare(l) => { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }) - } - }, - }; - if m == 0 { - signature_machine = Some(machine); - } - - let mut share_bytes = [0; 32]; - share_bytes.copy_from_slice(&share.serialize()); - serialized_shares.push(share_bytes); - - shares.push(share); - } - self.signing = Some((signature_machine.unwrap(), shares)); - - // Broadcast our shares - Some(ProcessorMessage::SubstrateShare { id, shares: serialized_shares }) - } - - CoordinatorMessage::SubstrateShares { id, shares } => { - assert_eq!(id.session, self.session); - assert_eq!(id.id, SubstrateSignableId::SlashReport); - if id.attempt != self.attempt { - panic!("given preprocesses for a distinct attempt than SlashReportSigner is signing") - } - - let (machine, our_shares) = match self.signing.take() { - // Rebooted, RPC error, or some invariant - None => { - // If preprocessing has this ID, it means we were never sent the preprocess by the - // coordinator - if self.preprocessing.is_some() { - panic!("never preprocessed yet signing?"); - } - - warn!("not preprocessing. 
this is an error if we didn't reboot"); - return None; - } - Some(signing) => signing, - }; - - let mut parsed = HashMap::new(); - for l in { - let mut keys = shares.keys().copied().collect::>(); - keys.sort(); - keys - } { - let mut share_ref = shares.get(&l).unwrap().as_slice(); - let Ok(res) = machine.read_share(&mut share_ref) else { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - }; - if !share_ref.is_empty() { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }); - } - parsed.insert(l, res); - } - let mut shares = parsed; - - for (i, our_share) in our_shares.into_iter().enumerate().skip(1) { - assert!(shares.insert(self.keys[i].params().i(), our_share).is_none()); - } - - let sig = match machine.complete(shares) { - Ok(res) => res, - Err(e) => match e { - FrostError::InternalError(_) | - FrostError::InvalidParticipant(_, _) | - FrostError::InvalidSigningSet(_) | - FrostError::InvalidParticipantQuantity(_, _) | - FrostError::DuplicatedParticipant(_) | - FrostError::MissingParticipant(_) => unreachable!(), - - FrostError::InvalidPreprocess(l) | FrostError::InvalidShare(l) => { - return Some(ProcessorMessage::InvalidParticipant { id, participant: l }) - } - }, - }; - - info!("signed slash report for session {:?} with attempt #{}", self.session, id.attempt); - - Completed::set(txn, self.session, &()); - - Some(ProcessorMessage::SignedSlashReport { - session: self.session, - signature: sig.to_bytes().to_vec(), - }) - } - CoordinatorMessage::BatchReattempt { .. } => { - panic!("BatchReattempt passed to SlashReportSigner") - } - } - } -} From 247cc8f0cc608d3f3f2f3dc180d6fa476bd0ecd5 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 10 Sep 2024 03:48:06 -0400 Subject: [PATCH 092/368] Bitcoin Output/Transaction definitions --- Cargo.lock | 12 +- processor/bitcoin/Cargo.toml | 14 +- processor/bitcoin/src/block.rs | 0 processor/bitcoin/src/lib.rs | 198 ++-------------------- processor/bitcoin/src/output.rs | 133 +++++++++++++++ processor/bitcoin/src/transaction.rs | 170 +++++++++++++++++++ processor/primitives/src/lib.rs | 19 ++- processor/primitives/src/output.rs | 17 +- processor/primitives/src/payment.rs | 4 +- processor/scanner/src/db.rs | 15 +- processor/scanner/src/lib.rs | 5 +- processor/scanner/src/report/db.rs | 7 +- processor/scheduler/primitives/src/lib.rs | 9 +- processor/signers/src/transaction/mod.rs | 2 + substrate/client/Cargo.toml | 1 + substrate/client/src/networks/bitcoin.rs | 191 ++++++++++++--------- substrate/primitives/src/lib.rs | 6 +- 17 files changed, 504 insertions(+), 299 deletions(-) create mode 100644 processor/bitcoin/src/block.rs create mode 100644 processor/bitcoin/src/output.rs create mode 100644 processor/bitcoin/src/transaction.rs diff --git a/Cargo.lock b/Cargo.lock index 81e3d1de..ee8c8a99 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8127,18 +8127,21 @@ dependencies = [ "async-trait", "bitcoin-serai", "borsh", - "const-hex", + "ciphersuite", "env_logger", - "hex", - "k256", + "flexible-transcript", "log", + "modular-frost", "parity-scale-codec", + "rand_core", "secp256k1", + "serai-client", "serai-db", "serai-env", "serai-message-queue", "serai-processor-messages", - "serde_json", + "serai-processor-primitives", + "serai-processor-scheduler-primitives", "tokio", "zalloc", ] @@ -8151,6 +8154,7 @@ dependencies = [ "bitcoin", "bitvec", "blake2", + "borsh", "ciphersuite", "dockertest", "frame-system", diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml index a5749542..656c7c40 100644 
--- a/processor/bitcoin/Cargo.toml +++ b/processor/bitcoin/Cargo.toml @@ -18,14 +18,15 @@ workspace = true [dependencies] async-trait = { version = "0.1", default-features = false } +rand_core = { version = "0.6", default-features = false } -const-hex = { version = "1", default-features = false } -hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } -serde_json = { version = "1", default-features = false, features = ["std"] } -k256 = { version = "^0.13.1", default-features = false, features = ["std"] } +transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } +ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "secp256k1"] } +frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } + secp256k1 = { version = "0.29", default-features = false, features = ["std", "global-context", "rand-std"] } bitcoin-serai = { path = "../../networks/bitcoin", default-features = false, features = ["std"] } @@ -37,8 +38,13 @@ zalloc = { path = "../../common/zalloc" } serai-db = { path = "../../common/db" } serai-env = { path = "../../common/env" } +serai-client = { path = "../../substrate/client", default-features = false, features = ["bitcoin"] } + messages = { package = "serai-processor-messages", path = "../messages" } +primitives = { package = "serai-processor-primitives", path = "../primitives" } +scheduler = { package = "serai-processor-scheduler-primitives", path = "../scheduler/primitives" } + message-queue = { package = "serai-message-queue", path = "../../message-queue" } [features] diff --git a/processor/bitcoin/src/block.rs b/processor/bitcoin/src/block.rs new file mode 100644 index 00000000..e69de29b diff --git a/processor/bitcoin/src/lib.rs b/processor/bitcoin/src/lib.rs index bccdc286..112d8fd3 100644 --- a/processor/bitcoin/src/lib.rs +++ b/processor/bitcoin/src/lib.rs @@ -2,7 +2,15 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use std::{sync::OnceLock, time::Duration, io, collections::HashMap}; +#[global_allocator] +static ALLOCATOR: zalloc::ZeroizingAlloc = + zalloc::ZeroizingAlloc(std::alloc::System); + +mod output; +mod transaction; + +/* +use std::{sync::LazyLock, time::Duration, io, collections::HashMap}; use async_trait::async_trait; @@ -49,127 +57,9 @@ use serai_client::{ primitives::{MAX_DATA_LEN, Coin, NetworkId, Amount, Balance}, networks::bitcoin::Address, }; +*/ -use crate::{ - networks::{ - NetworkError, Block as BlockTrait, OutputType, Output as OutputTrait, - Transaction as TransactionTrait, SignableTransaction as SignableTransactionTrait, - Eventuality as EventualityTrait, EventualitiesTracker, Network, UtxoNetwork, - }, - Payment, - multisigs::scheduler::utxo::Scheduler, -}; - -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct OutputId(pub [u8; 36]); -impl Default for OutputId { - fn default() -> Self { - Self([0; 36]) - } -} -impl AsRef<[u8]> for OutputId { - fn as_ref(&self) -> &[u8] { - self.0.as_ref() - } -} -impl AsMut<[u8]> for OutputId { - fn as_mut(&mut self) -> &mut [u8] { - self.0.as_mut() - } -} - -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct Output { - kind: OutputType, - presumed_origin: Option
<Address>
, - output: ReceivedOutput, - data: Vec, -} - -impl OutputTrait for Output { - type Id = OutputId; - - fn kind(&self) -> OutputType { - self.kind - } - - fn id(&self) -> Self::Id { - let mut res = OutputId::default(); - self.output.outpoint().consensus_encode(&mut res.as_mut()).unwrap(); - debug_assert_eq!( - { - let mut outpoint = vec![]; - self.output.outpoint().consensus_encode(&mut outpoint).unwrap(); - outpoint - }, - res.as_ref().to_vec() - ); - res - } - - fn tx_id(&self) -> [u8; 32] { - let mut hash = *self.output.outpoint().txid.as_raw_hash().as_byte_array(); - hash.reverse(); - hash - } - - fn key(&self) -> ProjectivePoint { - let script = &self.output.output().script_pubkey; - assert!(script.is_p2tr()); - let Instruction::PushBytes(key) = script.instructions_minimal().last().unwrap().unwrap() else { - panic!("last item in v1 Taproot script wasn't bytes") - }; - let key = XOnlyPublicKey::from_slice(key.as_ref()) - .expect("last item in v1 Taproot script wasn't x-only public key"); - Secp256k1::read_G(&mut key.public_key(Parity::Even).serialize().as_slice()).unwrap() - - (ProjectivePoint::GENERATOR * self.output.offset()) - } - - fn presumed_origin(&self) -> Option
<Address>
{ - self.presumed_origin.clone() - } - - fn balance(&self) -> Balance { - Balance { coin: Coin::Bitcoin, amount: Amount(self.output.value()) } - } - - fn data(&self) -> &[u8] { - &self.data - } - - fn write(&self, writer: &mut W) -> io::Result<()> { - self.kind.write(writer)?; - let presumed_origin: Option> = self.presumed_origin.clone().map(Into::into); - writer.write_all(&presumed_origin.encode())?; - self.output.write(writer)?; - writer.write_all(&u16::try_from(self.data.len()).unwrap().to_le_bytes())?; - writer.write_all(&self.data) - } - - fn read(mut reader: &mut R) -> io::Result { - Ok(Output { - kind: OutputType::read(reader)?, - presumed_origin: { - let mut io_reader = scale::IoReader(reader); - let res = Option::>::decode(&mut io_reader) - .unwrap() - .map(|address| Address::try_from(address).unwrap()); - reader = io_reader.0; - res - }, - output: ReceivedOutput::read(reader)?, - data: { - let mut data_len = [0; 2]; - reader.read_exact(&mut data_len)?; - - let mut data = vec![0; usize::from(u16::from_le_bytes(data_len))]; - reader.read_exact(&mut data)?; - data - }, - }) - } -} - +/* #[derive(Clone, Copy, PartialEq, Eq, Debug)] pub struct Fee(u64); @@ -201,71 +91,6 @@ impl TransactionTrait for Transaction { } } -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct Eventuality([u8; 32]); - -#[derive(Clone, PartialEq, Eq, Default, Debug)] -pub struct EmptyClaim; -impl AsRef<[u8]> for EmptyClaim { - fn as_ref(&self) -> &[u8] { - &[] - } -} -impl AsMut<[u8]> for EmptyClaim { - fn as_mut(&mut self) -> &mut [u8] { - &mut [] - } -} - -impl EventualityTrait for Eventuality { - type Claim = EmptyClaim; - type Completion = Transaction; - - fn lookup(&self) -> Vec { - self.0.to_vec() - } - - fn read(reader: &mut R) -> io::Result { - let mut id = [0; 32]; - reader - .read_exact(&mut id) - .map_err(|_| io::Error::other("couldn't decode ID in eventuality"))?; - Ok(Eventuality(id)) - } - fn serialize(&self) -> Vec { - self.0.to_vec() - } - - fn claim(_: &Transaction) -> EmptyClaim { - EmptyClaim - } - fn serialize_completion(completion: &Transaction) -> Vec { - let mut buf = vec![]; - completion.consensus_encode(&mut buf).unwrap(); - buf - } - fn read_completion(reader: &mut R) -> io::Result { - Transaction::consensus_decode(&mut io::BufReader::with_capacity(0, reader)) - .map_err(|e| io::Error::other(format!("{e}"))) - } -} - -#[derive(Clone, Debug)] -pub struct SignableTransaction { - actual: BSignableTransaction, -} -impl PartialEq for SignableTransaction { - fn eq(&self, other: &SignableTransaction) -> bool { - self.actual == other.actual - } -} -impl Eq for SignableTransaction {} -impl SignableTransactionTrait for SignableTransaction { - fn fee(&self) -> u64 { - self.actual.fee() - } -} - #[async_trait] impl BlockTrait for Block { type Id = [u8; 32]; @@ -944,3 +769,4 @@ impl Network for Bitcoin { impl UtxoNetwork for Bitcoin { const MAX_INPUTS: usize = MAX_INPUTS; } +*/ diff --git a/processor/bitcoin/src/output.rs b/processor/bitcoin/src/output.rs new file mode 100644 index 00000000..cc624319 --- /dev/null +++ b/processor/bitcoin/src/output.rs @@ -0,0 +1,133 @@ +use std::io; + +use ciphersuite::{Ciphersuite, Secp256k1}; + +use bitcoin_serai::{ + bitcoin::{ + hashes::Hash as HashTrait, + key::{Parity, XOnlyPublicKey}, + consensus::Encodable, + script::Instruction, + }, + wallet::ReceivedOutput as WalletOutput, +}; + +use scale::{Encode, Decode, IoReader}; +use borsh::{BorshSerialize, BorshDeserialize}; + +use serai_client::{ + primitives::{Coin, Amount, Balance, ExternalAddress}, + 
networks::bitcoin::Address, +}; + +use primitives::{OutputType, ReceivedOutput}; + +#[derive(Clone, PartialEq, Eq, Hash, Debug, Encode, Decode, BorshSerialize, BorshDeserialize)] +pub(crate) struct OutputId([u8; 36]); +impl Default for OutputId { + fn default() -> Self { + Self([0; 36]) + } +} +impl AsRef<[u8]> for OutputId { + fn as_ref(&self) -> &[u8] { + self.0.as_ref() + } +} +impl AsMut<[u8]> for OutputId { + fn as_mut(&mut self) -> &mut [u8] { + self.0.as_mut() + } +} + +#[derive(Clone, PartialEq, Eq, Debug)] +pub(crate) struct Output { + kind: OutputType, + presumed_origin: Option
<Address>
, + output: WalletOutput, + data: Vec, +} + +impl ReceivedOutput<::G, Address> for Output { + type Id = OutputId; + type TransactionId = [u8; 32]; + + fn kind(&self) -> OutputType { + self.kind + } + + fn id(&self) -> Self::Id { + let mut id = OutputId::default(); + self.output.outpoint().consensus_encode(&mut id.as_mut()).unwrap(); + id + } + + fn transaction_id(&self) -> Self::TransactionId { + self.output.outpoint().txid.to_raw_hash().to_byte_array() + } + + fn key(&self) -> ::G { + // We read the key from the script pubkey so we don't have to independently store it + let script = &self.output.output().script_pubkey; + + // These assumptions are safe since it's an output we successfully scanned + assert!(script.is_p2tr()); + let Instruction::PushBytes(key) = script.instructions_minimal().last().unwrap().unwrap() else { + panic!("last item in v1 Taproot script wasn't bytes") + }; + let key = XOnlyPublicKey::from_slice(key.as_ref()) + .expect("last item in v1 Taproot script wasn't a valid x-only public key"); + + // Convert to a full key + let key = key.public_key(Parity::Even); + // Convert to a k256 key (from libsecp256k1) + let output_key = Secp256k1::read_G(&mut key.serialize().as_slice()).unwrap(); + // The output's key minus the output's offset is the root key + output_key - (::G::GENERATOR * self.output.offset()) + } + + fn presumed_origin(&self) -> Option
<Address>
{ + self.presumed_origin.clone() + } + + fn balance(&self) -> Balance { + Balance { coin: Coin::Bitcoin, amount: Amount(self.output.value()) } + } + + fn data(&self) -> &[u8] { + &self.data + } + + fn write(&self, writer: &mut W) -> io::Result<()> { + self.kind.write(writer)?; + let presumed_origin: Option = self.presumed_origin.clone().map(Into::into); + writer.write_all(&presumed_origin.encode())?; + self.output.write(writer)?; + writer.write_all(&u16::try_from(self.data.len()).unwrap().to_le_bytes())?; + writer.write_all(&self.data) + } + + fn read(mut reader: &mut R) -> io::Result { + Ok(Output { + kind: OutputType::read(reader)?, + presumed_origin: { + Option::::decode(&mut IoReader(&mut reader)) + .map_err(|e| io::Error::other(format!("couldn't decode ExternalAddress: {e:?}")))? + .map(|address| { + Address::try_from(address) + .map_err(|()| io::Error::other("couldn't decode Address from ExternalAddress")) + }) + .transpose()? + }, + output: WalletOutput::read(reader)?, + data: { + let mut data_len = [0; 2]; + reader.read_exact(&mut data_len)?; + + let mut data = vec![0; usize::from(u16::from_le_bytes(data_len))]; + reader.read_exact(&mut data)?; + data + }, + }) + } +} diff --git a/processor/bitcoin/src/transaction.rs b/processor/bitcoin/src/transaction.rs new file mode 100644 index 00000000..ef48d3f0 --- /dev/null +++ b/processor/bitcoin/src/transaction.rs @@ -0,0 +1,170 @@ +use std::io; + +use rand_core::{RngCore, CryptoRng}; + +use transcript::{Transcript, RecommendedTranscript}; +use ciphersuite::Secp256k1; +use frost::{dkg::ThresholdKeys, sign::PreprocessMachine}; + +use bitcoin_serai::{ + bitcoin::{ + consensus::{Encodable, Decodable}, + ScriptBuf, Transaction as BTransaction, + }, + wallet::{ + ReceivedOutput, TransactionError, SignableTransaction as BSignableTransaction, + TransactionMachine, + }, +}; + +use borsh::{BorshSerialize, BorshDeserialize}; + +use serai_client::networks::bitcoin::Address; + +use crate::output::OutputId; + +#[derive(Clone, Debug)] +pub(crate) struct Transaction(BTransaction); + +impl From for Transaction { + fn from(tx: BTransaction) -> Self { + Self(tx) + } +} + +impl scheduler::Transaction for Transaction { + fn read(reader: &mut impl io::Read) -> io::Result { + let tx = + BTransaction::consensus_decode(&mut io::BufReader::new(reader)).map_err(io::Error::other)?; + Ok(Self(tx)) + } + fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + let mut writer = io::BufWriter::new(writer); + self.0.consensus_encode(&mut writer)?; + writer.into_inner()?; + Ok(()) + } +} + +#[derive(Clone, Debug)] +pub(crate) struct SignableTransaction { + inputs: Vec, + payments: Vec<(Address, u64)>, + change: Option
<Address>
, + data: Option>, + fee_per_vbyte: u64, +} + +impl SignableTransaction { + fn signable(self) -> Result { + BSignableTransaction::new( + self.inputs, + &self + .payments + .iter() + .cloned() + .map(|(address, amount)| (ScriptBuf::from(address), amount)) + .collect::>(), + self.change.map(ScriptBuf::from), + self.data, + self.fee_per_vbyte, + ) + } +} + +#[derive(Clone)] +pub(crate) struct ClonableTransctionMachine(SignableTransaction, ThresholdKeys); +impl PreprocessMachine for ClonableTransctionMachine { + type Preprocess = ::Preprocess; + type Signature = ::Signature; + type SignMachine = ::SignMachine; + + fn preprocess( + self, + rng: &mut R, + ) -> (Self::SignMachine, Self::Preprocess) { + self + .0 + .signable() + .expect("signing an invalid SignableTransaction") + .multisig(&self.1, RecommendedTranscript::new(b"Serai Processor Bitcoin Transaction")) + .expect("incorrect keys used for SignableTransaction") + .preprocess(rng) + } +} + +impl scheduler::SignableTransaction for SignableTransaction { + type Transaction = Transaction; + type Ciphersuite = Secp256k1; + type PreprocessMachine = ClonableTransctionMachine; + + fn read(reader: &mut impl io::Read) -> io::Result { + let inputs = { + let mut input_len = [0; 4]; + reader.read_exact(&mut input_len)?; + let mut inputs = vec![]; + for _ in 0 .. u32::from_le_bytes(input_len) { + inputs.push(ReceivedOutput::read(reader)?); + } + inputs + }; + + let payments = <_>::deserialize_reader(reader)?; + let change = <_>::deserialize_reader(reader)?; + let data = <_>::deserialize_reader(reader)?; + let fee_per_vbyte = <_>::deserialize_reader(reader)?; + + Ok(Self { inputs, payments, change, data, fee_per_vbyte }) + } + fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + writer.write_all(&u32::try_from(self.inputs.len()).unwrap().to_le_bytes())?; + for input in &self.inputs { + input.write(writer)?; + } + + self.payments.serialize(writer)?; + self.change.serialize(writer)?; + self.data.serialize(writer)?; + self.fee_per_vbyte.serialize(writer)?; + + Ok(()) + } + + fn id(&self) -> [u8; 32] { + self.clone().signable().unwrap().txid() + } + + fn sign(self, keys: ThresholdKeys) -> Self::PreprocessMachine { + ClonableTransctionMachine(self, keys) + } +} + +#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] +pub(crate) struct Eventuality { + txid: [u8; 32], + singular_spent_output: Option, +} + +impl primitives::Eventuality for Eventuality { + type OutputId = OutputId; + + fn id(&self) -> [u8; 32] { + self.txid + } + + // We define the lookup as our ID since the resolving transaction only has a singular possible ID + fn lookup(&self) -> Vec { + self.txid.to_vec() + } + + fn singular_spent_output(&self) -> Option { + self.singular_spent_output.clone() + } + + fn read(reader: &mut impl io::Read) -> io::Result { + Self::deserialize_reader(reader) + } + fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + self.serialize(writer) + } +} diff --git a/processor/primitives/src/lib.rs b/processor/primitives/src/lib.rs index 4e45fa8f..cc915ca2 100644 --- a/processor/primitives/src/lib.rs +++ b/processor/primitives/src/lib.rs @@ -46,7 +46,24 @@ pub trait Id: + BorshDeserialize { } -impl Id for [u8; N] where [u8; N]: Default {} +impl< + I: Send + + Sync + + Clone + + Default + + PartialEq + + Eq + + Hash + + AsRef<[u8]> + + AsMut<[u8]> + + Debug + + Encode + + Decode + + BorshSerialize + + BorshDeserialize, + > Id for I +{ +} /// A wrapper for a group element which implements the scale/borsh traits. 
#[derive(Clone, Copy, PartialEq, Eq, Debug)] diff --git a/processor/primitives/src/output.rs b/processor/primitives/src/output.rs index cbfe59f3..76acde60 100644 --- a/processor/primitives/src/output.rs +++ b/processor/primitives/src/output.rs @@ -19,10 +19,19 @@ pub trait Address: + BorshSerialize + BorshDeserialize { - /// Write this address. - fn write(&self, writer: &mut impl io::Write) -> io::Result<()>; - /// Read an address. - fn read(reader: &mut impl io::Read) -> io::Result; +} +// This casts a wide net, yet it only implements `Address` for things `Into` so +// it should only implement this for addresses +impl< + A: Send + + Sync + + Clone + + Into + + TryFrom + + BorshSerialize + + BorshDeserialize, + > Address for A +{ } /// The type of the output. diff --git a/processor/primitives/src/payment.rs b/processor/primitives/src/payment.rs index 67a5bbad..4c1e04f4 100644 --- a/processor/primitives/src/payment.rs +++ b/processor/primitives/src/payment.rs @@ -48,7 +48,7 @@ impl Payment { /// Read a Payment. pub fn read(reader: &mut impl io::Read) -> io::Result { - let address = A::read(reader)?; + let address = A::deserialize_reader(reader)?; let reader = &mut IoReader(reader); let balance = Balance::decode(reader).map_err(io::Error::other)?; let data = Option::>::decode(reader).map_err(io::Error::other)?; @@ -56,7 +56,7 @@ impl Payment { } /// Write the Payment. pub fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { - self.address.write(writer).unwrap(); + self.address.serialize(writer)?; self.balance.encode_to(writer); self.data.encode_to(writer); Ok(()) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 52a36419..ef37ef38 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -10,7 +10,7 @@ use serai_db::{Get, DbTxn, create_db, db_channel}; use serai_in_instructions_primitives::{InInstructionWithBalance, Batch}; use serai_coins_primitives::OutInstructionWithBalance; -use primitives::{EncodableG, Address, ReceivedOutput}; +use primitives::{EncodableG, ReceivedOutput}; use crate::{ lifetime::{LifetimeStage, Lifetime}, @@ -49,7 +49,7 @@ impl OutputWithInInstruction { let mut opt = [0xff]; reader.read_exact(&mut opt)?; assert!((opt[0] == 0) || (opt[0] == 1)); - (opt[0] == 1).then(|| AddressFor::::read(reader)).transpose()? + (opt[0] == 1).then(|| AddressFor::::deserialize_reader(reader)).transpose()? 
}; let in_instruction = InInstructionWithBalance::decode(&mut IoReader(reader)).map_err(io::Error::other)?; @@ -59,7 +59,7 @@ impl OutputWithInInstruction { self.output.write(writer)?; if let Some(return_address) = &self.return_address { writer.write_all(&[1])?; - return_address.write(writer)?; + return_address.serialize(writer)?; } else { writer.write_all(&[0])?; } @@ -278,7 +278,7 @@ impl ScannerGlobalDb { buf.read_exact(&mut opt).unwrap(); assert!((opt[0] == 0) || (opt[0] == 1)); - let address = (opt[0] == 1).then(|| AddressFor::::read(&mut buf).unwrap()); + let address = (opt[0] == 1).then(|| AddressFor::::deserialize_reader(&mut buf).unwrap()); Some((address, InInstructionWithBalance::decode(&mut IoReader(buf)).unwrap())) } } @@ -338,7 +338,7 @@ impl ScanToEventualityDb { let mut buf = vec![]; if let Some(address) = &forward.return_address { buf.write_all(&[1]).unwrap(); - address.write(&mut buf).unwrap(); + address.serialize(&mut buf).unwrap(); } else { buf.write_all(&[0]).unwrap(); } @@ -435,7 +435,8 @@ impl Returnable { reader.read_exact(&mut opt).unwrap(); assert!((opt[0] == 0) || (opt[0] == 1)); - let return_address = (opt[0] == 1).then(|| AddressFor::::read(reader)).transpose()?; + let return_address = + (opt[0] == 1).then(|| AddressFor::::deserialize_reader(reader)).transpose()?; let in_instruction = InInstructionWithBalance::decode(&mut IoReader(reader)).map_err(io::Error::other)?; @@ -444,7 +445,7 @@ impl Returnable { fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { if let Some(return_address) = &self.return_address { writer.write_all(&[1])?; - return_address.write(writer)?; + return_address.serialize(writer)?; } else { writer.write_all(&[0])?; } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 5919ff7e..9831d41a 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -7,6 +7,7 @@ use std::{io, collections::HashMap}; use group::GroupEncoding; +use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, Db}; use serai_primitives::{NetworkId, Coin, Amount}; @@ -179,12 +180,12 @@ pub struct Return { impl Return { pub(crate) fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { - self.address.write(writer)?; + self.address.serialize(writer)?; self.output.write(writer) } pub(crate) fn read(reader: &mut impl io::Read) -> io::Result { - let address = AddressFor::::read(reader)?; + let address = AddressFor::::deserialize_reader(reader)?; let output = OutputFor::::read(reader)?; Ok(Return { address, output }) } diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs index 05239779..10a3f6bb 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/report/db.rs @@ -4,12 +4,11 @@ use std::io::{Read, Write}; use group::GroupEncoding; use scale::{Encode, Decode, IoReader}; +use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db}; use serai_primitives::Balance; -use primitives::Address; - use crate::{ScannerFeed, KeyFor, AddressFor}; create_db!( @@ -92,7 +91,7 @@ impl ReportDb { for return_information in return_information { if let Some(ReturnInformation { address, balance }) = return_information { buf.write_all(&[1]).unwrap(); - address.write(&mut buf).unwrap(); + address.serialize(&mut buf).unwrap(); balance.encode_to(&mut buf); } else { buf.write_all(&[0]).unwrap(); @@ -115,7 +114,7 @@ impl ReportDb { assert!((opt[0] == 0) || (opt[0] == 1)); res.push((opt[0] == 1).then(|| { - let address = AddressFor::::read(&mut 
buf).unwrap(); + let address = AddressFor::::deserialize_reader(&mut buf).unwrap(); let balance = Balance::decode(&mut IoReader(&mut buf)).unwrap(); ReturnInformation { address, balance } })); diff --git a/processor/scheduler/primitives/src/lib.rs b/processor/scheduler/primitives/src/lib.rs index f146027d..3c214d15 100644 --- a/processor/scheduler/primitives/src/lib.rs +++ b/processor/scheduler/primitives/src/lib.rs @@ -11,7 +11,7 @@ use frost::{dkg::ThresholdKeys, sign::PreprocessMachine}; use serai_db::DbTxn; /// A transaction. -pub trait Transaction: Sized { +pub trait Transaction: Sized + Send { /// Read a `Transaction`. fn read(reader: &mut impl io::Read) -> io::Result; /// Write a `Transaction`. @@ -20,10 +20,12 @@ pub trait Transaction: Sized { /// A signable transaction. pub trait SignableTransaction: 'static + Sized + Send + Sync + Clone { + /// The underlying transaction type. + type Transaction: Transaction; /// The ciphersuite used to sign this transaction. type Ciphersuite: Ciphersuite; /// The preprocess machine for the signing protocol for this transaction. - type PreprocessMachine: Clone + PreprocessMachine; + type PreprocessMachine: Clone + PreprocessMachine>; /// Read a `SignableTransaction`. fn read(reader: &mut impl io::Read) -> io::Result; @@ -42,8 +44,7 @@ pub trait SignableTransaction: 'static + Sized + Send + Sync + Clone { } /// The transaction type for a SignableTransaction. -pub type TransactionFor = - <::PreprocessMachine as PreprocessMachine>::Signature; +pub type TransactionFor = ::Transaction; mod db { use serai_db::{Get, DbTxn, create_db, db_channel}; diff --git a/processor/signers/src/transaction/mod.rs b/processor/signers/src/transaction/mod.rs index 9311eb32..b9b62e75 100644 --- a/processor/signers/src/transaction/mod.rs +++ b/processor/signers/src/transaction/mod.rs @@ -185,6 +185,8 @@ impl> } } Response::Signature { id, signature: signed_tx } => { + let signed_tx: TransactionFor = signed_tx.into(); + // Save this transaction to the database { let mut buf = Vec::with_capacity(256); diff --git a/substrate/client/Cargo.toml b/substrate/client/Cargo.toml index e653c9af..5cba05f0 100644 --- a/substrate/client/Cargo.toml +++ b/substrate/client/Cargo.toml @@ -24,6 +24,7 @@ bitvec = { version = "1", default-features = false, features = ["alloc", "serde" hex = "0.4" scale = { package = "parity-scale-codec", version = "3" } +borsh = { version = "1" } serde = { version = "1", features = ["derive"], optional = true } serde_json = { version = "1", optional = true } diff --git a/substrate/client/src/networks/bitcoin.rs b/substrate/client/src/networks/bitcoin.rs index 502bfb44..28f66053 100644 --- a/substrate/client/src/networks/bitcoin.rs +++ b/substrate/client/src/networks/bitcoin.rs @@ -1,6 +1,7 @@ use core::{str::FromStr, fmt}; use scale::{Encode, Decode}; +use borsh::{BorshSerialize, BorshDeserialize}; use bitcoin::{ hashes::{Hash as HashTrait, hash160::Hash}, @@ -10,47 +11,10 @@ use bitcoin::{ address::{AddressType, NetworkChecked, Address as BAddress}, }; -#[derive(Clone, Eq, Debug)] -pub struct Address(ScriptBuf); +use crate::primitives::ExternalAddress; -impl PartialEq for Address { - fn eq(&self, other: &Self) -> bool { - // Since Serai defines the Bitcoin-address specification as a variant of the script alone, - // define equivalency as the script alone - self.0 == other.0 - } -} - -impl From
<Address>
for ScriptBuf { - fn from(addr: Address) -> ScriptBuf { - addr.0 - } -} - -impl FromStr for Address { - type Err = (); - fn from_str(str: &str) -> Result { - Address::new( - BAddress::from_str(str) - .map_err(|_| ())? - .require_network(Network::Bitcoin) - .map_err(|_| ())? - .script_pubkey(), - ) - .ok_or(()) - } -} - -impl fmt::Display for Address { - fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { - BAddress::::from_script(&self.0, Network::Bitcoin) - .map_err(|_| fmt::Error)? - .fmt(f) - } -} - -// SCALE-encoded variant of Monero addresses. -#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)] +// SCALE-encodable representation of Bitcoin addresses, used internally. +#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, BorshSerialize, BorshDeserialize)] enum EncodedAddress { P2PKH([u8; 20]), P2SH([u8; 20]), @@ -59,34 +23,13 @@ enum EncodedAddress { P2TR([u8; 32]), } -impl TryFrom> for Address { +impl TryFrom<&ScriptBuf> for EncodedAddress { type Error = (); - fn try_from(data: Vec) -> Result { - Ok(Address(match EncodedAddress::decode(&mut data.as_ref()).map_err(|_| ())? { - EncodedAddress::P2PKH(hash) => { - ScriptBuf::new_p2pkh(&PubkeyHash::from_raw_hash(Hash::from_byte_array(hash))) - } - EncodedAddress::P2SH(hash) => { - ScriptBuf::new_p2sh(&ScriptHash::from_raw_hash(Hash::from_byte_array(hash))) - } - EncodedAddress::P2WPKH(hash) => { - ScriptBuf::new_witness_program(&WitnessProgram::new(WitnessVersion::V0, &hash).unwrap()) - } - EncodedAddress::P2WSH(hash) => { - ScriptBuf::new_witness_program(&WitnessProgram::new(WitnessVersion::V0, &hash).unwrap()) - } - EncodedAddress::P2TR(key) => { - ScriptBuf::new_witness_program(&WitnessProgram::new(WitnessVersion::V1, &key).unwrap()) - } - })) - } -} - -fn try_to_vec(addr: &Address) -> Result, ()> { - let parsed_addr = - BAddress::::from_script(&addr.0, Network::Bitcoin).map_err(|_| ())?; - Ok( - (match parsed_addr.address_type() { + fn try_from(script_buf: &ScriptBuf) -> Result { + // This uses mainnet as our encodings don't specify a network. + let parsed_addr = + BAddress::::from_script(script_buf, Network::Bitcoin).map_err(|_| ())?; + Ok(match parsed_addr.address_type() { Some(AddressType::P2pkh) => { EncodedAddress::P2PKH(*parsed_addr.pubkey_hash().unwrap().as_raw_hash().as_byte_array()) } @@ -110,23 +53,119 @@ fn try_to_vec(addr: &Address) -> Result, ()> { } _ => Err(())?, }) - .encode(), - ) + } } -impl From
<Address>
for Vec { - fn from(addr: Address) -> Vec { +impl From for ScriptBuf { + fn from(encoded: EncodedAddress) -> Self { + match encoded { + EncodedAddress::P2PKH(hash) => { + ScriptBuf::new_p2pkh(&PubkeyHash::from_raw_hash(Hash::from_byte_array(hash))) + } + EncodedAddress::P2SH(hash) => { + ScriptBuf::new_p2sh(&ScriptHash::from_raw_hash(Hash::from_byte_array(hash))) + } + EncodedAddress::P2WPKH(hash) => { + ScriptBuf::new_witness_program(&WitnessProgram::new(WitnessVersion::V0, &hash).unwrap()) + } + EncodedAddress::P2WSH(hash) => { + ScriptBuf::new_witness_program(&WitnessProgram::new(WitnessVersion::V0, &hash).unwrap()) + } + EncodedAddress::P2TR(key) => { + ScriptBuf::new_witness_program(&WitnessProgram::new(WitnessVersion::V1, &key).unwrap()) + } + } + } +} + +/// A Bitcoin address usable with Serai. +#[derive(Clone, PartialEq, Eq, Debug)] +pub struct Address(ScriptBuf); + +// Support consuming into the underlying ScriptBuf. +impl From
<Address>
for ScriptBuf { + fn from(addr: Address) -> ScriptBuf { + addr.0 + } +} + +impl From<&Address> for BAddress { + fn from(addr: &Address) -> BAddress { + // This fails if the script doesn't have an address representation, yet all our representable + // addresses' scripts do + BAddress::::from_script(&addr.0, Network::Bitcoin).unwrap() + } +} + +// Support converting a string into an address. +impl FromStr for Address { + type Err = (); + fn from_str(str: &str) -> Result { + Address::new( + BAddress::from_str(str) + .map_err(|_| ())? + .require_network(Network::Bitcoin) + .map_err(|_| ())? + .script_pubkey(), + ) + .ok_or(()) + } +} + +// Support converting an address into a string. +impl fmt::Display for Address { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + BAddress::from(self).fmt(f) + } +} + +impl TryFrom for Address { + type Error = (); + fn try_from(data: ExternalAddress) -> Result { + // Decode as an EncodedAddress, then map to a ScriptBuf + let mut data = data.as_ref(); + let encoded = EncodedAddress::decode(&mut data).map_err(|_| ())?; + if !data.is_empty() { + Err(())? + } + Ok(Address(ScriptBuf::from(encoded))) + } +} + +impl From
<Address>
for EncodedAddress { + fn from(addr: Address) -> EncodedAddress { // Safe since only encodable addresses can be created - try_to_vec(&addr).unwrap() + EncodedAddress::try_from(&addr.0).unwrap() + } +} + +impl From
<Address>
for ExternalAddress { + fn from(addr: Address) -> ExternalAddress { + // Safe since all variants are fixed-length and fit into MAX_ADDRESS_LEN + ExternalAddress::new(EncodedAddress::from(addr).encode()).unwrap() + } +} + +impl BorshSerialize for Address { + fn serialize(&self, writer: &mut W) -> borsh::io::Result<()> { + EncodedAddress::from(self.clone()).serialize(writer) + } +} + +impl BorshDeserialize for Address { + fn deserialize_reader(reader: &mut R) -> borsh::io::Result { + Ok(Self(ScriptBuf::from(EncodedAddress::deserialize_reader(reader)?))) } } impl Address { - pub fn new(address: ScriptBuf) -> Option { - let res = Self(address); - if try_to_vec(&res).is_ok() { - return Some(res); + /// Create a new Address from a ScriptBuf. + pub fn new(script_buf: ScriptBuf) -> Option { + // If we can represent this Script, it's an acceptable address + if EncodedAddress::try_from(&script_buf).is_ok() { + return Some(Self(script_buf)); } + // Else, it isn't acceptable None } } diff --git a/substrate/primitives/src/lib.rs b/substrate/primitives/src/lib.rs index d2c52219..2cf37e00 100644 --- a/substrate/primitives/src/lib.rs +++ b/substrate/primitives/src/lib.rs @@ -62,7 +62,7 @@ pub fn borsh_deserialize_bounded_vec &[u8] { - self.0.as_ref() - } - #[cfg(feature = "std")] pub fn consume(self) -> Vec { self.0.into_inner() From 2d4b775b6e89674ca5ce59789dfc96d6ae2ba625 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 10 Sep 2024 06:25:21 -0400 Subject: [PATCH 093/368] Add bitcoin Block trait impl --- processor/bitcoin/src/block.rs | 70 ++++++++++++++++ processor/bitcoin/src/lib.rs | 64 +++------------ processor/bitcoin/src/output.rs | 21 ++++- processor/bitcoin/src/scanner.rs | 131 ++++++++++++++++++++++++++++++ processor/primitives/src/block.rs | 22 ++--- 5 files changed, 239 insertions(+), 69 deletions(-) create mode 100644 processor/bitcoin/src/scanner.rs diff --git a/processor/bitcoin/src/block.rs b/processor/bitcoin/src/block.rs index e69de29b..304f19e3 100644 --- a/processor/bitcoin/src/block.rs +++ b/processor/bitcoin/src/block.rs @@ -0,0 +1,70 @@ +use std::collections::HashMap; + +use ciphersuite::{Ciphersuite, Secp256k1}; + +use bitcoin_serai::bitcoin::block::{Header, Block as BBlock}; + +use serai_client::networks::bitcoin::Address; + +use primitives::{ReceivedOutput, EventualityTracker}; + +use crate::{hash_bytes, scanner::scanner, output::Output, transaction::Eventuality}; + +#[derive(Clone, Debug)] +pub(crate) struct BlockHeader(Header); +impl primitives::BlockHeader for BlockHeader { + fn id(&self) -> [u8; 32] { + hash_bytes(self.0.block_hash().to_raw_hash()) + } + fn parent(&self) -> [u8; 32] { + hash_bytes(self.0.prev_blockhash.to_raw_hash()) + } +} + +#[derive(Clone, Debug)] +pub(crate) struct Block(BBlock); + +#[async_trait::async_trait] +impl primitives::Block for Block { + type Header = BlockHeader; + + type Key = ::G; + type Address = Address; + type Output = Output; + type Eventuality = Eventuality; + + fn id(&self) -> [u8; 32] { + primitives::BlockHeader::id(&BlockHeader(self.0.header)) + } + + fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec { + let scanner = scanner(key); + + let mut res = vec![]; + // We skip the coinbase transaction as its burdened by maturity + for tx in &self.0.txdata[1 ..] 
{ + for output in scanner.scan_transaction(tx) { + res.push(Output::new(key, tx, output)); + } + } + res + } + + #[allow(clippy::type_complexity)] + fn check_for_eventuality_resolutions( + &self, + eventualities: &mut EventualityTracker, + ) -> HashMap< + >::TransactionId, + Self::Eventuality, + > { + let mut res = HashMap::new(); + for tx in &self.0.txdata[1 ..] { + let id = hash_bytes(tx.compute_txid().to_raw_hash()); + if let Some(eventuality) = eventualities.active_eventualities.remove(id.as_slice()) { + res.insert(id, eventuality); + } + } + res + } +} diff --git a/processor/bitcoin/src/lib.rs b/processor/bitcoin/src/lib.rs index 112d8fd3..03c9e903 100644 --- a/processor/bitcoin/src/lib.rs +++ b/processor/bitcoin/src/lib.rs @@ -6,8 +6,19 @@ static ALLOCATOR: zalloc::ZeroizingAlloc = zalloc::ZeroizingAlloc(std::alloc::System); +mod scanner; + mod output; mod transaction; +mod block; + +pub(crate) fn hash_bytes(hash: bitcoin_serai::bitcoin::hashes::sha256d::Hash) -> [u8; 32] { + use bitcoin_serai::bitcoin::hashes::Hash; + + let mut res = hash.to_byte_array(); + res.reverse(); + res +} /* use std::{sync::LazyLock, time::Duration, io, collections::HashMap}; @@ -299,59 +310,6 @@ impl Bitcoin { } } - // Expected script has to start with SHA256 PUSH MSG_HASH OP_EQUALVERIFY .. - fn segwit_data_pattern(script: &ScriptBuf) -> Option { - let mut ins = script.instructions(); - - // first item should be SHA256 code - if ins.next()?.ok()?.opcode()? != OP_SHA256 { - return Some(false); - } - - // next should be a data push - ins.next()?.ok()?.push_bytes()?; - - // next should be a equality check - if ins.next()?.ok()?.opcode()? != OP_EQUALVERIFY { - return Some(false); - } - - Some(true) - } - - fn extract_serai_data(tx: &Transaction) -> Vec { - // check outputs - let mut data = (|| { - for output in &tx.output { - if output.script_pubkey.is_op_return() { - match output.script_pubkey.instructions_minimal().last() { - Some(Ok(Instruction::PushBytes(data))) => return data.as_bytes().to_vec(), - _ => continue, - } - } - } - vec![] - })(); - - // check inputs - if data.is_empty() { - for input in &tx.input { - let witness = input.witness.to_vec(); - // expected witness at least has to have 2 items, msg and the redeem script. 
- if witness.len() >= 2 { - let redeem_script = ScriptBuf::from_bytes(witness.last().unwrap().clone()); - if Self::segwit_data_pattern(&redeem_script) == Some(true) { - data.clone_from(&witness[witness.len() - 2]); // len() - 1 is the redeem_script - break; - } - } - } - } - - data.truncate(MAX_DATA_LEN.try_into().unwrap()); - data - } - #[cfg(test)] pub fn sign_btc_input_for_p2pkh( tx: &Transaction, diff --git a/processor/bitcoin/src/output.rs b/processor/bitcoin/src/output.rs index cc624319..c7ed060f 100644 --- a/processor/bitcoin/src/output.rs +++ b/processor/bitcoin/src/output.rs @@ -8,6 +8,7 @@ use bitcoin_serai::{ key::{Parity, XOnlyPublicKey}, consensus::Encodable, script::Instruction, + transaction::Transaction, }, wallet::ReceivedOutput as WalletOutput, }; @@ -22,6 +23,8 @@ use serai_client::{ use primitives::{OutputType, ReceivedOutput}; +use crate::scanner::{offsets_for_key, presumed_origin, extract_serai_data}; + #[derive(Clone, PartialEq, Eq, Hash, Debug, Encode, Decode, BorshSerialize, BorshDeserialize)] pub(crate) struct OutputId([u8; 36]); impl Default for OutputId { @@ -48,6 +51,20 @@ pub(crate) struct Output { data: Vec, } +impl Output { + pub fn new(key: ::G, tx: &Transaction, output: WalletOutput) -> Self { + Self { + kind: offsets_for_key(key) + .into_iter() + .find_map(|(kind, offset)| (offset == output.offset()).then_some(kind)) + .expect("scanned output for unknown offset"), + presumed_origin: presumed_origin(tx), + output, + data: extract_serai_data(tx), + } + } +} + impl ReceivedOutput<::G, Address> for Output { type Id = OutputId; type TransactionId = [u8; 32]; @@ -63,7 +80,9 @@ impl ReceivedOutput<::G, Address> for Output { } fn transaction_id(&self) -> Self::TransactionId { - self.output.outpoint().txid.to_raw_hash().to_byte_array() + let mut res = self.output.outpoint().txid.to_raw_hash().to_byte_array(); + res.reverse(); + res } fn key(&self) -> ::G { diff --git a/processor/bitcoin/src/scanner.rs b/processor/bitcoin/src/scanner.rs new file mode 100644 index 00000000..43518b57 --- /dev/null +++ b/processor/bitcoin/src/scanner.rs @@ -0,0 +1,131 @@ +use std::{sync::LazyLock, collections::HashMap}; + +use ciphersuite::{Ciphersuite, Secp256k1}; + +use bitcoin_serai::{ + bitcoin::{ + blockdata::opcodes, + script::{Instruction, ScriptBuf}, + Transaction, + }, + wallet::Scanner, +}; + +use serai_client::networks::bitcoin::Address; + +use primitives::OutputType; + +const KEY_DST: &[u8] = b"Serai Bitcoin Processor Key Offset"; +static BRANCH_BASE_OFFSET: LazyLock<::F> = + LazyLock::new(|| Secp256k1::hash_to_F(KEY_DST, b"branch")); +static CHANGE_BASE_OFFSET: LazyLock<::F> = + LazyLock::new(|| Secp256k1::hash_to_F(KEY_DST, b"change")); +static FORWARD_BASE_OFFSET: LazyLock<::F> = + LazyLock::new(|| Secp256k1::hash_to_F(KEY_DST, b"forward")); + +// Unfortunately, we have per-key offsets as it's the root key plus the base offset may not be +// even. While we could tweak the key until all derivations are even, that'd require significantly +// more tweaking. This algorithmic complexity is preferred. 
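+// As an example of the tweaking: if `key + BRANCH_BASE_OFFSET * G` isn't an even point,
+// registering the base offset with the wallet's `Scanner` yields a distinct, parity-adjusted
+// offset, and that adjustment depends on the root key, hence the per-key map built here.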
+pub(crate) fn offsets_for_key( + key: ::G, +) -> HashMap::F> { + let mut offsets = HashMap::from([(OutputType::External, ::F::ZERO)]); + + // We create an actual Bitcoin scanner as upon adding an offset, it yields the tweaked offset + // actually used + let mut scanner = Scanner::new(key).unwrap(); + let mut register = |kind, offset| { + let tweaked_offset = scanner.register_offset(offset).expect("offset collision"); + offsets.insert(kind, tweaked_offset); + }; + + register(OutputType::Branch, *BRANCH_BASE_OFFSET); + register(OutputType::Change, *CHANGE_BASE_OFFSET); + register(OutputType::Forwarded, *FORWARD_BASE_OFFSET); + + offsets +} + +pub(crate) fn scanner(key: ::G) -> Scanner { + let mut scanner = Scanner::new(key).unwrap(); + for (_, offset) in offsets_for_key(key) { + let tweaked_offset = scanner.register_offset(offset).unwrap(); + assert_eq!(tweaked_offset, offset); + } + scanner +} + +pub(crate) fn presumed_origin(tx: &Transaction) -> Option
<Address>
{ + todo!("TODO") + + /* + let spent_output = { + let input = &tx.input[0]; + let mut spent_tx = input.previous_output.txid.as_raw_hash().to_byte_array(); + spent_tx.reverse(); + let mut tx; + while { + tx = self.rpc.get_transaction(&spent_tx).await; + tx.is_err() + } { + log::error!("couldn't get transaction from bitcoin node: {tx:?}"); + sleep(Duration::from_secs(5)).await; + } + tx.unwrap().output.swap_remove(usize::try_from(input.previous_output.vout).unwrap()) + }; + Address::new(spent_output.script_pubkey) + */ +} + +// Checks if this script matches SHA256 PUSH MSG_HASH OP_EQUALVERIFY .. +fn matches_segwit_data(script: &ScriptBuf) -> Option { + let mut ins = script.instructions(); + + // first item should be SHA256 code + if ins.next()?.ok()?.opcode()? != opcodes::all::OP_SHA256 { + return Some(false); + } + + // next should be a data push + ins.next()?.ok()?.push_bytes()?; + + // next should be a equality check + if ins.next()?.ok()?.opcode()? != opcodes::all::OP_EQUALVERIFY { + return Some(false); + } + + Some(true) +} + +// Extract the data for Serai from a transaction +pub(crate) fn extract_serai_data(tx: &Transaction) -> Vec { + // Check for an OP_RETURN output + let mut data = (|| { + for output in &tx.output { + if output.script_pubkey.is_op_return() { + match output.script_pubkey.instructions_minimal().last() { + Some(Ok(Instruction::PushBytes(data))) => return Some(data.as_bytes().to_vec()), + _ => continue, + } + } + } + None + })(); + + // Check the inputs + if data.is_none() { + for input in &tx.input { + let witness = input.witness.to_vec(); + // The witness has to have at least 2 items, msg and the redeem script + if witness.len() >= 2 { + let redeem_script = ScriptBuf::from_bytes(witness.last().unwrap().clone()); + if matches_segwit_data(&redeem_script) == Some(true) { + data = Some(witness[witness.len() - 2].clone()); // len() - 1 is the redeem_script + break; + } + } + } + } + + data.unwrap_or(vec![]) +} diff --git a/processor/primitives/src/block.rs b/processor/primitives/src/block.rs index 89dff54f..4f721d02 100644 --- a/processor/primitives/src/block.rs +++ b/processor/primitives/src/block.rs @@ -3,7 +3,7 @@ use std::collections::HashMap; use group::{Group, GroupEncoding}; -use crate::{Id, Address, ReceivedOutput, Eventuality, EventualityTracker}; +use crate::{Address, ReceivedOutput, Eventuality, EventualityTracker}; /// A block header from an external network. pub trait BlockHeader: Send + Sync + Sized + Clone + Debug { @@ -16,12 +16,6 @@ pub trait BlockHeader: Send + Sync + Sized + Clone + Debug { fn parent(&self) -> [u8; 32]; } -/// A transaction from an external network. -pub trait Transaction: Send + Sync + Sized { - /// The type used to identify transactions on this external network. - type Id: Id; -} - /// A block from an external network. /// /// A block is defined as a consensus event associated with a set of transactions. It is not @@ -37,14 +31,8 @@ pub trait Block: Send + Sync + Sized + Clone + Debug { type Key: Group + GroupEncoding; /// The type used to represent addresses on this external network. type Address: Address; - /// The type used to represent transactions on this external network. - type Transaction: Transaction; /// The type used to represent received outputs on this external network. - type Output: ReceivedOutput< - Self::Key, - Self::Address, - TransactionId = ::Id, - >; + type Output: ReceivedOutput; /// The type used to represent an Eventuality for a transaction on this external network. 
type Eventuality: Eventuality< OutputId = >::Id, @@ -64,8 +52,12 @@ pub trait Block: Send + Sync + Sized + Clone + Debug { /// /// Returns tbe resolved Eventualities, indexed by the ID of the transactions which resolved /// them. + #[allow(clippy::type_complexity)] fn check_for_eventuality_resolutions( &self, eventualities: &mut EventualityTracker, - ) -> HashMap<::Id, Self::Eventuality>; + ) -> HashMap< + >::TransactionId, + Self::Eventuality, + >; } From e36b671f379799d096db5113a87f8cfa2eb4cc5d Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 10 Sep 2024 06:40:41 -0400 Subject: [PATCH 094/368] Remove bound that WINDOW_LENGTH < CONFIRMATIONS It's unnecessary and not valuable. --- processor/scanner/src/db.rs | 2 +- processor/scanner/src/eventuality/mod.rs | 5 ++--- processor/scanner/src/lib.rs | 4 ++-- 3 files changed, 5 insertions(+), 6 deletions(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index ef37ef38..5fcdc160 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -120,7 +120,7 @@ impl ScannerGlobalDb { /// A new key MUST NOT be queued to activate a block preceding the finishing of the key prior to /// its prior. There MUST only be two keys active at one time. /// - /// activation_block_number is inclusive, so the key will be scanned for starting at the + /// `activation_block_number` is inclusive, so the key will be scanned for starting at the /// specified block. pub(crate) fn queue_key(txn: &mut impl DbTxn, activation_block_number: u64, key: KeyFor) { // Set the block which has a key activate as notable diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index be5b4555..5d139c6d 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -123,8 +123,8 @@ impl> EventualityTask { block_number: u64, ) -> (Vec>>, Vec<(KeyFor, LifetimeStage)>) { /* - This is proper as the keys for the next-to-scan block (at most `WINDOW_LENGTH` ahead, - which is `<= CONFIRMATIONS`) will be the keys to use here, with only minor edge cases. + This is proper as the keys for the next-to-scan block (at most `WINDOW_LENGTH` ahead) will be + the keys to use here, with only minor edge cases. This may include a key which has yet to activate by our perception. We can simply drop those. @@ -136,7 +136,6 @@ impl> EventualityTask { This also may include a key we've retired which has yet to officially retire. That's fine as we'll do nothing with it, and the Scheduler traits document this behavior. */ - assert!(S::WINDOW_LENGTH <= S::CONFIRMATIONS); let mut keys = ScannerGlobalDb::::active_keys_as_of_next_to_scan_for_outputs_block(&self.db) .expect("scanning for a blockchain without any keys set"); // Since the next-to-scan block is ahead of us, drop keys which have yet to actually activate diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 9831d41a..2c56db35 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -71,8 +71,8 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { /// The amount of blocks to process in parallel. /// - /// This must be at least `1`. This must be less than or equal to `CONFIRMATIONS`. This value - /// should be the worst-case latency to handle a block divided by the expected block time. + /// This must be at least `1`. This value should be the worst-case latency to handle a block + /// divided by the expected block time. 
const WINDOW_LENGTH: u64; /// The amount of blocks which will occur in 10 minutes (approximate). From ba3a6f9e91c1ef7b7b744e5b0531ec390426dd25 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 10 Sep 2024 07:07:09 -0400 Subject: [PATCH 095/368] Bitcoin ScannerFeed --- Cargo.lock | 1 + processor/bitcoin/Cargo.toml | 1 + processor/bitcoin/src/block.rs | 6 +- processor/bitcoin/src/lib.rs | 9 ++- processor/bitcoin/src/output.rs | 2 +- processor/bitcoin/src/{scanner.rs => scan.rs} | 0 processor/bitcoin/src/scanner_feed.rs | 62 +++++++++++++++++++ processor/scanner/src/lib.rs | 10 ++- 8 files changed, 84 insertions(+), 7 deletions(-) rename processor/bitcoin/src/{scanner.rs => scan.rs} (100%) create mode 100644 processor/bitcoin/src/scanner_feed.rs diff --git a/Cargo.lock b/Cargo.lock index ee8c8a99..b35cda50 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8141,6 +8141,7 @@ dependencies = [ "serai-message-queue", "serai-processor-messages", "serai-processor-primitives", + "serai-processor-scanner", "serai-processor-scheduler-primitives", "tokio", "zalloc", diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml index 656c7c40..ff14890e 100644 --- a/processor/bitcoin/Cargo.toml +++ b/processor/bitcoin/Cargo.toml @@ -44,6 +44,7 @@ messages = { package = "serai-processor-messages", path = "../messages" } primitives = { package = "serai-processor-primitives", path = "../primitives" } scheduler = { package = "serai-processor-scheduler-primitives", path = "../scheduler/primitives" } +scanner = { package = "serai-processor-scanner", path = "../scanner" } message-queue = { package = "serai-message-queue", path = "../../message-queue" } diff --git a/processor/bitcoin/src/block.rs b/processor/bitcoin/src/block.rs index 304f19e3..24cccec9 100644 --- a/processor/bitcoin/src/block.rs +++ b/processor/bitcoin/src/block.rs @@ -8,10 +8,10 @@ use serai_client::networks::bitcoin::Address; use primitives::{ReceivedOutput, EventualityTracker}; -use crate::{hash_bytes, scanner::scanner, output::Output, transaction::Eventuality}; +use crate::{hash_bytes, scan::scanner, output::Output, transaction::Eventuality}; #[derive(Clone, Debug)] -pub(crate) struct BlockHeader(Header); +pub(crate) struct BlockHeader(pub(crate) Header); impl primitives::BlockHeader for BlockHeader { fn id(&self) -> [u8; 32] { hash_bytes(self.0.block_hash().to_raw_hash()) @@ -22,7 +22,7 @@ impl primitives::BlockHeader for BlockHeader { } #[derive(Clone, Debug)] -pub(crate) struct Block(BBlock); +pub(crate) struct Block(pub(crate) BBlock); #[async_trait::async_trait] impl primitives::Block for Block { diff --git a/processor/bitcoin/src/lib.rs b/processor/bitcoin/src/lib.rs index 03c9e903..bba8629e 100644 --- a/processor/bitcoin/src/lib.rs +++ b/processor/bitcoin/src/lib.rs @@ -6,12 +6,19 @@ static ALLOCATOR: zalloc::ZeroizingAlloc = zalloc::ZeroizingAlloc(std::alloc::System); -mod scanner; +// Internal utilities for scanning transactions +mod scan; +// Output trait satisfaction mod output; +// Transaction/SignableTransaction/Eventuality trait satisfaction mod transaction; +// Block trait satisfaction mod block; +// ScannerFeed trait satisfaction +mod scanner_feed; + pub(crate) fn hash_bytes(hash: bitcoin_serai::bitcoin::hashes::sha256d::Hash) -> [u8; 32] { use bitcoin_serai::bitcoin::hashes::Hash; diff --git a/processor/bitcoin/src/output.rs b/processor/bitcoin/src/output.rs index c7ed060f..a783792d 100644 --- a/processor/bitcoin/src/output.rs +++ b/processor/bitcoin/src/output.rs @@ -23,7 +23,7 @@ use serai_client::{ use 
primitives::{OutputType, ReceivedOutput}; -use crate::scanner::{offsets_for_key, presumed_origin, extract_serai_data}; +use crate::scan::{offsets_for_key, presumed_origin, extract_serai_data}; #[derive(Clone, PartialEq, Eq, Hash, Debug, Encode, Decode, BorshSerialize, BorshDeserialize)] pub(crate) struct OutputId([u8; 36]); diff --git a/processor/bitcoin/src/scanner.rs b/processor/bitcoin/src/scan.rs similarity index 100% rename from processor/bitcoin/src/scanner.rs rename to processor/bitcoin/src/scan.rs diff --git a/processor/bitcoin/src/scanner_feed.rs b/processor/bitcoin/src/scanner_feed.rs new file mode 100644 index 00000000..73265bfe --- /dev/null +++ b/processor/bitcoin/src/scanner_feed.rs @@ -0,0 +1,62 @@ +use bitcoin_serai::rpc::{RpcError, Rpc as BRpc}; + +use serai_client::primitives::{NetworkId, Coin, Amount}; + +use scanner::ScannerFeed; + +use crate::block::{BlockHeader, Block}; + +#[derive(Clone)] +pub(crate) struct Rpc(BRpc); + +#[async_trait::async_trait] +impl ScannerFeed for Rpc { + const NETWORK: NetworkId = NetworkId::Bitcoin; + const CONFIRMATIONS: u64 = 6; + const WINDOW_LENGTH: u64 = 6; + + const TEN_MINUTES: u64 = 1; + + type Block = Block; + + type EphemeralError = RpcError; + + async fn latest_finalized_block_number(&self) -> Result { + u64::try_from(self.0.get_latest_block_number().await?) + .unwrap() + .checked_sub(Self::CONFIRMATIONS) + .ok_or(RpcError::ConnectionError) + } + + async fn unchecked_block_header_by_number( + &self, + number: u64, + ) -> Result<::Header, Self::EphemeralError> { + Ok(BlockHeader( + self.0.get_block(&self.0.get_block_hash(number.try_into().unwrap()).await?).await?.header, + )) + } + + async fn unchecked_block_by_number( + &self, + number: u64, + ) -> Result { + Ok(Block(self.0.get_block(&self.0.get_block_hash(number.try_into().unwrap()).await?).await?)) + } + + fn dust(coin: Coin) -> Amount { + assert_eq!(coin, Coin::Bitcoin); + // 10,000 satoshis, or $5 if 1 BTC = 50,000 USD + Amount(10_000) + } + + async fn cost_to_aggregate( + &self, + coin: Coin, + _reference_block: &Self::Block, + ) -> Result { + assert_eq!(coin, Coin::Bitcoin); + // TODO + Ok(Amount(0)) + } +} diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 2c56db35..4f30f5e7 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -71,8 +71,14 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { /// The amount of blocks to process in parallel. /// - /// This must be at least `1`. This value should be the worst-case latency to handle a block - /// divided by the expected block time. + /// This must be at least `1`. This value MUST be at least the worst-case latency to publish a + /// Batch for a block divided by the expected block time. Setting this value too low will risk a + /// backlog forming. Setting this value too high will only delay key rotation and forwarded + /// outputs. + // The latency to publish a Batch for a block is the latency of a provided transaction + // (1 minute), the latency of a signing protocol (1 minute), the latency of Serai to finalize a + // block (1 minute), and the latency to cosign such a block (5 minutes for the cosign distance + // plus 1 minute). Accordingly, this should be at least ~30 minutes, ideally 60 minutes. const WINDOW_LENGTH: u64; /// The amount of blocks which will occur in 10 minutes (approximate). 
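The finality rule the `ScannerFeed` above implements is worth spelling out. A minimal
standalone sketch of the arithmetic (illustrative only, not part of the patch; it assumes
nothing beyond `u64` block heights):

    // The finalized tip is the chain tip less CONFIRMATIONS; None means no block has
    // accrued enough confirmations yet.
    fn finalized_tip(latest_block_number: u64, confirmations: u64) -> Option<u64> {
      latest_block_number.checked_sub(confirmations)
    }

    // e.g. with CONFIRMATIONS = 6, a chain tip at height 100 finalizes height 94, while a
    // chain under 6 blocks long has no finalized block at all (hence the error path above).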
From 017aab22586a29c49da8012d5fe6249a17983fb3 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 11 Sep 2024 00:01:40 -0400 Subject: [PATCH 096/368] Satisfy Scheduler for Bitcoin --- Cargo.lock | 3 +- networks/bitcoin/src/wallet/send.rs | 14 +- networks/bitcoin/tests/wallet.rs | 6 +- processor/bitcoin/Cargo.toml | 2 + processor/bitcoin/src/lib.rs | 334 +----------------- processor/bitcoin/src/output.rs | 2 +- processor/bitcoin/src/scanner_feed.rs | 34 +- processor/bitcoin/src/scheduler.rs | 177 ++++++++++ processor/bitcoin/src/transaction.rs | 19 +- .../scheduler/utxo/primitives/Cargo.toml | 2 - .../scheduler/utxo/primitives/src/lib.rs | 1 - processor/scheduler/utxo/standard/src/lib.rs | 3 + .../utxo/transaction-chaining/src/lib.rs | 5 +- 13 files changed, 245 insertions(+), 357 deletions(-) create mode 100644 processor/bitcoin/src/scheduler.rs diff --git a/Cargo.lock b/Cargo.lock index b35cda50..7ae3a0f2 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8143,6 +8143,8 @@ dependencies = [ "serai-processor-primitives", "serai-processor-scanner", "serai-processor-scheduler-primitives", + "serai-processor-transaction-chaining-scheduler", + "serai-processor-utxo-scheduler-primitives", "tokio", "zalloc", ] @@ -8809,7 +8811,6 @@ dependencies = [ name = "serai-processor-utxo-scheduler-primitives" version = "0.1.0" dependencies = [ - "async-trait", "borsh", "serai-primitives", "serai-processor-primitives", diff --git a/networks/bitcoin/src/wallet/send.rs b/networks/bitcoin/src/wallet/send.rs index ccb020b2..276f536e 100644 --- a/networks/bitcoin/src/wallet/send.rs +++ b/networks/bitcoin/src/wallet/send.rs @@ -44,7 +44,7 @@ pub enum TransactionError { #[error("fee was too low to pass the default minimum fee rate")] TooLowFee, #[error("not enough funds for these payments")] - NotEnoughFunds, + NotEnoughFunds { inputs: u64, payments: u64, fee: u64 }, #[error("transaction was too large")] TooLargeTransaction, } @@ -213,7 +213,11 @@ impl SignableTransaction { } if input_sat < (payment_sat + needed_fee) { - Err(TransactionError::NotEnoughFunds)?; + Err(TransactionError::NotEnoughFunds { + inputs: input_sat, + payments: payment_sat, + fee: needed_fee, + })?; } // If there's a change address, check if there's change to give it @@ -258,9 +262,9 @@ impl SignableTransaction { res } - /// Returns the outputs this transaction will create. - pub fn outputs(&self) -> &[TxOut] { - &self.tx.output + /// Returns the transaction, sans witness, this will create if signed. + pub fn transaction(&self) -> &Transaction { + &self.tx } /// Create a multisig machine for this transaction. diff --git a/networks/bitcoin/tests/wallet.rs b/networks/bitcoin/tests/wallet.rs index a290122b..45371414 100644 --- a/networks/bitcoin/tests/wallet.rs +++ b/networks/bitcoin/tests/wallet.rs @@ -195,10 +195,10 @@ async_sequential! { Err(TransactionError::TooLowFee), ); - assert_eq!( + assert!(matches!( SignableTransaction::new(inputs.clone(), &[(addr(), inputs[0].value() * 2)], None, None, FEE), - Err(TransactionError::NotEnoughFunds), - ); + Err(TransactionError::NotEnoughFunds { .. 
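+        // (the variant now carries `inputs`, `payments`, and `fee`; this test only checks the kind)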
}), + )); assert_eq!( SignableTransaction::new(inputs, &vec![(addr(), 1000); 10000], None, None, FEE), diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml index ff14890e..91813bac 100644 --- a/processor/bitcoin/Cargo.toml +++ b/processor/bitcoin/Cargo.toml @@ -45,6 +45,8 @@ messages = { package = "serai-processor-messages", path = "../messages" } primitives = { package = "serai-processor-primitives", path = "../primitives" } scheduler = { package = "serai-processor-scheduler-primitives", path = "../scheduler/primitives" } scanner = { package = "serai-processor-scanner", path = "../scanner" } +utxo-scheduler = { package = "serai-processor-utxo-scheduler-primitives", path = "../scheduler/utxo/primitives" } +transaction-chaining-scheduler = { package = "serai-processor-transaction-chaining-scheduler", path = "../scheduler/utxo/transaction-chaining" } message-queue = { package = "serai-message-queue", path = "../../message-queue" } diff --git a/processor/bitcoin/src/lib.rs b/processor/bitcoin/src/lib.rs index bba8629e..cbf65093 100644 --- a/processor/bitcoin/src/lib.rs +++ b/processor/bitcoin/src/lib.rs @@ -9,15 +9,14 @@ static ALLOCATOR: zalloc::ZeroizingAlloc = // Internal utilities for scanning transactions mod scan; -// Output trait satisfaction +// Primitive trait satisfactions mod output; -// Transaction/SignableTransaction/Eventuality trait satisfaction mod transaction; -// Block trait satisfaction mod block; -// ScannerFeed trait satisfaction +// App-logic trait satisfactions mod scanner_feed; +mod scheduler; pub(crate) fn hash_bytes(hash: bitcoin_serai::bitcoin::hashes::sha256d::Hash) -> [u8; 32] { use bitcoin_serai::bitcoin::hashes::Hash; @@ -28,21 +27,6 @@ pub(crate) fn hash_bytes(hash: bitcoin_serai::bitcoin::hashes::sha256d::Hash) -> } /* -use std::{sync::LazyLock, time::Duration, io, collections::HashMap}; - -use async_trait::async_trait; - -use scale::{Encode, Decode}; - -use ciphersuite::group::ff::PrimeField; -use k256::{ProjectivePoint, Scalar}; -use frost::{ - curve::{Curve, Secp256k1}, - ThresholdKeys, -}; - -use tokio::time::sleep; - use bitcoin_serai::{ bitcoin::{ hashes::Hash as HashTrait, @@ -111,19 +95,6 @@ impl TransactionTrait for Transaction { #[async_trait] impl BlockTrait for Block { - type Id = [u8; 32]; - fn id(&self) -> Self::Id { - let mut hash = *self.block_hash().as_raw_hash().as_byte_array(); - hash.reverse(); - hash - } - - fn parent(&self) -> Self::Id { - let mut hash = *self.header.prev_blockhash.as_raw_hash().as_byte_array(); - hash.reverse(); - hash - } - async fn time(&self, rpc: &Bitcoin) -> u64 { // Use the network median time defined in BIP-0113 since the in-block time isn't guaranteed to // be monotonic @@ -152,51 +123,6 @@ impl BlockTrait for Block { } } -const KEY_DST: &[u8] = b"Serai Bitcoin Output Offset"; -static BRANCH_OFFSET: OnceLock = OnceLock::new(); -static CHANGE_OFFSET: OnceLock = OnceLock::new(); -static FORWARD_OFFSET: OnceLock = OnceLock::new(); - -// Always construct the full scanner in order to ensure there's no collisions -fn scanner( - key: ProjectivePoint, -) -> (Scanner, HashMap, HashMap, OutputType>) { - let mut scanner = Scanner::new(key).unwrap(); - let mut offsets = HashMap::from([(OutputType::External, Scalar::ZERO)]); - - let zero = Scalar::ZERO.to_repr(); - let zero_ref: &[u8] = zero.as_ref(); - let mut kinds = HashMap::from([(zero_ref.to_vec(), OutputType::External)]); - - let mut register = |kind, offset| { - let offset = scanner.register_offset(offset).expect("offset collision"); - 
offsets.insert(kind, offset); - - let offset = offset.to_repr(); - let offset_ref: &[u8] = offset.as_ref(); - kinds.insert(offset_ref.to_vec(), kind); - }; - - register( - OutputType::Branch, - *BRANCH_OFFSET.get_or_init(|| Secp256k1::hash_to_F(KEY_DST, b"branch")), - ); - register( - OutputType::Change, - *CHANGE_OFFSET.get_or_init(|| Secp256k1::hash_to_F(KEY_DST, b"change")), - ); - register( - OutputType::Forwarded, - *FORWARD_OFFSET.get_or_init(|| Secp256k1::hash_to_F(KEY_DST, b"forward")), - ); - - (scanner, offsets, kinds) -} - -#[derive(Clone, Debug)] -pub struct Bitcoin { - pub(crate) rpc: Rpc, -} // Shim required for testing/debugging purposes due to generic arguments also necessitating trait // bounds impl PartialEq for Bitcoin { @@ -355,20 +281,6 @@ impl Bitcoin { } } -// Bitcoin has a max weight of 400,000 (MAX_STANDARD_TX_WEIGHT) -// A non-SegWit TX will have 4 weight units per byte, leaving a max size of 100,000 bytes -// While our inputs are entirely SegWit, such fine tuning is not necessary and could create -// issues in the future (if the size decreases or we misevaluate it) -// It also offers a minimal amount of benefit when we are able to logarithmically accumulate -// inputs -// For 128-byte inputs (36-byte output specification, 64-byte signature, whatever overhead) and -// 64-byte outputs (40-byte script, 8-byte amount, whatever overhead), they together take up 192 -// bytes -// 100,000 / 192 = 520 -// 520 * 192 leaves 160 bytes of overhead for the transaction structure itself -const MAX_INPUTS: usize = 520; -const MAX_OUTPUTS: usize = 520; - fn address_from_key(key: ProjectivePoint) -> Address { Address::new( p2tr_script_buf(key).expect("creating address from key which isn't properly tweaked"), @@ -378,59 +290,8 @@ fn address_from_key(key: ProjectivePoint) -> Address { #[async_trait] impl Network for Bitcoin { - type Curve = Secp256k1; - - type Transaction = Transaction; - type Block = Block; - - type Output = Output; - type SignableTransaction = SignableTransaction; - type Eventuality = Eventuality; - type TransactionMachine = TransactionMachine; - type Scheduler = Scheduler; - type Address = Address; - - const NETWORK: NetworkId = NetworkId::Bitcoin; - const ID: &'static str = "Bitcoin"; - const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize = 600; - const CONFIRMATIONS: usize = 6; - - /* - A Taproot input is: - - 36 bytes for the OutPoint - - 0 bytes for the script (+1 byte for the length) - - 4 bytes for the sequence - Per https://developer.bitcoin.org/reference/transactions.html#raw-transaction-format - - There's also: - - 1 byte for the witness length - - 1 byte for the signature length - - 64 bytes for the signature - which have the SegWit discount. - - (4 * (36 + 1 + 4)) + (1 + 1 + 64) = 164 + 66 = 230 weight units - 230 ceil div 4 = 57 vbytes - - Bitcoin defines multiple minimum feerate constants *per kilo-vbyte*. Currently, these are: - - 1000 sat/kilo-vbyte for a transaction to be relayed - - Each output's value must exceed the fee of the TX spending it at 3000 sat/kilo-vbyte - The DUST constant needs to be determined by the latter. - Since these are solely relay rules, and may be raised, we require all outputs be spendable - under a 5000 sat/kilo-vbyte fee rate. - - 5000 sat/kilo-vbyte = 5 sat/vbyte - 5 * 57 = 285 sats/spent-output - - Even if an output took 100 bytes (it should be just ~29-43), taking 400 weight units, adding - 100 vbytes, tripling the transaction size, then the sats/tx would be < 1000. 
- - Increase by an order of magnitude, in order to ensure this is actually worth our time, and we - get 10,000 satoshis. - */ - const DUST: u64 = 10_000; - // 2 inputs should be 2 * 230 = 460 weight units // The output should be ~36 bytes, or 144 weight units // The overhead should be ~20 bytes at most, or 80 weight units @@ -467,195 +328,6 @@ impl Network for Bitcoin { Some(address_from_key(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Forwarded]))) } - async fn get_latest_block_number(&self) -> Result { - self.rpc.get_latest_block_number().await.map_err(|_| NetworkError::ConnectionError) - } - - async fn get_block(&self, number: usize) -> Result { - let block_hash = - self.rpc.get_block_hash(number).await.map_err(|_| NetworkError::ConnectionError)?; - self.rpc.get_block(&block_hash).await.map_err(|_| NetworkError::ConnectionError) - } - - async fn get_outputs(&self, block: &Self::Block, key: ProjectivePoint) -> Vec { - let (scanner, _, kinds) = scanner(key); - - let mut outputs = vec![]; - // Skip the coinbase transaction which is burdened by maturity - for tx in &block.txdata[1 ..] { - for output in scanner.scan_transaction(tx) { - let offset_repr = output.offset().to_repr(); - let offset_repr_ref: &[u8] = offset_repr.as_ref(); - let kind = kinds[offset_repr_ref]; - - let output = Output { kind, presumed_origin: None, output, data: vec![] }; - assert_eq!(output.tx_id(), tx.id()); - outputs.push(output); - } - - if outputs.is_empty() { - continue; - } - - // populate the outputs with the origin and data - let presumed_origin = { - // This may identify the P2WSH output *embedding the InInstruction* as the origin, which - // would be a bit trickier to spend that a traditional output... - // There's no risk of the InInstruction going missing as it'd already be on-chain though - // We *could* parse out the script *without the InInstruction prefix* and declare that the - // origin - // TODO - let spent_output = { - let input = &tx.input[0]; - let mut spent_tx = input.previous_output.txid.as_raw_hash().to_byte_array(); - spent_tx.reverse(); - let mut tx; - while { - tx = self.rpc.get_transaction(&spent_tx).await; - tx.is_err() - } { - log::error!("couldn't get transaction from bitcoin node: {tx:?}"); - sleep(Duration::from_secs(5)).await; - } - tx.unwrap().output.swap_remove(usize::try_from(input.previous_output.vout).unwrap()) - }; - Address::new(spent_output.script_pubkey) - }; - let data = Self::extract_serai_data(tx); - for output in &mut outputs { - if output.kind == OutputType::External { - output.data.clone_from(&data); - } - output.presumed_origin.clone_from(&presumed_origin); - } - } - - outputs - } - - async fn get_eventuality_completions( - &self, - eventualities: &mut EventualitiesTracker, - block: &Self::Block, - ) -> HashMap<[u8; 32], (usize, [u8; 32], Transaction)> { - let mut res = HashMap::new(); - if eventualities.map.is_empty() { - return res; - } - - fn check_block( - eventualities: &mut EventualitiesTracker, - block: &Block, - res: &mut HashMap<[u8; 32], (usize, [u8; 32], Transaction)>, - ) { - for tx in &block.txdata[1 ..] 
{ - if let Some((plan, _)) = eventualities.map.remove(tx.id().as_slice()) { - res.insert(plan, (eventualities.block_number, tx.id(), tx.clone())); - } - } - - eventualities.block_number += 1; - } - - let this_block_hash = block.id(); - let this_block_num = (async { - loop { - match self.rpc.get_block_number(&this_block_hash).await { - Ok(number) => return number, - Err(e) => { - log::error!("couldn't get the block number for {}: {}", hex::encode(this_block_hash), e) - } - } - sleep(Duration::from_secs(60)).await; - } - }) - .await; - - for block_num in (eventualities.block_number + 1) .. this_block_num { - let block = { - let mut block; - while { - block = self.get_block(block_num).await; - block.is_err() - } { - log::error!("couldn't get block {}: {}", block_num, block.err().unwrap()); - sleep(Duration::from_secs(60)).await; - } - block.unwrap() - }; - - check_block(eventualities, &block, &mut res); - } - - // Also check the current block - check_block(eventualities, block, &mut res); - assert_eq!(eventualities.block_number, this_block_num); - - res - } - - async fn needed_fee( - &self, - block_number: usize, - inputs: &[Output], - payments: &[Payment], - change: &Option
, - ) -> Result, NetworkError> { - Ok( - self - .make_signable_transaction(block_number, inputs, payments, change, true) - .await? - .map(|signable| signable.needed_fee()), - ) - } - - async fn signable_transaction( - &self, - block_number: usize, - _plan_id: &[u8; 32], - _key: ProjectivePoint, - inputs: &[Output], - payments: &[Payment], - change: &Option
, - (): &(), - ) -> Result, NetworkError> { - Ok(self.make_signable_transaction(block_number, inputs, payments, change, false).await?.map( - |signable| { - let eventuality = Eventuality(signable.txid()); - (SignableTransaction { actual: signable }, eventuality) - }, - )) - } - - async fn attempt_sign( - &self, - keys: ThresholdKeys, - transaction: Self::SignableTransaction, - ) -> Result { - Ok(transaction.actual.clone().multisig(&keys).expect("used the wrong keys")) - } - - async fn publish_completion(&self, tx: &Transaction) -> Result<(), NetworkError> { - match self.rpc.send_raw_transaction(tx).await { - Ok(_) => (), - Err(RpcError::ConnectionError) => Err(NetworkError::ConnectionError)?, - // TODO: Distinguish already in pool vs double spend (other signing attempt succeeded) vs - // invalid transaction - Err(e) => panic!("failed to publish TX {}: {e}", tx.compute_txid()), - } - Ok(()) - } - - async fn confirm_completion( - &self, - eventuality: &Self::Eventuality, - _: &EmptyClaim, - ) -> Result, NetworkError> { - Ok(Some( - self.rpc.get_transaction(&eventuality.0).await.map_err(|_| NetworkError::ConnectionError)?, - )) - } - #[cfg(test)] async fn get_block_number(&self, id: &[u8; 32]) -> usize { self.rpc.get_block_number(id).await.unwrap() diff --git a/processor/bitcoin/src/output.rs b/processor/bitcoin/src/output.rs index a783792d..dc541350 100644 --- a/processor/bitcoin/src/output.rs +++ b/processor/bitcoin/src/output.rs @@ -47,7 +47,7 @@ impl AsMut<[u8]> for OutputId { pub(crate) struct Output { kind: OutputType, presumed_origin: Option
, - output: WalletOutput, + pub(crate) output: WalletOutput, data: Vec, } diff --git a/processor/bitcoin/src/scanner_feed.rs b/processor/bitcoin/src/scanner_feed.rs index 73265bfe..5a3c491c 100644 --- a/processor/bitcoin/src/scanner_feed.rs +++ b/processor/bitcoin/src/scanner_feed.rs @@ -46,7 +46,39 @@ impl ScannerFeed for Rpc { fn dust(coin: Coin) -> Amount { assert_eq!(coin, Coin::Bitcoin); - // 10,000 satoshis, or $5 if 1 BTC = 50,000 USD + + /* + A Taproot input is: + - 36 bytes for the OutPoint + - 0 bytes for the script (+1 byte for the length) + - 4 bytes for the sequence + Per https://developer.bitcoin.org/reference/transactions.html#raw-transaction-format + + There's also: + - 1 byte for the witness length + - 1 byte for the signature length + - 64 bytes for the signature + which have the SegWit discount. + + (4 * (36 + 1 + 4)) + (1 + 1 + 64) = 164 + 66 = 230 weight units + 230 ceil div 4 = 57 vbytes + + Bitcoin defines multiple minimum feerate constants *per kilo-vbyte*. Currently, these are: + - 1000 sat/kilo-vbyte for a transaction to be relayed + - Each output's value must exceed the fee of the TX spending it at 3000 sat/kilo-vbyte + The DUST constant needs to be determined by the latter. + Since these are solely relay rules, and may be raised, we require all outputs be spendable + under a 5000 sat/kilo-vbyte fee rate. + + 5000 sat/kilo-vbyte = 5 sat/vbyte + 5 * 57 = 285 sats/spent-output + + Even if an output took 100 bytes (it should be just ~29-43), taking 400 weight units, adding + 100 vbytes, tripling the transaction size, then the sats/tx would be < 1000. + + Increase by an order of magnitude, in order to ensure this is actually worth our time, and we + get 10,000 satoshis. This is $5 if 1 BTC = 50,000 USD. + */ Amount(10_000) } diff --git a/processor/bitcoin/src/scheduler.rs b/processor/bitcoin/src/scheduler.rs new file mode 100644 index 00000000..0c1debdb --- /dev/null +++ b/processor/bitcoin/src/scheduler.rs @@ -0,0 +1,177 @@ +use ciphersuite::{Ciphersuite, Secp256k1}; + +use bitcoin_serai::{ + bitcoin::ScriptBuf, + wallet::{TransactionError, SignableTransaction as BSignableTransaction, p2tr_script_buf}, +}; + +use serai_client::{ + primitives::{Coin, Amount}, + networks::bitcoin::Address, +}; + +use primitives::{OutputType, ReceivedOutput, Payment}; +use scanner::{KeyFor, AddressFor, OutputFor, BlockFor}; +use utxo_scheduler::{PlannedTransaction, TransactionPlanner}; +use transaction_chaining_scheduler::{EffectedReceivedOutputs, Scheduler as GenericScheduler}; + +use crate::{ + scan::{offsets_for_key, scanner}, + output::Output, + transaction::{SignableTransaction, Eventuality}, + scanner_feed::Rpc, +}; + +fn address_from_serai_key(key: ::G, kind: OutputType) -> Address { + let offset = ::G::GENERATOR * offsets_for_key(key)[&kind]; + Address::new( + p2tr_script_buf(key + offset) + .expect("creating address from Serai key which wasn't properly tweaked"), + ) + .expect("couldn't create Serai-representable address for P2TR script") +} + +fn signable_transaction( + fee_per_vbyte: u64, + inputs: Vec>, + payments: Vec>>, + change: Option>, +) -> Result<(SignableTransaction, BSignableTransaction), TransactionError> { + assert!(inputs.len() < Planner::MAX_INPUTS); + assert!((payments.len() + usize::from(u8::from(change.is_some()))) < Planner::MAX_OUTPUTS); + + let inputs = inputs.into_iter().map(|input| input.output).collect::>(); + let payments = payments + .into_iter() + .map(|payment| { + (payment.address().clone(), { + let balance = payment.balance(); + 
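+          // Serai represents Bitcoin with 8 decimals, so this `u64` amount is already
+          // denominated in satoshis and may be passed through to bitcoin-serai as-is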
assert_eq!(balance.coin, Coin::Bitcoin); + balance.amount.0 + }) + }) + .collect::>(); + let change = change.map(Planner::change_address); + + // TODO: ACP output + BSignableTransaction::new( + inputs.clone(), + &payments + .iter() + .cloned() + .map(|(address, amount)| (ScriptBuf::from(address), amount)) + .collect::>(), + change.clone().map(ScriptBuf::from), + None, + fee_per_vbyte, + ) + .map(|bst| (SignableTransaction { inputs, payments, change, fee_per_vbyte }, bst)) +} + +pub(crate) struct Planner; +impl TransactionPlanner> for Planner { + type FeeRate = u64; + + type SignableTransaction = SignableTransaction; + + /* + Bitcoin has a max weight of 400,000 (MAX_STANDARD_TX_WEIGHT). + + A non-SegWit TX will have 4 weight units per byte, leaving a max size of 100,000 bytes. While + our inputs are entirely SegWit, such fine tuning is not necessary and could create issues in + the future (if the size decreases or we misevaluate it). It also offers a minimal amount of + benefit when we are able to logarithmically accumulate inputs/fulfill payments. + + For 128-byte inputs (36-byte output specification, 64-byte signature, whatever overhead) and + 64-byte outputs (40-byte script, 8-byte amount, whatever overhead), they together take up 192 + bytes. + + 100,000 / 192 = 520 + 520 * 192 leaves 160 bytes of overhead for the transaction structure itself. + */ + const MAX_INPUTS: usize = 520; + // We always reserve one output to create an anyone-can-spend output enabling anyone to use CPFP + // to unstick any transactions which had too low of a fee. + const MAX_OUTPUTS: usize = 519; + + fn fee_rate(block: &BlockFor, coin: Coin) -> Self::FeeRate { + assert_eq!(coin, Coin::Bitcoin); + // TODO + 1 + } + + fn branch_address(key: KeyFor) -> AddressFor { + address_from_serai_key(key, OutputType::Branch) + } + fn change_address(key: KeyFor) -> AddressFor { + address_from_serai_key(key, OutputType::Change) + } + fn forwarding_address(key: KeyFor) -> AddressFor { + address_from_serai_key(key, OutputType::Forwarded) + } + + fn calculate_fee( + fee_rate: Self::FeeRate, + inputs: Vec>, + payments: Vec>>, + change: Option>, + ) -> Amount { + match signable_transaction(fee_rate, inputs, payments, change) { + Ok(tx) => Amount(tx.1.needed_fee()), + Err( + TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment, + ) => panic!("malformed arguments to calculate_fee"), + // No data, we have a minimum fee rate, we checked the amount of inputs/outputs + Err( + TransactionError::TooMuchData | + TransactionError::TooLowFee | + TransactionError::TooLargeTransaction, + ) => unreachable!(), + Err(TransactionError::NotEnoughFunds { fee, .. 
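+      // `NotEnoughFunds` carrying the fee it would have required lets `calculate_fee` still
+      // answer even when the inputs don't actually cover the payments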
}) => Amount(fee), + } + } + + fn plan( + fee_rate: Self::FeeRate, + inputs: Vec>, + payments: Vec>>, + change: Option>, + ) -> PlannedTransaction> { + let key = inputs.first().unwrap().key(); + for input in &inputs { + assert_eq!(key, input.key()); + } + + let singular_spent_output = (inputs.len() == 1).then(|| inputs[0].id()); + match signable_transaction(fee_rate, inputs, payments, change) { + Ok(tx) => PlannedTransaction { + signable: tx.0, + eventuality: Eventuality { txid: tx.1.txid(), singular_spent_output }, + auxilliary: EffectedReceivedOutputs({ + let tx = tx.1.transaction(); + let scanner = scanner(key); + + let mut res = vec![]; + for output in scanner.scan_transaction(tx) { + res.push(Output::new(key, tx, output)); + } + res + }), + }, + Err( + TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment, + ) => panic!("malformed arguments to plan"), + // No data, we have a minimum fee rate, we checked the amount of inputs/outputs + Err( + TransactionError::TooMuchData | + TransactionError::TooLowFee | + TransactionError::TooLargeTransaction, + ) => unreachable!(), + Err(TransactionError::NotEnoughFunds { .. }) => { + panic!("plan called for a transaction without enough funds") + } + } + } +} + +pub(crate) type Scheduler = GenericScheduler; diff --git a/processor/bitcoin/src/transaction.rs b/processor/bitcoin/src/transaction.rs index ef48d3f0..f529b178 100644 --- a/processor/bitcoin/src/transaction.rs +++ b/processor/bitcoin/src/transaction.rs @@ -48,11 +48,10 @@ impl scheduler::Transaction for Transaction { #[derive(Clone, Debug)] pub(crate) struct SignableTransaction { - inputs: Vec, - payments: Vec<(Address, u64)>, - change: Option
, - data: Option>, - fee_per_vbyte: u64, + pub(crate) inputs: Vec, + pub(crate) payments: Vec<(Address, u64)>, + pub(crate) change: Option
, + pub(crate) fee_per_vbyte: u64, } impl SignableTransaction { @@ -66,7 +65,7 @@ impl SignableTransaction { .map(|(address, amount)| (ScriptBuf::from(address), amount)) .collect::>(), self.change.map(ScriptBuf::from), - self.data, + None, self.fee_per_vbyte, ) } @@ -111,10 +110,9 @@ impl scheduler::SignableTransaction for SignableTransaction { let payments = <_>::deserialize_reader(reader)?; let change = <_>::deserialize_reader(reader)?; - let data = <_>::deserialize_reader(reader)?; let fee_per_vbyte = <_>::deserialize_reader(reader)?; - Ok(Self { inputs, payments, change, data, fee_per_vbyte }) + Ok(Self { inputs, payments, change, fee_per_vbyte }) } fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { writer.write_all(&u32::try_from(self.inputs.len()).unwrap().to_le_bytes())?; @@ -124,7 +122,6 @@ impl scheduler::SignableTransaction for SignableTransaction { self.payments.serialize(writer)?; self.change.serialize(writer)?; - self.data.serialize(writer)?; self.fee_per_vbyte.serialize(writer)?; Ok(()) @@ -141,8 +138,8 @@ impl scheduler::SignableTransaction for SignableTransaction { #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub(crate) struct Eventuality { - txid: [u8; 32], - singular_spent_output: Option, + pub(crate) txid: [u8; 32], + pub(crate) singular_spent_output: Option, } impl primitives::Eventuality for Eventuality { diff --git a/processor/scheduler/utxo/primitives/Cargo.toml b/processor/scheduler/utxo/primitives/Cargo.toml index 85935ae0..80b1f22e 100644 --- a/processor/scheduler/utxo/primitives/Cargo.toml +++ b/processor/scheduler/utxo/primitives/Cargo.toml @@ -17,8 +17,6 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -async-trait = { version = "0.1", default-features = false } - borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } serai-primitives = { path = "../../../../substrate/primitives", default-features = false, features = ["std"] } diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs index 2f51e9e0..e48221a1 100644 --- a/processor/scheduler/utxo/primitives/src/lib.rs +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -39,7 +39,6 @@ pub struct AmortizePlannedTransaction: 'static + Send + Sync { /// The type representing a fee rate to use for transactions. type FeeRate: Clone + Copy; diff --git a/processor/scheduler/utxo/standard/src/lib.rs b/processor/scheduler/utxo/standard/src/lib.rs index 10e40f15..3ae855e7 100644 --- a/processor/scheduler/utxo/standard/src/lib.rs +++ b/processor/scheduler/utxo/standard/src/lib.rs @@ -203,6 +203,9 @@ impl> Scheduler { // Fetch the operating costs/outputs let mut operating_costs = Db::::operating_costs(txn, coin).0; let outputs = Db::::outputs(txn, key, coin).unwrap(); + if outputs.is_empty() { + continue; + } // Fetch the fulfillable payments let payments = Self::fulfillable_payments( diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index d11e4ac2..e43f5fec 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -23,7 +23,7 @@ mod db; use db::Db; /// The outputs which will be effected by a PlannedTransaction and received by Serai. 
-pub struct EffectedReceivedOutputs(Vec>); +pub struct EffectedReceivedOutputs(pub Vec>); /// A scheduler of transactions for networks premised on the UTXO model which support /// transaction chaining. @@ -179,6 +179,9 @@ impl>> Sched // Fetch the operating costs/outputs let mut operating_costs = Db::::operating_costs(txn, coin).0; let outputs = Db::::outputs(txn, key, coin).unwrap(); + if outputs.is_empty() { + continue; + } // Fetch the fulfillable payments let payments = Self::fulfillable_payments( From c988b7cdb062c0880386c2b1481d21d5dee743fa Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 11 Sep 2024 00:48:52 -0400 Subject: [PATCH 097/368] Bitcoin TransactionPublisher --- Cargo.lock | 1 + processor/bitcoin/Cargo.toml | 1 + processor/bitcoin/src/lib.rs | 2 +- processor/bitcoin/src/{scanner_feed.rs => rpc.rs} | 15 ++++++++++++++- processor/bitcoin/src/scheduler.rs | 2 +- processor/bitcoin/src/transaction.rs | 2 +- 6 files changed, 19 insertions(+), 4 deletions(-) rename processor/bitcoin/src/{scanner_feed.rs => rpc.rs} (88%) diff --git a/Cargo.lock b/Cargo.lock index 7ae3a0f2..1839cc98 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8143,6 +8143,7 @@ dependencies = [ "serai-processor-primitives", "serai-processor-scanner", "serai-processor-scheduler-primitives", + "serai-processor-signers", "serai-processor-transaction-chaining-scheduler", "serai-processor-utxo-scheduler-primitives", "tokio", diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml index 91813bac..54ace26f 100644 --- a/processor/bitcoin/Cargo.toml +++ b/processor/bitcoin/Cargo.toml @@ -47,6 +47,7 @@ scheduler = { package = "serai-processor-scheduler-primitives", path = "../sched scanner = { package = "serai-processor-scanner", path = "../scanner" } utxo-scheduler = { package = "serai-processor-utxo-scheduler-primitives", path = "../scheduler/utxo/primitives" } transaction-chaining-scheduler = { package = "serai-processor-transaction-chaining-scheduler", path = "../scheduler/utxo/transaction-chaining" } +signers = { package = "serai-processor-signers", path = "../signers" } message-queue = { package = "serai-message-queue", path = "../../message-queue" } diff --git a/processor/bitcoin/src/lib.rs b/processor/bitcoin/src/lib.rs index cbf65093..281b7358 100644 --- a/processor/bitcoin/src/lib.rs +++ b/processor/bitcoin/src/lib.rs @@ -15,7 +15,7 @@ mod transaction; mod block; // App-logic trait satisfactions -mod scanner_feed; +mod rpc; mod scheduler; pub(crate) fn hash_bytes(hash: bitcoin_serai::bitcoin::hashes::sha256d::Hash) -> [u8; 32] { diff --git a/processor/bitcoin/src/scanner_feed.rs b/processor/bitcoin/src/rpc.rs similarity index 88% rename from processor/bitcoin/src/scanner_feed.rs rename to processor/bitcoin/src/rpc.rs index 5a3c491c..8af82121 100644 --- a/processor/bitcoin/src/scanner_feed.rs +++ b/processor/bitcoin/src/rpc.rs @@ -3,8 +3,12 @@ use bitcoin_serai::rpc::{RpcError, Rpc as BRpc}; use serai_client::primitives::{NetworkId, Coin, Amount}; use scanner::ScannerFeed; +use signers::TransactionPublisher; -use crate::block::{BlockHeader, Block}; +use crate::{ + transaction::Transaction, + block::{BlockHeader, Block}, +}; #[derive(Clone)] pub(crate) struct Rpc(BRpc); @@ -92,3 +96,12 @@ impl ScannerFeed for Rpc { Ok(Amount(0)) } } + +#[async_trait::async_trait] +impl TransactionPublisher for Rpc { + type EphemeralError = RpcError; + + async fn publish(&self, tx: Transaction) -> Result<(), Self::EphemeralError> { + self.0.send_raw_transaction(&tx.0).await.map(|_| ()) + } +} diff --git 
a/processor/bitcoin/src/scheduler.rs b/processor/bitcoin/src/scheduler.rs
index 0c1debdb..c48f9a69 100644
--- a/processor/bitcoin/src/scheduler.rs
+++ b/processor/bitcoin/src/scheduler.rs
@@ -19,7 +19,7 @@ use crate::{
   scan::{offsets_for_key, scanner},
   output::Output,
   transaction::{SignableTransaction, Eventuality},
-  scanner_feed::Rpc,
+  rpc::Rpc,
 };

 fn address_from_serai_key(key: <Secp256k1 as Ciphersuite>::G, kind: OutputType) -> Address {
diff --git a/processor/bitcoin/src/transaction.rs b/processor/bitcoin/src/transaction.rs
index f529b178..5fca0b91 100644
--- a/processor/bitcoin/src/transaction.rs
+++ b/processor/bitcoin/src/transaction.rs
@@ -24,7 +24,7 @@ use crate::output::OutputId;

 #[derive(Clone, Debug)]
-pub(crate) struct Transaction(BTransaction);
+pub(crate) struct Transaction(pub(crate) BTransaction);

 impl From<BTransaction> for Transaction {
   fn from(tx: BTransaction) -> Self {

From 4cb838e248296af148f0ddd04b85fd45c9873e19 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Wed, 11 Sep 2024 00:52:01 -0400
Subject: [PATCH 098/368] Bitcoin processor lib.rs -> main.rs

---
 processor/bitcoin/src/{lib.rs => main.rs} | 3 +++
 1 file changed, 3 insertions(+)
 rename processor/bitcoin/src/{lib.rs => main.rs} (99%)

diff --git a/processor/bitcoin/src/lib.rs b/processor/bitcoin/src/main.rs
similarity index 99%
rename from processor/bitcoin/src/lib.rs
rename to processor/bitcoin/src/main.rs
index 281b7358..653e8b5a 100644
--- a/processor/bitcoin/src/lib.rs
+++ b/processor/bitcoin/src/main.rs
@@ -26,6 +26,9 @@ pub(crate) fn hash_bytes(hash: bitcoin_serai::bitcoin::hashes::sha256d::Hash) ->
   res
 }

+#[tokio::main]
+async fn main() {}
+
 /*
 use bitcoin_serai::{
   bitcoin::{

From 93c7d06684ec6592cb11693d76211055c35844a4 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Wed, 11 Sep 2024 02:46:18 -0400
Subject: [PATCH 099/368] Implement presumed_origin

Before we yield a block for scanning, we save all of the contained script
public keys. Then, when we want the address credited for creating an output,
we read the script public key of the spent output from the database.

Fixes #559.
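A minimal sketch of the lookup this enables (illustrative names only; the actual schema and
the task populating it are in the diff below):

    // (txid, vout) -> script public key, saved while indexing finalized blocks
    fn origin_script(
      index: &std::collections::HashMap<([u8; 32], u32), Vec<u8>>,
      spent: ([u8; 32], u32),
    ) -> Option<Vec<u8>> {
      index.get(&spent).cloned()
    }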
--- processor/bitcoin/src/block.rs | 21 +++++--- processor/bitcoin/src/db.rs | 8 +++ processor/bitcoin/src/main.rs | 4 ++ processor/bitcoin/src/output.rs | 27 +++++++++- processor/bitcoin/src/rpc.rs | 27 ++++++---- processor/bitcoin/src/scan.rs | 32 +++++------- processor/bitcoin/src/scheduler.rs | 64 +++++++++++++++--------- processor/bitcoin/src/txindex.rs | 80 ++++++++++++++++++++++++++++++ 8 files changed, 200 insertions(+), 63 deletions(-) create mode 100644 processor/bitcoin/src/db.rs create mode 100644 processor/bitcoin/src/txindex.rs diff --git a/processor/bitcoin/src/block.rs b/processor/bitcoin/src/block.rs index 24cccec9..8221c8b5 100644 --- a/processor/bitcoin/src/block.rs +++ b/processor/bitcoin/src/block.rs @@ -1,3 +1,4 @@ +use core::fmt; use std::collections::HashMap; use ciphersuite::{Ciphersuite, Secp256k1}; @@ -6,6 +7,7 @@ use bitcoin_serai::bitcoin::block::{Header, Block as BBlock}; use serai_client::networks::bitcoin::Address; +use serai_db::Db; use primitives::{ReceivedOutput, EventualityTracker}; use crate::{hash_bytes, scan::scanner, output::Output, transaction::Eventuality}; @@ -21,11 +23,16 @@ impl primitives::BlockHeader for BlockHeader { } } -#[derive(Clone, Debug)] -pub(crate) struct Block(pub(crate) BBlock); +#[derive(Clone)] +pub(crate) struct Block(pub(crate) D, pub(crate) BBlock); +impl fmt::Debug for Block { + fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { + fmt.debug_struct("Block").field("1", &self.1).finish_non_exhaustive() + } +} #[async_trait::async_trait] -impl primitives::Block for Block { +impl primitives::Block for Block { type Header = BlockHeader; type Key = ::G; @@ -34,7 +41,7 @@ impl primitives::Block for Block { type Eventuality = Eventuality; fn id(&self) -> [u8; 32] { - primitives::BlockHeader::id(&BlockHeader(self.0.header)) + primitives::BlockHeader::id(&BlockHeader(self.1.header)) } fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec { @@ -42,9 +49,9 @@ impl primitives::Block for Block { let mut res = vec![]; // We skip the coinbase transaction as its burdened by maturity - for tx in &self.0.txdata[1 ..] { + for tx in &self.1.txdata[1 ..] { for output in scanner.scan_transaction(tx) { - res.push(Output::new(key, tx, output)); + res.push(Output::new(&self.0, key, tx, output)); } } res @@ -59,7 +66,7 @@ impl primitives::Block for Block { Self::Eventuality, > { let mut res = HashMap::new(); - for tx in &self.0.txdata[1 ..] { + for tx in &self.1.txdata[1 ..] { let id = hash_bytes(tx.compute_txid().to_raw_hash()); if let Some(eventuality) = eventualities.active_eventualities.remove(id.as_slice()) { res.insert(id, eventuality); diff --git a/processor/bitcoin/src/db.rs b/processor/bitcoin/src/db.rs new file mode 100644 index 00000000..1d73ebfe --- /dev/null +++ b/processor/bitcoin/src/db.rs @@ -0,0 +1,8 @@ +use serai_db::{Get, DbTxn, create_db}; + +create_db! 
{ + BitcoinProcessor { + LatestBlockToYieldAsFinalized: () -> u64, + ScriptPubKey: (tx: [u8; 32], vout: u32) -> Vec, + } +} diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index 653e8b5a..941cc0dc 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -18,6 +18,10 @@ mod block; mod rpc; mod scheduler; +// Our custom code for Bitcoin +mod db; +mod txindex; + pub(crate) fn hash_bytes(hash: bitcoin_serai::bitcoin::hashes::sha256d::Hash) -> [u8; 32] { use bitcoin_serai::bitcoin::hashes::Hash; diff --git a/processor/bitcoin/src/output.rs b/processor/bitcoin/src/output.rs index dc541350..2ed03705 100644 --- a/processor/bitcoin/src/output.rs +++ b/processor/bitcoin/src/output.rs @@ -15,6 +15,7 @@ use bitcoin_serai::{ use scale::{Encode, Decode, IoReader}; use borsh::{BorshSerialize, BorshDeserialize}; +use serai_db::Get; use serai_client::{ primitives::{Coin, Amount, Balance, ExternalAddress}, @@ -52,13 +53,35 @@ pub(crate) struct Output { } impl Output { - pub fn new(key: ::G, tx: &Transaction, output: WalletOutput) -> Self { + pub fn new( + getter: &impl Get, + key: ::G, + tx: &Transaction, + output: WalletOutput, + ) -> Self { Self { kind: offsets_for_key(key) .into_iter() .find_map(|(kind, offset)| (offset == output.offset()).then_some(kind)) .expect("scanned output for unknown offset"), - presumed_origin: presumed_origin(tx), + presumed_origin: presumed_origin(getter, tx), + output, + data: extract_serai_data(tx), + } + } + + pub fn new_with_presumed_origin( + key: ::G, + tx: &Transaction, + presumed_origin: Option
, + output: WalletOutput, + ) -> Self { + Self { + kind: offsets_for_key(key) + .into_iter() + .find_map(|(kind, offset)| (offset == output.offset()).then_some(kind)) + .expect("scanned output for unknown offset"), + presumed_origin, output, data: extract_serai_data(tx), } diff --git a/processor/bitcoin/src/rpc.rs b/processor/bitcoin/src/rpc.rs index 8af82121..cafb0ef3 100644 --- a/processor/bitcoin/src/rpc.rs +++ b/processor/bitcoin/src/rpc.rs @@ -2,34 +2,36 @@ use bitcoin_serai::rpc::{RpcError, Rpc as BRpc}; use serai_client::primitives::{NetworkId, Coin, Amount}; +use serai_db::Db; use scanner::ScannerFeed; use signers::TransactionPublisher; use crate::{ + db, transaction::Transaction, block::{BlockHeader, Block}, }; #[derive(Clone)] -pub(crate) struct Rpc(BRpc); +pub(crate) struct Rpc { + pub(crate) db: D, + pub(crate) rpc: BRpc, +} #[async_trait::async_trait] -impl ScannerFeed for Rpc { +impl ScannerFeed for Rpc { const NETWORK: NetworkId = NetworkId::Bitcoin; const CONFIRMATIONS: u64 = 6; const WINDOW_LENGTH: u64 = 6; const TEN_MINUTES: u64 = 1; - type Block = Block; + type Block = Block; type EphemeralError = RpcError; async fn latest_finalized_block_number(&self) -> Result { - u64::try_from(self.0.get_latest_block_number().await?) - .unwrap() - .checked_sub(Self::CONFIRMATIONS) - .ok_or(RpcError::ConnectionError) + db::LatestBlockToYieldAsFinalized::get(&self.db).ok_or(RpcError::ConnectionError) } async fn unchecked_block_header_by_number( @@ -37,7 +39,7 @@ impl ScannerFeed for Rpc { number: u64, ) -> Result<::Header, Self::EphemeralError> { Ok(BlockHeader( - self.0.get_block(&self.0.get_block_hash(number.try_into().unwrap()).await?).await?.header, + self.rpc.get_block(&self.rpc.get_block_hash(number.try_into().unwrap()).await?).await?.header, )) } @@ -45,7 +47,10 @@ impl ScannerFeed for Rpc { &self, number: u64, ) -> Result { - Ok(Block(self.0.get_block(&self.0.get_block_hash(number.try_into().unwrap()).await?).await?)) + Ok(Block( + self.db.clone(), + self.rpc.get_block(&self.rpc.get_block_hash(number.try_into().unwrap()).await?).await?, + )) } fn dust(coin: Coin) -> Amount { @@ -98,10 +103,10 @@ impl ScannerFeed for Rpc { } #[async_trait::async_trait] -impl TransactionPublisher for Rpc { +impl TransactionPublisher for Rpc { type EphemeralError = RpcError; async fn publish(&self, tx: Transaction) -> Result<(), Self::EphemeralError> { - self.0.send_raw_transaction(&tx.0).await.map(|_| ()) + self.rpc.send_raw_transaction(&tx.0).await.map(|_| ()) } } diff --git a/processor/bitcoin/src/scan.rs b/processor/bitcoin/src/scan.rs index 43518b57..b3d3a6dc 100644 --- a/processor/bitcoin/src/scan.rs +++ b/processor/bitcoin/src/scan.rs @@ -13,8 +13,11 @@ use bitcoin_serai::{ use serai_client::networks::bitcoin::Address; +use serai_db::Get; use primitives::OutputType; +use crate::{db, hash_bytes}; + const KEY_DST: &[u8] = b"Serai Bitcoin Processor Key Offset"; static BRANCH_BASE_OFFSET: LazyLock<::F> = LazyLock::new(|| Secp256k1::hash_to_F(KEY_DST, b"branch")); @@ -55,26 +58,17 @@ pub(crate) fn scanner(key: ::G) -> Scanner { scanner } -pub(crate) fn presumed_origin(tx: &Transaction) -> Option
{ - todo!("TODO") - - /* - let spent_output = { - let input = &tx.input[0]; - let mut spent_tx = input.previous_output.txid.as_raw_hash().to_byte_array(); - spent_tx.reverse(); - let mut tx; - while { - tx = self.rpc.get_transaction(&spent_tx).await; - tx.is_err() - } { - log::error!("couldn't get transaction from bitcoin node: {tx:?}"); - sleep(Duration::from_secs(5)).await; +pub(crate) fn presumed_origin(getter: &impl Get, tx: &Transaction) -> Option
{ + for input in &tx.input { + let txid = hash_bytes(input.previous_output.txid.to_raw_hash()); + let vout = input.previous_output.vout; + if let Some(address) = Address::new(ScriptBuf::from_bytes( + db::ScriptPubKey::get(getter, txid, vout).expect("unknown output being spent by input"), + )) { + return Some(address); } - tx.unwrap().output.swap_remove(usize::try_from(input.previous_output.vout).unwrap()) - }; - Address::new(spent_output.script_pubkey) - */ + } + None? } // Checks if this script matches SHA256 PUSH MSG_HASH OP_EQUALVERIFY .. diff --git a/processor/bitcoin/src/scheduler.rs b/processor/bitcoin/src/scheduler.rs index c48f9a69..e225613c 100644 --- a/processor/bitcoin/src/scheduler.rs +++ b/processor/bitcoin/src/scheduler.rs @@ -10,6 +10,7 @@ use serai_client::{ networks::bitcoin::Address, }; +use serai_db::Db; use primitives::{OutputType, ReceivedOutput, Payment}; use scanner::{KeyFor, AddressFor, OutputFor, BlockFor}; use utxo_scheduler::{PlannedTransaction, TransactionPlanner}; @@ -31,17 +32,24 @@ fn address_from_serai_key(key: ::G, kind: OutputType) .expect("couldn't create Serai-representable address for P2TR script") } -fn signable_transaction( +fn signable_transaction( fee_per_vbyte: u64, - inputs: Vec>, - payments: Vec>>, - change: Option>, + inputs: Vec>>, + payments: Vec>>>, + change: Option>>, ) -> Result<(SignableTransaction, BSignableTransaction), TransactionError> { - assert!(inputs.len() < Planner::MAX_INPUTS); - assert!((payments.len() + usize::from(u8::from(change.is_some()))) < Planner::MAX_OUTPUTS); + assert!( + inputs.len() < + , EffectedReceivedOutputs>>>::MAX_INPUTS + ); + assert!( + (payments.len() + usize::from(u8::from(change.is_some()))) < + , EffectedReceivedOutputs>>>::MAX_OUTPUTS + ); let inputs = inputs.into_iter().map(|input| input.output).collect::>(); - let payments = payments + + let mut payments = payments .into_iter() .map(|payment| { (payment.address().clone(), { @@ -51,7 +59,8 @@ fn signable_transaction( }) }) .collect::>(); - let change = change.map(Planner::change_address); + let change = change + .map(, EffectedReceivedOutputs>>>::change_address); // TODO: ACP output BSignableTransaction::new( @@ -69,7 +78,7 @@ fn signable_transaction( } pub(crate) struct Planner; -impl TransactionPlanner> for Planner { +impl TransactionPlanner, EffectedReceivedOutputs>> for Planner { type FeeRate = u64; type SignableTransaction = SignableTransaction; @@ -94,29 +103,29 @@ impl TransactionPlanner> for Planner { // to unstick any transactions which had too low of a fee. 
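  // (hence 519: the 520 calculated above, less the one output reserved for that purpose)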
const MAX_OUTPUTS: usize = 519; - fn fee_rate(block: &BlockFor, coin: Coin) -> Self::FeeRate { + fn fee_rate(block: &BlockFor>, coin: Coin) -> Self::FeeRate { assert_eq!(coin, Coin::Bitcoin); // TODO 1 } - fn branch_address(key: KeyFor) -> AddressFor { + fn branch_address(key: KeyFor>) -> AddressFor> { address_from_serai_key(key, OutputType::Branch) } - fn change_address(key: KeyFor) -> AddressFor { + fn change_address(key: KeyFor>) -> AddressFor> { address_from_serai_key(key, OutputType::Change) } - fn forwarding_address(key: KeyFor) -> AddressFor { + fn forwarding_address(key: KeyFor>) -> AddressFor> { address_from_serai_key(key, OutputType::Forwarded) } fn calculate_fee( fee_rate: Self::FeeRate, - inputs: Vec>, - payments: Vec>>, - change: Option>, + inputs: Vec>>, + payments: Vec>>>, + change: Option>>, ) -> Amount { - match signable_transaction(fee_rate, inputs, payments, change) { + match signable_transaction::(fee_rate, inputs, payments, change) { Ok(tx) => Amount(tx.1.needed_fee()), Err( TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment, @@ -133,17 +142,17 @@ impl TransactionPlanner> for Planner { fn plan( fee_rate: Self::FeeRate, - inputs: Vec>, - payments: Vec>>, - change: Option>, - ) -> PlannedTransaction> { + inputs: Vec>>, + payments: Vec>>>, + change: Option>>, + ) -> PlannedTransaction, Self::SignableTransaction, EffectedReceivedOutputs>> { let key = inputs.first().unwrap().key(); for input in &inputs { assert_eq!(key, input.key()); } let singular_spent_output = (inputs.len() == 1).then(|| inputs[0].id()); - match signable_transaction(fee_rate, inputs, payments, change) { + match signable_transaction::(fee_rate, inputs.clone(), payments, change) { Ok(tx) => PlannedTransaction { signable: tx.0, eventuality: Eventuality { txid: tx.1.txid(), singular_spent_output }, @@ -153,7 +162,14 @@ impl TransactionPlanner> for Planner { let mut res = vec![]; for output in scanner.scan_transaction(tx) { - res.push(Output::new(key, tx, output)); + res.push(Output::new_with_presumed_origin( + key, + tx, + // It shouldn't matter if this is wrong as we should never try to return these + // We still provide an accurate value to ensure a lack of discrepancies + Some(Address::new(inputs[0].output.output().script_pubkey.clone()).unwrap()), + output, + )); } res }), @@ -174,4 +190,4 @@ impl TransactionPlanner> for Planner { } } -pub(crate) type Scheduler = GenericScheduler; +pub(crate) type Scheduler = GenericScheduler, Planner>; diff --git a/processor/bitcoin/src/txindex.rs b/processor/bitcoin/src/txindex.rs new file mode 100644 index 00000000..d9d52526 --- /dev/null +++ b/processor/bitcoin/src/txindex.rs @@ -0,0 +1,80 @@ +/* + We want to be able to return received outputs. We do that by iterating over the inputs to find an + address format we recognize, then setting that address as the address to return to. + + Since inputs only contain the script signatures, yet addresses are for script public keys, we + need to pull up the output spent by an input and read the script public key from that. While we + could use `txindex=1`, and an asynchronous call to the Bitcoin node, we: + + 1) Can maintain a much smaller index ourselves + 2) Don't want the asynchronous call (which would require the flow be async, allowed to + potentially error, and more latent) + 3) Don't want to risk Bitcoin's `txindex` corruptions (frequently observed on testnet) + + This task builds that index. 
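+
+  Concretely: for every transaction in a finalized block, each output's script public key is
+  saved under its (txid, vout) outpoint, so an input spending that outpoint can later be
+  resolved with a single database read.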
+*/
+
+use serai_db::{DbTxn, Db};
+
+use primitives::task::ContinuallyRan;
+use scanner::ScannerFeed;
+
+use crate::{db, rpc::Rpc, hash_bytes};
+
+pub(crate) struct TxIndexTask<D: Db>(Rpc<D>);
+
+#[async_trait::async_trait]
+impl<D: Db> ContinuallyRan for TxIndexTask<D> {
+  async fn run_iteration(&mut self) -> Result<bool, String> {
+    let latest_block_number = self
+      .0
+      .rpc
+      .get_latest_block_number()
+      .await
+      .map_err(|e| format!("couldn't fetch latest block number: {e:?}"))?;
+    let latest_block_number = u64::try_from(latest_block_number).unwrap();
+    // `CONFIRMATIONS - 1` as any on-chain block inherently has one confirmation (itself)
+    let finalized_block_number =
+      latest_block_number.checked_sub(Rpc::<D>::CONFIRMATIONS - 1).ok_or(format!(
+        "blockchain only just started and doesn't have {} blocks yet",
+        Rpc::<D>::CONFIRMATIONS
+      ))?;
+
+    let finalized_block_number_in_db = db::LatestBlockToYieldAsFinalized::get(&self.0.db);
+    let next_block = finalized_block_number_in_db.map_or(0, |block| block + 1);
+
+    let mut iterated = false;
+    for b in next_block ..= finalized_block_number {
+      iterated = true;
+
+      // Fetch the block
+      let block_hash = self
+        .0
+        .rpc
+        .get_block_hash(b.try_into().unwrap())
+        .await
+        .map_err(|e| format!("couldn't fetch block hash for block {b}: {e:?}"))?;
+      let block = self
+        .0
+        .rpc
+        .get_block(&block_hash)
+        .await
+        .map_err(|e| format!("couldn't fetch block {b}: {e:?}"))?;
+
+      let mut txn = self.0.db.txn();
+
+      for tx in &block.txdata[1 ..] {
+        let txid = hash_bytes(tx.compute_txid().to_raw_hash());
+        for (o, output) in tx.output.iter().enumerate() {
+          let o = u32::try_from(o).unwrap();
+          // Set the script pub key for this transaction
+          db::ScriptPubKey::set(&mut txn, txid, o, &output.script_pubkey.clone().into_bytes());
+        }
+      }
+
+      db::LatestBlockToYieldAsFinalized::set(&mut txn, &b);
+      txn.commit();
+    }
+    Ok(iterated)
+  }
+}

From 76a3f3ec4b96883e02a772aaee8d972d047634bd Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Wed, 11 Sep 2024 02:48:53 -0400
Subject: [PATCH 100/368] Add an anyone-can-pay output to every Bitcoin transaction

Resolves #284.

---
 processor/bitcoin/src/scheduler.rs | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/processor/bitcoin/src/scheduler.rs b/processor/bitcoin/src/scheduler.rs
index e225613c..7f365c56 100644
--- a/processor/bitcoin/src/scheduler.rs
+++ b/processor/bitcoin/src/scheduler.rs
@@ -59,10 +59,22 @@ fn signable_transaction<D: Db>(
       })
     })
     .collect::<Vec<_>>();
+  /*
+    Push a payment to a key with a known private key which anyone can spend. If this transaction
+    gets stuck, this lets anyone create a child transaction spending this output, raising the fee,
+    getting the transaction unstuck (via CPFP).
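+
+    Illustrative numbers (not from this patch): if this transaction paid 1 sat/vbyte while the
+    prevailing rate is 10 sat/vbyte, a child spending this output can attach enough fee for the
+    two-transaction package to average 10 sat/vbyte, making it rational for miners to confirm
+    both.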
+ */ + payments.push(Payment::new( + // The generator is even so this is valid + Address::new(p2tr_script_buf(::G::GENERATOR)), + // This uses the minimum output value allowed, as defined as a constant in bitcoin-serai + Balance { coin: Coin::Bitcoin, amount: Amount(bitcoin_serai::wallet::DUST) }, + None, + )); + let change = change .map(, EffectedReceivedOutputs>>>::change_address); - // TODO: ACP output BSignableTransaction::new( inputs.clone(), &payments From 776cbbb9a4c1ef37db055d27cd493e36a743e0c7 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 11 Sep 2024 03:01:39 -0400 Subject: [PATCH 101/368] Misc changes in response to prior two commits --- .github/actions/bitcoin/action.yml | 2 +- orchestration/dev/networks/bitcoin/run.sh | 4 ++-- orchestration/dev/networks/ethereum-relayer/.folder | 11 ----------- orchestration/dev/networks/monero/run.sh | 2 +- orchestration/testnet/networks/bitcoin/run.sh | 2 +- .../testnet/networks/ethereum-relayer/.folder | 11 ----------- processor/bitcoin/src/scheduler.rs | 8 ++++---- 7 files changed, 9 insertions(+), 31 deletions(-) diff --git a/.github/actions/bitcoin/action.yml b/.github/actions/bitcoin/action.yml index 6f628172..2765571f 100644 --- a/.github/actions/bitcoin/action.yml +++ b/.github/actions/bitcoin/action.yml @@ -37,4 +37,4 @@ runs: - name: Bitcoin Regtest Daemon shell: bash - run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/bitcoin/run.sh -daemon + run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/bitcoin/run.sh -txindex -daemon diff --git a/orchestration/dev/networks/bitcoin/run.sh b/orchestration/dev/networks/bitcoin/run.sh index da7c95a8..bec89fa9 100755 --- a/orchestration/dev/networks/bitcoin/run.sh +++ b/orchestration/dev/networks/bitcoin/run.sh @@ -3,7 +3,7 @@ RPC_USER="${RPC_USER:=serai}" RPC_PASS="${RPC_PASS:=seraidex}" -bitcoind -txindex -regtest --port=8333 \ +bitcoind -regtest --port=8333 \ -rpcuser=$RPC_USER -rpcpassword=$RPC_PASS \ -rpcbind=0.0.0.0 -rpcallowip=0.0.0.0/0 -rpcport=8332 \ - $1 + $@ diff --git a/orchestration/dev/networks/ethereum-relayer/.folder b/orchestration/dev/networks/ethereum-relayer/.folder index 675d4438..e69de29b 100644 --- a/orchestration/dev/networks/ethereum-relayer/.folder +++ b/orchestration/dev/networks/ethereum-relayer/.folder @@ -1,11 +0,0 @@ -#!/bin/sh - -RPC_USER="${RPC_USER:=serai}" -RPC_PASS="${RPC_PASS:=seraidex}" - -# Run Monero -monerod --non-interactive --regtest --offline --fixed-difficulty=1 \ - --no-zmq --rpc-bind-ip=0.0.0.0 --rpc-bind-port=18081 --confirm-external-bind \ - --rpc-access-control-origins "*" --disable-rpc-ban \ - --rpc-login=$RPC_USER:$RPC_PASS \ - $1 diff --git a/orchestration/dev/networks/monero/run.sh b/orchestration/dev/networks/monero/run.sh index 75a93e46..1186c4d1 100755 --- a/orchestration/dev/networks/monero/run.sh +++ b/orchestration/dev/networks/monero/run.sh @@ -8,4 +8,4 @@ monerod --non-interactive --regtest --offline --fixed-difficulty=1 \ --no-zmq --rpc-bind-ip=0.0.0.0 --rpc-bind-port=18081 --confirm-external-bind \ --rpc-access-control-origins "*" --disable-rpc-ban \ --rpc-login=$RPC_USER:$RPC_PASS --log-level 2 \ - $1 + $@ diff --git a/orchestration/testnet/networks/bitcoin/run.sh b/orchestration/testnet/networks/bitcoin/run.sh index dbec375a..6544243b 100755 --- a/orchestration/testnet/networks/bitcoin/run.sh +++ b/orchestration/testnet/networks/bitcoin/run.sh @@ -3,7 +3,7 @@ RPC_USER="${RPC_USER:=serai}" RPC_PASS="${RPC_PASS:=seraidex}" -bitcoind -txindex -testnet -port=8333 \ +bitcoind -testnet -port=8333 \ -rpcuser=$RPC_USER 
-rpcpassword=$RPC_PASS \ -rpcbind=0.0.0.0 -rpcallowip=0.0.0.0/0 -rpcport=8332 \ --datadir=/volume diff --git a/orchestration/testnet/networks/ethereum-relayer/.folder b/orchestration/testnet/networks/ethereum-relayer/.folder index 675d4438..e69de29b 100644 --- a/orchestration/testnet/networks/ethereum-relayer/.folder +++ b/orchestration/testnet/networks/ethereum-relayer/.folder @@ -1,11 +0,0 @@ -#!/bin/sh - -RPC_USER="${RPC_USER:=serai}" -RPC_PASS="${RPC_PASS:=seraidex}" - -# Run Monero -monerod --non-interactive --regtest --offline --fixed-difficulty=1 \ - --no-zmq --rpc-bind-ip=0.0.0.0 --rpc-bind-port=18081 --confirm-external-bind \ - --rpc-access-control-origins "*" --disable-rpc-ban \ - --rpc-login=$RPC_USER:$RPC_PASS \ - $1 diff --git a/processor/bitcoin/src/scheduler.rs b/processor/bitcoin/src/scheduler.rs index 7f365c56..6e49d23d 100644 --- a/processor/bitcoin/src/scheduler.rs +++ b/processor/bitcoin/src/scheduler.rs @@ -64,12 +64,12 @@ fn signable_transaction( gets stuck, this lets anyone create a child transaction spending this output, raising the fee, getting the transaction unstuck (via CPFP). */ - payments.push(Payment::new( + payments.push(( // The generator is even so this is valid - Address::new(p2tr_script_buf(::G::GENERATOR)), + Address::new(p2tr_script_buf(::G::GENERATOR).unwrap()).unwrap(), // This uses the minimum output value allowed, as defined as a constant in bitcoin-serai - Balance { coin: Coin::Bitcoin, amount: Amount(bitcoin_serai::wallet::DUST) }, - None, + // TODO: Add a test for this comparing to bitcoin's `minimal_non_dust` + bitcoin_serai::wallet::DUST, )); let change = change From b61ba9d1bb6cfc347566ecaf13487100b76040cc Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 11 Sep 2024 03:09:44 -0400 Subject: [PATCH 102/368] Adjust Bitcoin processor layout --- processor/bitcoin/src/main.rs | 123 +----------------- .../bitcoin/src/{ => primitives}/block.rs | 0 processor/bitcoin/src/primitives/mod.rs | 3 + .../bitcoin/src/{ => primitives}/output.rs | 4 +- .../src/{ => primitives}/transaction.rs | 0 processor/bitcoin/src/txindex.rs | 2 +- 6 files changed, 13 insertions(+), 119 deletions(-) rename processor/bitcoin/src/{ => primitives}/block.rs (100%) create mode 100644 processor/bitcoin/src/primitives/mod.rs rename processor/bitcoin/src/{ => primitives}/output.rs (98%) rename processor/bitcoin/src/{ => primitives}/transaction.rs (100%) diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index 941cc0dc..2ff072b4 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -6,14 +6,12 @@ static ALLOCATOR: zalloc::ZeroizingAlloc = zalloc::ZeroizingAlloc(std::alloc::System); +mod primitives; +pub(crate) use primitives::*; + // Internal utilities for scanning transactions mod scan; -// Primitive trait satisfactions -mod output; -mod transaction; -mod block; - // App-logic trait satisfactions mod rpc; mod scheduler; @@ -70,17 +68,10 @@ use serai_client::{ /* #[derive(Clone, Copy, PartialEq, Eq, Debug)] -pub struct Fee(u64); +pub(crate) struct Fee(u64); #[async_trait] impl TransactionTrait for Transaction { - type Id = [u8; 32]; - fn id(&self) -> Self::Id { - let mut hash = *self.compute_txid().as_raw_hash().as_byte_array(); - hash.reverse(); - hash - } - #[cfg(test)] async fn fee(&self, network: &Bitcoin) -> u64 { let mut value = 0; @@ -130,17 +121,8 @@ impl BlockTrait for Block { } } -// Shim required for testing/debugging purposes due to generic arguments also necessitating trait -// bounds -impl PartialEq for Bitcoin { 
- fn eq(&self, _: &Self) -> bool { - true - } -} -impl Eq for Bitcoin {} - impl Bitcoin { - pub async fn new(url: String) -> Bitcoin { + pub(crate) async fn new(url: String) -> Bitcoin { let mut res = Rpc::new(url.clone()).await; while let Err(e) = res { log::error!("couldn't connect to Bitcoin node: {e:?}"); @@ -151,7 +133,7 @@ impl Bitcoin { } #[cfg(test)] - pub async fn fresh_chain(&self) { + pub(crate) async fn fresh_chain(&self) { if self.rpc.get_latest_block_number().await.unwrap() > 0 { self .rpc @@ -194,64 +176,8 @@ impl Bitcoin { Ok(Fee(fee.max(1))) } - async fn make_signable_transaction( - &self, - block_number: usize, - inputs: &[Output], - payments: &[Payment], - change: &Option
, - calculating_fee: bool, - ) -> Result, NetworkError> { - for payment in payments { - assert_eq!(payment.balance.coin, Coin::Bitcoin); - } - - // TODO2: Use an fee representative of several blocks, cached inside Self - let block_for_fee = self.get_block(block_number).await?; - let fee = self.median_fee(&block_for_fee).await?; - - let payments = payments - .iter() - .map(|payment| { - ( - payment.address.clone().into(), - // If we're solely estimating the fee, don't specify the actual amount - // This won't affect the fee calculation yet will ensure we don't hit a not enough funds - // error - if calculating_fee { Self::DUST } else { payment.balance.amount.0 }, - ) - }) - .collect::>(); - - match BSignableTransaction::new( - inputs.iter().map(|input| input.output.clone()).collect(), - &payments, - change.clone().map(Into::into), - None, - fee.0, - ) { - Ok(signable) => Ok(Some(signable)), - Err(TransactionError::NoInputs) => { - panic!("trying to create a bitcoin transaction without inputs") - } - // No outputs left and the change isn't worth enough/not even enough funds to pay the fee - Err(TransactionError::NoOutputs | TransactionError::NotEnoughFunds) => Ok(None), - // amortize_fee removes payments which fall below the dust threshold - Err(TransactionError::DustPayment) => panic!("dust payment despite removing dust"), - Err(TransactionError::TooMuchData) => { - panic!("too much data despite not specifying data") - } - Err(TransactionError::TooLowFee) => { - panic!("created a transaction whose fee is below the minimum") - } - Err(TransactionError::TooLargeTransaction) => { - panic!("created a too large transaction despite limiting inputs/outputs") - } - } - } - #[cfg(test)] - pub fn sign_btc_input_for_p2pkh( + pub(crate) fn sign_btc_input_for_p2pkh( tx: &Transaction, input_index: usize, private_key: &PrivateKey, @@ -288,17 +214,8 @@ impl Bitcoin { } } -fn address_from_key(key: ProjectivePoint) -> Address { - Address::new( - p2tr_script_buf(key).expect("creating address from key which isn't properly tweaked"), - ) - .expect("couldn't create Serai-representable address for P2TR script") -} - #[async_trait] impl Network for Bitcoin { - type Scheduler = Scheduler; - // 2 inputs should be 2 * 230 = 460 weight units // The output should be ~36 bytes, or 144 weight units // The overhead should be ~20 bytes at most, or 80 weight units @@ -307,34 +224,12 @@ impl Network for Bitcoin { // aggregation TX const COST_TO_AGGREGATE: u64 = 800; - const MAX_OUTPUTS: usize = MAX_OUTPUTS; - fn tweak_keys(keys: &mut ThresholdKeys) { *keys = tweak_keys(keys); // Also create a scanner to assert these keys, and all expected paths, are usable scanner(keys.group_key()); } - #[cfg(test)] - async fn external_address(&self, key: ProjectivePoint) -> Address { - address_from_key(key) - } - - fn branch_address(key: ProjectivePoint) -> Option
<Address> {
-    let (_, offsets, _) = scanner(key);
-    Some(address_from_key(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Branch])))
-  }
-
-  fn change_address(key: ProjectivePoint) -> Option<Address> {
-    let (_, offsets, _) = scanner(key);
-    Some(address_from_key(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Change])))
-  }
-
-  fn forward_address(key: ProjectivePoint) -> Option<Address>
{ - let (_, offsets, _) = scanner(key); - Some(address_from_key(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Forwarded]))) - } - #[cfg(test)] async fn get_block_number(&self, id: &[u8; 32]) -> usize { self.rpc.get_block_number(id).await.unwrap() @@ -409,8 +304,4 @@ impl Network for Bitcoin { self.get_block(block).await.unwrap() } } - -impl UtxoNetwork for Bitcoin { - const MAX_INPUTS: usize = MAX_INPUTS; -} */ diff --git a/processor/bitcoin/src/block.rs b/processor/bitcoin/src/primitives/block.rs similarity index 100% rename from processor/bitcoin/src/block.rs rename to processor/bitcoin/src/primitives/block.rs diff --git a/processor/bitcoin/src/primitives/mod.rs b/processor/bitcoin/src/primitives/mod.rs new file mode 100644 index 00000000..fba52dd9 --- /dev/null +++ b/processor/bitcoin/src/primitives/mod.rs @@ -0,0 +1,3 @@ +pub(crate) mod output; +pub(crate) mod transaction; +pub(crate) mod block; diff --git a/processor/bitcoin/src/output.rs b/processor/bitcoin/src/primitives/output.rs similarity index 98% rename from processor/bitcoin/src/output.rs rename to processor/bitcoin/src/primitives/output.rs index 2ed03705..05ab6acf 100644 --- a/processor/bitcoin/src/output.rs +++ b/processor/bitcoin/src/primitives/output.rs @@ -53,7 +53,7 @@ pub(crate) struct Output { } impl Output { - pub fn new( + pub(crate) fn new( getter: &impl Get, key: ::G, tx: &Transaction, @@ -70,7 +70,7 @@ impl Output { } } - pub fn new_with_presumed_origin( + pub(crate) fn new_with_presumed_origin( key: ::G, tx: &Transaction, presumed_origin: Option
, diff --git a/processor/bitcoin/src/transaction.rs b/processor/bitcoin/src/primitives/transaction.rs similarity index 100% rename from processor/bitcoin/src/transaction.rs rename to processor/bitcoin/src/primitives/transaction.rs diff --git a/processor/bitcoin/src/txindex.rs b/processor/bitcoin/src/txindex.rs index d9d52526..63d5072c 100644 --- a/processor/bitcoin/src/txindex.rs +++ b/processor/bitcoin/src/txindex.rs @@ -67,7 +67,7 @@ impl ContinuallyRan for TxIndexTask { let txid = hash_bytes(tx.compute_txid().to_raw_hash()); for (o, output) in tx.output.iter().enumerate() { let o = u32::try_from(o).unwrap(); - // Set the script pub key for this transaction + // Set the script public key for this transaction db::ScriptPubKey::set(&mut txn, txid, o, &output.script_pubkey.clone().into_bytes()); } } From a8159e90703cc28cad3db9606f5ae4a1604c136a Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 11 Sep 2024 03:23:00 -0400 Subject: [PATCH 103/368] Bitcoin Key Gen --- Cargo.lock | 2 ++ processor/bitcoin/Cargo.toml | 2 ++ processor/bitcoin/src/key_gen.rs | 26 ++++++++++++++++++++++++ processor/bitcoin/src/main.rs | 7 +------ processor/key-gen/src/db.rs | 34 +++++++++++++++++++------------- 5 files changed, 51 insertions(+), 20 deletions(-) create mode 100644 processor/bitcoin/src/key_gen.rs diff --git a/Cargo.lock b/Cargo.lock index 1839cc98..8c0c3dd5 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8128,6 +8128,7 @@ dependencies = [ "bitcoin-serai", "borsh", "ciphersuite", + "dkg", "env_logger", "flexible-transcript", "log", @@ -8139,6 +8140,7 @@ dependencies = [ "serai-db", "serai-env", "serai-message-queue", + "serai-processor-key-gen", "serai-processor-messages", "serai-processor-primitives", "serai-processor-scanner", diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml index 54ace26f..c92e1384 100644 --- a/processor/bitcoin/Cargo.toml +++ b/processor/bitcoin/Cargo.toml @@ -25,6 +25,7 @@ borsh = { version = "1", default-features = false, features = ["std", "derive", transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "secp256k1"] } +dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-secp256k1"] } frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } secp256k1 = { version = "0.29", default-features = false, features = ["std", "global-context", "rand-std"] } @@ -41,6 +42,7 @@ serai-env = { path = "../../common/env" } serai-client = { path = "../../substrate/client", default-features = false, features = ["bitcoin"] } messages = { package = "serai-processor-messages", path = "../messages" } +key-gen = { package = "serai-processor-key-gen", path = "../key-gen" } primitives = { package = "serai-processor-primitives", path = "../primitives" } scheduler = { package = "serai-processor-scheduler-primitives", path = "../scheduler/primitives" } diff --git a/processor/bitcoin/src/key_gen.rs b/processor/bitcoin/src/key_gen.rs new file mode 100644 index 00000000..16183231 --- /dev/null +++ b/processor/bitcoin/src/key_gen.rs @@ -0,0 +1,26 @@ +use ciphersuite::{group::GroupEncoding, Ciphersuite, Secp256k1}; +use frost::ThresholdKeys; + +use key_gen::KeyGenParams; + +use crate::scan::scanner; + +pub(crate) struct KeyGen; +impl KeyGenParams for KeyGen { + const ID: &'static str = "Bitcoin"; + + type ExternalNetworkCurve = Secp256k1; 
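+  // Commentary added in this edit (not part of the original patch): `tweak_keys` below adjusts
+  // the group key until its y-coordinate is even, which is why `encode_key` can drop the SEC1
+  // parity byte and store only the 32-byte x-coordinate.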
+
+  fn tweak_keys(keys: &mut ThresholdKeys<Self::ExternalNetworkCurve>) {
+    *keys = bitcoin_serai::wallet::tweak_keys(keys);
+    // Also create a scanner to assert these keys, and all expected paths, are usable
+    scanner(keys.group_key());
+  }
+
+  fn encode_key(key: <Self::ExternalNetworkCurve as Ciphersuite>::G) -> Vec<u8> {
+    let key = key.to_bytes();
+    let key: &[u8] = key.as_ref();
+    // Skip the parity encoding as we know this key is even
+    key[1 ..].to_vec()
+  }
+}
diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs
index 2ff072b4..d86a4ba1 100644
--- a/processor/bitcoin/src/main.rs
+++ b/processor/bitcoin/src/main.rs
@@ -13,6 +13,7 @@ pub(crate) use primitives::*;
 mod scan;
 
 // App-logic trait satisfactions
+mod key_gen;
 mod rpc;
 mod scheduler;
 
@@ -224,12 +225,6 @@ impl Network for Bitcoin {
   // aggregation TX
   const COST_TO_AGGREGATE: u64 = 800;
 
-  fn tweak_keys(keys: &mut ThresholdKeys<Self::Curve>) {
-    *keys = tweak_keys(keys);
-    // Also create a scanner to assert these keys, and all expected paths, are usable
-    scanner(keys.group_key());
-  }
-
   #[cfg(test)]
   async fn get_block_number(&self, id: &[u8; 32]) -> usize {
     self.rpc.get_block_number(id).await.unwrap()
diff --git a/processor/key-gen/src/db.rs b/processor/key-gen/src/db.rs
index e82b84a5..676fd2aa 100644
--- a/processor/key-gen/src/db.rs
+++ b/processor/key-gen/src/db.rs
@@ -9,7 +9,7 @@ use dkg::{Participant, ThresholdCore, ThresholdKeys, evrf::EvrfCurve};
 use serai_validator_sets_primitives::Session;
 
 use borsh::{BorshSerialize, BorshDeserialize};
-use serai_db::{Get, DbTxn, create_db};
+use serai_db::{Get, DbTxn};
 
 use crate::KeyGenParams;
@@ -35,20 +35,26 @@ pub(crate) struct Participations {
   pub(crate) network_participations: HashMap<Participant, Vec<u8>>,
 }
 
-create_db!(
-  KeyGen {
-    Params: (session: &Session) -> RawParams,
-    Participations: (session: &Session) -> Participations,
-    KeyShares: (session: &Session) -> Vec<u8>,
-  }
-);
+mod _db {
+  use serai_validator_sets_primitives::Session;
+
+  use serai_db::{Get, DbTxn, create_db};
+
+  create_db!(
+    KeyGen {
+      Params: (session: &Session) -> super::RawParams,
+      Participations: (session: &Session) -> super::Participations,
+      KeyShares: (session: &Session) -> Vec<u8>,
+    }
+  );
+}
 
 pub(crate) struct KeyGenDb<P: KeyGenParams>(PhantomData<P>);

 impl<P: KeyGenParams> KeyGenDb<P> {
   pub(crate) fn set_params(txn: &mut impl DbTxn, session: Session, params: Params<P>) {
     assert_eq!(params.substrate_evrf_public_keys.len(), params.network_evrf_public_keys.len());
-    Params::set(
+    _db::Params::set(
       txn,
       &session,
       &RawParams {
@@ -68,7 +74,7 @@ impl<P: KeyGenParams> KeyGenDb<P> {
   }
 
   pub(crate) fn params(getter: &impl Get, session: Session) -> Option<Params<P>> {
-    Params::get(getter, &session).map(|params| Params {
+    _db::Params::get(getter, &session).map(|params| Params {
       t: params.t,
       n: params
         .network_evrf_public_keys
@@ -101,10 +107,10 @@ impl<P: KeyGenParams> KeyGenDb<P> {
     session: Session,
     participations: &Participations,
   ) {
-    Participations::set(txn, &session, participations)
+    _db::Participations::set(txn, &session, participations)
   }
   pub(crate) fn participations(getter: &impl Get, session: Session) -> Option<Participations> {
-    Participations::get(getter, &session)
+    _db::Participations::get(getter, &session)
   }
 
   // Set the key shares for a session.
@@ -121,7 +127,7 @@ impl<P: KeyGenParams> KeyGenDb<P> {
       keys.extend(substrate_keys.serialize().as_slice());
       keys.extend(network_keys.serialize().as_slice());
     }
-    KeyShares::set(txn, &session, &keys);
+    _db::KeyShares::set(txn, &session, &keys);
   }
 
   #[allow(clippy::type_complexity)]
@@ -129,7 +135,7 @@ impl<P: KeyGenParams> KeyGenDb<P>
{ getter: &impl Get, session: Session, ) -> Option<(Vec>, Vec>)> { - let keys = KeyShares::get(getter, &session)?; + let keys = _db::KeyShares::get(getter, &session)?; let mut keys: &[u8] = keys.as_ref(); let mut substrate_keys = vec![]; From 4054e44471db146ae15d127c71424daf195b6b79 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 11 Sep 2024 04:54:03 -0400 Subject: [PATCH 104/368] Start on the new processor main loop --- processor/bitcoin/src/db.rs | 8 + processor/bitcoin/src/key_gen.rs | 6 +- processor/bitcoin/src/main.rs | 83 ++++++++++ processor/messages/src/lib.rs | 11 +- processor/src/main.rs | 259 ------------------------------- 5 files changed, 99 insertions(+), 268 deletions(-) diff --git a/processor/bitcoin/src/db.rs b/processor/bitcoin/src/db.rs index 1d73ebfe..94a7c0ba 100644 --- a/processor/bitcoin/src/db.rs +++ b/processor/bitcoin/src/db.rs @@ -1,5 +1,13 @@ +use serai_client::validator_sets::primitives::Session; + use serai_db::{Get, DbTxn, create_db}; +create_db! { + Processor { + ExternalKeyForSession: (session: Session) -> Vec, + } +} + create_db! { BitcoinProcessor { LatestBlockToYieldAsFinalized: () -> u64, diff --git a/processor/bitcoin/src/key_gen.rs b/processor/bitcoin/src/key_gen.rs index 16183231..416677e7 100644 --- a/processor/bitcoin/src/key_gen.rs +++ b/processor/bitcoin/src/key_gen.rs @@ -1,12 +1,10 @@ use ciphersuite::{group::GroupEncoding, Ciphersuite, Secp256k1}; use frost::ThresholdKeys; -use key_gen::KeyGenParams; - use crate::scan::scanner; -pub(crate) struct KeyGen; -impl KeyGenParams for KeyGen { +pub(crate) struct KeyGenParams; +impl key_gen::KeyGenParams for KeyGenParams { const ID: &'static str = "Bitcoin"; type ExternalNetworkCurve = Secp256k1; diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index d86a4ba1..bb788d1e 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -6,6 +6,10 @@ static ALLOCATOR: zalloc::ZeroizingAlloc = zalloc::ZeroizingAlloc(std::alloc::System); +use ciphersuite::Ciphersuite; + +use serai_db::{DbTxn, Db}; + mod primitives; pub(crate) use primitives::*; @@ -14,8 +18,11 @@ mod scan; // App-logic trait satisfactions mod key_gen; +use crate::key_gen::KeyGenParams; mod rpc; +use rpc::Rpc; mod scheduler; +use scheduler::Scheduler; // Our custom code for Bitcoin mod db; @@ -29,6 +36,82 @@ pub(crate) fn hash_bytes(hash: bitcoin_serai::bitcoin::hashes::sha256d::Hash) -> res } +/// Fetch the next message from the Coordinator. +/// +/// This message is guaranteed to have never been handled before, where handling is defined as +/// this `txn` being committed. 
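+///
+/// Commentary added in this edit (not part of the original patch): this is at-least-once
+/// delivery. If the process crashes after handling a message but before committing `txn`, the
+/// same message is yielded again on restart, so all handling must be idempotent with respect
+/// to the transaction it's given.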
+async fn next_message(_txn: &mut impl DbTxn) -> messages::CoordinatorMessage { + todo!("TODO") +} + +async fn send_message(_msg: messages::ProcessorMessage) { + todo!("TODO") +} + +async fn coordinator_loop( + mut db: D, + mut key_gen: ::key_gen::KeyGen, + mut signers: signers::Signers, Scheduler, Rpc>, + mut scanner: Option>>, +) { + loop { + let mut txn = Some(db.txn()); + let msg = next_message(txn.as_mut().unwrap()).await; + match msg { + messages::CoordinatorMessage::KeyGen(msg) => { + // This is a computationally expensive call yet it happens infrequently + for msg in key_gen.handle(txn.as_mut().unwrap(), msg) { + send_message(messages::ProcessorMessage::KeyGen(msg)).await; + } + } + // These are cheap calls which are fine to be here in this loop + messages::CoordinatorMessage::Sign(msg) => signers.queue_message(txn.as_mut().unwrap(), &msg), + messages::CoordinatorMessage::Coordinator( + messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { + session, + block_number, + block, + }, + ) => signers.cosign_block(txn.take().unwrap(), session, block_number, block), + messages::CoordinatorMessage::Coordinator( + messages::coordinator::CoordinatorMessage::SignSlashReport { session, report }, + ) => signers.sign_slash_report(txn.take().unwrap(), session, &report), + messages::CoordinatorMessage::Substrate(msg) => match msg { + messages::substrate::CoordinatorMessage::SetKeys { serai_time, session, key_pair } => { + db::ExternalKeyForSession::set(txn.as_mut().unwrap(), session, &key_pair.1.into_inner()); + todo!("TODO: Register in signers"); + todo!("TODO: Scanner activation") + } + messages::substrate::CoordinatorMessage::SlashesReported { session } => { + let key_bytes = db::ExternalKeyForSession::get(txn.as_ref().unwrap(), session).unwrap(); + let mut key_bytes = key_bytes.as_slice(); + let key = + ::ExternalNetworkCurve::read_G(&mut key_bytes) + .unwrap(); + assert!(key_bytes.is_empty()); + + signers.retire_session(txn.as_mut().unwrap(), session, &key) + } + messages::substrate::CoordinatorMessage::BlockWithBatchAcknowledgement { + block, + batch_id, + in_instruction_succeededs, + burns, + key_to_activate, + } => todo!("TODO"), + messages::substrate::CoordinatorMessage::BlockWithoutBatchAcknowledgement { + block, + burns, + } => todo!("TODO"), + }, + }; + // If the txn wasn't already consumed and committed, commit it + if let Some(txn) = txn { + txn.commit(); + } + } +} + #[tokio::main] async fn main() {} diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index 998c7cea..ae1ab6d5 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -113,8 +113,6 @@ pub mod sign { pub attempt: u32, } - // TODO: Make this generic to the ID once we introduce topics into the message-queue and remove - // the global ProcessorMessage/CoordinatorMessage #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum CoordinatorMessage { // Received preprocesses for the specified signing protocol. @@ -185,8 +183,10 @@ pub mod substrate { #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum CoordinatorMessage { - /// Keys set on the Serai network. + /// Keys set on the Serai blockchain. SetKeys { serai_time: u64, session: Session, key_pair: KeyPair }, + /// Slashes reported on the Serai blockchain OR the process timed out. + SlashesReported { session: Session }, /// The data from a block which acknowledged a Batch. 
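+    /// Commentary added in this edit (not part of the original patch):
+    /// `in_instruction_succeededs` is assumed to parallel the acknowledged Batch's
+    /// `InInstruction`s, flagging which of them executed successfully.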
BlockWithBatchAcknowledgement { block: u64, @@ -305,11 +305,12 @@ impl CoordinatorMessage { CoordinatorMessage::Substrate(msg) => { let (sub, id) = match msg { substrate::CoordinatorMessage::SetKeys { session, .. } => (0, session.encode()), + substrate::CoordinatorMessage::SlashesReported { session } => (1, session.encode()), substrate::CoordinatorMessage::BlockWithBatchAcknowledgement { block, .. } => { - (1, block.encode()) + (2, block.encode()) } substrate::CoordinatorMessage::BlockWithoutBatchAcknowledgement { block, .. } => { - (2, block.encode()) + (3, block.encode()) } }; diff --git a/processor/src/main.rs b/processor/src/main.rs index 10406729..51123b92 100644 --- a/processor/src/main.rs +++ b/processor/src/main.rs @@ -1,21 +1,3 @@ -use std::{time::Duration, collections::HashMap}; - -use zeroize::{Zeroize, Zeroizing}; - -use ciphersuite::{ - group::{ff::PrimeField, GroupEncoding}, - Ciphersuite, Ristretto, -}; -use dkg::evrf::EvrfCurve; - -use log::{info, warn}; -use tokio::time::sleep; - -use serai_client::{ - primitives::{BlockHash, NetworkId}, - validator_sets::primitives::{Session, KeyPair}, -}; - use messages::{ coordinator::{ SubstrateSignableId, PlanMeta, CoordinatorMessage as CoordinatorCoordinatorMessage, @@ -27,112 +9,18 @@ use serai_env as env; use message_queue::{Service, client::MessageQueue}; -mod networks; -use networks::{Block, Network}; -#[cfg(feature = "bitcoin")] -use networks::Bitcoin; -#[cfg(feature = "ethereum")] -use networks::Ethereum; -#[cfg(feature = "monero")] -use networks::Monero; - mod db; pub use db::*; mod coordinator; pub use coordinator::*; -use serai_processor_key_gen as key_gen; -use key_gen::{SessionDb, KeyConfirmed, KeyGen}; - -mod signer; -use signer::Signer; - -mod cosigner; -use cosigner::Cosigner; - -mod batch_signer; -use batch_signer::BatchSigner; - -mod slash_report_signer; -use slash_report_signer::SlashReportSigner; - mod multisigs; use multisigs::{MultisigEvent, MultisigManager}; #[cfg(test)] mod tests; -#[global_allocator] -static ALLOCATOR: zalloc::ZeroizingAlloc = - zalloc::ZeroizingAlloc(std::alloc::System); - -// Items which are mutably borrowed by Tributary. -// Any exceptions to this have to be carefully monitored in order to ensure consistency isn't -// violated. -struct TributaryMutable { - // The following are actually mutably borrowed by Substrate as well. - // - Substrate triggers key gens, and determines which to use. - // - SubstrateBlock events cause scheduling which causes signing. - // - // This is still considered Tributary-mutable as most mutation (preprocesses/shares) happens by - // the Tributary. - // - // Creation of tasks is by Substrate, yet this is safe since the mutable borrow is transferred to - // Tributary. - // - // Tributary stops mutating a key gen attempt before Substrate is made aware of it, ensuring - // Tributary drops its mutable borrow before Substrate acquires it. Tributary will maintain a - // mutable borrow on the *key gen task*, yet the finalization code can successfully run for any - // attempt. - // - // The only other note is how the scanner may cause a signer task to be dropped, effectively - // invalidating the Tributary's mutable borrow. The signer is coded to allow for attempted usage - // of a dropped task. - key_gen: KeyGen, - signers: HashMap>, - - // This is also mutably borrowed by the Scanner. - // The Scanner starts new sign tasks. - // The Tributary mutates already-created signed tasks, potentially completing them. 
- // Substrate may mark tasks as completed, invalidating any existing mutable borrows. - // The safety of this follows as written above. - - // There should only be one BatchSigner at a time (see #277) - batch_signer: Option>, - - // Solely mutated by the tributary. - cosigner: Option, - slash_report_signer: Option, -} - -// Items which are mutably borrowed by Substrate. -// Any exceptions to this have to be carefully monitored in order to ensure consistency isn't -// violated. - -/* - The MultisigManager contains the Scanner and Schedulers. - - The scanner is expected to autonomously operate, scanning blocks as they appear. When a block is - sufficiently confirmed, the scanner causes the Substrate signer to sign a batch. It itself only - mutates its list of finalized blocks, to protect against re-orgs, and its in-memory state though. - - Disk mutations to the scan-state only happens once the relevant `Batch` is included on Substrate. - It can't be mutated as soon as the `Batch` is signed as we need to know the order of `Batch`s - relevant to `Burn`s. - - Schedulers take in new outputs, confirmed in `Batch`s, and outbound payments, triggered by - `Burn`s. - - Substrate also decides when to move to a new multisig, hence why this entire object is - Substrate-mutable. - - Since MultisigManager should always be verifiable, and the Tributary is temporal, MultisigManager - being entirely SubstrateMutable shows proper data pipe-lining. -*/ - -type SubstrateMutable = MultisigManager; - async fn handle_coordinator_msg( txn: &mut D::Transaction<'_>, network: &N, @@ -141,54 +29,6 @@ async fn handle_coordinator_msg( substrate_mutable: &mut SubstrateMutable, msg: &Message, ) { - // If this message expects a higher block number than we have, halt until synced - async fn wait( - txn: &D::Transaction<'_>, - substrate_mutable: &SubstrateMutable, - block_hash: &BlockHash, - ) { - let mut needed_hash = >::Id::default(); - needed_hash.as_mut().copy_from_slice(&block_hash.0); - - loop { - // Ensure our scanner has scanned this block, which means our daemon has this block at - // a sufficient depth - if substrate_mutable.block_number(txn, &needed_hash).await.is_none() { - warn!( - "node is desynced. we haven't scanned {} which should happen after {} confirms", - hex::encode(&needed_hash), - N::CONFIRMATIONS, - ); - sleep(Duration::from_secs(10)).await; - continue; - }; - break; - } - - // TODO2: Sanity check we got an AckBlock (or this is the AckBlock) for the block in question - - /* - let synced = |context: &SubstrateContext, key| -> Result<(), ()> { - // Check that we've synced this block and can actually operate on it ourselves - let latest = scanner.latest_scanned(key); - if usize::try_from(context.network_latest_finalized_block).unwrap() < latest { - log::warn!( - "external network node disconnected/desynced from rest of the network. 
\ - our block: {latest:?}, network's acknowledged: {}", - context.network_latest_finalized_block, - ); - Err(())?; - } - Ok(()) - }; - */ - } - - if let Some(required) = msg.msg.required_block() { - // wait only reads from, it doesn't mutate, substrate_mutable - wait(txn, substrate_mutable, &required).await; - } - async fn activate_key( network: &N, substrate_mutable: &mut SubstrateMutable, @@ -220,105 +60,6 @@ async fn handle_coordinator_msg( } match msg.msg.clone() { - CoordinatorMessage::KeyGen(msg) => { - for msg in tributary_mutable.key_gen.handle(txn, msg) { - coordinator.send(msg).await; - } - } - - CoordinatorMessage::Sign(msg) => { - if let Some(msg) = tributary_mutable - .signers - .get_mut(&msg.session()) - .expect("coordinator told us to sign with a signer we don't have") - .handle(txn, msg) - .await - { - coordinator.send(msg).await; - } - } - - CoordinatorMessage::Coordinator(msg) => match msg { - CoordinatorCoordinatorMessage::CosignSubstrateBlock { id, block_number } => { - let SubstrateSignableId::CosigningSubstrateBlock(block) = id.id else { - panic!("CosignSubstrateBlock id didn't have a CosigningSubstrateBlock") - }; - let Some(keys) = tributary_mutable.key_gen.substrate_keys_by_session(id.session) else { - panic!("didn't have key shares for the key we were told to cosign with"); - }; - if let Some((cosigner, msg)) = - Cosigner::new(txn, id.session, keys, block_number, block, id.attempt) - { - tributary_mutable.cosigner = Some(cosigner); - coordinator.send(msg).await; - } else { - log::warn!("Cosigner::new returned None"); - } - } - CoordinatorCoordinatorMessage::SignSlashReport { id, report } => { - assert_eq!(id.id, SubstrateSignableId::SlashReport); - let Some(keys) = tributary_mutable.key_gen.substrate_keys_by_session(id.session) else { - panic!("didn't have key shares for the key we were told to perform a slash report with"); - }; - if let Some((slash_report_signer, msg)) = - SlashReportSigner::new(txn, N::NETWORK, id.session, keys, report, id.attempt) - { - tributary_mutable.slash_report_signer = Some(slash_report_signer); - coordinator.send(msg).await; - } else { - log::warn!("SlashReportSigner::new returned None"); - } - } - _ => { - let (is_cosign, is_batch, is_slash_report) = match msg { - CoordinatorCoordinatorMessage::CosignSubstrateBlock { .. } | - CoordinatorCoordinatorMessage::SignSlashReport { .. } => (false, false, false), - CoordinatorCoordinatorMessage::SubstratePreprocesses { ref id, .. } | - CoordinatorCoordinatorMessage::SubstrateShares { ref id, .. } => ( - matches!(&id.id, SubstrateSignableId::CosigningSubstrateBlock(_)), - matches!(&id.id, SubstrateSignableId::Batch(_)), - matches!(&id.id, SubstrateSignableId::SlashReport), - ), - CoordinatorCoordinatorMessage::BatchReattempt { .. } => (false, true, false), - }; - - if is_cosign { - if let Some(cosigner) = tributary_mutable.cosigner.as_mut() { - if let Some(msg) = cosigner.handle(txn, msg) { - coordinator.send(msg).await; - } - } else { - log::warn!( - "received message for cosigner yet didn't have a cosigner. 
{}",
-            "this is an error if we didn't reboot",
-          );
-        }
-      } else if is_batch {
-        if let Some(msg) = tributary_mutable
-          .batch_signer
-          .as_mut()
-          .expect(
-            "coordinator told us to sign a batch when we don't currently have a Substrate signer",
-          )
-          .handle(txn, msg)
-        {
-          coordinator.send(msg).await;
-        }
-      } else if is_slash_report {
-        if let Some(slash_report_signer) = tributary_mutable.slash_report_signer.as_mut() {
-          if let Some(msg) = slash_report_signer.handle(txn, msg) {
-            coordinator.send(msg).await;
-          }
-        } else {
-          log::warn!(
-            "received message for slash report signer yet didn't have {}",
-            "a slash report signer. this is an error if we didn't reboot",
-          );
-        }
-      }
-    }
-  },
-
     CoordinatorMessage::Substrate(msg) => {
       match msg {
        messages::substrate::CoordinatorMessage::ConfirmKeyPair { context, session, key_pair } => {

From 73af09effbf4b6b905fade17df1831f1273966c6 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Wed, 11 Sep 2024 06:39:44 -0400
Subject: [PATCH 105/368] Add note to signers on reducing disk IO

---
 processor/signers/src/db.rs | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/processor/signers/src/db.rs b/processor/signers/src/db.rs
index b4de78d9..2c13ddba 100644
--- a/processor/signers/src/db.rs
+++ b/processor/signers/src/db.rs
@@ -22,6 +22,19 @@ db_channel! {
     SlashReport: (session: Session) -> Vec,
     SlashReportSignature: (session: Session) -> Vec,
 
+    /*
+      TODO: Most of these are pointless? We drop all active signing sessions on reboot. It's
+      accordingly not valuable to use a DB-backed channel to communicate messages for signing
+      sessions (Preprocess/Shares).
+
+      Transactions, Batches, Slash Reports, and Cosigns all have their own mechanisms/DB entries
+      and don't use the following channels. The only questions are:
+
+      1) If it's safe to drop Reattempt? Or if we need tweaks to enable that
+      2) If we reboot with a pending Reattempt, we'll participate on reboot. If we drop that
+         Reattempt, we won't. Accordingly, we have degraded performance in that edge case in
+         exchange for less disk IO in the majority of cases. Is that worth it?
+    */
     CoordinatorToCosignerMessages: (session: Session) -> CoordinatorMessage,
     CosignerToCoordinatorMessages: (session: Session) -> ProcessorMessage,
 
From 723f529659a46c66aa102dfa9f63739568ecdd4e Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Wed, 11 Sep 2024 08:57:57 -0400
Subject: [PATCH 106/368] Note better message structure in messages

---
 processor/messages/src/lib.rs | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs
index ae1ab6d5..080864dc 100644
--- a/processor/messages/src/lib.rs
+++ b/processor/messages/src/lib.rs
@@ -181,6 +181,24 @@ pub mod substrate {
 pub mod substrate {
   use super::*;
 
+  /* TODO
+  #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+  pub enum InInstructionResult {
+    Succeeded,
+    Failed,
+  }
+  #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+  pub struct ExecutedBatch {
+    batch_id: u32,
+    in_instructions: Vec<InInstructionResult>,
+  }
+  Block {
+    block: u64,
+    batches: Vec<ExecutedBatch>,
+    burns: Vec<OutInstructionWithBalance>,
+  }
+  */
+
   #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
   pub enum CoordinatorMessage {
     /// Keys set on the Serai blockchain.
@@ -211,7 +229,6 @@ pub mod substrate {
       batch_id: u32,
       in_instruction_succeededs: Vec<bool>,
       burns: Vec<OutInstructionWithBalance>,
-      key_to_activate: Option<KeyPair>,
     },
     /// The data from a block which didn't acknowledge a Batch.
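+    /// Commentary added in this edit (not part of the original patch): with no Batch
+    /// acknowledged, there are no instruction results to report, so only the `burns` from this
+    /// block require handling.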
BlockWithoutBatchAcknowledgement { block: u64, burns: Vec }, From 59fa49f75015c3b44ebf16d433f011ba16b6d04c Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 11 Sep 2024 08:58:58 -0400 Subject: [PATCH 107/368] Continue filling out main loop Adds generics to the db_channel macro, fixes the bug where it needed at least one key. --- common/db/src/create_db.rs | 46 +++++++-- processor/bitcoin/src/db.rs | 13 ++- processor/bitcoin/src/key_gen.rs | 6 +- processor/bitcoin/src/main.rs | 112 ++++++++++++++++----- processor/bitcoin/src/primitives/mod.rs | 17 ++++ processor/bitcoin/src/primitives/output.rs | 21 ++-- processor/key-gen/src/lib.rs | 35 ++++--- 7 files changed, 186 insertions(+), 64 deletions(-) diff --git a/common/db/src/create_db.rs b/common/db/src/create_db.rs index 7be1e1c8..1fb52b1b 100644 --- a/common/db/src/create_db.rs +++ b/common/db/src/create_db.rs @@ -79,10 +79,22 @@ macro_rules! create_db { pub(crate) fn del$(<$($generic_name: $generic_type),+>)?( txn: &mut impl DbTxn $(, $arg: $arg_type)* - ) -> core::marker::PhantomData<($($($generic_name),+)?)> { + ) -> core::marker::PhantomData<($($($generic_name),+)?)> { txn.del(&$field_name::key$(::<$($generic_name),+>)?($($arg),*)); core::marker::PhantomData } + + pub(crate) fn take$(<$($generic_name: $generic_type),+>)?( + txn: &mut impl DbTxn + $(, $arg: $arg_type)* + ) -> Option<$field_type> { + let key = $field_name::key$(::<$($generic_name),+>)?($($arg),*); + let res = txn.get(&key).map(|data| borsh::from_slice(data.as_ref()).unwrap()); + if res.is_some() { + txn.del(key); + } + res + } } )* }; @@ -91,19 +103,30 @@ macro_rules! create_db { #[macro_export] macro_rules! db_channel { ($db_name: ident { - $($field_name: ident: ($($arg: ident: $arg_type: ty),*) -> $field_type: ty$(,)?)* + $($field_name: ident: + $(<$($generic_name: tt: $generic_type: tt),+>)?( + $($arg: ident: $arg_type: ty),* + ) -> $field_type: ty$(,)? + )* }) => { $( create_db! { $db_name { - $field_name: ($($arg: $arg_type,)* index: u32) -> $field_type, + $field_name: $(<$($generic_name: $generic_type),+>)?( + $($arg: $arg_type,)* + index: u32 + ) -> $field_type } } impl $field_name { - pub(crate) fn send(txn: &mut impl DbTxn $(, $arg: $arg_type)*, value: &$field_type) { + pub(crate) fn send$(<$($generic_name: $generic_type),+>)?( + txn: &mut impl DbTxn + $(, $arg: $arg_type)* + , value: &$field_type + ) { // Use index 0 to store the amount of messages - let messages_sent_key = $field_name::key($($arg),*, 0); + let messages_sent_key = $field_name::key$(::<$($generic_name),+>)?($($arg,)* 0); let messages_sent = txn.get(&messages_sent_key).map(|counter| { u32::from_le_bytes(counter.try_into().unwrap()) }).unwrap_or(0); @@ -114,19 +137,22 @@ macro_rules! 
db_channel { // at the same time let index_to_use = messages_sent + 2; - $field_name::set(txn, $($arg),*, index_to_use, value); + $field_name::set$(::<$($generic_name),+>)?(txn, $($arg,)* index_to_use, value); } - pub(crate) fn try_recv(txn: &mut impl DbTxn $(, $arg: $arg_type)*) -> Option<$field_type> { - let messages_recvd_key = $field_name::key($($arg),*, 1); + pub(crate) fn try_recv$(<$($generic_name: $generic_type),+>)?( + txn: &mut impl DbTxn + $(, $arg: $arg_type)* + ) -> Option<$field_type> { + let messages_recvd_key = $field_name::key$(::<$($generic_name),+>)?($($arg,)* 1); let messages_recvd = txn.get(&messages_recvd_key).map(|counter| { u32::from_le_bytes(counter.try_into().unwrap()) }).unwrap_or(0); let index_to_read = messages_recvd + 2; - let res = $field_name::get(txn, $($arg),*, index_to_read); + let res = $field_name::get$(::<$($generic_name),+>)?(txn, $($arg,)* index_to_read); if res.is_some() { - $field_name::del(txn, $($arg),*, index_to_read); + $field_name::del$(::<$($generic_name),+>)?(txn, $($arg,)* index_to_read); txn.put(&messages_recvd_key, (messages_recvd + 1).to_le_bytes()); } res diff --git a/processor/bitcoin/src/db.rs b/processor/bitcoin/src/db.rs index 94a7c0ba..b0acc427 100644 --- a/processor/bitcoin/src/db.rs +++ b/processor/bitcoin/src/db.rs @@ -1,10 +1,19 @@ +use ciphersuite::group::GroupEncoding; + use serai_client::validator_sets::primitives::Session; -use serai_db::{Get, DbTxn, create_db}; +use serai_db::{Get, DbTxn, create_db, db_channel}; +use primitives::EncodableG; create_db! { Processor { - ExternalKeyForSession: (session: Session) -> Vec, + ExternalKeyForSessionForSigners: (session: Session) -> EncodableG, + } +} + +db_channel! { + Processor { + KeyToActivate: () -> EncodableG } } diff --git a/processor/bitcoin/src/key_gen.rs b/processor/bitcoin/src/key_gen.rs index 416677e7..75944364 100644 --- a/processor/bitcoin/src/key_gen.rs +++ b/processor/bitcoin/src/key_gen.rs @@ -1,7 +1,7 @@ use ciphersuite::{group::GroupEncoding, Ciphersuite, Secp256k1}; use frost::ThresholdKeys; -use crate::scan::scanner; +use crate::{primitives::x_coord_to_even_point, scan::scanner}; pub(crate) struct KeyGenParams; impl key_gen::KeyGenParams for KeyGenParams { @@ -21,4 +21,8 @@ impl key_gen::KeyGenParams for KeyGenParams { // Skip the parity encoding as we know this key is even key[1 ..].to_vec() } + + fn decode_key(key: &[u8]) -> Option<::G> { + x_coord_to_even_point(key) + } } diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index bb788d1e..136b89cb 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -9,9 +9,11 @@ static ALLOCATOR: zalloc::ZeroizingAlloc = use ciphersuite::Ciphersuite; use serai_db::{DbTxn, Db}; +use ::primitives::EncodableG; +use ::key_gen::KeyGenParams as KeyGenParamsTrait; mod primitives; -pub(crate) use primitives::*; +pub(crate) use crate::primitives::*; // Internal utilities for scanning transactions mod scan; @@ -50,59 +52,123 @@ async fn send_message(_msg: messages::ProcessorMessage) { async fn coordinator_loop( mut db: D, - mut key_gen: ::key_gen::KeyGen, + mut key_gen: ::key_gen::KeyGen, mut signers: signers::Signers, Scheduler, Rpc>, mut scanner: Option>>, ) { loop { - let mut txn = Some(db.txn()); - let msg = next_message(txn.as_mut().unwrap()).await; + let mut txn = db.txn(); + let msg = next_message(&mut txn).await; + let mut txn = Some(txn); match msg { messages::CoordinatorMessage::KeyGen(msg) => { + let txn = txn.as_mut().unwrap(); + let mut new_key = None; // This is a 
computationally expensive call yet it happens infrequently - for msg in key_gen.handle(txn.as_mut().unwrap(), msg) { + for msg in key_gen.handle(txn, msg) { + if let messages::key_gen::ProcessorMessage::GeneratedKeyPair { session, .. } = &msg { + new_key = Some(*session) + } send_message(messages::ProcessorMessage::KeyGen(msg)).await; } + + // If we were yielded a key, register it in the signers + if let Some(session) = new_key { + let (substrate_keys, network_keys) = + ::key_gen::KeyGen::::key_shares(txn, session) + .expect("generated key pair yet couldn't get key shares"); + signers.register_keys(txn, session, substrate_keys, network_keys); + } } + // These are cheap calls which are fine to be here in this loop - messages::CoordinatorMessage::Sign(msg) => signers.queue_message(txn.as_mut().unwrap(), &msg), + messages::CoordinatorMessage::Sign(msg) => { + let txn = txn.as_mut().unwrap(); + signers.queue_message(txn, &msg) + } messages::CoordinatorMessage::Coordinator( messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { session, block_number, block, }, - ) => signers.cosign_block(txn.take().unwrap(), session, block_number, block), + ) => { + let txn = txn.take().unwrap(); + signers.cosign_block(txn, session, block_number, block) + } messages::CoordinatorMessage::Coordinator( messages::coordinator::CoordinatorMessage::SignSlashReport { session, report }, - ) => signers.sign_slash_report(txn.take().unwrap(), session, &report), + ) => { + let txn = txn.take().unwrap(); + signers.sign_slash_report(txn, session, &report) + } + messages::CoordinatorMessage::Substrate(msg) => match msg { messages::substrate::CoordinatorMessage::SetKeys { serai_time, session, key_pair } => { - db::ExternalKeyForSession::set(txn.as_mut().unwrap(), session, &key_pair.1.into_inner()); - todo!("TODO: Register in signers"); - todo!("TODO: Scanner activation") + let txn = txn.as_mut().unwrap(); + let key = EncodableG( + KeyGenParams::decode_key(key_pair.1.as_ref()).expect("invalid key set on serai"), + ); + + // Queue the key to be activated upon the next Batch + db::KeyToActivate::send::< + <::ExternalNetworkCurve as Ciphersuite>::G, + >(txn, &key); + + // Set the external key, as needed by the signers + db::ExternalKeyForSessionForSigners::set::< + <::ExternalNetworkCurve as Ciphersuite>::G, + >(txn, session, &key); + + // This isn't cheap yet only happens for the very first set of keys + if scanner.is_none() { + todo!("TODO") + } } messages::substrate::CoordinatorMessage::SlashesReported { session } => { - let key_bytes = db::ExternalKeyForSession::get(txn.as_ref().unwrap(), session).unwrap(); - let mut key_bytes = key_bytes.as_slice(); - let key = - ::ExternalNetworkCurve::read_G(&mut key_bytes) - .unwrap(); - assert!(key_bytes.is_empty()); + let txn = txn.as_mut().unwrap(); - signers.retire_session(txn.as_mut().unwrap(), session, &key) + // Since this session had its slashes reported, it has finished all its signature + // protocols and has been fully retired. 
We retire it from the signers accordingly + let key = db::ExternalKeyForSessionForSigners::take::< + <::ExternalNetworkCurve as Ciphersuite>::G, + >(txn, session) + .unwrap() + .0; + + // This is a cheap call + signers.retire_session(txn, session, &key) } messages::substrate::CoordinatorMessage::BlockWithBatchAcknowledgement { - block, + block: _, batch_id, in_instruction_succeededs, burns, - key_to_activate, - } => todo!("TODO"), + } => { + let mut txn = txn.take().unwrap(); + let scanner = scanner.as_mut().unwrap(); + let key_to_activate = db::KeyToActivate::try_recv::< + <::ExternalNetworkCurve as Ciphersuite>::G, + >(&mut txn) + .map(|key| key.0); + // This is a cheap call as it internally just queues this to be done later + scanner.acknowledge_batch( + txn, + batch_id, + in_instruction_succeededs, + burns, + key_to_activate, + ) + } messages::substrate::CoordinatorMessage::BlockWithoutBatchAcknowledgement { - block, + block: _, burns, - } => todo!("TODO"), + } => { + let txn = txn.take().unwrap(); + let scanner = scanner.as_mut().unwrap(); + // This is a cheap call as it internally just queues this to be done later + scanner.queue_burns(txn, burns) + } }, }; // If the txn wasn't already consumed and committed, commit it diff --git a/processor/bitcoin/src/primitives/mod.rs b/processor/bitcoin/src/primitives/mod.rs index fba52dd9..e089c623 100644 --- a/processor/bitcoin/src/primitives/mod.rs +++ b/processor/bitcoin/src/primitives/mod.rs @@ -1,3 +1,20 @@ +use ciphersuite::{Ciphersuite, Secp256k1}; + +use bitcoin_serai::bitcoin::key::{Parity, XOnlyPublicKey}; + pub(crate) mod output; pub(crate) mod transaction; pub(crate) mod block; + +pub(crate) fn x_coord_to_even_point(key: &[u8]) -> Option<::G> { + if key.len() != 32 { + None? + }; + + // Read the x-only public key + let key = XOnlyPublicKey::from_slice(key).ok()?; + // Convert to a full public key + let key = key.public_key(Parity::Even); + // Convert to k256 (from libsecp256k1) + Secp256k1::read_G(&mut key.serialize().as_slice()).ok() +} diff --git a/processor/bitcoin/src/primitives/output.rs b/processor/bitcoin/src/primitives/output.rs index 05ab6acf..f1a1dc7a 100644 --- a/processor/bitcoin/src/primitives/output.rs +++ b/processor/bitcoin/src/primitives/output.rs @@ -4,11 +4,7 @@ use ciphersuite::{Ciphersuite, Secp256k1}; use bitcoin_serai::{ bitcoin::{ - hashes::Hash as HashTrait, - key::{Parity, XOnlyPublicKey}, - consensus::Encodable, - script::Instruction, - transaction::Transaction, + hashes::Hash as HashTrait, consensus::Encodable, script::Instruction, transaction::Transaction, }, wallet::ReceivedOutput as WalletOutput, }; @@ -24,7 +20,10 @@ use serai_client::{ use primitives::{OutputType, ReceivedOutput}; -use crate::scan::{offsets_for_key, presumed_origin, extract_serai_data}; +use crate::{ + primitives::x_coord_to_even_point, + scan::{offsets_for_key, presumed_origin, extract_serai_data}, +}; #[derive(Clone, PartialEq, Eq, Hash, Debug, Encode, Decode, BorshSerialize, BorshDeserialize)] pub(crate) struct OutputId([u8; 36]); @@ -117,15 +116,11 @@ impl ReceivedOutput<::G, Address> for Output { let Instruction::PushBytes(key) = script.instructions_minimal().last().unwrap().unwrap() else { panic!("last item in v1 Taproot script wasn't bytes") }; - let key = XOnlyPublicKey::from_slice(key.as_ref()) - .expect("last item in v1 Taproot script wasn't a valid x-only public key"); + let key = x_coord_to_even_point(key.as_ref()) + .expect("last item in scanned v1 Taproot script wasn't a valid x-only public key"); - // Convert to a full 
key
-    let key = key.public_key(Parity::Even);
-    // Convert to a k256 key (from libsecp256k1)
-    let output_key = Secp256k1::read_G(&mut key.serialize().as_slice()).unwrap();
     // The output's key minus the output's offset is the root key
-    output_key - (<Secp256k1 as Ciphersuite>::G::GENERATOR * self.output.offset())
+    key - (<Secp256k1 as Ciphersuite>::G::GENERATOR * self.output.offset())
   }
 
   fn presumed_origin(&self) -> Option<Address> {
diff --git a/processor/key-gen/src/lib.rs b/processor/key-gen/src/lib.rs
index 60753412..cb23a740 100644
--- a/processor/key-gen/src/lib.rs
+++ b/processor/key-gen/src/lib.rs
@@ -20,7 +20,7 @@ use dkg::{Participant, ThresholdKeys, evrf::*};
 use serai_validator_sets_primitives::Session;
 
 use messages::key_gen::*;
-use serai_db::{DbTxn, Db};
+use serai_db::{Get, DbTxn};
 
 mod generators;
 use generators::generators;
@@ -49,6 +49,17 @@ pub trait KeyGenParams {
   fn encode_key(key: <Self::ExternalNetworkCurve as Ciphersuite>::G) -> Vec<u8> {
     key.to_bytes().as_ref().to_vec()
   }
+
+  /// Decode keys from their optimal encoding.
+  ///
+  /// A default implementation is provided which calls the traditional `from_bytes`.
+  fn decode_key(mut key: &[u8]) -> Option<<Self::ExternalNetworkCurve as Ciphersuite>::G> {
+    let res = <Self::ExternalNetworkCurve as Ciphersuite>::read_G(&mut key).ok()?;
+    if !key.is_empty() {
+      None?;
+    }
+    Some(res)
+  }
 }
 
 /*
@@ -128,47 +139,41 @@ fn coerce_keys(
 
 /// An instance of the Serai key generation protocol.
 #[derive(Debug)]
-pub struct KeyGen<D: Db, P: KeyGenParams> {
-  db: D,
+pub struct KeyGen<P: KeyGenParams> {
   substrate_evrf_private_key:
     Zeroizing<<<Ristretto as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F>,
   network_evrf_private_key:
     Zeroizing<<<P::ExternalNetworkCurve as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F>,
 }
 
-impl<D: Db, P: KeyGenParams> KeyGen<D, P> {
+impl<P: KeyGenParams> KeyGen<P> {

   /// Create a new key generation instance.
   #[allow(clippy::new_ret_no_self)]
   pub fn new(
-    db: D,
     substrate_evrf_private_key: Zeroizing<
       <<Ristretto as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F,
     >,
     network_evrf_private_key: Zeroizing<
       <<P::ExternalNetworkCurve as EvrfCurve>::EmbeddedCurve as Ciphersuite>::F,
     >,
-  ) -> KeyGen<D, P> {
-    KeyGen { db, substrate_evrf_private_key, network_evrf_private_key }
+  ) -> KeyGen<P> {
+    KeyGen { substrate_evrf_private_key, network_evrf_private_key }
   }
 
   /// Fetch the key shares for a specific session.
   #[allow(clippy::type_complexity)]
   pub fn key_shares(
-    &self,
+    getter: &impl Get,
     session: Session,
   ) -> Option<(Vec<ThresholdKeys<Ristretto>>, Vec<ThresholdKeys<P::ExternalNetworkCurve>>)> {
     // This is safe, despite not having a txn, since it's a static value
     // It doesn't change over time/in relation to other operations
     // It is solely set or unset
-    KeyGenDb::<D, P>::key_shares(&self.db, session)
+    KeyGenDb::<P>::key_shares(getter, session)
   }
 
   /// Handle a message from the coordinator.
-  pub fn handle(
-    &mut self,
-    txn: &mut D::Transaction<'_>,
-    msg: CoordinatorMessage,
-  ) -> Vec<ProcessorMessage> {
+  pub fn handle(&mut self, txn: &mut impl DbTxn, msg: CoordinatorMessage) -> Vec<ProcessorMessage> {
     const SUBSTRATE_KEY_CONTEXT: &[u8] = b"substrate";
     const NETWORK_KEY_CONTEXT: &[u8] = b"network";
     fn context(session: Session, key_context: &[u8]) -> [u8; 32] {
@@ -292,7 +297,7 @@ impl<D: Db, P: KeyGenParams> KeyGen<D, P> {
       // If we've already generated these keys, we don't actually need to save these
       // participations and continue. We solely have to verify them, as to identify malicious
       // participants and prevent DoSs, before returning
-      if self.key_shares(session).is_some() {
+      if Self::key_shares(txn, session).is_some() {
         log::debug!("already finished generating a key for {:?}", session);
 
         match EvrfDkg::<P::ExternalNetworkCurve>::verify(

From 9b8c8f8231d0beb28054109812e5c3ffbaf9dcc0 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Wed, 11 Sep 2024 09:12:00 -0400
Subject: [PATCH 108/368] Misc tidying of serai-db calls

---
 common/db/src/create_db.rs            | 42 ++++++++++---------
 processor/bitcoin/src/main.rs         | 16 +++----
 processor/scanner/src/db.rs           | 19 ++++-----
 processor/scanner/src/report/db.rs    |  7 +---
 processor/scanner/src/substrate/db.rs |  7 ++--
 .../utxo/transaction-chaining/src/db.rs       |  4 +-
 6 files changed, 46 insertions(+), 49 deletions(-)

diff --git a/common/db/src/create_db.rs b/common/db/src/create_db.rs
index 1fb52b1b..50fe51f7 100644
--- a/common/db/src/create_db.rs
+++ b/common/db/src/create_db.rs
@@ -47,9 +47,13 @@ macro_rules!
create_db { ($($arg),*).encode() ) } - pub(crate) fn set$(<$($generic_name: $generic_type),+>)?( + pub(crate) fn set( txn: &mut impl DbTxn $(, $arg: $arg_type)*, data: &$field_type ) { - let key = $field_name::key$(::<$($generic_name),+>)?($($arg),*); + let key = Self::key($($arg),*); txn.put(&key, borsh::to_vec(data).unwrap()); } - pub(crate) fn get$(<$($generic_name: $generic_type),+>)?( + pub(crate) fn get( getter: &impl Get, $($arg: $arg_type),* ) -> Option<$field_type> { - getter.get($field_name::key$(::<$($generic_name),+>)?($($arg),*)).map(|data| { + getter.get(Self::key($($arg),*)).map(|data| { borsh::from_slice(data.as_ref()).unwrap() }) } // Returns a PhantomData of all generic types so if the generic was only used in the value, // not the keys, this doesn't have unused generic types #[allow(dead_code)] - pub(crate) fn del$(<$($generic_name: $generic_type),+>)?( + pub(crate) fn del( txn: &mut impl DbTxn $(, $arg: $arg_type)* ) -> core::marker::PhantomData<($($($generic_name),+)?)> { - txn.del(&$field_name::key$(::<$($generic_name),+>)?($($arg),*)); + txn.del(&Self::key($($arg),*)); core::marker::PhantomData } - pub(crate) fn take$(<$($generic_name: $generic_type),+>)?( + pub(crate) fn take( txn: &mut impl DbTxn $(, $arg: $arg_type)* ) -> Option<$field_type> { - let key = $field_name::key$(::<$($generic_name),+>)?($($arg),*); + let key = Self::key($($arg),*); let res = txn.get(&key).map(|data| borsh::from_slice(data.as_ref()).unwrap()); if res.is_some() { txn.del(key); @@ -119,14 +123,14 @@ macro_rules! db_channel { } } - impl $field_name { - pub(crate) fn send$(<$($generic_name: $generic_type),+>)?( + impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? { + pub(crate) fn send( txn: &mut impl DbTxn $(, $arg: $arg_type)* , value: &$field_type ) { // Use index 0 to store the amount of messages - let messages_sent_key = $field_name::key$(::<$($generic_name),+>)?($($arg,)* 0); + let messages_sent_key = Self::key($($arg,)* 0); let messages_sent = txn.get(&messages_sent_key).map(|counter| { u32::from_le_bytes(counter.try_into().unwrap()) }).unwrap_or(0); @@ -137,22 +141,22 @@ macro_rules! 
db_channel { // at the same time let index_to_use = messages_sent + 2; - $field_name::set$(::<$($generic_name),+>)?(txn, $($arg,)* index_to_use, value); + Self::set(txn, $($arg,)* index_to_use, value); } - pub(crate) fn try_recv$(<$($generic_name: $generic_type),+>)?( + pub(crate) fn try_recv( txn: &mut impl DbTxn $(, $arg: $arg_type)* ) -> Option<$field_type> { - let messages_recvd_key = $field_name::key$(::<$($generic_name),+>)?($($arg,)* 1); + let messages_recvd_key = Self::key($($arg,)* 1); let messages_recvd = txn.get(&messages_recvd_key).map(|counter| { u32::from_le_bytes(counter.try_into().unwrap()) }).unwrap_or(0); let index_to_read = messages_recvd + 2; - let res = $field_name::get$(::<$($generic_name),+>)?(txn, $($arg,)* index_to_read); + let res = Self::get(txn, $($arg,)* index_to_read); if res.is_some() { - $field_name::del$(::<$($generic_name),+>)?(txn, $($arg,)* index_to_read); + Self::del(txn, $($arg,)* index_to_read); txn.put(&messages_recvd_key, (messages_recvd + 1).to_le_bytes()); } res diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index 136b89cb..f1f14082 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -111,14 +111,14 @@ async fn coordinator_loop( ); // Queue the key to be activated upon the next Batch - db::KeyToActivate::send::< + db::KeyToActivate::< <::ExternalNetworkCurve as Ciphersuite>::G, - >(txn, &key); + >::send(txn, &key); // Set the external key, as needed by the signers - db::ExternalKeyForSessionForSigners::set::< + db::ExternalKeyForSessionForSigners::< <::ExternalNetworkCurve as Ciphersuite>::G, - >(txn, session, &key); + >::set(txn, session, &key); // This isn't cheap yet only happens for the very first set of keys if scanner.is_none() { @@ -130,9 +130,9 @@ async fn coordinator_loop( // Since this session had its slashes reported, it has finished all its signature // protocols and has been fully retired. We retire it from the signers accordingly - let key = db::ExternalKeyForSessionForSigners::take::< + let key = db::ExternalKeyForSessionForSigners::< <::ExternalNetworkCurve as Ciphersuite>::G, - >(txn, session) + >::take(txn, session) .unwrap() .0; @@ -147,9 +147,9 @@ async fn coordinator_loop( } => { let mut txn = txn.take().unwrap(); let scanner = scanner.as_mut().unwrap(); - let key_to_activate = db::KeyToActivate::try_recv::< + let key_to_activate = db::KeyToActivate::< <::ExternalNetworkCurve as Ciphersuite>::G, - >(&mut txn) + >::try_recv(&mut txn) .map(|key| key.0); // This is a cheap call as it internally just queues this to be done later scanner.acknowledge_batch( diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 5fcdc160..107616cc 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -107,7 +107,7 @@ create_db!( pub(crate) struct ScannerGlobalDb(PhantomData); impl ScannerGlobalDb { pub(crate) fn has_any_key_been_queued(getter: &impl Get) -> bool { - ActiveKeys::get::>>(getter).is_some() + ActiveKeys::>>::get(getter).is_some() } /// Queue a key. @@ -315,7 +315,7 @@ pub(crate) struct ReceiverScanData { db_channel! 
{ ScannerScanEventuality { - ScannedBlock: (empty_key: ()) -> Vec, + ScannedBlock: () -> Vec, } } @@ -364,14 +364,14 @@ impl ScanToEventualityDb { for output in &data.returns { output.write(&mut buf).unwrap(); } - ScannedBlock::send(txn, (), &buf); + ScannedBlock::send(txn, &buf); } pub(crate) fn recv_scan_data( txn: &mut impl DbTxn, expected_block_number: u64, ) -> ReceiverScanData { let data = - ScannedBlock::try_recv(txn, ()).expect("receiving data for a scanned block not yet sent"); + ScannedBlock::try_recv(txn).expect("receiving data for a scanned block not yet sent"); let mut data = data.as_slice(); let block_number = { @@ -462,7 +462,7 @@ struct BlockBoundInInstructions { db_channel! { ScannerScanReport { - InInstructions: (empty_key: ()) -> BlockBoundInInstructions, + InInstructions: () -> BlockBoundInInstructions, } } @@ -484,7 +484,6 @@ impl ScanToReportDb { } InInstructions::send( txn, - (), &BlockBoundInInstructions { block_number, returnable_in_instructions: buf }, ); } @@ -493,7 +492,7 @@ impl ScanToReportDb { txn: &mut impl DbTxn, block_number: u64, ) -> InInstructionData { - let data = InInstructions::try_recv(txn, ()) + let data = InInstructions::try_recv(txn) .expect("receiving InInstructions for a scanned block not yet sent"); assert_eq!( block_number, data.block_number, @@ -556,7 +555,7 @@ mod _public_db { db_channel! { ScannerPublic { - Batches: (empty_key: ()) -> Batch, + Batches: () -> Batch, BatchesToSign: (key: &[u8]) -> Batch, AcknowledgedBatches: (key: &[u8]) -> u32, CompletedEventualities: (key: &[u8]) -> [u8; 32], @@ -570,12 +569,12 @@ mod _public_db { pub struct Batches; impl Batches { pub(crate) fn send(txn: &mut impl DbTxn, batch: &Batch) { - _public_db::Batches::send(txn, (), batch); + _public_db::Batches::send(txn, batch); } /// Receive a batch to publish. 
pub fn try_recv(txn: &mut impl DbTxn) -> Option { - _public_db::Batches::try_recv(txn, ()) + _public_db::Batches::try_recv(txn) } } diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs index 10a3f6bb..186accac 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/report/db.rs @@ -54,9 +54,7 @@ impl ReportDb { } pub(crate) fn take_block_number_for_batch(txn: &mut impl DbTxn, id: u32) -> Option { - let block_number = BlockNumberForBatch::get(txn, id)?; - BlockNumberForBatch::del(txn, id); - Some(block_number) + BlockNumberForBatch::take(txn, id) } pub(crate) fn save_external_key_for_session_to_sign_batch( @@ -103,8 +101,7 @@ impl ReportDb { txn: &mut impl DbTxn, id: u32, ) -> Option>>> { - let buf = SerializedReturnAddresses::get(txn, id)?; - SerializedReturnAddresses::del(txn, id); + let buf = SerializedReturnAddresses::take(txn, id)?; let mut buf = buf.as_slice(); let mut res = Vec::with_capacity(buf.len() / (32 + 1 + 8)); diff --git a/processor/scanner/src/substrate/db.rs b/processor/scanner/src/substrate/db.rs index 697897c2..18435856 100644 --- a/processor/scanner/src/substrate/db.rs +++ b/processor/scanner/src/substrate/db.rs @@ -37,7 +37,7 @@ pub(crate) enum Action { db_channel!( ScannerSubstrate { - Actions: (empty_key: ()) -> ActionEncodable, + Actions: () -> ActionEncodable, } ); @@ -52,7 +52,6 @@ impl SubstrateDb { ) { Actions::send( txn, - (), &ActionEncodable::AcknowledgeBatch(AcknowledgeBatchEncodable { batch_id, in_instruction_succeededs, @@ -62,11 +61,11 @@ impl SubstrateDb { ); } pub(crate) fn queue_queue_burns(txn: &mut impl DbTxn, burns: Vec) { - Actions::send(txn, (), &ActionEncodable::QueueBurns(burns)); + Actions::send(txn, &ActionEncodable::QueueBurns(burns)); } pub(crate) fn next_action(txn: &mut impl DbTxn) -> Option> { - let action_encodable = Actions::try_recv(txn, ())?; + let action_encodable = Actions::try_recv(txn)?; Some(match action_encodable { ActionEncodable::AcknowledgeBatch(AcknowledgeBatchEncodable { batch_id, diff --git a/processor/scheduler/utxo/transaction-chaining/src/db.rs b/processor/scheduler/utxo/transaction-chaining/src/db.rs index 697f1009..11bcd78d 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/db.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/db.rs @@ -69,9 +69,7 @@ impl Db { txn: &mut impl DbTxn, output: & as ReceivedOutput, AddressFor>>::Id, ) -> bool { - let res = AlreadyAccumulatedOutput::get(txn, output.as_ref()).is_some(); - AlreadyAccumulatedOutput::del(txn, output.as_ref()); - res + AlreadyAccumulatedOutput::take(txn, output.as_ref()).is_some() } pub(crate) fn queued_payments( From 3ac0265f0727aca0a4b0a763857154da9f49562c Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 11 Sep 2024 11:58:27 -0400 Subject: [PATCH 109/368] Add section documenting the safety of txindex upon reorganizations --- processor/bitcoin/src/scan.rs | 8 ++++---- processor/bitcoin/src/txindex.rs | 30 ++++++++++++++++++++++++++++-- 2 files changed, 32 insertions(+), 6 deletions(-) diff --git a/processor/bitcoin/src/scan.rs b/processor/bitcoin/src/scan.rs index b3d3a6dc..6d7fab88 100644 --- a/processor/bitcoin/src/scan.rs +++ b/processor/bitcoin/src/scan.rs @@ -16,7 +16,7 @@ use serai_client::networks::bitcoin::Address; use serai_db::Get; use primitives::OutputType; -use crate::{db, hash_bytes}; +use crate::hash_bytes; const KEY_DST: &[u8] = b"Serai Bitcoin Processor Key Offset"; static BRANCH_BASE_OFFSET: LazyLock<::F> = @@ -62,9 +62,9 @@ pub(crate) fn presumed_origin(getter: &impl 
Get, tx: &Transaction) -> Option<Address> { diff --git a/processor/bitcoin/src/txindex.rs b/processor/bitcoin/src/txindex.rs +pub(crate) fn script_pubkey_for_on_chain_output( + getter: &impl Get, + txid: [u8; 32], + vout: u32, +) -> ScriptBuf { + // We index every single output on the blockchain, so this shouldn't be possible + ScriptBuf::from_bytes( + db::ScriptPubKey::get(getter, txid, vout) + .expect("requested script public key for unknown output"), + ) +} + pub(crate) struct TxIndexTask(Rpc); #[async_trait::async_trait] @@ -40,6 +54,18 @@ impl ContinuallyRan for TxIndexTask { Rpc::::CONFIRMATIONS ))?; + /* + `finalized_block_number` is the latest block number minus confirmations. The blockchain may + undetectably re-organize though, as while the scanner will maintain an index of finalized + blocks and panics on reorganization, this runs prior to the scanner and that index. + + That no reorganization exceeds `CONFIRMATIONS` blocks is still an invariant. Even if one + occurs, this saves the script public keys *by the transaction hash and output index*. Accordingly, it isn't + invalidated on reorganization. The only risk would be if the new chain reorganized to + include a transaction to Serai which we didn't index the parents of. If that happens, we'll + panic when we scan the transaction, causing the invariant to be detected. + */ + let finalized_block_number_in_db = db::LatestBlockToYieldAsFinalized::get(&self.0.db); let next_block = finalized_block_number_in_db.map_or(0, |block| block + 1); @@ -63,7 +89,7 @@ impl ContinuallyRan for TxIndexTask { let mut txn = self.0.db.txn(); - for tx in &block.txdata[1 ..] { + for tx in &block.txdata { let txid = hash_bytes(tx.compute_txid().to_raw_hash()); for (o, output) in tx.output.iter().enumerate() { let o = u32::try_from(o).unwrap(); From fcd5fb85df7357b14a4ee0d1d2c10896b6650ee1 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 11 Sep 2024 11:59:15 -0400 Subject: [PATCH 110/368] Add binary search to find the block to start scanning from --- processor/bitcoin/src/main.rs | 97 ++++++++++++------- processor/bitcoin/src/rpc.rs | 44 ++++++++ processor/scanner/src/lib.rs | 5 ++ processor/src/main.rs | 160 ---------------------------- 4 files changed, 113 insertions(+), 193 deletions(-) diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index f1f14082..1c07b6cd 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -6,11 +6,16 @@ static ALLOCATOR: zalloc::ZeroizingAlloc = zalloc::ZeroizingAlloc(std::alloc::System); +use core::cmp::Ordering; + use ciphersuite::Ciphersuite; +use serai_client::validator_sets::primitives::Session; + use serai_db::{DbTxn, Db}; use ::primitives::EncodableG; use ::key_gen::KeyGenParams as KeyGenParamsTrait; +use scanner::{ScannerFeed, Scanner}; mod primitives; pub(crate) use crate::primitives::*; @@ -38,6 +43,56 @@ pub(crate) fn hash_bytes(hash: bitcoin_serai::bitcoin::hashes::sha256d::Hash) -> res } +async fn first_block_after_time(feed: &S, serai_time: u64) -> u64 { + async fn first_block_after_time_iteration( + feed: &S, + serai_time: u64, + ) -> Result, S::EphemeralError> { + let latest = feed.latest_finalized_block_number().await?; + let latest_time = feed.time_of_block(latest).await?; + if latest_time < serai_time { + tokio::time::sleep(core::time::Duration::from_secs(serai_time - latest_time)).await; + return Ok(None); + } + + // A finalized block has a time greater than or equal to the time we want to start at + // Find the first such block with a binary search + // start_search and end_search are inclusive + let mut start_search = 0; + let mut end_search = latest; + while start_search != end_search { + // This purposely chooses the earlier block in the
case two blocks are both in the middle + let to_check = start_search + ((end_search - start_search) / 2); + let block_time = feed.time_of_block(to_check).await?; + match block_time.cmp(&serai_time) { + Ordering::Less => { + start_search = to_check + 1; + assert!(start_search <= end_search); + } + Ordering::Equal | Ordering::Greater => { + // This holds true since we pick the earlier block upon an even search distance + // If it didn't, this would cause an infinite loop + assert!(to_check < end_search); + end_search = to_check; + } + } + } + Ok(Some(start_search)) + } + loop { + match first_block_after_time_iteration(feed, serai_time).await { + Ok(Some(block)) => return block, + Ok(None) => { + log::info!("waiting for block to activate at (a block with timestamp >= {serai_time})"); + } + Err(e) => { + log::error!("couldn't find the first block Serai should scan due to an RPC error: {e:?}"); + } + } + tokio::time::sleep(core::time::Duration::from_secs(5)).await; + } +} + /// Fetch the next message from the Coordinator. /// /// This message is guaranteed to have never been handled before, where handling is defined as @@ -52,11 +107,13 @@ async fn send_message(_msg: messages::ProcessorMessage) { async fn coordinator_loop( mut db: D, + feed: Rpc, mut key_gen: ::key_gen::KeyGen, mut signers: signers::Signers, Scheduler, Rpc>, mut scanner: Option>>, ) { loop { + let db_clone = db.clone(); let mut txn = db.txn(); let msg = next_message(&mut txn).await; let mut txn = Some(txn); @@ -120,9 +177,13 @@ async fn coordinator_loop( <::ExternalNetworkCurve as Ciphersuite>::G, >::set(txn, session, &key); - // This isn't cheap yet only happens for the very first set of keys - if scanner.is_none() { - todo!("TODO") + // This is presumed extremely expensive, potentially blocking for several minutes, yet + // only happens for the very first set of keys + if session == Session(0) { + assert!(scanner.is_none()); + let start_block = first_block_after_time(&feed, serai_time).await; + scanner = + Some(Scanner::new::>(db_clone, feed.clone(), start_block, key.0).await); } } messages::substrate::CoordinatorMessage::SlashesReported { session } => { @@ -241,36 +302,6 @@ impl TransactionTrait for Transaction { } } -#[async_trait] -impl BlockTrait for Block { - async fn time(&self, rpc: &Bitcoin) -> u64 { - // Use the network median time defined in BIP-0113 since the in-block time isn't guaranteed to - // be monotonic - let mut timestamps = vec![u64::from(self.header.time)]; - let mut parent = self.parent(); - // BIP-0113 uses a median of the prior 11 blocks - while timestamps.len() < 11 { - let mut parent_block; - while { - parent_block = rpc.rpc.get_block(&parent).await; - parent_block.is_err() - } { - log::error!("couldn't get parent block when trying to get block time: {parent_block:?}"); - sleep(Duration::from_secs(5)).await; - } - let parent_block = parent_block.unwrap(); - timestamps.push(u64::from(parent_block.header.time)); - parent = parent_block.parent(); - - if parent == [0; 32] { - break; - } - } - timestamps.sort(); - timestamps[timestamps.len() / 2] - } -} - impl Bitcoin { pub(crate) async fn new(url: String) -> Bitcoin { let mut res = Rpc::new(url.clone()).await; diff --git a/processor/bitcoin/src/rpc.rs b/processor/bitcoin/src/rpc.rs index cafb0ef3..a6f6e5fd 100644 --- a/processor/bitcoin/src/rpc.rs +++ b/processor/bitcoin/src/rpc.rs @@ -34,6 +34,50 @@ impl ScannerFeed for Rpc { db::LatestBlockToYieldAsFinalized::get(&self.db).ok_or(RpcError::ConnectionError) } + async fn time_of_block(&self, number: u64) 
-> Result { + let number = usize::try_from(number).unwrap(); + + /* + The block time isn't guaranteed to be monotonic. It is guaranteed to be greater than the + median time of prior blocks, as detailed in BIP-0113 (a BIP which used that fact to improve + CLTV). This creates a monotonic median time which we use as the block time. + */ + // This implements `GetMedianTimePast` + let median = { + const MEDIAN_TIMESPAN: usize = 11; + let mut timestamps = Vec::with_capacity(MEDIAN_TIMESPAN); + for i in number.saturating_sub(MEDIAN_TIMESPAN) .. number { + timestamps.push(self.rpc.get_block(&self.rpc.get_block_hash(i).await?).await?.header.time); + } + timestamps.sort(); + timestamps[timestamps.len() / 2] + }; + + /* + This block's timestamp is guaranteed to be greater than this median: + https://github.com/bitcoin/bitcoin/blob/0725a374941355349bb4bc8a79dad1affb27d3b9 + /src/validation.cpp#L4182-L4184 + + This does not guarantee the median always increases however. Take the following trivial + example, as the window is initially built: + + 0 block has time 0 // Prior blocks: [] + 1 block has time 1 // Prior blocks: [0] + 2 block has time 2 // Prior blocks: [0, 1] + 3 block has time 2 // Prior blocks: [0, 1, 2] + + These two blocks have the same time (both greater than the median of their prior blocks) and + the same median. + + The median will never decrease however. The values pushed onto the window will always be + greater than the median. If a value greater than the median is popped, the median will remain + the same (due to the counterbalance of the pushed value). If a value less than the median is + popped, the median will increase (either to another instance of the same value, yet one + closer to the end of the repeating sequence, or to a higher value). + */ + Ok(median.into()) + } + async fn unchecked_block_header_by_number( &self, number: u64, diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 4f30f5e7..6ed16d74 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -106,6 +106,11 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { /// consensus. The genesis block accordingly has block number 0. async fn latest_finalized_block_number(&self) -> Result; + /// Fetch the timestamp of a block (represented in seconds since the epoch). + /// + /// This must be monotonically incrementing. Two blocks may share a timestamp. + async fn time_of_block(&self, number: u64) -> Result; + /// Fetch a block header by its number. /// /// This does not check the returned BlockHeader is the header for the block we indexed. 
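As a reading aid: the median-time-past computation `time_of_block` implements above condenses to the following standalone sketch (the names and the `main` harness are illustrative, not code from this patch), which also checks the window example from the comment:

fn median_time_past(mut prior_timestamps: Vec<u32>) -> u32 {
  // BIP-0113 takes the median of the header times of up to the 11 prior blocks
  assert!(!prior_timestamps.is_empty());
  prior_timestamps.sort();
  prior_timestamps[prior_timestamps.len() / 2]
}

fn main() {
  // Blocks 2 and 3 from the example above share a median (1), despite block 3
  // having the larger window [0, 1, 2]
  assert_eq!(median_time_past(vec![0, 1]), 1);
  assert_eq!(median_time_past(vec![0, 1, 2]), 1);
  // Pushing a value greater than the median never decreases the median
  assert_eq!(median_time_past(vec![0, 1, 2, 2]), 2);
}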
diff --git a/processor/src/main.rs b/processor/src/main.rs index 51123b92..65e74f55 100644 --- a/processor/src/main.rs +++ b/processor/src/main.rs @@ -29,158 +29,15 @@ async fn handle_coordinator_msg( substrate_mutable: &mut SubstrateMutable, msg: &Message, ) { - async fn activate_key( - network: &N, - substrate_mutable: &mut SubstrateMutable, - tributary_mutable: &mut TributaryMutable, - txn: &mut D::Transaction<'_>, - session: Session, - key_pair: KeyPair, - activation_number: usize, - ) { - info!("activating {session:?}'s keys at {activation_number}"); - - let network_key = ::Curve::read_G::<&[u8]>(&mut key_pair.1.as_ref()) - .expect("Substrate finalized invalid point as a network's key"); - - if tributary_mutable.key_gen.in_set(&session) { - // See TributaryMutable's struct definition for why this block is safe - let KeyConfirmed { substrate_keys, network_keys } = - tributary_mutable.key_gen.confirm(txn, session, &key_pair); - if session.0 == 0 { - tributary_mutable.batch_signer = - Some(BatchSigner::new(N::NETWORK, session, substrate_keys)); - } - tributary_mutable - .signers - .insert(session, Signer::new(network.clone(), session, network_keys)); - } - - substrate_mutable.add_key(txn, activation_number, network_key).await; - } - match msg.msg.clone() { CoordinatorMessage::Substrate(msg) => { match msg { - messages::substrate::CoordinatorMessage::ConfirmKeyPair { context, session, key_pair } => { - // This is the first key pair for this network so no block has been finalized yet - // TODO: Write documentation for this in docs/ - // TODO: Use an Option instead of a magic? - if context.network_latest_finalized_block.0 == [0; 32] { - assert!(tributary_mutable.signers.is_empty()); - assert!(tributary_mutable.batch_signer.is_none()); - assert!(tributary_mutable.cosigner.is_none()); - // We can't check this as existing is no longer pub - // assert!(substrate_mutable.existing.as_ref().is_none()); - - // Wait until a network's block's time exceeds Serai's time - // These time calls are extremely expensive for what they do, yet they only run when - // confirming the first key pair, before any network activity has occurred, so they - // should be fine - - // If the latest block number is 10, then the block indexed by 1 has 10 confirms - // 10 + 1 - 10 = 1 - let mut block_i; - while { - block_i = (network.get_latest_block_number_with_retries().await + 1) - .saturating_sub(N::CONFIRMATIONS); - network.get_block_with_retries(block_i).await.time(network).await < context.serai_time - } { - info!( - "serai confirmed the first key pair for a set. {} {}", - "we're waiting for a network's finalized block's time to exceed unix time ", - context.serai_time, - ); - sleep(Duration::from_secs(5)).await; - } - - // Find the first block to do so - let mut earliest = block_i; - // earliest > 0 prevents a panic if Serai creates keys before the genesis block - // which... 
should be impossible - // Yet a prevented panic is a prevented panic - while (earliest > 0) && - (network.get_block_with_retries(earliest - 1).await.time(network).await >= - context.serai_time) - { - earliest -= 1; - } - - // Use this as the activation block - let activation_number = earliest; - - activate_key( - network, - substrate_mutable, - tributary_mutable, - txn, - session, - key_pair, - activation_number, - ) - .await; - } else { - let mut block_before_queue_block = >::Id::default(); - block_before_queue_block - .as_mut() - .copy_from_slice(&context.network_latest_finalized_block.0); - // We can't set these keys for activation until we know their queue block, which we - // won't until the next Batch is confirmed - // Set this variable so when we get the next Batch event, we can handle it - PendingActivationsDb::set_pending_activation::( - txn, - &block_before_queue_block, - session, - key_pair, - ); - } - } - messages::substrate::CoordinatorMessage::SubstrateBlock { context, block: substrate_block, burns, batches, } => { - if let Some((block, session, key_pair)) = - PendingActivationsDb::pending_activation::(txn) - { - // Only run if this is a Batch belonging to a distinct block - if context.network_latest_finalized_block.as_ref() != block.as_ref() { - let mut queue_block = >::Id::default(); - queue_block.as_mut().copy_from_slice(context.network_latest_finalized_block.as_ref()); - - let activation_number = substrate_mutable - .block_number(txn, &queue_block) - .await - .expect("KeyConfirmed from context we haven't synced") + - N::CONFIRMATIONS; - - activate_key( - network, - substrate_mutable, - tributary_mutable, - txn, - session, - key_pair, - activation_number, - ) - .await; - //clear pending activation - txn.del(PendingActivationsDb::key()); - } - } - - // Since this block was acknowledged, we no longer have to sign the batches within it - if let Some(batch_signer) = tributary_mutable.batch_signer.as_mut() { - for batch_id in batches { - batch_signer.batch_signed(txn, batch_id); - } - } - - let (acquired_lock, to_sign) = - substrate_mutable.substrate_block(txn, network, context, burns).await; - // Send SubstrateBlockAck, with relevant plan IDs, before we trigger the signing of these // plans if !tributary_mutable.signers.is_empty() { @@ -197,23 +54,6 @@ async fn handle_coordinator_msg( }) .await; } - - // See commentary in TributaryMutable for why this is safe - let signers = &mut tributary_mutable.signers; - for (key, id, tx, eventuality) in to_sign { - if let Some(session) = SessionDb::get(txn, key.to_bytes().as_ref()) { - let signer = signers.get_mut(&session).unwrap(); - if let Some(msg) = signer.sign_transaction(txn, id, tx, &eventuality).await { - coordinator.send(msg).await; - } - } - } - - // This is not premature, even if this block had multiple `Batch`s created, as the first - // `Batch` alone will trigger all Plans/Eventualities/Signs - if acquired_lock { - substrate_mutable.release_scanner_lock().await; - } } } } From b6811f90153980121f4a3f84cc1cebc8b9d48c64 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 11 Sep 2024 18:56:23 -0400 Subject: [PATCH 111/368] serai-processor-bin Moves the coordinator loop out of serai-bitcoin-processor, completing it. Fixes a potential race condition in the message-queue regarding multiple sockets sending messages at once. 
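The binary search added in the prior commit, and moved into `serai-processor-bin` by this one, reduces to this standalone sketch over an in-memory slice of monotonically non-decreasing block times (RPC calls and the retry loop elided; all names here are illustrative):

fn first_block_after_time(block_times: &[u64], serai_time: u64) -> Option<u64> {
  // The latest finalized block must have reached the target time for an answer to exist
  if *block_times.last()? < serai_time {
    return None;
  }
  // start_search and end_search are inclusive bounds on the answer
  let (mut start_search, mut end_search) = (0, block_times.len() - 1);
  while start_search != end_search {
    // Checking the earlier of the two middle candidates guarantees progress:
    // to_check < end_search always holds here, so the range strictly shrinks
    let to_check = start_search + ((end_search - start_search) / 2);
    if block_times[to_check] < serai_time {
      start_search = to_check + 1;
    } else {
      end_search = to_check;
    }
  }
  u64::try_from(start_search).ok()
}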
--- .github/workflows/tests.yml | 1 + Cargo.lock | 37 +++ Cargo.toml | 1 + message-queue/src/main.rs | 5 +- processor/bin/Cargo.toml | 60 +++++ processor/bin/LICENSE | 15 ++ processor/bin/README.md | 3 + processor/bin/src/coordinator.rs | 196 +++++++++++++++ processor/bin/src/lib.rs | 293 +++++++++++++++++++++++ processor/bitcoin/Cargo.toml | 8 +- processor/bitcoin/src/key_gen.rs | 8 +- processor/bitcoin/src/main.rs | 234 ++---------------- processor/bitcoin/src/txindex.rs | 2 +- processor/key-gen/src/db.rs | 13 +- processor/key-gen/src/lib.rs | 31 +-- processor/scanner/src/db.rs | 9 +- processor/scanner/src/lib.rs | 38 ++- processor/signers/src/coordinator/mod.rs | 1 + processor/signers/src/lib.rs | 1 + processor/src/coordinator.rs | 43 ---- processor/src/db.rs | 43 ---- processor/src/main.rs | 257 -------------------- 22 files changed, 705 insertions(+), 594 deletions(-) create mode 100644 processor/bin/Cargo.toml create mode 100644 processor/bin/LICENSE create mode 100644 processor/bin/README.md create mode 100644 processor/bin/src/coordinator.rs create mode 100644 processor/bin/src/lib.rs delete mode 100644 processor/src/coordinator.rs delete mode 100644 processor/src/db.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index edd219f9..8bf4084d 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -49,6 +49,7 @@ jobs: -p serai-processor-utxo-scheduler \ -p serai-processor-transaction-chaining-scheduler \ -p serai-processor-signers \ + -p serai-processor-bin \ -p serai-bitcoin-processor \ -p serai-ethereum-processor \ -p serai-monero-processor \ diff --git a/Cargo.lock b/Cargo.lock index 8c0c3dd5..7e7d78a3 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8131,6 +8131,7 @@ dependencies = [ "dkg", "env_logger", "flexible-transcript", + "hex", "log", "modular-frost", "parity-scale-codec", @@ -8140,6 +8141,7 @@ dependencies = [ "serai-db", "serai-env", "serai-message-queue", + "serai-processor-bin", "serai-processor-key-gen", "serai-processor-messages", "serai-processor-primitives", @@ -8150,6 +8152,7 @@ dependencies = [ "serai-processor-utxo-scheduler-primitives", "tokio", "zalloc", + "zeroize", ] [[package]] @@ -8635,6 +8638,40 @@ dependencies = [ "zeroize", ] +[[package]] +name = "serai-processor-bin" +version = "0.1.0" +dependencies = [ + "async-trait", + "bitcoin-serai", + "borsh", + "ciphersuite", + "dkg", + "env_logger", + "flexible-transcript", + "hex", + "log", + "modular-frost", + "parity-scale-codec", + "rand_core", + "secp256k1", + "serai-client", + "serai-db", + "serai-env", + "serai-message-queue", + "serai-processor-key-gen", + "serai-processor-messages", + "serai-processor-primitives", + "serai-processor-scanner", + "serai-processor-scheduler-primitives", + "serai-processor-signers", + "serai-processor-transaction-chaining-scheduler", + "serai-processor-utxo-scheduler-primitives", + "tokio", + "zalloc", + "zeroize", +] + [[package]] name = "serai-processor-frost-attempt-manager" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index 25e6c25d..b35b3318 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -83,6 +83,7 @@ members = [ "processor/scheduler/utxo/transaction-chaining", "processor/signers", + "processor/bin", "processor/bitcoin", "processor/ethereum", "processor/monero", diff --git a/message-queue/src/main.rs b/message-queue/src/main.rs index c43cc3c8..03c580ce 100644 --- a/message-queue/src/main.rs +++ b/message-queue/src/main.rs @@ -72,6 +72,9 @@ pub(crate) fn queue_message( // Assert one, and only one of these, is the 
coordinator assert!(matches!(meta.from, Service::Coordinator) ^ matches!(meta.to, Service::Coordinator)); + // Lock the queue + let queue_lock = QUEUES.read().unwrap()[&(meta.from, meta.to)].write().unwrap(); + // Verify (from, to, intent) hasn't been prior seen fn key(domain: &'static [u8], key: impl AsRef<[u8]>) -> Vec { [&[u8::try_from(domain.len()).unwrap()], domain, key.as_ref()].concat() @@ -93,7 +96,7 @@ pub(crate) fn queue_message( DbTxn::put(&mut txn, intent_key, []); // Queue it - let id = QUEUES.read().unwrap()[&(meta.from, meta.to)].write().unwrap().queue_message( + let id = queue_lock.queue_message( &mut txn, QueuedMessage { from: meta.from, diff --git a/processor/bin/Cargo.toml b/processor/bin/Cargo.toml new file mode 100644 index 00000000..f3f3b753 --- /dev/null +++ b/processor/bin/Cargo.toml @@ -0,0 +1,60 @@ +[package] +name = "serai-processor-bin" +version = "0.1.0" +description = "Framework for Serai processor binaries" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/bin" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +async-trait = { version = "0.1", default-features = false } +zeroize = { version = "1", default-features = false, features = ["std"] } +rand_core = { version = "0.6", default-features = false } + +hex = { version = "0.4", default-features = false, features = ["std"] } +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } +ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "secp256k1"] } +dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-secp256k1"] } +frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } + +secp256k1 = { version = "0.29", default-features = false, features = ["std", "global-context", "rand-std"] } +bitcoin-serai = { path = "../../networks/bitcoin", default-features = false, features = ["std"] } + +log = { version = "0.4", default-features = false, features = ["std"] } +env_logger = { version = "0.10", default-features = false, features = ["humantime"] } +tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } + +zalloc = { path = "../../common/zalloc" } +serai-db = { path = "../../common/db" } +serai-env = { path = "../../common/env" } + +serai-client = { path = "../../substrate/client", default-features = false, features = ["bitcoin"] } + +messages = { package = "serai-processor-messages", path = "../messages" } +key-gen = { package = "serai-processor-key-gen", path = "../key-gen" } + +primitives = { package = "serai-processor-primitives", path = "../primitives" } +scheduler = { package = "serai-processor-scheduler-primitives", path = "../scheduler/primitives" } +scanner = { package = "serai-processor-scanner", path = "../scanner" } +utxo-scheduler = { package = "serai-processor-utxo-scheduler-primitives", path = "../scheduler/utxo/primitives" } +transaction-chaining-scheduler = { package = "serai-processor-transaction-chaining-scheduler", path = 
"../scheduler/utxo/transaction-chaining" } +signers = { package = "serai-processor-signers", path = "../signers" } + +message-queue = { package = "serai-message-queue", path = "../../message-queue" } + +[features] +parity-db = ["serai-db/parity-db"] +rocksdb = ["serai-db/rocksdb"] diff --git a/processor/bin/LICENSE b/processor/bin/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/processor/bin/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/processor/bin/README.md b/processor/bin/README.md new file mode 100644 index 00000000..858a2925 --- /dev/null +++ b/processor/bin/README.md @@ -0,0 +1,3 @@ +# Serai Processor Bin + +The framework for Serai processor binaries, common to the Serai processors. diff --git a/processor/bin/src/coordinator.rs b/processor/bin/src/coordinator.rs new file mode 100644 index 00000000..d9d8d112 --- /dev/null +++ b/processor/bin/src/coordinator.rs @@ -0,0 +1,196 @@ +use std::sync::Arc; + +use tokio::sync::mpsc; + +use scale::Encode; +use serai_client::{ + primitives::{NetworkId, Signature}, + validator_sets::primitives::Session, + in_instructions::primitives::{Batch, SignedBatch}, +}; + +use serai_env as env; +use serai_db::{Get, DbTxn, Db, create_db, db_channel}; +use message_queue::{Service, Metadata, client::MessageQueue}; + +create_db! { + ProcessorBinCoordinator { + SavedMessages: () -> u64, + } +} + +db_channel! { + ProcessorBinCoordinator { + CoordinatorMessages: () -> Vec + } +} + +async fn send(service: Service, queue: &MessageQueue, msg: messages::ProcessorMessage) { + let metadata = Metadata { from: service, to: Service::Coordinator, intent: msg.intent() }; + let msg = borsh::to_vec(&msg).unwrap(); + queue.queue(metadata, msg).await; +} + +pub(crate) struct Coordinator { + new_message: mpsc::UnboundedReceiver<()>, + service: Service, + message_queue: Arc, +} + +pub(crate) struct CoordinatorSend(Service, Arc); + +impl Coordinator { + pub(crate) fn new(mut db: crate::Db) -> Self { + let (new_message_send, new_message_recv) = mpsc::unbounded_channel(); + + let network_id = match env::var("NETWORK").expect("network wasn't specified").as_str() { + "bitcoin" => NetworkId::Bitcoin, + "ethereum" => NetworkId::Ethereum, + "monero" => NetworkId::Monero, + _ => panic!("unrecognized network"), + }; + let service = Service::Processor(network_id); + let message_queue = Arc::new(MessageQueue::from_env(service)); + + // Spawn a task to move messages from the message-queue to our database so we can achieve + // atomicity. 
This is the only place we read/ack messages from + tokio::spawn({ + let message_queue = message_queue.clone(); + async move { + loop { + let msg = message_queue.next(Service::Coordinator).await; + + let prior_msg = msg.id.checked_sub(1); + let saved_messages = SavedMessages::get(&db); + /* + This should either be: + A) The message after the message we just saved (as normal) + B) The message we just saved (if we rebooted and failed to ack it) + */ + assert!((saved_messages == prior_msg) || (saved_messages == Some(msg.id))); + if saved_messages < Some(msg.id) { + let mut txn = db.txn(); + CoordinatorMessages::send(&mut txn, &msg.msg); + SavedMessages::set(&mut txn, &msg.id); + txn.commit(); + } + // Acknowledge this message + message_queue.ack(Service::Coordinator, msg.id).await; + + // Fire that there's a new message + new_message_send.send(()).expect("failed to tell the Coordinator there's a new message"); + } + } + }); + + Coordinator { new_message: new_message_recv, service, message_queue } + } + + pub(crate) fn coordinator_send(&self) -> CoordinatorSend { + CoordinatorSend(self.service, self.message_queue.clone()) + } + + /// Fetch the next message from the Coordinator. + /// + /// This message is guaranteed to have never been handled before, where handling is defined as + /// this `txn` being committed. + pub(crate) async fn next_message( + &mut self, + txn: &mut impl DbTxn, + ) -> messages::CoordinatorMessage { + loop { + match CoordinatorMessages::try_recv(txn) { + Some(msg) => { + return borsh::from_slice(&msg) + .expect("message wasn't a borsh-encoded CoordinatorMessage") + } + None => { + let _ = + tokio::time::timeout(core::time::Duration::from_secs(60), self.new_message.recv()) + .await; + } + } + } + } + + #[allow(clippy::unused_async)] + pub(crate) async fn send_message(&mut self, msg: messages::ProcessorMessage) { + send(self.service, &self.message_queue, msg).await + } +} + +#[async_trait::async_trait] +impl signers::Coordinator for CoordinatorSend { + type EphemeralError = (); + + async fn send( + &mut self, + msg: messages::sign::ProcessorMessage, + ) -> Result<(), Self::EphemeralError> { + // TODO: Use a fallible send for these methods + send(self.0, &self.1, messages::ProcessorMessage::Sign(msg)).await; + Ok(()) + } + + async fn publish_cosign( + &mut self, + block_number: u64, + block: [u8; 32], + signature: Signature, + ) -> Result<(), Self::EphemeralError> { + send( + self.0, + &self.1, + messages::ProcessorMessage::Coordinator( + messages::coordinator::ProcessorMessage::CosignedBlock { + block_number, + block, + signature: signature.encode(), + }, + ), + ) + .await; + Ok(()) + } + + async fn publish_batch(&mut self, batch: Batch) -> Result<(), Self::EphemeralError> { + send( + self.0, + &self.1, + messages::ProcessorMessage::Substrate(messages::substrate::ProcessorMessage::Batch { batch }), + ) + .await; + Ok(()) + } + + async fn publish_signed_batch(&mut self, batch: SignedBatch) -> Result<(), Self::EphemeralError> { + send( + self.0, + &self.1, + messages::ProcessorMessage::Coordinator( + messages::coordinator::ProcessorMessage::SignedBatch { batch }, + ), + ) + .await; + Ok(()) + } + + async fn publish_slash_report_signature( + &mut self, + session: Session, + signature: Signature, + ) -> Result<(), Self::EphemeralError> { + send( + self.0, + &self.1, + messages::ProcessorMessage::Coordinator( + messages::coordinator::ProcessorMessage::SignedSlashReport { + session, + signature: signature.encode(), + }, + ), + ) + .await; + Ok(()) + } +} diff --git 
a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs new file mode 100644 index 00000000..15873873 --- /dev/null +++ b/processor/bin/src/lib.rs @@ -0,0 +1,293 @@ +use core::cmp::Ordering; + +use zeroize::{Zeroize, Zeroizing}; + +use ciphersuite::{ + group::{ff::PrimeField, GroupEncoding}, + Ciphersuite, Ristretto, +}; +use dkg::evrf::EvrfCurve; + +use serai_client::validator_sets::primitives::Session; + +use serai_env as env; +use serai_db::{Get, DbTxn, Db as DbTrait, create_db, db_channel}; + +use primitives::EncodableG; +use ::key_gen::{KeyGenParams, KeyGen}; +use scheduler::SignableTransaction; +use scanner::{ScannerFeed, Scanner, KeyFor, Scheduler}; +use signers::{TransactionPublisher, Signers}; + +mod coordinator; +use coordinator::Coordinator; + +create_db! { + ProcessorBin { + ExternalKeyForSessionForSigners: (session: Session) -> EncodableG, + } +} + +db_channel! { + ProcessorBin { + KeyToActivate: () -> EncodableG + } +} + +/// The type used for the database. +#[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] +pub type Db = serai_db::ParityDb; +/// The type used for the database. +#[cfg(feature = "rocksdb")] +pub type Db = serai_db::RocksDB; + +/// Initialize the processor. +/// +/// Yields the database. +#[allow(unused_variables, unreachable_code)] +pub fn init() -> Db { + // Override the panic handler with one which will exit the process if any tokio task panics + { + let existing = std::panic::take_hook(); + std::panic::set_hook(Box::new(move |panic| { + existing(panic); + const MSG: &str = "exiting the process due to a task panicking"; + println!("{MSG}"); + log::error!("{MSG}"); + std::process::exit(1); + })); + } + + if std::env::var("RUST_LOG").is_err() { + std::env::set_var("RUST_LOG", serai_env::var("RUST_LOG").unwrap_or_else(|| "info".to_string())); + } + env_logger::init(); + + #[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] + let db = + serai_db::new_parity_db(&serai_env::var("DB_PATH").expect("path to DB wasn't specified")); + #[cfg(feature = "rocksdb")] + let db = serai_db::new_rocksdb(&serai_env::var("DB_PATH").expect("path to DB wasn't specified")); + db +} + +/// The URL for the external network's node.
+pub fn url() -> String { + let login = env::var("NETWORK_RPC_LOGIN").expect("network RPC login wasn't specified"); + let hostname = env::var("NETWORK_RPC_HOSTNAME").expect("network RPC hostname wasn't specified"); + let port = env::var("NETWORK_RPC_PORT").expect("network RPC port wasn't specified"); + "http://".to_string() + &login + "@" + &hostname + ":" + &port +} + +fn key_gen() -> KeyGen { + fn read_key_from_env(label: &'static str) -> Zeroizing { + let key_hex = + Zeroizing::new(env::var(label).unwrap_or_else(|| panic!("{label} wasn't provided"))); + let bytes = Zeroizing::new( + hex::decode(key_hex).unwrap_or_else(|_| panic!("{label} wasn't a valid hex string")), + ); + + let mut repr = ::Repr::default(); + if repr.as_ref().len() != bytes.len() { + panic!("{label} wasn't the correct length"); + } + repr.as_mut().copy_from_slice(bytes.as_slice()); + let res = Zeroizing::new( + Option::from(::from_repr(repr)) + .unwrap_or_else(|| panic!("{label} wasn't a valid scalar")), + ); + repr.as_mut().zeroize(); + res + } + KeyGen::new( + read_key_from_env::<::EmbeddedCurve>("SUBSTRATE_EVRF_KEY"), + read_key_from_env::<::EmbeddedCurve>( + "NETWORK_EVRF_KEY", + ), + ) +} + +async fn first_block_after_time(feed: &S, serai_time: u64) -> u64 { + async fn first_block_after_time_iteration( + feed: &S, + serai_time: u64, + ) -> Result, S::EphemeralError> { + let latest = feed.latest_finalized_block_number().await?; + let latest_time = feed.time_of_block(latest).await?; + if latest_time < serai_time { + tokio::time::sleep(core::time::Duration::from_secs(serai_time - latest_time)).await; + return Ok(None); + } + + // A finalized block has a time greater than or equal to the time we want to start at + // Find the first such block with a binary search + // start_search and end_search are inclusive + let mut start_search = 0; + let mut end_search = latest; + while start_search != end_search { + // This purposely chooses the earlier block in the case two blocks are both in the middle + let to_check = start_search + ((end_search - start_search) / 2); + let block_time = feed.time_of_block(to_check).await?; + match block_time.cmp(&serai_time) { + Ordering::Less => { + start_search = to_check + 1; + assert!(start_search <= end_search); + } + Ordering::Equal | Ordering::Greater => { + // This holds true since we pick the earlier block upon an even search distance + // If it didn't, this would cause an infinite loop + assert!(to_check < end_search); + end_search = to_check; + } + } + } + Ok(Some(start_search)) + } + loop { + match first_block_after_time_iteration(feed, serai_time).await { + Ok(Some(block)) => return block, + Ok(None) => { + log::info!("waiting for block to activate at (a block with timestamp >= {serai_time})"); + } + Err(e) => { + log::error!("couldn't find the first block Serai should scan due to an RPC error: {e:?}"); + } + } + tokio::time::sleep(core::time::Duration::from_secs(5)).await; + } +} + +/// The main loop of a Processor, interacting with the Coordinator.
+pub async fn coordinator_loop< + S: ScannerFeed, + K: KeyGenParams>>, + Sch: Scheduler< + S, + SignableTransaction: SignableTransaction, + >, + P: TransactionPublisher<::Transaction>, +>( + mut db: Db, + feed: S, + publisher: P, +) { + let mut coordinator = Coordinator::new(db.clone()); + + let mut key_gen = key_gen::(); + let mut scanner = Scanner::new::(db.clone(), feed.clone()).await; + let mut signers = + Signers::::new(db.clone(), coordinator.coordinator_send(), publisher); + + loop { + let db_clone = db.clone(); + let mut txn = db.txn(); + let msg = coordinator.next_message(&mut txn).await; + let mut txn = Some(txn); + match msg { + messages::CoordinatorMessage::KeyGen(msg) => { + let txn = txn.as_mut().unwrap(); + let mut new_key = None; + // This is a computationally expensive call yet it happens infrequently + for msg in key_gen.handle(txn, msg) { + if let messages::key_gen::ProcessorMessage::GeneratedKeyPair { session, .. } = &msg { + new_key = Some(*session) + } + coordinator.send_message(messages::ProcessorMessage::KeyGen(msg)).await; + } + + // If we were yielded a key, register it in the signers + if let Some(session) = new_key { + let (substrate_keys, network_keys) = KeyGen::::key_shares(txn, session) + .expect("generated key pair yet couldn't get key shares"); + signers.register_keys(txn, session, substrate_keys, network_keys); + } + } + + // These are cheap calls which are fine to be here in this loop + messages::CoordinatorMessage::Sign(msg) => { + let txn = txn.as_mut().unwrap(); + signers.queue_message(txn, &msg) + } + messages::CoordinatorMessage::Coordinator( + messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { + session, + block_number, + block, + }, + ) => { + let txn = txn.take().unwrap(); + signers.cosign_block(txn, session, block_number, block) + } + messages::CoordinatorMessage::Coordinator( + messages::coordinator::CoordinatorMessage::SignSlashReport { session, report }, + ) => { + let txn = txn.take().unwrap(); + signers.sign_slash_report(txn, session, &report) + } + + messages::CoordinatorMessage::Substrate(msg) => match msg { + messages::substrate::CoordinatorMessage::SetKeys { serai_time, session, key_pair } => { + let txn = txn.as_mut().unwrap(); + let key = + EncodableG(K::decode_key(key_pair.1.as_ref()).expect("invalid key set on serai")); + + // Queue the key to be activated upon the next Batch + KeyToActivate::>::send(txn, &key); + + // Set the external key, as needed by the signers + ExternalKeyForSessionForSigners::>::set(txn, session, &key); + + // This is presumed extremely expensive, potentially blocking for several minutes, yet + // only happens for the very first set of keys + if session == Session(0) { + assert!(scanner.is_none()); + let start_block = first_block_after_time(&feed, serai_time).await; + scanner = + Some(Scanner::initialize::(db_clone, feed.clone(), start_block, key.0).await); + } + } + messages::substrate::CoordinatorMessage::SlashesReported { session } => { + let txn = txn.as_mut().unwrap(); + + // Since this session had its slashes reported, it has finished all its signature + // protocols and has been fully retired. 
We retire it from the signers accordingly + let key = ExternalKeyForSessionForSigners::>::take(txn, session).unwrap().0; + + // This is a cheap call + signers.retire_session(txn, session, &key) + } + messages::substrate::CoordinatorMessage::BlockWithBatchAcknowledgement { + block: _, + batch_id, + in_instruction_succeededs, + burns, + } => { + let mut txn = txn.take().unwrap(); + let scanner = scanner.as_mut().unwrap(); + let key_to_activate = KeyToActivate::>::try_recv(&mut txn).map(|key| key.0); + // This is a cheap call as it internally just queues this to be done later + scanner.acknowledge_batch( + txn, + batch_id, + in_instruction_succeededs, + burns, + key_to_activate, + ) + } + messages::substrate::CoordinatorMessage::BlockWithoutBatchAcknowledgement { + block: _, + burns, + } => { + let txn = txn.take().unwrap(); + let scanner = scanner.as_mut().unwrap(); + // This is a cheap call as it internally just queues this to be done later + scanner.queue_burns(txn, burns) + } + }, + }; + // If the txn wasn't already consumed and committed, commit it + if let Some(txn) = txn { + txn.commit(); + } + } +} diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml index c92e1384..c968e36b 100644 --- a/processor/bitcoin/Cargo.toml +++ b/processor/bitcoin/Cargo.toml @@ -18,8 +18,10 @@ workspace = true [dependencies] async-trait = { version = "0.1", default-features = false } +zeroize = { version = "1", default-features = false, features = ["std"] } rand_core = { version = "0.6", default-features = false } +hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } @@ -51,8 +53,10 @@ utxo-scheduler = { package = "serai-processor-utxo-scheduler-primitives", path = transaction-chaining-scheduler = { package = "serai-processor-transaction-chaining-scheduler", path = "../scheduler/utxo/transaction-chaining" } signers = { package = "serai-processor-signers", path = "../signers" } +bin = { package = "serai-processor-bin", path = "../bin" } + message-queue = { package = "serai-message-queue", path = "../../message-queue" } [features] -parity-db = ["serai-db/parity-db"] -rocksdb = ["serai-db/rocksdb"] +parity-db = ["bin/parity-db"] +rocksdb = ["bin/rocksdb"] diff --git a/processor/bitcoin/src/key_gen.rs b/processor/bitcoin/src/key_gen.rs index 75944364..41544134 100644 --- a/processor/bitcoin/src/key_gen.rs +++ b/processor/bitcoin/src/key_gen.rs @@ -7,22 +7,22 @@ pub(crate) struct KeyGenParams; impl key_gen::KeyGenParams for KeyGenParams { const ID: &'static str = "Bitcoin"; - type ExternalNetworkCurve = Secp256k1; + type ExternalNetworkCiphersuite = Secp256k1; - fn tweak_keys(keys: &mut ThresholdKeys) { + fn tweak_keys(keys: &mut ThresholdKeys) { *keys = bitcoin_serai::wallet::tweak_keys(keys); // Also create a scanner to assert these keys, and all expected paths, are usable scanner(keys.group_key()); } - fn encode_key(key: ::G) -> Vec { + fn encode_key(key: ::G) -> Vec { let key = key.to_bytes(); let key: &[u8] = key.as_ref(); // Skip the parity encoding as we know this key is even key[1 ..].to_vec() } - fn decode_key(key: &[u8]) -> Option<::G> { + fn decode_key(key: &[u8]) -> Option<::G> { x_coord_to_even_point(key) } } diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index 1c07b6cd..09228d44 100644 --- a/processor/bitcoin/src/main.rs +++ 
b/processor/bitcoin/src/main.rs @@ -6,16 +6,9 @@ static ALLOCATOR: zalloc::ZeroizingAlloc = zalloc::ZeroizingAlloc(std::alloc::System); -use core::cmp::Ordering; +use bitcoin_serai::rpc::Rpc as BRpc; -use ciphersuite::Ciphersuite; - -use serai_client::validator_sets::primitives::Session; - -use serai_db::{DbTxn, Db}; -use ::primitives::EncodableG; -use ::key_gen::KeyGenParams as KeyGenParamsTrait; -use scanner::{ScannerFeed, Scanner}; +use ::primitives::task::{Task, ContinuallyRan}; mod primitives; pub(crate) use crate::primitives::*; @@ -34,6 +27,7 @@ use scheduler::Scheduler; // Our custom code for Bitcoin mod db; mod txindex; +use txindex::TxIndexTask; pub(crate) fn hash_bytes(hash: bitcoin_serai::bitcoin::hashes::sha256d::Hash) -> [u8; 32] { use bitcoin_serai::bitcoin::hashes::Hash; @@ -43,204 +37,29 @@ pub(crate) fn hash_bytes(hash: bitcoin_serai::bitcoin::hashes::sha256d::Hash) -> res } -async fn first_block_after_time(feed: &S, serai_time: u64) -> u64 { - async fn first_block_after_time_iteration( - feed: &S, - serai_time: u64, - ) -> Result, S::EphemeralError> { - let latest = feed.latest_finalized_block_number().await?; - let latest_time = feed.time_of_block(latest).await?; - if latest_time < serai_time { - tokio::time::sleep(core::time::Duration::from_secs(serai_time - latest_time)).await; - return Ok(None); - } - - // A finalized block has a time greater than or equal to the time we want to start at - // Find the first such block with a binary search - // start_search and end_search are inclusive - let mut start_search = 0; - let mut end_search = latest; - while start_search != end_search { - // This on purposely chooses the earlier block in the case two blocks are both in the middle - let to_check = start_search + ((end_search - start_search) / 2); - let block_time = feed.time_of_block(to_check).await?; - match block_time.cmp(&serai_time) { - Ordering::Less => { - start_search = to_check + 1; - assert!(start_search <= end_search); - } - Ordering::Equal | Ordering::Greater => { - // This holds true since we pick the earlier block upon an even search distance - // If it didn't, this would cause an infinite loop - assert!(to_check < end_search); - end_search = to_check; - } - } - } - Ok(Some(start_search)) - } - loop { - match first_block_after_time_iteration(feed, serai_time).await { - Ok(Some(block)) => return block, - Ok(None) => { - log::info!("waiting for block to activate at (a block with timestamp >= {serai_time})"); - } - Err(e) => { - log::error!("couldn't find the first block Serai should scan due to an RPC error: {e:?}"); - } - } - tokio::time::sleep(core::time::Duration::from_secs(5)).await; - } -} - -/// Fetch the next message from the Coordinator. -/// -/// This message is guaranteed to have never been handled before, where handling is defined as -/// this `txn` being committed. 
-async fn next_message(_txn: &mut impl DbTxn) -> messages::CoordinatorMessage { - todo!("TODO") -} - -async fn send_message(_msg: messages::ProcessorMessage) { - todo!("TODO") -} - -async fn coordinator_loop( - mut db: D, - feed: Rpc, - mut key_gen: ::key_gen::KeyGen, - mut signers: signers::Signers, Scheduler, Rpc>, - mut scanner: Option>>, -) { - loop { - let db_clone = db.clone(); - let mut txn = db.txn(); - let msg = next_message(&mut txn).await; - let mut txn = Some(txn); - match msg { - messages::CoordinatorMessage::KeyGen(msg) => { - let txn = txn.as_mut().unwrap(); - let mut new_key = None; - // This is a computationally expensive call yet it happens infrequently - for msg in key_gen.handle(txn, msg) { - if let messages::key_gen::ProcessorMessage::GeneratedKeyPair { session, .. } = &msg { - new_key = Some(*session) - } - send_message(messages::ProcessorMessage::KeyGen(msg)).await; - } - - // If we were yielded a key, register it in the signers - if let Some(session) = new_key { - let (substrate_keys, network_keys) = - ::key_gen::KeyGen::::key_shares(txn, session) - .expect("generated key pair yet couldn't get key shares"); - signers.register_keys(txn, session, substrate_keys, network_keys); - } - } - - // These are cheap calls which are fine to be here in this loop - messages::CoordinatorMessage::Sign(msg) => { - let txn = txn.as_mut().unwrap(); - signers.queue_message(txn, &msg) - } - messages::CoordinatorMessage::Coordinator( - messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { - session, - block_number, - block, - }, - ) => { - let txn = txn.take().unwrap(); - signers.cosign_block(txn, session, block_number, block) - } - messages::CoordinatorMessage::Coordinator( - messages::coordinator::CoordinatorMessage::SignSlashReport { session, report }, - ) => { - let txn = txn.take().unwrap(); - signers.sign_slash_report(txn, session, &report) - } - - messages::CoordinatorMessage::Substrate(msg) => match msg { - messages::substrate::CoordinatorMessage::SetKeys { serai_time, session, key_pair } => { - let txn = txn.as_mut().unwrap(); - let key = EncodableG( - KeyGenParams::decode_key(key_pair.1.as_ref()).expect("invalid key set on serai"), - ); - - // Queue the key to be activated upon the next Batch - db::KeyToActivate::< - <::ExternalNetworkCurve as Ciphersuite>::G, - >::send(txn, &key); - - // Set the external key, as needed by the signers - db::ExternalKeyForSessionForSigners::< - <::ExternalNetworkCurve as Ciphersuite>::G, - >::set(txn, session, &key); - - // This is presumed extremely expensive, potentially blocking for several minutes, yet - // only happens for the very first set of keys - if session == Session(0) { - assert!(scanner.is_none()); - let start_block = first_block_after_time(&feed, serai_time).await; - scanner = - Some(Scanner::new::>(db_clone, feed.clone(), start_block, key.0).await); - } - } - messages::substrate::CoordinatorMessage::SlashesReported { session } => { - let txn = txn.as_mut().unwrap(); - - // Since this session had its slashes reported, it has finished all its signature - // protocols and has been fully retired. 
We retire it from the signers accordingly - let key = db::ExternalKeyForSessionForSigners::< - <::ExternalNetworkCurve as Ciphersuite>::G, - >::take(txn, session) - .unwrap() - .0; - - // This is a cheap call - signers.retire_session(txn, session, &key) - } - messages::substrate::CoordinatorMessage::BlockWithBatchAcknowledgement { - block: _, - batch_id, - in_instruction_succeededs, - burns, - } => { - let mut txn = txn.take().unwrap(); - let scanner = scanner.as_mut().unwrap(); - let key_to_activate = db::KeyToActivate::< - <::ExternalNetworkCurve as Ciphersuite>::G, - >::try_recv(&mut txn) - .map(|key| key.0); - // This is a cheap call as it internally just queues this to be done later - scanner.acknowledge_batch( - txn, - batch_id, - in_instruction_succeededs, - burns, - key_to_activate, - ) - } - messages::substrate::CoordinatorMessage::BlockWithoutBatchAcknowledgement { - block: _, - burns, - } => { - let txn = txn.take().unwrap(); - let scanner = scanner.as_mut().unwrap(); - // This is a cheap call as it internally just queues this to be done later - scanner.queue_burns(txn, burns) - } - }, - }; - // If the txn wasn't already consumed and committed, commit it - if let Some(txn) = txn { - txn.commit(); - } - } -} - #[tokio::main] -async fn main() {} +async fn main() { + let db = bin::init(); + let feed = Rpc { + db: db.clone(), + rpc: loop { + match BRpc::new(bin::url()).await { + Ok(rpc) => break rpc, + Err(e) => { + log::error!("couldn't connect to the Bitcoin node: {e:?}"); + tokio::time::sleep(core::time::Duration::from_secs(5)).await; + } + } + }, + }; + + let (index_task, index_handle) = Task::new(); + tokio::spawn(TxIndexTask(feed.clone()).continually_run(index_task, vec![])); + core::mem::forget(index_handle); + + bin::coordinator_loop::<_, KeyGenParams, Scheduler<_>, Rpc>(db, feed.clone(), feed) + .await; +} /* use bitcoin_serai::{ @@ -278,9 +97,6 @@ use serai_client::{ */ /* -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -pub(crate) struct Fee(u64); - #[async_trait] impl TransactionTrait for Transaction { #[cfg(test)] diff --git a/processor/bitcoin/src/txindex.rs b/processor/bitcoin/src/txindex.rs index 2d3f1cd6..4ed38973 100644 --- a/processor/bitcoin/src/txindex.rs +++ b/processor/bitcoin/src/txindex.rs @@ -35,7 +35,7 @@ pub(crate) fn script_pubkey_for_on_chain_output( ) } -pub(crate) struct TxIndexTask(Rpc); +pub(crate) struct TxIndexTask(pub(crate) Rpc); #[async_trait::async_trait] impl ContinuallyRan for TxIndexTask { diff --git a/processor/key-gen/src/db.rs b/processor/key-gen/src/db.rs index 676fd2aa..149fe1a2 100644 --- a/processor/key-gen/src/db.rs +++ b/processor/key-gen/src/db.rs @@ -19,7 +19,7 @@ pub(crate) struct Params { pub(crate) substrate_evrf_public_keys: Vec<<::EmbeddedCurve as Ciphersuite>::G>, pub(crate) network_evrf_public_keys: - Vec<<::EmbeddedCurve as Ciphersuite>::G>, + Vec<<::EmbeddedCurve as Ciphersuite>::G>, } #[derive(BorshSerialize, BorshDeserialize)] @@ -93,9 +93,9 @@ impl KeyGenDb
<P> { .network_evrf_public_keys .into_iter() .map(|key| { - <::EmbeddedCurve as Ciphersuite>::read_G::<&[u8]>( - &mut key.as_ref(), - ) + <::EmbeddedCurve as Ciphersuite>::read_G::< + &[u8], + >(&mut key.as_ref()) .unwrap() }) .collect(), @@ -118,7 +118,7 @@ impl KeyGenDb
<P> { txn: &mut impl DbTxn, session: Session, substrate_keys: &[ThresholdKeys], - network_keys: &[ThresholdKeys], + network_keys: &[ThresholdKeys], ) { assert_eq!(substrate_keys.len(), network_keys.len()); @@ -134,7 +134,8 @@ impl KeyGenDb
<P> { pub(crate) fn key_shares( getter: &impl Get, session: Session, - ) -> Option<(Vec>, Vec>)> { + ) -> Option<(Vec>, Vec>)> + { let keys = _db::KeyShares::get(getter, &session)?; let mut keys: &[u8] = keys.as_ref(); diff --git a/processor/key-gen/src/lib.rs index cb23a740..fd847cc5 100644 --- a/processor/key-gen/src/lib.rs +++ b/processor/key-gen/src/lib.rs @@ -34,27 +34,29 @@ pub trait KeyGenParams { const ID: &'static str; /// The curve used for the external network. - type ExternalNetworkCurve: EvrfCurve< + type ExternalNetworkCiphersuite: EvrfCurve< EmbeddedCurve: Ciphersuite< - G: ec_divisors::DivisorCurve::F>, + G: ec_divisors::DivisorCurve< + FieldElement = ::F, + >, >, >; /// Tweaks keys as necessary/beneficial. - fn tweak_keys(keys: &mut ThresholdKeys); + fn tweak_keys(keys: &mut ThresholdKeys); /// Encode keys as optimal. /// /// A default implementation is provided which calls the traditional `to_bytes`. - fn encode_key(key: ::G) -> Vec { + fn encode_key(key: ::G) -> Vec { key.to_bytes().as_ref().to_vec() } /// Decode keys from their optimal encoding. /// /// A default implementation is provided which calls the traditional `from_bytes`. - fn decode_key(mut key: &[u8]) -> Option<::G> { - let res = ::read_G(&mut key).ok()?; + fn decode_key(mut key: &[u8]) -> Option<::G> { + let res = ::read_G(&mut key).ok()?; if !key.is_empty() { None?; } @@ -143,7 +145,7 @@ pub struct KeyGen { substrate_evrf_private_key: Zeroizing<<::EmbeddedCurve as Ciphersuite>::F>, network_evrf_private_key: - Zeroizing<<::EmbeddedCurve as Ciphersuite>::F>, + Zeroizing<<::EmbeddedCurve as Ciphersuite>::F>, } impl KeyGen
<P>
{ @@ -154,7 +156,7 @@ impl KeyGen
<P>
{ <::EmbeddedCurve as Ciphersuite>::F, >, network_evrf_private_key: Zeroizing< - <::EmbeddedCurve as Ciphersuite>::F, + <::EmbeddedCurve as Ciphersuite>::F, >, ) -> KeyGen
<P>
{ KeyGen { substrate_evrf_private_key, network_evrf_private_key } @@ -165,7 +167,8 @@ impl KeyGen
<P>
{ pub fn key_shares( getter: &impl Get, session: Session, - ) -> Option<(Vec>, Vec>)> { + ) -> Option<(Vec>, Vec>)> + { // This is safe, despite not having a txn, since it's a static value // It doesn't change over time/in relation to other operations // It is solely set or unset @@ -198,7 +201,7 @@ impl KeyGen
<P>
{ let network_evrf_public_keys = evrf_public_keys.into_iter().map(|(_, key)| key).collect::>(); let (network_evrf_public_keys, additional_faulty) = - coerce_keys::(&network_evrf_public_keys); + coerce_keys::(&network_evrf_public_keys); faulty.extend(additional_faulty); // Participate for both Substrate and the network @@ -228,7 +231,7 @@ impl KeyGen
<P>
{ &self.substrate_evrf_private_key, &mut participation, ); - participate::( + participate::( context::
<P>
(session, NETWORK_KEY_CONTEXT), threshold, &network_evrf_public_keys, @@ -283,7 +286,7 @@ impl KeyGen
<P>
{ }; let len_at_network_participation_start_pos = participation.len(); let Ok(network_participation) = - Participation::::read(&mut participation, n) + Participation::::read(&mut participation, n) else { return blame; }; @@ -317,7 +320,7 @@ impl KeyGen
<P>
{ } } - match EvrfDkg::::verify( + match EvrfDkg::::verify( &mut OsRng, generators(), context::
<P>
(session, NETWORK_KEY_CONTEXT), @@ -490,7 +493,7 @@ impl KeyGen
<P>
{ Err(blames) => return blames, }; - let network_dkg = match verify_dkg::( + let network_dkg = match verify_dkg::( txn, session, false, diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 107616cc..49ab1785 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -70,6 +70,8 @@ impl OutputWithInInstruction { create_db!( ScannerGlobal { + StartBlock: () -> u64, + QueuedKey: (key: K) -> (), ActiveKeys: () -> Vec>, @@ -106,8 +108,11 @@ create_db!( pub(crate) struct ScannerGlobalDb(PhantomData); impl ScannerGlobalDb { - pub(crate) fn has_any_key_been_queued(getter: &impl Get) -> bool { - ActiveKeys::>>::get(getter).is_some() + pub(crate) fn start_block(getter: &impl Get) -> Option { + StartBlock::get(getter) + } + pub(crate) fn set_start_block(txn: &mut impl DbTxn, block: u64) { + StartBlock::set(txn, &block) } /// Queue a key. diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 6ed16d74..ebd783bf 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -344,17 +344,10 @@ impl Scanner { /// Create a new scanner. /// /// This will begin its execution, spawning several asynchronous tasks. - pub async fn new>( - mut db: impl Db, - feed: S, - start_block: u64, - start_key: KeyFor, - ) -> Self { - if !ScannerGlobalDb::::has_any_key_been_queued(&db) { - let mut txn = db.txn(); - ScannerGlobalDb::::queue_key(&mut txn, start_block, start_key); - txn.commit(); - } + /// + /// This will return None if the Scanner was never initialized. + pub async fn new>(db: impl Db, feed: S) -> Option { + let start_block = ScannerGlobalDb::::start_block(&db)?; let index_task = index::IndexTask::new(db.clone(), feed.clone(), start_block).await; let scan_task = scan::ScanTask::new(db.clone(), feed.clone(), start_block); @@ -381,7 +374,28 @@ impl Scanner { // window its allowed to scan tokio::spawn(eventuality_task.continually_run(eventuality_task_def, vec![scan_handle])); - Self { substrate_handle, _S: PhantomData } + Some(Self { substrate_handle, _S: PhantomData }) + } + + /// Initialize the scanner. + /// + /// This will begin its execution, spawning several asynchronous tasks. + /// + /// This passes through to `Scanner::new` if prior called. + pub async fn initialize>( + mut db: impl Db, + feed: S, + start_block: u64, + start_key: KeyFor, + ) -> Self { + if ScannerGlobalDb::::start_block(&db).is_none() { + let mut txn = db.txn(); + ScannerGlobalDb::::set_start_block(&mut txn, start_block); + ScannerGlobalDb::::queue_key(&mut txn, start_block, start_key); + txn.commit(); + } + + Self::new::(db, feed).await.unwrap() } /// Acknowledge a Batch having been published on Serai. diff --git a/processor/signers/src/coordinator/mod.rs b/processor/signers/src/coordinator/mod.rs index a3163922..e749f841 100644 --- a/processor/signers/src/coordinator/mod.rs +++ b/processor/signers/src/coordinator/mod.rs @@ -114,6 +114,7 @@ impl ContinuallyRan for CoordinatorTask { self .coordinator .publish_slash_report_signature( + session, <_>::decode(&mut slash_report_signature.as_slice()).unwrap(), ) .await diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index 881205f8..e06dd07f 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -71,6 +71,7 @@ pub trait Coordinator: 'static + Send + Sync { /// Publish a slash report's signature. 
async fn publish_slash_report_signature( &mut self, + session: Session, signature: Signature, ) -> Result<(), Self::EphemeralError>; } diff --git a/processor/src/coordinator.rs b/processor/src/coordinator.rs deleted file mode 100644 index 26786e30..00000000 --- a/processor/src/coordinator.rs +++ /dev/null @@ -1,43 +0,0 @@ -use messages::{ProcessorMessage, CoordinatorMessage}; - -use message_queue::{Service, Metadata, client::MessageQueue}; - -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct Message { - pub id: u64, - pub msg: CoordinatorMessage, -} - -#[async_trait::async_trait] -pub trait Coordinator { - async fn send(&mut self, msg: impl Send + Into); - async fn recv(&mut self) -> Message; - async fn ack(&mut self, msg: Message); -} - -#[async_trait::async_trait] -impl Coordinator for MessageQueue { - async fn send(&mut self, msg: impl Send + Into) { - let msg: ProcessorMessage = msg.into(); - let metadata = Metadata { from: self.service, to: Service::Coordinator, intent: msg.intent() }; - let msg = borsh::to_vec(&msg).unwrap(); - - self.queue(metadata, msg).await; - } - - async fn recv(&mut self) -> Message { - let msg = self.next(Service::Coordinator).await; - - let id = msg.id; - - // Deserialize it into a CoordinatorMessage - let msg: CoordinatorMessage = - borsh::from_slice(&msg.msg).expect("message wasn't a borsh-encoded CoordinatorMessage"); - - return Message { id, msg }; - } - - async fn ack(&mut self, msg: Message) { - MessageQueue::ack(self, Service::Coordinator, msg.id).await - } -} diff --git a/processor/src/db.rs b/processor/src/db.rs deleted file mode 100644 index ffd7c43a..00000000 --- a/processor/src/db.rs +++ /dev/null @@ -1,43 +0,0 @@ -use std::io::Read; - -use scale::{Encode, Decode}; -use serai_client::validator_sets::primitives::{Session, KeyPair}; - -pub use serai_db::*; - -use crate::networks::{Block, Network}; - -create_db!( - MainDb { - HandledMessageDb: (id: u64) -> (), - PendingActivationsDb: () -> Vec - } -); - -impl PendingActivationsDb { - pub fn pending_activation( - getter: &impl Get, - ) -> Option<(>::Id, Session, KeyPair)> { - if let Some(bytes) = Self::get(getter) { - if !bytes.is_empty() { - let mut slice = bytes.as_slice(); - let (session, key_pair) = <(Session, KeyPair)>::decode(&mut slice).unwrap(); - let mut block_before_queue_block = >::Id::default(); - slice.read_exact(block_before_queue_block.as_mut()).unwrap(); - assert!(slice.is_empty()); - return Some((block_before_queue_block, session, key_pair)); - } - } - None - } - pub fn set_pending_activation( - txn: &mut impl DbTxn, - block_before_queue_block: &>::Id, - session: Session, - key_pair: KeyPair, - ) { - let mut buf = (session, key_pair).encode(); - buf.extend(block_before_queue_block.as_ref()); - Self::set(txn, &buf); - } -} diff --git a/processor/src/main.rs b/processor/src/main.rs index 65e74f55..b4a5053a 100644 --- a/processor/src/main.rs +++ b/processor/src/main.rs @@ -60,263 +60,9 @@ async fn handle_coordinator_msg( } } -async fn boot( - raw_db: &mut D, - network: &N, - coordinator: &mut Co, -) -> (D, TributaryMutable, SubstrateMutable) { - fn read_key_from_env(label: &'static str) -> Zeroizing { - let key_hex = - Zeroizing::new(env::var(label).unwrap_or_else(|| panic!("{label} wasn't provided"))); - let bytes = Zeroizing::new( - hex::decode(key_hex).unwrap_or_else(|_| panic!("{label} wasn't a valid hex string")), - ); - - let mut repr = ::Repr::default(); - if repr.as_ref().len() != bytes.len() { - panic!("{label} wasn't the correct length"); - } - 
repr.as_mut().copy_from_slice(bytes.as_slice()); - let res = Zeroizing::new( - Option::from(::from_repr(repr)) - .unwrap_or_else(|| panic!("{label} wasn't a valid scalar")), - ); - repr.as_mut().zeroize(); - res - } - - let key_gen = KeyGen::::new( - raw_db.clone(), - read_key_from_env::<::EmbeddedCurve>("SUBSTRATE_EVRF_KEY"), - read_key_from_env::<::EmbeddedCurve>("NETWORK_EVRF_KEY"), - ); - - let (multisig_manager, current_keys, actively_signing) = - MultisigManager::new(raw_db, network).await; - - let mut batch_signer = None; - let mut signers = HashMap::new(); - - for (i, key) in current_keys.iter().enumerate() { - let Some((session, (substrate_keys, network_keys))) = key_gen.keys(key) else { continue }; - let network_key = network_keys[0].group_key(); - - // If this is the oldest key, load the BatchSigner for it as the active BatchSigner - // The new key only takes responsibility once the old key is fully deprecated - // - // We don't have to load any state for this since the Scanner will re-fire any events - // necessary, only no longer scanning old blocks once Substrate acks them - if i == 0 { - batch_signer = Some(BatchSigner::new(N::NETWORK, session, substrate_keys)); - } - - // The Scanner re-fires events as needed for batch_signer yet not signer - // This is due to the transactions which we start signing from due to a block not being - // guaranteed to be signed before we stop scanning the block on reboot - // We could simplify the Signer flow by delaying when it acks a block, yet that'd: - // 1) Increase the startup time - // 2) Cause re-emission of Batch events, which we'd need to check the safety of - // (TODO: Do anyways?) - // 3) Violate the attempt counter (TODO: Is this already being violated?) - let mut signer = Signer::new(network.clone(), session, network_keys); - - // Sign any TXs being actively signed - for (plan, tx, eventuality) in &actively_signing { - if plan.key == network_key { - let mut txn = raw_db.txn(); - if let Some(msg) = - signer.sign_transaction(&mut txn, plan.id(), tx.clone(), eventuality).await - { - coordinator.send(msg).await; - } - // This should only have re-writes of existing data - drop(txn); - } - } - - signers.insert(session, signer); - } - - // Spawn a task to rebroadcast signed TXs yet to be mined into a finalized block - // This hedges against being dropped due to full mempools, temporarily too low of a fee... 
- tokio::spawn(Signer::::rebroadcast_task(raw_db.clone(), network.clone())); - - ( - raw_db.clone(), - TributaryMutable { key_gen, batch_signer, cosigner: None, slash_report_signer: None, signers }, - multisig_manager, - ) -} - -#[allow(clippy::await_holding_lock)] // Needed for txn, unfortunately can't be down-scoped -async fn run(mut raw_db: D, network: N, mut coordinator: Co) { - // We currently expect a contextless bidirectional mapping between these two values - // (which is that any value of A can be interpreted as B and vice versa) - // While we can write a contextual mapping, we have yet to do so - // This check ensures no network which doesn't have a bidirectional mapping is defined - assert_eq!(>::Id::default().as_ref().len(), BlockHash([0u8; 32]).0.len()); - - let (main_db, mut tributary_mutable, mut substrate_mutable) = - boot(&mut raw_db, &network, &mut coordinator).await; - - // We can't load this from the DB as we can't guarantee atomic increments with the ack function - // TODO: Load with a slight tolerance - let mut last_coordinator_msg = None; - - loop { - let mut txn = raw_db.txn(); - - log::trace!("new db txn in run"); - - let mut outer_msg = None; - - tokio::select! { - // This blocks the entire processor until it finishes handling this message - // KeyGen specifically may take a notable amount of processing time - // While that shouldn't be an issue in practice, as after processing an attempt it'll handle - // the other messages in the queue, it may be beneficial to parallelize these - // They could potentially be parallelized by type (KeyGen, Sign, Substrate) without issue - msg = coordinator.recv() => { - if let Some(last_coordinator_msg) = last_coordinator_msg { - assert_eq!(msg.id, last_coordinator_msg + 1); - } - last_coordinator_msg = Some(msg.id); - - // Only handle this if we haven't already - if HandledMessageDb::get(&main_db, msg.id).is_none() { - HandledMessageDb::set(&mut txn, msg.id, &()); - - // This is isolated to better think about how its ordered, or rather, about how the other - // cases aren't ordered - // - // While the coordinator messages are ordered, they're not deterministically ordered - // Tributary-caused messages are deterministically ordered, and Substrate-caused messages - // are deterministically-ordered, yet they're both shoved into a singular queue - // The order at which they're shoved in together isn't deterministic - // - // This is safe so long as Tributary and Substrate messages don't both expect mutable - // references over the same data - handle_coordinator_msg( - &mut txn, - &network, - &mut coordinator, - &mut tributary_mutable, - &mut substrate_mutable, - &msg, - ).await; - } - - outer_msg = Some(msg); - }, - - scanner_event = substrate_mutable.next_scanner_event() => { - let msg = substrate_mutable.scanner_event_to_multisig_event( - &mut txn, - &network, - scanner_event - ).await; - - match msg { - MultisigEvent::Batches(retired_key_new_key, batches) => { - // Start signing this batch - for batch in batches { - info!("created batch {} ({} instructions)", batch.id, batch.instructions.len()); - - // The coordinator expects BatchPreprocess to immediately follow Batch - coordinator.send( - messages::substrate::ProcessorMessage::Batch { batch: batch.clone() } - ).await; - - if let Some(batch_signer) = tributary_mutable.batch_signer.as_mut() { - if let Some(msg) = batch_signer.sign(&mut txn, batch) { - coordinator.send(msg).await; - } - } - } - - if let Some((retired_key, new_key)) = retired_key_new_key { - // Safe to mutate since 
all signing operations are done and no more will be added - if let Some(retired_session) = SessionDb::get(&txn, retired_key.to_bytes().as_ref()) { - tributary_mutable.signers.remove(&retired_session); - } - tributary_mutable.batch_signer.take(); - let keys = tributary_mutable.key_gen.keys(&new_key); - if let Some((session, (substrate_keys, _))) = keys { - tributary_mutable.batch_signer = - Some(BatchSigner::new(N::NETWORK, session, substrate_keys)); - } - } - }, - MultisigEvent::Completed(key, id, tx) => { - if let Some(session) = SessionDb::get(&txn, &key) { - let signer = tributary_mutable.signers.get_mut(&session).unwrap(); - if let Some(msg) = signer.completed(&mut txn, id, &tx) { - coordinator.send(msg).await; - } - } - } - } - }, - } - - txn.commit(); - if let Some(msg) = outer_msg { - coordinator.ack(msg).await; - } - } -} - #[tokio::main] async fn main() { - // Override the panic handler with one which will panic if any tokio task panics - { - let existing = std::panic::take_hook(); - std::panic::set_hook(Box::new(move |panic| { - existing(panic); - const MSG: &str = "exiting the process due to a task panicking"; - println!("{MSG}"); - log::error!("{MSG}"); - std::process::exit(1); - })); - } - - if std::env::var("RUST_LOG").is_err() { - std::env::set_var("RUST_LOG", serai_env::var("RUST_LOG").unwrap_or_else(|| "info".to_string())); - } - env_logger::init(); - - #[allow(unused_variables, unreachable_code)] - let db = { - #[cfg(all(feature = "parity-db", feature = "rocksdb"))] - panic!("built with parity-db and rocksdb"); - #[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] - let db = - serai_db::new_parity_db(&serai_env::var("DB_PATH").expect("path to DB wasn't specified")); - #[cfg(feature = "rocksdb")] - let db = - serai_db::new_rocksdb(&serai_env::var("DB_PATH").expect("path to DB wasn't specified")); - db - }; - - // Network configuration - let url = { - let login = env::var("NETWORK_RPC_LOGIN").expect("network RPC login wasn't specified"); - let hostname = env::var("NETWORK_RPC_HOSTNAME").expect("network RPC hostname wasn't specified"); - let port = env::var("NETWORK_RPC_PORT").expect("network port domain wasn't specified"); - "http://".to_string() + &login + "@" + &hostname + ":" + &port - }; - let network_id = match env::var("NETWORK").expect("network wasn't specified").as_str() { - "bitcoin" => NetworkId::Bitcoin, - "ethereum" => NetworkId::Ethereum, - "monero" => NetworkId::Monero, - _ => panic!("unrecognized network"), - }; - - let coordinator = MessageQueue::from_env(Service::Processor(network_id)); - match network_id { - #[cfg(feature = "bitcoin")] - NetworkId::Bitcoin => run(db, Bitcoin::new(url).await, coordinator).await, #[cfg(feature = "ethereum")] NetworkId::Ethereum => { let relayer_hostname = env::var("ETHEREUM_RELAYER_HOSTNAME") @@ -327,8 +73,5 @@ async fn main() { let relayer_url = relayer_hostname + ":" + &relayer_port; run(db.clone(), Ethereum::new(db, url, relayer_url).await, coordinator).await } - #[cfg(feature = "monero")] - NetworkId::Monero => run(db, Monero::new(url).await, coordinator).await, - _ => panic!("spawning a processor for an unsupported network"), } }

From 0d4c8cf0322d0fa27f9f32a22d361fd797eb9fa8 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Wed, 11 Sep 2024 19:29:56 -0400
Subject: [PATCH 112/368] Use a local DB channel for sending to the message-queue

The provided message-queue `queue` function runs until it succeeds.
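As a rough illustration of the send path this patch introduces, here is a minimal sketch. The in-memory queue below is a stand-in for serai-db's transactional DB and its `db_channel!`-generated channel, and the names are hypothetical rather than the crate's actual API:

use std::{
  collections::VecDeque,
  sync::{Arc, Mutex},
};

// Stand-in for a DB-backed channel: a send is just a local write, so it can't block
#[derive(Clone, Default)]
struct SentMessages(Arc<Mutex<VecDeque<Vec<u8>>>>);

impl SentMessages {
  // Analogous to SentCoordinatorMessages::send, performed within a committed txn
  fn send(&self, msg: Vec<u8>) {
    self.0.lock().unwrap().push_back(msg);
  }
  // Analogous to SentCoordinatorMessages::try_recv
  fn try_recv(&self) -> Option<Vec<u8>> {
    self.0.lock().unwrap().pop_front()
  }
}

#[tokio::main]
async fn main() {
  let queue = SentMessages::default();
  let (wake_send, mut wake_recv) = tokio::sync::mpsc::unbounded_channel::<()>();

  // The drain task, the only code which actually awaits the message-queue; it may
  // retry for as long as it needs without stalling whoever queued the message
  tokio::spawn({
    let queue = queue.clone();
    async move {
      loop {
        match queue.try_recv() {
          // The real code calls `message_queue.queue(metadata, msg).await` here, only
          // committing the removal from the DB once the message-queue accepts the
          // message (this sketch pops eagerly for brevity)
          Some(msg) => println!("published {} bytes", msg.len()),
          // Nothing queued, so sleep until woken (or until a timeout elapses)
          None => {
            let _ =
              tokio::time::timeout(core::time::Duration::from_secs(60), wake_recv.recv()).await;
          }
        }
      }
    }
  });

  // Sending is now a local write plus a wake-up
  queue.send(b"example message".to_vec());
  let _ = wake_send.send(());
  tokio::time::sleep(core::time::Duration::from_millis(50)).await;
}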
This means sending to the message-queue will no longer potentially block for arbitrary amounts of time as sending messages is just writing them to a DB. --- processor/bin/src/coordinator.rs | 154 ++++++++++++++++++------------- processor/bin/src/lib.rs | 4 +- processor/bitcoin/src/db.rs | 19 +--- processor/bitcoin/src/main.rs | 3 +- 4 files changed, 96 insertions(+), 84 deletions(-) diff --git a/processor/bin/src/coordinator.rs b/processor/bin/src/coordinator.rs index d9d8d112..12442c3d 100644 --- a/processor/bin/src/coordinator.rs +++ b/processor/bin/src/coordinator.rs @@ -1,4 +1,4 @@ -use std::sync::Arc; +use std::sync::{LazyLock, Arc, Mutex}; use tokio::sync::mpsc; @@ -9,8 +9,8 @@ use serai_client::{ in_instructions::primitives::{Batch, SignedBatch}, }; -use serai_env as env; use serai_db::{Get, DbTxn, Db, create_db, db_channel}; +use serai_env as env; use message_queue::{Service, Metadata, client::MessageQueue}; create_db! { @@ -21,27 +21,47 @@ db_channel! { ProcessorBinCoordinator { - CoordinatorMessages: () -> Vec<u8> + ReceivedCoordinatorMessages: () -> Vec<u8>, } } -async fn send(service: Service, queue: &MessageQueue, msg: messages::ProcessorMessage) { - let metadata = Metadata { from: service, to: Service::Coordinator, intent: msg.intent() }; - let msg = borsh::to_vec(&msg).unwrap(); - queue.queue(metadata, msg).await; +// A lock to access SentCoordinatorMessages::send +static SEND_LOCK: LazyLock<Mutex<()>> = LazyLock::new(|| Mutex::new(())); + +db_channel! { + ProcessorBinCoordinator { + SentCoordinatorMessages: () -> Vec<u8>, + } +} + +#[derive(Clone)] +pub(crate) struct CoordinatorSend { + db: crate::Db, + sent_message: mpsc::UnboundedSender<()>, +} + +impl CoordinatorSend { + fn send(&mut self, msg: &messages::ProcessorMessage) { + let _lock = SEND_LOCK.lock().unwrap(); + let mut txn = self.db.txn(); + SentCoordinatorMessages::send(&mut txn, &borsh::to_vec(msg).unwrap()); + txn.commit(); + self + .sent_message + .send(()) + .expect("failed to tell the Coordinator tasks there's a new message to send"); + } } pub(crate) struct Coordinator { - new_message: mpsc::UnboundedReceiver<()>, - service: Service, - message_queue: Arc<MessageQueue>, + received_message: mpsc::UnboundedReceiver<()>, + send: CoordinatorSend, } -pub(crate) struct CoordinatorSend(Service, Arc<MessageQueue>); - impl Coordinator { - pub(crate) fn new(mut db: crate::Db) -> Self { - let (new_message_send, new_message_recv) = mpsc::unbounded_channel(); + pub(crate) fn new(db: crate::Db) -> Self { + let (received_message_send, received_message_recv) = mpsc::unbounded_channel(); + let (sent_message_send, mut sent_message_recv) = mpsc::unbounded_channel(); let network_id = match env::var("NETWORK").expect("network wasn't specified").as_str() { "bitcoin" => NetworkId::Bitcoin, @@ -55,6 +75,7 @@ impl Coordinator { // Spawn a task to move messages from the message-queue to our database so we can achieve // atomicity.
This is the only place we read/ack messages from tokio::spawn({ + let mut db = db.clone(); let message_queue = message_queue.clone(); async move { loop { @@ -70,7 +91,7 @@ impl Coordinator { assert!((saved_messages == prior_msg) || (saved_messages == Some(msg.id))); if saved_messages < Some(msg.id) { let mut txn = db.txn(); - CoordinatorMessages::send(&mut txn, &msg.msg); + ReceivedCoordinatorMessages::send(&mut txn, &msg.msg); SavedMessages::set(&mut txn, &msg.id); txn.commit(); } @@ -78,16 +99,45 @@ impl Coordinator { message_queue.ack(Service::Coordinator, msg.id).await; // Fire that there's a new message - new_message_send.send(()).expect("failed to tell the Coordinator there's a new message"); + received_message_send + .send(()) + .expect("failed to tell the Coordinator there's a new message"); } } }); - Coordinator { new_message: new_message_recv, service, message_queue } + // Spawn a task to send messages to the message-queue + tokio::spawn({ + let mut db = db.clone(); + async move { + loop { + let mut txn = db.txn(); + match SentCoordinatorMessages::try_recv(&mut txn) { + Some(msg) => { + let metadata = Metadata { + from: service, + to: Service::Coordinator, + intent: borsh::from_slice::(&msg).unwrap().intent(), + }; + message_queue.queue(metadata, msg).await; + txn.commit(); + } + None => { + let _ = + tokio::time::timeout(core::time::Duration::from_secs(60), sent_message_recv.recv()) + .await; + } + } + } + } + }); + + let send = CoordinatorSend { db, sent_message: sent_message_send }; + Coordinator { received_message: received_message_recv, send } } pub(crate) fn coordinator_send(&self) -> CoordinatorSend { - CoordinatorSend(self.service, self.message_queue.clone()) + self.send.clone() } /// Fetch the next message from the Coordinator. @@ -99,23 +149,22 @@ impl Coordinator { txn: &mut impl DbTxn, ) -> messages::CoordinatorMessage { loop { - match CoordinatorMessages::try_recv(txn) { + match ReceivedCoordinatorMessages::try_recv(txn) { Some(msg) => { return borsh::from_slice(&msg) .expect("message wasn't a borsh-encoded CoordinatorMessage") } None => { let _ = - tokio::time::timeout(core::time::Duration::from_secs(60), self.new_message.recv()) + tokio::time::timeout(core::time::Duration::from_secs(60), self.received_message.recv()) .await; } } } } - #[allow(clippy::unused_async)] - pub(crate) async fn send_message(&mut self, msg: messages::ProcessorMessage) { - send(self.service, &self.message_queue, msg).await + pub(crate) fn send_message(&mut self, msg: &messages::ProcessorMessage) { + self.send.send(msg); } } @@ -127,8 +176,7 @@ impl signers::Coordinator for CoordinatorSend { &mut self, msg: messages::sign::ProcessorMessage, ) -> Result<(), Self::EphemeralError> { - // TODO: Use a fallible send for these methods - send(self.0, &self.1, messages::ProcessorMessage::Sign(msg)).await; + self.send(&messages::ProcessorMessage::Sign(msg)); Ok(()) } @@ -138,40 +186,27 @@ impl signers::Coordinator for CoordinatorSend { block: [u8; 32], signature: Signature, ) -> Result<(), Self::EphemeralError> { - send( - self.0, - &self.1, - messages::ProcessorMessage::Coordinator( - messages::coordinator::ProcessorMessage::CosignedBlock { - block_number, - block, - signature: signature.encode(), - }, - ), - ) - .await; + self.send(&messages::ProcessorMessage::Coordinator( + messages::coordinator::ProcessorMessage::CosignedBlock { + block_number, + block, + signature: signature.encode(), + }, + )); Ok(()) } async fn publish_batch(&mut self, batch: Batch) -> Result<(), Self::EphemeralError> { - send( 
- self.0, - &self.1, - messages::ProcessorMessage::Substrate(messages::substrate::ProcessorMessage::Batch { batch }), - ) - .await; + self.send(&messages::ProcessorMessage::Substrate( + messages::substrate::ProcessorMessage::Batch { batch }, + )); Ok(()) } async fn publish_signed_batch(&mut self, batch: SignedBatch) -> Result<(), Self::EphemeralError> { - send( - self.0, - &self.1, - messages::ProcessorMessage::Coordinator( - messages::coordinator::ProcessorMessage::SignedBatch { batch }, - ), - ) - .await; + self.send(&messages::ProcessorMessage::Coordinator( + messages::coordinator::ProcessorMessage::SignedBatch { batch }, + )); Ok(()) } @@ -180,17 +215,12 @@ impl signers::Coordinator for CoordinatorSend { session: Session, signature: Signature, ) -> Result<(), Self::EphemeralError> { - send( - self.0, - &self.1, - messages::ProcessorMessage::Coordinator( - messages::coordinator::ProcessorMessage::SignedSlashReport { - session, - signature: signature.encode(), - }, - ), - ) - .await; + self.send(&messages::ProcessorMessage::Coordinator( + messages::coordinator::ProcessorMessage::SignedSlashReport { + session, + signature: signature.encode(), + }, + )); Ok(()) } } diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs index 15873873..67ea6150 100644 --- a/processor/bin/src/lib.rs +++ b/processor/bin/src/lib.rs @@ -158,7 +158,7 @@ async fn first_block_after_time(feed: &S, serai_time: u64) -> u6 } /// The main loop of a Processor, interacting with the Coordinator. -pub async fn coordinator_loop< +pub async fn main_loop< S: ScannerFeed, K: KeyGenParams>>, Sch: Scheduler< @@ -192,7 +192,7 @@ pub async fn coordinator_loop< if let messages::key_gen::ProcessorMessage::GeneratedKeyPair { session, .. } = &msg { new_key = Some(*session) } - coordinator.send_message(messages::ProcessorMessage::KeyGen(msg)).await; + coordinator.send_message(&messages::ProcessorMessage::KeyGen(msg)); } // If we were yielded a key, register it in the signers diff --git a/processor/bitcoin/src/db.rs b/processor/bitcoin/src/db.rs index b0acc427..1d73ebfe 100644 --- a/processor/bitcoin/src/db.rs +++ b/processor/bitcoin/src/db.rs @@ -1,21 +1,4 @@ -use ciphersuite::group::GroupEncoding; - -use serai_client::validator_sets::primitives::Session; - -use serai_db::{Get, DbTxn, create_db, db_channel}; -use primitives::EncodableG; - -create_db! { - Processor { - ExternalKeyForSessionForSigners: (session: Session) -> EncodableG, - } -} - -db_channel! { - Processor { - KeyToActivate: () -> EncodableG - } -} +use serai_db::{Get, DbTxn, create_db}; create_db! 
{ BitcoinProcessor { diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index 09228d44..74e174ee 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -57,8 +57,7 @@ async fn main() { tokio::spawn(TxIndexTask(feed.clone()).continually_run(index_task, vec![])); core::mem::forget(index_handle); - bin::coordinator_loop::<_, KeyGenParams, Scheduler<_>, Rpc>(db, feed.clone(), feed) - .await; + bin::main_loop::<_, KeyGenParams, Scheduler<_>, Rpc>(db, feed.clone(), feed).await; } /* From f2cf03cedfb9ef09839e6517b627aaa1c00472d6 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 12 Sep 2024 18:40:10 -0400 Subject: [PATCH 113/368] Monero processor primitives --- Cargo.lock | 36 ++- processor/bin/Cargo.toml | 17 +- processor/bin/src/coordinator.rs | 1 + processor/bitcoin/Cargo.toml | 18 +- processor/monero/Cargo.toml | 36 +-- processor/monero/src/key_gen.rs | 11 + processor/monero/src/lib.rs | 2 + processor/monero/src/main.rs | 43 ++++ processor/monero/src/primitives/block.rs | 54 +++++ processor/monero/src/primitives/mod.rs | 3 + processor/monero/src/primitives/output.rs | 86 +++++++ .../monero/src/primitives/transaction.rs | 137 ++++++++++++ processor/monero/src/rpc.rs | 156 +++++++++++++ processor/monero/src/scheduler.rs | 205 +++++++++++++++++ substrate/client/Cargo.toml | 4 +- substrate/client/src/networks/monero.rs | 211 +++++++++++------- 16 files changed, 873 insertions(+), 147 deletions(-) create mode 100644 processor/monero/src/key_gen.rs create mode 100644 processor/monero/src/main.rs create mode 100644 processor/monero/src/primitives/block.rs create mode 100644 processor/monero/src/primitives/mod.rs create mode 100644 processor/monero/src/primitives/output.rs create mode 100644 processor/monero/src/primitives/transaction.rs create mode 100644 processor/monero/src/rpc.rs create mode 100644 processor/monero/src/scheduler.rs diff --git a/Cargo.lock b/Cargo.lock index 7e7d78a3..ec3ccf8b 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8129,7 +8129,6 @@ dependencies = [ "borsh", "ciphersuite", "dkg", - "env_logger", "flexible-transcript", "hex", "log", @@ -8139,11 +8138,8 @@ dependencies = [ "secp256k1", "serai-client", "serai-db", - "serai-env", - "serai-message-queue", "serai-processor-bin", "serai-processor-key-gen", - "serai-processor-messages", "serai-processor-primitives", "serai-processor-scanner", "serai-processor-scheduler-primitives", @@ -8152,7 +8148,6 @@ dependencies = [ "serai-processor-utxo-scheduler-primitives", "tokio", "zalloc", - "zeroize", ] [[package]] @@ -8170,7 +8165,7 @@ dependencies = [ "frost-schnorrkel", "hex", "modular-frost", - "monero-wallet", + "monero-address", "multiaddr", "parity-scale-codec", "rand_core", @@ -8522,19 +8517,26 @@ version = "0.1.0" dependencies = [ "async-trait", "borsh", - "const-hex", + "ciphersuite", "dalek-ff-group", - "env_logger", + "dkg", + "flexible-transcript", "hex", "log", - "monero-simple-request-rpc", + "modular-frost", "monero-wallet", "parity-scale-codec", + "rand_core", + "serai-client", "serai-db", - "serai-env", - "serai-message-queue", - "serai-processor-messages", - "serde_json", + "serai-processor-bin", + "serai-processor-key-gen", + "serai-processor-primitives", + "serai-processor-scanner", + "serai-processor-scheduler-primitives", + "serai-processor-signers", + "serai-processor-utxo-scheduler", + "serai-processor-utxo-scheduler-primitives", "tokio", "zalloc", ] @@ -8643,18 +8645,13 @@ name = "serai-processor-bin" version = "0.1.0" dependencies = [ "async-trait", - 
"bitcoin-serai", "borsh", "ciphersuite", "dkg", "env_logger", - "flexible-transcript", "hex", "log", - "modular-frost", "parity-scale-codec", - "rand_core", - "secp256k1", "serai-client", "serai-db", "serai-env", @@ -8665,10 +8662,7 @@ dependencies = [ "serai-processor-scanner", "serai-processor-scheduler-primitives", "serai-processor-signers", - "serai-processor-transaction-chaining-scheduler", - "serai-processor-utxo-scheduler-primitives", "tokio", - "zalloc", "zeroize", ] diff --git a/processor/bin/Cargo.toml b/processor/bin/Cargo.toml index f3f3b753..01a774ac 100644 --- a/processor/bin/Cargo.toml +++ b/processor/bin/Cargo.toml @@ -19,29 +19,22 @@ workspace = true [dependencies] async-trait = { version = "0.1", default-features = false } zeroize = { version = "1", default-features = false, features = ["std"] } -rand_core = { version = "0.6", default-features = false } hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } -transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } -ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "secp256k1"] } -dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-secp256k1"] } -frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } +ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] } +dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-ristretto"] } -secp256k1 = { version = "0.29", default-features = false, features = ["std", "global-context", "rand-std"] } -bitcoin-serai = { path = "../../networks/bitcoin", default-features = false, features = ["std"] } +serai-client = { path = "../../substrate/client", default-features = false, features = ["bitcoin"] } log = { version = "0.4", default-features = false, features = ["std"] } env_logger = { version = "0.10", default-features = false, features = ["humantime"] } tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } -zalloc = { path = "../../common/zalloc" } -serai-db = { path = "../../common/db" } serai-env = { path = "../../common/env" } - -serai-client = { path = "../../substrate/client", default-features = false, features = ["bitcoin"] } +serai-db = { path = "../../common/db" } messages = { package = "serai-processor-messages", path = "../messages" } key-gen = { package = "serai-processor-key-gen", path = "../key-gen" } @@ -49,8 +42,6 @@ key-gen = { package = "serai-processor-key-gen", path = "../key-gen" } primitives = { package = "serai-processor-primitives", path = "../primitives" } scheduler = { package = "serai-processor-scheduler-primitives", path = "../scheduler/primitives" } scanner = { package = "serai-processor-scanner", path = "../scanner" } -utxo-scheduler = { package = "serai-processor-utxo-scheduler-primitives", path = "../scheduler/utxo/primitives" } -transaction-chaining-scheduler = { package = "serai-processor-transaction-chaining-scheduler", path = "../scheduler/utxo/transaction-chaining" } signers = { package = "serai-processor-signers", path = "../signers" } message-queue = { package = "serai-message-queue", path = 
"../../message-queue" } diff --git a/processor/bin/src/coordinator.rs b/processor/bin/src/coordinator.rs index 12442c3d..ead4a131 100644 --- a/processor/bin/src/coordinator.rs +++ b/processor/bin/src/coordinator.rs @@ -69,6 +69,7 @@ impl Coordinator { "monero" => NetworkId::Monero, _ => panic!("unrecognized network"), }; + // TODO: Read this from ScannerFeed let service = Service::Processor(network_id); let message_queue = Arc::new(MessageQueue::from_env(service)); diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml index c968e36b..2d4958c7 100644 --- a/processor/bitcoin/Cargo.toml +++ b/processor/bitcoin/Cargo.toml @@ -18,7 +18,6 @@ workspace = true [dependencies] async-trait = { version = "0.1", default-features = false } -zeroize = { version = "1", default-features = false, features = ["std"] } rand_core = { version = "0.6", default-features = false } hex = { version = "0.4", default-features = false, features = ["std"] } @@ -33,17 +32,14 @@ frost = { package = "modular-frost", path = "../../crypto/frost", default-featur secp256k1 = { version = "0.29", default-features = false, features = ["std", "global-context", "rand-std"] } bitcoin-serai = { path = "../../networks/bitcoin", default-features = false, features = ["std"] } -log = { version = "0.4", default-features = false, features = ["std"] } -env_logger = { version = "0.10", default-features = false, features = ["humantime"] } -tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } - -zalloc = { path = "../../common/zalloc" } -serai-db = { path = "../../common/db" } -serai-env = { path = "../../common/env" } - serai-client = { path = "../../substrate/client", default-features = false, features = ["bitcoin"] } -messages = { package = "serai-processor-messages", path = "../messages" } +zalloc = { path = "../../common/zalloc" } +log = { version = "0.4", default-features = false, features = ["std"] } +tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } + +serai-db = { path = "../../common/db" } + key-gen = { package = "serai-processor-key-gen", path = "../key-gen" } primitives = { package = "serai-processor-primitives", path = "../primitives" } @@ -55,8 +51,6 @@ signers = { package = "serai-processor-signers", path = "../signers" } bin = { package = "serai-processor-bin", path = "../bin" } -message-queue = { package = "serai-message-queue", path = "../../message-queue" } - [features] parity-db = ["bin/parity-db"] rocksdb = ["bin/rocksdb"] diff --git a/processor/monero/Cargo.toml b/processor/monero/Cargo.toml index e71472e4..5538d025 100644 --- a/processor/monero/Cargo.toml +++ b/processor/monero/Cargo.toml @@ -18,29 +18,39 @@ workspace = true [dependencies] async-trait = { version = "0.1", default-features = false } +rand_core = { version = "0.6", default-features = false } -const-hex = { version = "1", default-features = false } hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } -serde_json = { version = "1", default-features = false, features = ["std"] } -dalek-ff-group = { path = "../../crypto/dalek-ff-group", default-features = false, features = ["std"], optional = true } -monero-simple-request-rpc = { path = "../../networks/monero/rpc/simple-request", default-features = false, 
optional = true } -monero-wallet = { path = "../../networks/monero/wallet", default-features = false, features = ["std", "multisig", "compile-time-generators"], optional = true } +transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } +dalek-ff-group = { path = "../../crypto/dalek-ff-group", default-features = false, features = ["std"] } +ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "ed25519"] } +dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-ed25519"] } +frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } -log = { version = "0.4", default-features = false, features = ["std"] } -env_logger = { version = "0.10", default-features = false, features = ["humantime"] } -tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } +monero-wallet = { path = "../../networks/monero/wallet", default-features = false, features = ["std", "multisig"] } + +serai-client = { path = "../../substrate/client", default-features = false, features = ["monero"] } zalloc = { path = "../../common/zalloc" } +log = { version = "0.4", default-features = false, features = ["std"] } +tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } + serai-db = { path = "../../common/db" } -serai-env = { path = "../../common/env" } -messages = { package = "serai-processor-messages", path = "../messages" } +key-gen = { package = "serai-processor-key-gen", path = "../key-gen" } -message-queue = { package = "serai-message-queue", path = "../../message-queue" } +primitives = { package = "serai-processor-primitives", path = "../primitives" } +scheduler = { package = "serai-processor-scheduler-primitives", path = "../scheduler/primitives" } +scanner = { package = "serai-processor-scanner", path = "../scanner" } +utxo-scheduler = { package = "serai-processor-utxo-scheduler-primitives", path = "../scheduler/utxo/primitives" } +utxo-standard-scheduler = { package = "serai-processor-utxo-scheduler", path = "../scheduler/utxo/standard" } +signers = { package = "serai-processor-signers", path = "../signers" } + +bin = { package = "serai-processor-bin", path = "../bin" } [features] -parity-db = ["serai-db/parity-db"] -rocksdb = ["serai-db/rocksdb"] +parity-db = ["bin/parity-db"] +rocksdb = ["bin/rocksdb"] diff --git a/processor/monero/src/key_gen.rs b/processor/monero/src/key_gen.rs new file mode 100644 index 00000000..dee33029 --- /dev/null +++ b/processor/monero/src/key_gen.rs @@ -0,0 +1,11 @@ +use ciphersuite::{group::GroupEncoding, Ciphersuite, Ed25519}; +use frost::ThresholdKeys; + +pub(crate) struct KeyGenParams; +impl key_gen::KeyGenParams for KeyGenParams { + const ID: &'static str = "Monero"; + + type ExternalNetworkCiphersuite = Ed25519; + + fn tweak_keys(keys: &mut ThresholdKeys) {} +} diff --git a/processor/monero/src/lib.rs b/processor/monero/src/lib.rs index 8786bef3..f9b334ef 100644 --- a/processor/monero/src/lib.rs +++ b/processor/monero/src/lib.rs @@ -1,3 +1,4 @@ +/* #![cfg_attr(docsrs, feature(doc_auto_cfg))] #![doc = include_str!("../README.md")] #![deny(missing_docs)] @@ -809,3 +810,4 @@ impl UtxoNetwork for Monero { // TODO: Test creating a TX this big const MAX_INPUTS: usize = 120; } +*/ diff --git a/processor/monero/src/main.rs b/processor/monero/src/main.rs new file mode 100644 index 00000000..41896de1 
--- /dev/null +++ b/processor/monero/src/main.rs @@ -0,0 +1,43 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +#[global_allocator] +static ALLOCATOR: zalloc::ZeroizingAlloc = + zalloc::ZeroizingAlloc(std::alloc::System); + +use monero_wallet::rpc::Rpc as MRpc; + +mod primitives; +pub(crate) use crate::primitives::*; + +/* +mod key_gen; +use crate::key_gen::KeyGenParams; +mod rpc; +use rpc::Rpc; +mod scheduler; +use scheduler::Scheduler; + +#[tokio::main] +async fn main() { + let db = bin::init(); + let feed = Rpc { + db: db.clone(), + rpc: loop { + match MRpc::new(bin::url()).await { + Ok(rpc) => break rpc, + Err(e) => { + log::error!("couldn't connect to the Monero node: {e:?}"); + tokio::time::sleep(core::time::Duration::from_secs(5)).await; + } + } + }, + }; + + bin::main_loop::<_, KeyGenParams, Scheduler<_>, Rpc>(db, feed.clone(), feed).await; +} +*/ + +#[tokio::main] +async fn main() {} diff --git a/processor/monero/src/primitives/block.rs b/processor/monero/src/primitives/block.rs new file mode 100644 index 00000000..40d0f296 --- /dev/null +++ b/processor/monero/src/primitives/block.rs @@ -0,0 +1,54 @@ +use std::collections::HashMap; + +use ciphersuite::{Ciphersuite, Ed25519}; + +use monero_wallet::{transaction::Transaction, block::Block as MBlock}; + +use serai_client::networks::monero::Address; + +use primitives::{ReceivedOutput, EventualityTracker}; + +use crate::{output::Output, transaction::Eventuality}; + +#[derive(Clone, Debug)] +pub(crate) struct BlockHeader(pub(crate) MBlock); +impl primitives::BlockHeader for BlockHeader { + fn id(&self) -> [u8; 32] { + self.0.hash() + } + fn parent(&self) -> [u8; 32] { + self.0.header.previous + } +} + +#[derive(Clone, Debug)] +pub(crate) struct Block(pub(crate) MBlock, Vec); + +#[async_trait::async_trait] +impl primitives::Block for Block { + type Header = BlockHeader; + + type Key = ::G; + type Address = Address; + type Output = Output; + type Eventuality = Eventuality; + + fn id(&self) -> [u8; 32] { + self.0.hash() + } + + fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec { + todo!("TODO") + } + + #[allow(clippy::type_complexity)] + fn check_for_eventuality_resolutions( + &self, + eventualities: &mut EventualityTracker, + ) -> HashMap< + >::TransactionId, + Self::Eventuality, + > { + todo!("TODO") + } +} diff --git a/processor/monero/src/primitives/mod.rs b/processor/monero/src/primitives/mod.rs new file mode 100644 index 00000000..fba52dd9 --- /dev/null +++ b/processor/monero/src/primitives/mod.rs @@ -0,0 +1,3 @@ +pub(crate) mod output; +pub(crate) mod transaction; +pub(crate) mod block; diff --git a/processor/monero/src/primitives/output.rs b/processor/monero/src/primitives/output.rs new file mode 100644 index 00000000..d3eb3be3 --- /dev/null +++ b/processor/monero/src/primitives/output.rs @@ -0,0 +1,86 @@ +use std::io; + +use ciphersuite::{group::Group, Ciphersuite, Ed25519}; + +use monero_wallet::WalletOutput; + +use scale::{Encode, Decode}; +use borsh::{BorshSerialize, BorshDeserialize}; + +use serai_client::{ + primitives::{Coin, Amount, Balance}, + networks::monero::Address, +}; + +use primitives::{OutputType, ReceivedOutput}; + +#[rustfmt::skip] +#[derive( + Clone, Copy, PartialEq, Eq, Default, Hash, Debug, Encode, Decode, BorshSerialize, BorshDeserialize, +)] +pub(crate) struct OutputId(pub(crate) [u8; 32]); +impl AsRef<[u8]> for OutputId { + fn as_ref(&self) -> &[u8] { + self.0.as_ref() + } +} +impl AsMut<[u8]> for OutputId { + fn as_mut(&mut self) -> 
&mut [u8] { + self.0.as_mut() + } +} + +#[derive(Clone, PartialEq, Eq, Debug)] +pub(crate) struct Output(WalletOutput); + +impl Output { + pub(crate) fn new(output: WalletOutput) -> Self { + Self(output) + } +} + +impl ReceivedOutput<::G, Address> for Output { + type Id = OutputId; + type TransactionId = [u8; 32]; + + fn kind(&self) -> OutputType { + todo!("TODO") + } + + fn id(&self) -> Self::Id { + OutputId(self.0.key().compress().to_bytes()) + } + + fn transaction_id(&self) -> Self::TransactionId { + self.0.transaction() + } + + fn key(&self) -> ::G { + // The spend key will be a key we generated, so it'll be in the prime-order subgroup + // The output's key is the spend key + (key_offset * G), so it's in the prime-order subgroup if + // the spend key is + dalek_ff_group::EdwardsPoint( + self.0.key() - (*::G::generator() * self.0.key_offset()), + ) + } + + fn presumed_origin(&self) -> Option
<Address>
{ + None + } + + fn balance(&self) -> Balance { + Balance { coin: Coin::Monero, amount: Amount(self.0.commitment().amount) } + } + + fn data(&self) -> &[u8] { + self.0.arbitrary_data().first().map_or(&[], Vec::as_slice) + } + + fn write(&self, writer: &mut W) -> io::Result<()> { + self.0.write(writer) + } + + fn read(reader: &mut R) -> io::Result { + WalletOutput::read(reader).map(Self) + } +} diff --git a/processor/monero/src/primitives/transaction.rs b/processor/monero/src/primitives/transaction.rs new file mode 100644 index 00000000..1ba49471 --- /dev/null +++ b/processor/monero/src/primitives/transaction.rs @@ -0,0 +1,137 @@ +use std::io; + +use rand_core::{RngCore, CryptoRng}; + +use ciphersuite::Ed25519; +use frost::{dkg::ThresholdKeys, sign::PreprocessMachine}; + +use monero_wallet::{ + transaction::Transaction as MTransaction, + send::{ + SignableTransaction as MSignableTransaction, TransactionMachine, Eventuality as MEventuality, + }, +}; + +use crate::output::OutputId; + +#[derive(Clone, Debug)] +pub(crate) struct Transaction(pub(crate) MTransaction); + +impl From for Transaction { + fn from(tx: MTransaction) -> Self { + Self(tx) + } +} + +impl scheduler::Transaction for Transaction { + fn read(reader: &mut impl io::Read) -> io::Result { + MTransaction::read(reader).map(Self) + } + fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + self.0.write(writer) + } +} + +#[derive(Clone, Debug)] +pub(crate) struct SignableTransaction { + id: [u8; 32], + signable: MSignableTransaction, +} + +#[derive(Clone)] +pub(crate) struct ClonableTransctionMachine(MSignableTransaction, ThresholdKeys); +impl PreprocessMachine for ClonableTransctionMachine { + type Preprocess = ::Preprocess; + type Signature = ::Signature; + type SignMachine = ::SignMachine; + + fn preprocess( + self, + rng: &mut R, + ) -> (Self::SignMachine, Self::Preprocess) { + self.0.multisig(self.1).expect("incorrect keys used for SignableTransaction").preprocess(rng) + } +} + +impl scheduler::SignableTransaction for SignableTransaction { + type Transaction = Transaction; + type Ciphersuite = Ed25519; + type PreprocessMachine = ClonableTransctionMachine; + + fn read(reader: &mut impl io::Read) -> io::Result { + let mut id = [0; 32]; + reader.read_exact(&mut id)?; + + let signable = MSignableTransaction::read(reader)?; + Ok(SignableTransaction { id, signable }) + } + fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + writer.write_all(&self.id)?; + self.signable.write(writer) + } + + fn id(&self) -> [u8; 32] { + self.id + } + + fn sign(self, keys: ThresholdKeys) -> Self::PreprocessMachine { + ClonableTransctionMachine(self.signable, keys) + } +} + +#[derive(Clone, PartialEq, Eq, Debug)] +pub(crate) struct Eventuality { + id: [u8; 32], + singular_spent_output: Option, + eventuality: MEventuality, +} + +impl primitives::Eventuality for Eventuality { + type OutputId = OutputId; + + fn id(&self) -> [u8; 32] { + self.id + } + + // We define the lookup as our ID since the resolving transaction only has a singular possible ID + fn lookup(&self) -> Vec { + self.eventuality.extra() + } + + fn singular_spent_output(&self) -> Option { + self.singular_spent_output + } + + fn read(reader: &mut impl io::Read) -> io::Result { + let mut id = [0; 32]; + reader.read_exact(&mut id)?; + + let singular_spent_output = { + let mut singular_spent_output_opt = [0xff]; + reader.read_exact(&mut singular_spent_output_opt)?; + assert!(singular_spent_output_opt[0] <= 1); + (singular_spent_output_opt[0] == 1) + .then(|| -> 
io::Result<_> { + let mut singular_spent_output = [0; 32]; + reader.read_exact(&mut singular_spent_output)?; + Ok(OutputId(singular_spent_output)) + }) + .transpose()? + }; + + let eventuality = MEventuality::read(reader)?; + Ok(Self { id, singular_spent_output, eventuality }) + } + fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + writer.write_all(&self.id)?; + + if let Some(singular_spent_output) = self.singular_spent_output { + writer.write_all(&[1])?; + writer.write_all(singular_spent_output.as_ref())?; + } else { + writer.write_all(&[0])?; + } + + self.eventuality.write(writer) + } +} diff --git a/processor/monero/src/rpc.rs b/processor/monero/src/rpc.rs new file mode 100644 index 00000000..a6f6e5fd --- /dev/null +++ b/processor/monero/src/rpc.rs @@ -0,0 +1,156 @@ +use bitcoin_serai::rpc::{RpcError, Rpc as BRpc}; + +use serai_client::primitives::{NetworkId, Coin, Amount}; + +use serai_db::Db; +use scanner::ScannerFeed; +use signers::TransactionPublisher; + +use crate::{ + db, + transaction::Transaction, + block::{BlockHeader, Block}, +}; + +#[derive(Clone)] +pub(crate) struct Rpc { + pub(crate) db: D, + pub(crate) rpc: BRpc, +} + +#[async_trait::async_trait] +impl ScannerFeed for Rpc { + const NETWORK: NetworkId = NetworkId::Bitcoin; + const CONFIRMATIONS: u64 = 6; + const WINDOW_LENGTH: u64 = 6; + + const TEN_MINUTES: u64 = 1; + + type Block = Block; + + type EphemeralError = RpcError; + + async fn latest_finalized_block_number(&self) -> Result { + db::LatestBlockToYieldAsFinalized::get(&self.db).ok_or(RpcError::ConnectionError) + } + + async fn time_of_block(&self, number: u64) -> Result { + let number = usize::try_from(number).unwrap(); + + /* + The block time isn't guaranteed to be monotonic. It is guaranteed to be greater than the + median time of prior blocks, as detailed in BIP-0113 (a BIP which used that fact to improve + CLTV). This creates a monotonic median time which we use as the block time. + */ + // This implements `GetMedianTimePast` + let median = { + const MEDIAN_TIMESPAN: usize = 11; + let mut timestamps = Vec::with_capacity(MEDIAN_TIMESPAN); + for i in number.saturating_sub(MEDIAN_TIMESPAN) .. number { + timestamps.push(self.rpc.get_block(&self.rpc.get_block_hash(i).await?).await?.header.time); + } + timestamps.sort(); + timestamps[timestamps.len() / 2] + }; + + /* + This block's timestamp is guaranteed to be greater than this median: + https://github.com/bitcoin/bitcoin/blob/0725a374941355349bb4bc8a79dad1affb27d3b9 + /src/validation.cpp#L4182-L4184 + + This does not guarantee the median always increases however. Take the following trivial + example, as the window is initially built: + + 0 block has time 0 // Prior blocks: [] + 1 block has time 1 // Prior blocks: [0] + 2 block has time 2 // Prior blocks: [0, 1] + 3 block has time 2 // Prior blocks: [0, 1, 2] + + These two blocks have the same time (both greater than the median of their prior blocks) and + the same median. + + The median will never decrease however. The values pushed onto the window will always be + greater than the median. If a value greater than the median is popped, the median will remain + the same (due to the counterbalance of the pushed value). If a value less than the median is + popped, the median will increase (either to another instance of the same value, yet one + closer to the end of the repeating sequence, or to a higher value). 
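A concrete instance, added here for illustration: take a full window whose sorted times are [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6], with median 3, so the next block's time must exceed 3. Say that time is 4. If the timestamp which falls out of the window was a 1 (less than the median), the sorted window becomes [1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 6] and the median rises to 4. If the evicted timestamp was instead above the median, the median stays at 3. In neither case does it decrease.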
+ */ + Ok(median.into()) + } + + async fn unchecked_block_header_by_number( + &self, + number: u64, + ) -> Result<<Self::Block as primitives::Block>::Header, Self::EphemeralError> { + Ok(BlockHeader( + self.rpc.get_block(&self.rpc.get_block_hash(number.try_into().unwrap()).await?).await?.header, + )) + } + + async fn unchecked_block_by_number( + &self, + number: u64, + ) -> Result<Self::Block, Self::EphemeralError> { + Ok(Block( + self.db.clone(), + self.rpc.get_block(&self.rpc.get_block_hash(number.try_into().unwrap()).await?).await?, + )) + } + + fn dust(coin: Coin) -> Amount { + assert_eq!(coin, Coin::Bitcoin); + + /* + A Taproot input is: + - 36 bytes for the OutPoint + - 0 bytes for the script (+1 byte for the length) + - 4 bytes for the sequence + Per https://developer.bitcoin.org/reference/transactions.html#raw-transaction-format + + There's also: + - 1 byte for the witness length + - 1 byte for the signature length + - 64 bytes for the signature + which have the SegWit discount. + + (4 * (36 + 1 + 4)) + (1 + 1 + 64) = 164 + 66 = 230 weight units + 230 ceil div 4 = 58 vbytes + + Bitcoin defines multiple minimum feerate constants *per kilo-vbyte*. Currently, these are: + - 1000 sat/kilo-vbyte for a transaction to be relayed + - Each output's value must exceed the fee of the TX spending it at 3000 sat/kilo-vbyte + The DUST constant needs to be determined by the latter. + Since these are solely relay rules, and may be raised, we require all outputs be spendable + under a 5000 sat/kilo-vbyte fee rate. + + 5000 sat/kilo-vbyte = 5 sat/vbyte + 5 * 58 = 290 sats/spent-output + + Even if an output took 100 bytes (it should be just ~29-43), taking 400 weight units, adding + 100 vbytes, tripling the transaction size, then the sats/tx would be < 1000. + + Increase by an order of magnitude, in order to ensure this is actually worth our time, and we + get 10,000 satoshis. This is $5 if 1 BTC = 50,000 USD.
+ */ + Amount(10_000) + } + + async fn cost_to_aggregate( + &self, + coin: Coin, + _reference_block: &Self::Block, + ) -> Result { + assert_eq!(coin, Coin::Bitcoin); + // TODO + Ok(Amount(0)) + } +} + +#[async_trait::async_trait] +impl TransactionPublisher for Rpc { + type EphemeralError = RpcError; + + async fn publish(&self, tx: Transaction) -> Result<(), Self::EphemeralError> { + self.rpc.send_raw_transaction(&tx.0).await.map(|_| ()) + } +} diff --git a/processor/monero/src/scheduler.rs b/processor/monero/src/scheduler.rs new file mode 100644 index 00000000..6e49d23d --- /dev/null +++ b/processor/monero/src/scheduler.rs @@ -0,0 +1,205 @@ +use ciphersuite::{Ciphersuite, Secp256k1}; + +use bitcoin_serai::{ + bitcoin::ScriptBuf, + wallet::{TransactionError, SignableTransaction as BSignableTransaction, p2tr_script_buf}, +}; + +use serai_client::{ + primitives::{Coin, Amount}, + networks::bitcoin::Address, +}; + +use serai_db::Db; +use primitives::{OutputType, ReceivedOutput, Payment}; +use scanner::{KeyFor, AddressFor, OutputFor, BlockFor}; +use utxo_scheduler::{PlannedTransaction, TransactionPlanner}; +use transaction_chaining_scheduler::{EffectedReceivedOutputs, Scheduler as GenericScheduler}; + +use crate::{ + scan::{offsets_for_key, scanner}, + output::Output, + transaction::{SignableTransaction, Eventuality}, + rpc::Rpc, +}; + +fn address_from_serai_key(key: ::G, kind: OutputType) -> Address { + let offset = ::G::GENERATOR * offsets_for_key(key)[&kind]; + Address::new( + p2tr_script_buf(key + offset) + .expect("creating address from Serai key which wasn't properly tweaked"), + ) + .expect("couldn't create Serai-representable address for P2TR script") +} + +fn signable_transaction( + fee_per_vbyte: u64, + inputs: Vec>>, + payments: Vec>>>, + change: Option>>, +) -> Result<(SignableTransaction, BSignableTransaction), TransactionError> { + assert!( + inputs.len() < + , EffectedReceivedOutputs>>>::MAX_INPUTS + ); + assert!( + (payments.len() + usize::from(u8::from(change.is_some()))) < + , EffectedReceivedOutputs>>>::MAX_OUTPUTS + ); + + let inputs = inputs.into_iter().map(|input| input.output).collect::>(); + + let mut payments = payments + .into_iter() + .map(|payment| { + (payment.address().clone(), { + let balance = payment.balance(); + assert_eq!(balance.coin, Coin::Bitcoin); + balance.amount.0 + }) + }) + .collect::>(); + /* + Push a payment to a key with a known private key which anyone can spend. If this transaction + gets stuck, this lets anyone create a child transaction spending this output, raising the fee, + getting the transaction unstuck (via CPFP). 
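An illustrative note on why this output unsticks a transaction: nodes and miners evaluate a low-fee parent together with a child spending it, at the package feerate (fee_parent + fee_child) / (vsize_parent + vsize_child), so anyone can attach a high-fee child to this anyone-can-spend output and lift the pair over the relay and mining floors.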
+  */
+  payments.push((
+    // The generator is even so this is valid
+    Address::new(p2tr_script_buf(<Secp256k1 as Ciphersuite>::G::GENERATOR).unwrap()).unwrap(),
+    // This uses the minimum output value allowed, as defined as a constant in bitcoin-serai
+    // TODO: Add a test for this comparing to bitcoin's `minimal_non_dust`
+    bitcoin_serai::wallet::DUST,
+  ));
+
+  let change = change
+    .map(<Planner as TransactionPlanner<Rpc<D>, EffectedReceivedOutputs<Rpc<D>>>>::change_address);
+
+  BSignableTransaction::new(
+    inputs.clone(),
+    &payments
+      .iter()
+      .cloned()
+      .map(|(address, amount)| (ScriptBuf::from(address), amount))
+      .collect::<Vec<_>>(),
+    change.clone().map(ScriptBuf::from),
+    None,
+    fee_per_vbyte,
+  )
+  .map(|bst| (SignableTransaction { inputs, payments, change, fee_per_vbyte }, bst))
+}
+
+pub(crate) struct Planner;
+impl<D: Db> TransactionPlanner<Rpc<D>, EffectedReceivedOutputs<Rpc<D>>> for Planner {
+  type FeeRate = u64;
+
+  type SignableTransaction = SignableTransaction;
+
+  /*
+    Bitcoin has a max weight of 400,000 (MAX_STANDARD_TX_WEIGHT).
+
+    A non-SegWit TX will have 4 weight units per byte, leaving a max size of 100,000 bytes. While
+    our inputs are entirely SegWit, such fine tuning is not necessary and could create issues in
+    the future (if the size decreases or we misevaluate it). It also offers a minimal amount of
+    benefit when we are able to logarithmically accumulate inputs/fulfill payments.
+
+    For 128-byte inputs (36-byte output specification, 64-byte signature, whatever overhead) and
+    64-byte outputs (40-byte script, 8-byte amount, whatever overhead), they together take up 192
+    bytes.
+
+    100,000 / 192 = 520
+    520 * 192 leaves 160 bytes of overhead for the transaction structure itself.
+  */
+  const MAX_INPUTS: usize = 520;
+  // We always reserve one output to create an anyone-can-spend output enabling anyone to use CPFP
+  // to unstick any transactions which had too low of a fee.
+  const MAX_OUTPUTS: usize = 519;
+
+  fn fee_rate(block: &BlockFor<Rpc<D>>, coin: Coin) -> Self::FeeRate {
+    assert_eq!(coin, Coin::Bitcoin);
+    // TODO
+    1
+  }
+
+  fn branch_address(key: KeyFor<Rpc<D>>) -> AddressFor<Rpc<D>> {
+    address_from_serai_key(key, OutputType::Branch)
+  }
+  fn change_address(key: KeyFor<Rpc<D>>) -> AddressFor<Rpc<D>> {
+    address_from_serai_key(key, OutputType::Change)
+  }
+  fn forwarding_address(key: KeyFor<Rpc<D>>) -> AddressFor<Rpc<D>> {
+    address_from_serai_key(key, OutputType::Forwarded)
+  }
+
+  fn calculate_fee(
+    fee_rate: Self::FeeRate,
+    inputs: Vec<OutputFor<Rpc<D>>>,
+    payments: Vec<Payment<AddressFor<Rpc<D>>>>,
+    change: Option<KeyFor<Rpc<D>>>,
+  ) -> Amount {
+    match signable_transaction::<D>(fee_rate, inputs, payments, change) {
+      Ok(tx) => Amount(tx.1.needed_fee()),
+      Err(
+        TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment,
+      ) => panic!("malformed arguments to calculate_fee"),
+      // No data, we have a minimum fee rate, we checked the amount of inputs/outputs
+      Err(
+        TransactionError::TooMuchData |
+        TransactionError::TooLowFee |
+        TransactionError::TooLargeTransaction,
+      ) => unreachable!(),
+      Err(TransactionError::NotEnoughFunds { fee, .. }) => Amount(fee),
+    }
+  }
+
+  fn plan(
+    fee_rate: Self::FeeRate,
+    inputs: Vec<OutputFor<Rpc<D>>>,
+    payments: Vec<Payment<AddressFor<Rpc<D>>>>,
+    change: Option<KeyFor<Rpc<D>>>,
+  ) -> PlannedTransaction<KeyFor<Rpc<D>>, Self::SignableTransaction, EffectedReceivedOutputs<Rpc<D>>> {
+    let key = inputs.first().unwrap().key();
+    for input in &inputs {
+      assert_eq!(key, input.key());
+    }
+
+    let singular_spent_output = (inputs.len() == 1).then(|| inputs[0].id());
+    match signable_transaction::<D>(fee_rate, inputs.clone(), payments, change) {
+      Ok(tx) => PlannedTransaction {
+        signable: tx.0,
+        eventuality: Eventuality { txid: tx.1.txid(), singular_spent_output },
+        auxilliary: EffectedReceivedOutputs({
+          let tx = tx.1.transaction();
+          let scanner = scanner(key);
+
+          let mut res = vec![];
+          for output in scanner.scan_transaction(tx) {
+            res.push(Output::new_with_presumed_origin(
+              key,
+              tx,
+              // It shouldn't matter if this is wrong as we should never try to return these
+              // We still provide an accurate value to ensure a lack of discrepancies
+              Some(Address::new(inputs[0].output.output().script_pubkey.clone()).unwrap()),
+              output,
+            ));
+          }
+          res
+        }),
+      },
+      Err(
+        TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment,
+      ) => panic!("malformed arguments to plan"),
+      // No data, we have a minimum fee rate, we checked the amount of inputs/outputs
+      Err(
+        TransactionError::TooMuchData |
+        TransactionError::TooLowFee |
+        TransactionError::TooLargeTransaction,
+      ) => unreachable!(),
+      Err(TransactionError::NotEnoughFunds { .. }) => {
+        panic!("plan called for a transaction without enough funds")
+      }
+    }
+  }
+}
+
+pub(crate) type Scheduler<D> = GenericScheduler<Rpc<D>, Planner>;
diff --git a/substrate/client/Cargo.toml b/substrate/client/Cargo.toml
index 5cba05f0..5f7a24d4 100644
--- a/substrate/client/Cargo.toml
+++ b/substrate/client/Cargo.toml
@@ -42,7 +42,7 @@ simple-request = { path = "../../common/request", version = "0.1", optional = tr
 bitcoin = { version = "0.32", optional = true }
 
 ciphersuite = { path = "../../crypto/ciphersuite", version = "0.4", optional = true }
-monero-wallet = { path = "../../networks/monero/wallet", version = "0.1.0", default-features = false, features = ["std"], optional = true }
+monero-address = { path = "../../networks/monero/wallet/address", version = "0.1.0", default-features = false, features = ["std"], optional = true }
 
 [dev-dependencies]
 rand_core = "0.6"
@@ -65,7 +65,7 @@ borsh = ["serai-abi/borsh"]
 
 networks = []
 bitcoin = ["networks", "dep:bitcoin"]
-monero = ["networks", "ciphersuite/ed25519", "monero-wallet"]
+monero = ["networks", "ciphersuite/ed25519", "monero-address"]
 
 # Assumes the default usage is to use Serai as a DEX, which doesn't actually
 # require connecting to a Serai node
diff --git a/substrate/client/src/networks/monero.rs b/substrate/client/src/networks/monero.rs
index bd5e0a15..c99a0abd 100644
--- a/substrate/client/src/networks/monero.rs
+++ b/substrate/client/src/networks/monero.rs
@@ -1,102 +1,141 @@
 use core::{str::FromStr, fmt};
 
-use scale::{Encode, Decode};
-
 use ciphersuite::{Ciphersuite, Ed25519};
 
-use monero_wallet::address::{AddressError, Network, AddressType, MoneroAddress};
+use monero_address::{Network, AddressType as MoneroAddressType, MoneroAddress};
 
-#[derive(Clone, PartialEq, Eq, Debug)]
-pub struct Address(MoneroAddress);
-impl Address {
-  pub fn new(address: MoneroAddress) -> Option<Address> {
-    if address.payment_id().is_some() {
-      return None;
-    }
-    Some(Address(address))
-  }
-}
+use crate::primitives::ExternalAddress;
 
-impl FromStr for Address {
-  type Err = AddressError;
-  fn from_str(str: &str) -> Result<Self, Self::Err> {
-    MoneroAddress::from_str(Network::Mainnet, str).map(Address)
-  }
-}
-
-impl fmt::Display for Address {
-  fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-    self.0.fmt(f)
-  }
-}
-
-// SCALE-encoded variant of Monero addresses.
-#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
-enum EncodedAddressType {
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
+enum AddressType {
   Legacy,
   Subaddress,
   Featured(u8),
 }
 
-#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
-struct EncodedAddress {
-  kind: EncodedAddressType,
-  spend: [u8; 32],
-  view: [u8; 32],
+/// A representation of a Monero address.
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
+pub struct Address {
+  kind: AddressType,
+  spend: <Ed25519 as Ciphersuite>::G,
+  view: <Ed25519 as Ciphersuite>::G,
 }
 
-impl TryFrom<Vec<u8>> for Address {
-  type Error = ();
-  fn try_from(data: Vec<u8>) -> Result<Self, Self::Error> {
-    // Decode as SCALE
-    let addr = EncodedAddress::decode(&mut data.as_ref()).map_err(|_| ())?;
-    // Convert over
-    Ok(Address(MoneroAddress::new(
-      Network::Mainnet,
-      match addr.kind {
-        EncodedAddressType::Legacy => AddressType::Legacy,
-        EncodedAddressType::Subaddress => AddressType::Subaddress,
-        EncodedAddressType::Featured(flags) => {
-          let subaddress = (flags & 1) != 0;
-          let integrated = (flags & (1 << 1)) != 0;
-          let guaranteed = (flags & (1 << 2)) != 0;
-          if integrated {
-            Err(())?;
-          }
-          AddressType::Featured { subaddress, payment_id: None, guaranteed }
-        }
-      },
-      Ed25519::read_G::<&[u8]>(&mut addr.spend.as_ref()).map_err(|_| ())?.0,
-      Ed25519::read_G::<&[u8]>(&mut addr.view.as_ref()).map_err(|_| ())?.0,
-    )))
-  }
-}
-
-#[allow(clippy::from_over_into)]
-impl Into<MoneroAddress> for Address {
-  fn into(self) -> MoneroAddress {
-    self.0
-  }
-}
-
-#[allow(clippy::from_over_into)]
-impl Into<Vec<u8>> for Address {
-  fn into(self) -> Vec<u8> {
-    EncodedAddress {
-      kind: match self.0.kind() {
-        AddressType::Legacy => EncodedAddressType::Legacy,
-        AddressType::LegacyIntegrated(_) => {
-          panic!("integrated address became Serai Monero address")
-        }
-        AddressType::Subaddress => EncodedAddressType::Subaddress,
-        AddressType::Featured { subaddress, payment_id, guaranteed } => {
-          debug_assert!(payment_id.is_none());
-          EncodedAddressType::Featured(u8::from(*subaddress) + (u8::from(*guaranteed) << 2))
-        }
-      },
-      spend: self.0.spend().compress().0,
-      view: self.0.view().compress().0,
+fn byte_for_kind(kind: AddressType) -> u8 {
+  // We use the second and third highest bits for the type
+  // This leaves the top bit open for interpretation as a VarInt later
+  match kind {
+    AddressType::Legacy => 0,
+    AddressType::Subaddress => 1 << 5,
+    AddressType::Featured(flags) => {
+      // The flags only take up the low three bits
+      debug_assert!(flags <= 0b111);
+      (2 << 5) | flags
     }
-    .encode()
+  }
+}
+
+impl borsh::BorshSerialize for Address {
+  fn serialize<W: borsh::io::Write>(&self, writer: &mut W) -> borsh::io::Result<()> {
+    writer.write_all(&[byte_for_kind(self.kind)])?;
+    writer.write_all(&self.spend.compress().to_bytes())?;
+    writer.write_all(&self.view.compress().to_bytes())
+  }
+}
+impl borsh::BorshDeserialize for Address {
+  fn deserialize_reader<R: borsh::io::Read>(reader: &mut R) -> borsh::io::Result<Self> {
+    let mut kind_byte = [0xff];
+    reader.read_exact(&mut kind_byte)?;
+    let kind_byte = kind_byte[0];
+    let kind = match kind_byte >> 5 {
+      0 => AddressType::Legacy,
+      1 => AddressType::Subaddress,
+      2 => AddressType::Featured(kind_byte & 0b111),
+      _ => Err(borsh::io::Error::other("unrecognized type"))?,
+    };
+    // Check this wasn't malleated
+    if byte_for_kind(kind) != kind_byte {
+      Err(borsh::io::Error::other("malleated type byte"))?;
+    }
+    let spend = Ed25519::read_G(reader)?;
+    let view = Ed25519::read_G(reader)?;
+    Ok(Self { kind, spend, view })
+  }
+}
+
+impl TryFrom<MoneroAddress> for Address {
+  type Error = ();
+  fn try_from(address: MoneroAddress) -> Result<Self, Self::Error> {
+    let spend = address.spend().compress().to_bytes();
+    let view = address.view().compress().to_bytes();
+    let kind = match address.kind() {
+      MoneroAddressType::Legacy => AddressType::Legacy,
+      MoneroAddressType::LegacyIntegrated(_) => Err(())?,
+      MoneroAddressType::Subaddress => AddressType::Subaddress,
+      MoneroAddressType::Featured { subaddress, payment_id, guaranteed } => {
+        if payment_id.is_some() {
+          Err(())?
+        }
+        // This maintains the same bit layout as featured addresses use
+        AddressType::Featured(u8::from(*subaddress) + (u8::from(*guaranteed) << 2))
+      }
+    };
+    Ok(Address {
+      kind,
+      spend: Ed25519::read_G(&mut spend.as_slice()).map_err(|_| ())?,
+      view: Ed25519::read_G(&mut view.as_slice()).map_err(|_| ())?,
+    })
+  }
+}
+
+impl From<Address> for MoneroAddress {
+  fn from(address: Address) -> MoneroAddress {
+    let kind = match address.kind {
+      AddressType::Legacy => MoneroAddressType::Legacy,
+      AddressType::Subaddress => MoneroAddressType::Subaddress,
+      AddressType::Featured(features) => {
+        debug_assert!(features <= 0b111);
+        let subaddress = (features & 1) != 0;
+        let integrated = (features & (1 << 1)) != 0;
+        debug_assert!(!integrated);
+        let guaranteed = (features & (1 << 2)) != 0;
+        MoneroAddressType::Featured { subaddress, payment_id: None, guaranteed }
+      }
+    };
+    MoneroAddress::new(Network::Mainnet, kind, address.spend.0, address.view.0)
+  }
+}
+
+impl TryFrom<ExternalAddress> for Address {
+  type Error = ();
+  fn try_from(data: ExternalAddress) -> Result<Self, Self::Error> {
+    // Decode as an Address
+    let mut data = data.as_ref();
+    let address =
+      <Address as borsh::BorshDeserialize>::deserialize_reader(&mut data).map_err(|_| ())?;
+    if !data.is_empty() {
+      Err(())?
+    }
+    Ok(address)
+  }
+}
+impl From<Address> for ExternalAddress {
+  fn from(address: Address) -> ExternalAddress {
+    // This is 65 bytes which is less than MAX_ADDRESS_LEN
+    ExternalAddress::new(borsh::to_vec(&address).unwrap()).unwrap()
+  }
+}
+
+impl FromStr for Address {
+  type Err = ();
+  fn from_str(str: &str) -> Result<Self, Self::Err> {
+    let Ok(address) = MoneroAddress::from_str(Network::Mainnet, str) else { Err(())? };
+    Address::try_from(address)
+  }
+}
+
+impl fmt::Display for Address {
+  fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+    MoneroAddress::from(*self).fmt(f)
+  }
+}

From 02409c57355edec0a670f0f0f7de74c01a1691d9 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Fri, 13 Sep 2024 00:10:52 -0400
Subject: [PATCH 114/368] Correct Multisig Rotation to use WINDOW_LENGTH where proper

---
 spec/processor/Multisig Rotation.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/spec/processor/Multisig Rotation.md b/spec/processor/Multisig Rotation.md
index 916ce56b..86708025 100644
--- a/spec/processor/Multisig Rotation.md
+++ b/spec/processor/Multisig Rotation.md
@@ -12,11 +12,11 @@ The following timeline is established:
 1) The new multisig is created, and has its keys set on Serai. Once the next
    `Batch` with a new external network block is published, its block becomes
    the "queue block". The new multisig is set to activate at the "queue block",
    plus
-   `CONFIRMATIONS` blocks (the "activation block").
+   `WINDOW_LENGTH` blocks (the "activation block").
 
    We don't use the last `Batch`'s external network block, as that `Batch` may
-   be older than `CONFIRMATIONS` blocks. Any yet-to-be-included-and-finalized
-   `Batch` will be within `CONFIRMATIONS` blocks of what any processor has
+   be older than `WINDOW_LENGTH` blocks. Any yet-to-be-included-and-finalized
+   `Batch` will be within `WINDOW_LENGTH` blocks of what any processor has
    scanned however, as it'll wait for inclusion and finalization before
    continuing scanning.
 
@@ -122,7 +122,7 @@ The following timeline is established:
 
    Once the 6 hour period has expired, no `Eventuality`s remain, and all
    outputs are forwarded, the multisig publishes a final `Batch` of the first
-   block, plus `CONFIRMATIONS`, which met these conditions, regardless of if it
+   block, plus `WINDOW_LENGTH`, which met these conditions, regardless of if it
   would've otherwise had a `Batch`. No further actions by it, nor its
 validators, are expected (unless, of course, those validators remain present
 in the new multisig).
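To make the corrected timeline concrete, the activation block is a pure function of the queue
block and the network's `WINDOW_LENGTH`. A minimal sketch of that arithmetic (illustrative only;
`activation_block` and the literal values are assumptions mirroring the spec and the
`ScannerFeed` constants in this patch series, not code from the repository):

  // The new multisig activates WINDOW_LENGTH blocks after the queue block.
  // WINDOW_LENGTH is 6 for Bitcoin and 30 for Monero in the ScannerFeed impls in this series.
  fn activation_block(queue_block: u64, window_length: u64) -> u64 {
    queue_block + window_length
  }

  fn main() {
    // With Bitcoin's WINDOW_LENGTH of 6, a queue block of 850_000 activates at 850_006.
    assert_eq!(activation_block(850_000, 6), 850_006);
  }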
From 2c4c33e6326e0f8fe70be1515aad94a8cc784214 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Fri, 13 Sep 2024 00:48:57 -0400
Subject: [PATCH 115/368] Misc continuances on the Monero processor

---
 Cargo.lock                        |   2 +-
 processor/bitcoin/src/rpc.rs      |   2 +
 processor/key-gen/src/lib.rs      |   6 +-
 processor/monero/Cargo.toml       |   1 +
 processor/monero/src/key_gen.rs   |   5 +-
 processor/monero/src/main.rs      |   2 +-
 processor/monero/src/rpc.rs       | 117 ++++++------------------
 processor/monero/src/scheduler.rs |  31 ++------
 processor/scanner/src/lib.rs      |   6 ++
 9 files changed, 46 insertions(+), 126 deletions(-)

diff --git a/Cargo.lock b/Cargo.lock
index ec3ccf8b..b3419a85 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -5105,7 +5105,6 @@ dependencies = [
  "hex",
  "modular-frost",
  "monero-address",
- "monero-clsag",
  "monero-rpc",
  "monero-serai",
  "monero-simple-request-rpc",
@@ -8524,6 +8523,7 @@ dependencies = [
  "hex",
  "log",
  "modular-frost",
+ "monero-simple-request-rpc",
  "monero-wallet",
  "parity-scale-codec",
  "rand_core",
diff --git a/processor/bitcoin/src/rpc.rs b/processor/bitcoin/src/rpc.rs
index a6f6e5fd..23db5570 100644
--- a/processor/bitcoin/src/rpc.rs
+++ b/processor/bitcoin/src/rpc.rs
@@ -21,7 +21,9 @@ pub(crate) struct Rpc<D: Db> {
 #[async_trait::async_trait]
 impl<D: Db> ScannerFeed for Rpc<D> {
   const NETWORK: NetworkId = NetworkId::Bitcoin;
+  // 6 confirmations is widely accepted as secure; a reorganization of that depth shouldn't occur
   const CONFIRMATIONS: u64 = 6;
+  // The window length should be roughly an hour
   const WINDOW_LENGTH: u64 = 6;
 
   const TEN_MINUTES: u64 = 1;
diff --git a/processor/key-gen/src/lib.rs b/processor/key-gen/src/lib.rs
index fd847cc5..4db87b20 100644
--- a/processor/key-gen/src/lib.rs
+++ b/processor/key-gen/src/lib.rs
@@ -43,7 +43,11 @@ pub trait KeyGenParams {
   >;
 
   /// Tweaks keys as necessary/beneficial.
-  fn tweak_keys(keys: &mut ThresholdKeys<Self::ExternalNetworkCiphersuite>);
+  ///
+  /// A default implementation which doesn't perform any tweaking is provided.
+  fn tweak_keys(keys: &mut ThresholdKeys<Self::ExternalNetworkCiphersuite>) {
+    let _ = keys;
+  }
 
   /// Encode keys as optimal.
   ///
diff --git a/processor/monero/Cargo.toml b/processor/monero/Cargo.toml
index 5538d025..f70d6187 100644
--- a/processor/monero/Cargo.toml
+++ b/processor/monero/Cargo.toml
@@ -31,6 +31,7 @@ dkg = { path = "../../crypto/dkg", default-features = false, features = ["std",
 frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false }
 
 monero-wallet = { path = "../../networks/monero/wallet", default-features = false, features = ["std", "multisig"] }
+monero-simple-request-rpc = { path = "../../networks/monero/rpc/simple-request", default-features = false }
 
 serai-client = { path = "../../substrate/client", default-features = false, features = ["monero"] }
 
diff --git a/processor/monero/src/key_gen.rs b/processor/monero/src/key_gen.rs
index dee33029..6e30d7bf 100644
--- a/processor/monero/src/key_gen.rs
+++ b/processor/monero/src/key_gen.rs
@@ -1,11 +1,8 @@
-use ciphersuite::{group::GroupEncoding, Ciphersuite, Ed25519};
-use frost::ThresholdKeys;
+use ciphersuite::Ed25519;
 
 pub(crate) struct KeyGenParams;
 impl key_gen::KeyGenParams for KeyGenParams {
   const ID: &'static str = "Monero";
 
   type ExternalNetworkCiphersuite = Ed25519;
-
-  fn tweak_keys(keys: &mut ThresholdKeys<Ed25519>) {}
 }
diff --git a/processor/monero/src/main.rs b/processor/monero/src/main.rs
index 41896de1..eda24b56 100644
--- a/processor/monero/src/main.rs
+++ b/processor/monero/src/main.rs
@@ -11,11 +11,11 @@ use monero_wallet::rpc::Rpc as MRpc;
 mod primitives;
 pub(crate) use crate::primitives::*;
 
-/*
 mod key_gen;
 use crate::key_gen::KeyGenParams;
 mod rpc;
 use rpc::Rpc;
+/*
 mod scheduler;
 use scheduler::Scheduler;
 
diff --git a/processor/monero/src/rpc.rs b/processor/monero/src/rpc.rs
index a6f6e5fd..21a202cc 100644
--- a/processor/monero/src/rpc.rs
+++ b/processor/monero/src/rpc.rs
@@ -1,81 +1,43 @@
-use bitcoin_serai::rpc::{RpcError, Rpc as BRpc};
+use monero_wallet::rpc::{RpcError, Rpc as RpcTrait};
+use monero_simple_request_rpc::SimpleRequestRpc;
 
 use serai_client::primitives::{NetworkId, Coin, Amount};
 
-use serai_db::Db;
-
 use scanner::ScannerFeed;
 use signers::TransactionPublisher;
 
 use crate::{
-  db,
   transaction::Transaction,
   block::{BlockHeader, Block},
 };
 
 #[derive(Clone)]
-pub(crate) struct Rpc<D: Db> {
-  pub(crate) db: D,
-  pub(crate) rpc: BRpc,
+pub(crate) struct Rpc {
+  pub(crate) rpc: SimpleRequestRpc,
 }
 
 #[async_trait::async_trait]
-impl<D: Db> ScannerFeed for Rpc<D> {
-  const NETWORK: NetworkId = NetworkId::Bitcoin;
-  const CONFIRMATIONS: u64 = 6;
-  const WINDOW_LENGTH: u64 = 6;
+impl ScannerFeed for Rpc {
+  const NETWORK: NetworkId = NetworkId::Monero;
+  // Outputs aren't spendable until 10 blocks later due to the 10-block lock
+  // Since we assume scanned outputs are spendable, that sets a minimum confirmation depth of 10
+  // A 10-block reorganization hasn't been observed in years and shouldn't occur
+  const CONFIRMATIONS: u64 = 10;
+  // The window length should be roughly an hour
+  const WINDOW_LENGTH: u64 = 30;
 
-  const TEN_MINUTES: u64 = 1;
+  const TEN_MINUTES: u64 = 5;
 
-  type Block = Block<D>;
+  type Block = Block;
 
   type EphemeralError = RpcError;
 
   async fn latest_finalized_block_number(&self) -> Result<u64, Self::EphemeralError> {
-    db::LatestBlockToYieldAsFinalized::get(&self.db).ok_or(RpcError::ConnectionError)
+    Ok(self.rpc.get_height().await?.checked_sub(1).expect("connected to an invalid Monero RPC").try_into().unwrap())
   }
 
   async fn time_of_block(&self, number: u64) -> Result<u64, Self::EphemeralError> {
-    let number = usize::try_from(number).unwrap();
-
-    /*
-      The block time isn't guaranteed to be monotonic.
It is guaranteed to be greater than the - median time of prior blocks, as detailed in BIP-0113 (a BIP which used that fact to improve - CLTV). This creates a monotonic median time which we use as the block time. - */ - // This implements `GetMedianTimePast` - let median = { - const MEDIAN_TIMESPAN: usize = 11; - let mut timestamps = Vec::with_capacity(MEDIAN_TIMESPAN); - for i in number.saturating_sub(MEDIAN_TIMESPAN) .. number { - timestamps.push(self.rpc.get_block(&self.rpc.get_block_hash(i).await?).await?.header.time); - } - timestamps.sort(); - timestamps[timestamps.len() / 2] - }; - - /* - This block's timestamp is guaranteed to be greater than this median: - https://github.com/bitcoin/bitcoin/blob/0725a374941355349bb4bc8a79dad1affb27d3b9 - /src/validation.cpp#L4182-L4184 - - This does not guarantee the median always increases however. Take the following trivial - example, as the window is initially built: - - 0 block has time 0 // Prior blocks: [] - 1 block has time 1 // Prior blocks: [0] - 2 block has time 2 // Prior blocks: [0, 1] - 3 block has time 2 // Prior blocks: [0, 1, 2] - - These two blocks have the same time (both greater than the median of their prior blocks) and - the same median. - - The median will never decrease however. The values pushed onto the window will always be - greater than the median. If a value greater than the median is popped, the median will remain - the same (due to the counterbalance of the pushed value). If a value less than the median is - popped, the median will increase (either to another instance of the same value, yet one - closer to the end of the repeating sequence, or to a higher value). - */ - Ok(median.into()) + todo!("TODO") } async fn unchecked_block_header_by_number( @@ -83,7 +45,7 @@ impl ScannerFeed for Rpc { number: u64, ) -> Result<::Header, Self::EphemeralError> { Ok(BlockHeader( - self.rpc.get_block(&self.rpc.get_block_hash(number.try_into().unwrap()).await?).await?.header, + self.rpc.get_block_by_number(number.try_into().unwrap()).await? )) } @@ -91,48 +53,13 @@ impl ScannerFeed for Rpc { &self, number: u64, ) -> Result { - Ok(Block( - self.db.clone(), - self.rpc.get_block(&self.rpc.get_block_hash(number.try_into().unwrap()).await?).await?, - )) + todo!("TODO") } fn dust(coin: Coin) -> Amount { - assert_eq!(coin, Coin::Bitcoin); + assert_eq!(coin, Coin::Monero); - /* - A Taproot input is: - - 36 bytes for the OutPoint - - 0 bytes for the script (+1 byte for the length) - - 4 bytes for the sequence - Per https://developer.bitcoin.org/reference/transactions.html#raw-transaction-format - - There's also: - - 1 byte for the witness length - - 1 byte for the signature length - - 64 bytes for the signature - which have the SegWit discount. - - (4 * (36 + 1 + 4)) + (1 + 1 + 64) = 164 + 66 = 230 weight units - 230 ceil div 4 = 57 vbytes - - Bitcoin defines multiple minimum feerate constants *per kilo-vbyte*. Currently, these are: - - 1000 sat/kilo-vbyte for a transaction to be relayed - - Each output's value must exceed the fee of the TX spending it at 3000 sat/kilo-vbyte - The DUST constant needs to be determined by the latter. - Since these are solely relay rules, and may be raised, we require all outputs be spendable - under a 5000 sat/kilo-vbyte fee rate. - - 5000 sat/kilo-vbyte = 5 sat/vbyte - 5 * 57 = 285 sats/spent-output - - Even if an output took 100 bytes (it should be just ~29-43), taking 400 weight units, adding - 100 vbytes, tripling the transaction size, then the sats/tx would be < 1000. 
- - Increase by an order of magnitude, in order to ensure this is actually worth our time, and we - get 10,000 satoshis. This is $5 if 1 BTC = 50,000 USD. - */ - Amount(10_000) + todo!("TODO") } async fn cost_to_aggregate( @@ -147,10 +74,10 @@ impl ScannerFeed for Rpc { } #[async_trait::async_trait] -impl TransactionPublisher for Rpc { +impl TransactionPublisher for Rpc { type EphemeralError = RpcError; async fn publish(&self, tx: Transaction) -> Result<(), Self::EphemeralError> { - self.rpc.send_raw_transaction(&tx.0).await.map(|_| ()) + self.rpc.publish_transaction(&tx.0).await } } diff --git a/processor/monero/src/scheduler.rs b/processor/monero/src/scheduler.rs index 6e49d23d..25f17c64 100644 --- a/processor/monero/src/scheduler.rs +++ b/processor/monero/src/scheduler.rs @@ -14,7 +14,6 @@ use serai_db::Db; use primitives::{OutputType, ReceivedOutput, Payment}; use scanner::{KeyFor, AddressFor, OutputFor, BlockFor}; use utxo_scheduler::{PlannedTransaction, TransactionPlanner}; -use transaction_chaining_scheduler::{EffectedReceivedOutputs, Scheduler as GenericScheduler}; use crate::{ scan::{offsets_for_key, scanner}, @@ -40,11 +39,11 @@ fn signable_transaction( ) -> Result<(SignableTransaction, BSignableTransaction), TransactionError> { assert!( inputs.len() < - , EffectedReceivedOutputs>>>::MAX_INPUTS + , ()>>::MAX_INPUTS ); assert!( (payments.len() + usize::from(u8::from(change.is_some()))) < - , EffectedReceivedOutputs>>>::MAX_OUTPUTS + , ()>>::MAX_OUTPUTS ); let inputs = inputs.into_iter().map(|input| input.output).collect::>(); @@ -73,7 +72,7 @@ fn signable_transaction( )); let change = change - .map(, EffectedReceivedOutputs>>>::change_address); + .map(, ()>>::change_address); BSignableTransaction::new( inputs.clone(), @@ -90,7 +89,7 @@ fn signable_transaction( } pub(crate) struct Planner; -impl TransactionPlanner, EffectedReceivedOutputs>> for Planner { +impl TransactionPlanner for Planner { type FeeRate = u64; type SignableTransaction = SignableTransaction; @@ -157,7 +156,7 @@ impl TransactionPlanner, EffectedReceivedOutputs>> for Plan inputs: Vec>>, payments: Vec>>>, change: Option>>, - ) -> PlannedTransaction, Self::SignableTransaction, EffectedReceivedOutputs>> { + ) -> PlannedTransaction, Self::SignableTransaction, ()> { let key = inputs.first().unwrap().key(); for input in &inputs { assert_eq!(key, input.key()); @@ -168,23 +167,7 @@ impl TransactionPlanner, EffectedReceivedOutputs>> for Plan Ok(tx) => PlannedTransaction { signable: tx.0, eventuality: Eventuality { txid: tx.1.txid(), singular_spent_output }, - auxilliary: EffectedReceivedOutputs({ - let tx = tx.1.transaction(); - let scanner = scanner(key); - - let mut res = vec![]; - for output in scanner.scan_transaction(tx) { - res.push(Output::new_with_presumed_origin( - key, - tx, - // It shouldn't matter if this is wrong as we should never try to return these - // We still provide an accurate value to ensure a lack of discrepancies - Some(Address::new(inputs[0].output.output().script_pubkey.clone()).unwrap()), - output, - )); - } - res - }), + auxilliary: (), }, Err( TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment, @@ -202,4 +185,4 @@ impl TransactionPlanner, EffectedReceivedOutputs>> for Plan } } -pub(crate) type Scheduler = GenericScheduler, Planner>; +pub(crate) type Scheduler = utxo_standard_scheduler::Scheduler; diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index ebd783bf..d100815d 100644 --- a/processor/scanner/src/lib.rs +++ 
b/processor/scanner/src/lib.rs @@ -67,6 +67,12 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { /// The amount of confirmations a block must have to be considered finalized. /// /// This value must be at least `1`. + // This is distinct from `WINDOW_LENGTH` as it's only used for determining the lifetime of the + // key. The key switches to various stages of its lifetime depending on when user transactions + // will hit the Serai network (relative to the time they're made) and when outputs created by + // Serai become available again. If we set a long WINDOW_LENGTH, say two hours, that doesn't mean + // we expect user transactions made within a few minutes of a new key being declared to only + // appear in finalized blocks two hours later. const CONFIRMATIONS: u64; /// The amount of blocks to process in parallel. From e78236276aee0cc10dacbe473c07715ebc2b3494 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 13 Sep 2024 01:14:47 -0400 Subject: [PATCH 116/368] Remove async-trait from processor/ Part of https://github.com/serai-dex/issues/607. --- processor/bin/Cargo.toml | 1 - processor/bin/src/coordinator.rs | 84 ++-- processor/bitcoin/Cargo.toml | 1 - processor/bitcoin/src/main.rs | 2 - processor/bitcoin/src/primitives/block.rs | 1 - processor/bitcoin/src/rpc.rs | 155 +++--- processor/bitcoin/src/txindex.rs | 144 +++--- processor/ethereum/Cargo.toml | 2 - processor/monero/Cargo.toml | 1 - processor/monero/src/primitives/block.rs | 1 - processor/monero/src/rpc.rs | 67 ++- processor/primitives/Cargo.toml | 2 - processor/primitives/src/block.rs | 1 - processor/primitives/src/task.rs | 101 ++-- processor/scanner/Cargo.toml | 3 - processor/scanner/src/eventuality/mod.rs | 586 +++++++++++----------- processor/scanner/src/index/mod.rs | 98 ++-- processor/scanner/src/lib.rs | 32 +- processor/scanner/src/report/mod.rs | 186 +++---- processor/scanner/src/scan/mod.rs | 475 +++++++++--------- processor/scanner/src/substrate/mod.rs | 184 +++---- processor/signers/Cargo.toml | 3 +- processor/signers/src/batch/mod.rs | 192 +++---- processor/signers/src/coordinator/mod.rs | 272 +++++----- processor/signers/src/cosign/mod.rs | 117 ++--- processor/signers/src/lib.rs | 27 +- processor/signers/src/slash_report.rs | 114 ++--- processor/signers/src/transaction/mod.rs | 5 +- processor/src/tests/scanner.rs | 2 +- 29 files changed, 1481 insertions(+), 1378 deletions(-) diff --git a/processor/bin/Cargo.toml b/processor/bin/Cargo.toml index 01a774ac..f6da8b7c 100644 --- a/processor/bin/Cargo.toml +++ b/processor/bin/Cargo.toml @@ -17,7 +17,6 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -async-trait = { version = "0.1", default-features = false } zeroize = { version = "1", default-features = false, features = ["std"] } hex = { version = "0.4", default-features = false, features = ["std"] } diff --git a/processor/bin/src/coordinator.rs b/processor/bin/src/coordinator.rs index ead4a131..6fe5aea0 100644 --- a/processor/bin/src/coordinator.rs +++ b/processor/bin/src/coordinator.rs @@ -1,3 +1,4 @@ +use core::future::Future; use std::sync::{LazyLock, Arc, Mutex}; use tokio::sync::mpsc; @@ -169,59 +170,74 @@ impl Coordinator { } } -#[async_trait::async_trait] impl signers::Coordinator for CoordinatorSend { type EphemeralError = (); - async fn send( + fn send( &mut self, msg: messages::sign::ProcessorMessage, - ) -> Result<(), Self::EphemeralError> { - self.send(&messages::ProcessorMessage::Sign(msg)); - Ok(()) + ) -> impl Send + Future> { + async move { + 
self.send(&messages::ProcessorMessage::Sign(msg)); + Ok(()) + } } - async fn publish_cosign( + fn publish_cosign( &mut self, block_number: u64, block: [u8; 32], signature: Signature, - ) -> Result<(), Self::EphemeralError> { - self.send(&messages::ProcessorMessage::Coordinator( - messages::coordinator::ProcessorMessage::CosignedBlock { - block_number, - block, - signature: signature.encode(), - }, - )); - Ok(()) + ) -> impl Send + Future> { + async move { + self.send(&messages::ProcessorMessage::Coordinator( + messages::coordinator::ProcessorMessage::CosignedBlock { + block_number, + block, + signature: signature.encode(), + }, + )); + Ok(()) + } } - async fn publish_batch(&mut self, batch: Batch) -> Result<(), Self::EphemeralError> { - self.send(&messages::ProcessorMessage::Substrate( - messages::substrate::ProcessorMessage::Batch { batch }, - )); - Ok(()) + fn publish_batch( + &mut self, + batch: Batch, + ) -> impl Send + Future> { + async move { + self.send(&messages::ProcessorMessage::Substrate( + messages::substrate::ProcessorMessage::Batch { batch }, + )); + Ok(()) + } } - async fn publish_signed_batch(&mut self, batch: SignedBatch) -> Result<(), Self::EphemeralError> { - self.send(&messages::ProcessorMessage::Coordinator( - messages::coordinator::ProcessorMessage::SignedBatch { batch }, - )); - Ok(()) + fn publish_signed_batch( + &mut self, + batch: SignedBatch, + ) -> impl Send + Future> { + async move { + self.send(&messages::ProcessorMessage::Coordinator( + messages::coordinator::ProcessorMessage::SignedBatch { batch }, + )); + Ok(()) + } } - async fn publish_slash_report_signature( + fn publish_slash_report_signature( &mut self, session: Session, signature: Signature, - ) -> Result<(), Self::EphemeralError> { - self.send(&messages::ProcessorMessage::Coordinator( - messages::coordinator::ProcessorMessage::SignedSlashReport { - session, - signature: signature.encode(), - }, - )); - Ok(()) + ) -> impl Send + Future> { + async move { + self.send(&messages::ProcessorMessage::Coordinator( + messages::coordinator::ProcessorMessage::SignedSlashReport { + session, + signature: signature.encode(), + }, + )); + Ok(()) + } } } diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml index 2d4958c7..52cca1ae 100644 --- a/processor/bitcoin/Cargo.toml +++ b/processor/bitcoin/Cargo.toml @@ -17,7 +17,6 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -async-trait = { version = "0.1", default-features = false } rand_core = { version = "0.6", default-features = false } hex = { version = "0.4", default-features = false, features = ["std"] } diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index 74e174ee..56bfd619 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -96,7 +96,6 @@ use serai_client::{ */ /* -#[async_trait] impl TransactionTrait for Transaction { #[cfg(test)] async fn fee(&self, network: &Bitcoin) -> u64 { @@ -210,7 +209,6 @@ impl Bitcoin { } } -#[async_trait] impl Network for Bitcoin { // 2 inputs should be 2 * 230 = 460 weight units // The output should be ~36 bytes, or 144 weight units diff --git a/processor/bitcoin/src/primitives/block.rs b/processor/bitcoin/src/primitives/block.rs index 8221c8b5..e3df7e69 100644 --- a/processor/bitcoin/src/primitives/block.rs +++ b/processor/bitcoin/src/primitives/block.rs @@ -31,7 +31,6 @@ impl fmt::Debug for Block { } } -#[async_trait::async_trait] impl primitives::Block for Block { type Header = BlockHeader; diff --git a/processor/bitcoin/src/rpc.rs 
b/processor/bitcoin/src/rpc.rs index 23db5570..acd3be85 100644 --- a/processor/bitcoin/src/rpc.rs +++ b/processor/bitcoin/src/rpc.rs @@ -1,3 +1,5 @@ +use core::future::Future; + use bitcoin_serai::rpc::{RpcError, Rpc as BRpc}; use serai_client::primitives::{NetworkId, Coin, Amount}; @@ -18,7 +20,6 @@ pub(crate) struct Rpc { pub(crate) rpc: BRpc, } -#[async_trait::async_trait] impl ScannerFeed for Rpc { const NETWORK: NetworkId = NetworkId::Bitcoin; // 6 confirmations is widely accepted as secure and shouldn't occur @@ -32,71 +33,89 @@ impl ScannerFeed for Rpc { type EphemeralError = RpcError; - async fn latest_finalized_block_number(&self) -> Result { - db::LatestBlockToYieldAsFinalized::get(&self.db).ok_or(RpcError::ConnectionError) + fn latest_finalized_block_number( + &self, + ) -> impl Send + Future> { + async move { db::LatestBlockToYieldAsFinalized::get(&self.db).ok_or(RpcError::ConnectionError) } } - async fn time_of_block(&self, number: u64) -> Result { - let number = usize::try_from(number).unwrap(); - - /* - The block time isn't guaranteed to be monotonic. It is guaranteed to be greater than the - median time of prior blocks, as detailed in BIP-0113 (a BIP which used that fact to improve - CLTV). This creates a monotonic median time which we use as the block time. - */ - // This implements `GetMedianTimePast` - let median = { - const MEDIAN_TIMESPAN: usize = 11; - let mut timestamps = Vec::with_capacity(MEDIAN_TIMESPAN); - for i in number.saturating_sub(MEDIAN_TIMESPAN) .. number { - timestamps.push(self.rpc.get_block(&self.rpc.get_block_hash(i).await?).await?.header.time); - } - timestamps.sort(); - timestamps[timestamps.len() / 2] - }; - - /* - This block's timestamp is guaranteed to be greater than this median: - https://github.com/bitcoin/bitcoin/blob/0725a374941355349bb4bc8a79dad1affb27d3b9 - /src/validation.cpp#L4182-L4184 - - This does not guarantee the median always increases however. Take the following trivial - example, as the window is initially built: - - 0 block has time 0 // Prior blocks: [] - 1 block has time 1 // Prior blocks: [0] - 2 block has time 2 // Prior blocks: [0, 1] - 3 block has time 2 // Prior blocks: [0, 1, 2] - - These two blocks have the same time (both greater than the median of their prior blocks) and - the same median. - - The median will never decrease however. The values pushed onto the window will always be - greater than the median. If a value greater than the median is popped, the median will remain - the same (due to the counterbalance of the pushed value). If a value less than the median is - popped, the median will increase (either to another instance of the same value, yet one - closer to the end of the repeating sequence, or to a higher value). - */ - Ok(median.into()) - } - - async fn unchecked_block_header_by_number( + fn time_of_block( &self, number: u64, - ) -> Result<::Header, Self::EphemeralError> { - Ok(BlockHeader( - self.rpc.get_block(&self.rpc.get_block_hash(number.try_into().unwrap()).await?).await?.header, - )) + ) -> impl Send + Future> { + async move { + let number = usize::try_from(number).unwrap(); + + /* + The block time isn't guaranteed to be monotonic. It is guaranteed to be greater than the + median time of prior blocks, as detailed in BIP-0113 (a BIP which used that fact to improve + CLTV). This creates a monotonic median time which we use as the block time. 
+ */ + // This implements `GetMedianTimePast` + let median = { + const MEDIAN_TIMESPAN: usize = 11; + let mut timestamps = Vec::with_capacity(MEDIAN_TIMESPAN); + for i in number.saturating_sub(MEDIAN_TIMESPAN) .. number { + timestamps + .push(self.rpc.get_block(&self.rpc.get_block_hash(i).await?).await?.header.time); + } + timestamps.sort(); + timestamps[timestamps.len() / 2] + }; + + /* + This block's timestamp is guaranteed to be greater than this median: + https://github.com/bitcoin/bitcoin/blob/0725a374941355349bb4bc8a79dad1affb27d3b9 + /src/validation.cpp#L4182-L4184 + + This does not guarantee the median always increases however. Take the following trivial + example, as the window is initially built: + + 0 block has time 0 // Prior blocks: [] + 1 block has time 1 // Prior blocks: [0] + 2 block has time 2 // Prior blocks: [0, 1] + 3 block has time 2 // Prior blocks: [0, 1, 2] + + These two blocks have the same time (both greater than the median of their prior blocks) and + the same median. + + The median will never decrease however. The values pushed onto the window will always be + greater than the median. If a value greater than the median is popped, the median will + remain the same (due to the counterbalance of the pushed value). If a value less than the + median is popped, the median will increase (either to another instance of the same value, + yet one closer to the end of the repeating sequence, or to a higher value). + */ + Ok(median.into()) + } } - async fn unchecked_block_by_number( + fn unchecked_block_header_by_number( &self, number: u64, - ) -> Result { - Ok(Block( - self.db.clone(), - self.rpc.get_block(&self.rpc.get_block_hash(number.try_into().unwrap()).await?).await?, - )) + ) -> impl Send + + Future::Header, Self::EphemeralError>> + { + async move { + Ok(BlockHeader( + self + .rpc + .get_block(&self.rpc.get_block_hash(number.try_into().unwrap()).await?) + .await? + .header, + )) + } + } + + fn unchecked_block_by_number( + &self, + number: u64, + ) -> impl Send + Future> { + async move { + Ok(Block( + self.db.clone(), + self.rpc.get_block(&self.rpc.get_block_hash(number.try_into().unwrap()).await?).await?, + )) + } } fn dust(coin: Coin) -> Amount { @@ -137,22 +156,26 @@ impl ScannerFeed for Rpc { Amount(10_000) } - async fn cost_to_aggregate( + fn cost_to_aggregate( &self, coin: Coin, _reference_block: &Self::Block, - ) -> Result { - assert_eq!(coin, Coin::Bitcoin); - // TODO - Ok(Amount(0)) + ) -> impl Send + Future> { + async move { + assert_eq!(coin, Coin::Bitcoin); + // TODO + Ok(Amount(0)) + } } } -#[async_trait::async_trait] impl TransactionPublisher for Rpc { type EphemeralError = RpcError; - async fn publish(&self, tx: Transaction) -> Result<(), Self::EphemeralError> { - self.rpc.send_raw_transaction(&tx.0).await.map(|_| ()) + fn publish( + &self, + tx: Transaction, + ) -> impl Send + Future> { + async move { self.rpc.send_raw_transaction(&tx.0).await.map(|_| ()) } } } diff --git a/processor/bitcoin/src/txindex.rs b/processor/bitcoin/src/txindex.rs index 4ed38973..6a55a4c4 100644 --- a/processor/bitcoin/src/txindex.rs +++ b/processor/bitcoin/src/txindex.rs @@ -1,18 +1,4 @@ -/* - We want to be able to return received outputs. We do that by iterating over the inputs to find an - address format we recognize, then setting that address as the address to return to. - - Since inputs only contain the script signatures, yet addresses are for script public keys, we - need to pull up the output spent by an input and read the script public key from that. 
While we - could use `txindex=1`, and an asynchronous call to the Bitcoin node, we: - - 1) Can maintain a much smaller index ourselves - 2) Don't want the asynchronous call (which would require the flow be async, allowed to - potentially error, and more latent) - 3) Don't want to risk Bitcoin's `txindex` corruptions (frequently observed on testnet) - - This task builds that index. -*/ +use core::future::Future; use bitcoin_serai::bitcoin::ScriptBuf; @@ -35,72 +21,88 @@ pub(crate) fn script_pubkey_for_on_chain_output( ) } +/* + We want to be able to return received outputs. We do that by iterating over the inputs to find an + address format we recognize, then setting that address as the address to return to. + + Since inputs only contain the script signatures, yet addresses are for script public keys, we + need to pull up the output spent by an input and read the script public key from that. While we + could use `txindex=1`, and an asynchronous call to the Bitcoin node, we: + + 1) Can maintain a much smaller index ourselves + 2) Don't want the asynchronous call (which would require the flow be async, allowed to + potentially error, and more latent) + 3) Don't want to risk Bitcoin's `txindex` corruptions (frequently observed on testnet) + + This task builds that index. +*/ pub(crate) struct TxIndexTask(pub(crate) Rpc); -#[async_trait::async_trait] impl ContinuallyRan for TxIndexTask { - async fn run_iteration(&mut self) -> Result { - let latest_block_number = self - .0 - .rpc - .get_latest_block_number() - .await - .map_err(|e| format!("couldn't fetch latest block number: {e:?}"))?; - let latest_block_number = u64::try_from(latest_block_number).unwrap(); - // `CONFIRMATIONS - 1` as any on-chain block inherently has one confirmation (itself) - let finalized_block_number = - latest_block_number.checked_sub(Rpc::::CONFIRMATIONS - 1).ok_or(format!( - "blockchain only just started and doesn't have {} blocks yet", - Rpc::::CONFIRMATIONS - ))?; - - /* - `finalized_block_number` is the latest block number minus confirmations. The blockchain may - undetectably re-organize though, as while the scanner will maintain an index of finalized - blocks and panics on reorganization, this runs prior to the scanner and that index. - - A reorganization of `CONFIRMATIONS` blocks is still an invariant. Even if that occurs, this - saves the script public keys *by the transaction hash an output index*. Accordingly, it isn't - invalidated on reorganization. The only risk would be if the new chain reorganized to - include a transaction to Serai which we didn't index the parents of. If that happens, we'll - panic when we scan the transaction, causing the invariant to be detected. 
- */ - - let finalized_block_number_in_db = db::LatestBlockToYieldAsFinalized::get(&self.0.db); - let next_block = finalized_block_number_in_db.map_or(0, |block| block + 1); - - let mut iterated = false; - for b in next_block ..= finalized_block_number { - iterated = true; - - // Fetch the block - let block_hash = self + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let latest_block_number = self .0 .rpc - .get_block_hash(b.try_into().unwrap()) + .get_latest_block_number() .await - .map_err(|e| format!("couldn't fetch block hash for block {b}: {e:?}"))?; - let block = self - .0 - .rpc - .get_block(&block_hash) - .await - .map_err(|e| format!("couldn't fetch block {b}: {e:?}"))?; + .map_err(|e| format!("couldn't fetch latest block number: {e:?}"))?; + let latest_block_number = u64::try_from(latest_block_number).unwrap(); + // `CONFIRMATIONS - 1` as any on-chain block inherently has one confirmation (itself) + let finalized_block_number = + latest_block_number.checked_sub(Rpc::::CONFIRMATIONS - 1).ok_or(format!( + "blockchain only just started and doesn't have {} blocks yet", + Rpc::::CONFIRMATIONS + ))?; - let mut txn = self.0.db.txn(); + /* + `finalized_block_number` is the latest block number minus confirmations. The blockchain may + undetectably re-organize though, as while the scanner will maintain an index of finalized + blocks and panics on reorganization, this runs prior to the scanner and that index. - for tx in &block.txdata { - let txid = hash_bytes(tx.compute_txid().to_raw_hash()); - for (o, output) in tx.output.iter().enumerate() { - let o = u32::try_from(o).unwrap(); - // Set the script public key for this transaction - db::ScriptPubKey::set(&mut txn, txid, o, &output.script_pubkey.clone().into_bytes()); + A reorganization of `CONFIRMATIONS` blocks is still an invariant. Even if that occurs, this + saves the script public keys *by the transaction hash an output index*. Accordingly, it + isn't invalidated on reorganization. The only risk would be if the new chain reorganized to + include a transaction to Serai which we didn't index the parents of. If that happens, we'll + panic when we scan the transaction, causing the invariant to be detected. 
+ */ + + let finalized_block_number_in_db = db::LatestBlockToYieldAsFinalized::get(&self.0.db); + let next_block = finalized_block_number_in_db.map_or(0, |block| block + 1); + + let mut iterated = false; + for b in next_block ..= finalized_block_number { + iterated = true; + + // Fetch the block + let block_hash = self + .0 + .rpc + .get_block_hash(b.try_into().unwrap()) + .await + .map_err(|e| format!("couldn't fetch block hash for block {b}: {e:?}"))?; + let block = self + .0 + .rpc + .get_block(&block_hash) + .await + .map_err(|e| format!("couldn't fetch block {b}: {e:?}"))?; + + let mut txn = self.0.db.txn(); + + for tx in &block.txdata { + let txid = hash_bytes(tx.compute_txid().to_raw_hash()); + for (o, output) in tx.output.iter().enumerate() { + let o = u32::try_from(o).unwrap(); + // Set the script public key for this transaction + db::ScriptPubKey::set(&mut txn, txid, o, &output.script_pubkey.clone().into_bytes()); + } } - } - db::LatestBlockToYieldAsFinalized::set(&mut txn, &b); - txn.commit(); + db::LatestBlockToYieldAsFinalized::set(&mut txn, &b); + txn.commit(); + } + Ok(iterated) } - Ok(iterated) } } diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml index eff47af9..ea65d570 100644 --- a/processor/ethereum/Cargo.toml +++ b/processor/ethereum/Cargo.toml @@ -17,8 +17,6 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -async-trait = { version = "0.1", default-features = false } - const-hex = { version = "1", default-features = false } hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } diff --git a/processor/monero/Cargo.toml b/processor/monero/Cargo.toml index f70d6187..22137b2d 100644 --- a/processor/monero/Cargo.toml +++ b/processor/monero/Cargo.toml @@ -17,7 +17,6 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -async-trait = { version = "0.1", default-features = false } rand_core = { version = "0.6", default-features = false } hex = { version = "0.4", default-features = false, features = ["std"] } diff --git a/processor/monero/src/primitives/block.rs b/processor/monero/src/primitives/block.rs index 40d0f296..ad28b0c1 100644 --- a/processor/monero/src/primitives/block.rs +++ b/processor/monero/src/primitives/block.rs @@ -24,7 +24,6 @@ impl primitives::BlockHeader for BlockHeader { #[derive(Clone, Debug)] pub(crate) struct Block(pub(crate) MBlock, Vec); -#[async_trait::async_trait] impl primitives::Block for Block { type Header = BlockHeader; diff --git a/processor/monero/src/rpc.rs b/processor/monero/src/rpc.rs index 21a202cc..0e0739b8 100644 --- a/processor/monero/src/rpc.rs +++ b/processor/monero/src/rpc.rs @@ -1,3 +1,5 @@ +use core::future::Future; + use monero_wallet::rpc::{RpcError, Rpc as RpcTrait}; use monero_simple_request_rpc::SimpleRequestRpc; @@ -16,7 +18,6 @@ pub(crate) struct Rpc { pub(crate) rpc: SimpleRequestRpc, } -#[async_trait::async_trait] impl ScannerFeed for Rpc { const NETWORK: NetworkId = NetworkId::Monero; // Outputs aren't spendable until 10 blocks later due to the 10-block lock @@ -32,28 +33,44 @@ impl ScannerFeed for Rpc { type EphemeralError = RpcError; - async fn latest_finalized_block_number(&self) -> Result { - Ok(self.rpc.get_height().await?.checked_sub(1).expect("connected to an invalid Monero RPC").try_into().unwrap()) + fn latest_finalized_block_number( + &self, + ) -> impl Send + Future> { + async move { + Ok( + self + .rpc + .get_height() + .await? 
+ .checked_sub(1) + .expect("connected to an invalid Monero RPC") + .try_into() + .unwrap(), + ) + } } - async fn time_of_block(&self, number: u64) -> Result { - todo!("TODO") - } - - async fn unchecked_block_header_by_number( + fn time_of_block( &self, number: u64, - ) -> Result<::Header, Self::EphemeralError> { - Ok(BlockHeader( - self.rpc.get_block_by_number(number.try_into().unwrap()).await? - )) + ) -> impl Send + Future> { + async move{todo!("TODO")} } - async fn unchecked_block_by_number( + fn unchecked_block_header_by_number( &self, number: u64, - ) -> Result { - todo!("TODO") + ) -> impl Send + + Future::Header, Self::EphemeralError>> + { + async move { Ok(BlockHeader(self.rpc.get_block_by_number(number.try_into().unwrap()).await?)) } + } + + fn unchecked_block_by_number( + &self, + number: u64, + ) -> impl Send + Future> { + async move { todo!("TODO") } } fn dust(coin: Coin) -> Amount { @@ -62,22 +79,26 @@ impl ScannerFeed for Rpc { todo!("TODO") } - async fn cost_to_aggregate( + fn cost_to_aggregate( &self, coin: Coin, _reference_block: &Self::Block, - ) -> Result { - assert_eq!(coin, Coin::Bitcoin); - // TODO - Ok(Amount(0)) + ) -> impl Send + Future> { + async move { + assert_eq!(coin, Coin::Bitcoin); + // TODO + Ok(Amount(0)) + } } } -#[async_trait::async_trait] impl TransactionPublisher for Rpc { type EphemeralError = RpcError; - async fn publish(&self, tx: Transaction) -> Result<(), Self::EphemeralError> { - self.rpc.publish_transaction(&tx.0).await + fn publish( + &self, + tx: Transaction, + ) -> impl Send + Future> { + async move { self.rpc.publish_transaction(&tx.0).await } } } diff --git a/processor/primitives/Cargo.toml b/processor/primitives/Cargo.toml index dd1b74ea..6dd3082b 100644 --- a/processor/primitives/Cargo.toml +++ b/processor/primitives/Cargo.toml @@ -17,8 +17,6 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -async-trait = { version = "0.1", default-features = false } - group = { version = "0.13", default-features = false } serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } diff --git a/processor/primitives/src/block.rs b/processor/primitives/src/block.rs index 4f721d02..da481247 100644 --- a/processor/primitives/src/block.rs +++ b/processor/primitives/src/block.rs @@ -22,7 +22,6 @@ pub trait BlockHeader: Send + Sync + Sized + Clone + Debug { /// necessary to literally define it as whatever the external network defines as a block. For /// external networks which finalize block(s), this block type should be a representation of all /// transactions within a period finalization (whether block or epoch). -#[async_trait::async_trait] pub trait Block: Send + Sync + Sized + Clone + Debug { /// The type used for this block's header. type Header: BlockHeader; diff --git a/processor/primitives/src/task.rs b/processor/primitives/src/task.rs index a40fb9ff..e8efc64c 100644 --- a/processor/primitives/src/task.rs +++ b/processor/primitives/src/task.rs @@ -1,4 +1,4 @@ -use core::time::Duration; +use core::{future::Future, time::Duration}; use std::sync::Arc; use tokio::sync::{mpsc, oneshot, Mutex}; @@ -78,8 +78,7 @@ impl TaskHandle { } /// A task to be continually ran. -#[async_trait::async_trait] -pub trait ContinuallyRan: Sized { +pub trait ContinuallyRan: Sized + Send { /// The amount of seconds before this task should be polled again. const DELAY_BETWEEN_ITERATIONS: u64 = 5; /// The maximum amount of seconds before this task should be run again. 
@@ -91,60 +90,66 @@ pub trait ContinuallyRan: Sized {
   ///
   /// If this returns `true`, all dependents of the task will immediately have a new iteration ran
   /// (without waiting for whatever timer they were already on).
-  async fn run_iteration(&mut self) -> Result<bool, String>;
+  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>>;
 
   /// Continually run the task.
-  async fn continually_run(mut self, mut task: Task, dependents: Vec<TaskHandle>) {
-    // The default number of seconds to sleep before running the task again
-    let default_sleep_before_next_task = Self::DELAY_BETWEEN_ITERATIONS;
-    // The current number of seconds to sleep before running the task again
-    // We increment this upon errors in order to not flood the logs with errors
-    let mut current_sleep_before_next_task = default_sleep_before_next_task;
-    let increase_sleep_before_next_task = |current_sleep_before_next_task: &mut u64| {
-      let new_sleep = *current_sleep_before_next_task + default_sleep_before_next_task;
-      // Set a limit of sleeping for two minutes
-      *current_sleep_before_next_task = new_sleep.min(Self::MAX_DELAY_BETWEEN_ITERATIONS);
-    };
+  fn continually_run(
+    mut self,
+    mut task: Task,
+    dependents: Vec<TaskHandle>,
+  ) -> impl Send + Future<Output = ()> {
+    async move {
+      // The default number of seconds to sleep before running the task again
+      let default_sleep_before_next_task = Self::DELAY_BETWEEN_ITERATIONS;
+      // The current number of seconds to sleep before running the task again
+      // We increment this upon errors in order to not flood the logs with errors
+      let mut current_sleep_before_next_task = default_sleep_before_next_task;
+      let increase_sleep_before_next_task = |current_sleep_before_next_task: &mut u64| {
+        let new_sleep = *current_sleep_before_next_task + default_sleep_before_next_task;
+        // Set a limit of sleeping for two minutes
+        *current_sleep_before_next_task = new_sleep.min(Self::MAX_DELAY_BETWEEN_ITERATIONS);
+      };
 
-    loop {
-      // If we were told to close/all handles were dropped, drop it
-      {
-        let should_close = task.close.try_recv();
-        match should_close {
-          Ok(()) | Err(mpsc::error::TryRecvError::Disconnected) => break,
-          Err(mpsc::error::TryRecvError::Empty) => {}
+      loop {
+        // If we were told to close/all handles were dropped, drop it
+        {
+          let should_close = task.close.try_recv();
+          match should_close {
+            Ok(()) | Err(mpsc::error::TryRecvError::Disconnected) => break,
+            Err(mpsc::error::TryRecvError::Empty) => {}
+          }
         }
-      }
 
-      match self.run_iteration().await {
-        Ok(run_dependents) => {
-          // Upon a successful (error-free) loop iteration, reset the amount of time we sleep
-          current_sleep_before_next_task = default_sleep_before_next_task;
+        match self.run_iteration().await {
+          Ok(run_dependents) => {
+            // Upon a successful (error-free) loop iteration, reset the amount of time we sleep
+            current_sleep_before_next_task = default_sleep_before_next_task;
 
-          if run_dependents {
-            for dependent in &dependents {
-              dependent.run_now();
+            if run_dependents {
+              for dependent in &dependents {
+                dependent.run_now();
+              }
             }
           }
-        }
-        Err(e) => {
-          log::warn!("{}", e);
-          increase_sleep_before_next_task(&mut current_sleep_before_next_task);
-        }
-      }
-
-      // Don't run the task again for another few seconds UNLESS told to run now
-      tokio::select!
{ - () = tokio::time::sleep(Duration::from_secs(current_sleep_before_next_task)) => {}, - msg = task.run_now.recv() => { - // Check if this is firing because the handle was dropped - if msg.is_none() { - break; + Err(e) => { + log::warn!("{}", e); + increase_sleep_before_next_task(&mut current_sleep_before_next_task); } - }, - } - } + } - task.closed.send(()).unwrap(); + // Don't run the task again for another few seconds UNLESS told to run now + tokio::select! { + () = tokio::time::sleep(Duration::from_secs(current_sleep_before_next_task)) => {}, + msg = task.run_now.recv() => { + // Check if this is firing because the handle was dropped + if msg.is_none() { + break; + } + }, + } + } + + task.closed.send(()).unwrap(); + } } } diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index e3e08329..1ff154cd 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -17,9 +17,6 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -# Macros -async-trait = { version = "0.1", default-features = false } - # Encoders hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 5d139c6d..46a5e13b 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -1,4 +1,4 @@ -use core::marker::PhantomData; +use core::{marker::PhantomData, future::Future}; use std::collections::{HashSet, HashMap}; use group::GroupEncoding; @@ -185,317 +185,323 @@ impl> EventualityTask { } } -#[async_trait::async_trait] impl> ContinuallyRan for EventualityTask { - async fn run_iteration(&mut self) -> Result { - // Fetch the highest acknowledged block - let Some(highest_acknowledged) = ScannerGlobalDb::::highest_acknowledged_block(&self.db) - else { - // If we've never acknowledged a block, return - return Ok(false); - }; - - // A boolean of if we've made any progress to return at the end of the function - let mut made_progress = false; - - // Start by intaking any Burns we have sitting around - // It's important we run this regardless of if we have a new block to handle - made_progress |= self.intake_burns().await?; - - /* - Eventualities increase upon one of two cases: - - 1) We're fulfilling Burns - 2) We acknowledged a block - - We can't know the processor has intaked all Burns it should have when we process block `b`. - We solve this by executing a consensus protocol whenever a resolution for an Eventuality - created to fulfill Burns occurs. Accordingly, we force ourselves to obtain synchrony on such - blocks (and all preceding Burns). - - This means we can only iterate up to the block currently pending acknowledgement. - - We only know blocks will need acknowledgement *for sure* if they were scanned. The only other - causes are key activation and retirement (both scheduled outside the scan window). This makes - the exclusive upper bound the *next block to scan*. 
- */ - let exclusive_upper_bound = { - // Fetch the next to scan block - let next_to_scan = next_to_scan_for_outputs_block::(&self.db) - .expect("EventualityTask run before writing the start block"); - // If we haven't done any work, return - if next_to_scan == 0 { + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + // Fetch the highest acknowledged block + let Some(highest_acknowledged) = ScannerGlobalDb::::highest_acknowledged_block(&self.db) + else { + // If we've never acknowledged a block, return return Ok(false); - } - next_to_scan - }; + }; - // Fetch the next block to check - let next_to_check = EventualityDb::::next_to_check_for_eventualities_block(&self.db) - .expect("EventualityTask run before writing the start block"); + // A boolean of if we've made any progress to return at the end of the function + let mut made_progress = false; - // Check all blocks - for b in next_to_check .. exclusive_upper_bound { - let is_block_notable = ScannerGlobalDb::::is_block_notable(&self.db, b); - if is_block_notable { - /* - If this block is notable *and* not acknowledged, break. + // Start by intaking any Burns we have sitting around + // It's important we run this regardless of if we have a new block to handle + made_progress |= self.intake_burns().await?; - This is so if Burns queued prior to this block's acknowledgement caused any Eventualities - (which may resolve this block), we have them. If it wasn't for that, it'd be so if this - block's acknowledgement caused any Eventualities, we have them, though those would only - potentially resolve in the next block (letting us scan this block without delay). - */ - if b > highest_acknowledged { - break; + /* + Eventualities increase upon one of two cases: + + 1) We're fulfilling Burns + 2) We acknowledged a block + + We can't know the processor has intaked all Burns it should have when we process block `b`. + We solve this by executing a consensus protocol whenever a resolution for an Eventuality + created to fulfill Burns occurs. Accordingly, we force ourselves to obtain synchrony on + such blocks (and all preceding Burns). + + This means we can only iterate up to the block currently pending acknowledgement. + + We only know blocks will need acknowledgement *for sure* if they were scanned. The only + other causes are key activation and retirement (both scheduled outside the scan window). + This makes the exclusive upper bound the *next block to scan*. 
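+ (For example: if the next block to scan is 100, this iteration may check Eventualities in blocks through 99 at most, stopping earlier at the first notable block past the highest acknowledged one.)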
+ */ + let exclusive_upper_bound = { + // Fetch the next to scan block + let next_to_scan = next_to_scan_for_outputs_block::(&self.db) + .expect("EventualityTask run before writing the start block"); + // If we haven't done any work, return + if next_to_scan == 0 { + return Ok(false); } + next_to_scan + }; - // Since this block is notable, ensure we've intaked all the Burns preceding it - // We can know with certainty that the channel is fully populated at this time since we've - // acknowledged a newer block (so we've handled the state up to this point and any new - // state will be for the newer block) - #[allow(unused_assignments)] - { - made_progress |= self.intake_burns().await?; - } - } + // Fetch the next block to check + let next_to_check = EventualityDb::::next_to_check_for_eventualities_block(&self.db) + .expect("EventualityTask run before writing the start block"); - // Since we're handling this block, we are making progress - made_progress = true; - - let block = self.feed.block_by_number(&self.db, b).await?; - - log::debug!("checking eventuality completions in block: {} ({b})", hex::encode(block.id())); - - let (keys, keys_with_stages) = self.keys_and_keys_with_stages(b); - - let mut txn = self.db.txn(); - - // Fetch the data from the scanner - let scan_data = ScanToEventualityDb::recv_scan_data(&mut txn, b); - assert_eq!(scan_data.block_number, b); - let ReceiverScanData { block_number: _, received_external_outputs, forwards, returns } = - scan_data; - let mut outputs = received_external_outputs; - - for key in &keys { - // If this is the key's activation block, activate it - if key.activation_block_number == b { - Sch::activate_key(&mut txn, key.key); - } - - let completed_eventualities = { - let mut eventualities = EventualityDb::::eventualities(&txn, key.key); - let completed_eventualities = block.check_for_eventuality_resolutions(&mut eventualities); - EventualityDb::::set_eventualities(&mut txn, key.key, &eventualities); - completed_eventualities - }; - - for (tx, eventuality) in &completed_eventualities { - log::info!( - "eventuality {} resolved by {}", - hex::encode(eventuality.id()), - hex::encode(tx.as_ref()) - ); - CompletedEventualities::send(&mut txn, &key.key, eventuality.id()); - } - - // Fetch all non-External outputs - let mut non_external_outputs = block.scan_for_outputs(key.key); - non_external_outputs.retain(|output| output.kind() != OutputType::External); - // Drop any outputs less than the dust limit - non_external_outputs.retain(|output| { - let balance = output.balance(); - balance.amount.0 >= S::dust(balance.coin).0 - }); - - /* - Now that we have all non-External outputs, we filter them to be only the outputs which - are from transactions which resolve our own Eventualities *if* the multisig is retiring. - This implements step 6 of `spec/processor/Multisig Rotation.md`. - - We may receive a Change output. The only issue with accumulating this would be if it - extends the multisig's lifetime (by increasing the amount of outputs yet to be - forwarded). By checking it's one we made, either: - 1) It's a legitimate Change output to be forwarded - 2) It's a Change output created by a user burning coins (specifying the Change address), - which can only be created while the multisig is actively handling `Burn`s (therefore - ensuring this multisig cannot be kept alive ad-infinitum) - - The commentary on Change outputs also applies to Branch/Forwarded. They'll presumably get - ignored if not usable however. 
- */ - if key.stage == LifetimeStage::Finishing { - non_external_outputs - .retain(|output| completed_eventualities.contains_key(&output.transaction_id())); - } - - // Finally, for non-External outputs we didn't make, we check they're worth more than the - // cost to aggregate them to avoid some profitable spam attacks by malicious miners - { - // Fetch and cache the costs to aggregate as this call may be expensive - let coins = - non_external_outputs.iter().map(|output| output.balance().coin).collect::>(); - let mut costs_to_aggregate = HashMap::new(); - for coin in coins { - costs_to_aggregate.insert( - coin, - self.feed.cost_to_aggregate(coin, &block).await.map_err(|e| { - format!("EventualityTask couldn't fetch cost to aggregate {coin:?} at {b}: {e:?}") - })?, - ); - } - - // Only retain out outputs/outputs sufficiently worthwhile - non_external_outputs.retain(|output| { - completed_eventualities.contains_key(&output.transaction_id()) || { - let balance = output.balance(); - balance.amount.0 >= (2 * costs_to_aggregate[&balance.coin].0) - } - }); - } - - // Now, we iterate over all Forwarded outputs and queue their InInstructions - for output in - non_external_outputs.iter().filter(|output| output.kind() == OutputType::Forwarded) - { - let Some(eventuality) = completed_eventualities.get(&output.transaction_id()) else { - // Output sent to the forwarding address yet not one we made - continue; - }; - let Some(forwarded) = eventuality.singular_spent_output() else { - // This was a TX made by us, yet someone burned to the forwarding address as it doesn't - // follow the structure of forwarding transactions - continue; - }; - - let Some((return_address, mut in_instruction)) = - ScannerGlobalDb::::return_address_and_in_instruction_for_forwarded_output( - &txn, &forwarded, - ) - else { - // This was a TX made by us, coincidentally with the necessary structure, yet wasn't - // forwarding an output - continue; - }; - - // We use the original amount, minus twice the cost to aggregate - // If the fees we paid to forward this now (less than the cost to aggregate now, yet not - // necessarily the cost to aggregate historically) caused this amount to be less, reduce - // it accordingly - in_instruction.balance.amount.0 = - in_instruction.balance.amount.0.min(output.balance().amount.0); - - queue_output_until_block::( - &mut txn, - b + S::WINDOW_LENGTH, - &OutputWithInInstruction { output: output.clone(), return_address, in_instruction }, - ); - } - - // Accumulate all of these outputs - outputs.extend(non_external_outputs); - } - - // Update the scheduler - { - let mut scheduler_update = SchedulerUpdate { outputs, forwards, returns }; - scheduler_update.outputs.sort_by(sort_outputs); - scheduler_update.forwards.sort_by(sort_outputs); - scheduler_update.returns.sort_by(|a, b| sort_outputs(&a.output, &b.output)); - - let empty = { - let a: core::slice::Iter<'_, OutputFor> = scheduler_update.outputs.iter(); - let b: core::slice::Iter<'_, OutputFor> = scheduler_update.forwards.iter(); - let c = scheduler_update.returns.iter().map(|output_to_return| &output_to_return.output); - let mut all_outputs = a.chain(b).chain(c).peekable(); - - // If we received any output, sanity check this block is notable - let empty = all_outputs.peek().is_none(); - if !empty { - assert!(is_block_notable, "accumulating output(s) in non-notable block"); - } - - // Sanity check we've never accumulated these outputs before - for output in all_outputs { - assert!( - !EventualityDb::::prior_accumulated_output(&txn, &output.id()), - 
"prior accumulated an output with this ID" - ); - EventualityDb::::accumulated_output(&mut txn, &output.id()); - } - - empty - }; - - if !empty { - // Accumulate the outputs + // Check all blocks + for b in next_to_check .. exclusive_upper_bound { + let is_block_notable = ScannerGlobalDb::::is_block_notable(&self.db, b); + if is_block_notable { /* - This uses the `keys_with_stages` for the current block, yet this block is notable. - Accordingly, all future intaked Burns will use at least this block when determining - what LifetimeStage a key is. That makes the LifetimeStage monotonically incremented. If - this block wasn't notable, we'd potentially intake Burns with the LifetimeStage - determined off an earlier block than this (enabling an earlier LifetimeStage to be used - after a later one was already used). + If this block is notable *and* not acknowledged, break. + + This is so if Burns queued prior to this block's acknowledgement caused any + Eventualities (which may resolve this block), we have them. If it wasn't for that, it'd + be so if this block's acknowledgement caused any Eventualities, we have them, though + those would only potentially resolve in the next block (letting us scan this block + without delay). */ - let new_eventualities = - Sch::update(&mut txn, &block, &keys_with_stages, scheduler_update); - // Intake the new Eventualities - for key in new_eventualities.keys() { - keys - .iter() - .find(|serai_key| serai_key.key.to_bytes().as_ref() == key.as_slice()) - .expect("intaking Eventuality for key which isn't active"); + if b > highest_acknowledged { + break; } - intake_eventualities::(&mut txn, new_eventualities); - } - } - for key in &keys { - // If this is the block at which forwarding starts for this key, flush it - // We do this after we issue the above update for any efficiencies gained by doing so - if key.block_at_which_forwarding_starts == Some(b) { - assert!( - key.key != keys.last().unwrap().key, - "key which was forwarding was the last key (which has no key after it to forward to)" - ); - let new_eventualities = - Sch::flush_key(&mut txn, &block, key.key, keys.last().unwrap().key); - intake_eventualities::(&mut txn, new_eventualities); + // Since this block is notable, ensure we've intaked all the Burns preceding it + // We can know with certainty that the channel is fully populated at this time since + // we've acknowledged a newer block (so we've handled the state up to this point and any + // new state will be for the newer block) + #[allow(unused_assignments)] + { + made_progress |= self.intake_burns().await?; + } } - // Now that we've intaked any Eventualities caused, check if we're retiring any keys - if key.stage == LifetimeStage::Finishing { - let eventualities = EventualityDb::::eventualities(&txn, key.key); - if eventualities.active_eventualities.is_empty() { + // Since we're handling this block, we are making progress + made_progress = true; + + let block = self.feed.block_by_number(&self.db, b).await?; + + log::debug!("checking eventuality completions in block: {} ({b})", hex::encode(block.id())); + + let (keys, keys_with_stages) = self.keys_and_keys_with_stages(b); + + let mut txn = self.db.txn(); + + // Fetch the data from the scanner + let scan_data = ScanToEventualityDb::recv_scan_data(&mut txn, b); + assert_eq!(scan_data.block_number, b); + let ReceiverScanData { block_number: _, received_external_outputs, forwards, returns } = + scan_data; + let mut outputs = received_external_outputs; + + for key in &keys { + // If this is the key's 
activation block, activate it + if key.activation_block_number == b { + Sch::activate_key(&mut txn, key.key); + } + + let completed_eventualities = { + let mut eventualities = EventualityDb::::eventualities(&txn, key.key); + let completed_eventualities = + block.check_for_eventuality_resolutions(&mut eventualities); + EventualityDb::::set_eventualities(&mut txn, key.key, &eventualities); + completed_eventualities + }; + + for (tx, eventuality) in &completed_eventualities { log::info!( - "key {} has finished and is being retired", - hex::encode(key.key.to_bytes().as_ref()) + "eventuality {} resolved by {}", + hex::encode(eventuality.id()), + hex::encode(tx.as_ref()) ); + CompletedEventualities::send(&mut txn, &key.key, eventuality.id()); + } - // Retire this key `WINDOW_LENGTH` blocks in the future to ensure the scan task never - // has a malleable view of the keys. - ScannerGlobalDb::::retire_key(&mut txn, b + S::WINDOW_LENGTH, key.key); + // Fetch all non-External outputs + let mut non_external_outputs = block.scan_for_outputs(key.key); + non_external_outputs.retain(|output| output.kind() != OutputType::External); + // Drop any outputs less than the dust limit + non_external_outputs.retain(|output| { + let balance = output.balance(); + balance.amount.0 >= S::dust(balance.coin).0 + }); - // We tell the scheduler to retire it now as we're done with it, and this fn doesn't - // require it be called with a canonical order - Sch::retire_key(&mut txn, key.key); + /* + Now that we have all non-External outputs, we filter them to be only the outputs which + are from transactions which resolve our own Eventualities *if* the multisig is retiring. + This implements step 6 of `spec/processor/Multisig Rotation.md`. + + We may receive a Change output. The only issue with accumulating this would be if it + extends the multisig's lifetime (by increasing the amount of outputs yet to be + forwarded). By checking it's one we made, either: + 1) It's a legitimate Change output to be forwarded + 2) It's a Change output created by a user burning coins (specifying the Change address), + which can only be created while the multisig is actively handling `Burn`s (therefore + ensuring this multisig cannot be kept alive ad-infinitum) + + The commentary on Change outputs also applies to Branch/Forwarded. They'll presumably + get ignored if not usable however. 
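+ (Either way, they remain subject to the cost-to-aggregate filter below, so third-party Branch/Forwarded outputs which cost more to spend than they're worth are dropped rather than accumulated.)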
+ */ + if key.stage == LifetimeStage::Finishing { + non_external_outputs + .retain(|output| completed_eventualities.contains_key(&output.transaction_id())); + } + + // Finally, for non-External outputs we didn't make, we check they're worth more than the + // cost to aggregate them to avoid some profitable spam attacks by malicious miners + { + // Fetch and cache the costs to aggregate as this call may be expensive + let coins = non_external_outputs + .iter() + .map(|output| output.balance().coin) + .collect::>(); + let mut costs_to_aggregate = HashMap::new(); + for coin in coins { + costs_to_aggregate.insert( + coin, + self.feed.cost_to_aggregate(coin, &block).await.map_err(|e| { + format!("EventualityTask couldn't fetch cost to aggregate {coin:?} at {b}: {e:?}") + })?, + ); + } + + // Only retain our outputs / outputs sufficiently worthwhile + non_external_outputs.retain(|output| { + completed_eventualities.contains_key(&output.transaction_id()) || { + let balance = output.balance(); + balance.amount.0 >= (2 * costs_to_aggregate[&balance.coin].0) + } + }); + } + + // Now, we iterate over all Forwarded outputs and queue their InInstructions + for output in + non_external_outputs.iter().filter(|output| output.kind() == OutputType::Forwarded) + { + let Some(eventuality) = completed_eventualities.get(&output.transaction_id()) else { + // Output sent to the forwarding address yet not one we made + continue; + }; + let Some(forwarded) = eventuality.singular_spent_output() else { + // This was a TX made by us, yet someone burned to the forwarding address as it + // doesn't follow the structure of forwarding transactions + continue; + }; + + let Some((return_address, mut in_instruction)) = + ScannerGlobalDb::::return_address_and_in_instruction_for_forwarded_output( + &txn, &forwarded, + ) + else { + // This was a TX made by us, coincidentally with the necessary structure, yet wasn't + // forwarding an output + continue; + }; + + // We use the original amount, minus twice the cost to aggregate + // If the fees we paid to forward this now (less than the cost to aggregate now, yet not + // necessarily the cost to aggregate historically) caused this amount to be less, reduce + // it accordingly + in_instruction.balance.amount.0 = + in_instruction.balance.amount.0.min(output.balance().amount.0); + + queue_output_until_block::( + &mut txn, + b + S::WINDOW_LENGTH, + &OutputWithInInstruction { output: output.clone(), return_address, in_instruction }, + ); + } + + // Accumulate all of these outputs + outputs.extend(non_external_outputs); + } + + // Update the scheduler + { + let mut scheduler_update = SchedulerUpdate { outputs, forwards, returns }; + scheduler_update.outputs.sort_by(sort_outputs); + scheduler_update.forwards.sort_by(sort_outputs); + scheduler_update.returns.sort_by(|a, b| sort_outputs(&a.output, &b.output)); + + let empty = { + let a: core::slice::Iter<'_, OutputFor> = scheduler_update.outputs.iter(); + let b: core::slice::Iter<'_, OutputFor> = scheduler_update.forwards.iter(); + let c = + scheduler_update.returns.iter().map(|output_to_return| &output_to_return.output); + let mut all_outputs = a.chain(b).chain(c).peekable(); + + // If we received any output, sanity check this block is notable + let empty = all_outputs.peek().is_none(); + if !empty { + assert!(is_block_notable, "accumulating output(s) in non-notable block"); + } + + // Sanity check we've never accumulated these outputs before + for output in all_outputs { + assert!( + !EventualityDb::::prior_accumulated_output(&txn,
&output.id()), + "prior accumulated an output with this ID" + ); + EventualityDb::::accumulated_output(&mut txn, &output.id()); + } + + empty + }; + + if !empty { + // Accumulate the outputs + /* + This uses the `keys_with_stages` for the current block, yet this block is notable. + Accordingly, all future intaked Burns will use at least this block when determining + what LifetimeStage a key is. That makes the LifetimeStage monotonically incremented. + If this block wasn't notable, we'd potentially intake Burns with the LifetimeStage + determined off an earlier block than this (enabling an earlier LifetimeStage to be + used after a later one was already used). + */ + let new_eventualities = + Sch::update(&mut txn, &block, &keys_with_stages, scheduler_update); + // Intake the new Eventualities + for key in new_eventualities.keys() { + keys + .iter() + .find(|serai_key| serai_key.key.to_bytes().as_ref() == key.as_slice()) + .expect("intaking Eventuality for key which isn't active"); + } + intake_eventualities::(&mut txn, new_eventualities); } } + + for key in &keys { + // If this is the block at which forwarding starts for this key, flush it + // We do this after we issue the above update for any efficiencies gained by doing so + if key.block_at_which_forwarding_starts == Some(b) { + assert!( + key.key != keys.last().unwrap().key, + "key which was forwarding was the last key (which has no key after it to forward to)" + ); + let new_eventualities = + Sch::flush_key(&mut txn, &block, key.key, keys.last().unwrap().key); + intake_eventualities::(&mut txn, new_eventualities); + } + + // Now that we've intaked any Eventualities caused, check if we're retiring any keys + if key.stage == LifetimeStage::Finishing { + let eventualities = EventualityDb::::eventualities(&txn, key.key); + if eventualities.active_eventualities.is_empty() { + log::info!( + "key {} has finished and is being retired", + hex::encode(key.key.to_bytes().as_ref()) + ); + + // Retire this key `WINDOW_LENGTH` blocks in the future to ensure the scan task never + // has a malleable view of the keys. 
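+ // (This mirrors how newly-queued keys only activate `WINDOW_LENGTH` blocks after the block which queued them, keeping the key set fixed for any block still within the scan window.)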
+ ScannerGlobalDb::::retire_key(&mut txn, b + S::WINDOW_LENGTH, key.key); + + // We tell the scheduler to retire it now as we're done with it, and this fn doesn't + // require it be called with a canonical order + Sch::retire_key(&mut txn, key.key); + } + } + } + + // Update the next-to-check block + EventualityDb::::set_next_to_check_for_eventualities_block(&mut txn, next_to_check); + + // If this block was notable, update the latest-handled notable block + if is_block_notable { + EventualityDb::::set_latest_handled_notable_block(&mut txn, b); + } + + txn.commit(); } - // Update the next-to-check block - EventualityDb::::set_next_to_check_for_eventualities_block(&mut txn, next_to_check); - - // If this block was notable, update the latest-handled notable block - if is_block_notable { - EventualityDb::::set_latest_handled_notable_block(&mut txn, b); - } - - txn.commit(); + // Run dependents if we successfully checked any blocks + Ok(made_progress) } - - // Run dependents if we successfully checked any blocks - Ok(made_progress) } } diff --git a/processor/scanner/src/index/mod.rs b/processor/scanner/src/index/mod.rs index 930ce55a..03abc8a8 100644 --- a/processor/scanner/src/index/mod.rs +++ b/processor/scanner/src/index/mod.rs @@ -1,5 +1,6 @@ -use serai_db::{Get, DbTxn, Db}; +use core::future::Future; +use serai_db::{Get, DbTxn, Db}; use primitives::{task::ContinuallyRan, BlockHeader}; use crate::ScannerFeed; @@ -56,58 +57,59 @@ impl IndexTask { } } -#[async_trait::async_trait] impl ContinuallyRan for IndexTask { - async fn run_iteration(&mut self) -> Result { - // Fetch the latest finalized block - let our_latest_finalized = IndexDb::latest_finalized_block(&self.db) - .expect("IndexTask run before writing the start block"); - let latest_finalized = match self.feed.latest_finalized_block_number().await { - Ok(latest_finalized) => latest_finalized, - Err(e) => Err(format!("couldn't fetch the latest finalized block number: {e:?}"))?, - }; - - if latest_finalized < our_latest_finalized { - // Explicitly log this as an error as returned ephemeral errors are logged with debug - // This doesn't panic as the node should sync along our indexed chain, and if it doesn't, - // we'll panic at that point in time - log::error!( - "node is out of sync, latest finalized {} is behind our indexed {}", - latest_finalized, - our_latest_finalized - ); - Err("node is out of sync".to_string())?; - } - - // Index the hashes of all blocks until the latest finalized block - for b in (our_latest_finalized + 1) ..= latest_finalized { - let block = match self.feed.unchecked_block_header_by_number(b).await { - Ok(block) => block, - Err(e) => Err(format!("couldn't fetch block {b}: {e:?}"))?, + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + // Fetch the latest finalized block + let our_latest_finalized = IndexDb::latest_finalized_block(&self.db) + .expect("IndexTask run before writing the start block"); + let latest_finalized = match self.feed.latest_finalized_block_number().await { + Ok(latest_finalized) => latest_finalized, + Err(e) => Err(format!("couldn't fetch the latest finalized block number: {e:?}"))?, }; - // Check this descends from our indexed chain - { - let expected_parent = - IndexDb::block_id(&self.db, b - 1).expect("didn't have the ID of the prior block"); - if block.parent() != expected_parent { - panic!( - "current finalized block (#{b}, {}) doesn't build off finalized block (#{}, {})", - hex::encode(block.parent()), - b - 1, - hex::encode(expected_parent) - ); - } + if 
latest_finalized < our_latest_finalized { + // Explicitly log this as an error as returned ephemeral errors are logged with debug + // This doesn't panic as the node should sync along our indexed chain, and if it doesn't, + // we'll panic at that point in time + log::error!( + "node is out of sync, latest finalized {} is behind our indexed {}", + latest_finalized, + our_latest_finalized + ); + Err("node is out of sync".to_string())?; } - // Update the latest finalized block - let mut txn = self.db.txn(); - IndexDb::set_block(&mut txn, b, block.id()); - IndexDb::set_latest_finalized_block(&mut txn, b); - txn.commit(); - } + // Index the hashes of all blocks until the latest finalized block + for b in (our_latest_finalized + 1) ..= latest_finalized { + let block = match self.feed.unchecked_block_header_by_number(b).await { + Ok(block) => block, + Err(e) => Err(format!("couldn't fetch block {b}: {e:?}"))?, + }; - // Have dependents run if we updated the latest finalized block - Ok(our_latest_finalized != latest_finalized) + // Check this descends from our indexed chain + { + let expected_parent = + IndexDb::block_id(&self.db, b - 1).expect("didn't have the ID of the prior block"); + if block.parent() != expected_parent { + panic!( + "current finalized block (#{b}, {}) doesn't build off finalized block (#{}, {})", + hex::encode(block.parent()), + b - 1, + hex::encode(expected_parent) + ); + } + } + + // Update the latest finalized block + let mut txn = self.db.txn(); + IndexDb::set_block(&mut txn, b, block.id()); + IndexDb::set_latest_finalized_block(&mut txn, b); + txn.commit(); + } + + // Have dependents run if we updated the latest finalized block + Ok(our_latest_finalized != latest_finalized) + } } } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index d100815d..a5c5c038 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -2,7 +2,7 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use core::{marker::PhantomData, fmt::Debug}; +use core::{marker::PhantomData, future::Future, fmt::Debug}; use std::{io, collections::HashMap}; use group::GroupEncoding; @@ -59,7 +59,6 @@ impl BlockExt for B { /// A feed usable to scan a blockchain. /// /// This defines the primitive types used, along with various getters necessary for indexing. -#[async_trait::async_trait] pub trait ScannerFeed: 'static + Send + Sync + Clone { /// The ID of the network being scanned for. const NETWORK: NetworkId; @@ -110,38 +109,43 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { /// /// The block number is its zero-indexed position within a linear view of the external network's /// consensus. The genesis block accordingly has block number 0. - async fn latest_finalized_block_number(&self) -> Result; + fn latest_finalized_block_number( + &self, + ) -> impl Send + Future>; /// Fetch the timestamp of a block (represented in seconds since the epoch). /// /// This must be monotonically incrementing. Two blocks may share a timestamp. - async fn time_of_block(&self, number: u64) -> Result; + fn time_of_block( + &self, + number: u64, + ) -> impl Send + Future>; /// Fetch a block header by its number. /// /// This does not check the returned BlockHeader is the header for the block we indexed. - async fn unchecked_block_header_by_number( + fn unchecked_block_header_by_number( &self, number: u64, - ) -> Result<::Header, Self::EphemeralError>; + ) -> impl Send + Future::Header, Self::EphemeralError>>; /// Fetch a block by its number. 
/// /// This does not check the returned Block is the block we indexed. - async fn unchecked_block_by_number( + fn unchecked_block_by_number( &self, number: u64, - ) -> Result; + ) -> impl Send + Future>; /// Fetch a block by its number. /// /// Panics if the block requested wasn't indexed. - async fn block_by_number( + fn block_by_number( &self, getter: &(impl Send + Sync + Get), number: u64, - ) -> Result { - let block = match self.unchecked_block_by_number(number).await { + ) -> impl Send + Future> { + async move { let block = match self.unchecked_block_by_number(number).await { Ok(block) => block, Err(e) => Err(format!("couldn't fetch block {number}: {e:?}"))?, }; @@ -159,7 +163,7 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { } } - Ok(block) + Ok(block) } } /// The dust threshold for the specified coin. /// @@ -171,11 +175,11 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { /// The cost to aggregate an input as of the specified block. /// /// This is defined as the transaction fee for a 2-input, 1-output transaction. - async fn cost_to_aggregate( + fn cost_to_aggregate( &self, coin: Coin, reference_block: &Self::Block, - ) -> Result; + ) -> impl Send + Future>; } /// The key type for this ScannerFeed. diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index 5fd2c7eb..afb1b672 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -1,4 +1,4 @@ -use core::marker::PhantomData; +use core::{marker::PhantomData, future::Future}; use scale::Encode; use serai_db::{DbTxn, Db}; @@ -65,113 +65,119 @@ impl ReportTask { } } -#[async_trait::async_trait] impl ContinuallyRan for ReportTask { - async fn run_iteration(&mut self) -> Result { - let highest_reportable = { - // Fetch the next to scan block - let next_to_scan = next_to_scan_for_outputs_block::(&self.db) + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let highest_reportable = { + // Fetch the next to scan block + let next_to_scan = next_to_scan_for_outputs_block::(&self.db) + .expect("ReportTask run before writing the start block"); + // If we haven't done any work, return + if next_to_scan == 0 { + return Ok(false); + } + // The last scanned block is the block prior to this + #[allow(clippy::let_and_return)] + let last_scanned = next_to_scan - 1; + // The last scanned block is the highest reportable block as we only scan blocks within a + // window where it's safe to immediately report the block + // See `eventuality.rs` for more info + last_scanned + }; + + let next_to_potentially_report = ReportDb::::next_to_potentially_report_block(&self.db) .expect("ReportTask run before writing the start block"); - // If we haven't done any work, return - if next_to_scan == 0 { - return Ok(false); - } - // The last scanned block is the block prior to this - #[allow(clippy::let_and_return)] - let last_scanned = next_to_scan - 1; - // The last scanned block is the highest reportable block as we only scan blocks within a - // window where it's safe to immediately report the block - // See `eventuality.rs` for more info - last_scanned - }; - let next_to_potentially_report = ReportDb::::next_to_potentially_report_block(&self.db) - .expect("ReportTask run before writing the start block"); + for b in next_to_potentially_report ..= highest_reportable { + let mut txn = self.db.txn(); - for b in next_to_potentially_report ..= highest_reportable { - let mut txn = self.db.txn(); + // Receive the InInstructions for this block + // We always do this as we
can't trivially tell if we should recv InInstructions before we + // do + let InInstructionData { + external_key_for_session_to_sign_batch, + returnable_in_instructions: in_instructions, + } = ScanToReportDb::::recv_in_instructions(&mut txn, b); + let notable = ScannerGlobalDb::::is_block_notable(&txn, b); + if !notable { + assert!(in_instructions.is_empty(), "block wasn't notable yet had InInstructions"); + } + // If this block is notable, create the Batch(s) for it + if notable { + let network = S::NETWORK; + let block_hash = index::block_id(&txn, b); + let mut batch_id = ReportDb::::acquire_batch_id(&mut txn, b); - // Receive the InInstructions for this block - // We always do this as we can't trivially tell if we should recv InInstructions before we do - let InInstructionData { - external_key_for_session_to_sign_batch, - returnable_in_instructions: in_instructions, - } = ScanToReportDb::::recv_in_instructions(&mut txn, b); - let notable = ScannerGlobalDb::::is_block_notable(&txn, b); - if !notable { - assert!(in_instructions.is_empty(), "block wasn't notable yet had InInstructions"); - } - // If this block is notable, create the Batch(s) for it - if notable { - let network = S::NETWORK; - let block_hash = index::block_id(&txn, b); - let mut batch_id = ReportDb::::acquire_batch_id(&mut txn, b); + // start with empty batch + let mut batches = vec![Batch { + network, + id: batch_id, + block: BlockHash(block_hash), + instructions: vec![], + }]; + // We also track the return information for the InInstructions within a Batch in case + // they error + let mut return_information = vec![vec![]]; - // start with empty batch - let mut batches = - vec![Batch { network, id: batch_id, block: BlockHash(block_hash), instructions: vec![] }]; - // We also track the return information for the InInstructions within a Batch in case they - // error - let mut return_information = vec![vec![]]; + for Returnable { return_address, in_instruction } in in_instructions { + let balance = in_instruction.balance; - for Returnable { return_address, in_instruction } in in_instructions { - let balance = in_instruction.balance; + let batch = batches.last_mut().unwrap(); + batch.instructions.push(in_instruction); - let batch = batches.last_mut().unwrap(); - batch.instructions.push(in_instruction); + // check if batch is over-size + if batch.encode().len() > MAX_BATCH_SIZE { + // pop the last instruction so it's back in size + let in_instruction = batch.instructions.pop().unwrap(); - // check if batch is over-size - if batch.encode().len() > MAX_BATCH_SIZE { - // pop the last instruction so it's back in size - let in_instruction = batch.instructions.pop().unwrap(); + // bump the id for the new batch + batch_id = ReportDb::::acquire_batch_id(&mut txn, b); - // bump the id for the new batch - batch_id = ReportDb::::acquire_batch_id(&mut txn, b); + // make a new batch with this instruction included + batches.push(Batch { + network, + id: batch_id, + block: BlockHash(block_hash), + instructions: vec![in_instruction], + }); + // Since we're allocating a new batch, allocate a new set of return addresses for it + return_information.push(vec![]); + } - // make a new batch with this instruction included - batches.push(Batch { - network, - id: batch_id, - block: BlockHash(block_hash), - instructions: vec![in_instruction], - }); - // Since we're allocating a new batch, allocate a new set of return addresses for it - return_information.push(vec![]); + // For the set of return addresses for the InInstructions for the batch we just 
pushed + // onto, push this InInstruction's return addresses + return_information + .last_mut() + .unwrap() + .push(return_address.map(|address| ReturnInformation { address, balance })); } - // For the set of return addresses for the InInstructions for the batch we just pushed - // onto, push this InInstruction's return addresses - return_information - .last_mut() - .unwrap() - .push(return_address.map(|address| ReturnInformation { address, balance })); + // Save the return addresses to the database + assert_eq!(batches.len(), return_information.len()); + for (batch, return_information) in batches.iter().zip(&return_information) { + assert_eq!(batch.instructions.len(), return_information.len()); + ReportDb::::save_external_key_for_session_to_sign_batch( + &mut txn, + batch.id, + &external_key_for_session_to_sign_batch, + ); + ReportDb::::save_return_information(&mut txn, batch.id, return_information); + } + + for batch in batches { + Batches::send(&mut txn, &batch); + BatchesToSign::send(&mut txn, &external_key_for_session_to_sign_batch, &batch); + } } - // Save the return addresses to the database - assert_eq!(batches.len(), return_information.len()); - for (batch, return_information) in batches.iter().zip(&return_information) { - assert_eq!(batch.instructions.len(), return_information.len()); - ReportDb::::save_external_key_for_session_to_sign_batch( - &mut txn, - batch.id, - &external_key_for_session_to_sign_batch, - ); - ReportDb::::save_return_information(&mut txn, batch.id, return_information); - } + // Update the next to potentially report block + ReportDb::::set_next_to_potentially_report_block(&mut txn, b + 1); - for batch in batches { - Batches::send(&mut txn, &batch); - BatchesToSign::send(&mut txn, &external_key_for_session_to_sign_batch, &batch); - } + txn.commit(); } - // Update the next to potentially report block - ReportDb::::set_next_to_potentially_report_block(&mut txn, b + 1); - - txn.commit(); + // Run dependents if we decided to report any blocks + Ok(next_to_potentially_report <= highest_reportable) } - - // Run dependents if we decided to report any blocks - Ok(next_to_potentially_report <= highest_reportable) } } diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 91c97f60..c54dc3e0 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -1,3 +1,4 @@ +use core::future::Future; use std::collections::HashMap; use scale::Decode; @@ -107,258 +108,262 @@ impl ScanTask { } } -#[async_trait::async_trait] impl ContinuallyRan for ScanTask { - async fn run_iteration(&mut self) -> Result { - // Fetch the safe to scan block - let latest_scannable = - latest_scannable_block::(&self.db).expect("ScanTask run before writing the start block"); - // Fetch the next block to scan - let next_to_scan = ScanDb::::next_to_scan_for_outputs_block(&self.db) - .expect("ScanTask run before writing the start block"); + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + // Fetch the safe to scan block + let latest_scannable = + latest_scannable_block::(&self.db).expect("ScanTask run before writing the start block"); + // Fetch the next block to scan + let next_to_scan = ScanDb::::next_to_scan_for_outputs_block(&self.db) + .expect("ScanTask run before writing the start block"); - for b in next_to_scan ..= latest_scannable { - let block = self.feed.block_by_number(&self.db, b).await?; + for b in next_to_scan ..= latest_scannable { + let block = self.feed.block_by_number(&self.db, b).await?; - 
log::info!("scanning block: {} ({b})", hex::encode(block.id())); + log::info!("scanning block: {} ({b})", hex::encode(block.id())); - let mut txn = self.db.txn(); + let mut txn = self.db.txn(); - assert_eq!(ScanDb::::next_to_scan_for_outputs_block(&txn).unwrap(), b); + assert_eq!(ScanDb::::next_to_scan_for_outputs_block(&txn).unwrap(), b); - // Tidy the keys, then fetch them - // We don't have to tidy them here, we just have to somewhere, so why not here? - ScannerGlobalDb::::tidy_keys(&mut txn); - let keys = ScannerGlobalDb::::active_keys_as_of_next_to_scan_for_outputs_block(&txn) - .expect("scanning for a blockchain without any keys set"); + // Tidy the keys, then fetch them + // We don't have to tidy them here, we just have to somewhere, so why not here? + ScannerGlobalDb::::tidy_keys(&mut txn); + let keys = ScannerGlobalDb::::active_keys_as_of_next_to_scan_for_outputs_block(&txn) + .expect("scanning for a blockchain without any keys set"); - // The scan data for this block - let mut scan_data = SenderScanData { - block_number: b, - received_external_outputs: vec![], - forwards: vec![], - returns: vec![], - }; - // The InInstructions for this block - let mut in_instructions = vec![]; - - // The outputs queued for this block - let queued_outputs = { - let mut queued_outputs = ScanDb::::take_queued_outputs(&mut txn, b); - // Sort the queued outputs in case they weren't queued in a deterministic fashion - queued_outputs.sort_by(|a, b| sort_outputs(&a.output, &b.output)); - queued_outputs - }; - for queued_output in queued_outputs { - in_instructions.push(( - queued_output.output.id(), - Returnable { - return_address: queued_output.return_address, - in_instruction: queued_output.in_instruction, - }, - )); - scan_data.received_external_outputs.push(queued_output.output); - } - - // We subtract the cost to aggregate from some outputs we scan - // This cost is fetched with an asynchronous function which may be non-trivial - // We cache the result of this function here to avoid calling it multiple times - let mut costs_to_aggregate = HashMap::with_capacity(1); - - // Scan for each key - for key in &keys { - for output in block.scan_for_outputs(key.key) { - assert_eq!(output.key(), key.key); - - /* - The scan task runs ahead of time, obtaining ordering on the external network's blocks - with relation to events on the Serai network. This is done via publishing a Batch which - contains the InInstructions from External outputs. Accordingly, the scan process only - has to yield External outputs. - - It'd appear to make sense to scan for all outputs, and after scanning for all outputs, - yield all outputs. The issue is we can't identify outputs we created here. We can only - identify the outputs we receive and their *declared intention*. - - We only want to handle Change/Branch/Forwarded outputs we made ourselves. For - Forwarded, the reasoning is obvious (retiring multisigs should only downsize, yet - accepting new outputs solely because they claim to be Forwarded would increase the size - of the multisig). For Change/Branch, it's because such outputs which aren't ours are - pointless. They wouldn't hurt to accumulate though. - - The issue is they would hurt to accumulate. We want to filter outputs which are less - than their cost to aggregate, a variable itself variable to the current blockchain. We - can filter such outputs here, yet if we drop a Change output, we create an insolvency. - We'd need to track the loss and offset it later. 
That means we can't filter such - outputs, as we expect any Change output we make. - - The issue is the Change outputs we don't make. Someone can create an output declaring - to be Change, yet not actually Change. If we don't filter it, it'd be queued for - accumulation, yet it may cost more to accumulate than it's worth. - - The solution is to let the Eventuality task, which does know if we made an output or - not (or rather, if a transaction is identical to a transaction which should exist - regarding effects) decide to keep/yield the outputs which we should only keep if we - made them (as Serai itself should not make worthless outputs, so we can assume they're - worthwhile, and even if they're not economically, they are technically). - - The alternative, we drop outputs here with a generic filter rule and then report back - the insolvency created, still doesn't work as we'd only be creating an insolvency if - the output was actually made by us (and not simply someone else sending in). We can - have the Eventuality task report the insolvency, yet that requires the scanner be - responsible for such filter logic. It's more flexible, and has a cleaner API, - to do so at a higher level. - */ - if output.kind() != OutputType::External { - // While we don't report these outputs, we still need consensus on this block and - // accordingly still need to set it as notable - let balance = output.balance(); - // We ensure it's over the dust limit to prevent people sending 1 satoshi from causing - // an invocation of a consensus/signing protocol - if balance.amount.0 >= S::dust(balance.coin).0 { - ScannerGlobalDb::::flag_notable_due_to_non_external_output(&mut txn, b); - } - continue; - } - - // Check this isn't dust - let balance_to_use = { - let mut balance = output.balance(); - - // First, subtract 2 * the cost to aggregate, as detailed in - // `spec/processor/UTXO Management.md` - - // We cache this, so if it isn't yet cached, insert it into the cache - if let std::collections::hash_map::Entry::Vacant(e) = - costs_to_aggregate.entry(balance.coin) - { - e.insert(self.feed.cost_to_aggregate(balance.coin, &block).await.map_err(|e| { - format!( - "ScanTask couldn't fetch cost to aggregate {:?} at {b}: {e:?}", - balance.coin - ) - })?); - } - let cost_to_aggregate = costs_to_aggregate[&balance.coin]; - balance.amount.0 -= 2 * cost_to_aggregate.0; - - // Now, check it's still past the dust threshold - if balance.amount.0 < S::dust(balance.coin).0 { - continue; - } - - balance - }; - - // Fetch the InInstruction/return addr for this output - let output_with_in_instruction = match in_instruction_from_output::(&output) { - (return_address, Some(instruction)) => OutputWithInInstruction { - output, - return_address, - in_instruction: InInstructionWithBalance { instruction, balance: balance_to_use }, - }, - (Some(address), None) => { - // Since there was no instruction here, return this since we parsed a return address - if key.stage != LifetimeStage::Finishing { - scan_data.returns.push(Return { address, output }); - } - continue; - } - // Since we didn't receive an instruction nor can we return this, queue this for - // accumulation and move on - (None, None) => { - if key.stage != LifetimeStage::Finishing { - scan_data.received_external_outputs.push(output); - } - continue; - } - }; - - // Drop External outputs if they're to a multisig which won't report them - // This means we should report any External output we save to disk here - #[allow(clippy::match_same_arms)] - match key.stage { - // This multisig 
isn't yet reporting its External outputs to avoid a DoS - // Queue the output to be reported when this multisig starts reporting - LifetimeStage::ActiveYetNotReporting => { - ScanDb::::queue_output_until_block( - &mut txn, - key.block_at_which_reporting_starts, - &output_with_in_instruction, - ); - continue; - } - // We should report External outputs in these cases - LifetimeStage::Active | LifetimeStage::UsingNewForChange => {} - // We should report External outputs only once forwarded, where they'll appear as - // OutputType::Forwarded. We save them now for when they appear - LifetimeStage::Forwarding => { - // When the forwarded output appears, we can see which Plan it's associated with and - // from there recover this output - scan_data.forwards.push(output_with_in_instruction); - continue; - } - // We should drop these as we should not be handling new External outputs at this - // time - LifetimeStage::Finishing => { - continue; - } - } - // Ensures we didn't miss a `continue` above - assert!(matches!(key.stage, LifetimeStage::Active | LifetimeStage::UsingNewForChange)); + // The scan data for this block + let mut scan_data = SenderScanData { + block_number: b, + received_external_outputs: vec![], + forwards: vec![], + returns: vec![], + }; + // The InInstructions for this block + let mut in_instructions = vec![]; + // The outputs queued for this block + let queued_outputs = { + let mut queued_outputs = ScanDb::::take_queued_outputs(&mut txn, b); + // Sort the queued outputs in case they weren't queued in a deterministic fashion + queued_outputs.sort_by(|a, b| sort_outputs(&a.output, &b.output)); + queued_outputs + }; + for queued_output in queued_outputs { in_instructions.push(( - output_with_in_instruction.output.id(), + queued_output.output.id(), Returnable { - return_address: output_with_in_instruction.return_address, - in_instruction: output_with_in_instruction.in_instruction, + return_address: queued_output.return_address, + in_instruction: queued_output.in_instruction, }, )); - scan_data.received_external_outputs.push(output_with_in_instruction.output); + scan_data.received_external_outputs.push(queued_output.output); } - } - // Sort the InInstructions by the output ID - in_instructions.sort_by(|(output_id_a, _), (output_id_b, _)| { - use core::cmp::{Ordering, Ord}; - let res = output_id_a.as_ref().cmp(output_id_b.as_ref()); - assert!(res != Ordering::Equal, "two outputs within a collection had the same ID"); - res - }); - // Check we haven't prior reported an InInstruction for this output - // This is a sanity check which is intended to prevent multiple instances of sriXYZ on-chain - // due to a single output - for (id, _) in &in_instructions { - assert!( - !ScanDb::::prior_reported_in_instruction_for_output(&txn, id), - "prior reported an InInstruction for an output with this ID" + // We subtract the cost to aggregate from some outputs we scan + // This cost is fetched with an asynchronous function which may be non-trivial + // We cache the result of this function here to avoid calling it multiple times + let mut costs_to_aggregate = HashMap::with_capacity(1); + + // Scan for each key + for key in &keys { + for output in block.scan_for_outputs(key.key) { + assert_eq!(output.key(), key.key); + + /* + The scan task runs ahead of time, obtaining ordering on the external network's blocks + with relation to events on the Serai network. This is done via publishing a Batch + which contains the InInstructions from External outputs. 
Accordingly, the scan + process only has to yield External outputs. + + It'd appear to make sense to scan for all outputs, and after scanning for all + outputs, yield all outputs. The issue is we can't identify outputs we created here. + We can only identify the outputs we receive and their *declared intention*. + + We only want to handle Change/Branch/Forwarded outputs we made ourselves. For + Forwarded, the reasoning is obvious (retiring multisigs should only downsize, yet + accepting new outputs solely because they claim to be Forwarded would increase the + size of the multisig). For Change/Branch, it's because such outputs which aren't ours + are pointless. They wouldn't hurt to accumulate though. + + The issue is they would hurt to accumulate. We want to filter outputs which are less + than their cost to aggregate, a variable itself variable to the current blockchain. + We can filter such outputs here, yet if we drop a Change output, we create an + insolvency. We'd need to track the loss and offset it later. That means we can't + filter such outputs, as we expect any Change output we make. + + The issue is the Change outputs we don't make. Someone can create an output declaring + to be Change, yet not actually Change. If we don't filter it, it'd be queued for + accumulation, yet it may cost more to accumulate than it's worth. + + The solution is to let the Eventuality task, which does know if we made an output or + not (or rather, if a transaction is identical to a transaction which should exist + regarding effects) decide to keep/yield the outputs which we should only keep if we + made them (as Serai itself should not make worthless outputs, so we can assume + they're worthwhile, and even if they're not economically, they are technically). + + The alternative, we drop outputs here with a generic filter rule and then report back + the insolvency created, still doesn't work as we'd only be creating an insolvency if + the output was actually made by us (and not simply someone else sending in). We can + have the Eventuality task report the insolvency, yet that requires the scanner be + responsible for such filter logic. It's more flexible, and has a cleaner API, + to do so at a higher level. 
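+ In short: the scan task yields External outputs only, and the Eventuality task, which knows which transactions we authored, decides whether non-External outputs are kept.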
+ */ + if output.kind() != OutputType::External { + // While we don't report these outputs, we still need consensus on this block and + // accordingly still need to set it as notable + let balance = output.balance(); + // We ensure it's over the dust limit to prevent people sending 1 satoshi from + // causing an invocation of a consensus/signing protocol + if balance.amount.0 >= S::dust(balance.coin).0 { + ScannerGlobalDb::::flag_notable_due_to_non_external_output(&mut txn, b); + } + continue; + } + + // Check this isn't dust + let balance_to_use = { + let mut balance = output.balance(); + + // First, subtract 2 * the cost to aggregate, as detailed in + // `spec/processor/UTXO Management.md` + + // We cache this, so if it isn't yet cached, insert it into the cache + if let std::collections::hash_map::Entry::Vacant(e) = + costs_to_aggregate.entry(balance.coin) + { + e.insert(self.feed.cost_to_aggregate(balance.coin, &block).await.map_err(|e| { + format!( + "ScanTask couldn't fetch cost to aggregate {:?} at {b}: {e:?}", + balance.coin + ) + })?); + } + let cost_to_aggregate = costs_to_aggregate[&balance.coin]; + balance.amount.0 -= 2 * cost_to_aggregate.0; + + // Now, check it's still past the dust threshold + if balance.amount.0 < S::dust(balance.coin).0 { + continue; + } + + balance + }; + + // Fetch the InInstruction/return addr for this output + let output_with_in_instruction = match in_instruction_from_output::(&output) { + (return_address, Some(instruction)) => OutputWithInInstruction { + output, + return_address, + in_instruction: InInstructionWithBalance { instruction, balance: balance_to_use }, + }, + (Some(address), None) => { + // Since there was no instruction here, return this since we parsed a return + // address + if key.stage != LifetimeStage::Finishing { + scan_data.returns.push(Return { address, output }); + } + continue; + } + // Since we didn't receive an instruction nor can we return this, queue this for + // accumulation and move on + (None, None) => { + if key.stage != LifetimeStage::Finishing { + scan_data.received_external_outputs.push(output); + } + continue; + } + }; + + // Drop External outputs if they're to a multisig which won't report them + // This means we should report any External output we save to disk here + #[allow(clippy::match_same_arms)] + match key.stage { + // This multisig isn't yet reporting its External outputs to avoid a DoS + // Queue the output to be reported when this multisig starts reporting + LifetimeStage::ActiveYetNotReporting => { + ScanDb::::queue_output_until_block( + &mut txn, + key.block_at_which_reporting_starts, + &output_with_in_instruction, + ); + continue; + } + // We should report External outputs in these cases + LifetimeStage::Active | LifetimeStage::UsingNewForChange => {} + // We should report External outputs only once forwarded, where they'll appear as + // OutputType::Forwarded. 
We save them now for when they appear + LifetimeStage::Forwarding => { + // When the forwarded output appears, we can see which Plan it's associated with + // and from there recover this output + scan_data.forwards.push(output_with_in_instruction); + continue; + } + // We should drop these as we should not be handling new External outputs at this + // time + LifetimeStage::Finishing => { + continue; + } + } + // Ensures we didn't miss a `continue` above + assert!(matches!(key.stage, LifetimeStage::Active | LifetimeStage::UsingNewForChange)); + + in_instructions.push(( + output_with_in_instruction.output.id(), + Returnable { + return_address: output_with_in_instruction.return_address, + in_instruction: output_with_in_instruction.in_instruction, + }, + )); + scan_data.received_external_outputs.push(output_with_in_instruction.output); + } + } + + // Sort the InInstructions by the output ID + in_instructions.sort_by(|(output_id_a, _), (output_id_b, _)| { + use core::cmp::{Ordering, Ord}; + let res = output_id_a.as_ref().cmp(output_id_b.as_ref()); + assert!(res != Ordering::Equal, "two outputs within a collection had the same ID"); + res + }); + // Check we haven't prior reported an InInstruction for this output + // This is a sanity check which is intended to prevent multiple instances of sriXYZ + // on-chain due to a single output + for (id, _) in &in_instructions { + assert!( + !ScanDb::::prior_reported_in_instruction_for_output(&txn, id), + "prior reported an InInstruction for an output with this ID" + ); + ScanDb::::reported_in_instruction_for_output(&mut txn, id); + } + // Reformat the InInstructions to just the InInstructions + let in_instructions = in_instructions + .into_iter() + .map(|(_id, in_instruction)| in_instruction) + .collect::>(); + // Send the InInstructions to the report task + // We need to also specify which key is responsible for signing the Batch for these, which + // will always be the oldest key (as the new key signing the Batch signifies handover + // acceptance) + ScanToReportDb::::send_in_instructions( + &mut txn, + b, + &InInstructionData { + external_key_for_session_to_sign_batch: keys[0].key, + returnable_in_instructions: in_instructions, + }, ); - ScanDb::::reported_in_instruction_for_output(&mut txn, id); + + // Send the scan data to the eventuality task + ScanToEventualityDb::::send_scan_data(&mut txn, b, &scan_data); + // Update the next to scan block + ScanDb::::set_next_to_scan_for_outputs_block(&mut txn, b + 1); + txn.commit(); } - // Reformat the InInstructions to just the InInstructions - let in_instructions = - in_instructions.into_iter().map(|(_id, in_instruction)| in_instruction).collect::>(); - // Send the InInstructions to the report task - // We need to also specify which key is responsible for signing the Batch for these, which - // will always be the oldest key (as the new key signing the Batch signifies handover - // acceptance) - ScanToReportDb::::send_in_instructions( - &mut txn, - b, - &InInstructionData { - external_key_for_session_to_sign_batch: keys[0].key, - returnable_in_instructions: in_instructions, - }, - ); - // Send the scan data to the eventuality task - ScanToEventualityDb::::send_scan_data(&mut txn, b, &scan_data); - // Update the next to scan block - ScanDb::::set_next_to_scan_for_outputs_block(&mut txn, b + 1); - txn.commit(); + // Run dependents if we successfully scanned any blocks + Ok(next_to_scan <= latest_scannable) } - - // Run dependents if we successfully scanned any blocks - Ok(next_to_scan <= latest_scannable) } } 
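Every impl converted in this patch follows the same mechanical pattern: drop `#[async_trait::async_trait]`, desugar `async fn` into a method returning `impl Send + Future`, and wrap the old body in `async move { .. }`. A minimal sketch of the pattern, with the generics elided above reconstructed as `Result<bool, String>` (matching how `run_iteration`'s result is logged and consumed); this relies on return-position `impl Trait` in traits, stable since Rust 1.75:

use core::future::Future;

trait ContinuallyRan: Sized {
  // Previously `async fn run_iteration(&mut self) -> Result<bool, String>;` under the
  // async-trait macro, which boxed every returned future. The explicit `Send` bound
  // restores what the macro added implicitly, keeping tasks spawnable on a
  // multi-threaded runtime.
  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>>;
}

// An illustrative implementor, not from this codebase.
struct Counter(u64);
impl ContinuallyRan for Counter {
  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
    async move {
      // The body of the old `async fn` moves here unchanged.
      self.0 += 1;
      Ok(self.0 % 2 == 0)
    }
  }
}

The returned future borrows `self`, which return-position `impl Trait` in traits captures automatically; the only thing the macro provided that must now be written by hand is the `Send` bound.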
diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index fc97daf3..a7302e5c 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -1,4 +1,4 @@ -use core::marker::PhantomData; +use core::{marker::PhantomData, future::Future}; use serai_db::{DbTxn, Db}; @@ -52,115 +52,121 @@ impl SubstrateTask { } } -#[async_trait::async_trait] impl ContinuallyRan for SubstrateTask { - async fn run_iteration(&mut self) -> Result { - let mut made_progress = false; - loop { - // Fetch the next action to handle - let mut txn = self.db.txn(); - let Some(action) = SubstrateDb::::next_action(&mut txn) else { - drop(txn); - return Ok(made_progress); - }; + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut made_progress = false; + loop { + // Fetch the next action to handle + let mut txn = self.db.txn(); + let Some(action) = SubstrateDb::::next_action(&mut txn) else { + drop(txn); + return Ok(made_progress); + }; - match action { - Action::AcknowledgeBatch(AcknowledgeBatch { - batch_id, - in_instruction_succeededs, - mut burns, - key_to_activate, - }) => { - // Check if we have the information for this batch - let Some(block_number) = report::take_block_number_for_batch::(&mut txn, batch_id) - else { - // If we don't, drop this txn (restoring the action to the database) - drop(txn); - return Ok(made_progress); - }; + match action { + Action::AcknowledgeBatch(AcknowledgeBatch { + batch_id, + in_instruction_succeededs, + mut burns, + key_to_activate, + }) => { + // Check if we have the information for this batch + let Some(block_number) = report::take_block_number_for_batch::(&mut txn, batch_id) + else { + // If we don't, drop this txn (restoring the action to the database) + drop(txn); + return Ok(made_progress); + }; - { - let external_key_for_session_to_sign_batch = - report::take_external_key_for_session_to_sign_batch::(&mut txn, batch_id).unwrap(); - AcknowledgedBatches::send(&mut txn, &external_key_for_session_to_sign_batch, batch_id); - } - - // Mark we made progress and handle this - made_progress = true; - - assert!( - ScannerGlobalDb::::is_block_notable(&txn, block_number), - "acknowledging a block which wasn't notable" - ); - if let Some(prior_highest_acknowledged_block) = - ScannerGlobalDb::::highest_acknowledged_block(&txn) - { - // If a single block produced multiple Batches, the block number won't increment - assert!( - block_number >= prior_highest_acknowledged_block, - "acknowledging blocks out-of-order" - ); - for b in (prior_highest_acknowledged_block + 1) .. 
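The mechanical change running through this patch is visible in the hunk above: `#[async_trait]` (which boxed every future) is dropped, `async fn run_iteration` becomes a plain method returning `impl Send + Future`, and the old body is wrapped in `async move`. A minimal sketch of the pattern, assuming the task's error type is the String produced by the `format!` calls above:

use core::future::Future;

// The shape of the de-async-trait'd trait: the method itself is synchronous
// and returns a future, so no Box or dyn dispatch is involved
trait ContinuallyRan {
  // `Send` lets the returned future be driven by a multithreaded runtime
  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>>;
}

struct ExampleTask { iterations: u64 }

impl ContinuallyRan for ExampleTask {
  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
    // `async move` turns the old method body into the returned future
    async move {
      self.iterations += 1;
      // `true` would signal progress was made and dependents should run
      Ok(false)
    }
  }
}
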
block_number { - assert!( - !ScannerGlobalDb::::is_block_notable(&txn, b), - "skipped acknowledging a block which was notable" + { + let external_key_for_session_to_sign_batch = + report::take_external_key_for_session_to_sign_batch::(&mut txn, batch_id) + .unwrap(); + AcknowledgedBatches::send( + &mut txn, + &external_key_for_session_to_sign_batch, + batch_id, ); } - } - ScannerGlobalDb::::set_highest_acknowledged_block(&mut txn, block_number); - if let Some(key_to_activate) = key_to_activate { - ScannerGlobalDb::::queue_key( - &mut txn, - block_number + S::WINDOW_LENGTH, - key_to_activate, + // Mark we made progress and handle this + made_progress = true; + + assert!( + ScannerGlobalDb::::is_block_notable(&txn, block_number), + "acknowledging a block which wasn't notable" ); - } + if let Some(prior_highest_acknowledged_block) = + ScannerGlobalDb::::highest_acknowledged_block(&txn) + { + // If a single block produced multiple Batches, the block number won't increment + assert!( + block_number >= prior_highest_acknowledged_block, + "acknowledging blocks out-of-order" + ); + for b in (prior_highest_acknowledged_block + 1) .. block_number { + assert!( + !ScannerGlobalDb::::is_block_notable(&txn, b), + "skipped acknowledging a block which was notable" + ); + } + } - // Return the balances for any InInstructions which failed to execute - { - let return_information = report::take_return_information::(&mut txn, batch_id) - .expect("didn't save the return information for Batch we published"); - assert_eq!( + ScannerGlobalDb::::set_highest_acknowledged_block(&mut txn, block_number); + if let Some(key_to_activate) = key_to_activate { + ScannerGlobalDb::::queue_key( + &mut txn, + block_number + S::WINDOW_LENGTH, + key_to_activate, + ); + } + + // Return the balances for any InInstructions which failed to execute + { + let return_information = report::take_return_information::(&mut txn, batch_id) + .expect("didn't save the return information for Batch we published"); + assert_eq!( in_instruction_succeededs.len(), return_information.len(), "amount of InInstruction succeededs differed from amount of return information saved" ); - // We map these into standard Burns - for (succeeded, return_information) in - in_instruction_succeededs.into_iter().zip(return_information) - { - if succeeded { - continue; - } + // We map these into standard Burns + for (succeeded, return_information) in + in_instruction_succeededs.into_iter().zip(return_information) + { + if succeeded { + continue; + } - if let Some(report::ReturnInformation { address, balance }) = return_information { - burns.push(OutInstructionWithBalance { - instruction: OutInstruction { address: address.into(), data: None }, - balance, - }); + if let Some(report::ReturnInformation { address, balance }) = return_information { + burns.push(OutInstructionWithBalance { + instruction: OutInstruction { address: address.into(), data: None }, + balance, + }); + } } } + + // We send these Burns as stemming from this block we just acknowledged + // This causes them to be acted on after we accumulate the outputs from this block + SubstrateToEventualityDb::send_burns::(&mut txn, block_number, burns); } - // We send these Burns as stemming from this block we just acknowledged - // This causes them to be acted on after we accumulate the outputs from this block - SubstrateToEventualityDb::send_burns::(&mut txn, block_number, burns); + Action::QueueBurns(burns) => { + // We can instantly handle this so long as we've handled all prior actions + made_progress = true; + + 
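The assertions above pin down an invariant: acknowledgements arrive in order, may repeat a block number (one block can produce several Batches), and may only skip blocks which weren't notable. A standalone sketch, with a plain set standing in for `ScannerGlobalDb`'s notability flags:

use std::collections::HashSet;

// `notable` stands in for ScannerGlobalDb::is_block_notable
fn check_acknowledgement(notable: &HashSet<u64>, prior_highest: Option<u64>, block_number: u64) {
  assert!(notable.contains(&block_number), "acknowledging a block which wasn't notable");
  if let Some(prior) = prior_highest {
    // Equality is allowed (multiple Batches from one block); regression is not
    assert!(block_number >= prior, "acknowledging blocks out-of-order");
    // Every block between the two acknowledgements must be non-notable
    for b in (prior + 1) .. block_number {
      assert!(!notable.contains(&b), "skipped acknowledging a block which was notable");
    }
  }
}
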
let queue_as_of = ScannerGlobalDb::::highest_acknowledged_block(&txn) + .expect("queueing Burns yet never acknowledged a block"); + + SubstrateToEventualityDb::send_burns::(&mut txn, queue_as_of, burns); + } } - Action::QueueBurns(burns) => { - // We can instantly handle this so long as we've handled all prior actions - made_progress = true; - - let queue_as_of = ScannerGlobalDb::::highest_acknowledged_block(&txn) - .expect("queueing Burns yet never acknowledged a block"); - - SubstrateToEventualityDb::send_burns::(&mut txn, queue_as_of, burns); - } + txn.commit(); } - - txn.commit(); } } } diff --git a/processor/signers/Cargo.toml b/processor/signers/Cargo.toml index 7b7ef098..65222896 100644 --- a/processor/signers/Cargo.toml +++ b/processor/signers/Cargo.toml @@ -14,13 +14,12 @@ all-features = true rustdoc-args = ["--cfg", "docsrs"] [package.metadata.cargo-machete] -ignored = ["borsh", "scale"] +ignored = ["borsh"] [lints] workspace = true [dependencies] -async-trait = { version = "0.1", default-features = false } rand_core = { version = "0.6", default-features = false } zeroize = { version = "1", default-features = false, features = ["std"] } diff --git a/processor/signers/src/batch/mod.rs b/processor/signers/src/batch/mod.rs index f08fb5e2..b8ad7ccb 100644 --- a/processor/signers/src/batch/mod.rs +++ b/processor/signers/src/batch/mod.rs @@ -1,3 +1,4 @@ +use core::future::Future; use std::collections::HashSet; use ciphersuite::{group::GroupEncoding, Ristretto}; @@ -75,114 +76,115 @@ impl BatchSignerTask { } } -#[async_trait::async_trait] impl ContinuallyRan for BatchSignerTask { - async fn run_iteration(&mut self) -> Result { - let mut iterated = false; + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut iterated = false; - // Check for new batches to sign - loop { - let mut txn = self.db.txn(); - let Some(batch) = BatchesToSign::try_recv(&mut txn, &self.external_key) else { - break; - }; - iterated = true; + // Check for new batches to sign + loop { + let mut txn = self.db.txn(); + let Some(batch) = BatchesToSign::try_recv(&mut txn, &self.external_key) else { + break; + }; + iterated = true; - // Save this to the database as a transaction to sign - self.active_signing_protocols.insert(batch.id); - ActiveSigningProtocols::set( - &mut txn, - self.session, - &self.active_signing_protocols.iter().copied().collect(), - ); - Batches::set(&mut txn, batch.id, &batch); + // Save this to the database as a transaction to sign + self.active_signing_protocols.insert(batch.id); + ActiveSigningProtocols::set( + &mut txn, + self.session, + &self.active_signing_protocols.iter().copied().collect(), + ); + Batches::set(&mut txn, batch.id, &batch); - let mut machines = Vec::with_capacity(self.keys.len()); - for keys in &self.keys { - machines.push(WrappedSchnorrkelMachine::new(keys.clone(), batch_message(&batch))); - } - for msg in self.attempt_manager.register(VariantSignId::Batch(batch.id), machines) { - BatchSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); - } - - txn.commit(); - } - - // Check for acknowledged Batches (meaning we should no longer sign for these Batches) - loop { - let mut txn = self.db.txn(); - let Some(id) = AcknowledgedBatches::try_recv(&mut txn, &self.external_key) else { - break; - }; - - { - let last_acknowledged = LastAcknowledgedBatch::get(&txn); - if Some(id) > last_acknowledged { - LastAcknowledgedBatch::set(&mut txn, &id); + let mut machines = Vec::with_capacity(self.keys.len()); + for keys in &self.keys { + 
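Note how every mutation of `active_signing_protocols` above is immediately followed by re-writing the whole set to the database, so a restarted signer resumes exactly the protocols it had in flight. A sketch of that bookkeeping, with a hypothetical `persist` callback standing in for `ActiveSigningProtocols::set` (Batch IDs are shown as u32):

use std::collections::HashSet;

struct SignerState { active_signing_protocols: HashSet<u32> }

impl SignerState {
  // Mirrors the diff: mutate the in-memory set, then persist a full snapshot
  // within the same database transaction as the work that mutated it
  fn register(&mut self, id: u32, persist: impl FnOnce(Vec<u32>)) {
    self.active_signing_protocols.insert(id);
    persist(self.active_signing_protocols.iter().copied().collect());
  }

  fn retire(&mut self, id: u32, persist: impl FnOnce(Vec<u32>)) {
    self.active_signing_protocols.remove(&id);
    persist(self.active_signing_protocols.iter().copied().collect());
  }
}
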
machines.push(WrappedSchnorrkelMachine::new(keys.clone(), batch_message(&batch))); } + for msg in self.attempt_manager.register(VariantSignId::Batch(batch.id), machines) { + BatchSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + } + + txn.commit(); } - /* - We may have yet to register this signing protocol. + // Check for acknowledged Batches (meaning we should no longer sign for these Batches) + loop { + let mut txn = self.db.txn(); + let Some(id) = AcknowledgedBatches::try_recv(&mut txn, &self.external_key) else { + break; + }; - While `BatchesToSign` is populated before `AcknowledgedBatches`, we could theoretically have - `BatchesToSign` populated with a new batch _while iterating over `AcknowledgedBatches`_, and - then have `AcknowledgedBatched` populated. In that edge case, we will see the - acknowledgement notification before we see the transaction. - - In such a case, we break (dropping the txn, re-queueing the acknowledgement notification). - On the task's next iteration, we'll process the Batch from `BatchesToSign` and be - able to make progress. - */ - if !self.active_signing_protocols.remove(&id) { - break; - } - iterated = true; - - // Since it was, remove this as an active signing protocol - ActiveSigningProtocols::set( - &mut txn, - self.session, - &self.active_signing_protocols.iter().copied().collect(), - ); - // Clean up the database - Batches::del(&mut txn, id); - SignedBatches::del(&mut txn, id); - - // We retire with a txn so we either successfully flag this Batch as acknowledged, and - // won't re-register it (making this retire safe), or we don't flag it, meaning we will - // re-register it, yet that's safe as we have yet to retire it - self.attempt_manager.retire(&mut txn, VariantSignId::Batch(id)); - - txn.commit(); - } - - // Handle any messages sent to us - loop { - let mut txn = self.db.txn(); - let Some(msg) = CoordinatorToBatchSignerMessages::try_recv(&mut txn, self.session) else { - break; - }; - iterated = true; - - match self.attempt_manager.handle(msg) { - Response::Messages(msgs) => { - for msg in msgs { - BatchSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + { + let last_acknowledged = LastAcknowledgedBatch::get(&txn); + if Some(id) > last_acknowledged { + LastAcknowledgedBatch::set(&mut txn, &id); } } - Response::Signature { id, signature } => { - let VariantSignId::Batch(id) = id else { panic!("BatchSignerTask signed a non-Batch") }; - let batch = - Batches::get(&txn, id).expect("signed a Batch we didn't save to the database"); - let signed_batch = SignedBatch { batch, signature: signature.into() }; - SignedBatches::set(&mut txn, signed_batch.batch.id, &signed_batch); + + /* + We may have yet to register this signing protocol. + + While `BatchesToSign` is populated before `AcknowledgedBatches`, we could theoretically + have `BatchesToSign` populated with a new batch _while iterating over + `AcknowledgedBatches`_, and then have `AcknowledgedBatched` populated. In that edge case, + we will see the acknowledgement notification before we see the transaction. + + In such a case, we break (dropping the txn, re-queueing the acknowledgement notification). + On the task's next iteration, we'll process the Batch from `BatchesToSign` and be + able to make progress. 
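The comment block above resolves the race by leaning on transaction semantics: breaking without a commit drops the txn, which re-queues the acknowledgement for the next iteration. A condensed sketch of that decision (types are hypothetical stand-ins):

use std::collections::HashSet;

// Returning false models the "break": the txn is dropped uncommitted, so the
// acknowledgement stays queued and is retried after the next iteration
// registers the Batch from `BatchesToSign`.
fn try_consume_acknowledgement(active: &mut HashSet<u32>, id: u32) -> bool {
  if !active.remove(&id) {
    return false;
  }
  // The protocol was registered: safe to retire it and commit the txn
  true
}
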
+ */ + if !self.active_signing_protocols.remove(&id) { + break; } + iterated = true; + + // Since it was, remove this as an active signing protocol + ActiveSigningProtocols::set( + &mut txn, + self.session, + &self.active_signing_protocols.iter().copied().collect(), + ); + // Clean up the database + Batches::del(&mut txn, id); + SignedBatches::del(&mut txn, id); + + // We retire with a txn so we either successfully flag this Batch as acknowledged, and + // won't re-register it (making this retire safe), or we don't flag it, meaning we will + // re-register it, yet that's safe as we have yet to retire it + self.attempt_manager.retire(&mut txn, VariantSignId::Batch(id)); + + txn.commit(); } - txn.commit(); - } + // Handle any messages sent to us + loop { + let mut txn = self.db.txn(); + let Some(msg) = CoordinatorToBatchSignerMessages::try_recv(&mut txn, self.session) else { + break; + }; + iterated = true; - Ok(iterated) + match self.attempt_manager.handle(msg) { + Response::Messages(msgs) => { + for msg in msgs { + BatchSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + } + } + Response::Signature { id, signature } => { + let VariantSignId::Batch(id) = id else { panic!("BatchSignerTask signed a non-Batch") }; + let batch = + Batches::get(&txn, id).expect("signed a Batch we didn't save to the database"); + let signed_batch = SignedBatch { batch, signature: signature.into() }; + SignedBatches::set(&mut txn, signed_batch.batch.id, &signed_batch); + } + } + + txn.commit(); + } + + Ok(iterated) + } } } diff --git a/processor/signers/src/coordinator/mod.rs b/processor/signers/src/coordinator/mod.rs index e749f841..1e3c84d2 100644 --- a/processor/signers/src/coordinator/mod.rs +++ b/processor/signers/src/coordinator/mod.rs @@ -1,3 +1,5 @@ +use core::future::Future; + use scale::Decode; use serai_db::{DbTxn, Db}; @@ -19,149 +21,157 @@ impl CoordinatorTask { } } -#[async_trait::async_trait] impl ContinuallyRan for CoordinatorTask { - async fn run_iteration(&mut self) -> Result { - let mut iterated = false; + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut iterated = false; - for session in RegisteredKeys::get(&self.db).unwrap_or(vec![]) { - // Publish the messages generated by this key's signers - loop { - let mut txn = self.db.txn(); - let Some(msg) = CosignerToCoordinatorMessages::try_recv(&mut txn, session) else { - break; - }; - iterated = true; - - self - .coordinator - .send(msg) - .await - .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; - - txn.commit(); - } - - loop { - let mut txn = self.db.txn(); - let Some(msg) = BatchSignerToCoordinatorMessages::try_recv(&mut txn, session) else { - break; - }; - iterated = true; - - self - .coordinator - .send(msg) - .await - .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; - - txn.commit(); - } - - loop { - let mut txn = self.db.txn(); - let Some(msg) = SlashReportSignerToCoordinatorMessages::try_recv(&mut txn, session) else { - break; - }; - iterated = true; - - self - .coordinator - .send(msg) - .await - .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; - - txn.commit(); - } - - loop { - let mut txn = self.db.txn(); - let Some(msg) = TransactionSignerToCoordinatorMessages::try_recv(&mut txn, session) else { - break; - }; - iterated = true; - - self - .coordinator - .send(msg) - .await - .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; - - txn.commit(); - } - - // Publish the cosigns 
from this session - { - let mut txn = self.db.txn(); - while let Some(((block_number, block_id), signature)) = Cosign::try_recv(&mut txn, session) - { - iterated = true; - self - .coordinator - .publish_cosign(block_number, block_id, <_>::decode(&mut signature.as_slice()).unwrap()) - .await - .map_err(|e| format!("couldn't publish Cosign: {e:?}"))?; - } - txn.commit(); - } - - // If this session signed its slash report, publish its signature - { - let mut txn = self.db.txn(); - if let Some(slash_report_signature) = SlashReportSignature::try_recv(&mut txn, session) { + for session in RegisteredKeys::get(&self.db).unwrap_or(vec![]) { + // Publish the messages generated by this key's signers + loop { + let mut txn = self.db.txn(); + let Some(msg) = CosignerToCoordinatorMessages::try_recv(&mut txn, session) else { + break; + }; iterated = true; self .coordinator - .publish_slash_report_signature( - session, - <_>::decode(&mut slash_report_signature.as_slice()).unwrap(), - ) + .send(msg) .await - .map_err(|e| { - format!("couldn't send slash report signature to the coordinator: {e:?}") - })?; + .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; txn.commit(); } - } - } - // Publish the Batches - { - let mut txn = self.db.txn(); - while let Some(batch) = scanner::Batches::try_recv(&mut txn) { - iterated = true; - self - .coordinator - .publish_batch(batch) - .await - .map_err(|e| format!("couldn't publish Batch: {e:?}"))?; - } - txn.commit(); - } + loop { + let mut txn = self.db.txn(); + let Some(msg) = BatchSignerToCoordinatorMessages::try_recv(&mut txn, session) else { + break; + }; + iterated = true; - // Publish the signed Batches - { - let mut txn = self.db.txn(); - // The last acknowledged Batch may exceed the last Batch we published if we didn't sign for - // the prior Batch(es) (and accordingly didn't publish them) - let last_batch = - crate::batch::last_acknowledged_batch(&txn).max(db::LastPublishedBatch::get(&txn)); - let mut next_batch = last_batch.map_or(0, |id| id + 1); - while let Some(batch) = crate::batch::signed_batch(&txn, next_batch) { - iterated = true; - db::LastPublishedBatch::set(&mut txn, &batch.batch.id); - self - .coordinator - .publish_signed_batch(batch) - .await - .map_err(|e| format!("couldn't publish Batch: {e:?}"))?; - next_batch += 1; - } - txn.commit(); - } + self + .coordinator + .send(msg) + .await + .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; - Ok(iterated) + txn.commit(); + } + + loop { + let mut txn = self.db.txn(); + let Some(msg) = SlashReportSignerToCoordinatorMessages::try_recv(&mut txn, session) + else { + break; + }; + iterated = true; + + self + .coordinator + .send(msg) + .await + .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; + + txn.commit(); + } + + loop { + let mut txn = self.db.txn(); + let Some(msg) = TransactionSignerToCoordinatorMessages::try_recv(&mut txn, session) + else { + break; + }; + iterated = true; + + self + .coordinator + .send(msg) + .await + .map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?; + + txn.commit(); + } + + // Publish the cosigns from this session + { + let mut txn = self.db.txn(); + while let Some(((block_number, block_id), signature)) = + Cosign::try_recv(&mut txn, session) + { + iterated = true; + self + .coordinator + .publish_cosign( + block_number, + block_id, + <_>::decode(&mut signature.as_slice()).unwrap(), + ) + .await + .map_err(|e| format!("couldn't publish Cosign: {e:?}"))?; + } + 
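Every forwarding loop above follows one discipline: receive through an uncommitted txn, `.await` the coordinator send, and only then commit, so a send failure redelivers the message on the next iteration. A synchronous sketch of that at-least-once shape, with a plain Vec standing in for the db-backed channel:

// Peek-then-pop models receiving within an uncommitted txn: on error we
// return before the pop, so the message stays queued for the next iteration.
fn forward_all(
  channel: &mut Vec<String>,
  send: impl Fn(&str) -> Result<(), String>,
) -> Result<bool, String> {
  let mut iterated = false;
  while let Some(msg) = channel.last().cloned() {
    iterated = true;
    send(&msg).map_err(|e| format!("couldn't send sign message to the coordinator: {e:?}"))?;
    // The pop stands in for txn.commit(), consuming the message exactly once
    channel.pop();
  }
  Ok(iterated)
}
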
txn.commit(); + } + + // If this session signed its slash report, publish its signature + { + let mut txn = self.db.txn(); + if let Some(slash_report_signature) = SlashReportSignature::try_recv(&mut txn, session) { + iterated = true; + + self + .coordinator + .publish_slash_report_signature( + session, + <_>::decode(&mut slash_report_signature.as_slice()).unwrap(), + ) + .await + .map_err(|e| { + format!("couldn't send slash report signature to the coordinator: {e:?}") + })?; + + txn.commit(); + } + } + } + + // Publish the Batches + { + let mut txn = self.db.txn(); + while let Some(batch) = scanner::Batches::try_recv(&mut txn) { + iterated = true; + self + .coordinator + .publish_batch(batch) + .await + .map_err(|e| format!("couldn't publish Batch: {e:?}"))?; + } + txn.commit(); + } + + // Publish the signed Batches + { + let mut txn = self.db.txn(); + // The last acknowledged Batch may exceed the last Batch we published if we didn't sign for + // the prior Batch(es) (and accordingly didn't publish them) + let last_batch = + crate::batch::last_acknowledged_batch(&txn).max(db::LastPublishedBatch::get(&txn)); + let mut next_batch = last_batch.map_or(0, |id| id + 1); + while let Some(batch) = crate::batch::signed_batch(&txn, next_batch) { + iterated = true; + db::LastPublishedBatch::set(&mut txn, &batch.batch.id); + self + .coordinator + .publish_signed_batch(batch) + .await + .map_err(|e| format!("couldn't publish Batch: {e:?}"))?; + next_batch += 1; + } + txn.commit(); + } + + Ok(iterated) + } } } diff --git a/processor/signers/src/cosign/mod.rs b/processor/signers/src/cosign/mod.rs index 41db8050..2de18e86 100644 --- a/processor/signers/src/cosign/mod.rs +++ b/processor/signers/src/cosign/mod.rs @@ -1,3 +1,5 @@ +use core::future::Future; + use ciphersuite::Ristretto; use frost::dkg::ThresholdKeys; @@ -48,75 +50,76 @@ impl CosignerTask { } } -#[async_trait::async_trait] impl ContinuallyRan for CosignerTask { - async fn run_iteration(&mut self) -> Result { - let mut iterated = false; + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut iterated = false; - // Check the cosign to work on - { - let mut txn = self.db.txn(); - if let Some(cosign) = ToCosign::get(&txn, self.session) { - // If this wasn't already signed for... - if LatestCosigned::get(&txn, self.session) < Some(cosign.0) { - // If this isn't the cosign we're currently working on, meaning it's fresh - if self.current_cosign != Some(cosign) { - // Retire the current cosign - if let Some(current_cosign) = self.current_cosign { - assert!(current_cosign.0 < cosign.0); - self.attempt_manager.retire(&mut txn, VariantSignId::Cosign(current_cosign.0)); - } - - // Set the cosign being worked on - self.current_cosign = Some(cosign); - - let mut machines = Vec::with_capacity(self.keys.len()); - { - let message = cosign_block_msg(cosign.0, cosign.1); - for keys in &self.keys { - machines.push(WrappedSchnorrkelMachine::new(keys.clone(), message.clone())); + // Check the cosign to work on + { + let mut txn = self.db.txn(); + if let Some(cosign) = ToCosign::get(&txn, self.session) { + // If this wasn't already signed for... 
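The publication cursor above handles a subtle case: the network may have acknowledged Batches we never published (because we didn't sign for them), so the next ID to publish follows the later of the two markers. A sketch of the cursor arithmetic, relying on `None` comparing below any `Some`:

// Batch IDs are sequential; either marker may be absent on a fresh database
fn next_batch_to_publish(last_acknowledged: Option<u32>, last_published: Option<u32>) -> u32 {
  let last_batch = last_acknowledged.max(last_published);
  last_batch.map_or(0, |id| id + 1)
}

fn main() {
  // Never published nor acknowledged anything: start from Batch 0
  assert_eq!(next_batch_to_publish(None, None), 0);
  // Batches up to 4 were acknowledged even though we only published up to 2
  assert_eq!(next_batch_to_publish(Some(4), Some(2)), 5);
}
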
+ if LatestCosigned::get(&txn, self.session) < Some(cosign.0) { + // If this isn't the cosign we're currently working on, meaning it's fresh + if self.current_cosign != Some(cosign) { + // Retire the current cosign + if let Some(current_cosign) = self.current_cosign { + assert!(current_cosign.0 < cosign.0); + self.attempt_manager.retire(&mut txn, VariantSignId::Cosign(current_cosign.0)); } + + // Set the cosign being worked on + self.current_cosign = Some(cosign); + + let mut machines = Vec::with_capacity(self.keys.len()); + { + let message = cosign_block_msg(cosign.0, cosign.1); + for keys in &self.keys { + machines.push(WrappedSchnorrkelMachine::new(keys.clone(), message.clone())); + } + } + for msg in self.attempt_manager.register(VariantSignId::Cosign(cosign.0), machines) { + CosignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + } + + txn.commit(); } - for msg in self.attempt_manager.register(VariantSignId::Cosign(cosign.0), machines) { + } + } + } + + // Handle any messages sent to us + loop { + let mut txn = self.db.txn(); + let Some(msg) = CoordinatorToCosignerMessages::try_recv(&mut txn, self.session) else { + break; + }; + iterated = true; + + match self.attempt_manager.handle(msg) { + Response::Messages(msgs) => { + for msg in msgs { CosignerToCoordinatorMessages::send(&mut txn, self.session, &msg); } + } + Response::Signature { id, signature } => { + let VariantSignId::Cosign(block_number) = id else { + panic!("CosignerTask signed a non-Cosign") + }; + assert_eq!(Some(block_number), self.current_cosign.map(|cosign| cosign.0)); - txn.commit(); + let cosign = self.current_cosign.take().unwrap(); + LatestCosigned::set(&mut txn, self.session, &cosign.0); + // Send the cosign + Cosign::send(&mut txn, self.session, &(cosign, Signature::from(signature).encode())); } } - } - } - // Handle any messages sent to us - loop { - let mut txn = self.db.txn(); - let Some(msg) = CoordinatorToCosignerMessages::try_recv(&mut txn, self.session) else { - break; - }; - iterated = true; - - match self.attempt_manager.handle(msg) { - Response::Messages(msgs) => { - for msg in msgs { - CosignerToCoordinatorMessages::send(&mut txn, self.session, &msg); - } - } - Response::Signature { id, signature } => { - let VariantSignId::Cosign(block_number) = id else { - panic!("CosignerTask signed a non-Cosign") - }; - assert_eq!(Some(block_number), self.current_cosign.map(|cosign| cosign.0)); - - let cosign = self.current_cosign.take().unwrap(); - LatestCosigned::set(&mut txn, self.session, &cosign.0); - // Send the cosign - Cosign::send(&mut txn, self.session, &(cosign, Signature::from(signature).encode())); - } + txn.commit(); } - txn.commit(); + Ok(iterated) } - - Ok(iterated) } } diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index e06dd07f..c76fbd32 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -2,7 +2,7 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use core::{fmt::Debug, marker::PhantomData}; +use core::{future::Future, fmt::Debug, marker::PhantomData}; use std::collections::HashMap; use zeroize::Zeroizing; @@ -43,7 +43,6 @@ mod transaction; use transaction::TransactionSignerTask; /// A connection to the Coordinator which messages can be published with. -#[async_trait::async_trait] pub trait Coordinator: 'static + Send + Sync { /// An error encountered when interacting with a coordinator. 
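The freshness check above also leans on `Option`'s ordering: `None < Some(n)` for every `n`, so a session which never cosigned accepts its first request, while stale or repeated requests are ignored. A sketch of just that gate:

fn should_start_cosign(latest_cosigned: Option<u64>, requested_block: u64) -> bool {
  // None (never cosigned) compares less than any Some, so a fresh session
  // always accepts its first cosign request
  latest_cosigned < Some(requested_block)
}

fn main() {
  assert!(should_start_cosign(None, 10));
  assert!(should_start_cosign(Some(9), 10));
  // Already signed this (or a later) block: nothing to do
  assert!(!should_start_cosign(Some(10), 10));
}
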
/// @@ -52,32 +51,38 @@ pub trait Coordinator: 'static + Send + Sync { type EphemeralError: Debug; /// Send a `messages::sign::ProcessorMessage`. - async fn send(&mut self, message: ProcessorMessage) -> Result<(), Self::EphemeralError>; + fn send( + &mut self, + message: ProcessorMessage, + ) -> impl Send + Future>; /// Publish a cosign. - async fn publish_cosign( + fn publish_cosign( &mut self, block_number: u64, block_id: [u8; 32], signature: Signature, - ) -> Result<(), Self::EphemeralError>; + ) -> impl Send + Future>; /// Publish a `Batch`. - async fn publish_batch(&mut self, batch: Batch) -> Result<(), Self::EphemeralError>; + fn publish_batch(&mut self, batch: Batch) + -> impl Send + Future>; /// Publish a `SignedBatch`. - async fn publish_signed_batch(&mut self, batch: SignedBatch) -> Result<(), Self::EphemeralError>; + fn publish_signed_batch( + &mut self, + batch: SignedBatch, + ) -> impl Send + Future>; /// Publish a slash report's signature. - async fn publish_slash_report_signature( + fn publish_slash_report_signature( &mut self, session: Session, signature: Signature, - ) -> Result<(), Self::EphemeralError>; + ) -> impl Send + Future>; } /// An object capable of publishing a transaction. -#[async_trait::async_trait] pub trait TransactionPublisher: 'static + Send + Sync + Clone { /// An error encountered when publishing a transaction. /// @@ -92,7 +97,7 @@ pub trait TransactionPublisher: 'static + Send + Sync + Clone { /// /// The transaction already being present in the mempool/on-chain MUST NOT be considered an /// error. - async fn publish(&self, tx: T) -> Result<(), Self::EphemeralError>; + fn publish(&self, tx: T) -> impl Send + Future>; } struct Tasks { diff --git a/processor/signers/src/slash_report.rs b/processor/signers/src/slash_report.rs index 19a2523b..e040798c 100644 --- a/processor/signers/src/slash_report.rs +++ b/processor/signers/src/slash_report.rs @@ -1,4 +1,4 @@ -use core::marker::PhantomData; +use core::{marker::PhantomData, future::Future}; use ciphersuite::Ristretto; use frost::dkg::ThresholdKeys; @@ -51,70 +51,72 @@ impl SlashReportSignerTask { } } -#[async_trait::async_trait] impl ContinuallyRan for SlashReportSignerTask { - async fn run_iteration(&mut self) -> Result { - let mut iterated = false; + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut iterated = false; - // Check for the slash report to sign - if !self.has_slash_report { - let mut txn = self.db.txn(); - let Some(slash_report) = SlashReport::try_recv(&mut txn, self.session) else { - return Ok(false); - }; - // We only commit this upon successfully signing this slash report - drop(txn); - iterated = true; + // Check for the slash report to sign + if !self.has_slash_report { + let mut txn = self.db.txn(); + let Some(slash_report) = SlashReport::try_recv(&mut txn, self.session) else { + return Ok(false); + }; + // We only commit this upon successfully signing this slash report + drop(txn); + iterated = true; - self.has_slash_report = true; + self.has_slash_report = true; - let mut machines = Vec::with_capacity(self.keys.len()); - { - let message = report_slashes_message( - &ValidatorSet { network: S::NETWORK, session: self.session }, - &SlashReportStruct(slash_report.try_into().unwrap()), - ); - for keys in &self.keys { - machines.push(WrappedSchnorrkelMachine::new(keys.clone(), message.clone())); - } - } - let mut txn = self.db.txn(); - for msg in self.attempt_manager.register(VariantSignId::SlashReport(self.session), machines) { - 
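With the trait methods now returning `impl Send + Future`, an implementor still writes an ordinary `async` body. A sketch of implementing the reworked `TransactionPublisher` shape for a no-op publisher; the trait is reproduced here in simplified form (the real one is generic over the transaction type, which this sketch narrows to `Vec<u8>`):

use core::{fmt::Debug, future::Future};

trait TransactionPublisher<T: Send>: 'static + Send + Sync + Clone {
  type EphemeralError: Debug;
  fn publish(&self, tx: T) -> impl Send + Future<Output = Result<(), Self::EphemeralError>>;
}

#[derive(Clone)]
struct NoopPublisher;

impl TransactionPublisher<Vec<u8>> for NoopPublisher {
  type EphemeralError = String;
  fn publish(&self, tx: Vec<u8>) -> impl Send + Future<Output = Result<(), Self::EphemeralError>> {
    async move {
      // A real publisher would broadcast here; per the trait's contract, a
      // transaction already in the mempool/on-chain must not be an error
      let _ = tx;
      Ok(())
    }
  }
}
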
SlashReportSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); - } - txn.commit(); - } - - // Handle any messages sent to us - loop { - let mut txn = self.db.txn(); - let Some(msg) = CoordinatorToSlashReportSignerMessages::try_recv(&mut txn, self.session) - else { - break; - }; - iterated = true; - - match self.attempt_manager.handle(msg) { - Response::Messages(msgs) => { - for msg in msgs { - SlashReportSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + let mut machines = Vec::with_capacity(self.keys.len()); + { + let message = report_slashes_message( + &ValidatorSet { network: S::NETWORK, session: self.session }, + &SlashReportStruct(slash_report.try_into().unwrap()), + ); + for keys in &self.keys { + machines.push(WrappedSchnorrkelMachine::new(keys.clone(), message.clone())); } } - Response::Signature { id, signature } => { - let VariantSignId::SlashReport(session) = id else { - panic!("SlashReportSignerTask signed a non-SlashReport") - }; - assert_eq!(session, self.session); - // Drain the channel - SlashReport::try_recv(&mut txn, self.session).unwrap(); - // Send the signature - SlashReportSignature::send(&mut txn, session, &Signature::from(signature).encode()); + let mut txn = self.db.txn(); + for msg in self.attempt_manager.register(VariantSignId::SlashReport(self.session), machines) + { + SlashReportSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); } + txn.commit(); } - txn.commit(); - } + // Handle any messages sent to us + loop { + let mut txn = self.db.txn(); + let Some(msg) = CoordinatorToSlashReportSignerMessages::try_recv(&mut txn, self.session) + else { + break; + }; + iterated = true; - Ok(iterated) + match self.attempt_manager.handle(msg) { + Response::Messages(msgs) => { + for msg in msgs { + SlashReportSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + } + } + Response::Signature { id, signature } => { + let VariantSignId::SlashReport(session) = id else { + panic!("SlashReportSignerTask signed a non-SlashReport") + }; + assert_eq!(session, self.session); + // Drain the channel + SlashReport::try_recv(&mut txn, self.session).unwrap(); + // Send the signature + SlashReportSignature::send(&mut txn, session, &Signature::from(signature).encode()); + } + } + + txn.commit(); + } + + Ok(iterated) + } } } diff --git a/processor/signers/src/transaction/mod.rs b/processor/signers/src/transaction/mod.rs index b9b62e75..f089e931 100644 --- a/processor/signers/src/transaction/mod.rs +++ b/processor/signers/src/transaction/mod.rs @@ -1,3 +1,4 @@ +use core::future::Future; use std::{ collections::HashSet, time::{Duration, Instant}, @@ -88,11 +89,10 @@ impl> } } -#[async_trait::async_trait] impl>> ContinuallyRan for TransactionSignerTask { - async fn run_iteration(&mut self) -> Result { + fn run_iteration(&mut self) -> impl Send + Future> {async{ let mut iterated = false; // Check for new transactions to sign @@ -233,3 +233,4 @@ impl> Ok(iterated) } } +} diff --git a/processor/src/tests/scanner.rs b/processor/src/tests/scanner.rs index 6421c499..a40e465c 100644 --- a/processor/src/tests/scanner.rs +++ b/processor/src/tests/scanner.rs @@ -71,7 +71,7 @@ pub async fn test_scanner( let block_id = block.id(); // Verify the Scanner picked them up - let verify_event = |mut scanner: ScannerHandle| async { + let verify_event = |mut scanner: ScannerHandle| async move { let outputs = match timeout(Duration::from_secs(30), scanner.events.recv()).await.unwrap().unwrap() { ScannerEvent::Block { is_retirement_block, block, outputs } => { From 
1b391384720a33c01ecefb65591a64f573a352eb Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 13 Sep 2024 02:12:32 -0400 Subject: [PATCH 117/368] Define subaddress indexes to use (1, 0) is the external address. (2, *) are the internal addresses. --- Cargo.lock | 7 ------- processor/monero/src/primitives/block.rs | 13 +++++++++++-- processor/monero/src/primitives/mod.rs | 7 +++++++ processor/monero/src/primitives/output.rs | 16 +++++++++++++++- 4 files changed, 33 insertions(+), 10 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index b3419a85..01edbcfe 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8123,7 +8123,6 @@ dependencies = [ name = "serai-bitcoin-processor" version = "0.1.0" dependencies = [ - "async-trait", "bitcoin-serai", "borsh", "ciphersuite", @@ -8349,7 +8348,6 @@ version = "0.1.0" name = "serai-ethereum-processor" version = "0.1.0" dependencies = [ - "async-trait", "borsh", "const-hex", "env_logger", @@ -8514,7 +8512,6 @@ dependencies = [ name = "serai-monero-processor" version = "0.1.0" dependencies = [ - "async-trait", "borsh", "ciphersuite", "dalek-ff-group", @@ -8644,7 +8641,6 @@ dependencies = [ name = "serai-processor-bin" version = "0.1.0" dependencies = [ - "async-trait", "borsh", "ciphersuite", "dkg", @@ -8718,7 +8714,6 @@ dependencies = [ name = "serai-processor-primitives" version = "0.1.0" dependencies = [ - "async-trait", "borsh", "group", "log", @@ -8732,7 +8727,6 @@ dependencies = [ name = "serai-processor-scanner" version = "0.1.0" dependencies = [ - "async-trait", "borsh", "group", "hex", @@ -8762,7 +8756,6 @@ dependencies = [ name = "serai-processor-signers" version = "0.1.0" dependencies = [ - "async-trait", "borsh", "ciphersuite", "frost-schnorrkel", diff --git a/processor/monero/src/primitives/block.rs b/processor/monero/src/primitives/block.rs index ad28b0c1..634a0fbb 100644 --- a/processor/monero/src/primitives/block.rs +++ b/processor/monero/src/primitives/block.rs @@ -2,13 +2,13 @@ use std::collections::HashMap; use ciphersuite::{Ciphersuite, Ed25519}; -use monero_wallet::{transaction::Transaction, block::Block as MBlock}; +use monero_wallet::{transaction::Transaction, block::Block as MBlock, ViewPairError, GuaranteedViewPair, GuaranteedScanner}; use serai_client::networks::monero::Address; use primitives::{ReceivedOutput, EventualityTracker}; -use crate::{output::Output, transaction::Eventuality}; +use crate::{EXTERNAL_SUBADDRESS, BRANCH_SUBADDRESS, CHANGE_SUBADDRESS, FORWARDED_SUBADDRESS, output::Output, transaction::Eventuality}; #[derive(Clone, Debug)] pub(crate) struct BlockHeader(pub(crate) MBlock); @@ -37,6 +37,15 @@ impl primitives::Block for Block { } fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec { + let view_pair = match GuaranteedViewPair::new(key.0, additional_key) { + Ok(view_pair) => view_pair, + Err(ViewPairError::TorsionedSpendKey) => unreachable!("dalek_ff_group::EdwardsPoint has torsion"), + }; + let mut scanner = GuaranteedScanner::new(view_pair); + scanner.register_subaddress(EXTERNAL_SUBADDRESS.unwrap()); + scanner.register_subaddress(BRANCH_SUBADDRESS.unwrap()); + scanner.register_subaddress(CHANGE_SUBADDRESS.unwrap()); + scanner.register_subaddress(FORWARDED_SUBADDRESS.unwrap()); todo!("TODO") } diff --git a/processor/monero/src/primitives/mod.rs b/processor/monero/src/primitives/mod.rs index fba52dd9..de057399 100644 --- a/processor/monero/src/primitives/mod.rs +++ b/processor/monero/src/primitives/mod.rs @@ -1,3 +1,10 @@ +use monero_wallet::address::SubaddressIndex; + pub(crate) mod output; pub(crate) mod 
transaction; pub(crate) mod block; + +pub(crate) const EXTERNAL_SUBADDRESS: Option = SubaddressIndex::new(1, 0); +pub(crate) const BRANCH_SUBADDRESS: Option = SubaddressIndex::new(2, 0); +pub(crate) const CHANGE_SUBADDRESS: Option = SubaddressIndex::new(2, 1); +pub(crate) const FORWARDED_SUBADDRESS: Option = SubaddressIndex::new(2, 2); diff --git a/processor/monero/src/primitives/output.rs b/processor/monero/src/primitives/output.rs index d3eb3be3..385429c2 100644 --- a/processor/monero/src/primitives/output.rs +++ b/processor/monero/src/primitives/output.rs @@ -14,6 +14,8 @@ use serai_client::{ use primitives::{OutputType, ReceivedOutput}; +use crate::{EXTERNAL_SUBADDRESS, BRANCH_SUBADDRESS, CHANGE_SUBADDRESS, FORWARDED_SUBADDRESS}; + #[rustfmt::skip] #[derive( Clone, Copy, PartialEq, Eq, Default, Hash, Debug, Encode, Decode, BorshSerialize, BorshDeserialize, @@ -44,7 +46,19 @@ impl ReceivedOutput<::G, Address> for Output { type TransactionId = [u8; 32]; fn kind(&self) -> OutputType { - todo!("TODO") + if self.0.subaddress() == EXTERNAL_SUBADDRESS { + return OutputType::External; + } + if self.0.subaddress() == BRANCH_SUBADDRESS { + return OutputType::Branch; + } + if self.0.subaddress() == CHANGE_SUBADDRESS { + return OutputType::Change; + } + if self.0.subaddress() == FORWARDED_SUBADDRESS { + return OutputType::Forwarded; + } + unreachable!("scanned output to unknown subaddress"); } fn id(&self) -> Self::Id { From b4e94f3d5126315e9b207a8b7134ffa8bd509437 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 13 Sep 2024 05:10:37 -0400 Subject: [PATCH 118/368] cargo fmt signers/scanner --- processor/scanner/src/lib.rs | 34 ++-- processor/signers/src/lib.rs | 6 +- processor/signers/src/transaction/mod.rs | 248 ++++++++++++----------- 3 files changed, 147 insertions(+), 141 deletions(-) diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index a5c5c038..6ac45223 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -145,25 +145,27 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { getter: &(impl Send + Sync + Get), number: u64, ) -> impl Send + Future> { - async move {let block = match self.unchecked_block_by_number(number).await { - Ok(block) => block, - Err(e) => Err(format!("couldn't fetch block {number}: {e:?}"))?, - }; + async move { + let block = match self.unchecked_block_by_number(number).await { + Ok(block) => block, + Err(e) => Err(format!("couldn't fetch block {number}: {e:?}"))?, + }; - // Check the ID of this block is the expected ID - { - let expected = crate::index::block_id(getter, number); - if block.id() != expected { - panic!( - "finalized chain reorganized from {} to {} at {}", - hex::encode(expected), - hex::encode(block.id()), - number, - ); + // Check the ID of this block is the expected ID + { + let expected = crate::index::block_id(getter, number); + if block.id() != expected { + panic!( + "finalized chain reorganized from {} to {} at {}", + hex::encode(expected), + hex::encode(block.id()), + number, + ); + } } - } - Ok(block)} + Ok(block) + } } /// The dust threshold for the specified coin. diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index c76fbd32..a6714fdf 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -65,8 +65,10 @@ pub trait Coordinator: 'static + Send + Sync { ) -> impl Send + Future>; /// Publish a `Batch`. 
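The constants above encode the commit's layout: account 1, index 0 is the sole external address, while account 2 holds the internal Branch/Change/Forwarded addresses. A sketch of the resulting classification as a plain table-like match, with a hypothetical helper returning kind names (the real `kind()` compares the scanned output's subaddress against the `Option<SubaddressIndex>` constants directly):

// (account, address_index) -> the output kind scanned at that subaddress,
// mirroring the constants above: (1, 0) is external; (2, *) are internal
fn kind_for_subaddress(account: u32, address: u32) -> Option<&'static str> {
  match (account, address) {
    (1, 0) => Some("External"),
    (2, 0) => Some("Branch"),
    (2, 1) => Some("Change"),
    (2, 2) => Some("Forwarded"),
    // Anything else (including the root (0, 0)) is never scanned for
    _ => None,
  }
}

fn main() {
  assert_eq!(kind_for_subaddress(1, 0), Some("External"));
  assert_eq!(kind_for_subaddress(0, 0), None);
}
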
- fn publish_batch(&mut self, batch: Batch) - -> impl Send + Future>; + fn publish_batch( + &mut self, + batch: Batch, + ) -> impl Send + Future>; /// Publish a `SignedBatch`. fn publish_signed_batch( diff --git a/processor/signers/src/transaction/mod.rs b/processor/signers/src/transaction/mod.rs index f089e931..efb20217 100644 --- a/processor/signers/src/transaction/mod.rs +++ b/processor/signers/src/transaction/mod.rs @@ -92,145 +92,147 @@ impl> impl>> ContinuallyRan for TransactionSignerTask { - fn run_iteration(&mut self) -> impl Send + Future> {async{ - let mut iterated = false; + fn run_iteration(&mut self) -> impl Send + Future> { + async { + let mut iterated = false; - // Check for new transactions to sign - loop { - let mut txn = self.db.txn(); - let Some(tx) = TransactionsToSign::::try_recv(&mut txn, &self.keys[0].group_key()) else { - break; - }; - iterated = true; + // Check for new transactions to sign + loop { + let mut txn = self.db.txn(); + let Some(tx) = TransactionsToSign::::try_recv(&mut txn, &self.keys[0].group_key()) + else { + break; + }; + iterated = true; - // Save this to the database as a transaction to sign - self.active_signing_protocols.insert(tx.id()); - ActiveSigningProtocols::set( - &mut txn, - self.session, - &self.active_signing_protocols.iter().copied().collect(), - ); - { - let mut buf = Vec::with_capacity(256); - tx.write(&mut buf).unwrap(); - SerializedSignableTransactions::set(&mut txn, tx.id(), &buf); + // Save this to the database as a transaction to sign + self.active_signing_protocols.insert(tx.id()); + ActiveSigningProtocols::set( + &mut txn, + self.session, + &self.active_signing_protocols.iter().copied().collect(), + ); + { + let mut buf = Vec::with_capacity(256); + tx.write(&mut buf).unwrap(); + SerializedSignableTransactions::set(&mut txn, tx.id(), &buf); + } + + let mut machines = Vec::with_capacity(self.keys.len()); + for keys in &self.keys { + machines.push(tx.clone().sign(keys.clone())); + } + for msg in self.attempt_manager.register(VariantSignId::Transaction(tx.id()), machines) { + TransactionSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + } + + txn.commit(); } - let mut machines = Vec::with_capacity(self.keys.len()); - for keys in &self.keys { - machines.push(tx.clone().sign(keys.clone())); - } - for msg in self.attempt_manager.register(VariantSignId::Transaction(tx.id()), machines) { - TransactionSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + // Check for completed Eventualities (meaning we should no longer sign for these transactions) + loop { + let mut txn = self.db.txn(); + let Some(id) = CompletedEventualities::try_recv(&mut txn, &self.keys[0].group_key()) else { + break; + }; + + /* + We may have yet to register this signing protocol. + + While `TransactionsToSign` is populated before `CompletedEventualities`, we could + theoretically have `TransactionsToSign` populated with a new transaction _while iterating + over `CompletedEventualities`_, and then have `CompletedEventualities` populated. In that + edge case, we will see the completion notification before we see the transaction. + + In such a case, we break (dropping the txn, re-queueing the completion notification). On + the task's next iteration, we'll process the transaction from `TransactionsToSign` and be + able to make progress. 
+ */ + if !self.active_signing_protocols.remove(&id) { + break; + } + iterated = true; + + // Since it was, remove this as an active signing protocol + ActiveSigningProtocols::set( + &mut txn, + self.session, + &self.active_signing_protocols.iter().copied().collect(), + ); + // Clean up the database + SerializedSignableTransactions::del(&mut txn, id); + SerializedTransactions::del(&mut txn, id); + + // We retire with a txn so we either successfully flag this Eventuality as completed, and + // won't re-register it (making this retire safe), or we don't flag it, meaning we will + // re-register it, yet that's safe as we have yet to retire it + self.attempt_manager.retire(&mut txn, VariantSignId::Transaction(id)); + + txn.commit(); } - txn.commit(); - } + // Handle any messages sent to us + loop { + let mut txn = self.db.txn(); + let Some(msg) = CoordinatorToTransactionSignerMessages::try_recv(&mut txn, self.session) + else { + break; + }; + iterated = true; - // Check for completed Eventualities (meaning we should no longer sign for these transactions) - loop { - let mut txn = self.db.txn(); - let Some(id) = CompletedEventualities::try_recv(&mut txn, &self.keys[0].group_key()) else { - break; - }; + match self.attempt_manager.handle(msg) { + Response::Messages(msgs) => { + for msg in msgs { + TransactionSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + } + } + Response::Signature { id, signature: signed_tx } => { + let signed_tx: TransactionFor = signed_tx.into(); - /* - We may have yet to register this signing protocol. + // Save this transaction to the database + { + let mut buf = Vec::with_capacity(256); + signed_tx.write(&mut buf).unwrap(); + SerializedTransactions::set( + &mut txn, + match id { + VariantSignId::Transaction(id) => id, + _ => panic!("TransactionSignerTask signed a non-transaction"), + }, + &buf, + ); + } - While `TransactionsToSign` is populated before `CompletedEventualities`, we could - theoretically have `TransactionsToSign` populated with a new transaction _while iterating - over `CompletedEventualities`_, and then have `CompletedEventualities` populated. In that - edge case, we will see the completion notification before we see the transaction. - - In such a case, we break (dropping the txn, re-queueing the completion notification). On - the task's next iteration, we'll process the transaction from `TransactionsToSign` and be - able to make progress. 
- */ - if !self.active_signing_protocols.remove(&id) { - break; - } - iterated = true; - - // Since it was, remove this as an active signing protocol - ActiveSigningProtocols::set( - &mut txn, - self.session, - &self.active_signing_protocols.iter().copied().collect(), - ); - // Clean up the database - SerializedSignableTransactions::del(&mut txn, id); - SerializedTransactions::del(&mut txn, id); - - // We retire with a txn so we either successfully flag this Eventuality as completed, and - // won't re-register it (making this retire safe), or we don't flag it, meaning we will - // re-register it, yet that's safe as we have yet to retire it - self.attempt_manager.retire(&mut txn, VariantSignId::Transaction(id)); - - txn.commit(); - } - - // Handle any messages sent to us - loop { - let mut txn = self.db.txn(); - let Some(msg) = CoordinatorToTransactionSignerMessages::try_recv(&mut txn, self.session) - else { - break; - }; - iterated = true; - - match self.attempt_manager.handle(msg) { - Response::Messages(msgs) => { - for msg in msgs { - TransactionSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); + match self.publisher.publish(signed_tx).await { + Ok(()) => {} + Err(e) => log::warn!("couldn't broadcast transaction: {e:?}"), + } } } - Response::Signature { id, signature: signed_tx } => { - let signed_tx: TransactionFor = signed_tx.into(); - // Save this transaction to the database - { - let mut buf = Vec::with_capacity(256); - signed_tx.write(&mut buf).unwrap(); - SerializedTransactions::set( - &mut txn, - match id { - VariantSignId::Transaction(id) => id, - _ => panic!("TransactionSignerTask signed a non-transaction"), - }, - &buf, - ); - } + txn.commit(); + } - match self.publisher.publish(signed_tx).await { - Ok(()) => {} - Err(e) => log::warn!("couldn't broadcast transaction: {e:?}"), - } + // If it's been five minutes since the last publication, republish the transactions for all + // active signing protocols + if Instant::now().duration_since(self.last_publication) > Duration::from_secs(5 * 60) { + for tx in &self.active_signing_protocols { + let Some(tx_buf) = SerializedTransactions::get(&self.db, *tx) else { continue }; + let mut tx_buf = tx_buf.as_slice(); + let tx = TransactionFor::::read(&mut tx_buf).unwrap(); + assert!(tx_buf.is_empty()); + + self + .publisher + .publish(tx) + .await + .map_err(|e| format!("couldn't re-broadcast transactions: {e:?}"))?; } + + self.last_publication = Instant::now(); } - txn.commit(); + Ok(iterated) } - - // If it's been five minutes since the last publication, republish the transactions for all - // active signing protocols - if Instant::now().duration_since(self.last_publication) > Duration::from_secs(5 * 60) { - for tx in &self.active_signing_protocols { - let Some(tx_buf) = SerializedTransactions::get(&self.db, *tx) else { continue }; - let mut tx_buf = tx_buf.as_slice(); - let tx = TransactionFor::::read(&mut tx_buf).unwrap(); - assert!(tx_buf.is_empty()); - - self - .publisher - .publish(tx) - .await - .map_err(|e| format!("couldn't re-broadcast transactions: {e:?}"))?; - } - - self.last_publication = Instant::now(); - } - - Ok(iterated) } } -} From 947e1067d9644627a5756c9b1b391650120c1c0e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 13 Sep 2024 05:11:07 -0400 Subject: [PATCH 119/368] Monero Processor scan, check_for_eventuality_resolutions --- Cargo.lock | 3 + processor/monero/Cargo.toml | 2 + processor/monero/src/lib.rs | 283 ------------------ processor/monero/src/primitives/block.rs | 46 ++- 
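The rebroadcast above is purely time-driven: once five minutes elapse since the last publication, every transaction still in an active signing protocol is re-sent. A sketch of the timer alone (the real task resets `last_publication` only after all republications succeed, since errors propagate first):

use std::time::{Duration, Instant};

const REBROADCAST_INTERVAL: Duration = Duration::from_secs(5 * 60);

// Whether the periodic rebroadcast of in-flight transactions is due
fn rebroadcast_due(last_publication: Instant) -> bool {
  Instant::now().duration_since(last_publication) > REBROADCAST_INTERVAL
}

fn main() {
  let last_publication = Instant::now();
  // Just published: nothing is due yet
  assert!(!rebroadcast_due(last_publication));
}
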
processor/monero/src/primitives/output.rs | 2 +- .../monero/src/primitives/transaction.rs | 2 +- processor/monero/src/rpc.rs | 2 +- 7 files changed, 44 insertions(+), 296 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 01edbcfe..b08cde03 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -5105,6 +5105,7 @@ dependencies = [ "hex", "modular-frost", "monero-address", + "monero-clsag", "monero-rpc", "monero-serai", "monero-simple-request-rpc", @@ -8534,8 +8535,10 @@ dependencies = [ "serai-processor-signers", "serai-processor-utxo-scheduler", "serai-processor-utxo-scheduler-primitives", + "serai-processor-view-keys", "tokio", "zalloc", + "zeroize", ] [[package]] diff --git a/processor/monero/Cargo.toml b/processor/monero/Cargo.toml index 22137b2d..6f9ce40a 100644 --- a/processor/monero/Cargo.toml +++ b/processor/monero/Cargo.toml @@ -18,6 +18,7 @@ workspace = true [dependencies] rand_core = { version = "0.6", default-features = false } +zeroize = { version = "1", default-features = false, features = ["std"] } hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } @@ -41,6 +42,7 @@ tokio = { version = "1", default-features = false, features = ["rt-multi-thread" serai-db = { path = "../../common/db" } key-gen = { package = "serai-processor-key-gen", path = "../key-gen" } +view-keys = { package = "serai-processor-view-keys", path = "../view-keys" } primitives = { package = "serai-processor-primitives", path = "../primitives" } scheduler = { package = "serai-processor-scheduler-primitives", path = "../scheduler/primitives" } diff --git a/processor/monero/src/lib.rs b/processor/monero/src/lib.rs index f9b334ef..46ce16d3 100644 --- a/processor/monero/src/lib.rs +++ b/processor/monero/src/lib.rs @@ -1,119 +1,4 @@ /* -#![cfg_attr(docsrs, feature(doc_auto_cfg))] -#![doc = include_str!("../README.md")] -#![deny(missing_docs)] - -use std::{time::Duration, collections::HashMap, io}; - -use async_trait::async_trait; - -use zeroize::Zeroizing; - -use rand_core::SeedableRng; -use rand_chacha::ChaCha20Rng; - -use transcript::{Transcript, RecommendedTranscript}; - -use ciphersuite::group::{ff::Field, Group}; -use dalek_ff_group::{Scalar, EdwardsPoint}; -use frost::{curve::Ed25519, ThresholdKeys}; - -use monero_simple_request_rpc::SimpleRequestRpc; -use monero_wallet::{ - ringct::RctType, - transaction::Transaction, - block::Block, - rpc::{FeeRate, RpcError, Rpc}, - address::{Network as MoneroNetwork, SubaddressIndex}, - ViewPair, GuaranteedViewPair, WalletOutput, OutputWithDecoys, GuaranteedScanner, - send::{ - SendError, Change, SignableTransaction as MSignableTransaction, Eventuality, TransactionMachine, - }, -}; -#[cfg(test)] -use monero_wallet::Scanner; - -use tokio::time::sleep; - -pub use serai_client::{ - primitives::{MAX_DATA_LEN, Coin, NetworkId, Amount, Balance}, - networks::monero::Address, -}; - -use crate::{ - Payment, additional_key, - networks::{ - NetworkError, Block as BlockTrait, OutputType, Output as OutputTrait, - Transaction as TransactionTrait, SignableTransaction as SignableTransactionTrait, - Eventuality as EventualityTrait, EventualitiesTracker, Network, UtxoNetwork, - }, - multisigs::scheduler::utxo::Scheduler, -}; - -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct Output(WalletOutput); - -const EXTERNAL_SUBADDRESS: Option = SubaddressIndex::new(0, 0); -const BRANCH_SUBADDRESS: Option = SubaddressIndex::new(1, 0); -const CHANGE_SUBADDRESS: Option = 
SubaddressIndex::new(2, 0); -const FORWARD_SUBADDRESS: Option = SubaddressIndex::new(3, 0); - -impl OutputTrait for Output { - // While we could use (tx, o), using the key ensures we won't be susceptible to the burning bug. - // While we already are immune, thanks to using featured address, this doesn't hurt and is - // technically more efficient. - type Id = [u8; 32]; - - fn kind(&self) -> OutputType { - match self.0.subaddress() { - EXTERNAL_SUBADDRESS => OutputType::External, - BRANCH_SUBADDRESS => OutputType::Branch, - CHANGE_SUBADDRESS => OutputType::Change, - FORWARD_SUBADDRESS => OutputType::Forwarded, - _ => panic!("unrecognized address was scanned for"), - } - } - - fn id(&self) -> Self::Id { - self.0.key().compress().to_bytes() - } - - fn tx_id(&self) -> [u8; 32] { - self.0.transaction() - } - - fn key(&self) -> EdwardsPoint { - EdwardsPoint(self.0.key() - (EdwardsPoint::generator().0 * self.0.key_offset())) - } - - fn presumed_origin(&self) -> Option
{ - None - } - - fn balance(&self) -> Balance { - Balance { coin: Coin::Monero, amount: Amount(self.0.commitment().amount) } - } - - fn data(&self) -> &[u8] { - let Some(data) = self.0.arbitrary_data().first() else { return &[] }; - // If the data is too large, prune it - // This should cause decoding the instruction to fail, and trigger a refund as appropriate - if data.len() > usize::try_from(MAX_DATA_LEN).unwrap() { - return &[]; - } - data - } - - fn write(&self, writer: &mut W) -> io::Result<()> { - self.0.write(writer)?; - Ok(()) - } - - fn read(reader: &mut R) -> io::Result { - Ok(Output(WalletOutput::read(reader)?)) - } -} - // TODO: Consider ([u8; 32], TransactionPruned) #[async_trait] impl TransactionTrait for Transaction { @@ -227,29 +112,6 @@ impl BlockTrait for Block { } } -#[derive(Clone, Debug)] -pub struct Monero { - rpc: SimpleRequestRpc, -} -// Shim required for testing/debugging purposes due to generic arguments also necessitating trait -// bounds -impl PartialEq for Monero { - fn eq(&self, _: &Self) -> bool { - true - } -} -impl Eq for Monero {} - -#[allow(clippy::needless_pass_by_value)] // Needed to satisfy API expectations -fn map_rpc_err(err: RpcError) -> NetworkError { - if let RpcError::InvalidNode(reason) = &err { - log::error!("Monero RpcError::InvalidNode({reason})"); - } else { - log::debug!("Monero RpcError {err:?}"); - } - NetworkError::ConnectionError -} - enum MakeSignableTransactionResult { Fee(u64), SignableTransaction(MSignableTransaction), @@ -461,20 +323,6 @@ impl Monero { #[async_trait] impl Network for Monero { - type Curve = Ed25519; - - type Transaction = Transaction; - type Block = Block; - - type Output = Output; - type SignableTransaction = SignableTransaction; - type Eventuality = Eventuality; - type TransactionMachine = TransactionMachine; - - type Scheduler = Scheduler; - - type Address = Address; - const NETWORK: NetworkId = NetworkId::Monero; const ID: &'static str = "Monero"; const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize = 120; @@ -488,9 +336,6 @@ impl Network for Monero { // TODO const COST_TO_AGGREGATE: u64 = 0; - // Monero doesn't require/benefit from tweaking - fn tweak_keys(_: &mut ThresholdKeys) {} - #[cfg(test)] async fn external_address(&self, key: EdwardsPoint) -> Address { Self::address_internal(key, EXTERNAL_SUBADDRESS) @@ -508,121 +353,6 @@ impl Network for Monero { Some(Self::address_internal(key, FORWARD_SUBADDRESS)) } - async fn get_latest_block_number(&self) -> Result { - // Monero defines height as chain length, so subtract 1 for block number - Ok(self.rpc.get_height().await.map_err(map_rpc_err)? - 1) - } - - async fn get_block(&self, number: usize) -> Result { - Ok( - self - .rpc - .get_block(self.rpc.get_block_hash(number).await.map_err(map_rpc_err)?) 
- .await - .map_err(map_rpc_err)?, - ) - } - - async fn get_outputs(&self, block: &Block, key: EdwardsPoint) -> Vec { - let outputs = loop { - match self - .rpc - .get_scannable_block(block.clone()) - .await - .map_err(|e| format!("{e:?}")) - .and_then(|block| Self::scanner(key).scan(block).map_err(|e| format!("{e:?}"))) - { - Ok(outputs) => break outputs, - Err(e) => { - log::error!("couldn't scan block {}: {e:?}", hex::encode(block.id())); - sleep(Duration::from_secs(60)).await; - continue; - } - } - }; - - // Miner transactions are required to explicitly state their timelock, so this does exclude - // those (which have an extended timelock we don't want to deal with) - let raw_outputs = outputs.not_additionally_locked(); - let mut outputs = Vec::with_capacity(raw_outputs.len()); - for output in raw_outputs { - // This should be pointless as we shouldn't be able to scan for any other subaddress - // This just helps ensures nothing invalid makes it through - assert!([EXTERNAL_SUBADDRESS, BRANCH_SUBADDRESS, CHANGE_SUBADDRESS, FORWARD_SUBADDRESS] - .contains(&output.subaddress())); - - outputs.push(Output(output)); - } - - outputs - } - - async fn get_eventuality_completions( - &self, - eventualities: &mut EventualitiesTracker, - block: &Block, - ) -> HashMap<[u8; 32], (usize, [u8; 32], Transaction)> { - let mut res = HashMap::new(); - if eventualities.map.is_empty() { - return res; - } - - async fn check_block( - network: &Monero, - eventualities: &mut EventualitiesTracker, - block: &Block, - res: &mut HashMap<[u8; 32], (usize, [u8; 32], Transaction)>, - ) { - for hash in &block.transactions { - let tx = { - let mut tx; - while { - tx = network.rpc.get_transaction(*hash).await; - tx.is_err() - } { - log::error!("couldn't get transaction {}: {}", hex::encode(hash), tx.err().unwrap()); - sleep(Duration::from_secs(60)).await; - } - tx.unwrap() - }; - - if let Some((_, eventuality)) = eventualities.map.get(&tx.prefix().extra) { - if eventuality.matches(&tx.clone().into()) { - res.insert( - eventualities.map.remove(&tx.prefix().extra).unwrap().0, - (block.number().unwrap(), tx.id(), tx), - ); - } - } - } - - eventualities.block_number += 1; - assert_eq!(eventualities.block_number, block.number().unwrap()); - } - - for block_num in (eventualities.block_number + 1) .. 
block.number().unwrap() { - let block = { - let mut block; - while { - block = self.get_block(block_num).await; - block.is_err() - } { - log::error!("couldn't get block {}: {}", block_num, block.err().unwrap()); - sleep(Duration::from_secs(60)).await; - } - block.unwrap() - }; - - check_block(self, eventualities, &block, &mut res).await; - } - - // Also check the current block - check_block(self, eventualities, block, &mut res).await; - assert_eq!(eventualities.block_number, block.number().unwrap()); - - res - } - async fn needed_fee( &self, block_number: usize, @@ -687,19 +417,6 @@ impl Network for Monero { } } - async fn confirm_completion( - &self, - eventuality: &Eventuality, - id: &[u8; 32], - ) -> Result, NetworkError> { - let tx = self.rpc.get_transaction(*id).await.map_err(map_rpc_err)?; - if eventuality.matches(&tx.clone().into()) { - Ok(Some(tx)) - } else { - Ok(None) - } - } - #[cfg(test)] async fn get_block_number(&self, id: &[u8; 32]) -> usize { self.rpc.get_block(*id).await.unwrap().number().unwrap() diff --git a/processor/monero/src/primitives/block.rs b/processor/monero/src/primitives/block.rs index 634a0fbb..62715f8c 100644 --- a/processor/monero/src/primitives/block.rs +++ b/processor/monero/src/primitives/block.rs @@ -1,14 +1,22 @@ use std::collections::HashMap; +use zeroize::Zeroizing; + use ciphersuite::{Ciphersuite, Ed25519}; -use monero_wallet::{transaction::Transaction, block::Block as MBlock, ViewPairError, GuaranteedViewPair, GuaranteedScanner}; +use monero_wallet::{ + block::Block as MBlock, rpc::ScannableBlock as MScannableBlock, + ViewPairError, GuaranteedViewPair, ScanError, GuaranteedScanner, +}; use serai_client::networks::monero::Address; use primitives::{ReceivedOutput, EventualityTracker}; - -use crate::{EXTERNAL_SUBADDRESS, BRANCH_SUBADDRESS, CHANGE_SUBADDRESS, FORWARDED_SUBADDRESS, output::Output, transaction::Eventuality}; +use view_keys::view_key; +use crate::{ + EXTERNAL_SUBADDRESS, BRANCH_SUBADDRESS, CHANGE_SUBADDRESS, FORWARDED_SUBADDRESS, output::Output, + transaction::Eventuality, +}; #[derive(Clone, Debug)] pub(crate) struct BlockHeader(pub(crate) MBlock); @@ -22,7 +30,7 @@ impl primitives::BlockHeader for BlockHeader { } #[derive(Clone, Debug)] -pub(crate) struct Block(pub(crate) MBlock, Vec); +pub(crate) struct Block(pub(crate) MScannableBlock); impl primitives::Block for Block { type Header = BlockHeader; @@ -33,20 +41,26 @@ impl primitives::Block for Block { type Eventuality = Eventuality; fn id(&self) -> [u8; 32] { - self.0.hash() + self.0.block.hash() } fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec { - let view_pair = match GuaranteedViewPair::new(key.0, additional_key) { + let view_pair = match GuaranteedViewPair::new(key.0, Zeroizing::new(*view_key::(0))) { Ok(view_pair) => view_pair, - Err(ViewPairError::TorsionedSpendKey) => unreachable!("dalek_ff_group::EdwardsPoint has torsion"), - }; + Err(ViewPairError::TorsionedSpendKey) => { + unreachable!("dalek_ff_group::EdwardsPoint had torsion") + } + }; let mut scanner = GuaranteedScanner::new(view_pair); scanner.register_subaddress(EXTERNAL_SUBADDRESS.unwrap()); scanner.register_subaddress(BRANCH_SUBADDRESS.unwrap()); scanner.register_subaddress(CHANGE_SUBADDRESS.unwrap()); scanner.register_subaddress(FORWARDED_SUBADDRESS.unwrap()); - todo!("TODO") + match scanner.scan(self.0.clone()) { + Ok(outputs) => outputs.not_additionally_locked().into_iter().map(Output).collect(), + Err(ScanError::UnsupportedProtocol(version)) => panic!("Monero unexpectedly hard-forked (version 
{version})"), + Err(ScanError::InvalidScannableBlock(reason)) => panic!("fetched an invalid scannable block from the RPC: {reason}"), + } } #[allow(clippy::type_complexity)] @@ -57,6 +71,18 @@ impl primitives::Block for Block { >::TransactionId, Self::Eventuality, > { - todo!("TODO") + let mut res = HashMap::new(); + assert_eq!(self.0.block.transactions.len(), self.0.transactions.len()); + for (hash, tx) in self.0.block.transactions.iter().zip(&self.0.transactions) { + if let Some(eventuality) = eventualities.active_eventualities.get(&tx.prefix().extra) { + if eventuality.eventuality.matches(tx) { + res.insert( + *hash, + eventualities.active_eventualities.remove(&tx.prefix().extra).unwrap(), + ); + } + } + } + res } } diff --git a/processor/monero/src/primitives/output.rs b/processor/monero/src/primitives/output.rs index 385429c2..d66fd983 100644 --- a/processor/monero/src/primitives/output.rs +++ b/processor/monero/src/primitives/output.rs @@ -33,7 +33,7 @@ impl AsMut<[u8]> for OutputId { } #[derive(Clone, PartialEq, Eq, Debug)] -pub(crate) struct Output(WalletOutput); +pub(crate) struct Output(pub(crate) WalletOutput); impl Output { pub(crate) fn new(output: WalletOutput) -> Self { diff --git a/processor/monero/src/primitives/transaction.rs b/processor/monero/src/primitives/transaction.rs index 1ba49471..f6765cd9 100644 --- a/processor/monero/src/primitives/transaction.rs +++ b/processor/monero/src/primitives/transaction.rs @@ -83,7 +83,7 @@ impl scheduler::SignableTransaction for SignableTransaction { pub(crate) struct Eventuality { id: [u8; 32], singular_spent_output: Option, - eventuality: MEventuality, + pub(crate) eventuality: MEventuality, } impl primitives::Eventuality for Eventuality { diff --git a/processor/monero/src/rpc.rs b/processor/monero/src/rpc.rs index 0e0739b8..d826802b 100644 --- a/processor/monero/src/rpc.rs +++ b/processor/monero/src/rpc.rs @@ -54,7 +54,7 @@ impl ScannerFeed for Rpc { &self, number: u64, ) -> impl Send + Future> { - async move{todo!("TODO")} + async move { todo!("TODO") } } fn unchecked_block_header_by_number( From e56af7fc51b99d74706e94e4d09edb35639417c2 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 13 Sep 2024 19:24:45 -0400 Subject: [PATCH 120/368] Monero time_for_block, dust --- processor/monero/src/lib.rs | 61 ------------------------ processor/monero/src/primitives/block.rs | 17 +++---- processor/monero/src/rpc.rs | 41 ++++++++++++++-- 3 files changed, 47 insertions(+), 72 deletions(-) diff --git a/processor/monero/src/lib.rs b/processor/monero/src/lib.rs index 46ce16d3..1cde1414 100644 --- a/processor/monero/src/lib.rs +++ b/processor/monero/src/lib.rs @@ -54,64 +54,6 @@ impl SignableTransactionTrait for SignableTransaction { } } -#[async_trait] -impl BlockTrait for Block { - type Id = [u8; 32]; - fn id(&self) -> Self::Id { - self.hash() - } - - fn parent(&self) -> Self::Id { - self.header.previous - } - - async fn time(&self, rpc: &Monero) -> u64 { - // Constant from Monero - const BLOCKCHAIN_TIMESTAMP_CHECK_WINDOW: usize = 60; - - // If Monero doesn't have enough blocks to build a window, it doesn't define a network time - if (self.number().unwrap() + 1) < BLOCKCHAIN_TIMESTAMP_CHECK_WINDOW { - // Use the block number as the time - return u64::try_from(self.number().unwrap()).unwrap(); - } - - let mut timestamps = vec![self.header.timestamp]; - let mut parent = self.parent(); - while timestamps.len() < BLOCKCHAIN_TIMESTAMP_CHECK_WINDOW { - let mut parent_block; - while { - parent_block = rpc.rpc.get_block(parent).await; - 
parent_block.is_err() - } { - log::error!("couldn't get parent block when trying to get block time: {parent_block:?}"); - sleep(Duration::from_secs(5)).await; - } - let parent_block = parent_block.unwrap(); - timestamps.push(parent_block.header.timestamp); - parent = parent_block.parent(); - - if parent_block.number().unwrap() == 0 { - break; - } - } - timestamps.sort(); - - // Because 60 has two medians, Monero's epee picks the in-between value, calculated by the - // following formula (from the "get_mid" function) - let n = timestamps.len() / 2; - let a = timestamps[n - 1]; - let b = timestamps[n]; - #[rustfmt::skip] // Enables Ctrl+F'ing for everything after the `= ` - let res = (a/2) + (b/2) + ((a - 2*(a/2)) + (b - 2*(b/2)))/2; - // Technically, res may be 1 if all prior blocks had a timestamp by 0, which would break - // monotonicity with our above definition of height as time - // Monero also solely requires the block's time not be less than the median, it doesn't ensure - // it advances the median forward - // Ensure monotonicity despite both these issues by adding the block number to the median time - res + u64::try_from(self.number().unwrap()).unwrap() - } -} - enum MakeSignableTransactionResult { Fee(u64), SignableTransaction(MSignableTransaction), @@ -330,9 +272,6 @@ impl Network for Monero { const MAX_OUTPUTS: usize = 16; - // 0.01 XMR - const DUST: u64 = 10000000000; - // TODO const COST_TO_AGGREGATE: u64 = 0; diff --git a/processor/monero/src/primitives/block.rs b/processor/monero/src/primitives/block.rs index 62715f8c..130e5ac8 100644 --- a/processor/monero/src/primitives/block.rs +++ b/processor/monero/src/primitives/block.rs @@ -5,8 +5,8 @@ use zeroize::Zeroizing; use ciphersuite::{Ciphersuite, Ed25519}; use monero_wallet::{ - block::Block as MBlock, rpc::ScannableBlock as MScannableBlock, - ViewPairError, GuaranteedViewPair, ScanError, GuaranteedScanner, + block::Block as MBlock, rpc::ScannableBlock as MScannableBlock, ViewPairError, + GuaranteedViewPair, ScanError, GuaranteedScanner, }; use serai_client::networks::monero::Address; @@ -58,8 +58,12 @@ impl primitives::Block for Block { scanner.register_subaddress(FORWARDED_SUBADDRESS.unwrap()); match scanner.scan(self.0.clone()) { Ok(outputs) => outputs.not_additionally_locked().into_iter().map(Output).collect(), - Err(ScanError::UnsupportedProtocol(version)) => panic!("Monero unexpectedly hard-forked (version {version})"), - Err(ScanError::InvalidScannableBlock(reason)) => panic!("fetched an invalid scannable block from the RPC: {reason}"), + Err(ScanError::UnsupportedProtocol(version)) => { + panic!("Monero unexpectedly hard-forked (version {version})") + } + Err(ScanError::InvalidScannableBlock(reason)) => { + panic!("fetched an invalid scannable block from the RPC: {reason}") + } } } @@ -76,10 +80,7 @@ impl primitives::Block for Block { for (hash, tx) in self.0.block.transactions.iter().zip(&self.0.transactions) { if let Some(eventuality) = eventualities.active_eventualities.get(&tx.prefix().extra) { if eventuality.eventuality.matches(tx) { - res.insert( - *hash, - eventualities.active_eventualities.remove(&tx.prefix().extra).unwrap(), - ); + res.insert(*hash, eventualities.active_eventualities.remove(&tx.prefix().extra).unwrap()); } } } diff --git a/processor/monero/src/rpc.rs b/processor/monero/src/rpc.rs index d826802b..9244b23f 100644 --- a/processor/monero/src/rpc.rs +++ b/processor/monero/src/rpc.rs @@ -54,7 +54,38 @@ impl ScannerFeed for Rpc { &self, number: u64, ) -> impl Send + Future> { - async move { 
todo!("TODO") } + async move { + // Constant from Monero + const BLOCKCHAIN_TIMESTAMP_CHECK_WINDOW: u64 = 60; + + // If Monero doesn't have enough blocks to build a window, it doesn't define a network time + if (number + 1) < BLOCKCHAIN_TIMESTAMP_CHECK_WINDOW { + return Ok(0); + } + + // Fetch all the timestamps within the window + let block_for_time_of = self.rpc.get_block_by_number(number.try_into().unwrap()).await?; + let mut timestamps = vec![block_for_time_of.header.timestamp]; + let mut parent = block_for_time_of.header.previous; + for _ in 1 .. BLOCKCHAIN_TIMESTAMP_CHECK_WINDOW { + let parent_block = self.rpc.get_block(parent).await?; + timestamps.push(parent_block.header.timestamp); + parent = parent_block.header.previous; + } + timestamps.sort(); + + // Because there are two timestamps equidistance from the ends, Monero's epee picks the + // in-between value, calculated by the following formula (from the "get_mid" function) + let n = timestamps.len() / 2; + let a = timestamps[n - 1]; + let b = timestamps[n]; + #[rustfmt::skip] // Enables Ctrl+F'ing for everything after the `= ` + let res = (a/2) + (b/2) + ((a - 2*(a/2)) + (b - 2*(b/2)))/2; + + // Monero does check that the new block's time is greater than the median, causing the median + // to be monotonic + Ok(res) + } } fn unchecked_block_header_by_number( @@ -66,17 +97,21 @@ impl ScannerFeed for Rpc { async move { Ok(BlockHeader(self.rpc.get_block_by_number(number.try_into().unwrap()).await?)) } } + #[rustfmt::skip] // It wants to improperly format the `async move` to a single line fn unchecked_block_by_number( &self, number: u64, ) -> impl Send + Future> { - async move { todo!("TODO") } + async move { + Ok(Block(self.rpc.get_scannable_block_by_number(number.try_into().unwrap()).await?)) + } } fn dust(coin: Coin) -> Amount { assert_eq!(coin, Coin::Monero); - todo!("TODO") + // 0.01 XMR + Amount(10_000_000_000) } fn cost_to_aggregate( From 2edc2f36123ab8bcd972f4392651f3e04cbd7fce Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 13 Sep 2024 23:51:53 -0400 Subject: [PATCH 121/368] Add a database of all Monero outs into the processor Enables synchronous transaction creation (which requires synchronous decoy selection). --- Cargo.lock | 1 + networks/monero/rpc/src/lib.rs | 6 +- processor/monero/Cargo.toml | 1 + processor/monero/src/decoys.rs | 294 ++++++++++++++++++++++++++++++ processor/monero/src/lib.rs | 140 -------------- processor/monero/src/main.rs | 2 + processor/monero/src/rpc.rs | 27 +-- processor/monero/src/scheduler.rs | 142 +++++++++++++++ 8 files changed, 457 insertions(+), 156 deletions(-) create mode 100644 processor/monero/src/decoys.rs diff --git a/Cargo.lock b/Cargo.lock index b08cde03..9e34ea3c 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8515,6 +8515,7 @@ version = "0.1.0" dependencies = [ "borsh", "ciphersuite", + "curve25519-dalek", "dalek-ff-group", "dkg", "flexible-transcript", diff --git a/networks/monero/rpc/src/lib.rs b/networks/monero/rpc/src/lib.rs index 4c5055cc..3c8d337a 100644 --- a/networks/monero/rpc/src/lib.rs +++ b/networks/monero/rpc/src/lib.rs @@ -249,7 +249,7 @@ fn rpc_point(point: &str) -> Result { /// While no implementors are directly provided, [monero-simple-request-rpc]( /// https://github.com/serai-dex/serai/tree/develop/networks/monero/rpc/simple-request /// ) is recommended. -pub trait Rpc: Sync + Clone + Debug { +pub trait Rpc: Sync + Clone { /// Perform a POST request to the specified route with the specified body. 
/// /// The implementor is left to handle anything such as authentication. @@ -1003,10 +1003,10 @@ pub trait Rpc: Sync + Clone + Debug { /// An implementation is provided for any satisfier of `Rpc`. It is not recommended to use an `Rpc` /// object to satisfy this. This should be satisfied by a local store of the output distribution, /// both for performance and to prevent potential attacks a remote node can perform. -pub trait DecoyRpc: Sync + Clone + Debug { +pub trait DecoyRpc: Sync { /// Get the height the output distribution ends at. /// - /// This is equivalent to the hight of the blockchain it's for. This is intended to be cheaper + /// This is equivalent to the height of the blockchain it's for. This is intended to be cheaper /// than fetching the entire output distribution. fn get_output_distribution_end_height( &self, diff --git a/processor/monero/Cargo.toml b/processor/monero/Cargo.toml index 6f9ce40a..436f327e 100644 --- a/processor/monero/Cargo.toml +++ b/processor/monero/Cargo.toml @@ -25,6 +25,7 @@ scale = { package = "parity-scale-codec", version = "3", default-features = fals borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } +curve25519-dalek = { version = "4", default-features = false, features = ["alloc", "zeroize"] } dalek-ff-group = { path = "../../crypto/dalek-ff-group", default-features = false, features = ["std"] } ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "ed25519"] } dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-ed25519"] } diff --git a/processor/monero/src/decoys.rs b/processor/monero/src/decoys.rs new file mode 100644 index 00000000..000463d0 --- /dev/null +++ b/processor/monero/src/decoys.rs @@ -0,0 +1,294 @@ +use core::{ + future::Future, + ops::{Bound, RangeBounds}, +}; + +use curve25519_dalek::{ + scalar::Scalar, + edwards::{CompressedEdwardsY, EdwardsPoint}, +}; +use monero_wallet::{ + DEFAULT_LOCK_WINDOW, + primitives::Commitment, + transaction::{Timelock, Input, Pruned, Transaction}, + rpc::{OutputInformation, RpcError, Rpc as MRpcTrait, DecoyRpc}, +}; + +use borsh::{BorshSerialize, BorshDeserialize}; +use serai_db::{Get, DbTxn, Db, create_db}; + +use primitives::task::ContinuallyRan; +use scanner::ScannerFeed; + +use crate::Rpc; + +#[derive(BorshSerialize, BorshDeserialize)] +struct EncodableOutputInformation { + height: u64, + timelocked: bool, + key: [u8; 32], + commitment: [u8; 32], +} + +create_db! { + MoneroProcessorDecoys { + NextToIndexBlock: () -> u64, + PriorIndexedBlock: () -> [u8; 32], + DistributionStartBlock: () -> u64, + Distribution: () -> Vec, + Out: (index: u64) -> EncodableOutputInformation, + } +} + +/* + We want to be able to select decoys when planning transactions, but planning transactions is a + synchronous process. We store the decoys to a local database and have our database implement + `DecoyRpc` to achieve synchronous decoy selection. + + This is only needed as the transactions we sign must have decoys decided and agreed upon. With + FCMP++s, we'll be able to sign transactions without the membership proof, letting any signer + prove for membership after the fact (with their local views). Until then, this task remains. 
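+
+  As an illustrative sketch of the intended use (the `getter`, `rng`, `ring_len`, `height`,
+  and `output` bindings here are hypothetical, not part of this module), planning code can
+  then select decoys synchronously against this database through the `Decoys` wrapper
+  defined at the bottom of this file:
+
+    // `getter` is any `serai_db::Get` over the database this task populates
+    let decoy_rpc = Decoys(&getter);
+    let input_with_decoys = OutputWithDecoys::fingerprintable_deterministic_new(
+      &mut rng,    // seeded identically by every signer so they select the same decoys
+      &decoy_rpc,  // the local database, in place of a remote node
+      ring_len,    // 11 or 16, depending on the RctType
+      height,
+      output,
+    )
+    .await?;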
+*/ +pub(crate) struct DecoysTask { + pub(crate) rpc: Rpc, + pub(crate) current_distribution: Vec, +} + +impl ContinuallyRan for DecoysTask { + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let finalized_block_number = self + .rpc + .rpc + .get_height() + .await + .map_err(|e| format!("couldn't fetch latest block number: {e:?}"))? + .checked_sub(Rpc::::CONFIRMATIONS.try_into().unwrap()) + .ok_or(format!( + "blockchain only just started and doesn't have {} blocks yet", + Rpc::::CONFIRMATIONS + ))?; + + if NextToIndexBlock::get(&self.rpc.db).is_none() { + let distribution = self + .rpc + .rpc + .get_output_distribution(..= finalized_block_number) + .await + .map_err(|e| format!("failed to get output distribution: {e:?}"))?; + if distribution.is_empty() { + Err("distribution was empty".to_string())?; + } + + let distribution_start_block = finalized_block_number - (distribution.len() - 1); + // There may have been a reorg between the time of getting the distribution and the time of + // getting this block. This is an invariant and assumed not to have happened in the split + // second it's possible. + let block = self + .rpc + .rpc + .get_block_by_number(distribution_start_block) + .await + .map_err(|e| format!("failed to get the start block for the distribution: {e:?}"))?; + + let mut txn = self.rpc.db.txn(); + NextToIndexBlock::set(&mut txn, &distribution_start_block.try_into().unwrap()); + PriorIndexedBlock::set(&mut txn, &block.header.previous); + DistributionStartBlock::set(&mut txn, &u64::try_from(distribution_start_block).unwrap()); + txn.commit(); + } + + let next_to_index_block = + usize::try_from(NextToIndexBlock::get(&self.rpc.db).unwrap()).unwrap(); + if next_to_index_block >= finalized_block_number { + return Ok(false); + } + + for b in next_to_index_block ..= finalized_block_number { + // Fetch the block + let block = self + .rpc + .rpc + .get_block_by_number(b) + .await + .map_err(|e| format!("decoys task failed to fetch block: {e:?}"))?; + let prior = PriorIndexedBlock::get(&self.rpc.db).unwrap(); + if block.header.previous != prior { + panic!( + "decoys task detected reorg: expected {}, found {}", + hex::encode(prior), + hex::encode(block.header.previous) + ); + } + + // Fetch the transactions in the block + let transactions = self + .rpc + .rpc + .get_pruned_transactions(&block.transactions) + .await + .map_err(|e| format!("failed to get the pruned transactions within a block: {e:?}"))?; + + fn outputs( + list: &mut Vec, + block_number: u64, + tx: Transaction, + ) { + match tx { + Transaction::V1 { .. 
} => {} + Transaction::V2 { prefix, proofs } => { + for (i, output) in prefix.outputs.into_iter().enumerate() { + list.push(EncodableOutputInformation { + // This is correct per the documentation on OutputInformation, which this maps to + height: block_number, + timelocked: prefix.additional_timelock != Timelock::None, + key: output.key.to_bytes(), + commitment: if matches!( + prefix.inputs.first().expect("Monero transaction had no inputs"), + Input::Gen(_) + ) { + Commitment::new( + Scalar::ONE, + output.amount.expect("miner transaction outputs didn't have amounts set"), + ) + .calculate() + .compress() + .to_bytes() + } else { + proofs + .as_ref() + .expect("non-miner V2 transaction didn't have proofs") + .base + .commitments + .get(i) + .expect("amount of commitments didn't match amount of outputs") + .compress() + .to_bytes() + }, + }); + } + } + } + } + + let block_hash = block.hash(); + + let b = u64::try_from(b).unwrap(); + let mut encodable = Vec::with_capacity(2 * (1 + block.transactions.len())); + outputs(&mut encodable, b, block.miner_transaction.into()); + for transaction in transactions { + outputs(&mut encodable, b, transaction); + } + + let existing_outputs = self.current_distribution.last().copied().unwrap_or(0); + let now_outputs = existing_outputs + u64::try_from(encodable.len()).unwrap(); + self.current_distribution.push(now_outputs); + + let mut txn = self.rpc.db.txn(); + NextToIndexBlock::set(&mut txn, &(b + 1)); + PriorIndexedBlock::set(&mut txn, &block_hash); + // TODO: Don't write the entire 10 MB distribution to the DB every two minutes + Distribution::set(&mut txn, &self.current_distribution); + for (b, out) in (existing_outputs .. now_outputs).zip(encodable) { + Out::set(&mut txn, b, &out); + } + txn.commit(); + } + Ok(true) + } + } +} + +// TODO: Cache the distribution in a static +pub(crate) struct Decoys<'a, G: Get>(&'a G); +impl<'a, G: Sync + Get> DecoyRpc for Decoys<'a, G> { + #[rustfmt::skip] + fn get_output_distribution_end_height( + &self, + ) -> impl Send + Future> { + async move { + Ok(NextToIndexBlock::get(self.0).map_or(0, |b| usize::try_from(b).unwrap() + 1)) + } + } + fn get_output_distribution( + &self, + range: impl Send + RangeBounds, + ) -> impl Send + Future, RpcError>> { + async move { + let from = match range.start_bound() { + Bound::Included(from) => *from, + Bound::Excluded(from) => from.checked_add(1).ok_or_else(|| { + RpcError::InternalError("range's from wasn't representable".to_string()) + })?, + Bound::Unbounded => 0, + }; + let to = match range.end_bound() { + Bound::Included(to) => *to, + Bound::Excluded(to) => to + .checked_sub(1) + .ok_or_else(|| RpcError::InternalError("range's to wasn't representable".to_string()))?, + Bound::Unbounded => { + panic!("requested distribution till latest block, which is non-deterministic") + } + }; + if from > to { + Err(RpcError::InternalError(format!( + "malformed range: inclusive start {from}, inclusive end {to}" + )))?; + } + + let distribution_start_block = usize::try_from( + DistributionStartBlock::get(self.0).expect("never populated the distribution start block"), + ) + .unwrap(); + let len_of_distribution_until_to = + to.checked_sub(distribution_start_block).ok_or_else(|| { + RpcError::InternalError( + "requested distribution until a block when the distribution had yet to start" + .to_string(), + ) + })? 
+ + 1; + let distribution = Distribution::get(self.0).expect("never populated the distribution"); + assert!( + distribution.len() >= len_of_distribution_until_to, + "requested distribution until block we have yet to index" + ); + Ok( + distribution[from.saturating_sub(distribution_start_block) .. len_of_distribution_until_to] + .to_vec(), + ) + } + } + fn get_outs( + &self, + _indexes: &[u64], + ) -> impl Send + Future, RpcError>> { + async move { unimplemented!("get_outs is unused") } + } + fn get_unlocked_outputs( + &self, + indexes: &[u64], + height: usize, + fingerprintable_deterministic: bool, + ) -> impl Send + Future>, RpcError>> { + assert!(fingerprintable_deterministic, "processor wasn't using deterministic output selection"); + async move { + let mut res = vec![]; + for index in indexes { + let out = Out::get(self.0, *index).expect("requested output we didn't index"); + let unlocked = (!out.timelocked) && + ((usize::try_from(out.height).unwrap() + DEFAULT_LOCK_WINDOW) <= height); + res.push(unlocked.then(|| CompressedEdwardsY(out.key).decompress()).flatten().map(|key| { + [ + key, + CompressedEdwardsY(out.commitment) + .decompress() + .expect("output with invalid commitment"), + ] + })); + } + Ok(res) + } + } +} diff --git a/processor/monero/src/lib.rs b/processor/monero/src/lib.rs index 1cde1414..52ebb6cb 100644 --- a/processor/monero/src/lib.rs +++ b/processor/monero/src/lib.rs @@ -107,146 +107,6 @@ impl Monero { Ok(FeeRate::new(fee.max(MINIMUM_FEE), 10000).unwrap()) } - async fn make_signable_transaction( - &self, - block_number: usize, - plan_id: &[u8; 32], - inputs: &[Output], - payments: &[Payment], - change: &Option
, - calculating_fee: bool, - ) -> Result, NetworkError> { - for payment in payments { - assert_eq!(payment.balance.coin, Coin::Monero); - } - - // TODO2: Use an fee representative of several blocks, cached inside Self - let block_for_fee = self.get_block(block_number).await?; - let fee_rate = self.median_fee(&block_for_fee).await?; - - // Determine the RCT proofs to make based off the hard fork - // TODO: Make a fn for this block which is duplicated with tests - let rct_type = match block_for_fee.header.hardfork_version { - 14 => RctType::ClsagBulletproof, - 15 | 16 => RctType::ClsagBulletproofPlus, - _ => panic!("Monero hard forked and the processor wasn't updated for it"), - }; - - let mut transcript = - RecommendedTranscript::new(b"Serai Processor Monero Transaction Transcript"); - transcript.append_message(b"plan", plan_id); - - // All signers need to select the same decoys - // All signers use the same height and a seeded RNG to make sure they do so. - let mut inputs_actual = Vec::with_capacity(inputs.len()); - for input in inputs { - inputs_actual.push( - OutputWithDecoys::fingerprintable_deterministic_new( - &mut ChaCha20Rng::from_seed(transcript.rng_seed(b"decoys")), - &self.rpc, - // TODO: Have Decoys take RctType - match rct_type { - RctType::ClsagBulletproof => 11, - RctType::ClsagBulletproofPlus => 16, - _ => panic!("selecting decoys for an unsupported RctType"), - }, - block_number + 1, - input.0.clone(), - ) - .await - .map_err(map_rpc_err)?, - ); - } - - // Monero requires at least two outputs - // If we only have one output planned, add a dummy payment - let mut payments = payments.to_vec(); - let outputs = payments.len() + usize::from(u8::from(change.is_some())); - if outputs == 0 { - return Ok(None); - } else if outputs == 1 { - payments.push(Payment { - address: Address::new( - ViewPair::new(EdwardsPoint::generator().0, Zeroizing::new(Scalar::ONE.0)) - .unwrap() - .legacy_address(MoneroNetwork::Mainnet), - ) - .unwrap(), - balance: Balance { coin: Coin::Monero, amount: Amount(0) }, - data: None, - }); - } - - let payments = payments - .into_iter() - .map(|payment| (payment.address.into(), payment.balance.amount.0)) - .collect::>(); - - match MSignableTransaction::new( - rct_type, - // Use the plan ID as the outgoing view key - Zeroizing::new(*plan_id), - inputs_actual, - payments, - Change::fingerprintable(change.as_ref().map(|change| change.clone().into())), - vec![], - fee_rate, - ) { - Ok(signable) => Ok(Some({ - if calculating_fee { - MakeSignableTransactionResult::Fee(signable.necessary_fee()) - } else { - MakeSignableTransactionResult::SignableTransaction(signable) - } - })), - Err(e) => match e { - SendError::UnsupportedRctType => { - panic!("trying to use an RctType unsupported by monero-wallet") - } - SendError::NoInputs | - SendError::InvalidDecoyQuantity | - SendError::NoOutputs | - SendError::TooManyOutputs | - SendError::NoChange | - SendError::TooMuchArbitraryData | - SendError::TooLargeTransaction | - SendError::WrongPrivateKey => { - panic!("created an invalid Monero transaction: {e}"); - } - SendError::MultiplePaymentIds => { - panic!("multiple payment IDs despite not supporting integrated addresses"); - } - SendError::NotEnoughFunds { inputs, outputs, necessary_fee } => { - log::debug!( - "Monero NotEnoughFunds. 
inputs: {:?}, outputs: {:?}, necessary_fee: {necessary_fee:?}", - inputs, - outputs - ); - match necessary_fee { - Some(necessary_fee) => { - // If we're solely calculating the fee, return the fee this TX will cost - if calculating_fee { - Ok(Some(MakeSignableTransactionResult::Fee(necessary_fee))) - } else { - // If we're actually trying to make the TX, return None - Ok(None) - } - } - // We didn't have enough funds to even cover the outputs - None => { - // Ensure we're not misinterpreting this - assert!(outputs > inputs); - Ok(None) - } - } - } - SendError::MaliciousSerialization | SendError::ClsagError(_) | SendError::FrostError(_) => { - panic!("supposedly unreachable (at this time) Monero error: {e}"); - } - }, - } - } - #[cfg(test)] fn test_view_pair() -> ViewPair { ViewPair::new(*EdwardsPoint::generator(), Zeroizing::new(Scalar::ONE.0)).unwrap() diff --git a/processor/monero/src/main.rs b/processor/monero/src/main.rs index eda24b56..5b32e0f1 100644 --- a/processor/monero/src/main.rs +++ b/processor/monero/src/main.rs @@ -15,6 +15,8 @@ mod key_gen; use crate::key_gen::KeyGenParams; mod rpc; use rpc::Rpc; + +mod decoys; /* mod scheduler; use scheduler::Scheduler; diff --git a/processor/monero/src/rpc.rs b/processor/monero/src/rpc.rs index 9244b23f..58e6cf8b 100644 --- a/processor/monero/src/rpc.rs +++ b/processor/monero/src/rpc.rs @@ -5,6 +5,7 @@ use monero_simple_request_rpc::SimpleRequestRpc; use serai_client::primitives::{NetworkId, Coin, Amount}; +use serai_db::Db; use scanner::ScannerFeed; use signers::TransactionPublisher; @@ -14,11 +15,12 @@ use crate::{ }; #[derive(Clone)] -pub(crate) struct Rpc { +pub(crate) struct Rpc { + pub(crate) db: D, pub(crate) rpc: SimpleRequestRpc, } -impl ScannerFeed for Rpc { +impl ScannerFeed for Rpc { const NETWORK: NetworkId = NetworkId::Monero; // Outputs aren't spendable until 10 blocks later due to the 10-block lock // Since we assumed scanned outputs are spendable, that sets a minimum confirmation depth of 10 @@ -37,16 +39,15 @@ impl ScannerFeed for Rpc { &self, ) -> impl Send + Future> { async move { - Ok( - self - .rpc - .get_height() - .await? - .checked_sub(1) - .expect("connected to an invalid Monero RPC") - .try_into() - .unwrap(), - ) + // The decoys task only indexes finalized blocks + crate::decoys::NextToIndexBlock::get(&self.db) + .ok_or_else(|| { + RpcError::InternalError("decoys task hasn't indexed any blocks yet".to_string()) + })? + .checked_sub(1) + .ok_or_else(|| { + RpcError::InternalError("only the genesis block has been indexed".to_string()) + }) } } @@ -127,7 +128,7 @@ impl ScannerFeed for Rpc { } } -impl TransactionPublisher for Rpc { +impl TransactionPublisher for Rpc { type EphemeralError = RpcError; fn publish( diff --git a/processor/monero/src/scheduler.rs b/processor/monero/src/scheduler.rs index 25f17c64..7666ec4f 100644 --- a/processor/monero/src/scheduler.rs +++ b/processor/monero/src/scheduler.rs @@ -1,3 +1,144 @@ +async fn make_signable_transaction( +block_number: usize, +plan_id: &[u8; 32], +inputs: &[Output], +payments: &[Payment], +change: &Option
,
+calculating_fee: bool,
+) -> Result, NetworkError> {
+for payment in payments {
+  assert_eq!(payment.balance.coin, Coin::Monero);
+}
+
+// TODO2: Use a fee representative of several blocks, cached inside Self
+let block_for_fee = self.get_block(block_number).await?;
+let fee_rate = self.median_fee(&block_for_fee).await?;
+
+// Determine the RCT proofs to make based off the hard fork
+// TODO: Make a fn for this block which is duplicated with tests
+let rct_type = match block_for_fee.header.hardfork_version {
+  14 => RctType::ClsagBulletproof,
+  15 | 16 => RctType::ClsagBulletproofPlus,
+  _ => panic!("Monero hard forked and the processor wasn't updated for it"),
+};
+
+let mut transcript =
+  RecommendedTranscript::new(b"Serai Processor Monero Transaction Transcript");
+transcript.append_message(b"plan", plan_id);
+
+// All signers need to select the same decoys
+// All signers use the same height and a seeded RNG to make sure they do so.
+let mut inputs_actual = Vec::with_capacity(inputs.len());
+for input in inputs {
+  inputs_actual.push(
+    OutputWithDecoys::fingerprintable_deterministic_new(
+      &mut ChaCha20Rng::from_seed(transcript.rng_seed(b"decoys")),
+      &self.rpc,
+      // TODO: Have Decoys take RctType
+      match rct_type {
+        RctType::ClsagBulletproof => 11,
+        RctType::ClsagBulletproofPlus => 16,
+        _ => panic!("selecting decoys for an unsupported RctType"),
+      },
+      block_number + 1,
+      input.0.clone(),
+    )
+    .await
+    .map_err(map_rpc_err)?,
+  );
+}
+
+// Monero requires at least two outputs
+// If we only have one output planned, add a dummy payment
+let mut payments = payments.to_vec();
+let outputs = payments.len() + usize::from(u8::from(change.is_some()));
+if outputs == 0 {
+  return Ok(None);
+} else if outputs == 1 {
+  payments.push(Payment {
+    address: Address::new(
+      ViewPair::new(EdwardsPoint::generator().0, Zeroizing::new(Scalar::ONE.0))
+        .unwrap()
+        .legacy_address(MoneroNetwork::Mainnet),
+    )
+    .unwrap(),
+    balance: Balance { coin: Coin::Monero, amount: Amount(0) },
+    data: None,
+  });
+}
+
+let payments = payments
+  .into_iter()
+  .map(|payment| (payment.address.into(), payment.balance.amount.0))
+  .collect::>();
+
+match MSignableTransaction::new(
+  rct_type,
+  // Use the plan ID as the outgoing view key
+  Zeroizing::new(*plan_id),
+  inputs_actual,
+  payments,
+  Change::fingerprintable(change.as_ref().map(|change| change.clone().into())),
+  vec![],
+  fee_rate,
+) {
+  Ok(signable) => Ok(Some({
+    if calculating_fee {
+      MakeSignableTransactionResult::Fee(signable.necessary_fee())
+    } else {
+      MakeSignableTransactionResult::SignableTransaction(signable)
+    }
+  })),
+  Err(e) => match e {
+    SendError::UnsupportedRctType => {
+      panic!("trying to use an RctType unsupported by monero-wallet")
+    }
+    SendError::NoInputs |
+    SendError::InvalidDecoyQuantity |
+    SendError::NoOutputs |
+    SendError::TooManyOutputs |
+    SendError::NoChange |
+    SendError::TooMuchArbitraryData |
+    SendError::TooLargeTransaction |
+    SendError::WrongPrivateKey => {
+      panic!("created an invalid Monero transaction: {e}");
+    }
+    SendError::MultiplePaymentIds => {
+      panic!("multiple payment IDs despite not supporting integrated addresses");
+    }
+    SendError::NotEnoughFunds { inputs, outputs, necessary_fee } => {
+      log::debug!(
+        "Monero NotEnoughFunds. inputs: {:?}, outputs: {:?}, necessary_fee: {necessary_fee:?}",
+        inputs,
+        outputs
+      );
+      match necessary_fee {
+        Some(necessary_fee) => {
+          // If we're solely calculating the fee, return the fee this TX will cost
+          if calculating_fee {
+            Ok(Some(MakeSignableTransactionResult::Fee(necessary_fee)))
+          } else {
+            // If we're actually trying to make the TX, return None
+            Ok(None)
+          }
+        }
+        // We didn't have enough funds to even cover the outputs
+        None => {
+          // Ensure we're not misinterpreting this
+          assert!(outputs > inputs);
+          Ok(None)
+        }
+      }
+    }
+    SendError::MaliciousSerialization | SendError::ClsagError(_) | SendError::FrostError(_) => {
+      panic!("supposedly unreachable (at this time) Monero error: {e}");
+    }
+  },
+}
+}
+
+
+/*
 use ciphersuite::{Ciphersuite, Secp256k1};

 use bitcoin_serai::{
@@ -186,3 +327,4 @@ impl TransactionPlanner for Planner {
 }

 pub(crate) type Scheduler = utxo_standard_scheduler::Scheduler;
+*/

From e1ad897f7e64571bdf0ee15a8ddf508ed1e1dd4c Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sat, 14 Sep 2024 01:09:35 -0400
Subject: [PATCH 122/368] Allow scheduler's creation of transactions to be
 async and error

I don't love this, but it's the only way to select decoys without using a
local database. While the prior commit added such a database, the performance
of it presumably wasn't viable, and while TODOs marked the needed
improvements, it was still messy with an immense scope re: any auditing.

The relevant scheduler functions now take `&self` (intentional, as all
mutations should be via the `&mut impl DbTxn` passed). The calls to `&self`
are expected to be completely deterministic (as usual).
---
 processor/bin/src/lib.rs                      |  25 +-
 processor/bitcoin/src/main.rs                 |   4 +-
 processor/bitcoin/src/scheduler.rs            |  87 ++--
 processor/monero/src/decoys.rs                | 294 ------------
 processor/monero/src/main.rs                  |   1 -
 processor/monero/src/rpc.rs                   |  27 +-
 processor/scanner/src/eventuality/mod.rs      |  44 +-
 processor/scanner/src/lib.rs                  |  28 +-
 .../scheduler/utxo/primitives/src/lib.rs      | 260 +++++-----
 processor/scheduler/utxo/standard/src/lib.rs  | 443 ++++++++++--------
 .../utxo/transaction-chaining/src/lib.rs      | 364 +++++++-------
 11 files changed, 723 insertions(+), 854 deletions(-)
 delete mode 100644 processor/monero/src/decoys.rs

diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs
index 67ea6150..7758b1ea 100644
--- a/processor/bin/src/lib.rs
+++ b/processor/bin/src/lib.rs
@@ -15,7 +15,7 @@ use serai_db::{Get, DbTxn, Db as DbTrait, create_db, db_channel};
 use primitives::EncodableG;
 use ::key_gen::{KeyGenParams, KeyGen};
-use scheduler::SignableTransaction;
+use scheduler::{SignableTransaction, TransactionFor};
 use scanner::{ScannerFeed, Scanner, KeyFor, Scheduler};
 use signers::{TransactionPublisher, Signers};
@@ -161,22 +161,23 @@ async fn first_block_after_time(feed: &S, serai_time: u64) -> u6
 pub async fn main_loop<
   S: ScannerFeed,
   K: KeyGenParams>>,
-  Sch: Scheduler<
-    S,
-    SignableTransaction: SignableTransaction,
-  >,
-  P: TransactionPublisher<::Transaction>,
+  Sch: Clone
+    + Scheduler<
+      S,
+      SignableTransaction: SignableTransaction,
+    >,
 >(
   mut db: Db,
   feed: S,
-  publisher: P,
+  scheduler: Sch,
+  publisher: impl TransactionPublisher>,
 ) {
   let mut coordinator = Coordinator::new(db.clone());

   let mut key_gen = key_gen::();
-  let mut scanner = Scanner::new::(db.clone(), feed.clone()).await;
+  let mut scanner = Scanner::new(db.clone(), feed.clone(), scheduler.clone()).await;
   let mut signers =
Signers::::new(db.clone(), coordinator.coordinator_send(), publisher); loop { let db_clone = db.clone(); @@ -242,8 +243,10 @@ pub async fn main_loop< if session == Session(0) { assert!(scanner.is_none()); let start_block = first_block_after_time(&feed, serai_time).await; - scanner = - Some(Scanner::initialize::(db_clone, feed.clone(), start_block, key.0).await); + scanner = Some( + Scanner::initialize(db_clone, feed.clone(), scheduler.clone(), start_block, key.0) + .await, + ); } } messages::substrate::CoordinatorMessage::SlashesReported { session } => { diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index 56bfd619..d029ad8b 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -22,7 +22,7 @@ use crate::key_gen::KeyGenParams; mod rpc; use rpc::Rpc; mod scheduler; -use scheduler::Scheduler; +use scheduler::{Planner, Scheduler}; // Our custom code for Bitcoin mod db; @@ -57,7 +57,7 @@ async fn main() { tokio::spawn(TxIndexTask(feed.clone()).continually_run(index_task, vec![])); core::mem::forget(index_handle); - bin::main_loop::<_, KeyGenParams, Scheduler<_>, Rpc>(db, feed.clone(), feed).await; + bin::main_loop::<_, KeyGenParams, _>(db, feed.clone(), Scheduler::new(Planner), feed).await; } /* diff --git a/processor/bitcoin/src/scheduler.rs b/processor/bitcoin/src/scheduler.rs index 6e49d23d..b6554bda 100644 --- a/processor/bitcoin/src/scheduler.rs +++ b/processor/bitcoin/src/scheduler.rs @@ -1,3 +1,5 @@ +use core::future::Future; + use ciphersuite::{Ciphersuite, Secp256k1}; use bitcoin_serai::{ @@ -89,8 +91,10 @@ fn signable_transaction( .map(|bst| (SignableTransaction { inputs, payments, change, fee_per_vbyte }, bst)) } +#[derive(Clone)] pub(crate) struct Planner; impl TransactionPlanner, EffectedReceivedOutputs>> for Planner { + type EphemeralError = (); type FeeRate = u64; type SignableTransaction = SignableTransaction; @@ -153,50 +157,59 @@ impl TransactionPlanner, EffectedReceivedOutputs>> for Plan } fn plan( + &self, fee_rate: Self::FeeRate, inputs: Vec>>, payments: Vec>>>, change: Option>>, - ) -> PlannedTransaction, Self::SignableTransaction, EffectedReceivedOutputs>> { - let key = inputs.first().unwrap().key(); - for input in &inputs { - assert_eq!(key, input.key()); - } + ) -> impl Send + + Future< + Output = Result< + PlannedTransaction, Self::SignableTransaction, EffectedReceivedOutputs>>, + Self::EphemeralError, + >, + > { + async move { + let key = inputs.first().unwrap().key(); + for input in &inputs { + assert_eq!(key, input.key()); + } - let singular_spent_output = (inputs.len() == 1).then(|| inputs[0].id()); - match signable_transaction::(fee_rate, inputs.clone(), payments, change) { - Ok(tx) => PlannedTransaction { - signable: tx.0, - eventuality: Eventuality { txid: tx.1.txid(), singular_spent_output }, - auxilliary: EffectedReceivedOutputs({ - let tx = tx.1.transaction(); - let scanner = scanner(key); + let singular_spent_output = (inputs.len() == 1).then(|| inputs[0].id()); + match signable_transaction::(fee_rate, inputs.clone(), payments, change) { + Ok(tx) => Ok(PlannedTransaction { + signable: tx.0, + eventuality: Eventuality { txid: tx.1.txid(), singular_spent_output }, + auxilliary: EffectedReceivedOutputs({ + let tx = tx.1.transaction(); + let scanner = scanner(key); - let mut res = vec![]; - for output in scanner.scan_transaction(tx) { - res.push(Output::new_with_presumed_origin( - key, - tx, - // It shouldn't matter if this is wrong as we should never try to return these - // We still provide an accurate 
value to ensure a lack of discrepancies - Some(Address::new(inputs[0].output.output().script_pubkey.clone()).unwrap()), - output, - )); - } - res + let mut res = vec![]; + for output in scanner.scan_transaction(tx) { + res.push(Output::new_with_presumed_origin( + key, + tx, + // It shouldn't matter if this is wrong as we should never try to return these + // We still provide an accurate value to ensure a lack of discrepancies + Some(Address::new(inputs[0].output.output().script_pubkey.clone()).unwrap()), + output, + )); + } + res + }), }), - }, - Err( - TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment, - ) => panic!("malformed arguments to plan"), - // No data, we have a minimum fee rate, we checked the amount of inputs/outputs - Err( - TransactionError::TooMuchData | - TransactionError::TooLowFee | - TransactionError::TooLargeTransaction, - ) => unreachable!(), - Err(TransactionError::NotEnoughFunds { .. }) => { - panic!("plan called for a transaction without enough funds") + Err( + TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment, + ) => panic!("malformed arguments to plan"), + // No data, we have a minimum fee rate, we checked the amount of inputs/outputs + Err( + TransactionError::TooMuchData | + TransactionError::TooLowFee | + TransactionError::TooLargeTransaction, + ) => unreachable!(), + Err(TransactionError::NotEnoughFunds { .. }) => { + panic!("plan called for a transaction without enough funds") + } } } } diff --git a/processor/monero/src/decoys.rs b/processor/monero/src/decoys.rs deleted file mode 100644 index 000463d0..00000000 --- a/processor/monero/src/decoys.rs +++ /dev/null @@ -1,294 +0,0 @@ -use core::{ - future::Future, - ops::{Bound, RangeBounds}, -}; - -use curve25519_dalek::{ - scalar::Scalar, - edwards::{CompressedEdwardsY, EdwardsPoint}, -}; -use monero_wallet::{ - DEFAULT_LOCK_WINDOW, - primitives::Commitment, - transaction::{Timelock, Input, Pruned, Transaction}, - rpc::{OutputInformation, RpcError, Rpc as MRpcTrait, DecoyRpc}, -}; - -use borsh::{BorshSerialize, BorshDeserialize}; -use serai_db::{Get, DbTxn, Db, create_db}; - -use primitives::task::ContinuallyRan; -use scanner::ScannerFeed; - -use crate::Rpc; - -#[derive(BorshSerialize, BorshDeserialize)] -struct EncodableOutputInformation { - height: u64, - timelocked: bool, - key: [u8; 32], - commitment: [u8; 32], -} - -create_db! { - MoneroProcessorDecoys { - NextToIndexBlock: () -> u64, - PriorIndexedBlock: () -> [u8; 32], - DistributionStartBlock: () -> u64, - Distribution: () -> Vec, - Out: (index: u64) -> EncodableOutputInformation, - } -} - -/* - We want to be able to select decoys when planning transactions, but planning transactions is a - synchronous process. We store the decoys to a local database and have our database implement - `DecoyRpc` to achieve synchronous decoy selection. - - This is only needed as the transactions we sign must have decoys decided and agreed upon. With - FCMP++s, we'll be able to sign transactions without the membership proof, letting any signer - prove for membership after the fact (with their local views). Until then, this task remains. -*/ -pub(crate) struct DecoysTask { - pub(crate) rpc: Rpc, - pub(crate) current_distribution: Vec, -} - -impl ContinuallyRan for DecoysTask { - fn run_iteration(&mut self) -> impl Send + Future> { - async move { - let finalized_block_number = self - .rpc - .rpc - .get_height() - .await - .map_err(|e| format!("couldn't fetch latest block number: {e:?}"))? 
- .checked_sub(Rpc::::CONFIRMATIONS.try_into().unwrap()) - .ok_or(format!( - "blockchain only just started and doesn't have {} blocks yet", - Rpc::::CONFIRMATIONS - ))?; - - if NextToIndexBlock::get(&self.rpc.db).is_none() { - let distribution = self - .rpc - .rpc - .get_output_distribution(..= finalized_block_number) - .await - .map_err(|e| format!("failed to get output distribution: {e:?}"))?; - if distribution.is_empty() { - Err("distribution was empty".to_string())?; - } - - let distribution_start_block = finalized_block_number - (distribution.len() - 1); - // There may have been a reorg between the time of getting the distribution and the time of - // getting this block. This is an invariant and assumed not to have happened in the split - // second it's possible. - let block = self - .rpc - .rpc - .get_block_by_number(distribution_start_block) - .await - .map_err(|e| format!("failed to get the start block for the distribution: {e:?}"))?; - - let mut txn = self.rpc.db.txn(); - NextToIndexBlock::set(&mut txn, &distribution_start_block.try_into().unwrap()); - PriorIndexedBlock::set(&mut txn, &block.header.previous); - DistributionStartBlock::set(&mut txn, &u64::try_from(distribution_start_block).unwrap()); - txn.commit(); - } - - let next_to_index_block = - usize::try_from(NextToIndexBlock::get(&self.rpc.db).unwrap()).unwrap(); - if next_to_index_block >= finalized_block_number { - return Ok(false); - } - - for b in next_to_index_block ..= finalized_block_number { - // Fetch the block - let block = self - .rpc - .rpc - .get_block_by_number(b) - .await - .map_err(|e| format!("decoys task failed to fetch block: {e:?}"))?; - let prior = PriorIndexedBlock::get(&self.rpc.db).unwrap(); - if block.header.previous != prior { - panic!( - "decoys task detected reorg: expected {}, found {}", - hex::encode(prior), - hex::encode(block.header.previous) - ); - } - - // Fetch the transactions in the block - let transactions = self - .rpc - .rpc - .get_pruned_transactions(&block.transactions) - .await - .map_err(|e| format!("failed to get the pruned transactions within a block: {e:?}"))?; - - fn outputs( - list: &mut Vec, - block_number: u64, - tx: Transaction, - ) { - match tx { - Transaction::V1 { .. 
} => {} - Transaction::V2 { prefix, proofs } => { - for (i, output) in prefix.outputs.into_iter().enumerate() { - list.push(EncodableOutputInformation { - // This is correct per the documentation on OutputInformation, which this maps to - height: block_number, - timelocked: prefix.additional_timelock != Timelock::None, - key: output.key.to_bytes(), - commitment: if matches!( - prefix.inputs.first().expect("Monero transaction had no inputs"), - Input::Gen(_) - ) { - Commitment::new( - Scalar::ONE, - output.amount.expect("miner transaction outputs didn't have amounts set"), - ) - .calculate() - .compress() - .to_bytes() - } else { - proofs - .as_ref() - .expect("non-miner V2 transaction didn't have proofs") - .base - .commitments - .get(i) - .expect("amount of commitments didn't match amount of outputs") - .compress() - .to_bytes() - }, - }); - } - } - } - } - - let block_hash = block.hash(); - - let b = u64::try_from(b).unwrap(); - let mut encodable = Vec::with_capacity(2 * (1 + block.transactions.len())); - outputs(&mut encodable, b, block.miner_transaction.into()); - for transaction in transactions { - outputs(&mut encodable, b, transaction); - } - - let existing_outputs = self.current_distribution.last().copied().unwrap_or(0); - let now_outputs = existing_outputs + u64::try_from(encodable.len()).unwrap(); - self.current_distribution.push(now_outputs); - - let mut txn = self.rpc.db.txn(); - NextToIndexBlock::set(&mut txn, &(b + 1)); - PriorIndexedBlock::set(&mut txn, &block_hash); - // TODO: Don't write the entire 10 MB distribution to the DB every two minutes - Distribution::set(&mut txn, &self.current_distribution); - for (b, out) in (existing_outputs .. now_outputs).zip(encodable) { - Out::set(&mut txn, b, &out); - } - txn.commit(); - } - Ok(true) - } - } -} - -// TODO: Cache the distribution in a static -pub(crate) struct Decoys<'a, G: Get>(&'a G); -impl<'a, G: Sync + Get> DecoyRpc for Decoys<'a, G> { - #[rustfmt::skip] - fn get_output_distribution_end_height( - &self, - ) -> impl Send + Future> { - async move { - Ok(NextToIndexBlock::get(self.0).map_or(0, |b| usize::try_from(b).unwrap() + 1)) - } - } - fn get_output_distribution( - &self, - range: impl Send + RangeBounds, - ) -> impl Send + Future, RpcError>> { - async move { - let from = match range.start_bound() { - Bound::Included(from) => *from, - Bound::Excluded(from) => from.checked_add(1).ok_or_else(|| { - RpcError::InternalError("range's from wasn't representable".to_string()) - })?, - Bound::Unbounded => 0, - }; - let to = match range.end_bound() { - Bound::Included(to) => *to, - Bound::Excluded(to) => to - .checked_sub(1) - .ok_or_else(|| RpcError::InternalError("range's to wasn't representable".to_string()))?, - Bound::Unbounded => { - panic!("requested distribution till latest block, which is non-deterministic") - } - }; - if from > to { - Err(RpcError::InternalError(format!( - "malformed range: inclusive start {from}, inclusive end {to}" - )))?; - } - - let distribution_start_block = usize::try_from( - DistributionStartBlock::get(self.0).expect("never populated the distribution start block"), - ) - .unwrap(); - let len_of_distribution_until_to = - to.checked_sub(distribution_start_block).ok_or_else(|| { - RpcError::InternalError( - "requested distribution until a block when the distribution had yet to start" - .to_string(), - ) - })? 
+ - 1; - let distribution = Distribution::get(self.0).expect("never populated the distribution"); - assert!( - distribution.len() >= len_of_distribution_until_to, - "requested distribution until block we have yet to index" - ); - Ok( - distribution[from.saturating_sub(distribution_start_block) .. len_of_distribution_until_to] - .to_vec(), - ) - } - } - fn get_outs( - &self, - _indexes: &[u64], - ) -> impl Send + Future, RpcError>> { - async move { unimplemented!("get_outs is unused") } - } - fn get_unlocked_outputs( - &self, - indexes: &[u64], - height: usize, - fingerprintable_deterministic: bool, - ) -> impl Send + Future>, RpcError>> { - assert!(fingerprintable_deterministic, "processor wasn't using deterministic output selection"); - async move { - let mut res = vec![]; - for index in indexes { - let out = Out::get(self.0, *index).expect("requested output we didn't index"); - let unlocked = (!out.timelocked) && - ((usize::try_from(out.height).unwrap() + DEFAULT_LOCK_WINDOW) <= height); - res.push(unlocked.then(|| CompressedEdwardsY(out.key).decompress()).flatten().map(|key| { - [ - key, - CompressedEdwardsY(out.commitment) - .decompress() - .expect("output with invalid commitment"), - ] - })); - } - Ok(res) - } - } -} diff --git a/processor/monero/src/main.rs b/processor/monero/src/main.rs index 5b32e0f1..344b6c48 100644 --- a/processor/monero/src/main.rs +++ b/processor/monero/src/main.rs @@ -16,7 +16,6 @@ use crate::key_gen::KeyGenParams; mod rpc; use rpc::Rpc; -mod decoys; /* mod scheduler; use scheduler::Scheduler; diff --git a/processor/monero/src/rpc.rs b/processor/monero/src/rpc.rs index 58e6cf8b..9244b23f 100644 --- a/processor/monero/src/rpc.rs +++ b/processor/monero/src/rpc.rs @@ -5,7 +5,6 @@ use monero_simple_request_rpc::SimpleRequestRpc; use serai_client::primitives::{NetworkId, Coin, Amount}; -use serai_db::Db; use scanner::ScannerFeed; use signers::TransactionPublisher; @@ -15,12 +14,11 @@ use crate::{ }; #[derive(Clone)] -pub(crate) struct Rpc { - pub(crate) db: D, +pub(crate) struct Rpc { pub(crate) rpc: SimpleRequestRpc, } -impl ScannerFeed for Rpc { +impl ScannerFeed for Rpc { const NETWORK: NetworkId = NetworkId::Monero; // Outputs aren't spendable until 10 blocks later due to the 10-block lock // Since we assumed scanned outputs are spendable, that sets a minimum confirmation depth of 10 @@ -39,15 +37,16 @@ impl ScannerFeed for Rpc { &self, ) -> impl Send + Future> { async move { - // The decoys task only indexes finalized blocks - crate::decoys::NextToIndexBlock::get(&self.db) - .ok_or_else(|| { - RpcError::InternalError("decoys task hasn't indexed any blocks yet".to_string()) - })? - .checked_sub(1) - .ok_or_else(|| { - RpcError::InternalError("only the genesis block has been indexed".to_string()) - }) + Ok( + self + .rpc + .get_height() + .await? 
+ .checked_sub(1) + .expect("connected to an invalid Monero RPC") + .try_into() + .unwrap(), + ) } } @@ -128,7 +127,7 @@ impl ScannerFeed for Rpc { } } -impl TransactionPublisher for Rpc { +impl TransactionPublisher for Rpc { type EphemeralError = RpcError; fn publish( diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 46a5e13b..99fea2fb 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -1,4 +1,4 @@ -use core::{marker::PhantomData, future::Future}; +use core::future::Future; use std::collections::{HashSet, HashMap}; use group::GroupEncoding; @@ -102,11 +102,11 @@ fn intake_eventualities( pub(crate) struct EventualityTask> { db: D, feed: S, - scheduler: PhantomData, + scheduler: Sch, } impl> EventualityTask { - pub(crate) fn new(mut db: D, feed: S, start_block: u64) -> Self { + pub(crate) fn new(mut db: D, feed: S, scheduler: Sch, start_block: u64) -> Self { if EventualityDb::::next_to_check_for_eventualities_block(&db).is_none() { // Initialize the DB let mut txn = db.txn(); @@ -114,7 +114,7 @@ impl> EventualityTask { txn.commit(); } - Self { db, feed, scheduler: PhantomData } + Self { db, feed, scheduler } } #[allow(clippy::type_complexity)] @@ -167,15 +167,19 @@ impl> EventualityTask { { intaked_any = true; - let new_eventualities = Sch::fulfill( - &mut txn, - &block, - &keys_with_stages, - burns - .into_iter() - .filter_map(|burn| Payment::>::try_from(burn).ok()) - .collect(), - ); + let new_eventualities = self + .scheduler + .fulfill( + &mut txn, + &block, + &keys_with_stages, + burns + .into_iter() + .filter_map(|burn| Payment::>::try_from(burn).ok()) + .collect(), + ) + .await + .map_err(|e| format!("failed to queue fulfilling payments: {e:?}"))?; intake_eventualities::(&mut txn, new_eventualities); } txn.commit(); @@ -443,8 +447,11 @@ impl> ContinuallyRan for EventualityTas determined off an earlier block than this (enabling an earlier LifetimeStage to be used after a later one was already used). */ - let new_eventualities = - Sch::update(&mut txn, &block, &keys_with_stages, scheduler_update); + let new_eventualities = self + .scheduler + .update(&mut txn, &block, &keys_with_stages, scheduler_update) + .await + .map_err(|e| format!("failed to update scheduler: {e:?}"))?; // Intake the new Eventualities for key in new_eventualities.keys() { keys @@ -464,8 +471,11 @@ impl> ContinuallyRan for EventualityTas key.key != keys.last().unwrap().key, "key which was forwarding was the last key (which has no key after it to forward to)" ); - let new_eventualities = - Sch::flush_key(&mut txn, &block, key.key, keys.last().unwrap().key); + let new_eventualities = self + .scheduler + .flush_key(&mut txn, &block, key.key, keys.last().unwrap().key) + .await + .map_err(|e| format!("failed to flush key from scheduler: {e:?}"))?; intake_eventualities::(&mut txn, new_eventualities); } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 6ac45223..1b6afaa9 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -256,8 +256,17 @@ impl SchedulerUpdate { } } +/// Eventualities, keyed by the encoding of the key the Eventualities are for. +pub type KeyScopedEventualities = HashMap, Vec>>; + /// The object responsible for accumulating outputs and planning new transactions. pub trait Scheduler: 'static + Send { + /// An error encountered when handling updates/payments. + /// + /// This MUST be an ephemeral error. 
Retrying handling updates/payments MUST eventually + /// resolve without manual intervention/changing the arguments. + type EphemeralError: Debug; + /// The type for a signable transaction. type SignableTransaction: scheduler_primitives::SignableTransaction; @@ -278,11 +287,12 @@ pub trait Scheduler: 'static + Send { /// If the retiring key has any unfulfilled payments associated with it, those MUST be made /// the responsibility of the new key. fn flush_key( + &self, txn: &mut impl DbTxn, block: &BlockFor, retiring_key: KeyFor, new_key: KeyFor, - ) -> HashMap, Vec>>; + ) -> impl Send + Future, Self::EphemeralError>>; /// Retire a key as it'll no longer be used. /// @@ -300,11 +310,12 @@ pub trait Scheduler: 'static + Send { /// The `Vec` used as the key in the returned HashMap should be the encoded key the /// Eventualities are for. fn update( + &self, txn: &mut impl DbTxn, block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], update: SchedulerUpdate, - ) -> HashMap, Vec>>; + ) -> impl Send + Future, Self::EphemeralError>>; /// Fulfill a series of payments, yielding the Eventualities now to be scanned for. /// @@ -339,11 +350,12 @@ pub trait Scheduler: 'static + Send { has an output-to-Serai, the new primary output). */ fn fulfill( + &self, txn: &mut impl DbTxn, block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], payments: Vec>>, - ) -> HashMap, Vec>>; + ) -> impl Send + Future, Self::EphemeralError>>; } /// A representation of a scanner. @@ -358,14 +370,15 @@ impl Scanner { /// This will begin its execution, spawning several asynchronous tasks. /// /// This will return None if the Scanner was never initialized. - pub async fn new>(db: impl Db, feed: S) -> Option { + pub async fn new(db: impl Db, feed: S, scheduler: impl Scheduler) -> Option { let start_block = ScannerGlobalDb::::start_block(&db)?; let index_task = index::IndexTask::new(db.clone(), feed.clone(), start_block).await; let scan_task = scan::ScanTask::new(db.clone(), feed.clone(), start_block); let report_task = report::ReportTask::<_, S>::new(db.clone(), start_block); let substrate_task = substrate::SubstrateTask::<_, S>::new(db.clone()); - let eventuality_task = eventuality::EventualityTask::<_, _, Sch>::new(db, feed, start_block); + let eventuality_task = + eventuality::EventualityTask::<_, _, _>::new(db, feed, scheduler, start_block); let (index_task_def, _index_handle) = Task::new(); let (scan_task_def, scan_handle) = Task::new(); @@ -394,9 +407,10 @@ impl Scanner { /// This will begin its execution, spawning several asynchronous tasks. /// /// This passes through to `Scanner::new` if prior called. - pub async fn initialize>( + pub async fn initialize( mut db: impl Db, feed: S, + scheduler: impl Scheduler, start_block: u64, start_key: KeyFor, ) -> Self { @@ -407,7 +421,7 @@ impl Scanner { txn.commit(); } - Self::new::(db, feed).await.unwrap() + Self::new(db, feed, scheduler).await.unwrap() } /// Acknowledge a Batch having been published on Serai. 
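Editor's note: the diff above converts `Scheduler` (and, below, `TransactionPlanner`) from stateless associated functions into `&self` methods returning `impl Send + Future<...>`, so an implementor can carry state such as an RPC client and surface retryable failures through `EphemeralError`. A minimal, compile-checkable sketch of that return-position `impl Future` pattern follows; `ExampleScheduler`, `ExampleImpl`, and `DummyError` are hypothetical names, not the real trait machinery.

use core::{fmt::Debug, future::Future};

// Hypothetical stand-in for a retryable (e.g. RPC) failure.
#[derive(Debug)]
struct DummyError;

trait ExampleScheduler: 'static + Send {
  // Retrying after an EphemeralError must eventually succeed without manual
  // intervention, per the documentation added above.
  type EphemeralError: Debug;

  // An async method written as a `fn` returning `impl Future` (stable since
  // Rust 1.75), the same shape the patch gives `update`/`fulfill`.
  fn update(&self, block: u64) -> impl Send + Future<Output = Result<(), Self::EphemeralError>>;
}

struct ExampleImpl;
impl ExampleScheduler for ExampleImpl {
  type EphemeralError = DummyError;

  fn update(&self, block: u64) -> impl Send + Future<Output = Result<(), Self::EphemeralError>> {
    // An `async move` block satisfies the `impl Future` return type.
    async move {
      let _ = block;
      Ok(())
    }
  }
}

fn main() {
  // Constructing the future demonstrates the signature; a real caller, such
  // as the eventuality task, would `.await` it inside an async task.
  let _future = ExampleImpl.update(0);
}
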
diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs index e48221a1..00b2d10f 100644 --- a/processor/scheduler/utxo/primitives/src/lib.rs +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -2,6 +2,8 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] +use core::{fmt::Debug, future::Future}; + use serai_primitives::{Coin, Amount}; use primitives::{ReceivedOutput, Payment}; @@ -40,8 +42,14 @@ pub struct AmortizePlannedTransaction: 'static + Send + Sync { + /// An error encountered when handling planning transactions. + /// + /// This MUST be an ephemeral error. Retrying planning transactions MUST eventually resolve + /// resolve manual intervention/changing the arguments. + type EphemeralError: Debug; + /// The type representing a fee rate to use for transactions. - type FeeRate: Clone + Copy; + type FeeRate: Send + Clone + Copy; /// The type representing a signable transaction. type SignableTransaction: SignableTransaction; @@ -82,11 +90,15 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// `change` will always be an address belonging to the Serai network. If it is `Some`, a change /// output must be created. fn plan( + &self, fee_rate: Self::FeeRate, inputs: Vec>, payments: Vec>>, change: Option>, - ) -> PlannedTransaction; + ) -> impl Send + + Future< + Output = Result, Self::EphemeralError>, + >; /// Obtain a PlannedTransaction via amortizing the fee over the payments. /// @@ -98,132 +110,142 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// Returns `None` if the fee exceeded the inputs, or `Some` otherwise. // TODO: Enum for Change of None, Some, Mandatory fn plan_transaction_with_fee_amortization( + &self, operating_costs: &mut u64, fee_rate: Self::FeeRate, inputs: Vec>, mut payments: Vec>>, mut change: Option>, - ) -> Option> { - // If there's no change output, we can't recoup any operating costs we would amortize - // We also don't have any losses if the inputs are written off/the change output is reduced - let mut operating_costs_if_no_change = 0; - let operating_costs_in_effect = - if change.is_none() { &mut operating_costs_if_no_change } else { operating_costs }; + ) -> impl Send + + Future< + Output = Result< + Option>, + Self::EphemeralError, + >, + > { + async move { + // If there's no change output, we can't recoup any operating costs we would amortize + // We also don't have any losses if the inputs are written off/the change output is reduced + let mut operating_costs_if_no_change = 0; + let operating_costs_in_effect = + if change.is_none() { &mut operating_costs_if_no_change } else { operating_costs }; + + // Sanity checks + { + assert!(!inputs.is_empty()); + assert!((!payments.is_empty()) || change.is_some()); + let coin = inputs.first().unwrap().balance().coin; + for input in &inputs { + assert_eq!(coin, input.balance().coin); + } + for payment in &payments { + assert_eq!(coin, payment.balance().coin); + } + assert!( + (inputs.iter().map(|input| input.balance().amount.0).sum::() + + *operating_costs_in_effect) >= + payments.iter().map(|payment| payment.balance().amount.0).sum::(), + "attempted to fulfill payments without a sufficient input set" + ); + } - // Sanity checks - { - assert!(!inputs.is_empty()); - assert!((!payments.is_empty()) || change.is_some()); let coin = inputs.first().unwrap().balance().coin; - for input in &inputs { - assert_eq!(coin, input.balance().coin); + + // Amortization + { + // Sort payments from high amount to low amount + payments.sort_by(|a, b| 
a.balance().amount.0.cmp(&b.balance().amount.0).reverse()); + + let mut fee = Self::calculate_fee(fee_rate, inputs.clone(), payments.clone(), change).0; + let mut amortized = 0; + while !payments.is_empty() { + // We need to pay the fee, and any accrued operating costs, minus what we've already + // amortized + let adjusted_fee = (*operating_costs_in_effect + fee).saturating_sub(amortized); + + /* + Ideally, we wouldn't use a ceil div yet would be accurate about it. Any remainder could + be amortized over the largest outputs, which wouldn't be relevant here as we only work + with the smallest output. The issue is the theoretical edge case where all outputs have + the same value and are of the minimum value. In that case, none would be able to have + the remainder amortized as it'd cause them to need to be dropped. Using a ceil div + avoids this. + */ + let per_payment_fee = adjusted_fee.div_ceil(u64::try_from(payments.len()).unwrap()); + // Pop the last payment if it can't pay the fee, remaining about the dust limit as it does + if payments.last().unwrap().balance().amount.0 <= (per_payment_fee + S::dust(coin).0) { + amortized += payments.pop().unwrap().balance().amount.0; + // Recalculate the fee and try again + fee = Self::calculate_fee(fee_rate, inputs.clone(), payments.clone(), change).0; + continue; + } + // Break since all of these payments shouldn't be dropped + break; + } + + // If we couldn't amortize the fee over the payments, check if we even have enough to pay it + if payments.is_empty() { + // If we don't have a change output, we simply return here + // We no longer have anything to do here, nor any expectations + if change.is_none() { + return Ok(None); + } + + let inputs = inputs.iter().map(|input| input.balance().amount.0).sum::(); + // Checks not just if we can pay for it, yet that the would-be change output is at least + // dust + if inputs < (fee + S::dust(coin).0) { + // Write off these inputs + *operating_costs_in_effect += inputs; + // Yet also claw back the payments we dropped, as we only lost the change + // The dropped payments will be worth less than the inputs + operating_costs we started + // with, so this shouldn't use `saturating_sub` + *operating_costs_in_effect -= amortized; + return Ok(None); + } + } else { + // Since we have payments which can pay the fee we ended up with, amortize it + let adjusted_fee = (*operating_costs_in_effect + fee).saturating_sub(amortized); + let per_payment_base_fee = adjusted_fee / u64::try_from(payments.len()).unwrap(); + let payments_paying_one_atomic_unit_more = + usize::try_from(adjusted_fee % u64::try_from(payments.len()).unwrap()).unwrap(); + + for (i, payment) in payments.iter_mut().enumerate() { + let per_payment_fee = + per_payment_base_fee + u64::from(u8::from(i < payments_paying_one_atomic_unit_more)); + payment.balance().amount.0 -= per_payment_fee; + amortized += per_payment_fee; + } + assert!(amortized >= (*operating_costs_in_effect + fee)); + + // If the change is less than the dust, drop it + let would_be_change = inputs.iter().map(|input| input.balance().amount.0).sum::() - + payments.iter().map(|payment| payment.balance().amount.0).sum::() - + fee; + if would_be_change < S::dust(coin).0 { + change = None; + *operating_costs_in_effect += would_be_change; + } + } + + // Update the amount of operating costs + *operating_costs_in_effect = (*operating_costs_in_effect + fee).saturating_sub(amortized); } - for payment in &payments { - assert_eq!(coin, payment.balance().coin); - } - assert!( - 
(inputs.iter().map(|input| input.balance().amount.0).sum::() + - *operating_costs_in_effect) >= - payments.iter().map(|payment| payment.balance().amount.0).sum::(), - "attempted to fulfill payments without a sufficient input set" - ); + + // Because we amortized, or accrued as operating costs, the fee, make the transaction + let effected_payments = payments.iter().map(|payment| payment.balance().amount).collect(); + let has_change = change.is_some(); + + let PlannedTransaction { signable, eventuality, auxilliary } = + self.plan(fee_rate, inputs, payments, change).await?; + Ok(Some(AmortizePlannedTransaction { + effected_payments, + has_change, + signable, + eventuality, + auxilliary, + })) } - - let coin = inputs.first().unwrap().balance().coin; - - // Amortization - { - // Sort payments from high amount to low amount - payments.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0).reverse()); - - let mut fee = Self::calculate_fee(fee_rate, inputs.clone(), payments.clone(), change).0; - let mut amortized = 0; - while !payments.is_empty() { - // We need to pay the fee, and any accrued operating costs, minus what we've already - // amortized - let adjusted_fee = (*operating_costs_in_effect + fee).saturating_sub(amortized); - - /* - Ideally, we wouldn't use a ceil div yet would be accurate about it. Any remainder could - be amortized over the largest outputs, which wouldn't be relevant here as we only work - with the smallest output. The issue is the theoretical edge case where all outputs have - the same value and are of the minimum value. In that case, none would be able to have the - remainder amortized as it'd cause them to need to be dropped. Using a ceil div avoids - this. - */ - let per_payment_fee = adjusted_fee.div_ceil(u64::try_from(payments.len()).unwrap()); - // Pop the last payment if it can't pay the fee, remaining about the dust limit as it does - if payments.last().unwrap().balance().amount.0 <= (per_payment_fee + S::dust(coin).0) { - amortized += payments.pop().unwrap().balance().amount.0; - // Recalculate the fee and try again - fee = Self::calculate_fee(fee_rate, inputs.clone(), payments.clone(), change).0; - continue; - } - // Break since all of these payments shouldn't be dropped - break; - } - - // If we couldn't amortize the fee over the payments, check if we even have enough to pay it - if payments.is_empty() { - // If we don't have a change output, we simply return here - // We no longer have anything to do here, nor any expectations - if change.is_none() { - None?; - } - - let inputs = inputs.iter().map(|input| input.balance().amount.0).sum::(); - // Checks not just if we can pay for it, yet that the would-be change output is at least - // dust - if inputs < (fee + S::dust(coin).0) { - // Write off these inputs - *operating_costs_in_effect += inputs; - // Yet also claw back the payments we dropped, as we only lost the change - // The dropped payments will be worth less than the inputs + operating_costs we started - // with, so this shouldn't use `saturating_sub` - *operating_costs_in_effect -= amortized; - None?; - } - } else { - // Since we have payments which can pay the fee we ended up with, amortize it - let adjusted_fee = (*operating_costs_in_effect + fee).saturating_sub(amortized); - let per_payment_base_fee = adjusted_fee / u64::try_from(payments.len()).unwrap(); - let payments_paying_one_atomic_unit_more = - usize::try_from(adjusted_fee % u64::try_from(payments.len()).unwrap()).unwrap(); - - for (i, payment) in payments.iter_mut().enumerate() { - let 
per_payment_fee = - per_payment_base_fee + u64::from(u8::from(i < payments_paying_one_atomic_unit_more)); - payment.balance().amount.0 -= per_payment_fee; - amortized += per_payment_fee; - } - assert!(amortized >= (*operating_costs_in_effect + fee)); - - // If the change is less than the dust, drop it - let would_be_change = inputs.iter().map(|input| input.balance().amount.0).sum::() - - payments.iter().map(|payment| payment.balance().amount.0).sum::() - - fee; - if would_be_change < S::dust(coin).0 { - change = None; - *operating_costs_in_effect += would_be_change; - } - } - - // Update the amount of operating costs - *operating_costs_in_effect = (*operating_costs_in_effect + fee).saturating_sub(amortized); - } - - // Because we amortized, or accrued as operating costs, the fee, make the transaction - let effected_payments = payments.iter().map(|payment| payment.balance().amount).collect(); - let has_change = change.is_some(); - let PlannedTransaction { signable, eventuality, auxilliary } = - Self::plan(fee_rate, inputs, payments, change); - Some(AmortizePlannedTransaction { - effected_payments, - has_change, - signable, - eventuality, - auxilliary, - }) } /// Create a tree to fulfill a set of payments. diff --git a/processor/scheduler/utxo/standard/src/lib.rs b/processor/scheduler/utxo/standard/src/lib.rs index 3ae855e7..5ff786a7 100644 --- a/processor/scheduler/utxo/standard/src/lib.rs +++ b/processor/scheduler/utxo/standard/src/lib.rs @@ -2,7 +2,7 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use core::marker::PhantomData; +use core::{marker::PhantomData, future::Future}; use std::collections::HashMap; use group::GroupEncoding; @@ -14,7 +14,7 @@ use serai_db::DbTxn; use primitives::{ReceivedOutput, Payment}; use scanner::{ LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, BlockFor, - SchedulerUpdate, Scheduler as SchedulerTrait, + SchedulerUpdate, KeyScopedEventualities, Scheduler as SchedulerTrait, }; use scheduler_primitives::*; use utxo_scheduler_primitives::*; @@ -23,16 +23,27 @@ mod db; use db::Db; /// A scheduler of transactions for networks premised on the UTXO model. -pub struct Scheduler>(PhantomData, PhantomData

); +#[allow(non_snake_case)] +#[derive(Clone)] +pub struct Scheduler> { + planner: P, + _S: PhantomData, +} impl> Scheduler { - fn aggregate_inputs( + /// Create a new scheduler. + pub fn new(planner: P) -> Self { + Self { planner, _S: PhantomData } + } + + async fn aggregate_inputs( + &self, txn: &mut impl DbTxn, block: &BlockFor, key_for_change: KeyFor, key: KeyFor, coin: Coin, - ) -> Vec> { + ) -> Result>, >::EphemeralError> { let mut eventualities = vec![]; let mut operating_costs = Db::::operating_costs(txn, coin).0; @@ -41,13 +52,17 @@ impl> Scheduler { while outputs.len() > P::MAX_INPUTS { let to_aggregate = outputs.drain(.. P::MAX_INPUTS).collect::>(); - let Some(planned) = P::plan_transaction_with_fee_amortization( - &mut operating_costs, - P::fee_rate(block, coin), - to_aggregate, - vec![], - Some(key_for_change), - ) else { + let Some(planned) = self + .planner + .plan_transaction_with_fee_amortization( + &mut operating_costs, + P::fee_rate(block, coin), + to_aggregate, + vec![], + Some(key_for_change), + ) + .await? + else { continue; }; @@ -57,7 +72,7 @@ impl> Scheduler { Db::::set_outputs(txn, key, coin, &outputs); Db::::set_operating_costs(txn, coin, Amount(operating_costs)); - eventualities + Ok(eventualities) } fn fulfillable_payments( @@ -140,31 +155,36 @@ impl> Scheduler { } } - fn handle_branch( + async fn handle_branch( + &self, txn: &mut impl DbTxn, block: &BlockFor, eventualities: &mut Vec>, output: OutputFor, tx: TreeTransaction>, - ) -> bool { + ) -> Result>::EphemeralError> { let key = output.key(); let coin = output.balance().coin; let Some(payments) = tx.payments::(coin, &P::branch_address(key), output.balance().amount.0) else { // If this output has become too small to satisfy this branch, drop it - return false; + return Ok(false); }; - let Some(planned) = P::plan_transaction_with_fee_amortization( - // Uses 0 as there's no operating costs to incur/amortize here - &mut 0, - P::fee_rate(block, coin), - vec![output], - payments, - None, - ) else { + let Some(planned) = self + .planner + .plan_transaction_with_fee_amortization( + // Uses 0 as there's no operating costs to incur/amortize here + &mut 0, + P::fee_rate(block, coin), + vec![output], + payments, + None, + ) + .await? + else { // This Branch isn't viable, so drop it (and its children) - return false; + return Ok(false); }; TransactionsToSign::::send(txn, &key, &planned.signable); @@ -172,15 +192,16 @@ impl> Scheduler { Self::queue_branches(txn, key, coin, planned.effected_payments, tx); - true + Ok(true) } - fn step( + async fn step( + &self, txn: &mut impl DbTxn, active_keys: &[(KeyFor, LifetimeStage)], block: &BlockFor, key: KeyFor, - ) -> Vec> { + ) -> Result>, >::EphemeralError> { let mut eventualities = vec![]; let key_for_change = match active_keys[0].1 { @@ -198,7 +219,8 @@ impl> Scheduler { let coin = *coin; // Perform any input aggregation we should - eventualities.append(&mut Self::aggregate_inputs(txn, block, key_for_change, key, coin)); + eventualities + .append(&mut self.aggregate_inputs(txn, block, key_for_change, key, coin).await?); // Fetch the operating costs/outputs let mut operating_costs = Db::::operating_costs(txn, coin).0; @@ -228,15 +250,19 @@ impl> Scheduler { // scanner API) let mut planned_outer = None; for i in 0 .. 
2 { - let Some(planned) = P::plan_transaction_with_fee_amortization( - &mut operating_costs, - P::fee_rate(block, coin), - outputs.clone(), - tree[0] - .payments::(coin, &branch_address, tree[0].value()) - .expect("payments were dropped despite providing an input of the needed value"), - Some(key_for_change), - ) else { + let Some(planned) = self + .planner + .plan_transaction_with_fee_amortization( + &mut operating_costs, + P::fee_rate(block, coin), + outputs.clone(), + tree[0] + .payments::(coin, &branch_address, tree[0].value()) + .expect("payments were dropped despite providing an input of the needed value"), + Some(key_for_change), + ) + .await? + else { // This should trip on the first iteration or not at all assert_eq!(i, 0); // This doesn't have inputs even worth aggregating so drop the entire tree @@ -272,46 +298,53 @@ impl> Scheduler { Self::queue_branches(txn, key, coin, planned.effected_payments, tree.remove(0)); } - eventualities + Ok(eventualities) } - fn flush_outputs( + async fn flush_outputs( + &self, txn: &mut impl DbTxn, - eventualities: &mut HashMap, Vec>>, + eventualities: &mut KeyScopedEventualities, block: &BlockFor, from: KeyFor, to: KeyFor, coin: Coin, - ) { + ) -> Result<(), >::EphemeralError> { let from_bytes = from.to_bytes().as_ref().to_vec(); // Ensure our inputs are aggregated eventualities .entry(from_bytes.clone()) .or_insert(vec![]) - .append(&mut Self::aggregate_inputs(txn, block, to, from, coin)); + .append(&mut self.aggregate_inputs(txn, block, to, from, coin).await?); // Now that our inputs are aggregated, transfer all of them to the new key let mut operating_costs = Db::::operating_costs(txn, coin).0; let outputs = Db::::outputs(txn, from, coin).unwrap(); if outputs.is_empty() { - return; + return Ok(()); } - let planned = P::plan_transaction_with_fee_amortization( - &mut operating_costs, - P::fee_rate(block, coin), - outputs, - vec![], - Some(to), - ); + let planned = self + .planner + .plan_transaction_with_fee_amortization( + &mut operating_costs, + P::fee_rate(block, coin), + outputs, + vec![], + Some(to), + ) + .await?; Db::::set_operating_costs(txn, coin, Amount(operating_costs)); - let Some(planned) = planned else { return }; + let Some(planned) = planned else { return Ok(()) }; TransactionsToSign::::send(txn, &from, &planned.signable); eventualities.get_mut(&from_bytes).unwrap().push(planned.eventuality); + + Ok(()) } } impl> SchedulerTrait for Scheduler { + type EphemeralError = P::EphemeralError; type SignableTransaction = P::SignableTransaction; fn activate_key(txn: &mut impl DbTxn, key: KeyFor) { @@ -324,29 +357,32 @@ impl> SchedulerTrait for Schedul } fn flush_key( + &self, txn: &mut impl DbTxn, block: &BlockFor, retiring_key: KeyFor, new_key: KeyFor, - ) -> HashMap, Vec>> { - let mut eventualities = HashMap::new(); - for coin in S::NETWORK.coins() { - // Move the payments to the new key - { - let still_queued = Db::::queued_payments(txn, retiring_key, *coin).unwrap(); - let mut new_queued = Db::::queued_payments(txn, new_key, *coin).unwrap(); + ) -> impl Send + Future, Self::EphemeralError>> { + async move { + let mut eventualities = HashMap::new(); + for coin in S::NETWORK.coins() { + // Move the payments to the new key + { + let still_queued = Db::::queued_payments(txn, retiring_key, *coin).unwrap(); + let mut new_queued = Db::::queued_payments(txn, new_key, *coin).unwrap(); - let mut queued = still_queued; - queued.append(&mut new_queued); + let mut queued = still_queued; + queued.append(&mut new_queued); - 
Db::::set_queued_payments(txn, retiring_key, *coin, &[]); - Db::::set_queued_payments(txn, new_key, *coin, &queued); + Db::::set_queued_payments(txn, retiring_key, *coin, &[]); + Db::::set_queued_payments(txn, new_key, *coin, &queued); + } + + // Move the outputs to the new key + self.flush_outputs(txn, &mut eventualities, block, retiring_key, new_key, *coin).await?; } - - // Move the outputs to the new key - Self::flush_outputs(txn, &mut eventualities, block, retiring_key, new_key, *coin); + Ok(eventualities) } - eventualities } fn retire_key(txn: &mut impl DbTxn, key: KeyFor) { @@ -359,155 +395,174 @@ impl> SchedulerTrait for Schedul } fn update( + &self, txn: &mut impl DbTxn, block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], update: SchedulerUpdate, - ) -> HashMap, Vec>> { - let mut eventualities = HashMap::new(); + ) -> impl Send + Future, Self::EphemeralError>> { + async move { + let mut eventualities = HashMap::new(); - // Accumulate the new outputs - { - let mut outputs_by_key = HashMap::new(); - for output in update.outputs() { - // If this aligns for a branch, handle it - if let Some(branch) = Db::::take_pending_branch(txn, output.key(), output.balance()) { - if Self::handle_branch( - txn, - block, - eventualities.entry(output.key().to_bytes().as_ref().to_vec()).or_insert(vec![]), - output.clone(), - branch, - ) { - // If we could use it for a branch, we do and move on - // Else, we let it be accumulated by the standard accumulation code - continue; + // Accumulate the new outputs + { + let mut outputs_by_key = HashMap::new(); + for output in update.outputs() { + // If this aligns for a branch, handle it + if let Some(branch) = Db::::take_pending_branch(txn, output.key(), output.balance()) { + if self + .handle_branch( + txn, + block, + eventualities.entry(output.key().to_bytes().as_ref().to_vec()).or_insert(vec![]), + output.clone(), + branch, + ) + .await? 
+ { + // If we could use it for a branch, we do and move on + // Else, we let it be accumulated by the standard accumulation code + continue; + } + } + + let coin = output.balance().coin; + outputs_by_key + // Index by key and coin + .entry((output.key().to_bytes().as_ref().to_vec(), coin)) + // If we haven't accumulated here prior, read the outputs from the database + .or_insert_with(|| (output.key(), Db::::outputs(txn, output.key(), coin).unwrap())) + .1 + .push(output.clone()); + } + // Write the outputs back to the database + for ((_key_vec, coin), (key, outputs)) in outputs_by_key { + Db::::set_outputs(txn, key, coin, &outputs); + } + } + + // Fulfill the payments we prior couldn't + for (key, _stage) in active_keys { + eventualities + .entry(key.to_bytes().as_ref().to_vec()) + .or_insert(vec![]) + .append(&mut self.step(txn, active_keys, block, *key).await?); + } + + // If this key has been flushed, forward all outputs + match active_keys[0].1 { + LifetimeStage::ActiveYetNotReporting | + LifetimeStage::Active | + LifetimeStage::UsingNewForChange => {} + LifetimeStage::Forwarding | LifetimeStage::Finishing => { + for coin in S::NETWORK.coins() { + self + .flush_outputs( + txn, + &mut eventualities, + block, + active_keys[0].0, + active_keys[1].0, + *coin, + ) + .await?; } } - - let coin = output.balance().coin; - outputs_by_key - // Index by key and coin - .entry((output.key().to_bytes().as_ref().to_vec(), coin)) - // If we haven't accumulated here prior, read the outputs from the database - .or_insert_with(|| (output.key(), Db::::outputs(txn, output.key(), coin).unwrap())) - .1 - .push(output.clone()); } - // Write the outputs back to the database - for ((_key_vec, coin), (key, outputs)) in outputs_by_key { - Db::::set_outputs(txn, key, coin, &outputs); - } - } - // Fulfill the payments we prior couldn't - for (key, _stage) in active_keys { - eventualities - .entry(key.to_bytes().as_ref().to_vec()) - .or_insert(vec![]) - .append(&mut Self::step(txn, active_keys, block, *key)); - } + // Create the transactions for the forwards/burns + { + let mut planned_txs = vec![]; + for forward in update.forwards() { + let key = forward.key(); - // If this key has been flushed, forward all outputs - match active_keys[0].1 { - LifetimeStage::ActiveYetNotReporting | - LifetimeStage::Active | - LifetimeStage::UsingNewForChange => {} - LifetimeStage::Forwarding | LifetimeStage::Finishing => { - for coin in S::NETWORK.coins() { - Self::flush_outputs( - txn, - &mut eventualities, - block, - active_keys[0].0, - active_keys[1].0, - *coin, - ); + assert_eq!(active_keys.len(), 2); + assert_eq!(active_keys[0].1, LifetimeStage::Forwarding); + assert_eq!(active_keys[1].1, LifetimeStage::Active); + let forward_to_key = active_keys[1].0; + + let Some(plan) = self + .planner + .plan_transaction_with_fee_amortization( + // This uses 0 for the operating costs as we don't incur any here + // If the output can't pay for itself to be forwarded, we simply drop it + &mut 0, + P::fee_rate(block, forward.balance().coin), + vec![forward.clone()], + vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance(), None)], + None, + ) + .await? 
+ else { + continue; + }; + planned_txs.push((key, plan)); } + for to_return in update.returns() { + let key = to_return.output().key(); + let out_instruction = + Payment::new(to_return.address().clone(), to_return.output().balance(), None); + let Some(plan) = self + .planner + .plan_transaction_with_fee_amortization( + // This uses 0 for the operating costs as we don't incur any here + // If the output can't pay for itself to be returned, we simply drop it + &mut 0, + P::fee_rate(block, out_instruction.balance().coin), + vec![to_return.output().clone()], + vec![out_instruction], + None, + ) + .await? + else { + continue; + }; + planned_txs.push((key, plan)); + } + + for (key, planned_tx) in planned_txs { + // Send the transactions off for signing + TransactionsToSign::::send(txn, &key, &planned_tx.signable); + + // Insert the Eventualities into the result + eventualities.get_mut(key.to_bytes().as_ref()).unwrap().push(planned_tx.eventuality); + } + + Ok(eventualities) } } - - // Create the transactions for the forwards/burns - { - let mut planned_txs = vec![]; - for forward in update.forwards() { - let key = forward.key(); - - assert_eq!(active_keys.len(), 2); - assert_eq!(active_keys[0].1, LifetimeStage::Forwarding); - assert_eq!(active_keys[1].1, LifetimeStage::Active); - let forward_to_key = active_keys[1].0; - - let Some(plan) = P::plan_transaction_with_fee_amortization( - // This uses 0 for the operating costs as we don't incur any here - // If the output can't pay for itself to be forwarded, we simply drop it - &mut 0, - P::fee_rate(block, forward.balance().coin), - vec![forward.clone()], - vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance(), None)], - None, - ) else { - continue; - }; - planned_txs.push((key, plan)); - } - for to_return in update.returns() { - let key = to_return.output().key(); - let out_instruction = - Payment::new(to_return.address().clone(), to_return.output().balance(), None); - let Some(plan) = P::plan_transaction_with_fee_amortization( - // This uses 0 for the operating costs as we don't incur any here - // If the output can't pay for itself to be returned, we simply drop it - &mut 0, - P::fee_rate(block, out_instruction.balance().coin), - vec![to_return.output().clone()], - vec![out_instruction], - None, - ) else { - continue; - }; - planned_txs.push((key, plan)); - } - - for (key, planned_tx) in planned_txs { - // Send the transactions off for signing - TransactionsToSign::::send(txn, &key, &planned_tx.signable); - - // Insert the Eventualities into the result - eventualities.get_mut(key.to_bytes().as_ref()).unwrap().push(planned_tx.eventuality); - } - - eventualities - } } fn fulfill( + &self, txn: &mut impl DbTxn, block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], payments: Vec>>, - ) -> HashMap, Vec>> { - // Find the key to filfill these payments with - let fulfillment_key = match active_keys[0].1 { - LifetimeStage::ActiveYetNotReporting => { - panic!("expected to fulfill payments despite not reporting for the oldest key") + ) -> impl Send + Future, Self::EphemeralError>> { + async move { + // Find the key to filfill these payments with + let fulfillment_key = match active_keys[0].1 { + LifetimeStage::ActiveYetNotReporting => { + panic!("expected to fulfill payments despite not reporting for the oldest key") + } + LifetimeStage::Active | LifetimeStage::UsingNewForChange => active_keys[0].0, + LifetimeStage::Forwarding | LifetimeStage::Finishing => active_keys[1].0, + }; + + // Queue the payments for this key + for coin in 
S::NETWORK.coins() { + let mut queued_payments = Db::::queued_payments(txn, fulfillment_key, *coin).unwrap(); + queued_payments + .extend(payments.iter().filter(|payment| payment.balance().coin == *coin).cloned()); + Db::::set_queued_payments(txn, fulfillment_key, *coin, &queued_payments); } - LifetimeStage::Active | LifetimeStage::UsingNewForChange => active_keys[0].0, - LifetimeStage::Forwarding | LifetimeStage::Finishing => active_keys[1].0, - }; - // Queue the payments for this key - for coin in S::NETWORK.coins() { - let mut queued_payments = Db::::queued_payments(txn, fulfillment_key, *coin).unwrap(); - queued_payments - .extend(payments.iter().filter(|payment| payment.balance().coin == *coin).cloned()); - Db::::set_queued_payments(txn, fulfillment_key, *coin, &queued_payments); + // Handle the queued payments + Ok(HashMap::from([( + fulfillment_key.to_bytes().as_ref().to_vec(), + self.step(txn, active_keys, block, fulfillment_key).await?, + )])) } - - // Handle the queued payments - HashMap::from([( - fulfillment_key.to_bytes().as_ref().to_vec(), - Self::step(txn, active_keys, block, fulfillment_key), - )]) } } diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index e43f5fec..cb0a8b15 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -2,7 +2,7 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use core::marker::PhantomData; +use core::{marker::PhantomData, future::Future}; use std::collections::HashMap; use group::GroupEncoding; @@ -14,7 +14,7 @@ use serai_db::DbTxn; use primitives::{OutputType, ReceivedOutput, Payment}; use scanner::{ LifetimeStage, ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, BlockFor, - SchedulerUpdate, Scheduler as SchedulerTrait, + SchedulerUpdate, KeyScopedEventualities, Scheduler as SchedulerTrait, }; use scheduler_primitives::*; use utxo_scheduler_primitives::*; @@ -27,12 +27,19 @@ pub struct EffectedReceivedOutputs(pub Vec>); /// A scheduler of transactions for networks premised on the UTXO model which support /// transaction chaining. -pub struct Scheduler>>( - PhantomData, - PhantomData

, -); +#[allow(non_snake_case)] +#[derive(Clone)] +pub struct Scheduler>> { + planner: P, + _S: PhantomData, +} impl>> Scheduler { + /// Create a new scheduler. + pub fn new(planner: P) -> Self { + Self { planner, _S: PhantomData } + } + fn accumulate_outputs(txn: &mut impl DbTxn, outputs: Vec>, from_scanner: bool) { let mut outputs_by_key = HashMap::new(); for output in outputs { @@ -59,13 +66,14 @@ impl>> Sched } } - fn aggregate_inputs( + async fn aggregate_inputs( + &self, txn: &mut impl DbTxn, block: &BlockFor, key_for_change: KeyFor, key: KeyFor, coin: Coin, - ) -> Vec> { + ) -> Result>, >::EphemeralError> { let mut eventualities = vec![]; let mut operating_costs = Db::::operating_costs(txn, coin).0; @@ -74,13 +82,17 @@ impl>> Sched let to_aggregate = outputs.drain(.. P::MAX_INPUTS).collect::>(); Db::::set_outputs(txn, key, coin, &outputs); - let Some(planned) = P::plan_transaction_with_fee_amortization( - &mut operating_costs, - P::fee_rate(block, coin), - to_aggregate, - vec![], - Some(key_for_change), - ) else { + let Some(planned) = self + .planner + .plan_transaction_with_fee_amortization( + &mut operating_costs, + P::fee_rate(block, coin), + to_aggregate, + vec![], + Some(key_for_change), + ) + .await? + else { continue; }; @@ -93,7 +105,7 @@ impl>> Sched } Db::::set_operating_costs(txn, coin, Amount(operating_costs)); - eventualities + Ok(eventualities) } fn fulfillable_payments( @@ -151,12 +163,13 @@ impl>> Sched } } - fn step( + async fn step( + &self, txn: &mut impl DbTxn, active_keys: &[(KeyFor, LifetimeStage)], block: &BlockFor, key: KeyFor, - ) -> Vec> { + ) -> Result>, >::EphemeralError> { let mut eventualities = vec![]; let key_for_change = match active_keys[0].1 { @@ -174,7 +187,8 @@ impl>> Sched let coin = *coin; // Perform any input aggregation we should - eventualities.append(&mut Self::aggregate_inputs(txn, block, key_for_change, key, coin)); + eventualities + .append(&mut self.aggregate_inputs(txn, block, key_for_change, key, coin).await?); // Fetch the operating costs/outputs let mut operating_costs = Db::::operating_costs(txn, coin).0; @@ -211,15 +225,19 @@ impl>> Sched // scanner API) let mut planned_outer = None; for i in 0 .. 2 { - let Some(planned) = P::plan_transaction_with_fee_amortization( - &mut operating_costs, - P::fee_rate(block, coin), - outputs.clone(), - tree[0] - .payments::(coin, &branch_address, tree[0].value()) - .expect("payments were dropped despite providing an input of the needed value"), - Some(key_for_change), - ) else { + let Some(planned) = self + .planner + .plan_transaction_with_fee_amortization( + &mut operating_costs, + P::fee_rate(block, coin), + outputs.clone(), + tree[0] + .payments::(coin, &branch_address, tree[0].value()) + .expect("payments were dropped despite providing an input of the needed value"), + Some(key_for_change), + ) + .await? 
+ else { // This should trip on the first iteration or not at all assert_eq!(i, 0); // This doesn't have inputs even worth aggregating so drop the entire tree @@ -300,14 +318,18 @@ impl>> Sched }; let branch_output_id = branch_output.id(); - let Some(mut planned) = P::plan_transaction_with_fee_amortization( - // Uses 0 as there's no operating costs to incur/amortize here - &mut 0, - P::fee_rate(block, coin), - vec![branch_output], - payments, - None, - ) else { + let Some(mut planned) = self + .planner + .plan_transaction_with_fee_amortization( + // Uses 0 as there's no operating costs to incur/amortize here + &mut 0, + P::fee_rate(block, coin), + vec![branch_output], + payments, + None, + ) + .await? + else { // This Branch isn't viable, so drop it (and its children) continue; }; @@ -328,49 +350,56 @@ impl>> Sched } } - eventualities + Ok(eventualities) } - fn flush_outputs( + async fn flush_outputs( + &self, txn: &mut impl DbTxn, - eventualities: &mut HashMap, Vec>>, + eventualities: &mut KeyScopedEventualities, block: &BlockFor, from: KeyFor, to: KeyFor, coin: Coin, - ) { + ) -> Result<(), >::EphemeralError> { let from_bytes = from.to_bytes().as_ref().to_vec(); // Ensure our inputs are aggregated eventualities .entry(from_bytes.clone()) .or_insert(vec![]) - .append(&mut Self::aggregate_inputs(txn, block, to, from, coin)); + .append(&mut self.aggregate_inputs(txn, block, to, from, coin).await?); // Now that our inputs are aggregated, transfer all of them to the new key let mut operating_costs = Db::::operating_costs(txn, coin).0; let outputs = Db::::outputs(txn, from, coin).unwrap(); if outputs.is_empty() { - return; + return Ok(()); } - let planned = P::plan_transaction_with_fee_amortization( - &mut operating_costs, - P::fee_rate(block, coin), - outputs, - vec![], - Some(to), - ); + let planned = self + .planner + .plan_transaction_with_fee_amortization( + &mut operating_costs, + P::fee_rate(block, coin), + outputs, + vec![], + Some(to), + ) + .await?; Db::::set_operating_costs(txn, coin, Amount(operating_costs)); - let Some(planned) = planned else { return }; + let Some(planned) = planned else { return Ok(()) }; TransactionsToSign::::send(txn, &from, &planned.signable); eventualities.get_mut(&from_bytes).unwrap().push(planned.eventuality); Self::accumulate_outputs(txn, planned.auxilliary.0, false); + + Ok(()) } } impl>> SchedulerTrait for Scheduler { + type EphemeralError = P::EphemeralError; type SignableTransaction = P::SignableTransaction; fn activate_key(txn: &mut impl DbTxn, key: KeyFor) { @@ -383,29 +412,32 @@ impl>> Sched } fn flush_key( + &self, txn: &mut impl DbTxn, block: &BlockFor, retiring_key: KeyFor, new_key: KeyFor, - ) -> HashMap, Vec>> { - let mut eventualities = HashMap::new(); - for coin in S::NETWORK.coins() { - // Move the payments to the new key - { - let still_queued = Db::::queued_payments(txn, retiring_key, *coin).unwrap(); - let mut new_queued = Db::::queued_payments(txn, new_key, *coin).unwrap(); + ) -> impl Send + Future, Self::EphemeralError>> { + async move { + let mut eventualities = HashMap::new(); + for coin in S::NETWORK.coins() { + // Move the payments to the new key + { + let still_queued = Db::::queued_payments(txn, retiring_key, *coin).unwrap(); + let mut new_queued = Db::::queued_payments(txn, new_key, *coin).unwrap(); - let mut queued = still_queued; - queued.append(&mut new_queued); + let mut queued = still_queued; + queued.append(&mut new_queued); - Db::::set_queued_payments(txn, retiring_key, *coin, &[]); - Db::::set_queued_payments(txn, 
new_key, *coin, &queued); + Db::::set_queued_payments(txn, retiring_key, *coin, &[]); + Db::::set_queued_payments(txn, new_key, *coin, &queued); + } + + // Move the outputs to the new key + self.flush_outputs(txn, &mut eventualities, block, retiring_key, new_key, *coin).await?; } - - // Move the outputs to the new key - Self::flush_outputs(txn, &mut eventualities, block, retiring_key, new_key, *coin); + Ok(eventualities) } - eventualities } fn retire_key(txn: &mut impl DbTxn, key: KeyFor) { @@ -418,121 +450,137 @@ impl>> Sched } fn update( + &self, txn: &mut impl DbTxn, block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], update: SchedulerUpdate, - ) -> HashMap, Vec>> { - Self::accumulate_outputs(txn, update.outputs().to_vec(), true); + ) -> impl Send + Future, Self::EphemeralError>> { + async move { + Self::accumulate_outputs(txn, update.outputs().to_vec(), true); - // Fulfill the payments we prior couldn't - let mut eventualities = HashMap::new(); - for (key, _stage) in active_keys { - assert!(eventualities - .insert(key.to_bytes().as_ref().to_vec(), Self::step(txn, active_keys, block, *key)) - .is_none()); - } + // Fulfill the payments we prior couldn't + let mut eventualities = HashMap::new(); + for (key, _stage) in active_keys { + assert!(eventualities + .insert(key.to_bytes().as_ref().to_vec(), self.step(txn, active_keys, block, *key).await?) + .is_none()); + } - // If this key has been flushed, forward all outputs - match active_keys[0].1 { - LifetimeStage::ActiveYetNotReporting | - LifetimeStage::Active | - LifetimeStage::UsingNewForChange => {} - LifetimeStage::Forwarding | LifetimeStage::Finishing => { - for coin in S::NETWORK.coins() { - Self::flush_outputs( - txn, - &mut eventualities, - block, - active_keys[0].0, - active_keys[1].0, - *coin, - ); + // If this key has been flushed, forward all outputs + match active_keys[0].1 { + LifetimeStage::ActiveYetNotReporting | + LifetimeStage::Active | + LifetimeStage::UsingNewForChange => {} + LifetimeStage::Forwarding | LifetimeStage::Finishing => { + for coin in S::NETWORK.coins() { + self + .flush_outputs( + txn, + &mut eventualities, + block, + active_keys[0].0, + active_keys[1].0, + *coin, + ) + .await?; + } } } - } - // Create the transactions for the forwards/burns - { - let mut planned_txs = vec![]; - for forward in update.forwards() { - let key = forward.key(); + // Create the transactions for the forwards/burns + { + let mut planned_txs = vec![]; + for forward in update.forwards() { + let key = forward.key(); - assert_eq!(active_keys.len(), 2); - assert_eq!(active_keys[0].1, LifetimeStage::Forwarding); - assert_eq!(active_keys[1].1, LifetimeStage::Active); - let forward_to_key = active_keys[1].0; + assert_eq!(active_keys.len(), 2); + assert_eq!(active_keys[0].1, LifetimeStage::Forwarding); + assert_eq!(active_keys[1].1, LifetimeStage::Active); + let forward_to_key = active_keys[1].0; - let Some(plan) = P::plan_transaction_with_fee_amortization( - // This uses 0 for the operating costs as we don't incur any here - // If the output can't pay for itself to be forwarded, we simply drop it - &mut 0, - P::fee_rate(block, forward.balance().coin), - vec![forward.clone()], - vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance(), None)], - None, - ) else { - continue; - }; - planned_txs.push((key, plan)); + let Some(plan) = self + .planner + .plan_transaction_with_fee_amortization( + // This uses 0 for the operating costs as we don't incur any here + // If the output can't pay for itself to be forwarded, we 
simply drop it + &mut 0, + P::fee_rate(block, forward.balance().coin), + vec![forward.clone()], + vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance(), None)], + None, + ) + .await? + else { + continue; + }; + planned_txs.push((key, plan)); + } + for to_return in update.returns() { + let key = to_return.output().key(); + let out_instruction = + Payment::new(to_return.address().clone(), to_return.output().balance(), None); + let Some(plan) = self + .planner + .plan_transaction_with_fee_amortization( + // This uses 0 for the operating costs as we don't incur any here + // If the output can't pay for itself to be returned, we simply drop it + &mut 0, + P::fee_rate(block, out_instruction.balance().coin), + vec![to_return.output().clone()], + vec![out_instruction], + None, + ) + .await? + else { + continue; + }; + planned_txs.push((key, plan)); + } + + for (key, planned_tx) in planned_txs { + // Send the transactions off for signing + TransactionsToSign::::send(txn, &key, &planned_tx.signable); + + // Insert the Eventualities into the result + eventualities.get_mut(key.to_bytes().as_ref()).unwrap().push(planned_tx.eventuality); + } + + Ok(eventualities) } - for to_return in update.returns() { - let key = to_return.output().key(); - let out_instruction = - Payment::new(to_return.address().clone(), to_return.output().balance(), None); - let Some(plan) = P::plan_transaction_with_fee_amortization( - // This uses 0 for the operating costs as we don't incur any here - // If the output can't pay for itself to be returned, we simply drop it - &mut 0, - P::fee_rate(block, out_instruction.balance().coin), - vec![to_return.output().clone()], - vec![out_instruction], - None, - ) else { - continue; - }; - planned_txs.push((key, plan)); - } - - for (key, planned_tx) in planned_txs { - // Send the transactions off for signing - TransactionsToSign::::send(txn, &key, &planned_tx.signable); - - // Insert the Eventualities into the result - eventualities.get_mut(key.to_bytes().as_ref()).unwrap().push(planned_tx.eventuality); - } - - eventualities } } fn fulfill( + &self, txn: &mut impl DbTxn, block: &BlockFor, active_keys: &[(KeyFor, LifetimeStage)], payments: Vec>>, - ) -> HashMap, Vec>> { - // Find the key to filfill these payments with - let fulfillment_key = match active_keys[0].1 { - LifetimeStage::ActiveYetNotReporting => { - panic!("expected to fulfill payments despite not reporting for the oldest key") + ) -> impl Send + Future, Self::EphemeralError>> { + async move { + // Find the key to filfill these payments with + let fulfillment_key = match active_keys[0].1 { + LifetimeStage::ActiveYetNotReporting => { + panic!("expected to fulfill payments despite not reporting for the oldest key") + } + LifetimeStage::Active | LifetimeStage::UsingNewForChange => active_keys[0].0, + LifetimeStage::Forwarding | LifetimeStage::Finishing => active_keys[1].0, + }; + + // Queue the payments for this key + for coin in S::NETWORK.coins() { + let mut queued_payments = Db::::queued_payments(txn, fulfillment_key, *coin).unwrap(); + queued_payments + .extend(payments.iter().filter(|payment| payment.balance().coin == *coin).cloned()); + Db::::set_queued_payments(txn, fulfillment_key, *coin, &queued_payments); } - LifetimeStage::Active | LifetimeStage::UsingNewForChange => active_keys[0].0, - LifetimeStage::Forwarding | LifetimeStage::Finishing => active_keys[1].0, - }; - // Queue the payments for this key - for coin in S::NETWORK.coins() { - let mut queued_payments = Db::::queued_payments(txn, 
fulfillment_key, *coin).unwrap(); - queued_payments - .extend(payments.iter().filter(|payment| payment.balance().coin == *coin).cloned()); - Db::::set_queued_payments(txn, fulfillment_key, *coin, &queued_payments); + // Handle the queued payments + Ok(HashMap::from([( + fulfillment_key.to_bytes().as_ref().to_vec(), + self.step(txn, active_keys, block, fulfillment_key).await?, + )])) } - - // Handle the queued payments - HashMap::from([( - fulfillment_key.to_bytes().as_ref().to_vec(), - Self::step(txn, active_keys, block, fulfillment_key), - )]) } } From a2d9aeaed75b7e8be47ae35c7b5c4b25ef1d73c1 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 14 Sep 2024 01:38:31 -0400 Subject: [PATCH 123/368] Stub out Scheduler in the Monero processor --- processor/monero/src/lib.rs | 10 -- processor/monero/src/main.rs | 21 ++-- processor/monero/src/primitives/block.rs | 26 ++-- processor/monero/src/primitives/mod.rs | 37 +++++- processor/monero/src/primitives/output.rs | 9 +- processor/monero/src/scheduler.rs | 147 ++++++++++++++++++---- 6 files changed, 178 insertions(+), 72 deletions(-) diff --git a/processor/monero/src/lib.rs b/processor/monero/src/lib.rs index 52ebb6cb..0848e08a 100644 --- a/processor/monero/src/lib.rs +++ b/processor/monero/src/lib.rs @@ -130,8 +130,6 @@ impl Network for Monero { const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize = 120; const CONFIRMATIONS: usize = 10; - const MAX_OUTPUTS: usize = 16; - // TODO const COST_TO_AGGREGATE: u64 = 0; @@ -318,12 +316,4 @@ impl Network for Monero { self.get_block(block).await.unwrap() } } - -impl UtxoNetwork for Monero { - // wallet2 will not create a transaction larger than 100kb, and Monero won't relay a transaction - // larger than 150kb. This fits within the 100kb mark - // Technically, it can be ~124, yet a small bit of buffer is appreciated - // TODO: Test creating a TX this big - const MAX_INPUTS: usize = 120; -} */ diff --git a/processor/monero/src/main.rs b/processor/monero/src/main.rs index 344b6c48..daba3255 100644 --- a/processor/monero/src/main.rs +++ b/processor/monero/src/main.rs @@ -6,7 +6,7 @@ static ALLOCATOR: zalloc::ZeroizingAlloc = zalloc::ZeroizingAlloc(std::alloc::System); -use monero_wallet::rpc::Rpc as MRpc; +use monero_simple_request_rpc::SimpleRequestRpc; mod primitives; pub(crate) use crate::primitives::*; @@ -15,18 +15,15 @@ mod key_gen; use crate::key_gen::KeyGenParams; mod rpc; use rpc::Rpc; - -/* mod scheduler; -use scheduler::Scheduler; +use scheduler::{Planner, Scheduler}; #[tokio::main] async fn main() { let db = bin::init(); let feed = Rpc { - db: db.clone(), rpc: loop { - match MRpc::new(bin::url()).await { + match SimpleRequestRpc::new(bin::url()).await { Ok(rpc) => break rpc, Err(e) => { log::error!("couldn't connect to the Monero node: {e:?}"); @@ -36,9 +33,11 @@ async fn main() { }, }; - bin::main_loop::<_, KeyGenParams, Scheduler<_>, Rpc>(db, feed.clone(), feed).await; + bin::main_loop::<_, KeyGenParams, _>( + db, + feed.clone(), + Scheduler::new(Planner(feed.clone())), + feed, + ) + .await; } -*/ - -#[tokio::main] -async fn main() {} diff --git a/processor/monero/src/primitives/block.rs b/processor/monero/src/primitives/block.rs index 130e5ac8..70a559c1 100644 --- a/processor/monero/src/primitives/block.rs +++ b/processor/monero/src/primitives/block.rs @@ -1,21 +1,17 @@ use std::collections::HashMap; -use zeroize::Zeroizing; - use ciphersuite::{Ciphersuite, Ed25519}; use monero_wallet::{ - block::Block as MBlock, rpc::ScannableBlock as MScannableBlock, ViewPairError, - GuaranteedViewPair, ScanError, 
GuaranteedScanner, + block::Block as MBlock, rpc::ScannableBlock as MScannableBlock, ScanError, GuaranteedScanner, }; use serai_client::networks::monero::Address; use primitives::{ReceivedOutput, EventualityTracker}; -use view_keys::view_key; use crate::{ - EXTERNAL_SUBADDRESS, BRANCH_SUBADDRESS, CHANGE_SUBADDRESS, FORWARDED_SUBADDRESS, output::Output, - transaction::Eventuality, + EXTERNAL_SUBADDRESS, BRANCH_SUBADDRESS, CHANGE_SUBADDRESS, FORWARDED_SUBADDRESS, view_pair, + output::Output, transaction::Eventuality, }; #[derive(Clone, Debug)] @@ -45,17 +41,11 @@ impl primitives::Block for Block { } fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec { - let view_pair = match GuaranteedViewPair::new(key.0, Zeroizing::new(*view_key::(0))) { - Ok(view_pair) => view_pair, - Err(ViewPairError::TorsionedSpendKey) => { - unreachable!("dalek_ff_group::EdwardsPoint had torsion") - } - }; - let mut scanner = GuaranteedScanner::new(view_pair); - scanner.register_subaddress(EXTERNAL_SUBADDRESS.unwrap()); - scanner.register_subaddress(BRANCH_SUBADDRESS.unwrap()); - scanner.register_subaddress(CHANGE_SUBADDRESS.unwrap()); - scanner.register_subaddress(FORWARDED_SUBADDRESS.unwrap()); + let mut scanner = GuaranteedScanner::new(view_pair(key)); + scanner.register_subaddress(EXTERNAL_SUBADDRESS); + scanner.register_subaddress(BRANCH_SUBADDRESS); + scanner.register_subaddress(CHANGE_SUBADDRESS); + scanner.register_subaddress(FORWARDED_SUBADDRESS); match scanner.scan(self.0.clone()) { Ok(outputs) => outputs.not_additionally_locked().into_iter().map(Output).collect(), Err(ScanError::UnsupportedProtocol(version)) => { diff --git a/processor/monero/src/primitives/mod.rs b/processor/monero/src/primitives/mod.rs index de057399..317cae28 100644 --- a/processor/monero/src/primitives/mod.rs +++ b/processor/monero/src/primitives/mod.rs @@ -1,10 +1,37 @@ -use monero_wallet::address::SubaddressIndex; +use zeroize::Zeroizing; + +use ciphersuite::{Ciphersuite, Ed25519}; + +use monero_wallet::{address::SubaddressIndex, ViewPairError, GuaranteedViewPair}; + +use view_keys::view_key; pub(crate) mod output; pub(crate) mod transaction; pub(crate) mod block; -pub(crate) const EXTERNAL_SUBADDRESS: Option = SubaddressIndex::new(1, 0); -pub(crate) const BRANCH_SUBADDRESS: Option = SubaddressIndex::new(2, 0); -pub(crate) const CHANGE_SUBADDRESS: Option = SubaddressIndex::new(2, 1); -pub(crate) const FORWARDED_SUBADDRESS: Option = SubaddressIndex::new(2, 2); +pub(crate) const EXTERNAL_SUBADDRESS: SubaddressIndex = match SubaddressIndex::new(1, 0) { + Some(index) => index, + None => panic!("SubaddressIndex for EXTERNAL_SUBADDRESS was None"), +}; +pub(crate) const BRANCH_SUBADDRESS: SubaddressIndex = match SubaddressIndex::new(2, 0) { + Some(index) => index, + None => panic!("SubaddressIndex for BRANCH_SUBADDRESS was None"), +}; +pub(crate) const CHANGE_SUBADDRESS: SubaddressIndex = match SubaddressIndex::new(2, 1) { + Some(index) => index, + None => panic!("SubaddressIndex for CHANGE_SUBADDRESS was None"), +}; +pub(crate) const FORWARDED_SUBADDRESS: SubaddressIndex = match SubaddressIndex::new(2, 2) { + Some(index) => index, + None => panic!("SubaddressIndex for FORWARDED_SUBADDRESS was None"), +}; + +pub(crate) fn view_pair(key: ::G) -> GuaranteedViewPair { + match GuaranteedViewPair::new(key.0, Zeroizing::new(*view_key::(0))) { + Ok(view_pair) => view_pair, + Err(ViewPairError::TorsionedSpendKey) => { + unreachable!("dalek_ff_group::EdwardsPoint had torsion") + } + } +} diff --git a/processor/monero/src/primitives/output.rs 
b/processor/monero/src/primitives/output.rs index d66fd983..fea042c8 100644 --- a/processor/monero/src/primitives/output.rs +++ b/processor/monero/src/primitives/output.rs @@ -46,16 +46,17 @@ impl ReceivedOutput<::G, Address> for Output { type TransactionId = [u8; 32]; fn kind(&self) -> OutputType { - if self.0.subaddress() == EXTERNAL_SUBADDRESS { + let subaddress = self.0.subaddress().unwrap(); + if subaddress == EXTERNAL_SUBADDRESS { return OutputType::External; } - if self.0.subaddress() == BRANCH_SUBADDRESS { + if subaddress == BRANCH_SUBADDRESS { return OutputType::Branch; } - if self.0.subaddress() == CHANGE_SUBADDRESS { + if subaddress == CHANGE_SUBADDRESS { return OutputType::Change; } - if self.0.subaddress() == FORWARDED_SUBADDRESS { + if subaddress == FORWARDED_SUBADDRESS { return OutputType::Forwarded; } unreachable!("scanned output to unknown subaddress"); diff --git a/processor/monero/src/scheduler.rs b/processor/monero/src/scheduler.rs index 7666ec4f..ef52c413 100644 --- a/processor/monero/src/scheduler.rs +++ b/processor/monero/src/scheduler.rs @@ -1,3 +1,4 @@ +/* async fn make_signable_transaction( block_number: usize, plan_id: &[u8; 32], @@ -136,10 +137,106 @@ match MSignableTransaction::new( }, } } +*/ +use core::future::Future; + +use ciphersuite::{Ciphersuite, Ed25519}; + +use monero_wallet::rpc::{FeeRate, RpcError}; + +use serai_client::{ + primitives::{Coin, Amount}, + networks::monero::Address, +}; + +use primitives::{OutputType, ReceivedOutput, Payment}; +use scanner::{KeyFor, AddressFor, OutputFor, BlockFor}; +use utxo_scheduler::{PlannedTransaction, TransactionPlanner}; + +use monero_wallet::address::Network; + +use crate::{ + EXTERNAL_SUBADDRESS, BRANCH_SUBADDRESS, CHANGE_SUBADDRESS, FORWARDED_SUBADDRESS, view_pair, + output::Output, + transaction::{SignableTransaction, Eventuality}, + rpc::Rpc, +}; + +fn address_from_serai_key(key: ::G, kind: OutputType) -> Address { + view_pair(key) + .address( + Network::Mainnet, + Some(match kind { + OutputType::External => EXTERNAL_SUBADDRESS, + OutputType::Branch => BRANCH_SUBADDRESS, + OutputType::Change => CHANGE_SUBADDRESS, + OutputType::Forwarded => FORWARDED_SUBADDRESS, + }), + None, + ) + .try_into() + .expect("created address which wasn't representable") +} + +#[derive(Clone)] +pub(crate) struct Planner(pub(crate) Rpc); +impl TransactionPlanner for Planner { + type EphemeralError = RpcError; + + type FeeRate = FeeRate; + + type SignableTransaction = SignableTransaction; + + // wallet2 will not create a transaction larger than 100 KB, and Monero won't relay a transaction + // larger than 150 KB. This fits within the 100 KB mark to fit in and not poke the bear. 
+ // Technically, it can be ~124, yet a small bit of buffer is appreciated + // TODO: Test creating a TX this big + const MAX_INPUTS: usize = 120; + const MAX_OUTPUTS: usize = 16; + + fn fee_rate(block: &BlockFor, coin: Coin) -> Self::FeeRate { + assert_eq!(coin, Coin::Monero); + // TODO + todo!("TODO") + } + + fn branch_address(key: KeyFor) -> AddressFor { + address_from_serai_key(key, OutputType::Branch) + } + fn change_address(key: KeyFor) -> AddressFor { + address_from_serai_key(key, OutputType::Change) + } + fn forwarding_address(key: KeyFor) -> AddressFor { + address_from_serai_key(key, OutputType::Forwarded) + } + + fn calculate_fee( + fee_rate: Self::FeeRate, + inputs: Vec>, + payments: Vec>>, + change: Option>, + ) -> Amount { + todo!("TODO") + } + + fn plan( + &self, + fee_rate: Self::FeeRate, + inputs: Vec>, + payments: Vec>>, + change: Option>, + ) -> impl Send + + Future, RpcError>> + { + async move { todo!("TODO") } + } +} + +pub(crate) type Scheduler = utxo_standard_scheduler::Scheduler; /* -use ciphersuite::{Ciphersuite, Secp256k1}; +use ciphersuite::{Ciphersuite, Ed25519}; use bitcoin_serai::{ bitcoin::ScriptBuf, @@ -163,8 +260,8 @@ use crate::{ rpc::Rpc, }; -fn address_from_serai_key(key: ::G, kind: OutputType) -> Address { - let offset = ::G::GENERATOR * offsets_for_key(key)[&kind]; +fn address_from_serai_key(key: ::G, kind: OutputType) -> Address { + let offset = ::G::GENERATOR * offsets_for_key(key)[&kind]; Address::new( p2tr_script_buf(key + offset) .expect("creating address from Serai key which wasn't properly tweaked"), @@ -174,17 +271,17 @@ fn address_from_serai_key(key: ::G, kind: OutputType) fn signable_transaction( fee_per_vbyte: u64, - inputs: Vec>>, - payments: Vec>>>, - change: Option>>, + inputs: Vec>, + payments: Vec>>, + change: Option>, ) -> Result<(SignableTransaction, BSignableTransaction), TransactionError> { assert!( inputs.len() < - , ()>>::MAX_INPUTS + >::MAX_INPUTS ); assert!( (payments.len() + usize::from(u8::from(change.is_some()))) < - , ()>>::MAX_OUTPUTS + >::MAX_OUTPUTS ); let inputs = inputs.into_iter().map(|input| input.output).collect::>(); @@ -194,7 +291,7 @@ fn signable_transaction( .map(|payment| { (payment.address().clone(), { let balance = payment.balance(); - assert_eq!(balance.coin, Coin::Bitcoin); + assert_eq!(balance.coin, Coin::Monero); balance.amount.0 }) }) @@ -206,14 +303,14 @@ fn signable_transaction( */ payments.push(( // The generator is even so this is valid - Address::new(p2tr_script_buf(::G::GENERATOR).unwrap()).unwrap(), + Address::new(p2tr_script_buf(::G::GENERATOR).unwrap()).unwrap(), // This uses the minimum output value allowed, as defined as a constant in bitcoin-serai // TODO: Add a test for this comparing to bitcoin's `minimal_non_dust` bitcoin_serai::wallet::DUST, )); let change = change - .map(, ()>>::change_address); + .map(>::change_address); BSignableTransaction::new( inputs.clone(), @@ -231,12 +328,14 @@ fn signable_transaction( pub(crate) struct Planner; impl TransactionPlanner for Planner { + type EphemeralError = RpcError; + type FeeRate = u64; type SignableTransaction = SignableTransaction; /* - Bitcoin has a max weight of 400,000 (MAX_STANDARD_TX_WEIGHT). + Monero has a max weight of 400,000 (MAX_STANDARD_TX_WEIGHT). A non-SegWit TX will have 4 weight units per byte, leaving a max size of 100,000 bytes. 
 While our inputs are entirely SegWit, such fine tuning is not necessary and could create issues in
@@ -255,27 +354,27 @@ impl TransactionPlanner for Planner {
   // to unstick any transactions which had too low of a fee.
   const MAX_OUTPUTS: usize = 519;

-  fn fee_rate(block: &BlockFor<Rpc<D>>, coin: Coin) -> Self::FeeRate {
-    assert_eq!(coin, Coin::Bitcoin);
+  fn fee_rate(block: &BlockFor<Rpc>, coin: Coin) -> Self::FeeRate {
+    assert_eq!(coin, Coin::Monero);
     // TODO
     1
   }

-  fn branch_address(key: KeyFor<Rpc<D>>) -> AddressFor<Rpc<D>> {
+  fn branch_address(key: KeyFor<Rpc>) -> AddressFor<Rpc> {
     address_from_serai_key(key, OutputType::Branch)
   }
-  fn change_address(key: KeyFor<Rpc<D>>) -> AddressFor<Rpc<D>> {
+  fn change_address(key: KeyFor<Rpc>) -> AddressFor<Rpc> {
     address_from_serai_key(key, OutputType::Change)
   }
-  fn forwarding_address(key: KeyFor<Rpc<D>>) -> AddressFor<Rpc<D>> {
+  fn forwarding_address(key: KeyFor<Rpc>) -> AddressFor<Rpc> {
     address_from_serai_key(key, OutputType::Forwarded)
   }

   fn calculate_fee(
     fee_rate: Self::FeeRate,
-    inputs: Vec<OutputFor<Rpc<D>>>,
-    payments: Vec<Payment<AddressFor<Rpc<D>>>>,
-    change: Option<KeyFor<Rpc<D>>>,
+    inputs: Vec<OutputFor<Rpc>>,
+    payments: Vec<Payment<AddressFor<Rpc>>>,
+    change: Option<KeyFor<Rpc>>,
   ) -> Amount {
     match signable_transaction::(fee_rate, inputs, payments, change) {
       Ok(tx) => Amount(tx.1.needed_fee()),
@@ -294,10 +393,10 @@ impl TransactionPlanner for Planner {

   fn plan(
     fee_rate: Self::FeeRate,
-    inputs: Vec<OutputFor<Rpc<D>>>,
-    payments: Vec<Payment<AddressFor<Rpc<D>>>>,
-    change: Option<KeyFor<Rpc<D>>>,
-  ) -> PlannedTransaction<Rpc<D>, Self::SignableTransaction, ()> {
+    inputs: Vec<OutputFor<Rpc>>,
+    payments: Vec<Payment<AddressFor<Rpc>>>,
+    change: Option<KeyFor<Rpc>>,
+  ) -> PlannedTransaction<Rpc, Self::SignableTransaction, ()> {
     let key = inputs.first().unwrap().key();
     for input in &inputs {
       assert_eq!(key, input.key());

From 5551521e58f1369b4f330d50341ea07a01d3c63b Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sat, 14 Sep 2024 04:19:44 -0400
Subject: [PATCH 124/368] Tighten documentation on Block::number

---
 networks/monero/src/block.rs | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/networks/monero/src/block.rs b/networks/monero/src/block.rs
index 62a77f8b..15a8d1fc 100644
--- a/networks/monero/src/block.rs
+++ b/networks/monero/src/block.rs
@@ -79,10 +79,13 @@ pub struct Block {
 }

 impl Block {
-  /// The zero-index position of this block within the blockchain.
+  /// The zero-indexed position of this block within the blockchain.
   ///
   /// This information comes from the Block's miner transaction. If the miner transaction isn't
-  /// structed as expected, this will return None.
+  /// structured as expected, this will return None. This will return Some for any Block which
+  /// would pass the consensus rules.
+  // https://github.com/monero-project/monero/blob/a1dc85c5373a30f14aaf7dcfdd95f5a7375d3623
+  // /src/cryptonote_core/blockchain.cpp#L1365-L1382
   pub fn number(&self) -> Option<usize> {
     match &self.miner_transaction {
       Transaction::V1 { prefix, .. } | Transaction::V2 { prefix, .. } => {

From e23176deeb58f65f2847a45989ba9df6754b1996 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sat, 14 Sep 2024 04:23:42 -0400
Subject: [PATCH 125/368] Change dummy payment ID behavior on 2-output, no change

This reduces who can fingerprint the transaction, from any observer of the
blockchain to just one of the two recipients.
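
As a sketch of the new rule (illustrative code, not the diff below; the
function and parameter names are hypothetical): the dummy encrypted payment ID
is now attached to any two-output transaction lacking an explicit payment ID,
where it was previously only attached when one of the two outputs was change.

fn needs_dummy_payment_id(has_explicit_payment_id: bool, n_outputs: usize, has_change: bool) -> bool {
  // Prior behavior additionally required `has_change`; now any 2-output TX
  // without an explicit payment ID receives the dummy
  let _ = has_change;
  !has_explicit_payment_id && (n_outputs == 2)
}

fn main() {
  // A 2-output TX without change previously carried no payment ID at all,
  // fingerprinting it to every observer of the blockchain; now only a
  // recipient who decrypts the dummy ID can tell this wasn't wallet2
  assert!(needs_dummy_payment_id(false, 2, false));
  assert!(!needs_dummy_payment_id(false, 3, false));
}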
---
 networks/monero/wallet/src/send/mod.rs |  9 +++++----
 networks/monero/wallet/src/send/tx.rs  | 16 ++++++++++++----
 2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/networks/monero/wallet/src/send/mod.rs b/networks/monero/wallet/src/send/mod.rs
index 87d98d69..3bd883df 100644
--- a/networks/monero/wallet/src/send/mod.rs
+++ b/networks/monero/wallet/src/send/mod.rs
@@ -100,10 +100,11 @@ impl Change {
   ///
   /// 1) The change in the TX is shunted to the fee (making it fingerprintable).
   ///
-  /// 2) If there are two outputs in the TX, Monero would create a payment ID for the non-change
-  ///    output so an observer can't tell apart TXs with a payment ID from TXs without a payment
-  ///    ID. monero-wallet will simply not create a payment ID in this case, revealing it's a
-  ///    monero-wallet TX without change.
+  /// 2) In two-output transactions, where the payment address doesn't have a payment ID, wallet2
+  ///    includes an encrypted dummy payment ID for the non-change output so one can't differentiate
+  ///    transactions sending to addresses with payment IDs from those without. monero-wallet
+  ///    includes a dummy payment ID which at least one recipient will identify as not the expected
+  ///    dummy payment ID, revealing to the recipient(s) that the sender is using non-wallet2 software.
   pub fn fingerprintable(address: Option<MoneroAddress>) -> Change {
     if let Some(address) = address {
       Change(Some(ChangeEnum::AddressOnly(address)))

diff --git a/networks/monero/wallet/src/send/tx.rs b/networks/monero/wallet/src/send/tx.rs
index 65962211..0ebd47f1 100644
--- a/networks/monero/wallet/src/send/tx.rs
+++ b/networks/monero/wallet/src/send/tx.rs
@@ -76,10 +76,18 @@ impl SignableTransaction {
       PaymentId::Encrypted(id).write(&mut id_vec).unwrap();
       extra.push_nonce(id_vec);
     } else {
-      // If there's no payment ID, we push a dummy (as wallet2 does) if there's only one payment
-      if (self.payments.len() == 2) &&
-        self.payments.iter().any(|payment| matches!(payment, InternalPayment::Change(_)))
-      {
+      /*
+        If there's no payment ID, we push a dummy (as wallet2 does) to the first payment.
+
+        This does cause a random payment ID for the other recipient (a documented fingerprint).
+        Functionally, random payment IDs should be fine, as wallet2 will trigger this same behavior
+        (a random payment ID being seen by the recipient) with a batch send if one of the recipient
+        addresses has a payment ID.
+
+        The alternative would be to not include any payment ID, fingerprinting to the entire
+        blockchain that this is non-standard wallet software (instead of just to a single recipient).
+      */
+      if self.payments.len() == 2 {
         let (_, payment_id_xor) = self
           .payments
           .iter()

From 0616085109be26825ed68b8e86c327c52d9df6c5 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sat, 14 Sep 2024 04:24:48 -0400
Subject: [PATCH 126/368] Monero Planner

Finishes the Monero processor.
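
The core interface change below: fee estimation no longer goes through a
synchronous `Self::FeeRate`. It now takes the reference block itself and is
async and fallible, as the Monero planner must call the RPC (for decoy
selection) and read the block's hard fork version. A minimal sketch of the new
shape, with stand-in types where the actual trait is generic over the scanner
feed (an illustration, not the exact code):

use core::future::Future;

struct Block; // stands in for BlockFor<S>
struct Amount(u64); // stands in for serai_primitives::Amount

trait TransactionPlanner {
  // The error type for e.g. RPC failures (RpcError for the Monero planner)
  type EphemeralError;

  // Replaces `fn fee_rate(block, coin) -> Self::FeeRate` plus a synchronous
  // `fn calculate_fee(fee_rate, ...) -> Amount`
  fn calculate_fee(
    &self,
    reference_block: &Block,
    // inputs, payments, and change are elided from this sketch
  ) -> impl Send + Future<Output = Result<Amount, Self::EphemeralError>>;
}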
--- Cargo.lock | 1 + processor/bitcoin/src/main.rs | 9 - .../bitcoin/src/primitives/transaction.rs | 25 +- processor/bitcoin/src/scheduler.rs | 57 +- processor/monero/Cargo.toml | 1 + processor/monero/src/lib.rs | 319 ----------- processor/monero/src/main.rs | 146 +++++ processor/monero/src/primitives/output.rs | 7 - .../monero/src/primitives/transaction.rs | 8 +- processor/monero/src/scheduler.rs | 535 ++++++------------ .../scheduler/utxo/primitives/src/lib.rs | 29 +- processor/scheduler/utxo/standard/src/lib.rs | 12 +- .../utxo/transaction-chaining/src/lib.rs | 12 +- 13 files changed, 406 insertions(+), 755 deletions(-) delete mode 100644 processor/monero/src/lib.rs diff --git a/Cargo.lock b/Cargo.lock index 9e34ea3c..c3e39a09 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8525,6 +8525,7 @@ dependencies = [ "monero-simple-request-rpc", "monero-wallet", "parity-scale-codec", + "rand_chacha", "rand_core", "serai-client", "serai-db", diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index d029ad8b..f260c47c 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -223,15 +223,6 @@ impl Network for Bitcoin { self.rpc.get_block_number(id).await.unwrap() } - #[cfg(test)] - async fn check_eventuality_by_claim( - &self, - eventuality: &Self::Eventuality, - _: &EmptyClaim, - ) -> bool { - self.rpc.get_transaction(&eventuality.0).await.is_ok() - } - #[cfg(test)] async fn get_transaction_by_eventuality(&self, _: usize, id: &Eventuality) -> Transaction { self.rpc.get_transaction(&id.0).await.unwrap() diff --git a/processor/bitcoin/src/primitives/transaction.rs b/processor/bitcoin/src/primitives/transaction.rs index 5fca0b91..8e7a26f6 100644 --- a/processor/bitcoin/src/primitives/transaction.rs +++ b/processor/bitcoin/src/primitives/transaction.rs @@ -49,7 +49,7 @@ impl scheduler::Transaction for Transaction { #[derive(Clone, Debug)] pub(crate) struct SignableTransaction { pub(crate) inputs: Vec, - pub(crate) payments: Vec<(Address, u64)>, + pub(crate) payments: Vec<(ScriptBuf, u64)>, pub(crate) change: Option
<Address>
, pub(crate) fee_per_vbyte: u64, } @@ -58,12 +58,7 @@ impl SignableTransaction { fn signable(self) -> Result { BSignableTransaction::new( self.inputs, - &self - .payments - .iter() - .cloned() - .map(|(address, amount)| (ScriptBuf::from(address), amount)) - .collect::>(), + &self.payments, self.change.map(ScriptBuf::from), None, self.fee_per_vbyte, @@ -108,11 +103,19 @@ impl scheduler::SignableTransaction for SignableTransaction { inputs }; - let payments = <_>::deserialize_reader(reader)?; + let payments = Vec::<(Vec, u64)>::deserialize_reader(reader)?; let change = <_>::deserialize_reader(reader)?; let fee_per_vbyte = <_>::deserialize_reader(reader)?; - Ok(Self { inputs, payments, change, fee_per_vbyte }) + Ok(Self { + inputs, + payments: payments + .into_iter() + .map(|(address, amount)| (ScriptBuf::from_bytes(address), amount)) + .collect(), + change, + fee_per_vbyte, + }) } fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { writer.write_all(&u32::try_from(self.inputs.len()).unwrap().to_le_bytes())?; @@ -120,7 +123,9 @@ impl scheduler::SignableTransaction for SignableTransaction { input.write(writer)?; } - self.payments.serialize(writer)?; + for payment in &self.payments { + (payment.0.as_script().as_bytes(), payment.1).serialize(writer)?; + } self.change.serialize(writer)?; self.fee_per_vbyte.serialize(writer)?; diff --git a/processor/bitcoin/src/scheduler.rs b/processor/bitcoin/src/scheduler.rs index b6554bda..08dc508c 100644 --- a/processor/bitcoin/src/scheduler.rs +++ b/processor/bitcoin/src/scheduler.rs @@ -35,7 +35,7 @@ fn address_from_serai_key(key: ::G, kind: OutputType) } fn signable_transaction( - fee_per_vbyte: u64, + _reference_block: &BlockFor>, inputs: Vec>>, payments: Vec>>>, change: Option>>, @@ -49,12 +49,15 @@ fn signable_transaction( , EffectedReceivedOutputs>>>::MAX_OUTPUTS ); + // TODO + let fee_per_vbyte = 1; + let inputs = inputs.into_iter().map(|input| input.output).collect::>(); let mut payments = payments .into_iter() .map(|payment| { - (payment.address().clone(), { + (ScriptBuf::from(payment.address().clone()), { let balance = payment.balance(); assert_eq!(balance.coin, Coin::Bitcoin); balance.amount.0 @@ -68,7 +71,7 @@ fn signable_transaction( */ payments.push(( // The generator is even so this is valid - Address::new(p2tr_script_buf(::G::GENERATOR).unwrap()).unwrap(), + p2tr_script_buf(::G::GENERATOR).unwrap(), // This uses the minimum output value allowed, as defined as a constant in bitcoin-serai // TODO: Add a test for this comparing to bitcoin's `minimal_non_dust` bitcoin_serai::wallet::DUST, @@ -79,11 +82,7 @@ fn signable_transaction( BSignableTransaction::new( inputs.clone(), - &payments - .iter() - .cloned() - .map(|(address, amount)| (ScriptBuf::from(address), amount)) - .collect::>(), + &payments, change.clone().map(ScriptBuf::from), None, fee_per_vbyte, @@ -95,7 +94,6 @@ fn signable_transaction( pub(crate) struct Planner; impl TransactionPlanner, EffectedReceivedOutputs>> for Planner { type EphemeralError = (); - type FeeRate = u64; type SignableTransaction = SignableTransaction; @@ -119,12 +117,6 @@ impl TransactionPlanner, EffectedReceivedOutputs>> for Plan // to unstick any transactions which had too low of a fee. 
const MAX_OUTPUTS: usize = 519; - fn fee_rate(block: &BlockFor>, coin: Coin) -> Self::FeeRate { - assert_eq!(coin, Coin::Bitcoin); - // TODO - 1 - } - fn branch_address(key: KeyFor>) -> AddressFor> { address_from_serai_key(key, OutputType::Branch) } @@ -136,29 +128,32 @@ impl TransactionPlanner, EffectedReceivedOutputs>> for Plan } fn calculate_fee( - fee_rate: Self::FeeRate, + &self, + reference_block: &BlockFor>, inputs: Vec>>, payments: Vec>>>, change: Option>>, - ) -> Amount { - match signable_transaction::(fee_rate, inputs, payments, change) { - Ok(tx) => Amount(tx.1.needed_fee()), - Err( - TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment, - ) => panic!("malformed arguments to calculate_fee"), - // No data, we have a minimum fee rate, we checked the amount of inputs/outputs - Err( - TransactionError::TooMuchData | - TransactionError::TooLowFee | - TransactionError::TooLargeTransaction, - ) => unreachable!(), - Err(TransactionError::NotEnoughFunds { fee, .. }) => Amount(fee), + ) -> impl Send + Future> { + async move { + Ok(match signable_transaction::(reference_block, inputs, payments, change) { + Ok(tx) => Amount(tx.1.needed_fee()), + Err( + TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment, + ) => panic!("malformed arguments to calculate_fee"), + // No data, we have a minimum fee rate, we checked the amount of inputs/outputs + Err( + TransactionError::TooMuchData | + TransactionError::TooLowFee | + TransactionError::TooLargeTransaction, + ) => unreachable!(), + Err(TransactionError::NotEnoughFunds { fee, .. }) => Amount(fee), + }) } } fn plan( &self, - fee_rate: Self::FeeRate, + reference_block: &BlockFor>, inputs: Vec>>, payments: Vec>>>, change: Option>>, @@ -176,7 +171,7 @@ impl TransactionPlanner, EffectedReceivedOutputs>> for Plan } let singular_spent_output = (inputs.len() == 1).then(|| inputs[0].id()); - match signable_transaction::(fee_rate, inputs.clone(), payments, change) { + match signable_transaction::(reference_block, inputs.clone(), payments, change) { Ok(tx) => Ok(PlannedTransaction { signable: tx.0, eventuality: Eventuality { txid: tx.1.txid(), singular_spent_output }, diff --git a/processor/monero/Cargo.toml b/processor/monero/Cargo.toml index 436f327e..cc895eda 100644 --- a/processor/monero/Cargo.toml +++ b/processor/monero/Cargo.toml @@ -18,6 +18,7 @@ workspace = true [dependencies] rand_core = { version = "0.6", default-features = false } +rand_chacha = { version = "0.3", default-features = false, features = ["std"] } zeroize = { version = "1", default-features = false, features = ["std"] } hex = { version = "0.4", default-features = false, features = ["std"] } diff --git a/processor/monero/src/lib.rs b/processor/monero/src/lib.rs deleted file mode 100644 index 0848e08a..00000000 --- a/processor/monero/src/lib.rs +++ /dev/null @@ -1,319 +0,0 @@ -/* -// TODO: Consider ([u8; 32], TransactionPruned) -#[async_trait] -impl TransactionTrait for Transaction { - type Id = [u8; 32]; - fn id(&self) -> Self::Id { - self.hash() - } - - #[cfg(test)] - async fn fee(&self, _: &Monero) -> u64 { - match self { - Transaction::V1 { .. } => panic!("v1 TX in test-only function"), - Transaction::V2 { ref proofs, .. 
} => proofs.as_ref().unwrap().base.fee, - } - } -} - -impl EventualityTrait for Eventuality { - type Claim = [u8; 32]; - type Completion = Transaction; - - // Use the TX extra to look up potential matches - // While anyone can forge this, a transaction with distinct outputs won't actually match - // Extra includess the one time keys which are derived from the plan ID, so a collision here is a - // hash collision - fn lookup(&self) -> Vec { - self.extra() - } - - fn read(reader: &mut R) -> io::Result { - Eventuality::read(reader) - } - fn serialize(&self) -> Vec { - self.serialize() - } - - fn claim(tx: &Transaction) -> [u8; 32] { - tx.id() - } - fn serialize_completion(completion: &Transaction) -> Vec { - completion.serialize() - } - fn read_completion(reader: &mut R) -> io::Result { - Transaction::read(reader) - } -} - -#[derive(Clone, Debug)] -pub struct SignableTransaction(MSignableTransaction); -impl SignableTransactionTrait for SignableTransaction { - fn fee(&self) -> u64 { - self.0.necessary_fee() - } -} - -enum MakeSignableTransactionResult { - Fee(u64), - SignableTransaction(MSignableTransaction), -} - -impl Monero { - pub async fn new(url: String) -> Monero { - let mut res = SimpleRequestRpc::new(url.clone()).await; - while let Err(e) = res { - log::error!("couldn't connect to Monero node: {e:?}"); - tokio::time::sleep(Duration::from_secs(5)).await; - res = SimpleRequestRpc::new(url.clone()).await; - } - Monero { rpc: res.unwrap() } - } - - fn view_pair(spend: EdwardsPoint) -> GuaranteedViewPair { - GuaranteedViewPair::new(spend.0, Zeroizing::new(additional_key::(0).0)).unwrap() - } - - fn address_internal(spend: EdwardsPoint, subaddress: Option) -> Address { - Address::new(Self::view_pair(spend).address(MoneroNetwork::Mainnet, subaddress, None)).unwrap() - } - - fn scanner(spend: EdwardsPoint) -> GuaranteedScanner { - let mut scanner = GuaranteedScanner::new(Self::view_pair(spend)); - debug_assert!(EXTERNAL_SUBADDRESS.is_none()); - scanner.register_subaddress(BRANCH_SUBADDRESS.unwrap()); - scanner.register_subaddress(CHANGE_SUBADDRESS.unwrap()); - scanner.register_subaddress(FORWARD_SUBADDRESS.unwrap()); - scanner - } - - async fn median_fee(&self, block: &Block) -> Result { - let mut fees = vec![]; - for tx_hash in &block.transactions { - let tx = - self.rpc.get_transaction(*tx_hash).await.map_err(|_| NetworkError::ConnectionError)?; - // Only consider fees from RCT transactions, else the fee property read wouldn't be accurate - let fee = match &tx { - Transaction::V2 { proofs: Some(proofs), .. 
} => proofs.base.fee, - _ => continue, - }; - fees.push(fee / u64::try_from(tx.weight()).unwrap()); - } - fees.sort(); - let fee = fees.get(fees.len() / 2).copied().unwrap_or(0); - - // TODO: Set a sane minimum fee - const MINIMUM_FEE: u64 = 1_500_000; - Ok(FeeRate::new(fee.max(MINIMUM_FEE), 10000).unwrap()) - } - - #[cfg(test)] - fn test_view_pair() -> ViewPair { - ViewPair::new(*EdwardsPoint::generator(), Zeroizing::new(Scalar::ONE.0)).unwrap() - } - - #[cfg(test)] - fn test_scanner() -> Scanner { - Scanner::new(Self::test_view_pair()) - } - - #[cfg(test)] - fn test_address() -> Address { - Address::new(Self::test_view_pair().legacy_address(MoneroNetwork::Mainnet)).unwrap() - } -} - -#[async_trait] -impl Network for Monero { - const NETWORK: NetworkId = NetworkId::Monero; - const ID: &'static str = "Monero"; - const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize = 120; - const CONFIRMATIONS: usize = 10; - - // TODO - const COST_TO_AGGREGATE: u64 = 0; - - #[cfg(test)] - async fn external_address(&self, key: EdwardsPoint) -> Address { - Self::address_internal(key, EXTERNAL_SUBADDRESS) - } - - fn branch_address(key: EdwardsPoint) -> Option
<Address> { - Some(Self::address_internal(key, BRANCH_SUBADDRESS)) - } - - fn change_address(key: EdwardsPoint) -> Option<Address>
{ - Some(Self::address_internal(key, CHANGE_SUBADDRESS)) - } - - fn forward_address(key: EdwardsPoint) -> Option<Address>
{ - Some(Self::address_internal(key, FORWARD_SUBADDRESS)) - } - - async fn needed_fee( - &self, - block_number: usize, - inputs: &[Output], - payments: &[Payment], - change: &Option<Address>
, - ) -> Result, NetworkError> { - let res = self - .make_signable_transaction(block_number, &[0; 32], inputs, payments, change, true) - .await?; - let Some(res) = res else { return Ok(None) }; - let MakeSignableTransactionResult::Fee(fee) = res else { - panic!("told make_signable_transaction calculating_fee and got transaction") - }; - Ok(Some(fee)) - } - - async fn signable_transaction( - &self, - block_number: usize, - plan_id: &[u8; 32], - _key: EdwardsPoint, - inputs: &[Output], - payments: &[Payment], - change: &Option
, - (): &(), - ) -> Result, NetworkError> { - let res = self - .make_signable_transaction(block_number, plan_id, inputs, payments, change, false) - .await?; - let Some(res) = res else { return Ok(None) }; - let MakeSignableTransactionResult::SignableTransaction(signable) = res else { - panic!("told make_signable_transaction not calculating_fee and got fee") - }; - - let signable = SignableTransaction(signable); - let eventuality = signable.0.clone().into(); - Ok(Some((signable, eventuality))) - } - - async fn attempt_sign( - &self, - keys: ThresholdKeys, - transaction: SignableTransaction, - ) -> Result { - match transaction.0.clone().multisig(keys) { - Ok(machine) => Ok(machine), - Err(e) => panic!("failed to create a multisig machine for TX: {e}"), - } - } - - async fn publish_completion(&self, tx: &Transaction) -> Result<(), NetworkError> { - match self.rpc.publish_transaction(tx).await { - Ok(()) => Ok(()), - Err(RpcError::ConnectionError(e)) => { - log::debug!("Monero ConnectionError: {e}"); - Err(NetworkError::ConnectionError)? - } - // TODO: Distinguish already in pool vs double spend (other signing attempt succeeded) vs - // invalid transaction - Err(e) => panic!("failed to publish TX {}: {e}", hex::encode(tx.hash())), - } - } - - #[cfg(test)] - async fn get_block_number(&self, id: &[u8; 32]) -> usize { - self.rpc.get_block(*id).await.unwrap().number().unwrap() - } - - #[cfg(test)] - async fn check_eventuality_by_claim( - &self, - eventuality: &Self::Eventuality, - claim: &[u8; 32], - ) -> bool { - return eventuality.matches(&self.rpc.get_pruned_transaction(*claim).await.unwrap()); - } - - #[cfg(test)] - async fn get_transaction_by_eventuality( - &self, - block: usize, - eventuality: &Eventuality, - ) -> Transaction { - let block = self.rpc.get_block_by_number(block).await.unwrap(); - for tx in &block.transactions { - let tx = self.rpc.get_transaction(*tx).await.unwrap(); - if eventuality.matches(&tx.clone().into()) { - return tx; - } - } - panic!("block didn't have a transaction for this eventuality") - } - - #[cfg(test)] - async fn mine_block(&self) { - // https://github.com/serai-dex/serai/issues/198 - sleep(std::time::Duration::from_millis(100)).await; - self.rpc.generate_blocks(&Self::test_address().into(), 1).await.unwrap(); - } - - #[cfg(test)] - async fn test_send(&self, address: Address) -> Block { - use zeroize::Zeroizing; - use rand_core::{RngCore, OsRng}; - use monero_wallet::rpc::FeePriority; - - let new_block = self.get_latest_block_number().await.unwrap() + 1; - for _ in 0 .. 
80 { - self.mine_block().await; - } - - let new_block = self.rpc.get_block_by_number(new_block).await.unwrap(); - let mut outputs = Self::test_scanner() - .scan(self.rpc.get_scannable_block(new_block.clone()).await.unwrap()) - .unwrap() - .ignore_additional_timelock(); - let output = outputs.swap_remove(0); - - let amount = output.commitment().amount; - // The dust should always be sufficient for the fee - let fee = Monero::DUST; - - let rct_type = match new_block.header.hardfork_version { - 14 => RctType::ClsagBulletproof, - 15 | 16 => RctType::ClsagBulletproofPlus, - _ => panic!("Monero hard forked and the processor wasn't updated for it"), - }; - - let output = OutputWithDecoys::fingerprintable_deterministic_new( - &mut OsRng, - &self.rpc, - match rct_type { - RctType::ClsagBulletproof => 11, - RctType::ClsagBulletproofPlus => 16, - _ => panic!("selecting decoys for an unsupported RctType"), - }, - self.rpc.get_height().await.unwrap(), - output, - ) - .await - .unwrap(); - - let mut outgoing_view_key = Zeroizing::new([0; 32]); - OsRng.fill_bytes(outgoing_view_key.as_mut()); - let tx = MSignableTransaction::new( - rct_type, - outgoing_view_key, - vec![output], - vec![(address.into(), amount - fee)], - Change::fingerprintable(Some(Self::test_address().into())), - vec![], - self.rpc.get_fee_rate(FeePriority::Unimportant).await.unwrap(), - ) - .unwrap() - .sign(&mut OsRng, &Zeroizing::new(Scalar::ONE.0)) - .unwrap(); - - let block = self.get_latest_block_number().await.unwrap() + 1; - self.rpc.publish_transaction(&tx).await.unwrap(); - for _ in 0 .. 10 { - self.mine_block().await; - } - self.get_block(block).await.unwrap() - } -} -*/ diff --git a/processor/monero/src/main.rs b/processor/monero/src/main.rs index daba3255..d36118d0 100644 --- a/processor/monero/src/main.rs +++ b/processor/monero/src/main.rs @@ -41,3 +41,149 @@ async fn main() { ) .await; } + +/* +#[async_trait] +impl TransactionTrait for Transaction { + #[cfg(test)] + async fn fee(&self, _: &Monero) -> u64 { + match self { + Transaction::V1 { .. } => panic!("v1 TX in test-only function"), + Transaction::V2 { ref proofs, .. } => proofs.as_ref().unwrap().base.fee, + } + } +} + +impl Monero { + async fn median_fee(&self, block: &Block) -> Result { + let mut fees = vec![]; + for tx_hash in &block.transactions { + let tx = + self.rpc.get_transaction(*tx_hash).await.map_err(|_| NetworkError::ConnectionError)?; + // Only consider fees from RCT transactions, else the fee property read wouldn't be accurate + let fee = match &tx { + Transaction::V2 { proofs: Some(proofs), .. 
} => proofs.base.fee, + _ => continue, + }; + fees.push(fee / u64::try_from(tx.weight()).unwrap()); + } + fees.sort(); + let fee = fees.get(fees.len() / 2).copied().unwrap_or(0); + + // TODO: Set a sane minimum fee + const MINIMUM_FEE: u64 = 1_500_000; + Ok(FeeRate::new(fee.max(MINIMUM_FEE), 10000).unwrap()) + } + + #[cfg(test)] + fn test_view_pair() -> ViewPair { + ViewPair::new(*EdwardsPoint::generator(), Zeroizing::new(Scalar::ONE.0)).unwrap() + } + + #[cfg(test)] + fn test_scanner() -> Scanner { + Scanner::new(Self::test_view_pair()) + } + + #[cfg(test)] + fn test_address() -> Address { + Address::new(Self::test_view_pair().legacy_address(MoneroNetwork::Mainnet)).unwrap() + } +} + +#[async_trait] +impl Network for Monero { + #[cfg(test)] + async fn get_block_number(&self, id: &[u8; 32]) -> usize { + self.rpc.get_block(*id).await.unwrap().number().unwrap() + } + + #[cfg(test)] + async fn get_transaction_by_eventuality( + &self, + block: usize, + eventuality: &Eventuality, + ) -> Transaction { + let block = self.rpc.get_block_by_number(block).await.unwrap(); + for tx in &block.transactions { + let tx = self.rpc.get_transaction(*tx).await.unwrap(); + if eventuality.matches(&tx.clone().into()) { + return tx; + } + } + panic!("block didn't have a transaction for this eventuality") + } + + #[cfg(test)] + async fn mine_block(&self) { + // https://github.com/serai-dex/serai/issues/198 + sleep(std::time::Duration::from_millis(100)).await; + self.rpc.generate_blocks(&Self::test_address().into(), 1).await.unwrap(); + } + + #[cfg(test)] + async fn test_send(&self, address: Address) -> Block { + use zeroize::Zeroizing; + use rand_core::{RngCore, OsRng}; + use monero_wallet::rpc::FeePriority; + + let new_block = self.get_latest_block_number().await.unwrap() + 1; + for _ in 0 .. 80 { + self.mine_block().await; + } + + let new_block = self.rpc.get_block_by_number(new_block).await.unwrap(); + let mut outputs = Self::test_scanner() + .scan(self.rpc.get_scannable_block(new_block.clone()).await.unwrap()) + .unwrap() + .ignore_additional_timelock(); + let output = outputs.swap_remove(0); + + let amount = output.commitment().amount; + // The dust should always be sufficient for the fee + let fee = Monero::DUST; + + let rct_type = match new_block.header.hardfork_version { + 14 => RctType::ClsagBulletproof, + 15 | 16 => RctType::ClsagBulletproofPlus, + _ => panic!("Monero hard forked and the processor wasn't updated for it"), + }; + + let output = OutputWithDecoys::fingerprintable_deterministic_new( + &mut OsRng, + &self.rpc, + match rct_type { + RctType::ClsagBulletproof => 11, + RctType::ClsagBulletproofPlus => 16, + _ => panic!("selecting decoys for an unsupported RctType"), + }, + self.rpc.get_height().await.unwrap(), + output, + ) + .await + .unwrap(); + + let mut outgoing_view_key = Zeroizing::new([0; 32]); + OsRng.fill_bytes(outgoing_view_key.as_mut()); + let tx = MSignableTransaction::new( + rct_type, + outgoing_view_key, + vec![output], + vec![(address.into(), amount - fee)], + Change::fingerprintable(Some(Self::test_address().into())), + vec![], + self.rpc.get_fee_rate(FeePriority::Unimportant).await.unwrap(), + ) + .unwrap() + .sign(&mut OsRng, &Zeroizing::new(Scalar::ONE.0)) + .unwrap(); + + let block = self.get_latest_block_number().await.unwrap() + 1; + self.rpc.publish_transaction(&tx).await.unwrap(); + for _ in 0 .. 
10 { + self.mine_block().await; + } + self.get_block(block).await.unwrap() + } +} +*/ diff --git a/processor/monero/src/primitives/output.rs b/processor/monero/src/primitives/output.rs index fea042c8..201e75c9 100644 --- a/processor/monero/src/primitives/output.rs +++ b/processor/monero/src/primitives/output.rs @@ -34,13 +34,6 @@ impl AsMut<[u8]> for OutputId { #[derive(Clone, PartialEq, Eq, Debug)] pub(crate) struct Output(pub(crate) WalletOutput); - -impl Output { - pub(crate) fn new(output: WalletOutput) -> Self { - Self(output) - } -} - impl ReceivedOutput<::G, Address> for Output { type Id = OutputId; type TransactionId = [u8; 32]; diff --git a/processor/monero/src/primitives/transaction.rs b/processor/monero/src/primitives/transaction.rs index f6765cd9..eeeef81d 100644 --- a/processor/monero/src/primitives/transaction.rs +++ b/processor/monero/src/primitives/transaction.rs @@ -34,8 +34,8 @@ impl scheduler::Transaction for Transaction { #[derive(Clone, Debug)] pub(crate) struct SignableTransaction { - id: [u8; 32], - signable: MSignableTransaction, + pub(crate) id: [u8; 32], + pub(crate) signable: MSignableTransaction, } #[derive(Clone)] @@ -81,8 +81,8 @@ impl scheduler::SignableTransaction for SignableTransaction { #[derive(Clone, PartialEq, Eq, Debug)] pub(crate) struct Eventuality { - id: [u8; 32], - singular_spent_output: Option, + pub(crate) id: [u8; 32], + pub(crate) singular_spent_output: Option, pub(crate) eventuality: MEventuality, } diff --git a/processor/monero/src/scheduler.rs b/processor/monero/src/scheduler.rs index ef52c413..667840f6 100644 --- a/processor/monero/src/scheduler.rs +++ b/processor/monero/src/scheduler.rs @@ -1,146 +1,9 @@ -/* -async fn make_signable_transaction( -block_number: usize, -plan_id: &[u8; 32], -inputs: &[Output], -payments: &[Payment], -change: &Option
, -calculating_fee: bool, -) -> Result, NetworkError> { -for payment in payments { - assert_eq!(payment.balance.coin, Coin::Monero); -} - -// TODO2: Use an fee representative of several blocks, cached inside Self -let block_for_fee = self.get_block(block_number).await?; -let fee_rate = self.median_fee(&block_for_fee).await?; - -// Determine the RCT proofs to make based off the hard fork -// TODO: Make a fn for this block which is duplicated with tests -let rct_type = match block_for_fee.header.hardfork_version { - 14 => RctType::ClsagBulletproof, - 15 | 16 => RctType::ClsagBulletproofPlus, - _ => panic!("Monero hard forked and the processor wasn't updated for it"), -}; - -let mut transcript = - RecommendedTranscript::new(b"Serai Processor Monero Transaction Transcript"); -transcript.append_message(b"plan", plan_id); - -// All signers need to select the same decoys -// All signers use the same height and a seeded RNG to make sure they do so. -let mut inputs_actual = Vec::with_capacity(inputs.len()); -for input in inputs { - inputs_actual.push( - OutputWithDecoys::fingerprintable_deterministic_new( - &mut ChaCha20Rng::from_seed(transcript.rng_seed(b"decoys")), - &self.rpc, - // TODO: Have Decoys take RctType - match rct_type { - RctType::ClsagBulletproof => 11, - RctType::ClsagBulletproofPlus => 16, - _ => panic!("selecting decoys for an unsupported RctType"), - }, - block_number + 1, - input.0.clone(), - ) - .await - .map_err(map_rpc_err)?, - ); -} - -// Monero requires at least two outputs -// If we only have one output planned, add a dummy payment -let mut payments = payments.to_vec(); -let outputs = payments.len() + usize::from(u8::from(change.is_some())); -if outputs == 0 { - return Ok(None); -} else if outputs == 1 { - payments.push(Payment { - address: Address::new( - ViewPair::new(EdwardsPoint::generator().0, Zeroizing::new(Scalar::ONE.0)) - .unwrap() - .legacy_address(MoneroNetwork::Mainnet), - ) - .unwrap(), - balance: Balance { coin: Coin::Monero, amount: Amount(0) }, - data: None, - }); -} - -let payments = payments - .into_iter() - .map(|payment| (payment.address.into(), payment.balance.amount.0)) - .collect::>(); - -match MSignableTransaction::new( - rct_type, - // Use the plan ID as the outgoing view key - Zeroizing::new(*plan_id), - inputs_actual, - payments, - Change::fingerprintable(change.as_ref().map(|change| change.clone().into())), - vec![], - fee_rate, -) { - Ok(signable) => Ok(Some({ - if calculating_fee { - MakeSignableTransactionResult::Fee(signable.necessary_fee()) - } else { - MakeSignableTransactionResult::SignableTransaction(signable) - } - })), - Err(e) => match e { - SendError::UnsupportedRctType => { - panic!("trying to use an RctType unsupported by monero-wallet") - } - SendError::NoInputs | - SendError::InvalidDecoyQuantity | - SendError::NoOutputs | - SendError::TooManyOutputs | - SendError::NoChange | - SendError::TooMuchArbitraryData | - SendError::TooLargeTransaction | - SendError::WrongPrivateKey => { - panic!("created an invalid Monero transaction: {e}"); - } - SendError::MultiplePaymentIds => { - panic!("multiple payment IDs despite not supporting integrated addresses"); - } - SendError::NotEnoughFunds { inputs, outputs, necessary_fee } => { - log::debug!( - "Monero NotEnoughFunds. 
inputs: {:?}, outputs: {:?}, necessary_fee: {necessary_fee:?}", - inputs, - outputs - ); - match necessary_fee { - Some(necessary_fee) => { - // If we're solely calculating the fee, return the fee this TX will cost - if calculating_fee { - Ok(Some(MakeSignableTransactionResult::Fee(necessary_fee))) - } else { - // If we're actually trying to make the TX, return None - Ok(None) - } - } - // We didn't have enough funds to even cover the outputs - None => { - // Ensure we're not misinterpreting this - assert!(outputs > inputs); - Ok(None) - } - } - } - SendError::MaliciousSerialization | SendError::ClsagError(_) | SendError::FrostError(_) => { - panic!("supposedly unreachable (at this time) Monero error: {e}"); - } - }, -} -} -*/ - use core::future::Future; +use zeroize::Zeroizing; +use rand_core::SeedableRng; +use rand_chacha::ChaCha20Rng; + use ciphersuite::{Ciphersuite, Ed25519}; use monero_wallet::rpc::{FeeRate, RpcError}; @@ -154,11 +17,17 @@ use primitives::{OutputType, ReceivedOutput, Payment}; use scanner::{KeyFor, AddressFor, OutputFor, BlockFor}; use utxo_scheduler::{PlannedTransaction, TransactionPlanner}; -use monero_wallet::address::Network; +use monero_wallet::{ + ringct::RctType, + address::{Network, AddressType, MoneroAddress}, + OutputWithDecoys, + send::{ + Change, SendError, SignableTransaction as MSignableTransaction, Eventuality as MEventuality, + }, +}; use crate::{ EXTERNAL_SUBADDRESS, BRANCH_SUBADDRESS, CHANGE_SUBADDRESS, FORWARDED_SUBADDRESS, view_pair, - output::Output, transaction::{SignableTransaction, Eventuality}, rpc::Rpc, }; @@ -179,13 +48,108 @@ fn address_from_serai_key(key: ::G, kind: OutputType) -> .expect("created address which wasn't representable") } +async fn signable_transaction( + rpc: &Rpc, + reference_block: &BlockFor, + inputs: Vec>, + payments: Vec>>, + change: Option>, +) -> Result, RpcError> { + assert!(inputs.len() < >::MAX_INPUTS); + assert!( + (payments.len() + usize::from(u8::from(change.is_some()))) < + >::MAX_OUTPUTS + ); + + // TODO: Set a sane minimum fee + const MINIMUM_FEE: u64 = 1_500_000; + // TODO: Set a fee rate based on the reference block + let fee_rate = FeeRate::new(MINIMUM_FEE, 10000).unwrap(); + + // Determine the RCT proofs to make based off the hard fork + let rct_type = match reference_block.0.block.header.hardfork_version { + 14 => RctType::ClsagBulletproof, + 15 | 16 => RctType::ClsagBulletproofPlus, + _ => panic!("Monero hard forked and the processor wasn't updated for it"), + }; + + // We need a unique ID to distinguish this transaction from another transaction with an identical + // set of payments (as our Eventualities only match over the payments). The output's ID is + // guaranteed to be unique, making it satisfactory + let id = inputs.first().unwrap().id().0; + + let mut inputs_actual = Vec::with_capacity(inputs.len()); + for input in inputs { + inputs_actual.push( + OutputWithDecoys::fingerprintable_deterministic_new( + // We need a deterministic RNG here with *some* seed + // The unique ID means we don't pick some static seed + // It is a public value, yet that's fine as this is assumed fully transparent + // It is a reused value (with later code), but that's not an issue. 
Just an oddity + &mut ChaCha20Rng::from_seed(id), + &rpc.rpc, + // TODO: Have Decoys take RctType + match rct_type { + RctType::ClsagBulletproof => 11, + RctType::ClsagBulletproofPlus => 16, + _ => panic!("selecting decoys for an unsupported RctType"), + }, + reference_block.0.block.number().unwrap() + 1, + input.0.clone(), + ) + .await?, + ); + } + let inputs = inputs_actual; + + let mut payments = payments + .into_iter() + .map(|payment| { + (MoneroAddress::from(*payment.address()), { + let balance = payment.balance(); + assert_eq!(balance.coin, Coin::Monero); + balance.amount.0 + }) + }) + .collect::>(); + if (payments.len() + usize::from(u8::from(change.is_some()))) == 1 { + // Monero requires at least two outputs, so add a dummy payment + payments.push(( + MoneroAddress::new( + Network::Mainnet, + AddressType::Legacy, + ::generator().0, + ::generator().0, + ), + 0, + )); + } + + let change = if let Some(change) = change { + Change::guaranteed(view_pair(change), Some(CHANGE_SUBADDRESS)) + } else { + Change::fingerprintable(None) + }; + + Ok( + MSignableTransaction::new( + rct_type, + Zeroizing::new(id), + inputs, + payments, + change, + vec![], + fee_rate, + ) + .map(|signable| (SignableTransaction { id, signable: signable.clone() }, signable)), + ) +} + #[derive(Clone)] pub(crate) struct Planner(pub(crate) Rpc); impl TransactionPlanner for Planner { type EphemeralError = RpcError; - type FeeRate = FeeRate; - type SignableTransaction = SignableTransaction; // wallet2 will not create a transaction larger than 100 KB, and Monero won't relay a transaction @@ -195,12 +159,6 @@ impl TransactionPlanner for Planner { const MAX_INPUTS: usize = 120; const MAX_OUTPUTS: usize = 16; - fn fee_rate(block: &BlockFor, coin: Coin) -> Self::FeeRate { - assert_eq!(coin, Coin::Monero); - // TODO - todo!("TODO") - } - fn branch_address(key: KeyFor) -> AddressFor { address_from_serai_key(key, OutputType::Branch) } @@ -212,218 +170,101 @@ impl TransactionPlanner for Planner { } fn calculate_fee( - fee_rate: Self::FeeRate, + &self, + reference_block: &BlockFor, inputs: Vec>, payments: Vec>>, change: Option>, - ) -> Amount { - todo!("TODO") + ) -> impl Send + Future> { + async move { + Ok(match signable_transaction(&self.0, reference_block, inputs, payments, change).await? { + Ok(tx) => Amount(tx.1.necessary_fee()), + Err(SendError::NotEnoughFunds { necessary_fee, .. 
}) => { + Amount(necessary_fee.expect("outputs value exceeded inputs value")) + } + Err(SendError::UnsupportedRctType) => { + panic!("tried to use an RctType monero-wallet doesn't support") + } + Err(SendError::NoInputs | SendError::NoOutputs | SendError::TooManyOutputs) => { + panic!("malformed plan passed to calculate_fee") + } + Err(SendError::InvalidDecoyQuantity) => panic!("selected the wrong amount of decoys"), + Err(SendError::NoChange) => { + panic!("didn't add a dummy payment to satisfy the 2-output minimum") + } + Err(SendError::MultiplePaymentIds) => { + panic!("included multiple payment IDs despite not supporting addresses with payment IDs") + } + Err(SendError::TooMuchArbitraryData) => { + panic!("included too much arbitrary data despite not including any") + } + Err(SendError::TooLargeTransaction) => { + panic!("too large transaction despite MAX_INPUTS/MAX_OUTPUTS") + } + Err( + SendError::WrongPrivateKey | + SendError::MaliciousSerialization | + SendError::ClsagError(_) | + SendError::FrostError(_), + ) => unreachable!("signing/serialization error when not signing/serializing"), + }) + } } fn plan( &self, - fee_rate: Self::FeeRate, + reference_block: &BlockFor, inputs: Vec>, payments: Vec>>, change: Option>, ) -> impl Send + Future, RpcError>> { - async move { todo!("TODO") } - } -} - -pub(crate) type Scheduler = utxo_standard_scheduler::Scheduler; - -/* -use ciphersuite::{Ciphersuite, Ed25519}; - -use bitcoin_serai::{ - bitcoin::ScriptBuf, - wallet::{TransactionError, SignableTransaction as BSignableTransaction, p2tr_script_buf}, -}; - -use serai_client::{ - primitives::{Coin, Amount}, - networks::bitcoin::Address, -}; - -use serai_db::Db; -use primitives::{OutputType, ReceivedOutput, Payment}; -use scanner::{KeyFor, AddressFor, OutputFor, BlockFor}; -use utxo_scheduler::{PlannedTransaction, TransactionPlanner}; - -use crate::{ - scan::{offsets_for_key, scanner}, - output::Output, - transaction::{SignableTransaction, Eventuality}, - rpc::Rpc, -}; - -fn address_from_serai_key(key: ::G, kind: OutputType) -> Address { - let offset = ::G::GENERATOR * offsets_for_key(key)[&kind]; - Address::new( - p2tr_script_buf(key + offset) - .expect("creating address from Serai key which wasn't properly tweaked"), - ) - .expect("couldn't create Serai-representable address for P2TR script") -} - -fn signable_transaction( - fee_per_vbyte: u64, - inputs: Vec>, - payments: Vec>>, - change: Option>, -) -> Result<(SignableTransaction, BSignableTransaction), TransactionError> { - assert!( - inputs.len() < - >::MAX_INPUTS - ); - assert!( - (payments.len() + usize::from(u8::from(change.is_some()))) < - >::MAX_OUTPUTS - ); - - let inputs = inputs.into_iter().map(|input| input.output).collect::>(); - - let mut payments = payments - .into_iter() - .map(|payment| { - (payment.address().clone(), { - let balance = payment.balance(); - assert_eq!(balance.coin, Coin::Monero); - balance.amount.0 - }) - }) - .collect::>(); - /* - Push a payment to a key with a known private key which anyone can spend. If this transaction - gets stuck, this lets anyone create a child transaction spending this output, raising the fee, - getting the transaction unstuck (via CPFP). 
- */ - payments.push(( - // The generator is even so this is valid - Address::new(p2tr_script_buf(::G::GENERATOR).unwrap()).unwrap(), - // This uses the minimum output value allowed, as defined as a constant in bitcoin-serai - // TODO: Add a test for this comparing to bitcoin's `minimal_non_dust` - bitcoin_serai::wallet::DUST, - )); - - let change = change - .map(>::change_address); - - BSignableTransaction::new( - inputs.clone(), - &payments - .iter() - .cloned() - .map(|(address, amount)| (ScriptBuf::from(address), amount)) - .collect::>(), - change.clone().map(ScriptBuf::from), - None, - fee_per_vbyte, - ) - .map(|bst| (SignableTransaction { inputs, payments, change, fee_per_vbyte }, bst)) -} - -pub(crate) struct Planner; -impl TransactionPlanner for Planner { - type EphemeralError = RpcError; - - type FeeRate = u64; - - type SignableTransaction = SignableTransaction; - - /* - Monero has a max weight of 400,000 (MAX_STANDARD_TX_WEIGHT). - - A non-SegWit TX will have 4 weight units per byte, leaving a max size of 100,000 bytes. While - our inputs are entirely SegWit, such fine tuning is not necessary and could create issues in - the future (if the size decreases or we misevaluate it). It also offers a minimal amount of - benefit when we are able to logarithmically accumulate inputs/fulfill payments. - - For 128-byte inputs (36-byte output specification, 64-byte signature, whatever overhead) and - 64-byte outputs (40-byte script, 8-byte amount, whatever overhead), they together take up 192 - bytes. - - 100,000 / 192 = 520 - 520 * 192 leaves 160 bytes of overhead for the transaction structure itself. - */ - const MAX_INPUTS: usize = 520; - // We always reserve one output to create an anyone-can-spend output enabling anyone to use CPFP - // to unstick any transactions which had too low of a fee. - const MAX_OUTPUTS: usize = 519; - - fn fee_rate(block: &BlockFor, coin: Coin) -> Self::FeeRate { - assert_eq!(coin, Coin::Monero); - // TODO - 1 - } - - fn branch_address(key: KeyFor) -> AddressFor { - address_from_serai_key(key, OutputType::Branch) - } - fn change_address(key: KeyFor) -> AddressFor { - address_from_serai_key(key, OutputType::Change) - } - fn forwarding_address(key: KeyFor) -> AddressFor { - address_from_serai_key(key, OutputType::Forwarded) - } - - fn calculate_fee( - fee_rate: Self::FeeRate, - inputs: Vec>, - payments: Vec>>, - change: Option>, - ) -> Amount { - match signable_transaction::(fee_rate, inputs, payments, change) { - Ok(tx) => Amount(tx.1.needed_fee()), - Err( - TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment, - ) => panic!("malformed arguments to calculate_fee"), - // No data, we have a minimum fee rate, we checked the amount of inputs/outputs - Err( - TransactionError::TooMuchData | - TransactionError::TooLowFee | - TransactionError::TooLargeTransaction, - ) => unreachable!(), - Err(TransactionError::NotEnoughFunds { fee, .. 
}) => Amount(fee), - } - } - - fn plan( - fee_rate: Self::FeeRate, - inputs: Vec>, - payments: Vec>>, - change: Option>, - ) -> PlannedTransaction { - let key = inputs.first().unwrap().key(); - for input in &inputs { - assert_eq!(key, input.key()); - } - let singular_spent_output = (inputs.len() == 1).then(|| inputs[0].id()); - match signable_transaction::(fee_rate, inputs.clone(), payments, change) { - Ok(tx) => PlannedTransaction { - signable: tx.0, - eventuality: Eventuality { txid: tx.1.txid(), singular_spent_output }, - auxilliary: (), - }, - Err( - TransactionError::NoInputs | TransactionError::NoOutputs | TransactionError::DustPayment, - ) => panic!("malformed arguments to plan"), - // No data, we have a minimum fee rate, we checked the amount of inputs/outputs - Err( - TransactionError::TooMuchData | - TransactionError::TooLowFee | - TransactionError::TooLargeTransaction, - ) => unreachable!(), - Err(TransactionError::NotEnoughFunds { .. }) => { - panic!("plan called for a transaction without enough funds") - } + + async move { + Ok(match signable_transaction(&self.0, reference_block, inputs, payments, change).await? { + Ok(tx) => { + let id = tx.0.id; + PlannedTransaction { + signable: tx.0, + eventuality: Eventuality { + id, + singular_spent_output, + eventuality: MEventuality::from(tx.1), + }, + auxilliary: (), + } + } + Err(SendError::NotEnoughFunds { .. }) => panic!("failed to successfully amortize the fee"), + Err(SendError::UnsupportedRctType) => { + panic!("tried to use an RctType monero-wallet doesn't support") + } + Err(SendError::NoInputs | SendError::NoOutputs | SendError::TooManyOutputs) => { + panic!("malformed plan passed to calculate_fee") + } + Err(SendError::InvalidDecoyQuantity) => panic!("selected the wrong amount of decoys"), + Err(SendError::NoChange) => { + panic!("didn't add a dummy payment to satisfy the 2-output minimum") + } + Err(SendError::MultiplePaymentIds) => { + panic!("included multiple payment IDs despite not supporting addresses with payment IDs") + } + Err(SendError::TooMuchArbitraryData) => { + panic!("included too much arbitrary data despite not including any") + } + Err(SendError::TooLargeTransaction) => { + panic!("too large transaction despite MAX_INPUTS/MAX_OUTPUTS") + } + Err( + SendError::WrongPrivateKey | + SendError::MaliciousSerialization | + SendError::ClsagError(_) | + SendError::FrostError(_), + ) => unreachable!("signing/serialization error when not signing/serializing"), + }) } } } pub(crate) type Scheduler = utxo_standard_scheduler::Scheduler; -*/ diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs index 00b2d10f..c01baf02 100644 --- a/processor/scheduler/utxo/primitives/src/lib.rs +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -4,7 +4,7 @@ use core::{fmt::Debug, future::Future}; -use serai_primitives::{Coin, Amount}; +use serai_primitives::Amount; use primitives::{ReceivedOutput, Payment}; use scanner::{ScannerFeed, KeyFor, AddressFor, OutputFor, EventualityFor, BlockFor}; @@ -48,9 +48,6 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// resolve manual intervention/changing the arguments. type EphemeralError: Debug; - /// The type representing a fee rate to use for transactions. - type FeeRate: Send + Clone + Copy; - /// The type representing a signable transaction. 
type SignableTransaction: SignableTransaction; @@ -59,11 +56,6 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// The maximum amount of outputs allowed in a transaction, including the change output. const MAX_OUTPUTS: usize; - /// Obtain the fee rate to pay. - /// - /// This must be constant to the block and coin. - fn fee_rate(block: &BlockFor, coin: Coin) -> Self::FeeRate; - /// The branch address for this key of Serai's. fn branch_address(key: KeyFor) -> AddressFor; /// The change address for this key of Serai's. @@ -76,11 +68,12 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// The fee rate, inputs, and payments, will all be for the same coin. The returned fee is /// denominated in this coin. fn calculate_fee( - fee_rate: Self::FeeRate, + &self, + reference_block: &BlockFor, inputs: Vec>, payments: Vec>>, change: Option>, - ) -> Amount; + ) -> impl Send + Future>; /// Plan a transaction. /// @@ -91,7 +84,7 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// output must be created. fn plan( &self, - fee_rate: Self::FeeRate, + reference_block: &BlockFor, inputs: Vec>, payments: Vec>>, change: Option>, @@ -112,7 +105,7 @@ pub trait TransactionPlanner: 'static + Send + Sync { fn plan_transaction_with_fee_amortization( &self, operating_costs: &mut u64, - fee_rate: Self::FeeRate, + reference_block: &BlockFor, inputs: Vec>, mut payments: Vec>>, mut change: Option>, @@ -156,7 +149,8 @@ pub trait TransactionPlanner: 'static + Send + Sync { // Sort payments from high amount to low amount payments.sort_by(|a, b| a.balance().amount.0.cmp(&b.balance().amount.0).reverse()); - let mut fee = Self::calculate_fee(fee_rate, inputs.clone(), payments.clone(), change).0; + let mut fee = + self.calculate_fee(reference_block, inputs.clone(), payments.clone(), change).await?.0; let mut amortized = 0; while !payments.is_empty() { // We need to pay the fee, and any accrued operating costs, minus what we've already @@ -176,7 +170,10 @@ pub trait TransactionPlanner: 'static + Send + Sync { if payments.last().unwrap().balance().amount.0 <= (per_payment_fee + S::dust(coin).0) { amortized += payments.pop().unwrap().balance().amount.0; // Recalculate the fee and try again - fee = Self::calculate_fee(fee_rate, inputs.clone(), payments.clone(), change).0; + fee = self + .calculate_fee(reference_block, inputs.clone(), payments.clone(), change) + .await? 
+ .0; continue; } // Break since all of these payments shouldn't be dropped @@ -237,7 +234,7 @@ pub trait TransactionPlanner: 'static + Send + Sync { let has_change = change.is_some(); let PlannedTransaction { signable, eventuality, auxilliary } = - self.plan(fee_rate, inputs, payments, change).await?; + self.plan(reference_block, inputs, payments, change).await?; Ok(Some(AmortizePlannedTransaction { effected_payments, has_change, diff --git a/processor/scheduler/utxo/standard/src/lib.rs b/processor/scheduler/utxo/standard/src/lib.rs index 5ff786a7..208ae8a0 100644 --- a/processor/scheduler/utxo/standard/src/lib.rs +++ b/processor/scheduler/utxo/standard/src/lib.rs @@ -56,7 +56,7 @@ impl> Scheduler { .planner .plan_transaction_with_fee_amortization( &mut operating_costs, - P::fee_rate(block, coin), + block, to_aggregate, vec![], Some(key_for_change), @@ -176,7 +176,7 @@ impl> Scheduler { .plan_transaction_with_fee_amortization( // Uses 0 as there's no operating costs to incur/amortize here &mut 0, - P::fee_rate(block, coin), + block, vec![output], payments, None, @@ -254,7 +254,7 @@ impl> Scheduler { .planner .plan_transaction_with_fee_amortization( &mut operating_costs, - P::fee_rate(block, coin), + block, outputs.clone(), tree[0] .payments::(coin, &branch_address, tree[0].value()) @@ -327,7 +327,7 @@ impl> Scheduler { .planner .plan_transaction_with_fee_amortization( &mut operating_costs, - P::fee_rate(block, coin), + block, outputs, vec![], Some(to), @@ -487,7 +487,7 @@ impl> SchedulerTrait for Schedul // This uses 0 for the operating costs as we don't incur any here // If the output can't pay for itself to be forwarded, we simply drop it &mut 0, - P::fee_rate(block, forward.balance().coin), + block, vec![forward.clone()], vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance(), None)], None, @@ -508,7 +508,7 @@ impl> SchedulerTrait for Schedul // This uses 0 for the operating costs as we don't incur any here // If the output can't pay for itself to be returned, we simply drop it &mut 0, - P::fee_rate(block, out_instruction.balance().coin), + block, vec![to_return.output().clone()], vec![out_instruction], None, diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index cb0a8b15..961c6fcb 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -86,7 +86,7 @@ impl>> Sched .planner .plan_transaction_with_fee_amortization( &mut operating_costs, - P::fee_rate(block, coin), + block, to_aggregate, vec![], Some(key_for_change), @@ -229,7 +229,7 @@ impl>> Sched .planner .plan_transaction_with_fee_amortization( &mut operating_costs, - P::fee_rate(block, coin), + block, outputs.clone(), tree[0] .payments::(coin, &branch_address, tree[0].value()) @@ -323,7 +323,7 @@ impl>> Sched .plan_transaction_with_fee_amortization( // Uses 0 as there's no operating costs to incur/amortize here &mut 0, - P::fee_rate(block, coin), + block, vec![branch_output], payments, None, @@ -379,7 +379,7 @@ impl>> Sched .planner .plan_transaction_with_fee_amortization( &mut operating_costs, - P::fee_rate(block, coin), + block, outputs, vec![], Some(to), @@ -505,7 +505,7 @@ impl>> Sched // This uses 0 for the operating costs as we don't incur any here // If the output can't pay for itself to be forwarded, we simply drop it &mut 0, - P::fee_rate(block, forward.balance().coin), + block, vec![forward.clone()], 
vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance(), None)], None, @@ -526,7 +526,7 @@ impl>> Sched // This uses 0 for the operating costs as we don't incur any here // If the output can't pay for itself to be returned, we simply drop it &mut 0, - P::fee_rate(block, out_instruction.balance().coin), + block, vec![to_return.output().clone()], vec![out_instruction], None, From 72a18bf8bb1de80360dc4b69d6dcf05b3e325c0f Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 14 Sep 2024 05:20:02 -0400 Subject: [PATCH 127/368] Smart Contract Scheduler --- .github/workflows/tests.yml | 1 + Cargo.lock | 33 ++- Cargo.toml | 1 + deny.toml | 1 + processor/ethereum/Cargo.toml | 31 ++- processor/scheduler/smart-contract/Cargo.toml | 34 +++ processor/scheduler/smart-contract/LICENSE | 15 ++ processor/scheduler/smart-contract/README.md | 3 + processor/scheduler/smart-contract/src/lib.rs | 136 ++++++++++++ processor/scheduler/utxo/standard/src/lib.rs | 2 +- .../utxo/transaction-chaining/src/lib.rs | 2 +- .../src/multisigs/scheduler/smart_contract.rs | 208 ------------------ 12 files changed, 241 insertions(+), 226 deletions(-) create mode 100644 processor/scheduler/smart-contract/Cargo.toml create mode 100644 processor/scheduler/smart-contract/LICENSE create mode 100644 processor/scheduler/smart-contract/README.md create mode 100644 processor/scheduler/smart-contract/src/lib.rs delete mode 100644 processor/src/multisigs/scheduler/smart_contract.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 8bf4084d..e1c54349 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -48,6 +48,7 @@ jobs: -p serai-processor-utxo-scheduler-primitives \ -p serai-processor-utxo-scheduler \ -p serai-processor-transaction-chaining-scheduler \ + -p serai-processor-smart-contract-scheduler \ -p serai-processor-signers \ -p serai-processor-bin \ -p serai-bitcoin-processor \ diff --git a/Cargo.lock b/Cargo.lock index c3e39a09..147cc295 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8350,18 +8350,25 @@ name = "serai-ethereum-processor" version = "0.1.0" dependencies = [ "borsh", - "const-hex", - "env_logger", + "ciphersuite", + "dkg", "ethereum-serai", + "flexible-transcript", "hex", "k256", "log", + "modular-frost", "parity-scale-codec", + "rand_core", + "serai-client", "serai-db", - "serai-env", - "serai-message-queue", - "serai-processor-messages", - "serde_json", + "serai-processor-bin", + "serai-processor-key-gen", + "serai-processor-primitives", + "serai-processor-scanner", + "serai-processor-scheduler-primitives", + "serai-processor-signers", + "serai-processor-smart-contract-scheduler", "tokio", "zalloc", ] @@ -8781,6 +8788,20 @@ dependencies = [ "zeroize", ] +[[package]] +name = "serai-processor-smart-contract-scheduler" +version = "0.1.0" +dependencies = [ + "borsh", + "group", + "parity-scale-codec", + "serai-db", + "serai-primitives", + "serai-processor-primitives", + "serai-processor-scanner", + "serai-processor-scheduler-primitives", +] + [[package]] name = "serai-processor-tests" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index b35b3318..adaa63db 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -81,6 +81,7 @@ members = [ "processor/scheduler/utxo/primitives", "processor/scheduler/utxo/standard", "processor/scheduler/utxo/transaction-chaining", + "processor/scheduler/smart-contract", "processor/signers", "processor/bin", diff --git a/deny.toml b/deny.toml index ef195411..0e013f5e 100644 --- a/deny.toml +++ b/deny.toml @@ -55,6 +55,7 @@ 
exceptions = [ { allow = ["AGPL-3.0"], name = "serai-processor-utxo-scheduler-primitives" }, { allow = ["AGPL-3.0"], name = "serai-processor-standard-scheduler" }, { allow = ["AGPL-3.0"], name = "serai-processor-transaction-chaining-scheduler" }, + { allow = ["AGPL-3.0"], name = "serai-processor-smart-contract-scheduler" }, { allow = ["AGPL-3.0"], name = "serai-processor-signers" }, { allow = ["AGPL-3.0"], name = "serai-bitcoin-processor" }, diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml index ea65d570..ede9c71b 100644 --- a/processor/ethereum/Cargo.toml +++ b/processor/ethereum/Cargo.toml @@ -17,27 +17,38 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -const-hex = { version = "1", default-features = false } +rand_core = { version = "0.6", default-features = false } + hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } -serde_json = { version = "1", default-features = false, features = ["std"] } + +transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } +ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "secp256k1"] } +dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-secp256k1"] } +frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } k256 = { version = "^0.13.1", default-features = false, features = ["std"] } ethereum-serai = { path = "../../networks/ethereum", default-features = false, optional = true } -log = { version = "0.4", default-features = false, features = ["std"] } -env_logger = { version = "0.10", default-features = false, features = ["humantime"] } -tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } +serai-client = { path = "../../substrate/client", default-features = false, features = ["bitcoin"] } zalloc = { path = "../../common/zalloc" } +log = { version = "0.4", default-features = false, features = ["std"] } +tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } + serai-db = { path = "../../common/db" } -serai-env = { path = "../../common/env" } -messages = { package = "serai-processor-messages", path = "../messages" } +key-gen = { package = "serai-processor-key-gen", path = "../key-gen" } -message-queue = { package = "serai-message-queue", path = "../../message-queue" } +primitives = { package = "serai-processor-primitives", path = "../primitives" } +scheduler = { package = "serai-processor-scheduler-primitives", path = "../scheduler/primitives" } +scanner = { package = "serai-processor-scanner", path = "../scanner" } +smart-contract-scheduler = { package = "serai-processor-smart-contract-scheduler", path = "../scheduler/smart-contract" } +signers = { package = "serai-processor-signers", path = "../signers" } + +bin = { package = "serai-processor-bin", path = "../bin" } [features] -parity-db = ["serai-db/parity-db"] -rocksdb = ["serai-db/rocksdb"] +parity-db = ["bin/parity-db"] +rocksdb = ["bin/rocksdb"] diff --git a/processor/scheduler/smart-contract/Cargo.toml b/processor/scheduler/smart-contract/Cargo.toml new file mode 100644 index 00000000..69ce9840 --- /dev/null +++ 
b/processor/scheduler/smart-contract/Cargo.toml @@ -0,0 +1,34 @@ +[package] +name = "serai-processor-smart-contract-scheduler" +version = "0.1.0" +description = "Scheduler for a smart contract representing the Serai processor" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/scheduler/smart-contract" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[package.metadata.cargo-machete] +ignored = ["scale", "borsh"] + +[lints] +workspace = true + +[dependencies] +group = { version = "0.13", default-features = false } + +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +serai-primitives = { path = "../../../substrate/primitives", default-features = false, features = ["std"] } + +serai-db = { path = "../../../common/db" } + +primitives = { package = "serai-processor-primitives", path = "../../primitives" } +scanner = { package = "serai-processor-scanner", path = "../../scanner" } +scheduler-primitives = { package = "serai-processor-scheduler-primitives", path = "../primitives" } diff --git a/processor/scheduler/smart-contract/LICENSE b/processor/scheduler/smart-contract/LICENSE new file mode 100644 index 00000000..e091b149 --- /dev/null +++ b/processor/scheduler/smart-contract/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/processor/scheduler/smart-contract/README.md b/processor/scheduler/smart-contract/README.md new file mode 100644 index 00000000..0be94d20 --- /dev/null +++ b/processor/scheduler/smart-contract/README.md @@ -0,0 +1,3 @@ +# Smart Contract Scheduler + +A scheduler for a smart contract representing the Serai processor. diff --git a/processor/scheduler/smart-contract/src/lib.rs b/processor/scheduler/smart-contract/src/lib.rs new file mode 100644 index 00000000..091ffe6a --- /dev/null +++ b/processor/scheduler/smart-contract/src/lib.rs @@ -0,0 +1,136 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +use core::{marker::PhantomData, future::Future}; +use std::collections::HashMap; + +use group::GroupEncoding; + +use serai_db::{Get, DbTxn, create_db}; + +use primitives::{ReceivedOutput, Payment}; +use scanner::{ + LifetimeStage, ScannerFeed, KeyFor, AddressFor, EventualityFor, BlockFor, SchedulerUpdate, + KeyScopedEventualities, Scheduler as SchedulerTrait, +}; +use scheduler_primitives::*; + +create_db! { + SmartContractScheduler { + NextNonce: () -> u64, + } +} + +/// A smart contract. +pub trait SmartContract: 'static + Send { + /// The type representing a signable transaction. + type SignableTransaction: SignableTransaction; + + /// Rotate from the retiring key to the new key. 
+ fn rotate( + nonce: u64, + retiring_key: KeyFor, + new_key: KeyFor, + ) -> (Self::SignableTransaction, EventualityFor); + /// Fulfill the set of payments, dropping any not worth handling. + fn fulfill( + starting_nonce: u64, + payments: Vec>>, + ) -> Vec<(Self::SignableTransaction, EventualityFor)>; +} + +/// A scheduler for a smart contract representing the Serai processor. +#[allow(non_snake_case)] +#[derive(Clone, Default)] +pub struct Scheduler> { + _S: PhantomData, + _SC: PhantomData, +} + +fn fulfill_payments>( + txn: &mut impl DbTxn, + active_keys: &[(KeyFor, LifetimeStage)], + payments: Vec>>, +) -> KeyScopedEventualities { + let key = match active_keys[0].1 { + LifetimeStage::ActiveYetNotReporting | + LifetimeStage::Active | + LifetimeStage::UsingNewForChange => active_keys[0].0, + LifetimeStage::Forwarding | LifetimeStage::Finishing => active_keys[1].0, + }; + + let mut nonce = NextNonce::get(txn).unwrap_or(0); + let mut eventualities = Vec::with_capacity(1); + for (signable, eventuality) in SC::fulfill(nonce, payments) { + TransactionsToSign::::send(txn, &key, &signable); + nonce += 1; + eventualities.push(eventuality); + } + NextNonce::set(txn, &nonce); + HashMap::from([(key.to_bytes().as_ref().to_vec(), eventualities)]) +} + +impl> SchedulerTrait for Scheduler { + type EphemeralError = (); + type SignableTransaction = SC::SignableTransaction; + + fn activate_key(_txn: &mut impl DbTxn, _key: KeyFor) {} + + fn flush_key( + &self, + txn: &mut impl DbTxn, + _block: &BlockFor, + retiring_key: KeyFor, + new_key: KeyFor, + ) -> impl Send + Future, Self::EphemeralError>> { + async move { + let nonce = NextNonce::get(txn).unwrap_or(0); + let (signable, eventuality) = SC::rotate(nonce, retiring_key, new_key); + NextNonce::set(txn, &(nonce + 1)); + TransactionsToSign::::send(txn, &retiring_key, &signable); + Ok(HashMap::from([(retiring_key.to_bytes().as_ref().to_vec(), vec![eventuality])])) + } + } + + fn retire_key(_txn: &mut impl DbTxn, _key: KeyFor) {} + + fn update( + &self, + txn: &mut impl DbTxn, + _block: &BlockFor, + active_keys: &[(KeyFor, LifetimeStage)], + update: SchedulerUpdate, + ) -> impl Send + Future, Self::EphemeralError>> { + async move { + // We ignore the outputs as we don't need to know our current state as it never suffers + // partial availability + + // We shouldn't have any forwards though + assert!(update.forwards().is_empty()); + + // Create the transactions for the returns + Ok(fulfill_payments::( + txn, + active_keys, + update + .returns() + .iter() + .map(|to_return| { + Payment::new(to_return.address().clone(), to_return.output().balance(), None) + }) + .collect::>(), + )) + } + } + + fn fulfill( + &self, + txn: &mut impl DbTxn, + _block: &BlockFor, + active_keys: &[(KeyFor, LifetimeStage)], + payments: Vec>>, + ) -> impl Send + Future, Self::EphemeralError>> { + async move { Ok(fulfill_payments::(txn, active_keys, payments)) } + } +} diff --git a/processor/scheduler/utxo/standard/src/lib.rs b/processor/scheduler/utxo/standard/src/lib.rs index 208ae8a0..dc2ccb06 100644 --- a/processor/scheduler/utxo/standard/src/lib.rs +++ b/processor/scheduler/utxo/standard/src/lib.rs @@ -470,7 +470,7 @@ impl> SchedulerTrait for Schedul } } - // Create the transactions for the forwards/burns + // Create the transactions for the forwards/returns { let mut planned_txs = vec![]; for forward in update.forwards() { diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index 961c6fcb..93bdf1f3 100644 
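A toy implementor makes the contract above concrete: each (SignableTransaction, Eventuality) pair consumes one router nonce, and the scheduler persists the next unused nonce via NextNonce. This is a minimal sketch with stand-in types, not the processor's real KeyFor/Payment/EventualityFor aliases; note the methods are associated functions at this point in the series, with a later patch threading &self and the paying key through them.

// A minimal sketch, assuming stand-in types for the scanner's aliases.
struct Key([u8; 32]);
struct Payment {
  to: [u8; 20],
  amount: u64,
}
struct Signable {
  nonce: u64,
}
struct Eventuality {
  nonce: u64,
}

trait SmartContract {
  fn rotate(nonce: u64, retiring_key: Key, new_key: Key) -> (Signable, Eventuality);
  fn fulfill(starting_nonce: u64, payments: Vec<Payment>) -> Vec<(Signable, Eventuality)>;
}

struct Toy;
impl SmartContract for Toy {
  fn rotate(nonce: u64, _retiring_key: Key, _new_key: Key) -> (Signable, Eventuality) {
    // One transaction, one nonce; the caller persists nonce + 1 as NextNonce
    (Signable { nonce }, Eventuality { nonce })
  }
  fn fulfill(starting_nonce: u64, payments: Vec<Payment>) -> Vec<(Signable, Eventuality)> {
    // Batch all payments into a single transaction here; the scheduler advances
    // NextNonce by however many pairs are returned
    let _ = payments;
    vec![(Signable { nonce: starting_nonce }, Eventuality { nonce: starting_nonce })]
  }
}

Note how fulfill_payments also selects which key pays: active_keys[0] until that key reaches the Forwarding/Finishing lifetime stages, at which point its successor (active_keys[1]) takes over. Since every eventuality carries its nonce, resolution later keys off the nonce rather than a transaction hash.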
--- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -488,7 +488,7 @@ impl>> Sched } } - // Create the transactions for the forwards/burns + // Create the transactions for the forwards/returns { let mut planned_txs = vec![]; for forward in update.forwards() { diff --git a/processor/src/multisigs/scheduler/smart_contract.rs b/processor/src/multisigs/scheduler/smart_contract.rs deleted file mode 100644 index 3da8acf4..00000000 --- a/processor/src/multisigs/scheduler/smart_contract.rs +++ /dev/null @@ -1,208 +0,0 @@ -use std::{io, collections::HashSet}; - -use ciphersuite::{group::GroupEncoding, Ciphersuite}; - -use serai_client::primitives::{NetworkId, Coin, Balance}; - -use crate::{ - Get, DbTxn, Db, Payment, Plan, create_db, - networks::{Output, Network}, - multisigs::scheduler::{SchedulerAddendum, Scheduler as SchedulerTrait}, -}; - -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct Scheduler { - key: ::G, - coins: HashSet, - rotated: bool, -} - -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -pub enum Addendum { - Nonce(u64), - RotateTo { nonce: u64, new_key: ::G }, -} - -impl SchedulerAddendum for Addendum { - fn read(reader: &mut R) -> io::Result { - let mut kind = [0xff]; - reader.read_exact(&mut kind)?; - match kind[0] { - 0 => { - let mut nonce = [0; 8]; - reader.read_exact(&mut nonce)?; - Ok(Addendum::Nonce(u64::from_le_bytes(nonce))) - } - 1 => { - let mut nonce = [0; 8]; - reader.read_exact(&mut nonce)?; - let nonce = u64::from_le_bytes(nonce); - - let new_key = N::Curve::read_G(reader)?; - Ok(Addendum::RotateTo { nonce, new_key }) - } - _ => Err(io::Error::other("reading unknown Addendum type"))?, - } - } - fn write(&self, writer: &mut W) -> io::Result<()> { - match self { - Addendum::Nonce(nonce) => { - writer.write_all(&[0])?; - writer.write_all(&nonce.to_le_bytes()) - } - Addendum::RotateTo { nonce, new_key } => { - writer.write_all(&[1])?; - writer.write_all(&nonce.to_le_bytes())?; - writer.write_all(new_key.to_bytes().as_ref()) - } - } - } -} - -create_db! { - SchedulerDb { - LastNonce: () -> u64, - RotatedTo: (key: &[u8]) -> Vec, - } -} - -impl> SchedulerTrait for Scheduler { - type Addendum = Addendum; - - /// Check if this Scheduler is empty. - fn empty(&self) -> bool { - self.rotated - } - - /// Create a new Scheduler. - fn new( - _txn: &mut D::Transaction<'_>, - key: ::G, - network: NetworkId, - ) -> Self { - assert!(N::branch_address(key).is_none()); - assert!(N::change_address(key).is_none()); - assert!(N::forward_address(key).is_none()); - - Scheduler { key, coins: network.coins().iter().copied().collect(), rotated: false } - } - - /// Load a Scheduler from the DB. 
- fn from_db( - db: &D, - key: ::G, - network: NetworkId, - ) -> io::Result { - Ok(Scheduler { - key, - coins: network.coins().iter().copied().collect(), - rotated: RotatedTo::get(db, key.to_bytes().as_ref()).is_some(), - }) - } - - fn can_use_branch(&self, _balance: Balance) -> bool { - false - } - - fn schedule( - &mut self, - txn: &mut D::Transaction<'_>, - utxos: Vec, - payments: Vec>, - key_for_any_change: ::G, - force_spend: bool, - ) -> Vec> { - for utxo in utxos { - assert!(self.coins.contains(&utxo.balance().coin)); - } - - let mut nonce = LastNonce::get(txn).unwrap_or(1); - let mut plans = vec![]; - for chunk in payments.as_slice().chunks(N::MAX_OUTPUTS) { - // Once we rotate, all further payments should be scheduled via the new multisig - assert!(!self.rotated); - plans.push(Plan { - key: self.key, - inputs: vec![], - payments: chunk.to_vec(), - change: None, - scheduler_addendum: Addendum::Nonce(nonce), - }); - nonce += 1; - } - - // If we're supposed to rotate to the new key, create an empty Plan which will signify the key - // update - if force_spend && (!self.rotated) { - plans.push(Plan { - key: self.key, - inputs: vec![], - payments: vec![], - change: None, - scheduler_addendum: Addendum::RotateTo { nonce, new_key: key_for_any_change }, - }); - nonce += 1; - self.rotated = true; - RotatedTo::set( - txn, - self.key.to_bytes().as_ref(), - &key_for_any_change.to_bytes().as_ref().to_vec(), - ); - } - - LastNonce::set(txn, &nonce); - - plans - } - - fn consume_payments(&mut self, _txn: &mut D::Transaction<'_>) -> Vec> { - vec![] - } - - fn created_output( - &mut self, - _txn: &mut D::Transaction<'_>, - _expected: u64, - _actual: Option, - ) { - panic!("Smart Contract Scheduler created a Branch output") - } - - /// Refund a specific output. - fn refund_plan( - &mut self, - txn: &mut D::Transaction<'_>, - output: N::Output, - refund_to: N::Address, - ) -> Plan { - let current_key = RotatedTo::get(txn, self.key.to_bytes().as_ref()) - .and_then(|key_bytes| ::read_G(&mut key_bytes.as_slice()).ok()) - .unwrap_or(self.key); - - let nonce = LastNonce::get(txn).map_or(1, |nonce| nonce + 1); - LastNonce::set(txn, &(nonce + 1)); - Plan { - key: current_key, - inputs: vec![], - payments: vec![Payment { address: refund_to, data: None, balance: output.balance() }], - change: None, - scheduler_addendum: Addendum::Nonce(nonce), - } - } - - fn shim_forward_plan(_output: N::Output, _to: ::G) -> Option> { - None - } - - /// Forward a specific output to the new multisig. - /// - /// Returns None if no forwarding is necessary. - fn forward_plan( - &mut self, - _txn: &mut D::Transaction<'_>, - _output: N::Output, - _to: ::G, - ) -> Option> { - None - } -} From 7761798a78fdb04114a1e8a01a17aa9b05429de1 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 14 Sep 2024 07:54:18 -0400 Subject: [PATCH 128/368] Outline the Ethereum processor This was only half-finished to begin with, unfortunately... 
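The processor outlined here groups Ethereum blocks into 32-block Epochs and only reports fully-finalized epochs to the scanner. A minimal sketch of that arithmetic, restating `latest_finalized_block_number` (rpc.rs) and `Epoch::end` (block.rs) with illustrative names:

fn latest_full_epoch(finalized_block_number: u64) -> Option<u64> {
  // No epoch has completed until 32 blocks have finalized
  if finalized_block_number < 32 {
    return None;
  }
  // block / 32 counts completed epochs; subtract 1 to index the latest one
  Some((finalized_block_number / 32) - 1)
}

fn epoch_bounds(epoch: u64) -> (u64, u64) {
  let start = epoch * 32;
  // Matches Epoch::end(): start + 31 is the last block within the epoch
  (start, start + 31)
}

fn main() {
  assert_eq!(latest_full_epoch(31), None);
  // Blocks 0 ..= 31 form epoch 0
  assert_eq!(latest_full_epoch(32), Some(0));
  assert_eq!(latest_full_epoch(95), Some(1));
  assert_eq!(epoch_bounds(2), (64, 95));
}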
--- Cargo.lock | 8 +- networks/ethereum/src/crypto.rs | 15 +- processor/ethereum/Cargo.toml | 5 +- processor/ethereum/src/key_gen.rs | 25 + processor/ethereum/src/lib.rs | 467 +----------------- processor/ethereum/src/main.rs | 65 +++ processor/ethereum/src/primitives/block.rs | 71 +++ processor/ethereum/src/primitives/mod.rs | 3 + processor/ethereum/src/primitives/output.rs | 123 +++++ .../ethereum/src/primitives/transaction.rs | 117 +++++ processor/ethereum/src/publisher.rs | 60 +++ processor/ethereum/src/rpc.rs | 135 +++++ processor/ethereum/src/scheduler.rs | 90 ++++ processor/monero/Cargo.toml | 5 - processor/scheduler/smart-contract/Cargo.toml | 2 - processor/scheduler/smart-contract/src/lib.rs | 88 ++-- substrate/client/Cargo.toml | 1 + substrate/client/src/networks/ethereum.rs | 51 ++ substrate/client/src/networks/mod.rs | 3 + 19 files changed, 810 insertions(+), 524 deletions(-) create mode 100644 processor/ethereum/src/key_gen.rs create mode 100644 processor/ethereum/src/main.rs create mode 100644 processor/ethereum/src/primitives/block.rs create mode 100644 processor/ethereum/src/primitives/mod.rs create mode 100644 processor/ethereum/src/primitives/output.rs create mode 100644 processor/ethereum/src/primitives/transaction.rs create mode 100644 processor/ethereum/src/publisher.rs create mode 100644 processor/ethereum/src/rpc.rs create mode 100644 processor/ethereum/src/scheduler.rs create mode 100644 substrate/client/src/networks/ethereum.rs diff --git a/Cargo.lock b/Cargo.lock index 147cc295..e98a8f34 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8351,9 +8351,9 @@ version = "0.1.0" dependencies = [ "borsh", "ciphersuite", + "const-hex", "dkg", "ethereum-serai", - "flexible-transcript", "hex", "k256", "log", @@ -8362,6 +8362,7 @@ dependencies = [ "rand_core", "serai-client", "serai-db", + "serai-env", "serai-processor-bin", "serai-processor-key-gen", "serai-processor-primitives", @@ -8522,11 +8523,8 @@ version = "0.1.0" dependencies = [ "borsh", "ciphersuite", - "curve25519-dalek", "dalek-ff-group", "dkg", - "flexible-transcript", - "hex", "log", "modular-frost", "monero-simple-request-rpc", @@ -8535,7 +8533,6 @@ dependencies = [ "rand_chacha", "rand_core", "serai-client", - "serai-db", "serai-processor-bin", "serai-processor-key-gen", "serai-processor-primitives", @@ -8796,7 +8793,6 @@ dependencies = [ "group", "parity-scale-codec", "serai-db", - "serai-primitives", "serai-processor-primitives", "serai-processor-scanner", "serai-processor-scheduler-primitives", diff --git a/networks/ethereum/src/crypto.rs b/networks/ethereum/src/crypto.rs index 6ea6a0b0..326343d8 100644 --- a/networks/ethereum/src/crypto.rs +++ b/networks/ethereum/src/crypto.rs @@ -1,10 +1,12 @@ use group::ff::PrimeField; use k256::{ - elliptic_curve::{ops::Reduce, point::AffineCoordinates, sec1::ToEncodedPoint}, - ProjectivePoint, Scalar, U256 as KU256, + elliptic_curve::{ + ops::Reduce, + point::{AffineCoordinates, DecompressPoint}, + sec1::ToEncodedPoint, + }, + AffinePoint, ProjectivePoint, Scalar, U256 as KU256, }; -#[cfg(test)] -use k256::{elliptic_curve::point::DecompressPoint, AffinePoint}; use frost::{ algorithm::{Hram, SchnorrSignature}, @@ -99,12 +101,11 @@ impl PublicKey { self.A } - pub(crate) fn eth_repr(&self) -> [u8; 32] { + pub fn eth_repr(&self) -> [u8; 32] { self.px.to_repr().into() } - #[cfg(test)] - pub(crate) fn from_eth_repr(repr: [u8; 32]) -> Option { + pub fn from_eth_repr(repr: [u8; 32]) -> Option { #[allow(non_snake_case)] let A = Option::::from(AffinePoint::decompress(&repr.into(), 
0.into()))?.into(); Option::from(Scalar::from_repr(repr.into())).map(|px| PublicKey { A, px }) diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml index ede9c71b..12f56d72 100644 --- a/processor/ethereum/Cargo.toml +++ b/processor/ethereum/Cargo.toml @@ -19,11 +19,11 @@ workspace = true [dependencies] rand_core = { version = "0.6", default-features = false } +const-hex = { version = "1", default-features = false, features = ["std"] } hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } -transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "secp256k1"] } dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-secp256k1"] } frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } @@ -31,12 +31,13 @@ frost = { package = "modular-frost", path = "../../crypto/frost", default-featur k256 = { version = "^0.13.1", default-features = false, features = ["std"] } ethereum-serai = { path = "../../networks/ethereum", default-features = false, optional = true } -serai-client = { path = "../../substrate/client", default-features = false, features = ["bitcoin"] } +serai-client = { path = "../../substrate/client", default-features = false, features = ["ethereum"] } zalloc = { path = "../../common/zalloc" } log = { version = "0.4", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } +serai-env = { path = "../../common/env" } serai-db = { path = "../../common/db" } key-gen = { package = "serai-processor-key-gen", path = "../key-gen" } diff --git a/processor/ethereum/src/key_gen.rs b/processor/ethereum/src/key_gen.rs new file mode 100644 index 00000000..73b7c1e1 --- /dev/null +++ b/processor/ethereum/src/key_gen.rs @@ -0,0 +1,25 @@ +use ciphersuite::{Ciphersuite, Secp256k1}; +use dkg::ThresholdKeys; + +use ethereum_serai::crypto::PublicKey; + +pub(crate) struct KeyGenParams; +impl key_gen::KeyGenParams for KeyGenParams { + const ID: &'static str = "Ethereum"; + + type ExternalNetworkCiphersuite = Secp256k1; + + fn tweak_keys(keys: &mut ThresholdKeys) { + while PublicKey::new(keys.group_key()).is_none() { + *keys = keys.offset(::F::ONE); + } + } + + fn encode_key(key: ::G) -> Vec { + PublicKey::new(key).unwrap().eth_repr().to_vec() + } + + fn decode_key(key: &[u8]) -> Option<::G> { + PublicKey::from_eth_repr(key.try_into().ok()?).map(|key| key.point()) + } +} diff --git a/processor/ethereum/src/lib.rs b/processor/ethereum/src/lib.rs index 99d04203..a8f55c79 100644 --- a/processor/ethereum/src/lib.rs +++ b/processor/ethereum/src/lib.rs @@ -1,3 +1,4 @@ +/* #![cfg_attr(docsrs, feature(doc_auto_cfg))] #![doc = include_str!("../README.md")] #![deny(missing_docs)] @@ -59,240 +60,6 @@ use crate::{ }, }; -#[cfg(not(test))] -const DAI: [u8; 20] = - match const_hex::const_decode_to_array(b"0x6B175474E89094C44Da98b954EedeAC495271d0F") { - Ok(res) => res, - Err(_) => panic!("invalid non-test DAI hex address"), - }; -#[cfg(test)] // TODO -const DAI: [u8; 20] = - match 
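The `tweak_keys` loop in the new key_gen.rs (identical to the one being deleted from lib.rs below) exists because ethereum-serai's `PublicKey` rejects keys whose encoding the router can't handle, so key generation offsets the group key by the generator until `PublicKey::new` succeeds. A standalone sketch of the loop, assuming the even-y-coordinate requirement is the constraint being satisfied (the real `PublicKey::new` also constrains the x-coordinate's scalar encoding):

use k256::{
  elliptic_curve::{point::AffineCoordinates, Field},
  ProjectivePoint, Scalar,
};

// Offset the key by the generator until its affine y-coordinate is even,
// returning the usable key plus the offset applied. In the real code the
// offset is applied via ThresholdKeys::offset, so every signer's share still
// interpolates to the tweaked key.
fn tweak_until_even_y(mut key: ProjectivePoint) -> (ProjectivePoint, Scalar) {
  let mut offset = Scalar::ZERO;
  while bool::from(key.to_affine().y_is_odd()) {
    key += ProjectivePoint::GENERATOR;
    offset += Scalar::ONE;
  }
  (key, offset)
}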
const_hex::const_decode_to_array(b"0000000000000000000000000000000000000000") { - Ok(res) => res, - Err(_) => panic!("invalid test DAI hex address"), - }; - -fn coin_to_serai_coin(coin: &EthereumCoin) -> Option { - match coin { - EthereumCoin::Ether => Some(Coin::Ether), - EthereumCoin::Erc20(token) => { - if *token == DAI { - return Some(Coin::Dai); - } - None - } - } -} - -fn amount_to_serai_amount(coin: Coin, amount: U256) -> Amount { - assert_eq!(coin.network(), NetworkId::Ethereum); - assert_eq!(coin.decimals(), 8); - // Remove 10 decimals so we go from 18 decimals to 8 decimals - let divisor = U256::from(10_000_000_000u64); - // This is valid up to 184b, which is assumed for the coins allowed - Amount(u64::try_from(amount / divisor).unwrap()) -} - -fn balance_to_ethereum_amount(balance: Balance) -> U256 { - assert_eq!(balance.coin.network(), NetworkId::Ethereum); - assert_eq!(balance.coin.decimals(), 8); - // Restore 10 decimals so we go from 8 decimals to 18 decimals - let factor = U256::from(10_000_000_000u64); - U256::from(balance.amount.0) * factor -} - -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -pub struct Address(pub [u8; 20]); -impl TryFrom> for Address { - type Error = (); - fn try_from(bytes: Vec) -> Result { - if bytes.len() != 20 { - Err(())?; - } - let mut res = [0; 20]; - res.copy_from_slice(&bytes); - Ok(Address(res)) - } -} -impl TryInto> for Address { - type Error = (); - fn try_into(self) -> Result, ()> { - Ok(self.0.to_vec()) - } -} - -impl fmt::Display for Address { - fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { - ethereum_serai::alloy::primitives::Address::from(self.0).fmt(f) - } -} - -impl SignableTransaction for RouterCommand { - fn fee(&self) -> u64 { - // Return a fee of 0 as we'll handle amortization on our end - 0 - } -} - -#[async_trait] -impl TransactionTrait> for Transaction { - type Id = [u8; 32]; - fn id(&self) -> Self::Id { - self.hash.0 - } - - #[cfg(test)] - async fn fee(&self, _network: &Ethereum) -> u64 { - // Return a fee of 0 as we'll handle amortization on our end - 0 - } -} - -// We use 32-block Epochs to represent blocks. -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -pub struct Epoch { - // The hash of the block which ended the prior Epoch. - prior_end_hash: [u8; 32], - // The first block number within this Epoch. - start: u64, - // The hash of the last block within this Epoch. - end_hash: [u8; 32], - // The monotonic time for this Epoch. - time: u64, -} - -impl Epoch { - fn end(&self) -> u64 { - self.start + 31 - } -} - -#[async_trait] -impl Block> for Epoch { - type Id = [u8; 32]; - fn id(&self) -> [u8; 32] { - self.end_hash - } - fn parent(&self) -> [u8; 32] { - self.prior_end_hash - } - async fn time(&self, _: &Ethereum) -> u64 { - self.time - } -} - -impl Output> for EthereumInInstruction { - type Id = [u8; 32]; - - fn kind(&self) -> OutputType { - OutputType::External - } - - fn id(&self) -> Self::Id { - let mut id = [0; 40]; - id[.. 32].copy_from_slice(&self.id.0); - id[32 ..].copy_from_slice(&self.id.1.to_le_bytes()); - *ethereum_serai::alloy::primitives::keccak256(id) - } - fn tx_id(&self) -> [u8; 32] { - self.id.0 - } - fn key(&self) -> ::G { - self.key_at_end_of_block - } - - fn presumed_origin(&self) -> Option
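The conversion helpers being deleted here return in the new output.rs and scheduler.rs, so the invariant they encode deserves a worked example: Serai amounts use 8 decimals while Ethereum natively uses 18, so values shift by 10^10 in each direction and anything below 10^10 wei truncates away. A sketch with u128 standing in for alloy's U256:

fn ether_wei_to_serai_units(wei: u128) -> u64 {
  // 18 decimals -> 8 decimals: drop 10 decimal places
  // u64 then holds up to ~1.8 * 10^11 whole coins, the "184b" noted above
  u64::try_from(wei / 10_000_000_000).expect("amount exceeded u64 after truncation")
}

fn serai_units_to_wei(units: u64) -> u128 {
  // 8 decimals -> 18 decimals: restore the 10 decimal places
  u128::from(units) * 10_000_000_000
}

fn main() {
  // 1 ETH = 10^18 wei maps to 10^8 Serai units
  assert_eq!(ether_wei_to_serai_units(1_000_000_000_000_000_000), 100_000_000);
  // Dust below 10^10 wei is lost in the round trip
  assert_eq!(serai_units_to_wei(ether_wei_to_serai_units(123)), 0);
}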
{ - Some(Address(self.from)) - } - - fn balance(&self) -> Balance { - let coin = coin_to_serai_coin(&self.coin).unwrap_or_else(|| { - panic!( - "requesting coin for an EthereumInInstruction with a coin {}", - "we don't handle. this never should have been yielded" - ) - }); - Balance { coin, amount: amount_to_serai_amount(coin, self.amount) } - } - fn data(&self) -> &[u8] { - &self.data - } - - fn write(&self, writer: &mut W) -> io::Result<()> { - EthereumInInstruction::write(self, writer) - } - fn read(reader: &mut R) -> io::Result { - EthereumInInstruction::read(reader) - } -} - -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct Claim { - signature: [u8; 64], -} -impl AsRef<[u8]> for Claim { - fn as_ref(&self) -> &[u8] { - &self.signature - } -} -impl AsMut<[u8]> for Claim { - fn as_mut(&mut self) -> &mut [u8] { - &mut self.signature - } -} -impl Default for Claim { - fn default() -> Self { - Self { signature: [0; 64] } - } -} -impl From<&Signature> for Claim { - fn from(sig: &Signature) -> Self { - Self { signature: sig.to_bytes() } - } -} - -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct Eventuality(PublicKey, RouterCommand); -impl EventualityTrait for Eventuality { - type Claim = Claim; - type Completion = SignedRouterCommand; - - fn lookup(&self) -> Vec { - match self.1 { - RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => { - nonce.as_le_bytes().to_vec() - } - } - } - - fn read(reader: &mut R) -> io::Result { - let point = Secp256k1::read_G(reader)?; - let command = RouterCommand::read(reader)?; - Ok(Eventuality( - PublicKey::new(point).ok_or(io::Error::other("unusable key within Eventuality"))?, - command, - )) - } - fn serialize(&self) -> Vec { - let mut res = vec![]; - res.extend(self.0.point().to_bytes().as_slice()); - self.1.write(&mut res).unwrap(); - res - } - - fn claim(completion: &Self::Completion) -> Self::Claim { - Claim::from(completion.signature()) - } - fn serialize_completion(completion: &Self::Completion) -> Vec { - let mut res = vec![]; - completion.write(&mut res).unwrap(); - res - } - fn read_completion(reader: &mut R) -> io::Result { - SignedRouterCommand::read(reader) - } -} - #[derive(Clone)] pub struct Ethereum { // This DB is solely used to access the first key generated, as needed to determine the Router's @@ -305,20 +72,6 @@ pub struct Ethereum { deployer: Deployer, router: Arc>>, } -impl PartialEq for Ethereum { - fn eq(&self, _other: &Ethereum) -> bool { - true - } -} -impl fmt::Debug for Ethereum { - fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { - fmt - .debug_struct("Ethereum") - .field("deployer", &self.deployer) - .field("router", &self.router) - .finish_non_exhaustive() - } -} impl Ethereum { pub async fn new(db: D, daemon_url: String, relayer_url: String) -> Self { let provider = Arc::new(RootProvider::new( @@ -384,110 +137,10 @@ impl Ethereum { #[async_trait] impl Network for Ethereum { - type Curve = Secp256k1; - - type Transaction = Transaction; - type Block = Epoch; - - type Output = EthereumInInstruction; - type SignableTransaction = RouterCommand; - type Eventuality = Eventuality; - type TransactionMachine = RouterCommandMachine; - - type Scheduler = Scheduler; - - type Address = Address; - - const NETWORK: NetworkId = NetworkId::Ethereum; - const ID: &'static str = "Ethereum"; - const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize = 32 * 12; - const CONFIRMATIONS: usize = 1; - const DUST: u64 = 0; // TODO const COST_TO_AGGREGATE: u64 = 0; - // TODO: usize::max, with a merkle tree in the router - 
const MAX_OUTPUTS: usize = 256; - - fn tweak_keys(keys: &mut ThresholdKeys) { - while PublicKey::new(keys.group_key()).is_none() { - *keys = keys.offset(::F::ONE); - } - } - - #[cfg(test)] - async fn external_address(&self, _key: ::G) -> Address { - Address(self.router().await.as_ref().unwrap().address()) - } - - fn branch_address(_key: ::G) -> Option
{ - None - } - - fn change_address(_key: ::G) -> Option
{ - None - } - - fn forward_address(_key: ::G) -> Option
{ - None - } - - async fn get_latest_block_number(&self) -> Result { - let actual_number = self - .provider - .get_block(BlockNumberOrTag::Finalized.into(), BlockTransactionsKind::Hashes) - .await - .map_err(|_| NetworkError::ConnectionError)? - .ok_or(NetworkError::ConnectionError)? - .header - .number; - // Error if there hasn't been a full epoch yet - if actual_number < 32 { - Err(NetworkError::ConnectionError)? - } - // If this is 33, the division will return 1, yet 1 is the epoch in progress - let latest_full_epoch = (actual_number / 32).saturating_sub(1); - Ok(latest_full_epoch.try_into().unwrap()) - } - - async fn get_block(&self, number: usize) -> Result { - let latest_finalized = self.get_latest_block_number().await?; - if number > latest_finalized { - Err(NetworkError::ConnectionError)? - } - - let start = number * 32; - let prior_end_hash = if start == 0 { - [0; 32] - } else { - self - .provider - .get_block(u64::try_from(start - 1).unwrap().into(), BlockTransactionsKind::Hashes) - .await - .ok() - .flatten() - .ok_or(NetworkError::ConnectionError)? - .header - .hash - .into() - }; - - let end_header = self - .provider - .get_block(u64::try_from(start + 31).unwrap().into(), BlockTransactionsKind::Hashes) - .await - .ok() - .flatten() - .ok_or(NetworkError::ConnectionError)? - .header; - - let end_hash = end_header.hash.into(); - let time = end_header.timestamp; - - Ok(Epoch { prior_end_hash, start: start.try_into().unwrap(), end_hash, time }) - } - async fn get_outputs( &self, block: &Self::Block, @@ -627,97 +280,6 @@ impl Network for Ethereum { res } - async fn needed_fee( - &self, - _block_number: usize, - inputs: &[Self::Output], - _payments: &[Payment], - _change: &Option, - ) -> Result, NetworkError> { - assert_eq!(inputs.len(), 0); - // Claim no fee is needed so we can perform amortization ourselves - Ok(Some(0)) - } - - async fn signable_transaction( - &self, - _block_number: usize, - _plan_id: &[u8; 32], - key: ::G, - inputs: &[Self::Output], - payments: &[Payment], - change: &Option, - scheduler_addendum: &>::Addendum, - ) -> Result, NetworkError> { - assert_eq!(inputs.len(), 0); - assert!(change.is_none()); - let chain_id = self.provider.get_chain_id().await.map_err(|_| NetworkError::ConnectionError)?; - - // TODO: Perform fee amortization (in scheduler? - // TODO: Make this function internal and have needed_fee properly return None as expected? - // TODO: signable_transaction is written as cannot return None if needed_fee returns Some - // TODO: Why can this return None at all if it isn't allowed to return None? - - let command = match scheduler_addendum { - Addendum::Nonce(nonce) => RouterCommand::Execute { - chain_id: U256::try_from(chain_id).unwrap(), - nonce: U256::try_from(*nonce).unwrap(), - outs: payments - .iter() - .filter_map(|payment| { - Some(OutInstruction { - target: if let Some(data) = payment.data.as_ref() { - // This introspects the Call serialization format, expecting the first 20 bytes to - // be the address - // This avoids wasting the 20-bytes allocated within address - let full_data = [payment.address.0.as_slice(), data].concat(); - let mut reader = full_data.as_slice(); - - let mut calls = vec![]; - while !reader.is_empty() { - calls.push(Call::read(&mut reader).ok()?) 
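The parsing loop deleted here survives, slightly reworked, in the new scheduler.rs: a payment's `data` encodes arbitrary router calls by prepending the payment's 20-byte destination address and parsing the whole buffer as consecutive `Call`s, asserting the first call targets that address. A sketch of the framing, with a deliberately toy `Call` serialization (the real format lives in ethereum-serai):

struct Call {
  to: [u8; 20],
  data: Vec<u8>,
}

impl Call {
  // Toy framing: 20-byte target, 1-byte length, then that many data bytes
  fn read(reader: &mut &[u8]) -> Option<Call> {
    let buf = *reader;
    if buf.len() < 21 {
      return None;
    }
    let mut to = [0; 20];
    to.copy_from_slice(&buf[.. 20]);
    let len = usize::from(buf[20]);
    if buf.len() < (21 + len) {
      return None;
    }
    let data = buf[21 .. (21 + len)].to_vec();
    *reader = &buf[(21 + len) ..];
    Some(Call { to, data })
  }
}

fn parse_calls(address: [u8; 20], payment_data: &[u8]) -> Option<Vec<Call>> {
  // Re-create the buffer the payment's address was serialized into
  let full_data = [address.as_slice(), payment_data].concat();
  let mut reader = full_data.as_slice();

  let mut calls = vec![];
  while !reader.is_empty() {
    calls.push(Call::read(&mut reader)?);
  }
  // The loop ran at least once, as `reader` began with the 20-byte address
  assert_eq!(calls[0].to, address);
  Some(calls)
}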
- } - // The above must have executed at least once since reader contains the address - assert_eq!(calls[0].to, payment.address.0); - - OutInstructionTarget::Calls(calls) - } else { - OutInstructionTarget::Direct(payment.address.0) - }, - value: { - assert_eq!(payment.balance.coin, Coin::Ether); // TODO - balance_to_ethereum_amount(payment.balance) - }, - }) - }) - .collect(), - }, - Addendum::RotateTo { nonce, new_key } => { - assert!(payments.is_empty()); - RouterCommand::UpdateSeraiKey { - chain_id: U256::try_from(chain_id).unwrap(), - nonce: U256::try_from(*nonce).unwrap(), - key: PublicKey::new(*new_key).expect("new key wasn't a valid ETH public key"), - } - } - }; - Ok(Some(( - command.clone(), - Eventuality(PublicKey::new(key).expect("key wasn't a valid ETH public key"), command), - ))) - } - - async fn attempt_sign( - &self, - keys: ThresholdKeys, - transaction: Self::SignableTransaction, - ) -> Result { - Ok( - RouterCommandMachine::new(keys, transaction) - .expect("keys weren't usable to sign router commands"), - ) - } - async fn publish_completion( &self, completion: &::Completion, @@ -725,32 +287,6 @@ impl Network for Ethereum { // Publish this to the dedicated TX server for a solver to actually publish #[cfg(not(test))] { - let mut msg = vec![]; - match completion.command() { - RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => { - msg.extend(&u32::try_from(nonce).unwrap().to_le_bytes()); - } - } - completion.write(&mut msg).unwrap(); - - let Ok(mut socket) = TcpStream::connect(&self.relayer_url).await else { - log::warn!("couldn't connect to the relayer server"); - Err(NetworkError::ConnectionError)? - }; - let Ok(()) = socket.write_all(&u32::try_from(msg.len()).unwrap().to_le_bytes()).await else { - log::warn!("couldn't send the message's len to the relayer server"); - Err(NetworkError::ConnectionError)? - }; - let Ok(()) = socket.write_all(&msg).await else { - log::warn!("couldn't write the message to the relayer server"); - Err(NetworkError::ConnectionError)? 
- }; - if socket.read_u8().await.ok() != Some(1) { - log::warn!("didn't get the ack from the relayer server"); - Err(NetworkError::ConnectionError)?; - } - - Ok(()) } // Publish this using a dummy account we fund with magic RPC commands @@ -938,3 +474,4 @@ impl Network for Ethereum { self.get_block(self.get_latest_block_number().await.unwrap()).await.unwrap() } } +*/ diff --git a/processor/ethereum/src/main.rs b/processor/ethereum/src/main.rs new file mode 100644 index 00000000..e4ec3701 --- /dev/null +++ b/processor/ethereum/src/main.rs @@ -0,0 +1,65 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +#[global_allocator] +static ALLOCATOR: zalloc::ZeroizingAlloc = + zalloc::ZeroizingAlloc(std::alloc::System); + +use std::sync::Arc; + +use ethereum_serai::alloy::{ + primitives::U256, + simple_request_transport::SimpleRequest, + rpc_client::ClientBuilder, + provider::{Provider, RootProvider}, +}; + +use serai_env as env; + +mod primitives; +pub(crate) use crate::primitives::*; + +mod key_gen; +use crate::key_gen::KeyGenParams; +mod rpc; +use rpc::Rpc; +mod scheduler; +use scheduler::{SmartContract, Scheduler}; +mod publisher; +use publisher::TransactionPublisher; + +#[tokio::main] +async fn main() { + let db = bin::init(); + let feed = { + let provider = Arc::new(RootProvider::new( + ClientBuilder::default().transport(SimpleRequest::new(bin::url()), true), + )); + Rpc { provider } + }; + let chain_id = loop { + match feed.provider.get_chain_id().await { + Ok(chain_id) => break U256::try_from(chain_id).unwrap(), + Err(e) => { + log::error!("couldn't connect to the Ethereum node for the chain ID: {e:?}"); + tokio::time::sleep(core::time::Duration::from_secs(5)).await; + } + } + }; + + bin::main_loop::<_, KeyGenParams, _>( + db, + feed.clone(), + Scheduler::new(SmartContract { chain_id }), + TransactionPublisher::new({ + let relayer_hostname = env::var("ETHEREUM_RELAYER_HOSTNAME") + .expect("ethereum relayer hostname wasn't specified") + .to_string(); + let relayer_port = + env::var("ETHEREUM_RELAYER_PORT").expect("ethereum relayer port wasn't specified"); + relayer_hostname + ":" + &relayer_port + }), + ) + .await; +} diff --git a/processor/ethereum/src/primitives/block.rs b/processor/ethereum/src/primitives/block.rs new file mode 100644 index 00000000..e947e851 --- /dev/null +++ b/processor/ethereum/src/primitives/block.rs @@ -0,0 +1,71 @@ +use std::collections::HashMap; + +use ciphersuite::{Ciphersuite, Secp256k1}; + +use serai_client::networks::ethereum::Address; + +use primitives::{ReceivedOutput, EventualityTracker}; +use crate::{output::Output, transaction::Eventuality}; + +// We interpret 32-block Epochs as singular blocks. +// There's no reason for further accuracy when these will all finalize at the same time. +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub(crate) struct Epoch { + // The hash of the block which ended the prior Epoch. + pub(crate) prior_end_hash: [u8; 32], + // The first block number within this Epoch. + pub(crate) start: u64, + // The hash of the last block within this Epoch. + pub(crate) end_hash: [u8; 32], + // The monotonic time for this Epoch. + pub(crate) time: u64, +} + +impl Epoch { + // The block number of the last block within this epoch. 
+ fn end(&self) -> u64 { + self.start + 31 + } +} + +impl primitives::BlockHeader for Epoch { + fn id(&self) -> [u8; 32] { + self.end_hash + } + fn parent(&self) -> [u8; 32] { + self.prior_end_hash + } +} + +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub(crate) struct FullEpoch { + epoch: Epoch, +} + +impl primitives::Block for FullEpoch { + type Header = Epoch; + + type Key = ::G; + type Address = Address; + type Output = Output; + type Eventuality = Eventuality; + + fn id(&self) -> [u8; 32] { + self.epoch.end_hash + } + + fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec { + todo!("TODO") + } + + #[allow(clippy::type_complexity)] + fn check_for_eventuality_resolutions( + &self, + eventualities: &mut EventualityTracker, + ) -> HashMap< + >::TransactionId, + Self::Eventuality, + > { + todo!("TODO") + } +} diff --git a/processor/ethereum/src/primitives/mod.rs b/processor/ethereum/src/primitives/mod.rs new file mode 100644 index 00000000..fba52dd9 --- /dev/null +++ b/processor/ethereum/src/primitives/mod.rs @@ -0,0 +1,3 @@ +pub(crate) mod output; +pub(crate) mod transaction; +pub(crate) mod block; diff --git a/processor/ethereum/src/primitives/output.rs b/processor/ethereum/src/primitives/output.rs new file mode 100644 index 00000000..fcafae75 --- /dev/null +++ b/processor/ethereum/src/primitives/output.rs @@ -0,0 +1,123 @@ +use std::io; + +use ciphersuite::{Ciphersuite, Secp256k1}; + +use ethereum_serai::{ + alloy::primitives::U256, + router::{Coin as EthereumCoin, InInstruction as EthereumInInstruction}, +}; + +use scale::{Encode, Decode}; +use borsh::{BorshSerialize, BorshDeserialize}; + +use serai_client::{ + primitives::{NetworkId, Coin, Amount, Balance}, + networks::ethereum::Address, +}; + +use primitives::{OutputType, ReceivedOutput}; + +#[cfg(not(test))] +const DAI: [u8; 20] = + match const_hex::const_decode_to_array(b"0x6B175474E89094C44Da98b954EedeAC495271d0F") { + Ok(res) => res, + Err(_) => panic!("invalid non-test DAI hex address"), + }; +#[cfg(test)] // TODO +const DAI: [u8; 20] = + match const_hex::const_decode_to_array(b"0000000000000000000000000000000000000000") { + Ok(res) => res, + Err(_) => panic!("invalid test DAI hex address"), + }; + +fn coin_to_serai_coin(coin: &EthereumCoin) -> Option { + match coin { + EthereumCoin::Ether => Some(Coin::Ether), + EthereumCoin::Erc20(token) => { + if *token == DAI { + return Some(Coin::Dai); + } + None + } + } +} + +fn amount_to_serai_amount(coin: Coin, amount: U256) -> Amount { + assert_eq!(coin.network(), NetworkId::Ethereum); + assert_eq!(coin.decimals(), 8); + // Remove 10 decimals so we go from 18 decimals to 8 decimals + let divisor = U256::from(10_000_000_000u64); + // This is valid up to 184b, which is assumed for the coins allowed + Amount(u64::try_from(amount / divisor).unwrap()) +} + +#[derive( + Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode, BorshSerialize, BorshDeserialize, +)] +pub(crate) struct OutputId(pub(crate) [u8; 40]); +impl Default for OutputId { + fn default() -> Self { + Self([0; 40]) + } +} +impl AsRef<[u8]> for OutputId { + fn as_ref(&self) -> &[u8] { + self.0.as_ref() + } +} +impl AsMut<[u8]> for OutputId { + fn as_mut(&mut self) -> &mut [u8] { + self.0.as_mut() + } +} + +#[derive(Clone, PartialEq, Eq, Debug)] +pub(crate) struct Output(pub(crate) EthereumInInstruction); +impl ReceivedOutput<::G, Address> for Output { + type Id = OutputId; + type TransactionId = [u8; 32]; + + // We only scan external outputs as we don't have branch/change/forwards + fn kind(&self) -> OutputType { + 
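+      // Branch, change, and forward outputs only arise on UTXO networks, where
+      // leftover value must be re-bound to one of our keys; the Router is an
+      // account-model contract, so everything scanned out of it is External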
OutputType::External + } + + fn id(&self) -> Self::Id { + let mut id = [0; 40]; + id[.. 32].copy_from_slice(&self.0.id.0); + id[32 ..].copy_from_slice(&self.0.id.1.to_le_bytes()); + OutputId(id) + } + + fn transaction_id(&self) -> Self::TransactionId { + self.0.id.0 + } + + fn key(&self) -> ::G { + self.0.key_at_end_of_block + } + + fn presumed_origin(&self) -> Option
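+  // The "presumed" origin is simply the transaction-level sender, a
+  // best-effort answer for where funds may later be returned should this
+  // instruction be unusable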
{
+    Some(Address::from(self.0.from))
+  }
+
+  fn balance(&self) -> Balance {
+    let coin = coin_to_serai_coin(&self.0.coin).unwrap_or_else(|| {
+      panic!(
+        "mapping coin from an EthereumInInstruction with coin {}, which we don't handle.",
+        "this never should have been yielded"
+      )
+    });
+    Balance { coin, amount: amount_to_serai_amount(coin, self.0.amount) }
+  }
+  fn data(&self) -> &[u8] {
+    &self.0.data
+  }
+
+  fn write(&self, writer: &mut W) -> io::Result<()> {
+    self.0.write(writer)
+  }
+  fn read(reader: &mut R) -> io::Result {
+    EthereumInInstruction::read(reader).map(Self)
+  }
+}
diff --git a/processor/ethereum/src/primitives/transaction.rs b/processor/ethereum/src/primitives/transaction.rs
new file mode 100644
index 00000000..908358ec
--- /dev/null
+++ b/processor/ethereum/src/primitives/transaction.rs
@@ -0,0 +1,117 @@
+use std::io;
+
+use rand_core::{RngCore, CryptoRng};
+
+use ciphersuite::{group::GroupEncoding, Ciphersuite, Secp256k1};
+use frost::{dkg::ThresholdKeys, sign::PreprocessMachine};
+
+use ethereum_serai::{crypto::PublicKey, machine::*};
+
+use crate::output::OutputId;
+
+#[derive(Clone, Debug)]
+pub(crate) struct Transaction(pub(crate) SignedRouterCommand);
+
+impl From for Transaction {
+  fn from(signed_router_command: SignedRouterCommand) -> Self {
+    Self(signed_router_command)
+  }
+}
+
+impl scheduler::Transaction for Transaction {
+  fn read(reader: &mut impl io::Read) -> io::Result {
+    SignedRouterCommand::read(reader).map(Self)
+  }
+  fn write(&self, writer: &mut impl io::Write) -> io::Result<()> {
+    self.0.write(writer)
+  }
+}
+
+#[derive(Clone, Debug)]
+pub(crate) struct SignableTransaction(pub(crate) RouterCommand);
+
+#[derive(Clone)]
+pub(crate) struct ClonableTransactionMachine(RouterCommand, ThresholdKeys);
+impl PreprocessMachine for ClonableTransactionMachine {
+  type Preprocess = ::Preprocess;
+  type Signature = ::Signature;
+  type SignMachine = ::SignMachine;
+
+  fn preprocess(
+    self,
+    rng: &mut R,
+  ) -> (Self::SignMachine, Self::Preprocess) {
+    // TODO: Use a proper error here, not an Option
+    RouterCommandMachine::new(self.1.clone(), self.0.clone()).unwrap().preprocess(rng)
+  }
+}
+
+impl scheduler::SignableTransaction for SignableTransaction {
+  type Transaction = Transaction;
+  type Ciphersuite = Secp256k1;
+  type PreprocessMachine = ClonableTransactionMachine;
+
+  fn read(reader: &mut impl io::Read) -> io::Result {
+    RouterCommand::read(reader).map(Self)
+  }
+  fn write(&self, writer: &mut impl io::Write) -> io::Result<()> {
+    self.0.write(writer)
+  }
+
+  fn id(&self) -> [u8; 32] {
+    let mut res = [0; 32];
+    // TODO: Add getter for the nonce
+    match self.0 {
+      RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => {
+        res[.. 8].copy_from_slice(&nonce.as_le_bytes());
+      }
+    }
+    res
+  }
+
+  fn sign(self, keys: ThresholdKeys) -> Self::PreprocessMachine {
+    ClonableTransactionMachine(self.0, keys)
+  }
+}
+
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub(crate) struct Eventuality(pub(crate) PublicKey, pub(crate) RouterCommand);
+
+impl primitives::Eventuality for Eventuality {
+  type OutputId = OutputId;
+
+  fn id(&self) -> [u8; 32] {
+    let mut res = [0; 32];
+    match self.1 {
+      RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => {
+        res[.. 8].copy_from_slice(&nonce.as_le_bytes());
+      }
+    }
+    res
+  }
+
+  fn lookup(&self) -> Vec {
+    match self.1 {
+      RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, ..
} => { + nonce.as_le_bytes().to_vec() + } + } + } + + fn singular_spent_output(&self) -> Option { + None + } + + fn read(reader: &mut impl io::Read) -> io::Result { + let point = Secp256k1::read_G(reader)?; + let command = RouterCommand::read(reader)?; + Ok(Eventuality( + PublicKey::new(point).ok_or(io::Error::other("unusable key within Eventuality"))?, + command, + )) + } + fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + writer.write_all(self.0.point().to_bytes().as_slice())?; + self.1.write(writer) + } +} diff --git a/processor/ethereum/src/publisher.rs b/processor/ethereum/src/publisher.rs new file mode 100644 index 00000000..ad8bd09d --- /dev/null +++ b/processor/ethereum/src/publisher.rs @@ -0,0 +1,60 @@ +use core::future::Future; + +use crate::transaction::Transaction; + +#[derive(Clone)] +pub(crate) struct TransactionPublisher { + relayer_url: String, +} + +impl TransactionPublisher { + pub(crate) fn new(relayer_url: String) -> Self { + Self { relayer_url } + } +} + +impl signers::TransactionPublisher for TransactionPublisher { + type EphemeralError = (); + + fn publish( + &self, + tx: Transaction, + ) -> impl Send + Future> { + async move { + /* + use tokio::{ + io::{AsyncReadExt, AsyncWriteExt}, + net::TcpStream, + }; + + let mut msg = vec![]; + match completion.command() { + RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => { + msg.extend(&u32::try_from(nonce).unwrap().to_le_bytes()); + } + } + completion.write(&mut msg).unwrap(); + + let Ok(mut socket) = TcpStream::connect(&self.relayer_url).await else { + log::warn!("couldn't connect to the relayer server"); + Err(NetworkError::ConnectionError)? + }; + let Ok(()) = socket.write_all(&u32::try_from(msg.len()).unwrap().to_le_bytes()).await else { + log::warn!("couldn't send the message's len to the relayer server"); + Err(NetworkError::ConnectionError)? + }; + let Ok(()) = socket.write_all(&msg).await else { + log::warn!("couldn't write the message to the relayer server"); + Err(NetworkError::ConnectionError)? + }; + if socket.read_u8().await.ok() != Some(1) { + log::warn!("didn't get the ack from the relayer server"); + Err(NetworkError::ConnectionError)?; + } + + Ok(()) + */ + todo!("TODO") + } + } +} diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs new file mode 100644 index 00000000..58b3933e --- /dev/null +++ b/processor/ethereum/src/rpc.rs @@ -0,0 +1,135 @@ +use core::future::Future; +use std::sync::Arc; + +use ethereum_serai::{ + alloy::{ + rpc_types::{BlockTransactionsKind, BlockNumberOrTag}, + simple_request_transport::SimpleRequest, + provider::{Provider, RootProvider}, + }, +}; + +use serai_client::primitives::{NetworkId, Coin, Amount}; + +use scanner::ScannerFeed; + +use crate::block::{Epoch, FullEpoch}; + +#[derive(Clone)] +pub(crate) struct Rpc { + pub(crate) provider: Arc>, +} + +impl ScannerFeed for Rpc { + const NETWORK: NetworkId = NetworkId::Ethereum; + + // We only need one confirmation as Ethereum properly finalizes + const CONFIRMATIONS: u64 = 1; + // The window length should be roughly an hour + const WINDOW_LENGTH: u64 = 10; + + const TEN_MINUTES: u64 = 2; + + type Block = FullEpoch; + + type EphemeralError = String; + + fn latest_finalized_block_number( + &self, + ) -> impl Send + Future> { + async move { + let actual_number = self + .provider + .get_block(BlockNumberOrTag::Finalized.into(), BlockTransactionsKind::Hashes) + .await + .map_err(|e| format!("couldn't get the latest finalized block: {e:?}"))? 
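The commented-out body preserved in publisher.rs above pins down the relayer's wire protocol, which the eventual implementation must speak: a 4-byte little-endian length prefix frames each message, the message itself is the command's 4-byte little-endian nonce followed by its serialization, and the server answers with a single 0x01 ack byte. A self-contained sketch of such a client (the function name and unit error are placeholders; the real publisher returns its EphemeralError):

use tokio::{
  io::{AsyncReadExt, AsyncWriteExt},
  net::TcpStream,
};

async fn relay_command(
  relayer_url: &str,
  nonce: u32,
  serialized_command: &[u8],
) -> Result<(), ()> {
  // Message: 4-byte LE nonce, then the serialized command
  let mut msg = Vec::with_capacity(4 + serialized_command.len());
  msg.extend(&nonce.to_le_bytes());
  msg.extend(serialized_command);

  let mut socket = TcpStream::connect(relayer_url).await.map_err(|_| ())?;
  // 4-byte LE length prefix, then the message itself
  socket.write_all(&u32::try_from(msg.len()).unwrap().to_le_bytes()).await.map_err(|_| ())?;
  socket.write_all(&msg).await.map_err(|_| ())?;
  // A single 0x01 byte acknowledges receipt
  if socket.read_u8().await.ok() != Some(1) {
    return Err(());
  }
  Ok(())
}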
+ .ok_or_else(|| "there was no finalized block".to_string())? + .header + .number; + // Error if there hasn't been a full epoch yet + if actual_number < 32 { + Err("there has not been a completed epoch yet".to_string())? + } + // The divison by 32 returns the amount of completed epochs + // Converting from amount of completed epochs to the latest completed epoch requires + // subtracting 1 + let latest_full_epoch = (actual_number / 32) - 1; + Ok(latest_full_epoch) + } + } + + fn time_of_block( + &self, + number: u64, + ) -> impl Send + Future> { + async move { todo!("TODO") } + } + + fn unchecked_block_header_by_number( + &self, + number: u64, + ) -> impl Send + + Future::Header, Self::EphemeralError>> + { + async move { + let start = number * 32; + let prior_end_hash = if start == 0 { + [0; 32] + } else { + self + .provider + .get_block((start - 1).into(), BlockTransactionsKind::Hashes) + .await + .map_err(|e| format!("couldn't get block: {e:?}"))? + .ok_or_else(|| { + format!("ethereum node didn't have requested block: {number:?}. did we reorg?") + })? + .header + .hash + .into() + }; + + let end_header = self + .provider + .get_block((start + 31).into(), BlockTransactionsKind::Hashes) + .await + .map_err(|e| format!("couldn't get block: {e:?}"))? + .ok_or_else(|| { + format!("ethereum node didn't have requested block: {number:?}. did we reorg?") + })? + .header; + + let end_hash = end_header.hash.into(); + let time = end_header.timestamp; + + Ok(Epoch { prior_end_hash, start, end_hash, time }) + } + } + + #[rustfmt::skip] // It wants to improperly format the `async move` to a single line + fn unchecked_block_by_number( + &self, + number: u64, + ) -> impl Send + Future> { + async move { + todo!("TODO") + } + } + + fn dust(coin: Coin) -> Amount { + assert_eq!(coin.network(), NetworkId::Ethereum); + todo!("TODO") + } + + fn cost_to_aggregate( + &self, + coin: Coin, + _reference_block: &Self::Block, + ) -> impl Send + Future> { + async move { + assert_eq!(coin.network(), NetworkId::Ethereum); + // TODO + Ok(Amount(0)) + } + } +} diff --git a/processor/ethereum/src/scheduler.rs b/processor/ethereum/src/scheduler.rs new file mode 100644 index 00000000..6e17ef70 --- /dev/null +++ b/processor/ethereum/src/scheduler.rs @@ -0,0 +1,90 @@ +use serai_client::primitives::{NetworkId, Balance}; + +use ethereum_serai::{alloy::primitives::U256, router::PublicKey, machine::*}; + +use primitives::Payment; +use scanner::{KeyFor, AddressFor, EventualityFor}; + +use crate::{ + transaction::{SignableTransaction, Eventuality}, + rpc::Rpc, +}; + +fn balance_to_ethereum_amount(balance: Balance) -> U256 { + assert_eq!(balance.coin.network(), NetworkId::Ethereum); + assert_eq!(balance.coin.decimals(), 8); + // Restore 10 decimals so we go from 8 decimals to 18 decimals + // TODO: Document the expectation all integrated coins have 18 decimals + let factor = U256::from(10_000_000_000u64); + U256::from(balance.amount.0) * factor +} + +#[derive(Clone)] +pub(crate) struct SmartContract { + pub(crate) chain_id: U256, +} +impl smart_contract_scheduler::SmartContract for SmartContract { + type SignableTransaction = SignableTransaction; + + fn rotate( + &self, + nonce: u64, + retiring_key: KeyFor, + new_key: KeyFor, + ) -> (Self::SignableTransaction, EventualityFor) { + let command = RouterCommand::UpdateSeraiKey { + chain_id: self.chain_id, + nonce: U256::try_from(nonce).unwrap(), + key: PublicKey::new(new_key).expect("rotating to an invald key"), + }; + ( + SignableTransaction(command.clone()), + 
Eventuality(PublicKey::new(retiring_key).expect("retiring an invalid key"), command), + ) + } + fn fulfill( + &self, + nonce: u64, + key: KeyFor, + payments: Vec>>, + ) -> Vec<(Self::SignableTransaction, EventualityFor)> { + let mut outs = Vec::with_capacity(payments.len()); + for payment in payments { + outs.push(OutInstruction { + target: if let Some(data) = payment.data() { + // This introspects the Call serialization format, expecting the first 20 bytes to + // be the address + // This avoids wasting the 20-bytes allocated within address + let full_data = [<[u8; 20]>::from(*payment.address()).as_slice(), data].concat(); + let mut reader = full_data.as_slice(); + + let mut calls = vec![]; + while !reader.is_empty() { + let Ok(call) = Call::read(&mut reader) else { break }; + calls.push(call); + } + // The above must have executed at least once since reader contains the address + assert_eq!(calls[0].to, <[u8; 20]>::from(*payment.address())); + + OutInstructionTarget::Calls(calls) + } else { + OutInstructionTarget::Direct((*payment.address()).into()) + }, + value: { balance_to_ethereum_amount(payment.balance()) }, + }); + } + + let command = RouterCommand::Execute { + chain_id: self.chain_id, + nonce: U256::try_from(nonce).unwrap(), + outs, + }; + + vec![( + SignableTransaction(command.clone()), + Eventuality(PublicKey::new(key).expect("fulfilling payments with an invalid key"), command), + )] + } +} + +pub(crate) type Scheduler = smart_contract_scheduler::Scheduler; diff --git a/processor/monero/Cargo.toml b/processor/monero/Cargo.toml index cc895eda..6ea49a0c 100644 --- a/processor/monero/Cargo.toml +++ b/processor/monero/Cargo.toml @@ -21,12 +21,9 @@ rand_core = { version = "0.6", default-features = false } rand_chacha = { version = "0.3", default-features = false, features = ["std"] } zeroize = { version = "1", default-features = false, features = ["std"] } -hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } -transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } -curve25519-dalek = { version = "4", default-features = false, features = ["alloc", "zeroize"] } dalek-ff-group = { path = "../../crypto/dalek-ff-group", default-features = false, features = ["std"] } ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "ed25519"] } dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-ed25519"] } @@ -41,8 +38,6 @@ zalloc = { path = "../../common/zalloc" } log = { version = "0.4", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } -serai-db = { path = "../../common/db" } - key-gen = { package = "serai-processor-key-gen", path = "../key-gen" } view-keys = { package = "serai-processor-view-keys", path = "../view-keys" } diff --git a/processor/scheduler/smart-contract/Cargo.toml b/processor/scheduler/smart-contract/Cargo.toml index 69ce9840..c43569fb 100644 --- a/processor/scheduler/smart-contract/Cargo.toml +++ b/processor/scheduler/smart-contract/Cargo.toml @@ -25,8 +25,6 @@ group = { version = "0.13", default-features = false } scale = { package = "parity-scale-codec", version = "3", default-features = 
false, features = ["std"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
 
-serai-primitives = { path = "../../../substrate/primitives", default-features = false, features = ["std"] }
-
 serai-db = { path = "../../../common/db" }
 
 primitives = { package = "serai-processor-primitives", path = "../../primitives" }
diff --git a/processor/scheduler/smart-contract/src/lib.rs b/processor/scheduler/smart-contract/src/lib.rs
index 091ffe6a..7630a026 100644
--- a/processor/scheduler/smart-contract/src/lib.rs
+++ b/processor/scheduler/smart-contract/src/lib.rs
@@ -29,49 +29,61 @@ pub trait SmartContract<S: ScannerFeed>: 'static + Send {
   /// Rotate from the retiring key to the new key.
   fn rotate(
+    &self,
     nonce: u64,
     retiring_key: KeyFor<S>,
     new_key: KeyFor<S>,
   ) -> (Self::SignableTransaction, EventualityFor<S>);
+
   /// Fulfill the set of payments, dropping any not worth handling.
   fn fulfill(
+    &self,
     starting_nonce: u64,
+    key: KeyFor<S>,
     payments: Vec<Payment<AddressFor<S>>>,
   ) -> Vec<(Self::SignableTransaction, EventualityFor<S>)>;
 }
 
 /// A scheduler for a smart contract representing the Serai processor.
 #[allow(non_snake_case)]
-#[derive(Clone, Default)]
-pub struct Scheduler<S: ScannerFeed, SC: SmartContract<S>> {
+#[derive(Clone)]
+pub struct Scheduler<S: ScannerFeed, SC: SmartContract<S>> {
+  smart_contract: SC,
   _S: PhantomData<S>,
-  _SC: PhantomData<SC>,
 }
 
-fn fulfill_payments<S: ScannerFeed, SC: SmartContract<S>>(
-  txn: &mut impl DbTxn,
-  active_keys: &[(KeyFor<S>, LifetimeStage)],
-  payments: Vec<Payment<AddressFor<S>>>,
-) -> KeyScopedEventualities<S> {
-  let key = match active_keys[0].1 {
-    LifetimeStage::ActiveYetNotReporting |
-    LifetimeStage::Active |
-    LifetimeStage::UsingNewForChange => active_keys[0].0,
-    LifetimeStage::Forwarding | LifetimeStage::Finishing => active_keys[1].0,
-  };
-
-  let mut nonce = NextNonce::get(txn).unwrap_or(0);
-  let mut eventualities = Vec::with_capacity(1);
-  for (signable, eventuality) in SC::fulfill(nonce, payments) {
-    TransactionsToSign::<SC::SignableTransaction>::send(txn, &key, &signable);
-    nonce += 1;
-    eventualities.push(eventuality);
+impl<S: ScannerFeed, SC: SmartContract<S>> Scheduler<S, SC> {
+  /// Create a new scheduler.
+  pub fn new(smart_contract: SC) -> Self {
+    Self { smart_contract, _S: PhantomData }
+  }
+
+  fn fulfill_payments(
+    &self,
+    txn: &mut impl DbTxn,
+    active_keys: &[(KeyFor<S>, LifetimeStage)],
+    payments: Vec<Payment<AddressFor<S>>>,
+  ) -> KeyScopedEventualities<S> {
+    let key = match active_keys[0].1 {
+      LifetimeStage::ActiveYetNotReporting |
+      LifetimeStage::Active |
+      LifetimeStage::UsingNewForChange => active_keys[0].0,
+      LifetimeStage::Forwarding | LifetimeStage::Finishing => active_keys[1].0,
+    };
+
+    let mut nonce = NextNonce::get(txn).unwrap_or(0);
+    let mut eventualities = Vec::with_capacity(1);
+    for (signable, eventuality) in self.smart_contract.fulfill(nonce, key, payments) {
+      TransactionsToSign::<SC::SignableTransaction>::send(txn, &key, &signable);
+      nonce += 1;
+      eventualities.push(eventuality);
+    }
+    NextNonce::set(txn, &nonce);
+    HashMap::from([(key.to_bytes().as_ref().to_vec(), eventualities)])
   }
-  NextNonce::set(txn, &nonce);
-  HashMap::from([(key.to_bytes().as_ref().to_vec(), eventualities)])
 }
 
-impl<S: ScannerFeed, SC: SmartContract<S>> SchedulerTrait<S> for Scheduler<S, SC> {
+impl<S: ScannerFeed, SC: SmartContract<S>> SchedulerTrait<S> for Scheduler<S, SC> {
   type EphemeralError = ();
 
   type SignableTransaction = SC::SignableTransaction;
 
@@ -86,7 +98,7 @@ impl<S: ScannerFeed, SC: SmartContract<S>> SchedulerTrait<S> for Scheduler<S, SC>
   ) -> impl Send + Future<Output = Result<KeyScopedEventualities<S>, Self::EphemeralError>> {
     async move {
       let nonce = NextNonce::get(txn).unwrap_or(0);
-      let (signable, eventuality) = SC::rotate(nonce, retiring_key, new_key);
+      let (signable, eventuality) = self.smart_contract.rotate(nonce, retiring_key, new_key);
       NextNonce::set(txn, &(nonce + 1));
       TransactionsToSign::<Self::SignableTransaction>::send(txn, &retiring_key, &signable);
       Ok(HashMap::from([(retiring_key.to_bytes().as_ref().to_vec(), vec![eventuality])]))
@@ -110,17 +122,19 @@ impl<S: ScannerFeed, SC: SmartContract<S>> SchedulerTrait<S> for Scheduler<S, SC>
-      Ok(fulfill_payments::<S, SC>(
-        txn,
-        active_keys,
-        update
-          .returns()
-          .iter()
-          .map(|to_return| {
-            Payment::new(to_return.address().clone(), to_return.output().balance(), None)
-          })
-          .collect::<Vec<_>>(),
-      ))
+      Ok(
+        self.fulfill_payments(
+          txn,
+          active_keys,
+          update
+            .returns()
+            .iter()
+            .map(|to_return| {
+              Payment::new(to_return.address().clone(), to_return.output().balance(), None)
+            })
+            .collect::<Vec<_>>(),
+        ),
+      )
     }
   }
 
@@ -131,6 +145,6 @@ impl<S: ScannerFeed, SC: SmartContract<S>> SchedulerTrait<S> for Scheduler<S, SC>
     active_keys: &[(KeyFor<S>, LifetimeStage)],
     payments: Vec<Payment<AddressFor<S>>>,
   ) -> impl Send + Future<Output = Result<KeyScopedEventualities<S>, Self::EphemeralError>> {
-    async move { Ok(fulfill_payments::<S, SC>(txn, active_keys, payments)) }
+    async move { Ok(self.fulfill_payments(txn, active_keys, payments)) }
   }
 }
diff --git a/substrate/client/Cargo.toml b/substrate/client/Cargo.toml
index 5f7a24d4..33bfabf9 100644
--- a/substrate/client/Cargo.toml
+++ b/substrate/client/Cargo.toml
@@ -65,6 +65,7 @@ borsh = ["serai-abi/borsh"]
 networks = []
 bitcoin = ["networks", "dep:bitcoin"]
+ethereum = ["networks"]
 monero = ["networks", "ciphersuite/ed25519", "monero-address"]
 
 # Assumes the default usage is to use Serai as a DEX, which doesn't actually
diff --git a/substrate/client/src/networks/ethereum.rs b/substrate/client/src/networks/ethereum.rs
new file mode 100644
index 00000000..09285169
--- /dev/null
+++ b/substrate/client/src/networks/ethereum.rs
@@ -0,0 +1,51 @@
+use core::{str::FromStr, fmt};
+
+use borsh::{BorshSerialize, BorshDeserialize};
+
+use crate::primitives::ExternalAddress;
+
+/// A representation of an Ethereum address.
+#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+pub struct Address([u8; 20]);
+
+impl From<[u8; 20]> for Address {
+  fn from(address: [u8; 20]) -> Self {
+    Self(address)
+  }
+}
+
+impl From<Address> for [u8; 20] {
+  fn from(address: Address) -> Self {
+    address.0
+  }
+}
+
+impl TryFrom<ExternalAddress> for Address {
+  type Error = ();
+  fn try_from(data: ExternalAddress) -> Result<Address, ()> {
+    Ok(Self(data.as_ref().try_into().map_err(|_| ())?))
+  }
+}
+impl From<Address> for ExternalAddress {
+  fn from(address: Address) -> ExternalAddress {
+    // This is 20 bytes which is less than MAX_ADDRESS_LEN
+    ExternalAddress::new(address.0.to_vec()).unwrap()
+  }
+}
+
+impl FromStr for Address {
+  type Err = ();
+  fn from_str(str: &str) -> Result<Address, ()> {
+    let Some(address) = str.strip_prefix("0x") else { Err(())? };
+    if address.len() != 40 {
+      Err(())?
+    };
+    Ok(Self(hex::decode(address.to_lowercase()).map_err(|_| ())?.try_into().unwrap()))
+  }
+}
+
+impl fmt::Display for Address {
+  fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+    write!(f, "0x{}", hex::encode(self.0))
+  }
+}
diff --git a/substrate/client/src/networks/mod.rs b/substrate/client/src/networks/mod.rs
index 63ebf481..7a99631a 100644
--- a/substrate/client/src/networks/mod.rs
+++ b/substrate/client/src/networks/mod.rs
@@ -1,5 +1,8 @@
 #[cfg(feature = "bitcoin")]
 pub mod bitcoin;
 
+#[cfg(feature = "ethereum")]
+pub mod ethereum;
+
 #[cfg(feature = "monero")]
 pub mod monero;

From 8746b54a433d838d5c532111282781da7d7a62c9 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sat, 14 Sep 2024 12:50:14 -0400
Subject: [PATCH 129/368] Don't use a different address for DAI in test

anvil will let us deploy to the existing address.

---
 processor/ethereum/src/primitives/output.rs | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/processor/ethereum/src/primitives/output.rs b/processor/ethereum/src/primitives/output.rs
index fcafae75..4dadb147 100644
--- a/processor/ethereum/src/primitives/output.rs
+++ b/processor/ethereum/src/primitives/output.rs
@@ -17,18 +17,11 @@ use serai_client::{
 
 use primitives::{OutputType, ReceivedOutput};
 
-#[cfg(not(test))]
 const DAI: [u8; 20] =
   match const_hex::const_decode_to_array(b"0x6B175474E89094C44Da98b954EedeAC495271d0F") {
     Ok(res) => res,
     Err(_) => panic!("invalid non-test DAI hex address"),
   };
-#[cfg(test)] // TODO
-const DAI: [u8; 20] =
-  match const_hex::const_decode_to_array(b"0000000000000000000000000000000000000000") {
-    Ok(res) => res,
-    Err(_) => panic!("invalid test DAI hex address"),
-  };
 
 fn coin_to_serai_coin(coin: &EthereumCoin) -> Option<Coin> {
   match coin {

From d9543bee40fce49d20afb37e571a1ea9e870e14d Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sat, 14 Sep 2024 12:58:57 -0400
Subject: [PATCH 130/368] Move ethereum-serai under the processor

It isn't generally usable and should be directly integrated at this
point.
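As an aside on the `Address` type `networks/ethereum.rs` introduces above: it round-trips between the `0x`-prefixed string form and the raw 20-byte form. A minimal sketch of the expected behavior, not part of the patch, assuming `serai-client` is built with the new `ethereum` feature:

```rust
// Sketch only: exercises serai_client::networks::ethereum::Address as added above.
use core::str::FromStr;
use serai_client::networks::ethereum::Address;

fn main() {
  // The DAI contract address used elsewhere in this patch series, as an arbitrary example
  let s = "0x6B175474E89094C44Da98b954EedeAC495271d0F";
  let address = Address::from_str(s).expect("0x followed by 40 hex characters");

  // Display lower-cases the hex while restoring the 0x prefix
  assert_eq!(address.to_string(), s.to_lowercase());

  // The raw 20-byte form round-trips via the From impls
  let raw: [u8; 20] = address.into();
  assert_eq!(Address::from(raw), address);
}
```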
--- .github/workflows/networks-tests.yml | 1 - .github/workflows/tests.yml | 1 + Cargo.toml | 2 +- deny.toml | 2 +- processor/ethereum/Cargo.toml | 2 +- .../ethereum/ethereum-serai}/.gitignore | 0 .../ethereum/ethereum-serai}/Cargo.toml | 8 ++++---- .../ethereum/ethereum-serai}/LICENSE | 0 .../ethereum/ethereum-serai}/README.md | 0 .../ethereum/ethereum-serai}/build.rs | 0 .../ethereum/ethereum-serai}/contracts/Deployer.sol | 0 .../ethereum/ethereum-serai}/contracts/IERC20.sol | 0 .../ethereum/ethereum-serai}/contracts/Router.sol | 0 .../ethereum/ethereum-serai}/contracts/Sandbox.sol | 0 .../ethereum/ethereum-serai}/contracts/Schnorr.sol | 0 .../ethereum/ethereum-serai}/src/abi/mod.rs | 0 .../ethereum/ethereum-serai}/src/crypto.rs | 0 .../ethereum/ethereum-serai}/src/deployer.rs | 0 .../ethereum/ethereum-serai}/src/erc20.rs | 0 .../ethereum/ethereum-serai}/src/lib.rs | 0 .../ethereum/ethereum-serai}/src/machine.rs | 0 .../ethereum/ethereum-serai}/src/router.rs | 0 .../ethereum/ethereum-serai}/src/tests/abi/mod.rs | 0 .../ethereum-serai}/src/tests/contracts/ERC20.sol | 0 .../ethereum-serai}/src/tests/contracts/Schnorr.sol | 0 .../ethereum/ethereum-serai}/src/tests/crypto.rs | 0 .../ethereum/ethereum-serai}/src/tests/mod.rs | 0 .../ethereum/ethereum-serai}/src/tests/router.rs | 0 .../ethereum/ethereum-serai}/src/tests/schnorr.rs | 0 tests/processor/Cargo.toml | 2 +- 30 files changed, 9 insertions(+), 9 deletions(-) rename {networks/ethereum => processor/ethereum/ethereum-serai}/.gitignore (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/Cargo.toml (75%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/LICENSE (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/README.md (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/build.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/contracts/Deployer.sol (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/contracts/IERC20.sol (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/contracts/Router.sol (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/contracts/Sandbox.sol (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/contracts/Schnorr.sol (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/abi/mod.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/crypto.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/deployer.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/erc20.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/lib.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/machine.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/router.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/tests/abi/mod.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/tests/contracts/ERC20.sol (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/tests/contracts/Schnorr.sol (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/tests/crypto.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/tests/mod.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/tests/router.rs (100%) rename {networks/ethereum => processor/ethereum/ethereum-serai}/src/tests/schnorr.rs 
(100%) diff --git a/.github/workflows/networks-tests.yml b/.github/workflows/networks-tests.yml index f346b986..7fde517b 100644 --- a/.github/workflows/networks-tests.yml +++ b/.github/workflows/networks-tests.yml @@ -31,7 +31,6 @@ jobs: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \ -p bitcoin-serai \ -p alloy-simple-request-transport \ - -p ethereum-serai \ -p serai-ethereum-relayer \ -p monero-io \ -p monero-generators \ diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index e1c54349..4e1c167a 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -52,6 +52,7 @@ jobs: -p serai-processor-signers \ -p serai-processor-bin \ -p serai-bitcoin-processor \ + -p ethereum-serai \ -p serai-ethereum-processor \ -p serai-monero-processor \ -p tendermint-machine \ diff --git a/Cargo.toml b/Cargo.toml index adaa63db..09e51255 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -47,7 +47,6 @@ members = [ "networks/bitcoin", "networks/ethereum/alloy-simple-request-transport", - "networks/ethereum", "networks/ethereum/relayer", "networks/monero/io", @@ -86,6 +85,7 @@ members = [ "processor/bin", "processor/bitcoin", + "processor/ethereum/ethereum-serai", "processor/ethereum", "processor/monero", diff --git a/deny.toml b/deny.toml index 0e013f5e..183122e8 100644 --- a/deny.toml +++ b/deny.toml @@ -40,7 +40,6 @@ allow = [ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-env" }, - { allow = ["AGPL-3.0"], name = "ethereum-serai" }, { allow = ["AGPL-3.0"], name = "serai-ethereum-relayer" }, { allow = ["AGPL-3.0"], name = "serai-message-queue" }, @@ -59,6 +58,7 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-processor-signers" }, { allow = ["AGPL-3.0"], name = "serai-bitcoin-processor" }, + { allow = ["AGPL-3.0"], name = "ethereum-serai" }, { allow = ["AGPL-3.0"], name = "serai-ethereum-processor" }, { allow = ["AGPL-3.0"], name = "serai-monero-processor" }, diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml index 12f56d72..dfed2f9d 100644 --- a/processor/ethereum/Cargo.toml +++ b/processor/ethereum/Cargo.toml @@ -29,7 +29,7 @@ dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } k256 = { version = "^0.13.1", default-features = false, features = ["std"] } -ethereum-serai = { path = "../../networks/ethereum", default-features = false, optional = true } +ethereum-serai = { path = "./ethereum-serai", default-features = false, optional = true } serai-client = { path = "../../substrate/client", default-features = false, features = ["ethereum"] } diff --git a/networks/ethereum/.gitignore b/processor/ethereum/ethereum-serai/.gitignore similarity index 100% rename from networks/ethereum/.gitignore rename to processor/ethereum/ethereum-serai/.gitignore diff --git a/networks/ethereum/Cargo.toml b/processor/ethereum/ethereum-serai/Cargo.toml similarity index 75% rename from networks/ethereum/Cargo.toml rename to processor/ethereum/ethereum-serai/Cargo.toml index a91b83c5..ed4520d1 100644 --- a/networks/ethereum/Cargo.toml +++ b/processor/ethereum/ethereum-serai/Cargo.toml @@ -21,11 +21,11 @@ thiserror = { version = "1", default-features = false } rand_core = { version = "0.6", default-features = false, features = ["std"] } -transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["recommended"] } +transcript = { package = "flexible-transcript", path = 
"../../../crypto/transcript", default-features = false, features = ["recommended"] } group = { version = "0.13", default-features = false } k256 = { version = "^0.13.1", default-features = false, features = ["std", "ecdsa", "arithmetic"] } -frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false, features = ["secp256k1"] } +frost = { package = "modular-frost", path = "../../../crypto/frost", default-features = false, features = ["secp256k1"] } alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", default-features = false, features = ["json"] } @@ -33,13 +33,13 @@ alloy-consensus = { version = "0.3", default-features = false, features = ["k256 alloy-network = { version = "0.3", default-features = false } alloy-rpc-types-eth = { version = "0.3", default-features = false } alloy-rpc-client = { version = "0.3", default-features = false } -alloy-simple-request-transport = { path = "./alloy-simple-request-transport", default-features = false } +alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } alloy-provider = { version = "0.3", default-features = false } alloy-node-bindings = { version = "0.3", default-features = false, optional = true } [dev-dependencies] -frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false, features = ["tests"] } +frost = { package = "modular-frost", path = "../../../crypto/frost", default-features = false, features = ["tests"] } tokio = { version = "1", features = ["macros"] } diff --git a/networks/ethereum/LICENSE b/processor/ethereum/ethereum-serai/LICENSE similarity index 100% rename from networks/ethereum/LICENSE rename to processor/ethereum/ethereum-serai/LICENSE diff --git a/networks/ethereum/README.md b/processor/ethereum/ethereum-serai/README.md similarity index 100% rename from networks/ethereum/README.md rename to processor/ethereum/ethereum-serai/README.md diff --git a/networks/ethereum/build.rs b/processor/ethereum/ethereum-serai/build.rs similarity index 100% rename from networks/ethereum/build.rs rename to processor/ethereum/ethereum-serai/build.rs diff --git a/networks/ethereum/contracts/Deployer.sol b/processor/ethereum/ethereum-serai/contracts/Deployer.sol similarity index 100% rename from networks/ethereum/contracts/Deployer.sol rename to processor/ethereum/ethereum-serai/contracts/Deployer.sol diff --git a/networks/ethereum/contracts/IERC20.sol b/processor/ethereum/ethereum-serai/contracts/IERC20.sol similarity index 100% rename from networks/ethereum/contracts/IERC20.sol rename to processor/ethereum/ethereum-serai/contracts/IERC20.sol diff --git a/networks/ethereum/contracts/Router.sol b/processor/ethereum/ethereum-serai/contracts/Router.sol similarity index 100% rename from networks/ethereum/contracts/Router.sol rename to processor/ethereum/ethereum-serai/contracts/Router.sol diff --git a/networks/ethereum/contracts/Sandbox.sol b/processor/ethereum/ethereum-serai/contracts/Sandbox.sol similarity index 100% rename from networks/ethereum/contracts/Sandbox.sol rename to processor/ethereum/ethereum-serai/contracts/Sandbox.sol diff --git a/networks/ethereum/contracts/Schnorr.sol b/processor/ethereum/ethereum-serai/contracts/Schnorr.sol similarity index 100% rename from networks/ethereum/contracts/Schnorr.sol rename to processor/ethereum/ethereum-serai/contracts/Schnorr.sol diff --git a/networks/ethereum/src/abi/mod.rs 
b/processor/ethereum/ethereum-serai/src/abi/mod.rs similarity index 100% rename from networks/ethereum/src/abi/mod.rs rename to processor/ethereum/ethereum-serai/src/abi/mod.rs diff --git a/networks/ethereum/src/crypto.rs b/processor/ethereum/ethereum-serai/src/crypto.rs similarity index 100% rename from networks/ethereum/src/crypto.rs rename to processor/ethereum/ethereum-serai/src/crypto.rs diff --git a/networks/ethereum/src/deployer.rs b/processor/ethereum/ethereum-serai/src/deployer.rs similarity index 100% rename from networks/ethereum/src/deployer.rs rename to processor/ethereum/ethereum-serai/src/deployer.rs diff --git a/networks/ethereum/src/erc20.rs b/processor/ethereum/ethereum-serai/src/erc20.rs similarity index 100% rename from networks/ethereum/src/erc20.rs rename to processor/ethereum/ethereum-serai/src/erc20.rs diff --git a/networks/ethereum/src/lib.rs b/processor/ethereum/ethereum-serai/src/lib.rs similarity index 100% rename from networks/ethereum/src/lib.rs rename to processor/ethereum/ethereum-serai/src/lib.rs diff --git a/networks/ethereum/src/machine.rs b/processor/ethereum/ethereum-serai/src/machine.rs similarity index 100% rename from networks/ethereum/src/machine.rs rename to processor/ethereum/ethereum-serai/src/machine.rs diff --git a/networks/ethereum/src/router.rs b/processor/ethereum/ethereum-serai/src/router.rs similarity index 100% rename from networks/ethereum/src/router.rs rename to processor/ethereum/ethereum-serai/src/router.rs diff --git a/networks/ethereum/src/tests/abi/mod.rs b/processor/ethereum/ethereum-serai/src/tests/abi/mod.rs similarity index 100% rename from networks/ethereum/src/tests/abi/mod.rs rename to processor/ethereum/ethereum-serai/src/tests/abi/mod.rs diff --git a/networks/ethereum/src/tests/contracts/ERC20.sol b/processor/ethereum/ethereum-serai/src/tests/contracts/ERC20.sol similarity index 100% rename from networks/ethereum/src/tests/contracts/ERC20.sol rename to processor/ethereum/ethereum-serai/src/tests/contracts/ERC20.sol diff --git a/networks/ethereum/src/tests/contracts/Schnorr.sol b/processor/ethereum/ethereum-serai/src/tests/contracts/Schnorr.sol similarity index 100% rename from networks/ethereum/src/tests/contracts/Schnorr.sol rename to processor/ethereum/ethereum-serai/src/tests/contracts/Schnorr.sol diff --git a/networks/ethereum/src/tests/crypto.rs b/processor/ethereum/ethereum-serai/src/tests/crypto.rs similarity index 100% rename from networks/ethereum/src/tests/crypto.rs rename to processor/ethereum/ethereum-serai/src/tests/crypto.rs diff --git a/networks/ethereum/src/tests/mod.rs b/processor/ethereum/ethereum-serai/src/tests/mod.rs similarity index 100% rename from networks/ethereum/src/tests/mod.rs rename to processor/ethereum/ethereum-serai/src/tests/mod.rs diff --git a/networks/ethereum/src/tests/router.rs b/processor/ethereum/ethereum-serai/src/tests/router.rs similarity index 100% rename from networks/ethereum/src/tests/router.rs rename to processor/ethereum/ethereum-serai/src/tests/router.rs diff --git a/networks/ethereum/src/tests/schnorr.rs b/processor/ethereum/ethereum-serai/src/tests/schnorr.rs similarity index 100% rename from networks/ethereum/src/tests/schnorr.rs rename to processor/ethereum/ethereum-serai/src/tests/schnorr.rs diff --git a/tests/processor/Cargo.toml b/tests/processor/Cargo.toml index 13299b93..e37dc2a9 100644 --- a/tests/processor/Cargo.toml +++ b/tests/processor/Cargo.toml @@ -29,7 +29,7 @@ dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] bitcoin-serai = { 
path = "../../networks/bitcoin" } k256 = "0.13" -ethereum-serai = { path = "../../networks/ethereum" } +ethereum-serai = { path = "../../processor/ethereum/ethereum-serai" } monero-simple-request-rpc = { path = "../../networks/monero/rpc/simple-request" } monero-wallet = { path = "../../networks/monero/wallet" } From 239127aae54b16a19d7f556e3a2b57f4ab93f8da Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 14 Sep 2024 22:12:32 -0400 Subject: [PATCH 131/368] Add crate for the Ethereum contracts --- .github/workflows/tests.yml | 2 +- Cargo.lock | 8 ++++ Cargo.toml | 1 + deny.toml | 1 + .../{ethereum-serai => contracts}/.gitignore | 0 processor/ethereum/contracts/Cargo.toml | 20 ++++++++ processor/ethereum/contracts/LICENSE | 15 ++++++ processor/ethereum/contracts/README.md | 7 +++ .../{ethereum-serai => contracts}/build.rs | 12 +++-- .../contracts/Deployer.sol | 0 .../contracts/IERC20.sol | 0 .../contracts/Router.sol | 0 .../contracts/Sandbox.sol | 0 .../contracts/Schnorr.sol | 0 .../contracts/tests}/ERC20.sol | 0 .../contracts/tests}/Schnorr.sol | 0 processor/ethereum/contracts/src/lib.rs | 48 +++++++++++++++++++ .../abi/mod.rs => contracts/src/tests.rs} | 4 +- processor/ethereum/ethereum-serai/Cargo.toml | 2 + .../ethereum/ethereum-serai/src/abi/mod.rs | 37 -------------- .../ethereum/ethereum-serai/src/deployer.rs | 2 +- processor/ethereum/ethereum-serai/src/lib.rs | 6 ++- .../ethereum/ethereum-serai/src/router.rs | 2 +- .../ethereum/ethereum-serai/src/tests/mod.rs | 2 +- 24 files changed, 121 insertions(+), 48 deletions(-) rename processor/ethereum/{ethereum-serai => contracts}/.gitignore (100%) create mode 100644 processor/ethereum/contracts/Cargo.toml create mode 100644 processor/ethereum/contracts/LICENSE create mode 100644 processor/ethereum/contracts/README.md rename processor/ethereum/{ethereum-serai => contracts}/build.rs (78%) rename processor/ethereum/{ethereum-serai => contracts}/contracts/Deployer.sol (100%) rename processor/ethereum/{ethereum-serai => contracts}/contracts/IERC20.sol (100%) rename processor/ethereum/{ethereum-serai => contracts}/contracts/Router.sol (100%) rename processor/ethereum/{ethereum-serai => contracts}/contracts/Sandbox.sol (100%) rename processor/ethereum/{ethereum-serai => contracts}/contracts/Schnorr.sol (100%) rename processor/ethereum/{ethereum-serai/src/tests/contracts => contracts/contracts/tests}/ERC20.sol (100%) rename processor/ethereum/{ethereum-serai/src/tests/contracts => contracts/contracts/tests}/Schnorr.sol (100%) create mode 100644 processor/ethereum/contracts/src/lib.rs rename processor/ethereum/{ethereum-serai/src/tests/abi/mod.rs => contracts/src/tests.rs} (71%) delete mode 100644 processor/ethereum/ethereum-serai/src/abi/mod.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 4e1c167a..9b90ee91 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -52,7 +52,7 @@ jobs: -p serai-processor-signers \ -p serai-processor-bin \ -p serai-bitcoin-processor \ - -p ethereum-serai \ + -p serai-processor-ethereum-contracts \ -p serai-ethereum-processor \ -p serai-monero-processor \ -p tendermint-machine \ diff --git a/Cargo.lock b/Cargo.lock index e98a8f34..55108241 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -2497,6 +2497,7 @@ dependencies = [ "k256", "modular-frost", "rand_core", + "serai-processor-ethereum-contracts", "thiserror", "tokio", ] @@ -8671,6 +8672,13 @@ dependencies = [ "zeroize", ] +[[package]] +name = "serai-processor-ethereum-contracts" +version = "0.1.0" +dependencies = 
[
+ "alloy-sol-types",
+]
+
 [[package]]
 name = "serai-processor-frost-attempt-manager"
 version = "0.1.0"
diff --git a/Cargo.toml b/Cargo.toml
index 09e51255..f06d76ef 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -85,6 +85,7 @@ members = [
 
   "processor/bin",
   "processor/bitcoin",
+  "processor/ethereum/contracts",
   "processor/ethereum/ethereum-serai",
   "processor/ethereum",
   "processor/monero",
diff --git a/deny.toml b/deny.toml
index 183122e8..cef3a683 100644
--- a/deny.toml
+++ b/deny.toml
@@ -59,6 +59,7 @@ exceptions = [
   { allow = ["AGPL-3.0"], name = "serai-bitcoin-processor" },
 
   { allow = ["AGPL-3.0"], name = "ethereum-serai" },
+  { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-contracts" },
   { allow = ["AGPL-3.0"], name = "serai-ethereum-processor" },
   { allow = ["AGPL-3.0"], name = "serai-monero-processor" },
 
diff --git a/processor/ethereum/ethereum-serai/.gitignore b/processor/ethereum/contracts/.gitignore
similarity index 100%
rename from processor/ethereum/ethereum-serai/.gitignore
rename to processor/ethereum/contracts/.gitignore
diff --git a/processor/ethereum/contracts/Cargo.toml b/processor/ethereum/contracts/Cargo.toml
new file mode 100644
index 00000000..87beba08
--- /dev/null
+++ b/processor/ethereum/contracts/Cargo.toml
@@ -0,0 +1,20 @@
+[package]
+name = "serai-processor-ethereum-contracts"
+version = "0.1.0"
+description = "Ethereum contracts for the Serai processor"
+license = "AGPL-3.0-only"
+repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum/contracts"
+authors = ["Luke Parker ", "Elizabeth Binks "]
+edition = "2021"
+publish = false
+rust-version = "1.79"
+
+[package.metadata.docs.rs]
+all-features = true
+rustdoc-args = ["--cfg", "docsrs"]
+
+[lints]
+workspace = true
+
+[dependencies]
+alloy-sol-types = { version = "0.8", default-features = false }
diff --git a/processor/ethereum/contracts/LICENSE b/processor/ethereum/contracts/LICENSE
new file mode 100644
index 00000000..41d5a261
--- /dev/null
+++ b/processor/ethereum/contracts/LICENSE
@@ -0,0 +1,15 @@
+AGPL-3.0-only license
+
+Copyright (c) 2022-2024 Luke Parker
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU Affero General Public License Version 3 as
+published by the Free Software Foundation.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU Affero General Public License for more details.
+
+You should have received a copy of the GNU Affero General Public License
+along with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/processor/ethereum/contracts/README.md b/processor/ethereum/contracts/README.md
new file mode 100644
index 00000000..fcd8f3c7
--- /dev/null
+++ b/processor/ethereum/contracts/README.md
@@ -0,0 +1,7 @@
+# Serai Processor Ethereum Contracts
+
+The Ethereum contracts used for (and for testing) the Serai processor. This is
+its own crate for organizational and build-time reasons. It is not intended to
+be publicly used.
+
+This crate will fail to build if `solc` is not installed and available.
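The `lib.rs` this crate adds (below) pairs `alloy_sol_types::sol!` bindings with the solc-emitted bytecode. For context, a self-contained sketch of how such `sol!`-generated bindings produce calldata; the `Counter` interface here is a hypothetical stand-in, not one of this crate's contracts:

```rust
// Sketch only: demonstrates the sol! bindings pattern with a hypothetical interface.
use alloy_sol_types::{sol, SolCall};

sol! {
  interface Counter {
    function increment(uint64 by) external returns (uint64);
  }
}

fn main() {
  // ABI-encode a call to increment(5), as would be set as transaction calldata
  let calldata = Counter::incrementCall { by: 5 }.abi_encode();
  // Four selector bytes followed by one 32-byte-padded argument
  assert_eq!(calldata.len(), 4 + 32);
}
```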
diff --git a/processor/ethereum/ethereum-serai/build.rs b/processor/ethereum/contracts/build.rs similarity index 78% rename from processor/ethereum/ethereum-serai/build.rs rename to processor/ethereum/contracts/build.rs index 38fcfe00..fe79fcc1 100644 --- a/processor/ethereum/ethereum-serai/build.rs +++ b/processor/ethereum/contracts/build.rs @@ -28,14 +28,18 @@ fn main() { "./contracts/Sandbox.sol", "./contracts/Router.sol", - "./src/tests/contracts/Schnorr.sol", - "./src/tests/contracts/ERC20.sol", + "./contracts/tests/Schnorr.sol", + "./contracts/tests/ERC20.sol", "--no-color", ]; let solc = Command::new("solc").args(args).output().unwrap(); assert!(solc.status.success()); - for line in String::from_utf8(solc.stderr).unwrap().lines() { - assert!(!line.starts_with("Error:")); + let stderr = String::from_utf8(solc.stderr).unwrap(); + for line in stderr.lines() { + if line.contains("Error:") { + println!("{stderr}"); + panic!() + } } } diff --git a/processor/ethereum/ethereum-serai/contracts/Deployer.sol b/processor/ethereum/contracts/contracts/Deployer.sol similarity index 100% rename from processor/ethereum/ethereum-serai/contracts/Deployer.sol rename to processor/ethereum/contracts/contracts/Deployer.sol diff --git a/processor/ethereum/ethereum-serai/contracts/IERC20.sol b/processor/ethereum/contracts/contracts/IERC20.sol similarity index 100% rename from processor/ethereum/ethereum-serai/contracts/IERC20.sol rename to processor/ethereum/contracts/contracts/IERC20.sol diff --git a/processor/ethereum/ethereum-serai/contracts/Router.sol b/processor/ethereum/contracts/contracts/Router.sol similarity index 100% rename from processor/ethereum/ethereum-serai/contracts/Router.sol rename to processor/ethereum/contracts/contracts/Router.sol diff --git a/processor/ethereum/ethereum-serai/contracts/Sandbox.sol b/processor/ethereum/contracts/contracts/Sandbox.sol similarity index 100% rename from processor/ethereum/ethereum-serai/contracts/Sandbox.sol rename to processor/ethereum/contracts/contracts/Sandbox.sol diff --git a/processor/ethereum/ethereum-serai/contracts/Schnorr.sol b/processor/ethereum/contracts/contracts/Schnorr.sol similarity index 100% rename from processor/ethereum/ethereum-serai/contracts/Schnorr.sol rename to processor/ethereum/contracts/contracts/Schnorr.sol diff --git a/processor/ethereum/ethereum-serai/src/tests/contracts/ERC20.sol b/processor/ethereum/contracts/contracts/tests/ERC20.sol similarity index 100% rename from processor/ethereum/ethereum-serai/src/tests/contracts/ERC20.sol rename to processor/ethereum/contracts/contracts/tests/ERC20.sol diff --git a/processor/ethereum/ethereum-serai/src/tests/contracts/Schnorr.sol b/processor/ethereum/contracts/contracts/tests/Schnorr.sol similarity index 100% rename from processor/ethereum/ethereum-serai/src/tests/contracts/Schnorr.sol rename to processor/ethereum/contracts/contracts/tests/Schnorr.sol diff --git a/processor/ethereum/contracts/src/lib.rs b/processor/ethereum/contracts/src/lib.rs new file mode 100644 index 00000000..fef10288 --- /dev/null +++ b/processor/ethereum/contracts/src/lib.rs @@ -0,0 +1,48 @@ +use alloy_sol_types::sol; + +#[rustfmt::skip] +#[expect(warnings)] +#[expect(needless_pass_by_value)] +#[expect(clippy::all)] +#[expect(clippy::ignored_unit_patterns)] +#[expect(clippy::redundant_closure_for_method_calls)] +mod erc20_container { + use super::*; + sol!("contracts/IERC20.sol"); +} +pub mod erc20 { + pub const BYTECODE: &str = include_str!("../artifacts/Deployer.bin"); + pub use 
super::erc20_container::IERC20::*; +} + +#[rustfmt::skip] +#[expect(warnings)] +#[expect(needless_pass_by_value)] +#[expect(clippy::all)] +#[expect(clippy::ignored_unit_patterns)] +#[expect(clippy::redundant_closure_for_method_calls)] +mod deployer_container { + use super::*; + sol!("contracts/Deployer.sol"); +} +pub mod deployer { + pub const BYTECODE: &str = include_str!("../artifacts/Deployer.bin"); + pub use super::deployer_container::Deployer::*; +} + +#[rustfmt::skip] +#[expect(warnings)] +#[expect(needless_pass_by_value)] +#[expect(clippy::all)] +#[expect(clippy::ignored_unit_patterns)] +#[expect(clippy::redundant_closure_for_method_calls)] +mod router_container { + use super::*; + sol!(Router, "artifacts/Router.abi"); +} +pub mod router { + pub const BYTECODE: &str = include_str!("../artifacts/Router.bin"); + pub use super::router_container::Router::*; +} + +pub mod tests; diff --git a/processor/ethereum/ethereum-serai/src/tests/abi/mod.rs b/processor/ethereum/contracts/src/tests.rs similarity index 71% rename from processor/ethereum/ethereum-serai/src/tests/abi/mod.rs rename to processor/ethereum/contracts/src/tests.rs index 57ea8811..9f141c29 100644 --- a/processor/ethereum/ethereum-serai/src/tests/abi/mod.rs +++ b/processor/ethereum/contracts/src/tests.rs @@ -8,6 +8,6 @@ use alloy_sol_types::sol; #[allow(clippy::redundant_closure_for_method_calls)] mod schnorr_container { use super::*; - sol!("src/tests/contracts/Schnorr.sol"); + sol!("contracts/tests/Schnorr.sol"); } -pub(crate) use schnorr_container::TestSchnorr as schnorr; +pub use schnorr_container::TestSchnorr as schnorr; diff --git a/processor/ethereum/ethereum-serai/Cargo.toml b/processor/ethereum/ethereum-serai/Cargo.toml index ed4520d1..f0ea323f 100644 --- a/processor/ethereum/ethereum-serai/Cargo.toml +++ b/processor/ethereum/ethereum-serai/Cargo.toml @@ -38,6 +38,8 @@ alloy-provider = { version = "0.3", default-features = false } alloy-node-bindings = { version = "0.3", default-features = false, optional = true } +contracts = { package = "serai-processor-ethereum-contracts", path = "../contracts" } + [dev-dependencies] frost = { package = "modular-frost", path = "../../../crypto/frost", default-features = false, features = ["tests"] } diff --git a/processor/ethereum/ethereum-serai/src/abi/mod.rs b/processor/ethereum/ethereum-serai/src/abi/mod.rs deleted file mode 100644 index 1ae23374..00000000 --- a/processor/ethereum/ethereum-serai/src/abi/mod.rs +++ /dev/null @@ -1,37 +0,0 @@ -use alloy_sol_types::sol; - -#[rustfmt::skip] -#[allow(warnings)] -#[allow(needless_pass_by_value)] -#[allow(clippy::all)] -#[allow(clippy::ignored_unit_patterns)] -#[allow(clippy::redundant_closure_for_method_calls)] -mod erc20_container { - use super::*; - sol!("contracts/IERC20.sol"); -} -pub use erc20_container::IERC20 as erc20; - -#[rustfmt::skip] -#[allow(warnings)] -#[allow(needless_pass_by_value)] -#[allow(clippy::all)] -#[allow(clippy::ignored_unit_patterns)] -#[allow(clippy::redundant_closure_for_method_calls)] -mod deployer_container { - use super::*; - sol!("contracts/Deployer.sol"); -} -pub use deployer_container::Deployer as deployer; - -#[rustfmt::skip] -#[allow(warnings)] -#[allow(needless_pass_by_value)] -#[allow(clippy::all)] -#[allow(clippy::ignored_unit_patterns)] -#[allow(clippy::redundant_closure_for_method_calls)] -mod router_container { - use super::*; - sol!(Router, "artifacts/Router.abi"); -} -pub use router_container::Router as router; diff --git a/processor/ethereum/ethereum-serai/src/deployer.rs 
b/processor/ethereum/ethereum-serai/src/deployer.rs
index 19aa328d..88f4a5fb 100644
--- a/processor/ethereum/ethereum-serai/src/deployer.rs
+++ b/processor/ethereum/ethereum-serai/src/deployer.rs
@@ -30,7 +30,7 @@ impl Deployer {
   /// funded for this transaction to be submitted. This account has no known private key to anyone,
   /// so ETH sent can be neither misappropriated nor returned.
   pub fn deployment_tx() -> Signed<TxLegacy> {
-    let bytecode = include_str!("../artifacts/Deployer.bin");
+    let bytecode = contracts::deployer::BYTECODE;
     let bytecode =
       Bytes::from_hex(bytecode).expect("compiled-in Deployer bytecode wasn't valid hex");
 
diff --git a/processor/ethereum/ethereum-serai/src/lib.rs b/processor/ethereum/ethereum-serai/src/lib.rs
index 38bd79e7..76121401 100644
--- a/processor/ethereum/ethereum-serai/src/lib.rs
+++ b/processor/ethereum/ethereum-serai/src/lib.rs
@@ -15,7 +15,11 @@ pub mod alloy {
 
 pub mod crypto;
 
-pub(crate) mod abi;
+pub(crate) mod abi {
+  pub use contracts::erc20;
+  pub use contracts::deployer;
+  pub use contracts::router;
+}
 
 pub mod erc20;
 pub mod deployer;
diff --git a/processor/ethereum/ethereum-serai/src/router.rs b/processor/ethereum/ethereum-serai/src/router.rs
index c569d409..95866e67 100644
--- a/processor/ethereum/ethereum-serai/src/router.rs
+++ b/processor/ethereum/ethereum-serai/src/router.rs
@@ -135,7 +135,7 @@ pub struct Executed {
 pub struct Router(Arc<RootProvider<SimpleRequest>>, Address);
 impl Router {
   pub(crate) fn code() -> Vec<u8> {
-    let bytecode = include_str!("../artifacts/Router.bin");
+    let bytecode = contracts::router::BYTECODE;
     Bytes::from_hex(bytecode).expect("compiled-in Router bytecode wasn't valid hex").to_vec()
   }
 
diff --git a/processor/ethereum/ethereum-serai/src/tests/mod.rs b/processor/ethereum/ethereum-serai/src/tests/mod.rs
index dcdbedce..bdfa8414 100644
--- a/processor/ethereum/ethereum-serai/src/tests/mod.rs
+++ b/processor/ethereum/ethereum-serai/src/tests/mod.rs
@@ -21,7 +21,7 @@ use crate::crypto::{address, deterministically_sign, PublicKey};
 mod crypto;
 
 #[cfg(test)]
-mod abi;
+use contracts::tests as abi;
 #[cfg(test)]
 mod schnorr;
 #[cfg(test)]

From bdf89f53505040d16fb31cf67135714c430b2241 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sat, 14 Sep 2024 22:44:16 -0400
Subject: [PATCH 132/368] Add dedicated crate for building Solidity contracts

---
 .github/workflows/networks-tests.yml         |  1 +
 Cargo.lock                                   |  5 ++
 Cargo.toml                                   |  1 +
 networks/ethereum/build-contracts/Cargo.toml | 15 ++++
 networks/ethereum/build-contracts/LICENSE    | 15 ++++
 networks/ethereum/build-contracts/README.md  |  4 +
 networks/ethereum/build-contracts/src/lib.rs | 88 ++++++++++++++++++++
 processor/ethereum/contracts/Cargo.toml      |  3 +
 processor/ethereum/contracts/build.rs        | 44 +---------
 9 files changed, 133 insertions(+), 43 deletions(-)
 create mode 100644 networks/ethereum/build-contracts/Cargo.toml
 create mode 100644 networks/ethereum/build-contracts/LICENSE
 create mode 100644 networks/ethereum/build-contracts/README.md
 create mode 100644 networks/ethereum/build-contracts/src/lib.rs

diff --git a/.github/workflows/networks-tests.yml b/.github/workflows/networks-tests.yml
index 7fde517b..ee095df6 100644
--- a/.github/workflows/networks-tests.yml
+++ b/.github/workflows/networks-tests.yml
@@ -30,6 +30,7 @@ jobs:
         run: |
           GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
             -p bitcoin-serai \
+            -p build-solidity-contracts \
             -p alloy-simple-request-transport \
             -p serai-ethereum-relayer \
             -p monero-io \
diff --git a/Cargo.lock b/Cargo.lock
index 55108241..f4584f65 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -1318,6 +1318,10 @@ dependencies = [
  "semver 0.6.0",
 ]
 
+[[package]]
+name = "build-solidity-contracts"
+version = "0.1.0"
+
 [[package]]
 name = "bumpalo"
 version = "3.16.0"
diff --git a/Cargo.toml b/Cargo.toml
index f06d76ef..08e0aabe 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -46,6 +46,7 @@ members = [
 
   "networks/bitcoin",
 
+  "networks/ethereum/build-contracts",
   "networks/ethereum/alloy-simple-request-transport",
   "networks/ethereum/relayer",
 
diff --git a/networks/ethereum/build-contracts/Cargo.toml b/networks/ethereum/build-contracts/Cargo.toml
new file mode 100644
index 00000000..cb47a28d
--- /dev/null
+++ b/networks/ethereum/build-contracts/Cargo.toml
@@ -0,0 +1,15 @@
+[package]
+name = "build-solidity-contracts"
+version = "0.1.0"
+description = "A helper function to build Solidity contracts"
+license = "MIT"
+repository = "https://github.com/serai-dex/serai/tree/develop/networks/ethereum/build-contracts"
+authors = ["Luke Parker "]
+edition = "2021"
+
+[package.metadata.docs.rs]
+all-features = true
+rustdoc-args = ["--cfg", "docsrs"]
+
+[lints]
+workspace = true
diff --git a/networks/ethereum/build-contracts/LICENSE b/networks/ethereum/build-contracts/LICENSE
new file mode 100644
index 00000000..41d5a261
--- /dev/null
+++ b/networks/ethereum/build-contracts/LICENSE
@@ -0,0 +1,15 @@
+AGPL-3.0-only license
+
+Copyright (c) 2022-2024 Luke Parker
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU Affero General Public License Version 3 as
+published by the Free Software Foundation.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU Affero General Public License for more details.
+
+You should have received a copy of the GNU Affero General Public License
+along with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/networks/ethereum/build-contracts/README.md b/networks/ethereum/build-contracts/README.md
new file mode 100644
index 00000000..437f15c2
--- /dev/null
+++ b/networks/ethereum/build-contracts/README.md
@@ -0,0 +1,4 @@
+# Build Solidity Contracts
+
+A helper function to build Solidity contracts. This is intended to be called
+from within build scripts.
diff --git a/networks/ethereum/build-contracts/src/lib.rs b/networks/ethereum/build-contracts/src/lib.rs
new file mode 100644
index 00000000..c546b111
--- /dev/null
+++ b/networks/ethereum/build-contracts/src/lib.rs
@@ -0,0 +1,88 @@
+#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![doc = include_str!("../README.md")]
+#![deny(missing_docs)]
+
+use std::{path::PathBuf, fs, process::Command};
+
+/// Build contracts placed in `contracts/`, outputting to `artifacts/`.
+///
+/// Requires solc 0.8.25.
+pub fn build(contracts_path: &str, artifacts_path: &str) -> Result<(), String> {
+  println!("cargo:rerun-if-changed={contracts_path}/*");
+  println!("cargo:rerun-if-changed={artifacts_path}/*");
+
+  for line in String::from_utf8(
+    Command::new("solc")
+      .args(["--version"])
+      .output()
+      .map_err(|_| "couldn't fetch solc output".to_string())?
+      .stdout,
+  )
+  .map_err(|_| "solc stdout wasn't UTF-8")?
+  .lines()
+  {
+    if let Some(version) = line.strip_prefix("Version: ") {
+      let version =
+        version.split('+').next().ok_or_else(|| "no value present on line".to_string())?;
+      if version != "0.8.25" {
+        Err(format!("version was {version}, 0.8.25 required"))?
+      }
+    }
+  }
+
+  #[rustfmt::skip]
+  let args = [
+    "--base-path", ".",
+    "-o", "./artifacts", "--overwrite",
+    "--bin", "--abi",
+    "--via-ir", "--optimize",
+    "--no-color",
+  ];
+  let mut args = args.into_iter().map(str::to_string).collect::<Vec<_>>();
+
+  let mut queue = vec![PathBuf::from(contracts_path)];
+  while let Some(folder) = queue.pop() {
+    for entry in fs::read_dir(folder).map_err(|e| format!("couldn't read directory: {e:?}"))? {
+      let entry = entry.map_err(|e| format!("couldn't read directory in entry: {e:?}"))?;
+      let kind = entry.file_type().map_err(|e| format!("couldn't fetch file type: {e:?}"))?;
+      if kind.is_dir() {
+        queue.push(entry.path());
+      }
+
+      if kind.is_file() &&
+        entry
+          .file_name()
+          .into_string()
+          .map_err(|_| "file name wasn't a valid UTF-8 string".to_string())?
+          .ends_with(".sol")
+      {
+        args.push(
+          entry
+            .path()
+            .into_os_string()
+            .into_string()
+            .map_err(|_| "file path wasn't a valid UTF-8 string".to_string())?,
+        );
+      }
+
+      // We purposely ignore symlinks to avoid recursive structures
+    }
+  }
+
+  let solc = Command::new("solc")
+    .args(args)
+    .output()
+    .map_err(|_| "couldn't fetch solc output".to_string())?;
+  let stderr =
+    String::from_utf8(solc.stderr).map_err(|_| "solc stderr wasn't UTF-8".to_string())?;
+  if !solc.status.success() {
+    Err(format!("solc didn't successfully execute: {stderr}"))?;
+  }
+  for line in stderr.lines() {
+    if line.contains("Error:") {
+      Err(format!("solc output had error: {stderr}"))?;
+    }
+  }
+
+  Ok(())
+}
diff --git a/processor/ethereum/contracts/Cargo.toml b/processor/ethereum/contracts/Cargo.toml
index 87beba08..f09eb938 100644
--- a/processor/ethereum/contracts/Cargo.toml
+++ b/processor/ethereum/contracts/Cargo.toml
@@ -18,3 +18,6 @@ workspace = true
 
 [dependencies]
 alloy-sol-types = { version = "0.8", default-features = false }
+
+[build-dependencies]
+build-solidity-contracts = { path = "../../../networks/ethereum/build-contracts" }
diff --git a/processor/ethereum/contracts/build.rs b/processor/ethereum/contracts/build.rs
index fe79fcc1..8e310b60 100644
--- a/processor/ethereum/contracts/build.rs
+++ b/processor/ethereum/contracts/build.rs
@@ -1,45 +1,3 @@
-use std::process::Command;
-
 fn main() {
-  println!("cargo:rerun-if-changed=contracts/*");
-  println!("cargo:rerun-if-changed=artifacts/*");
-
-  for line in String::from_utf8(Command::new("solc").args(["--version"]).output().unwrap().stdout)
-    .unwrap()
-    .lines()
-  {
-    if let Some(version) = line.strip_prefix("Version: ") {
-      let version = version.split('+').next().unwrap();
-      assert_eq!(version, "0.8.25");
-    }
-  }
-
-  #[rustfmt::skip]
-  let args = [
-    "--base-path", ".",
-    "-o", "./artifacts", "--overwrite",
-    "--bin", "--abi",
-    "--via-ir", "--optimize",
-
-    "./contracts/IERC20.sol",
-
-    "./contracts/Schnorr.sol",
-    "./contracts/Deployer.sol",
-    "./contracts/Sandbox.sol",
-    "./contracts/Router.sol",
-
-    "./contracts/tests/Schnorr.sol",
-    "./contracts/tests/ERC20.sol",
-
-    "--no-color",
-  ];
-  let solc = Command::new("solc").args(args).output().unwrap();
-  assert!(solc.status.success());
-  let stderr = String::from_utf8(solc.stderr).unwrap();
-  for line in stderr.lines() {
-    if line.contains("Error:") {
-      println!("{stderr}");
-      panic!()
-    }
-  }
+  build_solidity_contracts::build("contracts",
"artifacts").unwrap(); } From 1c5bc2259e609890d852ca8291980b9addbb001c Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 15 Sep 2024 00:41:16 -0400 Subject: [PATCH 133/368] Dedicated crate for the Schnorr contract --- .github/workflows/networks-tests.yml | 1 + Cargo.lock | 20 ++++ Cargo.toml | 1 + deny.toml | 1 + networks/ethereum/build-contracts/src/lib.rs | 2 +- networks/ethereum/schnorr/.gitignore | 1 + networks/ethereum/schnorr/Cargo.toml | 42 +++++++ networks/ethereum/schnorr/LICENSE | 15 +++ networks/ethereum/schnorr/README.md | 5 + networks/ethereum/schnorr/build.rs | 3 + .../ethereum/schnorr}/contracts/Schnorr.sol | 32 +++--- .../schnorr}/contracts/tests/Schnorr.sol | 8 +- networks/ethereum/schnorr/src/lib.rs | 15 +++ networks/ethereum/schnorr/src/public_key.rs | 68 ++++++++++++ networks/ethereum/schnorr/src/signature.rs | 95 ++++++++++++++++ networks/ethereum/schnorr/src/tests.rs | 103 ++++++++++++++++++ processor/ethereum/contracts/.gitignore | 2 - .../ethereum/ethereum-serai/src/crypto.rs | 102 ----------------- .../ethereum/ethereum-serai/src/tests/mod.rs | 2 - .../ethereum-serai/src/tests/schnorr.rs | 93 ---------------- 20 files changed, 389 insertions(+), 222 deletions(-) create mode 100644 networks/ethereum/schnorr/.gitignore create mode 100644 networks/ethereum/schnorr/Cargo.toml create mode 100644 networks/ethereum/schnorr/LICENSE create mode 100644 networks/ethereum/schnorr/README.md create mode 100644 networks/ethereum/schnorr/build.rs rename {processor/ethereum/contracts => networks/ethereum/schnorr}/contracts/Schnorr.sol (50%) rename {processor/ethereum/contracts => networks/ethereum/schnorr}/contracts/tests/Schnorr.sol (53%) create mode 100644 networks/ethereum/schnorr/src/lib.rs create mode 100644 networks/ethereum/schnorr/src/public_key.rs create mode 100644 networks/ethereum/schnorr/src/signature.rs create mode 100644 networks/ethereum/schnorr/src/tests.rs delete mode 100644 processor/ethereum/ethereum-serai/src/tests/schnorr.rs diff --git a/.github/workflows/networks-tests.yml b/.github/workflows/networks-tests.yml index ee095df6..92044978 100644 --- a/.github/workflows/networks-tests.yml +++ b/.github/workflows/networks-tests.yml @@ -31,6 +31,7 @@ jobs: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \ -p bitcoin-serai \ -p build-solidity-contracts \ + -p ethereum-schnorr-contract \ -p alloy-simple-request-transport \ -p serai-ethereum-relayer \ -p monero-io \ diff --git a/Cargo.lock b/Cargo.lock index f4584f65..353206e9 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -2483,6 +2483,26 @@ dependencies = [ "windows-sys 0.52.0", ] +[[package]] +name = "ethereum-schnorr-contract" +version = "0.1.0" +dependencies = [ + "alloy-core", + "alloy-node-bindings", + "alloy-provider", + "alloy-rpc-client", + "alloy-rpc-types-eth", + "alloy-simple-request-transport", + "alloy-sol-types", + "build-solidity-contracts", + "group", + "k256", + "rand_core", + "sha3", + "subtle", + "tokio", +] + [[package]] name = "ethereum-serai" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index 08e0aabe..b30112b2 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -47,6 +47,7 @@ members = [ "networks/bitcoin", "networks/ethereum/build-contracts", + "networks/ethereum/schnorr", "networks/ethereum/alloy-simple-request-transport", "networks/ethereum/relayer", diff --git a/deny.toml b/deny.toml index cef3a683..ec948fef 100644 --- a/deny.toml +++ b/deny.toml @@ -40,6 +40,7 @@ allow = [ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-env" }, + { allow = ["AGPL-3.0"], name = 
"ethereum-schnorr-contract" }, { allow = ["AGPL-3.0"], name = "serai-ethereum-relayer" }, { allow = ["AGPL-3.0"], name = "serai-message-queue" }, diff --git a/networks/ethereum/build-contracts/src/lib.rs b/networks/ethereum/build-contracts/src/lib.rs index c546b111..93ab253e 100644 --- a/networks/ethereum/build-contracts/src/lib.rs +++ b/networks/ethereum/build-contracts/src/lib.rs @@ -34,7 +34,7 @@ pub fn build(contracts_path: &str, artifacts_path: &str) -> Result<(), String> { let args = [ "--base-path", ".", "-o", "./artifacts", "--overwrite", - "--bin", "--abi", + "--bin", "--bin-runtime", "--abi", "--via-ir", "--optimize", "--no-color", ]; diff --git a/networks/ethereum/schnorr/.gitignore b/networks/ethereum/schnorr/.gitignore new file mode 100644 index 00000000..de153db3 --- /dev/null +++ b/networks/ethereum/schnorr/.gitignore @@ -0,0 +1 @@ +artifacts diff --git a/networks/ethereum/schnorr/Cargo.toml b/networks/ethereum/schnorr/Cargo.toml new file mode 100644 index 00000000..1c5d4f02 --- /dev/null +++ b/networks/ethereum/schnorr/Cargo.toml @@ -0,0 +1,42 @@ +[package] +name = "ethereum-schnorr-contract" +version = "0.1.0" +description = "A Solidity contract to verify Schnorr signatures" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/networks/ethereum/schnorr" +authors = ["Luke Parker ", "Elizabeth Binks "] +edition = "2021" +publish = false +rust-version = "1.79" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +subtle = { version = "2", default-features = false, features = ["std"] } +sha3 = { version = "0.10", default-features = false, features = ["std"] } +group = { version = "0.13", default-features = false, features = ["alloc"] } +k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic"] } + +alloy-sol-types = { version = "0.8", default-features = false } + +[build-dependencies] +build-solidity-contracts = { path = "../build-contracts", version = "0.1" } + +[dev-dependencies] +rand_core = { version = "0.6", default-features = false, features = ["std"] } + +alloy-core = { version = "0.8", default-features = false } + +alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } +alloy-rpc-types-eth = { version = "0.3", default-features = false } +alloy-rpc-client = { version = "0.3", default-features = false } +alloy-provider = { version = "0.3", default-features = false } + +alloy-node-bindings = { version = "0.3", default-features = false } + +tokio = { version = "1", default-features = false, features = ["macros"] } diff --git a/networks/ethereum/schnorr/LICENSE b/networks/ethereum/schnorr/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/networks/ethereum/schnorr/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . 
diff --git a/networks/ethereum/schnorr/README.md b/networks/ethereum/schnorr/README.md new file mode 100644 index 00000000..410cf520 --- /dev/null +++ b/networks/ethereum/schnorr/README.md @@ -0,0 +1,5 @@ +# Ethereum Schnorr Contract + +An Ethereum contract to verify Schnorr signatures. + +This crate will fail to build if `solc` is not installed and available. diff --git a/networks/ethereum/schnorr/build.rs b/networks/ethereum/schnorr/build.rs new file mode 100644 index 00000000..8e310b60 --- /dev/null +++ b/networks/ethereum/schnorr/build.rs @@ -0,0 +1,3 @@ +fn main() { + build_solidity_contracts::build("contracts", "artifacts").unwrap(); +} diff --git a/processor/ethereum/contracts/contracts/Schnorr.sol b/networks/ethereum/schnorr/contracts/Schnorr.sol similarity index 50% rename from processor/ethereum/contracts/contracts/Schnorr.sol rename to networks/ethereum/schnorr/contracts/Schnorr.sol index 8edcdffd..1c39c6d7 100644 --- a/processor/ethereum/contracts/contracts/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/Schnorr.sol @@ -1,24 +1,20 @@ -// SPDX-License-Identifier: AGPLv3 +// SPDX-License-Identifier: AGPL-3.0-only pragma solidity ^0.8.0; -// see https://github.com/noot/schnorr-verify for implementation details +// See https://github.com/noot/schnorr-verify for implementation details library Schnorr { // secp256k1 group order - uint256 constant public Q = + uint256 constant private Q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141; - // Fixed parity for the public keys used in this contract - // This avoids spending a word passing the parity in a similar style to - // Bitcoin's Taproot - uint8 constant public KEY_PARITY = 27; + // We fix the key to have an even y coordinate to save a word when verifying + // signatures. 
This is comparable to Bitcoin Taproot's encoding of keys + uint8 constant private KEY_PARITY = 27; - error InvalidSOrA(); - error MalformedSignature(); - - // px := public key x-coord, where the public key has a parity of KEY_PARITY - // message := 32-byte hash of the message - // c := schnorr signature challenge - // s := schnorr signature + // px := public key x-coordinate, where the public key has an even y-coordinate + // message := the message signed + // c := Schnorr signature challenge + // s := Schnorr signature solution function verify( bytes32 px, bytes memory message, @@ -31,12 +27,12 @@ library Schnorr { bytes32 sa = bytes32(Q - mulmod(uint256(s), uint256(px), Q)); bytes32 ca = bytes32(Q - mulmod(uint256(c), uint256(px), Q)); - // For safety, we want each input to ecrecover to be 0 (sa, px, ca) - // The ecreover precomple checks `r` and `s` (`px` and `ca`) are non-zero + // For safety, we want each input to ecrecover to not be 0 (sa, px, ca) + // The ecrecover precompile checks `r` and `s` (`px` and `ca`) are non-zero // That leaves us to check `sa` are non-zero - if (sa == 0) revert InvalidSOrA(); + if (sa == 0) return false; address R = ecrecover(sa, KEY_PARITY, px, ca); - if (R == address(0)) revert MalformedSignature(); + if (R == address(0)) return false; // Check the signature is correct by rebuilding the challenge return c == keccak256(abi.encodePacked(R, px, message)); diff --git a/processor/ethereum/contracts/contracts/tests/Schnorr.sol b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol similarity index 53% rename from processor/ethereum/contracts/contracts/tests/Schnorr.sol rename to networks/ethereum/schnorr/contracts/tests/Schnorr.sol index 832cd2fe..18a58cf9 100644 --- a/processor/ethereum/contracts/contracts/tests/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol @@ -1,15 +1,15 @@ -// SPDX-License-Identifier: AGPLv3 +// SPDX-License-Identifier: AGPL-3.0-only pragma solidity ^0.8.0; -import "../../../contracts/Schnorr.sol"; +import "../Schnorr.sol"; contract TestSchnorr { function verify( - bytes32 px, + bytes32 public_key, bytes calldata message, bytes32 c, bytes32 s ) external pure returns (bool) { - return Schnorr.verify(px, message, c, s); + return Schnorr.verify(public_key, message, c, s); } } diff --git a/networks/ethereum/schnorr/src/lib.rs b/networks/ethereum/schnorr/src/lib.rs new file mode 100644 index 00000000..79e2e094 --- /dev/null +++ b/networks/ethereum/schnorr/src/lib.rs @@ -0,0 +1,15 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] +#![allow(non_snake_case)] + +/// The initialization bytecode of the Schnorr library. +pub const INIT_BYTECODE: &str = include_str!("../artifacts/Schnorr.bin"); + +mod public_key; +pub use public_key::PublicKey; +mod signature; +pub use signature::Signature; + +#[cfg(test)] +mod tests; diff --git a/networks/ethereum/schnorr/src/public_key.rs b/networks/ethereum/schnorr/src/public_key.rs new file mode 100644 index 00000000..b0cc04df --- /dev/null +++ b/networks/ethereum/schnorr/src/public_key.rs @@ -0,0 +1,68 @@ +use subtle::Choice; +use group::ff::PrimeField; +use k256::{ + elliptic_curve::{ + ops::Reduce, + point::{AffineCoordinates, DecompressPoint}, + }, + AffinePoint, ProjectivePoint, Scalar, U256 as KU256, +}; + +/// A public key for the Schnorr Solidity library. 
+#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub struct PublicKey { + A: ProjectivePoint, + x_coordinate: [u8; 32], +} + +impl PublicKey { + /// Construct a new `PublicKey`. + /// + /// This will return None if the provided point isn't eligible to be a public key (due to + /// bounds such as parity). + #[must_use] + pub fn new(A: ProjectivePoint) -> Option { + let affine = A.to_affine(); + + // Only allow even keys to save a word within Ethereum + if bool::from(affine.y_is_odd()) { + None?; + } + + let x_coordinate = affine.x(); + // Return None if the x-coordinate isn't mutual to both fields + // While reductions shouldn't be an issue, it's one less headache/concern to have + // The trivial amount of public keys this makes non-representable aren't a concern + if >::reduce_bytes(&x_coordinate).to_repr() != x_coordinate { + None?; + } + + Some(PublicKey { A, x_coordinate: x_coordinate.into() }) + } + + /// The point for this public key. + #[must_use] + pub fn point(&self) -> ProjectivePoint { + self.A + } + + /// The Ethereum representation of this public key. + #[must_use] + pub fn eth_repr(&self) -> [u8; 32] { + // We only encode the x-coordinate due to fixing the sign of the y-coordinate + self.x_coordinate + } + + /// Construct a PublicKey from its Ethereum representation. + // This wouldn't be possible if the x-coordinate had been reduced + #[must_use] + pub fn from_eth_repr(repr: [u8; 32]) -> Option { + let x_coordinate = repr; + + let y_is_odd = Choice::from(0); + let A_affine = + Option::::from(AffinePoint::decompress(&x_coordinate.into(), y_is_odd))?; + let A = ProjectivePoint::from(A_affine); + Some(PublicKey { A, x_coordinate }) + } +} diff --git a/networks/ethereum/schnorr/src/signature.rs b/networks/ethereum/schnorr/src/signature.rs new file mode 100644 index 00000000..cd467cea --- /dev/null +++ b/networks/ethereum/schnorr/src/signature.rs @@ -0,0 +1,95 @@ +use std::io; + +use sha3::{Digest, Keccak256}; + +use group::ff::PrimeField; +use k256::{ + elliptic_curve::{ops::Reduce, sec1::ToEncodedPoint}, + ProjectivePoint, Scalar, U256 as KU256, +}; + +use crate::PublicKey; + +/// A signature for the Schnorr Solidity library. +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub struct Signature { + c: Scalar, + s: Scalar, +} + +impl Signature { + /// Construct a new `Signature`. + #[must_use] + pub fn new(c: Scalar, s: Scalar) -> Signature { + Signature { c, s } + } + + /// The challenge for a signature. + #[must_use] + pub fn challenge(R: ProjectivePoint, key: &PublicKey, message: &[u8]) -> Scalar { + // H(R || A || m) + let mut hash = Keccak256::new(); + // We transcript the nonce as an address since ecrecover yields an address + hash.update({ + let uncompressed_encoded_point = R.to_encoded_point(false); + // Skip the prefix byte marking this as uncompressed + let x_and_y_coordinates = &uncompressed_encoded_point.as_ref()[1 ..]; + // Last 20 bytes of the hash of the x and y coordinates + &Keccak256::digest(x_and_y_coordinates)[12 ..] + }); + hash.update(key.eth_repr()); + hash.update(message); + >::reduce_bytes(&hash.finalize()) + } + + /// Verify a signature. + #[must_use] + pub fn verify(&self, key: &PublicKey, message: &[u8]) -> bool { + // Recover the nonce + let R = (ProjectivePoint::GENERATOR * self.s) - (key.point() * self.c); + // Check the challenge + Self::challenge(R, key, message) == self.c + } + + /// The challenge present within this signature. + pub fn c(&self) -> Scalar { + self.c + } + + /// The signature solution present within this signature. 
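+  ///
+  /// The solution satisfies `s = k + (c * x)` for nonce `k`, challenge `c`, and
+  /// private key `x`, which is why `verify` can recover the nonce's commitment:
+  /// `(s * G) - (c * X) = ((k + (c * x)) * G) - (c * (x * G)) = k * G = R`.
+  /// Recomputing the challenge over that point reproduces `c` exactly when the
+  /// signature is valid.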
+ pub fn s(&self) -> Scalar { + self.s + } + + /// Convert the signature to bytes. + #[must_use] + pub fn to_bytes(&self) -> [u8; 64] { + let mut res = [0; 64]; + res[.. 32].copy_from_slice(self.c.to_repr().as_ref()); + res[32 ..].copy_from_slice(self.s.to_repr().as_ref()); + res + } + + /// Write the signature. + pub fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + writer.write_all(&self.to_bytes()) + } + + /// Read a signature. + pub fn read(reader: &mut impl io::Read) -> io::Result { + let mut read_F = || -> io::Result { + let mut bytes = [0; 32]; + reader.read_exact(&mut bytes)?; + Option::::from(Scalar::from_repr(bytes.into())) + .ok_or_else(|| io::Error::other("invalid scalar")) + }; + let c = read_F()?; + let s = read_F()?; + Ok(Signature { c, s }) + } + + /// Read a signature from bytes. + pub fn from_bytes(bytes: [u8; 64]) -> io::Result { + Self::read(&mut bytes.as_slice()) + } +} diff --git a/networks/ethereum/schnorr/src/tests.rs b/networks/ethereum/schnorr/src/tests.rs new file mode 100644 index 00000000..1c3509cc --- /dev/null +++ b/networks/ethereum/schnorr/src/tests.rs @@ -0,0 +1,103 @@ +use std::sync::Arc; + +use rand_core::{RngCore, OsRng}; + +use group::ff::{Field, PrimeField}; +use k256::{Scalar, ProjectivePoint}; + +use alloy_core::primitives::Address; +use alloy_sol_types::SolCall; + +use alloy_simple_request_transport::SimpleRequest; +use alloy_rpc_types_eth::{TransactionInput, TransactionRequest}; +use alloy_rpc_client::ClientBuilder; +use alloy_provider::{Provider, RootProvider}; + +use alloy_node_bindings::{Anvil, AnvilInstance}; + +use crate::{PublicKey, Signature}; + +#[allow(warnings)] +#[allow(needless_pass_by_value)] +#[allow(clippy::all)] +#[allow(clippy::ignored_unit_patterns)] +#[allow(clippy::redundant_closure_for_method_calls)] +mod abi { + alloy_sol_types::sol!("contracts/tests/Schnorr.sol"); + pub(crate) use TestSchnorr::*; +} + +async fn setup_test() -> (AnvilInstance, Arc>, Address) { + let anvil = Anvil::new().spawn(); + + let provider = Arc::new(RootProvider::new( + ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true), + )); + + let mut address = [0; 20]; + OsRng.fill_bytes(&mut address); + let address = Address::from(address); + let _: () = provider + .raw_request( + "anvil_setCode".into(), + [address.to_string(), include_str!("../artifacts/TestSchnorr.bin-runtime").to_string()], + ) + .await + .unwrap(); + + (anvil, provider, address) +} + +async fn call_verify( + provider: &RootProvider, + address: Address, + public_key: &PublicKey, + message: &[u8], + signature: &Signature, +) -> bool { + let public_key: [u8; 32] = public_key.eth_repr(); + let c_bytes: [u8; 32] = signature.c().to_repr().into(); + let s_bytes: [u8; 32] = signature.s().to_repr().into(); + let call = TransactionRequest::default().to(address).input(TransactionInput::new( + abi::verifyCall::new(( + public_key.into(), + message.to_vec().into(), + c_bytes.into(), + s_bytes.into(), + )) + .abi_encode() + .into(), + )); + let bytes = provider.call(&call).await.unwrap(); + let res = abi::verifyCall::abi_decode_returns(&bytes, true).unwrap(); + + res._0 +} + +#[tokio::test] +async fn test_verify() { + let (_anvil, provider, address) = setup_test().await; + + for _ in 0 .. 
100 { + let (key, public_key) = loop { + let key = Scalar::random(&mut OsRng); + if let Some(public_key) = PublicKey::new(ProjectivePoint::GENERATOR * key) { + break (key, public_key); + } + }; + + let nonce = Scalar::random(&mut OsRng); + let mut message = vec![0; 1 + usize::try_from(OsRng.next_u32() % 256).unwrap()]; + OsRng.fill_bytes(&mut message); + + let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &public_key, &message); + let s = nonce + (c * key); + + let sig = Signature::new(c, s); + assert!(sig.verify(&public_key, &message)); + assert!(call_verify(&provider, address, &public_key, &message, &sig).await); + // Mutate the message and make sure the signature now fails to verify + message[0] = message[0].wrapping_add(1); + assert!(!call_verify(&provider, address, &public_key, &message, &sig).await); + } +} diff --git a/processor/ethereum/contracts/.gitignore b/processor/ethereum/contracts/.gitignore index 2dccdce9..de153db3 100644 --- a/processor/ethereum/contracts/.gitignore +++ b/processor/ethereum/contracts/.gitignore @@ -1,3 +1 @@ -# Solidity build outputs -cache artifacts diff --git a/processor/ethereum/ethereum-serai/src/crypto.rs b/processor/ethereum/ethereum-serai/src/crypto.rs index 326343d8..3366b744 100644 --- a/processor/ethereum/ethereum-serai/src/crypto.rs +++ b/processor/ethereum/ethereum-serai/src/crypto.rs @@ -62,56 +62,6 @@ pub fn deterministically_sign(tx: &TxLegacy) -> Signed { } } -/// The public key for a Schnorr-signing account. -#[allow(non_snake_case)] -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -pub struct PublicKey { - pub(crate) A: ProjectivePoint, - pub(crate) px: Scalar, -} - -impl PublicKey { - /// Construct a new `PublicKey`. - /// - /// This will return None if the provided point isn't eligible to be a public key (due to - /// bounds such as parity). - #[allow(non_snake_case)] - pub fn new(A: ProjectivePoint) -> Option { - let affine = A.to_affine(); - // Only allow even keys to save a word within Ethereum - let is_odd = bool::from(affine.y_is_odd()); - if is_odd { - None?; - } - - let x_coord = affine.x(); - let x_coord_scalar = >::reduce_bytes(&x_coord); - // Return None if a reduction would occur - // Reductions would be incredibly unlikely and shouldn't be an issue, yet it's one less - // headache/concern to have - // This does ban a trivial amoount of public keys - if x_coord_scalar.to_repr() != x_coord { - None?; - } - - Some(PublicKey { A, px: x_coord_scalar }) - } - - pub fn point(&self) -> ProjectivePoint { - self.A - } - - pub fn eth_repr(&self) -> [u8; 32] { - self.px.to_repr().into() - } - - pub fn from_eth_repr(repr: [u8; 32]) -> Option { - #[allow(non_snake_case)] - let A = Option::::from(AffinePoint::decompress(&repr.into(), 0.into()))?.into(); - Option::from(Scalar::from_repr(repr.into())).map(|px| PublicKey { A, px }) - } -} - /// The HRAm to use for the Schnorr contract. #[derive(Clone, Default)] pub struct EthereumHram {} @@ -128,58 +78,6 @@ impl Hram for EthereumHram { } } -/// A signature for the Schnorr contract. -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -pub struct Signature { - pub(crate) c: Scalar, - pub(crate) s: Scalar, -} -impl Signature { - pub fn verify(&self, public_key: &PublicKey, message: &[u8]) -> bool { - #[allow(non_snake_case)] - let R = (Secp256k1::generator() * self.s) - (public_key.A * self.c); - EthereumHram::hram(&R, &public_key.A, message) == self.c - } - - /// Construct a new `Signature`. - /// - /// This will return None if the signature is invalid. 
- pub fn new( - public_key: &PublicKey, - message: &[u8], - signature: SchnorrSignature, - ) -> Option { - let c = EthereumHram::hram(&signature.R, &public_key.A, message); - if !signature.verify(public_key.A, c) { - None?; - } - - let res = Signature { c, s: signature.s }; - assert!(res.verify(public_key, message)); - Some(res) - } - - pub fn c(&self) -> Scalar { - self.c - } - pub fn s(&self) -> Scalar { - self.s - } - - pub fn to_bytes(&self) -> [u8; 64] { - let mut res = [0; 64]; - res[.. 32].copy_from_slice(self.c.to_repr().as_ref()); - res[32 ..].copy_from_slice(self.s.to_repr().as_ref()); - res - } - - pub fn from_bytes(bytes: [u8; 64]) -> std::io::Result { - let mut reader = bytes.as_slice(); - let c = Secp256k1::read_F(&mut reader)?; - let s = Secp256k1::read_F(&mut reader)?; - Ok(Signature { c, s }) - } -} impl From<&Signature> for AbiSignature { fn from(sig: &Signature) -> AbiSignature { let c: [u8; 32] = sig.c.to_repr().into(); diff --git a/processor/ethereum/ethereum-serai/src/tests/mod.rs b/processor/ethereum/ethereum-serai/src/tests/mod.rs index bdfa8414..91b03d9b 100644 --- a/processor/ethereum/ethereum-serai/src/tests/mod.rs +++ b/processor/ethereum/ethereum-serai/src/tests/mod.rs @@ -23,8 +23,6 @@ mod crypto; #[cfg(test)] use contracts::tests as abi; #[cfg(test)] -mod schnorr; -#[cfg(test)] mod router; pub fn key_gen() -> (HashMap>, PublicKey) { diff --git a/processor/ethereum/ethereum-serai/src/tests/schnorr.rs b/processor/ethereum/ethereum-serai/src/tests/schnorr.rs deleted file mode 100644 index 2c72ed19..00000000 --- a/processor/ethereum/ethereum-serai/src/tests/schnorr.rs +++ /dev/null @@ -1,93 +0,0 @@ -use std::sync::Arc; - -use rand_core::OsRng; - -use group::ff::PrimeField; -use k256::Scalar; - -use frost::{ - curve::Secp256k1, - algorithm::IetfSchnorr, - tests::{algorithm_machines, sign}, -}; - -use alloy_core::primitives::Address; - -use alloy_sol_types::SolCall; - -use alloy_rpc_types_eth::{TransactionInput, TransactionRequest}; -use alloy_simple_request_transport::SimpleRequest; -use alloy_rpc_client::ClientBuilder; -use alloy_provider::{Provider, RootProvider}; - -use alloy_node_bindings::{Anvil, AnvilInstance}; - -use crate::{ - Error, - crypto::*, - tests::{key_gen, deploy_contract, abi::schnorr as abi}, -}; - -async fn setup_test() -> (AnvilInstance, Arc>, Address) { - let anvil = Anvil::new().spawn(); - - let provider = RootProvider::new( - ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true), - ); - let wallet = anvil.keys()[0].clone().into(); - let client = Arc::new(provider); - - let address = deploy_contract(client.clone(), &wallet, "TestSchnorr").await.unwrap(); - (anvil, client, address) -} - -#[tokio::test] -async fn test_deploy_contract() { - setup_test().await; -} - -pub async fn call_verify( - provider: &RootProvider, - contract: Address, - public_key: &PublicKey, - message: &[u8], - signature: &Signature, -) -> Result<(), Error> { - let px: [u8; 32] = public_key.px.to_repr().into(); - let c_bytes: [u8; 32] = signature.c.to_repr().into(); - let s_bytes: [u8; 32] = signature.s.to_repr().into(); - let call = TransactionRequest::default().to(contract).input(TransactionInput::new( - abi::verifyCall::new((px.into(), message.to_vec().into(), c_bytes.into(), s_bytes.into())) - .abi_encode() - .into(), - )); - let bytes = provider.call(&call).await.map_err(|_| Error::ConnectionError)?; - let res = - abi::verifyCall::abi_decode_returns(&bytes, true).map_err(|_| Error::ConnectionError)?; - - if res._0 { - Ok(()) - } else { - 
Err(Error::InvalidSignature) - } -} - -#[tokio::test] -async fn test_ecrecover_hack() { - let (_anvil, client, contract) = setup_test().await; - - let (keys, public_key) = key_gen(); - - const MESSAGE: &[u8] = b"Hello, World!"; - - let algo = IetfSchnorr::::ietf(); - let sig = - sign(&mut OsRng, &algo, keys.clone(), algorithm_machines(&mut OsRng, &algo, &keys), MESSAGE); - let sig = Signature::new(&public_key, MESSAGE, sig).unwrap(); - - call_verify(&client, contract, &public_key, MESSAGE, &sig).await.unwrap(); - // Test an invalid signature fails - let mut sig = sig; - sig.s += Scalar::ONE; - assert!(call_verify(&client, contract, &public_key, MESSAGE, &sig).await.is_err()); -} From 67f9f76fdf9784c80b8b884efbc3d0f2f772f4a6 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 15 Sep 2024 00:42:05 -0400 Subject: [PATCH 134/368] Remove publish = false --- networks/ethereum/schnorr/Cargo.toml | 1 - 1 file changed, 1 deletion(-) diff --git a/networks/ethereum/schnorr/Cargo.toml b/networks/ethereum/schnorr/Cargo.toml index 1c5d4f02..d9bb77b0 100644 --- a/networks/ethereum/schnorr/Cargo.toml +++ b/networks/ethereum/schnorr/Cargo.toml @@ -6,7 +6,6 @@ license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/networks/ethereum/schnorr" authors = ["Luke Parker ", "Elizabeth Binks "] edition = "2021" -publish = false rust-version = "1.79" [package.metadata.docs.rs] From a38d1350599e40880c9020e7dcc054bf026313f0 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 15 Sep 2024 00:56:38 -0400 Subject: [PATCH 135/368] rust-toolchain 1.81 --- rust-toolchain.toml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/rust-toolchain.toml b/rust-toolchain.toml index 73cb338c..d99e6588 100644 --- a/rust-toolchain.toml +++ b/rust-toolchain.toml @@ -1,5 +1,5 @@ [toolchain] -channel = "1.80" +channel = "1.81" targets = ["wasm32-unknown-unknown"] profile = "minimal" components = ["rust-src", "rustfmt", "clippy"] From 0813351f1f3b5d50bfecbaa43f738b5a72c0cb64 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 15 Sep 2024 00:57:43 -0400 Subject: [PATCH 136/368] OUT_DIR > artifacts --- Cargo.lock | 2 +- networks/ethereum/build-contracts/Cargo.toml | 2 +- networks/ethereum/build-contracts/src/lib.rs | 4 ++-- networks/ethereum/schnorr/Cargo.toml | 2 +- networks/ethereum/schnorr/build.rs | 8 +++++++- networks/ethereum/schnorr/src/lib.rs | 3 ++- networks/ethereum/schnorr/src/tests.rs | 19 +++++++++++++------ 7 files changed, 27 insertions(+), 13 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 353206e9..1338ae26 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -1320,7 +1320,7 @@ dependencies = [ [[package]] name = "build-solidity-contracts" -version = "0.1.0" +version = "0.1.1" [[package]] name = "bumpalo" diff --git a/networks/ethereum/build-contracts/Cargo.toml b/networks/ethereum/build-contracts/Cargo.toml index cb47a28d..41d1f993 100644 --- a/networks/ethereum/build-contracts/Cargo.toml +++ b/networks/ethereum/build-contracts/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "build-solidity-contracts" -version = "0.1.0" +version = "0.1.1" description = "A helper function to build Solidity contracts" license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/networks/ethereum/build-contracts" diff --git a/networks/ethereum/build-contracts/src/lib.rs b/networks/ethereum/build-contracts/src/lib.rs index 93ab253e..4fee315a 100644 --- a/networks/ethereum/build-contracts/src/lib.rs +++ b/networks/ethereum/build-contracts/src/lib.rs @@ -4,7 +4,7 @@ use 
std::{path::PathBuf, fs, process::Command}; -/// Build contracts placed in `contracts/`, outputting to `artifacts/`. +/// Build contracts from the specified path, outputting the artifacts to the specified path. /// /// Requires solc 0.8.25. pub fn build(contracts_path: &str, artifacts_path: &str) -> Result<(), String> { @@ -33,7 +33,7 @@ pub fn build(contracts_path: &str, artifacts_path: &str) -> Result<(), String> { #[rustfmt::skip] let args = [ "--base-path", ".", - "-o", "./artifacts", "--overwrite", + "-o", artifacts_path, "--overwrite", "--bin", "--bin-runtime", "--abi", "--via-ir", "--optimize", "--no-color", diff --git a/networks/ethereum/schnorr/Cargo.toml b/networks/ethereum/schnorr/Cargo.toml index d9bb77b0..5c9c1596 100644 --- a/networks/ethereum/schnorr/Cargo.toml +++ b/networks/ethereum/schnorr/Cargo.toml @@ -6,7 +6,7 @@ license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/networks/ethereum/schnorr" authors = ["Luke Parker ", "Elizabeth Binks "] edition = "2021" -rust-version = "1.79" +rust-version = "1.81" [package.metadata.docs.rs] all-features = true diff --git a/networks/ethereum/schnorr/build.rs b/networks/ethereum/schnorr/build.rs index 8e310b60..300f8949 100644 --- a/networks/ethereum/schnorr/build.rs +++ b/networks/ethereum/schnorr/build.rs @@ -1,3 +1,9 @@ +use std::{env, fs}; + fn main() { - build_solidity_contracts::build("contracts", "artifacts").unwrap(); + let artifacts_path = env::var("OUT_DIR").unwrap().to_string() + "/ethereum-schnorr-contract"; + if !fs::exists(&artifacts_path).unwrap() { + fs::create_dir(&artifacts_path).unwrap(); + } + build_solidity_contracts::build("contracts", &artifacts_path).unwrap(); } diff --git a/networks/ethereum/schnorr/src/lib.rs b/networks/ethereum/schnorr/src/lib.rs index 79e2e094..3f67fbbf 100644 --- a/networks/ethereum/schnorr/src/lib.rs +++ b/networks/ethereum/schnorr/src/lib.rs @@ -4,7 +4,8 @@ #![allow(non_snake_case)] /// The initialization bytecode of the Schnorr library. 
-pub const INIT_BYTECODE: &str = include_str!("../artifacts/Schnorr.bin"); +pub const INIT_BYTECODE: &str = + include_str!(concat!(env!("OUT_DIR"), "/ethereum-schnorr-contract/Schnorr.bin")); mod public_key; pub use public_key::PublicKey; diff --git a/networks/ethereum/schnorr/src/tests.rs b/networks/ethereum/schnorr/src/tests.rs index 1c3509cc..62bb8542 100644 --- a/networks/ethereum/schnorr/src/tests.rs +++ b/networks/ethereum/schnorr/src/tests.rs @@ -17,11 +17,11 @@ use alloy_node_bindings::{Anvil, AnvilInstance}; use crate::{PublicKey, Signature}; -#[allow(warnings)] -#[allow(needless_pass_by_value)] -#[allow(clippy::all)] -#[allow(clippy::ignored_unit_patterns)] -#[allow(clippy::redundant_closure_for_method_calls)] +#[expect(warnings)] +#[expect(needless_pass_by_value)] +#[expect(clippy::all)] +#[expect(clippy::ignored_unit_patterns)] +#[expect(clippy::redundant_closure_for_method_calls)] mod abi { alloy_sol_types::sol!("contracts/tests/Schnorr.sol"); pub(crate) use TestSchnorr::*; @@ -40,7 +40,14 @@ async fn setup_test() -> (AnvilInstance, Arc>, Addre let _: () = provider .raw_request( "anvil_setCode".into(), - [address.to_string(), include_str!("../artifacts/TestSchnorr.bin-runtime").to_string()], + [ + address.to_string(), + include_str!(concat!( + env!("OUT_DIR"), + "/ethereum-schnorr-contract/TestSchnorr.bin-runtime" + )) + .to_string(), + ], ) .await .unwrap(); From 80ca2b780acc045b5927635aae36380d61ef54f9 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 15 Sep 2024 02:11:49 -0400 Subject: [PATCH 137/368] Add tests for the premise of the Schnorr contract to the Schnorr crate --- networks/ethereum/schnorr/Cargo.toml | 5 +- .../ethereum/schnorr/contracts/Schnorr.sol | 19 ++- networks/ethereum/schnorr/src/public_key.rs | 8 +- .../schnorr/src/{tests.rs => tests/mod.rs} | 2 + .../ethereum/schnorr/src/tests/premise.rs | 111 ++++++++++++++++++ .../ethereum-serai/src/tests/crypto.rs | 76 ------------ 6 files changed, 136 insertions(+), 85 deletions(-) rename networks/ethereum/schnorr/src/{tests.rs => tests/mod.rs} (99%) create mode 100644 networks/ethereum/schnorr/src/tests/premise.rs diff --git a/networks/ethereum/schnorr/Cargo.toml b/networks/ethereum/schnorr/Cargo.toml index 5c9c1596..2e9597c8 100644 --- a/networks/ethereum/schnorr/Cargo.toml +++ b/networks/ethereum/schnorr/Cargo.toml @@ -21,15 +21,16 @@ sha3 = { version = "0.10", default-features = false, features = ["std"] } group = { version = "0.13", default-features = false, features = ["alloc"] } k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic"] } -alloy-sol-types = { version = "0.8", default-features = false } - [build-dependencies] build-solidity-contracts = { path = "../build-contracts", version = "0.1" } [dev-dependencies] rand_core = { version = "0.6", default-features = false, features = ["std"] } +k256 = { version = "^0.13.1", default-features = false, features = ["ecdsa"] } + alloy-core = { version = "0.8", default-features = false } +alloy-sol-types = { version = "0.8", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } alloy-rpc-types-eth = { version = "0.3", default-features = false } diff --git a/networks/ethereum/schnorr/contracts/Schnorr.sol b/networks/ethereum/schnorr/contracts/Schnorr.sol index 1c39c6d7..b13696cf 100644 --- a/networks/ethereum/schnorr/contracts/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/Schnorr.sol @@ -7,8 +7,9 @@ library Schnorr { 
  uint256 constant private Q =
    0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141;

-  // We fix the key to have an even y coordinate to save a word when verifying
-  // signatures. This is comparable to Bitcoin Taproot's encoding of keys
+  // We fix the key to have:
+  // 1) An even y-coordinate
+  // 2) An x-coordinate < Q
   uint8 constant private KEY_PARITY = 27;

   // px := public key x-coordinate, where the public key has an even y-coordinate
@@ -27,11 +28,17 @@ library Schnorr {
     bytes32 sa = bytes32(Q - mulmod(uint256(s), uint256(px), Q));
     bytes32 ca = bytes32(Q - mulmod(uint256(c), uint256(px), Q));

-    // For safety, we want each input to ecrecover to not be 0 (sa, px, ca)
-    // The ecrecover precompile checks `r` and `s` (`px` and `ca`) are non-zero
-    // That leaves us to check `sa` are non-zero
-    if (sa == 0) return false;
+    /*
+      The ecrecover precompile checks `r` and `s` (`px` and `ca`) are non-zero,
+      banning the two keys with zero for their x-coordinate and zero challenge.
+      Each has negligible probability of occurring (assuming zero x-coordinates
+      are even on-curve in the first place).
+
+      `sa` is not checked to be non-zero yet it does not need to be. The inverse
+      of it is never taken.
+    */
     address R = ecrecover(sa, KEY_PARITY, px, ca);
+    // The ecrecover failed
     if (R == address(0)) return false;

     // Check the signature is correct by rebuilding the challenge
diff --git a/networks/ethereum/schnorr/src/public_key.rs b/networks/ethereum/schnorr/src/public_key.rs
index b0cc04df..3c39552f 100644
--- a/networks/ethereum/schnorr/src/public_key.rs
+++ b/networks/ethereum/schnorr/src/public_key.rs
@@ -37,7 +37,13 @@ impl PublicKey {
       None?;
     }

-    Some(PublicKey { A, x_coordinate: x_coordinate.into() })
+    let x_coordinate: [u8; 32] = x_coordinate.into();
+    // Returns None if the x-coordinate is 0
+    // Such keys will never have their signatures able to be verified
+    if x_coordinate == [0; 32] {
+      None?;
+    }
+    Some(PublicKey { A, x_coordinate })
   }

   /// The point for this public key.
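With the zero-x check added here, `PublicKey::new` accepts a point only when its y-coordinate is even and its x-coordinate is non-zero and below the group order. The parity rule needn't force rejection-sampling entire keys, as the tests do: negating a point flips its y-parity while preserving its x-coordinate, so a keypair can be normalized in place. A minimal sketch, assuming the `k256` API this crate already uses (`normalize_keypair` is an illustrative name, not part of the crate):

```rust
use k256::{elliptic_curve::point::AffineCoordinates, ProjectivePoint, Scalar};

/// Illustrative sketch: normalize a keypair to the even-y form `PublicKey::new`
/// expects, rather than rejection-sampling entire keys.
fn normalize_keypair(mut secret: Scalar) -> (Scalar, ProjectivePoint) {
  let mut public = ProjectivePoint::GENERATOR * secret;
  // -P shares P's x-coordinate yet has the opposite y-parity
  if bool::from(public.to_affine().y_is_odd()) {
    secret = -secret;
    public = -public;
  }
  (secret, public)
}
```

Since negation preserves the x-coordinate, the zero and non-reduced x-coordinate cases can't be normalized away; they remain (negligibly likely) grounds for resampling.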
diff --git a/networks/ethereum/schnorr/src/tests.rs b/networks/ethereum/schnorr/src/tests/mod.rs similarity index 99% rename from networks/ethereum/schnorr/src/tests.rs rename to networks/ethereum/schnorr/src/tests/mod.rs index 62bb8542..90774e30 100644 --- a/networks/ethereum/schnorr/src/tests.rs +++ b/networks/ethereum/schnorr/src/tests/mod.rs @@ -17,6 +17,8 @@ use alloy_node_bindings::{Anvil, AnvilInstance}; use crate::{PublicKey, Signature}; +mod premise; + #[expect(warnings)] #[expect(needless_pass_by_value)] #[expect(clippy::all)] diff --git a/networks/ethereum/schnorr/src/tests/premise.rs b/networks/ethereum/schnorr/src/tests/premise.rs new file mode 100644 index 00000000..01571a43 --- /dev/null +++ b/networks/ethereum/schnorr/src/tests/premise.rs @@ -0,0 +1,111 @@ +use rand_core::{RngCore, OsRng}; + +use sha3::{Digest, Keccak256}; +use group::ff::{Field, PrimeField}; +use k256::{ + elliptic_curve::{ops::Reduce, point::AffineCoordinates, sec1::ToEncodedPoint}, + ecdsa::{ + self, hazmat::SignPrimitive, signature::hazmat::PrehashVerifier, SigningKey, VerifyingKey, + }, + U256, Scalar, ProjectivePoint, +}; + +use alloy_core::primitives::Address; + +use crate::{PublicKey, Signature}; + +// The ecrecover opcode, yet with if the y is odd replacing v +fn ecrecover(message: Scalar, odd_y: bool, r: Scalar, s: Scalar) -> Option<[u8; 20]> { + let sig = ecdsa::Signature::from_scalars(r, s).ok()?; + let message: [u8; 32] = message.to_repr().into(); + alloy_core::primitives::Signature::from_signature_and_parity( + sig, + alloy_core::primitives::Parity::Parity(odd_y), + ) + .ok()? + .recover_address_from_prehash(&alloy_core::primitives::B256::from(message)) + .ok() + .map(Into::into) +} + +// Test ecrecover behaves as expected +#[test] +fn test_ecrecover() { + let private = SigningKey::random(&mut OsRng); + let public = VerifyingKey::from(&private); + + // Sign the signature + const MESSAGE: &[u8] = b"Hello, World!"; + let (sig, recovery_id) = private + .as_nonzero_scalar() + .try_sign_prehashed(Scalar::random(&mut OsRng), &Keccak256::digest(MESSAGE)) + .unwrap(); + + // Sanity check the signature verifies + #[allow(clippy::unit_cmp)] // Intended to assert this wasn't changed to Result + { + assert_eq!(public.verify_prehash(&Keccak256::digest(MESSAGE), &sig).unwrap(), ()); + } + + // Perform the ecrecover + assert_eq!( + ecrecover( + >::reduce_bytes(&Keccak256::digest(MESSAGE)), + u8::from(recovery_id.unwrap().is_y_odd()) == 1, + *sig.r(), + *sig.s() + ) + .unwrap(), + Address::from_raw_public_key(&public.to_encoded_point(false).as_ref()[1 ..]), + ); +} + +// Test that we can recover the nonce from a Schnorr signature via a call to ecrecover, the premise +// of efficiently verifying Schnorr signatures in an Ethereum contract +#[test] +fn nonce_recovery_via_ecrecover() { + let (key, public_key) = loop { + let key = Scalar::random(&mut OsRng); + if let Some(public_key) = PublicKey::new(ProjectivePoint::GENERATOR * key) { + break (key, public_key); + } + }; + + let nonce = Scalar::random(&mut OsRng); + let R = ProjectivePoint::GENERATOR * nonce; + + let mut message = vec![0; 1 + usize::try_from(OsRng.next_u32() % 256).unwrap()]; + OsRng.fill_bytes(&mut message); + + let c = Signature::challenge(R, &public_key, &message); + let s = nonce + (c * key); + + /* + An ECDSA signature is `(r, s)` with `s = (H(m) + rx) / k`, where: + - `m` is the message + - `r` is the x-coordinate of the nonce, reduced into a scalar + - `x` is the private key + - `k` is the nonce + + We fix the recovery ID to be for the even 
key with an x-coordinate < the order. Accordingly, + `kG = Point::from(Even, r)`. This enables recovering the public key via + `((s Point::from(Even, r)) - H(m)G) / r`. + + We want to calculate `R` from `(c, s)` where `s = r + cx`. That means we need to calculate + `sG - cX`. + + We can calculate `sG - cX` with `((s Point::from(Even, r)) - H(m)G) / r` if: + - Latter `r` = `X.x` + - Latter `s` = `c` + - `H(m)` = former `s` + This gets us to `(cX - sG) / X.x`. If we additionally scale the latter's `s, H(m)` values (the + former's `c, s` values) by `X.x`, we get `cX - sG`. This just requires negating each to achieve + `sG - cX`. + */ + let x_scalar = >::reduce_bytes(&public_key.point().to_affine().x()); + let sa = -(s * x_scalar); + let ca = -(c * x_scalar); + + let q = ecrecover(sa, false, x_scalar, ca).unwrap(); + assert_eq!(q, Address::from_raw_public_key(&R.to_encoded_point(false).as_ref()[1 ..])); +} diff --git a/processor/ethereum/ethereum-serai/src/tests/crypto.rs b/processor/ethereum/ethereum-serai/src/tests/crypto.rs index a668b2d6..a4f86ae9 100644 --- a/processor/ethereum/ethereum-serai/src/tests/crypto.rs +++ b/processor/ethereum/ethereum-serai/src/tests/crypto.rs @@ -16,54 +16,6 @@ use frost::{ use crate::{crypto::*, tests::key_gen}; -// The ecrecover opcode, yet with parity replacing v -pub(crate) fn ecrecover(message: Scalar, odd_y: bool, r: Scalar, s: Scalar) -> Option<[u8; 20]> { - let sig = ecdsa::Signature::from_scalars(r, s).ok()?; - let message: [u8; 32] = message.to_repr().into(); - alloy_core::primitives::Signature::from_signature_and_parity( - sig, - alloy_core::primitives::Parity::Parity(odd_y), - ) - .ok()? - .recover_address_from_prehash(&alloy_core::primitives::B256::from(message)) - .ok() - .map(Into::into) -} - -#[test] -fn test_ecrecover() { - let private = SigningKey::random(&mut OsRng); - let public = VerifyingKey::from(&private); - - // Sign the signature - const MESSAGE: &[u8] = b"Hello, World!"; - let (sig, recovery_id) = private - .as_nonzero_scalar() - .try_sign_prehashed( - ::F::random(&mut OsRng), - &keccak256(MESSAGE).into(), - ) - .unwrap(); - - // Sanity check the signature verifies - #[allow(clippy::unit_cmp)] // Intended to assert this wasn't changed to Result - { - assert_eq!(public.verify_prehash(&keccak256(MESSAGE), &sig).unwrap(), ()); - } - - // Perform the ecrecover - assert_eq!( - ecrecover( - hash_to_scalar(MESSAGE), - u8::from(recovery_id.unwrap().is_y_odd()) == 1, - *sig.r(), - *sig.s() - ) - .unwrap(), - address(&ProjectivePoint::from(public.as_affine())) - ); -} - // Run the sign test with the EthereumHram #[test] fn test_signing() { @@ -75,31 +27,3 @@ fn test_signing() { let _sig = sign(&mut OsRng, &algo, keys.clone(), algorithm_machines(&mut OsRng, &algo, &keys), MESSAGE); } - -#[allow(non_snake_case)] -pub fn preprocess_signature_for_ecrecover( - R: ProjectivePoint, - public_key: &PublicKey, - m: &[u8], - s: Scalar, -) -> (Scalar, Scalar) { - let c = EthereumHram::hram(&R, &public_key.A, m); - let sa = -(s * public_key.px); - let ca = -(c * public_key.px); - (sa, ca) -} - -#[test] -fn test_ecrecover_hack() { - let (keys, public_key) = key_gen(); - - const MESSAGE: &[u8] = b"Hello, World!"; - - let algo = IetfSchnorr::::ietf(); - let sig = - sign(&mut OsRng, &algo, keys.clone(), algorithm_machines(&mut OsRng, &algo, &keys), MESSAGE); - - let (sa, ca) = preprocess_signature_for_ecrecover(sig.R, &public_key, MESSAGE, sig.s); - let q = ecrecover(sa, false, public_key.px, ca).unwrap(); - assert_eq!(q, address(&sig.R)); -} From 
3f0f4d520db08c675aa59a848c5514f8879441d0 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 15 Sep 2024 05:56:57 -0400 Subject: [PATCH 138/368] Remove the Sandbox contract If instead of intaking calls, we intake code, we can deploy a fresh contract which makes arbitrary calls *without* attempting to build our abstraction layer over the concept. This should have the same gas costs, as we still have one contract deployment. The new contract only has a constructor, so it should have no actual code and beat the Sandbox in that regard? We do have to call into ourselves to meter the gas, yet we already had to call into the deployed Sandbox to achieve that. Also re-defines the OutInstruction to include tokens, implements OutInstruction-specified gas amounts, bumps the Solidity version, and other such misc changes. --- .github/actions/build-dependencies/action.yml | 4 +- networks/ethereum/build-contracts/src/lib.rs | 24 +- networks/ethereum/schnorr/.gitignore | 1 - networks/ethereum/schnorr/build.rs | 2 +- .../ethereum/schnorr/contracts/Schnorr.sol | 2 +- .../schnorr/contracts/tests/Schnorr.sol | 2 +- orchestration/src/processor.rs | 4 +- processor/ethereum/contracts/Cargo.toml | 2 +- processor/ethereum/contracts/build.rs | 7 +- .../ethereum/contracts/contracts/Deployer.sol | 4 +- .../ethereum/contracts/contracts/IERC20.sol | 2 +- .../ethereum/contracts/contracts/Router.sol | 278 +++++++++--------- .../ethereum/contracts/contracts/Sandbox.sol | 48 --- .../contracts/contracts/tests/ERC20.sol | 4 +- processor/ethereum/contracts/src/lib.rs | 2 - processor/ethereum/contracts/src/tests.rs | 13 - 16 files changed, 170 insertions(+), 229 deletions(-) delete mode 100644 networks/ethereum/schnorr/.gitignore delete mode 100644 processor/ethereum/contracts/contracts/Sandbox.sol delete mode 100644 processor/ethereum/contracts/src/tests.rs diff --git a/.github/actions/build-dependencies/action.yml b/.github/actions/build-dependencies/action.yml index 5994b723..47d77522 100644 --- a/.github/actions/build-dependencies/action.yml +++ b/.github/actions/build-dependencies/action.yml @@ -42,8 +42,8 @@ runs: shell: bash run: | cargo install svm-rs - svm install 0.8.25 - svm use 0.8.25 + svm install 0.8.26 + svm use 0.8.26 # - name: Cache Rust # uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43 diff --git a/networks/ethereum/build-contracts/src/lib.rs b/networks/ethereum/build-contracts/src/lib.rs index 4fee315a..5213059e 100644 --- a/networks/ethereum/build-contracts/src/lib.rs +++ b/networks/ethereum/build-contracts/src/lib.rs @@ -6,8 +6,12 @@ use std::{path::PathBuf, fs, process::Command}; /// Build contracts from the specified path, outputting the artifacts to the specified path. /// -/// Requires solc 0.8.25. -pub fn build(contracts_path: &str, artifacts_path: &str) -> Result<(), String> { +/// Requires solc 0.8.26. +pub fn build( + include_paths: &[&str], + contracts_path: &str, + artifacts_path: &str, +) -> Result<(), String> { println!("cargo:rerun-if-changed={contracts_path}/*"); println!("cargo:rerun-if-changed={artifacts_path}/*"); @@ -24,20 +28,24 @@ pub fn build(contracts_path: &str, artifacts_path: &str) -> Result<(), String> { if let Some(version) = line.strip_prefix("Version: ") { let version = version.split('+').next().ok_or_else(|| "no value present on line".to_string())?; - if version != "0.8.25" { - Err(format!("version was {version}, 0.8.25 required"))? + if version != "0.8.26" { + Err(format!("version was {version}, 0.8.26 required"))? 
} } } #[rustfmt::skip] - let args = [ + let mut args = vec![ "--base-path", ".", "-o", artifacts_path, "--overwrite", "--bin", "--bin-runtime", "--abi", "--via-ir", "--optimize", "--no-color", ]; + for include_path in include_paths { + args.push("--include-path"); + args.push(include_path); + } let mut args = args.into_iter().map(str::to_string).collect::>(); let mut queue = vec![PathBuf::from(contracts_path)]; @@ -70,17 +78,17 @@ pub fn build(contracts_path: &str, artifacts_path: &str) -> Result<(), String> { } let solc = Command::new("solc") - .args(args) + .args(args.clone()) .output() .map_err(|_| "couldn't fetch solc output".to_string())?; let stderr = String::from_utf8(solc.stderr).map_err(|_| "solc stderr wasn't UTF-8".to_string())?; if !solc.status.success() { - Err(format!("solc didn't successfully execute: {stderr}"))?; + Err(format!("solc (`{}`) didn't successfully execute: {stderr}", args.join(" ")))?; } for line in stderr.lines() { if line.contains("Error:") { - Err(format!("solc output had error: {stderr}"))?; + Err(format!("solc (`{}`) output had error: {stderr}", args.join(" ")))?; } } diff --git a/networks/ethereum/schnorr/.gitignore b/networks/ethereum/schnorr/.gitignore deleted file mode 100644 index de153db3..00000000 --- a/networks/ethereum/schnorr/.gitignore +++ /dev/null @@ -1 +0,0 @@ -artifacts diff --git a/networks/ethereum/schnorr/build.rs b/networks/ethereum/schnorr/build.rs index 300f8949..7b7c30fd 100644 --- a/networks/ethereum/schnorr/build.rs +++ b/networks/ethereum/schnorr/build.rs @@ -5,5 +5,5 @@ fn main() { if !fs::exists(&artifacts_path).unwrap() { fs::create_dir(&artifacts_path).unwrap(); } - build_solidity_contracts::build("contracts", &artifacts_path).unwrap(); + build_solidity_contracts::build(&[], "contracts", &artifacts_path).unwrap(); } diff --git a/networks/ethereum/schnorr/contracts/Schnorr.sol b/networks/ethereum/schnorr/contracts/Schnorr.sol index b13696cf..182e90e3 100644 --- a/networks/ethereum/schnorr/contracts/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/Schnorr.sol @@ -1,5 +1,5 @@ // SPDX-License-Identifier: AGPL-3.0-only -pragma solidity ^0.8.0; +pragma solidity ^0.8.26; // See https://github.com/noot/schnorr-verify for implementation details library Schnorr { diff --git a/networks/ethereum/schnorr/contracts/tests/Schnorr.sol b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol index 18a58cf9..26be683d 100644 --- a/networks/ethereum/schnorr/contracts/tests/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol @@ -1,5 +1,5 @@ // SPDX-License-Identifier: AGPL-3.0-only -pragma solidity ^0.8.0; +pragma solidity ^0.8.26; import "../Schnorr.sol"; diff --git a/orchestration/src/processor.rs b/orchestration/src/processor.rs index 3387c4ed..00f9243d 100644 --- a/orchestration/src/processor.rs +++ b/orchestration/src/processor.rs @@ -21,8 +21,8 @@ pub fn processor( if coin == "ethereum" { r#" RUN cargo install svm-rs -RUN svm install 0.8.25 -RUN svm use 0.8.25 +RUN svm install 0.8.26 +RUN svm use 0.8.26 "# } else { "" diff --git a/processor/ethereum/contracts/Cargo.toml b/processor/ethereum/contracts/Cargo.toml index f09eb938..64fbccad 100644 --- a/processor/ethereum/contracts/Cargo.toml +++ b/processor/ethereum/contracts/Cargo.toml @@ -17,7 +17,7 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -alloy-sol-types = { version = "0.8", default-features = false } +alloy-sol-types = { version = "0.8", default-features = false, features = ["json"] } [build-dependencies] build-solidity-contracts = { path 
= "../../../networks/ethereum/build-contracts" } diff --git a/processor/ethereum/contracts/build.rs b/processor/ethereum/contracts/build.rs index 8e310b60..0af41608 100644 --- a/processor/ethereum/contracts/build.rs +++ b/processor/ethereum/contracts/build.rs @@ -1,3 +1,8 @@ fn main() { - build_solidity_contracts::build("contracts", "artifacts").unwrap(); + build_solidity_contracts::build( + &["../../../networks/ethereum/schnorr/contracts"], + "contracts", + "artifacts", + ) + .unwrap(); } diff --git a/processor/ethereum/contracts/contracts/Deployer.sol b/processor/ethereum/contracts/contracts/Deployer.sol index 475be4c1..1c05e38a 100644 --- a/processor/ethereum/contracts/contracts/Deployer.sol +++ b/processor/ethereum/contracts/contracts/Deployer.sol @@ -1,5 +1,5 @@ -// SPDX-License-Identifier: AGPLv3 -pragma solidity ^0.8.0; +// SPDX-License-Identifier: AGPL-3.0-only +pragma solidity ^0.8.26; /* The expected deployment process of the Router is as follows: diff --git a/processor/ethereum/contracts/contracts/IERC20.sol b/processor/ethereum/contracts/contracts/IERC20.sol index 70f1f93c..c2de5ca0 100644 --- a/processor/ethereum/contracts/contracts/IERC20.sol +++ b/processor/ethereum/contracts/contracts/IERC20.sol @@ -1,5 +1,5 @@ // SPDX-License-Identifier: CC0 -pragma solidity ^0.8.0; +pragma solidity ^0.8.26; interface IERC20 { event Transfer(address indexed from, address indexed to, uint256 value); diff --git a/processor/ethereum/contracts/contracts/Router.sol b/processor/ethereum/contracts/contracts/Router.sol index c5e1efa2..65541a10 100644 --- a/processor/ethereum/contracts/contracts/Router.sol +++ b/processor/ethereum/contracts/contracts/Router.sol @@ -1,23 +1,31 @@ -// SPDX-License-Identifier: AGPLv3 -pragma solidity ^0.8.0; +// SPDX-License-Identifier: AGPL-3.0-only +pragma solidity ^0.8.26; import "./IERC20.sol"; -import "./Schnorr.sol"; -import "./Sandbox.sol"; +import "Schnorr.sol"; +// _ is used as a prefix for internal functions and smart-contract-scoped variables contract Router { - // Nonce is incremented for each batch of transactions executed/key update - uint256 public nonce; + // Nonce is incremented for each command executed, preventing replays + uint256 private _nonce; - // Current public key's x-coordinate - // This key must always have the parity defined within the Schnorr contract - bytes32 public seraiKey; + // The nonce which will be used for the smart contracts we deploy, enabling + // predicting their addresses + uint256 private _smartContractNonce; + + // The current public key, defined as per the Schnorr library + bytes32 private _seraiKey; + + enum DestinationType { + Address, + Code + } struct OutInstruction { - address to; - Call[] calls; - + DestinationType destinationType; + bytes destination; + address coin; uint256 value; } @@ -26,70 +34,42 @@ contract Router { bytes32 s; } - event SeraiKeyUpdated( - uint256 indexed nonce, - bytes32 indexed key, - Signature signature - ); - event InInstruction( - address indexed from, - address indexed coin, - uint256 amount, - bytes instruction - ); - // success is a uint256 representing a bitfield of transaction successes - event Executed( - uint256 indexed nonce, - bytes32 indexed batch, - uint256 success, - Signature signature - ); + event SeraiKeyUpdated(uint256 indexed nonce, bytes32 indexed key); + event InInstruction(address indexed from, address indexed coin, uint256 amount, bytes instruction); + event Executed(uint256 indexed nonce, bytes32 indexed batch); - // error types - error InvalidKey(); error 
InvalidSignature();
   error InvalidAmount();
   error FailedTransfer();
-  error TooManyTransactions();
-
-  modifier _updateSeraiKeyAtEndOfFn(
-    uint256 _nonce,
-    bytes32 key,
-    Signature memory sig
-  ) {
-    if (
-      (key == bytes32(0)) ||
-      ((bytes32(uint256(key) % Schnorr.Q)) != key)
-    ) {
-      revert InvalidKey();
-    }
+  // Update the Serai key at the end of the current function.
+  modifier _updateSeraiKeyAtEndOfFn(uint256 nonceUpdatedWith, bytes32 newSeraiKey) {
+    // Run the function itself.
     _;
-    seraiKey = key;
-    emit SeraiKeyUpdated(_nonce, key, sig);
+    // Update the key.
+    _seraiKey = newSeraiKey;
+    emit SeraiKeyUpdated(nonceUpdatedWith, newSeraiKey);
   }

-  constructor(bytes32 _seraiKey) _updateSeraiKeyAtEndOfFn(
-    0,
-    _seraiKey,
-    Signature({ c: bytes32(0), s: bytes32(0) })
-  ) {
-    nonce = 1;
+  constructor(bytes32 initialSeraiKey) _updateSeraiKeyAtEndOfFn(0, initialSeraiKey) {
+    // We consumed nonce 0 when setting the initial Serai key
+    _nonce = 1;
+    // Nonces are incremented by 1 upon account creation, prior to any code execution, per EIP-161
+    // This is incompatible with any networks which don't have their nonces start at 0
+    _smartContractNonce = 1;
   }

-  // updateSeraiKey validates the given Schnorr signature against the current
-  // public key, and if successful, updates the contract's public key to the
-  // given one.
+  // updateSeraiKey validates the given Schnorr signature against the current public key, and if
+  // successful, updates the contract's public key to the one specified.
   function updateSeraiKey(
-    bytes32 _seraiKey,
-    Signature calldata sig
-  ) external _updateSeraiKeyAtEndOfFn(nonce, _seraiKey, sig) {
-    bytes memory message =
-      abi.encodePacked("updateSeraiKey", block.chainid, nonce, _seraiKey);
-    nonce++;
+    bytes32 newSeraiKey,
+    Signature calldata signature
+  ) external _updateSeraiKeyAtEndOfFn(_nonce, newSeraiKey) {
+    bytes memory message = abi.encodePacked("updateSeraiKey", block.chainid, _nonce, newSeraiKey);
+    _nonce++;

-    if (!Schnorr.verify(seraiKey, message, sig.c, sig.s)) {
+    if (!Schnorr.verify(_seraiKey, message, signature.c, signature.s)) {
       revert InvalidSignature();
     }
   }
@@ -114,109 +94,121 @@ contract Router {
       )
     );

-    // Require there was nothing returned, which is done by some non-standard
-    // tokens, or that the ERC20 contract did in fact return true
-    bool nonStandardResOrTrue =
-      (res.length == 0) || abi.decode(res, (bool));
+    // Require there was nothing returned, which is done by some non-standard tokens, or that the
+    // ERC20 contract did in fact return true
+    bool nonStandardResOrTrue = (res.length == 0) || abi.decode(res, (bool));
    if (!(success && nonStandardResOrTrue)) {
       revert FailedTransfer();
     }
   }

   /*
-    Due to fee-on-transfer tokens, emitting the amount directly is frowned upon.
-    The amount instructed to transfer may not actually be the amount
-    transferred.
+    Due to fee-on-transfer tokens, emitting the amount directly is frowned upon. The amount
+    instructed to be transferred may not actually be the amount transferred.

-    If we add nonReentrant to every single function which can effect the
-    balance, we can check the amount exactly matches. This prevents transfers of
-    less value than expected occurring, at least, not without an additional
-    transfer to top up the difference (which isn't routed through this contract
-    and accordingly isn't trying to artificially create events).
+    If we add nonReentrant to every single function which can affect the balance, we can check the
+    amount exactly matches. This prevents transfers of less value than expected occurring, at
+    least, not without an additional transfer to top up the difference (which isn't routed through
+    this contract and accordingly isn't trying to artificially create events from this contract).

-    If we don't add nonReentrant, a transfer can be started, and then a new
-    transfer for the difference can follow it up (again and again until a
-    rounding error is reached). This contract would believe all transfers were
-    done in full, despite each only being done in part (except for the last
-    one).
+    If we don't add nonReentrant, a transfer can be started, and then a new transfer for the
+    difference can follow it up (again and again until a rounding error is reached). This contract
+    would believe all transfers were done in full, despite each only being done in part (except
+    for the last one).

-    Given fee-on-transfer tokens aren't intended to be supported, the only
-    token planned to be supported is Dai and it doesn't have any fee-on-transfer
-    logic, fee-on-transfer tokens aren't even able to be supported at this time,
-    we simply classify this entire class of tokens as non-standard
-    implementations which induce undefined behavior. It is the Serai network's
-    role not to add support for any non-standard implementations.
+    Given fee-on-transfer tokens aren't intended to be supported, the only token actively planned
+    to be supported is Dai (and it doesn't have any fee-on-transfer logic), and fee-on-transfer
+    tokens aren't even able to be supported at this time by the larger Serai network, we simply
+    classify this entire class of tokens as non-standard implementations which induce undefined
+    behavior.
+
+    It is the Serai network's role not to add support for any non-standard implementations.
   */
   emit InInstruction(msg.sender, coin, amount, instruction);
 }
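As a reference for the off-chain side, the message `updateSeraiKey` verifies above can be reconstructed byte-for-byte: `abi.encodePacked` emits the string literal as its raw bytes (no length prefix) and each `uint256`/`bytes32` as a 32-byte big-endian word. A minimal sketch; the helper is illustrative, not part of any crate in this tree:

```rust
/// Illustrative mirror of
/// `abi.encodePacked("updateSeraiKey", block.chainid, _nonce, newSeraiKey)`.
fn update_serai_key_message(chain_id: [u8; 32], nonce: u64, new_serai_key: [u8; 32]) -> Vec<u8> {
  // The string literal packs to its raw bytes, without a length prefix
  let mut message = b"updateSeraiKey".to_vec();
  // block.chainid is a uint256: 32 big-endian bytes
  message.extend_from_slice(&chain_id);
  // The nonce is likewise a uint256
  let mut nonce_word = [0u8; 32];
  nonce_word[24 ..].copy_from_slice(&nonce.to_be_bytes());
  message.extend_from_slice(&nonce_word);
  // bytes32 packs as-is
  message.extend_from_slice(&new_serai_key);
  message
}
```

Note `execute`, below, signs an `abi.encode`d message instead, whose layout adds offsets and lengths for its dynamic types, so this packed form applies only to `updateSeraiKey`.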
-  // execute accepts a list of transactions to execute as well as a signature.
-  // if signature verification passes, the given transactions are executed.
-  // if signature verification fails, this function will revert.
-  function execute(
-    OutInstruction[] calldata transactions,
-    Signature calldata sig
-  ) external {
-    if (transactions.length > 256) {
-      revert TooManyTransactions();
+  // Perform a transfer out
+  function _transferOut(address to, address coin, uint256 value) private {
+    /*
+      We purposely do not check if these calls succeed. A call either succeeded, and there's no
+      problem, or the call failed due to:
+        A) An insolvency
+        B) A malicious receiver
+        C) A non-standard token
+      A is an invariant, B should be dropped, C is something out of the control of this contract.
+      It is again the Serai network's role to not add support for any non-standard tokens.
+    */
+    if (coin == address(0)) {
+      // Enough gas to service the transfer and a minimal amount of logic
+      to.call{ value: value, gas: 5_000 }("");
+    } else {
+      coin.call{ gas: 100_000 }(abi.encodeWithSelector(IERC20.transfer.selector, msg.sender, value));
     }
+  }

-    bytes memory message =
-      abi.encode("execute", block.chainid, nonce, transactions);
-    uint256 executed_with_nonce = nonce;
-    // This prevents re-entrancy from causing double spends yet does allow
-    // out-of-order execution via re-entrancy
-    nonce++;
+  /*
+    Serai supports arbitrary calls out via deploying smart contracts (with user-specified code),
+    letting them execute whatever calls they're coded for. Since we can't meter CREATE, we call
+    CREATE from this function which we call, not internally, but with CALL (which we can meter).
+  */
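The `execute` path below pairs each code-destination transfer with a predicted address for the contract about to be deployed, computed as `keccak256(abi.encode(address(this), _smartContractNonce))`. A sketch of that computation off-chain (illustrative; note it hashes the ABI encoding, whereas the EVM's own CREATE address derivation hashes `rlp([sender, nonce])`):

```rust
use sha3::{Digest, Keccak256};

/// Illustrative mirror of the Router's
/// `address(uint160(uint256(keccak256(abi.encode(address(this), _smartContractNonce)))))`.
fn predicted_address(router: [u8; 20], smart_contract_nonce: u64) -> [u8; 20] {
  let mut preimage = [0u8; 64];
  // abi.encode left-pads an address into a 32-byte word
  preimage[12 .. 32].copy_from_slice(&router);
  // The nonce is a uint256: 32 big-endian bytes
  preimage[56 ..].copy_from_slice(&smart_contract_nonce.to_be_bytes());
  // An Ethereum address is the low 160 bits of the hash
  let hash = Keccak256::digest(preimage);
  let mut address = [0u8; 20];
  address.copy_from_slice(&hash[12 ..]);
  address
}
```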
+  function arbitaryCallOut(bytes memory code) external {
+    // Because we're creating a contract, increment our nonce
+    _smartContractNonce += 1;

-    if (!Schnorr.verify(seraiKey, message, sig.c, sig.s)) {
+    address contractAddress;
+    assembly {
+      contractAddress := create(0, add(code, 0x20), mload(code))
+    }
+  }
+
+  // Execute a list of transactions if they were signed by the current key with the current nonce
+  function execute(OutInstruction[] calldata transactions, Signature calldata signature) external {
+    // Verify the signature
+    bytes memory message = abi.encode("execute", block.chainid, _nonce, transactions);
+    if (!Schnorr.verify(_seraiKey, message, signature.c, signature.s)) {
       revert InvalidSignature();
     }

-    uint256 successes;
+    // Since the signature was verified, perform execution
+    emit Executed(_nonce, keccak256(message));
+    // While this is sufficient to prevent replays, it's still technically possible for instructions
+    // from later batches to be executed before these instructions upon re-entrancy
+    _nonce++;
+
     for (uint256 i = 0; i < transactions.length; i++) {
-      bool success;
-
-      // If there are no calls, send to `to` the value
-      if (transactions[i].calls.length == 0) {
-        (success, ) = transactions[i].to.call{
-          value: transactions[i].value,
-          gas: 5_000
-        }("");
+      // If the destination is an address, we perform a direct transfer
+      if (transactions[i].destinationType == DestinationType.Address) {
+        // This may cause a panic and the contract to become stuck if the destination isn't actually
+        // 20 bytes. Serai is trusted to not pass a malformed destination
+        (address destination) = abi.decode(transactions[i].destination, (address));
+        _transferOut(destination, transactions[i].coin, transactions[i].value);
       } else {
-        // If there are calls, ignore `to`. Deploy a new Sandbox and proxy the
-        // calls through that
-        //
-        // We could use a single sandbox in order to reduce gas costs, yet that
-        // risks one person creating an approval that's hooked before another
-        // user's intended action executes, in order to drain their coins
-        //
-        // While technically, that would be a flaw in the sandboxed flow, this
-        // is robust and prevents such flaws from being possible
-        //
-        // We also don't want people to set state via the Sandbox and expect it
-        // future available when anyone else could set a distinct value
-        Sandbox sandbox = new Sandbox();
-        (success, ) = address(sandbox).call{
-          value: transactions[i].value,
-          // TODO: Have the Call specify the gas up front
-          gas: 350_000
-        }(
-          abi.encodeWithSelector(
-            Sandbox.sandbox.selector,
-            transactions[i].calls
-          )
-        );
-      }
+        // The destination is a piece of initcode.
We calculate the hash of the will-be contract, + // transfer to it, and then run the initcode + address nextAddress = + address(uint160(uint256(keccak256(abi.encode(address(this), _smartContractNonce))))); - assembly { - successes := or(successes, shl(i, success)) + // Perform the transfer + _transferOut(nextAddress, transactions[i].coin, transactions[i].value); + + // Perform the calls with a set gas budget + (uint24 gas, bytes memory code) = abi.decode(transactions[i].destination, (uint24, bytes)); + address(this).call{ + gas: gas + }(abi.encodeWithSelector(Router.arbitaryCallOut.selector, code)); } } - emit Executed( - executed_with_nonce, - keccak256(message), - successes, - sig - ); + } + + function nonce() external view returns (uint256) { + return _nonce; + } + + function smartContractNonce() external view returns (uint256) { + return _smartContractNonce; + } + + function seraiKey() external view returns (bytes32) { + return _seraiKey; } } diff --git a/processor/ethereum/contracts/contracts/Sandbox.sol b/processor/ethereum/contracts/contracts/Sandbox.sol deleted file mode 100644 index a82a3afd..00000000 --- a/processor/ethereum/contracts/contracts/Sandbox.sol +++ /dev/null @@ -1,48 +0,0 @@ -// SPDX-License-Identifier: AGPLv3 -pragma solidity ^0.8.24; - -struct Call { - address to; - uint256 value; - bytes data; -} - -// A minimal sandbox focused on gas efficiency. -// -// The first call is executed if any of the calls fail, making it a fallback. -// All other calls are executed sequentially. -contract Sandbox { - error AlreadyCalled(); - error CallsFailed(); - - function sandbox(Call[] calldata calls) external payable { - // Prevent re-entrancy due to this executing arbitrary calls from anyone - // and anywhere - bool called; - assembly { called := tload(0) } - if (called) { - revert AlreadyCalled(); - } - assembly { tstore(0, 1) } - - // Execute the calls, starting from 1 - for (uint256 i = 1; i < calls.length; i++) { - (bool success, ) = - calls[i].to.call{ value: calls[i].value }(calls[i].data); - - // If this call failed, execute the fallback (call 0) - if (!success) { - (success, ) = - calls[0].to.call{ value: address(this).balance }(calls[0].data); - // If this call also failed, revert entirely - if (!success) { - revert CallsFailed(); - } - return; - } - } - - // We don't clear the re-entrancy guard as this contract should never be - // called again, so there's no reason to spend the effort - } -} diff --git a/processor/ethereum/contracts/contracts/tests/ERC20.sol b/processor/ethereum/contracts/contracts/tests/ERC20.sol index e157974c..f38bfea4 100644 --- a/processor/ethereum/contracts/contracts/tests/ERC20.sol +++ b/processor/ethereum/contracts/contracts/tests/ERC20.sol @@ -1,5 +1,5 @@ -// SPDX-License-Identifier: AGPLv3 -pragma solidity ^0.8.0; +// SPDX-License-Identifier: AGPL-3.0-only +pragma solidity ^0.8.26; contract TestERC20 { event Transfer(address indexed from, address indexed to, uint256 value); diff --git a/processor/ethereum/contracts/src/lib.rs b/processor/ethereum/contracts/src/lib.rs index fef10288..d8de29b3 100644 --- a/processor/ethereum/contracts/src/lib.rs +++ b/processor/ethereum/contracts/src/lib.rs @@ -44,5 +44,3 @@ pub mod router { pub const BYTECODE: &str = include_str!("../artifacts/Router.bin"); pub use super::router_container::Router::*; } - -pub mod tests; diff --git a/processor/ethereum/contracts/src/tests.rs b/processor/ethereum/contracts/src/tests.rs deleted file mode 100644 index 9f141c29..00000000 --- 
a/processor/ethereum/contracts/src/tests.rs +++ /dev/null @@ -1,13 +0,0 @@ -use alloy_sol_types::sol; - -#[rustfmt::skip] -#[allow(warnings)] -#[allow(needless_pass_by_value)] -#[allow(clippy::all)] -#[allow(clippy::ignored_unit_patterns)] -#[allow(clippy::redundant_closure_for_method_calls)] -mod schnorr_container { - use super::*; - sol!("contracts/tests/Schnorr.sol"); -} -pub use schnorr_container::TestSchnorr as schnorr; From 39be23d807c6289390a9b1e361ced8f70b3e4c02 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 15 Sep 2024 12:04:57 -0400 Subject: [PATCH 139/368] Remove artifacts for serai-processor-ethereum-contracts --- Cargo.lock | 19 +- networks/ethereum/build-contracts/src/lib.rs | 7 + networks/ethereum/schnorr/build.rs | 7 +- processor/ethereum/contracts/.gitignore | 1 - processor/ethereum/contracts/Cargo.toml | 9 + processor/ethereum/contracts/build.rs | 71 +- .../ethereum/contracts/src/abigen/deployer.rs | 584 ++++ .../ethereum/contracts/src/abigen/erc20.rs | 1838 ++++++++++ .../ethereum/contracts/src/abigen/mod.rs | 3 + .../ethereum/contracts/src/abigen/router.rs | 2958 +++++++++++++++++ processor/ethereum/contracts/src/lib.rs | 43 +- processor/ethereum/ethereum-serai/Cargo.toml | 1 + .../ethereum/ethereum-serai/src/crypto.rs | 10 +- .../ethereum/ethereum-serai/src/machine.rs | 8 +- .../ethereum/ethereum-serai/src/router.rs | 11 +- 15 files changed, 5501 insertions(+), 69 deletions(-) delete mode 100644 processor/ethereum/contracts/.gitignore create mode 100644 processor/ethereum/contracts/src/abigen/deployer.rs create mode 100644 processor/ethereum/contracts/src/abigen/erc20.rs create mode 100644 processor/ethereum/contracts/src/abigen/mod.rs create mode 100644 processor/ethereum/contracts/src/abigen/router.rs diff --git a/Cargo.lock b/Cargo.lock index 1338ae26..d6224093 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -2516,6 +2516,7 @@ dependencies = [ "alloy-rpc-types-eth", "alloy-simple-request-transport", "alloy-sol-types", + "ethereum-schnorr-contract", "flexible-transcript", "group", "k256", @@ -6126,6 +6127,16 @@ dependencies = [ "syn 1.0.109", ] +[[package]] +name = "prettyplease" +version = "0.2.22" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "479cf940fbbb3426c32c5d5176f62ad57549a0bb84773423ba8be9d089f5faba" +dependencies = [ + "proc-macro2", + "syn 2.0.77", +] + [[package]] name = "primeorder" version = "0.13.6" @@ -6291,7 +6302,7 @@ dependencies = [ "log", "multimap", "petgraph", - "prettyplease", + "prettyplease 0.1.25", "prost", "prost-types", "regex", @@ -8700,8 +8711,14 @@ dependencies = [ name = "serai-processor-ethereum-contracts" version = "0.1.0" dependencies = [ + "alloy-sol-macro-expander", + "alloy-sol-macro-input", "alloy-sol-types", "build-solidity-contracts", + "prettyplease 0.2.22", + "serde_json", + "syn 2.0.77", + "syn-solidity", ] [[package]] diff --git a/networks/ethereum/build-contracts/src/lib.rs b/networks/ethereum/build-contracts/src/lib.rs index 5213059e..b1c9c87f 100644 --- a/networks/ethereum/build-contracts/src/lib.rs +++ b/networks/ethereum/build-contracts/src/lib.rs @@ -12,6 +12,13 @@ pub fn build( contracts_path: &str, artifacts_path: &str, ) -> Result<(), String> { + if !fs::exists(artifacts_path) + .map_err(|e| format!("couldn't check if artifacts directory already exists: {e:?}"))? 
+ { + fs::create_dir(artifacts_path) + .map_err(|e| format!("couldn't create the non-existent artifacts directory: {e:?}"))?; + } + println!("cargo:rerun-if-changed={contracts_path}/*"); println!("cargo:rerun-if-changed={artifacts_path}/*"); diff --git a/networks/ethereum/schnorr/build.rs b/networks/ethereum/schnorr/build.rs index 7b7c30fd..cf12f948 100644 --- a/networks/ethereum/schnorr/build.rs +++ b/networks/ethereum/schnorr/build.rs @@ -1,9 +1,4 @@ -use std::{env, fs}; - fn main() { - let artifacts_path = env::var("OUT_DIR").unwrap().to_string() + "/ethereum-schnorr-contract"; - if !fs::exists(&artifacts_path).unwrap() { - fs::create_dir(&artifacts_path).unwrap(); - } + let artifacts_path = std::env::var("OUT_DIR").unwrap().to_string() + "/ethereum-schnorr-contract"; build_solidity_contracts::build(&[], "contracts", &artifacts_path).unwrap(); } diff --git a/processor/ethereum/contracts/.gitignore b/processor/ethereum/contracts/.gitignore deleted file mode 100644 index de153db3..00000000 --- a/processor/ethereum/contracts/.gitignore +++ /dev/null @@ -1 +0,0 @@ -artifacts diff --git a/processor/ethereum/contracts/Cargo.toml b/processor/ethereum/contracts/Cargo.toml index 64fbccad..5ed540b6 100644 --- a/processor/ethereum/contracts/Cargo.toml +++ b/processor/ethereum/contracts/Cargo.toml @@ -21,3 +21,12 @@ alloy-sol-types = { version = "0.8", default-features = false, features = ["json [build-dependencies] build-solidity-contracts = { path = "../../../networks/ethereum/build-contracts" } + +syn = { version = "2", default-features = false, features = ["proc-macro"] } + +serde_json = { version = "1", default-features = false, features = ["std"] } + +syn-solidity = { version = "0.8", default-features = false } +alloy-sol-macro-input = { version = "0.8", default-features = false } +alloy-sol-macro-expander = { version = "0.8", default-features = false } +prettyplease = { version = "0.2", default-features = false } diff --git a/processor/ethereum/contracts/build.rs b/processor/ethereum/contracts/build.rs index 0af41608..23d1e907 100644 --- a/processor/ethereum/contracts/build.rs +++ b/processor/ethereum/contracts/build.rs @@ -1,8 +1,69 @@ -fn main() { - build_solidity_contracts::build( - &["../../../networks/ethereum/schnorr/contracts"], - "contracts", - "artifacts", +use std::{env, fs}; + +use alloy_sol_macro_input::{SolInputKind, SolInput}; + +fn write(sol: syn_solidity::File, file: &str) { + let sol = alloy_sol_macro_expander::expand::expand(sol).unwrap(); + fs::write( + file, + // TODO: Replace `prettyplease::unparse` with `to_string` + prettyplease::unparse(&syn::File { + attrs: vec![], + items: vec![syn::parse2(sol).unwrap()], + shebang: None, + }) + .as_bytes(), ) .unwrap(); } + +fn sol(sol: &str, file: &str) { + let alloy_sol_macro_input::SolInputKind::Sol(sol) = + syn::parse_str(&std::fs::read_to_string(sol).unwrap()).unwrap() + else { + panic!("parsed .sol file wasn't SolInputKind::Sol"); + }; + write(sol, file); +} + +fn abi(ident: &str, abi: &str, file: &str) { + let SolInputKind::Sol(sol) = (SolInput { + attrs: vec![], + path: None, + kind: SolInputKind::Json( + syn::parse_str(ident).unwrap(), + serde_json::from_str(&fs::read_to_string(abi).unwrap()).unwrap(), + ), + }) + .normalize_json() + .unwrap() + .kind + else { + panic!("normalized JSON wasn't SolInputKind::Sol"); + }; + write(sol, file); +} + +fn main() { + let artifacts_path = + env::var("OUT_DIR").unwrap().to_string() + "/serai-processor-ethereum-contracts"; + build_solidity_contracts::build( + 
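The `write`/`sol`/`abi` helpers above reproduce at build time what alloy's `sol!` macro would otherwise expand inline, with `prettyplease` making the expansion reviewable as committed source. For the contracts that need no imports, the generated modules are equivalent to this inline form (a sketch of the macro's path variant, per the "These can be handled with the sol! macro" comment below):

```rust
// Sketch: the inline equivalent of the pre-generated erc20/deployer modules.
use alloy_sol_types::sol;

sol!("contracts/IERC20.sol");
sol!("contracts/Deployer.sol");
```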
&["../../../networks/ethereum/schnorr/contracts"], + "contracts", + &artifacts_path, + ) + .unwrap(); + + // TODO: Use OUT_DIR for the generated code + if !fs::exists("src/abigen").unwrap() { + fs::create_dir("src/abigen").unwrap(); + } + + // These can be handled with the sol! macro + sol("contracts/IERC20.sol", "src/abigen/erc20.rs"); + sol("contracts/Deployer.sol", "src/abigen/deployer.rs"); + // This cannot be handled with the sol! macro. The Solidity requires an import, the ABI is built + // to OUT_DIR and the macro doesn't support non-static paths: + // https://github.com/alloy-rs/core/issues/738 + abi("Router", &(artifacts_path.clone() + "/Router.abi"), "src/abigen/router.rs"); +} diff --git a/processor/ethereum/contracts/src/abigen/deployer.rs b/processor/ethereum/contracts/src/abigen/deployer.rs new file mode 100644 index 00000000..f4bcb3a6 --- /dev/null +++ b/processor/ethereum/contracts/src/abigen/deployer.rs @@ -0,0 +1,584 @@ +///Module containing a contract's types and functions. +/** + +```solidity +contract Deployer { + event Deployment(bytes32 indexed init_code_hash, address created); + error DeploymentFailed(); + function deploy(bytes memory init_code) external { } +} +```*/ +#[allow(non_camel_case_types, non_snake_case, clippy::style)] +pub mod Deployer { + use super::*; + use ::alloy_sol_types as alloy_sol_types; + /**Event with signature `Deployment(bytes32,address)` and selector `0x60b877a3bae7bf0f0bd5e1c40ebf44ea158201397f6b72d7c05360157b1ec0fc`. +```solidity +event Deployment(bytes32 indexed init_code_hash, address created); +```*/ + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + #[derive(Clone)] + pub struct Deployment { + #[allow(missing_docs)] + pub init_code_hash: ::alloy_sol_types::private::FixedBytes<32>, + #[allow(missing_docs)] + pub created: ::alloy_sol_types::private::Address, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[automatically_derived] + impl alloy_sol_types::SolEvent for Deployment { + type DataTuple<'a> = (::alloy_sol_types::sol_data::Address,); + type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + type TopicList = ( + alloy_sol_types::sol_data::FixedBytes<32>, + ::alloy_sol_types::sol_data::FixedBytes<32>, + ); + const SIGNATURE: &'static str = "Deployment(bytes32,address)"; + const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ + 96u8, + 184u8, + 119u8, + 163u8, + 186u8, + 231u8, + 191u8, + 15u8, + 11u8, + 213u8, + 225u8, + 196u8, + 14u8, + 191u8, + 68u8, + 234u8, + 21u8, + 130u8, + 1u8, + 57u8, + 127u8, + 107u8, + 114u8, + 215u8, + 192u8, + 83u8, + 96u8, + 21u8, + 123u8, + 30u8, + 192u8, + 252u8, + ]); + const ANONYMOUS: bool = false; + #[allow(unused_variables)] + #[inline] + fn new( + topics: ::RustType, + data: as alloy_sol_types::SolType>::RustType, + ) -> Self { + Self { + init_code_hash: topics.1, + created: data.0, + } + } + #[inline] + fn tokenize_body(&self) -> Self::DataToken<'_> { + ( + <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( + &self.created, + ), + ) + } + #[inline] + fn topics(&self) -> ::RustType { + (Self::SIGNATURE_HASH.into(), self.init_code_hash.clone()) + } + #[inline] + fn encode_topics_raw( + &self, + out: &mut [alloy_sol_types::abi::token::WordToken], + ) -> alloy_sol_types::Result<()> { + if out.len() < ::COUNT { + return Err(alloy_sol_types::Error::Overrun); + } + out[0usize] = alloy_sol_types::abi::token::WordToken( + 
Self::SIGNATURE_HASH, + ); + out[1usize] = <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::EventTopic>::encode_topic(&self.init_code_hash); + Ok(()) + } + } + #[automatically_derived] + impl alloy_sol_types::private::IntoLogData for Deployment { + fn to_log_data(&self) -> alloy_sol_types::private::LogData { + From::from(self) + } + fn into_log_data(self) -> alloy_sol_types::private::LogData { + From::from(&self) + } + } + #[automatically_derived] + impl From<&Deployment> for alloy_sol_types::private::LogData { + #[inline] + fn from(this: &Deployment) -> alloy_sol_types::private::LogData { + alloy_sol_types::SolEvent::encode_log_data(this) + } + } + }; + /**Custom error with signature `DeploymentFailed()` and selector `0x30116425`. +```solidity +error DeploymentFailed(); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct DeploymentFailed {} + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: DeploymentFailed) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for DeploymentFailed { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + #[automatically_derived] + impl alloy_sol_types::SolError for DeploymentFailed { + type Parameters<'a> = UnderlyingSolTuple<'a>; + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "DeploymentFailed()"; + const SELECTOR: [u8; 4] = [48u8, 17u8, 100u8, 37u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + () + } + } + }; + /**Function with signature `deploy(bytes)` and selector `0x00774360`. +```solidity +function deploy(bytes memory init_code) external { } +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct deployCall { + pub init_code: ::alloy_sol_types::private::Bytes, + } + ///Container type for the return parameters of the [`deploy(bytes)`](deployCall) function. 
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct deployReturn {} + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Bytes,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::Bytes,); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: deployCall) -> Self { + (value.init_code,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for deployCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { init_code: tuple.0 } + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: deployReturn) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for deployReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for deployCall { + type Parameters<'a> = (::alloy_sol_types::sol_data::Bytes,); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = deployReturn; + type ReturnTuple<'a> = (); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "deploy(bytes)"; + const SELECTOR: [u8; 4] = [0u8, 119u8, 67u8, 96u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + ( + <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::tokenize( + &self.init_code, + ), + ) + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + ///Container for all the [`Deployer`](self) function calls. + pub enum DeployerCalls { + deploy(deployCall), + } + #[automatically_derived] + impl DeployerCalls { + /// All the selectors of this enum. + /// + /// Note that the selectors might not be in the same order as the variants. + /// No guarantees are made about the order of the selectors. + /// + /// Prefer using `SolInterface` methods instead. 
+ pub const SELECTORS: &'static [[u8; 4usize]] = &[[0u8, 119u8, 67u8, 96u8]]; + } + #[automatically_derived] + impl alloy_sol_types::SolInterface for DeployerCalls { + const NAME: &'static str = "DeployerCalls"; + const MIN_DATA_LENGTH: usize = 64usize; + const COUNT: usize = 1usize; + #[inline] + fn selector(&self) -> [u8; 4] { + match self { + Self::deploy(_) => ::SELECTOR, + } + } + #[inline] + fn selector_at(i: usize) -> ::core::option::Option<[u8; 4]> { + Self::SELECTORS.get(i).copied() + } + #[inline] + fn valid_selector(selector: [u8; 4]) -> bool { + Self::SELECTORS.binary_search(&selector).is_ok() + } + #[inline] + #[allow(unsafe_code, non_snake_case)] + fn abi_decode_raw( + selector: [u8; 4], + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + static DECODE_SHIMS: &[fn( + &[u8], + bool, + ) -> alloy_sol_types::Result] = &[ + { + fn deploy( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(DeployerCalls::deploy) + } + deploy + }, + ]; + let Ok(idx) = Self::SELECTORS.binary_search(&selector) else { + return Err( + alloy_sol_types::Error::unknown_selector( + ::NAME, + selector, + ), + ); + }; + (unsafe { DECODE_SHIMS.get_unchecked(idx) })(data, validate) + } + #[inline] + fn abi_encoded_size(&self) -> usize { + match self { + Self::deploy(inner) => { + ::abi_encoded_size(inner) + } + } + } + #[inline] + fn abi_encode_raw(&self, out: &mut alloy_sol_types::private::Vec) { + match self { + Self::deploy(inner) => { + ::abi_encode_raw(inner, out) + } + } + } + } + ///Container for all the [`Deployer`](self) custom errors. + pub enum DeployerErrors { + DeploymentFailed(DeploymentFailed), + } + #[automatically_derived] + impl DeployerErrors { + /// All the selectors of this enum. + /// + /// Note that the selectors might not be in the same order as the variants. + /// No guarantees are made about the order of the selectors. + /// + /// Prefer using `SolInterface` methods instead. 
+ pub const SELECTORS: &'static [[u8; 4usize]] = &[[48u8, 17u8, 100u8, 37u8]]; + } + #[automatically_derived] + impl alloy_sol_types::SolInterface for DeployerErrors { + const NAME: &'static str = "DeployerErrors"; + const MIN_DATA_LENGTH: usize = 0usize; + const COUNT: usize = 1usize; + #[inline] + fn selector(&self) -> [u8; 4] { + match self { + Self::DeploymentFailed(_) => { + ::SELECTOR + } + } + } + #[inline] + fn selector_at(i: usize) -> ::core::option::Option<[u8; 4]> { + Self::SELECTORS.get(i).copied() + } + #[inline] + fn valid_selector(selector: [u8; 4]) -> bool { + Self::SELECTORS.binary_search(&selector).is_ok() + } + #[inline] + #[allow(unsafe_code, non_snake_case)] + fn abi_decode_raw( + selector: [u8; 4], + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + static DECODE_SHIMS: &[fn( + &[u8], + bool, + ) -> alloy_sol_types::Result] = &[ + { + fn DeploymentFailed( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(DeployerErrors::DeploymentFailed) + } + DeploymentFailed + }, + ]; + let Ok(idx) = Self::SELECTORS.binary_search(&selector) else { + return Err( + alloy_sol_types::Error::unknown_selector( + ::NAME, + selector, + ), + ); + }; + (unsafe { DECODE_SHIMS.get_unchecked(idx) })(data, validate) + } + #[inline] + fn abi_encoded_size(&self) -> usize { + match self { + Self::DeploymentFailed(inner) => { + ::abi_encoded_size( + inner, + ) + } + } + } + #[inline] + fn abi_encode_raw(&self, out: &mut alloy_sol_types::private::Vec) { + match self { + Self::DeploymentFailed(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + } + } + } + ///Container for all the [`Deployer`](self) events. + pub enum DeployerEvents { + Deployment(Deployment), + } + #[automatically_derived] + impl DeployerEvents { + /// All the selectors of this enum. + /// + /// Note that the selectors might not be in the same order as the variants. + /// No guarantees are made about the order of the selectors. + /// + /// Prefer using `SolInterface` methods instead. 
+ pub const SELECTORS: &'static [[u8; 32usize]] = &[ + [ + 96u8, + 184u8, + 119u8, + 163u8, + 186u8, + 231u8, + 191u8, + 15u8, + 11u8, + 213u8, + 225u8, + 196u8, + 14u8, + 191u8, + 68u8, + 234u8, + 21u8, + 130u8, + 1u8, + 57u8, + 127u8, + 107u8, + 114u8, + 215u8, + 192u8, + 83u8, + 96u8, + 21u8, + 123u8, + 30u8, + 192u8, + 252u8, + ], + ]; + } + #[automatically_derived] + impl alloy_sol_types::SolEventInterface for DeployerEvents { + const NAME: &'static str = "DeployerEvents"; + const COUNT: usize = 1usize; + fn decode_raw_log( + topics: &[alloy_sol_types::Word], + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + match topics.first().copied() { + Some(::SIGNATURE_HASH) => { + ::decode_raw_log( + topics, + data, + validate, + ) + .map(Self::Deployment) + } + _ => { + alloy_sol_types::private::Err(alloy_sol_types::Error::InvalidLog { + name: ::NAME, + log: alloy_sol_types::private::Box::new( + alloy_sol_types::private::LogData::new_unchecked( + topics.to_vec(), + data.to_vec().into(), + ), + ), + }) + } + } + } + } + #[automatically_derived] + impl alloy_sol_types::private::IntoLogData for DeployerEvents { + fn to_log_data(&self) -> alloy_sol_types::private::LogData { + match self { + Self::Deployment(inner) => { + alloy_sol_types::private::IntoLogData::to_log_data(inner) + } + } + } + fn into_log_data(self) -> alloy_sol_types::private::LogData { + match self { + Self::Deployment(inner) => { + alloy_sol_types::private::IntoLogData::into_log_data(inner) + } + } + } + } +} diff --git a/processor/ethereum/contracts/src/abigen/erc20.rs b/processor/ethereum/contracts/src/abigen/erc20.rs new file mode 100644 index 00000000..d9c0dd6e --- /dev/null +++ b/processor/ethereum/contracts/src/abigen/erc20.rs @@ -0,0 +1,1838 @@ +///Module containing a contract's types and functions. +/** + +```solidity +interface IERC20 { + event Transfer(address indexed from, address indexed to, uint256 value); + event Approval(address indexed owner, address indexed spender, uint256 value); + function name() external view returns (string memory); + function symbol() external view returns (string memory); + function decimals() external view returns (uint8); + function totalSupply() external view returns (uint256); + function balanceOf(address owner) external view returns (uint256); + function transfer(address to, uint256 value) external returns (bool); + function transferFrom(address from, address to, uint256 value) external returns (bool); + function approve(address spender, uint256 value) external returns (bool); + function allowance(address owner, address spender) external view returns (uint256); +} +```*/ +#[allow(non_camel_case_types, non_snake_case, clippy::style)] +pub mod IERC20 { + use super::*; + use ::alloy_sol_types as alloy_sol_types; + /**Event with signature `Transfer(address,address,uint256)` and selector `0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef`. 
+```solidity +event Transfer(address indexed from, address indexed to, uint256 value); +```*/ + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + #[derive(Clone)] + pub struct Transfer { + #[allow(missing_docs)] + pub from: ::alloy_sol_types::private::Address, + #[allow(missing_docs)] + pub to: ::alloy_sol_types::private::Address, + #[allow(missing_docs)] + pub value: ::alloy_sol_types::private::primitives::aliases::U256, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[automatically_derived] + impl alloy_sol_types::SolEvent for Transfer { + type DataTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + type TopicList = ( + alloy_sol_types::sol_data::FixedBytes<32>, + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Address, + ); + const SIGNATURE: &'static str = "Transfer(address,address,uint256)"; + const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ + 221u8, + 242u8, + 82u8, + 173u8, + 27u8, + 226u8, + 200u8, + 155u8, + 105u8, + 194u8, + 176u8, + 104u8, + 252u8, + 55u8, + 141u8, + 170u8, + 149u8, + 43u8, + 167u8, + 241u8, + 99u8, + 196u8, + 161u8, + 22u8, + 40u8, + 245u8, + 90u8, + 77u8, + 245u8, + 35u8, + 179u8, + 239u8, + ]); + const ANONYMOUS: bool = false; + #[allow(unused_variables)] + #[inline] + fn new( + topics: ::RustType, + data: as alloy_sol_types::SolType>::RustType, + ) -> Self { + Self { + from: topics.1, + to: topics.2, + value: data.0, + } + } + #[inline] + fn tokenize_body(&self) -> Self::DataToken<'_> { + ( + <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::SolType>::tokenize(&self.value), + ) + } + #[inline] + fn topics(&self) -> ::RustType { + (Self::SIGNATURE_HASH.into(), self.from.clone(), self.to.clone()) + } + #[inline] + fn encode_topics_raw( + &self, + out: &mut [alloy_sol_types::abi::token::WordToken], + ) -> alloy_sol_types::Result<()> { + if out.len() < ::COUNT { + return Err(alloy_sol_types::Error::Overrun); + } + out[0usize] = alloy_sol_types::abi::token::WordToken( + Self::SIGNATURE_HASH, + ); + out[1usize] = <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( + &self.from, + ); + out[2usize] = <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( + &self.to, + ); + Ok(()) + } + } + #[automatically_derived] + impl alloy_sol_types::private::IntoLogData for Transfer { + fn to_log_data(&self) -> alloy_sol_types::private::LogData { + From::from(self) + } + fn into_log_data(self) -> alloy_sol_types::private::LogData { + From::from(&self) + } + } + #[automatically_derived] + impl From<&Transfer> for alloy_sol_types::private::LogData { + #[inline] + fn from(this: &Transfer) -> alloy_sol_types::private::LogData { + alloy_sol_types::SolEvent::encode_log_data(this) + } + } + }; + /**Event with signature `Approval(address,address,uint256)` and selector `0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925`. 
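These `Transfer` bindings are what ERC20 log parsing can build on; decoding goes through `SolEvent::decode_raw_log` over a log's topics and data (a sketch with an illustrative helper name, assuming the iterator-based signature alloy exposes):

```rust
// Sketch: decode an ERC20 Transfer from raw topics and data.
use alloy_sol_types::{SolEvent, Word};

fn parse_transfer(topics: &[Word], data: &[u8]) -> Option<IERC20::Transfer> {
    IERC20::Transfer::decode_raw_log(topics.iter().copied(), data, true).ok()
}
```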
+```solidity +event Approval(address indexed owner, address indexed spender, uint256 value); +```*/ + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + #[derive(Clone)] + pub struct Approval { + #[allow(missing_docs)] + pub owner: ::alloy_sol_types::private::Address, + #[allow(missing_docs)] + pub spender: ::alloy_sol_types::private::Address, + #[allow(missing_docs)] + pub value: ::alloy_sol_types::private::primitives::aliases::U256, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[automatically_derived] + impl alloy_sol_types::SolEvent for Approval { + type DataTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + type TopicList = ( + alloy_sol_types::sol_data::FixedBytes<32>, + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Address, + ); + const SIGNATURE: &'static str = "Approval(address,address,uint256)"; + const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ + 140u8, + 91u8, + 225u8, + 229u8, + 235u8, + 236u8, + 125u8, + 91u8, + 209u8, + 79u8, + 113u8, + 66u8, + 125u8, + 30u8, + 132u8, + 243u8, + 221u8, + 3u8, + 20u8, + 192u8, + 247u8, + 178u8, + 41u8, + 30u8, + 91u8, + 32u8, + 10u8, + 200u8, + 199u8, + 195u8, + 185u8, + 37u8, + ]); + const ANONYMOUS: bool = false; + #[allow(unused_variables)] + #[inline] + fn new( + topics: ::RustType, + data: as alloy_sol_types::SolType>::RustType, + ) -> Self { + Self { + owner: topics.1, + spender: topics.2, + value: data.0, + } + } + #[inline] + fn tokenize_body(&self) -> Self::DataToken<'_> { + ( + <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::SolType>::tokenize(&self.value), + ) + } + #[inline] + fn topics(&self) -> ::RustType { + (Self::SIGNATURE_HASH.into(), self.owner.clone(), self.spender.clone()) + } + #[inline] + fn encode_topics_raw( + &self, + out: &mut [alloy_sol_types::abi::token::WordToken], + ) -> alloy_sol_types::Result<()> { + if out.len() < ::COUNT { + return Err(alloy_sol_types::Error::Overrun); + } + out[0usize] = alloy_sol_types::abi::token::WordToken( + Self::SIGNATURE_HASH, + ); + out[1usize] = <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( + &self.owner, + ); + out[2usize] = <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( + &self.spender, + ); + Ok(()) + } + } + #[automatically_derived] + impl alloy_sol_types::private::IntoLogData for Approval { + fn to_log_data(&self) -> alloy_sol_types::private::LogData { + From::from(self) + } + fn into_log_data(self) -> alloy_sol_types::private::LogData { + From::from(&self) + } + } + #[automatically_derived] + impl From<&Approval> for alloy_sol_types::private::LogData { + #[inline] + fn from(this: &Approval) -> alloy_sol_types::private::LogData { + alloy_sol_types::SolEvent::encode_log_data(this) + } + } + }; + /**Function with signature `name()` and selector `0x06fdde03`. +```solidity +function name() external view returns (string memory); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct nameCall {} + ///Container type for the return parameters of the [`name()`](nameCall) function. 
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct nameReturn { + pub _0: ::alloy_sol_types::private::String, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: nameCall) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for nameCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::String,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::String,); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: nameReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for nameReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for nameCall { + type Parameters<'a> = (); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = nameReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::String,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "name()"; + const SELECTOR: [u8; 4] = [6u8, 253u8, 222u8, 3u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + () + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `symbol()` and selector `0x95d89b41`. +```solidity +function symbol() external view returns (string memory); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct symbolCall {} + ///Container type for the return parameters of the [`symbol()`](symbolCall) function. 
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct symbolReturn { + pub _0: ::alloy_sol_types::private::String, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: symbolCall) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for symbolCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::String,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::String,); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: symbolReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for symbolReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for symbolCall { + type Parameters<'a> = (); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = symbolReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::String,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "symbol()"; + const SELECTOR: [u8; 4] = [149u8, 216u8, 155u8, 65u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + () + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `decimals()` and selector `0x313ce567`. +```solidity +function decimals() external view returns (uint8); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct decimalsCall {} + ///Container type for the return parameters of the [`decimals()`](decimalsCall) function. 
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct decimalsReturn { + pub _0: u8, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: decimalsCall) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for decimalsCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<8>,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (u8,); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: decimalsReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for decimalsReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for decimalsCall { + type Parameters<'a> = (); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = decimalsReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<8>,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "decimals()"; + const SELECTOR: [u8; 4] = [49u8, 60u8, 229u8, 103u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + () + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `totalSupply()` and selector `0x18160ddd`. +```solidity +function totalSupply() external view returns (uint256); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct totalSupplyCall {} + ///Container type for the return parameters of the [`totalSupply()`](totalSupplyCall) function. 
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct totalSupplyReturn { + pub _0: ::alloy_sol_types::private::primitives::aliases::U256, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: totalSupplyCall) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for totalSupplyCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::primitives::aliases::U256, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: totalSupplyReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for totalSupplyReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for totalSupplyCall { + type Parameters<'a> = (); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = totalSupplyReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "totalSupply()"; + const SELECTOR: [u8; 4] = [24u8, 22u8, 13u8, 221u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + () + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `balanceOf(address)` and selector `0x70a08231`. +```solidity +function balanceOf(address owner) external view returns (uint256); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct balanceOfCall { + pub owner: ::alloy_sol_types::private::Address, + } + ///Container type for the return parameters of the [`balanceOf(address)`](balanceOfCall) function. 
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct balanceOfReturn { + pub _0: ::alloy_sol_types::private::primitives::aliases::U256, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Address,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::Address,); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: balanceOfCall) -> Self { + (value.owner,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for balanceOfCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { owner: tuple.0 } + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::primitives::aliases::U256, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: balanceOfReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for balanceOfReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for balanceOfCall { + type Parameters<'a> = (::alloy_sol_types::sol_data::Address,); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = balanceOfReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "balanceOf(address)"; + const SELECTOR: [u8; 4] = [112u8, 160u8, 130u8, 49u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + ( + <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( + &self.owner, + ), + ) + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `transfer(address,uint256)` and selector `0xa9059cbb`. +```solidity +function transfer(address to, uint256 value) external returns (bool); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct transferCall { + pub to: ::alloy_sol_types::private::Address, + pub value: ::alloy_sol_types::private::primitives::aliases::U256, + } + ///Container type for the return parameters of the [`transfer(address,uint256)`](transferCall) function. 
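A full call/return round trip with the generated types looks as follows (sketch; `rpc_response` stands in for whatever an `eth_call` returned):

```rust
// Sketch: encode balanceOf(owner), then decode the returned uint256.
use alloy_primitives::{Address, U256};
use alloy_sol_types::SolCall;

fn balance_of_calldata(owner: Address) -> Vec<u8> {
    IERC20::balanceOfCall { owner }.abi_encode()
}

fn parse_balance(rpc_response: &[u8]) -> alloy_sol_types::Result<U256> {
    Ok(IERC20::balanceOfCall::abi_decode_returns(rpc_response, true)?._0)
}
```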
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct transferReturn { + pub _0: bool, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = ( + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Uint<256>, + ); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::Address, + ::alloy_sol_types::private::primitives::aliases::U256, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: transferCall) -> Self { + (value.to, value.value) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for transferCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { + to: tuple.0, + value: tuple.1, + } + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Bool,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (bool,); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: transferReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for transferReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for transferCall { + type Parameters<'a> = ( + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Uint<256>, + ); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = transferReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Bool,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "transfer(address,uint256)"; + const SELECTOR: [u8; 4] = [169u8, 5u8, 156u8, 187u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + ( + <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( + &self.to, + ), + <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::SolType>::tokenize(&self.value), + ) + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `transferFrom(address,address,uint256)` and selector `0x23b872dd`. 
+```solidity +function transferFrom(address from, address to, uint256 value) external returns (bool); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct transferFromCall { + pub from: ::alloy_sol_types::private::Address, + pub to: ::alloy_sol_types::private::Address, + pub value: ::alloy_sol_types::private::primitives::aliases::U256, + } + ///Container type for the return parameters of the [`transferFrom(address,address,uint256)`](transferFromCall) function. + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct transferFromReturn { + pub _0: bool, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = ( + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Uint<256>, + ); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::Address, + ::alloy_sol_types::private::Address, + ::alloy_sol_types::private::primitives::aliases::U256, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: transferFromCall) -> Self { + (value.from, value.to, value.value) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for transferFromCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { + from: tuple.0, + to: tuple.1, + value: tuple.2, + } + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Bool,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (bool,); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: transferFromReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for transferFromReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for transferFromCall { + type Parameters<'a> = ( + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Uint<256>, + ); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = transferFromReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Bool,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "transferFrom(address,address,uint256)"; + const SELECTOR: [u8; 4] = [35u8, 184u8, 114u8, 221u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + ( + <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( + &self.from, + ), + <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( + &self.to, + ), + <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::SolType>::tokenize(&self.value), 
+ ) + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `approve(address,uint256)` and selector `0x095ea7b3`. +```solidity +function approve(address spender, uint256 value) external returns (bool); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct approveCall { + pub spender: ::alloy_sol_types::private::Address, + pub value: ::alloy_sol_types::private::primitives::aliases::U256, + } + ///Container type for the return parameters of the [`approve(address,uint256)`](approveCall) function. + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct approveReturn { + pub _0: bool, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = ( + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Uint<256>, + ); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::Address, + ::alloy_sol_types::private::primitives::aliases::U256, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: approveCall) -> Self { + (value.spender, value.value) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for approveCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { + spender: tuple.0, + value: tuple.1, + } + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Bool,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (bool,); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: approveReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for approveReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for approveCall { + type Parameters<'a> = ( + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Uint<256>, + ); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = approveReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Bool,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "approve(address,uint256)"; + const SELECTOR: [u8; 4] = [9u8, 94u8, 167u8, 179u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + ( + <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( + &self.spender, + ), + <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::SolType>::tokenize(&self.value), + ) + } + #[inline] + fn abi_decode_returns( + data: &[u8], + 
validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `allowance(address,address)` and selector `0xdd62ed3e`. +```solidity +function allowance(address owner, address spender) external view returns (uint256); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct allowanceCall { + pub owner: ::alloy_sol_types::private::Address, + pub spender: ::alloy_sol_types::private::Address, + } + ///Container type for the return parameters of the [`allowance(address,address)`](allowanceCall) function. + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct allowanceReturn { + pub _0: ::alloy_sol_types::private::primitives::aliases::U256, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = ( + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Address, + ); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::Address, + ::alloy_sol_types::private::Address, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: allowanceCall) -> Self { + (value.owner, value.spender) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for allowanceCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { + owner: tuple.0, + spender: tuple.1, + } + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::primitives::aliases::U256, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: allowanceReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for allowanceReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for allowanceCall { + type Parameters<'a> = ( + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Address, + ); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = allowanceReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "allowance(address,address)"; + const SELECTOR: [u8; 4] = [221u8, 98u8, 237u8, 62u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + ( + <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( + &self.owner, + ), + <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( + &self.spender, + ), + ) + } + #[inline] + fn 
abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + ///Container for all the [`IERC20`](self) function calls. + pub enum IERC20Calls { + name(nameCall), + symbol(symbolCall), + decimals(decimalsCall), + totalSupply(totalSupplyCall), + balanceOf(balanceOfCall), + transfer(transferCall), + transferFrom(transferFromCall), + approve(approveCall), + allowance(allowanceCall), + } + #[automatically_derived] + impl IERC20Calls { + /// All the selectors of this enum. + /// + /// Note that the selectors might not be in the same order as the variants. + /// No guarantees are made about the order of the selectors. + /// + /// Prefer using `SolInterface` methods instead. + pub const SELECTORS: &'static [[u8; 4usize]] = &[ + [6u8, 253u8, 222u8, 3u8], + [9u8, 94u8, 167u8, 179u8], + [24u8, 22u8, 13u8, 221u8], + [35u8, 184u8, 114u8, 221u8], + [49u8, 60u8, 229u8, 103u8], + [112u8, 160u8, 130u8, 49u8], + [149u8, 216u8, 155u8, 65u8], + [169u8, 5u8, 156u8, 187u8], + [221u8, 98u8, 237u8, 62u8], + ]; + } + #[automatically_derived] + impl alloy_sol_types::SolInterface for IERC20Calls { + const NAME: &'static str = "IERC20Calls"; + const MIN_DATA_LENGTH: usize = 0usize; + const COUNT: usize = 9usize; + #[inline] + fn selector(&self) -> [u8; 4] { + match self { + Self::name(_) => ::SELECTOR, + Self::symbol(_) => ::SELECTOR, + Self::decimals(_) => ::SELECTOR, + Self::totalSupply(_) => { + ::SELECTOR + } + Self::balanceOf(_) => { + ::SELECTOR + } + Self::transfer(_) => ::SELECTOR, + Self::transferFrom(_) => { + ::SELECTOR + } + Self::approve(_) => ::SELECTOR, + Self::allowance(_) => { + ::SELECTOR + } + } + } + #[inline] + fn selector_at(i: usize) -> ::core::option::Option<[u8; 4]> { + Self::SELECTORS.get(i).copied() + } + #[inline] + fn valid_selector(selector: [u8; 4]) -> bool { + Self::SELECTORS.binary_search(&selector).is_ok() + } + #[inline] + #[allow(unsafe_code, non_snake_case)] + fn abi_decode_raw( + selector: [u8; 4], + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + static DECODE_SHIMS: &[fn( + &[u8], + bool, + ) -> alloy_sol_types::Result] = &[ + { + fn name( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(IERC20Calls::name) + } + name + }, + { + fn approve( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(IERC20Calls::approve) + } + approve + }, + { + fn totalSupply( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(IERC20Calls::totalSupply) + } + totalSupply + }, + { + fn transferFrom( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(IERC20Calls::transferFrom) + } + transferFrom + }, + { + fn decimals( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(IERC20Calls::decimals) + } + decimals + }, + { + fn balanceOf( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(IERC20Calls::balanceOf) + } + balanceOf + }, + { + fn symbol( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(IERC20Calls::symbol) + } + symbol + }, + { + fn transfer( + data: &[u8], + validate: bool, + ) -> 
alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(IERC20Calls::transfer) + } + transfer + }, + { + fn allowance( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(IERC20Calls::allowance) + } + allowance + }, + ]; + let Ok(idx) = Self::SELECTORS.binary_search(&selector) else { + return Err( + alloy_sol_types::Error::unknown_selector( + ::NAME, + selector, + ), + ); + }; + (unsafe { DECODE_SHIMS.get_unchecked(idx) })(data, validate) + } + #[inline] + fn abi_encoded_size(&self) -> usize { + match self { + Self::name(inner) => { + ::abi_encoded_size(inner) + } + Self::symbol(inner) => { + ::abi_encoded_size(inner) + } + Self::decimals(inner) => { + ::abi_encoded_size(inner) + } + Self::totalSupply(inner) => { + ::abi_encoded_size( + inner, + ) + } + Self::balanceOf(inner) => { + ::abi_encoded_size(inner) + } + Self::transfer(inner) => { + ::abi_encoded_size(inner) + } + Self::transferFrom(inner) => { + ::abi_encoded_size( + inner, + ) + } + Self::approve(inner) => { + ::abi_encoded_size(inner) + } + Self::allowance(inner) => { + ::abi_encoded_size(inner) + } + } + } + #[inline] + fn abi_encode_raw(&self, out: &mut alloy_sol_types::private::Vec) { + match self { + Self::name(inner) => { + ::abi_encode_raw(inner, out) + } + Self::symbol(inner) => { + ::abi_encode_raw(inner, out) + } + Self::decimals(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + Self::totalSupply(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + Self::balanceOf(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + Self::transfer(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + Self::transferFrom(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + Self::approve(inner) => { + ::abi_encode_raw(inner, out) + } + Self::allowance(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + } + } + } + ///Container for all the [`IERC20`](self) events. + pub enum IERC20Events { + Transfer(Transfer), + Approval(Approval), + } + #[automatically_derived] + impl IERC20Events { + /// All the selectors of this enum. + /// + /// Note that the selectors might not be in the same order as the variants. + /// No guarantees are made about the order of the selectors. + /// + /// Prefer using `SolInterface` methods instead. 
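+ /// These 32-byte values are the keccak256 hashes of the event
+ /// signatures, listed in ascending byte order (`Approval` sorts before
+ /// `Transfer` here). A minimal derivation sketch, reusing the hasher this
+ /// module already re-exports (`topic0` is a hypothetical local):
+ /// ```ignore
+ /// let topic0 = alloy_sol_types::private::keccak256(
+ ///     "Transfer(address,address,uint256)".as_bytes(),
+ /// );
+ /// // `topic0.0` equals the `Transfer` entry in `SELECTORS` below.
+ /// ```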
+ pub const SELECTORS: &'static [[u8; 32usize]] = &[ + [ + 140u8, + 91u8, + 225u8, + 229u8, + 235u8, + 236u8, + 125u8, + 91u8, + 209u8, + 79u8, + 113u8, + 66u8, + 125u8, + 30u8, + 132u8, + 243u8, + 221u8, + 3u8, + 20u8, + 192u8, + 247u8, + 178u8, + 41u8, + 30u8, + 91u8, + 32u8, + 10u8, + 200u8, + 199u8, + 195u8, + 185u8, + 37u8, + ], + [ + 221u8, + 242u8, + 82u8, + 173u8, + 27u8, + 226u8, + 200u8, + 155u8, + 105u8, + 194u8, + 176u8, + 104u8, + 252u8, + 55u8, + 141u8, + 170u8, + 149u8, + 43u8, + 167u8, + 241u8, + 99u8, + 196u8, + 161u8, + 22u8, + 40u8, + 245u8, + 90u8, + 77u8, + 245u8, + 35u8, + 179u8, + 239u8, + ], + ]; + } + #[automatically_derived] + impl alloy_sol_types::SolEventInterface for IERC20Events { + const NAME: &'static str = "IERC20Events"; + const COUNT: usize = 2usize; + fn decode_raw_log( + topics: &[alloy_sol_types::Word], + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + match topics.first().copied() { + Some(::SIGNATURE_HASH) => { + ::decode_raw_log( + topics, + data, + validate, + ) + .map(Self::Transfer) + } + Some(::SIGNATURE_HASH) => { + ::decode_raw_log( + topics, + data, + validate, + ) + .map(Self::Approval) + } + _ => { + alloy_sol_types::private::Err(alloy_sol_types::Error::InvalidLog { + name: ::NAME, + log: alloy_sol_types::private::Box::new( + alloy_sol_types::private::LogData::new_unchecked( + topics.to_vec(), + data.to_vec().into(), + ), + ), + }) + } + } + } + } + #[automatically_derived] + impl alloy_sol_types::private::IntoLogData for IERC20Events { + fn to_log_data(&self) -> alloy_sol_types::private::LogData { + match self { + Self::Transfer(inner) => { + alloy_sol_types::private::IntoLogData::to_log_data(inner) + } + Self::Approval(inner) => { + alloy_sol_types::private::IntoLogData::to_log_data(inner) + } + } + } + fn into_log_data(self) -> alloy_sol_types::private::LogData { + match self { + Self::Transfer(inner) => { + alloy_sol_types::private::IntoLogData::into_log_data(inner) + } + Self::Approval(inner) => { + alloy_sol_types::private::IntoLogData::into_log_data(inner) + } + } + } + } +} diff --git a/processor/ethereum/contracts/src/abigen/mod.rs b/processor/ethereum/contracts/src/abigen/mod.rs new file mode 100644 index 00000000..541c2980 --- /dev/null +++ b/processor/ethereum/contracts/src/abigen/mod.rs @@ -0,0 +1,3 @@ +pub mod erc20; +pub mod deployer; +pub mod router; diff --git a/processor/ethereum/contracts/src/abigen/router.rs b/processor/ethereum/contracts/src/abigen/router.rs new file mode 100644 index 00000000..cea1858f --- /dev/null +++ b/processor/ethereum/contracts/src/abigen/router.rs @@ -0,0 +1,2958 @@ +/** + +Generated by the following Solidity interface... 
+```solidity +interface Router { + type DestinationType is uint8; + struct OutInstruction { + DestinationType destinationType; + bytes destination; + address coin; + uint256 value; + } + struct Signature { + bytes32 c; + bytes32 s; + } + + error FailedTransfer(); + error InvalidAmount(); + error InvalidSignature(); + + event Executed(uint256 indexed nonce, bytes32 indexed batch); + event InInstruction(address indexed from, address indexed coin, uint256 amount, bytes instruction); + event SeraiKeyUpdated(uint256 indexed nonce, bytes32 indexed key); + + constructor(bytes32 initialSeraiKey); + + function arbitaryCallOut(bytes memory code) external; + function execute(OutInstruction[] memory transactions, Signature memory signature) external; + function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable; + function nonce() external view returns (uint256); + function seraiKey() external view returns (bytes32); + function smartContractNonce() external view returns (uint256); + function updateSeraiKey(bytes32 newSeraiKey, Signature memory signature) external; +} +``` + +...which was generated by the following JSON ABI: +```json +[ + { + "type": "constructor", + "inputs": [ + { + "name": "initialSeraiKey", + "type": "bytes32", + "internalType": "bytes32" + } + ], + "stateMutability": "nonpayable" + }, + { + "type": "function", + "name": "arbitaryCallOut", + "inputs": [ + { + "name": "code", + "type": "bytes", + "internalType": "bytes" + } + ], + "outputs": [], + "stateMutability": "nonpayable" + }, + { + "type": "function", + "name": "execute", + "inputs": [ + { + "name": "transactions", + "type": "tuple[]", + "internalType": "struct Router.OutInstruction[]", + "components": [ + { + "name": "destinationType", + "type": "uint8", + "internalType": "enum Router.DestinationType" + }, + { + "name": "destination", + "type": "bytes", + "internalType": "bytes" + }, + { + "name": "coin", + "type": "address", + "internalType": "address" + }, + { + "name": "value", + "type": "uint256", + "internalType": "uint256" + } + ] + }, + { + "name": "signature", + "type": "tuple", + "internalType": "struct Router.Signature", + "components": [ + { + "name": "c", + "type": "bytes32", + "internalType": "bytes32" + }, + { + "name": "s", + "type": "bytes32", + "internalType": "bytes32" + } + ] + } + ], + "outputs": [], + "stateMutability": "nonpayable" + }, + { + "type": "function", + "name": "inInstruction", + "inputs": [ + { + "name": "coin", + "type": "address", + "internalType": "address" + }, + { + "name": "amount", + "type": "uint256", + "internalType": "uint256" + }, + { + "name": "instruction", + "type": "bytes", + "internalType": "bytes" + } + ], + "outputs": [], + "stateMutability": "payable" + }, + { + "type": "function", + "name": "nonce", + "inputs": [], + "outputs": [ + { + "name": "", + "type": "uint256", + "internalType": "uint256" + } + ], + "stateMutability": "view" + }, + { + "type": "function", + "name": "seraiKey", + "inputs": [], + "outputs": [ + { + "name": "", + "type": "bytes32", + "internalType": "bytes32" + } + ], + "stateMutability": "view" + }, + { + "type": "function", + "name": "smartContractNonce", + "inputs": [], + "outputs": [ + { + "name": "", + "type": "uint256", + "internalType": "uint256" + } + ], + "stateMutability": "view" + }, + { + "type": "function", + "name": "updateSeraiKey", + "inputs": [ + { + "name": "newSeraiKey", + "type": "bytes32", + "internalType": "bytes32" + }, + { + "name": "signature", + "type": "tuple", + "internalType": "struct 
Router.Signature", + "components": [ + { + "name": "c", + "type": "bytes32", + "internalType": "bytes32" + }, + { + "name": "s", + "type": "bytes32", + "internalType": "bytes32" + } + ] + } + ], + "outputs": [], + "stateMutability": "nonpayable" + }, + { + "type": "event", + "name": "Executed", + "inputs": [ + { + "name": "nonce", + "type": "uint256", + "indexed": true, + "internalType": "uint256" + }, + { + "name": "batch", + "type": "bytes32", + "indexed": true, + "internalType": "bytes32" + } + ], + "anonymous": false + }, + { + "type": "event", + "name": "InInstruction", + "inputs": [ + { + "name": "from", + "type": "address", + "indexed": true, + "internalType": "address" + }, + { + "name": "coin", + "type": "address", + "indexed": true, + "internalType": "address" + }, + { + "name": "amount", + "type": "uint256", + "indexed": false, + "internalType": "uint256" + }, + { + "name": "instruction", + "type": "bytes", + "indexed": false, + "internalType": "bytes" + } + ], + "anonymous": false + }, + { + "type": "event", + "name": "SeraiKeyUpdated", + "inputs": [ + { + "name": "nonce", + "type": "uint256", + "indexed": true, + "internalType": "uint256" + }, + { + "name": "key", + "type": "bytes32", + "indexed": true, + "internalType": "bytes32" + } + ], + "anonymous": false + }, + { + "type": "error", + "name": "FailedTransfer", + "inputs": [] + }, + { + "type": "error", + "name": "InvalidAmount", + "inputs": [] + }, + { + "type": "error", + "name": "InvalidSignature", + "inputs": [] + } +] +```*/ +#[allow(non_camel_case_types, non_snake_case, clippy::style)] +pub mod Router { + use super::*; + use ::alloy_sol_types as alloy_sol_types; + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct DestinationType(u8); + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[automatically_derived] + impl alloy_sol_types::private::SolTypeValue for u8 { + #[inline] + fn stv_to_tokens( + &self, + ) -> <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::SolType>::Token<'_> { + alloy_sol_types::private::SolTypeValue::< + ::alloy_sol_types::sol_data::Uint<8>, + >::stv_to_tokens(self) + } + #[inline] + fn stv_eip712_data_word(&self) -> alloy_sol_types::Word { + <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::SolType>::tokenize(self) + .0 + } + #[inline] + fn stv_abi_encode_packed_to( + &self, + out: &mut alloy_sol_types::private::Vec, + ) { + <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::SolType>::abi_encode_packed_to(self, out) + } + #[inline] + fn stv_abi_packed_encoded_size(&self) -> usize { + <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::SolType>::abi_encoded_size(self) + } + } + #[automatically_derived] + impl DestinationType { + /// The Solidity type name. + pub const NAME: &'static str = stringify!(@ name); + /// Convert from the underlying value type. + #[inline] + pub const fn from(value: u8) -> Self { + Self(value) + } + /// Return the underlying value. + #[inline] + pub const fn into(self) -> u8 { + self.0 + } + /// Return the single encoding of this value, delegating to the + /// underlying type. + #[inline] + pub fn abi_encode(&self) -> alloy_sol_types::private::Vec { + ::abi_encode(&self.0) + } + /// Return the packed encoding of this value, delegating to the + /// underlying type. 
+ #[inline] + pub fn abi_encode_packed(&self) -> alloy_sol_types::private::Vec { + ::abi_encode_packed(&self.0) + } + } + #[automatically_derived] + impl alloy_sol_types::SolType for DestinationType { + type RustType = u8; + type Token<'a> = <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::SolType>::Token<'a>; + const SOL_NAME: &'static str = Self::NAME; + const ENCODED_SIZE: Option = <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::SolType>::ENCODED_SIZE; + const PACKED_ENCODED_SIZE: Option = <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::SolType>::PACKED_ENCODED_SIZE; + #[inline] + fn valid_token(token: &Self::Token<'_>) -> bool { + Self::type_check(token).is_ok() + } + #[inline] + fn type_check(token: &Self::Token<'_>) -> alloy_sol_types::Result<()> { + <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::SolType>::type_check(token) + } + #[inline] + fn detokenize(token: Self::Token<'_>) -> Self::RustType { + <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::SolType>::detokenize(token) + } + } + #[automatically_derived] + impl alloy_sol_types::EventTopic for DestinationType { + #[inline] + fn topic_preimage_length(rust: &Self::RustType) -> usize { + <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::EventTopic>::topic_preimage_length(rust) + } + #[inline] + fn encode_topic_preimage( + rust: &Self::RustType, + out: &mut alloy_sol_types::private::Vec, + ) { + <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::EventTopic>::encode_topic_preimage(rust, out) + } + #[inline] + fn encode_topic( + rust: &Self::RustType, + ) -> alloy_sol_types::abi::token::WordToken { + <::alloy_sol_types::sol_data::Uint< + 8, + > as alloy_sol_types::EventTopic>::encode_topic(rust) + } + } + }; + /**```solidity +struct OutInstruction { DestinationType destinationType; bytes destination; address coin; uint256 value; } +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct OutInstruction { + pub destinationType: ::RustType, + pub destination: ::alloy_sol_types::private::Bytes, + pub coin: ::alloy_sol_types::private::Address, + pub value: ::alloy_sol_types::private::primitives::aliases::U256, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[doc(hidden)] + type UnderlyingSolTuple<'a> = ( + DestinationType, + ::alloy_sol_types::sol_data::Bytes, + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Uint<256>, + ); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::RustType, + ::alloy_sol_types::private::Bytes, + ::alloy_sol_types::private::Address, + ::alloy_sol_types::private::primitives::aliases::U256, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: OutInstruction) -> Self { + (value.destinationType, value.destination, value.coin, value.value) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for OutInstruction { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { + destinationType: tuple.0, + destination: tuple.1, + coin: tuple.2, + value: tuple.3, + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolValue for OutInstruction { 
+ type SolType = Self; + } + #[automatically_derived] + impl alloy_sol_types::private::SolTypeValue for OutInstruction { + #[inline] + fn stv_to_tokens(&self) -> ::Token<'_> { + ( + ::tokenize( + &self.destinationType, + ), + <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::tokenize( + &self.destination, + ), + <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( + &self.coin, + ), + <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::SolType>::tokenize(&self.value), + ) + } + #[inline] + fn stv_abi_encoded_size(&self) -> usize { + if let Some(size) = ::ENCODED_SIZE { + return size; + } + let tuple = as ::core::convert::From>::from(self.clone()); + as alloy_sol_types::SolType>::abi_encoded_size(&tuple) + } + #[inline] + fn stv_eip712_data_word(&self) -> alloy_sol_types::Word { + ::eip712_hash_struct(self) + } + #[inline] + fn stv_abi_encode_packed_to( + &self, + out: &mut alloy_sol_types::private::Vec, + ) { + let tuple = as ::core::convert::From>::from(self.clone()); + as alloy_sol_types::SolType>::abi_encode_packed_to(&tuple, out) + } + #[inline] + fn stv_abi_packed_encoded_size(&self) -> usize { + if let Some(size) = ::PACKED_ENCODED_SIZE { + return size; + } + let tuple = as ::core::convert::From>::from(self.clone()); + as alloy_sol_types::SolType>::abi_packed_encoded_size(&tuple) + } + } + #[automatically_derived] + impl alloy_sol_types::SolType for OutInstruction { + type RustType = Self; + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SOL_NAME: &'static str = ::NAME; + const ENCODED_SIZE: Option = as alloy_sol_types::SolType>::ENCODED_SIZE; + const PACKED_ENCODED_SIZE: Option = as alloy_sol_types::SolType>::PACKED_ENCODED_SIZE; + #[inline] + fn valid_token(token: &Self::Token<'_>) -> bool { + as alloy_sol_types::SolType>::valid_token(token) + } + #[inline] + fn detokenize(token: Self::Token<'_>) -> Self::RustType { + let tuple = as alloy_sol_types::SolType>::detokenize(token); + >>::from(tuple) + } + } + #[automatically_derived] + impl alloy_sol_types::SolStruct for OutInstruction { + const NAME: &'static str = "OutInstruction"; + #[inline] + fn eip712_root_type() -> alloy_sol_types::private::Cow<'static, str> { + alloy_sol_types::private::Cow::Borrowed( + "OutInstruction(uint8 destinationType,bytes destination,address coin,uint256 value)", + ) + } + #[inline] + fn eip712_components() -> alloy_sol_types::private::Vec< + alloy_sol_types::private::Cow<'static, str>, + > { + alloy_sol_types::private::Vec::new() + } + #[inline] + fn eip712_encode_type() -> alloy_sol_types::private::Cow<'static, str> { + ::eip712_root_type() + } + #[inline] + fn eip712_encode_data(&self) -> alloy_sol_types::private::Vec { + [ + ::eip712_data_word( + &self.destinationType, + ) + .0, + <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::eip712_data_word( + &self.destination, + ) + .0, + <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::eip712_data_word( + &self.coin, + ) + .0, + <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::SolType>::eip712_data_word(&self.value) + .0, + ] + .concat() + } + } + #[automatically_derived] + impl alloy_sol_types::EventTopic for OutInstruction { + #[inline] + fn topic_preimage_length(rust: &Self::RustType) -> usize { + 0usize + + ::topic_preimage_length( + &rust.destinationType, + ) + + <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::EventTopic>::topic_preimage_length( + &rust.destination, + ) + + <::alloy_sol_types::sol_data::Address as 
alloy_sol_types::EventTopic>::topic_preimage_length( + &rust.coin, + ) + + <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::EventTopic>::topic_preimage_length(&rust.value) + } + #[inline] + fn encode_topic_preimage( + rust: &Self::RustType, + out: &mut alloy_sol_types::private::Vec, + ) { + out.reserve( + ::topic_preimage_length(rust), + ); + ::encode_topic_preimage( + &rust.destinationType, + out, + ); + <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::EventTopic>::encode_topic_preimage( + &rust.destination, + out, + ); + <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic_preimage( + &rust.coin, + out, + ); + <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::EventTopic>::encode_topic_preimage( + &rust.value, + out, + ); + } + #[inline] + fn encode_topic( + rust: &Self::RustType, + ) -> alloy_sol_types::abi::token::WordToken { + let mut out = alloy_sol_types::private::Vec::new(); + ::encode_topic_preimage( + rust, + &mut out, + ); + alloy_sol_types::abi::token::WordToken( + alloy_sol_types::private::keccak256(out), + ) + } + } + }; + /**```solidity +struct Signature { bytes32 c; bytes32 s; } +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct Signature { + pub c: ::alloy_sol_types::private::FixedBytes<32>, + pub s: ::alloy_sol_types::private::FixedBytes<32>, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[doc(hidden)] + type UnderlyingSolTuple<'a> = ( + ::alloy_sol_types::sol_data::FixedBytes<32>, + ::alloy_sol_types::sol_data::FixedBytes<32>, + ); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::FixedBytes<32>, + ::alloy_sol_types::private::FixedBytes<32>, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: Signature) -> Self { + (value.c, value.s) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for Signature { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { c: tuple.0, s: tuple.1 } + } + } + #[automatically_derived] + impl alloy_sol_types::SolValue for Signature { + type SolType = Self; + } + #[automatically_derived] + impl alloy_sol_types::private::SolTypeValue for Signature { + #[inline] + fn stv_to_tokens(&self) -> ::Token<'_> { + ( + <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::SolType>::tokenize(&self.c), + <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::SolType>::tokenize(&self.s), + ) + } + #[inline] + fn stv_abi_encoded_size(&self) -> usize { + if let Some(size) = ::ENCODED_SIZE { + return size; + } + let tuple = as ::core::convert::From>::from(self.clone()); + as alloy_sol_types::SolType>::abi_encoded_size(&tuple) + } + #[inline] + fn stv_eip712_data_word(&self) -> alloy_sol_types::Word { + ::eip712_hash_struct(self) + } + #[inline] + fn stv_abi_encode_packed_to( + &self, + out: &mut alloy_sol_types::private::Vec, + ) { + let tuple = as ::core::convert::From>::from(self.clone()); + as alloy_sol_types::SolType>::abi_encode_packed_to(&tuple, out) + } + #[inline] + fn stv_abi_packed_encoded_size(&self) -> usize { + if let Some(size) = ::PACKED_ENCODED_SIZE { + return 
size; + } + let tuple = as ::core::convert::From>::from(self.clone()); + as alloy_sol_types::SolType>::abi_packed_encoded_size(&tuple) + } + } + #[automatically_derived] + impl alloy_sol_types::SolType for Signature { + type RustType = Self; + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SOL_NAME: &'static str = ::NAME; + const ENCODED_SIZE: Option = as alloy_sol_types::SolType>::ENCODED_SIZE; + const PACKED_ENCODED_SIZE: Option = as alloy_sol_types::SolType>::PACKED_ENCODED_SIZE; + #[inline] + fn valid_token(token: &Self::Token<'_>) -> bool { + as alloy_sol_types::SolType>::valid_token(token) + } + #[inline] + fn detokenize(token: Self::Token<'_>) -> Self::RustType { + let tuple = as alloy_sol_types::SolType>::detokenize(token); + >>::from(tuple) + } + } + #[automatically_derived] + impl alloy_sol_types::SolStruct for Signature { + const NAME: &'static str = "Signature"; + #[inline] + fn eip712_root_type() -> alloy_sol_types::private::Cow<'static, str> { + alloy_sol_types::private::Cow::Borrowed("Signature(bytes32 c,bytes32 s)") + } + #[inline] + fn eip712_components() -> alloy_sol_types::private::Vec< + alloy_sol_types::private::Cow<'static, str>, + > { + alloy_sol_types::private::Vec::new() + } + #[inline] + fn eip712_encode_type() -> alloy_sol_types::private::Cow<'static, str> { + ::eip712_root_type() + } + #[inline] + fn eip712_encode_data(&self) -> alloy_sol_types::private::Vec { + [ + <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::SolType>::eip712_data_word(&self.c) + .0, + <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::SolType>::eip712_data_word(&self.s) + .0, + ] + .concat() + } + } + #[automatically_derived] + impl alloy_sol_types::EventTopic for Signature { + #[inline] + fn topic_preimage_length(rust: &Self::RustType) -> usize { + 0usize + + <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::EventTopic>::topic_preimage_length(&rust.c) + + <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::EventTopic>::topic_preimage_length(&rust.s) + } + #[inline] + fn encode_topic_preimage( + rust: &Self::RustType, + out: &mut alloy_sol_types::private::Vec, + ) { + out.reserve( + ::topic_preimage_length(rust), + ); + <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::EventTopic>::encode_topic_preimage(&rust.c, out); + <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::EventTopic>::encode_topic_preimage(&rust.s, out); + } + #[inline] + fn encode_topic( + rust: &Self::RustType, + ) -> alloy_sol_types::abi::token::WordToken { + let mut out = alloy_sol_types::private::Vec::new(); + ::encode_topic_preimage( + rust, + &mut out, + ); + alloy_sol_types::abi::token::WordToken( + alloy_sol_types::private::keccak256(out), + ) + } + } + }; + /**Custom error with signature `FailedTransfer()` and selector `0xbfa871c5`. 
+```solidity +error FailedTransfer(); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct FailedTransfer {} + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: FailedTransfer) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for FailedTransfer { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + #[automatically_derived] + impl alloy_sol_types::SolError for FailedTransfer { + type Parameters<'a> = UnderlyingSolTuple<'a>; + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "FailedTransfer()"; + const SELECTOR: [u8; 4] = [191u8, 168u8, 113u8, 197u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + () + } + } + }; + /**Custom error with signature `InvalidAmount()` and selector `0x2c5211c6`. +```solidity +error InvalidAmount(); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct InvalidAmount {} + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: InvalidAmount) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for InvalidAmount { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + #[automatically_derived] + impl alloy_sol_types::SolError for InvalidAmount { + type Parameters<'a> = UnderlyingSolTuple<'a>; + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "InvalidAmount()"; + const SELECTOR: [u8; 4] = [44u8, 82u8, 17u8, 198u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + () + } + } + }; + /**Custom error with signature `InvalidSignature()` and selector `0x8baa579f`. 
+```solidity +error InvalidSignature(); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct InvalidSignature {} + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: InvalidSignature) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for InvalidSignature { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + #[automatically_derived] + impl alloy_sol_types::SolError for InvalidSignature { + type Parameters<'a> = UnderlyingSolTuple<'a>; + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "InvalidSignature()"; + const SELECTOR: [u8; 4] = [139u8, 170u8, 87u8, 159u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + () + } + } + }; + /**Event with signature `Executed(uint256,bytes32)` and selector `0xc218c77e54cac1162571e52b65bb27aa0cdfcc70b7c7296ad83933914b132091`. +```solidity +event Executed(uint256 indexed nonce, bytes32 indexed batch); +```*/ + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + #[derive(Clone)] + pub struct Executed { + #[allow(missing_docs)] + pub nonce: ::alloy_sol_types::private::primitives::aliases::U256, + #[allow(missing_docs)] + pub batch: ::alloy_sol_types::private::FixedBytes<32>, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[automatically_derived] + impl alloy_sol_types::SolEvent for Executed { + type DataTuple<'a> = (); + type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + type TopicList = ( + alloy_sol_types::sol_data::FixedBytes<32>, + ::alloy_sol_types::sol_data::Uint<256>, + ::alloy_sol_types::sol_data::FixedBytes<32>, + ); + const SIGNATURE: &'static str = "Executed(uint256,bytes32)"; + const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ + 194u8, + 24u8, + 199u8, + 126u8, + 84u8, + 202u8, + 193u8, + 22u8, + 37u8, + 113u8, + 229u8, + 43u8, + 101u8, + 187u8, + 39u8, + 170u8, + 12u8, + 223u8, + 204u8, + 112u8, + 183u8, + 199u8, + 41u8, + 106u8, + 216u8, + 57u8, + 51u8, + 145u8, + 75u8, + 19u8, + 32u8, + 145u8, + ]); + const ANONYMOUS: bool = false; + #[allow(unused_variables)] + #[inline] + fn new( + topics: ::RustType, + data: as alloy_sol_types::SolType>::RustType, + ) -> Self { + Self { + nonce: topics.1, + batch: topics.2, + } + } + #[inline] + fn tokenize_body(&self) -> Self::DataToken<'_> { + () + } + #[inline] + fn topics(&self) -> ::RustType { + (Self::SIGNATURE_HASH.into(), self.nonce.clone(), self.batch.clone()) + } + #[inline] + fn encode_topics_raw( + &self, + out: &mut [alloy_sol_types::abi::token::WordToken], + ) -> alloy_sol_types::Result<()> { + if out.len() < ::COUNT { + return Err(alloy_sol_types::Error::Overrun); + } + out[0usize] = alloy_sol_types::abi::token::WordToken( + Self::SIGNATURE_HASH, + ); + out[1usize] = 
<::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::EventTopic>::encode_topic(&self.nonce); + out[2usize] = <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::EventTopic>::encode_topic(&self.batch); + Ok(()) + } + } + #[automatically_derived] + impl alloy_sol_types::private::IntoLogData for Executed { + fn to_log_data(&self) -> alloy_sol_types::private::LogData { + From::from(self) + } + fn into_log_data(self) -> alloy_sol_types::private::LogData { + From::from(&self) + } + } + #[automatically_derived] + impl From<&Executed> for alloy_sol_types::private::LogData { + #[inline] + fn from(this: &Executed) -> alloy_sol_types::private::LogData { + alloy_sol_types::SolEvent::encode_log_data(this) + } + } + }; + /**Event with signature `InInstruction(address,address,uint256,bytes)` and selector `0x346fd5cd6d19d26d3afd222f43033ecd0d5614ca64bec0aed101482cd87e922f`. +```solidity +event InInstruction(address indexed from, address indexed coin, uint256 amount, bytes instruction); +```*/ + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + #[derive(Clone)] + pub struct InInstruction { + #[allow(missing_docs)] + pub from: ::alloy_sol_types::private::Address, + #[allow(missing_docs)] + pub coin: ::alloy_sol_types::private::Address, + #[allow(missing_docs)] + pub amount: ::alloy_sol_types::private::primitives::aliases::U256, + #[allow(missing_docs)] + pub instruction: ::alloy_sol_types::private::Bytes, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[automatically_derived] + impl alloy_sol_types::SolEvent for InInstruction { + type DataTuple<'a> = ( + ::alloy_sol_types::sol_data::Uint<256>, + ::alloy_sol_types::sol_data::Bytes, + ); + type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + type TopicList = ( + alloy_sol_types::sol_data::FixedBytes<32>, + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Address, + ); + const SIGNATURE: &'static str = "InInstruction(address,address,uint256,bytes)"; + const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ + 52u8, + 111u8, + 213u8, + 205u8, + 109u8, + 25u8, + 210u8, + 109u8, + 58u8, + 253u8, + 34u8, + 47u8, + 67u8, + 3u8, + 62u8, + 205u8, + 13u8, + 86u8, + 20u8, + 202u8, + 100u8, + 190u8, + 192u8, + 174u8, + 209u8, + 1u8, + 72u8, + 44u8, + 216u8, + 126u8, + 146u8, + 47u8, + ]); + const ANONYMOUS: bool = false; + #[allow(unused_variables)] + #[inline] + fn new( + topics: ::RustType, + data: as alloy_sol_types::SolType>::RustType, + ) -> Self { + Self { + from: topics.1, + coin: topics.2, + amount: data.0, + instruction: data.1, + } + } + #[inline] + fn tokenize_body(&self) -> Self::DataToken<'_> { + ( + <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::SolType>::tokenize(&self.amount), + <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::tokenize( + &self.instruction, + ), + ) + } + #[inline] + fn topics(&self) -> ::RustType { + (Self::SIGNATURE_HASH.into(), self.from.clone(), self.coin.clone()) + } + #[inline] + fn encode_topics_raw( + &self, + out: &mut [alloy_sol_types::abi::token::WordToken], + ) -> alloy_sol_types::Result<()> { + if out.len() < ::COUNT { + return Err(alloy_sol_types::Error::Overrun); + } + out[0usize] = alloy_sol_types::abi::token::WordToken( + Self::SIGNATURE_HASH, + ); + out[1usize] = <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( + &self.from, + ); + out[2usize] = 
<::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( + &self.coin, + ); + Ok(()) + } + } + #[automatically_derived] + impl alloy_sol_types::private::IntoLogData for InInstruction { + fn to_log_data(&self) -> alloy_sol_types::private::LogData { + From::from(self) + } + fn into_log_data(self) -> alloy_sol_types::private::LogData { + From::from(&self) + } + } + #[automatically_derived] + impl From<&InInstruction> for alloy_sol_types::private::LogData { + #[inline] + fn from(this: &InInstruction) -> alloy_sol_types::private::LogData { + alloy_sol_types::SolEvent::encode_log_data(this) + } + } + }; + /**Event with signature `SeraiKeyUpdated(uint256,bytes32)` and selector `0x1b9ff0164e811045a617ae783e807501a8e27762a7cb8f2fbd027851752570b5`. +```solidity +event SeraiKeyUpdated(uint256 indexed nonce, bytes32 indexed key); +```*/ + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + #[derive(Clone)] + pub struct SeraiKeyUpdated { + #[allow(missing_docs)] + pub nonce: ::alloy_sol_types::private::primitives::aliases::U256, + #[allow(missing_docs)] + pub key: ::alloy_sol_types::private::FixedBytes<32>, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + #[automatically_derived] + impl alloy_sol_types::SolEvent for SeraiKeyUpdated { + type DataTuple<'a> = (); + type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + type TopicList = ( + alloy_sol_types::sol_data::FixedBytes<32>, + ::alloy_sol_types::sol_data::Uint<256>, + ::alloy_sol_types::sol_data::FixedBytes<32>, + ); + const SIGNATURE: &'static str = "SeraiKeyUpdated(uint256,bytes32)"; + const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ + 27u8, + 159u8, + 240u8, + 22u8, + 78u8, + 129u8, + 16u8, + 69u8, + 166u8, + 23u8, + 174u8, + 120u8, + 62u8, + 128u8, + 117u8, + 1u8, + 168u8, + 226u8, + 119u8, + 98u8, + 167u8, + 203u8, + 143u8, + 47u8, + 189u8, + 2u8, + 120u8, + 81u8, + 117u8, + 37u8, + 112u8, + 181u8, + ]); + const ANONYMOUS: bool = false; + #[allow(unused_variables)] + #[inline] + fn new( + topics: ::RustType, + data: as alloy_sol_types::SolType>::RustType, + ) -> Self { + Self { + nonce: topics.1, + key: topics.2, + } + } + #[inline] + fn tokenize_body(&self) -> Self::DataToken<'_> { + () + } + #[inline] + fn topics(&self) -> ::RustType { + (Self::SIGNATURE_HASH.into(), self.nonce.clone(), self.key.clone()) + } + #[inline] + fn encode_topics_raw( + &self, + out: &mut [alloy_sol_types::abi::token::WordToken], + ) -> alloy_sol_types::Result<()> { + if out.len() < ::COUNT { + return Err(alloy_sol_types::Error::Overrun); + } + out[0usize] = alloy_sol_types::abi::token::WordToken( + Self::SIGNATURE_HASH, + ); + out[1usize] = <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::EventTopic>::encode_topic(&self.nonce); + out[2usize] = <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::EventTopic>::encode_topic(&self.key); + Ok(()) + } + } + #[automatically_derived] + impl alloy_sol_types::private::IntoLogData for SeraiKeyUpdated { + fn to_log_data(&self) -> alloy_sol_types::private::LogData { + From::from(self) + } + fn into_log_data(self) -> alloy_sol_types::private::LogData { + From::from(&self) + } + } + #[automatically_derived] + impl From<&SeraiKeyUpdated> for alloy_sol_types::private::LogData { + #[inline] + fn from(this: &SeraiKeyUpdated) -> alloy_sol_types::private::LogData { + alloy_sol_types::SolEvent::encode_log_data(this) + } + } + 
}; + /**Constructor`. +```solidity +constructor(bytes32 initialSeraiKey); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct constructorCall { + pub initialSeraiKey: ::alloy_sol_types::private::FixedBytes<32>, + } + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::FixedBytes<32>,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::FixedBytes<32>,); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: constructorCall) -> Self { + (value.initialSeraiKey,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for constructorCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { initialSeraiKey: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolConstructor for constructorCall { + type Parameters<'a> = (::alloy_sol_types::sol_data::FixedBytes<32>,); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + ( + <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::SolType>::tokenize(&self.initialSeraiKey), + ) + } + } + }; + /**Function with signature `arbitaryCallOut(bytes)` and selector `0x3cbd2bf6`. +```solidity +function arbitaryCallOut(bytes memory code) external; +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct arbitaryCallOutCall { + pub code: ::alloy_sol_types::private::Bytes, + } + ///Container type for the return parameters of the [`arbitaryCallOut(bytes)`](arbitaryCallOutCall) function. 
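+ /// Note: `arbitary` is spelled as in the underlying Solidity source, and
+ /// the `0x3cbd2bf6` selector is derived from that exact spelling, so the
+ /// identifier must not be "corrected" in these bindings.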
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct arbitaryCallOutReturn {} + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Bytes,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::Bytes,); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: arbitaryCallOutCall) -> Self { + (value.code,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for arbitaryCallOutCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { code: tuple.0 } + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From + for UnderlyingRustTuple<'_> { + fn from(value: arbitaryCallOutReturn) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> + for arbitaryCallOutReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for arbitaryCallOutCall { + type Parameters<'a> = (::alloy_sol_types::sol_data::Bytes,); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = arbitaryCallOutReturn; + type ReturnTuple<'a> = (); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "arbitaryCallOut(bytes)"; + const SELECTOR: [u8; 4] = [60u8, 189u8, 43u8, 246u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + ( + <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::tokenize( + &self.code, + ), + ) + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `execute((uint8,bytes,address,uint256)[],(bytes32,bytes32))` and selector `0xd5f22182`. +```solidity +function execute(OutInstruction[] memory transactions, Signature memory signature) external; +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct executeCall { + pub transactions: ::alloy_sol_types::private::Vec< + ::RustType, + >, + pub signature: ::RustType, + } + ///Container type for the return parameters of the [`execute((uint8,bytes,address,uint256)[],(bytes32,bytes32))`](executeCall) function. 
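+ /// A minimal construction sketch for the corresponding [`executeCall`]
+ /// (every field value below is a placeholder, not meaningful data):
+ /// ```ignore
+ /// let call = executeCall {
+ ///     transactions: vec![OutInstruction {
+ ///         destinationType: 0u8, // `DestinationType` wraps a `uint8`
+ ///         destination: ::alloy_sol_types::private::Bytes::new(),
+ ///         coin: [0u8; 20].into(),
+ ///         value: ::alloy_sol_types::private::primitives::aliases::U256::ZERO,
+ ///     }],
+ ///     signature: Signature { c: [0u8; 32].into(), s: [0u8; 32].into() },
+ /// };
+ /// ```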
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct executeReturn {} + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = ( + ::alloy_sol_types::sol_data::Array, + Signature, + ); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::Vec< + ::RustType, + >, + ::RustType, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: executeCall) -> Self { + (value.transactions, value.signature) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for executeCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { + transactions: tuple.0, + signature: tuple.1, + } + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: executeReturn) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for executeReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for executeCall { + type Parameters<'a> = ( + ::alloy_sol_types::sol_data::Array, + Signature, + ); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = executeReturn; + type ReturnTuple<'a> = (); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "execute((uint8,bytes,address,uint256)[],(bytes32,bytes32))"; + const SELECTOR: [u8; 4] = [213u8, 242u8, 33u8, 130u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + ( + <::alloy_sol_types::sol_data::Array< + OutInstruction, + > as alloy_sol_types::SolType>::tokenize(&self.transactions), + ::tokenize(&self.signature), + ) + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `inInstruction(address,uint256,bytes)` and selector `0x0759a1a4`. +```solidity +function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable; +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct inInstructionCall { + pub coin: ::alloy_sol_types::private::Address, + pub amount: ::alloy_sol_types::private::primitives::aliases::U256, + pub instruction: ::alloy_sol_types::private::Bytes, + } + ///Container type for the return parameters of the [`inInstruction(address,uint256,bytes)`](inInstructionCall) function. 
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct inInstructionReturn {} + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = ( + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Uint<256>, + ::alloy_sol_types::sol_data::Bytes, + ); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::Address, + ::alloy_sol_types::private::primitives::aliases::U256, + ::alloy_sol_types::private::Bytes, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: inInstructionCall) -> Self { + (value.coin, value.amount, value.instruction) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for inInstructionCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { + coin: tuple.0, + amount: tuple.1, + instruction: tuple.2, + } + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: inInstructionReturn) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for inInstructionReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for inInstructionCall { + type Parameters<'a> = ( + ::alloy_sol_types::sol_data::Address, + ::alloy_sol_types::sol_data::Uint<256>, + ::alloy_sol_types::sol_data::Bytes, + ); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = inInstructionReturn; + type ReturnTuple<'a> = (); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "inInstruction(address,uint256,bytes)"; + const SELECTOR: [u8; 4] = [7u8, 89u8, 161u8, 164u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + ( + <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( + &self.coin, + ), + <::alloy_sol_types::sol_data::Uint< + 256, + > as alloy_sol_types::SolType>::tokenize(&self.amount), + <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::tokenize( + &self.instruction, + ), + ) + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `nonce()` and selector `0xaffed0e0`. +```solidity +function nonce() external view returns (uint256); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct nonceCall {} + ///Container type for the return parameters of the [`nonce()`](nonceCall) function. 
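+ /// A minimal decoding sketch (assuming `ret_bytes` holds the raw
+ /// returndata of a `nonce()` call):
+ /// ```ignore
+ /// use alloy_sol_types::SolCall;
+ /// let decoded = nonceCall::abi_decode_returns(&ret_bytes, true)?;
+ /// let current_nonce = decoded._0;
+ /// ```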
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct nonceReturn { + pub _0: ::alloy_sol_types::private::primitives::aliases::U256, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: nonceCall) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for nonceCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::primitives::aliases::U256, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: nonceReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for nonceReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for nonceCall { + type Parameters<'a> = (); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = nonceReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "nonce()"; + const SELECTOR: [u8; 4] = [175u8, 254u8, 208u8, 224u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + () + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `seraiKey()` and selector `0x9d6eea0a`. +```solidity +function seraiKey() external view returns (bytes32); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct seraiKeyCall {} + ///Container type for the return parameters of the [`seraiKey()`](seraiKeyCall) function. 
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct seraiKeyReturn { + pub _0: ::alloy_sol_types::private::FixedBytes<32>, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: seraiKeyCall) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for seraiKeyCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::FixedBytes<32>,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::FixedBytes<32>,); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: seraiKeyReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for seraiKeyReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for seraiKeyCall { + type Parameters<'a> = (); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = seraiKeyReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::FixedBytes<32>,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "seraiKey()"; + const SELECTOR: [u8; 4] = [157u8, 110u8, 234u8, 10u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + () + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `smartContractNonce()` and selector `0xc3727534`. +```solidity +function smartContractNonce() external view returns (uint256); +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct smartContractNonceCall {} + ///Container type for the return parameters of the [`smartContractNonce()`](smartContractNonceCall) function. 
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct smartContractNonceReturn { + pub _0: ::alloy_sol_types::private::primitives::aliases::U256, + } + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From + for UnderlyingRustTuple<'_> { + fn from(value: smartContractNonceCall) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> + for smartContractNonceCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::primitives::aliases::U256, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From + for UnderlyingRustTuple<'_> { + fn from(value: smartContractNonceReturn) -> Self { + (value._0,) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> + for smartContractNonceReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { _0: tuple.0 } + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for smartContractNonceCall { + type Parameters<'a> = (); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = smartContractNonceReturn; + type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "smartContractNonce()"; + const SELECTOR: [u8; 4] = [195u8, 114u8, 117u8, 52u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + () + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + /**Function with signature `updateSeraiKey(bytes32,(bytes32,bytes32))` and selector `0xb5071c6a`. +```solidity +function updateSeraiKey(bytes32 newSeraiKey, Signature memory signature) external; +```*/ + #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct updateSeraiKeyCall { + pub newSeraiKey: ::alloy_sol_types::private::FixedBytes<32>, + pub signature: ::RustType, + } + ///Container type for the return parameters of the [`updateSeraiKey(bytes32,(bytes32,bytes32))`](updateSeraiKeyCall) function. 
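+    ///
+    ///A hedged sketch of building calldata for `updateSeraiKey` (the `Signature { c, s }`
+    ///field names follow the `(bytes32,bytes32)` tuple used elsewhere in these bindings;
+    ///the zeroed values are placeholders):
+    ///
+    ///```rust,ignore
+    ///use alloy_sol_types::SolCall;
+    ///let call = updateSeraiKeyCall {
+    ///    newSeraiKey: [0u8; 32].into(),
+    ///    signature: Signature { c: [0u8; 32].into(), s: [0u8; 32].into() },
+    ///};
+    ///// abi_encode prepends the 4-byte selector (0xb5071c6a) to the encoded arguments.
+    ///let calldata: Vec<u8> = call.abi_encode();
+    ///```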
+ #[allow(non_camel_case_types, non_snake_case)] + #[derive(Clone)] + pub struct updateSeraiKeyReturn {} + #[allow(non_camel_case_types, non_snake_case, clippy::style)] + const _: () = { + use ::alloy_sol_types as alloy_sol_types; + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = ( + ::alloy_sol_types::sol_data::FixedBytes<32>, + Signature, + ); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = ( + ::alloy_sol_types::private::FixedBytes<32>, + ::RustType, + ); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From for UnderlyingRustTuple<'_> { + fn from(value: updateSeraiKeyCall) -> Self { + (value.newSeraiKey, value.signature) + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> for updateSeraiKeyCall { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self { + newSeraiKey: tuple.0, + signature: tuple.1, + } + } + } + } + { + #[doc(hidden)] + type UnderlyingSolTuple<'a> = (); + #[doc(hidden)] + type UnderlyingRustTuple<'a> = (); + #[cfg(test)] + #[allow(dead_code, unreachable_patterns)] + fn _type_assertion( + _t: alloy_sol_types::private::AssertTypeEq, + ) { + match _t { + alloy_sol_types::private::AssertTypeEq::< + ::RustType, + >(_) => {} + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From + for UnderlyingRustTuple<'_> { + fn from(value: updateSeraiKeyReturn) -> Self { + () + } + } + #[automatically_derived] + #[doc(hidden)] + impl ::core::convert::From> + for updateSeraiKeyReturn { + fn from(tuple: UnderlyingRustTuple<'_>) -> Self { + Self {} + } + } + } + #[automatically_derived] + impl alloy_sol_types::SolCall for updateSeraiKeyCall { + type Parameters<'a> = ( + ::alloy_sol_types::sol_data::FixedBytes<32>, + Signature, + ); + type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; + type Return = updateSeraiKeyReturn; + type ReturnTuple<'a> = (); + type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; + const SIGNATURE: &'static str = "updateSeraiKey(bytes32,(bytes32,bytes32))"; + const SELECTOR: [u8; 4] = [181u8, 7u8, 28u8, 106u8]; + #[inline] + fn new<'a>( + tuple: as alloy_sol_types::SolType>::RustType, + ) -> Self { + tuple.into() + } + #[inline] + fn tokenize(&self) -> Self::Token<'_> { + ( + <::alloy_sol_types::sol_data::FixedBytes< + 32, + > as alloy_sol_types::SolType>::tokenize(&self.newSeraiKey), + ::tokenize(&self.signature), + ) + } + #[inline] + fn abi_decode_returns( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) + .map(Into::into) + } + } + }; + ///Container for all the [`Router`](self) function calls. + pub enum RouterCalls { + arbitaryCallOut(arbitaryCallOutCall), + execute(executeCall), + inInstruction(inInstructionCall), + nonce(nonceCall), + seraiKey(seraiKeyCall), + smartContractNonce(smartContractNonceCall), + updateSeraiKey(updateSeraiKeyCall), + } + #[automatically_derived] + impl RouterCalls { + /// All the selectors of this enum. + /// + /// Note that the selectors might not be in the same order as the variants. + /// No guarantees are made about the order of the selectors. + /// + /// Prefer using `SolInterface` methods instead. 
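+    /// 
+    /// For example (a hedged sketch; `calldata` is an assumed `&[u8]` holding at least
+    /// the 4-byte selector), raw calldata may be dispatched as follows:
+    /// 
+    /// ```rust,ignore
+    /// use alloy_sol_types::SolInterface;
+    /// let selector: [u8; 4] = calldata[.. 4].try_into().unwrap();
+    /// // Decodes the remaining bytes into whichever RouterCalls variant matches.
+    /// let call = RouterCalls::abi_decode_raw(selector, &calldata[4 ..], true)?;
+    /// ```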
+ pub const SELECTORS: &'static [[u8; 4usize]] = &[ + [7u8, 89u8, 161u8, 164u8], + [60u8, 189u8, 43u8, 246u8], + [157u8, 110u8, 234u8, 10u8], + [175u8, 254u8, 208u8, 224u8], + [181u8, 7u8, 28u8, 106u8], + [195u8, 114u8, 117u8, 52u8], + [213u8, 242u8, 33u8, 130u8], + ]; + } + #[automatically_derived] + impl alloy_sol_types::SolInterface for RouterCalls { + const NAME: &'static str = "RouterCalls"; + const MIN_DATA_LENGTH: usize = 0usize; + const COUNT: usize = 7usize; + #[inline] + fn selector(&self) -> [u8; 4] { + match self { + Self::arbitaryCallOut(_) => { + ::SELECTOR + } + Self::execute(_) => ::SELECTOR, + Self::inInstruction(_) => { + ::SELECTOR + } + Self::nonce(_) => ::SELECTOR, + Self::seraiKey(_) => ::SELECTOR, + Self::smartContractNonce(_) => { + ::SELECTOR + } + Self::updateSeraiKey(_) => { + ::SELECTOR + } + } + } + #[inline] + fn selector_at(i: usize) -> ::core::option::Option<[u8; 4]> { + Self::SELECTORS.get(i).copied() + } + #[inline] + fn valid_selector(selector: [u8; 4]) -> bool { + Self::SELECTORS.binary_search(&selector).is_ok() + } + #[inline] + #[allow(unsafe_code, non_snake_case)] + fn abi_decode_raw( + selector: [u8; 4], + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + static DECODE_SHIMS: &[fn( + &[u8], + bool, + ) -> alloy_sol_types::Result] = &[ + { + fn inInstruction( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(RouterCalls::inInstruction) + } + inInstruction + }, + { + fn arbitaryCallOut( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(RouterCalls::arbitaryCallOut) + } + arbitaryCallOut + }, + { + fn seraiKey( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(RouterCalls::seraiKey) + } + seraiKey + }, + { + fn nonce( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(RouterCalls::nonce) + } + nonce + }, + { + fn updateSeraiKey( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(RouterCalls::updateSeraiKey) + } + updateSeraiKey + }, + { + fn smartContractNonce( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(RouterCalls::smartContractNonce) + } + smartContractNonce + }, + { + fn execute( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(RouterCalls::execute) + } + execute + }, + ]; + let Ok(idx) = Self::SELECTORS.binary_search(&selector) else { + return Err( + alloy_sol_types::Error::unknown_selector( + ::NAME, + selector, + ), + ); + }; + (unsafe { DECODE_SHIMS.get_unchecked(idx) })(data, validate) + } + #[inline] + fn abi_encoded_size(&self) -> usize { + match self { + Self::arbitaryCallOut(inner) => { + ::abi_encoded_size( + inner, + ) + } + Self::execute(inner) => { + ::abi_encoded_size(inner) + } + Self::inInstruction(inner) => { + ::abi_encoded_size( + inner, + ) + } + Self::nonce(inner) => { + ::abi_encoded_size(inner) + } + Self::seraiKey(inner) => { + ::abi_encoded_size(inner) + } + Self::smartContractNonce(inner) => { + ::abi_encoded_size( + inner, + ) + } + Self::updateSeraiKey(inner) => { + ::abi_encoded_size( + inner, + ) + } + } + } + #[inline] + fn abi_encode_raw(&self, out: &mut alloy_sol_types::private::Vec) { + match self { + Self::arbitaryCallOut(inner) => { 
+ ::abi_encode_raw( + inner, + out, + ) + } + Self::execute(inner) => { + ::abi_encode_raw(inner, out) + } + Self::inInstruction(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + Self::nonce(inner) => { + ::abi_encode_raw(inner, out) + } + Self::seraiKey(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + Self::smartContractNonce(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + Self::updateSeraiKey(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + } + } + } + ///Container for all the [`Router`](self) custom errors. + pub enum RouterErrors { + FailedTransfer(FailedTransfer), + InvalidAmount(InvalidAmount), + InvalidSignature(InvalidSignature), + } + #[automatically_derived] + impl RouterErrors { + /// All the selectors of this enum. + /// + /// Note that the selectors might not be in the same order as the variants. + /// No guarantees are made about the order of the selectors. + /// + /// Prefer using `SolInterface` methods instead. + pub const SELECTORS: &'static [[u8; 4usize]] = &[ + [44u8, 82u8, 17u8, 198u8], + [139u8, 170u8, 87u8, 159u8], + [191u8, 168u8, 113u8, 197u8], + ]; + } + #[automatically_derived] + impl alloy_sol_types::SolInterface for RouterErrors { + const NAME: &'static str = "RouterErrors"; + const MIN_DATA_LENGTH: usize = 0usize; + const COUNT: usize = 3usize; + #[inline] + fn selector(&self) -> [u8; 4] { + match self { + Self::FailedTransfer(_) => { + ::SELECTOR + } + Self::InvalidAmount(_) => { + ::SELECTOR + } + Self::InvalidSignature(_) => { + ::SELECTOR + } + } + } + #[inline] + fn selector_at(i: usize) -> ::core::option::Option<[u8; 4]> { + Self::SELECTORS.get(i).copied() + } + #[inline] + fn valid_selector(selector: [u8; 4]) -> bool { + Self::SELECTORS.binary_search(&selector).is_ok() + } + #[inline] + #[allow(unsafe_code, non_snake_case)] + fn abi_decode_raw( + selector: [u8; 4], + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + static DECODE_SHIMS: &[fn( + &[u8], + bool, + ) -> alloy_sol_types::Result] = &[ + { + fn InvalidAmount( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(RouterErrors::InvalidAmount) + } + InvalidAmount + }, + { + fn InvalidSignature( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(RouterErrors::InvalidSignature) + } + InvalidSignature + }, + { + fn FailedTransfer( + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + ::abi_decode_raw( + data, + validate, + ) + .map(RouterErrors::FailedTransfer) + } + FailedTransfer + }, + ]; + let Ok(idx) = Self::SELECTORS.binary_search(&selector) else { + return Err( + alloy_sol_types::Error::unknown_selector( + ::NAME, + selector, + ), + ); + }; + (unsafe { DECODE_SHIMS.get_unchecked(idx) })(data, validate) + } + #[inline] + fn abi_encoded_size(&self) -> usize { + match self { + Self::FailedTransfer(inner) => { + ::abi_encoded_size( + inner, + ) + } + Self::InvalidAmount(inner) => { + ::abi_encoded_size(inner) + } + Self::InvalidSignature(inner) => { + ::abi_encoded_size( + inner, + ) + } + } + } + #[inline] + fn abi_encode_raw(&self, out: &mut alloy_sol_types::private::Vec) { + match self { + Self::FailedTransfer(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + Self::InvalidAmount(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + Self::InvalidSignature(inner) => { + ::abi_encode_raw( + inner, + out, + ) + } + } + } + } + ///Container for all the [`Router`](self) events. 
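+    ///
+    ///A hedged sketch of decoding a receipt log into one of these events (`topics` and
+    ///`data` are assumed to come from an RPC-provided log; fields beyond
+    ///`Executed::nonce` are left unnamed here):
+    ///
+    ///```rust,ignore
+    ///use alloy_sol_types::SolEventInterface;
+    ///match RouterEvents::decode_raw_log(topics, data, true)? {
+    ///    RouterEvents::Executed(executed) => println!("executed nonce {}", executed.nonce),
+    ///    RouterEvents::InInstruction(_) => { /* an inbound instruction was received */ }
+    ///    RouterEvents::SeraiKeyUpdated(_) => { /* the Router's key was rotated */ }
+    ///}
+    ///```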
+ pub enum RouterEvents { + Executed(Executed), + InInstruction(InInstruction), + SeraiKeyUpdated(SeraiKeyUpdated), + } + #[automatically_derived] + impl RouterEvents { + /// All the selectors of this enum. + /// + /// Note that the selectors might not be in the same order as the variants. + /// No guarantees are made about the order of the selectors. + /// + /// Prefer using `SolInterface` methods instead. + pub const SELECTORS: &'static [[u8; 32usize]] = &[ + [ + 27u8, + 159u8, + 240u8, + 22u8, + 78u8, + 129u8, + 16u8, + 69u8, + 166u8, + 23u8, + 174u8, + 120u8, + 62u8, + 128u8, + 117u8, + 1u8, + 168u8, + 226u8, + 119u8, + 98u8, + 167u8, + 203u8, + 143u8, + 47u8, + 189u8, + 2u8, + 120u8, + 81u8, + 117u8, + 37u8, + 112u8, + 181u8, + ], + [ + 52u8, + 111u8, + 213u8, + 205u8, + 109u8, + 25u8, + 210u8, + 109u8, + 58u8, + 253u8, + 34u8, + 47u8, + 67u8, + 3u8, + 62u8, + 205u8, + 13u8, + 86u8, + 20u8, + 202u8, + 100u8, + 190u8, + 192u8, + 174u8, + 209u8, + 1u8, + 72u8, + 44u8, + 216u8, + 126u8, + 146u8, + 47u8, + ], + [ + 194u8, + 24u8, + 199u8, + 126u8, + 84u8, + 202u8, + 193u8, + 22u8, + 37u8, + 113u8, + 229u8, + 43u8, + 101u8, + 187u8, + 39u8, + 170u8, + 12u8, + 223u8, + 204u8, + 112u8, + 183u8, + 199u8, + 41u8, + 106u8, + 216u8, + 57u8, + 51u8, + 145u8, + 75u8, + 19u8, + 32u8, + 145u8, + ], + ]; + } + #[automatically_derived] + impl alloy_sol_types::SolEventInterface for RouterEvents { + const NAME: &'static str = "RouterEvents"; + const COUNT: usize = 3usize; + fn decode_raw_log( + topics: &[alloy_sol_types::Word], + data: &[u8], + validate: bool, + ) -> alloy_sol_types::Result { + match topics.first().copied() { + Some(::SIGNATURE_HASH) => { + ::decode_raw_log( + topics, + data, + validate, + ) + .map(Self::Executed) + } + Some(::SIGNATURE_HASH) => { + ::decode_raw_log( + topics, + data, + validate, + ) + .map(Self::InInstruction) + } + Some(::SIGNATURE_HASH) => { + ::decode_raw_log( + topics, + data, + validate, + ) + .map(Self::SeraiKeyUpdated) + } + _ => { + alloy_sol_types::private::Err(alloy_sol_types::Error::InvalidLog { + name: ::NAME, + log: alloy_sol_types::private::Box::new( + alloy_sol_types::private::LogData::new_unchecked( + topics.to_vec(), + data.to_vec().into(), + ), + ), + }) + } + } + } + } + #[automatically_derived] + impl alloy_sol_types::private::IntoLogData for RouterEvents { + fn to_log_data(&self) -> alloy_sol_types::private::LogData { + match self { + Self::Executed(inner) => { + alloy_sol_types::private::IntoLogData::to_log_data(inner) + } + Self::InInstruction(inner) => { + alloy_sol_types::private::IntoLogData::to_log_data(inner) + } + Self::SeraiKeyUpdated(inner) => { + alloy_sol_types::private::IntoLogData::to_log_data(inner) + } + } + } + fn into_log_data(self) -> alloy_sol_types::private::LogData { + match self { + Self::Executed(inner) => { + alloy_sol_types::private::IntoLogData::into_log_data(inner) + } + Self::InInstruction(inner) => { + alloy_sol_types::private::IntoLogData::into_log_data(inner) + } + Self::SeraiKeyUpdated(inner) => { + alloy_sol_types::private::IntoLogData::into_log_data(inner) + } + } + } + } +} diff --git a/processor/ethereum/contracts/src/lib.rs b/processor/ethereum/contracts/src/lib.rs index d8de29b3..45176067 100644 --- a/processor/ethereum/contracts/src/lib.rs +++ b/processor/ethereum/contracts/src/lib.rs @@ -1,46 +1,21 @@ -use alloy_sol_types::sol; - #[rustfmt::skip] #[expect(warnings)] #[expect(needless_pass_by_value)] #[expect(clippy::all)] #[expect(clippy::ignored_unit_patterns)] 
#[expect(clippy::redundant_closure_for_method_calls)] -mod erc20_container { - use super::*; - sol!("contracts/IERC20.sol"); -} +mod abigen; + pub mod erc20 { - pub const BYTECODE: &str = include_str!("../artifacts/Deployer.bin"); - pub use super::erc20_container::IERC20::*; -} - -#[rustfmt::skip] -#[expect(warnings)] -#[expect(needless_pass_by_value)] -#[expect(clippy::all)] -#[expect(clippy::ignored_unit_patterns)] -#[expect(clippy::redundant_closure_for_method_calls)] -mod deployer_container { - use super::*; - sol!("contracts/Deployer.sol"); + pub use super::abigen::erc20::IERC20::*; } pub mod deployer { - pub const BYTECODE: &str = include_str!("../artifacts/Deployer.bin"); - pub use super::deployer_container::Deployer::*; -} - -#[rustfmt::skip] -#[expect(warnings)] -#[expect(needless_pass_by_value)] -#[expect(clippy::all)] -#[expect(clippy::ignored_unit_patterns)] -#[expect(clippy::redundant_closure_for_method_calls)] -mod router_container { - use super::*; - sol!(Router, "artifacts/Router.abi"); + pub const BYTECODE: &str = + include_str!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-contracts/Deployer.bin")); + pub use super::abigen::deployer::Deployer::*; } pub mod router { - pub const BYTECODE: &str = include_str!("../artifacts/Router.bin"); - pub use super::router_container::Router::*; + pub const BYTECODE: &str = + include_str!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-contracts/Router.bin")); + pub use super::abigen::router::Router::*; } diff --git a/processor/ethereum/ethereum-serai/Cargo.toml b/processor/ethereum/ethereum-serai/Cargo.toml index f0ea323f..a2bec481 100644 --- a/processor/ethereum/ethereum-serai/Cargo.toml +++ b/processor/ethereum/ethereum-serai/Cargo.toml @@ -38,6 +38,7 @@ alloy-provider = { version = "0.3", default-features = false } alloy-node-bindings = { version = "0.3", default-features = false, optional = true } +ethereum-schnorr-contract = { path = "../../../networks/ethereum/schnorr", default-features = false } contracts = { package = "serai-processor-ethereum-contracts", path = "../contracts" } [dev-dependencies] diff --git a/processor/ethereum/ethereum-serai/src/crypto.rs b/processor/ethereum/ethereum-serai/src/crypto.rs index 3366b744..d013eeff 100644 --- a/processor/ethereum/ethereum-serai/src/crypto.rs +++ b/processor/ethereum/ethereum-serai/src/crypto.rs @@ -13,6 +13,8 @@ use frost::{ curve::{Ciphersuite, Secp256k1}, }; +pub use ethereum_schnorr_contract::*; + use alloy_core::primitives::{Parity, Signature as AlloySignature}; use alloy_consensus::{SignableTransaction, Signed, TxLegacy}; @@ -77,11 +79,3 @@ impl Hram for EthereumHram { >::reduce_bytes(&keccak256(&data).into()) } } - -impl From<&Signature> for AbiSignature { - fn from(sig: &Signature) -> AbiSignature { - let c: [u8; 32] = sig.c.to_repr().into(); - let s: [u8; 32] = sig.s.to_repr().into(); - AbiSignature { c: c.into(), s: s.into() } - } -} diff --git a/processor/ethereum/ethereum-serai/src/machine.rs b/processor/ethereum/ethereum-serai/src/machine.rs index 0d5dc7a5..b9a0628e 100644 --- a/processor/ethereum/ethereum-serai/src/machine.rs +++ b/processor/ethereum/ethereum-serai/src/machine.rs @@ -236,7 +236,7 @@ impl RouterCommand { writer.write_all(&[0])?; writer.write_all(&chain_id.as_le_bytes())?; writer.write_all(&nonce.as_le_bytes())?; - writer.write_all(&key.A.to_bytes()) + writer.write_all(&key.point().to_bytes()) } RouterCommand::Execute { chain_id, nonce, outs } => { writer.write_all(&[1])?; @@ -406,9 +406,9 @@ impl SignatureMachine for 
RouterCommandSignatureMachine {
     self,
     shares: HashMap<Participant, Self::SignatureShare>,
   ) -> Result<SignedRouterCommand, FrostError> {
-    let sig = self.machine.complete(shares)?;
-    let signature = Signature::new(&self.key, &self.command.msg(), sig)
-      .expect("machine produced an invalid signature");
+    let signature = self.machine.complete(shares)?;
+    let signature = Signature::new(signature).expect("machine produced an invalid signature");
+    assert!(signature.verify(&self.key, &self.command.msg()));
     Ok(SignedRouterCommand { command: self.command, signature })
   }
 }
diff --git a/processor/ethereum/ethereum-serai/src/router.rs b/processor/ethereum/ethereum-serai/src/router.rs
index 95866e67..3dbd8fa8 100644
--- a/processor/ethereum/ethereum-serai/src/router.rs
+++ b/processor/ethereum/ethereum-serai/src/router.rs
@@ -127,7 +127,6 @@ impl InInstruction {
 pub struct Executed {
   pub tx_id: [u8; 32],
   pub nonce: u64,
-  pub signature: [u8; 64],
 }
 
 /// The contract Serai uses to manage its state.
@@ -142,7 +141,7 @@ impl Router {
   pub(crate) fn init_code(key: &PublicKey) -> Vec<u8> {
     let mut bytecode = Self::code();
     // Append the constructor arguments
-    bytecode.extend((abi::constructorCall { _seraiKey: key.eth_repr().into() }).abi_encode());
+    bytecode.extend((abi::constructorCall { initialSeraiKey: key.eth_repr().into() }).abi_encode());
     bytecode
   }
 
@@ -392,13 +391,9 @@ impl Router {
       let log =
         log.log_decode::<SeraiKeyUpdated>().map_err(|_| Error::ConnectionError)?.inner.data;
 
-      let mut signature = [0; 64];
-      signature[.. 32].copy_from_slice(log.signature.c.as_ref());
-      signature[32 ..].copy_from_slice(log.signature.s.as_ref());
       res.push(Executed {
         tx_id,
         nonce: log.nonce.try_into().map_err(|_| Error::ConnectionError)?,
-        signature,
       });
     }
   }
@@ -418,13 +413,9 @@ impl Router {
       let log =
         log.log_decode::<Executed>().map_err(|_| Error::ConnectionError)?.inner.data;
 
-      let mut signature = [0; 64];
-      signature[.. 32].copy_from_slice(log.signature.c.as_ref());
-      signature[32 ..].copy_from_slice(log.signature.s.as_ref());
       res.push(Executed {
         tx_id,
         nonce: log.nonce.try_into().map_err(|_| Error::ConnectionError)?,
-        signature,
       });
     }
   }

From eb9bce6862836a0b89f24035e0258c73e002e811 Mon Sep 17 00:00:00 2001
From: Luke Parker <lukeparker5132@gmail.com>
Date: Sun, 15 Sep 2024 12:48:09 -0400
Subject: [PATCH 140/368] Remove OutInstruction's data field

It makes sense for networks which support arbitrary data to do so as part of
their address. This reduces the ability to perform DoSs, achieves better
performance, and better uses the type system (as networks we don't support
data on no longer have a data field).
Updates the Ethereum address definition in serai-client accordingly
---
 .../ethereum/contracts/contracts/Router.sol   |  2 +-
 processor/primitives/src/payment.rs           | 16 +---
 processor/scanner/src/scan/mod.rs             | 11 ---
 processor/scanner/src/substrate/mod.rs        |  2 +-
 processor/scheduler/smart-contract/src/lib.rs |  2 +-
 .../scheduler/utxo/primitives/src/tree.rs     |  8 +-
 processor/scheduler/utxo/standard/src/lib.rs  |  4 +-
 .../utxo/transaction-chaining/src/lib.rs      |  4 +-
 processor/src/tests/signer.rs                 |  1 -
 processor/src/tests/wallet.rs                 |  2 -
 substrate/client/src/networks/ethereum.rs     | 96 +++++++++++++++----
 substrate/client/tests/burn.rs                | 58 ++++++-----
 substrate/coins/primitives/src/lib.rs         |  4 +-
 substrate/in-instructions/pallet/src/lib.rs   |  6 +-
 substrate/primitives/src/lib.rs               | 50 +---------
 tests/coordinator/src/tests/sign.rs           |  1 -
 tests/full-stack/src/tests/mint_and_burn.rs   |  2 +-
 tests/processor/src/tests/send.rs             |  2 +-
 18 files changed, 121 insertions(+), 150 deletions(-)

diff --git a/processor/ethereum/contracts/contracts/Router.sol b/processor/ethereum/contracts/contracts/Router.sol
index 65541a10..1d084698 100644
--- a/processor/ethereum/contracts/contracts/Router.sol
+++ b/processor/ethereum/contracts/contracts/Router.sol
@@ -192,7 +192,7 @@ contract Router {
       _transferOut(nextAddress, transactions[i].coin, transactions[i].value);
 
       // Perform the calls with a set gas budget
-      (uint24 gas, bytes memory code) = abi.decode(transactions[i].destination, (uint24, bytes));
+      (uint32 gas, bytes memory code) = abi.decode(transactions[i].destination, (uint32, bytes));
       address(this).call{ gas: gas }(abi.encodeWithSelector(Router.arbitaryCallOut.selector, code));
 
diff --git a/processor/primitives/src/payment.rs b/processor/primitives/src/payment.rs
index 4c1e04f4..59b10f7f 100644
--- a/processor/primitives/src/payment.rs
+++ b/processor/primitives/src/payment.rs
@@ -3,7 +3,7 @@ use std::io;
 use scale::{Encode, Decode, IoReader};
 use borsh::{BorshSerialize, BorshDeserialize};
 
-use serai_primitives::{Balance, Data};
+use serai_primitives::Balance;
 use serai_coins_primitives::OutInstructionWithBalance;
 
 use crate::Address;
@@ -13,7 +13,6 @@ use crate::Address;
 pub struct Payment<A: Address> {
   address: A,
   balance: Balance,
-  data: Option<Vec<u8>>,
 }
 
 impl<A: Address> TryFrom<OutInstructionWithBalance> for Payment<A> {
@@ -22,15 +21,14 @@ impl<A: Address> TryFrom<OutInstructionWithBalance> for Payment<A> {
     Ok(Payment {
       address: out_instruction_with_balance.instruction.address.try_into().map_err(|_| ())?,
       balance: out_instruction_with_balance.balance,
-      data: out_instruction_with_balance.instruction.data.map(Data::consume),
     })
   }
 }
 
 impl<A: Address> Payment<A> {
   /// Create a new Payment.
-  pub fn new(address: A, balance: Balance, data: Option<Vec<u8>>) -> Self {
-    Payment { address, balance, data }
+  pub fn new(address: A, balance: Balance) -> Self {
+    Payment { address, balance }
   }
 
   /// The address to pay.
@@ -41,24 +39,18 @@ impl<A: Address> Payment<A> {
   pub fn balance(&self) -> Balance {
     self.balance
   }
-  /// The data to associate with this payment.
-  pub fn data(&self) -> &Option<Vec<u8>> {
-    &self.data
-  }
 
   /// Read a Payment.
   pub fn read(reader: &mut impl io::Read) -> io::Result<Self> {
     let address = A::deserialize_reader(reader)?;
     let reader = &mut IoReader(reader);
     let balance = Balance::decode(reader).map_err(io::Error::other)?;
-    let data = Option::<Vec<u8>>::decode(reader).map_err(io::Error::other)?;
-    Ok(Self { address, balance, data })
+    Ok(Self { address, balance })
   }
 
   /// Write the Payment.
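+  ///
+  /// A hedged round-trip sketch (assumes an `A: Address` value `addr` and a
+  /// `balance: Balance` already in scope):
+  ///
+  /// ```rust,ignore
+  /// let payment = Payment::new(addr, balance);
+  /// let mut buf = Vec::new();
+  /// payment.write(&mut buf)?;
+  /// let restored = Payment::<A>::read(&mut buf.as_slice())?;
+  /// ```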
pub fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { self.address.serialize(writer)?; self.balance.encode_to(writer); - self.data.encode_to(writer); Ok(()) } } diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index c54dc3e0..b235ff15 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -4,7 +4,6 @@ use std::collections::HashMap; use scale::Decode; use serai_db::{Get, DbTxn, Db}; -use serai_primitives::MAX_DATA_LEN; use serai_in_instructions_primitives::{ Shorthand, RefundableInInstruction, InInstruction, InInstructionWithBalance, }; @@ -56,16 +55,6 @@ fn in_instruction_from_output( let presumed_origin = output.presumed_origin(); let mut data = output.data(); - let max_data_len = usize::try_from(MAX_DATA_LEN).unwrap(); - if data.len() > max_data_len { - log::info!( - "data in output {} exceeded MAX_DATA_LEN ({MAX_DATA_LEN}): {}. skipping", - hex::encode(output.id()), - data.len(), - ); - return (presumed_origin, None); - } - let shorthand = match Shorthand::decode(&mut data) { Ok(shorthand) => shorthand, Err(e) => { diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index a7302e5c..89186c69 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -142,7 +142,7 @@ impl ContinuallyRan for SubstrateTask { if let Some(report::ReturnInformation { address, balance }) = return_information { burns.push(OutInstructionWithBalance { - instruction: OutInstruction { address: address.into(), data: None }, + instruction: OutInstruction { address: address.into() }, balance, }); } diff --git a/processor/scheduler/smart-contract/src/lib.rs b/processor/scheduler/smart-contract/src/lib.rs index 7630a026..0c9c690b 100644 --- a/processor/scheduler/smart-contract/src/lib.rs +++ b/processor/scheduler/smart-contract/src/lib.rs @@ -130,7 +130,7 @@ impl> SchedulerTrait for S .returns() .iter() .map(|to_return| { - Payment::new(to_return.address().clone(), to_return.output().balance(), None) + Payment::new(to_return.address().clone(), to_return.output().balance()) }) .collect::>(), ), diff --git a/processor/scheduler/utxo/primitives/src/tree.rs b/processor/scheduler/utxo/primitives/src/tree.rs index b52f3ba3..d5b47309 100644 --- a/processor/scheduler/utxo/primitives/src/tree.rs +++ b/processor/scheduler/utxo/primitives/src/tree.rs @@ -115,11 +115,7 @@ impl TreeTransaction { .filter_map(|(payment, amount)| { amount.map(|amount| { // The existing payment, with the new amount - Payment::new( - payment.address().clone(), - Balance { coin, amount: Amount(amount) }, - payment.data().clone(), - ) + Payment::new(payment.address().clone(), Balance { coin, amount: Amount(amount) }) }) }) .collect() @@ -130,7 +126,7 @@ impl TreeTransaction { .filter_map(|amount| { amount.map(|amount| { // A branch output with the new amount - Payment::new(branch_address.clone(), Balance { coin, amount: Amount(amount) }, None) + Payment::new(branch_address.clone(), Balance { coin, amount: Amount(amount) }) }) }) .collect() diff --git a/processor/scheduler/utxo/standard/src/lib.rs b/processor/scheduler/utxo/standard/src/lib.rs index dc2ccb06..e826c300 100644 --- a/processor/scheduler/utxo/standard/src/lib.rs +++ b/processor/scheduler/utxo/standard/src/lib.rs @@ -489,7 +489,7 @@ impl> SchedulerTrait for Schedul &mut 0, block, vec![forward.clone()], - vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance(), None)], + 
vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance())],
           None,
         )
         .await?
@@ -501,7 +501,7 @@
     for to_return in update.returns() {
       let key = to_return.output().key();
       let out_instruction =
-        Payment::new(to_return.address().clone(), to_return.output().balance(), None);
+        Payment::new(to_return.address().clone(), to_return.output().balance());
       let Some(plan) = self
         .planner
         .plan_transaction_with_fee_amortization(
diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs
index 93bdf1f3..bb39dcd3 100644
--- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs
+++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs
@@ -507,7 +507,7 @@ impl>> Sched
           &mut 0,
           block,
           vec![forward.clone()],
-          vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance(), None)],
+          vec![Payment::new(P::forwarding_address(forward_to_key), forward.balance())],
           None,
         )
         .await?
@@ -519,7 +519,7 @@ impl>> Sched
     for to_return in update.returns() {
       let key = to_return.output().key();
      let out_instruction =
-        Payment::new(to_return.address().clone(), to_return.output().balance(), None);
+        Payment::new(to_return.address().clone(), to_return.output().balance());
       let Some(plan) = self
         .planner
         .plan_transaction_with_fee_amortization(
diff --git a/processor/src/tests/signer.rs b/processor/src/tests/signer.rs
index 77307ef2..6b445608 100644
--- a/processor/src/tests/signer.rs
+++ b/processor/src/tests/signer.rs
@@ -184,7 +184,6 @@ pub async fn test_signer(
   let mut scheduler = N::Scheduler::new::<MemDb>(&mut txn, key, N::NETWORK);
   let payments = vec![Payment {
     address: N::external_address(&network, key).await,
-    data: None,
     balance: Balance {
       coin: match N::NETWORK {
         NetworkId::Serai => panic!("test_signer called with Serai"),
diff --git a/processor/src/tests/wallet.rs b/processor/src/tests/wallet.rs
index 86a27349..0451f30c 100644
--- a/processor/src/tests/wallet.rs
+++ b/processor/src/tests/wallet.rs
@@ -88,7 +88,6 @@ pub async fn test_wallet(
     outputs.clone(),
     vec![Payment {
       address: N::external_address(&network, key).await,
-      data: None,
       balance: Balance {
         coin: match N::NETWORK {
           NetworkId::Serai => panic!("test_wallet called with Serai"),
@@ -116,7 +115,6 @@
     plans[0].payments,
     vec![Payment {
       address: N::external_address(&network, key).await,
-      data: None,
       balance: Balance {
         coin: match N::NETWORK {
           NetworkId::Serai => panic!("test_wallet called with Serai"),
diff --git a/substrate/client/src/networks/ethereum.rs b/substrate/client/src/networks/ethereum.rs
index 09285169..28ada635 100644
--- a/substrate/client/src/networks/ethereum.rs
+++ b/substrate/client/src/networks/ethereum.rs
@@ -1,35 +1,93 @@
-use core::{str::FromStr, fmt};
+use core::str::FromStr;
+use std::io::Read;
 
 use borsh::{BorshSerialize, BorshDeserialize};
 
-use crate::primitives::ExternalAddress;
+use crate::primitives::{MAX_ADDRESS_LEN, ExternalAddress};
 
-/// A representation of an Ethereum address.
-#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
-pub struct Address([u8; 20]);
+#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+pub struct ContractDeployment {
+  /// The gas limit to use for this contract's execution.
+  ///
+  /// This MUST be less than the Serai gas limit. The cost of it will be deducted from the amount
+  /// transferred.
+  gas: u32,
+  /// The initialization code of the contract to deploy.
+  ///
+  /// This contract will be deployed (executing the initialization code). No further calls will
+  /// be made.
+  code: Vec<u8>,
+}
 
-impl From<[u8; 20]> for Address {
-  fn from(address: [u8; 20]) -> Self {
-    Self(address)
+/// A contract to deploy, enabling executing arbitrary code.
+impl ContractDeployment {
+  pub fn new(gas: u32, code: Vec<u8>) -> Option<Self> {
+    // The max address length, minus the type byte, minus the size of the gas
+    const MAX_CODE_LEN: usize = (MAX_ADDRESS_LEN as usize) - (1 + core::mem::size_of::<u32>());
+    if code.len() > MAX_CODE_LEN {
+      None?;
+    }
+    Some(Self { gas, code })
+  }
+}
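+
+// A hedged sketch of the intended construction (the values are placeholders): the
+// data which previously rode alongside a payment now lives in the address itself.
+//
+//   let deployment = ContractDeployment::new(
+//     100_000,          // gas limit for executing the deployed contract
+//     vec![0x60, 0x80], // placeholder EVM initialization code
+//   ).expect("code fit within MAX_ADDRESS_LEN");
+//   let address = Address::Contract(deployment);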
-impl From<Address> for [u8; 20] {
-  fn from(address: Address) -> Self {
-    address.0
+/// A representation of an Ethereum address.
+#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+pub enum Address {
+  /// A traditional address.
+  Address([u8; 20]),
+  /// A contract to deploy, enabling executing arbitrary code.
+  Contract(ContractDeployment),
+}
+
+impl From<[u8; 20]> for Address {
+  fn from(address: [u8; 20]) -> Self {
+    Address::Address(address)
+  }
 }
 
 impl TryFrom<ExternalAddress> for Address {
   type Error = ();
   fn try_from(data: ExternalAddress) -> Result<Address, ()> {
-    Ok(Self(data.as_ref().try_into().map_err(|_| ())?))
+    let mut kind = [0xff];
+    let mut reader: &[u8] = data.as_ref();
+    reader.read_exact(&mut kind).map_err(|_| ())?;
+    Ok(match kind[0] {
+      0 => {
+        let mut address = [0xff; 20];
+        reader.read_exact(&mut address).map_err(|_| ())?;
+        Address::Address(address)
+      }
+      1 => {
+        let mut gas = [0xff; 4];
+        reader.read_exact(&mut gas).map_err(|_| ())?;
+        // The code is whatever's left, since the ExternalAddress is a delimited container of
+        // appropriately bounded length
+        Address::Contract(ContractDeployment {
+          gas: u32::from_le_bytes(gas),
+          code: reader.to_vec(),
+        })
+      }
+      _ => Err(())?,
+    })
   }
 }
 
 impl From<Address>
for ExternalAddress { fn from(address: Address) -> ExternalAddress { - // This is 20 bytes which is less than MAX_ADDRESS_LEN - ExternalAddress::new(address.0.to_vec()).unwrap() + let mut res = Vec::with_capacity(1 + 20); + match address { + Address::Address(address) => { + res.push(0); + res.extend(&address); + } + Address::Contract(ContractDeployment { gas, code }) => { + res.push(1); + res.extend(&gas.to_le_bytes()); + res.extend(&code); + } + } + // We only construct addresses whose code is small enough this can safely be constructed + ExternalAddress::new(res).unwrap() } } @@ -40,12 +98,8 @@ impl FromStr for Address { if address.len() != 40 { Err(())? }; - Ok(Self(hex::decode(address.to_lowercase()).map_err(|_| ())?.try_into().unwrap())) - } -} - -impl fmt::Display for Address { - fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { - write!(f, "0x{}", hex::encode(self.0)) + Ok(Address::Address( + hex::decode(address.to_lowercase()).map_err(|_| ())?.try_into().map_err(|_| ())?, + )) } } diff --git a/substrate/client/tests/burn.rs b/substrate/client/tests/burn.rs index a30dabec..b8b849d3 100644 --- a/substrate/client/tests/burn.rs +++ b/substrate/client/tests/burn.rs @@ -12,7 +12,7 @@ use sp_core::Pair; use serai_client::{ primitives::{ - Amount, NetworkId, Coin, Balance, BlockHash, SeraiAddress, Data, ExternalAddress, + Amount, NetworkId, Coin, Balance, BlockHash, SeraiAddress, ExternalAddress, insecure_pair_from_name, }, in_instructions::{ @@ -55,39 +55,35 @@ serai_test!( let block = provide_batch(&serai, batch.clone()).await; let instruction = { - let serai = serai.as_of(block); - let batches = serai.in_instructions().batch_events().await.unwrap(); - assert_eq!( - batches, - vec![InInstructionsEvent::Batch { - network, - id, - block: block_hash, - instructions_hash: Blake2b::::digest(batch.instructions.encode()).into(), - }] - ); + let serai = serai.as_of(block); + let batches = serai.in_instructions().batch_events().await.unwrap(); + assert_eq!( + batches, + vec![InInstructionsEvent::Batch { + network, + id, + block: block_hash, + instructions_hash: Blake2b::::digest(batch.instructions.encode()).into(), + }] + ); - assert_eq!( - serai.coins().mint_events().await.unwrap(), - vec![CoinsEvent::Mint { to: address, balance }] - ); - assert_eq!(serai.coins().coin_supply(coin).await.unwrap(), amount); - assert_eq!(serai.coins().coin_balance(coin, address).await.unwrap(), amount); + assert_eq!( + serai.coins().mint_events().await.unwrap(), + vec![CoinsEvent::Mint { to: address, balance }] + ); + assert_eq!(serai.coins().coin_supply(coin).await.unwrap(), amount); + assert_eq!(serai.coins().coin_balance(coin, address).await.unwrap(), amount); - // Now burn it - let mut rand_bytes = vec![0; 32]; - OsRng.fill_bytes(&mut rand_bytes); - let external_address = ExternalAddress::new(rand_bytes).unwrap(); + // Now burn it + let mut rand_bytes = vec![0; 32]; + OsRng.fill_bytes(&mut rand_bytes); + let external_address = ExternalAddress::new(rand_bytes).unwrap(); - let mut rand_bytes = vec![0; 32]; - OsRng.fill_bytes(&mut rand_bytes); - let data = Data::new(rand_bytes).unwrap(); - - OutInstructionWithBalance { - balance, - instruction: OutInstruction { address: external_address, data: Some(data) }, - } -}; + OutInstructionWithBalance { + balance, + instruction: OutInstruction { address: external_address }, + } + }; let block = publish_tx( &serai, diff --git a/substrate/coins/primitives/src/lib.rs b/substrate/coins/primitives/src/lib.rs index a7b45cf0..53db7382 100644 --- 
a/substrate/coins/primitives/src/lib.rs +++ b/substrate/coins/primitives/src/lib.rs @@ -13,17 +13,17 @@ use serde::{Serialize, Deserialize}; use scale::{Encode, Decode, MaxEncodedLen}; use scale_info::TypeInfo; -use serai_primitives::{Balance, SeraiAddress, ExternalAddress, Data, system_address}; +use serai_primitives::{Balance, SeraiAddress, ExternalAddress, system_address}; pub const FEE_ACCOUNT: SeraiAddress = system_address(b"Coins-fees"); +// TODO: Replace entirely with just Address #[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, MaxEncodedLen, TypeInfo)] #[cfg_attr(feature = "std", derive(Zeroize))] #[cfg_attr(feature = "borsh", derive(BorshSerialize, BorshDeserialize))] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] pub struct OutInstruction { pub address: ExternalAddress, - pub data: Option, } #[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, MaxEncodedLen, TypeInfo)] diff --git a/substrate/in-instructions/pallet/src/lib.rs b/substrate/in-instructions/pallet/src/lib.rs index f90ae412..1cb05c40 100644 --- a/substrate/in-instructions/pallet/src/lib.rs +++ b/substrate/in-instructions/pallet/src/lib.rs @@ -205,11 +205,7 @@ pub mod pallet { let coin_balance = Coins::::balance(IN_INSTRUCTION_EXECUTOR.into(), out_balance.coin); let instruction = OutInstructionWithBalance { - instruction: OutInstruction { - address: out_address.as_external().unwrap(), - // TODO: Properly pass data. Replace address with an OutInstruction entirely? - data: None, - }, + instruction: OutInstruction { address: out_address.as_external().unwrap() }, balance: Balance { coin: out_balance.coin, amount: coin_balance }, }; Coins::::burn_with_instruction(origin.into(), instruction)?; diff --git a/substrate/primitives/src/lib.rs b/substrate/primitives/src/lib.rs index 2cf37e00..b2515a7e 100644 --- a/substrate/primitives/src/lib.rs +++ b/substrate/primitives/src/lib.rs @@ -59,10 +59,7 @@ pub fn borsh_deserialize_bounded_vec for ExternalAddress { } } -// Should be enough for a Uniswap v3 call -pub const MAX_DATA_LEN: u32 = 512; -#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, MaxEncodedLen, TypeInfo)] -#[cfg_attr(feature = "borsh", derive(BorshSerialize, BorshDeserialize))] -#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] -pub struct Data( - #[cfg_attr( - feature = "borsh", - borsh( - serialize_with = "borsh_serialize_bounded_vec", - deserialize_with = "borsh_deserialize_bounded_vec" - ) - )] - BoundedVec>, -); - -#[cfg(feature = "std")] -impl Zeroize for Data { - fn zeroize(&mut self) { - self.0.as_mut().zeroize() - } -} - -impl Data { - #[cfg(feature = "std")] - pub fn new(data: Vec) -> Result { - Ok(Data(data.try_into().map_err(|_| "data length exceeds {MAX_DATA_LEN}")?)) - } - - pub fn data(&self) -> &[u8] { - self.0.as_ref() - } - - #[cfg(feature = "std")] - pub fn consume(self) -> Vec { - self.0.into_inner() - } -} - -impl AsRef<[u8]> for Data { - fn as_ref(&self) -> &[u8] { - self.0.as_ref() - } -} - /// Lexicographically reverses a given byte array. 
pub fn reverse_lexicographic_order(bytes: [u8; N]) -> [u8; N] { let mut res = [0u8; N]; diff --git a/tests/coordinator/src/tests/sign.rs b/tests/coordinator/src/tests/sign.rs index db8a7203..6e9142fe 100644 --- a/tests/coordinator/src/tests/sign.rs +++ b/tests/coordinator/src/tests/sign.rs @@ -247,7 +247,6 @@ async fn sign_test() { balance, instruction: OutInstruction { address: ExternalAddress::new(b"external".to_vec()).unwrap(), - data: None, }, }; serai diff --git a/tests/full-stack/src/tests/mint_and_burn.rs b/tests/full-stack/src/tests/mint_and_burn.rs index ce19808f..8987facc 100644 --- a/tests/full-stack/src/tests/mint_and_burn.rs +++ b/tests/full-stack/src/tests/mint_and_burn.rs @@ -493,7 +493,7 @@ async fn mint_and_burn_test() { move |nonce, coin, amount, address| async move { let out_instruction = OutInstructionWithBalance { balance: Balance { coin, amount: Amount(amount) }, - instruction: OutInstruction { address, data: None }, + instruction: OutInstruction { address }, }; serai diff --git a/tests/processor/src/tests/send.rs b/tests/processor/src/tests/send.rs index 8dfb5353..4c811e2b 100644 --- a/tests/processor/src/tests/send.rs +++ b/tests/processor/src/tests/send.rs @@ -246,7 +246,7 @@ fn send_test() { }, block: substrate_block_num, burns: vec![OutInstructionWithBalance { - instruction: OutInstruction { address: wallet.address(), data: None }, + instruction: OutInstruction { address: wallet.address() }, balance: Balance { coin: balance_sent.coin, amount: amount_minted }, }], batches: vec![batch.batch.id], From 4bcea31c2a523be44cc0465fa5d195d9d2fec963 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 15 Sep 2024 17:13:10 -0400 Subject: [PATCH 141/368] Break Ethereum Deployer into crate --- .github/workflows/tests.yml | 3 + Cargo.lock | 26 +++++ Cargo.toml | 2 + deny.toml | 4 +- .../ethereum/contracts/contracts/Deployer.sol | 52 --------- processor/ethereum/contracts/src/lib.rs | 5 - processor/ethereum/deployer/Cargo.toml | 34 ++++++ processor/ethereum/deployer/LICENSE | 15 +++ processor/ethereum/deployer/README.md | 23 ++++ processor/ethereum/deployer/build.rs | 5 + .../ethereum/deployer/contracts/Deployer.sol | 81 ++++++++++++++ processor/ethereum/deployer/src/lib.rs | 104 ++++++++++++++++++ processor/ethereum/ethereum-serai/Cargo.toml | 2 +- .../ethereum/ethereum-serai/src/crypto.rs | 23 ++-- processor/ethereum/ethereum-serai/src/lib.rs | 2 + .../ethereum/ethereum-serai/src/machine.rs | 13 +++ processor/ethereum/primitives/Cargo.toml | 24 ++++ processor/ethereum/primitives/LICENSE | 15 +++ processor/ethereum/primitives/README.md | 3 + processor/ethereum/primitives/src/lib.rs | 49 +++++++++ 20 files changed, 411 insertions(+), 74 deletions(-) delete mode 100644 processor/ethereum/contracts/contracts/Deployer.sol create mode 100644 processor/ethereum/deployer/Cargo.toml create mode 100644 processor/ethereum/deployer/LICENSE create mode 100644 processor/ethereum/deployer/README.md create mode 100644 processor/ethereum/deployer/build.rs create mode 100644 processor/ethereum/deployer/contracts/Deployer.sol create mode 100644 processor/ethereum/deployer/src/lib.rs create mode 100644 processor/ethereum/primitives/Cargo.toml create mode 100644 processor/ethereum/primitives/LICENSE create mode 100644 processor/ethereum/primitives/README.md create mode 100644 processor/ethereum/primitives/src/lib.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 9b90ee91..382d9a2f 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -53,6 
+53,9 @@ jobs: -p serai-processor-bin \ -p serai-bitcoin-processor \ -p serai-processor-ethereum-contracts \ + -p serai-processor-ethereum-primitives \ + -p serai-processor-ethereum-deployer \ + -p ethereum-serai \ -p serai-ethereum-processor \ -p serai-monero-processor \ -p tendermint-machine \ diff --git a/Cargo.lock b/Cargo.lock index d6224093..0253cf32 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8721,6 +8721,32 @@ dependencies = [ "syn-solidity", ] +[[package]] +name = "serai-processor-ethereum-deployer" +version = "0.1.0" +dependencies = [ + "alloy-consensus", + "alloy-core", + "alloy-provider", + "alloy-rpc-types-eth", + "alloy-simple-request-transport", + "alloy-sol-macro", + "alloy-sol-types", + "alloy-transport", + "build-solidity-contracts", + "serai-processor-ethereum-primitives", +] + +[[package]] +name = "serai-processor-ethereum-primitives" +version = "0.1.0" +dependencies = [ + "alloy-consensus", + "alloy-core", + "group", + "k256", +] + [[package]] name = "serai-processor-frost-attempt-manager" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index b30112b2..c0010659 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -88,6 +88,8 @@ members = [ "processor/bin", "processor/bitcoin", "processor/ethereum/contracts", + "processor/ethereum/primitives", + "processor/ethereum/deployer", "processor/ethereum/ethereum-serai", "processor/ethereum", "processor/monero", diff --git a/deny.toml b/deny.toml index ec948fef..8b630fb9 100644 --- a/deny.toml +++ b/deny.toml @@ -59,8 +59,10 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-processor-signers" }, { allow = ["AGPL-3.0"], name = "serai-bitcoin-processor" }, - { allow = ["AGPL-3.0"], name = "ethereum-serai" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-contracts" }, + { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-primitives" }, + { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-deployer" }, + { allow = ["AGPL-3.0"], name = "ethereum-serai" }, { allow = ["AGPL-3.0"], name = "serai-ethereum-processor" }, { allow = ["AGPL-3.0"], name = "serai-monero-processor" }, diff --git a/processor/ethereum/contracts/contracts/Deployer.sol b/processor/ethereum/contracts/contracts/Deployer.sol deleted file mode 100644 index 1c05e38a..00000000 --- a/processor/ethereum/contracts/contracts/Deployer.sol +++ /dev/null @@ -1,52 +0,0 @@ -// SPDX-License-Identifier: AGPL-3.0-only -pragma solidity ^0.8.26; - -/* -The expected deployment process of the Router is as follows: - -1) A transaction deploying Deployer is made. Then, a deterministic signature is - created such that an account with an unknown private key is the creator of - the contract. Anyone can fund this address, and once anyone does, the - transaction deploying Deployer can be published by anyone. No other - transaction may be made from that account. - -2) Anyone deploys the Router through the Deployer. This uses a sequential nonce - such that meet-in-the-middle attacks, with complexity 2**80, aren't feasible. - While such attacks would still be feasible if the Deployer's address was - controllable, the usage of a deterministic signature with a NUMS method - prevents that. - -This doesn't have any denial-of-service risks and will resolve once anyone steps -forward as deployer. This does fail to guarantee an identical address across -every chain, though it enables letting anyone efficiently ask the Deployer for -the address (with the Deployer having an identical address on every chain). - -Unfortunately, guaranteeing identical addresses aren't feasible. 
We'd need the -Deployer contract to use a consistent salt for the Router, yet the Router must -be deployed with a specific public key for Serai. Since Ethereum isn't able to -determine a valid public key (one the result of a Serai DKG) from a dishonest -public key, we have to allow multiple deployments with Serai being the one to -determine which to use. - -The alternative would be to have a council publish the Serai key on-Ethereum, -with Serai verifying the published result. This would introduce a DoS risk in -the council not publishing the correct key/not publishing any key. -*/ - -contract Deployer { - event Deployment(bytes32 indexed init_code_hash, address created); - - error DeploymentFailed(); - - function deploy(bytes memory init_code) external { - address created; - assembly { - created := create(0, add(init_code, 0x20), mload(init_code)) - } - if (created == address(0)) { - revert DeploymentFailed(); - } - // These may be emitted out of order upon re-entrancy - emit Deployment(keccak256(init_code), created); - } -} diff --git a/processor/ethereum/contracts/src/lib.rs b/processor/ethereum/contracts/src/lib.rs index 45176067..d0a5c076 100644 --- a/processor/ethereum/contracts/src/lib.rs +++ b/processor/ethereum/contracts/src/lib.rs @@ -9,11 +9,6 @@ mod abigen; pub mod erc20 { pub use super::abigen::erc20::IERC20::*; } -pub mod deployer { - pub const BYTECODE: &str = - include_str!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-contracts/Deployer.bin")); - pub use super::abigen::deployer::Deployer::*; -} pub mod router { pub const BYTECODE: &str = include_str!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-contracts/Router.bin")); diff --git a/processor/ethereum/deployer/Cargo.toml b/processor/ethereum/deployer/Cargo.toml new file mode 100644 index 00000000..9b0ed146 --- /dev/null +++ b/processor/ethereum/deployer/Cargo.toml @@ -0,0 +1,34 @@ +[package] +name = "serai-processor-ethereum-deployer" +version = "0.1.0" +description = "The deployer for Serai's Ethereum contracts" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum/deployer" +authors = ["Luke Parker "] +edition = "2021" +publish = false +rust-version = "1.79" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +alloy-core = { version = "0.8", default-features = false } +alloy-consensus = { version = "0.3", default-features = false } + +alloy-sol-types = { version = "0.8", default-features = false } +alloy-sol-macro = { version = "0.8", default-features = false } + +alloy-rpc-types-eth = { version = "0.3", default-features = false } +alloy-transport = { version = "0.3", default-features = false } +alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } +alloy-provider = { version = "0.3", default-features = false } + +ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false } + +[build-dependencies] +build-solidity-contracts = { path = "../../../networks/ethereum/build-contracts", default-features = false } diff --git a/processor/ethereum/deployer/LICENSE b/processor/ethereum/deployer/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/processor/ethereum/deployer/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify 
+it under the terms of the GNU Affero General Public License Version 3 as
+published by the Free Software Foundation.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU Affero General Public License for more details.
+
+You should have received a copy of the GNU Affero General Public License
+along with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/processor/ethereum/deployer/README.md b/processor/ethereum/deployer/README.md
new file mode 100644
index 00000000..6b439650
--- /dev/null
+++ b/processor/ethereum/deployer/README.md
@@ -0,0 +1,23 @@
+# Ethereum Smart Contracts Deployer
+
+The deployer for Serai's Ethereum contracts.
+
+## Goals
+
+It should be possible to efficiently locate the Serai Router on any blockchain with the EVM,
+without relying on any centralized (or even federated) entities. While deploying and locating an
+instance of the Router would be trivial by using a fixed signature for the deployment transaction,
+the Router must be constructed with the correct key for the Serai network (or set to have the
+correct key post-construction). Since this cannot be guaranteed to occur, the process must be
+retryable and the first successful invocation must be efficiently findable.
+
+## Methodology
+
+We define a contract, the Deployer, to deploy the Router. This contract could use `CREATE2` with
+the key representing Serai as the salt, yet this would be open to collision attacks with just
+2**80 complexity. Instead, we use `CREATE`, which would require 2**80 on-chain transactions
+(infeasible) to use as the basis of a collision.
+
+In order to efficiently find the contract for a key, the Deployer contract saves the addresses of
+deployed contracts (indexed by the initialization code hash). This allows using a single call to a
+contract with a known address to find the proper Router.
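+
+## Usage sketch
+
+A hedged sketch (using the bindings from this crate; `provider` is an assumed
+`Arc<RootProvider<SimpleRequest>>`, and `router_init_code` the Router's init code):
+
+```rust,ignore
+let init_code_hash: [u8; 32] = alloy_core::primitives::keccak256(&router_init_code).into();
+let deployer = Deployer::new(provider.clone()).await?.expect("Deployer isn't deployed");
+// A single call against the Deployer's fixed address locates the proper Router.
+let deployment = deployer.find_deployment(provider, init_code_hash).await?;
+```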
This does fail to guarantee an identical address across + every chain, though it enables letting anyone efficiently ask the Deployer for + the address (with the Deployer having an identical address on every chain). + + Unfortunately, guaranteeing identical addresses aren't feasible. We'd need the + Deployer contract to use a consistent salt for the Router, yet the Router must + be deployed with a specific public key for Serai. Since Ethereum isn't able to + determine a valid public key (one the result of a Serai DKG) from a dishonest + public key, we have to allow multiple deployments with Serai being the one to + determine which to use. + + The alternative would be to have a council publish the Serai key on-Ethereum, + with Serai verifying the published result. This would introduce a DoS risk in + the council not publishing the correct key/not publishing any key. +*/ + +contract Deployer { + struct Deployment { + uint64 block_number; + address created_contract; + } + mapping(bytes32 => Deployment) public deployments; + + error Reentrancy(); + error PriorDeployed(); + error DeploymentFailed(); + + function deploy(bytes memory init_code) external { + // Prevent re-entrancy + // If we did allow it, one could deploy the same contract multiple times (with one overwriting + // the other's set value in storage) + bool called; + // This contract doesn't have any other use of transient storage, nor is to be inherited, making + // this usage of the zero address safe + assembly { called := tload(0) } + if (called) { + revert Reentrancy(); + } + assembly { tstore(0, 1) } + + // Check this wasn't prior deployed + bytes32 init_code_hash = keccak256(init_code); + Deployment memory deployment = deployments[init_code_hash]; + if (deployment.created_contract == address(0)) { + revert PriorDeployed(); + } + + // Deploy the contract + address created_contract; + assembly { + created_contract := create(0, add(init_code, 0x20), mload(init_code)) + } + if (created_contract == address(0)) { + revert DeploymentFailed(); + } + + // Set the dpeloyment to storage + deployment.block_number = uint64(block.number); + deployment.created_contract = created_contract; + deployments[init_code_hash] = deployment; + } +} diff --git a/processor/ethereum/deployer/src/lib.rs b/processor/ethereum/deployer/src/lib.rs new file mode 100644 index 00000000..bf2d1a9c --- /dev/null +++ b/processor/ethereum/deployer/src/lib.rs @@ -0,0 +1,104 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +use std::sync::Arc; + +use alloy_core::primitives::{hex::FromHex, Address, U256, Bytes, TxKind}; +use alloy_consensus::{Signed, TxLegacy}; + +use alloy_sol_types::SolCall; + +use alloy_rpc_types_eth::{TransactionInput, TransactionRequest}; +use alloy_transport::{TransportErrorKind, RpcError}; +use alloy_simple_request_transport::SimpleRequest; +use alloy_provider::{Provider, RootProvider}; + +#[rustfmt::skip] +#[expect(warnings)] +#[expect(needless_pass_by_value)] +#[expect(clippy::all)] +#[expect(clippy::ignored_unit_patterns)] +#[expect(clippy::redundant_closure_for_method_calls)] +mod abi { + alloy_sol_macro::sol!("contracts/Deployer.sol"); +} + +/// The Deployer contract for the Serai Router contract. +/// +/// This Deployer has a deterministic address, letting it be immediately identified on any +/// compatible chain. It then supports retrieving the Router contract's address (which isn't +/// deterministic) using a single call. 
+#[derive(Clone, Debug)]
+pub struct Deployer;
+impl Deployer {
+  /// Obtain the transaction to deploy this contract, already signed.
+  ///
+  /// The account this transaction is sent from (which is populated in `from`) must be sufficiently
+  /// funded for this transaction to be submitted. This account has no known private key to anyone
+  /// so ETH sent can be neither misappropriated nor returned.
+  pub fn deployment_tx() -> Signed<TxLegacy> {
+    pub const BYTECODE: &str =
+      include_str!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-deployer/Deployer.bin"));
+    let bytecode =
+      Bytes::from_hex(BYTECODE).expect("compiled-in Deployer bytecode wasn't valid hex");
+
+    let tx = TxLegacy {
+      chain_id: None,
+      nonce: 0,
+      // 100 gwei
+      gas_price: 100_000_000_000u128,
+      // TODO: Use a more accurate gas limit
+      gas_limit: 1_000_000u128,
+      to: TxKind::Create,
+      value: U256::ZERO,
+      input: bytecode,
+    };
+
+    ethereum_primitives::deterministically_sign(&tx)
+  }
+
+  /// Obtain the deterministic address for this contract.
+  pub(crate) fn address() -> Address {
+    let deployer_deployer =
+      Self::deployment_tx().recover_signer().expect("deployment_tx didn't have a valid signature");
+    Address::create(&deployer_deployer, 0)
+  }
+
+  /// Construct a new view of the Deployer.
+  pub async fn new(
+    provider: Arc<RootProvider<SimpleRequest>>,
+  ) -> Result<Option<Self>, RpcError<TransportErrorKind>> {
+    let address = Self::address();
+    let code = provider.get_code_at(address).await?;
+    // Contract has yet to be deployed
+    if code.is_empty() {
+      return Ok(None);
+    }
+    Ok(Some(Self))
+  }
+
+  /// Find the deployment of a contract.
+  pub async fn find_deployment(
+    &self,
+    provider: Arc<RootProvider<SimpleRequest>>,
+    init_code_hash: [u8; 32],
+  ) -> Result<Option<abi::Deployer::Deployment>, RpcError<TransportErrorKind>> {
+    let call = TransactionRequest::default().to(Self::address()).input(TransactionInput::new(
+      abi::Deployer::deploymentsCall::new((init_code_hash.into(),)).abi_encode().into(),
+    ));
+    let bytes = provider.call(&call).await?;
+    let deployment = abi::Deployer::deploymentsCall::abi_decode_returns(&bytes, true)
+      .map_err(|e| {
+        TransportErrorKind::Custom(
+          format!("node returned a non-Deployment for function returning Deployment: {e:?}").into(),
+        )
+      })?
+      ._0;
+
+    if deployment.created_contract == [0; 20] {
+      return Ok(None);
+    }
+    Ok(Some(deployment))
+  }
+}
diff --git a/processor/ethereum/ethereum-serai/Cargo.toml b/processor/ethereum/ethereum-serai/Cargo.toml
index a2bec481..73c5b267 100644
--- a/processor/ethereum/ethereum-serai/Cargo.toml
+++ b/processor/ethereum/ethereum-serai/Cargo.toml
@@ -3,7 +3,7 @@ name = "ethereum-serai"
 version = "0.1.0"
 description = "An Ethereum library supporting Schnorr signing and on-chain verification"
 license = "AGPL-3.0-only"
-repository = "https://github.com/serai-dex/serai/tree/develop/networks/ethereum"
+repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum/ethereum-serai"
 authors = ["Luke Parker <lukeparker5132@gmail.com>", "Elizabeth Binks <elizabethjbinks@gmail.com>"]
 edition = "2021"
 publish = false
diff --git a/processor/ethereum/ethereum-serai/src/crypto.rs b/processor/ethereum/ethereum-serai/src/crypto.rs
index d013eeff..fc51ae6b 100644
--- a/processor/ethereum/ethereum-serai/src/crypto.rs
+++ b/processor/ethereum/ethereum-serai/src/crypto.rs
@@ -15,11 +15,9 @@ use frost::{
 
 pub use ethereum_schnorr_contract::*;
 
-use alloy_core::primitives::{Parity, Signature as AlloySignature};
+use alloy_core::primitives::{Parity, Signature as AlloySignature, Address};
 use alloy_consensus::{SignableTransaction, Signed, TxLegacy};
 
-use crate::abi::router::{Signature as AbiSignature};
-
 pub(crate) fn keccak256(data: &[u8]) -> [u8; 32] {
   alloy_core::primitives::keccak256(data).into()
 }
@@ -28,11 +26,9 @@ pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
   <Scalar as Reduce<U256>>::reduce_bytes(&keccak256(data).into())
 }
 
-pub fn address(point: &ProjectivePoint) -> [u8; 20] {
+pub(crate) fn address(point: &ProjectivePoint) -> [u8; 20] {
   let encoded_point = point.to_encoded_point(false);
-  // Last 20 bytes of the hash of the concatenated x and y coordinates
-  // We obtain the concatenated x and y coordinates via the uncompressed encoding of the point
-  keccak256(&encoded_point.as_ref()[1 .. 65])[12 ..].try_into().unwrap()
+  **Address::from_raw_public_key(&encoded_point.as_ref()[1 .. 65])
 }
 
 /// Deterministically sign a transaction.
@@ -64,18 +60,15 @@ pub fn deterministically_sign(tx: &TxLegacy) -> Signed<TxLegacy> {
   }
 }
 
-/// The HRAm to use for the Schnorr contract.
+/// The HRAm to use for the Schnorr Solidity library.
+///
+/// This will panic if the public key being signed for is not representable within the Schnorr
+/// Solidity library.
 #[derive(Clone, Default)]
 pub struct EthereumHram {}
 impl Hram<Secp256k1> for EthereumHram {
   #[allow(non_snake_case)]
   fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
-    let x_coord = A.to_affine().x();
-
-    let mut data = address(R).to_vec();
-    data.extend(x_coord.as_slice());
-    data.extend(m);
-
-    <Scalar as Reduce<U256>>::reduce_bytes(&keccak256(&data).into())
+    Signature::challenge(*R, &PublicKey::new(*A).unwrap(), m)
   }
 }
diff --git a/processor/ethereum/ethereum-serai/src/lib.rs b/processor/ethereum/ethereum-serai/src/lib.rs
index 76121401..1a013ddf 100644
--- a/processor/ethereum/ethereum-serai/src/lib.rs
+++ b/processor/ethereum/ethereum-serai/src/lib.rs
@@ -15,6 +15,7 @@ pub mod alloy {
 
 pub mod crypto;
 
+/*
 pub(crate) mod abi {
   pub use contracts::erc20;
   pub use contracts::deployer;
@@ -37,3 +38,4 @@ pub enum Error {
   #[error("couldn't make call/send TX")]
   ConnectionError,
 }
+*/
diff --git a/processor/ethereum/ethereum-serai/src/machine.rs b/processor/ethereum/ethereum-serai/src/machine.rs
index b9a0628e..404922f5 100644
--- a/processor/ethereum/ethereum-serai/src/machine.rs
+++ b/processor/ethereum/ethereum-serai/src/machine.rs
@@ -25,6 +25,19 @@ use crate::{
   },
 };
 
+/// The HRAm to use for the Schnorr Solidity library.
+///
+/// This will panic if the public key being signed for is not representable within the Schnorr
+/// Solidity library.
+#[derive(Clone, Default)]
+pub struct EthereumHram {}
+impl Hram<Secp256k1> for EthereumHram {
+  #[allow(non_snake_case)]
+  fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
+    Signature::challenge(*R, &PublicKey::new(*A).unwrap(), m)
+  }
+}
+
 #[derive(Clone, PartialEq, Eq, Debug)]
 pub struct Call {
   pub to: [u8; 20],
diff --git a/processor/ethereum/primitives/Cargo.toml b/processor/ethereum/primitives/Cargo.toml
new file mode 100644
index 00000000..6c6ff886
--- /dev/null
+++ b/processor/ethereum/primitives/Cargo.toml
@@ -0,0 +1,24 @@
+[package]
+name = "serai-processor-ethereum-primitives"
+version = "0.1.0"
+description = "Primitives for Serai's Ethereum Processor"
+license = "AGPL-3.0-only"
+repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum/primitives"
+authors = ["Luke Parker <lukeparker5132@gmail.com>"]
+edition = "2021"
+publish = false
+rust-version = "1.79"
+
+[package.metadata.docs.rs]
+all-features = true
+rustdoc-args = ["--cfg", "docsrs"]
+
+[lints]
+workspace = true
+
+[dependencies]
+group = { version = "0.13", default-features = false }
+k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic"] }
+
+alloy-core = { version = "0.8", default-features = false }
+alloy-consensus = { version = "0.3", default-features = false, features = ["k256"] }
diff --git a/processor/ethereum/primitives/LICENSE b/processor/ethereum/primitives/LICENSE
new file mode 100644
index 00000000..41d5a261
--- /dev/null
+++ b/processor/ethereum/primitives/LICENSE
@@ -0,0 +1,15 @@
+AGPL-3.0-only license
+
+Copyright (c) 2022-2024 Luke Parker
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU Affero General Public License Version 3 as
+published by the Free Software Foundation.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU Affero General Public License for more details.
+
+You should have received a copy of the GNU Affero General Public License
+along with this program. If not, see <https://www.gnu.org/licenses/>.
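For reference, the challenge `Signature::challenge` is assumed to compute here matches the inline HRAm deleted above: the nonce's address, the key's x-coordinate, and the message, hashed and reduced into a scalar. A minimal sketch, reusing the `address` and `keccak256` helpers defined in crypto.rs:

```rust
use k256::{
  elliptic_curve::{ops::Reduce, point::AffineCoordinates},
  ProjectivePoint, Scalar, U256,
};

// keccak256(address(R) || A.x || m), reduced into the secp256k1 scalar field
#[allow(non_snake_case)]
fn hram_sketch(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
  let mut data = address(R).to_vec();
  data.extend(A.to_affine().x().as_slice());
  data.extend(m);
  <Scalar as Reduce<U256>>::reduce_bytes(&keccak256(&data).into())
}
```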
diff --git a/processor/ethereum/primitives/README.md b/processor/ethereum/primitives/README.md
new file mode 100644
index 00000000..90da68c6
--- /dev/null
+++ b/processor/ethereum/primitives/README.md
@@ -0,0 +1,3 @@
+# Ethereum Processor Primitives
+
+This library contains miscellaneous primitives and helper functions.
diff --git a/processor/ethereum/primitives/src/lib.rs b/processor/ethereum/primitives/src/lib.rs
new file mode 100644
index 00000000..ccf41344
--- /dev/null
+++ b/processor/ethereum/primitives/src/lib.rs
@@ -0,0 +1,49 @@
+#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![doc = include_str!("../README.md")]
+#![deny(missing_docs)]
+
+use group::ff::PrimeField;
+use k256::{elliptic_curve::ops::Reduce, U256, Scalar};
+
+use alloy_core::primitives::{Parity, Signature};
+use alloy_consensus::{SignableTransaction, Signed, TxLegacy};
+
+/// The Keccak256 hash function.
+pub fn keccak256(data: impl AsRef<[u8]>) -> [u8; 32] {
+  alloy_core::primitives::keccak256(data.as_ref()).into()
+}
+
+/// Deterministically sign a transaction.
+///
+/// This function panics if passed a transaction with a non-None chain ID.
+pub fn deterministically_sign(tx: &TxLegacy) -> Signed<TxLegacy> {
+  pub fn hash_to_scalar(data: impl AsRef<[u8]>) -> Scalar {
+    <Scalar as Reduce<U256>>::reduce_bytes(&keccak256(data).into())
+  }
+
+  assert!(
+    tx.chain_id.is_none(),
+    "chain ID was Some when deterministically signing a TX (causing a non-deterministic signer)"
+  );
+
+  let sig_hash = tx.signature_hash().0;
+  let mut r = hash_to_scalar([sig_hash.as_slice(), b"r"].concat());
+  let mut s = hash_to_scalar([sig_hash.as_slice(), b"s"].concat());
+  loop {
+    // Create the signature
+    let r_bytes: [u8; 32] = r.to_repr().into();
+    let s_bytes: [u8; 32] = s.to_repr().into();
+    let v = Parity::NonEip155(false);
+    let signature = Signature::from_scalars_and_parity(r_bytes.into(), s_bytes.into(), v).unwrap();
+
+    // Check if this is a valid signature
+    let tx = tx.clone().into_signed(signature);
+    if tx.recover_signer().is_ok() {
+      return tx;
+    }
+
+    // Re-hash until valid
+    r = hash_to_scalar(r_bytes);
+    s = hash_to_scalar(s_bytes);
+  }
+}
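The grind loop above makes the resulting signer a NUMS ("nothing-up-my-sleeve") account: the signature is fixed before any key exists, so no one knows the corresponding private key. A minimal usage sketch, with placeholder fee values:

```rust
use alloy_core::primitives::{Address, Bytes, TxKind, U256};
use alloy_consensus::TxLegacy;

// Hypothetical: deterministically sign a CREATE transaction and recover its NUMS signer
fn nums_deployment_signer(init_code: Bytes) -> Address {
  let tx = TxLegacy {
    // Must be None, or deterministically_sign panics
    chain_id: None,
    nonce: 0,
    // Placeholder fee values; callers pick these per their needs
    gas_price: 100_000_000_000u128,
    gas_limit: 1_000_000u128,
    to: TxKind::Create,
    value: U256::ZERO,
    input: init_code,
  };
  let signed = ethereum_primitives::deterministically_sign(&tx);
  // Anyone can reproduce this exact signed transaction, fund its signer, and publish it
  signed.recover_signer().expect("the grind loop only returns recoverable signatures")
}
```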
-iname "*.sol") + machete: runs-on: ubuntu-latest steps: diff --git a/networks/ethereum/schnorr/contracts/Schnorr.sol b/networks/ethereum/schnorr/contracts/Schnorr.sol index 182e90e3..69dc208a 100644 --- a/networks/ethereum/schnorr/contracts/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/Schnorr.sol @@ -4,24 +4,22 @@ pragma solidity ^0.8.26; // See https://github.com/noot/schnorr-verify for implementation details library Schnorr { // secp256k1 group order - uint256 constant private Q = - 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141; + uint256 private constant Q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141; // We fix the key to have: // 1) An even y-coordinate // 2) An x-coordinate < Q - uint8 constant private KEY_PARITY = 27; + uint8 private constant KEY_PARITY = 27; // px := public key x-coordinate, where the public key has an even y-coordinate // message := the message signed // c := Schnorr signature challenge // s := Schnorr signature solution - function verify( - bytes32 px, - bytes memory message, - bytes32 c, - bytes32 s - ) internal pure returns (bool) { + function verify(bytes32 px, bytes memory message, bytes32 c, bytes32 s) + internal + pure + returns (bool) + { // ecrecover = (m, v, r, s) -> key // We instead pass the following to obtain the nonce (not the key) // Then we hash it and verify it matches the challenge diff --git a/networks/ethereum/schnorr/contracts/tests/Schnorr.sol b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol index 26be683d..11a3c3bc 100644 --- a/networks/ethereum/schnorr/contracts/tests/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol @@ -4,12 +4,11 @@ pragma solidity ^0.8.26; import "../Schnorr.sol"; contract TestSchnorr { - function verify( - bytes32 public_key, - bytes calldata message, - bytes32 c, - bytes32 s - ) external pure returns (bool) { + function verify(bytes32 public_key, bytes calldata message, bytes32 c, bytes32 s) + external + pure + returns (bool) + { return Schnorr.verify(public_key, message, c, s); } } diff --git a/processor/ethereum/contracts/contracts/Router.sol b/processor/ethereum/contracts/contracts/Router.sol index 1d084698..136c1e62 100644 --- a/processor/ethereum/contracts/contracts/Router.sol +++ b/processor/ethereum/contracts/contracts/Router.sol @@ -35,7 +35,9 @@ contract Router { } event SeraiKeyUpdated(uint256 indexed nonce, bytes32 indexed key); - event InInstruction(address indexed from, address indexed coin, uint256 amount, bytes instruction); + event InInstruction( + address indexed from, address indexed coin, uint256 amount, bytes instruction + ); event Executed(uint256 indexed nonce, bytes32 indexed batch); error InvalidSignature(); @@ -62,10 +64,10 @@ contract Router { // updateSeraiKey validates the given Schnorr signature against the current public key, and if // successful, updates the contract's public key to the one specified. 
- function updateSeraiKey( - bytes32 newSeraiKey, - Signature calldata signature - ) external _updateSeraiKeyAtEndOfFn(_nonce, newSeraiKey) { + function updateSeraiKey(bytes32 newSeraiKey, Signature calldata signature) + external + _updateSeraiKeyAtEndOfFn(_nonce, newSeraiKey) + { bytes memory message = abi.encodePacked("updateSeraiKey", block.chainid, _nonce, newSeraiKey); _nonce++; @@ -74,25 +76,15 @@ contract Router { } } - function inInstruction( - address coin, - uint256 amount, - bytes memory instruction - ) external payable { + function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable { if (coin == address(0)) { if (amount != msg.value) { revert InvalidAmount(); } } else { - (bool success, bytes memory res) = - address(coin).call( - abi.encodeWithSelector( - IERC20.transferFrom.selector, - msg.sender, - address(this), - amount - ) - ); + (bool success, bytes memory res) = address(coin).call( + abi.encodeWithSelector(IERC20.transferFrom.selector, msg.sender, address(this), amount) + ); // Require there was nothing returned, which is done by some non-standard tokens, or that the // ERC20 contract did in fact return true @@ -193,9 +185,9 @@ contract Router { // Perform the calls with a set gas budget (uint32 gas, bytes memory code) = abi.decode(transactions[i].destination, (uint32, bytes)); - address(this).call{ - gas: gas - }(abi.encodeWithSelector(Router.arbitaryCallOut.selector, code)); + address(this).call{ gas: gas }( + abi.encodeWithSelector(Router.arbitaryCallOut.selector, code) + ); } } } diff --git a/processor/ethereum/contracts/contracts/tests/ERC20.sol b/processor/ethereum/contracts/contracts/tests/ERC20.sol index f38bfea4..9ce4bad7 100644 --- a/processor/ethereum/contracts/contracts/tests/ERC20.sol +++ b/processor/ethereum/contracts/contracts/tests/ERC20.sol @@ -8,9 +8,11 @@ contract TestERC20 { function name() public pure returns (string memory) { return "Test ERC20"; } + function symbol() public pure returns (string memory) { return "TEST"; } + function decimals() public pure returns (uint8) { return 18; } @@ -29,11 +31,13 @@ contract TestERC20 { function balanceOf(address owner) public view returns (uint256) { return balances[owner]; } + function transfer(address to, uint256 value) public returns (bool) { balances[msg.sender] -= value; balances[to] += value; return true; } + function transferFrom(address from, address to, uint256 value) public returns (bool) { allowances[from][msg.sender] -= value; balances[from] -= value; @@ -45,6 +49,7 @@ contract TestERC20 { allowances[msg.sender][spender] = value; return true; } + function allowance(address owner, address spender) public view returns (uint256) { return allowances[owner][spender]; } diff --git a/processor/ethereum/deployer/contracts/Deployer.sol b/processor/ethereum/deployer/contracts/Deployer.sol index 24ea1cb4..ad217fdc 100644 --- a/processor/ethereum/deployer/contracts/Deployer.sol +++ b/processor/ethereum/deployer/contracts/Deployer.sol @@ -38,6 +38,7 @@ contract Deployer { uint64 block_number; address created_contract; } + mapping(bytes32 => Deployment) public deployments; error Reentrancy(); @@ -51,11 +52,15 @@ contract Deployer { bool called; // This contract doesn't have any other use of transient storage, nor is to be inherited, making // this usage of the zero address safe - assembly { called := tload(0) } + assembly { + called := tload(0) + } if (called) { revert Reentrancy(); } - assembly { tstore(0, 1) } + assembly { + tstore(0, 1) + } // Check this wasn't prior 
deployed
     bytes32 init_code_hash = keccak256(init_code);
 
From a7d56406423b8f85d291658c77efc76da97c3e0c Mon Sep 17 00:00:00 2001
From: Luke Parker <lukeparker5132@gmail.com>
Date: Mon, 16 Sep 2024 21:59:12 -0400
Subject: [PATCH 143/368] Smash ERC20 into its own library

---
 .github/workflows/tests.yml                   |  1 +
 Cargo.lock                                    | 13 +++++
 Cargo.toml                                    |  1 +
 deny.toml                                     |  1 +
 processor/ethereum/erc20/Cargo.toml           | 28 ++++++++++
 processor/ethereum/erc20/LICENSE              | 15 +++++
 processor/ethereum/erc20/README.md            |  3 +
 .../{contracts => erc20}/contracts/IERC20.sol |  0
 .../src/erc20.rs => erc20/src/lib.rs}         | 56 +++++++++++++++----
 9 files changed, 108 insertions(+), 10 deletions(-)
 create mode 100644 processor/ethereum/erc20/Cargo.toml
 create mode 100644 processor/ethereum/erc20/LICENSE
 create mode 100644 processor/ethereum/erc20/README.md
 rename processor/ethereum/{contracts => erc20}/contracts/IERC20.sol (100%)
 rename processor/ethereum/{ethereum-serai/src/erc20.rs => erc20/src/lib.rs} (61%)

diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
index 382d9a2f..f08b457b 100644
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -55,6 +55,7 @@ jobs:
           -p serai-processor-ethereum-contracts \
           -p serai-processor-ethereum-primitives \
           -p serai-processor-ethereum-deployer \
+          -p serai-processor-ethereum-erc20 \
           -p ethereum-serai \
           -p serai-ethereum-processor \
           -p serai-monero-processor \
diff --git a/Cargo.lock b/Cargo.lock
index 0253cf32..b52aca05 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -8737,6 +8737,19 @@ dependencies = [
  "serai-processor-ethereum-primitives",
 ]
 
+[[package]]
+name = "serai-processor-ethereum-erc20"
+version = "0.1.0"
+dependencies = [
+ "alloy-core",
+ "alloy-provider",
+ "alloy-rpc-types-eth",
+ "alloy-simple-request-transport",
+ "alloy-sol-macro",
+ "alloy-sol-types",
+ "alloy-transport",
+]
+
 [[package]]
 name = "serai-processor-ethereum-primitives"
 version = "0.1.0"
diff --git a/Cargo.toml b/Cargo.toml
index c0010659..e2de489d 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -90,6 +90,7 @@ members = [
   "processor/ethereum/contracts",
   "processor/ethereum/primitives",
   "processor/ethereum/deployer",
+  "processor/ethereum/erc20",
   "processor/ethereum/ethereum-serai",
   "processor/ethereum",
   "processor/monero",
diff --git a/deny.toml b/deny.toml
index 8b630fb9..1091d103 100644
--- a/deny.toml
+++ b/deny.toml
@@ -62,6 +62,7 @@ exceptions = [
   { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-contracts" },
   { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-primitives" },
   { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-deployer" },
+  { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-erc20" },
   { allow = ["AGPL-3.0"], name = "ethereum-serai" },
   { allow = ["AGPL-3.0"], name = "serai-ethereum-processor" },
   { allow = ["AGPL-3.0"], name = "serai-monero-processor" },
diff --git a/processor/ethereum/erc20/Cargo.toml b/processor/ethereum/erc20/Cargo.toml
new file mode 100644
index 00000000..85bc83c3
--- /dev/null
+++ b/processor/ethereum/erc20/Cargo.toml
@@ -0,0 +1,28 @@
+[package]
+name = "serai-processor-ethereum-erc20"
+version = "0.1.0"
+description = "A library for the Serai Processor to interact with ERC20s"
+license = "AGPL-3.0-only"
+repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum/erc20"
+authors = ["Luke Parker <lukeparker5132@gmail.com>"]
+edition = "2021"
+publish = false
+rust-version = "1.79"
+
+[package.metadata.docs.rs]
+all-features = true
+rustdoc-args = ["--cfg", "docsrs"]
+
+[lints]
+workspace = true
+
+[dependencies]
+alloy-core = { version = "0.8",
default-features = false }
+
+alloy-sol-types = { version = "0.8", default-features = false }
+alloy-sol-macro = { version = "0.8", default-features = false }
+
+alloy-rpc-types-eth = { version = "0.3", default-features = false }
+alloy-transport = { version = "0.3", default-features = false }
+alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false }
+alloy-provider = { version = "0.3", default-features = false }
diff --git a/processor/ethereum/erc20/LICENSE b/processor/ethereum/erc20/LICENSE
new file mode 100644
index 00000000..41d5a261
--- /dev/null
+++ b/processor/ethereum/erc20/LICENSE
@@ -0,0 +1,15 @@
+AGPL-3.0-only license
+
+Copyright (c) 2022-2024 Luke Parker
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU Affero General Public License Version 3 as
+published by the Free Software Foundation.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU Affero General Public License for more details.
+
+You should have received a copy of the GNU Affero General Public License
+along with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/processor/ethereum/erc20/README.md b/processor/ethereum/erc20/README.md
new file mode 100644
index 00000000..f1e447b0
--- /dev/null
+++ b/processor/ethereum/erc20/README.md
@@ -0,0 +1,3 @@
+# ERC20
+
+A library for the Serai Processor to interact with ERC20s.
diff --git a/processor/ethereum/contracts/contracts/IERC20.sol b/processor/ethereum/erc20/contracts/IERC20.sol
similarity index 100%
rename from processor/ethereum/contracts/contracts/IERC20.sol
rename to processor/ethereum/erc20/contracts/IERC20.sol
diff --git a/processor/ethereum/ethereum-serai/src/erc20.rs b/processor/ethereum/erc20/src/lib.rs
similarity index 61%
rename from processor/ethereum/ethereum-serai/src/erc20.rs
rename to processor/ethereum/erc20/src/lib.rs
index 6a32f7cc..560ea86c 100644
--- a/processor/ethereum/ethereum-serai/src/erc20.rs
+++ b/processor/ethereum/erc20/src/lib.rs
@@ -1,3 +1,7 @@
+#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![doc = include_str!("../README.md")]
+#![deny(missing_docs)]
+
 use std::{sync::Arc, collections::HashSet};
 
 use alloy_core::primitives::{Address, B256, U256};
@@ -5,18 +9,31 @@ use alloy_core::primitives::{Address, B256, U256};
 use alloy_sol_types::{SolInterface, SolEvent};
 
 use alloy_rpc_types_eth::Filter;
+use alloy_transport::{TransportErrorKind, RpcError};
 use alloy_simple_request_transport::SimpleRequest;
 use alloy_provider::{Provider, RootProvider};
 
-use crate::Error;
-pub use crate::abi::erc20 as abi;
-use abi::{IERC20Calls, Transfer, transferCall, transferFromCall};
+#[rustfmt::skip]
+#[expect(warnings)]
+#[expect(needless_pass_by_value)]
+#[expect(clippy::all)]
+#[expect(clippy::ignored_unit_patterns)]
+#[expect(clippy::redundant_closure_for_method_calls)]
+mod abi {
+  alloy_sol_macro::sol!("contracts/IERC20.sol");
+}
+use abi::IERC20::{IERC20Calls, Transfer, transferCall, transferFromCall};
 
+/// A top-level ERC20 transfer.
 #[derive(Clone, Debug)]
 pub struct TopLevelErc20Transfer {
+  /// The transaction ID which effected this transfer.
   pub id: [u8; 32],
+  /// The address which made the transfer.
   pub from: [u8; 20],
+  /// The amount transferred.
   pub amount: U256,
+  /// The data appended after the call itself.
+  pub data: Vec<u8>,
 }
 
@@ -29,30 +46,43 @@ impl Erc20 {
     Self(provider, Address::from(&address))
   }
 
+  /// Fetch all top-level transfers to the specified ERC20.
   pub async fn top_level_transfers(
     &self,
     block: u64,
     to: [u8; 20],
-  ) -> Result<Vec<TopLevelErc20Transfer>, Error> {
+  ) -> Result<Vec<TopLevelErc20Transfer>, RpcError<TransportErrorKind>> {
     let filter = Filter::new().from_block(block).to_block(block).address(self.1);
     let filter = filter.event_signature(Transfer::SIGNATURE_HASH);
     let mut to_topic = [0; 32];
     to_topic[12 ..].copy_from_slice(&to);
     let filter = filter.topic2(B256::from(to_topic));
-    let logs = self.0.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;
+    let logs = self.0.get_logs(&filter).await?;
 
+    /*
+      A set of all transactions we've handled a transfer from. This handles the edge case where a
+      top-level transfer T somehow triggers another transfer T', with equivalent contents, within
+      the same transaction. We only want to report one transfer as only one is top-level.
+    */
     let mut handled = HashSet::new();
     let mut top_level_transfers = vec![];
     for log in logs {
       // Double check the address which emitted this log
       if log.address() != self.1 {
-        Err(Error::ConnectionError)?;
+        Err(TransportErrorKind::Custom(
+          "node returned logs for a different address than requested".to_string().into(),
+        ))?;
       }
 
-      let tx_id = log.transaction_hash.ok_or(Error::ConnectionError)?;
-      let tx =
-        self.0.get_transaction_by_hash(tx_id).await.ok().flatten().ok_or(Error::ConnectionError)?;
+      let tx_id = log.transaction_hash.ok_or_else(|| {
+        TransportErrorKind::Custom("log didn't specify its transaction hash".to_string().into())
+      })?;
+      let tx = self.0.get_transaction_by_hash(tx_id).await?.ok_or_else(|| {
+        TransportErrorKind::Custom(
+          "node didn't have the transaction which emitted a log it had".to_string().into(),
+        )
+      })?;
 
       // If this is a top-level call...
       if tx.to == Some(self.1) {
@@ -70,7 +100,13 @@ impl Erc20 {
           _ => continue,
         };
 
-        let log = log.log_decode::<Transfer>().map_err(|_| Error::ConnectionError)?.inner.data;
+        let log = log
+          .log_decode::<Transfer>()
+          .map_err(|e| {
+            TransportErrorKind::Custom(format!("failed to decode Transfer log: {e:?}").into())
+          })?
+          .inner
+          .data;
 
         // Ensure the top-level transfer is equivalent, and this presumably isn't a log for an
         // internal transfer
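A minimal usage sketch of the resulting API, assuming `Erc20::new(provider, address)` as carried over from the prior module; the `token`, `serai_router`, and `block_number` values are hypothetical:

```rust
use std::sync::Arc;

use alloy_simple_request_transport::SimpleRequest;
use alloy_provider::RootProvider;

// Hypothetical: scan one block for top-level transfers of `token` to `serai_router`
async fn scan_transfers(
  provider: Arc<RootProvider<SimpleRequest>>,
  token: [u8; 20],
  serai_router: [u8; 20],
  block_number: u64,
) {
  let erc20 = Erc20::new(provider, token);
  // Each transaction yields at most one transfer, even if a top-level transfer
  // triggers an equivalent internal transfer within the same transaction
  for transfer in erc20.top_level_transfers(block_number, serai_router).await.unwrap() {
    println!("tx {:?} transferred {} from {:?}", transfer.id, transfer.amount, transfer.from);
  }
}
```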
From cc75a9264155b825d4784cdfd495db47774eb9bf Mon Sep 17 00:00:00 2001
From: Luke Parker <lukeparker5132@gmail.com>
Date: Tue, 17 Sep 2024 01:04:08 -0400
Subject: [PATCH 144/368] Smash out the router library

---
 .github/workflows/tests.yml               |   1 +
 Cargo.lock                                |  26 +
 Cargo.toml                                |   1 +
 deny.toml                                 |   1 +
 .../ethereum/ethereum-serai/src/router.rs | 434 ------------
 processor/ethereum/router/Cargo.toml      |  49 ++
 processor/ethereum/router/LICENSE         |  15 +
 processor/ethereum/router/README.md       |   1 +
 processor/ethereum/router/build.rs        |  42 ++
 .../contracts/Router.sol                  |  33 +-
 processor/ethereum/router/src/lib.rs      | 582 ++++++++++++++++++
 substrate/client/Cargo.toml               |   2 +-
 substrate/client/src/networks/ethereum.rs |   7 +
 13 files changed, 749 insertions(+), 445 deletions(-)
 delete mode 100644 processor/ethereum/ethereum-serai/src/router.rs
 create mode 100644 processor/ethereum/router/Cargo.toml
 create mode 100644 processor/ethereum/router/LICENSE
 create mode 100644 processor/ethereum/router/README.md
 create mode 100644 processor/ethereum/router/build.rs
 rename processor/ethereum/{contracts => router}/contracts/Router.sol (86%)
 create mode 100644 processor/ethereum/router/src/lib.rs

diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
index f08b457b..e374d4f1 100644
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -55,6 +55,7 @@ jobs:
           -p serai-processor-ethereum-contracts \
           -p serai-processor-ethereum-primitives \
           -p serai-processor-ethereum-deployer \
+          -p serai-processor-ethereum-router \
           -p serai-processor-ethereum-erc20 \
           -p ethereum-serai \
           -p serai-ethereum-processor \
diff --git a/Cargo.lock b/Cargo.lock
index b52aca05..981275fb 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -8760,6 +8760,32 @@ dependencies = [
  "k256",
 ]
 
+[[package]]
+name = "serai-processor-ethereum-router"
+version = "0.1.0"
+dependencies = [
+ "alloy-consensus",
+ "alloy-core",
+ "alloy-provider",
+ "alloy-rpc-types-eth",
+ "alloy-simple-request-transport",
+ "alloy-sol-macro",
+ "alloy-sol-macro-expander",
+ "alloy-sol-macro-input",
+ "alloy-sol-types",
+ "alloy-transport",
+ "build-solidity-contracts",
+ "ethereum-schnorr-contract",
+ "group",
+ "k256",
+ "serai-client",
+ "serai-processor-ethereum-deployer",
+ "serai-processor-ethereum-erc20",
+ "serai-processor-ethereum-primitives",
+ "syn 2.0.77",
+ "syn-solidity",
+]
+
 [[package]]
 name = "serai-processor-frost-attempt-manager"
 version = "0.1.0"
diff --git a/Cargo.toml b/Cargo.toml
index e2de489d..3c203ced 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -90,6 +90,7 @@ members = [
   "processor/ethereum/contracts",
   "processor/ethereum/primitives",
   "processor/ethereum/deployer",
+  "processor/ethereum/router",
   "processor/ethereum/erc20",
   "processor/ethereum/ethereum-serai",
   "processor/ethereum",
diff --git a/deny.toml b/deny.toml
index 1091d103..9ee16043 100644
--- a/deny.toml
+++ b/deny.toml
@@ -62,6 +62,7 @@ exceptions = [
   { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-contracts" },
   { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-primitives" },
   { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-deployer" },
+  { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-router" },
   { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-erc20" },
   { allow = ["AGPL-3.0"], name = "ethereum-serai" },
   { allow = ["AGPL-3.0"], name = "serai-ethereum-processor" },
diff --git a/processor/ethereum/ethereum-serai/src/router.rs
b/processor/ethereum/ethereum-serai/src/router.rs
deleted file mode 100644
index 3dbd8fa8..00000000
--- a/processor/ethereum/ethereum-serai/src/router.rs
+++ /dev/null
@@ -1,434 +0,0 @@
-use std::{sync::Arc, io, collections::HashSet};
-
-use k256::{
-  elliptic_curve::{group::GroupEncoding, sec1},
-  ProjectivePoint,
-};
-
-use alloy_core::primitives::{hex::FromHex, Address, U256, Bytes, TxKind};
-#[cfg(test)]
-use alloy_core::primitives::B256;
-use alloy_consensus::TxLegacy;
-
-use alloy_sol_types::{SolValue, SolConstructor, SolCall, SolEvent};
-
-use alloy_rpc_types_eth::Filter;
-#[cfg(test)]
-use alloy_rpc_types_eth::{BlockId, TransactionRequest, TransactionInput};
-use alloy_simple_request_transport::SimpleRequest;
-use alloy_provider::{Provider, RootProvider};
-
-pub use crate::{
-  Error,
-  crypto::{PublicKey, Signature},
-  abi::{erc20::Transfer, router as abi},
-};
-use abi::{SeraiKeyUpdated, InInstruction as InInstructionEvent, Executed as ExecutedEvent};
-
-#[derive(Clone, PartialEq, Eq, Debug)]
-pub enum Coin {
-  Ether,
-  Erc20([u8; 20]),
-}
-
-impl Coin {
-  pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
-    let mut kind = [0xff];
-    reader.read_exact(&mut kind)?;
-    Ok(match kind[0] {
-      0 => Coin::Ether,
-      1 => {
-        let mut address = [0; 20];
-        reader.read_exact(&mut address)?;
-        Coin::Erc20(address)
-      }
-      _ => Err(io::Error::other("unrecognized Coin type"))?,
-    })
-  }
-
-  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
-    match self {
-      Coin::Ether => writer.write_all(&[0]),
-      Coin::Erc20(token) => {
-        writer.write_all(&[1])?;
-        writer.write_all(token)
-      }
-    }
-  }
-}
-
-#[derive(Clone, PartialEq, Eq, Debug)]
-pub struct InInstruction {
-  pub id: ([u8; 32], u64),
-  pub from: [u8; 20],
-  pub coin: Coin,
-  pub amount: U256,
-  pub data: Vec<u8>,
-  pub key_at_end_of_block: ProjectivePoint,
-}
-
-impl InInstruction {
-  pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
-    let id = {
-      let mut id_hash = [0; 32];
-      reader.read_exact(&mut id_hash)?;
-      let mut id_pos = [0; 8];
-      reader.read_exact(&mut id_pos)?;
-      let id_pos = u64::from_le_bytes(id_pos);
-      (id_hash, id_pos)
-    };
-
-    let mut from = [0; 20];
-    reader.read_exact(&mut from)?;
-
-    let coin = Coin::read(reader)?;
-    let mut amount = [0; 32];
-    reader.read_exact(&mut amount)?;
-    let amount = U256::from_le_slice(&amount);
-
-    let mut data_len = [0; 4];
-    reader.read_exact(&mut data_len)?;
-    let data_len = usize::try_from(u32::from_le_bytes(data_len))
-      .map_err(|_| io::Error::other("InInstruction data exceeded 2**32 in length"))?;
-    let mut data = vec![0; data_len];
-    reader.read_exact(&mut data)?;
-
-    let mut key_at_end_of_block = <ProjectivePoint as GroupEncoding>::Repr::default();
-    reader.read_exact(&mut key_at_end_of_block)?;
-    let key_at_end_of_block = Option::from(ProjectivePoint::from_bytes(&key_at_end_of_block))
-      .ok_or(io::Error::other("InInstruction had key at end of block which wasn't valid"))?;
-
-    Ok(InInstruction { id, from, coin, amount, data, key_at_end_of_block })
-  }
-
-  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
-    writer.write_all(&self.id.0)?;
-    writer.write_all(&self.id.1.to_le_bytes())?;
-
-    writer.write_all(&self.from)?;
-
-    self.coin.write(writer)?;
-    writer.write_all(&self.amount.as_le_bytes())?;
-
-    writer.write_all(
-      &u32::try_from(self.data.len())
-        .map_err(|_| {
-          io::Error::other("InInstruction being written had data exceeding 2**32 in length")
-        })?
-        .to_le_bytes(),
-    )?;
-    writer.write_all(&self.data)?;
-
-    writer.write_all(&self.key_at_end_of_block.to_bytes())
-  }
-}
-
-#[derive(Clone, PartialEq, Eq, Debug)]
-pub struct Executed {
-  pub tx_id: [u8; 32],
-  pub nonce: u64,
-}
-
-/// The contract Serai uses to manage its state.
-#[derive(Clone, Debug)]
-pub struct Router(Arc<RootProvider<SimpleRequest>>, Address);
-impl Router {
-  pub(crate) fn code() -> Vec<u8> {
-    let bytecode = contracts::router::BYTECODE;
-    Bytes::from_hex(bytecode).expect("compiled-in Router bytecode wasn't valid hex").to_vec()
-  }
-
-  pub(crate) fn init_code(key: &PublicKey) -> Vec<u8> {
-    let mut bytecode = Self::code();
-    // Append the constructor arguments
-    bytecode.extend((abi::constructorCall { initialSeraiKey: key.eth_repr().into() }).abi_encode());
-    bytecode
-  }
-
-  // This isn't pub in order to force users to use `Deployer::find_router`.
-  pub(crate) fn new(provider: Arc<RootProvider<SimpleRequest>>, address: Address) -> Self {
-    Self(provider, address)
-  }
-
-  pub fn address(&self) -> [u8; 20] {
-    **self.1
-  }
-
-  /// Get the key for Serai at the specified block.
-  #[cfg(test)]
-  pub async fn serai_key(&self, at: [u8; 32]) -> Result<PublicKey, Error> {
-    let call = TransactionRequest::default()
-      .to(self.1)
-      .input(TransactionInput::new(abi::seraiKeyCall::new(()).abi_encode().into()));
-    let bytes = self
-      .0
-      .call(&call)
-      .block(BlockId::Hash(B256::from(at).into()))
-      .await
-      .map_err(|_| Error::ConnectionError)?;
-    let res =
-      abi::seraiKeyCall::abi_decode_returns(&bytes, true).map_err(|_| Error::ConnectionError)?;
-    PublicKey::from_eth_repr(res._0.0).ok_or(Error::ConnectionError)
-  }
-
-  /// Get the message to be signed in order to update the key for Serai.
-  pub(crate) fn update_serai_key_message(chain_id: U256, nonce: U256, key: &PublicKey) -> Vec<u8> {
-    let mut buffer = b"updateSeraiKey".to_vec();
-    buffer.extend(&chain_id.to_be_bytes::<32>());
-    buffer.extend(&nonce.to_be_bytes::<32>());
-    buffer.extend(&key.eth_repr());
-    buffer
-  }
-
-  /// Update the key representing Serai.
-  pub fn update_serai_key(&self, public_key: &PublicKey, sig: &Signature) -> TxLegacy {
-    // TODO: Set a more accurate gas
-    TxLegacy {
-      to: TxKind::Call(self.1),
-      input: abi::updateSeraiKeyCall::new((public_key.eth_repr().into(), sig.into()))
-        .abi_encode()
-        .into(),
-      gas_limit: 100_000,
-      ..Default::default()
-    }
-  }
-
-  /// Get the current nonce for the published batches.
-  #[cfg(test)]
-  pub async fn nonce(&self, at: [u8; 32]) -> Result<U256, Error> {
-    let call = TransactionRequest::default()
-      .to(self.1)
-      .input(TransactionInput::new(abi::nonceCall::new(()).abi_encode().into()));
-    let bytes = self
-      .0
-      .call(&call)
-      .block(BlockId::Hash(B256::from(at).into()))
-      .await
-      .map_err(|_| Error::ConnectionError)?;
-    let res =
-      abi::nonceCall::abi_decode_returns(&bytes, true).map_err(|_| Error::ConnectionError)?;
-    Ok(res._0)
-  }
-
-  /// Get the message to be signed in order to update the key for Serai.
-  pub(crate) fn execute_message(
-    chain_id: U256,
-    nonce: U256,
-    outs: Vec<abi::OutInstruction>,
-  ) -> Vec<u8> {
-    ("execute".to_string(), chain_id, nonce, outs).abi_encode_params()
-  }
-
-  /// Execute a batch of `OutInstruction`s.
-  pub fn execute(&self, outs: &[abi::OutInstruction], sig: &Signature) -> TxLegacy {
-    TxLegacy {
-      to: TxKind::Call(self.1),
-      input: abi::executeCall::new((outs.to_vec(), sig.into())).abi_encode().into(),
-      // TODO
-      gas_limit: 100_000 + ((200_000 + 10_000) * u128::try_from(outs.len()).unwrap()),
-      ..Default::default()
-    }
-  }
-
-  pub async fn key_at_end_of_block(&self, block: u64) -> Result<Option<ProjectivePoint>, Error> {
-    let filter = Filter::new().from_block(0).to_block(block).address(self.1);
-    let filter = filter.event_signature(SeraiKeyUpdated::SIGNATURE_HASH);
-    let all_keys = self.0.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;
-    if all_keys.is_empty() {
-      return Ok(None);
-    };
-
-    let last_key_x_coordinate_log = all_keys.last().ok_or(Error::ConnectionError)?;
-    let last_key_x_coordinate = last_key_x_coordinate_log
-      .log_decode::<SeraiKeyUpdated>()
-      .map_err(|_| Error::ConnectionError)?
-      .inner
-      .data
-      .key;
-
-    let mut compressed_point = <ProjectivePoint as GroupEncoding>::Repr::default();
-    compressed_point[0] = u8::from(sec1::Tag::CompressedEvenY);
-    compressed_point[1 ..].copy_from_slice(last_key_x_coordinate.as_slice());
-
-    let key =
-      Option::from(ProjectivePoint::from_bytes(&compressed_point)).ok_or(Error::ConnectionError)?;
-    Ok(Some(key))
-  }
-
-  pub async fn in_instructions(
-    &self,
-    block: u64,
-    allowed_tokens: &HashSet<[u8; 20]>,
-  ) -> Result<Vec<InInstruction>, Error> {
-    let Some(key_at_end_of_block) = self.key_at_end_of_block(block).await? else {
-      return Ok(vec![]);
-    };
-
-    let filter = Filter::new().from_block(block).to_block(block).address(self.1);
-    let filter = filter.event_signature(InInstructionEvent::SIGNATURE_HASH);
-    let logs = self.0.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;
-
-    let mut transfer_check = HashSet::new();
-    let mut in_instructions = vec![];
-    for log in logs {
-      // Double check the address which emitted this log
-      if log.address() != self.1 {
-        Err(Error::ConnectionError)?;
-      }
-
-      let id = (
-        log.block_hash.ok_or(Error::ConnectionError)?.into(),
-        log.log_index.ok_or(Error::ConnectionError)?,
-      );
-
-      let tx_hash = log.transaction_hash.ok_or(Error::ConnectionError)?;
-      let tx = self
-        .0
-        .get_transaction_by_hash(tx_hash)
-        .await
-        .ok()
-        .flatten()
-        .ok_or(Error::ConnectionError)?;
-
-      let log =
-        log.log_decode::<InInstructionEvent>().map_err(|_| Error::ConnectionError)?.inner.data;
-
-      let coin = if log.coin.0 == [0; 20] {
-        Coin::Ether
-      } else {
-        let token = *log.coin.0;
-
-        if !allowed_tokens.contains(&token) {
-          continue;
-        }
-
-        // If this also counts as a top-level transfer via the token, drop it
-        //
-        // Necessary in order to handle a potential edge case with some theoretical token
-        // implementations
-        //
-        // This will either let it be handled by the top-level transfer hook or will drop it
-        // entirely on the side of caution
-        if tx.to == Some(token.into()) {
-          continue;
-        }
-
-        // Get all logs for this TX
-        let receipt = self
-          .0
-          .get_transaction_receipt(tx_hash)
-          .await
-          .map_err(|_| Error::ConnectionError)?
-          .ok_or(Error::ConnectionError)?;
-        let tx_logs = receipt.inner.logs();
-
-        // Find a matching transfer log
-        let mut found_transfer = false;
-        for tx_log in tx_logs {
-          let log_index = tx_log.log_index.ok_or(Error::ConnectionError)?;
-          // Ensure we didn't already use this transfer to check a distinct InInstruction event
-          if transfer_check.contains(&log_index) {
-            continue;
-          }
-
-          // Check if this log is from the token we expected to be transferred
-          if tx_log.address().0 != token {
-            continue;
-          }
-          // Check if this is a transfer log
-          // https://github.com/alloy-rs/core/issues/589
-          if tx_log.topics()[0] != Transfer::SIGNATURE_HASH {
-            continue;
-          }
-          let Ok(transfer) = Transfer::decode_log(&tx_log.inner.clone(), true) else { continue };
-          // Check if this is a transfer to us for the expected amount
-          if (transfer.to == self.1) && (transfer.value == log.amount) {
-            transfer_check.insert(log_index);
-            found_transfer = true;
-            break;
-          }
-        }
-        if !found_transfer {
-          // This shouldn't be a ConnectionError
-          // This is an exploit, a non-conforming ERC20, or an invalid connection
-          // This should halt the process which is sufficient, yet this is sub-optimal
-          // TODO
-          Err(Error::ConnectionError)?;
-        }
-
-        Coin::Erc20(token)
-      };
-
-      in_instructions.push(InInstruction {
-        id,
-        from: *log.from.0,
-        coin,
-        amount: log.amount,
-        data: log.instruction.as_ref().to_vec(),
-        key_at_end_of_block,
-      });
-    }
-
-    Ok(in_instructions)
-  }
-
-  pub async fn executed_commands(&self, block: u64) -> Result<Vec<Executed>, Error> {
-    let mut res = vec![];
-
-    {
-      let filter = Filter::new().from_block(block).to_block(block).address(self.1);
-      let filter = filter.event_signature(SeraiKeyUpdated::SIGNATURE_HASH);
-      let logs = self.0.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;
-
-      for log in logs {
-        // Double check the address which emitted this log
-        if log.address() != self.1 {
-          Err(Error::ConnectionError)?;
-        }
-
-        let tx_id = log.transaction_hash.ok_or(Error::ConnectionError)?.into();
-
-        let log =
-          log.log_decode::<SeraiKeyUpdated>().map_err(|_| Error::ConnectionError)?.inner.data;
-
-        res.push(Executed {
-          tx_id,
-          nonce: log.nonce.try_into().map_err(|_| Error::ConnectionError)?,
-        });
-      }
-    }
-
-    {
-      let filter = Filter::new().from_block(block).to_block(block).address(self.1);
-      let filter = filter.event_signature(ExecutedEvent::SIGNATURE_HASH);
-      let logs = self.0.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;
-
-      for log in logs {
-        // Double check the address which emitted this log
-        if log.address() != self.1 {
-          Err(Error::ConnectionError)?;
-        }
-
-        let tx_id = log.transaction_hash.ok_or(Error::ConnectionError)?.into();
-
-        let log = log.log_decode::<ExecutedEvent>().map_err(|_| Error::ConnectionError)?.inner.data;
-
-        res.push(Executed {
-          tx_id,
-          nonce: log.nonce.try_into().map_err(|_| Error::ConnectionError)?,
-        });
-      }
-    }
-
-    Ok(res)
-  }
-
-  #[cfg(feature = "tests")]
-  pub fn key_updated_filter(&self) -> Filter {
-    Filter::new().address(self.1).event_signature(SeraiKeyUpdated::SIGNATURE_HASH)
-  }
-  #[cfg(feature = "tests")]
-  pub fn executed_filter(&self) -> Filter {
-    Filter::new().address(self.1).event_signature(ExecutedEvent::SIGNATURE_HASH)
-  }
-}
diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml
new file mode 100644
index 00000000..ed5417c0
--- /dev/null
+++ b/processor/ethereum/router/Cargo.toml
@@ -0,0 +1,49 @@
+[package]
+name = "serai-processor-ethereum-router"
+version = "0.1.0"
+description = "The Router used by the Serai Processor for Ethereum"
+license
= "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum/router" +authors = ["Luke Parker "] +edition = "2021" +publish = false +rust-version = "1.79" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +group = { version = "0.13", default-features = false } +k256 = { version = "^0.13.1", default-features = false, features = ["std", "ecdsa", "arithmetic"] } + +alloy-core = { version = "0.8", default-features = false } +alloy-consensus = { version = "0.3", default-features = false } + +alloy-sol-types = { version = "0.8", default-features = false } +alloy-sol-macro = { version = "0.8", default-features = false } + +alloy-rpc-types-eth = { version = "0.3", default-features = false } +alloy-transport = { version = "0.3", default-features = false } +alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } +alloy-provider = { version = "0.3", default-features = false } + +ethereum-schnorr = { package = "ethereum-schnorr-contract", path = "../../../networks/ethereum/schnorr", default-features = false } + +ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false } +ethereum-deployer = { package = "serai-processor-ethereum-deployer", path = "../deployer", default-features = false } +erc20 = { package = "serai-processor-ethereum-erc20", path = "../erc20", default-features = false } + +serai-client = { path = "../../../substrate/client", default-features = false, features = ["ethereum"] } + +[build-dependencies] +build-solidity-contracts = { path = "../../../networks/ethereum/build-contracts", default-features = false } + +syn = { version = "2", default-features = false, features = ["proc-macro"] } + +syn-solidity = { version = "0.8", default-features = false } +alloy-sol-macro-input = { version = "0.8", default-features = false } +alloy-sol-macro-expander = { version = "0.8", default-features = false } diff --git a/processor/ethereum/router/LICENSE b/processor/ethereum/router/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/processor/ethereum/router/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . 
diff --git a/processor/ethereum/router/README.md b/processor/ethereum/router/README.md
new file mode 100644
index 00000000..b93c3219
--- /dev/null
+++ b/processor/ethereum/router/README.md
@@ -0,0 +1 @@
+# Ethereum Router
diff --git a/processor/ethereum/router/build.rs b/processor/ethereum/router/build.rs
new file mode 100644
index 00000000..1ce6d4f5
--- /dev/null
+++ b/processor/ethereum/router/build.rs
@@ -0,0 +1,42 @@
+use std::{env, fs};
+
+use alloy_sol_macro_input::SolInputKind;
+
+fn write(sol: syn_solidity::File, file: &str) {
+  let sol = alloy_sol_macro_expander::expand::expand(sol).unwrap();
+  fs::write(file, sol.to_string()).unwrap();
+}
+
+fn sol(sol_files: &[&str], file: &str) {
+  let mut sol = String::new();
+  for sol_file in sol_files {
+    sol += &fs::read_to_string(sol_file).unwrap();
+  }
+  let SolInputKind::Sol(sol) = syn::parse_str(&sol).unwrap() else {
+    panic!("parsed .sols file wasn't SolInputKind::Sol");
+  };
+  write(sol, file);
+}
+
+fn main() {
+  let artifacts_path =
+    env::var("OUT_DIR").unwrap().to_string() + "/serai-processor-ethereum-router";
+
+  if !fs::exists(&artifacts_path).unwrap() {
+    fs::create_dir(&artifacts_path).unwrap();
+  }
+
+  build_solidity_contracts::build(
+    &["../../../networks/ethereum/schnorr/contracts", "../erc20/contracts"],
+    "contracts",
+    &artifacts_path,
+  )
+  .unwrap();
+
+  // This cannot be handled with the sol! macro. The Solidity requires an import
+  // https://github.com/alloy-rs/core/issues/602
+  sol(
+    &["../../../networks/ethereum/schnorr/contracts/Schnorr.sol", "contracts/Router.sol"],
+    &(artifacts_path + "/router.rs"),
+  );
+}
diff --git a/processor/ethereum/contracts/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol
similarity index 86%
rename from processor/ethereum/contracts/contracts/Router.sol
rename to processor/ethereum/router/contracts/Router.sol
index 136c1e62..e5a5c53f 100644
--- a/processor/ethereum/contracts/contracts/Router.sol
+++ b/processor/ethereum/router/contracts/Router.sol
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: AGPL-3.0-only
 pragma solidity ^0.8.26;
 
-import "./IERC20.sol";
+import "IERC20.sol";
 
 import "Schnorr.sol";
 
@@ -22,6 +22,15 @@ contract Router {
     Code
   }
 
+  struct AddressDestination {
+    address destination;
+  }
+
+  struct CodeDestination {
+    uint32 gas;
+    bytes code;
+  }
+
   struct OutInstruction {
     DestinationType destinationType;
     bytes destination;
@@ -38,7 +47,7 @@ contract Router {
   event InInstruction(
     address indexed from, address indexed coin, uint256 amount, bytes instruction
   );
-  event Executed(uint256 indexed nonce, bytes32 indexed batch);
+  event Executed(uint256 indexed nonce, bytes32 indexed message_hash);
 
   error InvalidSignature();
   error InvalidAmount();
@@ -68,7 +77,7 @@ contract Router {
     external
     _updateSeraiKeyAtEndOfFn(_nonce, newSeraiKey)
   {
-    bytes memory message = abi.encodePacked("updateSeraiKey", block.chainid, _nonce, newSeraiKey);
+    bytes32 message = keccak256(abi.encodePacked("updateSeraiKey", block.chainid, _nonce, newSeraiKey));
     _nonce++;
 
@@ -132,6 +141,7 @@ contract Router {
      */
     if (coin == address(0)) {
       // Enough gas to service the transfer and a minimal amount of logic
+      // TODO: If we're constructing a contract, we can do this at the same time as construction
      to.call{ value: value, gas: 5_000 }("");
     } else {
       coin.call{ gas: 100_000 }(abi.encodeWithSelector(IERC20.transfer.selector, msg.sender, value));
@@ -156,13 +166,16 @@ contract Router {
   // Execute a list of transactions if they were signed by the current key with the current nonce
   function execute(OutInstruction[] calldata transactions, Signature calldata signature) external {
     // Verify the signature
-    bytes memory message = abi.encode("execute", block.chainid, _nonce, transactions);
+    // We hash the message here as we need the message's hash for the Executed event
+    // Since we're already going to hash it, hashing it prior to verifying the signature reduces the
+    // amount of words hashed by its challenge function (reducing our gas costs)
+    bytes32 message = keccak256(abi.encode("execute", block.chainid, _nonce, transactions));
     if (!Schnorr.verify(_seraiKey, message, signature.c, signature.s)) {
       revert InvalidSignature();
     }
 
     // Since the signature was verified, perform execution
-    emit Executed(_nonce, keccak256(message));
+    emit Executed(_nonce, message);
     // While this is sufficient to prevent replays, it's still technically possible for instructions
     // from later batches to be executed before these instructions upon re-entrancy
     _nonce++;
@@ -172,8 +185,8 @@ contract Router {
       if (transactions[i].destinationType == DestinationType.Address) {
         // This may cause a panic and the contract to become stuck if the destination isn't actually
         // 20 bytes. Serai is trusted to not pass a malformed destination
-        (address destination) = abi.decode(transactions[i].destination, (address));
-        _transferOut(destination, transactions[i].coin, transactions[i].value);
+        (AddressDestination memory destination) = abi.decode(transactions[i].destination, (AddressDestination));
+        _transferOut(destination.destination, transactions[i].coin, transactions[i].value);
       } else {
         // The destination is a piece of initcode. We calculate the hash of the will-be contract,
         // transfer to it, and then run the initcode
@@ -184,9 +197,9 @@ contract Router {
         _transferOut(nextAddress, transactions[i].coin, transactions[i].value);
 
         // Perform the calls with a set gas budget
-        (uint32 gas, bytes memory code) = abi.decode(transactions[i].destination, (uint32, bytes));
-        address(this).call{ gas: gas }(
-          abi.encodeWithSelector(Router.arbitaryCallOut.selector, code)
-        );
+        (CodeDestination memory destination) = abi.decode(transactions[i].destination, (CodeDestination));
+        address(this).call{ gas: destination.gas }(
+          abi.encodeWithSelector(Router.arbitaryCallOut.selector, destination.code)
+        );
       }
     }
   }
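Since `Executed` now commits to the hash of the signed message, an observer can tie a Batch to its on-chain execution by recomputing that hash. A sketch of the recomputation, mirroring the `execute_message` helper kept (commented out) in the library below; `abi` is this crate's generated bindings and `keccak256` is from `serai-processor-ethereum-primitives`:

```rust
use alloy_core::primitives::U256;
use alloy_sol_types::SolValue;

// keccak256(abi.encode("execute", block.chainid, _nonce, transactions)), as the
// contract computes before verifying the signature and emitting Executed
fn execute_message_hash(chain_id: U256, nonce: U256, outs: Vec<abi::OutInstruction>) -> [u8; 32] {
  ethereum_primitives::keccak256(("execute".to_string(), chain_id, nonce, outs).abi_encode_params())
}
```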
diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs
new file mode 100644
index 00000000..4e4abec8
--- /dev/null
+++ b/processor/ethereum/router/src/lib.rs
@@ -0,0 +1,582 @@
+#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![doc = include_str!("../README.md")]
+#![deny(missing_docs)]
+
+use std::{sync::Arc, io, collections::HashSet};
+
+use group::ff::PrimeField;
+
+/*
+use k256::{
+  elliptic_curve::{group::GroupEncoding, sec1},
+  ProjectivePoint,
+};
+*/
+
+use alloy_core::primitives::{hex::FromHex, Address, U256, Bytes, TxKind};
+use alloy_consensus::TxLegacy;
+
+use alloy_sol_types::{SolValue, SolConstructor, SolCall, SolEvent};
+
+use alloy_rpc_types_eth::Filter;
+use alloy_transport::{TransportErrorKind, RpcError};
+use alloy_simple_request_transport::SimpleRequest;
+use alloy_provider::{Provider, RootProvider};
+
+use ethereum_schnorr::{PublicKey, Signature};
+use ethereum_deployer::Deployer;
+use erc20::Transfer;
+
+use serai_client::{primitives::Amount, networks::ethereum::Address as SeraiAddress};
+
+#[rustfmt::skip]
+#[expect(warnings)]
+#[expect(needless_pass_by_value)]
+#[expect(clippy::all)]
+#[expect(clippy::ignored_unit_patterns)]
+#[expect(clippy::redundant_closure_for_method_calls)]
+mod _abi {
+  include!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-router/router.rs"));
+}
+use _abi::Router as abi;
+use abi::{
+  SeraiKeyUpdated as SeraiKeyUpdatedEvent, InInstruction as InInstructionEvent,
+  Executed as ExecutedEvent,
+};
+
+impl From<&Signature> for abi::Signature {
+  fn from(signature: &Signature) -> Self {
+    Self {
+      c: <[u8; 32]>::from(signature.c().to_repr()).into(),
+      s: <[u8; 32]>::from(signature.s().to_repr()).into(),
+    }
+  }
+}
+
+/// A coin on Ethereum.
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub enum Coin {
+  /// Ether, the native coin of Ethereum.
+  Ether,
+  /// An ERC20 token.
+  Erc20([u8; 20]),
+}
+
+impl Coin {
+  /// Read a `Coin`.
+  pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
+    let mut kind = [0xff];
+    reader.read_exact(&mut kind)?;
+    Ok(match kind[0] {
+      0 => Coin::Ether,
+      1 => {
+        let mut address = [0; 20];
+        reader.read_exact(&mut address)?;
+        Coin::Erc20(address)
+      }
+      _ => Err(io::Error::other("unrecognized Coin type"))?,
+    })
+  }
+
+  /// Write the `Coin`.
+  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
+    match self {
+      Coin::Ether => writer.write_all(&[0]),
+      Coin::Erc20(token) => {
+        writer.write_all(&[1])?;
+        writer.write_all(token)
+      }
+    }
+  }
+}
+
+/// An InInstruction from the Router.
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub struct InInstruction {
+  /// The ID for this `InInstruction`.
+  pub id: ([u8; 32], u64),
+  /// The address which transferred these coins to Serai.
+  pub from: [u8; 20],
+  /// The coin transferred.
+  pub coin: Coin,
+  /// The amount transferred.
+  pub amount: U256,
+  /// The data associated with the transfer.
+  pub data: Vec<u8>,
+}
+
+impl InInstruction {
+  /// Read an `InInstruction`.
+  pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
+    let id = {
+      let mut id_hash = [0; 32];
+      reader.read_exact(&mut id_hash)?;
+      let mut id_pos = [0; 8];
+      reader.read_exact(&mut id_pos)?;
+      let id_pos = u64::from_le_bytes(id_pos);
+      (id_hash, id_pos)
+    };
+
+    let mut from = [0; 20];
+    reader.read_exact(&mut from)?;
+
+    let coin = Coin::read(reader)?;
+    let mut amount = [0; 32];
+    reader.read_exact(&mut amount)?;
+    let amount = U256::from_le_slice(&amount);
+
+    let mut data_len = [0; 4];
+    reader.read_exact(&mut data_len)?;
+    let data_len = usize::try_from(u32::from_le_bytes(data_len))
+      .map_err(|_| io::Error::other("InInstruction data exceeded 2**32 in length"))?;
+    let mut data = vec![0; data_len];
+    reader.read_exact(&mut data)?;
+
+    Ok(InInstruction { id, from, coin, amount, data })
+  }
+
+  /// Write the `InInstruction`.
+  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
+    writer.write_all(&self.id.0)?;
+    writer.write_all(&self.id.1.to_le_bytes())?;
+
+    writer.write_all(&self.from)?;
+
+    self.coin.write(writer)?;
+    writer.write_all(&self.amount.as_le_bytes())?;
+
+    writer.write_all(
+      &u32::try_from(self.data.len())
+        .map_err(|_| {
+          io::Error::other("InInstruction being written had data exceeding 2**32 in length")
+        })?
+        .to_le_bytes(),
+    )?;
+    writer.write_all(&self.data)
+  }
+}
+
+/// An executed command.
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub enum Executed {
+  /// Set a new key.
+  SetKey {
+    /// The nonce this was done with.
+    nonce: u64,
+    /// The key set.
+    key: [u8; 32],
+  },
+  /// Executed Batch.
+  Batch {
+    /// The nonce this was done with.
+    nonce: u64,
+    /// The hash of the signed message for the Batch executed.
+    message_hash: [u8; 32],
+  },
+}
+
+impl Executed {
+  /// The nonce consumed by this executed event.
+ pub fn nonce(&self) -> u64 { + match self { + Executed::SetKey { nonce, .. } | Executed::Batch { nonce, .. } => *nonce, + } + } +} + +/// A view of the Router for Serai. +#[derive(Clone, Debug)] +pub struct Router(Arc>, Address); +impl Router { + pub(crate) fn code() -> Vec { + const BYTECODE: &[u8] = + include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-router/Router.bin")); + Bytes::from_hex(BYTECODE).expect("compiled-in Router bytecode wasn't valid hex").to_vec() + } + + pub(crate) fn init_code(key: &PublicKey) -> Vec { + let mut bytecode = Self::code(); + // Append the constructor arguments + bytecode.extend((abi::constructorCall { initialSeraiKey: key.eth_repr().into() }).abi_encode()); + bytecode + } + + /// Create a new view of the Router. + /// + /// This performs an on-chain lookup for the first deployed Router constructed with this public + /// key. This lookup is of a constant amount of calls and does not read any logs. + pub async fn new( + provider: Arc>, + initial_serai_key: &PublicKey, + ) -> Result, RpcError> { + let Some(deployer) = Deployer::new(provider.clone()).await? else { + return Ok(None); + }; + let Some(deployment) = deployer + .find_deployment(ethereum_primitives::keccak256(Self::init_code(initial_serai_key))) + .await? + else { + return Ok(None); + }; + Ok(Some(Self(provider, deployment))) + } + + /// The address of the router. + pub fn address(&self) -> Address { + self.1 + } + + /// Construct a transaction to update the key representing Serai. + pub fn update_serai_key(&self, public_key: &PublicKey, sig: &Signature) -> TxLegacy { + // TODO: Set a more accurate gas + TxLegacy { + to: TxKind::Call(self.1), + input: abi::updateSeraiKeyCall::new((public_key.eth_repr().into(), sig.into())) + .abi_encode() + .into(), + gas_limit: 100_000, + ..Default::default() + } + } + + /// Construct a transaction to execute a batch of `OutInstruction`s. + pub fn execute(&self, outs: &[(SeraiAddress, (Coin, Amount))], sig: &Signature) -> TxLegacy { + TxLegacy { + to: TxKind::Call(self.1), + input: abi::executeCall::new(( + outs + .iter() + .map(|(address, (coin, amount))| { + #[allow(non_snake_case)] + let (destinationType, destination) = match address { + SeraiAddress::Address(address) => ( + abi::DestinationType::Address, + (abi::AddressDestination { destination: Address::from(address) }).abi_encode(), + ), + SeraiAddress::Contract(contract) => ( + abi::DestinationType::Code, + (abi::CodeDestination { + gas: contract.gas(), + code: contract.code().to_vec().into(), + }) + .abi_encode(), + ), + }; + abi::OutInstruction { + destinationType, + destination: destination.into(), + coin: match coin { + Coin::Ether => [0; 20].into(), + Coin::Erc20(address) => address.into(), + }, + value: amount.0.try_into().expect("couldn't convert u64 to u256"), + } + }) + .collect(), + sig.into(), + )) + .abi_encode() + .into(), + // TODO + gas_limit: 100_000 + ((200_000 + 10_000) * u128::try_from(outs.len()).unwrap()), + ..Default::default() + } + } + + /* + /// Get the key for Serai at the specified block. 
+ #[cfg(test)] + pub async fn serai_key(&self, at: [u8; 32]) -> Result> { + let call = TransactionRequest::default() + .to(self.1) + .input(TransactionInput::new(abi::seraiKeyCall::new(()).abi_encode().into())); + let bytes = self + .0 + .call(&call) + .block(BlockId::Hash(B256::from(at).into())) + .await + ?; + let res = + abi::seraiKeyCall::abi_decode_returns(&bytes, true)?; + PublicKey::from_eth_repr(res._0.0).ok_or_else(|| TransportErrorKind::Custom( + "TODO".to_string().into())) + } + */ + + /* + /// Get the message to be signed in order to update the key for Serai. + pub(crate) fn update_serai_key_message(chain_id: U256, nonce: U256, key: &PublicKey) -> Vec { + let mut buffer = b"updateSeraiKey".to_vec(); + buffer.extend(&chain_id.to_be_bytes::<32>()); + buffer.extend(&nonce.to_be_bytes::<32>()); + buffer.extend(&key.eth_repr()); + buffer + } + */ + + /* + /// Get the current nonce for the published batches. + #[cfg(test)] + pub async fn nonce(&self, at: [u8; 32]) -> Result> { + let call = TransactionRequest::default() + .to(self.1) + .input(TransactionInput::new(abi::nonceCall::new(()).abi_encode().into())); + let bytes = self + .0 + .call(&call) + .block(BlockId::Hash(B256::from(at).into())) + .await + ?; + let res = + abi::nonceCall::abi_decode_returns(&bytes, true)?; + Ok(res._0) + } + */ + + /* + /// Get the message to be signed in order to update the key for Serai. + pub(crate) fn execute_message( + chain_id: U256, + nonce: U256, + outs: Vec, + ) -> Vec { + ("execute".to_string(), chain_id, nonce, outs).abi_encode_params() + } + */ + + /// Fetch the `InInstruction`s emitted by the Router from this block. + pub async fn in_instructions( + &self, + block: u64, + allowed_tokens: &HashSet<[u8; 20]>, + ) -> Result, RpcError> { + // The InInstruction events for this block + let filter = Filter::new().from_block(block).to_block(block).address(self.1); + let filter = filter.event_signature(InInstructionEvent::SIGNATURE_HASH); + let logs = self.0.get_logs(&filter).await?; + + /* + We check that for all InInstructions for ERC20s emitted, a corresponding transfer occurred. + In order to prevent a transfer from being used to justify multiple distinct InInstructions, + we insert the transfer's log index into this HashSet. + */ + let mut transfer_check = HashSet::new(); + + let mut in_instructions = vec![]; + for log in logs { + // Double check the address which emitted this log + if log.address() != self.1 { + Err(TransportErrorKind::Custom( + "node returned a log from a different address than requested".to_string().into(), + ))?; + } + + let id = ( + log + .block_hash + .ok_or_else(|| { + TransportErrorKind::Custom("log didn't have its block hash set".to_string().into()) + })? + .into(), + log.log_index.ok_or_else(|| { + TransportErrorKind::Custom("log didn't have its index set".to_string().into()) + })?, + ); + + let tx_hash = log.transaction_hash.ok_or_else(|| { + TransportErrorKind::Custom("log didn't have its transaction hash set".to_string().into()) + })?; + let tx = self.0.get_transaction_by_hash(tx_hash).await?.ok_or_else(|| { + TransportErrorKind::Custom( + "node didn't have a transaction it had the logs of".to_string().into(), + ) + })?; + + let log = log + .log_decode::() + .map_err(|e| { + TransportErrorKind::Custom( + format!("filtered to InInstructionEvent yet couldn't decode log: {e:?}").into(), + ) + })? 
+ .inner + .data; + + let coin = if log.coin.0 == [0; 20] { + Coin::Ether + } else { + let token = *log.coin.0; + + if !allowed_tokens.contains(&token) { + continue; + } + + /* + If this also counts as a top-level transfer of a token, drop it. + + This event will only exist if there's an ERC20 which has some form of programmability + (`onTransferFrom`), and when a top-level transfer was made, that hook made its own call + into the Serai router. + + If such an ERC20 exists, Serai would parse it as a top-level transfer and as a router + InInstruction. While no such ERC20 is planned to be integrated, this enures we don't + allow a double-spend on that premise. + + TODO: See below note. + */ + if tx.to == Some(token.into()) { + continue; + } + + // Get all logs for this TX + let receipt = self.0.get_transaction_receipt(tx_hash).await?.ok_or_else(|| { + TransportErrorKind::Custom( + "node didn't have the receipt for a transaction it had".to_string().into(), + ) + })?; + let tx_logs = receipt.inner.logs(); + + /* + TODO: If this is also a top-level transfer, drop the log from the top-level transfer and + only iterate over the rest of the logs. + */ + + // Find a matching transfer log + let mut found_transfer = false; + for tx_log in tx_logs { + let log_index = tx_log.log_index.ok_or_else(|| { + TransportErrorKind::Custom( + "log in transaction receipt didn't have its log index set".to_string().into(), + ) + })?; + // Ensure we didn't already use this transfer to check a distinct InInstruction event + if transfer_check.contains(&log_index) { + continue; + } + + // Check if this log is from the token we expected to be transferred + if tx_log.address().0 != token { + continue; + } + // Check if this is a transfer log + // https://github.com/alloy-rs/core/issues/589 + if tx_log.topics()[0] != Transfer::SIGNATURE_HASH { + continue; + } + let Ok(transfer) = Transfer::decode_log(&tx_log.inner.clone(), true) else { continue }; + // Check if this is a transfer to us for the expected amount + if (transfer.to == self.1) && (transfer.value == log.amount) { + transfer_check.insert(log_index); + found_transfer = true; + break; + } + } + if !found_transfer { + // This shouldn't be a simple error + // This is an exploit, a non-conforming ERC20, or a malicious connection + // This should halt the process. While this is sufficient, it's sub-optimal + // TODO + Err(TransportErrorKind::Custom( + "ERC20 InInstruction with no matching transfer log".to_string().into(), + ))?; + } + + Coin::Erc20(token) + }; + + in_instructions.push(InInstruction { + id, + from: *log.from.0, + coin, + amount: log.amount, + data: log.instruction.as_ref().to_vec(), + }); + } + + Ok(in_instructions) + } + + /// Fetch the executed actions from this block. + pub async fn executed(&self, block: u64) -> Result, RpcError> { + let mut res = vec![]; + + { + let filter = Filter::new().from_block(block).to_block(block).address(self.1); + let filter = filter.event_signature(SeraiKeyUpdatedEvent::SIGNATURE_HASH); + let logs = self.0.get_logs(&filter).await?; + + for log in logs { + // Double check the address which emitted this log + if log.address() != self.1 { + Err(TransportErrorKind::Custom( + "node returned a log from a different address than requested".to_string().into(), + ))?; + } + + let log = log + .log_decode::() + .map_err(|e| { + TransportErrorKind::Custom( + format!("filtered to SeraiKeyUpdatedEvent yet couldn't decode log: {e:?}").into(), + ) + })? 
+ .inner + .data; + + res.push(Executed::SetKey { + nonce: log.nonce.try_into().map_err(|e| { + TransportErrorKind::Custom(format!("filtered to convert nonce to u64: {e:?}").into()) + })?, + key: log.key.into(), + }); + } + } + + { + let filter = Filter::new().from_block(block).to_block(block).address(self.1); + let filter = filter.event_signature(ExecutedEvent::SIGNATURE_HASH); + let logs = self.0.get_logs(&filter).await?; + + for log in logs { + // Double check the address which emitted this log + if log.address() != self.1 { + Err(TransportErrorKind::Custom( + "node returned a log from a different address than requested".to_string().into(), + ))?; + } + + let log = log + .log_decode::() + .map_err(|e| { + TransportErrorKind::Custom( + format!("filtered to ExecutedEvent yet couldn't decode log: {e:?}").into(), + ) + })? + .inner + .data; + + res.push(Executed::Batch { + nonce: log.nonce.try_into().map_err(|e| { + TransportErrorKind::Custom(format!("filtered to convert nonce to u64: {e:?}").into()) + })?, + message_hash: log.message_hash.into(), + }); + } + } + + res.sort_by_key(Executed::nonce); + + Ok(res) + } + + /* + #[cfg(feature = "tests")] + pub fn key_updated_filter(&self) -> Filter { + Filter::new().address(self.1).event_signature(SeraiKeyUpdated::SIGNATURE_HASH) + } + #[cfg(feature = "tests")] + pub fn executed_filter(&self) -> Filter { + Filter::new().address(self.1).event_signature(ExecutedEvent::SIGNATURE_HASH) + } + */ +} diff --git a/substrate/client/Cargo.toml b/substrate/client/Cargo.toml index 33bfabf9..f59c70fe 100644 --- a/substrate/client/Cargo.toml +++ b/substrate/client/Cargo.toml @@ -24,7 +24,7 @@ bitvec = { version = "1", default-features = false, features = ["alloc", "serde" hex = "0.4" scale = { package = "parity-scale-codec", version = "3" } -borsh = { version = "1" } +borsh = { version = "1", features = ["derive"] } serde = { version = "1", features = ["derive"], optional = true } serde_json = { version = "1", optional = true } diff --git a/substrate/client/src/networks/ethereum.rs b/substrate/client/src/networks/ethereum.rs index 28ada635..ddf15480 100644 --- a/substrate/client/src/networks/ethereum.rs +++ b/substrate/client/src/networks/ethereum.rs @@ -29,6 +29,13 @@ impl ContractDeployment { } Some(Self { gas, code }) } + + pub fn gas(&self) -> u32 { + self.gas + } + pub fn code(&self) -> &[u8] { + &self.code + } } /// A representation of an Ethereum address. From 7feb7aed2255f3fe6ea952c6eebf9cba4abf72f0 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 17 Sep 2024 01:04:22 -0400 Subject: [PATCH 145/368] Hash the message before the challenge function in the Schnorr contract Slightly more efficient. 
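To make the saving concrete: the challenge hash now covers a fixed number of words regardless of the message's length. A minimal sketch of the resulting hashing pattern, assuming the sha3 crate (the actual challenge function additionally reduces this digest to a scalar, and the names below are illustrative stand-ins for the values it binds before the message):

use sha3::{Digest, Keccak256};

// Illustrative: `nonce_address` and `key_repr` stand in for the values the real
// challenge function binds ahead of the message.
fn challenge_digest(nonce_address: [u8; 20], key_repr: [u8; 32], message: &[u8]) -> [u8; 32] {
  let mut hash = Keccak256::new();
  hash.update(nonce_address);
  hash.update(key_repr);
  // Pre-hashing the arbitrary-length message down to 32 bytes bounds the amount of
  // words the challenge itself hashes, which is where the gas saving comes from
  hash.update(Keccak256::digest(message));
  hash.finalize().into()
}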
--- networks/ethereum/schnorr/contracts/Schnorr.sol | 2 +- networks/ethereum/schnorr/contracts/tests/Schnorr.sol | 2 +- networks/ethereum/schnorr/src/signature.rs | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/networks/ethereum/schnorr/contracts/Schnorr.sol b/networks/ethereum/schnorr/contracts/Schnorr.sol index 69dc208a..247e0fbe 100644 --- a/networks/ethereum/schnorr/contracts/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/Schnorr.sol @@ -15,7 +15,7 @@ library Schnorr { // message := the message signed // c := Schnorr signature challenge // s := Schnorr signature solution - function verify(bytes32 px, bytes memory message, bytes32 c, bytes32 s) + function verify(bytes32 px, bytes32 message, bytes32 c, bytes32 s) internal pure returns (bool) diff --git a/networks/ethereum/schnorr/contracts/tests/Schnorr.sol b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol index 11a3c3bc..412786a3 100644 --- a/networks/ethereum/schnorr/contracts/tests/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol @@ -9,6 +9,6 @@ contract TestSchnorr { pure returns (bool) { - return Schnorr.verify(public_key, message, c, s); + return Schnorr.verify(public_key, keccak256(message), c, s); } } diff --git a/networks/ethereum/schnorr/src/signature.rs b/networks/ethereum/schnorr/src/signature.rs index cd467cea..1af1d60f 100644 --- a/networks/ethereum/schnorr/src/signature.rs +++ b/networks/ethereum/schnorr/src/signature.rs @@ -38,7 +38,7 @@ impl Signature { &Keccak256::digest(x_and_y_coordinates)[12 ..] }); hash.update(key.eth_repr()); - hash.update(message); + hash.update(Keccak256::digest(message)); >::reduce_bytes(&hash.finalize()) } From ee0efe7cde4f762936598673eaf5ba6a74972b51 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 17 Sep 2024 01:05:31 -0400 Subject: [PATCH 146/368] Don't have the Deployer store the deployment block Also updates how re-entrancy is handled to a more efficient and portable mechanism. 
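For reference, resolving a deployment with the reworked contract is a single `eth_call` against the `deployments` mapping, with no log queries. A hypothetical usage sketch, assuming the `Deployer`, `keccak256`, and alloy types these crates already use (the function name and error handling are illustrative):

// Look up the contract deployed for some init code, if it exists
async fn deployed_address(
  provider: Arc<RootProvider<SimpleRequest>>,
  init_code: &[u8],
) -> Result<Option<Address>, RpcError<TransportErrorKind>> {
  // `Deployer::new` solely checks the Deployer itself exists on this chain
  let Some(deployer) = Deployer::new(provider).await? else { return Ok(None) };
  // `deployments` maps keccak256(init_code) directly to the created contract's address
  deployer.find_deployment(ethereum_primitives::keccak256(init_code)).await
}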
---
 .../ethereum/deployer/contracts/Deployer.sol  | 45 +++++--------
 processor/ethereum/deployer/src/lib.rs        | 17 ++++---
 2 files changed, 19 insertions(+), 43 deletions(-)

diff --git a/processor/ethereum/deployer/contracts/Deployer.sol b/processor/ethereum/deployer/contracts/Deployer.sol
index ad217fdc..2d4904e4 100644
--- a/processor/ethereum/deployer/contracts/Deployer.sol
+++ b/processor/ethereum/deployer/contracts/Deployer.sol
@@ -34,41 +34,12 @@ pragma solidity ^0.8.26;
 */
 contract Deployer {
-  struct Deployment {
-    uint64 block_number;
-    address created_contract;
-  }
+  mapping(bytes32 => address) public deployments;

-  mapping(bytes32 => Deployment) public deployments;
-
-  error Reentrancy();
   error PriorDeployed();
   error DeploymentFailed();

   function deploy(bytes memory init_code) external {
-    // Prevent re-entrancy
-    // If we did allow it, one could deploy the same contract multiple times (with one overwriting
-    // the other's set value in storage)
-    bool called;
-    // This contract doesn't have any other use of transient storage, nor is to be inherited, making
-    // this usage of the zero address safe
-    assembly {
-      called := tload(0)
-    }
-    if (called) {
-      revert Reentrancy();
-    }
-    assembly {
-      tstore(0, 1)
-    }
-
-    // Check this wasn't prior deployed
-    bytes32 init_code_hash = keccak256(init_code);
-    Deployment memory deployment = deployments[init_code_hash];
-    if (deployment.created_contract == address(0)) {
-      revert PriorDeployed();
-    }
-
     // Deploy the contract
     address created_contract;
     assembly {
@@ -78,9 +49,15 @@ contract Deployer {
       revert DeploymentFailed();
     }

-    // Set the dpeloyment to storage
-    deployment.block_number = uint64(block.number);
-    deployment.created_contract = created_contract;
-    deployments[init_code_hash] = deployment;
+    bytes32 init_code_hash = keccak256(init_code);
+
+    // Check this wasn't prior deployed
+    // We check this *after* deploying (in violation of CEI) to handle re-entrancy related bugs
+    if (deployments[init_code_hash] != address(0)) {
+      revert PriorDeployed();
+    }
+
+    // Write the deployment to storage
+    deployments[init_code_hash] = created_contract;
   }
 }
diff --git a/processor/ethereum/deployer/src/lib.rs b/processor/ethereum/deployer/src/lib.rs
index bf2d1a9c..6fa59ee3 100644
--- a/processor/ethereum/deployer/src/lib.rs
+++ b/processor/ethereum/deployer/src/lib.rs
@@ -30,7 +30,7 @@ mod abi {
 /// compatible chain. It then supports retrieving the Router contract's address (which isn't
 /// deterministic) using a single call.
 #[derive(Clone, Debug)]
-pub struct Deployer;
+pub struct Deployer(Arc<RootProvider<SimpleRequest>>);
 impl Deployer {
   /// Obtain the transaction to deploy this contract, already signed.
   ///
@@ -38,8 +38,8 @@ impl Deployer {
   /// funded for this transaction to be submitted. This account has no known private key to anyone
   /// so ETH sent can be neither misappropriated nor returned.
   pub fn deployment_tx() -> Signed<TxLegacy> {
-    pub const BYTECODE: &str =
-      include_str!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-deployer/Deployer.bin"));
+    pub const BYTECODE: &[u8] =
+      include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-deployer/Deployer.bin"));

     let bytecode =
       Bytes::from_hex(BYTECODE).expect("compiled-in Deployer bytecode wasn't valid hex");
@@ -75,28 +75,27 @@ impl Deployer {
     if code.is_empty() {
       return Ok(None);
     }
-    Ok(Some(Self))
+    Ok(Some(Self(provider)))
   }

   /// Find the deployment of a contract.
pub async fn find_deployment( &self, - provider: Arc>, init_code_hash: [u8; 32], - ) -> Result, RpcError> { + ) -> Result, RpcError> { let call = TransactionRequest::default().to(Self::address()).input(TransactionInput::new( abi::Deployer::deploymentsCall::new((init_code_hash.into(),)).abi_encode().into(), )); - let bytes = provider.call(&call).await?; + let bytes = self.0.call(&call).await?; let deployment = abi::Deployer::deploymentsCall::abi_decode_returns(&bytes, true) .map_err(|e| { TransportErrorKind::Custom( - format!("node returned a non-Deployment for function returning Deployment: {e:?}").into(), + format!("node returned a non-address for function returning address: {e:?}").into(), ) })? ._0; - if deployment.created_contract == [0; 20] { + if **deployment == [0; 20] { return Ok(None); } Ok(Some(deployment)) From 381495618c507196e06f8f1a9b193d836f9bc86d Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 17 Sep 2024 01:07:08 -0400 Subject: [PATCH 147/368] Trim dead code --- processor/ethereum/contracts/src/lib.rs | 4 +- processor/ethereum/erc20/src/lib.rs | 7 +- .../ethereum/ethereum-serai/src/crypto.rs | 42 ------- .../ethereum/ethereum-serai/src/deployer.rs | 113 ------------------ 4 files changed, 6 insertions(+), 160 deletions(-) delete mode 100644 processor/ethereum/ethereum-serai/src/deployer.rs diff --git a/processor/ethereum/contracts/src/lib.rs b/processor/ethereum/contracts/src/lib.rs index d0a5c076..9087eaed 100644 --- a/processor/ethereum/contracts/src/lib.rs +++ b/processor/ethereum/contracts/src/lib.rs @@ -10,7 +10,7 @@ pub mod erc20 { pub use super::abigen::erc20::IERC20::*; } pub mod router { - pub const BYTECODE: &str = - include_str!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-contracts/Router.bin")); + pub const BYTECODE: &[u8] = + include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-contracts/Router.bin")); pub use super::abigen::router::Router::*; } diff --git a/processor/ethereum/erc20/src/lib.rs b/processor/ethereum/erc20/src/lib.rs index 560ea86c..51f68d0e 100644 --- a/processor/ethereum/erc20/src/lib.rs +++ b/processor/ethereum/erc20/src/lib.rs @@ -22,7 +22,8 @@ use alloy_provider::{Provider, RootProvider}; mod abi { alloy_sol_macro::sol!("contracts/IERC20.sol"); } -use abi::IERC20::{IERC20Calls, Transfer, transferCall, transferFromCall}; +use abi::IERC20::{IERC20Calls, transferCall, transferFromCall}; +pub use abi::IERC20::Transfer; /// A top-level ERC20 transfer #[derive(Clone, Debug)] @@ -50,12 +51,12 @@ impl Erc20 { pub async fn top_level_transfers( &self, block: u64, - to: [u8; 20], + to: Address, ) -> Result, RpcError> { let filter = Filter::new().from_block(block).to_block(block).address(self.1); let filter = filter.event_signature(Transfer::SIGNATURE_HASH); let mut to_topic = [0; 32]; - to_topic[12 ..].copy_from_slice(&to); + to_topic[12 ..].copy_from_slice(to.as_ref()); let filter = filter.topic2(B256::from(to_topic)); let logs = self.0.get_logs(&filter).await?; diff --git a/processor/ethereum/ethereum-serai/src/crypto.rs b/processor/ethereum/ethereum-serai/src/crypto.rs index fc51ae6b..3b9dc58a 100644 --- a/processor/ethereum/ethereum-serai/src/crypto.rs +++ b/processor/ethereum/ethereum-serai/src/crypto.rs @@ -18,48 +18,6 @@ pub use ethereum_schnorr_contract::*; use alloy_core::primitives::{Parity, Signature as AlloySignature, Address}; use alloy_consensus::{SignableTransaction, Signed, TxLegacy}; -pub(crate) fn keccak256(data: &[u8]) -> [u8; 32] { - alloy_core::primitives::keccak256(data).into() -} - -pub(crate) fn 
hash_to_scalar(data: &[u8]) -> Scalar { - >::reduce_bytes(&keccak256(data).into()) -} - -pub(crate) fn address(point: &ProjectivePoint) -> [u8; 20] { - let encoded_point = point.to_encoded_point(false); - **Address::from_raw_public_key(&encoded_point.as_ref()[1 .. 65]) -} - -/// Deterministically sign a transaction. -/// -/// This function panics if passed a transaction with a non-None chain ID. -pub fn deterministically_sign(tx: &TxLegacy) -> Signed { - assert!( - tx.chain_id.is_none(), - "chain ID was Some when deterministically signing a TX (causing a non-deterministic signer)" - ); - - let sig_hash = tx.signature_hash().0; - let mut r = hash_to_scalar(&[sig_hash.as_slice(), b"r"].concat()); - let mut s = hash_to_scalar(&[sig_hash.as_slice(), b"s"].concat()); - loop { - let r_bytes: [u8; 32] = r.to_repr().into(); - let s_bytes: [u8; 32] = s.to_repr().into(); - let v = Parity::NonEip155(false); - let signature = - AlloySignature::from_scalars_and_parity(r_bytes.into(), s_bytes.into(), v).unwrap(); - let tx = tx.clone().into_signed(signature); - if tx.recover_signer().is_ok() { - return tx; - } - - // Re-hash until valid - r = hash_to_scalar(r_bytes.as_ref()); - s = hash_to_scalar(s_bytes.as_ref()); - } -} - /// The HRAm to use for the Schnorr Solidity library. /// /// This will panic if the public key being signed for is not representable within the Schnorr diff --git a/processor/ethereum/ethereum-serai/src/deployer.rs b/processor/ethereum/ethereum-serai/src/deployer.rs deleted file mode 100644 index 88f4a5fb..00000000 --- a/processor/ethereum/ethereum-serai/src/deployer.rs +++ /dev/null @@ -1,113 +0,0 @@ -use std::sync::Arc; - -use alloy_core::primitives::{hex::FromHex, Address, B256, U256, Bytes, TxKind}; -use alloy_consensus::{Signed, TxLegacy}; - -use alloy_sol_types::{SolCall, SolEvent}; - -use alloy_rpc_types_eth::{BlockNumberOrTag, Filter}; -use alloy_simple_request_transport::SimpleRequest; -use alloy_provider::{Provider, RootProvider}; - -use crate::{ - Error, - crypto::{self, keccak256, PublicKey}, - router::Router, -}; -pub use crate::abi::deployer as abi; - -/// The Deployer contract for the Router contract. -/// -/// This Deployer has a deterministic address, letting it be immediately identified on any -/// compatible chain. It then supports retrieving the Router contract's address (which isn't -/// deterministic) using a single log query. -#[derive(Clone, Debug)] -pub struct Deployer; -impl Deployer { - /// Obtain the transaction to deploy this contract, already signed. - /// - /// The account this transaction is sent from (which is populated in `from`) must be sufficiently - /// funded for this transaction to be submitted. This account has no known private key to anyone, - /// so ETH sent can be neither misappropriated nor returned. - pub fn deployment_tx() -> Signed { - let bytecode = contracts::deployer::BYTECODE; - let bytecode = - Bytes::from_hex(bytecode).expect("compiled-in Deployer bytecode wasn't valid hex"); - - let tx = TxLegacy { - chain_id: None, - nonce: 0, - gas_price: 100_000_000_000u128, - // TODO: Use a more accurate gas limit - gas_limit: 1_000_000u128, - to: TxKind::Create, - value: U256::ZERO, - input: bytecode, - }; - - crypto::deterministically_sign(&tx) - } - - /// Obtain the deterministic address for this contract. 
-  pub fn address() -> [u8; 20] {
-    let deployer_deployer =
-      Self::deployment_tx().recover_signer().expect("deployment_tx didn't have a valid signature");
-    **Address::create(&deployer_deployer, 0)
-  }
-
-  /// Construct a new view of the `Deployer`.
-  pub async fn new(provider: Arc<RootProvider<SimpleRequest>>) -> Result<Option<Self>, Error> {
-    let address = Self::address();
-    let code = provider.get_code_at(address.into()).await.map_err(|_| Error::ConnectionError)?;
-    // Contract has yet to be deployed
-    if code.is_empty() {
-      return Ok(None);
-    }
-    Ok(Some(Self))
-  }
-
-  /// Yield the `ContractCall` necessary to deploy the Router.
-  pub fn deploy_router(&self, key: &PublicKey) -> TxLegacy {
-    TxLegacy {
-      to: TxKind::Call(Self::address().into()),
-      input: abi::deployCall::new((Router::init_code(key).into(),)).abi_encode().into(),
-      gas_limit: 1_000_000,
-      ..Default::default()
-    }
-  }
-
-  /// Find the first Router deployed with the specified key as its first key.
-  ///
-  /// This is the Router Serai will use, and is the only way to construct a `Router`.
-  pub async fn find_router(
-    &self,
-    provider: Arc<RootProvider<SimpleRequest>>,
-    key: &PublicKey,
-  ) -> Result<Option<Router>, Error> {
-    let init_code = Router::init_code(key);
-    let init_code_hash = keccak256(&init_code);
-
-    #[cfg(not(test))]
-    let to_block = BlockNumberOrTag::Finalized;
-    #[cfg(test)]
-    let to_block = BlockNumberOrTag::Latest;
-
-    // Find the first log using this init code (where the init code is binding to the key)
-    // TODO: Make an abstraction for event filtering (de-duplicating common code)
-    let filter =
-      Filter::new().from_block(0).to_block(to_block).address(Address::from(Self::address()));
-    let filter = filter.event_signature(abi::Deployment::SIGNATURE_HASH);
-    let filter = filter.topic1(B256::from(init_code_hash));
-    let logs = provider.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;
-
-    let Some(first_log) = logs.first() else { return Ok(None) };
-    let router = first_log
-      .log_decode::<abi::Deployment>()
-      .map_err(|_| Error::ConnectionError)?
-      .inner
-      .data
-      .created;
-
-    Ok(Some(Router::new(provider, router)))
-  }
-}

From d21034c349fb79905ecd6ea8c230f8c8eceb6eef Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Tue, 17 Sep 2024 01:26:37 -0400
Subject: [PATCH 148/368] Add calls to get the messages to sign for the router

---
 .../ethereum/router/contracts/Router.sol |   2 +
 processor/ethereum/router/src/lib.rs      | 169 ++++++------------
 2 files changed, 61 insertions(+), 110 deletions(-)

diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol
index e5a5c53f..bc0debde 100644
--- a/processor/ethereum/router/contracts/Router.sol
+++ b/processor/ethereum/router/contracts/Router.sol
@@ -77,6 +77,8 @@ contract Router {
     external
     _updateSeraiKeyAtEndOfFn(_nonce, newSeraiKey)
   {
+    // This DST needs a length prefix as well to prevent DSTs potentially being substrings of each
+    // other, yet this is fine for our very well-defined, limited use
     bytes32 message =
       keccak256(abi.encodePacked("updateSeraiKey", block.chainid, _nonce, newSeraiKey));
     _nonce++;
diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs
index 4e4abec8..ef1dfd00 100644
--- a/processor/ethereum/router/src/lib.rs
+++ b/processor/ethereum/router/src/lib.rs
@@ -156,6 +156,42 @@ impl InInstruction {
   }
 }
 
+/// A list of `OutInstruction`s.
+#[derive(Clone)] +pub struct OutInstructions(Vec); +impl From<&[(SeraiAddress, (Coin, Amount))]> for OutInstructions { + fn from(outs: &[(SeraiAddress, (Coin, Amount))]) -> Self { + Self( + outs + .iter() + .map(|(address, (coin, amount))| { + #[allow(non_snake_case)] + let (destinationType, destination) = match address { + SeraiAddress::Address(address) => ( + abi::DestinationType::Address, + (abi::AddressDestination { destination: Address::from(address) }).abi_encode(), + ), + SeraiAddress::Contract(contract) => ( + abi::DestinationType::Code, + (abi::CodeDestination { gas: contract.gas(), code: contract.code().to_vec().into() }) + .abi_encode(), + ), + }; + abi::OutInstruction { + destinationType, + destination: destination.into(), + coin: match coin { + Coin::Ether => [0; 20].into(), + Coin::Erc20(address) => address.into(), + }, + value: amount.0.try_into().expect("couldn't convert u64 to u256"), + } + }) + .collect(), + ) + } +} + /// Executed an command. #[derive(Clone, PartialEq, Eq, Debug)] pub enum Executed { @@ -188,13 +224,13 @@ impl Executed { #[derive(Clone, Debug)] pub struct Router(Arc>, Address); impl Router { - pub(crate) fn code() -> Vec { + fn code() -> Vec { const BYTECODE: &[u8] = include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-router/Router.bin")); Bytes::from_hex(BYTECODE).expect("compiled-in Router bytecode wasn't valid hex").to_vec() } - pub(crate) fn init_code(key: &PublicKey) -> Vec { + fn init_code(key: &PublicKey) -> Vec { let mut bytecode = Self::code(); // Append the constructor arguments bytecode.extend((abi::constructorCall { initialSeraiKey: key.eth_repr().into() }).abi_encode()); @@ -226,6 +262,17 @@ impl Router { self.1 } + /// Get the message to be signed in order to update the key for Serai. + pub fn update_serai_key_message(chain_id: U256, nonce: u64, key: &PublicKey) -> Vec { + ( + "updateSeraiKey", + chain_id, + U256::try_from(nonce).expect("couldn't convert u64 to u256"), + key.eth_repr(), + ) + .abi_encode_packed() + } + /// Construct a transaction to update the key representing Serai. pub fn update_serai_key(&self, public_key: &PublicKey, sig: &Signature) -> TxLegacy { // TODO: Set a more accurate gas @@ -239,111 +286,24 @@ impl Router { } } + /// Get the message to be signed in order to execute a series of `OutInstruction`s. + pub fn execute_message(chain_id: U256, nonce: u64, outs: OutInstructions) -> Vec { + ("execute", chain_id, U256::try_from(nonce).expect("couldn't convert u64 to u256"), outs.0) + .abi_encode() + } + /// Construct a transaction to execute a batch of `OutInstruction`s. 
- pub fn execute(&self, outs: &[(SeraiAddress, (Coin, Amount))], sig: &Signature) -> TxLegacy { + pub fn execute(&self, outs: OutInstructions, sig: &Signature) -> TxLegacy { + let outs_len = outs.0.len(); TxLegacy { to: TxKind::Call(self.1), - input: abi::executeCall::new(( - outs - .iter() - .map(|(address, (coin, amount))| { - #[allow(non_snake_case)] - let (destinationType, destination) = match address { - SeraiAddress::Address(address) => ( - abi::DestinationType::Address, - (abi::AddressDestination { destination: Address::from(address) }).abi_encode(), - ), - SeraiAddress::Contract(contract) => ( - abi::DestinationType::Code, - (abi::CodeDestination { - gas: contract.gas(), - code: contract.code().to_vec().into(), - }) - .abi_encode(), - ), - }; - abi::OutInstruction { - destinationType, - destination: destination.into(), - coin: match coin { - Coin::Ether => [0; 20].into(), - Coin::Erc20(address) => address.into(), - }, - value: amount.0.try_into().expect("couldn't convert u64 to u256"), - } - }) - .collect(), - sig.into(), - )) - .abi_encode() - .into(), + input: abi::executeCall::new((outs.0, sig.into())).abi_encode().into(), // TODO - gas_limit: 100_000 + ((200_000 + 10_000) * u128::try_from(outs.len()).unwrap()), + gas_limit: 100_000 + ((200_000 + 10_000) * u128::try_from(outs_len).unwrap()), ..Default::default() } } - /* - /// Get the key for Serai at the specified block. - #[cfg(test)] - pub async fn serai_key(&self, at: [u8; 32]) -> Result> { - let call = TransactionRequest::default() - .to(self.1) - .input(TransactionInput::new(abi::seraiKeyCall::new(()).abi_encode().into())); - let bytes = self - .0 - .call(&call) - .block(BlockId::Hash(B256::from(at).into())) - .await - ?; - let res = - abi::seraiKeyCall::abi_decode_returns(&bytes, true)?; - PublicKey::from_eth_repr(res._0.0).ok_or_else(|| TransportErrorKind::Custom( - "TODO".to_string().into())) - } - */ - - /* - /// Get the message to be signed in order to update the key for Serai. - pub(crate) fn update_serai_key_message(chain_id: U256, nonce: U256, key: &PublicKey) -> Vec { - let mut buffer = b"updateSeraiKey".to_vec(); - buffer.extend(&chain_id.to_be_bytes::<32>()); - buffer.extend(&nonce.to_be_bytes::<32>()); - buffer.extend(&key.eth_repr()); - buffer - } - */ - - /* - /// Get the current nonce for the published batches. - #[cfg(test)] - pub async fn nonce(&self, at: [u8; 32]) -> Result> { - let call = TransactionRequest::default() - .to(self.1) - .input(TransactionInput::new(abi::nonceCall::new(()).abi_encode().into())); - let bytes = self - .0 - .call(&call) - .block(BlockId::Hash(B256::from(at).into())) - .await - ?; - let res = - abi::nonceCall::abi_decode_returns(&bytes, true)?; - Ok(res._0) - } - */ - - /* - /// Get the message to be signed in order to update the key for Serai. - pub(crate) fn execute_message( - chain_id: U256, - nonce: U256, - outs: Vec, - ) -> Vec { - ("execute".to_string(), chain_id, nonce, outs).abi_encode_params() - } - */ - /// Fetch the `InInstruction`s emitted by the Router from this block. 
pub async fn in_instructions( &self, @@ -568,15 +528,4 @@ impl Router { Ok(res) } - - /* - #[cfg(feature = "tests")] - pub fn key_updated_filter(&self) -> Filter { - Filter::new().address(self.1).event_signature(SeraiKeyUpdated::SIGNATURE_HASH) - } - #[cfg(feature = "tests")] - pub fn executed_filter(&self) -> Filter { - Filter::new().address(self.1).event_signature(ExecutedEvent::SIGNATURE_HASH) - } - */ } From 8f2a9301cf32ba7c336ef25195a3a39e43c599a4 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 17 Sep 2024 02:59:01 -0400 Subject: [PATCH 149/368] Don't have the router drop transactions which may have top-level transfers The router will now match the top-level transfer so it isn't used as the justification for the InInstruction it's handling. This allows the theoretical case where a top-level transfer occurs (to any entity) and an internal call performs a transfer to Serai. Also uses a JoinSet for fetching transactions' top-level transfers in the ERC20 crate. This does add a dependency on tokio yet improves performance, and it's scoped under serai-processor (which is always presumed to be tokio-based). While we could instead import futures for join_all, https://github.com/smol-rs/futures-lite/issues/6 summarizes why that wouldn't be a good idea. While we could prefer async-executor over tokio's JoinSet, JoinSet doesn't share the same issues as FuturesUnordered. That means our question is solely if we want the async-executor executor or the tokio executor, when we've already established the Serai processor is always presumed to be tokio-based. --- Cargo.lock | 1 + processor/ethereum/erc20/Cargo.toml | 2 + processor/ethereum/erc20/src/lib.rs | 215 +++++++++++++++++---------- processor/ethereum/router/src/lib.rs | 36 ++--- 4 files changed, 153 insertions(+), 101 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 981275fb..f928e57e 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8748,6 +8748,7 @@ dependencies = [ "alloy-sol-macro", "alloy-sol-types", "alloy-transport", + "tokio", ] [[package]] diff --git a/processor/ethereum/erc20/Cargo.toml b/processor/ethereum/erc20/Cargo.toml index 85bc83c3..3c7f5101 100644 --- a/processor/ethereum/erc20/Cargo.toml +++ b/processor/ethereum/erc20/Cargo.toml @@ -26,3 +26,5 @@ alloy-rpc-types-eth = { version = "0.3", default-features = false } alloy-transport = { version = "0.3", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } alloy-provider = { version = "0.3", default-features = false } + +tokio = { version = "1", default-features = false, features = ["rt"] } diff --git a/processor/ethereum/erc20/src/lib.rs b/processor/ethereum/erc20/src/lib.rs index 51f68d0e..920915e9 100644 --- a/processor/ethereum/erc20/src/lib.rs +++ b/processor/ethereum/erc20/src/lib.rs @@ -13,6 +13,8 @@ use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; +use tokio::task::JoinSet; + #[rustfmt::skip] #[expect(warnings)] #[expect(needless_pass_by_value)] @@ -27,7 +29,7 @@ pub use abi::IERC20::Transfer; /// A top-level ERC20 transfer #[derive(Clone, Debug)] -pub struct TopLevelErc20Transfer { +pub struct TopLevelTransfer { /// The transaction ID which effected this transfer. pub id: [u8; 32], /// The address which made the transfer. 
@@ -38,6 +40,14 @@ pub struct TopLevelErc20Transfer { pub data: Vec, } +/// A transaction with a top-level transfer, matched to the log index of the transfer. +pub struct MatchedTopLevelTransfer { + /// The transfer. + pub transfer: TopLevelTransfer, + /// The log index of the transfer. + pub log_index: u64, +} + /// A view for an ERC20 contract. #[derive(Clone, Debug)] pub struct Erc20(Arc>, Address); @@ -47,12 +57,104 @@ impl Erc20 { Self(provider, Address::from(&address)) } - /// Fetch all top-level transfers to the specified ERC20. + /// Match a transaction for its top-level transfer to the specified address (if one exists). + pub async fn match_top_level_transfer( + provider: impl AsRef>, + transaction_id: B256, + to: Address, + ) -> Result, RpcError> { + // Fetch the transaction + let transaction = + provider.as_ref().get_transaction_by_hash(transaction_id).await?.ok_or_else(|| { + TransportErrorKind::Custom( + "node didn't have the transaction which emitted a log it had".to_string().into(), + ) + })?; + + // If this is a top-level call... + // Don't validate the encoding as this can't be re-encoded to an identical bytestring due + // to the `InInstruction` appended after the call itself + if let Ok(call) = IERC20Calls::abi_decode(&transaction.input, false) { + // Extract the top-level call's from/to/value + let (from, call_to, value) = match call { + IERC20Calls::transfer(transferCall { to, value }) => (transaction.from, to, value), + IERC20Calls::transferFrom(transferFromCall { from, to, value }) => (from, to, value), + // Treat any other function selectors as unrecognized + _ => return Ok(None), + }; + // If this isn't a transfer to the expected address, return None + if call_to != to { + return Ok(None); + } + + // Fetch the transaction's logs + let receipt = + provider.as_ref().get_transaction_receipt(transaction_id).await?.ok_or_else(|| { + TransportErrorKind::Custom( + "node didn't have receipt for a transaction we were matching for a top-level transfer" + .to_string() + .into(), + ) + })?; + + // Find the log for this transfer + for log in receipt.inner.logs() { + // If this log was emitted by a different contract, continue + if Some(log.address()) != transaction.to { + continue; + } + + // Check if this is actually a transfer log + // https://github.com/alloy-rs/core/issues/589 + if log.topics().first() != Some(&Transfer::SIGNATURE_HASH) { + continue; + } + + let log_index = log.log_index.ok_or_else(|| { + TransportErrorKind::Custom("log didn't have its index set".to_string().into()) + })?; + let log = log + .log_decode::() + .map_err(|e| { + TransportErrorKind::Custom(format!("failed to decode Transfer log: {e:?}").into()) + })? + .inner + .data; + + // Ensure the top-level transfer is equivalent to the transfer this log represents. Since + // we can't find the exact top-level transfer without tracing the call, we just rule the + // first equivalent transfer as THE top-level transfer + if !((log.from == from) && (log.to == to) && (log.value == value)) { + continue; + } + + // Read the data appended after + let encoded = call.abi_encode(); + let data = transaction.input.as_ref()[encoded.len() ..].to_vec(); + + return Ok(Some(MatchedTopLevelTransfer { + transfer: TopLevelTransfer { + // Since there's only one top-level transfer per TX, set the ID to the TX ID + id: *transaction_id, + from: *log.from.0, + amount: log.value, + data, + }, + log_index, + })); + } + } + + Ok(None) + } + + /// Fetch all top-level transfers to the specified address. 
pub async fn top_level_transfers( &self, block: u64, to: Address, - ) -> Result, RpcError> { + ) -> Result, RpcError> { + // Get all transfers within this block let filter = Filter::new().from_block(block).to_block(block).address(self.1); let filter = filter.event_signature(Transfer::SIGNATURE_HASH); let mut to_topic = [0; 32]; @@ -60,83 +162,46 @@ impl Erc20 { let filter = filter.topic2(B256::from(to_topic)); let logs = self.0.get_logs(&filter).await?; - /* - A set of all transactions we've handled a transfer from. This handles the edge case where a - top-level transfer T somehow triggers another transfer T', with equivalent contents, within - the same transaction. We only want to report one transfer as only one is top-level. - */ - let mut handled = HashSet::new(); + // These logs are for all transactions which performed any transfer + // We now check each transaction for having a top-level transfer to the specified address + let tx_ids = logs + .into_iter() + .map(|log| { + // Double check the address which emitted this log + if log.address() != self.1 { + Err(TransportErrorKind::Custom( + "node returned logs for a different address than requested".to_string().into(), + ))?; + } + + log.transaction_hash.ok_or_else(|| { + TransportErrorKind::Custom("log didn't specify its transaction hash".to_string().into()) + }) + }) + .collect::, _>>()?; + + let mut join_set = JoinSet::new(); + for tx_id in tx_ids { + join_set.spawn(Self::match_top_level_transfer(self.0.clone(), tx_id, to)); + } let mut top_level_transfers = vec![]; - for log in logs { - // Double check the address which emitted this log - if log.address() != self.1 { - Err(TransportErrorKind::Custom( - "node returned logs for a different address than requested".to_string().into(), - ))?; - } - - let tx_id = log.transaction_hash.ok_or_else(|| { - TransportErrorKind::Custom("log didn't specify its transaction hash".to_string().into()) - })?; - let tx = self.0.get_transaction_by_hash(tx_id).await?.ok_or_else(|| { - TransportErrorKind::Custom( - "node didn't have the transaction which emitted a log it had".to_string().into(), - ) - })?; - - // If this is a top-level call... - if tx.to == Some(self.1) { - // And we recognize the call... - // Don't validate the encoding as this can't be re-encoded to an identical bytestring due - // to the InInstruction appended - if let Ok(call) = IERC20Calls::abi_decode(&tx.input, false) { - // Extract the top-level call's from/to/value - let (from, call_to, value) = match call { - IERC20Calls::transfer(transferCall { to: call_to, value }) => (tx.from, call_to, value), - IERC20Calls::transferFrom(transferFromCall { from, to: call_to, value }) => { - (from, call_to, value) - } - // Treat any other function selectors as unrecognized - _ => continue, - }; - - let log = log - .log_decode::() - .map_err(|e| { - TransportErrorKind::Custom(format!("failed to decode Transfer log: {e:?}").into()) - })? 
- .inner - .data; - - // Ensure the top-level transfer is equivalent, and this presumably isn't a log for an - // internal transfer - if (log.from != from) || (call_to != to) || (value != log.value) { - continue; - } - - // Now that the top-level transfer is confirmed to be equivalent to the log, ensure it's - // the only log we handle - if handled.contains(&tx_id) { - continue; - } - handled.insert(tx_id); - - // Read the data appended after - let encoded = call.abi_encode(); - let data = tx.input.as_ref()[encoded.len() ..].to_vec(); - - // Push the transfer - top_level_transfers.push(TopLevelErc20Transfer { - // Since we'll only handle one log for this TX, set the ID to the TX ID - id: *tx_id, - from: *log.from.0, - amount: log.value, - data, - }); + while let Some(top_level_transfer) = join_set.join_next().await { + // This is an error if a task panics or aborts + // Panicking on a task panic is desired behavior, and we haven't aborted any tasks + match top_level_transfer.unwrap() { + // Top-level transfer + Ok(Some(top_level_transfer)) => top_level_transfers.push(top_level_transfer.transfer), + // Not a top-level transfer + Ok(None) => continue, + // Failed to get this transaction's information so abort + Err(e) => { + join_set.abort_all(); + Err(e)? } } } + Ok(top_level_transfers) } } diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index ef1dfd00..18bc3d4b 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -25,7 +25,7 @@ use alloy_provider::{Provider, RootProvider}; use ethereum_schnorr::{PublicKey, Signature}; use ethereum_deployer::Deployer; -use erc20::Transfer; +use erc20::{Transfer, Erc20}; use serai_client::{primitives::Amount, networks::ethereum::Address as SeraiAddress}; @@ -346,11 +346,6 @@ impl Router { let tx_hash = log.transaction_hash.ok_or_else(|| { TransportErrorKind::Custom("log didn't have its transaction hash set".to_string().into()) })?; - let tx = self.0.get_transaction_by_hash(tx_hash).await?.ok_or_else(|| { - TransportErrorKind::Custom( - "node didn't have a transaction it had the logs of".to_string().into(), - ) - })?; let log = log .log_decode::() @@ -371,23 +366,6 @@ impl Router { continue; } - /* - If this also counts as a top-level transfer of a token, drop it. - - This event will only exist if there's an ERC20 which has some form of programmability - (`onTransferFrom`), and when a top-level transfer was made, that hook made its own call - into the Serai router. - - If such an ERC20 exists, Serai would parse it as a top-level transfer and as a router - InInstruction. While no such ERC20 is planned to be integrated, this enures we don't - allow a double-spend on that premise. - - TODO: See below note. - */ - if tx.to == Some(token.into()) { - continue; - } - // Get all logs for this TX let receipt = self.0.get_transaction_receipt(tx_hash).await?.ok_or_else(|| { TransportErrorKind::Custom( @@ -397,9 +375,14 @@ impl Router { let tx_logs = receipt.inner.logs(); /* - TODO: If this is also a top-level transfer, drop the log from the top-level transfer and - only iterate over the rest of the logs. + The transfer which causes an InInstruction event won't be a top-level transfer. + Accordingly, when looking for the matching transfer, disregard the top-level transfer (if + one exists). */ + if let Some(matched) = Erc20::match_top_level_transfer(&self.0, tx_hash, self.1).await? 
{ + // Mark this log index as used so it isn't used again + transfer_check.insert(matched.log_index); + } // Find a matching transfer log let mut found_transfer = false; @@ -409,6 +392,7 @@ impl Router { "log in transaction receipt didn't have its log index set".to_string().into(), ) })?; + // Ensure we didn't already use this transfer to check a distinct InInstruction event if transfer_check.contains(&log_index) { continue; @@ -420,7 +404,7 @@ impl Router { } // Check if this is a transfer log // https://github.com/alloy-rs/core/issues/589 - if tx_log.topics()[0] != Transfer::SIGNATURE_HASH { + if tx_log.topics().first() != Some(&Transfer::SIGNATURE_HASH) { continue; } let Ok(transfer) = Transfer::decode_log(&tx_log.inner.clone(), true) else { continue }; From 433beac93a320628f55e01fc40ce6ec9cc6f03dc Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 18 Sep 2024 00:54:20 -0400 Subject: [PATCH 150/368] Ethereum SignableTransaction, Eventuality --- processor/ethereum/Cargo.toml | 17 +- processor/ethereum/router/Cargo.toml | 1 - processor/ethereum/router/src/lib.rs | 48 ++- processor/ethereum/src/key_gen.rs | 2 +- processor/ethereum/src/main.rs | 10 +- processor/ethereum/src/primitives/block.rs | 39 ++- processor/ethereum/src/primitives/mod.rs | 6 + processor/ethereum/src/primitives/output.rs | 14 +- .../ethereum/src/primitives/transaction.rs | 297 +++++++++++++++--- processor/ethereum/src/rpc.rs | 10 +- processor/ethereum/src/scheduler.rs | 75 ++--- 11 files changed, 390 insertions(+), 129 deletions(-) diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml index dfed2f9d..9a3b264c 100644 --- a/processor/ethereum/Cargo.toml +++ b/processor/ethereum/Cargo.toml @@ -26,10 +26,18 @@ borsh = { version = "1", default-features = false, features = ["std", "derive", ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "secp256k1"] } dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-secp256k1"] } -frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } +frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false, features = ["secp256k1"] } k256 = { version = "^0.13.1", default-features = false, features = ["std"] } -ethereum-serai = { path = "./ethereum-serai", default-features = false, optional = true } + +alloy-core = { version = "0.8", default-features = false } +alloy-rlp = { version = "0.3", default-features = false } +alloy-consensus = { version = "0.3", default-features = false } + +alloy-rpc-types-eth = { version = "0.3", default-features = false } +alloy-simple-request-transport = { path = "../../networks/ethereum/alloy-simple-request-transport", default-features = false } +alloy-rpc-client = { version = "0.3", default-features = false } +alloy-provider = { version = "0.3", default-features = false } serai-client = { path = "../../substrate/client", default-features = false, features = ["ethereum"] } @@ -48,6 +56,11 @@ scanner = { package = "serai-processor-scanner", path = "../scanner" } smart-contract-scheduler = { package = "serai-processor-smart-contract-scheduler", path = "../scheduler/smart-contract" } signers = { package = "serai-processor-signers", path = "../signers" } +ethereum-schnorr = { package = "ethereum-schnorr-contract", path = "../../networks/ethereum/schnorr" } +ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "./primitives" } +ethereum-router = { package = 
"serai-processor-ethereum-router", path = "./router" } +ethereum-erc20 = { package = "serai-processor-ethereum-erc20", path = "./erc20" } + bin = { package = "serai-processor-bin", path = "../bin" } [features] diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index ed5417c0..e8884eae 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -24,7 +24,6 @@ alloy-core = { version = "0.8", default-features = false } alloy-consensus = { version = "0.3", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } -alloy-sol-macro = { version = "0.8", default-features = false } alloy-rpc-types-eth = { version = "0.3", default-features = false } alloy-transport = { version = "0.3", default-features = false } diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index 18bc3d4b..344e2bee 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -27,7 +27,7 @@ use ethereum_schnorr::{PublicKey, Signature}; use ethereum_deployer::Deployer; use erc20::{Transfer, Erc20}; -use serai_client::{primitives::Amount, networks::ethereum::Address as SeraiAddress}; +use serai_client::networks::ethereum::Address as SeraiAddress; #[rustfmt::skip] #[expect(warnings)] @@ -159,8 +159,8 @@ impl InInstruction { /// A list of `OutInstruction`s. #[derive(Clone)] pub struct OutInstructions(Vec); -impl From<&[(SeraiAddress, (Coin, Amount))]> for OutInstructions { - fn from(outs: &[(SeraiAddress, (Coin, Amount))]) -> Self { +impl From<&[(SeraiAddress, (Coin, U256))]> for OutInstructions { + fn from(outs: &[(SeraiAddress, (Coin, U256))]) -> Self { Self( outs .iter() @@ -184,7 +184,7 @@ impl From<&[(SeraiAddress, (Coin, Amount))]> for OutInstructions { Coin::Ether => [0; 20].into(), Coin::Erc20(address) => address.into(), }, - value: amount.0.try_into().expect("couldn't convert u64 to u256"), + value: *amount, } }) .collect(), @@ -192,7 +192,7 @@ impl From<&[(SeraiAddress, (Coin, Amount))]> for OutInstructions { } } -/// Executed an command. +/// An action which was executed by the Router. #[derive(Clone, PartialEq, Eq, Debug)] pub enum Executed { /// Set a new key. @@ -218,6 +218,44 @@ impl Executed { Executed::SetKey { nonce, .. } | Executed::Batch { nonce, .. } => *nonce, } } + + /// Write the Executed. + pub fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { + match self { + Self::SetKey { nonce, key } => { + writer.write_all(&[0])?; + writer.write_all(&nonce.to_le_bytes())?; + writer.write_all(key) + } + Self::Batch { nonce, message_hash } => { + writer.write_all(&[1])?; + writer.write_all(&nonce.to_le_bytes())?; + writer.write_all(message_hash) + } + } + } + + /// Read an Executed. + pub fn read(reader: &mut impl io::Read) -> io::Result { + let mut kind = [0xff]; + reader.read_exact(&mut kind)?; + if kind[0] >= 2 { + Err(io::Error::other("unrecognized type of Executed"))?; + } + + let mut nonce = [0; 8]; + reader.read_exact(&mut nonce)?; + let nonce = u64::from_le_bytes(nonce); + + let mut payload = [0; 32]; + reader.read_exact(&mut payload)?; + + Ok(match kind[0] { + 0 => Self::SetKey { nonce, key: payload }, + 1 => Self::Batch { nonce, message_hash: payload }, + _ => unreachable!(), + }) + } } /// A view of the Router for Serai. 
diff --git a/processor/ethereum/src/key_gen.rs b/processor/ethereum/src/key_gen.rs index 73b7c1e1..581684ef 100644 --- a/processor/ethereum/src/key_gen.rs +++ b/processor/ethereum/src/key_gen.rs @@ -1,7 +1,7 @@ use ciphersuite::{Ciphersuite, Secp256k1}; use dkg::ThresholdKeys; -use ethereum_serai::crypto::PublicKey; +use ethereum_schnorr::PublicKey; pub(crate) struct KeyGenParams; impl key_gen::KeyGenParams for KeyGenParams { diff --git a/processor/ethereum/src/main.rs b/processor/ethereum/src/main.rs index e4ec3701..06c0bc98 100644 --- a/processor/ethereum/src/main.rs +++ b/processor/ethereum/src/main.rs @@ -8,12 +8,10 @@ static ALLOCATOR: zalloc::ZeroizingAlloc = use std::sync::Arc; -use ethereum_serai::alloy::{ - primitives::U256, - simple_request_transport::SimpleRequest, - rpc_client::ClientBuilder, - provider::{Provider, RootProvider}, -}; +use alloy_core::primitives::U256; +use alloy_simple_request_transport::SimpleRequest; +use alloy_rpc_client::ClientBuilder; +use alloy_provider::{Provider, RootProvider}; use serai_env as env; diff --git a/processor/ethereum/src/primitives/block.rs b/processor/ethereum/src/primitives/block.rs index e947e851..2c0e0505 100644 --- a/processor/ethereum/src/primitives/block.rs +++ b/processor/ethereum/src/primitives/block.rs @@ -5,6 +5,9 @@ use ciphersuite::{Ciphersuite, Secp256k1}; use serai_client::networks::ethereum::Address; use primitives::{ReceivedOutput, EventualityTracker}; + +use ethereum_router::Executed; + use crate::{output::Output, transaction::Eventuality}; // We interpret 32-block Epochs as singular blocks. @@ -37,9 +40,11 @@ impl primitives::BlockHeader for Epoch { } } -#[derive(Clone, Copy, PartialEq, Eq, Debug)] +#[derive(Clone, PartialEq, Eq, Debug)] pub(crate) struct FullEpoch { epoch: Epoch, + outputs: Vec, + executed: Vec, } impl primitives::Block for FullEpoch { @@ -54,7 +59,8 @@ impl primitives::Block for FullEpoch { self.epoch.end_hash } - fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec { + fn scan_for_outputs_unordered(&self, _key: Self::Key) -> Vec { + // Only return these outputs for the latest key todo!("TODO") } @@ -66,6 +72,33 @@ impl primitives::Block for FullEpoch { >::TransactionId, Self::Eventuality, > { - todo!("TODO") + let mut res = HashMap::new(); + for executed in &self.executed { + let Some(expected) = + eventualities.active_eventualities.remove(executed.nonce().to_le_bytes().as_slice()) + else { + continue; + }; + assert_eq!( + executed, + &expected.0, + "Router emitted distinct event for nonce {}", + executed.nonce() + ); + /* + The transaction ID is used to determine how internal outputs from this transaction should + be handled (if they were actually internal or if they were just to an internal address). + The Ethereum integration doesn't have internal addresses, and this transaction wasn't made + by Serai. It was simply authorized by Serai yet may or may not be associated with other + actions we don't want to flag as our own. + + Accordingly, we set the transaction ID to the nonce. This is unique barring someone finding + the preimage which hashes to this nonce, and won't cause any other data to be associated. + */ + let mut tx_id = [0; 32]; + tx_id[.. 
diff --git a/processor/ethereum/src/primitives/mod.rs b/processor/ethereum/src/primitives/mod.rs
index fba52dd9..8d2a9118 100644
--- a/processor/ethereum/src/primitives/mod.rs
+++ b/processor/ethereum/src/primitives/mod.rs
@@ -1,3 +1,9 @@
 pub(crate) mod output;
 pub(crate) mod transaction;
 pub(crate) mod block;
+
+pub(crate) const DAI: [u8; 20] =
+  match const_hex::const_decode_to_array(b"0x6B175474E89094C44Da98b954EedeAC495271d0F") {
+    Ok(res) => res,
+    Err(_) => panic!("invalid non-test DAI hex address"),
+  };
diff --git a/processor/ethereum/src/primitives/output.rs b/processor/ethereum/src/primitives/output.rs
index 4dadb147..843f22f6 100644
--- a/processor/ethereum/src/primitives/output.rs
+++ b/processor/ethereum/src/primitives/output.rs
@@ -2,10 +2,7 @@ use std::io;
 
 use ciphersuite::{Ciphersuite, Secp256k1};
 
-use ethereum_serai::{
-  alloy::primitives::U256,
-  router::{Coin as EthereumCoin, InInstruction as EthereumInInstruction},
-};
+use alloy_core::primitives::U256;
 
 use scale::{Encode, Decode};
 use borsh::{BorshSerialize, BorshDeserialize};
@@ -16,12 +13,9 @@ use serai_client::{
 };
 
 use primitives::{OutputType, ReceivedOutput};
+use ethereum_router::{Coin as EthereumCoin, InInstruction as EthereumInInstruction};
 
-const DAI: [u8; 20] =
-  match const_hex::const_decode_to_array(b"0x6B175474E89094C44Da98b954EedeAC495271d0F") {
-    Ok(res) => res,
-    Err(_) => panic!("invalid non-test DAI hex address"),
-  };
+use crate::DAI;
 
 fn coin_to_serai_coin(coin: &EthereumCoin) -> Option<Coin> {
   match coin {
@@ -87,7 +81,7 @@ impl ReceivedOutput<<Secp256k1 as Ciphersuite>::G, Address> for Output {
   }
 
   fn key(&self) -> <Secp256k1 as Ciphersuite>::G {
-    self.0.key_at_end_of_block
+    todo!("TODO")
   }
 
   fn presumed_origin(&self) -> Option<Address> {
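The `DAI` constant hoisted into `mod.rs` above is decoded at compile time: `const_hex::const_decode_to_array` is a `const fn`, so a malformed literal fails the build instead of panicking at runtime. A sketch of the same pattern, assuming the `const_hex` crate and using a hypothetical address:

```rust
// Assumes the const_hex crate (as used by the patch); the address here is hypothetical.
const EXAMPLE: [u8; 20] =
  match const_hex::const_decode_to_array(b"0x0000000000000000000000000000000000000001") {
    Ok(res) => res,
    // Evaluated at compile time, so a malformed literal is a build error, not a runtime panic.
    Err(_) => panic!("invalid example hex address"),
  };

fn main() {
  assert_eq!(EXAMPLE[19], 1);
}
```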
diff --git a/processor/ethereum/src/primitives/transaction.rs b/processor/ethereum/src/primitives/transaction.rs
index 908358ec..f77153ff 100644
--- a/processor/ethereum/src/primitives/transaction.rs
+++ b/processor/ethereum/src/primitives/transaction.rs
@@ -1,101 +1,304 @@
-use std::io;
+use std::{io, collections::HashMap};
 
 use rand_core::{RngCore, CryptoRng};
 
-use ciphersuite::{group::GroupEncoding, Ciphersuite, Secp256k1};
-use frost::{dkg::ThresholdKeys, sign::PreprocessMachine};
+use ciphersuite::{Ciphersuite, Secp256k1};
+use frost::{
+  dkg::{Participant, ThresholdKeys},
+  FrostError,
+  algorithm::*,
+  sign::*,
+};
 
-use ethereum_serai::{crypto::PublicKey, machine::*};
+use alloy_core::primitives::U256;
+
+use serai_client::networks::ethereum::Address;
+
+use scheduler::SignableTransaction;
+
+use ethereum_primitives::keccak256;
+use ethereum_schnorr::{PublicKey, Signature};
+use ethereum_router::{Coin, OutInstructions, Executed, Router};
 
 use crate::output::OutputId;
 
-#[derive(Clone, Debug)]
-pub(crate) struct Transaction(pub(crate) SignedRouterCommand);
+#[derive(Clone, PartialEq, Debug)]
+pub(crate) enum Action {
+  SetKey { chain_id: U256, nonce: u64, key: PublicKey },
+  Batch { chain_id: U256, nonce: u64, outs: Vec<(Address, (Coin, U256))> },
+}
 
-impl From<SignedRouterCommand> for Transaction {
-  fn from(signed_router_command: SignedRouterCommand) -> Self {
-    Self(signed_router_command)
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub(crate) struct Eventuality(pub(crate) Executed);
+
+impl Action {
+  fn nonce(&self) -> u64 {
+    match self {
+      Action::SetKey { nonce, .. } | Action::Batch { nonce, .. } => *nonce,
+    }
+  }
+
+  fn message(&self) -> Vec<u8> {
+    match self {
+      Action::SetKey { chain_id, nonce, key } => {
+        Router::update_serai_key_message(*chain_id, *nonce, key)
+      }
+      Action::Batch { chain_id, nonce, outs } => {
+        Router::execute_message(*chain_id, *nonce, OutInstructions::from(outs.as_ref()))
+      }
+    }
+  }
+
+  pub(crate) fn eventuality(&self) -> Eventuality {
+    Eventuality(match self {
+      Self::SetKey { chain_id: _, nonce, key } => {
+        Executed::SetKey { nonce: *nonce, key: key.eth_repr() }
+      }
+      Self::Batch { chain_id, nonce, outs } => Executed::Batch {
+        nonce: *nonce,
+        message_hash: keccak256(Router::execute_message(
+          *chain_id,
+          *nonce,
+          OutInstructions::from(outs.as_ref()),
+        )),
+      },
+    })
   }
 }
 
+#[derive(Clone, PartialEq, Debug)]
+pub(crate) struct Transaction(Action, Signature);
 impl scheduler::Transaction for Transaction {
   fn read(reader: &mut impl io::Read) -> io::Result<Self> {
-    SignedRouterCommand::read(reader).map(Self)
+    /*
+      let buf: Vec<u8> = borsh::from_reader(reader)?;
+      // We can only read this from a &[u8], hence prior reading into a Vec<u8>
+      ::decode(&mut buf.as_slice())
+        .map(Self)
+        .map_err(io::Error::other)
+    */
+    let action = Action::read(reader)?;
+    let signature = Signature::read(reader)?;
+    Ok(Transaction(action, signature))
   }
   fn write(&self, writer: &mut impl io::Write) -> io::Result<()> {
-    self.0.write(writer)
+    /*
+      let mut buf = Vec::with_capacity(256);
+      ::encode(&self.0, &mut buf);
+      borsh::BorshSerialize::serialize(&buf, writer)
+    */
+    self.0.write(writer)?;
+    self.1.write(writer)?;
+    Ok(())
   }
 }
 
-#[derive(Clone, Debug)]
-pub(crate) struct SignableTransaction(pub(crate) RouterCommand);
+/// The HRAm to use for the Schnorr Solidity library.
+///
+/// This will panic if the public key being signed for is not representable within the Schnorr
+/// Solidity library.
+#[derive(Clone, Default, Debug)]
+pub struct EthereumHram;
+impl Hram<Secp256k1> for EthereumHram {
+  #[allow(non_snake_case)]
+  fn hram(
+    R: &<Secp256k1 as Ciphersuite>::G,
+    A: &<Secp256k1 as Ciphersuite>::G,
+    m: &[u8],
+  ) -> <Secp256k1 as Ciphersuite>::F {
+    Signature::challenge(*R, &PublicKey::new(*A).unwrap(), m)
+  }
+}
 
 #[derive(Clone)]
-pub(crate) struct ClonableTransctionMachine(RouterCommand, ThresholdKeys<Secp256k1>);
+pub(crate) struct ClonableTransctionMachine(ThresholdKeys<Secp256k1>, Action);
+
+type LiteralAlgorithmMachine = AlgorithmMachine<Secp256k1, IetfSchnorr<Secp256k1, EthereumHram>>;
+type LiteralAlgorithmSignMachine =
+  AlgorithmSignMachine<Secp256k1, IetfSchnorr<Secp256k1, EthereumHram>>;
+
+pub(crate) struct ActionSignMachine(PublicKey, Action, LiteralAlgorithmSignMachine);
+
+type LiteralAlgorithmSignatureMachine =
+  AlgorithmSignatureMachine<Secp256k1, IetfSchnorr<Secp256k1, EthereumHram>>;
+
+pub(crate) struct ActionSignatureMachine(PublicKey, Action, LiteralAlgorithmSignatureMachine);
+
 impl PreprocessMachine for ClonableTransctionMachine {
-  type Preprocess = <RouterCommandMachine as PreprocessMachine>::Preprocess;
-  type Signature = <RouterCommandMachine as PreprocessMachine>::Signature;
-  type SignMachine = <RouterCommandMachine as PreprocessMachine>::SignMachine;
+  type Preprocess = <LiteralAlgorithmMachine as PreprocessMachine>::Preprocess;
+  type Signature = Transaction;
+  type SignMachine = ActionSignMachine;
 
   fn preprocess<R: RngCore + CryptoRng>(
     self,
     rng: &mut R,
   ) -> (Self::SignMachine, Self::Preprocess) {
-    // TODO: Use a proper error here, not an Option
-    RouterCommandMachine::new(self.1.clone(), self.0.clone()).unwrap().preprocess(rng)
+    let (machine, preprocess) =
+      AlgorithmMachine::new(IetfSchnorr::<Secp256k1, EthereumHram>::ietf(), self.0.clone())
+        .preprocess(rng);
+    (
+      ActionSignMachine(
+        PublicKey::new(self.0.group_key()).expect("signing with non-representable key"),
+        self.1,
+        machine,
+      ),
+      preprocess,
+    )
   }
 }
 
-impl scheduler::SignableTransaction for SignableTransaction {
+impl SignMachine<Transaction> for ActionSignMachine {
+  type Params = <LiteralAlgorithmSignMachine as SignMachine<
+    <LiteralAlgorithmMachine as PreprocessMachine>::Signature,
+  >>::Params;
+  type Keys = <LiteralAlgorithmSignMachine as SignMachine<
+    <LiteralAlgorithmMachine as PreprocessMachine>::Signature,
+  >>::Keys;
+  type Preprocess = <LiteralAlgorithmSignMachine as SignMachine<
+    <LiteralAlgorithmMachine as PreprocessMachine>::Signature,
+  >>::Preprocess;
+  type SignatureShare = <LiteralAlgorithmSignMachine as SignMachine<
+    <LiteralAlgorithmMachine as PreprocessMachine>::Signature,
+  >>::SignatureShare;
+  type SignatureMachine = ActionSignatureMachine;
+
+  fn cache(self) -> CachedPreprocess {
+    unimplemented!()
+  }
+  fn from_cache(
+    params: Self::Params,
+    keys: Self::Keys,
+    cache: CachedPreprocess,
+  ) -> (Self, Self::Preprocess) {
+    unimplemented!()
+  }
+
+  fn read_preprocess<R: io::Read>(&self, reader: &mut R) -> io::Result<Self::Preprocess> {
+    self.2.read_preprocess(reader)
+  }
+  fn sign(
+    self,
+    commitments: HashMap<Participant, Self::Preprocess>,
+    msg: &[u8],
+  ) -> Result<(Self::SignatureMachine, Self::SignatureShare), FrostError> {
+    assert!(msg.is_empty());
+    self
+      .2
+      .sign(commitments, &self.1.message())
+      .map(|(machine, shares)| (ActionSignatureMachine(self.0, self.1, machine), shares))
+  }
+}
+
+impl SignatureMachine<Transaction> for ActionSignatureMachine {
+  type SignatureShare = <LiteralAlgorithmSignatureMachine as SignatureMachine<
+    <LiteralAlgorithmMachine as PreprocessMachine>::Signature,
+  >>::SignatureShare;
+
+  fn read_share<R: io::Read>(&self, reader: &mut R) -> io::Result<Self::SignatureShare> {
+    self.2.read_share(reader)
+  }
+
+  fn complete(
+    self,
+    shares: HashMap<Participant, Self::SignatureShare>,
+  ) -> Result<Transaction, FrostError> {
+    /*
+      match self.1 {
+        Action::SetKey { chain_id: _, nonce: _, key } => self.0.update_serai_key(key, signature),
+        Action::Batch { chain_id: _, nonce: _, outs } => self.0.execute(outs, signature),
+      }
+    */
+    self.2.complete(shares).map(|signature| {
+      let s = signature.s;
+      let c = Signature::challenge(signature.R, &self.0, &self.1.message());
+      Transaction(self.1, Signature::new(c, s))
+    })
+  }
+}
+
+impl SignableTransaction for Action {
   type Transaction = Transaction;
   type Ciphersuite = Secp256k1;
   type PreprocessMachine = ClonableTransctionMachine;
 
   fn read(reader: &mut impl io::Read) -> io::Result<Self> {
-    RouterCommand::read(reader).map(Self)
+    let mut kind = [0xff];
+    reader.read_exact(&mut kind)?;
+    if kind[0] >= 2 {
+      Err(io::Error::other("unrecognized Action type"))?;
+    }
+
+    let mut chain_id = [0; 32];
+    reader.read_exact(&mut chain_id)?;
+    let chain_id = U256::from_le_bytes(chain_id);
+
+    let mut nonce = [0; 8];
+    reader.read_exact(&mut nonce)?;
+    let nonce = u64::from_le_bytes(nonce);
+
+    Ok(match kind[0] {
+      0 => {
+        let mut key = [0; 32];
+        reader.read_exact(&mut key)?;
+        let key =
+          PublicKey::from_eth_repr(key).ok_or_else(|| io::Error::other("invalid key in Action"))?;
+
+        Action::SetKey { chain_id, nonce, key }
+      }
+      1 => {
+        let mut outs_len = [0; 4];
+        reader.read_exact(&mut outs_len)?;
+        let outs_len = usize::try_from(u32::from_le_bytes(outs_len)).unwrap();
+
+        let mut outs = vec![];
+        for _ in 0 .. outs_len {
+          let address = borsh::from_reader(reader)?;
+          let coin = Coin::read(reader)?;
+
+          let mut amount = [0; 32];
+          reader.read_exact(&mut amount)?;
+          let amount = U256::from_le_bytes(amount);
+
+          outs.push((address, (coin, amount)));
+        }
+        Action::Batch { chain_id, nonce, outs }
+      }
+      _ => unreachable!(),
+    })
   }
 
   fn write(&self, writer: &mut impl io::Write) -> io::Result<()> {
-    self.0.write(writer)
+    match self {
+      Self::SetKey { chain_id, nonce, key } => {
+        writer.write_all(&[0])?;
+        writer.write_all(&chain_id.as_le_bytes())?;
+        writer.write_all(&nonce.to_le_bytes())?;
+        writer.write_all(&key.eth_repr())
+      }
+      Self::Batch { chain_id, nonce, outs } => {
+        writer.write_all(&[1])?;
+        writer.write_all(&chain_id.as_le_bytes())?;
+        writer.write_all(&nonce.to_le_bytes())?;
+        writer.write_all(&u32::try_from(outs.len()).unwrap().to_le_bytes())?;
+        for (address, (coin, amount)) in outs {
+          borsh::BorshSerialize::serialize(address, writer)?;
+          coin.write(writer)?;
+          writer.write_all(&amount.as_le_bytes())?;
+        }
+        Ok(())
+      }
+    }
   }
 
   fn id(&self) -> [u8; 32] {
     let mut res = [0; 32];
-    // TODO: Add getter for the nonce
-    match self.0 {
-      RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => {
-        res[.. 8].copy_from_slice(&nonce.as_le_bytes());
-      }
-    }
+    res[.. 8].copy_from_slice(&self.nonce().to_le_bytes());
     res
   }
 
   fn sign(self, keys: ThresholdKeys<Secp256k1>) -> Self::PreprocessMachine {
-    ClonableTransctionMachine(self.0, keys)
+    ClonableTransctionMachine(keys, self)
   }
 }
 
-#[derive(Clone, PartialEq, Eq, Debug)]
-pub(crate) struct Eventuality(pub(crate) PublicKey, pub(crate) RouterCommand);
-
 impl primitives::Eventuality for Eventuality {
   type OutputId = OutputId;
 
   fn id(&self) -> [u8; 32] {
     let mut res = [0; 32];
-    match self.1 {
-      RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => {
-        res[.. 8].copy_from_slice(&nonce.as_le_bytes());
-      }
-    }
+    res[.. 8].copy_from_slice(&self.0.nonce().to_le_bytes());
     res
   }
 
   fn lookup(&self) -> Vec<u8> {
-    match self.1 {
-      RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => {
-        nonce.as_le_bytes().to_vec()
-      }
-    }
+    self.0.nonce().to_le_bytes().to_vec()
   }
 
   fn singular_spent_output(&self) -> Option<Self::OutputId> {
@@ -103,15 +306,9 @@ impl primitives::Eventuality for Eventuality {
   }
 
   fn read(reader: &mut impl io::Read) -> io::Result<Self> {
-    let point = Secp256k1::read_G(reader)?;
-    let command = RouterCommand::read(reader)?;
-    Ok(Eventuality(
-      PublicKey::new(point).ok_or(io::Error::other("unusable key within Eventuality"))?,
-      command,
-    ))
+    Executed::read(reader).map(Self)
   }
   fn write(&self, writer: &mut impl io::Write) -> io::Result<()> {
-    writer.write_all(self.0.point().to_bytes().as_slice())?;
-    self.1.write(writer)
+    self.0.write(writer)
   }
 }
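The machines above wrap FROST's signing flow so the message is derived from the wrapped `Action` rather than supplied by the caller: `sign` asserts the caller's message is empty and substitutes `Action::message()`, binding the signature to the canonical Router calldata. A toy sketch of that wrapper pattern, with all types as hypothetical mocks (not the `frost` or `ethereum-schnorr` APIs):

```rust
// Stand-in for the inner FROST signing machine; here it just echoes the message it signed.
struct InnerMachine;
impl InnerMachine {
  fn sign(self, msg: &[u8]) -> Vec<u8> {
    msg.to_vec()
  }
}

// Stand-in for Action: it knows the one canonical message the Router will verify.
struct Action {
  nonce: u64,
}
impl Action {
  fn message(&self) -> Vec<u8> {
    self.nonce.to_le_bytes().to_vec()
  }
}

struct ActionSignMachine {
  action: Action,
  inner: InnerMachine,
}
impl ActionSignMachine {
  fn sign(self, msg: &[u8]) -> Vec<u8> {
    // Callers must not supply their own message; the wrapped action determines it.
    assert!(msg.is_empty());
    self.inner.sign(&self.action.message())
  }
}

fn main() {
  let machine = ActionSignMachine { action: Action { nonce: 7 }, inner: InnerMachine };
  assert_eq!(machine.sign(&[]), 7u64.to_le_bytes().to_vec());
}
```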
diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs
index 58b3933e..819fbf48 100644
--- a/processor/ethereum/src/rpc.rs
+++ b/processor/ethereum/src/rpc.rs
@@ -1,13 +1,9 @@
 use core::future::Future;
 use std::sync::Arc;
 
-use ethereum_serai::{
-  alloy::{
-    rpc_types::{BlockTransactionsKind, BlockNumberOrTag},
-    simple_request_transport::SimpleRequest,
-    provider::{Provider, RootProvider},
-  },
-};
+use alloy_rpc_types_eth::{BlockTransactionsKind, BlockNumberOrTag};
+use alloy_simple_request_transport::SimpleRequest;
+use alloy_provider::{Provider, RootProvider};
 
 use serai_client::primitives::{NetworkId, Coin, Amount};
 
diff --git a/processor/ethereum/src/scheduler.rs b/processor/ethereum/src/scheduler.rs
index 6e17ef70..ca636b5b 100644
--- a/processor/ethereum/src/scheduler.rs
+++ b/processor/ethereum/src/scheduler.rs
@@ -1,14 +1,23 @@
-use serai_client::primitives::{NetworkId, Balance};
+use alloy_core::primitives::U256;
 
-use ethereum_serai::{alloy::primitives::U256, router::PublicKey, machine::*};
+use serai_client::primitives::{NetworkId, Coin, Balance};
 
 use primitives::Payment;
 use scanner::{KeyFor, AddressFor, EventualityFor};
 
-use crate::{
-  transaction::{SignableTransaction, Eventuality},
-  rpc::Rpc,
-};
+use ethereum_schnorr::PublicKey;
+use ethereum_router::Coin as EthereumCoin;
+
+use crate::{DAI, transaction::Action, rpc::Rpc};
+
+fn coin_to_ethereum_coin(coin: Coin) -> EthereumCoin {
+  assert_eq!(coin.network(), NetworkId::Ethereum);
+  match coin {
+    Coin::Ether => EthereumCoin::Ether,
+    Coin::Dai => EthereumCoin::Erc20(DAI),
+    _ => unreachable!(),
+  }
+}
 
 fn balance_to_ethereum_amount(balance: Balance) -> U256 {
   assert_eq!(balance.coin.network(), NetworkId::Ethereum);
@@ -24,7 +33,7 @@ pub(crate) struct SmartContract {
   pub(crate) chain_id: U256,
 }
 impl smart_contract_scheduler::SmartContract<Rpc> for SmartContract {
-  type SignableTransaction = SignableTransaction;
+  type SignableTransaction = Action;
 
   fn rotate(
     &self,
@@ -32,16 +41,14 @@ impl smart_contract_scheduler::SmartContract<Rpc> for SmartContract {
     retiring_key: KeyFor<Rpc>,
     new_key: KeyFor<Rpc>,
   ) -> (Self::SignableTransaction, EventualityFor<Rpc>) {
-    let command = RouterCommand::UpdateSeraiKey {
+    let action = Action::SetKey {
       chain_id: self.chain_id,
-      nonce: U256::try_from(nonce).unwrap(),
+      nonce,
       key: PublicKey::new(new_key).expect("rotating to an invalid key"),
     };
-    (
-      SignableTransaction(command.clone()),
-      Eventuality(PublicKey::new(retiring_key).expect("retiring an invalid key"), command),
-    )
+    (action.clone(), action.eventuality())
   }
+
   fn fulfill(
     &self,
     nonce: u64,
@@ -50,40 +57,20 @@ impl smart_contract_scheduler::SmartContract<Rpc> for SmartContract {
   ) -> Vec<(Self::SignableTransaction, EventualityFor<Rpc>)> {
     let mut outs = Vec::with_capacity(payments.len());
     for payment in payments {
-      outs.push(OutInstruction {
-        target: if let Some(data) = payment.data()
{ - // This introspects the Call serialization format, expecting the first 20 bytes to - // be the address - // This avoids wasting the 20-bytes allocated within address - let full_data = [<[u8; 20]>::from(*payment.address()).as_slice(), data].concat(); - let mut reader = full_data.as_slice(); - - let mut calls = vec![]; - while !reader.is_empty() { - let Ok(call) = Call::read(&mut reader) else { break }; - calls.push(call); - } - // The above must have executed at least once since reader contains the address - assert_eq!(calls[0].to, <[u8; 20]>::from(*payment.address())); - - OutInstructionTarget::Calls(calls) - } else { - OutInstructionTarget::Direct((*payment.address()).into()) - }, - value: { balance_to_ethereum_amount(payment.balance()) }, - }); + outs.push(( + payment.address().clone(), + ( + coin_to_ethereum_coin(payment.balance().coin), + balance_to_ethereum_amount(payment.balance()), + ), + )); } - let command = RouterCommand::Execute { - chain_id: self.chain_id, - nonce: U256::try_from(nonce).unwrap(), - outs, - }; + // TODO: Per-batch gas limit + // TODO: Create several batches + let action = Action::Batch { chain_id: self.chain_id, nonce, outs }; - vec![( - SignableTransaction(command.clone()), - Eventuality(PublicKey::new(key).expect("fulfilling payments with an invalid key"), command), - )] + vec![(action.clone(), action.eventuality())] } } From bdc3bda04a727465cf42e18b42eba8b4ec8ffb94 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 18 Sep 2024 00:57:10 -0400 Subject: [PATCH 151/368] Remove ethereum-serai/serai-processor-ethereum-contracts contracts was smashed out of ethereum-serai. Both have now been smashed into individual crates. Creates a TODO directory with left-over test code yet to be moved. --- .github/workflows/tests.yml | 2 - Cargo.lock | 93 +- Cargo.toml | 2 - deny.toml | 2 - .../contracts/tests/ERC20.sol | 0 .../{src/lib.rs => TODO/old_processor.rs} | 0 .../src => TODO}/tests/crypto.rs | 0 .../{ethereum-serai/src => TODO}/tests/mod.rs | 0 .../src => TODO}/tests/router.rs | 0 processor/ethereum/contracts/Cargo.toml | 32 - processor/ethereum/contracts/LICENSE | 15 - processor/ethereum/contracts/README.md | 7 - processor/ethereum/contracts/build.rs | 69 - .../ethereum/contracts/src/abigen/deployer.rs | 584 ---- .../ethereum/contracts/src/abigen/erc20.rs | 1838 ---------- .../ethereum/contracts/src/abigen/mod.rs | 3 - .../ethereum/contracts/src/abigen/router.rs | 2958 ----------------- processor/ethereum/contracts/src/lib.rs | 16 - processor/ethereum/ethereum-serai/Cargo.toml | 52 - processor/ethereum/ethereum-serai/LICENSE | 15 - processor/ethereum/ethereum-serai/README.md | 15 - .../ethereum/ethereum-serai/src/crypto.rs | 32 - processor/ethereum/ethereum-serai/src/lib.rs | 41 - .../ethereum/ethereum-serai/src/machine.rs | 427 --- .../ethereum/src/primitives/transaction.rs | 24 +- tests/processor/Cargo.toml | 1 - 26 files changed, 35 insertions(+), 6193 deletions(-) rename processor/ethereum/{contracts => TODO}/contracts/tests/ERC20.sol (100%) rename processor/ethereum/{src/lib.rs => TODO/old_processor.rs} (100%) rename processor/ethereum/{ethereum-serai/src => TODO}/tests/crypto.rs (100%) rename processor/ethereum/{ethereum-serai/src => TODO}/tests/mod.rs (100%) rename processor/ethereum/{ethereum-serai/src => TODO}/tests/router.rs (100%) delete mode 100644 processor/ethereum/contracts/Cargo.toml delete mode 100644 processor/ethereum/contracts/LICENSE delete mode 100644 processor/ethereum/contracts/README.md delete mode 100644 
processor/ethereum/contracts/build.rs delete mode 100644 processor/ethereum/contracts/src/abigen/deployer.rs delete mode 100644 processor/ethereum/contracts/src/abigen/erc20.rs delete mode 100644 processor/ethereum/contracts/src/abigen/mod.rs delete mode 100644 processor/ethereum/contracts/src/abigen/router.rs delete mode 100644 processor/ethereum/contracts/src/lib.rs delete mode 100644 processor/ethereum/ethereum-serai/Cargo.toml delete mode 100644 processor/ethereum/ethereum-serai/LICENSE delete mode 100644 processor/ethereum/ethereum-serai/README.md delete mode 100644 processor/ethereum/ethereum-serai/src/crypto.rs delete mode 100644 processor/ethereum/ethereum-serai/src/lib.rs delete mode 100644 processor/ethereum/ethereum-serai/src/machine.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index e374d4f1..d207e9cd 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -52,12 +52,10 @@ jobs: -p serai-processor-signers \ -p serai-processor-bin \ -p serai-bitcoin-processor \ - -p serai-processor-ethereum-contracts \ -p serai-processor-ethereum-primitives \ -p serai-processor-ethereum-deployer \ -p serai-processor-ethereum-router \ -p serai-processor-ethereum-erc20 \ - -p ethereum-serai \ -p serai-ethereum-processor \ -p serai-monero-processor \ -p tendermint-machine \ diff --git a/Cargo.lock b/Cargo.lock index f928e57e..a7f3792a 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -184,17 +184,6 @@ dependencies = [ "serde", ] -[[package]] -name = "alloy-json-abi" -version = "0.8.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "299d2a937b6c60968df3dad2a988b0f0e03277b344639a4f7a31bd68e6285e59" -dependencies = [ - "alloy-primitives", - "alloy-sol-type-parser", - "serde", -] - [[package]] name = "alloy-json-rpc" version = "0.3.1" @@ -426,7 +415,6 @@ version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "71c4d842beb7a6686d04125603bc57614d5ed78bf95e4753274db3db4ba95214" dependencies = [ - "alloy-json-abi", "alloy-sol-macro-input", "const-hex", "heck 0.5.0", @@ -445,33 +433,21 @@ version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1306e8d3c9e6e6ecf7a39ffaf7291e73a5f655a2defd366ee92c2efebcdf7fee" dependencies = [ - "alloy-json-abi", "const-hex", "dunce", "heck 0.5.0", "proc-macro2", "quote", - "serde_json", "syn 2.0.77", "syn-solidity", ] -[[package]] -name = "alloy-sol-type-parser" -version = "0.8.0" -source = "git+https://github.com/alloy-rs/core?rev=446b9d2fbce12b88456152170709a3eaac929af0#446b9d2fbce12b88456152170709a3eaac929af0" -dependencies = [ - "serde", - "winnow 0.6.18", -] - [[package]] name = "alloy-sol-types" version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "577e262966e92112edbd15b1b2c0947cc434d6e8311df96d3329793fe8047da9" dependencies = [ - "alloy-json-abi", "alloy-primitives", "alloy-sol-macro", "const-hex", @@ -2503,30 +2479,6 @@ dependencies = [ "tokio", ] -[[package]] -name = "ethereum-serai" -version = "0.1.0" -dependencies = [ - "alloy-consensus", - "alloy-core", - "alloy-network", - "alloy-node-bindings", - "alloy-provider", - "alloy-rpc-client", - "alloy-rpc-types-eth", - "alloy-simple-request-transport", - "alloy-sol-types", - "ethereum-schnorr-contract", - "flexible-transcript", - "group", - "k256", - "modular-frost", - "rand_core", - "serai-processor-ethereum-contracts", - "thiserror", - "tokio", -] - [[package]] name = "event-listener" version = "2.5.3" @@ -6127,16 +6079,6 @@ 
dependencies = [ "syn 1.0.109", ] -[[package]] -name = "prettyplease" -version = "0.2.22" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "479cf940fbbb3426c32c5d5176f62ad57549a0bb84773423ba8be9d089f5faba" -dependencies = [ - "proc-macro2", - "syn 2.0.77", -] - [[package]] name = "primeorder" version = "0.13.6" @@ -6302,7 +6244,7 @@ dependencies = [ "log", "multimap", "petgraph", - "prettyplease 0.1.25", + "prettyplease", "prost", "prost-types", "regex", @@ -8385,11 +8327,18 @@ version = "0.1.0" name = "serai-ethereum-processor" version = "0.1.0" dependencies = [ + "alloy-consensus", + "alloy-core", + "alloy-provider", + "alloy-rlp", + "alloy-rpc-client", + "alloy-rpc-types-eth", + "alloy-simple-request-transport", "borsh", "ciphersuite", "const-hex", "dkg", - "ethereum-serai", + "ethereum-schnorr-contract", "hex", "k256", "log", @@ -8400,6 +8349,9 @@ dependencies = [ "serai-db", "serai-env", "serai-processor-bin", + "serai-processor-ethereum-erc20", + "serai-processor-ethereum-primitives", + "serai-processor-ethereum-router", "serai-processor-key-gen", "serai-processor-primitives", "serai-processor-scanner", @@ -8707,20 +8659,6 @@ dependencies = [ "zeroize", ] -[[package]] -name = "serai-processor-ethereum-contracts" -version = "0.1.0" -dependencies = [ - "alloy-sol-macro-expander", - "alloy-sol-macro-input", - "alloy-sol-types", - "build-solidity-contracts", - "prettyplease 0.2.22", - "serde_json", - "syn 2.0.77", - "syn-solidity", -] - [[package]] name = "serai-processor-ethereum-deployer" version = "0.1.0" @@ -8770,7 +8708,6 @@ dependencies = [ "alloy-provider", "alloy-rpc-types-eth", "alloy-simple-request-transport", - "alloy-sol-macro", "alloy-sol-macro-expander", "alloy-sol-macro-input", "alloy-sol-types", @@ -8924,7 +8861,6 @@ dependencies = [ "curve25519-dalek", "dkg", "dockertest", - "ethereum-serai", "hex", "k256", "monero-simple-request-rpc", @@ -11954,3 +11890,8 @@ dependencies = [ "cc", "pkg-config", ] + +[[patch.unused]] +name = "alloy-sol-type-parser" +version = "0.8.0" +source = "git+https://github.com/alloy-rs/core?rev=446b9d2fbce12b88456152170709a3eaac929af0#446b9d2fbce12b88456152170709a3eaac929af0" diff --git a/Cargo.toml b/Cargo.toml index 3c203ced..99a10be0 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -87,12 +87,10 @@ members = [ "processor/bin", "processor/bitcoin", - "processor/ethereum/contracts", "processor/ethereum/primitives", "processor/ethereum/deployer", "processor/ethereum/router", "processor/ethereum/erc20", - "processor/ethereum/ethereum-serai", "processor/ethereum", "processor/monero", diff --git a/deny.toml b/deny.toml index 9ee16043..d09fc8eb 100644 --- a/deny.toml +++ b/deny.toml @@ -59,12 +59,10 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-processor-signers" }, { allow = ["AGPL-3.0"], name = "serai-bitcoin-processor" }, - { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-contracts" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-primitives" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-deployer" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-router" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-erc20" }, - { allow = ["AGPL-3.0"], name = "ethereum-serai" }, { allow = ["AGPL-3.0"], name = "serai-ethereum-processor" }, { allow = ["AGPL-3.0"], name = "serai-monero-processor" }, diff --git a/processor/ethereum/contracts/contracts/tests/ERC20.sol b/processor/ethereum/TODO/contracts/tests/ERC20.sol similarity index 100% rename from 
processor/ethereum/contracts/contracts/tests/ERC20.sol rename to processor/ethereum/TODO/contracts/tests/ERC20.sol diff --git a/processor/ethereum/src/lib.rs b/processor/ethereum/TODO/old_processor.rs similarity index 100% rename from processor/ethereum/src/lib.rs rename to processor/ethereum/TODO/old_processor.rs diff --git a/processor/ethereum/ethereum-serai/src/tests/crypto.rs b/processor/ethereum/TODO/tests/crypto.rs similarity index 100% rename from processor/ethereum/ethereum-serai/src/tests/crypto.rs rename to processor/ethereum/TODO/tests/crypto.rs diff --git a/processor/ethereum/ethereum-serai/src/tests/mod.rs b/processor/ethereum/TODO/tests/mod.rs similarity index 100% rename from processor/ethereum/ethereum-serai/src/tests/mod.rs rename to processor/ethereum/TODO/tests/mod.rs diff --git a/processor/ethereum/ethereum-serai/src/tests/router.rs b/processor/ethereum/TODO/tests/router.rs similarity index 100% rename from processor/ethereum/ethereum-serai/src/tests/router.rs rename to processor/ethereum/TODO/tests/router.rs diff --git a/processor/ethereum/contracts/Cargo.toml b/processor/ethereum/contracts/Cargo.toml deleted file mode 100644 index 5ed540b6..00000000 --- a/processor/ethereum/contracts/Cargo.toml +++ /dev/null @@ -1,32 +0,0 @@ -[package] -name = "serai-processor-ethereum-contracts" -version = "0.1.0" -description = "Ethereum contracts for the Serai processor" -license = "AGPL-3.0-only" -repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum/contracts" -authors = ["Luke Parker ", "Elizabeth Binks "] -edition = "2021" -publish = false -rust-version = "1.79" - -[package.metadata.docs.rs] -all-features = true -rustdoc-args = ["--cfg", "docsrs"] - -[lints] -workspace = true - -[dependencies] -alloy-sol-types = { version = "0.8", default-features = false, features = ["json"] } - -[build-dependencies] -build-solidity-contracts = { path = "../../../networks/ethereum/build-contracts" } - -syn = { version = "2", default-features = false, features = ["proc-macro"] } - -serde_json = { version = "1", default-features = false, features = ["std"] } - -syn-solidity = { version = "0.8", default-features = false } -alloy-sol-macro-input = { version = "0.8", default-features = false } -alloy-sol-macro-expander = { version = "0.8", default-features = false } -prettyplease = { version = "0.2", default-features = false } diff --git a/processor/ethereum/contracts/LICENSE b/processor/ethereum/contracts/LICENSE deleted file mode 100644 index 41d5a261..00000000 --- a/processor/ethereum/contracts/LICENSE +++ /dev/null @@ -1,15 +0,0 @@ -AGPL-3.0-only license - -Copyright (c) 2022-2024 Luke Parker - -This program is free software: you can redistribute it and/or modify -it under the terms of the GNU Affero General Public License Version 3 as -published by the Free Software Foundation. - -This program is distributed in the hope that it will be useful, -but WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -GNU Affero General Public License for more details. - -You should have received a copy of the GNU Affero General Public License -along with this program. If not, see . diff --git a/processor/ethereum/contracts/README.md b/processor/ethereum/contracts/README.md deleted file mode 100644 index fcd8f3c7..00000000 --- a/processor/ethereum/contracts/README.md +++ /dev/null @@ -1,7 +0,0 @@ -# Serai Processor Ethereum Contracts - -The Ethereum contracts used for (and for testing) the Serai processor. 
This is -its own crate for organizational and build-time reasons. It is not intended to -be publicly used. - -This crate will fail to build if `solc` is not installed and available. diff --git a/processor/ethereum/contracts/build.rs b/processor/ethereum/contracts/build.rs deleted file mode 100644 index 23d1e907..00000000 --- a/processor/ethereum/contracts/build.rs +++ /dev/null @@ -1,69 +0,0 @@ -use std::{env, fs}; - -use alloy_sol_macro_input::{SolInputKind, SolInput}; - -fn write(sol: syn_solidity::File, file: &str) { - let sol = alloy_sol_macro_expander::expand::expand(sol).unwrap(); - fs::write( - file, - // TODO: Replace `prettyplease::unparse` with `to_string` - prettyplease::unparse(&syn::File { - attrs: vec![], - items: vec![syn::parse2(sol).unwrap()], - shebang: None, - }) - .as_bytes(), - ) - .unwrap(); -} - -fn sol(sol: &str, file: &str) { - let alloy_sol_macro_input::SolInputKind::Sol(sol) = - syn::parse_str(&std::fs::read_to_string(sol).unwrap()).unwrap() - else { - panic!("parsed .sol file wasn't SolInputKind::Sol"); - }; - write(sol, file); -} - -fn abi(ident: &str, abi: &str, file: &str) { - let SolInputKind::Sol(sol) = (SolInput { - attrs: vec![], - path: None, - kind: SolInputKind::Json( - syn::parse_str(ident).unwrap(), - serde_json::from_str(&fs::read_to_string(abi).unwrap()).unwrap(), - ), - }) - .normalize_json() - .unwrap() - .kind - else { - panic!("normalized JSON wasn't SolInputKind::Sol"); - }; - write(sol, file); -} - -fn main() { - let artifacts_path = - env::var("OUT_DIR").unwrap().to_string() + "/serai-processor-ethereum-contracts"; - build_solidity_contracts::build( - &["../../../networks/ethereum/schnorr/contracts"], - "contracts", - &artifacts_path, - ) - .unwrap(); - - // TODO: Use OUT_DIR for the generated code - if !fs::exists("src/abigen").unwrap() { - fs::create_dir("src/abigen").unwrap(); - } - - // These can be handled with the sol! macro - sol("contracts/IERC20.sol", "src/abigen/erc20.rs"); - sol("contracts/Deployer.sol", "src/abigen/deployer.rs"); - // This cannot be handled with the sol! macro. The Solidity requires an import, the ABI is built - // to OUT_DIR and the macro doesn't support non-static paths: - // https://github.com/alloy-rs/core/issues/738 - abi("Router", &(artifacts_path.clone() + "/Router.abi"), "src/abigen/router.rs"); -} diff --git a/processor/ethereum/contracts/src/abigen/deployer.rs b/processor/ethereum/contracts/src/abigen/deployer.rs deleted file mode 100644 index f4bcb3a6..00000000 --- a/processor/ethereum/contracts/src/abigen/deployer.rs +++ /dev/null @@ -1,584 +0,0 @@ -///Module containing a contract's types and functions. -/** - -```solidity -contract Deployer { - event Deployment(bytes32 indexed init_code_hash, address created); - error DeploymentFailed(); - function deploy(bytes memory init_code) external { } -} -```*/ -#[allow(non_camel_case_types, non_snake_case, clippy::style)] -pub mod Deployer { - use super::*; - use ::alloy_sol_types as alloy_sol_types; - /**Event with signature `Deployment(bytes32,address)` and selector `0x60b877a3bae7bf0f0bd5e1c40ebf44ea158201397f6b72d7c05360157b1ec0fc`. 
-```solidity -event Deployment(bytes32 indexed init_code_hash, address created); -```*/ - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - #[derive(Clone)] - pub struct Deployment { - #[allow(missing_docs)] - pub init_code_hash: ::alloy_sol_types::private::FixedBytes<32>, - #[allow(missing_docs)] - pub created: ::alloy_sol_types::private::Address, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[automatically_derived] - impl alloy_sol_types::SolEvent for Deployment { - type DataTuple<'a> = (::alloy_sol_types::sol_data::Address,); - type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - type TopicList = ( - alloy_sol_types::sol_data::FixedBytes<32>, - ::alloy_sol_types::sol_data::FixedBytes<32>, - ); - const SIGNATURE: &'static str = "Deployment(bytes32,address)"; - const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ - 96u8, - 184u8, - 119u8, - 163u8, - 186u8, - 231u8, - 191u8, - 15u8, - 11u8, - 213u8, - 225u8, - 196u8, - 14u8, - 191u8, - 68u8, - 234u8, - 21u8, - 130u8, - 1u8, - 57u8, - 127u8, - 107u8, - 114u8, - 215u8, - 192u8, - 83u8, - 96u8, - 21u8, - 123u8, - 30u8, - 192u8, - 252u8, - ]); - const ANONYMOUS: bool = false; - #[allow(unused_variables)] - #[inline] - fn new( - topics: ::RustType, - data: as alloy_sol_types::SolType>::RustType, - ) -> Self { - Self { - init_code_hash: topics.1, - created: data.0, - } - } - #[inline] - fn tokenize_body(&self) -> Self::DataToken<'_> { - ( - <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( - &self.created, - ), - ) - } - #[inline] - fn topics(&self) -> ::RustType { - (Self::SIGNATURE_HASH.into(), self.init_code_hash.clone()) - } - #[inline] - fn encode_topics_raw( - &self, - out: &mut [alloy_sol_types::abi::token::WordToken], - ) -> alloy_sol_types::Result<()> { - if out.len() < ::COUNT { - return Err(alloy_sol_types::Error::Overrun); - } - out[0usize] = alloy_sol_types::abi::token::WordToken( - Self::SIGNATURE_HASH, - ); - out[1usize] = <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::EventTopic>::encode_topic(&self.init_code_hash); - Ok(()) - } - } - #[automatically_derived] - impl alloy_sol_types::private::IntoLogData for Deployment { - fn to_log_data(&self) -> alloy_sol_types::private::LogData { - From::from(self) - } - fn into_log_data(self) -> alloy_sol_types::private::LogData { - From::from(&self) - } - } - #[automatically_derived] - impl From<&Deployment> for alloy_sol_types::private::LogData { - #[inline] - fn from(this: &Deployment) -> alloy_sol_types::private::LogData { - alloy_sol_types::SolEvent::encode_log_data(this) - } - } - }; - /**Custom error with signature `DeploymentFailed()` and selector `0x30116425`. 
-```solidity -error DeploymentFailed(); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct DeploymentFailed {} - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: DeploymentFailed) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for DeploymentFailed { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - #[automatically_derived] - impl alloy_sol_types::SolError for DeploymentFailed { - type Parameters<'a> = UnderlyingSolTuple<'a>; - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "DeploymentFailed()"; - const SELECTOR: [u8; 4] = [48u8, 17u8, 100u8, 37u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - () - } - } - }; - /**Function with signature `deploy(bytes)` and selector `0x00774360`. -```solidity -function deploy(bytes memory init_code) external { } -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct deployCall { - pub init_code: ::alloy_sol_types::private::Bytes, - } - ///Container type for the return parameters of the [`deploy(bytes)`](deployCall) function. 
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct deployReturn {} - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Bytes,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::Bytes,); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: deployCall) -> Self { - (value.init_code,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for deployCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { init_code: tuple.0 } - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: deployReturn) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for deployReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for deployCall { - type Parameters<'a> = (::alloy_sol_types::sol_data::Bytes,); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = deployReturn; - type ReturnTuple<'a> = (); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "deploy(bytes)"; - const SELECTOR: [u8; 4] = [0u8, 119u8, 67u8, 96u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - ( - <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::tokenize( - &self.init_code, - ), - ) - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - ///Container for all the [`Deployer`](self) function calls. - pub enum DeployerCalls { - deploy(deployCall), - } - #[automatically_derived] - impl DeployerCalls { - /// All the selectors of this enum. - /// - /// Note that the selectors might not be in the same order as the variants. - /// No guarantees are made about the order of the selectors. - /// - /// Prefer using `SolInterface` methods instead. 
- pub const SELECTORS: &'static [[u8; 4usize]] = &[[0u8, 119u8, 67u8, 96u8]]; - } - #[automatically_derived] - impl alloy_sol_types::SolInterface for DeployerCalls { - const NAME: &'static str = "DeployerCalls"; - const MIN_DATA_LENGTH: usize = 64usize; - const COUNT: usize = 1usize; - #[inline] - fn selector(&self) -> [u8; 4] { - match self { - Self::deploy(_) => ::SELECTOR, - } - } - #[inline] - fn selector_at(i: usize) -> ::core::option::Option<[u8; 4]> { - Self::SELECTORS.get(i).copied() - } - #[inline] - fn valid_selector(selector: [u8; 4]) -> bool { - Self::SELECTORS.binary_search(&selector).is_ok() - } - #[inline] - #[allow(unsafe_code, non_snake_case)] - fn abi_decode_raw( - selector: [u8; 4], - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - static DECODE_SHIMS: &[fn( - &[u8], - bool, - ) -> alloy_sol_types::Result] = &[ - { - fn deploy( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(DeployerCalls::deploy) - } - deploy - }, - ]; - let Ok(idx) = Self::SELECTORS.binary_search(&selector) else { - return Err( - alloy_sol_types::Error::unknown_selector( - ::NAME, - selector, - ), - ); - }; - (unsafe { DECODE_SHIMS.get_unchecked(idx) })(data, validate) - } - #[inline] - fn abi_encoded_size(&self) -> usize { - match self { - Self::deploy(inner) => { - ::abi_encoded_size(inner) - } - } - } - #[inline] - fn abi_encode_raw(&self, out: &mut alloy_sol_types::private::Vec) { - match self { - Self::deploy(inner) => { - ::abi_encode_raw(inner, out) - } - } - } - } - ///Container for all the [`Deployer`](self) custom errors. - pub enum DeployerErrors { - DeploymentFailed(DeploymentFailed), - } - #[automatically_derived] - impl DeployerErrors { - /// All the selectors of this enum. - /// - /// Note that the selectors might not be in the same order as the variants. - /// No guarantees are made about the order of the selectors. - /// - /// Prefer using `SolInterface` methods instead. 
- pub const SELECTORS: &'static [[u8; 4usize]] = &[[48u8, 17u8, 100u8, 37u8]]; - } - #[automatically_derived] - impl alloy_sol_types::SolInterface for DeployerErrors { - const NAME: &'static str = "DeployerErrors"; - const MIN_DATA_LENGTH: usize = 0usize; - const COUNT: usize = 1usize; - #[inline] - fn selector(&self) -> [u8; 4] { - match self { - Self::DeploymentFailed(_) => { - ::SELECTOR - } - } - } - #[inline] - fn selector_at(i: usize) -> ::core::option::Option<[u8; 4]> { - Self::SELECTORS.get(i).copied() - } - #[inline] - fn valid_selector(selector: [u8; 4]) -> bool { - Self::SELECTORS.binary_search(&selector).is_ok() - } - #[inline] - #[allow(unsafe_code, non_snake_case)] - fn abi_decode_raw( - selector: [u8; 4], - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - static DECODE_SHIMS: &[fn( - &[u8], - bool, - ) -> alloy_sol_types::Result] = &[ - { - fn DeploymentFailed( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(DeployerErrors::DeploymentFailed) - } - DeploymentFailed - }, - ]; - let Ok(idx) = Self::SELECTORS.binary_search(&selector) else { - return Err( - alloy_sol_types::Error::unknown_selector( - ::NAME, - selector, - ), - ); - }; - (unsafe { DECODE_SHIMS.get_unchecked(idx) })(data, validate) - } - #[inline] - fn abi_encoded_size(&self) -> usize { - match self { - Self::DeploymentFailed(inner) => { - ::abi_encoded_size( - inner, - ) - } - } - } - #[inline] - fn abi_encode_raw(&self, out: &mut alloy_sol_types::private::Vec) { - match self { - Self::DeploymentFailed(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - } - } - } - ///Container for all the [`Deployer`](self) events. - pub enum DeployerEvents { - Deployment(Deployment), - } - #[automatically_derived] - impl DeployerEvents { - /// All the selectors of this enum. - /// - /// Note that the selectors might not be in the same order as the variants. - /// No guarantees are made about the order of the selectors. - /// - /// Prefer using `SolInterface` methods instead. 
- pub const SELECTORS: &'static [[u8; 32usize]] = &[ - [ - 96u8, - 184u8, - 119u8, - 163u8, - 186u8, - 231u8, - 191u8, - 15u8, - 11u8, - 213u8, - 225u8, - 196u8, - 14u8, - 191u8, - 68u8, - 234u8, - 21u8, - 130u8, - 1u8, - 57u8, - 127u8, - 107u8, - 114u8, - 215u8, - 192u8, - 83u8, - 96u8, - 21u8, - 123u8, - 30u8, - 192u8, - 252u8, - ], - ]; - } - #[automatically_derived] - impl alloy_sol_types::SolEventInterface for DeployerEvents { - const NAME: &'static str = "DeployerEvents"; - const COUNT: usize = 1usize; - fn decode_raw_log( - topics: &[alloy_sol_types::Word], - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - match topics.first().copied() { - Some(::SIGNATURE_HASH) => { - ::decode_raw_log( - topics, - data, - validate, - ) - .map(Self::Deployment) - } - _ => { - alloy_sol_types::private::Err(alloy_sol_types::Error::InvalidLog { - name: ::NAME, - log: alloy_sol_types::private::Box::new( - alloy_sol_types::private::LogData::new_unchecked( - topics.to_vec(), - data.to_vec().into(), - ), - ), - }) - } - } - } - } - #[automatically_derived] - impl alloy_sol_types::private::IntoLogData for DeployerEvents { - fn to_log_data(&self) -> alloy_sol_types::private::LogData { - match self { - Self::Deployment(inner) => { - alloy_sol_types::private::IntoLogData::to_log_data(inner) - } - } - } - fn into_log_data(self) -> alloy_sol_types::private::LogData { - match self { - Self::Deployment(inner) => { - alloy_sol_types::private::IntoLogData::into_log_data(inner) - } - } - } - } -} diff --git a/processor/ethereum/contracts/src/abigen/erc20.rs b/processor/ethereum/contracts/src/abigen/erc20.rs deleted file mode 100644 index d9c0dd6e..00000000 --- a/processor/ethereum/contracts/src/abigen/erc20.rs +++ /dev/null @@ -1,1838 +0,0 @@ -///Module containing a contract's types and functions. -/** - -```solidity -interface IERC20 { - event Transfer(address indexed from, address indexed to, uint256 value); - event Approval(address indexed owner, address indexed spender, uint256 value); - function name() external view returns (string memory); - function symbol() external view returns (string memory); - function decimals() external view returns (uint8); - function totalSupply() external view returns (uint256); - function balanceOf(address owner) external view returns (uint256); - function transfer(address to, uint256 value) external returns (bool); - function transferFrom(address from, address to, uint256 value) external returns (bool); - function approve(address spender, uint256 value) external returns (bool); - function allowance(address owner, address spender) external view returns (uint256); -} -```*/ -#[allow(non_camel_case_types, non_snake_case, clippy::style)] -pub mod IERC20 { - use super::*; - use ::alloy_sol_types as alloy_sol_types; - /**Event with signature `Transfer(address,address,uint256)` and selector `0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef`. 
-```solidity -event Transfer(address indexed from, address indexed to, uint256 value); -```*/ - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - #[derive(Clone)] - pub struct Transfer { - #[allow(missing_docs)] - pub from: ::alloy_sol_types::private::Address, - #[allow(missing_docs)] - pub to: ::alloy_sol_types::private::Address, - #[allow(missing_docs)] - pub value: ::alloy_sol_types::private::primitives::aliases::U256, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[automatically_derived] - impl alloy_sol_types::SolEvent for Transfer { - type DataTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - type TopicList = ( - alloy_sol_types::sol_data::FixedBytes<32>, - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Address, - ); - const SIGNATURE: &'static str = "Transfer(address,address,uint256)"; - const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ - 221u8, - 242u8, - 82u8, - 173u8, - 27u8, - 226u8, - 200u8, - 155u8, - 105u8, - 194u8, - 176u8, - 104u8, - 252u8, - 55u8, - 141u8, - 170u8, - 149u8, - 43u8, - 167u8, - 241u8, - 99u8, - 196u8, - 161u8, - 22u8, - 40u8, - 245u8, - 90u8, - 77u8, - 245u8, - 35u8, - 179u8, - 239u8, - ]); - const ANONYMOUS: bool = false; - #[allow(unused_variables)] - #[inline] - fn new( - topics: ::RustType, - data: as alloy_sol_types::SolType>::RustType, - ) -> Self { - Self { - from: topics.1, - to: topics.2, - value: data.0, - } - } - #[inline] - fn tokenize_body(&self) -> Self::DataToken<'_> { - ( - <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::SolType>::tokenize(&self.value), - ) - } - #[inline] - fn topics(&self) -> ::RustType { - (Self::SIGNATURE_HASH.into(), self.from.clone(), self.to.clone()) - } - #[inline] - fn encode_topics_raw( - &self, - out: &mut [alloy_sol_types::abi::token::WordToken], - ) -> alloy_sol_types::Result<()> { - if out.len() < ::COUNT { - return Err(alloy_sol_types::Error::Overrun); - } - out[0usize] = alloy_sol_types::abi::token::WordToken( - Self::SIGNATURE_HASH, - ); - out[1usize] = <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( - &self.from, - ); - out[2usize] = <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( - &self.to, - ); - Ok(()) - } - } - #[automatically_derived] - impl alloy_sol_types::private::IntoLogData for Transfer { - fn to_log_data(&self) -> alloy_sol_types::private::LogData { - From::from(self) - } - fn into_log_data(self) -> alloy_sol_types::private::LogData { - From::from(&self) - } - } - #[automatically_derived] - impl From<&Transfer> for alloy_sol_types::private::LogData { - #[inline] - fn from(this: &Transfer) -> alloy_sol_types::private::LogData { - alloy_sol_types::SolEvent::encode_log_data(this) - } - } - }; - /**Event with signature `Approval(address,address,uint256)` and selector `0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925`. 
-```solidity -event Approval(address indexed owner, address indexed spender, uint256 value); -```*/ - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - #[derive(Clone)] - pub struct Approval { - #[allow(missing_docs)] - pub owner: ::alloy_sol_types::private::Address, - #[allow(missing_docs)] - pub spender: ::alloy_sol_types::private::Address, - #[allow(missing_docs)] - pub value: ::alloy_sol_types::private::primitives::aliases::U256, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[automatically_derived] - impl alloy_sol_types::SolEvent for Approval { - type DataTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - type TopicList = ( - alloy_sol_types::sol_data::FixedBytes<32>, - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Address, - ); - const SIGNATURE: &'static str = "Approval(address,address,uint256)"; - const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ - 140u8, - 91u8, - 225u8, - 229u8, - 235u8, - 236u8, - 125u8, - 91u8, - 209u8, - 79u8, - 113u8, - 66u8, - 125u8, - 30u8, - 132u8, - 243u8, - 221u8, - 3u8, - 20u8, - 192u8, - 247u8, - 178u8, - 41u8, - 30u8, - 91u8, - 32u8, - 10u8, - 200u8, - 199u8, - 195u8, - 185u8, - 37u8, - ]); - const ANONYMOUS: bool = false; - #[allow(unused_variables)] - #[inline] - fn new( - topics: ::RustType, - data: as alloy_sol_types::SolType>::RustType, - ) -> Self { - Self { - owner: topics.1, - spender: topics.2, - value: data.0, - } - } - #[inline] - fn tokenize_body(&self) -> Self::DataToken<'_> { - ( - <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::SolType>::tokenize(&self.value), - ) - } - #[inline] - fn topics(&self) -> ::RustType { - (Self::SIGNATURE_HASH.into(), self.owner.clone(), self.spender.clone()) - } - #[inline] - fn encode_topics_raw( - &self, - out: &mut [alloy_sol_types::abi::token::WordToken], - ) -> alloy_sol_types::Result<()> { - if out.len() < ::COUNT { - return Err(alloy_sol_types::Error::Overrun); - } - out[0usize] = alloy_sol_types::abi::token::WordToken( - Self::SIGNATURE_HASH, - ); - out[1usize] = <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( - &self.owner, - ); - out[2usize] = <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( - &self.spender, - ); - Ok(()) - } - } - #[automatically_derived] - impl alloy_sol_types::private::IntoLogData for Approval { - fn to_log_data(&self) -> alloy_sol_types::private::LogData { - From::from(self) - } - fn into_log_data(self) -> alloy_sol_types::private::LogData { - From::from(&self) - } - } - #[automatically_derived] - impl From<&Approval> for alloy_sol_types::private::LogData { - #[inline] - fn from(this: &Approval) -> alloy_sol_types::private::LogData { - alloy_sol_types::SolEvent::encode_log_data(this) - } - } - }; - /**Function with signature `name()` and selector `0x06fdde03`. -```solidity -function name() external view returns (string memory); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct nameCall {} - ///Container type for the return parameters of the [`name()`](nameCall) function. 
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct nameReturn { - pub _0: ::alloy_sol_types::private::String, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: nameCall) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for nameCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::String,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::String,); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: nameReturn) -> Self { - (value._0,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for nameReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { _0: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for nameCall { - type Parameters<'a> = (); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = nameReturn; - type ReturnTuple<'a> = (::alloy_sol_types::sol_data::String,); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "name()"; - const SELECTOR: [u8; 4] = [6u8, 253u8, 222u8, 3u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - () - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `symbol()` and selector `0x95d89b41`. -```solidity -function symbol() external view returns (string memory); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct symbolCall {} - ///Container type for the return parameters of the [`symbol()`](symbolCall) function. 
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct symbolReturn { - pub _0: ::alloy_sol_types::private::String, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: symbolCall) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for symbolCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::String,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::String,); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: symbolReturn) -> Self { - (value._0,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for symbolReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { _0: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for symbolCall { - type Parameters<'a> = (); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = symbolReturn; - type ReturnTuple<'a> = (::alloy_sol_types::sol_data::String,); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "symbol()"; - const SELECTOR: [u8; 4] = [149u8, 216u8, 155u8, 65u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - () - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `decimals()` and selector `0x313ce567`. -```solidity -function decimals() external view returns (uint8); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct decimalsCall {} - ///Container type for the return parameters of the [`decimals()`](decimalsCall) function. 
-    #[allow(non_camel_case_types, non_snake_case)]
-    #[derive(Clone)]
-    pub struct decimalsReturn {
-        pub _0: u8,
-    }
-    #[allow(non_camel_case_types, non_snake_case, clippy::style)]
-    const _: () = {
-        use ::alloy_sol_types as alloy_sol_types;
-        {
-            #[doc(hidden)]
-            type UnderlyingSolTuple<'a> = ();
-            #[doc(hidden)]
-            type UnderlyingRustTuple<'a> = ();
-            #[cfg(test)]
-            #[allow(dead_code, unreachable_patterns)]
-            fn _type_assertion(
-                _t: alloy_sol_types::private::AssertTypeEq<UnderlyingRustTuple>,
-            ) {
-                match _t {
-                    alloy_sol_types::private::AssertTypeEq::<
-                        <UnderlyingSolTuple as alloy_sol_types::SolType>::RustType,
-                    >(_) => {}
-                }
-            }
-            #[automatically_derived]
-            #[doc(hidden)]
-            impl ::core::convert::From<decimalsCall> for UnderlyingRustTuple<'_> {
-                fn from(value: decimalsCall) -> Self {
-                    ()
-                }
-            }
-            #[automatically_derived]
-            #[doc(hidden)]
-            impl ::core::convert::From<UnderlyingRustTuple<'_>> for decimalsCall {
-                fn from(tuple: UnderlyingRustTuple<'_>) -> Self {
-                    Self {}
-                }
-            }
-        }
-        {
-            #[doc(hidden)]
-            type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<8>,);
-            #[doc(hidden)]
-            type UnderlyingRustTuple<'a> = (u8,);
-            #[cfg(test)]
-            #[allow(dead_code, unreachable_patterns)]
-            fn _type_assertion(
-                _t: alloy_sol_types::private::AssertTypeEq<UnderlyingRustTuple>,
-            ) {
-                match _t {
-                    alloy_sol_types::private::AssertTypeEq::<
-                        <UnderlyingSolTuple as alloy_sol_types::SolType>::RustType,
-                    >(_) => {}
-                }
-            }
-            #[automatically_derived]
-            #[doc(hidden)]
-            impl ::core::convert::From<decimalsReturn> for UnderlyingRustTuple<'_> {
-                fn from(value: decimalsReturn) -> Self {
-                    (value._0,)
-                }
-            }
-            #[automatically_derived]
-            #[doc(hidden)]
-            impl ::core::convert::From<UnderlyingRustTuple<'_>> for decimalsReturn {
-                fn from(tuple: UnderlyingRustTuple<'_>) -> Self {
-                    Self { _0: tuple.0 }
-                }
-            }
-        }
-        #[automatically_derived]
-        impl alloy_sol_types::SolCall for decimalsCall {
-            type Parameters<'a> = ();
-            type Token<'a> = <Self::Parameters<'a> as alloy_sol_types::SolType>::Token<'a>;
-            type Return = decimalsReturn;
-            type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<8>,);
-            type ReturnToken<'a> = <Self::ReturnTuple<'a> as alloy_sol_types::SolType>::Token<'a>;
-            const SIGNATURE: &'static str = "decimals()";
-            const SELECTOR: [u8; 4] = [49u8, 60u8, 229u8, 103u8];
-            #[inline]
-            fn new<'a>(
-                tuple: <Self::Parameters<'a> as alloy_sol_types::SolType>::RustType,
-            ) -> Self {
-                tuple.into()
-            }
-            #[inline]
-            fn tokenize(&self) -> Self::Token<'_> {
-                ()
-            }
-            #[inline]
-            fn abi_decode_returns(
-                data: &[u8],
-                validate: bool,
-            ) -> alloy_sol_types::Result<Self::Return> {
-                <Self::ReturnTuple<'_> as alloy_sol_types::SolType>::abi_decode_sequence(data, validate)
-                    .map(Into::into)
-            }
-        }
-    };
-    /**Function with signature `totalSupply()` and selector `0x18160ddd`.
-```solidity
-function totalSupply() external view returns (uint256);
-```*/
-    #[allow(non_camel_case_types, non_snake_case)]
-    #[derive(Clone)]
-    pub struct totalSupplyCall {}
-    ///Container type for the return parameters of the [`totalSupply()`](totalSupplyCall) function.
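The `abi_decode_returns` implementations are the read path: given raw return data from an `eth_call`, the single-element return tuple is detokenized into the corresponding `*Return` struct. A sketch against `totalSupplyCall` (whose impl follows below), using a fabricated 32-byte word rather than real chain data:

```rust
use alloy_sol_types::SolCall;
// `U256` here is the same alias the generated structs use for `uint256`.
use alloy_sol_types::private::primitives::aliases::U256;

fn example() -> alloy_sol_types::Result<()> {
    // uint256 return values occupy one 32-byte big-endian word.
    let mut ret = [0u8; 32];
    ret[31] = 42;
    // `true` asks the decoder to validate the data against the type layout.
    let supply = totalSupplyCall::abi_decode_returns(&ret, true)?._0;
    assert_eq!(supply, U256::from(42u64));
    Ok(())
}
```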
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct totalSupplyReturn { - pub _0: ::alloy_sol_types::private::primitives::aliases::U256, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: totalSupplyCall) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for totalSupplyCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::primitives::aliases::U256, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: totalSupplyReturn) -> Self { - (value._0,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for totalSupplyReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { _0: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for totalSupplyCall { - type Parameters<'a> = (); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = totalSupplyReturn; - type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "totalSupply()"; - const SELECTOR: [u8; 4] = [24u8, 22u8, 13u8, 221u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - () - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `balanceOf(address)` and selector `0x70a08231`. -```solidity -function balanceOf(address owner) external view returns (uint256); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct balanceOfCall { - pub owner: ::alloy_sol_types::private::Address, - } - ///Container type for the return parameters of the [`balanceOf(address)`](balanceOfCall) function. 
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct balanceOfReturn { - pub _0: ::alloy_sol_types::private::primitives::aliases::U256, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Address,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::Address,); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: balanceOfCall) -> Self { - (value.owner,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for balanceOfCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { owner: tuple.0 } - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::primitives::aliases::U256, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: balanceOfReturn) -> Self { - (value._0,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for balanceOfReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { _0: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for balanceOfCall { - type Parameters<'a> = (::alloy_sol_types::sol_data::Address,); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = balanceOfReturn; - type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "balanceOf(address)"; - const SELECTOR: [u8; 4] = [112u8, 160u8, 130u8, 49u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - ( - <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( - &self.owner, - ), - ) - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `transfer(address,uint256)` and selector `0xa9059cbb`. -```solidity -function transfer(address to, uint256 value) external returns (bool); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct transferCall { - pub to: ::alloy_sol_types::private::Address, - pub value: ::alloy_sol_types::private::primitives::aliases::U256, - } - ///Container type for the return parameters of the [`transfer(address,uint256)`](transferCall) function. 
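Calls with arguments, like `balanceOfCall`, encode as the selector followed by the ABI-encoded argument tuple, one 32-byte word per head slot. A hedged sketch; the zero address is a placeholder:

```rust
use alloy_sol_types::SolCall;
use alloy_sol_types::private::Address;

fn encode_balance_of(owner: Address) -> Vec<u8> {
    let data = balanceOfCall { owner }.abi_encode();
    // 4 selector bytes (0x70a08231) + one 32-byte word for `owner`.
    assert_eq!(data.len(), 4 + 32);
    assert_eq!(&data[..4], &balanceOfCall::SELECTOR);
    data
}

fn example() -> Vec<u8> {
    encode_balance_of(Address::ZERO)
}
```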
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct transferReturn { - pub _0: bool, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = ( - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Uint<256>, - ); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::Address, - ::alloy_sol_types::private::primitives::aliases::U256, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: transferCall) -> Self { - (value.to, value.value) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for transferCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { - to: tuple.0, - value: tuple.1, - } - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Bool,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (bool,); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: transferReturn) -> Self { - (value._0,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for transferReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { _0: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for transferCall { - type Parameters<'a> = ( - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Uint<256>, - ); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = transferReturn; - type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Bool,); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "transfer(address,uint256)"; - const SELECTOR: [u8; 4] = [169u8, 5u8, 156u8, 187u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - ( - <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( - &self.to, - ), - <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::SolType>::tokenize(&self.value), - ) - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `transferFrom(address,address,uint256)` and selector `0x23b872dd`. 
-```solidity -function transferFrom(address from, address to, uint256 value) external returns (bool); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct transferFromCall { - pub from: ::alloy_sol_types::private::Address, - pub to: ::alloy_sol_types::private::Address, - pub value: ::alloy_sol_types::private::primitives::aliases::U256, - } - ///Container type for the return parameters of the [`transferFrom(address,address,uint256)`](transferFromCall) function. - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct transferFromReturn { - pub _0: bool, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = ( - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Uint<256>, - ); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::Address, - ::alloy_sol_types::private::Address, - ::alloy_sol_types::private::primitives::aliases::U256, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: transferFromCall) -> Self { - (value.from, value.to, value.value) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for transferFromCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { - from: tuple.0, - to: tuple.1, - value: tuple.2, - } - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Bool,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (bool,); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: transferFromReturn) -> Self { - (value._0,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for transferFromReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { _0: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for transferFromCall { - type Parameters<'a> = ( - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Uint<256>, - ); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = transferFromReturn; - type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Bool,); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "transferFrom(address,address,uint256)"; - const SELECTOR: [u8; 4] = [35u8, 184u8, 114u8, 221u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - ( - <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( - &self.from, - ), - <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( - &self.to, - ), - <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::SolType>::tokenize(&self.value), 
- ) - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `approve(address,uint256)` and selector `0x095ea7b3`. -```solidity -function approve(address spender, uint256 value) external returns (bool); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct approveCall { - pub spender: ::alloy_sol_types::private::Address, - pub value: ::alloy_sol_types::private::primitives::aliases::U256, - } - ///Container type for the return parameters of the [`approve(address,uint256)`](approveCall) function. - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct approveReturn { - pub _0: bool, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = ( - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Uint<256>, - ); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::Address, - ::alloy_sol_types::private::primitives::aliases::U256, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: approveCall) -> Self { - (value.spender, value.value) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for approveCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { - spender: tuple.0, - value: tuple.1, - } - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Bool,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (bool,); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: approveReturn) -> Self { - (value._0,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for approveReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { _0: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for approveCall { - type Parameters<'a> = ( - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Uint<256>, - ); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = approveReturn; - type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Bool,); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "approve(address,uint256)"; - const SELECTOR: [u8; 4] = [9u8, 94u8, 167u8, 179u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - ( - <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( - &self.spender, - ), - <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::SolType>::tokenize(&self.value), - ) - } - #[inline] - fn abi_decode_returns( - data: &[u8], - 
validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `allowance(address,address)` and selector `0xdd62ed3e`. -```solidity -function allowance(address owner, address spender) external view returns (uint256); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct allowanceCall { - pub owner: ::alloy_sol_types::private::Address, - pub spender: ::alloy_sol_types::private::Address, - } - ///Container type for the return parameters of the [`allowance(address,address)`](allowanceCall) function. - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct allowanceReturn { - pub _0: ::alloy_sol_types::private::primitives::aliases::U256, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = ( - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Address, - ); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::Address, - ::alloy_sol_types::private::Address, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: allowanceCall) -> Self { - (value.owner, value.spender) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for allowanceCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { - owner: tuple.0, - spender: tuple.1, - } - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::primitives::aliases::U256, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: allowanceReturn) -> Self { - (value._0,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for allowanceReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { _0: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for allowanceCall { - type Parameters<'a> = ( - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Address, - ); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = allowanceReturn; - type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "allowance(address,address)"; - const SELECTOR: [u8; 4] = [221u8, 98u8, 237u8, 62u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - ( - <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( - &self.owner, - ), - <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( - &self.spender, - ), - ) - } - #[inline] - fn 
abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - ///Container for all the [`IERC20`](self) function calls. - pub enum IERC20Calls { - name(nameCall), - symbol(symbolCall), - decimals(decimalsCall), - totalSupply(totalSupplyCall), - balanceOf(balanceOfCall), - transfer(transferCall), - transferFrom(transferFromCall), - approve(approveCall), - allowance(allowanceCall), - } - #[automatically_derived] - impl IERC20Calls { - /// All the selectors of this enum. - /// - /// Note that the selectors might not be in the same order as the variants. - /// No guarantees are made about the order of the selectors. - /// - /// Prefer using `SolInterface` methods instead. - pub const SELECTORS: &'static [[u8; 4usize]] = &[ - [6u8, 253u8, 222u8, 3u8], - [9u8, 94u8, 167u8, 179u8], - [24u8, 22u8, 13u8, 221u8], - [35u8, 184u8, 114u8, 221u8], - [49u8, 60u8, 229u8, 103u8], - [112u8, 160u8, 130u8, 49u8], - [149u8, 216u8, 155u8, 65u8], - [169u8, 5u8, 156u8, 187u8], - [221u8, 98u8, 237u8, 62u8], - ]; - } - #[automatically_derived] - impl alloy_sol_types::SolInterface for IERC20Calls { - const NAME: &'static str = "IERC20Calls"; - const MIN_DATA_LENGTH: usize = 0usize; - const COUNT: usize = 9usize; - #[inline] - fn selector(&self) -> [u8; 4] { - match self { - Self::name(_) => ::SELECTOR, - Self::symbol(_) => ::SELECTOR, - Self::decimals(_) => ::SELECTOR, - Self::totalSupply(_) => { - ::SELECTOR - } - Self::balanceOf(_) => { - ::SELECTOR - } - Self::transfer(_) => ::SELECTOR, - Self::transferFrom(_) => { - ::SELECTOR - } - Self::approve(_) => ::SELECTOR, - Self::allowance(_) => { - ::SELECTOR - } - } - } - #[inline] - fn selector_at(i: usize) -> ::core::option::Option<[u8; 4]> { - Self::SELECTORS.get(i).copied() - } - #[inline] - fn valid_selector(selector: [u8; 4]) -> bool { - Self::SELECTORS.binary_search(&selector).is_ok() - } - #[inline] - #[allow(unsafe_code, non_snake_case)] - fn abi_decode_raw( - selector: [u8; 4], - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - static DECODE_SHIMS: &[fn( - &[u8], - bool, - ) -> alloy_sol_types::Result] = &[ - { - fn name( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(IERC20Calls::name) - } - name - }, - { - fn approve( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(IERC20Calls::approve) - } - approve - }, - { - fn totalSupply( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(IERC20Calls::totalSupply) - } - totalSupply - }, - { - fn transferFrom( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(IERC20Calls::transferFrom) - } - transferFrom - }, - { - fn decimals( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(IERC20Calls::decimals) - } - decimals - }, - { - fn balanceOf( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(IERC20Calls::balanceOf) - } - balanceOf - }, - { - fn symbol( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(IERC20Calls::symbol) - } - symbol - }, - { - fn transfer( - data: &[u8], - validate: bool, - ) -> 
alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(IERC20Calls::transfer) - } - transfer - }, - { - fn allowance( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(IERC20Calls::allowance) - } - allowance - }, - ]; - let Ok(idx) = Self::SELECTORS.binary_search(&selector) else { - return Err( - alloy_sol_types::Error::unknown_selector( - ::NAME, - selector, - ), - ); - }; - (unsafe { DECODE_SHIMS.get_unchecked(idx) })(data, validate) - } - #[inline] - fn abi_encoded_size(&self) -> usize { - match self { - Self::name(inner) => { - ::abi_encoded_size(inner) - } - Self::symbol(inner) => { - ::abi_encoded_size(inner) - } - Self::decimals(inner) => { - ::abi_encoded_size(inner) - } - Self::totalSupply(inner) => { - ::abi_encoded_size( - inner, - ) - } - Self::balanceOf(inner) => { - ::abi_encoded_size(inner) - } - Self::transfer(inner) => { - ::abi_encoded_size(inner) - } - Self::transferFrom(inner) => { - ::abi_encoded_size( - inner, - ) - } - Self::approve(inner) => { - ::abi_encoded_size(inner) - } - Self::allowance(inner) => { - ::abi_encoded_size(inner) - } - } - } - #[inline] - fn abi_encode_raw(&self, out: &mut alloy_sol_types::private::Vec) { - match self { - Self::name(inner) => { - ::abi_encode_raw(inner, out) - } - Self::symbol(inner) => { - ::abi_encode_raw(inner, out) - } - Self::decimals(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - Self::totalSupply(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - Self::balanceOf(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - Self::transfer(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - Self::transferFrom(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - Self::approve(inner) => { - ::abi_encode_raw(inner, out) - } - Self::allowance(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - } - } - } - ///Container for all the [`IERC20`](self) events. - pub enum IERC20Events { - Transfer(Transfer), - Approval(Approval), - } - #[automatically_derived] - impl IERC20Events { - /// All the selectors of this enum. - /// - /// Note that the selectors might not be in the same order as the variants. - /// No guarantees are made about the order of the selectors. - /// - /// Prefer using `SolInterface` methods instead. 
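The `SolInterface` implementation above is the dispatch table for the whole interface: `SELECTORS` is kept sorted, so routing calldata to a variant is a binary search plus the matching decode shim. A sketch of how a caller might use it, assuming `alloy_sol_types::SolInterface` is in scope:

```rust
use alloy_sol_types::SolInterface;

fn route(calldata: &[u8]) -> alloy_sol_types::Result<&'static str> {
    // `abi_decode` splits off the 4-byte selector, binary-searches
    // `IERC20Calls::SELECTORS`, and runs the matching decode shim.
    let call = IERC20Calls::abi_decode(calldata, true)?;
    Ok(match call {
        IERC20Calls::transfer(_) => "transfer(address,uint256)",
        IERC20Calls::approve(_) => "approve(address,uint256)",
        _ => "another IERC20 call",
    })
}
```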
- pub const SELECTORS: &'static [[u8; 32usize]] = &[ - [ - 140u8, - 91u8, - 225u8, - 229u8, - 235u8, - 236u8, - 125u8, - 91u8, - 209u8, - 79u8, - 113u8, - 66u8, - 125u8, - 30u8, - 132u8, - 243u8, - 221u8, - 3u8, - 20u8, - 192u8, - 247u8, - 178u8, - 41u8, - 30u8, - 91u8, - 32u8, - 10u8, - 200u8, - 199u8, - 195u8, - 185u8, - 37u8, - ], - [ - 221u8, - 242u8, - 82u8, - 173u8, - 27u8, - 226u8, - 200u8, - 155u8, - 105u8, - 194u8, - 176u8, - 104u8, - 252u8, - 55u8, - 141u8, - 170u8, - 149u8, - 43u8, - 167u8, - 241u8, - 99u8, - 196u8, - 161u8, - 22u8, - 40u8, - 245u8, - 90u8, - 77u8, - 245u8, - 35u8, - 179u8, - 239u8, - ], - ]; - } - #[automatically_derived] - impl alloy_sol_types::SolEventInterface for IERC20Events { - const NAME: &'static str = "IERC20Events"; - const COUNT: usize = 2usize; - fn decode_raw_log( - topics: &[alloy_sol_types::Word], - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - match topics.first().copied() { - Some(::SIGNATURE_HASH) => { - ::decode_raw_log( - topics, - data, - validate, - ) - .map(Self::Transfer) - } - Some(::SIGNATURE_HASH) => { - ::decode_raw_log( - topics, - data, - validate, - ) - .map(Self::Approval) - } - _ => { - alloy_sol_types::private::Err(alloy_sol_types::Error::InvalidLog { - name: ::NAME, - log: alloy_sol_types::private::Box::new( - alloy_sol_types::private::LogData::new_unchecked( - topics.to_vec(), - data.to_vec().into(), - ), - ), - }) - } - } - } - } - #[automatically_derived] - impl alloy_sol_types::private::IntoLogData for IERC20Events { - fn to_log_data(&self) -> alloy_sol_types::private::LogData { - match self { - Self::Transfer(inner) => { - alloy_sol_types::private::IntoLogData::to_log_data(inner) - } - Self::Approval(inner) => { - alloy_sol_types::private::IntoLogData::to_log_data(inner) - } - } - } - fn into_log_data(self) -> alloy_sol_types::private::LogData { - match self { - Self::Transfer(inner) => { - alloy_sol_types::private::IntoLogData::into_log_data(inner) - } - Self::Approval(inner) => { - alloy_sol_types::private::IntoLogData::into_log_data(inner) - } - } - } - } -} diff --git a/processor/ethereum/contracts/src/abigen/mod.rs b/processor/ethereum/contracts/src/abigen/mod.rs deleted file mode 100644 index 541c2980..00000000 --- a/processor/ethereum/contracts/src/abigen/mod.rs +++ /dev/null @@ -1,3 +0,0 @@ -pub mod erc20; -pub mod deployer; -pub mod router; diff --git a/processor/ethereum/contracts/src/abigen/router.rs b/processor/ethereum/contracts/src/abigen/router.rs deleted file mode 100644 index cea1858f..00000000 --- a/processor/ethereum/contracts/src/abigen/router.rs +++ /dev/null @@ -1,2958 +0,0 @@ -/** - -Generated by the following Solidity interface... 
-```solidity -interface Router { - type DestinationType is uint8; - struct OutInstruction { - DestinationType destinationType; - bytes destination; - address coin; - uint256 value; - } - struct Signature { - bytes32 c; - bytes32 s; - } - - error FailedTransfer(); - error InvalidAmount(); - error InvalidSignature(); - - event Executed(uint256 indexed nonce, bytes32 indexed batch); - event InInstruction(address indexed from, address indexed coin, uint256 amount, bytes instruction); - event SeraiKeyUpdated(uint256 indexed nonce, bytes32 indexed key); - - constructor(bytes32 initialSeraiKey); - - function arbitaryCallOut(bytes memory code) external; - function execute(OutInstruction[] memory transactions, Signature memory signature) external; - function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable; - function nonce() external view returns (uint256); - function seraiKey() external view returns (bytes32); - function smartContractNonce() external view returns (uint256); - function updateSeraiKey(bytes32 newSeraiKey, Signature memory signature) external; -} -``` - -...which was generated by the following JSON ABI: -```json -[ - { - "type": "constructor", - "inputs": [ - { - "name": "initialSeraiKey", - "type": "bytes32", - "internalType": "bytes32" - } - ], - "stateMutability": "nonpayable" - }, - { - "type": "function", - "name": "arbitaryCallOut", - "inputs": [ - { - "name": "code", - "type": "bytes", - "internalType": "bytes" - } - ], - "outputs": [], - "stateMutability": "nonpayable" - }, - { - "type": "function", - "name": "execute", - "inputs": [ - { - "name": "transactions", - "type": "tuple[]", - "internalType": "struct Router.OutInstruction[]", - "components": [ - { - "name": "destinationType", - "type": "uint8", - "internalType": "enum Router.DestinationType" - }, - { - "name": "destination", - "type": "bytes", - "internalType": "bytes" - }, - { - "name": "coin", - "type": "address", - "internalType": "address" - }, - { - "name": "value", - "type": "uint256", - "internalType": "uint256" - } - ] - }, - { - "name": "signature", - "type": "tuple", - "internalType": "struct Router.Signature", - "components": [ - { - "name": "c", - "type": "bytes32", - "internalType": "bytes32" - }, - { - "name": "s", - "type": "bytes32", - "internalType": "bytes32" - } - ] - } - ], - "outputs": [], - "stateMutability": "nonpayable" - }, - { - "type": "function", - "name": "inInstruction", - "inputs": [ - { - "name": "coin", - "type": "address", - "internalType": "address" - }, - { - "name": "amount", - "type": "uint256", - "internalType": "uint256" - }, - { - "name": "instruction", - "type": "bytes", - "internalType": "bytes" - } - ], - "outputs": [], - "stateMutability": "payable" - }, - { - "type": "function", - "name": "nonce", - "inputs": [], - "outputs": [ - { - "name": "", - "type": "uint256", - "internalType": "uint256" - } - ], - "stateMutability": "view" - }, - { - "type": "function", - "name": "seraiKey", - "inputs": [], - "outputs": [ - { - "name": "", - "type": "bytes32", - "internalType": "bytes32" - } - ], - "stateMutability": "view" - }, - { - "type": "function", - "name": "smartContractNonce", - "inputs": [], - "outputs": [ - { - "name": "", - "type": "uint256", - "internalType": "uint256" - } - ], - "stateMutability": "view" - }, - { - "type": "function", - "name": "updateSeraiKey", - "inputs": [ - { - "name": "newSeraiKey", - "type": "bytes32", - "internalType": "bytes32" - }, - { - "name": "signature", - "type": "tuple", - "internalType": "struct 
Router.Signature", - "components": [ - { - "name": "c", - "type": "bytes32", - "internalType": "bytes32" - }, - { - "name": "s", - "type": "bytes32", - "internalType": "bytes32" - } - ] - } - ], - "outputs": [], - "stateMutability": "nonpayable" - }, - { - "type": "event", - "name": "Executed", - "inputs": [ - { - "name": "nonce", - "type": "uint256", - "indexed": true, - "internalType": "uint256" - }, - { - "name": "batch", - "type": "bytes32", - "indexed": true, - "internalType": "bytes32" - } - ], - "anonymous": false - }, - { - "type": "event", - "name": "InInstruction", - "inputs": [ - { - "name": "from", - "type": "address", - "indexed": true, - "internalType": "address" - }, - { - "name": "coin", - "type": "address", - "indexed": true, - "internalType": "address" - }, - { - "name": "amount", - "type": "uint256", - "indexed": false, - "internalType": "uint256" - }, - { - "name": "instruction", - "type": "bytes", - "indexed": false, - "internalType": "bytes" - } - ], - "anonymous": false - }, - { - "type": "event", - "name": "SeraiKeyUpdated", - "inputs": [ - { - "name": "nonce", - "type": "uint256", - "indexed": true, - "internalType": "uint256" - }, - { - "name": "key", - "type": "bytes32", - "indexed": true, - "internalType": "bytes32" - } - ], - "anonymous": false - }, - { - "type": "error", - "name": "FailedTransfer", - "inputs": [] - }, - { - "type": "error", - "name": "InvalidAmount", - "inputs": [] - }, - { - "type": "error", - "name": "InvalidSignature", - "inputs": [] - } -] -```*/ -#[allow(non_camel_case_types, non_snake_case, clippy::style)] -pub mod Router { - use super::*; - use ::alloy_sol_types as alloy_sol_types; - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct DestinationType(u8); - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[automatically_derived] - impl alloy_sol_types::private::SolTypeValue for u8 { - #[inline] - fn stv_to_tokens( - &self, - ) -> <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::SolType>::Token<'_> { - alloy_sol_types::private::SolTypeValue::< - ::alloy_sol_types::sol_data::Uint<8>, - >::stv_to_tokens(self) - } - #[inline] - fn stv_eip712_data_word(&self) -> alloy_sol_types::Word { - <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::SolType>::tokenize(self) - .0 - } - #[inline] - fn stv_abi_encode_packed_to( - &self, - out: &mut alloy_sol_types::private::Vec, - ) { - <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::SolType>::abi_encode_packed_to(self, out) - } - #[inline] - fn stv_abi_packed_encoded_size(&self) -> usize { - <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::SolType>::abi_encoded_size(self) - } - } - #[automatically_derived] - impl DestinationType { - /// The Solidity type name. - pub const NAME: &'static str = stringify!(@ name); - /// Convert from the underlying value type. - #[inline] - pub const fn from(value: u8) -> Self { - Self(value) - } - /// Return the underlying value. - #[inline] - pub const fn into(self) -> u8 { - self.0 - } - /// Return the single encoding of this value, delegating to the - /// underlying type. - #[inline] - pub fn abi_encode(&self) -> alloy_sol_types::private::Vec { - ::abi_encode(&self.0) - } - /// Return the packed encoding of this value, delegating to the - /// underlying type. 
- #[inline] - pub fn abi_encode_packed(&self) -> alloy_sol_types::private::Vec { - ::abi_encode_packed(&self.0) - } - } - #[automatically_derived] - impl alloy_sol_types::SolType for DestinationType { - type RustType = u8; - type Token<'a> = <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::SolType>::Token<'a>; - const SOL_NAME: &'static str = Self::NAME; - const ENCODED_SIZE: Option = <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::SolType>::ENCODED_SIZE; - const PACKED_ENCODED_SIZE: Option = <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::SolType>::PACKED_ENCODED_SIZE; - #[inline] - fn valid_token(token: &Self::Token<'_>) -> bool { - Self::type_check(token).is_ok() - } - #[inline] - fn type_check(token: &Self::Token<'_>) -> alloy_sol_types::Result<()> { - <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::SolType>::type_check(token) - } - #[inline] - fn detokenize(token: Self::Token<'_>) -> Self::RustType { - <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::SolType>::detokenize(token) - } - } - #[automatically_derived] - impl alloy_sol_types::EventTopic for DestinationType { - #[inline] - fn topic_preimage_length(rust: &Self::RustType) -> usize { - <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::EventTopic>::topic_preimage_length(rust) - } - #[inline] - fn encode_topic_preimage( - rust: &Self::RustType, - out: &mut alloy_sol_types::private::Vec, - ) { - <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::EventTopic>::encode_topic_preimage(rust, out) - } - #[inline] - fn encode_topic( - rust: &Self::RustType, - ) -> alloy_sol_types::abi::token::WordToken { - <::alloy_sol_types::sol_data::Uint< - 8, - > as alloy_sol_types::EventTopic>::encode_topic(rust) - } - } - }; - /**```solidity -struct OutInstruction { DestinationType destinationType; bytes destination; address coin; uint256 value; } -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct OutInstruction { - pub destinationType: ::RustType, - pub destination: ::alloy_sol_types::private::Bytes, - pub coin: ::alloy_sol_types::private::Address, - pub value: ::alloy_sol_types::private::primitives::aliases::U256, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[doc(hidden)] - type UnderlyingSolTuple<'a> = ( - DestinationType, - ::alloy_sol_types::sol_data::Bytes, - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Uint<256>, - ); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::RustType, - ::alloy_sol_types::private::Bytes, - ::alloy_sol_types::private::Address, - ::alloy_sol_types::private::primitives::aliases::U256, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: OutInstruction) -> Self { - (value.destinationType, value.destination, value.coin, value.value) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for OutInstruction { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { - destinationType: tuple.0, - destination: tuple.1, - coin: tuple.2, - value: tuple.3, - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolValue for OutInstruction { 
- type SolType = Self; - } - #[automatically_derived] - impl alloy_sol_types::private::SolTypeValue for OutInstruction { - #[inline] - fn stv_to_tokens(&self) -> ::Token<'_> { - ( - ::tokenize( - &self.destinationType, - ), - <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::tokenize( - &self.destination, - ), - <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( - &self.coin, - ), - <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::SolType>::tokenize(&self.value), - ) - } - #[inline] - fn stv_abi_encoded_size(&self) -> usize { - if let Some(size) = ::ENCODED_SIZE { - return size; - } - let tuple = as ::core::convert::From>::from(self.clone()); - as alloy_sol_types::SolType>::abi_encoded_size(&tuple) - } - #[inline] - fn stv_eip712_data_word(&self) -> alloy_sol_types::Word { - ::eip712_hash_struct(self) - } - #[inline] - fn stv_abi_encode_packed_to( - &self, - out: &mut alloy_sol_types::private::Vec, - ) { - let tuple = as ::core::convert::From>::from(self.clone()); - as alloy_sol_types::SolType>::abi_encode_packed_to(&tuple, out) - } - #[inline] - fn stv_abi_packed_encoded_size(&self) -> usize { - if let Some(size) = ::PACKED_ENCODED_SIZE { - return size; - } - let tuple = as ::core::convert::From>::from(self.clone()); - as alloy_sol_types::SolType>::abi_packed_encoded_size(&tuple) - } - } - #[automatically_derived] - impl alloy_sol_types::SolType for OutInstruction { - type RustType = Self; - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SOL_NAME: &'static str = ::NAME; - const ENCODED_SIZE: Option = as alloy_sol_types::SolType>::ENCODED_SIZE; - const PACKED_ENCODED_SIZE: Option = as alloy_sol_types::SolType>::PACKED_ENCODED_SIZE; - #[inline] - fn valid_token(token: &Self::Token<'_>) -> bool { - as alloy_sol_types::SolType>::valid_token(token) - } - #[inline] - fn detokenize(token: Self::Token<'_>) -> Self::RustType { - let tuple = as alloy_sol_types::SolType>::detokenize(token); - >>::from(tuple) - } - } - #[automatically_derived] - impl alloy_sol_types::SolStruct for OutInstruction { - const NAME: &'static str = "OutInstruction"; - #[inline] - fn eip712_root_type() -> alloy_sol_types::private::Cow<'static, str> { - alloy_sol_types::private::Cow::Borrowed( - "OutInstruction(uint8 destinationType,bytes destination,address coin,uint256 value)", - ) - } - #[inline] - fn eip712_components() -> alloy_sol_types::private::Vec< - alloy_sol_types::private::Cow<'static, str>, - > { - alloy_sol_types::private::Vec::new() - } - #[inline] - fn eip712_encode_type() -> alloy_sol_types::private::Cow<'static, str> { - ::eip712_root_type() - } - #[inline] - fn eip712_encode_data(&self) -> alloy_sol_types::private::Vec { - [ - ::eip712_data_word( - &self.destinationType, - ) - .0, - <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::eip712_data_word( - &self.destination, - ) - .0, - <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::eip712_data_word( - &self.coin, - ) - .0, - <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::SolType>::eip712_data_word(&self.value) - .0, - ] - .concat() - } - } - #[automatically_derived] - impl alloy_sol_types::EventTopic for OutInstruction { - #[inline] - fn topic_preimage_length(rust: &Self::RustType) -> usize { - 0usize - + ::topic_preimage_length( - &rust.destinationType, - ) - + <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::EventTopic>::topic_preimage_length( - &rust.destination, - ) - + <::alloy_sol_types::sol_data::Address as 
alloy_sol_types::EventTopic>::topic_preimage_length( - &rust.coin, - ) - + <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::EventTopic>::topic_preimage_length(&rust.value) - } - #[inline] - fn encode_topic_preimage( - rust: &Self::RustType, - out: &mut alloy_sol_types::private::Vec, - ) { - out.reserve( - ::topic_preimage_length(rust), - ); - ::encode_topic_preimage( - &rust.destinationType, - out, - ); - <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::EventTopic>::encode_topic_preimage( - &rust.destination, - out, - ); - <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic_preimage( - &rust.coin, - out, - ); - <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::EventTopic>::encode_topic_preimage( - &rust.value, - out, - ); - } - #[inline] - fn encode_topic( - rust: &Self::RustType, - ) -> alloy_sol_types::abi::token::WordToken { - let mut out = alloy_sol_types::private::Vec::new(); - ::encode_topic_preimage( - rust, - &mut out, - ); - alloy_sol_types::abi::token::WordToken( - alloy_sol_types::private::keccak256(out), - ) - } - } - }; - /**```solidity -struct Signature { bytes32 c; bytes32 s; } -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct Signature { - pub c: ::alloy_sol_types::private::FixedBytes<32>, - pub s: ::alloy_sol_types::private::FixedBytes<32>, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[doc(hidden)] - type UnderlyingSolTuple<'a> = ( - ::alloy_sol_types::sol_data::FixedBytes<32>, - ::alloy_sol_types::sol_data::FixedBytes<32>, - ); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::FixedBytes<32>, - ::alloy_sol_types::private::FixedBytes<32>, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: Signature) -> Self { - (value.c, value.s) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for Signature { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { c: tuple.0, s: tuple.1 } - } - } - #[automatically_derived] - impl alloy_sol_types::SolValue for Signature { - type SolType = Self; - } - #[automatically_derived] - impl alloy_sol_types::private::SolTypeValue for Signature { - #[inline] - fn stv_to_tokens(&self) -> ::Token<'_> { - ( - <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::SolType>::tokenize(&self.c), - <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::SolType>::tokenize(&self.s), - ) - } - #[inline] - fn stv_abi_encoded_size(&self) -> usize { - if let Some(size) = ::ENCODED_SIZE { - return size; - } - let tuple = as ::core::convert::From>::from(self.clone()); - as alloy_sol_types::SolType>::abi_encoded_size(&tuple) - } - #[inline] - fn stv_eip712_data_word(&self) -> alloy_sol_types::Word { - ::eip712_hash_struct(self) - } - #[inline] - fn stv_abi_encode_packed_to( - &self, - out: &mut alloy_sol_types::private::Vec, - ) { - let tuple = as ::core::convert::From>::from(self.clone()); - as alloy_sol_types::SolType>::abi_encode_packed_to(&tuple, out) - } - #[inline] - fn stv_abi_packed_encoded_size(&self) -> usize { - if let Some(size) = ::PACKED_ENCODED_SIZE { - return 
size; - } - let tuple = as ::core::convert::From>::from(self.clone()); - as alloy_sol_types::SolType>::abi_packed_encoded_size(&tuple) - } - } - #[automatically_derived] - impl alloy_sol_types::SolType for Signature { - type RustType = Self; - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SOL_NAME: &'static str = ::NAME; - const ENCODED_SIZE: Option = as alloy_sol_types::SolType>::ENCODED_SIZE; - const PACKED_ENCODED_SIZE: Option = as alloy_sol_types::SolType>::PACKED_ENCODED_SIZE; - #[inline] - fn valid_token(token: &Self::Token<'_>) -> bool { - as alloy_sol_types::SolType>::valid_token(token) - } - #[inline] - fn detokenize(token: Self::Token<'_>) -> Self::RustType { - let tuple = as alloy_sol_types::SolType>::detokenize(token); - >>::from(tuple) - } - } - #[automatically_derived] - impl alloy_sol_types::SolStruct for Signature { - const NAME: &'static str = "Signature"; - #[inline] - fn eip712_root_type() -> alloy_sol_types::private::Cow<'static, str> { - alloy_sol_types::private::Cow::Borrowed("Signature(bytes32 c,bytes32 s)") - } - #[inline] - fn eip712_components() -> alloy_sol_types::private::Vec< - alloy_sol_types::private::Cow<'static, str>, - > { - alloy_sol_types::private::Vec::new() - } - #[inline] - fn eip712_encode_type() -> alloy_sol_types::private::Cow<'static, str> { - ::eip712_root_type() - } - #[inline] - fn eip712_encode_data(&self) -> alloy_sol_types::private::Vec { - [ - <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::SolType>::eip712_data_word(&self.c) - .0, - <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::SolType>::eip712_data_word(&self.s) - .0, - ] - .concat() - } - } - #[automatically_derived] - impl alloy_sol_types::EventTopic for Signature { - #[inline] - fn topic_preimage_length(rust: &Self::RustType) -> usize { - 0usize - + <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::EventTopic>::topic_preimage_length(&rust.c) - + <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::EventTopic>::topic_preimage_length(&rust.s) - } - #[inline] - fn encode_topic_preimage( - rust: &Self::RustType, - out: &mut alloy_sol_types::private::Vec, - ) { - out.reserve( - ::topic_preimage_length(rust), - ); - <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::EventTopic>::encode_topic_preimage(&rust.c, out); - <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::EventTopic>::encode_topic_preimage(&rust.s, out); - } - #[inline] - fn encode_topic( - rust: &Self::RustType, - ) -> alloy_sol_types::abi::token::WordToken { - let mut out = alloy_sol_types::private::Vec::new(); - ::encode_topic_preimage( - rust, - &mut out, - ); - alloy_sol_types::abi::token::WordToken( - alloy_sol_types::private::keccak256(out), - ) - } - } - }; - /**Custom error with signature `FailedTransfer()` and selector `0xbfa871c5`. 
-```solidity -error FailedTransfer(); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct FailedTransfer {} - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: FailedTransfer) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for FailedTransfer { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - #[automatically_derived] - impl alloy_sol_types::SolError for FailedTransfer { - type Parameters<'a> = UnderlyingSolTuple<'a>; - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "FailedTransfer()"; - const SELECTOR: [u8; 4] = [191u8, 168u8, 113u8, 197u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - () - } - } - }; - /**Custom error with signature `InvalidAmount()` and selector `0x2c5211c6`. -```solidity -error InvalidAmount(); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct InvalidAmount {} - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: InvalidAmount) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for InvalidAmount { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - #[automatically_derived] - impl alloy_sol_types::SolError for InvalidAmount { - type Parameters<'a> = UnderlyingSolTuple<'a>; - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "InvalidAmount()"; - const SELECTOR: [u8; 4] = [44u8, 82u8, 17u8, 198u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - () - } - } - }; - /**Custom error with signature `InvalidSignature()` and selector `0x8baa579f`. 
-```solidity -error InvalidSignature(); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct InvalidSignature {} - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: InvalidSignature) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for InvalidSignature { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - #[automatically_derived] - impl alloy_sol_types::SolError for InvalidSignature { - type Parameters<'a> = UnderlyingSolTuple<'a>; - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "InvalidSignature()"; - const SELECTOR: [u8; 4] = [139u8, 170u8, 87u8, 159u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - () - } - } - }; - /**Event with signature `Executed(uint256,bytes32)` and selector `0xc218c77e54cac1162571e52b65bb27aa0cdfcc70b7c7296ad83933914b132091`. -```solidity -event Executed(uint256 indexed nonce, bytes32 indexed batch); -```*/ - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - #[derive(Clone)] - pub struct Executed { - #[allow(missing_docs)] - pub nonce: ::alloy_sol_types::private::primitives::aliases::U256, - #[allow(missing_docs)] - pub batch: ::alloy_sol_types::private::FixedBytes<32>, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[automatically_derived] - impl alloy_sol_types::SolEvent for Executed { - type DataTuple<'a> = (); - type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - type TopicList = ( - alloy_sol_types::sol_data::FixedBytes<32>, - ::alloy_sol_types::sol_data::Uint<256>, - ::alloy_sol_types::sol_data::FixedBytes<32>, - ); - const SIGNATURE: &'static str = "Executed(uint256,bytes32)"; - const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ - 194u8, - 24u8, - 199u8, - 126u8, - 84u8, - 202u8, - 193u8, - 22u8, - 37u8, - 113u8, - 229u8, - 43u8, - 101u8, - 187u8, - 39u8, - 170u8, - 12u8, - 223u8, - 204u8, - 112u8, - 183u8, - 199u8, - 41u8, - 106u8, - 216u8, - 57u8, - 51u8, - 145u8, - 75u8, - 19u8, - 32u8, - 145u8, - ]); - const ANONYMOUS: bool = false; - #[allow(unused_variables)] - #[inline] - fn new( - topics: ::RustType, - data: as alloy_sol_types::SolType>::RustType, - ) -> Self { - Self { - nonce: topics.1, - batch: topics.2, - } - } - #[inline] - fn tokenize_body(&self) -> Self::DataToken<'_> { - () - } - #[inline] - fn topics(&self) -> ::RustType { - (Self::SIGNATURE_HASH.into(), self.nonce.clone(), self.batch.clone()) - } - #[inline] - fn encode_topics_raw( - &self, - out: &mut [alloy_sol_types::abi::token::WordToken], - ) -> alloy_sol_types::Result<()> { - if out.len() < ::COUNT { - return Err(alloy_sol_types::Error::Overrun); - } - out[0usize] = alloy_sol_types::abi::token::WordToken( - Self::SIGNATURE_HASH, - ); - out[1usize] = 
<::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::EventTopic>::encode_topic(&self.nonce); - out[2usize] = <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::EventTopic>::encode_topic(&self.batch); - Ok(()) - } - } - #[automatically_derived] - impl alloy_sol_types::private::IntoLogData for Executed { - fn to_log_data(&self) -> alloy_sol_types::private::LogData { - From::from(self) - } - fn into_log_data(self) -> alloy_sol_types::private::LogData { - From::from(&self) - } - } - #[automatically_derived] - impl From<&Executed> for alloy_sol_types::private::LogData { - #[inline] - fn from(this: &Executed) -> alloy_sol_types::private::LogData { - alloy_sol_types::SolEvent::encode_log_data(this) - } - } - }; - /**Event with signature `InInstruction(address,address,uint256,bytes)` and selector `0x346fd5cd6d19d26d3afd222f43033ecd0d5614ca64bec0aed101482cd87e922f`. -```solidity -event InInstruction(address indexed from, address indexed coin, uint256 amount, bytes instruction); -```*/ - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - #[derive(Clone)] - pub struct InInstruction { - #[allow(missing_docs)] - pub from: ::alloy_sol_types::private::Address, - #[allow(missing_docs)] - pub coin: ::alloy_sol_types::private::Address, - #[allow(missing_docs)] - pub amount: ::alloy_sol_types::private::primitives::aliases::U256, - #[allow(missing_docs)] - pub instruction: ::alloy_sol_types::private::Bytes, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[automatically_derived] - impl alloy_sol_types::SolEvent for InInstruction { - type DataTuple<'a> = ( - ::alloy_sol_types::sol_data::Uint<256>, - ::alloy_sol_types::sol_data::Bytes, - ); - type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - type TopicList = ( - alloy_sol_types::sol_data::FixedBytes<32>, - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Address, - ); - const SIGNATURE: &'static str = "InInstruction(address,address,uint256,bytes)"; - const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ - 52u8, - 111u8, - 213u8, - 205u8, - 109u8, - 25u8, - 210u8, - 109u8, - 58u8, - 253u8, - 34u8, - 47u8, - 67u8, - 3u8, - 62u8, - 205u8, - 13u8, - 86u8, - 20u8, - 202u8, - 100u8, - 190u8, - 192u8, - 174u8, - 209u8, - 1u8, - 72u8, - 44u8, - 216u8, - 126u8, - 146u8, - 47u8, - ]); - const ANONYMOUS: bool = false; - #[allow(unused_variables)] - #[inline] - fn new( - topics: ::RustType, - data: as alloy_sol_types::SolType>::RustType, - ) -> Self { - Self { - from: topics.1, - coin: topics.2, - amount: data.0, - instruction: data.1, - } - } - #[inline] - fn tokenize_body(&self) -> Self::DataToken<'_> { - ( - <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::SolType>::tokenize(&self.amount), - <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::tokenize( - &self.instruction, - ), - ) - } - #[inline] - fn topics(&self) -> ::RustType { - (Self::SIGNATURE_HASH.into(), self.from.clone(), self.coin.clone()) - } - #[inline] - fn encode_topics_raw( - &self, - out: &mut [alloy_sol_types::abi::token::WordToken], - ) -> alloy_sol_types::Result<()> { - if out.len() < ::COUNT { - return Err(alloy_sol_types::Error::Overrun); - } - out[0usize] = alloy_sol_types::abi::token::WordToken( - Self::SIGNATURE_HASH, - ); - out[1usize] = <::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( - &self.from, - ); - out[2usize] = 
<::alloy_sol_types::sol_data::Address as alloy_sol_types::EventTopic>::encode_topic( - &self.coin, - ); - Ok(()) - } - } - #[automatically_derived] - impl alloy_sol_types::private::IntoLogData for InInstruction { - fn to_log_data(&self) -> alloy_sol_types::private::LogData { - From::from(self) - } - fn into_log_data(self) -> alloy_sol_types::private::LogData { - From::from(&self) - } - } - #[automatically_derived] - impl From<&InInstruction> for alloy_sol_types::private::LogData { - #[inline] - fn from(this: &InInstruction) -> alloy_sol_types::private::LogData { - alloy_sol_types::SolEvent::encode_log_data(this) - } - } - }; - /**Event with signature `SeraiKeyUpdated(uint256,bytes32)` and selector `0x1b9ff0164e811045a617ae783e807501a8e27762a7cb8f2fbd027851752570b5`. -```solidity -event SeraiKeyUpdated(uint256 indexed nonce, bytes32 indexed key); -```*/ - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - #[derive(Clone)] - pub struct SeraiKeyUpdated { - #[allow(missing_docs)] - pub nonce: ::alloy_sol_types::private::primitives::aliases::U256, - #[allow(missing_docs)] - pub key: ::alloy_sol_types::private::FixedBytes<32>, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - #[automatically_derived] - impl alloy_sol_types::SolEvent for SeraiKeyUpdated { - type DataTuple<'a> = (); - type DataToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - type TopicList = ( - alloy_sol_types::sol_data::FixedBytes<32>, - ::alloy_sol_types::sol_data::Uint<256>, - ::alloy_sol_types::sol_data::FixedBytes<32>, - ); - const SIGNATURE: &'static str = "SeraiKeyUpdated(uint256,bytes32)"; - const SIGNATURE_HASH: alloy_sol_types::private::B256 = alloy_sol_types::private::B256::new([ - 27u8, - 159u8, - 240u8, - 22u8, - 78u8, - 129u8, - 16u8, - 69u8, - 166u8, - 23u8, - 174u8, - 120u8, - 62u8, - 128u8, - 117u8, - 1u8, - 168u8, - 226u8, - 119u8, - 98u8, - 167u8, - 203u8, - 143u8, - 47u8, - 189u8, - 2u8, - 120u8, - 81u8, - 117u8, - 37u8, - 112u8, - 181u8, - ]); - const ANONYMOUS: bool = false; - #[allow(unused_variables)] - #[inline] - fn new( - topics: ::RustType, - data: as alloy_sol_types::SolType>::RustType, - ) -> Self { - Self { - nonce: topics.1, - key: topics.2, - } - } - #[inline] - fn tokenize_body(&self) -> Self::DataToken<'_> { - () - } - #[inline] - fn topics(&self) -> ::RustType { - (Self::SIGNATURE_HASH.into(), self.nonce.clone(), self.key.clone()) - } - #[inline] - fn encode_topics_raw( - &self, - out: &mut [alloy_sol_types::abi::token::WordToken], - ) -> alloy_sol_types::Result<()> { - if out.len() < ::COUNT { - return Err(alloy_sol_types::Error::Overrun); - } - out[0usize] = alloy_sol_types::abi::token::WordToken( - Self::SIGNATURE_HASH, - ); - out[1usize] = <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::EventTopic>::encode_topic(&self.nonce); - out[2usize] = <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::EventTopic>::encode_topic(&self.key); - Ok(()) - } - } - #[automatically_derived] - impl alloy_sol_types::private::IntoLogData for SeraiKeyUpdated { - fn to_log_data(&self) -> alloy_sol_types::private::LogData { - From::from(self) - } - fn into_log_data(self) -> alloy_sol_types::private::LogData { - From::from(&self) - } - } - #[automatically_derived] - impl From<&SeraiKeyUpdated> for alloy_sol_types::private::LogData { - #[inline] - fn from(this: &SeraiKeyUpdated) -> alloy_sol_types::private::LogData { - alloy_sol_types::SolEvent::encode_log_data(this) - } - } - 
}; - /**Constructor`. -```solidity -constructor(bytes32 initialSeraiKey); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct constructorCall { - pub initialSeraiKey: ::alloy_sol_types::private::FixedBytes<32>, - } - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::FixedBytes<32>,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::FixedBytes<32>,); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: constructorCall) -> Self { - (value.initialSeraiKey,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for constructorCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { initialSeraiKey: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolConstructor for constructorCall { - type Parameters<'a> = (::alloy_sol_types::sol_data::FixedBytes<32>,); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - ( - <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::SolType>::tokenize(&self.initialSeraiKey), - ) - } - } - }; - /**Function with signature `arbitaryCallOut(bytes)` and selector `0x3cbd2bf6`. -```solidity -function arbitaryCallOut(bytes memory code) external; -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct arbitaryCallOutCall { - pub code: ::alloy_sol_types::private::Bytes, - } - ///Container type for the return parameters of the [`arbitaryCallOut(bytes)`](arbitaryCallOutCall) function. 
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct arbitaryCallOutReturn {} - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Bytes,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::Bytes,); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: arbitaryCallOutCall) -> Self { - (value.code,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for arbitaryCallOutCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { code: tuple.0 } - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From - for UnderlyingRustTuple<'_> { - fn from(value: arbitaryCallOutReturn) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> - for arbitaryCallOutReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for arbitaryCallOutCall { - type Parameters<'a> = (::alloy_sol_types::sol_data::Bytes,); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = arbitaryCallOutReturn; - type ReturnTuple<'a> = (); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "arbitaryCallOut(bytes)"; - const SELECTOR: [u8; 4] = [60u8, 189u8, 43u8, 246u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - ( - <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::tokenize( - &self.code, - ), - ) - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `execute((uint8,bytes,address,uint256)[],(bytes32,bytes32))` and selector `0xd5f22182`. -```solidity -function execute(OutInstruction[] memory transactions, Signature memory signature) external; -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct executeCall { - pub transactions: ::alloy_sol_types::private::Vec< - ::RustType, - >, - pub signature: ::RustType, - } - ///Container type for the return parameters of the [`execute((uint8,bytes,address,uint256)[],(bytes32,bytes32))`](executeCall) function. 
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct executeReturn {} - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = ( - ::alloy_sol_types::sol_data::Array, - Signature, - ); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::Vec< - ::RustType, - >, - ::RustType, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: executeCall) -> Self { - (value.transactions, value.signature) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for executeCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { - transactions: tuple.0, - signature: tuple.1, - } - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: executeReturn) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for executeReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for executeCall { - type Parameters<'a> = ( - ::alloy_sol_types::sol_data::Array, - Signature, - ); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = executeReturn; - type ReturnTuple<'a> = (); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "execute((uint8,bytes,address,uint256)[],(bytes32,bytes32))"; - const SELECTOR: [u8; 4] = [213u8, 242u8, 33u8, 130u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - ( - <::alloy_sol_types::sol_data::Array< - OutInstruction, - > as alloy_sol_types::SolType>::tokenize(&self.transactions), - ::tokenize(&self.signature), - ) - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `inInstruction(address,uint256,bytes)` and selector `0x0759a1a4`. -```solidity -function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable; -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct inInstructionCall { - pub coin: ::alloy_sol_types::private::Address, - pub amount: ::alloy_sol_types::private::primitives::aliases::U256, - pub instruction: ::alloy_sol_types::private::Bytes, - } - ///Container type for the return parameters of the [`inInstruction(address,uint256,bytes)`](inInstructionCall) function. 
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct inInstructionReturn {} - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = ( - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Uint<256>, - ::alloy_sol_types::sol_data::Bytes, - ); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::Address, - ::alloy_sol_types::private::primitives::aliases::U256, - ::alloy_sol_types::private::Bytes, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: inInstructionCall) -> Self { - (value.coin, value.amount, value.instruction) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for inInstructionCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { - coin: tuple.0, - amount: tuple.1, - instruction: tuple.2, - } - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: inInstructionReturn) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for inInstructionReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for inInstructionCall { - type Parameters<'a> = ( - ::alloy_sol_types::sol_data::Address, - ::alloy_sol_types::sol_data::Uint<256>, - ::alloy_sol_types::sol_data::Bytes, - ); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = inInstructionReturn; - type ReturnTuple<'a> = (); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "inInstruction(address,uint256,bytes)"; - const SELECTOR: [u8; 4] = [7u8, 89u8, 161u8, 164u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - ( - <::alloy_sol_types::sol_data::Address as alloy_sol_types::SolType>::tokenize( - &self.coin, - ), - <::alloy_sol_types::sol_data::Uint< - 256, - > as alloy_sol_types::SolType>::tokenize(&self.amount), - <::alloy_sol_types::sol_data::Bytes as alloy_sol_types::SolType>::tokenize( - &self.instruction, - ), - ) - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `nonce()` and selector `0xaffed0e0`. -```solidity -function nonce() external view returns (uint256); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct nonceCall {} - ///Container type for the return parameters of the [`nonce()`](nonceCall) function. 
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct nonceReturn { - pub _0: ::alloy_sol_types::private::primitives::aliases::U256, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: nonceCall) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for nonceCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::primitives::aliases::U256, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: nonceReturn) -> Self { - (value._0,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for nonceReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { _0: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for nonceCall { - type Parameters<'a> = (); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = nonceReturn; - type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "nonce()"; - const SELECTOR: [u8; 4] = [175u8, 254u8, 208u8, 224u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - () - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `seraiKey()` and selector `0x9d6eea0a`. -```solidity -function seraiKey() external view returns (bytes32); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct seraiKeyCall {} - ///Container type for the return parameters of the [`seraiKey()`](seraiKeyCall) function. 
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct seraiKeyReturn { - pub _0: ::alloy_sol_types::private::FixedBytes<32>, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: seraiKeyCall) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for seraiKeyCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::FixedBytes<32>,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (::alloy_sol_types::private::FixedBytes<32>,); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: seraiKeyReturn) -> Self { - (value._0,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for seraiKeyReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { _0: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for seraiKeyCall { - type Parameters<'a> = (); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = seraiKeyReturn; - type ReturnTuple<'a> = (::alloy_sol_types::sol_data::FixedBytes<32>,); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "seraiKey()"; - const SELECTOR: [u8; 4] = [157u8, 110u8, 234u8, 10u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - () - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `smartContractNonce()` and selector `0xc3727534`. -```solidity -function smartContractNonce() external view returns (uint256); -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct smartContractNonceCall {} - ///Container type for the return parameters of the [`smartContractNonce()`](smartContractNonceCall) function. 
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct smartContractNonceReturn { - pub _0: ::alloy_sol_types::private::primitives::aliases::U256, - } - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From - for UnderlyingRustTuple<'_> { - fn from(value: smartContractNonceCall) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> - for smartContractNonceCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::primitives::aliases::U256, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From - for UnderlyingRustTuple<'_> { - fn from(value: smartContractNonceReturn) -> Self { - (value._0,) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> - for smartContractNonceReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { _0: tuple.0 } - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for smartContractNonceCall { - type Parameters<'a> = (); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = smartContractNonceReturn; - type ReturnTuple<'a> = (::alloy_sol_types::sol_data::Uint<256>,); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "smartContractNonce()"; - const SELECTOR: [u8; 4] = [195u8, 114u8, 117u8, 52u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - () - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - /**Function with signature `updateSeraiKey(bytes32,(bytes32,bytes32))` and selector `0xb5071c6a`. -```solidity -function updateSeraiKey(bytes32 newSeraiKey, Signature memory signature) external; -```*/ - #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct updateSeraiKeyCall { - pub newSeraiKey: ::alloy_sol_types::private::FixedBytes<32>, - pub signature: ::RustType, - } - ///Container type for the return parameters of the [`updateSeraiKey(bytes32,(bytes32,bytes32))`](updateSeraiKeyCall) function. 
- #[allow(non_camel_case_types, non_snake_case)] - #[derive(Clone)] - pub struct updateSeraiKeyReturn {} - #[allow(non_camel_case_types, non_snake_case, clippy::style)] - const _: () = { - use ::alloy_sol_types as alloy_sol_types; - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = ( - ::alloy_sol_types::sol_data::FixedBytes<32>, - Signature, - ); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = ( - ::alloy_sol_types::private::FixedBytes<32>, - ::RustType, - ); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From for UnderlyingRustTuple<'_> { - fn from(value: updateSeraiKeyCall) -> Self { - (value.newSeraiKey, value.signature) - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> for updateSeraiKeyCall { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self { - newSeraiKey: tuple.0, - signature: tuple.1, - } - } - } - } - { - #[doc(hidden)] - type UnderlyingSolTuple<'a> = (); - #[doc(hidden)] - type UnderlyingRustTuple<'a> = (); - #[cfg(test)] - #[allow(dead_code, unreachable_patterns)] - fn _type_assertion( - _t: alloy_sol_types::private::AssertTypeEq, - ) { - match _t { - alloy_sol_types::private::AssertTypeEq::< - ::RustType, - >(_) => {} - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From - for UnderlyingRustTuple<'_> { - fn from(value: updateSeraiKeyReturn) -> Self { - () - } - } - #[automatically_derived] - #[doc(hidden)] - impl ::core::convert::From> - for updateSeraiKeyReturn { - fn from(tuple: UnderlyingRustTuple<'_>) -> Self { - Self {} - } - } - } - #[automatically_derived] - impl alloy_sol_types::SolCall for updateSeraiKeyCall { - type Parameters<'a> = ( - ::alloy_sol_types::sol_data::FixedBytes<32>, - Signature, - ); - type Token<'a> = as alloy_sol_types::SolType>::Token<'a>; - type Return = updateSeraiKeyReturn; - type ReturnTuple<'a> = (); - type ReturnToken<'a> = as alloy_sol_types::SolType>::Token<'a>; - const SIGNATURE: &'static str = "updateSeraiKey(bytes32,(bytes32,bytes32))"; - const SELECTOR: [u8; 4] = [181u8, 7u8, 28u8, 106u8]; - #[inline] - fn new<'a>( - tuple: as alloy_sol_types::SolType>::RustType, - ) -> Self { - tuple.into() - } - #[inline] - fn tokenize(&self) -> Self::Token<'_> { - ( - <::alloy_sol_types::sol_data::FixedBytes< - 32, - > as alloy_sol_types::SolType>::tokenize(&self.newSeraiKey), - ::tokenize(&self.signature), - ) - } - #[inline] - fn abi_decode_returns( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - as alloy_sol_types::SolType>::abi_decode_sequence(data, validate) - .map(Into::into) - } - } - }; - ///Container for all the [`Router`](self) function calls. - pub enum RouterCalls { - arbitaryCallOut(arbitaryCallOutCall), - execute(executeCall), - inInstruction(inInstructionCall), - nonce(nonceCall), - seraiKey(seraiKeyCall), - smartContractNonce(smartContractNonceCall), - updateSeraiKey(updateSeraiKeyCall), - } - #[automatically_derived] - impl RouterCalls { - /// All the selectors of this enum. - /// - /// Note that the selectors might not be in the same order as the variants. - /// No guarantees are made about the order of the selectors. - /// - /// Prefer using `SolInterface` methods instead. 
- pub const SELECTORS: &'static [[u8; 4usize]] = &[ - [7u8, 89u8, 161u8, 164u8], - [60u8, 189u8, 43u8, 246u8], - [157u8, 110u8, 234u8, 10u8], - [175u8, 254u8, 208u8, 224u8], - [181u8, 7u8, 28u8, 106u8], - [195u8, 114u8, 117u8, 52u8], - [213u8, 242u8, 33u8, 130u8], - ]; - } - #[automatically_derived] - impl alloy_sol_types::SolInterface for RouterCalls { - const NAME: &'static str = "RouterCalls"; - const MIN_DATA_LENGTH: usize = 0usize; - const COUNT: usize = 7usize; - #[inline] - fn selector(&self) -> [u8; 4] { - match self { - Self::arbitaryCallOut(_) => { - ::SELECTOR - } - Self::execute(_) => ::SELECTOR, - Self::inInstruction(_) => { - ::SELECTOR - } - Self::nonce(_) => ::SELECTOR, - Self::seraiKey(_) => ::SELECTOR, - Self::smartContractNonce(_) => { - ::SELECTOR - } - Self::updateSeraiKey(_) => { - ::SELECTOR - } - } - } - #[inline] - fn selector_at(i: usize) -> ::core::option::Option<[u8; 4]> { - Self::SELECTORS.get(i).copied() - } - #[inline] - fn valid_selector(selector: [u8; 4]) -> bool { - Self::SELECTORS.binary_search(&selector).is_ok() - } - #[inline] - #[allow(unsafe_code, non_snake_case)] - fn abi_decode_raw( - selector: [u8; 4], - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - static DECODE_SHIMS: &[fn( - &[u8], - bool, - ) -> alloy_sol_types::Result] = &[ - { - fn inInstruction( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(RouterCalls::inInstruction) - } - inInstruction - }, - { - fn arbitaryCallOut( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(RouterCalls::arbitaryCallOut) - } - arbitaryCallOut - }, - { - fn seraiKey( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(RouterCalls::seraiKey) - } - seraiKey - }, - { - fn nonce( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(RouterCalls::nonce) - } - nonce - }, - { - fn updateSeraiKey( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(RouterCalls::updateSeraiKey) - } - updateSeraiKey - }, - { - fn smartContractNonce( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(RouterCalls::smartContractNonce) - } - smartContractNonce - }, - { - fn execute( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(RouterCalls::execute) - } - execute - }, - ]; - let Ok(idx) = Self::SELECTORS.binary_search(&selector) else { - return Err( - alloy_sol_types::Error::unknown_selector( - ::NAME, - selector, - ), - ); - }; - (unsafe { DECODE_SHIMS.get_unchecked(idx) })(data, validate) - } - #[inline] - fn abi_encoded_size(&self) -> usize { - match self { - Self::arbitaryCallOut(inner) => { - ::abi_encoded_size( - inner, - ) - } - Self::execute(inner) => { - ::abi_encoded_size(inner) - } - Self::inInstruction(inner) => { - ::abi_encoded_size( - inner, - ) - } - Self::nonce(inner) => { - ::abi_encoded_size(inner) - } - Self::seraiKey(inner) => { - ::abi_encoded_size(inner) - } - Self::smartContractNonce(inner) => { - ::abi_encoded_size( - inner, - ) - } - Self::updateSeraiKey(inner) => { - ::abi_encoded_size( - inner, - ) - } - } - } - #[inline] - fn abi_encode_raw(&self, out: &mut alloy_sol_types::private::Vec) { - match self { - Self::arbitaryCallOut(inner) => { 
- ::abi_encode_raw( - inner, - out, - ) - } - Self::execute(inner) => { - ::abi_encode_raw(inner, out) - } - Self::inInstruction(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - Self::nonce(inner) => { - ::abi_encode_raw(inner, out) - } - Self::seraiKey(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - Self::smartContractNonce(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - Self::updateSeraiKey(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - } - } - } - ///Container for all the [`Router`](self) custom errors. - pub enum RouterErrors { - FailedTransfer(FailedTransfer), - InvalidAmount(InvalidAmount), - InvalidSignature(InvalidSignature), - } - #[automatically_derived] - impl RouterErrors { - /// All the selectors of this enum. - /// - /// Note that the selectors might not be in the same order as the variants. - /// No guarantees are made about the order of the selectors. - /// - /// Prefer using `SolInterface` methods instead. - pub const SELECTORS: &'static [[u8; 4usize]] = &[ - [44u8, 82u8, 17u8, 198u8], - [139u8, 170u8, 87u8, 159u8], - [191u8, 168u8, 113u8, 197u8], - ]; - } - #[automatically_derived] - impl alloy_sol_types::SolInterface for RouterErrors { - const NAME: &'static str = "RouterErrors"; - const MIN_DATA_LENGTH: usize = 0usize; - const COUNT: usize = 3usize; - #[inline] - fn selector(&self) -> [u8; 4] { - match self { - Self::FailedTransfer(_) => { - ::SELECTOR - } - Self::InvalidAmount(_) => { - ::SELECTOR - } - Self::InvalidSignature(_) => { - ::SELECTOR - } - } - } - #[inline] - fn selector_at(i: usize) -> ::core::option::Option<[u8; 4]> { - Self::SELECTORS.get(i).copied() - } - #[inline] - fn valid_selector(selector: [u8; 4]) -> bool { - Self::SELECTORS.binary_search(&selector).is_ok() - } - #[inline] - #[allow(unsafe_code, non_snake_case)] - fn abi_decode_raw( - selector: [u8; 4], - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - static DECODE_SHIMS: &[fn( - &[u8], - bool, - ) -> alloy_sol_types::Result] = &[ - { - fn InvalidAmount( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(RouterErrors::InvalidAmount) - } - InvalidAmount - }, - { - fn InvalidSignature( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(RouterErrors::InvalidSignature) - } - InvalidSignature - }, - { - fn FailedTransfer( - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - ::abi_decode_raw( - data, - validate, - ) - .map(RouterErrors::FailedTransfer) - } - FailedTransfer - }, - ]; - let Ok(idx) = Self::SELECTORS.binary_search(&selector) else { - return Err( - alloy_sol_types::Error::unknown_selector( - ::NAME, - selector, - ), - ); - }; - (unsafe { DECODE_SHIMS.get_unchecked(idx) })(data, validate) - } - #[inline] - fn abi_encoded_size(&self) -> usize { - match self { - Self::FailedTransfer(inner) => { - ::abi_encoded_size( - inner, - ) - } - Self::InvalidAmount(inner) => { - ::abi_encoded_size(inner) - } - Self::InvalidSignature(inner) => { - ::abi_encoded_size( - inner, - ) - } - } - } - #[inline] - fn abi_encode_raw(&self, out: &mut alloy_sol_types::private::Vec) { - match self { - Self::FailedTransfer(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - Self::InvalidAmount(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - Self::InvalidSignature(inner) => { - ::abi_encode_raw( - inner, - out, - ) - } - } - } - } - ///Container for all the [`Router`](self) events. 
- pub enum RouterEvents { - Executed(Executed), - InInstruction(InInstruction), - SeraiKeyUpdated(SeraiKeyUpdated), - } - #[automatically_derived] - impl RouterEvents { - /// All the selectors of this enum. - /// - /// Note that the selectors might not be in the same order as the variants. - /// No guarantees are made about the order of the selectors. - /// - /// Prefer using `SolInterface` methods instead. - pub const SELECTORS: &'static [[u8; 32usize]] = &[ - [ - 27u8, - 159u8, - 240u8, - 22u8, - 78u8, - 129u8, - 16u8, - 69u8, - 166u8, - 23u8, - 174u8, - 120u8, - 62u8, - 128u8, - 117u8, - 1u8, - 168u8, - 226u8, - 119u8, - 98u8, - 167u8, - 203u8, - 143u8, - 47u8, - 189u8, - 2u8, - 120u8, - 81u8, - 117u8, - 37u8, - 112u8, - 181u8, - ], - [ - 52u8, - 111u8, - 213u8, - 205u8, - 109u8, - 25u8, - 210u8, - 109u8, - 58u8, - 253u8, - 34u8, - 47u8, - 67u8, - 3u8, - 62u8, - 205u8, - 13u8, - 86u8, - 20u8, - 202u8, - 100u8, - 190u8, - 192u8, - 174u8, - 209u8, - 1u8, - 72u8, - 44u8, - 216u8, - 126u8, - 146u8, - 47u8, - ], - [ - 194u8, - 24u8, - 199u8, - 126u8, - 84u8, - 202u8, - 193u8, - 22u8, - 37u8, - 113u8, - 229u8, - 43u8, - 101u8, - 187u8, - 39u8, - 170u8, - 12u8, - 223u8, - 204u8, - 112u8, - 183u8, - 199u8, - 41u8, - 106u8, - 216u8, - 57u8, - 51u8, - 145u8, - 75u8, - 19u8, - 32u8, - 145u8, - ], - ]; - } - #[automatically_derived] - impl alloy_sol_types::SolEventInterface for RouterEvents { - const NAME: &'static str = "RouterEvents"; - const COUNT: usize = 3usize; - fn decode_raw_log( - topics: &[alloy_sol_types::Word], - data: &[u8], - validate: bool, - ) -> alloy_sol_types::Result { - match topics.first().copied() { - Some(::SIGNATURE_HASH) => { - ::decode_raw_log( - topics, - data, - validate, - ) - .map(Self::Executed) - } - Some(::SIGNATURE_HASH) => { - ::decode_raw_log( - topics, - data, - validate, - ) - .map(Self::InInstruction) - } - Some(::SIGNATURE_HASH) => { - ::decode_raw_log( - topics, - data, - validate, - ) - .map(Self::SeraiKeyUpdated) - } - _ => { - alloy_sol_types::private::Err(alloy_sol_types::Error::InvalidLog { - name: ::NAME, - log: alloy_sol_types::private::Box::new( - alloy_sol_types::private::LogData::new_unchecked( - topics.to_vec(), - data.to_vec().into(), - ), - ), - }) - } - } - } - } - #[automatically_derived] - impl alloy_sol_types::private::IntoLogData for RouterEvents { - fn to_log_data(&self) -> alloy_sol_types::private::LogData { - match self { - Self::Executed(inner) => { - alloy_sol_types::private::IntoLogData::to_log_data(inner) - } - Self::InInstruction(inner) => { - alloy_sol_types::private::IntoLogData::to_log_data(inner) - } - Self::SeraiKeyUpdated(inner) => { - alloy_sol_types::private::IntoLogData::to_log_data(inner) - } - } - } - fn into_log_data(self) -> alloy_sol_types::private::LogData { - match self { - Self::Executed(inner) => { - alloy_sol_types::private::IntoLogData::into_log_data(inner) - } - Self::InInstruction(inner) => { - alloy_sol_types::private::IntoLogData::into_log_data(inner) - } - Self::SeraiKeyUpdated(inner) => { - alloy_sol_types::private::IntoLogData::into_log_data(inner) - } - } - } - } -} diff --git a/processor/ethereum/contracts/src/lib.rs b/processor/ethereum/contracts/src/lib.rs deleted file mode 100644 index 9087eaed..00000000 --- a/processor/ethereum/contracts/src/lib.rs +++ /dev/null @@ -1,16 +0,0 @@ -#[rustfmt::skip] -#[expect(warnings)] -#[expect(needless_pass_by_value)] -#[expect(clippy::all)] -#[expect(clippy::ignored_unit_patterns)] -#[expect(clippy::redundant_closure_for_method_calls)] -mod abigen; - -pub mod erc20 
{
-  pub use super::abigen::erc20::IERC20::*;
-}
-pub mod router {
-  pub const BYTECODE: &[u8] =
-    include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-contracts/Router.bin"));
-  pub use super::abigen::router::Router::*;
-}
diff --git a/processor/ethereum/ethereum-serai/Cargo.toml b/processor/ethereum/ethereum-serai/Cargo.toml
deleted file mode 100644
index 73c5b267..00000000
--- a/processor/ethereum/ethereum-serai/Cargo.toml
+++ /dev/null
@@ -1,52 +0,0 @@
-[package]
-name = "ethereum-serai"
-version = "0.1.0"
-description = "An Ethereum library supporting Schnorr signing and on-chain verification"
-license = "AGPL-3.0-only"
-repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum/ethereum-serai"
-authors = ["Luke Parker <lukeparker5132@gmail.com>", "Elizabeth Binks <elizabethjbinks@gmail.com>"]
-edition = "2021"
-publish = false
-rust-version = "1.79"
-
-[package.metadata.docs.rs]
-all-features = true
-rustdoc-args = ["--cfg", "docsrs"]
-
-[lints]
-workspace = true
-
-[dependencies]
-thiserror = { version = "1", default-features = false }
-
-rand_core = { version = "0.6", default-features = false, features = ["std"] }
-
-transcript = { package = "flexible-transcript", path = "../../../crypto/transcript", default-features = false, features = ["recommended"] }
-
-group = { version = "0.13", default-features = false }
-k256 = { version = "^0.13.1", default-features = false, features = ["std", "ecdsa", "arithmetic"] }
-frost = { package = "modular-frost", path = "../../../crypto/frost", default-features = false, features = ["secp256k1"] }
-
-alloy-core = { version = "0.8", default-features = false }
-alloy-sol-types = { version = "0.8", default-features = false, features = ["json"] }
-alloy-consensus = { version = "0.3", default-features = false, features = ["k256"] }
-alloy-network = { version = "0.3", default-features = false }
-alloy-rpc-types-eth = { version = "0.3", default-features = false }
-alloy-rpc-client = { version = "0.3", default-features = false }
-alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false }
-alloy-provider = { version = "0.3", default-features = false }
-
-alloy-node-bindings = { version = "0.3", default-features = false, optional = true }
-
-ethereum-schnorr-contract = { path = "../../../networks/ethereum/schnorr", default-features = false }
-contracts = { package = "serai-processor-ethereum-contracts", path = "../contracts" }
-
-[dev-dependencies]
-frost = { package = "modular-frost", path = "../../../crypto/frost", default-features = false, features = ["tests"] }
-
-tokio = { version = "1", features = ["macros"] }
-
-alloy-node-bindings = { version = "0.3", default-features = false }
-
-[features]
-tests = ["alloy-node-bindings", "frost/tests"]
diff --git a/processor/ethereum/ethereum-serai/LICENSE b/processor/ethereum/ethereum-serai/LICENSE
deleted file mode 100644
index c425427c..00000000
--- a/processor/ethereum/ethereum-serai/LICENSE
+++ /dev/null
@@ -1,15 +0,0 @@
-AGPL-3.0-only license
-
-Copyright (c) 2022-2023 Luke Parker
-
-This program is free software: you can redistribute it and/or modify
-it under the terms of the GNU Affero General Public License Version 3 as
-published by the Free Software Foundation.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU Affero General Public License for more details.
-
-You should have received a copy of the GNU Affero General Public License
-along with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/processor/ethereum/ethereum-serai/README.md b/processor/ethereum/ethereum-serai/README.md
deleted file mode 100644
index 0090b26b..00000000
--- a/processor/ethereum/ethereum-serai/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Ethereum
-
-This package contains Ethereum-related functionality, specifically deploying and
-interacting with Serai contracts.
-
-While `monero-serai` and `bitcoin-serai` are general purpose libraries,
-`ethereum-serai` is Serai specific. If any of the utilities are generally
-desired, please fork and maintain your own copy to ensure the desired
-functionality is preserved, or open an issue to request we make this library
-general purpose.
-
-### Dependencies
-
-- solc
-- [Foundry](https://github.com/foundry-rs/foundry)
diff --git a/processor/ethereum/ethereum-serai/src/crypto.rs b/processor/ethereum/ethereum-serai/src/crypto.rs
deleted file mode 100644
index 3b9dc58a..00000000
--- a/processor/ethereum/ethereum-serai/src/crypto.rs
+++ /dev/null
@@ -1,32 +0,0 @@
-use group::ff::PrimeField;
-use k256::{
-  elliptic_curve::{
-    ops::Reduce,
-    point::{AffineCoordinates, DecompressPoint},
-    sec1::ToEncodedPoint,
-  },
-  AffinePoint, ProjectivePoint, Scalar, U256 as KU256,
-};
-
-use frost::{
-  algorithm::{Hram, SchnorrSignature},
-  curve::{Ciphersuite, Secp256k1},
-};
-
-pub use ethereum_schnorr_contract::*;
-
-use alloy_core::primitives::{Parity, Signature as AlloySignature, Address};
-use alloy_consensus::{SignableTransaction, Signed, TxLegacy};
-
-/// The HRAm to use for the Schnorr Solidity library.
-///
-/// This will panic if the public key being signed for is not representable within the Schnorr
-/// Solidity library.
-#[derive(Clone, Default)]
-pub struct EthereumHram {}
-impl Hram<Secp256k1> for EthereumHram {
-  #[allow(non_snake_case)]
-  fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
-    Signature::challenge(*R, &PublicKey::new(*A).unwrap(), m)
-  }
-}
diff --git a/processor/ethereum/ethereum-serai/src/lib.rs b/processor/ethereum/ethereum-serai/src/lib.rs
deleted file mode 100644
index 1a013ddf..00000000
--- a/processor/ethereum/ethereum-serai/src/lib.rs
+++ /dev/null
@@ -1,41 +0,0 @@
-use thiserror::Error;
-
-pub mod alloy {
-  pub use alloy_core::primitives;
-  pub use alloy_core as core;
-  pub use alloy_sol_types as sol_types;
-
-  pub use alloy_consensus as consensus;
-  pub use alloy_network as network;
-  pub use alloy_rpc_types_eth as rpc_types;
-  pub use alloy_simple_request_transport as simple_request_transport;
-  pub use alloy_rpc_client as rpc_client;
-  pub use alloy_provider as provider;
-}
-
-pub mod crypto;
-
-/*
-pub(crate) mod abi {
-  pub use contracts::erc20;
-  pub use contracts::deployer;
-  pub use contracts::router;
-}
-
-pub mod erc20;
-pub mod deployer;
-pub mod router;
-
-pub mod machine;
-
-#[cfg(any(test, feature = "tests"))]
-pub mod tests;
-
-#[derive(Clone, Copy, PartialEq, Eq, Debug, Error)]
-pub enum Error {
-  #[error("failed to verify Schnorr signature")]
-  InvalidSignature,
-  #[error("couldn't make call/send TX")]
-  ConnectionError,
-}
-*/
diff --git a/processor/ethereum/ethereum-serai/src/machine.rs b/processor/ethereum/ethereum-serai/src/machine.rs
deleted file mode 100644
index 404922f5..00000000
--- a/processor/ethereum/ethereum-serai/src/machine.rs
+++ /dev/null
@@ -1,427 +0,0 @@
-use std::{
-  io::{self, Read},
-  collections::HashMap,
-};
-
-use rand_core::{RngCore, CryptoRng};
-
-use transcript::{Transcript, RecommendedTranscript};
-
-use group::GroupEncoding;
-use frost::{
-  curve::{Ciphersuite, Secp256k1},
-  Participant, ThresholdKeys, FrostError,
-  algorithm::Schnorr,
-  sign::*,
-};
-
-use alloy_core::primitives::U256;
-
-use crate::{
-  crypto::{PublicKey, EthereumHram, Signature},
-  router::{
-    abi::{Call as AbiCall, OutInstruction as AbiOutInstruction},
-    Router,
-  },
-};
-
-#[derive(Clone, PartialEq, Eq, Debug)]
-pub struct Call {
-  pub to: [u8; 20],
-  pub value: U256,
-  pub data: Vec<u8>,
-}
-impl Call {
-  pub fn read<R: Read>(reader: &mut R) -> io::Result<Call> {
-    let mut to = [0; 20];
-    reader.read_exact(&mut to)?;
-
-    let value = {
-      let mut value_bytes = [0; 32];
-      reader.read_exact(&mut value_bytes)?;
-      U256::from_le_slice(&value_bytes)
-    };
-
-    let mut data_len = {
-      let mut data_len = [0; 4];
-      reader.read_exact(&mut data_len)?;
-      usize::try_from(u32::from_le_bytes(data_len)).expect("u32 couldn't fit within a usize")
-    };
-
-    // A valid DoS would be to claim a 4 GB data is present for only 4 bytes
-    // We read this in 1 KB chunks to only read data actually present (with a max DoS of 1 KB)
-    let mut data = vec![];
-    while data_len > 0 {
-      let chunk_len = data_len.min(1024);
-      let mut chunk = vec![0; chunk_len];
-      reader.read_exact(&mut chunk)?;
-      data.extend(&chunk);
-      data_len -= chunk_len;
-    }
-
-    Ok(Call { to, value, data })
-  }
-
-  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
-    writer.write_all(&self.to)?;
-    writer.write_all(&self.value.as_le_bytes())?;
-
-    let data_len = u32::try_from(self.data.len())
-      .map_err(|_| io::Error::other("call data length exceeded 2**32"))?;
-    writer.write_all(&data_len.to_le_bytes())?;
-    writer.write_all(&self.data)
-  }
-}
-impl From<Call> for AbiCall {
-  fn from(call: Call) -> AbiCall {
-    AbiCall { to: call.to.into(), value: call.value, data: call.data.into() }
-  }
-}
-
-#[derive(Clone, PartialEq, Eq, Debug)]
-pub enum OutInstructionTarget {
-  Direct([u8; 20]),
-  Calls(Vec<Call>),
-}
-impl OutInstructionTarget {
-  fn read<R: Read>(reader: &mut R) -> io::Result<OutInstructionTarget> {
-    let mut kind = [0xff];
-    reader.read_exact(&mut kind)?;
-
-    match kind[0] {
-      0 => {
-        let mut addr = [0; 20];
-        reader.read_exact(&mut addr)?;
-        Ok(OutInstructionTarget::Direct(addr))
-      }
-      1 => {
-        let mut calls_len = [0; 4];
-        reader.read_exact(&mut calls_len)?;
-        let calls_len = u32::from_le_bytes(calls_len);
-
-        let mut calls = vec![];
-        for _ in 0 .. calls_len {
-          calls.push(Call::read(reader)?);
-        }
-        Ok(OutInstructionTarget::Calls(calls))
-      }
-      _ => Err(io::Error::other("unrecognized OutInstructionTarget"))?,
-    }
-  }
-
-  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
-    match self {
-      OutInstructionTarget::Direct(addr) => {
-        writer.write_all(&[0])?;
-        writer.write_all(addr)?;
-      }
-      OutInstructionTarget::Calls(calls) => {
-        writer.write_all(&[1])?;
-        let call_len = u32::try_from(calls.len())
-          .map_err(|_| io::Error::other("amount of calls exceeded 2**32"))?;
-        writer.write_all(&call_len.to_le_bytes())?;
-        for call in calls {
-          call.write(writer)?;
-        }
-      }
-    }
-    Ok(())
-  }
-}
-
-#[derive(Clone, PartialEq, Eq, Debug)]
-pub struct OutInstruction {
-  pub target: OutInstructionTarget,
-  pub value: U256,
-}
-impl OutInstruction {
-  fn read<R: Read>(reader: &mut R) -> io::Result<OutInstruction> {
-    let target = OutInstructionTarget::read(reader)?;
-
-    let value = {
-      let mut value_bytes = [0; 32];
-      reader.read_exact(&mut value_bytes)?;
-      U256::from_le_slice(&value_bytes)
-    };
-
-    Ok(OutInstruction { target, value })
-  }
-  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
-    self.target.write(writer)?;
-    writer.write_all(&self.value.as_le_bytes())
-  }
-}
-impl From<OutInstruction> for AbiOutInstruction {
-  fn from(instruction: OutInstruction) -> AbiOutInstruction {
-    match instruction.target {
-      OutInstructionTarget::Direct(addr) => {
-        AbiOutInstruction { to: addr.into(), calls: vec![], value: instruction.value }
-      }
-      OutInstructionTarget::Calls(calls) => AbiOutInstruction {
-        to: [0; 20].into(),
-        calls: calls.into_iter().map(Into::into).collect(),
-        value: instruction.value,
-      },
-    }
-  }
-}
-
-#[derive(Clone, PartialEq, Eq, Debug)]
-pub enum RouterCommand {
-  UpdateSeraiKey { chain_id: U256, nonce: U256, key: PublicKey },
-  Execute { chain_id: U256, nonce: U256, outs: Vec<OutInstruction> },
-}
-
-impl RouterCommand {
-  pub fn msg(&self) -> Vec<u8> {
-    match self {
-      RouterCommand::UpdateSeraiKey { chain_id, nonce, key } => {
-        Router::update_serai_key_message(*chain_id, *nonce, key)
-      }
-      RouterCommand::Execute { chain_id, nonce, outs } => Router::execute_message(
-        *chain_id,
-        *nonce,
-        outs.iter().map(|out| out.clone().into()).collect(),
-      ),
-    }
-  }
-
-  pub fn read<R: Read>(reader: &mut R) -> io::Result<RouterCommand> {
-    let mut kind = [0xff];
-    reader.read_exact(&mut kind)?;
-
-    match kind[0] {
-      0 => {
-        let mut chain_id = [0; 32];
-        reader.read_exact(&mut chain_id)?;
-
-        let mut nonce = [0; 32];
-        reader.read_exact(&mut nonce)?;
-
-        let key = PublicKey::new(Secp256k1::read_G(reader)?)
-          .ok_or(io::Error::other("key for RouterCommand doesn't have an eth representation"))?;
-        Ok(RouterCommand::UpdateSeraiKey {
-          chain_id: U256::from_le_slice(&chain_id),
-          nonce: U256::from_le_slice(&nonce),
-          key,
-        })
-      }
-      1 => {
-        let mut chain_id = [0; 32];
-        reader.read_exact(&mut chain_id)?;
-        let chain_id = U256::from_le_slice(&chain_id);
-
-        let mut nonce = [0; 32];
-        reader.read_exact(&mut nonce)?;
-        let nonce = U256::from_le_slice(&nonce);
-
-        let mut outs_len = [0; 4];
-        reader.read_exact(&mut outs_len)?;
-        let outs_len = u32::from_le_bytes(outs_len);
-
-        let mut outs = vec![];
-        for _ in 0 .. outs_len {
-          outs.push(OutInstruction::read(reader)?);
-        }
-
-        Ok(RouterCommand::Execute { chain_id, nonce, outs })
-      }
-      _ => Err(io::Error::other("reading unknown type of RouterCommand"))?,
-    }
-  }
-
-  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
-    match self {
-      RouterCommand::UpdateSeraiKey { chain_id, nonce, key } => {
-        writer.write_all(&[0])?;
-        writer.write_all(&chain_id.as_le_bytes())?;
-        writer.write_all(&nonce.as_le_bytes())?;
-        writer.write_all(&key.point().to_bytes())
-      }
-      RouterCommand::Execute { chain_id, nonce, outs } => {
-        writer.write_all(&[1])?;
-        writer.write_all(&chain_id.as_le_bytes())?;
-        writer.write_all(&nonce.as_le_bytes())?;
-        writer.write_all(&u32::try_from(outs.len()).unwrap().to_le_bytes())?;
-        for out in outs {
-          out.write(writer)?;
-        }
-        Ok(())
-      }
-    }
-  }
-
-  pub fn serialize(&self) -> Vec<u8> {
-    let mut res = vec![];
-    self.write(&mut res).unwrap();
-    res
-  }
-}
-
-#[derive(Clone, PartialEq, Eq, Debug)]
-pub struct SignedRouterCommand {
-  command: RouterCommand,
-  signature: Signature,
-}
-
-impl SignedRouterCommand {
-  pub fn new(
-    key: &PublicKey,
-    command: RouterCommand,
-    signature: &[u8; 64],
-  ) -> Option<SignedRouterCommand> {
-    let c = Secp256k1::read_F(&mut &signature[.. 32]).ok()?;
-    let s = Secp256k1::read_F(&mut &signature[32 ..]).ok()?;
-    let signature = Signature { c, s };
-
-    if !signature.verify(key, &command.msg()) {
-      None?
-    }
-    Some(SignedRouterCommand { command, signature })
-  }
-
-  pub fn command(&self) -> &RouterCommand {
-    &self.command
-  }
-
-  pub fn signature(&self) -> &Signature {
-    &self.signature
-  }
-
-  pub fn read<R: Read>(reader: &mut R) -> io::Result<SignedRouterCommand> {
-    let command = RouterCommand::read(reader)?;
-
-    let mut sig = [0; 64];
-    reader.read_exact(&mut sig)?;
-    let signature = Signature::from_bytes(sig)?;
-
-    Ok(SignedRouterCommand { command, signature })
-  }
-
-  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
-    self.command.write(writer)?;
-    writer.write_all(&self.signature.to_bytes())
-  }
-}
-
-pub struct RouterCommandMachine {
-  key: PublicKey,
-  command: RouterCommand,
-  machine: AlgorithmMachine<Secp256k1, Schnorr<Secp256k1, RecommendedTranscript, EthereumHram>>,
-}
-
-impl RouterCommandMachine {
-  pub fn new(keys: ThresholdKeys<Secp256k1>, command: RouterCommand) -> Option<RouterCommandMachine> {
-    // The Schnorr algorithm should be fine without this, even when using the IETF variant
-    // If this is better and more comprehensive, we should do it, even if not necessary
-    let mut transcript = RecommendedTranscript::new(b"ethereum-serai RouterCommandMachine v0.1");
-    let key = keys.group_key();
-    transcript.append_message(b"key", key.to_bytes());
-    transcript.append_message(b"command", command.serialize());
-
-    Some(Self {
-      key: PublicKey::new(key)?,
-      command,
-      machine: AlgorithmMachine::new(Schnorr::new(transcript), keys),
-    })
-  }
-}
-
-impl PreprocessMachine for RouterCommandMachine {
-  type Preprocess = Preprocess<Secp256k1, ()>;
-  type Signature = SignedRouterCommand;
-  type SignMachine = RouterCommandSignMachine;
-
-  fn preprocess<R: RngCore + CryptoRng>(
-    self,
-    rng: &mut R,
-  ) -> (Self::SignMachine, Self::Preprocess) {
-    let (machine, preprocess) = self.machine.preprocess(rng);
-
-    (RouterCommandSignMachine { key: self.key, command: self.command, machine }, preprocess)
-  }
-}
-
-pub struct RouterCommandSignMachine {
-  key: PublicKey,
-  command: RouterCommand,
-  machine: AlgorithmSignMachine<Secp256k1, Schnorr<Secp256k1, RecommendedTranscript, EthereumHram>>,
-}
-
-impl SignMachine<SignedRouterCommand> for RouterCommandSignMachine {
-  type Params = ();
-  type Keys = ThresholdKeys<Secp256k1>;
-  type Preprocess = Preprocess<Secp256k1, ()>;
-  type SignatureShare = SignatureShare<Secp256k1>;
-  type SignatureMachine = RouterCommandSignatureMachine;
-
-  fn cache(self) -> CachedPreprocess {
unimplemented!( - "RouterCommand machines don't support caching their preprocesses due to {}", - "being already bound to a specific command" - ); - } - - fn from_cache( - (): (), - _: ThresholdKeys, - _: CachedPreprocess, - ) -> (Self, Self::Preprocess) { - unimplemented!( - "RouterCommand machines don't support caching their preprocesses due to {}", - "being already bound to a specific command" - ); - } - - fn read_preprocess(&self, reader: &mut R) -> io::Result { - self.machine.read_preprocess(reader) - } - - fn sign( - self, - commitments: HashMap, - msg: &[u8], - ) -> Result<(RouterCommandSignatureMachine, Self::SignatureShare), FrostError> { - if !msg.is_empty() { - panic!("message was passed to a RouterCommand machine when it generates its own"); - } - - let (machine, share) = self.machine.sign(commitments, &self.command.msg())?; - - Ok((RouterCommandSignatureMachine { key: self.key, command: self.command, machine }, share)) - } -} - -pub struct RouterCommandSignatureMachine { - key: PublicKey, - command: RouterCommand, - machine: - AlgorithmSignatureMachine>, -} - -impl SignatureMachine for RouterCommandSignatureMachine { - type SignatureShare = SignatureShare; - - fn read_share(&self, reader: &mut R) -> io::Result { - self.machine.read_share(reader) - } - - fn complete( - self, - shares: HashMap, - ) -> Result { - let signature = self.machine.complete(shares)?; - let signature = Signature::new(signature).expect("machine produced an invalid signature"); - assert!(signature.verify(&self.key, &self.command.msg())); - Ok(SignedRouterCommand { command: self.command, signature }) - } -} diff --git a/processor/ethereum/src/primitives/transaction.rs b/processor/ethereum/src/primitives/transaction.rs index f77153ff..eeba3180 100644 --- a/processor/ethereum/src/primitives/transaction.rs +++ b/processor/ethereum/src/primitives/transaction.rs @@ -40,8 +40,12 @@ impl Action { fn message(&self) -> Vec { match self { - Action::SetKey { chain_id, nonce, key } => Router::update_serai_key_message(*chain_id, *nonce, key), - Action::Batch { chain_id, nonce, outs } => Router::execute_message(*chain_id, *nonce, OutInstructions::from(outs.as_ref())), + Action::SetKey { chain_id, nonce, key } => { + Router::update_serai_key_message(*chain_id, *nonce, key) + } + Action::Batch { chain_id, nonce, outs } => { + Router::execute_message(*chain_id, *nonce, OutInstructions::from(outs.as_ref())) + } } } @@ -129,9 +133,17 @@ impl PreprocessMachine for ClonableTransctionMachine { self, rng: &mut R, ) -> (Self::SignMachine, Self::Preprocess) { - let (machine, preprocess) = AlgorithmMachine::new(IetfSchnorr::::ietf(), self.0.clone()) - .preprocess(rng); - (ActionSignMachine(PublicKey::new(self.0.group_key()).expect("signing with non-representable key"), self.1, machine), preprocess) + let (machine, preprocess) = + AlgorithmMachine::new(IetfSchnorr::::ietf(), self.0.clone()) + .preprocess(rng); + ( + ActionSignMachine( + PublicKey::new(self.0.group_key()).expect("signing with non-representable key"), + self.1, + machine, + ), + preprocess, + ) } } @@ -157,7 +169,7 @@ impl SignMachine for ActionSignMachine { params: Self::Params, keys: Self::Keys, cache: CachedPreprocess, -) -> (Self, Self::Preprocess) { + ) -> (Self, Self::Preprocess) { unimplemented!() } diff --git a/tests/processor/Cargo.toml b/tests/processor/Cargo.toml index e37dc2a9..c7267b55 100644 --- a/tests/processor/Cargo.toml +++ b/tests/processor/Cargo.toml @@ -29,7 +29,6 @@ dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] 
bitcoin-serai = { path = "../../networks/bitcoin" } k256 = "0.13" -ethereum-serai = { path = "../../processor/ethereum/ethereum-serai" } monero-simple-request-rpc = { path = "../../networks/monero/rpc/simple-request" } monero-wallet = { path = "../../networks/monero/wallet" } From 18178f37648ea45a9dc6d9154f06f753c86b71e0 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 18 Sep 2024 01:09:07 -0400 Subject: [PATCH 152/368] Add note on the returned top-level transfers being unordered --- processor/ethereum/erc20/src/lib.rs | 2 ++ 1 file changed, 2 insertions(+) diff --git a/processor/ethereum/erc20/src/lib.rs b/processor/ethereum/erc20/src/lib.rs index 920915e9..400a5baa 100644 --- a/processor/ethereum/erc20/src/lib.rs +++ b/processor/ethereum/erc20/src/lib.rs @@ -149,6 +149,8 @@ impl Erc20 { } /// Fetch all top-level transfers to the specified address. + /// + /// The result of this function is unordered. pub async fn top_level_transfers( &self, block: u64, From 98c3f75fa25610f7f14b1c333e5f9a4681323374 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 18 Sep 2024 01:09:42 -0400 Subject: [PATCH 153/368] Move the Ethereum Action machine to its own file --- Cargo.lock | 5 - Cargo.toml | 3 - processor/ethereum/src/primitives/machine.rs | 146 ++++++++++++++++ processor/ethereum/src/primitives/mod.rs | 1 + .../ethereum/src/primitives/transaction.rs | 158 +----------------- processor/ethereum/src/publisher.rs | 8 + 6 files changed, 163 insertions(+), 158 deletions(-) create mode 100644 processor/ethereum/src/primitives/machine.rs diff --git a/Cargo.lock b/Cargo.lock index a7f3792a..7e51ec8a 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -11890,8 +11890,3 @@ dependencies = [ "cc", "pkg-config", ] - -[[patch.unused]] -name = "alloy-sol-type-parser" -version = "0.8.0" -source = "git+https://github.com/alloy-rs/core?rev=446b9d2fbce12b88456152170709a3eaac929af0#446b9d2fbce12b88456152170709a3eaac929af0" diff --git a/Cargo.toml b/Cargo.toml index 99a10be0..d0c91a30 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -204,9 +204,6 @@ directories-next = { path = "patches/directories-next" } # The official pasta_curves repo doesn't support Zeroize pasta_curves = { git = "https://github.com/kayabaNerve/pasta_curves", rev = "a46b5be95cacbff54d06aad8d3bbcba42e05d616" } -# https://github.com/alloy-rs/core/issues/717 -alloy-sol-type-parser = { git = "https://github.com/alloy-rs/core", rev = "446b9d2fbce12b88456152170709a3eaac929af0" } - [workspace.lints.clippy] unwrap_or_default = "allow" borrow_as_ptr = "deny" diff --git a/processor/ethereum/src/primitives/machine.rs b/processor/ethereum/src/primitives/machine.rs new file mode 100644 index 00000000..f37fb440 --- /dev/null +++ b/processor/ethereum/src/primitives/machine.rs @@ -0,0 +1,146 @@ +use std::{io, collections::HashMap}; + +use rand_core::{RngCore, CryptoRng}; + +use ciphersuite::{Ciphersuite, Secp256k1}; +use frost::{ + dkg::{Participant, ThresholdKeys}, + FrostError, + algorithm::*, + sign::*, +}; + +use ethereum_schnorr::{PublicKey, Signature}; + +use crate::transaction::{Action, Transaction}; + +/// The HRAm to use for the Schnorr Solidity library. +/// +/// This will panic if the public key being signed for is not representable within the Schnorr +/// Solidity library. +#[derive(Clone, Default, Debug)] +pub struct EthereumHram; +impl Hram for EthereumHram { + #[allow(non_snake_case)] + fn hram( + R: &::G, + A: &::G, + m: &[u8], + ) -> ::F { + Signature::challenge(*R, &PublicKey::new(*A).unwrap(), m) + } +} + +/// A clonable machine to sign an action. 
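+///
+/// Cloning lets a fresh machine be constructed per signing attempt, as `preprocess`
+/// consumes `self`.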
+/// +/// This will panic if the public key being signed with is not representable within the Schnorr +/// Solidity library. +#[derive(Clone)] +pub(crate) struct ClonableTransctionMachine { + pub(crate) keys: ThresholdKeys, + pub(crate) action: Action, +} + +type LiteralAlgorithmMachine = AlgorithmMachine>; +type LiteralAlgorithmSignMachine = + AlgorithmSignMachine>; + +pub(crate) struct ActionSignMachine { + key: PublicKey, + action: Action, + machine: LiteralAlgorithmSignMachine, +} + +type LiteralAlgorithmSignatureMachine = + AlgorithmSignatureMachine>; + +pub(crate) struct ActionSignatureMachine { + key: PublicKey, + action: Action, + machine: LiteralAlgorithmSignatureMachine, +} + +impl PreprocessMachine for ClonableTransctionMachine { + type Preprocess = ::Preprocess; + type Signature = Transaction; + type SignMachine = ActionSignMachine; + + fn preprocess( + self, + rng: &mut R, + ) -> (Self::SignMachine, Self::Preprocess) { + let (machine, preprocess) = + AlgorithmMachine::new(IetfSchnorr::::ietf(), self.keys.clone()) + .preprocess(rng); + ( + ActionSignMachine { + key: PublicKey::new(self.keys.group_key()).expect("signing with non-representable key"), + action: self.action, + machine, + }, + preprocess, + ) + } +} + +impl SignMachine for ActionSignMachine { + type Params = ::Signature, + >>::Params; + type Keys = ::Signature, + >>::Keys; + type Preprocess = ::Signature, + >>::Preprocess; + type SignatureShare = ::Signature, + >>::SignatureShare; + type SignatureMachine = ActionSignatureMachine; + + fn cache(self) -> CachedPreprocess { + unimplemented!() + } + fn from_cache( + params: Self::Params, + keys: Self::Keys, + cache: CachedPreprocess, + ) -> (Self, Self::Preprocess) { + unimplemented!() + } + + fn read_preprocess(&self, reader: &mut R) -> io::Result { + self.machine.read_preprocess(reader) + } + fn sign( + self, + commitments: HashMap, + msg: &[u8], + ) -> Result<(Self::SignatureMachine, Self::SignatureShare), FrostError> { + assert!(msg.is_empty()); + self.machine.sign(commitments, &self.action.message()).map(|(machine, shares)| { + (ActionSignatureMachine { key: self.key, action: self.action, machine }, shares) + }) + } +} + +impl SignatureMachine for ActionSignatureMachine { + type SignatureShare = ::Signature, + >>::SignatureShare; + + fn read_share(&self, reader: &mut R) -> io::Result { + self.machine.read_share(reader) + } + + fn complete( + self, + shares: HashMap, + ) -> Result { + self.machine.complete(shares).map(|signature| { + let s = signature.s; + let c = Signature::challenge(signature.R, &self.key, &self.action.message()); + Transaction(self.action, Signature::new(c, s)) + }) + } +} diff --git a/processor/ethereum/src/primitives/mod.rs b/processor/ethereum/src/primitives/mod.rs index 8d2a9118..f0d31802 100644 --- a/processor/ethereum/src/primitives/mod.rs +++ b/processor/ethereum/src/primitives/mod.rs @@ -1,5 +1,6 @@ pub(crate) mod output; pub(crate) mod transaction; +pub(crate) mod machine; pub(crate) mod block; pub(crate) const DAI: [u8; 20] = diff --git a/processor/ethereum/src/primitives/transaction.rs b/processor/ethereum/src/primitives/transaction.rs index eeba3180..52595375 100644 --- a/processor/ethereum/src/primitives/transaction.rs +++ b/processor/ethereum/src/primitives/transaction.rs @@ -1,14 +1,7 @@ -use std::{io, collections::HashMap}; +use std::io; -use rand_core::{RngCore, CryptoRng}; - -use ciphersuite::{Ciphersuite, Secp256k1}; -use frost::{ - dkg::{Participant, ThresholdKeys}, - FrostError, - algorithm::*, - sign::*, -}; +use 
ciphersuite::Secp256k1; +use frost::dkg::ThresholdKeys; use alloy_core::primitives::U256; @@ -20,7 +13,7 @@ use ethereum_primitives::keccak256; use ethereum_schnorr::{PublicKey, Signature}; use ethereum_router::{Coin, OutInstructions, Executed, Router}; -use crate::output::OutputId; +use crate::{output::OutputId, machine::ClonableTransctionMachine}; #[derive(Clone, PartialEq, Debug)] pub(crate) enum Action { @@ -32,13 +25,13 @@ pub(crate) enum Action { pub(crate) struct Eventuality(pub(crate) Executed); impl Action { - fn nonce(&self) -> u64 { + pub(crate) fn nonce(&self) -> u64 { match self { Action::SetKey { nonce, .. } | Action::Batch { nonce, .. } => *nonce, } } - fn message(&self) -> Vec { + pub(crate) fn message(&self) -> Vec { match self { Action::SetKey { chain_id, nonce, key } => { Router::update_serai_key_message(*chain_id, *nonce, key) @@ -67,155 +60,20 @@ impl Action { } #[derive(Clone, PartialEq, Debug)] -pub(crate) struct Transaction(Action, Signature); +pub(crate) struct Transaction(pub(crate) Action, pub(crate) Signature); impl scheduler::Transaction for Transaction { fn read(reader: &mut impl io::Read) -> io::Result { - /* - let buf: Vec = borsh::from_reader(reader)?; - // We can only read this from a &[u8], hence prior reading into a Vec - ::decode(&mut buf.as_slice()) - .map(Self) - .map_err(io::Error::other) - */ let action = Action::read(reader)?; let signature = Signature::read(reader)?; Ok(Transaction(action, signature)) } fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { - /* - let mut buf = Vec::with_capacity(256); - ::encode(&self.0, &mut buf); - borsh::BorshSerialize::serialize(&buf, writer) - */ self.0.write(writer)?; self.1.write(writer)?; Ok(()) } } -/// The HRAm to use for the Schnorr Solidity library. -/// -/// This will panic if the public key being signed for is not representable within the Schnorr -/// Solidity library. 
-#[derive(Clone, Default, Debug)] -pub struct EthereumHram; -impl Hram for EthereumHram { - #[allow(non_snake_case)] - fn hram( - R: &::G, - A: &::G, - m: &[u8], - ) -> ::F { - Signature::challenge(*R, &PublicKey::new(*A).unwrap(), m) - } -} - -#[derive(Clone)] -pub(crate) struct ClonableTransctionMachine(ThresholdKeys, Action); - -type LiteralAlgorithmMachine = AlgorithmMachine>; -type LiteralAlgorithmSignMachine = - AlgorithmSignMachine>; - -pub(crate) struct ActionSignMachine(PublicKey, Action, LiteralAlgorithmSignMachine); - -type LiteralAlgorithmSignatureMachine = - AlgorithmSignatureMachine>; - -pub(crate) struct ActionSignatureMachine(PublicKey, Action, LiteralAlgorithmSignatureMachine); - -impl PreprocessMachine for ClonableTransctionMachine { - type Preprocess = ::Preprocess; - type Signature = Transaction; - type SignMachine = ActionSignMachine; - - fn preprocess( - self, - rng: &mut R, - ) -> (Self::SignMachine, Self::Preprocess) { - let (machine, preprocess) = - AlgorithmMachine::new(IetfSchnorr::::ietf(), self.0.clone()) - .preprocess(rng); - ( - ActionSignMachine( - PublicKey::new(self.0.group_key()).expect("signing with non-representable key"), - self.1, - machine, - ), - preprocess, - ) - } -} - -impl SignMachine for ActionSignMachine { - type Params = ::Signature, - >>::Params; - type Keys = ::Signature, - >>::Keys; - type Preprocess = ::Signature, - >>::Preprocess; - type SignatureShare = ::Signature, - >>::SignatureShare; - type SignatureMachine = ActionSignatureMachine; - - fn cache(self) -> CachedPreprocess { - unimplemented!() - } - fn from_cache( - params: Self::Params, - keys: Self::Keys, - cache: CachedPreprocess, - ) -> (Self, Self::Preprocess) { - unimplemented!() - } - - fn read_preprocess(&self, reader: &mut R) -> io::Result { - self.2.read_preprocess(reader) - } - fn sign( - self, - commitments: HashMap, - msg: &[u8], - ) -> Result<(Self::SignatureMachine, Self::SignatureShare), FrostError> { - assert!(msg.is_empty()); - self - .2 - .sign(commitments, &self.1.message()) - .map(|(machine, shares)| (ActionSignatureMachine(self.0, self.1, machine), shares)) - } -} - -impl SignatureMachine for ActionSignatureMachine { - type SignatureShare = ::Signature, - >>::SignatureShare; - - fn read_share(&self, reader: &mut R) -> io::Result { - self.2.read_share(reader) - } - - fn complete( - self, - shares: HashMap, - ) -> Result { - /* - match self.1 { - Action::SetKey { chain_id: _, nonce: _, key } => self.0.update_serai_key(key, signature), - Action::Batch { chain_id: _, nonce: _, outs } => self.0.execute(outs, signature), - } - */ - self.2.complete(shares).map(|signature| { - let s = signature.s; - let c = Signature::challenge(signature.R, &self.0, &self.1.message()); - Transaction(self.1, Signature::new(c, s)) - }) - } -} - impl SignableTransaction for Action { type Transaction = Transaction; type Ciphersuite = Secp256k1; @@ -296,7 +154,7 @@ impl SignableTransaction for Action { } fn sign(self, keys: ThresholdKeys) -> Self::PreprocessMachine { - ClonableTransctionMachine(keys, self) + ClonableTransctionMachine { keys, action: self } } } diff --git a/processor/ethereum/src/publisher.rs b/processor/ethereum/src/publisher.rs index ad8bd09d..1874e556 100644 --- a/processor/ethereum/src/publisher.rs +++ b/processor/ethereum/src/publisher.rs @@ -20,6 +20,14 @@ impl signers::TransactionPublisher for TransactionPublisher { &self, tx: Transaction, ) -> impl Send + Future> { + // Convert from an Action (an internal representation of a signable event) to a TxLegacy + /* TODO + 
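+      (a sketch: these Router methods take the Schnorr signature and yield the unsigned
+      transaction for the relayer to publish)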
match tx.0 { + Action::SetKey { chain_id: _, nonce: _, key } => self.router.update_serai_key(key, tx.1), + Action::Batch { chain_id: _, nonce: _, outs } => self.router.execute(outs, tx.1), + } + */ + async move { /* use tokio::{ From a717ae9ea76577640de174309af1756e5ea8de7f Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 18 Sep 2024 15:50:21 -0400 Subject: [PATCH 154/368] Have the TransactionPublisher build a TxLegacy from Transaction --- Cargo.lock | 1 + processor/ethereum/Cargo.toml | 1 + processor/ethereum/src/main.rs | 17 +++--- processor/ethereum/src/primitives/output.rs | 30 ++++++----- processor/ethereum/src/publisher.rs | 58 ++++++++++++++++----- processor/ethereum/src/scheduler.rs | 1 + 6 files changed, 75 insertions(+), 33 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 7e51ec8a..9afcdcc8 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8334,6 +8334,7 @@ dependencies = [ "alloy-rpc-client", "alloy-rpc-types-eth", "alloy-simple-request-transport", + "alloy-transport", "borsh", "ciphersuite", "const-hex", diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml index 9a3b264c..649e3fb8 100644 --- a/processor/ethereum/Cargo.toml +++ b/processor/ethereum/Cargo.toml @@ -35,6 +35,7 @@ alloy-rlp = { version = "0.3", default-features = false } alloy-consensus = { version = "0.3", default-features = false } alloy-rpc-types-eth = { version = "0.3", default-features = false } +alloy-transport = { version = "0.3", default-features = false } alloy-simple-request-transport = { path = "../../networks/ethereum/alloy-simple-request-transport", default-features = false } alloy-rpc-client = { version = "0.3", default-features = false } alloy-provider = { version = "0.3", default-features = false } diff --git a/processor/ethereum/src/main.rs b/processor/ethereum/src/main.rs index 06c0bc98..0ebf0f59 100644 --- a/processor/ethereum/src/main.rs +++ b/processor/ethereum/src/main.rs @@ -30,14 +30,13 @@ use publisher::TransactionPublisher; #[tokio::main] async fn main() { let db = bin::init(); - let feed = { - let provider = Arc::new(RootProvider::new( - ClientBuilder::default().transport(SimpleRequest::new(bin::url()), true), - )); - Rpc { provider } - }; + + let provider = Arc::new(RootProvider::new( + ClientBuilder::default().transport(SimpleRequest::new(bin::url()), true), + )); + let chain_id = loop { - match feed.provider.get_chain_id().await { + match provider.get_chain_id().await { Ok(chain_id) => break U256::try_from(chain_id).unwrap(), Err(e) => { log::error!("couldn't connect to the Ethereum node for the chain ID: {e:?}"); @@ -48,9 +47,9 @@ async fn main() { bin::main_loop::<_, KeyGenParams, _>( db, - feed.clone(), + Rpc { provider: provider.clone() }, Scheduler::new(SmartContract { chain_id }), - TransactionPublisher::new({ + TransactionPublisher::new(provider, { let relayer_hostname = env::var("ETHEREUM_RELAYER_HOSTNAME") .expect("ethereum relayer hostname wasn't specified") .to_string(); diff --git a/processor/ethereum/src/primitives/output.rs b/processor/ethereum/src/primitives/output.rs index 843f22f6..0f327921 100644 --- a/processor/ethereum/src/primitives/output.rs +++ b/processor/ethereum/src/primitives/output.rs @@ -1,6 +1,6 @@ use std::io; -use ciphersuite::{Ciphersuite, Secp256k1}; +use ciphersuite::{group::GroupEncoding, Ciphersuite, Secp256k1}; use alloy_core::primitives::U256; @@ -59,7 +59,10 @@ impl AsMut<[u8]> for OutputId { } #[derive(Clone, PartialEq, Eq, Debug)] -pub(crate) struct Output(pub(crate) EthereumInInstruction); +pub(crate) struct Output 
{
+  pub(crate) key: <Secp256k1 as Ciphersuite>::G,
+  pub(crate) instruction: EthereumInInstruction,
+}
 impl ReceivedOutput<<Secp256k1 as Ciphersuite>::G, Address> for Output {
   type Id = OutputId;
   type TransactionId = [u8; 32];
@@ -71,40 +74,43 @@ impl ReceivedOutput<<Secp256k1 as Ciphersuite>::G, Address> for Output {
   fn id(&self) -> Self::Id {
     let mut id = [0; 40];
-    id[.. 32].copy_from_slice(&self.0.id.0);
-    id[32 ..].copy_from_slice(&self.0.id.1.to_le_bytes());
+    id[.. 32].copy_from_slice(&self.instruction.id.0);
+    id[32 ..].copy_from_slice(&self.instruction.id.1.to_le_bytes());
     OutputId(id)
   }

   fn transaction_id(&self) -> Self::TransactionId {
-    self.0.id.0
+    self.instruction.id.0
   }

   fn key(&self) -> <Secp256k1 as Ciphersuite>::G {
-    todo!("TODO")
+    self.key
   }

   fn presumed_origin(&self) -> Option<Address>
{ - Some(Address::from(self.0.from)) + Some(Address::from(self.instruction.from)) } fn balance(&self) -> Balance { - let coin = coin_to_serai_coin(&self.0.coin).unwrap_or_else(|| { + let coin = coin_to_serai_coin(&self.instruction.coin).unwrap_or_else(|| { panic!( "mapping coin from an EthereumInInstruction with coin {}, which we don't handle.", "this never should have been yielded" ) }); - Balance { coin, amount: amount_to_serai_amount(coin, self.0.amount) } + Balance { coin, amount: amount_to_serai_amount(coin, self.instruction.amount) } } fn data(&self) -> &[u8] { - &self.0.data + &self.instruction.data } fn write(&self, writer: &mut W) -> io::Result<()> { - self.0.write(writer) + writer.write_all(self.key.to_bytes().as_ref())?; + self.instruction.write(writer) } fn read(reader: &mut R) -> io::Result { - EthereumInInstruction::read(reader).map(Self) + let key = Secp256k1::read_G(reader)?; + let instruction = EthereumInInstruction::read(reader)?; + Ok(Self { key, instruction }) } } diff --git a/processor/ethereum/src/publisher.rs b/processor/ethereum/src/publisher.rs index 1874e556..cc9c1f5f 100644 --- a/processor/ethereum/src/publisher.rs +++ b/processor/ethereum/src/publisher.rs @@ -1,34 +1,68 @@ use core::future::Future; +use std::sync::Arc; -use crate::transaction::Transaction; +use alloy_transport::{TransportErrorKind, RpcError}; +use alloy_simple_request_transport::SimpleRequest; +use alloy_provider::RootProvider; + +use tokio::sync::{RwLockReadGuard, RwLock}; + +use ethereum_schnorr::PublicKey; +use ethereum_router::{OutInstructions, Router}; + +use crate::transaction::{Action, Transaction}; #[derive(Clone)] pub(crate) struct TransactionPublisher { + initial_serai_key: PublicKey, + rpc: Arc>, + router: Arc>>, relayer_url: String, } impl TransactionPublisher { - pub(crate) fn new(relayer_url: String) -> Self { - Self { relayer_url } + pub(crate) fn new(rpc: Arc>, relayer_url: String) -> Self { + Self { initial_serai_key: todo!("TODO"), rpc, router: Arc::new(RwLock::new(None)), relayer_url } + } + + // This will always return Ok(Some(_)) or Err(_), never Ok(None) + async fn router(&self) -> Result>, RpcError> { + let router = self.router.read().await; + + // If the router is None, find it on-chain + if router.is_none() { + drop(router); + let mut router = self.router.write().await; + // Check again if it's None in case a different task already did this + if router.is_none() { + let Some(router_actual) = Router::new(self.rpc.clone(), &self.initial_serai_key).await? else { + Err(TransportErrorKind::Custom("publishing transaction yet couldn't find router on chain. was our node reset?".to_string().into()))? 
+ }; + *router = Some(router_actual); + } + return Ok(router.downgrade()); + } + + Ok(router) } } impl signers::TransactionPublisher for TransactionPublisher { - type EphemeralError = (); + type EphemeralError = RpcError; fn publish( &self, tx: Transaction, ) -> impl Send + Future> { - // Convert from an Action (an internal representation of a signable event) to a TxLegacy - /* TODO - match tx.0 { - Action::SetKey { chain_id: _, nonce: _, key } => self.router.update_serai_key(key, tx.1), - Action::Batch { chain_id: _, nonce: _, outs } => self.router.execute(outs, tx.1), - } - */ - async move { + // Convert from an Action (an internal representation of a signable event) to a TxLegacy + let router = self.router().await?; + let router = router.as_ref().unwrap(); + let tx = match tx.0 { + Action::SetKey { chain_id: _, nonce: _, key } => router.update_serai_key(&key, &tx.1), + Action::Batch { chain_id: _, nonce: _, outs } => router.execute(OutInstructions::from(outs.as_ref()), &tx.1), + }; + /* use tokio::{ io::{AsyncReadExt, AsyncWriteExt}, diff --git a/processor/ethereum/src/scheduler.rs b/processor/ethereum/src/scheduler.rs index ca636b5b..6683eeac 100644 --- a/processor/ethereum/src/scheduler.rs +++ b/processor/ethereum/src/scheduler.rs @@ -68,6 +68,7 @@ impl smart_contract_scheduler::SmartContract for SmartContract { // TODO: Per-batch gas limit // TODO: Create several batches + // TODO: Handle fees let action = Action::Batch { chain_id: self.chain_id, nonce, outs }; vec![(action.clone(), action.eventuality())] From 9e628d217f82909cc4f5ae0349afa2ae830daeb9 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 18 Sep 2024 18:35:31 -0400 Subject: [PATCH 155/368] cargo fmt, move ScannerFeed from String to the RPC error --- processor/ethereum/src/publisher.rs | 19 +++++++++++++----- processor/ethereum/src/rpc.rs | 30 ++++++++++++++++++----------- 2 files changed, 33 insertions(+), 16 deletions(-) diff --git a/processor/ethereum/src/publisher.rs b/processor/ethereum/src/publisher.rs index cc9c1f5f..d133768b 100644 --- a/processor/ethereum/src/publisher.rs +++ b/processor/ethereum/src/publisher.rs @@ -26,7 +26,9 @@ impl TransactionPublisher { } // This will always return Ok(Some(_)) or Err(_), never Ok(None) - async fn router(&self) -> Result>, RpcError> { + async fn router( + &self, + ) -> Result>, RpcError> { let router = self.router.read().await; // If the router is None, find it on-chain @@ -35,9 +37,14 @@ impl TransactionPublisher { let mut router = self.router.write().await; // Check again if it's None in case a different task already did this if router.is_none() { - let Some(router_actual) = Router::new(self.rpc.clone(), &self.initial_serai_key).await? else { - Err(TransportErrorKind::Custom("publishing transaction yet couldn't find router on chain. was our node reset?".to_string().into()))? - }; + let Some(router_actual) = Router::new(self.rpc.clone(), &self.initial_serai_key).await? + else { + Err(TransportErrorKind::Custom( + "publishing transaction yet couldn't find router on chain. was our node reset?" + .to_string() + .into(), + ))? 
+ }; *router = Some(router_actual); } return Ok(router.downgrade()); @@ -60,7 +67,9 @@ impl signers::TransactionPublisher for TransactionPublisher { let router = router.as_ref().unwrap(); let tx = match tx.0 { Action::SetKey { chain_id: _, nonce: _, key } => router.update_serai_key(&key, &tx.1), - Action::Batch { chain_id: _, nonce: _, outs } => router.execute(OutInstructions::from(outs.as_ref()), &tx.1), + Action::Batch { chain_id: _, nonce: _, outs } => { + router.execute(OutInstructions::from(outs.as_ref()), &tx.1) + } }; /* diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs index 819fbf48..e3f25f86 100644 --- a/processor/ethereum/src/rpc.rs +++ b/processor/ethereum/src/rpc.rs @@ -2,6 +2,7 @@ use core::future::Future; use std::sync::Arc; use alloy_rpc_types_eth::{BlockTransactionsKind, BlockNumberOrTag}; +use alloy_transport::{RpcError, TransportErrorKind}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; @@ -28,7 +29,7 @@ impl ScannerFeed for Rpc { type Block = FullEpoch; - type EphemeralError = String; + type EphemeralError = RpcError; fn latest_finalized_block_number( &self, @@ -37,14 +38,17 @@ impl ScannerFeed for Rpc { let actual_number = self .provider .get_block(BlockNumberOrTag::Finalized.into(), BlockTransactionsKind::Hashes) - .await - .map_err(|e| format!("couldn't get the latest finalized block: {e:?}"))? - .ok_or_else(|| "there was no finalized block".to_string())? + .await? + .ok_or_else(|| { + TransportErrorKind::Custom("there was no finalized block".to_string().into()) + })? .header .number; // Error if there hasn't been a full epoch yet if actual_number < 32 { - Err("there has not been a completed epoch yet".to_string())? + Err(TransportErrorKind::Custom( + "there has not been a completed epoch yet".to_string().into(), + ))? } // The divison by 32 returns the amount of completed epochs // Converting from amount of completed epochs to the latest completed epoch requires @@ -75,10 +79,12 @@ impl ScannerFeed for Rpc { self .provider .get_block((start - 1).into(), BlockTransactionsKind::Hashes) - .await - .map_err(|e| format!("couldn't get block: {e:?}"))? + .await? .ok_or_else(|| { - format!("ethereum node didn't have requested block: {number:?}. did we reorg?") + TransportErrorKind::Custom( + format!("ethereum node didn't have requested block: {number:?}. was the node reset?") + .into(), + ) })? .header .hash @@ -88,10 +94,12 @@ impl ScannerFeed for Rpc { let end_header = self .provider .get_block((start + 31).into(), BlockTransactionsKind::Hashes) - .await - .map_err(|e| format!("couldn't get block: {e:?}"))? + .await? .ok_or_else(|| { - format!("ethereum node didn't have requested block: {number:?}. did we reorg?") + TransportErrorKind::Custom( + format!("ethereum node didn't have requested block: {number:?}. was the node reset?") + .into(), + ) })? 
.header; From e75c4ec6ed1e5518c3634b47a35123478733b510 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 18 Sep 2024 22:00:32 -0400 Subject: [PATCH 156/368] Explicitly add an unspendable script path to the processor's generated keys --- Cargo.lock | 1 + processor/bitcoin/Cargo.toml | 1 + processor/bitcoin/src/key_gen.rs | 35 ++++++++++++++++++++++++++++++++ 3 files changed, 37 insertions(+) diff --git a/Cargo.lock b/Cargo.lock index 9afcdcc8..2e2faecb 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8108,6 +8108,7 @@ dependencies = [ "dkg", "flexible-transcript", "hex", + "k256", "log", "modular-frost", "parity-scale-codec", diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml index 52cca1ae..2a69d234 100644 --- a/processor/bitcoin/Cargo.toml +++ b/processor/bitcoin/Cargo.toml @@ -24,6 +24,7 @@ scale = { package = "parity-scale-codec", version = "3", default-features = fals borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } +k256 = { version = "0.13", default-features = false, features = ["std"] } ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "secp256k1"] } dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-secp256k1"] } frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } diff --git a/processor/bitcoin/src/key_gen.rs b/processor/bitcoin/src/key_gen.rs index 41544134..bc911676 100644 --- a/processor/bitcoin/src/key_gen.rs +++ b/processor/bitcoin/src/key_gen.rs @@ -1,6 +1,8 @@ use ciphersuite::{group::GroupEncoding, Ciphersuite, Secp256k1}; use frost::ThresholdKeys; +use bitcoin_serai::bitcoin::{hashes::Hash, TapTweakHash}; + use crate::{primitives::x_coord_to_even_point, scan::scanner}; pub(crate) struct KeyGenParams; @@ -10,6 +12,39 @@ impl key_gen::KeyGenParams for KeyGenParams { type ExternalNetworkCiphersuite = Secp256k1; fn tweak_keys(keys: &mut ThresholdKeys) { + /* + Offset the keys by their hash to prevent a malicious participant from inserting a script + path, as specified in + https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki#cite_note-23 + + This isn't exactly the same, as we then increment the key until it happens to be even, yet + the goal is simply that someone who biases the key-gen can't insert their own script path. + By adding the hash of the key to the key, anyone who attempts such bias will change the key + used (changing the bias necessary). + + This is also potentially unnecessary for Serai, which uses an eVRF-based DKG. While that can + be biased (by manipulating who participates as we use it robustly and only require `t` + participants), contributions cannot be arbitrarily defined. That presumably requires + performing a search of the possible keys for some collision with 2**128 work. It's better to + offset regardless and avoid this question however. + */ + { + use k256::elliptic_curve::{ + bigint::{Encoding, U256}, + ops::Reduce, + }; + let tweak_hash = TapTweakHash::hash(&keys.group_key().to_bytes().as_slice()[1 ..]); + /* + https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki#cite_ref-13-0 states how the + bias is negligible. This reduction shouldn't ever occur, yet if it did, the script path + would be unusable due to a check the script path hash is less than the order. 
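+      (A uniform 256-bit value only exceeds the secp256k1 scalar order with probability on
+      the order of 2**-128, hence the negligible bias.)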
That doesn't + impact us as we don't want the script path to be usable. + */ + *keys = keys.offset(::F::reduce(U256::from_be_bytes( + *tweak_hash.to_raw_hash().as_ref(), + ))); + } + *keys = bitcoin_serai::wallet::tweak_keys(keys); // Also create a scanner to assert these keys, and all expected paths, are usable scanner(keys.group_key()); From 118d81bc90b9f896838776993a49f545f822a6a2 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 19 Sep 2024 00:39:51 -0400 Subject: [PATCH 157/368] Finish the Ethereum TX publishing code --- networks/ethereum/relayer/README.md | 4 +-- networks/ethereum/relayer/src/main.rs | 20 ++++++----- processor/ethereum/src/publisher.rs | 51 ++++++++++++++------------- 3 files changed, 39 insertions(+), 36 deletions(-) diff --git a/networks/ethereum/relayer/README.md b/networks/ethereum/relayer/README.md index beed4b72..fc2d36fd 100644 --- a/networks/ethereum/relayer/README.md +++ b/networks/ethereum/relayer/README.md @@ -1,4 +1,4 @@ # Ethereum Transaction Relayer -This server collects Ethereum router commands to be published, offering an RPC -to fetch them. +This server collects Ethereum transactions to be published, offering an RPC to +fetch them. diff --git a/networks/ethereum/relayer/src/main.rs b/networks/ethereum/relayer/src/main.rs index 54593004..f5a7e0f9 100644 --- a/networks/ethereum/relayer/src/main.rs +++ b/networks/ethereum/relayer/src/main.rs @@ -40,8 +40,8 @@ async fn main() { db }; - // Start command recipience server - // This should not be publicly exposed + // Start transaction recipience server + // This MUST NOT be publicly exposed // TODO: Add auth tokio::spawn({ let db = db.clone(); @@ -58,25 +58,27 @@ async fn main() { let mut buf = vec![0; usize::try_from(msg_len).unwrap()]; let Ok(_) = socket.read_exact(&mut buf).await else { break }; - if buf.len() < 5 { + if buf.len() < (4 + 1) { break; } let nonce = u32::from_le_bytes(buf[.. 4].try_into().unwrap()); let mut txn = db.txn(); + // Save the transaction txn.put(nonce.to_le_bytes(), &buf[4 ..]); txn.commit(); let Ok(()) = socket.write_all(&[1]).await else { break }; - log::info!("received signed command #{nonce}"); + log::info!("received transaction to publish (nonce {nonce})"); } }); } } }); - // Start command fetch server + // Start transaction fetch server // 5132 ^ ((b'E' << 8) | b'R') + 1 + // TODO: JSON-RPC server which returns this as JSON? let server = TcpListener::bind("0.0.0.0:20831").await.unwrap(); loop { let (mut socket, _) = server.accept().await.unwrap(); @@ -84,16 +86,16 @@ async fn main() { tokio::spawn(async move { let db = db.clone(); loop { - // Nonce to get the router comamnd for + // Nonce to get the unsigned transaction for let mut buf = vec![0; 4]; let Ok(_) = socket.read_exact(&mut buf).await else { break }; - let command = db.get(&buf[.. 4]).unwrap_or(vec![]); - let Ok(()) = socket.write_all(&u32::try_from(command.len()).unwrap().to_le_bytes()).await + let transaction = db.get(&buf[.. 
4]).unwrap_or(vec![]); + let Ok(()) = socket.write_all(&u32::try_from(transaction.len()).unwrap().to_le_bytes()).await else { break; }; - let Ok(()) = socket.write_all(&command).await else { break }; + let Ok(()) = socket.write_all(&transaction).await else { break }; } }); } diff --git a/processor/ethereum/src/publisher.rs b/processor/ethereum/src/publisher.rs index d133768b..03b1d24c 100644 --- a/processor/ethereum/src/publisher.rs +++ b/processor/ethereum/src/publisher.rs @@ -1,11 +1,17 @@ use core::future::Future; use std::sync::Arc; +use alloy_rlp::Encodable; + use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::RootProvider; -use tokio::sync::{RwLockReadGuard, RwLock}; +use tokio::{ + sync::{RwLockReadGuard, RwLock}, + io::{AsyncReadExt, AsyncWriteExt}, + net::TcpStream, +}; use ethereum_schnorr::PublicKey; use ethereum_router::{OutInstructions, Router}; @@ -62,9 +68,11 @@ impl signers::TransactionPublisher for TransactionPublisher { tx: Transaction, ) -> impl Send + Future> { async move { - // Convert from an Action (an internal representation of a signable event) to a TxLegacy let router = self.router().await?; let router = router.as_ref().unwrap(); + + let nonce = tx.0.nonce(); + // Convert from an Action (an internal representation of a signable event) to a TxLegacy let tx = match tx.0 { Action::SetKey { chain_id: _, nonce: _, key } => router.update_serai_key(&key, &tx.1), Action::Batch { chain_id: _, nonce: _, outs } => { @@ -72,40 +80,33 @@ impl signers::TransactionPublisher for TransactionPublisher { } }; - /* - use tokio::{ - io::{AsyncReadExt, AsyncWriteExt}, - net::TcpStream, - }; - - let mut msg = vec![]; - match completion.command() { - RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => { - msg.extend(&u32::try_from(nonce).unwrap().to_le_bytes()); - } - } - completion.write(&mut msg).unwrap(); + // Nonce + let mut msg = nonce.to_le_bytes().to_vec(); + // Transaction + tx.encode(&mut msg); let Ok(mut socket) = TcpStream::connect(&self.relayer_url).await else { - log::warn!("couldn't connect to the relayer server"); - Err(NetworkError::ConnectionError)? + Err(TransportErrorKind::Custom( + "couldn't connect to the relayer server".to_string().into(), + ))? }; let Ok(()) = socket.write_all(&u32::try_from(msg.len()).unwrap().to_le_bytes()).await else { - log::warn!("couldn't send the message's len to the relayer server"); - Err(NetworkError::ConnectionError)? + Err(TransportErrorKind::Custom( + "couldn't send the message's len to the relayer server".to_string().into(), + ))? }; let Ok(()) = socket.write_all(&msg).await else { - log::warn!("couldn't write the message to the relayer server"); - Err(NetworkError::ConnectionError)? + Err(TransportErrorKind::Custom( + "couldn't write the message to the relayer server".to_string().into(), + ))? }; if socket.read_u8().await.ok() != Some(1) { - log::warn!("didn't get the ack from the relayer server"); - Err(NetworkError::ConnectionError)?; + Err(TransportErrorKind::Custom( + "didn't get the ack from the relayer server".to_string().into(), + ))?; } Ok(()) - */ - todo!("TODO") } } } From 673cf8fd47cd0b4e7232a2e6ec4fb10184047c6c Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 19 Sep 2024 01:00:31 -0400 Subject: [PATCH 158/368] Pass the latest active key to the Block's scan function Effectively necessary for networks on which we utilize account abstraction in order to know what key to associate the received coins with. 
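As a sketch, the selection rule shared by the scan and eventuality tasks
(names per the scanner crate; `keys` is ordered oldest to newest):

  // Take the newest key, falling back to its predecessor while the newest
  // key is active yet not reporting
  let latest_active_key = loop {
    let key = keys.pop().unwrap();
    if key.stage == LifetimeStage::ActiveYetNotReporting {
      continue;
    }
    break key.key;
  };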
--- networks/ethereum/relayer/src/main.rs | 3 ++- processor/bitcoin/src/primitives/block.rs | 6 +++++- processor/ethereum/src/primitives/block.rs | 13 ++++++++++++- processor/monero/src/primitives/block.rs | 6 +++++- processor/primitives/src/block.rs | 6 +++++- processor/scanner/src/db.rs | 1 + processor/scanner/src/eventuality/mod.rs | 14 +++++++++++++- processor/scanner/src/lib.rs | 6 +++--- processor/scanner/src/scan/mod.rs | 15 ++++++++++++++- 9 files changed, 60 insertions(+), 10 deletions(-) diff --git a/networks/ethereum/relayer/src/main.rs b/networks/ethereum/relayer/src/main.rs index f5a7e0f9..6424c90f 100644 --- a/networks/ethereum/relayer/src/main.rs +++ b/networks/ethereum/relayer/src/main.rs @@ -91,7 +91,8 @@ async fn main() { let Ok(_) = socket.read_exact(&mut buf).await else { break }; let transaction = db.get(&buf[.. 4]).unwrap_or(vec![]); - let Ok(()) = socket.write_all(&u32::try_from(transaction.len()).unwrap().to_le_bytes()).await + let Ok(()) = + socket.write_all(&u32::try_from(transaction.len()).unwrap().to_le_bytes()).await else { break; }; diff --git a/processor/bitcoin/src/primitives/block.rs b/processor/bitcoin/src/primitives/block.rs index e3df7e69..02b8e595 100644 --- a/processor/bitcoin/src/primitives/block.rs +++ b/processor/bitcoin/src/primitives/block.rs @@ -43,7 +43,11 @@ impl primitives::Block for Block { primitives::BlockHeader::id(&BlockHeader(self.1.header)) } - fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec { + fn scan_for_outputs_unordered( + &self, + _latest_active_key: Self::Key, + key: Self::Key, + ) -> Vec { let scanner = scanner(key); let mut res = vec![]; diff --git a/processor/ethereum/src/primitives/block.rs b/processor/ethereum/src/primitives/block.rs index 2c0e0505..a6268c0b 100644 --- a/processor/ethereum/src/primitives/block.rs +++ b/processor/ethereum/src/primitives/block.rs @@ -59,8 +59,19 @@ impl primitives::Block for FullEpoch { self.epoch.end_hash } - fn scan_for_outputs_unordered(&self, _key: Self::Key) -> Vec { + fn scan_for_outputs_unordered( + &self, + latest_active_key: Self::Key, + key: Self::Key, + ) -> Vec { // Only return these outputs for the latest key + if latest_active_key != key { + return vec![]; + } + + // Associate all outputs with the latest active key + // We don't associate these with the current key within the SC as that'll cause outputs to be + // marked for forwarding if the SC is delayed to actually rotate todo!("TODO") } diff --git a/processor/monero/src/primitives/block.rs b/processor/monero/src/primitives/block.rs index 70a559c1..6afae429 100644 --- a/processor/monero/src/primitives/block.rs +++ b/processor/monero/src/primitives/block.rs @@ -40,7 +40,11 @@ impl primitives::Block for Block { self.0.block.hash() } - fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec { + fn scan_for_outputs_unordered( + &self, + _latest_active_key: Self::Key, + key: Self::Key, + ) -> Vec { let mut scanner = GuaranteedScanner::new(view_pair(key)); scanner.register_subaddress(EXTERNAL_SUBADDRESS); scanner.register_subaddress(BRANCH_SUBADDRESS); diff --git a/processor/primitives/src/block.rs b/processor/primitives/src/block.rs index da481247..a3dec40b 100644 --- a/processor/primitives/src/block.rs +++ b/processor/primitives/src/block.rs @@ -43,7 +43,11 @@ pub trait Block: Send + Sync + Sized + Clone + Debug { /// Scan all outputs within this block to find the outputs spendable by this key. /// /// No assumption on the order of the returned outputs is made. 
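+  ///
+  /// `latest_active_key` is the most recent actively-reporting key. Networks which can't
+  /// attribute outputs to a specific key (such as those using account abstraction) may
+  /// associate all outputs with it.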
- fn scan_for_outputs_unordered(&self, key: Self::Key) -> Vec; + fn scan_for_outputs_unordered( + &self, + latest_active_key: Self::Key, + key: Self::Key, + ) -> Vec; /// Check if this block resolved any Eventualities. /// diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 49ab1785..884e0e2b 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -28,6 +28,7 @@ struct SeraiKeyDbEntry { key: K, } +#[derive(Clone)] pub(crate) struct SeraiKey { pub(crate) key: K, pub(crate) stage: LifetimeStage, diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index 99fea2fb..bb3e4b7e 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -273,6 +273,18 @@ impl> ContinuallyRan for EventualityTas log::debug!("checking eventuality completions in block: {} ({b})", hex::encode(block.id())); let (keys, keys_with_stages) = self.keys_and_keys_with_stages(b); + let latest_active_key = { + let mut keys_with_stages = keys_with_stages.clone(); + loop { + // Use the most recent key + let (key, stage) = keys_with_stages.pop().unwrap(); + // Unless this key is active, but not yet reporting + if stage == LifetimeStage::ActiveYetNotReporting { + continue; + } + break key; + } + }; let mut txn = self.db.txn(); @@ -307,7 +319,7 @@ impl> ContinuallyRan for EventualityTas } // Fetch all non-External outputs - let mut non_external_outputs = block.scan_for_outputs(key.key); + let mut non_external_outputs = block.scan_for_outputs(latest_active_key, key.key); non_external_outputs.retain(|output| output.kind() != OutputType::External); // Drop any outputs less than the dust limit non_external_outputs.retain(|output| { diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 1b6afaa9..e591d210 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -46,11 +46,11 @@ pub(crate) fn sort_outputs /// Extension traits around Block. 
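+/// (`scan_for_outputs` sorts the outputs scanned so they're always iterated in the same
+/// order.)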
pub(crate) trait BlockExt: Block { - fn scan_for_outputs(&self, key: Self::Key) -> Vec; + fn scan_for_outputs(&self, latest_active_key: Self::Key, key: Self::Key) -> Vec; } impl BlockExt for B { - fn scan_for_outputs(&self, key: Self::Key) -> Vec { - let mut outputs = self.scan_for_outputs_unordered(key); + fn scan_for_outputs(&self, latest_active_key: Self::Key, key: Self::Key) -> Vec { + let mut outputs = self.scan_for_outputs_unordered(latest_active_key, key); outputs.sort_by(sort_outputs); outputs } diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index b235ff15..7004a4d9 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -122,6 +122,19 @@ impl ContinuallyRan for ScanTask { let keys = ScannerGlobalDb::::active_keys_as_of_next_to_scan_for_outputs_block(&txn) .expect("scanning for a blockchain without any keys set"); + let latest_active_key = { + let mut keys = keys.clone(); + loop { + // Use the most recent key + let key = keys.pop().unwrap(); + // Unless this key is active, but not yet reporting + if key.stage == LifetimeStage::ActiveYetNotReporting { + continue; + } + break key.key; + } + }; + // The scan data for this block let mut scan_data = SenderScanData { block_number: b, @@ -157,7 +170,7 @@ impl ContinuallyRan for ScanTask { // Scan for each key for key in &keys { - for output in block.scan_for_outputs(key.key) { + for output in block.scan_for_outputs(latest_active_key, key.key) { assert_eq!(output.key(), key.key); /* From a691be21c84fe3ce7d69f461b651e8df7349fa2b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 19 Sep 2024 01:05:36 -0400 Subject: [PATCH 159/368] Call tidy_keys upon queue_key Prevents the potential case of the substrate task and the scan task writing to the same storage slot at once. --- processor/scanner/src/db.rs | 46 +++++++++++++++++-------------- processor/scanner/src/scan/mod.rs | 3 -- 2 files changed, 25 insertions(+), 24 deletions(-) diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 884e0e2b..a985ba43 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -116,6 +116,28 @@ impl ScannerGlobalDb { StartBlock::set(txn, &block) } + fn tidy_keys(txn: &mut impl DbTxn) { + let mut keys: Vec>>> = + ActiveKeys::get(txn).expect("retiring key yet no active keys"); + let Some(key) = keys.first() else { return }; + + // Get the block we're scanning for next + let block_number = next_to_scan_for_outputs_block::(txn).expect( + "tidying keys despite never setting the next to scan for block (done on initialization)", + ); + // If this key is scheduled for retiry... + if let Some(retire_at) = RetireAt::get(txn, key.key) { + // And is retired by/at this block... + if retire_at <= block_number { + // Remove it from the list of keys + let key = keys.remove(0); + ActiveKeys::set(txn, &keys); + // Also clean up the retiry block + RetireAt::del(txn, key.key); + } + } + } + /// Queue a key. /// /// Keys may be queued whenever, so long as they're scheduled to activate `WINDOW_LENGTH` blocks @@ -165,6 +187,9 @@ impl ScannerGlobalDb { // Push and save the next key keys.push(SeraiKeyDbEntry { activation_block_number, key: EncodableG(key) }); ActiveKeys::set(txn, &keys); + + // Now tidy the keys, ensuring this has a maximum length of 2 + Self::tidy_keys(txn); } /// Retire a key. 
/// @@ -181,27 +206,6 @@ impl ScannerGlobalDb { RetireAt::set(txn, EncodableG(key), &at_block); } - pub(crate) fn tidy_keys(txn: &mut impl DbTxn) { - let mut keys: Vec>>> = - ActiveKeys::get(txn).expect("retiring key yet no active keys"); - let Some(key) = keys.first() else { return }; - - // Get the block we're scanning for next - let block_number = next_to_scan_for_outputs_block::(txn).expect( - "tidying keys despite never setting the next to scan for block (done on initialization)", - ); - // If this key is scheduled for retiry... - if let Some(retire_at) = RetireAt::get(txn, key.key) { - // And is retired by/at this block... - if retire_at <= block_number { - // Remove it from the list of keys - let key = keys.remove(0); - ActiveKeys::set(txn, &keys); - // Also clean up the retiry block - RetireAt::del(txn, key.key); - } - } - } /// Fetch the active keys, as of the next-to-scan-for-outputs Block. /// /// This means the scan task should scan for all keys returned by this. diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 7004a4d9..0ebdf992 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -116,9 +116,6 @@ impl ContinuallyRan for ScanTask { assert_eq!(ScanDb::::next_to_scan_for_outputs_block(&txn).unwrap(), b); - // Tidy the keys, then fetch them - // We don't have to tidy them here, we just have to somewhere, so why not here? - ScannerGlobalDb::::tidy_keys(&mut txn); let keys = ScannerGlobalDb::::active_keys_as_of_next_to_scan_for_outputs_block(&txn) .expect("scanning for a blockchain without any keys set"); From 1367e4151021eecf9f6ce4b059e4acadbc442f8b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 19 Sep 2024 01:31:52 -0400 Subject: [PATCH 160/368] Add hooks to the main loop Lets the Ethereum processor track the first key set as soon as it's set. --- Cargo.lock | 1 + processor/bin/src/lib.rs | 11 +++++++ processor/bitcoin/src/main.rs | 2 +- processor/ethereum/Cargo.toml | 1 + processor/ethereum/router/src/lib.rs | 7 ----- processor/ethereum/src/main.rs | 34 ++++++++++++++++++++-- processor/ethereum/src/primitives/block.rs | 6 ++-- processor/ethereum/src/publisher.rs | 26 ++++++++++++----- processor/monero/src/main.rs | 2 +- 9 files changed, 67 insertions(+), 23 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 2e2faecb..00cb2ac5 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8355,6 +8355,7 @@ dependencies = [ "serai-processor-ethereum-primitives", "serai-processor-ethereum-router", "serai-processor-key-gen", + "serai-processor-messages", "serai-processor-primitives", "serai-processor-scanner", "serai-processor-scheduler-primitives", diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs index 7758b1ea..651514ad 100644 --- a/processor/bin/src/lib.rs +++ b/processor/bin/src/lib.rs @@ -157,8 +157,18 @@ async fn first_block_after_time(feed: &S, serai_time: u64) -> u6 } } +/// Hooks to run during the main loop. +pub trait Hooks { + /// A hook to run upon receiving a message. + fn on_message(txn: &mut impl DbTxn, msg: &messages::CoordinatorMessage); +} +impl Hooks for () { + fn on_message(_: &mut impl DbTxn, _: &messages::CoordinatorMessage) {} +} + /// The main loop of a Processor, interacting with the Coordinator. 
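+///
+/// Each message is handed to `H::on_message` before the usual handling, within the same
+/// database transaction used to receive it.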
pub async fn main_loop< + H: Hooks, S: ScannerFeed, K: KeyGenParams>>, Sch: Clone @@ -183,6 +193,7 @@ pub async fn main_loop< let db_clone = db.clone(); let mut txn = db.txn(); let msg = coordinator.next_message(&mut txn).await; + H::on_message(&mut txn, &msg); let mut txn = Some(txn); match msg { messages::CoordinatorMessage::KeyGen(msg) => { diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index f260c47c..5feb3e25 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -57,7 +57,7 @@ async fn main() { tokio::spawn(TxIndexTask(feed.clone()).continually_run(index_task, vec![])); core::mem::forget(index_handle); - bin::main_loop::<_, KeyGenParams, _>(db, feed.clone(), Scheduler::new(Planner), feed).await; + bin::main_loop::<(), _, KeyGenParams, _>(db, feed.clone(), Scheduler::new(Planner), feed).await; } /* diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml index 649e3fb8..c2a6f581 100644 --- a/processor/ethereum/Cargo.toml +++ b/processor/ethereum/Cargo.toml @@ -49,6 +49,7 @@ tokio = { version = "1", default-features = false, features = ["rt-multi-thread" serai-env = { path = "../../common/env" } serai-db = { path = "../../common/db" } +messages = { package = "serai-processor-messages", path = "../messages" } key-gen = { package = "serai-processor-key-gen", path = "../key-gen" } primitives = { package = "serai-processor-primitives", path = "../primitives" } diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index 344e2bee..d56c514f 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -6,13 +6,6 @@ use std::{sync::Arc, io, collections::HashSet}; use group::ff::PrimeField; -/* -use k256::{ - elliptic_curve::{group::GroupEncoding, sec1}, - ProjectivePoint, -}; -*/ - use alloy_core::primitives::{hex::FromHex, Address, U256, Bytes, TxKind}; use alloy_consensus::TxLegacy; diff --git a/processor/ethereum/src/main.rs b/processor/ethereum/src/main.rs index 0ebf0f59..bfb9a8df 100644 --- a/processor/ethereum/src/main.rs +++ b/processor/ethereum/src/main.rs @@ -13,7 +13,13 @@ use alloy_simple_request_transport::SimpleRequest; use alloy_rpc_client::ClientBuilder; use alloy_provider::{Provider, RootProvider}; +use serai_client::validator_sets::primitives::Session; + use serai_env as env; +use serai_db::{Get, DbTxn, create_db}; + +use ::primitives::EncodableG; +use ::key_gen::KeyGenParams as KeyGenParamsTrait; mod primitives; pub(crate) use crate::primitives::*; @@ -27,6 +33,28 @@ use scheduler::{SmartContract, Scheduler}; mod publisher; use publisher::TransactionPublisher; +create_db! { + EthereumProcessor { + // The initial key for Serai on Ethereum + InitialSeraiKey: () -> EncodableG, + } +} + +struct SetInitialKey; +impl bin::Hooks for SetInitialKey { + fn on_message(txn: &mut impl DbTxn, msg: &messages::CoordinatorMessage) { + if let messages::CoordinatorMessage::Substrate( + messages::substrate::CoordinatorMessage::SetKeys { session, key_pair, .. 
}, + ) = msg + { + assert_eq!(*session, Session(0)); + let key = KeyGenParams::decode_key(key_pair.1.as_ref()) + .expect("invalid Ethereum key confirmed on Substrate"); + InitialSeraiKey::set(txn, &EncodableG(key)); + } + } +} + #[tokio::main] async fn main() { let db = bin::init(); @@ -45,11 +73,11 @@ async fn main() { } }; - bin::main_loop::<_, KeyGenParams, _>( - db, + bin::main_loop::( + db.clone(), Rpc { provider: provider.clone() }, Scheduler::new(SmartContract { chain_id }), - TransactionPublisher::new(provider, { + TransactionPublisher::new(db, provider, { let relayer_hostname = env::var("ETHEREUM_RELAYER_HOSTNAME") .expect("ethereum relayer hostname wasn't specified") .to_string(); diff --git a/processor/ethereum/src/primitives/block.rs b/processor/ethereum/src/primitives/block.rs index a6268c0b..cd26b400 100644 --- a/processor/ethereum/src/primitives/block.rs +++ b/processor/ethereum/src/primitives/block.rs @@ -6,7 +6,7 @@ use serai_client::networks::ethereum::Address; use primitives::{ReceivedOutput, EventualityTracker}; -use ethereum_router::Executed; +use ethereum_router::{InInstruction as EthereumInInstruction, Executed}; use crate::{output::Output, transaction::Eventuality}; @@ -43,7 +43,7 @@ impl primitives::BlockHeader for Epoch { #[derive(Clone, PartialEq, Eq, Debug)] pub(crate) struct FullEpoch { epoch: Epoch, - outputs: Vec, + instructions: Vec, executed: Vec, } @@ -72,7 +72,7 @@ impl primitives::Block for FullEpoch { // Associate all outputs with the latest active key // We don't associate these with the current key within the SC as that'll cause outputs to be // marked for forwarding if the SC is delayed to actually rotate - todo!("TODO") + self.instructions.iter().cloned().map(|instruction| Output { key, instruction }).collect() } #[allow(clippy::type_complexity)] diff --git a/processor/ethereum/src/publisher.rs b/processor/ethereum/src/publisher.rs index 03b1d24c..5a7a958a 100644 --- a/processor/ethereum/src/publisher.rs +++ b/processor/ethereum/src/publisher.rs @@ -13,22 +13,27 @@ use tokio::{ net::TcpStream, }; +use serai_db::Db; + use ethereum_schnorr::PublicKey; use ethereum_router::{OutInstructions, Router}; -use crate::transaction::{Action, Transaction}; +use crate::{ + InitialSeraiKey, + transaction::{Action, Transaction}, +}; #[derive(Clone)] -pub(crate) struct TransactionPublisher { - initial_serai_key: PublicKey, +pub(crate) struct TransactionPublisher { + db: D, rpc: Arc>, router: Arc>>, relayer_url: String, } -impl TransactionPublisher { - pub(crate) fn new(rpc: Arc>, relayer_url: String) -> Self { - Self { initial_serai_key: todo!("TODO"), rpc, router: Arc::new(RwLock::new(None)), relayer_url } +impl TransactionPublisher { + pub(crate) fn new(db: D, rpc: Arc>, relayer_url: String) -> Self { + Self { db, rpc, router: Arc::new(RwLock::new(None)), relayer_url } } // This will always return Ok(Some(_)) or Err(_), never Ok(None) @@ -43,7 +48,12 @@ impl TransactionPublisher { let mut router = self.router.write().await; // Check again if it's None in case a different task already did this if router.is_none() { - let Some(router_actual) = Router::new(self.rpc.clone(), &self.initial_serai_key).await? + let Some(router_actual) = Router::new( + self.rpc.clone(), + &PublicKey::new(InitialSeraiKey::get(&self.db).unwrap().0) + .expect("initial key used by Serai wasn't representable on Ethereum"), + ) + .await? else { Err(TransportErrorKind::Custom( "publishing transaction yet couldn't find router on chain. was our node reset?" 
@@ -60,7 +70,7 @@ impl TransactionPublisher { } } -impl signers::TransactionPublisher for TransactionPublisher { +impl signers::TransactionPublisher for TransactionPublisher { type EphemeralError = RpcError; fn publish( diff --git a/processor/monero/src/main.rs b/processor/monero/src/main.rs index d36118d0..b5c67f12 100644 --- a/processor/monero/src/main.rs +++ b/processor/monero/src/main.rs @@ -33,7 +33,7 @@ async fn main() { }, }; - bin::main_loop::<_, KeyGenParams, _>( + bin::main_loop::<(), _, KeyGenParams, _>( db, feed.clone(), Scheduler::new(Planner(feed.clone())), From 855e53164ed878636506e65757feff6da28884ae Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 19 Sep 2024 02:41:07 -0400 Subject: [PATCH 161/368] Finish Ethereum ScannerFeed --- processor/ethereum/TODO/old_processor.rs | 219 --------------------- processor/ethereum/src/main.rs | 4 +- processor/ethereum/src/primitives/block.rs | 8 +- processor/ethereum/src/primitives/mod.rs | 2 + processor/ethereum/src/publisher.rs | 8 +- processor/ethereum/src/rpc.rs | 114 +++++++++-- processor/ethereum/src/scheduler.rs | 18 +- 7 files changed, 126 insertions(+), 247 deletions(-) diff --git a/processor/ethereum/TODO/old_processor.rs b/processor/ethereum/TODO/old_processor.rs index a8f55c79..2e2daa3e 100644 --- a/processor/ethereum/TODO/old_processor.rs +++ b/processor/ethereum/TODO/old_processor.rs @@ -1,146 +1,5 @@ -/* -#![cfg_attr(docsrs, feature(doc_auto_cfg))] -#![doc = include_str!("../README.md")] -#![deny(missing_docs)] - -use core::{fmt, time::Duration}; -use std::{ - sync::Arc, - collections::{HashSet, HashMap}, - io, -}; - -use async_trait::async_trait; - -use ciphersuite::{group::GroupEncoding, Ciphersuite, Secp256k1}; -use frost::ThresholdKeys; - -use ethereum_serai::{ - alloy::{ - primitives::U256, - rpc_types::{BlockTransactionsKind, BlockNumberOrTag, Transaction}, - simple_request_transport::SimpleRequest, - rpc_client::ClientBuilder, - provider::{Provider, RootProvider}, - }, - crypto::{PublicKey, Signature}, - erc20::Erc20, - deployer::Deployer, - router::{Router, Coin as EthereumCoin, InInstruction as EthereumInInstruction}, - machine::*, -}; -#[cfg(test)] -use ethereum_serai::alloy::primitives::B256; - -use tokio::{ - time::sleep, - sync::{RwLock, RwLockReadGuard}, -}; -#[cfg(not(test))] -use tokio::{ - io::{AsyncReadExt, AsyncWriteExt}, - net::TcpStream, -}; - -use serai_client::{ - primitives::{Coin, Amount, Balance, NetworkId}, - validator_sets::primitives::Session, -}; - -use crate::{ - Db, Payment, - networks::{ - OutputType, Output, Transaction as TransactionTrait, SignableTransaction, Block, - Eventuality as EventualityTrait, EventualitiesTracker, NetworkError, Network, - }, - key_gen::NetworkKeyDb, - multisigs::scheduler::{ - Scheduler as SchedulerTrait, - smart_contract::{Addendum, Scheduler}, - }, -}; - -#[derive(Clone)] -pub struct Ethereum { - // This DB is solely used to access the first key generated, as needed to determine the Router's - // address. Accordingly, all methods present are consistent to a Serai chain with a finalized - // first key (regardless of local state), and this is safe. 
- db: D, - #[cfg_attr(test, allow(unused))] - relayer_url: String, - provider: Arc>, - deployer: Deployer, - router: Arc>>, -} -impl Ethereum { - pub async fn new(db: D, daemon_url: String, relayer_url: String) -> Self { - let provider = Arc::new(RootProvider::new( - ClientBuilder::default().transport(SimpleRequest::new(daemon_url), true), - )); - - let mut deployer = Deployer::new(provider.clone()).await; - while !matches!(deployer, Ok(Some(_))) { - log::error!("Deployer wasn't deployed yet or networking error"); - sleep(Duration::from_secs(5)).await; - deployer = Deployer::new(provider.clone()).await; - } - let deployer = deployer.unwrap().unwrap(); - - dbg!(&relayer_url); - dbg!(relayer_url.len()); - Ethereum { db, relayer_url, provider, deployer, router: Arc::new(RwLock::new(None)) } - } - - // Obtain a reference to the Router, sleeping until it's deployed if it hasn't already been. - // This is guaranteed to return Some. - pub async fn router(&self) -> RwLockReadGuard<'_, Option> { - // If we've already instantiated the Router, return a read reference - { - let router = self.router.read().await; - if router.is_some() { - return router; - } - } - - // Instantiate it - let mut router = self.router.write().await; - // If another attempt beat us to it, return - if router.is_some() { - drop(router); - return self.router.read().await; - } - - // Get the first key from the DB - let first_key = - NetworkKeyDb::get(&self.db, Session(0)).expect("getting outputs before confirming a key"); - let key = Secp256k1::read_G(&mut first_key.as_slice()).unwrap(); - let public_key = PublicKey::new(key).unwrap(); - - // Find the router - let mut found = self.deployer.find_router(self.provider.clone(), &public_key).await; - while !matches!(found, Ok(Some(_))) { - log::error!("Router wasn't deployed yet or networking error"); - sleep(Duration::from_secs(5)).await; - found = self.deployer.find_router(self.provider.clone(), &public_key).await; - } - - // Set it - *router = Some(found.unwrap().unwrap()); - - // Downgrade to a read lock - // Explicitly doesn't use `downgrade` so that another pending write txn can realize it's no - // longer necessary - drop(router); - self.router.read().await - } -} - #[async_trait] impl Network for Ethereum { - const DUST: u64 = 0; // TODO - - const COST_TO_AGGREGATE: u64 = 0; - async fn get_outputs( &self, block: &Self::Block, @@ -220,66 +79,6 @@ impl Network for Ethereum { all_events } - async fn get_eventuality_completions( - &self, - eventualities: &mut EventualitiesTracker, - block: &Self::Block, - ) -> HashMap< - [u8; 32], - ( - usize, - >::Id, - ::Completion, - ), - > { - let mut res = HashMap::new(); - if eventualities.map.is_empty() { - return res; - } - - let router = self.router().await; - let router = router.as_ref().unwrap(); - - let past_scanned_epoch = loop { - match self.get_block(eventualities.block_number).await { - Ok(block) => break block, - Err(e) => log::error!("couldn't get the last scanned block in the tracker: {}", e), - } - sleep(Duration::from_secs(10)).await; - }; - assert_eq!( - past_scanned_epoch.start / 32, - u64::try_from(eventualities.block_number).unwrap(), - "assumption of tracker block number's relation to epoch start is incorrect" - ); - - // Iterate from after the epoch number in the tracker to the end of this epoch - for block_num in (past_scanned_epoch.end() + 1) ..= block.end() { - let executed = loop { - match router.executed_commands(block_num).await { - Ok(executed) => break executed, - Err(e) => log::error!("couldn't get the executed 
commands in block {block_num}: {e}"), - } - sleep(Duration::from_secs(10)).await; - }; - - for executed in executed { - let lookup = executed.nonce.to_le_bytes().to_vec(); - if let Some((plan_id, eventuality)) = eventualities.map.get(&lookup) { - if let Some(command) = - SignedRouterCommand::new(&eventuality.0, eventuality.1.clone(), &executed.signature) - { - res.insert(*plan_id, (block_num.try_into().unwrap(), executed.tx_id, command)); - eventualities.map.remove(&lookup); - } - } - } - } - eventualities.block_number = (block.start / 32).try_into().unwrap(); - - res - } - async fn publish_completion( &self, completion: &::Completion, @@ -333,14 +132,6 @@ impl Network for Ethereum { } } - async fn confirm_completion( - &self, - eventuality: &Self::Eventuality, - claim: &::Claim, - ) -> Result::Completion>, NetworkError> { - Ok(SignedRouterCommand::new(&eventuality.0, eventuality.1.clone(), &claim.signature)) - } - #[cfg(test)] async fn get_block_number(&self, id: &>::Id) -> usize { self @@ -355,15 +146,6 @@ impl Network for Ethereum { .unwrap() } - #[cfg(test)] - async fn check_eventuality_by_claim( - &self, - eventuality: &Self::Eventuality, - claim: &::Claim, - ) -> bool { - SignedRouterCommand::new(&eventuality.0, eventuality.1.clone(), &claim.signature).is_some() - } - #[cfg(test)] async fn get_transaction_by_eventuality( &self, @@ -474,4 +256,3 @@ impl Network for Ethereum { self.get_block(self.get_latest_block_number().await.unwrap()).await.unwrap() } } -*/ diff --git a/processor/ethereum/src/main.rs b/processor/ethereum/src/main.rs index bfb9a8df..7acdffdb 100644 --- a/processor/ethereum/src/main.rs +++ b/processor/ethereum/src/main.rs @@ -75,8 +75,8 @@ async fn main() { bin::main_loop::( db.clone(), - Rpc { provider: provider.clone() }, - Scheduler::new(SmartContract { chain_id }), + Rpc { db: db.clone(), provider: provider.clone() }, + Scheduler::::new(SmartContract { chain_id }), TransactionPublisher::new(db, provider, { let relayer_hostname = env::var("ETHEREUM_RELAYER_HOSTNAME") .expect("ethereum relayer hostname wasn't specified") diff --git a/processor/ethereum/src/primitives/block.rs b/processor/ethereum/src/primitives/block.rs index cd26b400..d5f0cb99 100644 --- a/processor/ethereum/src/primitives/block.rs +++ b/processor/ethereum/src/primitives/block.rs @@ -20,8 +20,6 @@ pub(crate) struct Epoch { pub(crate) start: u64, // The hash of the last block within this Epoch. pub(crate) end_hash: [u8; 32], - // The monotonic time for this Epoch. 
-  pub(crate) time: u64,
 }
 
 impl Epoch {
@@ -42,9 +40,9 @@ impl primitives::BlockHeader for Epoch {
 
 #[derive(Clone, PartialEq, Eq, Debug)]
 pub(crate) struct FullEpoch {
-  epoch: Epoch,
-  instructions: Vec<EthereumInInstruction>,
-  executed: Vec<Executed>,
+  pub(crate) epoch: Epoch,
+  pub(crate) instructions: Vec<EthereumInInstruction>,
+  pub(crate) executed: Vec<Executed>,
 }
 
 impl primitives::Block for FullEpoch {
diff --git a/processor/ethereum/src/primitives/mod.rs b/processor/ethereum/src/primitives/mod.rs
index f0d31802..00a5980f 100644
--- a/processor/ethereum/src/primitives/mod.rs
+++ b/processor/ethereum/src/primitives/mod.rs
@@ -8,3 +8,5 @@ pub(crate) const DAI: [u8; 20] =
     Ok(res) => res,
     Err(_) => panic!("invalid non-test DAI hex address"),
   };
+
+pub(crate) const TOKENS: [[u8; 20]; 1] = [DAI];
diff --git a/processor/ethereum/src/publisher.rs b/processor/ethereum/src/publisher.rs
index 5a7a958a..4a62bad7 100644
--- a/processor/ethereum/src/publisher.rs
+++ b/processor/ethereum/src/publisher.rs
@@ -50,8 +50,12 @@ impl<D: Db> TransactionPublisher<D> {
     if router.is_none() {
       let Some(router_actual) = Router::new(
         self.rpc.clone(),
-        &PublicKey::new(InitialSeraiKey::get(&self.db).unwrap().0)
-          .expect("initial key used by Serai wasn't representable on Ethereum"),
+        &PublicKey::new(
+          InitialSeraiKey::get(&self.db)
+            .expect("publishing a transaction yet never confirmed a key")
+            .0,
+        )
+        .expect("initial key used by Serai wasn't representable on Ethereum"),
       )
       .await?
       else {
diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs
index e3f25f86..a53e6b33 100644
--- a/processor/ethereum/src/rpc.rs
+++ b/processor/ethereum/src/rpc.rs
@@ -1,6 +1,7 @@
 use core::future::Future;
-use std::sync::Arc;
+use std::{sync::Arc, collections::HashSet};
 
+use alloy_core::primitives::B256;
 use alloy_rpc_types_eth::{BlockTransactionsKind, BlockNumberOrTag};
 use alloy_transport::{RpcError, TransportErrorKind};
 use alloy_simple_request_transport::SimpleRequest;
@@ -8,16 +9,26 @@ use alloy_provider::{Provider, RootProvider};
 
 use serai_client::primitives::{NetworkId, Coin, Amount};
 
+use serai_db::Db;
+
 use scanner::ScannerFeed;
 
-use crate::block::{Epoch, FullEpoch};
+use ethereum_schnorr::PublicKey;
+use ethereum_erc20::{TopLevelTransfer, Erc20};
+use ethereum_router::{Coin as EthereumCoin, InInstruction as EthereumInInstruction, Router};
+
+use crate::{
+  TOKENS, InitialSeraiKey,
+  block::{Epoch, FullEpoch},
+};
 
 #[derive(Clone)]
-pub(crate) struct Rpc {
+pub(crate) struct Rpc<D: Db> {
+  pub(crate) db: D,
   pub(crate) provider: Arc<RootProvider<SimpleRequest>>,
 }
 
-impl ScannerFeed for Rpc {
+impl<D: Db> ScannerFeed for Rpc<D> {
   const NETWORK: NetworkId = NetworkId::Ethereum;
 
   // We only need one confirmation as Ethereum properly finalizes
@@ -62,7 +73,22 @@ impl<D: Db> ScannerFeed for Rpc<D> {
     &self,
     number: u64,
   ) -> impl Send + Future<Output = Result<u64, Self::EphemeralError>> {
-    async move { todo!("TODO") }
+    async move {
+      let header = self
+        .provider
+        .get_block(BlockNumberOrTag::Number(number).into(), BlockTransactionsKind::Hashes)
+        .await?
+        .ok_or_else(|| {
+          TransportErrorKind::Custom(
+            "asked for time of a block our node doesn't have".to_string().into(),
+          )
+        })?
+        .header;
+      // This is monotonic ever since the merge
+      // https://github.com/ethereum/consensus-specs/blob/4afe39822c9ad9747e0f5635cca117c18441ec1b
+      // /specs/bellatrix/beacon-chain.md?plain=1#L393-L394
+      Ok(header.timestamp)
+    }
   }
 
   fn unchecked_block_header_by_number(
@@ -104,25 +130,91 @@ impl<D: Db> ScannerFeed for Rpc<D> {
         .header;
       let end_hash = end_header.hash.into();
-      let time = end_header.timestamp;
 
-      Ok(Epoch { prior_end_hash, start, end_hash, time })
+      Ok(Epoch { prior_end_hash, start, end_hash })
     }
   }
 
-  #[rustfmt::skip] // It wants to improperly format the `async move` to a single line
   fn unchecked_block_by_number(
     &self,
     number: u64,
   ) -> impl Send + Future<Output = Result<Self::Block, Self::EphemeralError>> {
     async move {
-      todo!("TODO")
+      let epoch = self.unchecked_block_header_by_number(number).await?;
+      let mut instructions = vec![];
+      let mut executed = vec![];
+
+      let Some(router) = Router::new(
+        self.provider.clone(),
+        &PublicKey::new(
+          InitialSeraiKey::get(&self.db).expect("fetching a block yet never confirmed a key").0,
+        )
+        .expect("initial key used by Serai wasn't representable on Ethereum"),
+      )
+      .await?
+      else {
+        // The Router wasn't deployed yet so we cannot have any on-chain interactions
+        // If the Router has been deployed by the block we've synced to, it won't have any events
+        // for these blocks anyway, so this doesn't risk a consensus split
+        // TODO: This does, as we can have top-level transfers to the Router before it's deployed
+        return Ok(FullEpoch { epoch, instructions, executed });
+      };
+
+      let mut to_check = epoch.end_hash;
+      while to_check != epoch.prior_end_hash {
+        let to_check_block = self
+          .provider
+          .get_block(B256::from(to_check).into(), BlockTransactionsKind::Hashes)
+          .await?
+          .ok_or_else(|| {
+            TransportErrorKind::Custom(
+              format!(
+                "ethereum node didn't have requested block: {}. was the node reset?",
+                hex::encode(to_check)
+              )
+              .into(),
+            )
+          })?
+          .header;
+
+        instructions.append(
+          &mut router.in_instructions(to_check_block.number, &HashSet::from(TOKENS)).await?,
+        );
+        for token in TOKENS {
+          for TopLevelTransfer { id, from, amount, data } in
+            Erc20::new(self.provider.clone(), token)
+              .top_level_transfers(to_check_block.number, router.address())
+              .await?
+ { + instructions.push(EthereumInInstruction { + id: (id, u64::MAX), + from, + coin: EthereumCoin::Erc20(token), + amount, + data, + }); + } + } + + executed.append(&mut router.executed(to_check_block.number).await?); + + to_check = *to_check_block.parent_hash; + } + + Ok(FullEpoch { epoch, instructions, executed }) } } fn dust(coin: Coin) -> Amount { assert_eq!(coin.network(), NetworkId::Ethereum); - todo!("TODO") + #[allow(clippy::inconsistent_digit_grouping)] + match coin { + // 5 USD if Ether is ~3300 USD + Coin::Ether => Amount(1_500_00), + // 5 DAI + Coin::Dai => Amount(5_000_000_00), + _ => unreachable!(), + } } fn cost_to_aggregate( @@ -132,7 +224,7 @@ impl ScannerFeed for Rpc { ) -> impl Send + Future> { async move { assert_eq!(coin.network(), NetworkId::Ethereum); - // TODO + // There is no cost to aggregate as we receive to an account Ok(Amount(0)) } } diff --git a/processor/ethereum/src/scheduler.rs b/processor/ethereum/src/scheduler.rs index 6683eeac..39f3fed3 100644 --- a/processor/ethereum/src/scheduler.rs +++ b/processor/ethereum/src/scheduler.rs @@ -2,6 +2,8 @@ use alloy_core::primitives::U256; use serai_client::primitives::{NetworkId, Coin, Balance}; +use serai_db::Db; + use primitives::Payment; use scanner::{KeyFor, AddressFor, EventualityFor}; @@ -32,15 +34,15 @@ fn balance_to_ethereum_amount(balance: Balance) -> U256 { pub(crate) struct SmartContract { pub(crate) chain_id: U256, } -impl smart_contract_scheduler::SmartContract for SmartContract { +impl smart_contract_scheduler::SmartContract> for SmartContract { type SignableTransaction = Action; fn rotate( &self, nonce: u64, - retiring_key: KeyFor, - new_key: KeyFor, - ) -> (Self::SignableTransaction, EventualityFor) { + retiring_key: KeyFor>, + new_key: KeyFor>, + ) -> (Self::SignableTransaction, EventualityFor>) { let action = Action::SetKey { chain_id: self.chain_id, nonce, @@ -52,9 +54,9 @@ impl smart_contract_scheduler::SmartContract for SmartContract { fn fulfill( &self, nonce: u64, - key: KeyFor, - payments: Vec>>, - ) -> Vec<(Self::SignableTransaction, EventualityFor)> { + key: KeyFor>, + payments: Vec>>>, + ) -> Vec<(Self::SignableTransaction, EventualityFor>)> { let mut outs = Vec::with_capacity(payments.len()); for payment in payments { outs.push(( @@ -75,4 +77,4 @@ impl smart_contract_scheduler::SmartContract for SmartContract { } } -pub(crate) type Scheduler = smart_contract_scheduler::Scheduler; +pub(crate) type Scheduler = smart_contract_scheduler::Scheduler, SmartContract>; From 1a08d50e16591d89ab553624cb12c9d79afb99a8 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 19 Sep 2024 02:46:32 -0400 Subject: [PATCH 162/368] Remove unused code in the Ethereum processor --- processor/ethereum/src/primitives/block.rs | 9 --------- processor/ethereum/src/primitives/machine.rs | 6 +++--- processor/ethereum/src/rpc.rs | 2 +- processor/ethereum/src/scheduler.rs | 4 ++-- 4 files changed, 6 insertions(+), 15 deletions(-) diff --git a/processor/ethereum/src/primitives/block.rs b/processor/ethereum/src/primitives/block.rs index d5f0cb99..723e099d 100644 --- a/processor/ethereum/src/primitives/block.rs +++ b/processor/ethereum/src/primitives/block.rs @@ -16,19 +16,10 @@ use crate::{output::Output, transaction::Eventuality}; pub(crate) struct Epoch { // The hash of the block which ended the prior Epoch. pub(crate) prior_end_hash: [u8; 32], - // The first block number within this Epoch. - pub(crate) start: u64, // The hash of the last block within this Epoch. 
pub(crate) end_hash: [u8; 32], } -impl Epoch { - // The block number of the last block within this epoch. - fn end(&self) -> u64 { - self.start + 31 - } -} - impl primitives::BlockHeader for Epoch { fn id(&self) -> [u8; 32] { self.end_hash diff --git a/processor/ethereum/src/primitives/machine.rs b/processor/ethereum/src/primitives/machine.rs index f37fb440..1762eb28 100644 --- a/processor/ethereum/src/primitives/machine.rs +++ b/processor/ethereum/src/primitives/machine.rs @@ -102,9 +102,9 @@ impl SignMachine for ActionSignMachine { unimplemented!() } fn from_cache( - params: Self::Params, - keys: Self::Keys, - cache: CachedPreprocess, + _params: Self::Params, + _keys: Self::Keys, + _cache: CachedPreprocess, ) -> (Self, Self::Preprocess) { unimplemented!() } diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs index a53e6b33..0769c5c3 100644 --- a/processor/ethereum/src/rpc.rs +++ b/processor/ethereum/src/rpc.rs @@ -131,7 +131,7 @@ impl ScannerFeed for Rpc { let end_hash = end_header.hash.into(); - Ok(Epoch { prior_end_hash, start, end_hash }) + Ok(Epoch { prior_end_hash, end_hash }) } } diff --git a/processor/ethereum/src/scheduler.rs b/processor/ethereum/src/scheduler.rs index 39f3fed3..55e091fc 100644 --- a/processor/ethereum/src/scheduler.rs +++ b/processor/ethereum/src/scheduler.rs @@ -40,7 +40,7 @@ impl smart_contract_scheduler::SmartContract> for SmartContract { fn rotate( &self, nonce: u64, - retiring_key: KeyFor>, + _retiring_key: KeyFor>, new_key: KeyFor>, ) -> (Self::SignableTransaction, EventualityFor>) { let action = Action::SetKey { @@ -54,7 +54,7 @@ impl smart_contract_scheduler::SmartContract> for SmartContract { fn fulfill( &self, nonce: u64, - key: KeyFor>, + _key: KeyFor>, payments: Vec>>>, ) -> Vec<(Self::SignableTransaction, EventualityFor>)> { let mut outs = Vec::with_capacity(payments.len()); From 53567e91c8314fb62c7971c34236bdafeae1dee3 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 19 Sep 2024 02:58:02 -0400 Subject: [PATCH 163/368] Read NetworkId from ScannerFeed trait, not env --- processor/bin/src/coordinator.rs | 17 ++++++----------- processor/bin/src/lib.rs | 2 +- 2 files changed, 7 insertions(+), 12 deletions(-) diff --git a/processor/bin/src/coordinator.rs b/processor/bin/src/coordinator.rs index 6fe5aea0..e05712cf 100644 --- a/processor/bin/src/coordinator.rs +++ b/processor/bin/src/coordinator.rs @@ -5,13 +5,15 @@ use tokio::sync::mpsc; use scale::Encode; use serai_client::{ - primitives::{NetworkId, Signature}, + primitives::Signature, validator_sets::primitives::Session, in_instructions::primitives::{Batch, SignedBatch}, }; use serai_db::{Get, DbTxn, Db, create_db, db_channel}; -use serai_env as env; + +use scanner::ScannerFeed; + use message_queue::{Service, Metadata, client::MessageQueue}; create_db! 
{
@@ -60,18 +62,11 @@ pub(crate) struct Coordinator {
 }
 
 impl Coordinator {
-  pub(crate) fn new(db: crate::Db) -> Self {
+  pub(crate) fn new<S: ScannerFeed>(db: crate::Db) -> Self {
    let (received_message_send, received_message_recv) = mpsc::unbounded_channel();
    let (sent_message_send, mut sent_message_recv) = mpsc::unbounded_channel();

-    let network_id = match env::var("NETWORK").expect("network wasn't specified").as_str() {
-      "bitcoin" => NetworkId::Bitcoin,
-      "ethereum" => NetworkId::Ethereum,
-      "monero" => NetworkId::Monero,
-      _ => panic!("unrecognized network"),
-    };
-    // TODO: Read this from ScannerFeed
-    let service = Service::Processor(network_id);
+    let service = Service::Processor(S::NETWORK);
    let message_queue = Arc::new(MessageQueue::from_env(service));

    // Spawn a task to move messages from the message-queue to our database so we can achieve
diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs
index 651514ad..662eafb9 100644
--- a/processor/bin/src/lib.rs
+++ b/processor/bin/src/lib.rs
@@ -182,7 +182,7 @@ pub async fn main_loop<
   scheduler: Sch,
   publisher: impl TransactionPublisher<TransactionFor<Sch::SignableTransaction>>,
 ) {
-  let mut coordinator = Coordinator::new(db.clone());
+  let mut coordinator = Coordinator::new::<S>(db.clone());
 
   let mut key_gen = key_gen::<K>();
   let mut scanner = Scanner::new(db.clone(), feed.clone(), scheduler.clone()).await;
From c27aaf86583e52c4dc592e165ee010083ff4925a Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Thu, 19 Sep 2024 03:16:17 -0400
Subject: [PATCH 164/368] Merge BlockWithAcknowledgedBatch and
 BatchWithoutAcknowledgeBatch

Offers a simpler API to the coordinator.
---
 processor/bin/src/lib.rs               | 57 +++++++++++++++----------
 processor/messages/src/lib.rs          | 29 ++++---------
 processor/scanner/Cargo.toml           |  2 +
 processor/scanner/src/lib.rs           |  7 +---
 processor/scanner/src/substrate/db.rs  | 12 +++---
 processor/scanner/src/substrate/mod.rs | 14 +++----
 6 files changed, 59 insertions(+), 62 deletions(-)

diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs
index 662eafb9..86a3a0cd 100644
--- a/processor/bin/src/lib.rs
+++ b/processor/bin/src/lib.rs
@@ -270,32 +270,43 @@ pub async fn main_loop<
           // This is a cheap call
           signers.retire_session(txn, session, &key)
         }
-        messages::substrate::CoordinatorMessage::BlockWithBatchAcknowledgement {
-          block: _,
-          batch_id,
-          in_instruction_succeededs,
-          burns,
+        messages::substrate::CoordinatorMessage::Block {
+          serai_block_number: _,
+          batches,
+          mut burns,
         } => {
-          let mut txn = txn.take().unwrap();
           let scanner = scanner.as_mut().unwrap();
-          let key_to_activate = KeyToActivate::<KeyFor<S>>::try_recv(&mut txn).map(|key| key.0);
+
+          // Substrate sets this limit to prevent DoSs from malicious validator sets
+          // That bound lets us consume this txn in the following loop body, as an optimization
+          assert!(batches.len() <= 1);
+          for messages::substrate::ExecutedBatch { id, in_instructions } in batches {
+            let key_to_activate =
+              KeyToActivate::<KeyFor<S>>::try_recv(txn.as_mut().unwrap()).map(|key| key.0);
+
+            /*
+              `acknowledge_batch` takes burns to optimize handling returns with standard payments.
+              That's why handling these with a Batch (and not waiting until the following potential
+              `queue_burns` call) makes sense. As for which Batch, the first is equally valid unless
+              we want to start introspecting (and should be our only Batch anyways).
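+
+              The `std::mem::swap` below drains the queued burns into this Batch's call, leaving
+              `burns` empty, so the trailing `queue_burns` call only fires if no Batch took them.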
+ */ + let mut this_batchs_burns = vec![]; + std::mem::swap(&mut burns, &mut this_batchs_burns); + + // This is a cheap call as it internally just queues this to be done later + let _: () = scanner.acknowledge_batch( + txn.take().unwrap(), + id, + in_instructions, + this_batchs_burns, + key_to_activate, + ); + } + // This is a cheap call as it internally just queues this to be done later - scanner.acknowledge_batch( - txn, - batch_id, - in_instruction_succeededs, - burns, - key_to_activate, - ) - } - messages::substrate::CoordinatorMessage::BlockWithoutBatchAcknowledgement { - block: _, - burns, - } => { - let txn = txn.take().unwrap(); - let scanner = scanner.as_mut().unwrap(); - // This is a cheap call as it internally just queues this to be done later - scanner.queue_burns(txn, burns) + if !burns.is_empty() { + let _: () = scanner.queue_burns(txn.take().unwrap(), burns); + } } }, }; diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index 080864dc..659491d4 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -181,7 +181,6 @@ pub mod coordinator { pub mod substrate { use super::*; - /* TODO #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum InInstructionResult { Succeeded, @@ -189,15 +188,9 @@ pub mod substrate { } #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub struct ExecutedBatch { - batch_id: u32, - in_instructions: Vec, + pub id: u32, + pub in_instructions: Vec, } - Block { - block: u64, - batches: Vec, - burns: Vec, - } - */ #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum CoordinatorMessage { @@ -205,15 +198,12 @@ pub mod substrate { SetKeys { serai_time: u64, session: Session, key_pair: KeyPair }, /// Slashes reported on the Serai blockchain OR the process timed out. SlashesReported { session: Session }, - /// The data from a block which acknowledged a Batch. - BlockWithBatchAcknowledgement { - block: u64, - batch_id: u32, - in_instruction_succeededs: Vec, + /// A block from Serai with relevance to this processor. + Block { + serai_block_number: u64, + batches: Vec, burns: Vec, }, - /// The data from a block which didn't acknowledge a Batch. - BlockWithoutBatchAcknowledgement { block: u64, burns: Vec }, } #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] @@ -323,11 +313,8 @@ impl CoordinatorMessage { let (sub, id) = match msg { substrate::CoordinatorMessage::SetKeys { session, .. } => (0, session.encode()), substrate::CoordinatorMessage::SlashesReported { session } => (1, session.encode()), - substrate::CoordinatorMessage::BlockWithBatchAcknowledgement { block, .. } => { - (2, block.encode()) - } - substrate::CoordinatorMessage::BlockWithoutBatchAcknowledgement { block, .. } => { - (3, block.encode()) + substrate::CoordinatorMessage::Block { serai_block_number, .. 
} => { + (2, serai_block_number.encode()) } }; diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index 1ff154cd..09a6a937 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -31,6 +31,8 @@ tokio = { version = "1", default-features = false, features = ["rt-multi-thread" serai-db = { path = "../../common/db" } +messages = { package = "serai-processor-messages", path = "../messages" } + serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } serai-in-instructions-primitives = { path = "../../substrate/in-instructions/primitives", default-features = false, features = ["std", "borsh"] } serai-coins-primitives = { path = "../../substrate/coins/primitives", default-features = false, features = ["std", "borsh"] } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index e591d210..72d661a3 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -429,9 +429,6 @@ impl Scanner { /// This means the specified Batch was ordered on Serai in relation to Burn events, and all /// validators have achieved synchrony on it. /// - /// `in_instruction_succeededs` is the result of executing each InInstruction within this batch, - /// true if it succeeded and false if it did not (and did not cause any state changes on Serai). - /// /// `burns` is a list of Burns to queue with the acknowledgement of this Batch for efficiency's /// sake. Any Burns passed here MUST NOT be passed into any other call of `acknowledge_batch` nor /// `queue_burns`. Doing so will cause them to be executed multiple times. @@ -441,7 +438,7 @@ impl Scanner { &mut self, mut txn: impl DbTxn, batch_id: u32, - in_instruction_succeededs: Vec, + in_instruction_results: Vec, burns: Vec, key_to_activate: Option>, ) { @@ -451,7 +448,7 @@ impl Scanner { substrate::queue_acknowledge_batch::( &mut txn, batch_id, - in_instruction_succeededs, + in_instruction_results, burns, key_to_activate, ); diff --git a/processor/scanner/src/substrate/db.rs b/processor/scanner/src/substrate/db.rs index 18435856..c1a1b0e2 100644 --- a/processor/scanner/src/substrate/db.rs +++ b/processor/scanner/src/substrate/db.rs @@ -12,7 +12,7 @@ use crate::{ScannerFeed, KeyFor}; #[derive(BorshSerialize, BorshDeserialize)] struct AcknowledgeBatchEncodable { batch_id: u32, - in_instruction_succeededs: Vec, + in_instruction_results: Vec, burns: Vec, key_to_activate: Option>, } @@ -25,7 +25,7 @@ enum ActionEncodable { pub(crate) struct AcknowledgeBatch { pub(crate) batch_id: u32, - pub(crate) in_instruction_succeededs: Vec, + pub(crate) in_instruction_results: Vec, pub(crate) burns: Vec, pub(crate) key_to_activate: Option>, } @@ -46,7 +46,7 @@ impl SubstrateDb { pub(crate) fn queue_acknowledge_batch( txn: &mut impl DbTxn, batch_id: u32, - in_instruction_succeededs: Vec, + in_instruction_results: Vec, burns: Vec, key_to_activate: Option>, ) { @@ -54,7 +54,7 @@ impl SubstrateDb { txn, &ActionEncodable::AcknowledgeBatch(AcknowledgeBatchEncodable { batch_id, - in_instruction_succeededs, + in_instruction_results, burns, key_to_activate: key_to_activate.map(|key| key.to_bytes().as_ref().to_vec()), }), @@ -69,12 +69,12 @@ impl SubstrateDb { Some(match action_encodable { ActionEncodable::AcknowledgeBatch(AcknowledgeBatchEncodable { batch_id, - in_instruction_succeededs, + in_instruction_results, burns, key_to_activate, }) => Action::AcknowledgeBatch(AcknowledgeBatch { batch_id, - in_instruction_succeededs, + in_instruction_results, burns, 
key_to_activate: key_to_activate.map(|key| { let mut repr = as GroupEncoding>::Repr::default(); diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index 89186c69..ce28470d 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -16,14 +16,14 @@ use db::*; pub(crate) fn queue_acknowledge_batch( txn: &mut impl DbTxn, batch_id: u32, - in_instruction_succeededs: Vec, + in_instruction_results: Vec, burns: Vec, key_to_activate: Option>, ) { SubstrateDb::::queue_acknowledge_batch( txn, batch_id, - in_instruction_succeededs, + in_instruction_results, burns, key_to_activate, ) @@ -67,7 +67,7 @@ impl ContinuallyRan for SubstrateTask { match action { Action::AcknowledgeBatch(AcknowledgeBatch { batch_id, - in_instruction_succeededs, + in_instruction_results, mut burns, key_to_activate, }) => { @@ -127,16 +127,16 @@ impl ContinuallyRan for SubstrateTask { let return_information = report::take_return_information::(&mut txn, batch_id) .expect("didn't save the return information for Batch we published"); assert_eq!( - in_instruction_succeededs.len(), + in_instruction_results.len(), return_information.len(), "amount of InInstruction succeededs differed from amount of return information saved" ); // We map these into standard Burns - for (succeeded, return_information) in - in_instruction_succeededs.into_iter().zip(return_information) + for (result, return_information) in + in_instruction_results.into_iter().zip(return_information) { - if succeeded { + if result == messages::substrate::InInstructionResult::Succeeded { continue; } From e64827b6d70a38f09ae66f0e6f96007847933ec2 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 19 Sep 2024 03:18:14 -0400 Subject: [PATCH 165/368] Mark files in TODO/ with "TODO" to ensure it pops up on search --- Cargo.lock | 1 + processor/ethereum/TODO/old_processor.rs | 82 +----------------------- processor/ethereum/TODO/tests/crypto.rs | 2 + processor/ethereum/TODO/tests/mod.rs | 2 + processor/ethereum/TODO/tests/router.rs | 2 + 5 files changed, 8 insertions(+), 81 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 00cb2ac5..065432d0 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8801,6 +8801,7 @@ dependencies = [ "serai-db", "serai-in-instructions-primitives", "serai-primitives", + "serai-processor-messages", "serai-processor-primitives", "serai-processor-scheduler-primitives", "tokio", diff --git a/processor/ethereum/TODO/old_processor.rs b/processor/ethereum/TODO/old_processor.rs index 2e2daa3e..50250c43 100644 --- a/processor/ethereum/TODO/old_processor.rs +++ b/processor/ethereum/TODO/old_processor.rs @@ -1,83 +1,4 @@ -#[async_trait] -impl Network for Ethereum { - async fn get_outputs( - &self, - block: &Self::Block, - _: ::G, - ) -> Vec { - let router = self.router().await; - let router = router.as_ref().unwrap(); - // Grab the key at the end of the epoch - let key_at_end_of_block = loop { - match router.key_at_end_of_block(block.start + 31).await { - Ok(Some(key)) => break key, - Ok(None) => return vec![], - Err(e) => { - log::error!("couldn't connect to router for the key at the end of the block: {e:?}"); - sleep(Duration::from_secs(5)).await; - continue; - } - } - }; - - let mut all_events = vec![]; - let mut top_level_txids = HashSet::new(); - for erc20_addr in [DAI] { - let erc20 = Erc20::new(self.provider.clone(), erc20_addr); - - for block in block.start .. 
(block.start + 32) { - let transfers = loop { - match erc20.top_level_transfers(block, router.address()).await { - Ok(transfers) => break transfers, - Err(e) => { - log::error!("couldn't connect to Ethereum node for the top-level transfers: {e:?}"); - sleep(Duration::from_secs(5)).await; - continue; - } - } - }; - - for transfer in transfers { - top_level_txids.insert(transfer.id); - all_events.push(EthereumInInstruction { - id: (transfer.id, 0), - from: transfer.from, - coin: EthereumCoin::Erc20(erc20_addr), - amount: transfer.amount, - data: transfer.data, - key_at_end_of_block, - }); - } - } - } - - for block in block.start .. (block.start + 32) { - let mut events = router.in_instructions(block, &HashSet::from([DAI])).await; - while let Err(e) = events { - log::error!("couldn't connect to Ethereum node for the Router's events: {e:?}"); - sleep(Duration::from_secs(5)).await; - events = router.in_instructions(block, &HashSet::from([DAI])).await; - } - let mut events = events.unwrap(); - for event in &mut events { - // A transaction should either be a top-level transfer or a Router InInstruction - if top_level_txids.contains(&event.id.0) { - panic!("top-level transfer had {} and router had {:?}", hex::encode(event.id.0), event); - } - // Overwrite the key at end of block to key at end of epoch - event.key_at_end_of_block = key_at_end_of_block; - } - all_events.extend(events); - } - - for event in &all_events { - assert!( - coin_to_serai_coin(&event.coin).is_some(), - "router yielded events for unrecognized coins" - ); - } - all_events - } +TODO async fn publish_completion( &self, @@ -255,4 +176,3 @@ impl Network for Ethereum { // Yield the freshly mined block self.get_block(self.get_latest_block_number().await.unwrap()).await.unwrap() } -} diff --git a/processor/ethereum/TODO/tests/crypto.rs b/processor/ethereum/TODO/tests/crypto.rs index a4f86ae9..20ba40b8 100644 --- a/processor/ethereum/TODO/tests/crypto.rs +++ b/processor/ethereum/TODO/tests/crypto.rs @@ -1,3 +1,5 @@ +// TODO + use rand_core::OsRng; use group::ff::{Field, PrimeField}; diff --git a/processor/ethereum/TODO/tests/mod.rs b/processor/ethereum/TODO/tests/mod.rs index 91b03d9b..a865868f 100644 --- a/processor/ethereum/TODO/tests/mod.rs +++ b/processor/ethereum/TODO/tests/mod.rs @@ -1,3 +1,5 @@ +// TODO + use std::{sync::Arc, collections::HashMap}; use rand_core::OsRng; diff --git a/processor/ethereum/TODO/tests/router.rs b/processor/ethereum/TODO/tests/router.rs index 724348cc..63e5f1d5 100644 --- a/processor/ethereum/TODO/tests/router.rs +++ b/processor/ethereum/TODO/tests/router.rs @@ -1,3 +1,5 @@ +// TODO + use std::{convert::TryFrom, sync::Arc, collections::HashMap}; use rand_core::OsRng; From 861a8352e50d1f2ea0669c3442267aecbc9b1ea8 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 19 Sep 2024 21:19:34 -0400 Subject: [PATCH 166/368] Update to the latest bitcoin-serai --- Cargo.lock | 2 -- processor/bitcoin/Cargo.toml | 2 -- processor/bitcoin/src/key_gen.rs | 35 ------------------- .../bitcoin/src/primitives/transaction.rs | 3 +- 4 files changed, 1 insertion(+), 41 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 065432d0..12da8dd6 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8106,9 +8106,7 @@ dependencies = [ "borsh", "ciphersuite", "dkg", - "flexible-transcript", "hex", - "k256", "log", "modular-frost", "parity-scale-codec", diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml index 2a69d234..90b9566b 100644 --- a/processor/bitcoin/Cargo.toml +++ b/processor/bitcoin/Cargo.toml @@ -23,8 
+23,6 @@ hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } -transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } -k256 = { version = "0.13", default-features = false, features = ["std"] } ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std", "secp256k1"] } dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-secp256k1"] } frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false } diff --git a/processor/bitcoin/src/key_gen.rs b/processor/bitcoin/src/key_gen.rs index bc911676..41544134 100644 --- a/processor/bitcoin/src/key_gen.rs +++ b/processor/bitcoin/src/key_gen.rs @@ -1,8 +1,6 @@ use ciphersuite::{group::GroupEncoding, Ciphersuite, Secp256k1}; use frost::ThresholdKeys; -use bitcoin_serai::bitcoin::{hashes::Hash, TapTweakHash}; - use crate::{primitives::x_coord_to_even_point, scan::scanner}; pub(crate) struct KeyGenParams; @@ -12,39 +10,6 @@ impl key_gen::KeyGenParams for KeyGenParams { type ExternalNetworkCiphersuite = Secp256k1; fn tweak_keys(keys: &mut ThresholdKeys) { - /* - Offset the keys by their hash to prevent a malicious participant from inserting a script - path, as specified in - https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki#cite_note-23 - - This isn't exactly the same, as we then increment the key until it happens to be even, yet - the goal is simply that someone who biases the key-gen can't insert their own script path. - By adding the hash of the key to the key, anyone who attempts such bias will change the key - used (changing the bias necessary). - - This is also potentially unnecessary for Serai, which uses an eVRF-based DKG. While that can - be biased (by manipulating who participates as we use it robustly and only require `t` - participants), contributions cannot be arbitrarily defined. That presumably requires - performing a search of the possible keys for some collision with 2**128 work. It's better to - offset regardless and avoid this question however. - */ - { - use k256::elliptic_curve::{ - bigint::{Encoding, U256}, - ops::Reduce, - }; - let tweak_hash = TapTweakHash::hash(&keys.group_key().to_bytes().as_slice()[1 ..]); - /* - https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki#cite_ref-13-0 states how the - bias is negligible. This reduction shouldn't ever occur, yet if it did, the script path - would be unusable due to a check the script path hash is less than the order. That doesn't - impact us as we don't want the script path to be usable. 
-      */
-      *keys = keys.offset(<Secp256k1 as Ciphersuite>::F::reduce(U256::from_be_bytes(
-        *tweak_hash.to_raw_hash().as_ref(),
-      )));
-    }
-
     *keys = bitcoin_serai::wallet::tweak_keys(keys);
     // Also create a scanner to assert these keys, and all expected paths, are usable
     scanner(keys.group_key());
diff --git a/processor/bitcoin/src/primitives/transaction.rs b/processor/bitcoin/src/primitives/transaction.rs
index 8e7a26f6..9b81d2f0 100644
--- a/processor/bitcoin/src/primitives/transaction.rs
+++ b/processor/bitcoin/src/primitives/transaction.rs
@@ -2,7 +2,6 @@ use std::io;
 
 use rand_core::{RngCore, CryptoRng};
 
-use transcript::{Transcript, RecommendedTranscript};
 use ciphersuite::Secp256k1;
 use frost::{dkg::ThresholdKeys, sign::PreprocessMachine};
 
@@ -81,7 +80,7 @@ impl PreprocessMachine for ClonableTransctionMachine {
       .0
       .signable()
       .expect("signing an invalid SignableTransaction")
-      .multisig(&self.1, RecommendedTranscript::new(b"Serai Processor Bitcoin Transaction"))
+      .multisig(&self.1)
       .expect("incorrect keys used for SignableTransaction")
       .preprocess(rng)
   }
From 1b1aa74770f1ed6a7ce795379e9516b78c0022a3 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Thu, 19 Sep 2024 23:23:41 -0400
Subject: [PATCH 167/368] Correct forge fmt config

---
 .github/workflows/lint.yml                      | 2 +-
 networks/ethereum/schnorr/contracts/Schnorr.sol | 6 +-----
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml
index 63a67649..b994a3cb 100644
--- a/.github/workflows/lint.yml
+++ b/.github/workflows/lint.yml
@@ -80,7 +80,7 @@ jobs:
           cache: false
 
       - name: Run forge fmt
         run: FOUNDRY_FMT_SORT_INPUTS=false FOUNDRY_FMT_LINE_LENGTH=100 FOUNDRY_FMT_TAB_WIDTH=2 FOUNDRY_FMT_BRACKET_SPACING=true FOUNDRY_FMT_INT_TYPES=preserve forge fmt --check $(find . -iname "*.sol")
 
   machete:
     runs-on: ubuntu-latest
diff --git a/networks/ethereum/schnorr/contracts/Schnorr.sol b/networks/ethereum/schnorr/contracts/Schnorr.sol
index 247e0fbe..7405051a 100644
--- a/networks/ethereum/schnorr/contracts/Schnorr.sol
+++ b/networks/ethereum/schnorr/contracts/Schnorr.sol
@@ -15,11 +15,7 @@ library Schnorr {
   // message := the message signed
   // c := Schnorr signature challenge
   // s := Schnorr signature solution
-  function verify(bytes32 px, bytes32 message, bytes32 c, bytes32 s)
-    internal
-    pure
-    returns (bool)
-  {
+  function verify(bytes32 px, bytes32 message, bytes32 c, bytes32 s) internal pure returns (bool) {
     // ecrecover = (m, v, r, s) -> key
     // We instead pass the following to obtain the nonce (not the key)
     // Then we hash it and verify it matches the challenge
From 8ea5acbacb562e0b2afd608bd70a1247e3b4abb5 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Thu, 19 Sep 2024 23:24:20 -0400
Subject: [PATCH 168/368] Update the Router smart contract to pay fees to the
 caller

The caller is paid a fixed fee per unit of gas spent. That arguably
incentivizes the publisher to raise the gas used by internal calls, yet this
doesn't affect the user UX as they'll have flatly paid the worst-case fee
already.

It does pose a risk where callers are arguably incentivized to cause
transaction failures which consume all the gas, not just increased gas, yet:
It does pose a risk where callers are arguably incentivized to cause transaction failures which consume all the gas, not just increased gas, yet: 1) Modern smart contracts don't error by consuming all the gas 2) This is presumably infeasible 3) Even if it was feasible, the gas fees gained presumably exceed the gas fees spent causing the failure The benefit to only paying the callers for the gas used, not the gas alotted, is it allows Serai to build up a buffer. While this should be minor, a few cents on every transaction at best, if we ever do have any costs slip through the cracks, it ideally is sufficient to handle those. --- .../ethereum/router/contracts/Router.sol | 110 +++++++++++------- processor/ethereum/router/src/lib.rs | 55 ++++++--- 2 files changed, 104 insertions(+), 61 deletions(-) diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index bc0debde..a74f8257 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -27,14 +27,13 @@ contract Router { } struct CodeDestination { - uint32 gas; + uint32 gas_limit; bytes code; } struct OutInstruction { DestinationType destinationType; bytes destination; - address coin; uint256 value; } @@ -79,7 +78,8 @@ contract Router { { // This DST needs a length prefix as well to prevent DSTs potentially being substrings of each // other, yet this fine for our very well-defined, limited use - bytes32 message = keccak256(abi.encodePacked("updateSeraiKey", block.chainid, _nonce, newSeraiKey)); + bytes32 message = + keccak256(abi.encodePacked("updateSeraiKey", block.chainid, _nonce, newSeraiKey)); _nonce++; if (!Schnorr.verify(_seraiKey, message, signature.c, signature.s)) { @@ -89,9 +89,7 @@ contract Router { function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable { if (coin == address(0)) { - if (amount != msg.value) { - revert InvalidAmount(); - } + if (amount != msg.value) revert InvalidAmount(); } else { (bool success, bytes memory res) = address(coin).call( abi.encodeWithSelector(IERC20.transferFrom.selector, msg.sender, address(this), amount) @@ -100,32 +98,30 @@ contract Router { // Require there was nothing returned, which is done by some non-standard tokens, or that the // ERC20 contract did in fact return true bool nonStandardResOrTrue = (res.length == 0) || abi.decode(res, (bool)); - if (!(success && nonStandardResOrTrue)) { - revert FailedTransfer(); - } + if (!(success && nonStandardResOrTrue)) revert FailedTransfer(); } /* - Due to fee-on-transfer tokens, emitting the amount directly is frowned upon. The amount - instructed to be transferred may not actually be the amount transferred. + Due to fee-on-transfer tokens, emitting the amount directly is frowned upon. The amount + instructed to be transferred may not actually be the amount transferred. - If we add nonReentrant to every single function which can effect the balance, we can check the - amount exactly matches. This prevents transfers of less value than expected occurring, at - least, not without an additional transfer to top up the difference (which isn't routed through - this contract and accordingly isn't trying to artificially create events from this contract). + If we add nonReentrant to every single function which can effect the balance, we can check the + amount exactly matches. 
This prevents transfers of less value than expected occurring, at + least, not without an additional transfer to top up the difference (which isn't routed through + this contract and accordingly isn't trying to artificially create events from this contract). - If we don't add nonReentrant, a transfer can be started, and then a new transfer for the - difference can follow it up (again and again until a rounding error is reached). This contract - would believe all transfers were done in full, despite each only being done in part (except - for the last one). + If we don't add nonReentrant, a transfer can be started, and then a new transfer for the + difference can follow it up (again and again until a rounding error is reached). This contract + would believe all transfers were done in full, despite each only being done in part (except + for the last one). - Given fee-on-transfer tokens aren't intended to be supported, the only token actively planned - to be supported is Dai and it doesn't have any fee-on-transfer logic, and how fee-on-transfer - tokens aren't even able to be supported at this time by the larger Serai network, we simply - classify this entire class of tokens as non-standard implementations which induce undefined - behavior. + Given fee-on-transfer tokens aren't intended to be supported, the only token actively planned + to be supported is Dai and it doesn't have any fee-on-transfer logic, and how fee-on-transfer + tokens aren't even able to be supported at this time by the larger Serai network, we simply + classify this entire class of tokens as non-standard implementations which induce undefined + behavior. - It is the Serai network's role not to add support for any non-standard implementations. + It is the Serai network's role not to add support for any non-standard implementations. */ emit InInstruction(msg.sender, coin, amount, instruction); } @@ -133,13 +129,13 @@ contract Router { // Perform a transfer out function _transferOut(address to, address coin, uint256 value) private { /* - We on purposely do not check if these calls succeed. A call either succeeded, and there's no - problem, or the call failed due to: - A) An insolvency - B) A malicious receiver - C) A non-standard token - A is an invariant, B should be dropped, C is something out of the control of this contract. - It is again the Serai's network role to not add support for any non-standard tokens, + We on purposely do not check if these calls succeed. A call either succeeded, and there's no + problem, or the call failed due to: + A) An insolvency + B) A malicious receiver + C) A non-standard token + A is an invariant, B should be dropped, C is something out of the control of this contract. + It is again the Serai's network role to not add support for any non-standard tokens, */ if (coin == address(0)) { // Enough gas to service the transfer and a minimal amount of logic @@ -151,9 +147,9 @@ contract Router { } /* - Serai supports arbitrary calls out via deploying smart contracts (with user-specified code), - letting them execute whatever calls they're coded for. Since we can't meter CREATE, we call - CREATE from this function which we call not internally, but with CALL (which we can meter). + Serai supports arbitrary calls out via deploying smart contracts (with user-specified code), + letting them execute whatever calls they're coded for. Since we can't meter CREATE, we call + CREATE from this function which we call not internally, but with CALL (which we can meter). 
*/ function arbitaryCallOut(bytes memory code) external { // Because we're creating a contract, increment our nonce @@ -166,12 +162,20 @@ contract Router { } // Execute a list of transactions if they were signed by the current key with the current nonce - function execute(OutInstruction[] calldata transactions, Signature calldata signature) external { + function execute( + address coin, + uint256 fee_per_gas, + OutInstruction[] calldata transactions, + Signature calldata signature + ) external { + uint256 gasLeftAtStart = gasleft(); + // Verify the signature // We hash the message here as we need the message's hash for the Executed event // Since we're already going to hash it, hashing it prior to verifying the signature reduces the // amount of words hashed by its challenge function (reducing our gas costs) - bytes32 message = keccak256(abi.encode("execute", block.chainid, _nonce, transactions)); + bytes32 message = + keccak256(abi.encode("execute", block.chainid, _nonce, coin, fee_per_gas, transactions)); if (!Schnorr.verify(_seraiKey, message, signature.c, signature.s)) { revert InvalidSignature(); } @@ -187,8 +191,9 @@ contract Router { if (transactions[i].destinationType == DestinationType.Address) { // This may cause a panic and the contract to become stuck if the destination isn't actually // 20 bytes. Serai is trusted to not pass a malformed destination - (AddressDestination memory destination) = abi.decode(transactions[i].destination, (AddressDestination)); - _transferOut(destination.destination, transactions[i].coin, transactions[i].value); + (AddressDestination memory destination) = + abi.decode(transactions[i].destination, (AddressDestination)); + _transferOut(destination.destination, coin, transactions[i].value); } else { // The destination is a piece of initcode. We calculate the hash of the will-be contract, // transfer to it, and then run the initcode @@ -196,15 +201,36 @@ contract Router { address(uint160(uint256(keccak256(abi.encode(address(this), _smartContractNonce))))); // Perform the transfer - _transferOut(nextAddress, transactions[i].coin, transactions[i].value); + _transferOut(nextAddress, coin, transactions[i].value); // Perform the calls with a set gas budget - (CodeDestination memory destination) = abi.decode(transactions[i].destination, (CodeDestination)); - address(this).call{ gas: destination.gas }( + (CodeDestination memory destination) = + abi.decode(transactions[i].destination, (CodeDestination)); + address(this).call{ gas: destination.gas_limit }( abi.encodeWithSelector(Router.arbitaryCallOut.selector, destination.code) ); } } + + // Calculate the gas which will be used to transfer the fee out + // This is meant to be always over, never under, with any excess being a tip to the publisher + uint256 gasToTransferOut; + if (coin == address(0)) { + // 5,000 gas is explicitly allowed, with another 10,000 for whatever overhead remains + // unaccounted for + gasToTransferOut = 15_000; + } else { + // 100_000 gas is explicitly allowed, with another 15,000 for whatever overhead remains + // unaccounted for. 
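+    //
+    // Illustrative arithmetic (assumed numbers, not consensus constants): if an ERC20 batch
+    // consumed 250,000 gas, the caller is credited (250_000 + 115_000) * fee_per_gas units of
+    // the coin being transferred out, fee_per_gas having been fixed by the signed batch itself.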
More gas is given than for ETH due to needing to ABI encode the function + // call + gasToTransferOut = 115_000; + } + + // Calculate the gas used + uint256 gasLeftAtEnd = gasleft(); + uint256 gasUsed = gasLeftAtStart - gasLeftAtEnd; + // Transfer to the caller the fee + _transferOut(msg.sender, coin, (gasUsed + gasToTransferOut) * fee_per_gas); } function nonce() external view returns (uint256) { diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index d56c514f..32fcc449 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -47,7 +47,7 @@ impl From<&Signature> for abi::Signature { } /// A coin on Ethereum. -#[derive(Clone, PartialEq, Eq, Debug)] +#[derive(Clone, Copy, PartialEq, Eq, Debug)] pub enum Coin { /// Ether, the native coin of Ethereum. Ether, @@ -56,6 +56,14 @@ pub enum Coin { } impl Coin { + fn address(&self) -> Address { + (match self { + Coin::Ether => [0; 20], + Coin::Erc20(address) => *address, + }) + .into() + } + /// Read a `Coin`. pub fn read(reader: &mut R) -> io::Result { let mut kind = [0xff]; @@ -152,12 +160,12 @@ impl InInstruction { /// A list of `OutInstruction`s. #[derive(Clone)] pub struct OutInstructions(Vec); -impl From<&[(SeraiAddress, (Coin, U256))]> for OutInstructions { - fn from(outs: &[(SeraiAddress, (Coin, U256))]) -> Self { +impl From<&[(SeraiAddress, U256)]> for OutInstructions { + fn from(outs: &[(SeraiAddress, U256)]) -> Self { Self( outs .iter() - .map(|(address, (coin, amount))| { + .map(|(address, amount)| { #[allow(non_snake_case)] let (destinationType, destination) = match address { SeraiAddress::Address(address) => ( @@ -166,19 +174,14 @@ impl From<&[(SeraiAddress, (Coin, U256))]> for OutInstructions { ), SeraiAddress::Contract(contract) => ( abi::DestinationType::Code, - (abi::CodeDestination { gas: contract.gas(), code: contract.code().to_vec().into() }) - .abi_encode(), + (abi::CodeDestination { + gas_limit: contract.gas_limit(), + code: contract.code().to_vec().into(), + }) + .abi_encode(), ), }; - abi::OutInstruction { - destinationType, - destination: destination.into(), - coin: match coin { - Coin::Ether => [0; 20].into(), - Coin::Erc20(address) => address.into(), - }, - value: *amount, - } + abi::OutInstruction { destinationType, destination: destination.into(), value: *amount } }) .collect(), ) @@ -318,17 +321,31 @@ impl Router { } /// Get the message to be signed in order to execute a series of `OutInstruction`s. - pub fn execute_message(chain_id: U256, nonce: u64, outs: OutInstructions) -> Vec { - ("execute", chain_id, U256::try_from(nonce).expect("couldn't convert u64 to u256"), outs.0) + pub fn execute_message( + chain_id: U256, + nonce: u64, + coin: Coin, + fee_per_gas: U256, + outs: OutInstructions, + ) -> Vec { + ("execute", chain_id, U256::try_from(nonce).unwrap(), coin.address(), fee_per_gas, outs.0) .abi_encode() } /// Construct a transaction to execute a batch of `OutInstruction`s. 
- pub fn execute(&self, outs: OutInstructions, sig: &Signature) -> TxLegacy { + pub fn execute( + &self, + coin: Coin, + fee_per_gas: U256, + outs: OutInstructions, + sig: &Signature, + ) -> TxLegacy { let outs_len = outs.0.len(); TxLegacy { to: TxKind::Call(self.1), - input: abi::executeCall::new((outs.0, sig.into())).abi_encode().into(), + input: abi::executeCall::new((coin.address(), fee_per_gas, outs.0, sig.into())) + .abi_encode() + .into(), // TODO gas_limit: 100_000 + ((200_000 + 10_000) * u128::try_from(outs_len).unwrap()), ..Default::default() From 4292660edaa425d6d1b2693728d178f3891a061b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 20 Sep 2024 00:12:54 -0400 Subject: [PATCH 169/368] Have the Ethereum scheduler create Batches as necessary Also introduces the fee logic, despite it being stubbed. --- .../ethereum/src/primitives/transaction.rs | 38 ++++---- processor/ethereum/src/publisher.rs | 4 +- processor/ethereum/src/scheduler.rs | 94 ++++++++++++++++--- substrate/client/src/networks/ethereum.rs | 41 +++++--- 4 files changed, 132 insertions(+), 45 deletions(-) diff --git a/processor/ethereum/src/primitives/transaction.rs b/processor/ethereum/src/primitives/transaction.rs index 52595375..67f17d31 100644 --- a/processor/ethereum/src/primitives/transaction.rs +++ b/processor/ethereum/src/primitives/transaction.rs @@ -18,7 +18,7 @@ use crate::{output::OutputId, machine::ClonableTransctionMachine}; #[derive(Clone, PartialEq, Debug)] pub(crate) enum Action { SetKey { chain_id: U256, nonce: u64, key: PublicKey }, - Batch { chain_id: U256, nonce: u64, outs: Vec<(Address, (Coin, U256))> }, + Batch { chain_id: U256, nonce: u64, coin: Coin, fee_per_gas: U256, outs: Vec<(Address, U256)> }, } #[derive(Clone, PartialEq, Eq, Debug)] @@ -36,9 +36,13 @@ impl Action { Action::SetKey { chain_id, nonce, key } => { Router::update_serai_key_message(*chain_id, *nonce, key) } - Action::Batch { chain_id, nonce, outs } => { - Router::execute_message(*chain_id, *nonce, OutInstructions::from(outs.as_ref())) - } + Action::Batch { chain_id, nonce, coin, fee_per_gas, outs } => Router::execute_message( + *chain_id, + *nonce, + *coin, + *fee_per_gas, + OutInstructions::from(outs.as_ref()), + ), } } @@ -47,13 +51,9 @@ impl Action { Self::SetKey { chain_id: _, nonce, key } => { Executed::SetKey { nonce: *nonce, key: key.eth_repr() } } - Self::Batch { chain_id, nonce, outs } => Executed::Batch { + Self::Batch { nonce, .. } => Executed::Batch { nonce: *nonce, - message_hash: keccak256(Router::execute_message( - *chain_id, - *nonce, - OutInstructions::from(outs.as_ref()), - )), + message_hash: keccak256(self.message()), }, }) } @@ -104,6 +104,12 @@ impl SignableTransaction for Action { Action::SetKey { chain_id, nonce, key } } 1 => { + let coin = Coin::read(reader)?; + + let mut fee_per_gas = [0; 32]; + reader.read_exact(&mut fee_per_gas)?; + let fee_per_gas = U256::from_le_bytes(fee_per_gas); + let mut outs_len = [0; 4]; reader.read_exact(&mut outs_len)?; let outs_len = usize::try_from(u32::from_le_bytes(outs_len)).unwrap(); @@ -111,15 +117,14 @@ impl SignableTransaction for Action { let mut outs = vec![]; for _ in 0 .. 
outs_len { let address = borsh::from_reader(reader)?; - let coin = Coin::read(reader)?; let mut amount = [0; 32]; reader.read_exact(&mut amount)?; let amount = U256::from_le_bytes(amount); - outs.push((address, (coin, amount))); + outs.push((address, amount)); } - Action::Batch { chain_id, nonce, outs } + Action::Batch { chain_id, nonce, coin, fee_per_gas, outs } } _ => unreachable!(), }) @@ -132,14 +137,15 @@ impl SignableTransaction for Action { writer.write_all(&nonce.to_le_bytes())?; writer.write_all(&key.eth_repr()) } - Self::Batch { chain_id, nonce, outs } => { + Self::Batch { chain_id, nonce, coin, fee_per_gas, outs } => { writer.write_all(&[1])?; writer.write_all(&chain_id.as_le_bytes())?; writer.write_all(&nonce.to_le_bytes())?; + coin.write(writer)?; + writer.write_all(&fee_per_gas.as_le_bytes())?; writer.write_all(&u32::try_from(outs.len()).unwrap().to_le_bytes())?; - for (address, (coin, amount)) in outs { + for (address, amount) in outs { borsh::BorshSerialize::serialize(address, writer)?; - coin.write(writer)?; writer.write_all(&amount.as_le_bytes())?; } Ok(()) diff --git a/processor/ethereum/src/publisher.rs b/processor/ethereum/src/publisher.rs index 4a62bad7..a49ea67f 100644 --- a/processor/ethereum/src/publisher.rs +++ b/processor/ethereum/src/publisher.rs @@ -89,8 +89,8 @@ impl signers::TransactionPublisher for TransactionPublisher< // Convert from an Action (an internal representation of a signable event) to a TxLegacy let tx = match tx.0 { Action::SetKey { chain_id: _, nonce: _, key } => router.update_serai_key(&key, &tx.1), - Action::Batch { chain_id: _, nonce: _, outs } => { - router.execute(OutInstructions::from(outs.as_ref()), &tx.1) + Action::Batch { chain_id: _, nonce: _, coin, fee_per_gas, outs } => { + router.execute(coin, fee_per_gas, OutInstructions::from(outs.as_ref()), &tx.1) } }; diff --git a/processor/ethereum/src/scheduler.rs b/processor/ethereum/src/scheduler.rs index 55e091fc..5a3fd428 100644 --- a/processor/ethereum/src/scheduler.rs +++ b/processor/ethereum/src/scheduler.rs @@ -1,6 +1,11 @@ +use std::collections::HashMap; + use alloy_core::primitives::U256; -use serai_client::primitives::{NetworkId, Coin, Balance}; +use serai_client::{ + primitives::{NetworkId, Coin, Balance}, + networks::ethereum::Address, +}; use serai_db::Db; @@ -53,27 +58,86 @@ impl smart_contract_scheduler::SmartContract> for SmartContract { fn fulfill( &self, - nonce: u64, + mut nonce: u64, _key: KeyFor>, payments: Vec>>>, ) -> Vec<(Self::SignableTransaction, EventualityFor>)> { - let mut outs = Vec::with_capacity(payments.len()); + // Sort by coin + let mut outs = HashMap::<_, _>::new(); for payment in payments { - outs.push(( - payment.address().clone(), - ( - coin_to_ethereum_coin(payment.balance().coin), - balance_to_ethereum_amount(payment.balance()), - ), - )); + let coin = payment.balance().coin; + outs + .entry(coin) + .or_insert_with(|| Vec::with_capacity(1)) + .push((payment.address().clone(), balance_to_ethereum_amount(payment.balance()))); } - // TODO: Per-batch gas limit - // TODO: Create several batches - // TODO: Handle fees - let action = Action::Batch { chain_id: self.chain_id, nonce, outs }; + let mut res = vec![]; + for coin in [Coin::Ether, Coin::Dai] { + let Some(outs) = outs.remove(&coin) else { continue }; + assert!(!outs.is_empty()); - vec![(action.clone(), action.eventuality())] + let fee_per_gas: U256 = todo!("TODO"); + + // The gas required to perform any interaction with the Router. 
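+      // (Illustrative arithmetic with made-up constants, not the real values below: were
+      // BASE_GAS 50_000 and ADDRESS_PAYMENT_GAS 35_000, a batch of three address payments
+      // would be budgeted 50_000 + (3 * 35_000) = 155_000 gas, and a new batch would only be
+      // started once this running total would exceed BATCH_GAS_LIMIT.)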
+      const BASE_GAS: u32 = 0; // TODO
+
+      // The gas required to handle an additional payment to an address, in the worst case.
+      const ADDRESS_PAYMENT_GAS: u32 = 0; // TODO
+
+      // The gas required to handle an additional payment to a smart contract, in the worst case.
+      // This does not include the explicit gas budget defined within the address specification.
+      const CONTRACT_PAYMENT_GAS: u32 = 0; // TODO
+
+      // The maximum amount of gas for a batch.
+      const BATCH_GAS_LIMIT: u32 = 10_000_000;
+
+      // Split these outs into batches, respecting BATCH_GAS_LIMIT
+      let mut batches = vec![vec![]];
+      let mut current_gas = BASE_GAS;
+      for out in outs {
+        let payment_gas = match out.0 {
+          Address::Address(_) => ADDRESS_PAYMENT_GAS,
+          Address::Contract(deployment) => CONTRACT_PAYMENT_GAS + deployment.gas_limit(),
+        };
+        if (current_gas + payment_gas) > BATCH_GAS_LIMIT {
+          assert!(!batches.last().unwrap().is_empty());
+          batches.push(vec![]);
+          current_gas = BASE_GAS;
+        }
+        batches.last_mut().unwrap().push(out);
+        current_gas += payment_gas;
+      }
+
+      // Push each batch onto the result
+      for outs in batches {
+        let base_gas = BASE_GAS.div_ceil(u32::try_from(outs.len()).unwrap());
+        // Deduct the fee from each out
+        for out in &mut outs {
+          let payment_gas = base_gas +
+            match out.0 {
+              Address::Address(_) => ADDRESS_PAYMENT_GAS,
+              Address::Contract(deployment) => CONTRACT_PAYMENT_GAS + deployment.gas_limit(),
+            };
+
+          let payment_gas_cost = fee_per_gas * U256::try_from(payment_gas).unwrap();
+          out.1 -= payment_gas_cost;
+        }
+
+        res.push(Action::Batch {
+          chain_id: self.chain_id,
+          nonce,
+          coin: coin_to_ethereum_coin(coin),
+          fee_per_gas,
+          outs,
+        });
+        nonce += 1;
+      }
+    }
+    // Ensure we handled all payments we're supposed to
+    assert!(outs.is_empty());
+
+    res.into_iter().map(|action| (action.clone(), action.eventuality())).collect()
   }
 }
diff --git a/substrate/client/src/networks/ethereum.rs b/substrate/client/src/networks/ethereum.rs
index ddf15480..47b58af5 100644
--- a/substrate/client/src/networks/ethereum.rs
+++ b/substrate/client/src/networks/ethereum.rs
@@ -5,13 +5,18 @@ use borsh::{BorshSerialize, BorshDeserialize};
 
 use crate::primitives::{MAX_ADDRESS_LEN, ExternalAddress};
 
+/// The maximum amount of gas an address is allowed to specify as its gas limit.
+///
+/// Payments to an address with a gas limit which exceeds this value will be dropped entirely.
+pub const ADDRESS_GAS_LIMIT: u32 = 950_000;
+
 #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
 pub struct ContractDeployment {
   /// The gas limit to use for this contract's execution.
   ///
   /// This MUST be less than the Serai gas limit. The cost of it will be deducted from the amount
   /// transferred.
-  gas: u32,
+  gas_limit: u32,
   /// The initialization code of the contract to deploy.
   ///
   /// This contract will be deployed (executing the initialization code). No further calls will
@@ -21,17 +26,23 @@ /// A contract to deploy, enabling executing arbitrary code.
 impl ContractDeployment {
-  pub fn new(gas: u32, code: Vec<u8>) -> Option<Self> {
+  pub fn new(gas_limit: u32, code: Vec<u8>) -> Option<Self> {
+    // Check the gas limit is less than the address gas limit
+    if gas_limit > ADDRESS_GAS_LIMIT {
+      None?;
+    }
+
     // The max address length, minus the type byte, minus the size of the gas
     const MAX_CODE_LEN: usize = (MAX_ADDRESS_LEN as usize) - (1 + core::mem::size_of::<u32>());
     if code.len() > MAX_CODE_LEN {
       None?;
     }
-    Some(Self { gas, code })
+
+    Some(Self { gas_limit, code })
   }
 
-  pub fn gas(&self) -> u32 {
-    self.gas
+  pub fn gas_limit(&self) -> u32 {
+    self.gas_limit
   }
   pub fn code(&self) -> &[u8] {
     &self.code
@@ -66,12 +77,18 @@ impl TryFrom<ExternalAddress> for Address {
         Address::Address(address)
       }
       1 => {
-        let mut gas = [0xff; 4];
-        reader.read_exact(&mut gas).map_err(|_| ())?;
-        // The code is whatever's left since the ExternalAddress is a delimited container of
-        // appropriately bounded length
+        let mut gas_limit = [0xff; 4];
+        reader.read_exact(&mut gas_limit).map_err(|_| ())?;
         Address::Contract(ContractDeployment {
-          gas: u32::from_le_bytes(gas),
+          gas_limit: {
+            let gas_limit = u32::from_le_bytes(gas_limit);
+            if gas_limit > ADDRESS_GAS_LIMIT {
+              Err(())?;
+            }
+            gas_limit
+          },
+          // The code is whatever's left since the ExternalAddress is a delimited container of
+          // appropriately bounded length
           code: reader.to_vec(),
         })
      }
@@ -87,9 +104,9 @@ impl From<Address>
for ExternalAddress { res.push(0); res.extend(&address); } - Address::Contract(ContractDeployment { gas, code }) => { + Address::Contract(ContractDeployment { gas_limit, code }) => { res.push(1); - res.extend(&gas.to_le_bytes()); + res.extend(&gas_limit.to_le_bytes()); res.extend(&code); } } From ec9211fd848fb5837668d7abc0a42e65040a234b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 20 Sep 2024 00:15:08 -0400 Subject: [PATCH 170/368] Remove accidentally included bitcoin feature from processor-bin --- processor/bin/Cargo.toml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/processor/bin/Cargo.toml b/processor/bin/Cargo.toml index f6da8b7c..116916ab 100644 --- a/processor/bin/Cargo.toml +++ b/processor/bin/Cargo.toml @@ -26,7 +26,7 @@ borsh = { version = "1", default-features = false, features = ["std", "derive", ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] } dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-ristretto"] } -serai-client = { path = "../../substrate/client", default-features = false, features = ["bitcoin"] } +serai-client = { path = "../../substrate/client", default-features = false } log = { version = "0.4", default-features = false, features = ["std"] } env_logger = { version = "0.10", default-features = false, features = ["humantime"] } From bc1bbf99514728252c3f18080153a59b6a53b319 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 20 Sep 2024 00:20:05 -0400 Subject: [PATCH 171/368] Set a fixed fee transferred to the caller for publication Avoids the risk of the gas used by the contract exceeding the gas presumed to be used (causing an insolvency). --- .../ethereum/router/contracts/Router.sol | 25 +++---------------- processor/ethereum/router/src/lib.rs | 17 +++---------- .../ethereum/src/primitives/transaction.rs | 25 +++++++++---------- processor/ethereum/src/publisher.rs | 4 +-- processor/ethereum/src/scheduler.rs | 11 +++++--- 5 files changed, 28 insertions(+), 54 deletions(-) diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index a74f8257..c4c038e2 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -164,18 +164,16 @@ contract Router { // Execute a list of transactions if they were signed by the current key with the current nonce function execute( address coin, - uint256 fee_per_gas, + uint256 fee, OutInstruction[] calldata transactions, Signature calldata signature ) external { - uint256 gasLeftAtStart = gasleft(); - // Verify the signature // We hash the message here as we need the message's hash for the Executed event // Since we're already going to hash it, hashing it prior to verifying the signature reduces the // amount of words hashed by its challenge function (reducing our gas costs) bytes32 message = - keccak256(abi.encode("execute", block.chainid, _nonce, coin, fee_per_gas, transactions)); + keccak256(abi.encode("execute", block.chainid, _nonce, coin, fee, transactions)); if (!Schnorr.verify(_seraiKey, message, signature.c, signature.s)) { revert InvalidSignature(); } @@ -212,25 +210,8 @@ contract Router { } } - // Calculate the gas which will be used to transfer the fee out - // This is meant to be always over, never under, with any excess being a tip to the publisher - uint256 gasToTransferOut; - if (coin == address(0)) { - // 5,000 gas is explicitly allowed, with another 10,000 for whatever overhead remains - // 
unaccounted for - gasToTransferOut = 15_000; - } else { - // 100_000 gas is explicitly allowed, with another 15,000 for whatever overhead remains - // unaccounted for. More gas is given than for ETH due to needing to ABI encode the function - // call - gasToTransferOut = 115_000; - } - - // Calculate the gas used - uint256 gasLeftAtEnd = gasleft(); - uint256 gasUsed = gasLeftAtStart - gasLeftAtEnd; // Transfer to the caller the fee - _transferOut(msg.sender, coin, (gasUsed + gasToTransferOut) * fee_per_gas); + _transferOut(msg.sender, coin, fee); } function nonce() external view returns (uint256) { diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index 32fcc449..248523b8 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -325,27 +325,18 @@ impl Router { chain_id: U256, nonce: u64, coin: Coin, - fee_per_gas: U256, + fee: U256, outs: OutInstructions, ) -> Vec { - ("execute", chain_id, U256::try_from(nonce).unwrap(), coin.address(), fee_per_gas, outs.0) - .abi_encode() + ("execute", chain_id, U256::try_from(nonce).unwrap(), coin.address(), fee, outs.0).abi_encode() } /// Construct a transaction to execute a batch of `OutInstruction`s. - pub fn execute( - &self, - coin: Coin, - fee_per_gas: U256, - outs: OutInstructions, - sig: &Signature, - ) -> TxLegacy { + pub fn execute(&self, coin: Coin, fee: U256, outs: OutInstructions, sig: &Signature) -> TxLegacy { let outs_len = outs.0.len(); TxLegacy { to: TxKind::Call(self.1), - input: abi::executeCall::new((coin.address(), fee_per_gas, outs.0, sig.into())) - .abi_encode() - .into(), + input: abi::executeCall::new((coin.address(), fee, outs.0, sig.into())).abi_encode().into(), // TODO gas_limit: 100_000 + ((200_000 + 10_000) * u128::try_from(outs_len).unwrap()), ..Default::default() diff --git a/processor/ethereum/src/primitives/transaction.rs b/processor/ethereum/src/primitives/transaction.rs index 67f17d31..6730e7a9 100644 --- a/processor/ethereum/src/primitives/transaction.rs +++ b/processor/ethereum/src/primitives/transaction.rs @@ -18,7 +18,7 @@ use crate::{output::OutputId, machine::ClonableTransctionMachine}; #[derive(Clone, PartialEq, Debug)] pub(crate) enum Action { SetKey { chain_id: U256, nonce: u64, key: PublicKey }, - Batch { chain_id: U256, nonce: u64, coin: Coin, fee_per_gas: U256, outs: Vec<(Address, U256)> }, + Batch { chain_id: U256, nonce: u64, coin: Coin, fee: U256, outs: Vec<(Address, U256)> }, } #[derive(Clone, PartialEq, Eq, Debug)] @@ -36,11 +36,11 @@ impl Action { Action::SetKey { chain_id, nonce, key } => { Router::update_serai_key_message(*chain_id, *nonce, key) } - Action::Batch { chain_id, nonce, coin, fee_per_gas, outs } => Router::execute_message( + Action::Batch { chain_id, nonce, coin, fee, outs } => Router::execute_message( *chain_id, *nonce, *coin, - *fee_per_gas, + *fee, OutInstructions::from(outs.as_ref()), ), } @@ -51,10 +51,9 @@ impl Action { Self::SetKey { chain_id: _, nonce, key } => { Executed::SetKey { nonce: *nonce, key: key.eth_repr() } } - Self::Batch { nonce, .. } => Executed::Batch { - nonce: *nonce, - message_hash: keccak256(self.message()), - }, + Self::Batch { nonce, .. 
} => {
+        Executed::Batch { nonce: *nonce, message_hash: keccak256(self.message()) }
+      }
     })
   }
 }
@@ -106,9 +105,9 @@ impl SignableTransaction for Action {
       1 => {
         let coin = Coin::read(reader)?;
 
-        let mut fee_per_gas = [0; 32];
-        reader.read_exact(&mut fee_per_gas)?;
-        let fee_per_gas = U256::from_le_bytes(fee_per_gas);
+        let mut fee = [0; 32];
+        reader.read_exact(&mut fee)?;
+        let fee = U256::from_le_bytes(fee);
 
         let mut outs_len = [0; 4];
         reader.read_exact(&mut outs_len)?;
@@ -124,7 +123,7 @@
           outs.push((address, amount));
         }
 
-        Action::Batch { chain_id, nonce, coin, fee_per_gas, outs }
+        Action::Batch { chain_id, nonce, coin, fee, outs }
       }
       _ => unreachable!(),
     })
@@ -137,12 +136,12 @@
         writer.write_all(&nonce.to_le_bytes())?;
         writer.write_all(&key.eth_repr())
       }
-      Self::Batch { chain_id, nonce, coin, fee_per_gas, outs } => {
+      Self::Batch { chain_id, nonce, coin, fee, outs } => {
         writer.write_all(&[1])?;
         writer.write_all(&chain_id.as_le_bytes())?;
         writer.write_all(&nonce.to_le_bytes())?;
         coin.write(writer)?;
-        writer.write_all(&fee_per_gas.as_le_bytes())?;
+        writer.write_all(&fee.as_le_bytes())?;
         writer.write_all(&u32::try_from(outs.len()).unwrap().to_le_bytes())?;
         for (address, amount) in outs {
           borsh::BorshSerialize::serialize(address, writer)?;
diff --git a/processor/ethereum/src/publisher.rs b/processor/ethereum/src/publisher.rs
index a49ea67f..3d18a6ef 100644
--- a/processor/ethereum/src/publisher.rs
+++ b/processor/ethereum/src/publisher.rs
@@ -89,8 +89,8 @@ impl signers::TransactionPublisher for TransactionPublisher<
     // Convert from an Action (an internal representation of a signable event) to a TxLegacy
     let tx = match tx.0 {
       Action::SetKey { chain_id: _, nonce: _, key } => router.update_serai_key(&key, &tx.1),
-      Action::Batch { chain_id: _, nonce: _, coin, fee_per_gas, outs } => {
-        router.execute(coin, fee_per_gas, OutInstructions::from(outs.as_ref()), &tx.1)
+      Action::Batch { chain_id: _, nonce: _, coin, fee, outs } => {
+        router.execute(coin, fee, OutInstructions::from(outs.as_ref()), &tx.1)
       }
     };
 
diff --git a/processor/ethereum/src/scheduler.rs b/processor/ethereum/src/scheduler.rs
index 5a3fd428..f4c31ec6 100644
--- a/processor/ethereum/src/scheduler.rs
+++ b/processor/ethereum/src/scheduler.rs
@@ -111,16 +111,19 @@ impl smart_contract_scheduler::SmartContract> for SmartContract {
       // Push each batch onto the result
       for outs in batches {
-        let base_gas = BASE_GAS.div_ceil(u32::try_from(outs.len()).unwrap());
+        let mut total_gas = 0;
+
+        let base_gas_per_payment = BASE_GAS.div_ceil(u32::try_from(outs.len()).unwrap());
         // Deduct the fee from each out
         for out in &mut outs {
-          let payment_gas = base_gas +
+          let payment_gas = base_gas_per_payment +
             match out.0 {
               Address::Address(_) => ADDRESS_PAYMENT_GAS,
               Address::Contract(deployment) => CONTRACT_PAYMENT_GAS + deployment.gas_limit(),
             };
+          total_gas += payment_gas;
 
-          let payment_gas_cost = fee_per_gas * U256::try_from(payment_gas).unwrap();
+          let payment_gas_cost = U256::try_from(payment_gas).unwrap() * fee_per_gas;
           out.1 -= payment_gas_cost;
         }
 
@@ -128,7 +131,7 @@
           chain_id: self.chain_id,
           nonce,
           coin: coin_to_ethereum_coin(coin),
-          fee_per_gas,
+          fee: U256::try_from(total_gas).unwrap() * fee_per_gas,
           outs,
         });
         nonce += 1;

From 702b4c860ca177fe59e3240f1a49deb663cbc21b Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Fri, 20 Sep 2024 00:55:03 -0400
Subject: [PATCH 172/368] Add dummy fee values to the scheduler
---
 processor/ethereum/src/rpc.rs       |  9 +++------
 processor/ethereum/src/scheduler.rs | 18 ++++++++++++++----
 2 files changed, 17 insertions(+), 10 deletions(-)

diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs
index 0769c5c3..1eaa4988 100644
--- a/processor/ethereum/src/rpc.rs
+++ b/processor/ethereum/src/rpc.rs
@@ -18,7 +18,7 @@ use ethereum_erc20::{TopLevelTransfer, Erc20};
 use ethereum_router::{Coin as EthereumCoin, InInstruction as EthereumInInstruction, Router};
 
 use crate::{
-  TOKENS, InitialSeraiKey,
+  TOKENS, ETHER_DUST, DAI_DUST, InitialSeraiKey,
   block::{Epoch, FullEpoch},
 };
 
@@ -207,12 +207,9 @@ impl ScannerFeed for Rpc {
   fn dust(coin: Coin) -> Amount {
     assert_eq!(coin.network(), NetworkId::Ethereum);
 
-    #[allow(clippy::inconsistent_digit_grouping)]
     match coin {
-      // 5 USD if Ether is ~3300 USD
-      Coin::Ether => Amount(1_500_00),
-      // 5 DAI
-      Coin::Dai => Amount(5_000_000_00),
+      Coin::Ether => ETHER_DUST,
+      Coin::Dai => DAI_DUST,
       _ => unreachable!(),
     }
   }
diff --git a/processor/ethereum/src/scheduler.rs b/processor/ethereum/src/scheduler.rs
index f4c31ec6..e8a437c1 100644
--- a/processor/ethereum/src/scheduler.rs
+++ b/processor/ethereum/src/scheduler.rs
@@ -77,7 +77,17 @@ impl smart_contract_scheduler::SmartContract> for SmartContract {
       let Some(outs) = outs.remove(&coin) else { continue };
       assert!(!outs.is_empty());
 
-      let fee_per_gas: U256 = todo!("TODO");
+      let fee_per_gas = match coin {
+        // 10 gwei
+        Coin::Ether => {
+          U256::try_from(10u64).unwrap() * alloy_core::primitives::utils::Unit::GWEI.wei()
+        }
+        // 0.00003 DAI (30 terawei)
+        Coin::Dai => {
+          U256::try_from(30u64).unwrap() * alloy_core::primitives::utils::Unit::TWEI.wei()
+        }
+        _ => unreachable!(),
+      };
 
       // The gas required to perform any interaction with the Router.
      const BASE_GAS: u32 = 0; // TODO
@@ -96,7 +106,7 @@ impl smart_contract_scheduler::SmartContract> for SmartContract {
       let mut batches = vec![vec![]];
       let mut current_gas = BASE_GAS;
       for out in outs {
-        let payment_gas = match out.0 {
+        let payment_gas = match &out.0 {
           Address::Address(_) => ADDRESS_PAYMENT_GAS,
           Address::Contract(deployment) => CONTRACT_PAYMENT_GAS + deployment.gas_limit(),
         };
@@ -110,14 +120,14 @@ impl smart_contract_scheduler::SmartContract> for SmartContract {
       // Push each batch onto the result
-      for outs in batches {
+      for mut outs in batches {
         let mut total_gas = 0;
 
         let base_gas_per_payment = BASE_GAS.div_ceil(u32::try_from(outs.len()).unwrap());
         // Deduct the fee from each out
         for out in &mut outs {
           let payment_gas = base_gas_per_payment +
-            match out.0 {
+            match &out.0 {
               Address::Address(_) => ADDRESS_PAYMENT_GAS,
               Address::Contract(deployment) => CONTRACT_PAYMENT_GAS + deployment.gas_limit(),
             };

From 1e1b821d3461c1d73c23b7c29ba54a81e495b41a Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Fri, 20 Sep 2024 00:55:21 -0400
Subject: [PATCH 173/368] Report a Change Output with every Eventuality to
 ensure we don't fall out of synchrony

---
 processor/ethereum/src/primitives/block.rs  |  36 +++++-
 processor/ethereum/src/primitives/mod.rs    |   9 ++
 processor/ethereum/src/primitives/output.rs | 116 +++++++++++++++-----
 processor/scanner/src/lib.rs                |   4 +-
 4 files changed, 132 insertions(+), 33 deletions(-)

diff --git a/processor/ethereum/src/primitives/block.rs b/processor/ethereum/src/primitives/block.rs
index 723e099d..780837fa 100644
--- a/processor/ethereum/src/primitives/block.rs
+++ b/processor/ethereum/src/primitives/block.rs
@@ -61,7 +61,29 @@ impl primitives::Block for FullEpoch {
     // Associate all outputs with the latest active key
     // We don't associate these with the current key within the SC as that'll cause outputs to be
     // marked for forwarding if the SC is delayed to actually rotate
-    self.instructions.iter().cloned().map(|instruction| Output { key, instruction }).collect()
+    let mut outputs: Vec<_> = self
+      .instructions
+      .iter()
+      .cloned()
+      .map(|instruction| Output::Output { key, instruction })
+      .collect();
+
+    /*
+      The scanner requires a change output be associated with every Eventuality that came from
+      fulfilling payments, unless said Eventuality descends from an Eventuality meeting that
+      requirement from the same fulfillment. This ensures we have a fully populated Eventualities
+      set by the time we process the block which has an Eventuality.
+
+      Accordingly, for any block with an Eventuality completion, we claim there's a Change output
+      so that the block is flagged. Ethereum doesn't actually have Change outputs, yet the scanner
+      won't report them to Substrate, and the Smart Contract scheduler will drop any/all outputs
+      passed to it (handwaving their balances as present within the Smart Contract).
+    */
+    if !self.executed.is_empty() {
+      outputs.push(Output::Eventuality { key, nonce: self.executed.first().unwrap().nonce() });
+    }
+
+    outputs
   }
 
   #[allow(clippy::type_complexity)]
@@ -85,15 +107,17 @@ impl primitives::Block for FullEpoch {
       "Router emitted distinct event for nonce {}",
       executed.nonce()
     );
+
     /*
       The transaction ID is used to determine how internal outputs from this transaction should
       be handled (if they were actually internal or if they were just to an internal address).
-      The Ethereum integration doesn't have internal addresses, and this transaction wasn't made
-      by Serai. It was simply authorized by Serai yet may or may not be associated with other
-      actions we don't want to flag as our own.
+      The Ethereum integration doesn't use internal addresses, and only uses internal outputs to
+      flag a block as having an Eventuality. Those internal outputs will always be scanned, and
+      while they may be dropped/kept by this ID, the scheduler will then always drop them.
+      Accordingly, we have free rein as to what to set the transaction ID to.
 
-      Accordingly, we set the transaction ID to the nonce. This is unique barring someone finding
-      the preimage which hashes to this nonce, and won't cause any other data to be associated.
+      We set the ID to the nonce as it's the most helpful value and unique barring someone
+      finding the preimage for this as a hash.
     */
     let mut tx_id = [0; 32];
     tx_id[.. 8].copy_from_slice(executed.nonce().to_le_bytes().as_slice());
diff --git a/processor/ethereum/src/primitives/mod.rs b/processor/ethereum/src/primitives/mod.rs
index 00a5980f..197acf8f 100644
--- a/processor/ethereum/src/primitives/mod.rs
+++ b/processor/ethereum/src/primitives/mod.rs
@@ -1,3 +1,5 @@
+use serai_client::primitives::Amount;
+
 pub(crate) mod output;
 pub(crate) mod transaction;
 pub(crate) mod machine;
@@ -10,3 +12,10 @@ pub(crate) const DAI: [u8; 20] =
   };
 
 pub(crate) const TOKENS: [[u8; 20]; 1] = [DAI];
+
+// 8 decimals, so 1_000_000_00 would be 1 ETH. This is 0.0015 ETH (5 USD if Ether is ~3300 USD).
+#[allow(clippy::inconsistent_digit_grouping)]
+pub(crate) const ETHER_DUST: Amount = Amount(1_500_00);
+// 5 DAI
+#[allow(clippy::inconsistent_digit_grouping)]
+pub(crate) const DAI_DUST: Amount = Amount(5_000_000_00);
diff --git a/processor/ethereum/src/primitives/output.rs b/processor/ethereum/src/primitives/output.rs
index 0f327921..2215c29d 100644
--- a/processor/ethereum/src/primitives/output.rs
+++ b/processor/ethereum/src/primitives/output.rs
@@ -15,7 +15,7 @@ use serai_client::{
 use primitives::{OutputType, ReceivedOutput};
 use ethereum_router::{Coin as EthereumCoin, InInstruction as EthereumInInstruction};
 
-use crate::DAI;
+use crate::{DAI, ETHER_DUST};
 
 fn coin_to_serai_coin(coin: &EthereumCoin) -> Option<Coin> {
   match coin {
@@ -59,58 +59,122 @@ impl AsMut<[u8]> for OutputId {
 }
 
 #[derive(Clone, PartialEq, Eq, Debug)]
-pub(crate) struct Output {
-  pub(crate) key: <Secp256k1 as Ciphersuite>::G,
-  pub(crate) instruction: EthereumInInstruction,
+pub(crate) enum Output {
+  Output { key: <Secp256k1 as Ciphersuite>::G, instruction: EthereumInInstruction },
+  Eventuality { key: <Secp256k1 as Ciphersuite>::G, nonce: u64 },
 }
 
 impl ReceivedOutput<<Secp256k1 as Ciphersuite>::G, Address> for Output {
   type Id = OutputId;
   type TransactionId = [u8; 32];
 
-  // We only scan external outputs as we don't have branch/change/forwards
   fn kind(&self) -> OutputType {
-    OutputType::External
+    match self {
+      // All outputs received are External
+      Output::Output { .. } => OutputType::External,
+      // Yet upon Eventuality completions, we report a Change output to ensure synchrony per the
+      // scanner's documented bounds
+      Output::Eventuality { .. } => OutputType::Change,
+    }
   }
 
   fn id(&self) -> Self::Id {
-    let mut id = [0; 40];
-    id[.. 32].copy_from_slice(&self.instruction.id.0);
-    id[32 ..].copy_from_slice(&self.instruction.id.1.to_le_bytes());
-    OutputId(id)
+    match self {
+      Output::Output { key: _, instruction } => {
+        let mut id = [0; 40];
+        id[.. 32].copy_from_slice(&instruction.id.0);
+        id[32 ..].copy_from_slice(&instruction.id.1.to_le_bytes());
+        OutputId(id)
+      }
+      // Yet upon Eventuality completions, we report a Change output to ensure synchrony per the
+      // scanner's documented bounds
+      Output::Eventuality { key: _, nonce } => {
+        let mut id = [0; 40];
+        id[.. 8].copy_from_slice(&nonce.to_le_bytes());
+        OutputId(id)
+      }
+    }
   }
 
   fn transaction_id(&self) -> Self::TransactionId {
-    self.instruction.id.0
+    match self {
+      Output::Output { key: _, instruction } => instruction.id.0,
+      Output::Eventuality { key: _, nonce } => {
+        let mut id = [0; 32];
+        id[.. 8].copy_from_slice(&nonce.to_le_bytes());
+        id
+      }
+    }
   }
 
   fn key(&self) -> <Secp256k1 as Ciphersuite>::G {
-    self.key
+    match self {
+      Output::Output { key, .. } | Output::Eventuality { key, .. } => *key,
+    }
   }
 
   fn presumed_origin(&self) -> Option<Address> {
-    Some(Address::from(self.instruction.from))
+    match self {
+      Output::Output { key: _, instruction } => Some(Address::from(instruction.from)),
+      Output::Eventuality { .. } => None,
+    }
   }
 
   fn balance(&self) -> Balance {
-    let coin = coin_to_serai_coin(&self.instruction.coin).unwrap_or_else(|| {
-      panic!(
-        "mapping coin from an EthereumInInstruction with coin {}, which we don't handle.",
-        "this never should have been yielded"
-      )
-    });
-    Balance { coin, amount: amount_to_serai_amount(coin, self.instruction.amount) }
+    match self {
+      Output::Output { key: _, instruction } => {
+        let coin = coin_to_serai_coin(&instruction.coin).unwrap_or_else(|| {
+          panic!(
+            "mapping coin from an EthereumInInstruction with coin {}, which we don't handle.",
+            "this never should have been yielded"
+          )
+        });
+        Balance { coin, amount: amount_to_serai_amount(coin, instruction.amount) }
+      }
+      Output::Eventuality { .. } => Balance { coin: Coin::Ether, amount: ETHER_DUST },
+    }
   }
 
   fn data(&self) -> &[u8] {
-    &self.instruction.data
+    match self {
+      Output::Output { key: _, instruction } => &instruction.data,
+      Output::Eventuality { .. } => &[],
+    }
   }
 
   fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
-    writer.write_all(self.key.to_bytes().as_ref())?;
-    self.instruction.write(writer)
+    match self {
+      Output::Output { key, instruction } => {
+        writer.write_all(&[0])?;
+        writer.write_all(key.to_bytes().as_ref())?;
+        instruction.write(writer)
+      }
+      Output::Eventuality { key, nonce } => {
+        writer.write_all(&[1])?;
+        writer.write_all(key.to_bytes().as_ref())?;
+        writer.write_all(&nonce.to_le_bytes())
+      }
+    }
   }
 
   fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
-    let key = Secp256k1::read_G(reader)?;
-    let instruction = EthereumInInstruction::read(reader)?;
-    Ok(Self { key, instruction })
+    let mut kind = [0xff];
+    reader.read_exact(&mut kind)?;
+    if kind[0] >= 2 {
+      Err(io::Error::other("unknown Output type"))?;
+    }
+
+    Ok(match kind[0] {
+      0 => {
+        let key = Secp256k1::read_G(reader)?;
+        let instruction = EthereumInInstruction::read(reader)?;
+        Self::Output { key, instruction }
+      }
+      1 => {
+        let key = Secp256k1::read_G(reader)?;
+        let mut nonce = [0; 8];
+        reader.read_exact(&mut nonce)?;
+        let nonce = u64::from_le_bytes(nonce);
+        Self::Eventuality { key, nonce }
+      }
+      _ => unreachable!(),
    })
  }
}
diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs
index 72d661a3..6db07989 100644
--- a/processor/scanner/src/lib.rs
+++ b/processor/scanner/src/lib.rs
@@ -321,7 +321,9 @@ pub trait Scheduler: 'static + Send {
   ///
   /// Any Eventualities returned by this function must include an output-to-Serai (such as a Branch
   /// or Change), unless they descend from a transaction returned by this function which satisfies
-  /// that requirement.
+  /// that requirement. This ensures, when we scan outputs from transactions we made, we report
+  /// the block up to Substrate and obtain synchrony on all prior blocks (allowing us to identify
+  /// our own transactions, which we may not yet be aware of due to a lagging view of Substrate).
   ///
   /// `active_keys` is the list of active keys, potentially including a key for which we've already
   /// called `retire_key` on. If so, its stage will be `Finishing` and no further operations will

From ae767495130a9d51a2a4b10edb99638f924b98eb Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Fri, 20 Sep 2024 01:01:45 -0400
Subject: [PATCH 174/368] Transfer ETH with CREATE, not prior to CREATE

Saves a few thousand gas.
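The prior flow transferred ETH to the would-be contract address with a separate external call,
paying the gas surcharges value-bearing calls incur, before deploying. Passing the value as the
endowment of the CREATE itself folds the transfer into the deployment, which is presumably where
the savings come from.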
---
 .../ethereum/router/contracts/Router.sol     | 52 ++++++++++++-------
 1 file changed, 32 insertions(+), 20 deletions(-)

diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol
index c4c038e2..9100f59e 100644
--- a/processor/ethereum/router/contracts/Router.sol
+++ b/processor/ethereum/router/contracts/Router.sol
@@ -126,23 +126,28 @@ contract Router {
     emit InInstruction(msg.sender, coin, amount, instruction);
   }
 
-  // Perform a transfer out
-  function _transferOut(address to, address coin, uint256 value) private {
-    /*
+  /*
     We purposely do not check if these calls succeed. A call either succeeded, and there's no
     problem, or the call failed due to:
-      A) An insolvency
-      B) A malicious receiver
-      C) A non-standard token
+    A) An insolvency
+    B) A malicious receiver
+    C) A non-standard token
     A is an invariant, B should be dropped, C is something out of the control of this contract.
     It is again the Serai network's role to not add support for any non-standard tokens.
-    */
+  */
+
+  // Perform an ERC20 transfer out
+  function _erc20TransferOut(address to, address coin, uint256 value) private {
+    coin.call{ gas: 100_000 }(abi.encodeWithSelector(IERC20.transfer.selector, msg.sender, value));
+  }
+
+  // Perform an ETH/ERC20 transfer out
+  function _transferOut(address to, address coin, uint256 value) private {
     if (coin == address(0)) {
       // Enough gas to service the transfer and a minimal amount of logic
-      // TODO: If we're constructing a contract, we can do this at the same time as construction
       to.call{ value: value, gas: 5_000 }("");
     } else {
-      coin.call{ gas: 100_000 }(abi.encodeWithSelector(IERC20.transfer.selector, msg.sender, value));
+      _erc20TransferOut(to, coin, value);
     }
   }
 
@@ -151,13 +156,14 @@
     letting them execute whatever calls they're coded for. Since we can't meter CREATE, we call
     CREATE from this function which we call not internally, but with CALL (which we can meter).
   */
-  function arbitaryCallOut(bytes memory code) external {
+  function arbitaryCallOut(bytes memory code) external payable {
     // Because we're creating a contract, increment our nonce
     _smartContractNonce += 1;
 
+    uint256 msg_value = msg.value;
     address contractAddress;
     assembly {
-      contractAddress := create(0, add(code, 0x20), mload(code))
+      contractAddress := create(msg_value, add(code, 0x20), mload(code))
     }
   }
 
@@ -193,18 +199,24 @@
           abi.decode(transactions[i].destination, (AddressDestination));
         _transferOut(destination.destination, coin, transactions[i].value);
       } else {
-        // The destination is a piece of initcode. We calculate the hash of the will-be contract,
-        // transfer to it, and then run the initcode
-        address nextAddress =
-          address(uint160(uint256(keccak256(abi.encode(address(this), _smartContractNonce)))));
+        // Prepare for the transfer
+        uint256 eth_value = 0;
+        if (coin == address(0)) {
+          // If it's ETH, we transfer the value with the call
+          eth_value = transactions[i].value;
+        } else {
+          // If it's an ERC20, we calculate the hash of the will-be contract and transfer to it
+          // before deployment.
This avoids needing to deploy, then call again, offering a few + // optimizations + address nextAddress = + address(uint160(uint256(keccak256(abi.encode(address(this), _smartContractNonce))))); + _erc20TransferOut(nextAddress, coin, transactions[i].value); + } - // Perform the transfer - _transferOut(nextAddress, coin, transactions[i].value); - - // Perform the calls with a set gas budget + // Perform the deployment with the defined gas budget (CodeDestination memory destination) = abi.decode(transactions[i].destination, (CodeDestination)); - address(this).call{ gas: destination.gas_limit }( + address(this).call{ gas: destination.gas_limit, value: eth_value }( abi.encodeWithSelector(Router.arbitaryCallOut.selector, destination.code) ); } From 294462641ef46f60bc6792062d4190aa721d8b36 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 20 Sep 2024 01:23:26 -0400 Subject: [PATCH 175/368] Don't have the ERC20 collapse the top-level transfer ID to the transaction ID Uses the ID of the transfer event associated with the top-level transfer. --- processor/ethereum/erc20/src/lib.rs | 30 +++++++++-------------------- 1 file changed, 9 insertions(+), 21 deletions(-) diff --git a/processor/ethereum/erc20/src/lib.rs b/processor/ethereum/erc20/src/lib.rs index 400a5baa..ec33989e 100644 --- a/processor/ethereum/erc20/src/lib.rs +++ b/processor/ethereum/erc20/src/lib.rs @@ -30,8 +30,8 @@ pub use abi::IERC20::Transfer; /// A top-level ERC20 transfer #[derive(Clone, Debug)] pub struct TopLevelTransfer { - /// The transaction ID which effected this transfer. - pub id: [u8; 32], + /// The ID of the event for this transfer. + pub id: ([u8; 32], u64), /// The address which made the transfer. pub from: [u8; 20], /// The amount transferred. @@ -40,14 +40,6 @@ pub struct TopLevelTransfer { pub data: Vec, } -/// A transaction with a top-level transfer, matched to the log index of the transfer. -pub struct MatchedTopLevelTransfer { - /// The transfer. - pub transfer: TopLevelTransfer, - /// The log index of the transfer. - pub log_index: u64, -} - /// A view for an ERC20 contract. 
#[derive(Clone, Debug)] pub struct Erc20(Arc>, Address); @@ -62,7 +54,7 @@ impl Erc20 { provider: impl AsRef>, transaction_id: B256, to: Address, - ) -> Result, RpcError> { + ) -> Result, RpcError> { // Fetch the transaction let transaction = provider.as_ref().get_transaction_by_hash(transaction_id).await?.ok_or_else(|| { @@ -132,15 +124,11 @@ impl Erc20 { let encoded = call.abi_encode(); let data = transaction.input.as_ref()[encoded.len() ..].to_vec(); - return Ok(Some(MatchedTopLevelTransfer { - transfer: TopLevelTransfer { - // Since there's only one top-level transfer per TX, set the ID to the TX ID - id: *transaction_id, - from: *log.from.0, - amount: log.value, - data, - }, - log_index, + return Ok(Some(TopLevelTransfer { + id: (*transaction_id, log_index), + from: *log.from.0, + amount: log.value, + data, })); } } @@ -193,7 +181,7 @@ impl Erc20 { // Panicking on a task panic is desired behavior, and we haven't aborted any tasks match top_level_transfer.unwrap() { // Top-level transfer - Ok(Some(top_level_transfer)) => top_level_transfers.push(top_level_transfer.transfer), + Ok(Some(top_level_transfer)) => top_level_transfers.push(top_level_transfer), // Not a top-level transfer Ok(None) => continue, // Failed to get this transaction's information so abort From 7e4c59a0a381fda5926cbd0a554c26330ba7de11 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 20 Sep 2024 01:24:28 -0400 Subject: [PATCH 176/368] Have the Router track its deployment block Prevents a consensus split where some nodes would drop transfers if their node didn't think the Router was deployed, and some would handle them. --- .../ethereum/router/contracts/Router.sol | 9 ++++++++ processor/ethereum/router/src/lib.rs | 21 +++++++++++++++++-- processor/ethereum/src/rpc.rs | 12 +++++++++-- 3 files changed, 38 insertions(+), 4 deletions(-) diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 9100f59e..d82c0d90 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -7,6 +7,9 @@ import "Schnorr.sol"; // _ is used as a prefix for internal functions and smart-contract-scoped variables contract Router { + // The block at which this contract was deployed. 
+ uint256 private _deploymentBlock; + // Nonce is incremented for each command executed, preventing replays uint256 private _nonce; @@ -63,6 +66,8 @@ contract Router { } constructor(bytes32 initialSeraiKey) _updateSeraiKeyAtEndOfFn(0, initialSeraiKey) { + _deploymentBlock = block.number; + // We consumed nonce 0 when setting the initial Serai key _nonce = 1; // Nonces are incremented by 1 upon account creation, prior to any code execution, per EIP-161 @@ -230,6 +235,10 @@ contract Router { return _nonce; } + function deploymentBlock() external view returns (uint256) { + return _deploymentBlock; + } + function smartContractNonce() external view returns (uint256) { return _smartContractNonce; } diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index 248523b8..d78b3218 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -11,7 +11,7 @@ use alloy_consensus::TxLegacy; use alloy_sol_types::{SolValue, SolConstructor, SolCall, SolEvent}; -use alloy_rpc_types_eth::Filter; +use alloy_rpc_types_eth::{TransactionInput, TransactionRequest, Filter}; use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; @@ -296,6 +296,23 @@ impl Router { self.1 } + /// Fetch the block this contract was deployed at. + pub async fn deployment_block(&self) -> Result> { + let call = TransactionRequest::default() + .to(self.address()) + .input(TransactionInput::new(abi::deploymentBlockCall::new(()).abi_encode().into())); + let bytes = self.0.call(&call).await?; + let deployment_block = abi::deploymentBlockCall::abi_decode_returns(&bytes, true) + .map_err(|e| { + TransportErrorKind::Custom( + format!("node returned a non-u256 for function returning u256: {e:?}").into(), + ) + })? + ._0; + + Ok(deployment_block.try_into().unwrap()) + } + /// Get the message to be signed in order to update the key for Serai. pub fn update_serai_key_message(chain_id: U256, nonce: u64, key: &PublicKey) -> Vec { ( @@ -420,7 +437,7 @@ impl Router { */ if let Some(matched) = Erc20::match_top_level_transfer(&self.0, tx_hash, self.1).await? { // Mark this log index as used so it isn't used again - transfer_check.insert(matched.log_index); + transfer_check.insert(matched.id.1); } // Find a matching transfer log diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs index 1eaa4988..a5533484 100644 --- a/processor/ethereum/src/rpc.rs +++ b/processor/ethereum/src/rpc.rs @@ -156,10 +156,12 @@ impl ScannerFeed for Rpc { // The Router wasn't deployed yet so we cannot have any on-chain interactions // If the Router has been deployed by the block we've synced to, it won't have any events // for these blocks anways, so this doesn't risk a consensus split - // TODO: This does as we can have top-level transfers to the router before it's deployed return Ok(FullEpoch { epoch, instructions, executed }); }; + let router_deployment_block = router.deployment_block().await?; + + // TODO: Use a LocalSet and handle all these in parallel let mut to_check = epoch.end_hash; while to_check != epoch.prior_end_hash { let to_check_block = self @@ -177,6 +179,12 @@ impl ScannerFeed for Rpc { })? 
.header; + // If this is before the Router was deployed, move on + if to_check_block.number < router_deployment_block { + // This is sa + break; + } + instructions.append( &mut router.in_instructions(to_check_block.number, &HashSet::from(TOKENS)).await?, ); @@ -187,7 +195,7 @@ impl ScannerFeed for Rpc { .await? { instructions.push(EthereumInInstruction { - id: (id, u64::MAX), + id, from, coin: EthereumCoin::Erc20(token), amount, From 554c5778e4194293399fa2a8b440ab79b72b67bb Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 20 Sep 2024 02:06:35 -0400 Subject: [PATCH 177/368] Don't track deployment block in the Router This technically has a TOCTOU where we sync an Epoch's metadata (signifying we did sync to that point), then check if the Router was deployed, yet at that very moment the node resets to genesis. By ensuring the Router is deployed, we avoid this (and don't need to track the deployment block in-contract). Also uses a JoinSet to sync the 32 blocks in parallel. --- .../ethereum/router/contracts/Router.sol | 9 -- processor/ethereum/router/src/lib.rs | 19 +--- processor/ethereum/src/rpc.rs | 86 +++++++++++-------- 3 files changed, 50 insertions(+), 64 deletions(-) diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index d82c0d90..9100f59e 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -7,9 +7,6 @@ import "Schnorr.sol"; // _ is used as a prefix for internal functions and smart-contract-scoped variables contract Router { - // The block at which this contract was deployed. - uint256 private _deploymentBlock; - // Nonce is incremented for each command executed, preventing replays uint256 private _nonce; @@ -66,8 +63,6 @@ contract Router { } constructor(bytes32 initialSeraiKey) _updateSeraiKeyAtEndOfFn(0, initialSeraiKey) { - _deploymentBlock = block.number; - // We consumed nonce 0 when setting the initial Serai key _nonce = 1; // Nonces are incremented by 1 upon account creation, prior to any code execution, per EIP-161 @@ -235,10 +230,6 @@ contract Router { return _nonce; } - function deploymentBlock() external view returns (uint256) { - return _deploymentBlock; - } - function smartContractNonce() external view returns (uint256) { return _smartContractNonce; } diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index d78b3218..7a7cffd8 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -11,7 +11,7 @@ use alloy_consensus::TxLegacy; use alloy_sol_types::{SolValue, SolConstructor, SolCall, SolEvent}; -use alloy_rpc_types_eth::{TransactionInput, TransactionRequest, Filter}; +use alloy_rpc_types_eth::Filter; use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; @@ -296,23 +296,6 @@ impl Router { self.1 } - /// Fetch the block this contract was deployed at. - pub async fn deployment_block(&self) -> Result> { - let call = TransactionRequest::default() - .to(self.address()) - .input(TransactionInput::new(abi::deploymentBlockCall::new(()).abi_encode().into())); - let bytes = self.0.call(&call).await?; - let deployment_block = abi::deploymentBlockCall::abi_decode_returns(&bytes, true) - .map_err(|e| { - TransportErrorKind::Custom( - format!("node returned a non-u256 for function returning u256: {e:?}").into(), - ) - })? 
- ._0; - - Ok(deployment_block.try_into().unwrap()) - } - /// Get the message to be signed in order to update the key for Serai. pub fn update_serai_key_message(chain_id: U256, nonce: u64, key: &PublicKey) -> Vec { ( diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs index a5533484..7f8a422b 100644 --- a/processor/ethereum/src/rpc.rs +++ b/processor/ethereum/src/rpc.rs @@ -2,20 +2,23 @@ use core::future::Future; use std::{sync::Arc, collections::HashSet}; use alloy_core::primitives::B256; -use alloy_rpc_types_eth::{BlockTransactionsKind, BlockNumberOrTag}; +use alloy_rpc_types_eth::{Header, BlockTransactionsKind, BlockNumberOrTag}; use alloy_transport::{RpcError, TransportErrorKind}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; use serai_client::primitives::{NetworkId, Coin, Amount}; +use tokio::task::JoinSet; + use serai_db::Db; use scanner::ScannerFeed; use ethereum_schnorr::PublicKey; use ethereum_erc20::{TopLevelTransfer, Erc20}; -use ethereum_router::{Coin as EthereumCoin, InInstruction as EthereumInInstruction, Router}; +#[rustfmt::skip] +use ethereum_router::{Coin as EthereumCoin, InInstruction as EthereumInInstruction, Executed, Router}; use crate::{ TOKENS, ETHER_DUST, DAI_DUST, InitialSeraiKey, @@ -141,8 +144,6 @@ impl ScannerFeed for Rpc { ) -> impl Send + Future> { async move { let epoch = self.unchecked_block_header_by_number(number).await?; - let mut instructions = vec![]; - let mut executed = vec![]; let Some(router) = Router::new( self.provider.clone(), @@ -153,16 +154,42 @@ impl ScannerFeed for Rpc { ) .await? else { - // The Router wasn't deployed yet so we cannot have any on-chain interactions - // If the Router has been deployed by the block we've synced to, it won't have any events - // for these blocks anways, so this doesn't risk a consensus split - return Ok(FullEpoch { epoch, instructions, executed }); + Err(TransportErrorKind::Custom("router wasn't deployed on-chain yet".to_string().into()))? }; - let router_deployment_block = router.deployment_block().await?; + async fn sync_block( + provider: Arc>, + router: Router, + block: Header, + ) -> Result<(Vec, Vec), RpcError> { + let mut instructions = router.in_instructions(block.number, &HashSet::from(TOKENS)).await?; - // TODO: Use a LocalSet and handle all these in parallel + for token in TOKENS { + for TopLevelTransfer { id, from, amount, data } in Erc20::new(provider.clone(), token) + .top_level_transfers(block.number, router.address()) + .await? + { + instructions.push(EthereumInInstruction { + id, + from, + coin: EthereumCoin::Erc20(token), + amount, + data, + }); + } + } + + let executed = router.executed(block.number).await?; + + Ok((instructions, executed)) + } + + // We use JoinSet here to minimize the latency of the variety of requests we make. For each + // JoinError that may occur, we unwrap it as no underlying tasks should panic + let mut join_set = JoinSet::new(); let mut to_check = epoch.end_hash; + // TODO: This makes 32 sequential requests. We should run them in parallel using block + // nunbers while to_check != epoch.prior_end_hash { let to_check_block = self .provider @@ -179,34 +206,19 @@ impl ScannerFeed for Rpc { })? 
.header; - // If this is before the Router was deployed, move on - if to_check_block.number < router_deployment_block { - // This is sa - break; - } - - instructions.append( - &mut router.in_instructions(to_check_block.number, &HashSet::from(TOKENS)).await?, - ); - for token in TOKENS { - for TopLevelTransfer { id, from, amount, data } in - Erc20::new(self.provider.clone(), token) - .top_level_transfers(to_check_block.number, router.address()) - .await? - { - instructions.push(EthereumInInstruction { - id, - from, - coin: EthereumCoin::Erc20(token), - amount, - data, - }); - } - } - - executed.append(&mut router.executed(to_check_block.number).await?); - + // Update the next block to check to_check = *to_check_block.parent_hash; + + // Spawn a task to sync this block + join_set.spawn(sync_block(self.provider.clone(), router.clone(), to_check_block)); + } + + let mut instructions = vec![]; + let mut executed = vec![]; + while let Some(instructions_and_executed) = join_set.join_next().await { + let (mut these_instructions, mut these_executed) = instructions_and_executed.unwrap()?; + instructions.append(&mut these_instructions); + executed.append(&mut these_executed); } Ok(FullEpoch { epoch, instructions, executed }) From 2984d2f8cfa016fccc988699ea21d425488d1655 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 20 Sep 2024 02:12:26 -0400 Subject: [PATCH 178/368] Misc comments --- processor/ethereum/TODO/old_processor.rs | 14 -------------- processor/ethereum/deployer/contracts/Deployer.sol | 3 +++ processor/monero/src/scheduler.rs | 1 - processor/scanner/src/lib.rs | 1 + 4 files changed, 4 insertions(+), 15 deletions(-) diff --git a/processor/ethereum/TODO/old_processor.rs b/processor/ethereum/TODO/old_processor.rs index 50250c43..a7e85a5c 100644 --- a/processor/ethereum/TODO/old_processor.rs +++ b/processor/ethereum/TODO/old_processor.rs @@ -53,20 +53,6 @@ TODO } } - #[cfg(test)] - async fn get_block_number(&self, id: &>::Id) -> usize { - self - .provider - .get_block(B256::from(*id).into(), BlockTransactionsKind::Hashes) - .await - .unwrap() - .unwrap() - .header - .number - .try_into() - .unwrap() - } - #[cfg(test)] async fn get_transaction_by_eventuality( &self, diff --git a/processor/ethereum/deployer/contracts/Deployer.sol b/processor/ethereum/deployer/contracts/Deployer.sol index 2d4904e4..a7dac1d3 100644 --- a/processor/ethereum/deployer/contracts/Deployer.sol +++ b/processor/ethereum/deployer/contracts/Deployer.sol @@ -31,6 +31,9 @@ pragma solidity ^0.8.26; The alternative would be to have a council publish the Serai key on-Ethereum, with Serai verifying the published result. This would introduce a DoS risk in the council not publishing the correct key/not publishing any key. + + This design does not work with designs expecting initialization (which may require re-deploying + the same code until the initialization successfully goes through, without being sniped). */ contract Deployer { diff --git a/processor/monero/src/scheduler.rs b/processor/monero/src/scheduler.rs index 667840f6..489db810 100644 --- a/processor/monero/src/scheduler.rs +++ b/processor/monero/src/scheduler.rs @@ -88,7 +88,6 @@ async fn signable_transaction( // It is a reused value (with later code), but that's not an issue. 
Just an oddity &mut ChaCha20Rng::from_seed(id), &rpc.rpc, - // TODO: Have Decoys take RctType match rct_type { RctType::ClsagBulletproof => 11, RctType::ClsagBulletproofPlus => 16, diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 6db07989..5046753c 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -260,6 +260,7 @@ impl SchedulerUpdate { pub type KeyScopedEventualities = HashMap, Vec>>; /// The object responsible for accumulating outputs and planning new transactions. +// TODO: Move this to Scheduler primitives pub trait Scheduler: 'static + Send { /// An error encountered when handling updates/payments. /// From a0ed043372f7fd4e76b2099a0c649cd2679dd912 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 20 Sep 2024 02:20:59 -0400 Subject: [PATCH 179/368] Move old processor/src directory to processor/TODO --- processor/{src => TODO}/main.rs | 16 ---------------- processor/{src => TODO}/tests/addresses.rs | 2 ++ processor/{src => TODO}/tests/batch_signer.rs | 2 ++ processor/{src => TODO}/tests/cosigner.rs | 2 ++ processor/{src => TODO}/tests/key_gen.rs | 2 ++ processor/{src => TODO}/tests/literal/mod.rs | 2 ++ processor/{src => TODO}/tests/mod.rs | 2 ++ processor/{src => TODO}/tests/scanner.rs | 2 ++ processor/{src => TODO}/tests/signer.rs | 2 ++ processor/{src => TODO}/tests/wallet.rs | 2 ++ 10 files changed, 18 insertions(+), 16 deletions(-) rename processor/{src => TODO}/main.rs (74%) rename processor/{src => TODO}/tests/addresses.rs (99%) rename processor/{src => TODO}/tests/batch_signer.rs (99%) rename processor/{src => TODO}/tests/cosigner.rs (99%) rename processor/{src => TODO}/tests/key_gen.rs (99%) rename processor/{src => TODO}/tests/literal/mod.rs (99%) rename processor/{src => TODO}/tests/mod.rs (99%) rename processor/{src => TODO}/tests/scanner.rs (99%) rename processor/{src => TODO}/tests/signer.rs (99%) rename processor/{src => TODO}/tests/wallet.rs (99%) diff --git a/processor/src/main.rs b/processor/TODO/main.rs similarity index 74% rename from processor/src/main.rs rename to processor/TODO/main.rs index b4a5053a..1458a7fc 100644 --- a/processor/src/main.rs +++ b/processor/TODO/main.rs @@ -59,19 +59,3 @@ async fn handle_coordinator_msg( } } } - -#[tokio::main] -async fn main() { - match network_id { - #[cfg(feature = "ethereum")] - NetworkId::Ethereum => { - let relayer_hostname = env::var("ETHEREUM_RELAYER_HOSTNAME") - .expect("ethereum relayer hostname wasn't specified") - .to_string(); - let relayer_port = - env::var("ETHEREUM_RELAYER_PORT").expect("ethereum relayer port wasn't specified"); - let relayer_url = relayer_hostname + ":" + &relayer_port; - run(db.clone(), Ethereum::new(db, url, relayer_url).await, coordinator).await - } - } -} diff --git a/processor/src/tests/addresses.rs b/processor/TODO/tests/addresses.rs similarity index 99% rename from processor/src/tests/addresses.rs rename to processor/TODO/tests/addresses.rs index 3d4d6d4c..1a06963a 100644 --- a/processor/src/tests/addresses.rs +++ b/processor/TODO/tests/addresses.rs @@ -1,3 +1,5 @@ +// TODO + use core::{time::Duration, pin::Pin, future::Future}; use std::collections::HashMap; diff --git a/processor/src/tests/batch_signer.rs b/processor/TODO/tests/batch_signer.rs similarity index 99% rename from processor/src/tests/batch_signer.rs rename to processor/TODO/tests/batch_signer.rs index dc45ff31..cc5885fc 100644 --- a/processor/src/tests/batch_signer.rs +++ b/processor/TODO/tests/batch_signer.rs @@ -1,3 +1,5 @@ +// TODO + use 
std::collections::HashMap; use rand_core::{RngCore, OsRng}; diff --git a/processor/src/tests/cosigner.rs b/processor/TODO/tests/cosigner.rs similarity index 99% rename from processor/src/tests/cosigner.rs rename to processor/TODO/tests/cosigner.rs index a66161bf..98116bc3 100644 --- a/processor/src/tests/cosigner.rs +++ b/processor/TODO/tests/cosigner.rs @@ -1,3 +1,5 @@ +// TODO + use std::collections::HashMap; use rand_core::{RngCore, OsRng}; diff --git a/processor/src/tests/key_gen.rs b/processor/TODO/tests/key_gen.rs similarity index 99% rename from processor/src/tests/key_gen.rs rename to processor/TODO/tests/key_gen.rs index 43f0de05..116db11e 100644 --- a/processor/src/tests/key_gen.rs +++ b/processor/TODO/tests/key_gen.rs @@ -1,3 +1,5 @@ +// TODO + use std::collections::HashMap; use zeroize::Zeroizing; diff --git a/processor/src/tests/literal/mod.rs b/processor/TODO/tests/literal/mod.rs similarity index 99% rename from processor/src/tests/literal/mod.rs rename to processor/TODO/tests/literal/mod.rs index d45649d5..b1285e63 100644 --- a/processor/src/tests/literal/mod.rs +++ b/processor/TODO/tests/literal/mod.rs @@ -1,3 +1,5 @@ +// TODO + use dockertest::{ PullPolicy, StartPolicy, LogOptions, LogAction, LogPolicy, LogSource, Image, TestBodySpecification, DockerOperations, DockerTest, diff --git a/processor/src/tests/mod.rs b/processor/TODO/tests/mod.rs similarity index 99% rename from processor/src/tests/mod.rs rename to processor/TODO/tests/mod.rs index 7ab57bde..4691e523 100644 --- a/processor/src/tests/mod.rs +++ b/processor/TODO/tests/mod.rs @@ -1,3 +1,5 @@ +// TODO + use std::sync::OnceLock; mod key_gen; diff --git a/processor/src/tests/scanner.rs b/processor/TODO/tests/scanner.rs similarity index 99% rename from processor/src/tests/scanner.rs rename to processor/TODO/tests/scanner.rs index a40e465c..6ad87f78 100644 --- a/processor/src/tests/scanner.rs +++ b/processor/TODO/tests/scanner.rs @@ -1,3 +1,5 @@ +// TODO + use core::{pin::Pin, time::Duration, future::Future}; use std::sync::Arc; diff --git a/processor/src/tests/signer.rs b/processor/TODO/tests/signer.rs similarity index 99% rename from processor/src/tests/signer.rs rename to processor/TODO/tests/signer.rs index 6b445608..e35a048b 100644 --- a/processor/src/tests/signer.rs +++ b/processor/TODO/tests/signer.rs @@ -1,3 +1,5 @@ +// TODO + use core::{pin::Pin, future::Future}; use std::collections::HashMap; diff --git a/processor/src/tests/wallet.rs b/processor/TODO/tests/wallet.rs similarity index 99% rename from processor/src/tests/wallet.rs rename to processor/TODO/tests/wallet.rs index 0451f30c..f78a16f5 100644 --- a/processor/src/tests/wallet.rs +++ b/processor/TODO/tests/wallet.rs @@ -1,3 +1,5 @@ +// TODO + use core::{time::Duration, pin::Pin, future::Future}; use std::collections::HashMap; From 2c8af04781d5dd5ebef55d60674233035da59e4c Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 20 Sep 2024 02:30:08 -0400 Subject: [PATCH 180/368] machete, drain > mem::swap for clarity reasons --- Cargo.lock | 2 -- processor/bin/src/lib.rs | 18 ++++++++---------- processor/ethereum/Cargo.toml | 1 - processor/ethereum/router/Cargo.toml | 1 - 4 files changed, 8 insertions(+), 14 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 12da8dd6..131081ab 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8326,7 +8326,6 @@ version = "0.1.0" name = "serai-ethereum-processor" version = "0.1.0" dependencies = [ - "alloy-consensus", "alloy-core", "alloy-provider", "alloy-rlp", @@ -8716,7 +8715,6 @@ dependencies = [ 
"build-solidity-contracts", "ethereum-schnorr-contract", "group", - "k256", "serai-client", "serai-processor-ethereum-deployer", "serai-processor-ethereum-erc20", diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs index 86a3a0cd..7d98f812 100644 --- a/processor/bin/src/lib.rs +++ b/processor/bin/src/lib.rs @@ -284,21 +284,19 @@ pub async fn main_loop< let key_to_activate = KeyToActivate::>::try_recv(txn.as_mut().unwrap()).map(|key| key.0); - /* - `acknowledge_batch` takes burns to optimize handling returns with standard payments. - That's why handling these with a Batch (and not waiting until the following potential - `queue_burns` call makes sense. As for which Batch, the first is equally valid unless - we want to start introspecting (and should be our only Batch anyways). - */ - let mut this_batchs_burns = vec![]; - std::mem::swap(&mut burns, &mut this_batchs_burns); - // This is a cheap call as it internally just queues this to be done later let _: () = scanner.acknowledge_batch( txn.take().unwrap(), id, in_instructions, - this_batchs_burns, + /* + `acknowledge_batch` takes burns to optimize handling returns with standard + payments. That's why handling these with a Batch (and not waiting until the + following potential `queue_burns` call makes sense. As for which Batch, the first + is equally valid unless we want to start introspecting (and should be our only + Batch anyways). + */ + burns.drain(..).collect(), key_to_activate, ); } diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml index c2a6f581..13978631 100644 --- a/processor/ethereum/Cargo.toml +++ b/processor/ethereum/Cargo.toml @@ -32,7 +32,6 @@ k256 = { version = "^0.13.1", default-features = false, features = ["std"] } alloy-core = { version = "0.8", default-features = false } alloy-rlp = { version = "0.3", default-features = false } -alloy-consensus = { version = "0.3", default-features = false } alloy-rpc-types-eth = { version = "0.3", default-features = false } alloy-transport = { version = "0.3", default-features = false } diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index e8884eae..d21a26d9 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -18,7 +18,6 @@ workspace = true [dependencies] group = { version = "0.13", default-features = false } -k256 = { version = "^0.13.1", default-features = false, features = ["std", "ecdsa", "arithmetic"] } alloy-core = { version = "0.8", default-features = false } alloy-consensus = { version = "0.3", default-features = false } From 251a6e96e8cfa8a08eed93ec20c6ebd174e51bf1 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 24 Sep 2024 14:27:05 -0700 Subject: [PATCH 181/368] Constant-time divisors (#617) * WIP constant-time implementation of the ec-divisors library * Fix misc logic errors in poly.rs * Remove accidentally committed test statements * Fix ConstantTimeEq for CoefficientIndex * Correct the iterations formula x**3 / (0 y + x**1) would prior be considered indivisible with iterations = 0. It is divisible however. The amount of iterations should be the amount of coefficients within the numerator *excluding the coefficient for y**0 x**0*. * Poly PartialEq, conditional_select_poly which checks poly structure equivalence If the first passed argument is smaller than the latter, it's padded to the necessary length. Also adds code to trim the remainder as the remainder is the value modulo, so it's very important it remains concise and workable. 
* Fix the line function It selected the case if both were identity before selecting the case if either were identity, the latter overwriting the former. * Final fixes re: ct_get 1) Our quotient structure does need to be of size equal to the numerator entirely to prevent out-of-bounds reads on it 2) We need to get from yx_coefficients if of length >=, so if the length is 1 we can read y_pow=1 from it. If y_pow=0, and its length is 0 so it has no inner Vecs, we need to fall back with the guard y_pow != 0. * Add a trim algorithm to lib.rs to prevent Polys from becoming unbearably gigantic Our Poly algorithm is incredibly leaky. While it presumably should be improved, we can take advantage of our known structure while constructing divisors (and the small modulus) to simply trim out the zero coefficients leaked. This maintains Polys in a manageable size. * Move constant-time scalar mul gadget divisor creation from dkg to ec-divisors Anyone creating a divisor for the scalar mul gadget should use constant time code, so this code should at least be in the EC gadgets crate It's of non-trivial complexity to deal with otherwise. * Remove unsafe, cache timing attacks from ec-divisors --- Cargo.lock | 3 +- crypto/dalek-ff-group/src/field.rs | 2 +- crypto/dkg/Cargo.toml | 2 - crypto/dkg/src/evrf/proof.rs | 185 +-------- crypto/evrf/divisors/Cargo.toml | 6 +- crypto/evrf/divisors/src/lib.rs | 360 ++++++++++++++--- crypto/evrf/divisors/src/poly.rs | 530 +++++++++++++++++++------ crypto/evrf/divisors/src/tests/mod.rs | 8 +- crypto/evrf/divisors/src/tests/poly.rs | 35 +- 9 files changed, 763 insertions(+), 368 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 131081ab..fd9838f2 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -2196,7 +2196,6 @@ dependencies = [ "schnorr-signatures", "secq256k1", "std-shims", - "subtle", "thiserror", "zeroize", ] @@ -2293,10 +2292,12 @@ name = "ec-divisors" version = "0.1.0" dependencies = [ "dalek-ff-group", + "ff", "group", "hex", "pasta_curves", "rand_core", + "subtle", "zeroize", ] diff --git a/crypto/dalek-ff-group/src/field.rs b/crypto/dalek-ff-group/src/field.rs index b1af2711..bc3078c8 100644 --- a/crypto/dalek-ff-group/src/field.rs +++ b/crypto/dalek-ff-group/src/field.rs @@ -35,7 +35,7 @@ impl_modulus!( type ResidueType = Residue; /// A constant-time implementation of the Ed25519 field. -#[derive(Clone, Copy, PartialEq, Eq, Default, Debug)] +#[derive(Clone, Copy, PartialEq, Eq, Default, Debug, Zeroize)] pub struct FieldElement(ResidueType); // Square root of -1. 
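To orient readers before the remaining diffs of this patch: every data-dependent branch is replaced with the `subtle` crate's constant-time selection primitives, where both sides of a would-be branch are always computed and a mask picks the live result. Below is a minimal, self-contained sketch of that idiom, mirroring the coefficient-reduction step in `ScalarDecomposition`; the standalone function and the values are illustrative only, not code from this diff.

use subtle::{Choice, ConstantTimeGreater, ConditionallySelectable};

// Branch-free equivalent of `if coeff > 1 { coeff -= 2; next += 1; }`.
// Both candidate values are always computed; `conditional_select` picks one
// without branching on secret data, keeping timing independent of `coeff`.
fn reduce_step(coeff: &mut u64, next: &mut u64) {
  let should_act: Choice = coeff.ct_gt(&1);
  *coeff -= u64::conditional_select(&0, &2, should_act);
  *next += u64::conditional_select(&0, &1, should_act);
}

fn main() {
  let (mut coeff, mut next) = (2u64, 0u64);
  reduce_step(&mut coeff, &mut next);
  assert_eq!((coeff, next), (0, 1));

  // A coefficient of 1 is left untouched, yet the exact same instructions ran
  let (mut coeff, mut next) = (1u64, 0u64);
  reduce_step(&mut coeff, &mut next);
  assert_eq!((coeff, next), (1, 0));
}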
diff --git a/crypto/dkg/Cargo.toml b/crypto/dkg/Cargo.toml index cde0d153..39ebb6dc 100644 --- a/crypto/dkg/Cargo.toml +++ b/crypto/dkg/Cargo.toml @@ -37,7 +37,6 @@ schnorr = { package = "schnorr-signatures", path = "../schnorr", version = "^0.5 dleq = { path = "../dleq", version = "^0.4.1", default-features = false } # eVRF DKG dependencies -subtle = { version = "2", default-features = false, features = ["std"], optional = true } generic-array = { version = "1", default-features = false, features = ["alloc"], optional = true } blake2 = { version = "0.10", default-features = false, features = ["std"], optional = true } rand_chacha = { version = "0.3", default-features = false, features = ["std"], optional = true } @@ -82,7 +81,6 @@ borsh = ["dep:borsh"] evrf = [ "std", - "dep:subtle", "dep:generic-array", "dep:blake2", diff --git a/crypto/dkg/src/evrf/proof.rs b/crypto/dkg/src/evrf/proof.rs index ce9c57d1..8eb3ab00 100644 --- a/crypto/dkg/src/evrf/proof.rs +++ b/crypto/dkg/src/evrf/proof.rs @@ -1,6 +1,5 @@ use core::{marker::PhantomData, ops::Deref, fmt}; -use subtle::*; use zeroize::{Zeroize, Zeroizing}; use rand_core::{RngCore, CryptoRng, SeedableRng}; @@ -10,10 +9,7 @@ use generic_array::{typenum::Unsigned, ArrayLength, GenericArray}; use blake2::{Digest, Blake2s256}; use ciphersuite::{ - group::{ - ff::{Field, PrimeField, PrimeFieldBits}, - Group, GroupEncoding, - }, + group::{ff::Field, Group, GroupEncoding}, Ciphersuite, }; @@ -24,7 +20,7 @@ use generalized_bulletproofs::{ }; use generalized_bulletproofs_circuit_abstraction::*; -use ec_divisors::{DivisorCurve, new_divisor}; +use ec_divisors::{DivisorCurve, ScalarDecomposition}; use generalized_bulletproofs_ec_gadgets::*; /// A pair of curves to perform the eVRF with. @@ -309,147 +305,6 @@ impl Evrf { debug_assert!(challenged_generators.next().is_none()); } - /// Convert a scalar to a sequence of coefficients for the polynomial 2**i, where the sum of the - /// coefficients is F::NUM_BITS. - /// - /// Despite the name, the returned coefficients are not guaranteed to be bits (0 or 1). - /// - /// This scalar will presumably be used in a discrete log proof. That requires calculating a - /// divisor which is variable time to the amount of points interpolated. Since the amount of - /// points interpolated is equal to the sum of the coefficients in the polynomial, we need all - /// scalars to have a constant sum of their coefficients (instead of one variable to its bits). - /// - /// We achieve this by finding the highest non-0 coefficient, decrementing it, and increasing the - /// immediately less significant coefficient by 2. This increases the sum of the coefficients by - /// 1 (-1+2=1). - fn scalar_to_bits(scalar: &::F) -> Vec { - let num_bits = u64::from(<::EmbeddedCurve as Ciphersuite>::F::NUM_BITS); - - // Obtain the bits of the private key - let num_bits_usize = usize::try_from(num_bits).unwrap(); - let mut decomposition = vec![0; num_bits_usize]; - for (i, bit) in scalar.to_le_bits().into_iter().take(num_bits_usize).enumerate() { - let bit = u64::from(u8::from(bit)); - decomposition[i] = bit; - } - - // The following algorithm only works if the value of the scalar exceeds num_bits - // If it isn't, we increase it by the modulus such that it does exceed num_bits - { - let mut less_than_num_bits = Choice::from(0); - for i in 0 .. 
num_bits { - less_than_num_bits |= scalar.ct_eq(&::F::from(i)); - } - let mut decomposition_of_modulus = vec![0; num_bits_usize]; - // Decompose negative one - for (i, bit) in (-::F::ONE) - .to_le_bits() - .into_iter() - .take(num_bits_usize) - .enumerate() - { - let bit = u64::from(u8::from(bit)); - decomposition_of_modulus[i] = bit; - } - // Increment it by one - decomposition_of_modulus[0] += 1; - - // Add the decomposition onto the decomposition of the modulus - for i in 0 .. num_bits_usize { - let new_decomposition = <_>::conditional_select( - &decomposition[i], - &(decomposition[i] + decomposition_of_modulus[i]), - less_than_num_bits, - ); - decomposition[i] = new_decomposition; - } - } - - // Calculcate the sum of the coefficients - let mut sum_of_coefficients: u64 = 0; - for decomposition in &decomposition { - sum_of_coefficients += *decomposition; - } - - /* - Now, because we added a log2(k)-bit number to a k-bit number, we may have our sum of - coefficients be *too high*. We attempt to reduce the sum of the coefficients accordingly. - - This algorithm is guaranteed to complete as expected. Take the sequence `222`. `222` becomes - `032` becomes `013`. Even if the next coefficient in the sequence is `2`, the third - coefficient will be reduced once and the next coefficient (`2`, increased to `3`) will only - be eligible for reduction once. This demonstrates, even for a worst case of log2(k) `2`s - followed by `1`s (as possible if the modulus is a Mersenne prime), the log2(k) `2`s can be - reduced as necessary so long as there is a single coefficient after (requiring the entire - sequence be at least of length log2(k) + 1). For a 2-bit number, log2(k) + 1 == 2, so this - holds for any odd prime field. - - To fully type out the demonstration for the Mersenne prime 3, with scalar to encode 1 (the - highest value less than the number of bits): - - 10 - Little-endian bits of 1 - 21 - Little-endian bits of 1, plus the modulus - 02 - After one reduction, where the sum of the coefficients does in fact equal 2 (the target) - */ - { - let mut log2_num_bits = 0; - while (1 << log2_num_bits) < num_bits { - log2_num_bits += 1; - } - - for _ in 0 .. log2_num_bits { - // If the sum of coefficients is the amount of bits, we're done - let mut done = sum_of_coefficients.ct_eq(&num_bits); - - for i in 0 .. (num_bits_usize - 1) { - let should_act = (!done) & decomposition[i].ct_gt(&1); - // Subtract 2 from this coefficient - let amount_to_sub = <_>::conditional_select(&0, &2, should_act); - decomposition[i] -= amount_to_sub; - // Add 1 to the next coefficient - let amount_to_add = <_>::conditional_select(&0, &1, should_act); - decomposition[i + 1] += amount_to_add; - - // Also update the sum of coefficients - sum_of_coefficients -= <_>::conditional_select(&0, &1, should_act); - - // If we updated the coefficients this loop iter, we're done for this loop iter - done |= should_act; - } - } - } - - for _ in 0 .. num_bits { - // If the sum of coefficients is the amount of bits, we're done - let mut done = sum_of_coefficients.ct_eq(&num_bits); - - // Find the highest coefficient currently non-zero - for i in (1 .. 
decomposition.len()).rev() { - // If this is non-zero, we should decrement this coefficient if we haven't already - // decremented a coefficient this round - let is_non_zero = !(0.ct_eq(&decomposition[i])); - let should_act = (!done) & is_non_zero; - - // Update this coefficient and the prior coefficient - let amount_to_sub = <_>::conditional_select(&0, &1, should_act); - decomposition[i] -= amount_to_sub; - - let amount_to_add = <_>::conditional_select(&0, &2, should_act); - // i must be at least 1, so i - 1 will be at least 0 (meaning it's safe to index with) - decomposition[i - 1] += amount_to_add; - - // Also update the sum of coefficients - sum_of_coefficients += <_>::conditional_select(&0, &1, should_act); - - // If we updated the coefficients this loop iter, we're done for this loop iter - done |= should_act; - } - } - debug_assert!(bool::from(decomposition.iter().sum::().ct_eq(&num_bits))); - - decomposition - } - /// Prove a point on an elliptic curve had its discrete logarithm generated via an eVRF. pub(crate) fn prove( rng: &mut (impl RngCore + CryptoRng), @@ -471,11 +326,9 @@ impl Evrf { // A function to calculate a divisor and push it onto the tape // This defines a vec, divisor_points, outside of the fn to reuse its allocation - let mut divisor_points = - Vec::with_capacity((::F::NUM_BITS as usize) + 1); let mut divisor = |vector_commitment_tape: &mut Vec<_>, - dlog: &[u64], + dlog: &ScalarDecomposition<<::EmbeddedCurve as Ciphersuite>::F>, push_generator: bool, generator: <::EmbeddedCurve as Ciphersuite>::G, dh: <::EmbeddedCurve as Ciphersuite>::G| { @@ -484,24 +337,7 @@ impl Evrf { generator_tables.push(GeneratorTable::new(&curve_spec, x, y)); } - { - let mut generator = generator; - for coefficient in dlog { - let mut coefficient = *coefficient; - while coefficient != 0 { - coefficient -= 1; - divisor_points.push(generator); - } - generator = generator.double(); - } - debug_assert_eq!( - dlog.iter().sum::(), - u64::from(::F::NUM_BITS) - ); - } - divisor_points.push(-dh); - let mut divisor = new_divisor(&divisor_points).unwrap().normalize_x_coefficient(); - divisor_points.zeroize(); + let mut divisor = dlog.scalar_mul_divisor(generator).normalize_x_coefficient(); vector_commitment_tape.push(divisor.zero_coefficient); @@ -540,11 +376,12 @@ impl Evrf { let evrf_public_key; let mut actual_coefficients = Vec::with_capacity(coefficients); { - let mut dlog = Self::scalar_to_bits(evrf_private_key); + let dlog = + ScalarDecomposition::<::F>::new(**evrf_private_key); let points = Self::transcript_to_points(transcript, coefficients); // Start by pushing the discrete logarithm onto the tape - for coefficient in &dlog { + for coefficient in dlog.decomposition() { vector_commitment_tape.push(<_>::from(*coefficient)); } @@ -573,8 +410,6 @@ impl Evrf { actual_coefficients.push(res); } debug_assert_eq!(actual_coefficients.len(), coefficients); - - dlog.zeroize(); } // Now do the ECDHs for the encryption @@ -595,14 +430,15 @@ impl Evrf { break; } } - let mut dlog = Self::scalar_to_bits(&ecdh_private_key); + let dlog = + ScalarDecomposition::<::F>::new(ecdh_private_key); let ecdh_commitment = ::generator() * ecdh_private_key; ecdh_commitments.push(ecdh_commitment); ecdh_commitments_xy.last_mut().unwrap()[j] = <::G as DivisorCurve>::to_xy(ecdh_commitment).unwrap(); // Start by pushing the discrete logarithm onto the tape - for coefficient in &dlog { + for coefficient in dlog.decomposition() { vector_commitment_tape.push(<_>::from(*coefficient)); } @@ -625,7 +461,6 @@ impl Evrf { *res += dh_x; 
ecdh_private_key.zeroize(); - dlog.zeroize(); } encryption_masks.push(res); } diff --git a/crypto/evrf/divisors/Cargo.toml b/crypto/evrf/divisors/Cargo.toml index d4e3a2d0..04e820b6 100644 --- a/crypto/evrf/divisors/Cargo.toml +++ b/crypto/evrf/divisors/Cargo.toml @@ -14,9 +14,11 @@ rustdoc-args = ["--cfg", "docsrs"] [dependencies] rand_core = { version = "0.6", default-features = false } -zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } +zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] } -group = "0.13" +subtle = { version = "2", default-features = false, features = ["std"] } +ff = { version = "0.13", default-features = false, features = ["std", "bits"] } +group = { version = "0.13", default-features = false } hex = { version = "0.4", optional = true } dalek-ff-group = { path = "../../dalek-ff-group", features = ["std"], optional = true } diff --git a/crypto/evrf/divisors/src/lib.rs b/crypto/evrf/divisors/src/lib.rs index d71aa8a4..dbeb149f 100644 --- a/crypto/evrf/divisors/src/lib.rs +++ b/crypto/evrf/divisors/src/lib.rs @@ -3,21 +3,24 @@ #![deny(missing_docs)] #![allow(non_snake_case)] +use subtle::{Choice, ConstantTimeEq, ConstantTimeGreater, ConditionallySelectable}; +use zeroize::{Zeroize, ZeroizeOnDrop}; + use group::{ - ff::{Field, PrimeField}, + ff::{Field, PrimeField, PrimeFieldBits}, Group, }; mod poly; -pub use poly::*; +pub use poly::Poly; #[cfg(test)] mod tests; /// A curve usable with this library. -pub trait DivisorCurve: Group { +pub trait DivisorCurve: Group + ConstantTimeEq + ConditionallySelectable { /// An element of the field this curve is defined over. - type FieldElement: PrimeField; + type FieldElement: Zeroize + PrimeField + ConditionallySelectable; /// The A in the curve equation y^2 = x^3 + A x + B. fn a() -> Self::FieldElement; @@ -72,46 +75,89 @@ pub(crate) fn slope_intercept(a: C, b: C) -> (C::FieldElement, } // The line interpolating two points. -fn line(a: C, mut b: C) -> Poly { - // If they're both the point at infinity, we simply set the line to one - if bool::from(a.is_identity() & b.is_identity()) { - return Poly { - y_coefficients: vec![], - yx_coefficients: vec![], - x_coefficients: vec![], - zero_coefficient: C::FieldElement::ONE, - }; +fn line(a: C, b: C) -> Poly { + #[derive(Clone, Copy)] + struct LinesRes { + y_coefficient: F, + x_coefficient: F, + zero_coefficient: F, } + impl ConditionallySelectable for LinesRes { + fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self { + Self { + y_coefficient: <_>::conditional_select(&a.y_coefficient, &b.y_coefficient, choice), + x_coefficient: <_>::conditional_select(&a.x_coefficient, &b.x_coefficient, choice), + zero_coefficient: <_>::conditional_select(&a.zero_coefficient, &b.zero_coefficient, choice), + } + } + } + + let a_is_identity = a.is_identity(); + let b_is_identity = b.is_identity(); + + // If they're both the point at infinity, we simply set the line to one + let both_are_identity = a_is_identity & b_is_identity; + let if_both_are_identity = LinesRes { + y_coefficient: C::FieldElement::ZERO, + x_coefficient: C::FieldElement::ZERO, + zero_coefficient: C::FieldElement::ONE, + }; // If either point is the point at infinity, or these are additive inverses, the line is // `1 * x - x`. The first `x` is a term in the polynomial, the `x` is the `x` coordinate of these // points (of which there is one, as the second point is either at infinity or has a matching `x` // coordinate). 
- if bool::from(a.is_identity() | b.is_identity()) || (a == -b) { - let (x, _) = C::to_xy(if !bool::from(a.is_identity()) { a } else { b }).unwrap(); - return Poly { - y_coefficients: vec![], - yx_coefficients: vec![], - x_coefficients: vec![C::FieldElement::ONE], + let one_is_identity = a_is_identity | b_is_identity; + let additive_inverses = a.ct_eq(&-b); + let one_is_identity_or_additive_inverses = one_is_identity | additive_inverses; + let if_one_is_identity_or_additive_inverses = { + // If both are identity, set `a` to the generator so we can safely evaluate the following + // (which we won't select at the end of this function) + let a = <_>::conditional_select(&a, &C::generator(), both_are_identity); + // If `a` is identity, this selects `b`. If `a` isn't identity, this selects `a` + let non_identity = <_>::conditional_select(&a, &b, a.is_identity()); + let (x, _) = C::to_xy(non_identity).unwrap(); + LinesRes { + y_coefficient: C::FieldElement::ZERO, + x_coefficient: C::FieldElement::ONE, zero_coefficient: -x, - }; - } + } + }; + + // The following calculation assumes neither point is the point at infinity + // If either are, we use a prior result + // To ensure we can calculate a result here, set any points at infinity to the generator + let a = <_>::conditional_select(&a, &C::generator(), a_is_identity); + let b = <_>::conditional_select(&b, &C::generator(), b_is_identity); + // It also assumes a, b aren't additive inverses which is also covered by a prior result + let b = <_>::conditional_select(&b, &a.double(), additive_inverses); // If the points are equal, we use the line interpolating the sum of these points with the point // at infinity - if a == b { - b = -a.double(); - } + let b = <_>::conditional_select(&b, &-a.double(), a.ct_eq(&b)); let (slope, intercept) = slope_intercept::(a, b); // Section 4 of the proofs explicitly state the line `L = y - lambda * x - mu` // y - (slope * x) - intercept - Poly { - y_coefficients: vec![C::FieldElement::ONE], - yx_coefficients: vec![], - x_coefficients: vec![-slope], + let mut res = LinesRes { + y_coefficient: C::FieldElement::ONE, + x_coefficient: -slope, zero_coefficient: -intercept, + }; + + res = <_>::conditional_select( + &res, + &if_one_is_identity_or_additive_inverses, + one_is_identity_or_additive_inverses, + ); + res = <_>::conditional_select(&res, &if_both_are_identity, both_are_identity); + + Poly { + y_coefficients: vec![res.y_coefficient], + yx_coefficients: vec![], + x_coefficients: vec![res.x_coefficient], + zero_coefficient: res.zero_coefficient, } } @@ -121,36 +167,65 @@ fn line(a: C, mut b: C) -> Poly { /// - No points were passed in /// - The points don't sum to the point at infinity /// - A passed in point was the point at infinity +/// +/// If the arguments were valid, this function executes in an amount of time constant to the amount +/// of points. #[allow(clippy::new_ret_no_self)] pub fn new_divisor(points: &[C]) -> Option> { - // A single point is either the point at infinity, or this doesn't sum to the point at infinity - // Both cause us to return None - if points.len() < 2 { - None?; - } - if points.iter().sum::() != C::identity() { + // No points were passed in, this is the point at infinity, or the single point isn't infinity + // and accordingly doesn't sum to infinity.
All three cause us to return None + // Checks a bit other than the first bit is set, meaning this is >= 2 + let mut invalid_args = (points.len() & (!1)).ct_eq(&0); + // The points don't sum to the point at infinity + invalid_args |= !points.iter().sum::().is_identity(); + // A point was the point at infinity + for point in points { + invalid_args |= point.is_identity(); + } + if bool::from(invalid_args) { None?; } + let points_len = points.len(); + // Create the initial set of divisors let mut divs = vec![]; let mut iter = points.iter().copied(); while let Some(a) = iter.next() { - if a == C::identity() { - None?; - } - let b = iter.next(); - if b == Some(C::identity()) { - None?; - } // Draw the line between those points - divs.push((a + b.unwrap_or(C::identity()), line::(a, b.unwrap_or(-a)))); + // These unwraps are branching on the length of the iterator, not violating the constant-time + // priorities desired + divs.push((2, a + b.unwrap_or(C::identity()), line::(a, b.unwrap_or(-a)))); } let modulus = C::divisor_modulus(); + // Our Poly algorithm is leaky and will create an excessive amount of y x**j and x**j + // coefficients which are zero yet, as our implementation is constant time, still come with + // an immense performance cost. This code truncates the coefficients we know are zero. + let trim = |divisor: &mut Poly<_>, points_len: usize| { + // We should only be trimming divisors reduced by the modulus + debug_assert!(divisor.yx_coefficients.len() <= 1); + if divisor.yx_coefficients.len() == 1 { + let truncate_to = ((points_len + 1) / 2).saturating_sub(2); + #[cfg(debug_assertions)] + for p in truncate_to .. divisor.yx_coefficients[0].len() { + debug_assert_eq!(divisor.yx_coefficients[0][p], ::ZERO); + } + divisor.yx_coefficients[0].truncate(truncate_to); + } + { + let truncate_to = points_len / 2; + #[cfg(debug_assertions)] + for p in truncate_to .. divisor.x_coefficients.len() { + debug_assert_eq!(divisor.x_coefficients[p], ::ZERO); + } + divisor.x_coefficients.truncate(truncate_to); + } + }; + // Pair them off until only one remains while divs.len() > 1 { let mut next_divs = vec![]; @@ -159,23 +234,208 @@ pub fn new_divisor(points: &[C]) -> Option(a, b), &modulus); - let denominator = line::(a, -a).mul_mod(line::(b, -b), &modulus); - let (q, r) = numerator.div_rem(&denominator); - assert_eq!(r, Poly::zero()); + let numerator = a_div.mul_mod(&b_div, &modulus).mul_mod(&line::(a, b), &modulus); + let denominator = line::(a, -a).mul_mod(&line::(b, -b), &modulus); + let (mut q, r) = numerator.div_rem(&denominator); + debug_assert_eq!(r, Poly::zero()); - next_divs.push((a + b, q)); + trim(&mut q, 1 + points); + + next_divs.push((points, a + b, q)); } divs = next_divs; } // Return the unified divisor - Some(divs.remove(0).1) + let mut divisor = divs.remove(0).2; + trim(&mut divisor, points_len); + Some(divisor) +} + +/// The decomposition of a scalar. +/// +/// The decomposition ($d$) of a scalar ($s$) has the following two properties: +/// +/// - $\sum^{\mathsf{NUM_BITS} - 1}_{i=0} d_i * 2^i = s$ +/// - $\sum^{\mathsf{NUM_BITS} - 1}_{i=0} d_i = \mathsf{NUM_BITS}$ +#[derive(Clone, Zeroize, ZeroizeOnDrop)] +pub struct ScalarDecomposition { + scalar: F, + decomposition: Vec, +} + +impl ScalarDecomposition { + /// Decompose a scalar. + pub fn new(scalar: F) -> Self { + /* + We need the sum of the coefficients to equal F::NUM_BITS. The scalar's bits will be less than + F::NUM_BITS. Accordingly, we need to increment the sum of the coefficients without + incrementing the scalar represented.
We do this by finding the highest non-0 coefficient, + decrementing it, and increasing the immediately less significant coefficient by 2. This + increases the sum of the coefficients by 1 (-1+2=1). + */ + + let num_bits = u64::from(F::NUM_BITS); + + // Obtain the bits of the scalar + let num_bits_usize = usize::try_from(num_bits).unwrap(); + let mut decomposition = vec![0; num_bits_usize]; + for (i, bit) in scalar.to_le_bits().into_iter().take(num_bits_usize).enumerate() { + let bit = u64::from(u8::from(bit)); + decomposition[i] = bit; + } + + // The following algorithm only works if the value of the scalar exceeds num_bits + // If it doesn't, we increase it by the modulus such that it does exceed num_bits + { + let mut less_than_num_bits = Choice::from(0); + for i in 0 .. num_bits { + less_than_num_bits |= scalar.ct_eq(&F::from(i)); + } + let mut decomposition_of_modulus = vec![0; num_bits_usize]; + // Decompose negative one + for (i, bit) in (-F::ONE).to_le_bits().into_iter().take(num_bits_usize).enumerate() { + let bit = u64::from(u8::from(bit)); + decomposition_of_modulus[i] = bit; + } + // Increment it by one + decomposition_of_modulus[0] += 1; + + // Add the decomposition onto the decomposition of the modulus + for i in 0 .. num_bits_usize { + let new_decomposition = <_>::conditional_select( + &decomposition[i], + &(decomposition[i] + decomposition_of_modulus[i]), + less_than_num_bits, + ); + decomposition[i] = new_decomposition; + } + } + + // Calculate the sum of the coefficients + let mut sum_of_coefficients: u64 = 0; + for decomposition in &decomposition { + sum_of_coefficients += *decomposition; + } + + /* + Now, because we added a log2(k)-bit number to a k-bit number, we may have our sum of + coefficients be *too high*. We attempt to reduce the sum of the coefficients accordingly. + + This algorithm is guaranteed to complete as expected. Take the sequence `222`. `222` becomes + `032` becomes `013`. Even if the next coefficient in the sequence is `2`, the third + coefficient will be reduced once and the next coefficient (`2`, increased to `3`) will only + be eligible for reduction once. This demonstrates, even for a worst case of log2(k) `2`s + followed by `1`s (as possible if the modulus is a Mersenne prime), the log2(k) `2`s can be + reduced as necessary so long as there is a single coefficient after (requiring the entire + sequence be at least of length log2(k) + 1). For a 2-bit number, log2(k) + 1 == 2, so this + holds for any odd prime field. + + To fully type out the demonstration for the Mersenne prime 3, with scalar to encode 1 (the + highest value less than the number of bits): + + 10 - Little-endian bits of 1 + 21 - Little-endian bits of 1, plus the modulus + 02 - After one reduction, where the sum of the coefficients does in fact equal 2 (the target) + */ + { + let mut log2_num_bits = 0; + while (1 << log2_num_bits) < num_bits { + log2_num_bits += 1; + } + + for _ in 0 .. log2_num_bits { + // If the sum of coefficients is the amount of bits, we're done + let mut done = sum_of_coefficients.ct_eq(&num_bits); + + for i in 0 ..
(num_bits_usize - 1) { + let should_act = (!done) & decomposition[i].ct_gt(&1); + // Subtract 2 from this coefficient + let amount_to_sub = <_>::conditional_select(&0, &2, should_act); + decomposition[i] -= amount_to_sub; + // Add 1 to the next coefficient + let amount_to_add = <_>::conditional_select(&0, &1, should_act); + decomposition[i + 1] += amount_to_add; + + // Also update the sum of coefficients + sum_of_coefficients -= <_>::conditional_select(&0, &1, should_act); + + // If we updated the coefficients this loop iter, we're done for this loop iter + done |= should_act; + } + } + } + + for _ in 0 .. num_bits { + // If the sum of coefficients is the amount of bits, we're done + let mut done = sum_of_coefficients.ct_eq(&num_bits); + + // Find the highest coefficient currently non-zero + for i in (1 .. decomposition.len()).rev() { + // If this is non-zero, we should decrement this coefficient if we haven't already + // decremented a coefficient this round + let is_non_zero = !(0.ct_eq(&decomposition[i])); + let should_act = (!done) & is_non_zero; + + // Update this coefficient and the prior coefficient + let amount_to_sub = <_>::conditional_select(&0, &1, should_act); + decomposition[i] -= amount_to_sub; + + let amount_to_add = <_>::conditional_select(&0, &2, should_act); + // i must be at least 1, so i - 1 will be at least 0 (meaning it's safe to index with) + decomposition[i - 1] += amount_to_add; + + // Also update the sum of coefficients + sum_of_coefficients += <_>::conditional_select(&0, &1, should_act); + + // If we updated the coefficients this loop iter, we're done for this loop iter + done |= should_act; + } + } + debug_assert!(bool::from(decomposition.iter().sum::().ct_eq(&num_bits))); + + ScalarDecomposition { scalar, decomposition } + } + + /// The decomposition of the scalar. + pub fn decomposition(&self) -> &[u64] { + &self.decomposition + } + + /// A divisor to prove a scalar multiplication. + /// + /// The divisor will interpolate $d_i$ instances of $2^i \cdot G$ with $-(s \cdot G)$. + /// + /// This function executes in constant time with regards to the scalar. + /// + /// This function MAY panic if this scalar is zero. + pub fn scalar_mul_divisor>( + &self, + mut generator: C, + ) -> Poly { + // The following for loop is constant time to the sum of `dlog`'s elements + let mut divisor_points = + Vec::with_capacity(usize::try_from(::NUM_BITS).unwrap()); + divisor_points.push(-generator * self.scalar); + for coefficient in &self.decomposition { + let mut coefficient = *coefficient; + while coefficient != 0 { + coefficient -= 1; + divisor_points.push(generator); + } + generator = generator.double(); + } + + let res = new_divisor(&divisor_points).unwrap(); + divisor_points.zeroize(); + res + } } #[cfg(any(test, feature = "pasta"))] diff --git a/crypto/evrf/divisors/src/poly.rs b/crypto/evrf/divisors/src/poly.rs index b818433b..0e41bc49 100644 --- a/crypto/evrf/divisors/src/poly.rs +++ b/crypto/evrf/divisors/src/poly.rs @@ -1,25 +1,112 @@ use core::ops::{Add, Neg, Sub, Mul, Rem}; -use zeroize::Zeroize; +use subtle::{Choice, ConstantTimeEq, ConstantTimeGreater, ConditionallySelectable}; +use zeroize::{Zeroize, ZeroizeOnDrop}; use group::ff::PrimeField; -/// A structure representing a Polynomial with x**i, y**i, and y**i * x**j terms. 
-#[derive(Clone, PartialEq, Eq, Debug, Zeroize)] -pub struct Poly> { - /// c[i] * y ** (i + 1) +#[derive(Clone, Copy, PartialEq, Debug)] +struct CoefficientIndex { + y_pow: u64, + x_pow: u64, +} +impl ConditionallySelectable for CoefficientIndex { + fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self { + Self { + y_pow: <_>::conditional_select(&a.y_pow, &b.y_pow, choice), + x_pow: <_>::conditional_select(&a.x_pow, &b.x_pow, choice), + } + } +} +impl ConstantTimeEq for CoefficientIndex { + fn ct_eq(&self, other: &Self) -> Choice { + self.y_pow.ct_eq(&other.y_pow) & self.x_pow.ct_eq(&other.x_pow) + } +} +impl ConstantTimeGreater for CoefficientIndex { + fn ct_gt(&self, other: &Self) -> Choice { + self.y_pow.ct_gt(&other.y_pow) | + (self.y_pow.ct_eq(&other.y_pow) & self.x_pow.ct_gt(&other.x_pow)) + } +} + +/// A structure representing a Polynomial with x^i, y^i, and y^i * x^j terms. +#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)] +pub struct Poly + Zeroize + PrimeField> { + /// c\[i] * y^(i + 1) pub y_coefficients: Vec, - /// c[i][j] * y ** (i + 1) x ** (j + 1) + /// c\[i]\[j] * y^(i + 1) x^(j + 1) pub yx_coefficients: Vec>, - /// c[i] * x ** (i + 1) + /// c\[i] * x^(i + 1) pub x_coefficients: Vec, - /// Coefficient for x ** 0, y ** 0, and x ** 0 y ** 0 (the coefficient for 1) + /// Coefficient for x^0, y^0, and x^0 y^0 (the coefficient for 1) pub zero_coefficient: F, } -impl> Poly { +impl + Zeroize + PrimeField> PartialEq for Poly { + // This is not constant time and is not meant to be + fn eq(&self, b: &Poly) -> bool { + { + let mutual_y_coefficients = self.y_coefficients.len().min(b.y_coefficients.len()); + if self.y_coefficients[.. mutual_y_coefficients] != b.y_coefficients[.. mutual_y_coefficients] + { + return false; + } + for coeff in &self.y_coefficients[mutual_y_coefficients ..] { + if *coeff != F::ZERO { + return false; + } + } + for coeff in &b.y_coefficients[mutual_y_coefficients ..] { + if *coeff != F::ZERO { + return false; + } + } + } + + { + for (i, yx_coeffs) in self.yx_coefficients.iter().enumerate() { + for (j, coeff) in yx_coeffs.iter().enumerate() { + if coeff != b.yx_coefficients.get(i).unwrap_or(&vec![]).get(j).unwrap_or(&F::ZERO) { + return false; + } + } + } + // Run from the other perspective in case other is longer than self + for (i, yx_coeffs) in b.yx_coefficients.iter().enumerate() { + for (j, coeff) in yx_coeffs.iter().enumerate() { + if coeff != self.yx_coefficients.get(i).unwrap_or(&vec![]).get(j).unwrap_or(&F::ZERO) { + return false; + } + } + } + } + + { + let mutual_x_coefficients = self.x_coefficients.len().min(b.x_coefficients.len()); + if self.x_coefficients[.. mutual_x_coefficients] != b.x_coefficients[.. mutual_x_coefficients] + { + return false; + } + for coeff in &self.x_coefficients[mutual_x_coefficients ..] { + if *coeff != F::ZERO { + return false; + } + } + for coeff in &b.x_coefficients[mutual_x_coefficients ..] { + if *coeff != F::ZERO { + return false; + } + } + } + + self.zero_coefficient == b.zero_coefficient + } +} + +impl + Zeroize + PrimeField> Poly { /// A polynomial for zero. - pub fn zero() -> Self { + pub(crate) fn zero() -> Self { Poly { y_coefficients: vec![], yx_coefficients: vec![], @@ -27,37 +114,9 @@ impl> Poly { zero_coefficient: F::ZERO, } } - - /// The amount of terms in the polynomial. 
- #[allow(clippy::len_without_is_empty)] - #[must_use] - pub fn len(&self) -> usize { - self.y_coefficients.len() + - self.yx_coefficients.iter().map(Vec::len).sum::() + - self.x_coefficients.len() + - usize::from(u8::from(self.zero_coefficient != F::ZERO)) - } - - // Remove high-order zero terms, allowing the length of the vectors to equal the amount of terms. - pub(crate) fn tidy(&mut self) { - let tidy = |vec: &mut Vec| { - while vec.last() == Some(&F::ZERO) { - vec.pop(); - } - }; - - tidy(&mut self.y_coefficients); - for vec in self.yx_coefficients.iter_mut() { - tidy(vec); - } - while self.yx_coefficients.last() == Some(&vec![]) { - self.yx_coefficients.pop(); - } - tidy(&mut self.x_coefficients); - } } -impl> Add<&Self> for Poly { +impl + Zeroize + PrimeField> Add<&Self> for Poly { type Output = Self; fn add(mut self, other: &Self) -> Self { @@ -91,12 +150,11 @@ impl> Add<&Self> for Poly { } self.zero_coefficient += other.zero_coefficient; - self.tidy(); self } } -impl> Neg for Poly { +impl + Zeroize + PrimeField> Neg for Poly { type Output = Self; fn neg(mut self) -> Self { @@ -117,7 +175,7 @@ impl> Neg for Poly { } } -impl> Sub for Poly { +impl + Zeroize + PrimeField> Sub for Poly { type Output = Self; fn sub(self, other: Self) -> Self { @@ -125,14 +183,10 @@ impl> Sub for Poly { } } -impl> Mul for Poly { +impl + Zeroize + PrimeField> Mul for Poly { type Output = Self; fn mul(mut self, scalar: F) -> Self { - if scalar == F::ZERO { - return Poly::zero(); - } - for y_coeff in self.y_coefficients.iter_mut() { *y_coeff *= scalar; } @@ -149,7 +203,7 @@ impl> Mul for Poly { } } -impl> Poly { +impl + Zeroize + PrimeField> Poly { #[must_use] fn shift_by_x(mut self, power_of_x: usize) -> Self { if power_of_x == 0 { @@ -203,17 +257,17 @@ impl> Poly { self.zero_coefficient = F::ZERO; // Move the x coefficients - self.yx_coefficients[power_of_y - 1] = self.x_coefficients; + std::mem::swap(&mut self.yx_coefficients[power_of_y - 1], &mut self.x_coefficients); self.x_coefficients = vec![]; self } } -impl> Mul for Poly { +impl + Zeroize + PrimeField> Mul<&Poly> for Poly { type Output = Self; - fn mul(self, other: Self) -> Self { + fn mul(self, other: &Self) -> Self { let mut res = self.clone() * other.zero_coefficient; for (i, y_coeff) in other.y_coefficients.iter().enumerate() { @@ -233,94 +287,320 @@ impl> Mul for Poly { res = res + &scaled.shift_by_x(i + 1); } - res.tidy(); res } } -impl> Poly { +impl + Zeroize + PrimeField> Poly { + // The leading y coefficient and associated x coefficient. + fn leading_coefficient(&self) -> (usize, usize) { + if self.y_coefficients.len() > self.yx_coefficients.len() { + (self.y_coefficients.len(), 0) + } else if !self.yx_coefficients.is_empty() { + (self.yx_coefficients.len(), self.yx_coefficients.last().unwrap().len()) + } else { + (0, self.x_coefficients.len()) + } + } + + /// Returns the highest non-zero coefficient greater than or equal to the specified coefficient. + /// + /// If no non-zero coefficient is greater than or equal to the specified coefficient, this will return + /// (0, 0).
+ fn greater_than_or_equal_coefficient( + &self, + greater_than_or_equal: &CoefficientIndex, + ) -> CoefficientIndex { + let mut leading_coefficient = CoefficientIndex { y_pow: 0, x_pow: 0 }; + for (y_pow_sub_one, coeff) in self.y_coefficients.iter().enumerate() { + let y_pow = u64::try_from(y_pow_sub_one + 1).unwrap(); + let coeff_is_non_zero = !coeff.is_zero(); + let potential = CoefficientIndex { y_pow, x_pow: 0 }; + leading_coefficient = <_>::conditional_select( + &leading_coefficient, + &potential, + coeff_is_non_zero & + potential.ct_gt(&leading_coefficient) & + (potential.ct_gt(greater_than_or_equal) | potential.ct_eq(greater_than_or_equal)), + ); + } + for (y_pow_sub_one, yx_coefficients) in self.yx_coefficients.iter().enumerate() { + let y_pow = u64::try_from(y_pow_sub_one + 1).unwrap(); + for (x_pow_sub_one, coeff) in yx_coefficients.iter().enumerate() { + let x_pow = u64::try_from(x_pow_sub_one + 1).unwrap(); + let coeff_is_non_zero = !coeff.is_zero(); + let potential = CoefficientIndex { y_pow, x_pow }; + leading_coefficient = <_>::conditional_select( + &leading_coefficient, + &potential, + coeff_is_non_zero & + potential.ct_gt(&leading_coefficient) & + (potential.ct_gt(greater_than_or_equal) | potential.ct_eq(greater_than_or_equal)), + ); + } + } + for (x_pow_sub_one, coeff) in self.x_coefficients.iter().enumerate() { + let x_pow = u64::try_from(x_pow_sub_one + 1).unwrap(); + let coeff_is_non_zero = !coeff.is_zero(); + let potential = CoefficientIndex { y_pow: 0, x_pow }; + leading_coefficient = <_>::conditional_select( + &leading_coefficient, + &potential, + coeff_is_non_zero & + potential.ct_gt(&leading_coefficient) & + (potential.ct_gt(greater_than_or_equal) | potential.ct_eq(greater_than_or_equal)), + ); + } + leading_coefficient + } + /// Perform multiplication mod `modulus`. #[must_use] - pub fn mul_mod(self, other: Self, modulus: &Self) -> Self { - ((self % modulus) * (other % modulus)) % modulus + pub(crate) fn mul_mod(self, other: &Self, modulus: &Self) -> Self { + (self * other) % modulus } /// Perform division, returning the result and remainder. /// - /// Panics upon division by zero, with undefined behavior if a non-tidy divisor is used. + /// This function is constant time to the structure of the numerator and denominator. The actual + /// value of the coefficients will not introduce timing differences. + /// + /// Panics upon division by a polynomial where all coefficients are zero. #[must_use] - pub fn div_rem(self, divisor: &Self) -> (Self, Self) { - // The leading y coefficient and associated x coefficient. 
- let leading_y = |poly: &Self| -> (_, _) { - if poly.y_coefficients.len() > poly.yx_coefficients.len() { - (poly.y_coefficients.len(), 0) - } else if !poly.yx_coefficients.is_empty() { - (poly.yx_coefficients.len(), poly.yx_coefficients.last().unwrap().len()) - } else { - (0, poly.x_coefficients.len()) + pub(crate) fn div_rem(self, denominator: &Self) -> (Self, Self) { + // These functions have undefined behavior if this isn't a valid index for this poly + fn ct_get + Zeroize + PrimeField>( + poly: &Poly, + index: CoefficientIndex, + ) -> F { + let mut res = poly.zero_coefficient; + for (y_pow_sub_one, coeff) in poly.y_coefficients.iter().enumerate() { + res = <_>::conditional_select(&res, coeff, index.ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: 0 })); } - }; - - let (div_y, div_x) = leading_y(divisor); - // If this divisor is actually a scalar, don't perform long division - if (div_y == 0) && (div_x == 0) { - return (self * divisor.zero_coefficient.invert().unwrap(), Poly::zero()); + for (y_pow_sub_one, coeffs) in poly.yx_coefficients.iter().enumerate() { + for (x_pow_sub_one, coeff) in coeffs.iter().enumerate() { + res = <_>::conditional_select(&res, coeff, index.ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: (x_pow_sub_one + 1).try_into().unwrap() })); + } + } + for (x_pow_sub_one, coeff) in poly.x_coefficients.iter().enumerate() { + res = <_>::conditional_select(&res, coeff, index.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: (x_pow_sub_one + 1).try_into().unwrap() })); + } + res } - // Remove leading terms until the value is less than the divisor - let mut quotient: Poly = Poly::zero(); - let mut remainder = self.clone(); - loop { - // If there's nothing left to divide, return - if remainder == Poly::zero() { - break; + fn ct_set + Zeroize + PrimeField>( + poly: &mut Poly, + index: CoefficientIndex, + value: F, + ) { + for (y_pow_sub_one, coeff) in poly.y_coefficients.iter_mut().enumerate() { + *coeff = <_>::conditional_select(coeff, &value, index.ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: 0 })); } - - let (rem_y, rem_x) = leading_y(&remainder); - if (rem_y < div_y) || (rem_x < div_x) { - break; + for (y_pow_sub_one, coeffs) in poly.yx_coefficients.iter_mut().enumerate() { + for (x_pow_sub_one, coeff) in coeffs.iter_mut().enumerate() { + *coeff = <_>::conditional_select(coeff, &value, index.ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: (x_pow_sub_one + 1).try_into().unwrap() })); + } } + for (x_pow_sub_one, coeff) in poly.x_coefficients.iter_mut().enumerate() { + *coeff = <_>::conditional_select(coeff, &value, index.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: (x_pow_sub_one + 1).try_into().unwrap() })); + } + poly.zero_coefficient = <_>::conditional_select(&poly.zero_coefficient, &value, index.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: 0 })); + } - let get = |poly: &Poly, y_pow: usize, x_pow: usize| -> F { - if (y_pow == 0) && (x_pow == 0) { - poly.zero_coefficient - } else if x_pow == 0 { - poly.y_coefficients[y_pow - 1] - } else if y_pow == 0 { - poly.x_coefficients[x_pow - 1] - } else { - poly.yx_coefficients[y_pow - 1][x_pow - 1] + fn conditional_select_poly + Zeroize + PrimeField>( + mut a: Poly, + mut b: Poly, + choice: Choice, + ) -> Poly { + let pad_to = |a: &mut Poly, b: &Poly| { + while a.x_coefficients.len() < b.x_coefficients.len() { + a.x_coefficients.push(F::ZERO); + } + while a.yx_coefficients.len() < b.yx_coefficients.len() { + 
a.yx_coefficients.push(vec![]); + } + for (a, b) in a.yx_coefficients.iter_mut().zip(&b.yx_coefficients) { + while a.len() < b.len() { + a.push(F::ZERO); + } + } + while a.y_coefficients.len() < b.y_coefficients.len() { + a.y_coefficients.push(F::ZERO); } }; - let coeff_numerator = get(&remainder, rem_y, rem_x); - let coeff_denominator = get(divisor, div_y, div_x); + // Pad these to be the same size/layout as each other + pad_to(&mut a, &b); + pad_to(&mut b, &a); - // We want coeff_denominator scaled by x to equal coeff_numerator - // x * d = n - // n / d = x - let mut quotient_term = Poly::zero(); - // Because this is the coefficient for the leading term of a tidied polynomial, it must be - // non-zero - quotient_term.zero_coefficient = coeff_numerator * coeff_denominator.invert().unwrap(); + let mut res = Poly::zero(); + for (a, b) in a.y_coefficients.iter().zip(&b.y_coefficients) { + res.y_coefficients.push(<_>::conditional_select(a, b, choice)); + } + for (a, b) in a.yx_coefficients.iter().zip(&b.yx_coefficients) { + let mut yx_coefficients = Vec::with_capacity(a.len()); + for (a, b) in a.iter().zip(b) { + yx_coefficients.push(<_>::conditional_select(a, b, choice)) + } + res.yx_coefficients.push(yx_coefficients); + } + for (a, b) in a.x_coefficients.iter().zip(&b.x_coefficients) { + res.x_coefficients.push(<_>::conditional_select(a, b, choice)); + } + res.zero_coefficient = <_>::conditional_select(&a.zero_coefficient, &b.zero_coefficient, choice); - // Add the necessary yx powers - let delta_y = rem_y - div_y; - let delta_x = rem_x - div_x; - let quotient_term = quotient_term.shift_by_y(delta_y).shift_by_x(delta_x); - - let to_remove = quotient_term.clone() * divisor.clone(); - debug_assert_eq!(get(&to_remove, rem_y, rem_x), coeff_numerator); - - remainder = remainder - to_remove; - quotient = quotient + &quotient_term; + res + } + + // The following long division algorithm only works if the denominator actually has a variable + // If the denominator isn't variable to anything, short-circuit to scalar 'division' + // This is safe as `leading_coefficient` is based on the structure, not the values, of the poly + let denominator_leading_coefficient = denominator.leading_coefficient(); + if denominator_leading_coefficient == (0, 0) { + return (self * denominator.zero_coefficient.invert().unwrap(), Poly::zero()); + } + + // The structure of the quotient, which is the numerator with all coefficients set to 0 + let mut quotient_structure = Poly { + y_coefficients: vec![F::ZERO; self.y_coefficients.len()], + yx_coefficients: self.yx_coefficients.clone(), + x_coefficients: vec![F::ZERO; self.x_coefficients.len()], + zero_coefficient: F::ZERO, + }; + for coeff in quotient_structure + .yx_coefficients + .iter_mut() + .flat_map(|yx_coefficients| yx_coefficients.iter_mut()) + { + *coeff = F::ZERO; + } + + // Calculate the amount of iterations we need to perform + let iterations = self.y_coefficients.len() + + self.yx_coefficients.iter().map(|yx_coefficients| yx_coefficients.len()).sum::() + + self.x_coefficients.len(); + + // Find the highest non-zero coefficient in the denominator + // This is the coefficient which we actually perform division with + let denominator_dividing_coefficient = + denominator.greater_than_or_equal_coefficient(&CoefficientIndex { y_pow: 0, x_pow: 0 }); + let denominator_dividing_coefficient_inv = + ct_get(denominator, denominator_dividing_coefficient).invert().unwrap(); + + let mut quotient = quotient_structure.clone(); + let mut remainder = self.clone(); + for _ in 0 ..
iterations { + // Find the numerator coefficient we're clearing + // This will be (0, 0) if we aren't clearing a coefficient + let numerator_coefficient = + remainder.greater_than_or_equal_coefficient(&denominator_dividing_coefficient); + + // We only apply the effects of this iteration if the numerator's coefficient is actually >= + let meaningful_iteration = numerator_coefficient.ct_gt(&denominator_dividing_coefficient) | + numerator_coefficient.ct_eq(&denominator_dividing_coefficient); + + // 1) Find the scalar `q` such that the leading coefficient of `q * denominator` is equal to + // the leading coefficient of self. + let numerator_coefficient_value = ct_get(&remainder, numerator_coefficient); + let q = numerator_coefficient_value * denominator_dividing_coefficient_inv; + + // 2) Calculate the full term of the quotient by scaling with the necessary powers of y/x + let proper_powers_of_yx = CoefficientIndex { + y_pow: numerator_coefficient.y_pow.wrapping_sub(denominator_dividing_coefficient.y_pow), + x_pow: numerator_coefficient.x_pow.wrapping_sub(denominator_dividing_coefficient.x_pow), + }; + let fallback_powers_of_yx = CoefficientIndex { y_pow: 0, x_pow: 0 }; + let mut quotient_term = quotient_structure.clone(); + ct_set( + &mut quotient_term, + // If the numerator coefficient isn't >=, proper_powers_of_yx will have garbage in them + <_>::conditional_select(&fallback_powers_of_yx, &proper_powers_of_yx, meaningful_iteration), + q, + ); + + let quotient_if_meaningful = quotient.clone() + &quotient_term; + quotient = conditional_select_poly(quotient, quotient_if_meaningful, meaningful_iteration); + + // 3) Remove what we've divided out from self + let remainder_if_meaningful = remainder.clone() - (quotient_term * denominator); + remainder = + conditional_select_poly(remainder, remainder_if_meaningful, meaningful_iteration); + } + + quotient = conditional_select_poly( + quotient, + // If the dividing coefficient was for y**0 x**0, we return the poly scaled by its inverse + self.clone() * denominator_dividing_coefficient_inv, + denominator_dividing_coefficient.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: 0 }), + ); + remainder = conditional_select_poly( + remainder, + // If the dividing coefficient was for y**0 x**0, we're able to perfectly divide and there's + // no remainder + Poly::zero(), + denominator_dividing_coefficient.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: 0 }), + ); + + // Clear any junk terms out of the remainder which are less than the denominator + let denominator_leading_coefficient = CoefficientIndex { + y_pow: denominator_leading_coefficient.0.try_into().unwrap(), + x_pow: denominator_leading_coefficient.1.try_into().unwrap(), + }; + if denominator_leading_coefficient != (CoefficientIndex { y_pow: 0, x_pow: 0 }) { + while { + let index = + CoefficientIndex { y_pow: remainder.y_coefficients.len().try_into().unwrap(), x_pow: 0 }; + bool::from( + index.ct_gt(&denominator_leading_coefficient) | + index.ct_eq(&denominator_leading_coefficient), + ) + } { + let popped = remainder.y_coefficients.pop(); + debug_assert_eq!(popped, Some(F::ZERO)); + } + while { + let index = CoefficientIndex { + y_pow: remainder.yx_coefficients.len().try_into().unwrap(), + x_pow: remainder + .yx_coefficients + .last() + .map(|yx_coefficients| yx_coefficients.len()) + .unwrap_or(0) + .try_into() + .unwrap(), + }; + bool::from( + index.ct_gt(&denominator_leading_coefficient) | + index.ct_eq(&denominator_leading_coefficient), + ) + } { + let popped = remainder.yx_coefficients.last_mut().unwrap().pop();
// This may have been `vec![]` + if let Some(popped) = popped { + debug_assert_eq!(popped, F::ZERO); + } + if remainder.yx_coefficients.last().unwrap().is_empty() { + let popped = remainder.yx_coefficients.pop(); + debug_assert_eq!(popped, Some(vec![])); + } + } + while { + let index = + CoefficientIndex { y_pow: 0, x_pow: remainder.x_coefficients.len().try_into().unwrap() }; + bool::from( + index.ct_gt(&denominator_leading_coefficient) | + index.ct_eq(&denominator_leading_coefficient), + ) + } { + let popped = remainder.x_coefficients.pop(); + debug_assert_eq!(popped, Some(F::ZERO)); + } } - debug_assert_eq!((quotient.clone() * divisor.clone()) + &remainder, self); (quotient, remainder) } } -impl> Rem<&Self> for Poly { +impl + Zeroize + PrimeField> Rem<&Self> for Poly { type Output = Self; fn rem(self, modulus: &Self) -> Self { @@ -328,10 +608,10 @@ impl> Rem<&Self> for Poly { } } -impl> Poly { +impl + Zeroize + PrimeField> Poly { /// Evaluate this polynomial with the specified x/y values. /// - /// Panics on polynomials with terms whose powers exceed 2**64. + /// Panics on polynomials with terms whose powers exceed 2^64. #[must_use] pub fn eval(&self, x: F, y: F) -> F { let mut res = self.zero_coefficient; @@ -358,14 +638,11 @@ impl> Poly { res } - /// Differentiate a polynomial, reduced by a modulus with a leading y term y**2 x**0, by x and y. + /// Differentiate a polynomial, reduced by a modulus with a leading y term y^2 x^0, by x and y. /// - /// This function panics if a y**2 term is present within the polynomial. + /// This function has undefined behavior if unreduced. #[must_use] pub fn differentiate(&self) -> (Poly, Poly) { - assert!(self.y_coefficients.len() <= 1); - assert!(self.yx_coefficients.len() <= 1); - // Differentation by x practically involves: // - Dropping everything without an x component // - Shifting everything down a power of x @@ -391,17 +668,18 @@ impl> Poly { if !self.yx_coefficients.is_empty() { let mut yx_coeffs = self.yx_coefficients[0].clone(); - diff_x.y_coefficients = vec![yx_coeffs.remove(0)]; - diff_x.yx_coefficients = vec![yx_coeffs]; + if !yx_coeffs.is_empty() { + diff_x.y_coefficients = vec![yx_coeffs.remove(0)]; + diff_x.yx_coefficients = vec![yx_coeffs]; - let mut prior_x_power = F::from(2); - for yx_coeff in &mut diff_x.yx_coefficients[0] { - *yx_coeff *= prior_x_power; - prior_x_power += F::ONE; + let mut prior_x_power = F::from(2); + for yx_coeff in &mut diff_x.yx_coefficients[0] { + *yx_coeff *= prior_x_power; + prior_x_power += F::ONE; + } } } - diff_x.tidy(); diff_x }; diff --git a/crypto/evrf/divisors/src/tests/mod.rs b/crypto/evrf/divisors/src/tests/mod.rs index bd8de441..c7c95567 100644 --- a/crypto/evrf/divisors/src/tests/mod.rs +++ b/crypto/evrf/divisors/src/tests/mod.rs @@ -6,6 +6,8 @@ use pasta_curves::{Ep, Eq}; use crate::{DivisorCurve, Poly, new_divisor}; +mod poly; + // Equation 4 in the security proofs fn check_divisor(points: Vec) { // Create the divisor @@ -184,16 +186,16 @@ fn test_subset_sum_to_infinity() { #[test] fn test_divisor_pallas() { - test_divisor::(); test_same_point::(); test_subset_sum_to_infinity::(); + test_divisor::(); } #[test] fn test_divisor_vesta() { - test_divisor::(); test_same_point::(); test_subset_sum_to_infinity::(); + test_divisor::(); } #[test] @@ -229,7 +231,7 @@ fn test_divisor_ed25519() { } } - test_divisor::(); test_same_point::(); test_subset_sum_to_infinity::(); + test_divisor::(); } diff --git a/crypto/evrf/divisors/src/tests/poly.rs b/crypto/evrf/divisors/src/tests/poly.rs index 
c630a69e..63f73a96 100644 --- a/crypto/evrf/divisors/src/tests/poly.rs +++ b/crypto/evrf/divisors/src/tests/poly.rs @@ -1,3 +1,5 @@ +use rand_core::OsRng; + use group::ff::Field; use pasta_curves::Ep; @@ -16,7 +18,24 @@ fn test_poly() { let mut modulus = Poly::zero(); modulus.y_coefficients = vec![one]; - assert_eq!(poly % &modulus, Poly::zero()); + assert_eq!( + poly.clone().div_rem(&modulus).0, + Poly { + y_coefficients: vec![one], + yx_coefficients: vec![], + x_coefficients: vec![], + zero_coefficient: zero + } + ); + assert_eq!( + poly % &modulus, + Poly { + y_coefficients: vec![], + yx_coefficients: vec![], + x_coefficients: vec![], + zero_coefficient: zero + } + ); } { @@ -25,7 +44,7 @@ fn test_poly() { let mut squared = Poly::zero(); squared.y_coefficients = vec![zero, zero, zero, one]; - assert_eq!(poly.clone() * poly.clone(), squared); + assert_eq!(poly.clone() * &poly, squared); } { @@ -37,18 +56,18 @@ fn test_poly() { let mut res = Poly::zero(); res.zero_coefficient = F::from(6u64); - assert_eq!(a.clone() * b.clone(), res); + assert_eq!(a.clone() * &b, res); b.y_coefficients = vec![F::from(4u64)]; res.y_coefficients = vec![F::from(8u64)]; - assert_eq!(a.clone() * b.clone(), res); - assert_eq!(b.clone() * a.clone(), res); + assert_eq!(a.clone() * &b, res); + assert_eq!(b.clone() * &a, res); a.x_coefficients = vec![F::from(5u64)]; res.x_coefficients = vec![F::from(15u64)]; res.yx_coefficients = vec![vec![F::from(20u64)]]; - assert_eq!(a.clone() * b.clone(), res); - assert_eq!(b * a.clone(), res); + assert_eq!(a.clone() * &b, res); + assert_eq!(b * &a, res); // res is now 20xy + 8*y + 15*x + 6 // res ** 2 = @@ -60,7 +79,7 @@ fn test_poly() { vec![vec![F::from(480u64), F::from(600u64)], vec![F::from(320u64), F::from(400u64)]]; squared.x_coefficients = vec![F::from(180u64), F::from(225u64)]; squared.zero_coefficient = F::from(36u64); - assert_eq!(res.clone() * res, squared); + assert_eq!(res.clone() * &res, squared); } } From b3e003bd5dd67422ec8b6e7bd926748f2289ba1f Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 25 Sep 2024 10:22:49 -0400 Subject: [PATCH 182/368] cargo +nightly fmt --- crypto/evrf/divisors/src/poly.rs | 63 +++++++++++++++++++++++++------- 1 file changed, 49 insertions(+), 14 deletions(-) diff --git a/crypto/evrf/divisors/src/poly.rs b/crypto/evrf/divisors/src/poly.rs index 0e41bc49..8d99aef2 100644 --- a/crypto/evrf/divisors/src/poly.rs +++ b/crypto/evrf/divisors/src/poly.rs @@ -369,21 +369,35 @@ impl + Zeroize + PrimeField> Poly { #[must_use] pub(crate) fn div_rem(self, denominator: &Self) -> (Self, Self) { // These functions have undefined behavior if this isn't a valid index for this poly - fn ct_get + Zeroize + PrimeField>( - poly: &Poly, - index: CoefficientIndex, - ) -> F { + fn ct_get + Zeroize + PrimeField>(poly: &Poly, index: CoefficientIndex) -> F { let mut res = poly.zero_coefficient; for (y_pow_sub_one, coeff) in poly.y_coefficients.iter().enumerate() { - res = <_>::conditional_select(&res, coeff, index.ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: 0 })); + res = <_>::conditional_select( + &res, + coeff, + index + .ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: 0 }), + ); } for (y_pow_sub_one, coeffs) in poly.yx_coefficients.iter().enumerate() { for (x_pow_sub_one, coeff) in coeffs.iter().enumerate() { - res = <_>::conditional_select(&res, coeff, index.ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: (x_pow_sub_one + 1).try_into().unwrap() })); + 
res = <_>::conditional_select( + &res, + coeff, + index.ct_eq(&CoefficientIndex { + y_pow: (y_pow_sub_one + 1).try_into().unwrap(), + x_pow: (x_pow_sub_one + 1).try_into().unwrap(), + }), + ); } } for (x_pow_sub_one, coeff) in poly.x_coefficients.iter().enumerate() { - res = <_>::conditional_select(&res, coeff, index.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: (x_pow_sub_one + 1).try_into().unwrap() })); + res = <_>::conditional_select( + &res, + coeff, + index + .ct_eq(&CoefficientIndex { y_pow: 0, x_pow: (x_pow_sub_one + 1).try_into().unwrap() }), + ); } res } @@ -394,17 +408,38 @@ impl + Zeroize + PrimeField> Poly { value: F, ) { for (y_pow_sub_one, coeff) in poly.y_coefficients.iter_mut().enumerate() { - *coeff = <_>::conditional_select(coeff, &value, index.ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: 0 })); + *coeff = <_>::conditional_select( + coeff, + &value, + index + .ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: 0 }), + ); } for (y_pow_sub_one, coeffs) in poly.yx_coefficients.iter_mut().enumerate() { for (x_pow_sub_one, coeff) in coeffs.iter_mut().enumerate() { - *coeff = <_>::conditional_select(coeff, &value, index.ct_eq(&CoefficientIndex { y_pow: (y_pow_sub_one + 1).try_into().unwrap(), x_pow: (x_pow_sub_one + 1).try_into().unwrap() })); + *coeff = <_>::conditional_select( + coeff, + &value, + index.ct_eq(&CoefficientIndex { + y_pow: (y_pow_sub_one + 1).try_into().unwrap(), + x_pow: (x_pow_sub_one + 1).try_into().unwrap(), + }), + ); } } for (x_pow_sub_one, coeff) in poly.x_coefficients.iter_mut().enumerate() { - *coeff = <_>::conditional_select(coeff, &value, index.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: (x_pow_sub_one + 1).try_into().unwrap() })); + *coeff = <_>::conditional_select( + coeff, + &value, + index + .ct_eq(&CoefficientIndex { y_pow: 0, x_pow: (x_pow_sub_one + 1).try_into().unwrap() }), + ); } - poly.zero_coefficient = <_>::conditional_select(&poly.zero_coefficient, &value, index.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: 0 })); + poly.zero_coefficient = <_>::conditional_select( + &poly.zero_coefficient, + &value, + index.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: 0 }), + ); } fn conditional_select_poly + Zeroize + PrimeField>( @@ -446,7 +481,8 @@ impl + Zeroize + PrimeField> Poly { for (a, b) in a.x_coefficients.iter().zip(&b.x_coefficients) { res.x_coefficients.push(<_>::conditional_select(a, b, choice)); } - res.zero_coefficient = <_>::conditional_select(&a.zero_coefficient, &b.zero_coefficient, choice); + res.zero_coefficient = + <_>::conditional_select(&a.zero_coefficient, &b.zero_coefficient, choice); res } @@ -522,8 +558,7 @@ impl + Zeroize + PrimeField> Poly { // 3) Remove what we've divided out from self let remainder_if_meaningful = remainder.clone() - (quotient_term * denominator); - remainder = - conditional_select_poly(remainder, remainder_if_meaningful, meaningful_iteration); + remainder = conditional_select_poly(remainder, remainder_if_meaningful, meaningful_iteration); } quotient = conditional_select_poly( From 2aee21e5075dc509b84991048d66c09342e4def6 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 29 Sep 2024 04:19:16 -0400 Subject: [PATCH 183/368] Fix decomposition -> divisor points vartime due to branch prediction/cache rules --- crypto/evrf/divisors/src/lib.rs | 43 +++++++++++++++++++++++++++------ 1 file changed, 36 insertions(+), 7 deletions(-) diff --git a/crypto/evrf/divisors/src/lib.rs b/crypto/evrf/divisors/src/lib.rs index dbeb149f..f4bcc927 100644 --- 
a/crypto/evrf/divisors/src/lib.rs +++ b/crypto/evrf/divisors/src/lib.rs @@ -410,7 +410,7 @@ impl ScalarDecomposition { /// A divisor to prove a scalar multiplication. /// - /// The divisor will interpolate $d_i$ instances of $2^i \cdot G$ with $-(s \cdot G)$. + /// The divisor will interpolate $-(s \cdot G)$ with $d_i$ instances of $2^i \cdot G$. /// /// This function executes in constant time with regards to the scalar. /// @@ -419,19 +419,48 @@ impl ScalarDecomposition { &self, mut generator: C, ) -> Poly { - // The following for loop is constant time to the sum of `dlog`'s elements + // 1 is used for the resulting point, NUM_BITS is used for the decomposition, and then we store + // one additional index in a usize for the points we shouldn't write at all (hence the +2) + let _ = usize::try_from(::NUM_BITS + 2) + .expect("NUM_BITS + 2 didn't fit in usize") let mut divisor_points = - Vec::with_capacity(usize::try_from(::NUM_BITS).unwrap()); - divisor_points.push(-generator * self.scalar); + vec![C::identity(); (::NUM_BITS + 1) as usize]; + + // Write the inverse of the resulting point + divisor_points[0] = -generator * self.scalar; + + // Write the decomposition + let mut write_to: u32 = 1; for coefficient in &self.decomposition { let mut coefficient = *coefficient; - while coefficient != 0 { - coefficient -= 1; - divisor_points.push(generator); + // Iterate over the maximum amount of iters for this value to be constant time regardless of + // any branch prediction algorithms + for _ in 0 .. ::NUM_BITS { + // Write the generator to the slot we're supposed to + /* + Without this loop, we'd increment this dependent on the distribution within the + decomposition. If the distribution is bottom-heavy, we won't access the tail of + `divisor_points` for a while, risking it being ejected out of the cache (causing a cache + miss which may not occur with a top-heavy distribution which quickly moves to the tail). + + This is O(log2(NUM_BITS) ** 3) though, as this is the third loop, which is horrific. + */ + for i in 1 ..= ::NUM_BITS { + divisor_points[i as usize] = + <_>::conditional_select(&divisor_points[i as usize], &generator, i.ct_eq(&write_to)); + } + // If the coefficient isn't zero, increment write_to (so we don't overwrite this generator + // when it should be there) + let coefficient_not_zero = !coefficient.ct_eq(&0); + write_to = <_>::conditional_select(&write_to, &(write_to + 1), coefficient_not_zero); + // Subtract one from the coefficient, if it's not zero and won't underflow + coefficient = + <_>::conditional_select(&coefficient, &coefficient.wrapping_sub(1), coefficient_not_zero); } generator = generator.double(); } + // Create a divisor out of all points except the last point which is solely scratch let res = new_divisor(&divisor_points).unwrap(); divisor_points.zeroize(); res From 0b61a75afca0c9f32843eb07eeb3427693c45caa Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 2 Oct 2024 21:58:45 -0400 Subject: [PATCH 184/368] Add lint against string slicing These are tricky as they panic if the slice doesn't hit a UTF-8 codepoint boundary.
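As a minimal sketch (not part of the patch) of the panic this lint guards against, using only
the Rust standard library:

fn main() {
  let s = "héllo";
  // `é` occupies bytes 1 ..= 2, so byte index 2 isn't a character boundary and `&s[.. 2]`
  // would panic at runtime
  // `str::get` instead surfaces the boundary check as an `Option`
  assert!(s.get(.. 2).is_none());
  assert_eq!(s.get(.. 3), Some("hé"));
}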
--- Cargo.toml | 1 + 1 file changed, 1 insertion(+) diff --git a/Cargo.toml b/Cargo.toml index d0c91a30..facd5a6a 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -245,6 +245,7 @@ range_plus_one = "deny" redundant_closure_for_method_calls = "deny" redundant_else = "deny" string_add_assign = "deny" +string_slice = "deny" unchecked_duration_subtraction = "deny" uninlined_format_args = "deny" unnecessary_box_returns = "deny" From d0201cf2e510b674afe251d163433e6951798d89 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 27 Oct 2024 08:51:19 -0400 Subject: [PATCH 185/368] Remove potentially vartime (due to cache side-channel attacks) table access in dalek-ff-group and minimal-ed448 --- LICENSE | 2 +- crypto/dalek-ff-group/src/field.rs | 11 ++++++++++- crypto/dalek-ff-group/src/lib.rs | 11 ++++++++++- crypto/ed448/src/backend.rs | 11 ++++++++++- crypto/ed448/src/point.rs | 11 ++++++++++- 5 files changed, 41 insertions(+), 5 deletions(-) diff --git a/LICENSE b/LICENSE index 34f2feb2..03c8975a 100644 --- a/LICENSE +++ b/LICENSE @@ -5,4 +5,4 @@ a full copy of the AGPL-3.0 License is included in the root of this repository as a reference text. This copy should be provided with any distribution of a crate licensed under the AGPL-3.0, as per its terms. -The GitHub actions (`.github/actions`) are licensed under the MIT license. +The GitHub actions/workflows (`.github`) are licensed under the MIT license. diff --git a/crypto/dalek-ff-group/src/field.rs b/crypto/dalek-ff-group/src/field.rs index b1af2711..60c6c9ea 100644 --- a/crypto/dalek-ff-group/src/field.rs +++ b/crypto/dalek-ff-group/src/field.rs @@ -244,7 +244,16 @@ impl FieldElement { res *= res; } } - res *= table[usize::from(bits)]; + + let mut scale_by = FieldElement::ONE; + #[allow(clippy::needless_range_loop)] + for i in 0 .. 16 { + #[allow(clippy::cast_possible_truncation)] // Safe since 0 .. 16 + { + scale_by = <_>::conditional_select(&scale_by, &table[i], bits.ct_eq(&(i as u8))); + } + } + res *= scale_by; bits = 0; } } diff --git a/crypto/dalek-ff-group/src/lib.rs b/crypto/dalek-ff-group/src/lib.rs index dcbcacc0..e6aad5b2 100644 --- a/crypto/dalek-ff-group/src/lib.rs +++ b/crypto/dalek-ff-group/src/lib.rs @@ -208,7 +208,16 @@ impl Scalar { res *= res; } } - res *= table[usize::from(bits)]; + + let mut scale_by = Scalar::ONE; + #[allow(clippy::needless_range_loop)] + for i in 0 .. 16 { + #[allow(clippy::cast_possible_truncation)] // Safe since 0 .. 16 + { + scale_by = <_>::conditional_select(&scale_by, &table[i], bits.ct_eq(&(i as u8))); + } + } + res *= scale_by; bits = 0; } } diff --git a/crypto/ed448/src/backend.rs b/crypto/ed448/src/backend.rs index db41e811..327fcf97 100644 --- a/crypto/ed448/src/backend.rs +++ b/crypto/ed448/src/backend.rs @@ -161,7 +161,16 @@ macro_rules! field { res *= res; } } - res *= table[usize::from(bits)]; + + let mut scale_by = $FieldName(Residue::ONE); + #[allow(clippy::needless_range_loop)] + for i in 0 .. 16 { + #[allow(clippy::cast_possible_truncation)] // Safe since 0 .. 16 + { + scale_by = <_>::conditional_select(&scale_by, &table[i], bits.ct_eq(&(i as u8))); + } + } + res *= scale_by; bits = 0; } } diff --git a/crypto/ed448/src/point.rs b/crypto/ed448/src/point.rs index c3b10f79..cd49023f 100644 --- a/crypto/ed448/src/point.rs +++ b/crypto/ed448/src/point.rs @@ -242,7 +242,16 @@ impl Mul for Point { res = res.double(); } } - res += table[usize::from(bits)]; + + let mut add_by = Point::identity(); + #[allow(clippy::needless_range_loop)] + for i in 0 .. 
16 { + #[allow(clippy::cast_possible_truncation)] // Safe since 0 .. 16 + { + add_by = <_>::conditional_select(&add_by, &table[i], bits.ct_eq(&(i as u8))); + } + } + res += add_by; bits = 0; } } From ce1689b3259ce1853186210c5f330064ddc4c345 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 28 Oct 2024 18:08:31 -0400 Subject: [PATCH 186/368] Expand tests for ethereum-schnorr-contract --- .../ethereum/schnorr/contracts/Schnorr.sol | 50 +++++++++----- .../schnorr/contracts/tests/Schnorr.sol | 12 ++++ networks/ethereum/schnorr/src/public_key.rs | 3 +- networks/ethereum/schnorr/src/signature.rs | 10 ++- networks/ethereum/schnorr/src/tests/mod.rs | 42 ++++++++--- .../ethereum/schnorr/src/tests/premise.rs | 35 +++++----- .../ethereum/schnorr/src/tests/public_key.rs | 69 +++++++++++++++++++ .../ethereum/schnorr/src/tests/signature.rs | 33 +++++++++ processor/ethereum/src/primitives/machine.rs | 2 +- 9 files changed, 205 insertions(+), 51 deletions(-) create mode 100644 networks/ethereum/schnorr/src/tests/public_key.rs create mode 100644 networks/ethereum/schnorr/src/tests/signature.rs diff --git a/networks/ethereum/schnorr/contracts/Schnorr.sol b/networks/ethereum/schnorr/contracts/Schnorr.sol index 7405051a..57269c5f 100644 --- a/networks/ethereum/schnorr/contracts/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/Schnorr.sol @@ -1,7 +1,13 @@ // SPDX-License-Identifier: AGPL-3.0-only pragma solidity ^0.8.26; -// See https://github.com/noot/schnorr-verify for implementation details +/// @title A library for verifying Schnorr signatures +/// @author Luke Parker +/// @author Elizabeth Binks +/// @notice Verifies a Schnorr signature for a specified public key +/// @dev This contract is not complete. Only certain public keys are compatible +/// @dev See https://github.com/serai-dex/serai/blob/next/networks/ethereum/schnorr/src/tests/premise.rs for implementation details +// TODO: Pin to a specific branch/commit once `next` is merged into `develop` library Schnorr { // secp256k1 group order uint256 private constant Q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141; @@ -11,31 +17,39 @@ library Schnorr { // 2) An x-coordinate < Q uint8 private constant KEY_PARITY = 27; - // px := public key x-coordinate, where the public key has an even y-coordinate - // message := the message signed - // c := Schnorr signature challenge - // s := Schnorr signature solution - function verify(bytes32 px, bytes32 message, bytes32 c, bytes32 s) internal pure returns (bool) { + /// @notice Verifies a Schnorr signature for the specified public key + /// @dev The y-coordinate of the public key is assumed to be even + /// @dev The x-coordinate of the public key is assumed to be less than the order of secp256k1 + /// @dev The challenge is calculated as `keccak256(abi.encodePacked(address(R), public_key, message))` where `R` is the commitment to the Schnorr signature's nonce + /// @param public_key The x-coordinate of the public key + /// @param message The (hash of the) message signed + /// @param c The challenge for the Schnorr signature + /// @param s The response to the challenge for the Schnorr signature + /// @return If the signature is valid + function verify(bytes32 public_key, bytes32 message, bytes32 c, bytes32 s) + internal + pure + returns (bool) + { // ecrecover = (m, v, r, s) -> key - // We instead pass the following to obtain the nonce (not the key) - // Then we hash it and verify it matches the challenge - bytes32 sa = bytes32(Q - mulmod(uint256(s), uint256(px), Q)); - bytes32 ca = 
bytes32(Q - mulmod(uint256(c), uint256(px), Q)); + // We instead pass the following to recover the Schnorr signature's nonce (not a public key) + bytes32 sa = bytes32(Q - mulmod(uint256(s), uint256(public_key), Q)); + bytes32 ca = bytes32(Q - mulmod(uint256(c), uint256(public_key), Q)); /* - The ecrecover precompile checks `r` and `s` (`px` and `ca`) are non-zero, - banning the two keys with zero for their x-coordinate and zero challenge. - Each has negligible probability of occuring (assuming zero x-coordinates - are even on-curve in the first place). + The ecrecover precompile checks `r` and `s` (`public_key` and `ca`) are non-zero, banning the + two keys with zero for their x-coordinate and zero challenges. Each already only had a + negligible probability of occuring (assuming zero x-coordinates are even on-curve in the first + place). - `sa` is not checked to be non-zero yet it does not need to be. The inverse - of it is never taken. + `sa` is not checked to be non-zero yet it does not need to be. The inverse of it is never + taken. */ - address R = ecrecover(sa, KEY_PARITY, px, ca); + address R = ecrecover(sa, KEY_PARITY, public_key, ca); // The ecrecover failed if (R == address(0)) return false; // Check the signature is correct by rebuilding the challenge - return c == keccak256(abi.encodePacked(R, px, message)); + return c == keccak256(abi.encodePacked(R, public_key, message)); } } diff --git a/networks/ethereum/schnorr/contracts/tests/Schnorr.sol b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol index 412786a3..92922f5e 100644 --- a/networks/ethereum/schnorr/contracts/tests/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol @@ -3,7 +3,19 @@ pragma solidity ^0.8.26; import "../Schnorr.sol"; +/// @title A thin wrapper around the library for verifying Schnorr signatures to test it with +/// @author Luke Parker +/// @author Elizabeth Binks contract TestSchnorr { + /// @notice Verifies a Schnorr signature for the specified public key + /// @dev The y-coordinate of the public key is assumed to be even + /// @dev The x-coordinate of the public key is assumed to be less than the order of secp256k1 + /// @dev The challenge is calculated as `keccak256(abi.encodePacked(address(R), public_key, message))` where `R` is the commitment to the Schnorr signature's nonce + /// @param public_key The x-coordinate of the public key + /// @param message The (hash of the) message signed + /// @param c The challenge for the Schnorr signature + /// @param s The response to the challenge for the Schnorr signature + /// @return If the signature is valid function verify(bytes32 public_key, bytes calldata message, bytes32 c, bytes32 s) external pure diff --git a/networks/ethereum/schnorr/src/public_key.rs b/networks/ethereum/schnorr/src/public_key.rs index 3c39552f..fbf00584 100644 --- a/networks/ethereum/schnorr/src/public_key.rs +++ b/networks/ethereum/schnorr/src/public_key.rs @@ -31,8 +31,7 @@ impl PublicKey { let x_coordinate = affine.x(); // Return None if the x-coordinate isn't mutual to both fields - // While reductions shouldn't be an issue, it's one less headache/concern to have - // The trivial amount of public keys this makes non-representable aren't a concern + // The trivial amount of public keys this makes non-representable aren't considered a concern if >::reduce_bytes(&x_coordinate).to_repr() != x_coordinate { None?; } diff --git a/networks/ethereum/schnorr/src/signature.rs b/networks/ethereum/schnorr/src/signature.rs index 1af1d60f..105e6d4d 100644 --- 
a/networks/ethereum/schnorr/src/signature.rs +++ b/networks/ethereum/schnorr/src/signature.rs @@ -20,11 +20,17 @@ pub struct Signature { impl Signature { /// Construct a new `Signature`. #[must_use] - pub fn new(c: Scalar, s: Scalar) -> Signature { - Signature { c, s } + pub fn new(c: Scalar, s: Scalar) -> Option { + if bool::from(c.is_zero()) { + None?; + } + Some(Signature { c, s }) } /// The challenge for a signature. + /// + /// With negligible probability, this MAY return 0 which will create an invalid/unverifiable + /// signature. #[must_use] pub fn challenge(R: ProjectivePoint, key: &PublicKey, message: &[u8]) -> Scalar { // H(R || A || m) diff --git a/networks/ethereum/schnorr/src/tests/mod.rs b/networks/ethereum/schnorr/src/tests/mod.rs index 90774e30..4b23a0ba 100644 --- a/networks/ethereum/schnorr/src/tests/mod.rs +++ b/networks/ethereum/schnorr/src/tests/mod.rs @@ -17,6 +17,9 @@ use alloy_node_bindings::{Anvil, AnvilInstance}; use crate::{PublicKey, Signature}; +mod public_key; +pub(crate) use public_key::test_key; +mod signature; mod premise; #[expect(warnings)] @@ -88,12 +91,7 @@ async fn test_verify() { let (_anvil, provider, address) = setup_test().await; for _ in 0 .. 100 { - let (key, public_key) = loop { - let key = Scalar::random(&mut OsRng); - if let Some(public_key) = PublicKey::new(ProjectivePoint::GENERATOR * key) { - break (key, public_key); - } - }; + let (key, public_key) = test_key(); let nonce = Scalar::random(&mut OsRng); let mut message = vec![0; 1 + usize::try_from(OsRng.next_u32() % 256).unwrap()]; @@ -102,11 +100,37 @@ async fn test_verify() { let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &public_key, &message); let s = nonce + (c * key); - let sig = Signature::new(c, s); + let sig = Signature::new(c, s).unwrap(); assert!(sig.verify(&public_key, &message)); assert!(call_verify(&provider, address, &public_key, &message, &sig).await); + + // Test setting `s = 0` doesn't pass verification + { + let zero_s = Signature::new(c, Scalar::ZERO).unwrap(); + assert!(!zero_s.verify(&public_key, &message)); + assert!(!call_verify(&provider, address, &public_key, &message, &zero_s).await); + } + // Mutate the message and make sure the signature now fails to verify - message[0] = message[0].wrapping_add(1); - assert!(!call_verify(&provider, address, &public_key, &message, &sig).await); + { + let mut message = message.clone(); + message[0] = message[0].wrapping_add(1); + assert!(!sig.verify(&public_key, &message)); + assert!(!call_verify(&provider, address, &public_key, &message, &sig).await); + } + + // Mutate c and make sure the signature now fails to verify + { + let mutated_c = Signature::new(c + Scalar::ONE, s).unwrap(); + assert!(!mutated_c.verify(&public_key, &message)); + assert!(!call_verify(&provider, address, &public_key, &message, &mutated_c).await); + } + + // Mutate s and make sure the signature now fails to verify + { + let mutated_s = Signature::new(c, s + Scalar::ONE).unwrap(); + assert!(!mutated_s.verify(&public_key, &message)); + assert!(!call_verify(&provider, address, &public_key, &message, &mutated_s).await); + } } } diff --git a/networks/ethereum/schnorr/src/tests/premise.rs b/networks/ethereum/schnorr/src/tests/premise.rs index 01571a43..28d9135d 100644 --- a/networks/ethereum/schnorr/src/tests/premise.rs +++ b/networks/ethereum/schnorr/src/tests/premise.rs @@ -12,7 +12,7 @@ use k256::{ use alloy_core::primitives::Address; -use crate::{PublicKey, Signature}; +use crate::{Signature, tests::test_key}; // The ecrecover opcode, yet 
with if the y is odd replacing v fn ecrecover(message: Scalar, odd_y: bool, r: Scalar, s: Scalar) -> Option<[u8; 20]> { @@ -64,12 +64,7 @@ fn test_ecrecover() { // of efficiently verifying Schnorr signatures in an Ethereum contract #[test] fn nonce_recovery_via_ecrecover() { - let (key, public_key) = loop { - let key = Scalar::random(&mut OsRng); - if let Some(public_key) = PublicKey::new(ProjectivePoint::GENERATOR * key) { - break (key, public_key); - } - }; + let (key, public_key) = test_key(); let nonce = Scalar::random(&mut OsRng); let R = ProjectivePoint::GENERATOR * nonce; @@ -81,26 +76,28 @@ fn nonce_recovery_via_ecrecover() { let s = nonce + (c * key); /* - An ECDSA signature is `(r, s)` with `s = (H(m) + rx) / k`, where: - - `m` is the message + An ECDSA signature is `(r, s)` with `s = (m + (r * x)) / k`, where: + - `m` is the hash of the message - `r` is the x-coordinate of the nonce, reduced into a scalar - `x` is the private key - `k` is the nonce We fix the recovery ID to be for the even key with an x-coordinate < the order. Accordingly, - `kG = Point::from(Even, r)`. This enables recovering the public key via - `((s Point::from(Even, r)) - H(m)G) / r`. + `k * G = Point::from(Even, r)`. This enables recovering the public key via + `((s * Point::from(Even, r)) - (m * G)) / r`. We want to calculate `R` from `(c, s)` where `s = r + cx`. That means we need to calculate - `sG - cX`. + `(s * G) - (c * X)`. - We can calculate `sG - cX` with `((s Point::from(Even, r)) - H(m)G) / r` if: - - Latter `r` = `X.x` - - Latter `s` = `c` - - `H(m)` = former `s` - This gets us to `(cX - sG) / X.x`. If we additionally scale the latter's `s, H(m)` values (the - former's `c, s` values) by `X.x`, we get `cX - sG`. This just requires negating each to achieve - `sG - cX`. + We can calculate `(s * G) - (c * X)` with `((s * Point::from(Even, r)) - (m * G)) / r` if: + - ECDSA `r` = `X.x`, the x-coordinate of the Schnorr public key + - ECDSA `s` = `c`, the Schnorr signature's challenge + - ECDSA `m` = Schnorr `s` + This gets us to `((c * X) - (s * G)) / X.x`. If we additionally scale the ECDSA `s, m` values + (the Schnorr `c, s` values) by `X.x`, we get `(c * X) - (s * G)`. This just requires negating + to achieve `(s * G) - (c * X)`. + + With `R`, we can recalculate and compare the challenges to confirm the signature is valid. 
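+
+    Spelled out with those substitutions, the call made is
+    `ecrecover(m = -(s * X.x), v = even, r = X.x, s = -(c * X.x))`, which computes
+    `((-(c * X.x) * X) - (-(s * X.x) * G)) / X.x = (s * G) - (c * X) = R`.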
*/ let x_scalar = >::reduce_bytes(&public_key.point().to_affine().x()); let sa = -(s * x_scalar); diff --git a/networks/ethereum/schnorr/src/tests/public_key.rs b/networks/ethereum/schnorr/src/tests/public_key.rs new file mode 100644 index 00000000..9294cbac --- /dev/null +++ b/networks/ethereum/schnorr/src/tests/public_key.rs @@ -0,0 +1,69 @@ +use rand_core::OsRng; + +use subtle::Choice; +use group::ff::{Field, PrimeField}; +use k256::{ + elliptic_curve::{ + FieldBytesEncoding, + ops::Reduce, + point::{AffineCoordinates, DecompressPoint}, + }, + AffinePoint, ProjectivePoint, Scalar, U256 as KU256, +}; + +use crate::PublicKey; + +// Generates a key usable within tests +pub(crate) fn test_key() -> (Scalar, PublicKey) { + loop { + let key = Scalar::random(&mut OsRng); + let point = ProjectivePoint::GENERATOR * key; + if let Some(public_key) = PublicKey::new(point) { + // While here, test `PublicKey::point` and its serialization functions + assert_eq!(point, public_key.point()); + assert_eq!(PublicKey::from_eth_repr(public_key.eth_repr()).unwrap(), public_key); + return (key, public_key); + } + } +} + +#[test] +fn test_odd_key() { + // We generate a valid key to ensure there's not some distinct reason this key is invalid + let (_, key) = test_key(); + // We then take its point and negate it so its y-coordinate is odd + let odd = -key.point(); + assert!(PublicKey::new(odd).is_none()); +} + +#[test] +fn test_non_mutual_key() { + let mut x_coordinate = KU256::from(-(Scalar::ONE)).saturating_add(&KU256::ONE); + + let y_is_odd = Choice::from(0); + let non_mutual = loop { + if let Some(point) = Option::::from(AffinePoint::decompress( + &FieldBytesEncoding::encode_field_bytes(&x_coordinate), + y_is_odd, + )) { + break point; + } + x_coordinate = x_coordinate.saturating_add(&KU256::ONE); + }; + let x_coordinate = non_mutual.x(); + assert!(>::reduce_bytes(&x_coordinate).to_repr() != x_coordinate); + + // Even point whose x-coordinate isn't mutual to both fields (making it non-zero) + assert!(PublicKey::new(non_mutual.into()).is_none()); +} + +#[test] +fn test_zero_key() { + let y_is_odd = Choice::from(0); + if let Some(A_affine) = + Option::::from(AffinePoint::decompress(&[0; 32].into(), y_is_odd)) + { + let A = ProjectivePoint::from(A_affine); + assert!(PublicKey::new(A).is_none()); + } +} diff --git a/networks/ethereum/schnorr/src/tests/signature.rs b/networks/ethereum/schnorr/src/tests/signature.rs new file mode 100644 index 00000000..27c640f8 --- /dev/null +++ b/networks/ethereum/schnorr/src/tests/signature.rs @@ -0,0 +1,33 @@ +use rand_core::OsRng; + +use group::ff::Field; +use k256::Scalar; + +use crate::Signature; + +#[test] +fn test_zero_challenge() { + assert!(Signature::new(Scalar::ZERO, Scalar::random(&mut OsRng)).is_none()); +} + +#[test] +fn test_signature_serialization() { + let c = Scalar::random(&mut OsRng); + let s = Scalar::random(&mut OsRng); + let sig = Signature::new(c, s).unwrap(); + assert_eq!(sig.c(), c); + assert_eq!(sig.s(), s); + + let sig_bytes = sig.to_bytes(); + assert_eq!(Signature::from_bytes(sig_bytes).unwrap(), sig); + + { + let mut sig_written_bytes = vec![]; + sig.write(&mut sig_written_bytes).unwrap(); + assert_eq!(sig_bytes.as_slice(), &sig_written_bytes); + } + + let mut sig_read_slice = sig_bytes.as_slice(); + assert_eq!(Signature::read(&mut sig_read_slice).unwrap(), sig); + assert!(sig_read_slice.is_empty()); +} diff --git a/processor/ethereum/src/primitives/machine.rs b/processor/ethereum/src/primitives/machine.rs index 1762eb28..e3252f30 100644 --- 
a/processor/ethereum/src/primitives/machine.rs +++ b/processor/ethereum/src/primitives/machine.rs @@ -140,7 +140,7 @@ impl SignatureMachine for ActionSignatureMachine { self.machine.complete(shares).map(|signature| { let s = signature.s; let c = Signature::challenge(signature.R, &self.key, &self.action.message()); - Transaction(self.action, Signature::new(c, s)) + Transaction(self.action, Signature::new(c, s).unwrap()) }) } } From dc1b8dfccd68b7c2eb4359a1e37b55ce5e4453b5 Mon Sep 17 00:00:00 2001 From: akildemir <34187742+akildemir@users.noreply.github.com> Date: Wed, 30 Oct 2024 23:05:56 +0300 Subject: [PATCH 187/368] add coins pallet tests (#606) * add tests * remove unused crate * remove serai_abi --- Cargo.lock | 1 + substrate/coins/pallet/Cargo.toml | 12 ++- substrate/coins/pallet/src/lib.rs | 6 ++ substrate/coins/pallet/src/mock.rs | 70 +++++++++++++++ substrate/coins/pallet/src/tests.rs | 129 ++++++++++++++++++++++++++++ 5 files changed, 216 insertions(+), 2 deletions(-) create mode 100644 substrate/coins/pallet/src/mock.rs create mode 100644 substrate/coins/pallet/src/tests.rs diff --git a/Cargo.lock b/Cargo.lock index 72526bc1..3b55f7b3 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8033,6 +8033,7 @@ dependencies = [ "serai-coins-primitives", "serai-primitives", "sp-core", + "sp-io", "sp-runtime", "sp-std", ] diff --git a/substrate/coins/pallet/Cargo.toml b/substrate/coins/pallet/Cargo.toml index 8c59fb3e..88ebfd32 100644 --- a/substrate/coins/pallet/Cargo.toml +++ b/substrate/coins/pallet/Cargo.toml @@ -34,6 +34,9 @@ pallet-transaction-payment = { git = "https://github.com/serai-dex/substrate", d serai-primitives = { path = "../../primitives", default-features = false, features = ["serde"] } coins-primitives = { package = "serai-coins-primitives", path = "../primitives", default-features = false } +[dev-dependencies] +sp-io = { git = "https://github.com/serai-dex/substrate", default-features = false } + [features] std = [ "frame-system/std", @@ -41,6 +44,7 @@ std = [ "sp-core/std", "sp-std/std", + "sp-io/std", "sp-runtime/std", "pallet-transaction-payment/std", @@ -49,8 +53,12 @@ std = [ "coins-primitives/std", ] -# TODO -try-runtime = [] +try-runtime = [ + "frame-system/try-runtime", + "frame-support/try-runtime", + + "sp-runtime/try-runtime", +] runtime-benchmarks = [ "frame-system/runtime-benchmarks", diff --git a/substrate/coins/pallet/src/lib.rs b/substrate/coins/pallet/src/lib.rs index dd64b2b6..4499f432 100644 --- a/substrate/coins/pallet/src/lib.rs +++ b/substrate/coins/pallet/src/lib.rs @@ -1,5 +1,11 @@ #![cfg_attr(not(feature = "std"), no_std)] +#[cfg(test)] +mod mock; + +#[cfg(test)] +mod tests; + use serai_primitives::{Balance, Coin, ExternalBalance, SubstrateAmount}; pub trait AllowMint { diff --git a/substrate/coins/pallet/src/mock.rs b/substrate/coins/pallet/src/mock.rs new file mode 100644 index 00000000..bd4ebc55 --- /dev/null +++ b/substrate/coins/pallet/src/mock.rs @@ -0,0 +1,70 @@ +//! Test environment for Coins pallet. 
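+//!
+//! A minimal runtime (just `frame_system` and this pallet) is wired together below via
+//! `construct_runtime!`, with the tests then executing against in-memory `TestExternalities`.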
+ +use super::*; + +use frame_support::{ + construct_runtime, + traits::{ConstU32, ConstU64}, +}; + +use sp_core::{H256, sr25519::Public}; +use sp_runtime::{ + traits::{BlakeTwo256, IdentityLookup}, + BuildStorage, +}; + +use crate as coins; + +type Block = frame_system::mocking::MockBlock; + +construct_runtime!( + pub enum Test + { + System: frame_system, + Coins: coins, + } +); + +impl frame_system::Config for Test { + type BaseCallFilter = frame_support::traits::Everything; + type BlockWeights = (); + type BlockLength = (); + type RuntimeOrigin = RuntimeOrigin; + type RuntimeCall = RuntimeCall; + type Nonce = u64; + type Hash = H256; + type Hashing = BlakeTwo256; + type AccountId = Public; + type Lookup = IdentityLookup; + type Block = Block; + type RuntimeEvent = RuntimeEvent; + type BlockHashCount = ConstU64<250>; + type DbWeight = (); + type Version = (); + type PalletInfo = PalletInfo; + type AccountData = (); + type OnNewAccount = (); + type OnKilledAccount = (); + type SystemWeightInfo = (); + type SS58Prefix = (); + type OnSetCode = (); + type MaxConsumers = ConstU32<16>; +} + +impl Config for Test { + type RuntimeEvent = RuntimeEvent; + + type AllowMint = (); +} + +pub(crate) fn new_test_ext() -> sp_io::TestExternalities { + let mut t = frame_system::GenesisConfig::::default().build_storage().unwrap(); + + crate::GenesisConfig:: { accounts: vec![], _ignore: Default::default() } + .assimilate_storage(&mut t) + .unwrap(); + + let mut ext = sp_io::TestExternalities::new(t); + ext.execute_with(|| System::set_block_number(0)); + ext +} diff --git a/substrate/coins/pallet/src/tests.rs b/substrate/coins/pallet/src/tests.rs new file mode 100644 index 00000000..a6d16afd --- /dev/null +++ b/substrate/coins/pallet/src/tests.rs @@ -0,0 +1,129 @@ +use crate::{mock::*, primitives::*}; + +use frame_system::RawOrigin; +use sp_core::Pair; + +use serai_primitives::*; + +pub type CoinsEvent = crate::Event; + +#[test] +fn mint() { + new_test_ext().execute_with(|| { + // minting u64::MAX should work + let coin = Coin::Serai; + let to = insecure_pair_from_name("random1").public(); + let balance = Balance { coin, amount: Amount(u64::MAX) }; + + Coins::mint(to, balance).unwrap(); + assert_eq!(Coins::balance(to, coin), balance.amount); + + // minting more should fail + assert!(Coins::mint(to, Balance { coin, amount: Amount(1) }).is_err()); + + // supply now should be equal to sum of the accounts balance sum + assert_eq!(Coins::supply(coin), balance.amount.0); + + // test events + let mint_events = System::events() + .iter() + .filter_map(|event| { + if let RuntimeEvent::Coins(e) = &event.event { + if matches!(e, CoinsEvent::Mint { .. 
}) { + Some(e.clone()) + } else { + None + } + } else { + None + } + }) + .collect::>(); + + assert_eq!(mint_events, vec![CoinsEvent::Mint { to, balance }]); + }) +} + +#[test] +fn burn_with_instruction() { + new_test_ext().execute_with(|| { + // mint some coin + let coin = Coin::External(ExternalCoin::Bitcoin); + let to = insecure_pair_from_name("random1").public(); + let balance = Balance { coin, amount: Amount(10 * 10u64.pow(coin.decimals())) }; + + Coins::mint(to, balance).unwrap(); + assert_eq!(Coins::balance(to, coin), balance.amount); + assert_eq!(Coins::supply(coin), balance.amount.0); + + // we shouldn't be able to burn more than what we have + let mut instruction = OutInstructionWithBalance { + instruction: OutInstruction { address: ExternalAddress::new(vec![]).unwrap(), data: None }, + balance: ExternalBalance { + coin: coin.try_into().unwrap(), + amount: Amount(balance.amount.0 + 1), + }, + }; + assert!( + Coins::burn_with_instruction(RawOrigin::Signed(to).into(), instruction.clone()).is_err() + ); + + // it should now work + instruction.balance.amount = balance.amount; + Coins::burn_with_instruction(RawOrigin::Signed(to).into(), instruction.clone()).unwrap(); + + // balance & supply now should be back to 0 + assert_eq!(Coins::balance(to, coin), Amount(0)); + assert_eq!(Coins::supply(coin), 0); + + let burn_events = System::events() + .iter() + .filter_map(|event| { + if let RuntimeEvent::Coins(e) = &event.event { + if matches!(e, CoinsEvent::BurnWithInstruction { .. }) { + Some(e.clone()) + } else { + None + } + } else { + None + } + }) + .collect::>(); + + assert_eq!(burn_events, vec![CoinsEvent::BurnWithInstruction { from: to, instruction }]); + }) +} + +#[test] +fn transfer() { + new_test_ext().execute_with(|| { + // mint some coin + let coin = Coin::External(ExternalCoin::Bitcoin); + let from = insecure_pair_from_name("random1").public(); + let balance = Balance { coin, amount: Amount(10 * 10u64.pow(coin.decimals())) }; + + Coins::mint(from, balance).unwrap(); + assert_eq!(Coins::balance(from, coin), balance.amount); + assert_eq!(Coins::supply(coin), balance.amount.0); + + // we can't send more than what we have + let to = insecure_pair_from_name("random2").public(); + assert!(Coins::transfer( + RawOrigin::Signed(from).into(), + to, + Balance { coin, amount: Amount(balance.amount.0 + 1) } + ) + .is_err()); + + // we can send it all + Coins::transfer(RawOrigin::Signed(from).into(), to, balance).unwrap(); + + // check the balances + assert_eq!(Coins::balance(from, coin), Amount(0)); + assert_eq!(Coins::balance(to, coin), balance.amount); + + // supply shouldn't change + assert_eq!(Coins::supply(coin), balance.amount.0); + }) +} From e9c1235b7692415f318682376c4ba23a36f61c08 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 30 Oct 2024 17:15:39 -0400 Subject: [PATCH 188/368] Tweak how features are activated in the coins pallet tests --- substrate/coins/pallet/Cargo.toml | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/substrate/coins/pallet/Cargo.toml b/substrate/coins/pallet/Cargo.toml index 88ebfd32..bf6ea684 100644 --- a/substrate/coins/pallet/Cargo.toml +++ b/substrate/coins/pallet/Cargo.toml @@ -35,7 +35,7 @@ serai-primitives = { path = "../../primitives", default-features = false, featur coins-primitives = { package = "serai-coins-primitives", path = "../primitives", default-features = false } [dev-dependencies] -sp-io = { git = "https://github.com/serai-dex/substrate", default-features = false } +sp-io = { git = 
"https://github.com/serai-dex/substrate", default-features = false, features = ["std"] } [features] std = [ @@ -44,7 +44,6 @@ std = [ "sp-core/std", "sp-std/std", - "sp-io/std", "sp-runtime/std", "pallet-transaction-payment/std", From 2a427382f1772195f192467780d0b483c7366916 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 30 Oct 2024 21:35:43 -0400 Subject: [PATCH 189/368] Natspec, slither Deployer, Router --- .github/workflows/lint.yml | 22 + .../ethereum/schnorr/contracts/Schnorr.sol | 37 +- .../schnorr/contracts/tests/Schnorr.sol | 16 +- .../ethereum/deployer/contracts/Deployer.sol | 46 +- processor/ethereum/deployer/src/lib.rs | 6 +- processor/ethereum/router/README.md | 4 + .../ethereum/router/contracts/Router.sol | 507 +++++++++++++----- processor/ethereum/router/src/lib.rs | 6 +- 8 files changed, 481 insertions(+), 163 deletions(-) diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml index b994a3cb..cdaae18d 100644 --- a/.github/workflows/lint.yml +++ b/.github/workflows/lint.yml @@ -90,3 +90,25 @@ jobs: run: | cargo install cargo-machete cargo machete + + slither: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac + - name: Slither + run: | + python3 -m pip install solc-select + solc-select install 0.8.26 + solc-select use 0.8.26 + + python3 -m pip install slither-analyzer + + slither --include-paths ./networks/ethereum/schnorr/contracts/Schnorr.sol + slither --include-paths ./networks/ethereum/schnorr/contracts ./networks/ethereum/schnorr/contracts/tests/Schnorr.sol + slither processor/ethereum/deployer/contracts/Deployer.sol + slither processor/ethereum/erc20/contracts/IERC20.sol + + cp networks/ethereum/schnorr/contracts/Schnorr.sol processor/ethereum/router/contracts/ + cp processor/ethereum/erc20/contracts/IERC20.sol processor/ethereum/router/contracts/ + cd processor/ethereum/router/contracts + slither Router.sol diff --git a/networks/ethereum/schnorr/contracts/Schnorr.sol b/networks/ethereum/schnorr/contracts/Schnorr.sol index 57269c5f..69e8d95c 100644 --- a/networks/ethereum/schnorr/contracts/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/Schnorr.sol @@ -2,12 +2,17 @@ pragma solidity ^0.8.26; /// @title A library for verifying Schnorr signatures -/// @author Luke Parker +/// @author Luke Parker /// @author Elizabeth Binks /// @notice Verifies a Schnorr signature for a specified public key -/// @dev This contract is not complete. Only certain public keys are compatible -/// @dev See https://github.com/serai-dex/serai/blob/next/networks/ethereum/schnorr/src/tests/premise.rs for implementation details -// TODO: Pin to a specific branch/commit once `next` is merged into `develop` +/** + * @dev This contract is not complete (in the cryptographic sense). Only a subset of potential + * public keys are representable here. 
+ * + * See https://github.com/serai-dex/serai/blob/next/networks/ethereum/schnorr/src/tests/premise.rs + * for implementation details + */ +// TODO: Pin above link to a specific branch/commit once `next` is merged into `develop` library Schnorr { // secp256k1 group order uint256 private constant Q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141; @@ -18,26 +23,30 @@ library Schnorr { uint8 private constant KEY_PARITY = 27; /// @notice Verifies a Schnorr signature for the specified public key - /// @dev The y-coordinate of the public key is assumed to be even - /// @dev The x-coordinate of the public key is assumed to be less than the order of secp256k1 - /// @dev The challenge is calculated as `keccak256(abi.encodePacked(address(R), public_key, message))` where `R` is the commitment to the Schnorr signature's nonce - /// @param public_key The x-coordinate of the public key + /** + * @dev The y-coordinate of the public key is assumed to be even. The x-coordinate of the public + * key is assumed to be less than the order of secp256k1. + * + * The challenge is calculated as `keccak256(abi.encodePacked(address(R), publicKey, message))` + * where `R` is the commitment to the Schnorr signature's nonce. + */ + /// @param publicKey The x-coordinate of the public key /// @param message The (hash of the) message signed /// @param c The challenge for the Schnorr signature /// @param s The response to the challenge for the Schnorr signature /// @return If the signature is valid - function verify(bytes32 public_key, bytes32 message, bytes32 c, bytes32 s) + function verify(bytes32 publicKey, bytes32 message, bytes32 c, bytes32 s) internal pure returns (bool) { // ecrecover = (m, v, r, s) -> key // We instead pass the following to recover the Schnorr signature's nonce (not a public key) - bytes32 sa = bytes32(Q - mulmod(uint256(s), uint256(public_key), Q)); - bytes32 ca = bytes32(Q - mulmod(uint256(c), uint256(public_key), Q)); + bytes32 sa = bytes32(Q - mulmod(uint256(s), uint256(publicKey), Q)); + bytes32 ca = bytes32(Q - mulmod(uint256(c), uint256(publicKey), Q)); /* - The ecrecover precompile checks `r` and `s` (`public_key` and `ca`) are non-zero, banning the + The ecrecover precompile checks `r` and `s` (`publicKey` and `ca`) are non-zero, banning the two keys with zero for their x-coordinate and zero challenges. Each already only had a negligible probability of occuring (assuming zero x-coordinates are even on-curve in the first place). @@ -45,11 +54,11 @@ library Schnorr { `sa` is not checked to be non-zero yet it does not need to be. The inverse of it is never taken. 
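      (Of its arguments, ecrecover only ever takes the inverse of `r`, which is `publicKey` here
      and which the precompile already checks is non-zero.)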
*/ - address R = ecrecover(sa, KEY_PARITY, public_key, ca); + address R = ecrecover(sa, KEY_PARITY, publicKey, ca); // The ecrecover failed if (R == address(0)) return false; // Check the signature is correct by rebuilding the challenge - return c == keccak256(abi.encodePacked(R, public_key, message)); + return c == keccak256(abi.encodePacked(R, publicKey, message)); } } diff --git a/networks/ethereum/schnorr/contracts/tests/Schnorr.sol b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol index 92922f5e..1a19371a 100644 --- a/networks/ethereum/schnorr/contracts/tests/Schnorr.sol +++ b/networks/ethereum/schnorr/contracts/tests/Schnorr.sol @@ -8,19 +8,23 @@ import "../Schnorr.sol"; /// @author Elizabeth Binks contract TestSchnorr { /// @notice Verifies a Schnorr signature for the specified public key - /// @dev The y-coordinate of the public key is assumed to be even - /// @dev The x-coordinate of the public key is assumed to be less than the order of secp256k1 - /// @dev The challenge is calculated as `keccak256(abi.encodePacked(address(R), public_key, message))` where `R` is the commitment to the Schnorr signature's nonce - /// @param public_key The x-coordinate of the public key + /** + * @dev The y-coordinate of the public key is assumed to be even. The x-coordinate of the public + * key is assumed to be less than the order of secp256k1. + * + * The challenge is calculated as `keccak256(abi.encodePacked(address(R), publicKey, message))` + * where `R` is the commitment to the Schnorr signature's nonce. + */ + /// @param publicKey The x-coordinate of the public key /// @param message The (hash of the) message signed /// @param c The challenge for the Schnorr signature /// @param s The response to the challenge for the Schnorr signature /// @return If the signature is valid - function verify(bytes32 public_key, bytes calldata message, bytes32 c, bytes32 s) + function verify(bytes32 publicKey, bytes calldata message, bytes32 c, bytes32 s) external pure returns (bool) { - return Schnorr.verify(public_key, keccak256(message), c, s); + return Schnorr.verify(publicKey, keccak256(message), c, s); } } diff --git a/processor/ethereum/deployer/contracts/Deployer.sol b/processor/ethereum/deployer/contracts/Deployer.sol index a7dac1d3..8382cf21 100644 --- a/processor/ethereum/deployer/contracts/Deployer.sol +++ b/processor/ethereum/deployer/contracts/Deployer.sol @@ -2,7 +2,7 @@ pragma solidity ^0.8.26; /* - The expected deployment process of the Router is as follows: + The expected deployment process of Serai's Router is as follows: 1) A transaction deploying Deployer is made. Then, a deterministic signature is created such that an account with an unknown private key is the creator of @@ -32,35 +32,57 @@ pragma solidity ^0.8.26; with Serai verifying the published result. This would introduce a DoS risk in the council not publishing the correct key/not publishing any key. - This design does not work with designs expecting initialization (which may require re-deploying - the same code until the initialization successfully goes through, without being sniped). + This design does not work (well) with contracts expecting initialization due + to only allowing deploying init code once (which assumes contracts are + distinct via their constructors). Such designs are unused by Serai so that is + accepted. 
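+
+  For reference, `CREATE` derives addresses as `keccak256(rlp([sender, nonce]))[12 ..]`, which
+  is what makes the above deterministic: a fixed one-time sender, at nonce 0, always yields the
+  same Deployer address.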
*/ +/// @title Deployer of contracts for the Serai network +/// @author Luke Parker contract Deployer { + /// @return The deployment for some (hashed) init code mapping(bytes32 => address) public deployments; + /// @notice Raised if the provided init code was already prior deployed error PriorDeployed(); + /// @notice Raised if the deployment fails error DeploymentFailed(); - function deploy(bytes memory init_code) external { + /// @notice Deploy the specified init code with `CREATE` + /// @dev This init code is expected to be unique and not prior deployed + /// @param initCode The init code to pass to `CREATE` + function deploy(bytes memory initCode) external { // Deploy the contract - address created_contract; + address createdContract; + // slither-disable-next-line assembly assembly { - created_contract := create(0, add(init_code, 0x20), mload(init_code)) + createdContract := create(0, add(initCode, 0x20), mload(initCode)) } - if (created_contract == address(0)) { + if (createdContract == address(0)) { revert DeploymentFailed(); } - bytes32 init_code_hash = keccak256(init_code); + bytes32 initCodeHash = keccak256(initCode); - // Check this wasn't prior deployed - // We check this *after* deploymeing (in violation of CEI) to handle re-entrancy related bugs - if (deployments[init_code_hash] != address(0)) { + /* + Check this wasn't prior deployed. + + This is a post-check, not a pre-check (in violation of the CEI pattern). If we used a + pre-check, a deployed contract could re-enter the Deployer to deploy the same contract + multiple times due to the inner call updating state and then the outer call overwriting it. + The post-check causes the outer call to error once the inner call updates state. + + This does mean contract deployment may fail if deployment causes arbitrary execution which + maliciously nests deployment of the being-deployed contract. Such an inner call won't fail, + yet the outer call would. The usage of a re-entrancy guard would cause the inner call to fail + while the outer call succeeds. This is considered so edge-case it isn't worth handling. + */ + if (deployments[initCodeHash] != address(0)) { revert PriorDeployed(); } // Write the deployment to storage - deployments[init_code_hash] = created_contract; + deployments[initCodeHash] = createdContract; } } diff --git a/processor/ethereum/deployer/src/lib.rs b/processor/ethereum/deployer/src/lib.rs index 6fa59ee3..ff8174a6 100644 --- a/processor/ethereum/deployer/src/lib.rs +++ b/processor/ethereum/deployer/src/lib.rs @@ -26,8 +26,8 @@ mod abi { /// The Deployer contract for the Serai Router contract. /// -/// This Deployer has a deterministic address, letting it be immediately identified on any -/// compatible chain. It then supports retrieving the Router contract's address (which isn't +/// This Deployer has a deterministic address, letting it be immediately identified on any instance +/// of the EVM. It then supports retrieving the deployed contracts' addresses (which aren't /// deterministic) using a single call. #[derive(Clone, Debug)] pub struct Deployer(Arc>); @@ -66,6 +66,8 @@ impl Deployer { } /// Construct a new view of the Deployer. + /// + /// This will return None if the Deployer has yet to be deployed on-chain.
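+  ///
+  /// e.g. `Deployer::new(provider.clone()).await?` yields `Ok(None)` until the deployment
+  /// transaction is included on-chain (`provider` here being an assumed, already-constructed
+  /// RPC provider).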
pub async fn new( provider: Arc>, ) -> Result, RpcError> { diff --git a/processor/ethereum/router/README.md b/processor/ethereum/router/README.md index b93c3219..efb4d0a4 100644 --- a/processor/ethereum/router/README.md +++ b/processor/ethereum/router/README.md @@ -1 +1,5 @@ # Ethereum Router + +The [Router contract](./contracts/Router.sol) is extensively documented to ensure clarity and +understanding of the design decisions made. Please refer to it for understanding of why/what this +is. diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 9100f59e..2fd0f2e4 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -5,236 +5,493 @@ import "IERC20.sol"; import "Schnorr.sol"; -// _ is used as a prefix for internal functions and smart-contract-scoped variables -contract Router { - // Nonce is incremented for each command executed, preventing replays - uint256 private _nonce; +/* + The Router directly performs low-level calls in order to fine-tune the gas settings. Since this + contract is meant to relay an entire batch of transactions, the ability to exactly meter + individual transactions is critical. - // The nonce which will be used for the smart contracts we deploy, enabling - // predicting their addresses + We don't check the return values as we don't care if the calls succeeded. We solely care we made + them. If someone configures an external contract in a way which borks, we explicitly define that + as their fault and out-of-scope to this contract. + + If an invariant within Serai is actually broken, an escape hatch exists to move to a new + contract. Any improperly handled actions can be re-signed and re-executed at that point in time. +*/ +// slither-disable-start low-level-calls,unchecked-lowlevel + +/// @title Serai Router +/// @author Luke Parker +/// @notice Intakes coins for the Serai network and handles relaying batches of transfers out +contract Router { + /** + * @dev The next nonce used to determine the address of contracts deployed with CREATE. This is + * used to predict the addresses of deployed contracts ahead of time. + */ + /* + We don't expose a getter for this as it shouldn't be expected to have any specific value at a + given moment in time. If someone wants to know the address of their deployed contract, they can + have it emit an event and verify the emitting contract is the expected one. + */ uint256 private _smartContractNonce; + /// @dev A nonce incremented upon an action to prevent replays/out-of-order execution + uint256 private _nonce; + + /** + * @dev The current public key for Serai's Ethereum validator set, in the form the Schnorr library + * expects + */ bytes32 private _seraiKey; + /// @dev The address escaped to + address private _escapedTo; + + /// @title The type of destination + /// @dev A destination is either an address or a blob of code to deploy and call enum DestinationType { Address, Code } - struct AddressDestination { - address destination; - } - + /// @title A code destination + /** + * @dev If transferring an ERC20 to this destination, it will be transferred to the address the + * code will be deployed to. If transferring ETH, it will be transferred with the deployment of + * the code. `code` is deployed with CREATE (calling its constructor). The entire deployment + * (and associated sandboxing) must consume less than `gasLimit` units of gas or it will revert.
+ */ struct CodeDestination { - uint32 gas_limit; + uint32 gasLimit; bytes code; } + /// @title An instruction to transfer coins out + /// @dev Specifies a destination and amount but not the coin as that's assumed to be contextual struct OutInstruction { DestinationType destinationType; bytes destination; - uint256 value; + uint256 amount; } + /// @title A signature + /// @dev Thin wrapper around `c, s` to simplify the API struct Signature { bytes32 c; bytes32 s; } + /// @notice Emitted when the key for Serai's Ethereum validators is updated + /// @param nonce The nonce consumed to update this key + /// @param key The key updated to event SeraiKeyUpdated(uint256 indexed nonce, bytes32 indexed key); + + /// @notice Emitted when an InInstruction occurs + /// @param from The address which called `inInstruction` and caused this event to be emitted + /// @param coin The coin transferred in + /// @param amount The amount of the coin transferred in + /// @param instruction The Shorthand-encoded InInstruction for Serai to decode and handle event InInstruction( address indexed from, address indexed coin, uint256 amount, bytes instruction ); - event Executed(uint256 indexed nonce, bytes32 indexed message_hash); + /// @notice Emitted when a batch of `OutInstruction`s occurs + /// @param nonce The nonce consumed to execute this batch of transactions + /// @param messageHash The hash of the message signed for the executed batch + event Executed(uint256 indexed nonce, bytes32 indexed messageHash); + + /// @notice Emitted when `escapeHatch` is invoked + /// @param escapeTo The address to escape to + event EscapeHatch(address indexed escapeTo); + + /// @notice Emitted when coins escape through the escape hatch + /// @param coin The coin which escaped + event Escaped(address indexed coin); + + /// @notice The contract has had its escape hatch invoked and won't accept further actions + error EscapeHatchInvoked(); + /// @notice The signature was invalid error InvalidSignature(); - error InvalidAmount(); - error FailedTransfer(); + /// @notice The amount specified didn't match `msg.value` + error AmountMismatchesMsgValue(); + /// @notice The call to an ERC20's `transferFrom` failed + error TransferFromFailed(); - // Update the Serai key at the end of the current function. - modifier _updateSeraiKeyAtEndOfFn(uint256 nonceUpdatedWith, bytes32 newSeraiKey) { - // Run the function itself. + /// @notice An invalid address to escape to was specified. + error InvalidEscapeAddress(); + /// @notice Escaping when escape hatch wasn't invoked. + error EscapeHatchNotInvoked(); + + /** + * @dev Updates the Serai key at the end of the current function. Executing at the end of the + * current function allows verifying a signature with the current key. This does not update + * `_nonce` + */ + /// @param nonceUpdatedWith The nonce used to update the key + /// @param newSeraiKey The key updated to + modifier updateSeraiKeyAtEndOfFn(uint256 nonceUpdatedWith, bytes32 newSeraiKey) { + // Run the function itself _; - // Update the key. 
+ // Update the key _seraiKey = newSeraiKey; emit SeraiKeyUpdated(nonceUpdatedWith, newSeraiKey); } - constructor(bytes32 initialSeraiKey) _updateSeraiKeyAtEndOfFn(0, initialSeraiKey) { - // We consumed nonce 0 when setting the initial Serai key - _nonce = 1; + /// @notice The constructor for the relayer + /// @param initialSeraiKey The initial key for Serai's Ethereum validators + constructor(bytes32 initialSeraiKey) updateSeraiKeyAtEndOfFn(0, initialSeraiKey) { // Nonces are incremented by 1 upon account creation, prior to any code execution, per EIP-161 // This is incompatible with any networks which don't have their nonces start at 0 _smartContractNonce = 1; + + // We consumed nonce 0 when setting the initial Serai key + _nonce = 1; + + // We haven't escaped to any address yet + _escapedTo = address(0); } - // updateSeraiKey validates the given Schnorr signature against the current public key, and if - // successful, updates the contract's public key to the one specified. - function updateSeraiKey(bytes32 newSeraiKey, Signature calldata signature) - external - _updateSeraiKeyAtEndOfFn(_nonce, newSeraiKey) - { - // This DST needs a length prefix as well to prevent DSTs potentially being substrings of each - // other, yet this fine for our very well-defined, limited use - bytes32 message = - keccak256(abi.encodePacked("updateSeraiKey", block.chainid, _nonce, newSeraiKey)); - _nonce++; - + /// @dev Verify a signature + /// @param message The message to pass to the Schnorr contract + /// @param signature The signature by the current key for this message + function verifySignature(bytes32 message, Signature calldata signature) private { + // If the escape hatch was triggered, reject further signatures + if (_escapedTo != address(0)) { + revert EscapeHatchInvoked(); + } + // Verify the signature if (!Schnorr.verify(_seraiKey, message, signature.c, signature.s)) { revert InvalidSignature(); } + // Increment the nonce + unchecked { + _nonce++; + } } + /// @notice Update the key representing Serai's Ethereum validators + /// @param newSeraiKey The key to update to + /// @param signature The signature by the current key authorizing this update + function updateSeraiKey(bytes32 newSeraiKey, Signature calldata signature) + external + updateSeraiKeyAtEndOfFn(_nonce, newSeraiKey) + { + /* + This DST needs a length prefix as well to prevent DSTs potentially being substrings of each + other, yet this is fine for our well-defined, extremely-limited use. + + We don't encode the chain ID as Serai generates independent keys for each integration. If + Ethereum L2s are integrated, and they reuse the Ethereum validator set, we would use the + existing Serai key yet we'd apply an off-chain derivation scheme to bind it to specific + networks. This also lets Serai identify EVMs per however it wants, solving the edge case where + two instances of the EVM share a chain ID for whatever horrific reason. + + This uses encodePacked as all items present here are of fixed length. 
+ */ + bytes32 message = keccak256(abi.encodePacked("updateSeraiKey", _nonce, newSeraiKey)); + verifySignature(message, signature); + } + + /// @notice Transfer coins into Serai with an instruction + /// @param coin The coin to transfer in (address(0) if Ether) + /// @param amount The amount to transfer in (msg.value if Ether) + /** + * @param instruction The Shorthand-encoded InInstruction for Serai to associate with this + * transfer in + */ + // Re-entrancy doesn't bork this function + // slither-disable-next-line reentrancy-events function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable { + // Check the transfer if (coin == address(0)) { - if (amount != msg.value) revert InvalidAmount(); + if (amount != msg.value) revert AmountMismatchesMsgValue(); } else { (bool success, bytes memory res) = address(coin).call( abi.encodeWithSelector(IERC20.transferFrom.selector, msg.sender, address(this), amount) ); - // Require there was nothing returned, which is done by some non-standard tokens, or that the - // ERC20 contract did in fact return true + /* + Require there was nothing returned, which is done by some non-standard tokens, or that the + ERC20 contract did in fact return true + */ bool nonStandardResOrTrue = (res.length == 0) || abi.decode(res, (bool)); - if (!(success && nonStandardResOrTrue)) revert FailedTransfer(); + if (!(success && nonStandardResOrTrue)) revert TransferFromFailed(); } /* - Due to fee-on-transfer tokens, emitting the amount directly is frowned upon. The amount - instructed to be transferred may not actually be the amount transferred. + Due to fee-on-transfer tokens, emitting the amount directly is frowned upon. The amount + instructed to be transferred may not actually be the amount transferred. - If we add nonReentrant to every single function which can effect the balance, we can check the - amount exactly matches. This prevents transfers of less value than expected occurring, at - least, not without an additional transfer to top up the difference (which isn't routed through - this contract and accordingly isn't trying to artificially create events from this contract). + If we add nonReentrant to every single function which can effect the balance, we can check the + amount exactly matches. This prevents transfers of less value than expected occurring, at + least, not without an additional transfer to top up the difference (which isn't routed through + this contract and accordingly isn't trying to artificially create events from this contract). - If we don't add nonReentrant, a transfer can be started, and then a new transfer for the - difference can follow it up (again and again until a rounding error is reached). This contract - would believe all transfers were done in full, despite each only being done in part (except - for the last one). + If we don't add nonReentrant, a transfer can be started, and then a new transfer for the + difference can follow it up (again and again until a rounding error is reached). This contract + would believe all transfers were done in full, despite each only being done in part (except + for the last one). - Given fee-on-transfer tokens aren't intended to be supported, the only token actively planned - to be supported is Dai and it doesn't have any fee-on-transfer logic, and how fee-on-transfer - tokens aren't even able to be supported at this time by the larger Serai network, we simply - classify this entire class of tokens as non-standard implementations which induce undefined - behavior. 
+ Given fee-on-transfer tokens aren't intended to be supported, the only token actively planned + to be supported is Dai and it doesn't have any fee-on-transfer logic, and how fee-on-transfer + tokens aren't even able to be supported at this time by the larger Serai network, we simply + classify this entire class of tokens as non-standard implementations which induce undefined + behavior. - It is the Serai network's role not to add support for any non-standard implementations. + It is the Serai network's role not to add support for any non-standard implementations. */ emit InInstruction(msg.sender, coin, amount, instruction); } - /* - We on purposely do not check if these calls succeed. A call either succeeded, and there's no - problem, or the call failed due to: - A) An insolvency - B) A malicious receiver - C) A non-standard token - A is an invariant, B should be dropped, C is something out of the control of this contract. - It is again the Serai's network role to not add support for any non-standard tokens, - */ + /// @dev Perform an ERC20 transfer out + /// @param to The address to transfer the coins to + /// @param coin The coin to transfer + /// @param amount The amount of the coin to transfer + function erc20TransferOut(address to, address coin, uint256 amount) private { + /* + The ERC20s integrated are presumed to have a constant gas cost, meaning this can only be + insufficient: - // Perform an ERC20 transfer out - function _erc20TransferOut(address to, address coin, uint256 value) private { - coin.call{ gas: 100_000 }(abi.encodeWithSelector(IERC20.transfer.selector, msg.sender, value)); - } + A) An integrated ERC20 uses more gas than this limit (presumed not to be the case) + B) An integrated ERC20 is upgraded (integrated ERC20s are presumed to not be upgradeable) + C) This has a variable gas cost and the user set a hook on receive which caused this (in + which case, we accept dropping this) + D) The user was blacklisted (in which case, we again accept dropping this) + E) Other extreme edge cases, for which such tokens are assumed to not be integrated + F) Ethereum opcodes are repriced in a sufficiently breaking fashion - // Perform an ETH/ERC20 transfer out - function _transferOut(address to, address coin, uint256 value) private { - if (coin == address(0)) { - // Enough gas to service the transfer and a minimal amount of logic - to.call{ value: value, gas: 5_000 }(""); - } else { - _erc20TransferOut(to, coin, value); + This should be in such excess of the gas requirements of integrated tokens we'll survive + repricing, so long as the repricing doesn't revolutionize EVM gas costs as we know it. In such + a case, Serai would have to migrate to a new smart contract using `escapeHatch`. + */ + uint256 _gas = 100_000; + + bytes memory _calldata = abi.encodeWithSelector(IERC20.transfer.selector, to, amount); + bool _success; + // slither-disable-next-line assembly + assembly { + /* + `coin` is trusted so we can accept the risk of a return bomb here, yet we won't check the + return value anyways so there's no need to spend the gas decoding it. We assume failures + are the fault of the recipient, not us, the sender. We don't want to have such errors block + the queue of transfers to make. + + If there ever was some invariant broken, off-chain actions is presumed to occur to move to a + new smart contract with whatever necessary changes made/response occurring. 
+ */ + _success := + call( + _gas, + coin, + // Ether value + 0, + // calldata + add(_calldata, 0x20), + mload(_calldata), + // return data + 0, + 0 + ) } } - /* - Serai supports arbitrary calls out via deploying smart contracts (with user-specified code), - letting them execute whatever calls they're coded for. Since we can't meter CREATE, we call - CREATE from this function which we call not internally, but with CALL (which we can meter). - */ - function arbitaryCallOut(bytes memory code) external payable { + /// @dev Perform an ETH/ERC20 transfer out + /// @param to The address to transfer the coins to + /// @param coin The coin to transfer (address(0) if Ether) + /// @param amount The amount of the coin to transfer + function transferOut(address to, address coin, uint256 amount) private { + if (coin == address(0)) { + // Enough gas to service the transfer and a minimal amount of logic + uint256 _gas = 5_000; + // This uses assembly to prevent return bombs + bool _success; + // slither-disable-next-line assembly + assembly { + _success := + call( + _gas, + to, + amount, + // calldata + 0, + 0, + // return data + 0, + 0 + ) + } + } else { + erc20TransferOut(to, coin, amount); + } + } + + /// @notice Execute some arbitrary code within a secure sandbox + /** + * @dev This performs sandboxing by deploying this code with `CREATE`. This is an external + * function as we can't meter `CREATE`/internal functions. We work around this by calling this + * function with `CALL` (which we can meter). This does forward `msg.value` to the newly + * deployed contract. + */ + /// @param code The code to execute + function executeArbitraryCode(bytes memory code) external payable { // Because we're creating a contract, increment our nonce _smartContractNonce += 1; - uint256 msg_value = msg.value; + uint256 msgValue = msg.value; address contractAddress; + // We need to use assembly here because Solidity doesn't expose CREATE + // slither-disable-next-line assembly assembly { - contractAddress := create(msg_value, add(code, 0x20), mload(code)) + contractAddress := create(msgValue, add(code, 0x20), mload(code)) } } - // Execute a list of transactions if they were signed by the current key with the current nonce + /// @notice Execute a batch of `OutInstruction`s + /** + * @dev All `OutInstruction`s in a batch are only for a single coin to simplify handling of the + * fee + */ + /// @param coin The coin all of these `OutInstruction`s are for + /// @param fee The fee to pay (in coin) to the caller for their relaying of this batch + /// @param outs The `OutInstruction`s to act on + /// @param signature The signature by the current key for Serai's Ethereum validators + // Each individual call is explicitly metered to ensure there isn't a DoS here + // slither-disable-next-line calls-loop function execute( address coin, uint256 fee, - OutInstruction[] calldata transactions, + OutInstruction[] calldata outs, Signature calldata signature ) external { // Verify the signature - // We hash the message here as we need the message's hash for the Executed event - // Since we're already going to hash it, hashing it prior to verifying the signature reduces the - // amount of words hashed by its challenge function (reducing our gas costs) - bytes32 message = - keccak256(abi.encode("execute", block.chainid, _nonce, coin, fee, transactions)); - if (!Schnorr.verify(_seraiKey, message, signature.c, signature.s)) { - revert InvalidSignature(); - } + // This uses `encode`, not `encodePacked`, as `outs` is of variable length + // 
TODO: Use a custom encode in verifySignature here with assembly (benchmarking before/after) + bytes32 message = keccak256(abi.encode("execute", _nonce, coin, fee, outs)); + verifySignature(message, signature); - // Since the signature was verified, perform execution + // _nonce: Also include a bit mask here emit Executed(_nonce, message); - // While this is sufficient to prevent replays, it's still technically possible for instructions - // from later batches to be executed before these instructions upon re-entrancy - _nonce++; - for (uint256 i = 0; i < transactions.length; i++) { + /* + Since we don't have a re-entrancy guard, it is possible for instructions from later batches to + be executed before these instructions. This is deemed fine. We only require later batches be + relayed after earlier batches in order to form backpressure. This means if a batch has a fee + which isn't worth relaying the batch for, so long as later batches are sufficiently + worthwhile, every batch will be relayed. + */ + + // slither-disable-next-line reentrancy-events + for (uint256 i = 0; i < outs.length; i++) { // If the destination is an address, we perform a direct transfer - if (transactions[i].destinationType == DestinationType.Address) { - // This may cause a panic and the contract to become stuck if the destination isn't actually - // 20 bytes. Serai is trusted to not pass a malformed destination - (AddressDestination memory destination) = - abi.decode(transactions[i].destination, (AddressDestination)); - _transferOut(destination.destination, coin, transactions[i].value); + if (outs[i].destinationType == DestinationType.Address) { + /* + This may cause a revert if the destination isn't actually a valid address. Serai is + trusted to not pass a malformed destination, yet if it ever did, it could simply re-sign a + corrected batch using this nonce. + */ + address destination = abi.decode(outs[i].destination, (address)); + transferOut(destination, coin, outs[i].amount); } else { - // Prepare for the transfer - uint256 eth_value = 0; + // Prepare the transfer + uint256 ethValue = 0; if (coin == address(0)) { - // If it's ETH, we transfer the value with the call - eth_value = transactions[i].value; + // If it's ETH, we transfer the amount with the call + ethValue = outs[i].amount; } else { - // If it's an ERC20, we calculate the hash of the will-be contract and transfer to it - // before deployment. This avoids needing to deploy, then call again, offering a few - // optimizations - address nextAddress = - address(uint160(uint256(keccak256(abi.encode(address(this), _smartContractNonce))))); - _erc20TransferOut(nextAddress, coin, transactions[i].value); + /* + If it's an ERC20, we calculate the address of the will-be contract and transfer to it + before deployment. This avoids needing to deploy the contract, then call transfer, then + call the contract again + */ + address nextAddress = address( + uint160(uint256(keccak256(abi.encodePacked(address(this), _smartContractNonce)))) + ); + erc20TransferOut(nextAddress, coin, outs[i].amount); } - // Perform the deployment with the defined gas budget - (CodeDestination memory destination) = - abi.decode(transactions[i].destination, (CodeDestination)); - address(this).call{ gas: destination.gas_limit, value: eth_value }( - abi.encodeWithSelector(Router.arbitaryCallOut.selector, destination.code) + (CodeDestination memory destination) = abi.decode(outs[i].destination, (CodeDestination)); + + /* + Perform the deployment with the defined gas budget. 
+
+          We don't care if the following call fails as we don't want to block/retry if it does.
+          Failures are considered the recipient's fault. We explicitly do not want the surface
+          area/inefficiency of caching these for later attempted retries.
+
+          We don't have to worry about a return bomb here as this is our own function which doesn't
+          return any data.
+        */
+        address(this).call{ gas: destination.gasLimit, value: ethValue }(
+          abi.encodeWithSelector(Router.executeArbitraryCode.selector, destination.code)
         );
       }
     }
 
-    // Transfer to the caller the fee
-    _transferOut(msg.sender, coin, fee);
+    // Transfer the fee to the relayer
+    transferOut(msg.sender, coin, fee);
   }
 
+  /// @notice Escapes to a new smart contract
+  /// @dev This should be used upon an invariant being reached or new functionality being needed
+  /// @param escapeTo The address to escape to
+  /// @param signature The signature by the current key for Serai's Ethereum validators
+  function escapeHatch(address escapeTo, Signature calldata signature) external {
+    if (escapeTo == address(0)) {
+      revert InvalidEscapeAddress();
+    }
+    /*
+      We want to define the escape hatch so coins here now, and latently received, can be forwarded.
+      If the last Serai key set could update the escape hatch, they could siphon off latently
+      received coins without penalty (if they update the escape hatch after unstaking).
+    */
+    if (_escapedTo != address(0)) {
+      revert EscapeHatchInvoked();
+    }
+
+    // Verify the signature
+    bytes32 message = keccak256(abi.encodePacked("escapeHatch", _nonce, escapeTo));
+    verifySignature(message, signature);
+
+    _escapedTo = escapeTo;
+    emit EscapeHatch(escapeTo);
+  }
+
+  /// @notice Escape coins after the escape hatch has been invoked
+  /// @param coin The coin to escape
+  function escape(address coin) external {
+    if (_escapedTo == address(0)) {
+      revert EscapeHatchNotInvoked();
+    }
+
+    emit Escaped(coin);
+
+    // Fetch the amount to escape
+    uint256 amount = address(this).balance;
+    if (coin != address(0)) {
+      amount = IERC20(coin).balanceOf(address(this));
+    }
+
+    // Perform the transfer
+    transferOut(_escapedTo, coin, amount);
+  }
+
+  /// @notice Fetch the next nonce to use by an action published to this contract
+  /// return The next nonce to use by an action published to this contract
   function nonce() external view returns (uint256) {
     return _nonce;
   }
 
-  function smartContractNonce() external view returns (uint256) {
-    return _smartContractNonce;
-  }
-
+  /// @notice Fetch the current key for Serai's Ethereum validator set
+  /// @return The current key for Serai's Ethereum validator set
   function seraiKey() external view returns (bytes32) {
     return _seraiKey;
   }
+
+  /// @notice Fetch the address escaped to
+  /// @return The address which was escaped to (address(0) if the escape hatch hasn't been invoked)
+  function escapedTo() external view returns (address) {
+    return _escapedTo;
+  }
 }
+
+// slither-disable-end low-level-calls,unchecked-lowlevel
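The Router's signed actions all bind a string tag and the next nonce into the signed message: `updateSeraiKey` and `escapeHatch` hash an `abi.encodePacked` tuple, while `execute` uses `abi.encode` as `outs` is of variable length. The Rust side mirrors this construction (see `update_serai_key_message` in the diff below). As an illustration, a hypothetical mirror for `escapeHatch`, which this patch series does not define, would follow the same shape:

```rust
use alloy_core::primitives::{Address, U256};
use alloy_sol_types::SolValue;

// Hypothetical: a Rust mirror of the contract's `escapeHatch` message. The
// contract verifies a Schnorr signature over
// keccak256(abi.encodePacked("escapeHatch", _nonce, escapeTo)); packed-encoding
// the same tuple in Rust yields the same preimage.
pub fn escape_hatch_message(nonce: u64, escape_to: Address) -> Vec<u8> {
  ("escapeHatch", U256::try_from(nonce).unwrap(), escape_to).abi_encode_packed()
}
```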
diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs
index 7a7cffd8..6926425a 100644
--- a/processor/ethereum/router/src/lib.rs
+++ b/processor/ethereum/router/src/lib.rs
@@ -297,10 +297,9 @@ impl Router {
   }
 
   /// Get the message to be signed in order to update the key for Serai.
-  pub fn update_serai_key_message(chain_id: U256, nonce: u64, key: &PublicKey) -> Vec<u8> {
+  pub fn update_serai_key_message(nonce: u64, key: &PublicKey) -> Vec<u8> {
     (
       "updateSeraiKey",
-      chain_id,
       U256::try_from(nonce).expect("couldn't convert u64 to u256"),
       key.eth_repr(),
     )
@@ -322,13 +321,12 @@ impl Router {
 
   /// Get the message to be signed in order to execute a series of `OutInstruction`s.
   pub fn execute_message(
-    chain_id: U256,
     nonce: u64,
     coin: Coin,
     fee: U256,
     outs: OutInstructions,
   ) -> Vec<u8> {
-    ("execute", chain_id, U256::try_from(nonce).unwrap(), coin.address(), fee, outs.0).abi_encode()
+    ("execute", U256::try_from(nonce).unwrap(), coin.address(), fee, outs.0).abi_encode()
   }
 
   /// Construct a transaction to execute a batch of `OutInstruction`s.

From 8e800885fbf63939d614ae25c6da8e1e47b0d960 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Wed, 30 Oct 2024 21:36:31 -0400
Subject: [PATCH 190/368] Simplify deterministic signing process in
 serai-processor-ethereum-primitives

This should be easier to specify, and to write an alternative implementation of.

---
 processor/ethereum/primitives/src/lib.rs | 25 ++++++++++++------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/processor/ethereum/primitives/src/lib.rs b/processor/ethereum/primitives/src/lib.rs
index ccf41344..1fbf2834 100644
--- a/processor/ethereum/primitives/src/lib.rs
+++ b/processor/ethereum/primitives/src/lib.rs
@@ -3,7 +3,7 @@
 #![deny(missing_docs)]
 
 use group::ff::PrimeField;
-use k256::{elliptic_curve::ops::Reduce, U256, Scalar};
+use k256::Scalar;
 
 use alloy_core::primitives::{Parity, Signature};
 use alloy_consensus::{SignableTransaction, Signed, TxLegacy};
@@ -15,20 +15,21 @@ pub fn keccak256(data: impl AsRef<[u8]>) -> [u8; 32] {
 
 /// Deterministically sign a transaction.
 ///
-/// This function panics if passed a transaction with a non-None chain ID.
+/// This signs a transaction via setting `r = 1, s = 1`, and incrementing `r` until a signer is
+/// recoverable from the signature for this transaction. The purpose of this is to be able to send
+/// a transaction from a known account which no one knows the private key for.
+///
+/// This function panics if passed a transaction with a non-None chain ID. This is because the
+/// signer for this transaction is only singular across any/all EVM instances if it isn't binding
+/// to an instance.
pub fn deterministically_sign(tx: &TxLegacy) -> Signed { - pub fn hash_to_scalar(data: impl AsRef<[u8]>) -> Scalar { - >::reduce_bytes(&keccak256(data).into()) - } - assert!( tx.chain_id.is_none(), - "chain ID was Some when deterministically signing a TX (causing a non-deterministic signer)" + "chain ID was Some when deterministically signing a TX (causing a non-singular signer)" ); - let sig_hash = tx.signature_hash().0; - let mut r = hash_to_scalar([sig_hash.as_slice(), b"r"].concat()); - let mut s = hash_to_scalar([sig_hash.as_slice(), b"s"].concat()); + let mut r = Scalar::ONE; + let s = Scalar::ONE; loop { // Create the signature let r_bytes: [u8; 32] = r.to_repr().into(); @@ -42,8 +43,6 @@ pub fn deterministically_sign(tx: &TxLegacy) -> Signed { return tx; } - // Re-hash until valid - r = hash_to_scalar(r_bytes); - s = hash_to_scalar(s_bytes); + r += Scalar::ONE; } } From b2ec58a445f51eb5ba3cf5b17085403b54e5933a Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 30 Oct 2024 21:48:40 -0400 Subject: [PATCH 191/368] Update serai-ethereum-processor to compile --- processor/bin/src/lib.rs | 2 +- processor/ethereum/deployer/src/lib.rs | 2 +- processor/ethereum/router/src/lib.rs | 26 +++++--------- processor/ethereum/src/main.rs | 15 ++------ .../ethereum/src/primitives/transaction.rs | 34 ++++++------------- processor/ethereum/src/publisher.rs | 4 +-- processor/ethereum/src/scheduler.rs | 12 ++----- 7 files changed, 27 insertions(+), 68 deletions(-) diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs index 7d98f812..5403922e 100644 --- a/processor/bin/src/lib.rs +++ b/processor/bin/src/lib.rs @@ -296,7 +296,7 @@ pub async fn main_loop< is equally valid unless we want to start introspecting (and should be our only Batch anyways). */ - burns.drain(..).collect(), + std::mem::take(&mut burns), key_to_activate, ); } diff --git a/processor/ethereum/deployer/src/lib.rs b/processor/ethereum/deployer/src/lib.rs index ff8174a6..50180236 100644 --- a/processor/ethereum/deployer/src/lib.rs +++ b/processor/ethereum/deployer/src/lib.rs @@ -67,7 +67,7 @@ impl Deployer { /// Construct a new view of the Deployer. /// - /// This will return None if the Deployer has yet to be deployed on-chain. + /// This will return `None` if the Deployer has yet to be deployed on-chain. 
pub async fn new( provider: Arc>, ) -> Result, RpcError> { diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index 6926425a..e0a53ac6 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -168,20 +168,19 @@ impl From<&[(SeraiAddress, U256)]> for OutInstructions { .map(|(address, amount)| { #[allow(non_snake_case)] let (destinationType, destination) = match address { - SeraiAddress::Address(address) => ( - abi::DestinationType::Address, - (abi::AddressDestination { destination: Address::from(address) }).abi_encode(), - ), + SeraiAddress::Address(address) => { + (abi::DestinationType::Address, (Address::from(address)).abi_encode()) + } SeraiAddress::Contract(contract) => ( abi::DestinationType::Code, (abi::CodeDestination { - gas_limit: contract.gas_limit(), + gasLimit: contract.gas_limit(), code: contract.code().to_vec().into(), }) .abi_encode(), ), }; - abi::OutInstruction { destinationType, destination: destination.into(), value: *amount } + abi::OutInstruction { destinationType, destination: destination.into(), amount: *amount } }) .collect(), ) @@ -298,11 +297,7 @@ impl Router { /// Get the message to be signed in order to update the key for Serai. pub fn update_serai_key_message(nonce: u64, key: &PublicKey) -> Vec { - ( - "updateSeraiKey", - U256::try_from(nonce).expect("couldn't convert u64 to u256"), - key.eth_repr(), - ) + ("updateSeraiKey", U256::try_from(nonce).expect("couldn't convert u64 to u256"), key.eth_repr()) .abi_encode_packed() } @@ -320,12 +315,7 @@ impl Router { } /// Get the message to be signed in order to execute a series of `OutInstruction`s. - pub fn execute_message( - nonce: u64, - coin: Coin, - fee: U256, - outs: OutInstructions, - ) -> Vec { + pub fn execute_message(nonce: u64, coin: Coin, fee: U256, outs: OutInstructions) -> Vec { ("execute", U256::try_from(nonce).unwrap(), coin.address(), fee, outs.0).abi_encode() } @@ -540,7 +530,7 @@ impl Router { nonce: log.nonce.try_into().map_err(|e| { TransportErrorKind::Custom(format!("filtered to convert nonce to u64: {e:?}").into()) })?, - message_hash: log.message_hash.into(), + message_hash: log.messageHash.into(), }); } } diff --git a/processor/ethereum/src/main.rs b/processor/ethereum/src/main.rs index 7acdffdb..acb5bd0d 100644 --- a/processor/ethereum/src/main.rs +++ b/processor/ethereum/src/main.rs @@ -8,10 +8,9 @@ static ALLOCATOR: zalloc::ZeroizingAlloc = use std::sync::Arc; -use alloy_core::primitives::U256; use alloy_simple_request_transport::SimpleRequest; use alloy_rpc_client::ClientBuilder; -use alloy_provider::{Provider, RootProvider}; +use alloy_provider::RootProvider; use serai_client::validator_sets::primitives::Session; @@ -63,20 +62,10 @@ async fn main() { ClientBuilder::default().transport(SimpleRequest::new(bin::url()), true), )); - let chain_id = loop { - match provider.get_chain_id().await { - Ok(chain_id) => break U256::try_from(chain_id).unwrap(), - Err(e) => { - log::error!("couldn't connect to the Ethereum node for the chain ID: {e:?}"); - tokio::time::sleep(core::time::Duration::from_secs(5)).await; - } - } - }; - bin::main_loop::( db.clone(), Rpc { db: db.clone(), provider: provider.clone() }, - Scheduler::::new(SmartContract { chain_id }), + Scheduler::::new(SmartContract), TransactionPublisher::new(db, provider, { let relayer_hostname = env::var("ETHEREUM_RELAYER_HOSTNAME") .expect("ethereum relayer hostname wasn't specified") diff --git a/processor/ethereum/src/primitives/transaction.rs 
b/processor/ethereum/src/primitives/transaction.rs index 6730e7a9..98de30c8 100644 --- a/processor/ethereum/src/primitives/transaction.rs +++ b/processor/ethereum/src/primitives/transaction.rs @@ -17,8 +17,8 @@ use crate::{output::OutputId, machine::ClonableTransctionMachine}; #[derive(Clone, PartialEq, Debug)] pub(crate) enum Action { - SetKey { chain_id: U256, nonce: u64, key: PublicKey }, - Batch { chain_id: U256, nonce: u64, coin: Coin, fee: U256, outs: Vec<(Address, U256)> }, + SetKey { nonce: u64, key: PublicKey }, + Batch { nonce: u64, coin: Coin, fee: U256, outs: Vec<(Address, U256)> }, } #[derive(Clone, PartialEq, Eq, Debug)] @@ -33,24 +33,16 @@ impl Action { pub(crate) fn message(&self) -> Vec { match self { - Action::SetKey { chain_id, nonce, key } => { - Router::update_serai_key_message(*chain_id, *nonce, key) + Action::SetKey { nonce, key } => Router::update_serai_key_message(*nonce, key), + Action::Batch { nonce, coin, fee, outs } => { + Router::execute_message(*nonce, *coin, *fee, OutInstructions::from(outs.as_ref())) } - Action::Batch { chain_id, nonce, coin, fee, outs } => Router::execute_message( - *chain_id, - *nonce, - *coin, - *fee, - OutInstructions::from(outs.as_ref()), - ), } } pub(crate) fn eventuality(&self) -> Eventuality { Eventuality(match self { - Self::SetKey { chain_id: _, nonce, key } => { - Executed::SetKey { nonce: *nonce, key: key.eth_repr() } - } + Self::SetKey { nonce, key } => Executed::SetKey { nonce: *nonce, key: key.eth_repr() }, Self::Batch { nonce, .. } => { Executed::Batch { nonce: *nonce, message_hash: keccak256(self.message()) } } @@ -85,10 +77,6 @@ impl SignableTransaction for Action { Err(io::Error::other("unrecognized Action type"))?; } - let mut chain_id = [0; 32]; - reader.read_exact(&mut chain_id)?; - let chain_id = U256::from_le_bytes(chain_id); - let mut nonce = [0; 8]; reader.read_exact(&mut nonce)?; let nonce = u64::from_le_bytes(nonce); @@ -100,7 +88,7 @@ impl SignableTransaction for Action { let key = PublicKey::from_eth_repr(key).ok_or_else(|| io::Error::other("invalid key in Action"))?; - Action::SetKey { chain_id, nonce, key } + Action::SetKey { nonce, key } } 1 => { let coin = Coin::read(reader)?; @@ -123,22 +111,20 @@ impl SignableTransaction for Action { outs.push((address, amount)); } - Action::Batch { chain_id, nonce, coin, fee, outs } + Action::Batch { nonce, coin, fee, outs } } _ => unreachable!(), }) } fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { match self { - Self::SetKey { chain_id, nonce, key } => { + Self::SetKey { nonce, key } => { writer.write_all(&[0])?; - writer.write_all(&chain_id.as_le_bytes())?; writer.write_all(&nonce.to_le_bytes())?; writer.write_all(&key.eth_repr()) } - Self::Batch { chain_id, nonce, coin, fee, outs } => { + Self::Batch { nonce, coin, fee, outs } => { writer.write_all(&[1])?; - writer.write_all(&chain_id.as_le_bytes())?; writer.write_all(&nonce.to_le_bytes())?; coin.write(writer)?; writer.write_all(&fee.as_le_bytes())?; diff --git a/processor/ethereum/src/publisher.rs b/processor/ethereum/src/publisher.rs index 3d18a6ef..a4edd65f 100644 --- a/processor/ethereum/src/publisher.rs +++ b/processor/ethereum/src/publisher.rs @@ -88,8 +88,8 @@ impl signers::TransactionPublisher for TransactionPublisher< let nonce = tx.0.nonce(); // Convert from an Action (an internal representation of a signable event) to a TxLegacy let tx = match tx.0 { - Action::SetKey { chain_id: _, nonce: _, key } => router.update_serai_key(&key, &tx.1), - Action::Batch { chain_id: _, nonce: _, coin, fee, 
outs } => { + Action::SetKey { nonce: _, key } => router.update_serai_key(&key, &tx.1), + Action::Batch { nonce: _, coin, fee, outs } => { router.execute(coin, fee, OutInstructions::from(outs.as_ref()), &tx.1) } }; diff --git a/processor/ethereum/src/scheduler.rs b/processor/ethereum/src/scheduler.rs index e8a437c1..e7752897 100644 --- a/processor/ethereum/src/scheduler.rs +++ b/processor/ethereum/src/scheduler.rs @@ -36,9 +36,7 @@ fn balance_to_ethereum_amount(balance: Balance) -> U256 { } #[derive(Clone)] -pub(crate) struct SmartContract { - pub(crate) chain_id: U256, -} +pub(crate) struct SmartContract; impl smart_contract_scheduler::SmartContract> for SmartContract { type SignableTransaction = Action; @@ -48,11 +46,8 @@ impl smart_contract_scheduler::SmartContract> for SmartContract { _retiring_key: KeyFor>, new_key: KeyFor>, ) -> (Self::SignableTransaction, EventualityFor>) { - let action = Action::SetKey { - chain_id: self.chain_id, - nonce, - key: PublicKey::new(new_key).expect("rotating to an invald key"), - }; + let action = + Action::SetKey { nonce, key: PublicKey::new(new_key).expect("rotating to an invald key") }; (action.clone(), action.eventuality()) } @@ -138,7 +133,6 @@ impl smart_contract_scheduler::SmartContract> for SmartContract { } res.push(Action::Batch { - chain_id: self.chain_id, nonce, coin: coin_to_ethereum_coin(coin), fee: U256::try_from(total_gas).unwrap() * fee_per_gas, From 6a520a7412be2b2effedc77a4c18c7795cee1deb Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 31 Oct 2024 02:23:59 -0400 Subject: [PATCH 192/368] Work on testing the Router --- .github/workflows/tests.yml | 1 + Cargo.lock | 19 ++ Cargo.toml | 1 + deny.toml | 1 + processor/ethereum/deployer/src/lib.rs | 17 +- processor/ethereum/router/Cargo.toml | 16 +- .../ethereum/router/contracts/Router.sol | 34 +-- processor/ethereum/router/src/lib.rs | 64 +++++- processor/ethereum/router/src/tests/mod.rs | 210 ++++++++++++++++++ processor/ethereum/test-primitives/Cargo.toml | 28 +++ processor/ethereum/test-primitives/LICENSE | 15 ++ processor/ethereum/test-primitives/README.md | 5 + processor/ethereum/test-primitives/src/lib.rs | 117 ++++++++++ 13 files changed, 505 insertions(+), 23 deletions(-) create mode 100644 processor/ethereum/router/src/tests/mod.rs create mode 100644 processor/ethereum/test-primitives/Cargo.toml create mode 100644 processor/ethereum/test-primitives/LICENSE create mode 100644 processor/ethereum/test-primitives/README.md create mode 100644 processor/ethereum/test-primitives/src/lib.rs diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index d207e9cd..3adc3ac5 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -53,6 +53,7 @@ jobs: -p serai-processor-bin \ -p serai-bitcoin-processor \ -p serai-processor-ethereum-primitives \ + -p serai-processor-ethereum-test-primitives \ -p serai-processor-ethereum-deployer \ -p serai-processor-ethereum-router \ -p serai-processor-ethereum-erc20 \ diff --git a/Cargo.lock b/Cargo.lock index fd9838f2..0550b05e 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8374,6 +8374,19 @@ dependencies = [ "tokio", ] +[[package]] +name = "serai-ethereum-test-primitives" +version = "0.1.0" +dependencies = [ + "alloy-consensus", + "alloy-core", + "alloy-provider", + "alloy-rpc-types-eth", + "alloy-simple-request-transport", + "k256", + "serai-processor-ethereum-primitives", +] + [[package]] name = "serai-full-stack-tests" version = "0.1.0" @@ -8706,7 +8719,9 @@ version = "0.1.0" dependencies = [ "alloy-consensus", 
"alloy-core", + "alloy-node-bindings", "alloy-provider", + "alloy-rpc-client", "alloy-rpc-types-eth", "alloy-simple-request-transport", "alloy-sol-macro-expander", @@ -8716,12 +8731,16 @@ dependencies = [ "build-solidity-contracts", "ethereum-schnorr-contract", "group", + "k256", + "rand_core", "serai-client", + "serai-ethereum-test-primitives", "serai-processor-ethereum-deployer", "serai-processor-ethereum-erc20", "serai-processor-ethereum-primitives", "syn 2.0.77", "syn-solidity", + "tokio", ] [[package]] diff --git a/Cargo.toml b/Cargo.toml index facd5a6a..16c12262 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -88,6 +88,7 @@ members = [ "processor/bin", "processor/bitcoin", "processor/ethereum/primitives", + "processor/ethereum/test-primitives", "processor/ethereum/deployer", "processor/ethereum/router", "processor/ethereum/erc20", diff --git a/deny.toml b/deny.toml index d09fc8eb..51bffbc0 100644 --- a/deny.toml +++ b/deny.toml @@ -60,6 +60,7 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-bitcoin-processor" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-primitives" }, + { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-test-primitives" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-deployer" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-router" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-erc20" }, diff --git a/processor/ethereum/deployer/src/lib.rs b/processor/ethereum/deployer/src/lib.rs index 50180236..2293de47 100644 --- a/processor/ethereum/deployer/src/lib.rs +++ b/processor/ethereum/deployer/src/lib.rs @@ -59,12 +59,27 @@ impl Deployer { } /// Obtain the deterministic address for this contract. - pub(crate) fn address() -> Address { + pub fn address() -> Address { let deployer_deployer = Self::deployment_tx().recover_signer().expect("deployment_tx didn't have a valid signature"); Address::create(&deployer_deployer, 0) } + /// Obtain the unsigned transaction to deploy a contract. + /// + /// This will not have its `nonce`, `gas_price`, nor `gas_limit` filled out. + pub fn deploy_tx(init_code: Vec) -> TxLegacy { + TxLegacy { + chain_id: None, + nonce: 0, + gas_price: 0, + gas_limit: 0, + to: TxKind::Call(Self::address()), + value: U256::ZERO, + input: abi::Deployer::deployCall::new((init_code.into(),)).abi_encode().into(), + } + } + /// Construct a new view of the Deployer. /// /// This will return `None` if the Deployer has yet to be deployed on-chain. 
diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index d21a26d9..132a9fa4 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -20,10 +20,10 @@ workspace = true group = { version = "0.13", default-features = false } alloy-core = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.3", default-features = false } - alloy-sol-types = { version = "0.8", default-features = false } +alloy-consensus = { version = "0.3", default-features = false } + alloy-rpc-types-eth = { version = "0.3", default-features = false } alloy-transport = { version = "0.3", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } @@ -45,3 +45,15 @@ syn = { version = "2", default-features = false, features = ["proc-macro"] } syn-solidity = { version = "0.8", default-features = false } alloy-sol-macro-input = { version = "0.8", default-features = false } alloy-sol-macro-expander = { version = "0.8", default-features = false } + +[dev-dependencies] +rand_core = { version = "0.6", default-features = false, features = ["std"] } + +k256 = { version = "0.13", default-features = false, features = ["std"] } + +alloy-rpc-client = { version = "0.3", default-features = false } +alloy-node-bindings = { version = "0.3", default-features = false } + +tokio = { version = "1.0", default-features = false, features = ["rt-multi-thread", "macros"] } + +ethereum-test-primitives = { package = "serai-ethereum-test-primitives", path = "../test-primitives" } diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 2fd0f2e4..8607e732 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -1,6 +1,8 @@ // SPDX-License-Identifier: AGPL-3.0-only pragma solidity ^0.8.26; +// TODO: MIT licensed interface + import "IERC20.sol"; import "Schnorr.sol"; @@ -34,8 +36,11 @@ contract Router { */ uint256 private _smartContractNonce; - /// @dev A nonce incremented upon an action to prevent replays/out-of-order execution - uint256 private _nonce; + /** + * @dev The nonce to verify the next signature with, incremented upon an action to prevent + * replays/out-of-order execution + */ + uint256 private _nextNonce; /** * @dev The current public key for Serai's Ethereum validator set, in the form the Schnorr library @@ -124,7 +129,7 @@ contract Router { /** * @dev Updates the Serai key at the end of the current function. Executing at the end of the * current function allows verifying a signature with the current key. This does not update - * `_nonce` + * `_nextNonce` */ /// @param nonceUpdatedWith The nonce used to update the key /// @param newSeraiKey The key updated to @@ -145,7 +150,7 @@ contract Router { _smartContractNonce = 1; // We consumed nonce 0 when setting the initial Serai key - _nonce = 1; + _nextNonce = 1; // We haven't escaped to any address yet _escapedTo = address(0); @@ -163,18 +168,19 @@ contract Router { if (!Schnorr.verify(_seraiKey, message, signature.c, signature.s)) { revert InvalidSignature(); } - // Increment the nonce + // Set the next nonce unchecked { - _nonce++; + _nextNonce++; } } /// @notice Update the key representing Serai's Ethereum validators + /// @dev This assumes the key is correct. 
No checks on it are performed /// @param newSeraiKey The key to update to /// @param signature The signature by the current key authorizing this update function updateSeraiKey(bytes32 newSeraiKey, Signature calldata signature) external - updateSeraiKeyAtEndOfFn(_nonce, newSeraiKey) + updateSeraiKeyAtEndOfFn(_nextNonce, newSeraiKey) { /* This DST needs a length prefix as well to prevent DSTs potentially being substrings of each @@ -188,7 +194,7 @@ contract Router { This uses encodePacked as all items present here are of fixed length. */ - bytes32 message = keccak256(abi.encodePacked("updateSeraiKey", _nonce, newSeraiKey)); + bytes32 message = keccak256(abi.encodePacked("updateSeraiKey", _nextNonce, newSeraiKey)); verifySignature(message, signature); } @@ -366,11 +372,11 @@ contract Router { // Verify the signature // This uses `encode`, not `encodePacked`, as `outs` is of variable length // TODO: Use a custom encode in verifySignature here with assembly (benchmarking before/after) - bytes32 message = keccak256(abi.encode("execute", _nonce, coin, fee, outs)); + bytes32 message = keccak256(abi.encode("execute", _nextNonce, coin, fee, outs)); verifySignature(message, signature); - // _nonce: Also include a bit mask here - emit Executed(_nonce, message); + // TODO: Also include a bit mask here + emit Executed(_nextNonce, message); /* Since we don't have a re-entrancy guard, it is possible for instructions from later batches to @@ -449,7 +455,7 @@ contract Router { } // Verify the signature - bytes32 message = keccak256(abi.encodePacked("escapeHatch", _nonce, escapeTo)); + bytes32 message = keccak256(abi.encodePacked("escapeHatch", _nextNonce, escapeTo)); verifySignature(message, signature); _escapedTo = escapeTo; @@ -477,8 +483,8 @@ contract Router { /// @notice Fetch the next nonce to use by an action published to this contract /// return The next nonce to use by an action published to this contract - function nonce() external view returns (uint256) { - return _nonce; + function nextNonce() external view returns (uint256) { + return _nextNonce; } /// @notice Fetch the current key for Serai's Ethereum validator set diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index e0a53ac6..eeee70e7 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -7,11 +7,11 @@ use std::{sync::Arc, io, collections::HashSet}; use group::ff::PrimeField; use alloy_core::primitives::{hex::FromHex, Address, U256, Bytes, TxKind}; -use alloy_consensus::TxLegacy; - use alloy_sol_types::{SolValue, SolConstructor, SolCall, SolEvent}; -use alloy_rpc_types_eth::Filter; +use alloy_consensus::TxLegacy; + +use alloy_rpc_types_eth::{TransactionRequest, TransactionInput, BlockId, Filter}; use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; @@ -37,6 +37,9 @@ use abi::{ Executed as ExecutedEvent, }; +#[cfg(test)] +mod tests; + impl From<&Signature> for abi::Signature { fn from(signature: &Signature) -> Self { Self { @@ -270,6 +273,15 @@ impl Router { bytecode } + /// Obtain the transaction to deploy this contract. + /// + /// This transaction assumes the `Deployer` has already been deployed. + pub fn deployment_tx(initial_serai_key: &PublicKey) -> TxLegacy { + let mut tx = Deployer::deploy_tx(Self::init_code(initial_serai_key)); + tx.gas_limit = 883654 * 120 / 100; + tx + } + /// Create a new view of the Router. 
/// /// This performs an on-chain lookup for the first deployed Router constructed with this public @@ -303,20 +315,20 @@ impl Router { /// Construct a transaction to update the key representing Serai. pub fn update_serai_key(&self, public_key: &PublicKey, sig: &Signature) -> TxLegacy { - // TODO: Set a more accurate gas TxLegacy { to: TxKind::Call(self.1), input: abi::updateSeraiKeyCall::new((public_key.eth_repr().into(), sig.into())) .abi_encode() .into(), - gas_limit: 100_000, + gas_limit: 40748 * 120 / 100, ..Default::default() } } /// Get the message to be signed in order to execute a series of `OutInstruction`s. pub fn execute_message(nonce: u64, coin: Coin, fee: U256, outs: OutInstructions) -> Vec { - ("execute", U256::try_from(nonce).unwrap(), coin.address(), fee, outs.0).abi_encode() + ("execute".to_string(), U256::try_from(nonce).unwrap(), coin.address(), fee, outs.0) + .abi_encode_sequence() } /// Construct a transaction to execute a batch of `OutInstruction`s. @@ -539,4 +551,44 @@ impl Router { Ok(res) } + + /// Fetch the current key for Serai's Ethereum validators + pub async fn key(&self, block: BlockId) -> Result> { + let call = TransactionRequest::default() + .to(self.1) + .input(TransactionInput::new(abi::seraiKeyCall::new(()).abi_encode().into())); + let bytes = self.0.call(&call).block(block).await?; + let res = abi::seraiKeyCall::abi_decode_returns(&bytes, true) + .map_err(|e| TransportErrorKind::Custom(format!("filtered to decode key: {e:?}").into()))?; + Ok( + PublicKey::from_eth_repr(res._0.into()).ok_or_else(|| { + TransportErrorKind::Custom("invalid key set on router".to_string().into()) + })?, + ) + } + + /// Fetch the nonce of the next action to execute + pub async fn next_nonce(&self, block: BlockId) -> Result> { + let call = TransactionRequest::default() + .to(self.1) + .input(TransactionInput::new(abi::nextNonceCall::new(()).abi_encode().into())); + let bytes = self.0.call(&call).block(block).await?; + let res = abi::nextNonceCall::abi_decode_returns(&bytes, true) + .map_err(|e| TransportErrorKind::Custom(format!("filtered to decode nonce: {e:?}").into()))?; + Ok(u64::try_from(res._0).map_err(|_| { + TransportErrorKind::Custom("nonce returned exceeded 2**64".to_string().into()) + })?) 
+ } + + /// Fetch the address the escape hatch was set to + pub async fn escaped_to(&self, block: BlockId) -> Result> { + let call = TransactionRequest::default() + .to(self.1) + .input(TransactionInput::new(abi::escapedToCall::new(()).abi_encode().into())); + let bytes = self.0.call(&call).block(block).await?; + let res = abi::escapedToCall::abi_decode_returns(&bytes, true).map_err(|e| { + TransportErrorKind::Custom(format!("filtered to decode the address escaped to: {e:?}").into()) + })?; + Ok(res._0) + } } diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs new file mode 100644 index 00000000..317003e8 --- /dev/null +++ b/processor/ethereum/router/src/tests/mod.rs @@ -0,0 +1,210 @@ +use std::{sync::Arc, collections::HashSet}; + +use rand_core::{RngCore, OsRng}; + +use group::ff::Field; +use k256::{Scalar, ProjectivePoint}; + +use alloy_core::primitives::{Address, U256, TxKind}; +use alloy_sol_types::SolCall; + +use alloy_consensus::TxLegacy; + +use alloy_rpc_types_eth::BlockNumberOrTag; +use alloy_simple_request_transport::SimpleRequest; +use alloy_rpc_client::ClientBuilder; +use alloy_provider::RootProvider; + +use alloy_node_bindings::{Anvil, AnvilInstance}; + +use ethereum_schnorr::{PublicKey, Signature}; +use ethereum_deployer::Deployer; + +use crate::{Coin, OutInstructions, Router}; + +pub(crate) fn test_key() -> (Scalar, PublicKey) { + loop { + let key = Scalar::random(&mut OsRng); + let point = ProjectivePoint::GENERATOR * key; + if let Some(public_key) = PublicKey::new(point) { + return (key, public_key); + } + } +} + +async fn setup_test( +) -> (AnvilInstance, Arc>, Router, (Scalar, PublicKey)) { + let anvil = Anvil::new().spawn(); + + let provider = Arc::new(RootProvider::new( + ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true), + )); + + let (private_key, public_key) = test_key(); + assert!(Router::new(provider.clone(), &public_key).await.unwrap().is_none()); + + // Deploy the Deployer + let receipt = ethereum_test_primitives::publish_tx(&provider, Deployer::deployment_tx()).await; + assert!(receipt.status()); + + // Get the TX to deploy the Router + let mut tx = Router::deployment_tx(&public_key); + // Set a gas price (100 gwei) + tx.gas_price = 100_000_000_000u128; + // Sign it + let tx = ethereum_primitives::deterministically_sign(&tx); + // Publish it + let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await; + assert!(receipt.status()); + println!("Router deployment used {} gas:", receipt.gas_used); + + let router = Router::new(provider.clone(), &public_key).await.unwrap().unwrap(); + + (anvil, provider, router, (private_key, public_key)) +} + +#[tokio::test] +async fn test_constructor() { + let (_anvil, _provider, router, key) = setup_test().await; + assert_eq!(router.key(BlockNumberOrTag::Latest.into()).await.unwrap(), key.1); + assert_eq!(router.next_nonce(BlockNumberOrTag::Latest.into()).await.unwrap(), 1); + assert_eq!( + router.escaped_to(BlockNumberOrTag::Latest.into()).await.unwrap(), + Address::from([0; 20]) + ); +} + +#[tokio::test] +async fn test_update_serai_key() { + let (_anvil, provider, router, key) = setup_test().await; + + let update_to = test_key().1; + let msg = Router::update_serai_key_message(1, &update_to); + + let nonce = Scalar::random(&mut OsRng); + let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &key.1, &msg); + let s = nonce + (c * key.0); + + let sig = Signature::new(c, s).unwrap(); + + let mut tx = 
router.update_serai_key(&update_to, &sig);
+  tx.gas_price = 100_000_000_000u128;
+  let tx = ethereum_primitives::deterministically_sign(&tx);
+  let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await;
+  assert!(receipt.status());
+  println!("update_serai_key used {} gas:", receipt.gas_used);
+
+  assert_eq!(router.key(receipt.block_hash.unwrap().into()).await.unwrap(), update_to);
+  assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 2);
+}
+
+#[tokio::test]
+async fn test_eth_in_instruction() {
+  let (_anvil, provider, router, _key) = setup_test().await;
+
+  let amount = U256::try_from(OsRng.next_u64()).unwrap();
+  let mut in_instruction = vec![0; usize::try_from(OsRng.next_u64() % 256).unwrap()];
+  OsRng.fill_bytes(&mut in_instruction);
+
+  let tx = TxLegacy {
+    chain_id: None,
+    nonce: 0,
+    // 100 gwei
+    gas_price: 100_000_000_000u128,
+    gas_limit: 1_000_000u128,
+    to: TxKind::Call(router.address()),
+    value: amount,
+    input: crate::abi::inInstructionCall::new((
+      [0; 20].into(),
+      amount,
+      in_instruction.clone().into(),
+    ))
+    .abi_encode()
+    .into(),
+  };
+  let tx = ethereum_primitives::deterministically_sign(&tx);
+  let signer = tx.recover_signer().unwrap();
+
+  let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await;
+  assert!(receipt.status());
+
+  assert_eq!(receipt.inner.logs().len(), 1);
+  let parsed_log =
+    receipt.inner.logs()[0].log_decode::<crate::abi::InInstruction>().unwrap().inner.data;
+  assert_eq!(parsed_log.from, signer);
+  assert_eq!(parsed_log.coin, Address::from([0; 20]));
+  assert_eq!(parsed_log.amount, amount);
+  assert_eq!(parsed_log.instruction.as_ref(), &in_instruction);
+
+  let parsed_in_instructions =
+    router.in_instructions(receipt.block_number.unwrap(), &HashSet::new()).await.unwrap();
+  assert_eq!(parsed_in_instructions.len(), 1);
+  assert_eq!(
+    parsed_in_instructions[0].id,
+    (<[u8; 32]>::from(receipt.block_hash.unwrap()), receipt.inner.logs()[0].log_index.unwrap())
+  );
+  assert_eq!(parsed_in_instructions[0].from, signer);
+  assert_eq!(parsed_in_instructions[0].coin, Coin::Ether);
+  assert_eq!(parsed_in_instructions[0].amount, amount);
+  assert_eq!(parsed_in_instructions[0].data, in_instruction);
+}
+
+#[tokio::test]
+async fn test_erc20_in_instruction() {
+  todo!("TODO")
+}
+
+// Sign a batch of `OutInstruction`s with the specified key and publish it on-chain
+async fn publish_outs(
+  provider: &RootProvider<SimpleRequest>,
+  router: &Router,
+  key: (Scalar, PublicKey),
+  nonce: u64,
+  coin: Coin,
+  fee: U256,
+  outs: OutInstructions,
+) -> alloy_rpc_types_eth::TransactionReceipt {
+  let msg = Router::execute_message(nonce, coin, fee, outs.clone());
+
+  let nonce = Scalar::random(&mut OsRng);
+  let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &key.1, &msg);
+  let s = nonce + (c * key.0);
+
+  let sig = Signature::new(c, s).unwrap();
+
+  let mut tx = router.execute(coin, fee, outs, &sig);
+  tx.gas_price = 100_000_000_000u128;
+  let tx = ethereum_primitives::deterministically_sign(&tx);
+  ethereum_test_primitives::publish_tx(provider, tx).await
+}
+
+#[tokio::test]
+async fn test_eth_address_out_instruction() {
+  let (_anvil, provider, router, key) = setup_test().await;
+
+  let mut amount = U256::try_from(OsRng.next_u64()).unwrap();
+  let mut fee = U256::try_from(OsRng.next_u64()).unwrap();
+  if fee > amount {
+    core::mem::swap(&mut amount, &mut fee);
+  }
+  assert!(amount >= fee);
+  ethereum_test_primitives::fund_account(&provider, router.address(), amount).await;
+
+  let instructions = OutInstructions::from([].as_slice());
+  let receipt = publish_outs(&provider, &router, key, 1, Coin::Ether, fee, instructions).await;
+  assert!(receipt.status());
+  println!("empty execute used {} gas:", receipt.gas_used);
+
+  assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 2);
+}
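Each signing test repeats the same Schnorr flow the contract verifies: sample a nonce `r`, compute the challenge `c` over `R = rG`, the public key, and the message, then respond with `s = r + c·x`. A sketch of that pattern as a standalone helper (hypothetical; the tests above inline it):

```rust
use rand_core::OsRng;

use group::ff::Field;
use k256::{Scalar, ProjectivePoint};

use ethereum_schnorr::{PublicKey, Signature};

// Hypothetical helper: produce a Signature the Router's Schnorr.verify accepts.
// `key` is the (private scalar, public key) pair returned by `test_key`.
fn sign(key: &(Scalar, PublicKey), msg: &[u8]) -> Signature {
  let r = Scalar::random(&mut OsRng);
  let c = Signature::challenge(ProjectivePoint::GENERATOR * r, &key.1, msg);
  let s = r + (c * key.0);
  Signature::new(c, s).unwrap()
}
```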
+
+#[tokio::test]
+async fn test_erc20_address_out_instruction() {
+  todo!("TODO")
+}
+
+#[tokio::test]
+async fn test_eth_code_out_instruction() {
+  todo!("TODO")
+}
+
+#[tokio::test]
+async fn test_erc20_code_out_instruction() {
+  todo!("TODO")
+}
+
+#[tokio::test]
+async fn test_escape_hatch() {
+  todo!("TODO")
+}
diff --git a/processor/ethereum/test-primitives/Cargo.toml b/processor/ethereum/test-primitives/Cargo.toml
new file mode 100644
index 00000000..54bc6850
--- /dev/null
+++ b/processor/ethereum/test-primitives/Cargo.toml
@@ -0,0 +1,28 @@
+[package]
+name = "serai-ethereum-test-primitives"
+version = "0.1.0"
+description = "Test primitives for Ethereum"
+license = "AGPL-3.0-only"
+repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum/test-primitives"
+authors = ["Luke Parker"]
+edition = "2021"
+publish = false
+
+[package.metadata.docs.rs]
+all-features = true
+rustdoc-args = ["--cfg", "docsrs"]
+
+[lints]
+workspace = true
+
+[dependencies]
+k256 = { version = "0.13", default-features = false, features = ["std"] }
+
+alloy-core = { version = "0.8", default-features = false }
+alloy-consensus = { version = "0.3", default-features = false, features = ["std"] }
+
+alloy-rpc-types-eth = { version = "0.3", default-features = false }
+alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false }
+alloy-provider = { version = "0.3", default-features = false }
+
+ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false }
diff --git a/processor/ethereum/test-primitives/LICENSE b/processor/ethereum/test-primitives/LICENSE
new file mode 100644
index 00000000..41d5a261
--- /dev/null
+++ b/processor/ethereum/test-primitives/LICENSE
@@ -0,0 +1,15 @@
+AGPL-3.0-only license
+
+Copyright (c) 2022-2024 Luke Parker
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU Affero General Public License Version 3 as
+published by the Free Software Foundation.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU Affero General Public License for more details.
+
+You should have received a copy of the GNU Affero General Public License
+along with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/processor/ethereum/test-primitives/README.md b/processor/ethereum/test-primitives/README.md
new file mode 100644
index 00000000..efb4d0a4
--- /dev/null
+++ b/processor/ethereum/test-primitives/README.md
@@ -0,0 +1,5 @@
+# Ethereum Test Primitives
+
+Test primitives for Ethereum, as used by this repository's test suites. These helpers fund
+accounts, publish signed transactions, deploy contracts, and send transactions from funded
+wallets against a local development node.
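The crate introduced below centers on `publish_tx`: it funds the recovered sender with exactly `gas_limit * gas_price + value` before broadcasting, which is what lets deterministically-signed transactions, whose sender's key no one holds, land on a development node. A hypothetical end-to-end usage sketch (crate aliases per this series; the transfer itself is illustrative):

```rust
use std::sync::Arc;

use alloy_core::primitives::{Address, U256, Bytes, TxKind};
use alloy_consensus::TxLegacy;

use alloy_simple_request_transport::SimpleRequest;
use alloy_rpc_client::ClientBuilder;
use alloy_provider::RootProvider;
use alloy_node_bindings::Anvil;

#[tokio::main]
async fn main() {
  let anvil = Anvil::new().spawn();
  let provider = Arc::new(RootProvider::new(
    ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true),
  ));

  // An arbitrary transfer; `chain_id: None` is required by `deterministically_sign`
  let tx = TxLegacy {
    chain_id: None,
    nonce: 0,
    gas_price: 100_000_000_000u128, // 100 gwei, as elsewhere in this PR
    gas_limit: 21_000,
    to: TxKind::Call(Address::from([0; 20])),
    value: U256::from(1u64),
    input: Bytes::new(),
  };
  let tx = ethereum_primitives::deterministically_sign(&tx);

  // `publish_tx` funds the sender, broadcasts, and awaits the receipt
  let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await;
  assert!(receipt.status());
}
```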
diff --git a/processor/ethereum/test-primitives/src/lib.rs b/processor/ethereum/test-primitives/src/lib.rs new file mode 100644 index 00000000..b91ba97f --- /dev/null +++ b/processor/ethereum/test-primitives/src/lib.rs @@ -0,0 +1,117 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + +use k256::{elliptic_curve::sec1::ToEncodedPoint, ProjectivePoint}; + +use alloy_core::{ + primitives::{Address, U256, Bytes, Signature, TxKind}, + hex::FromHex, +}; +use alloy_consensus::{SignableTransaction, TxLegacy, Signed}; + +use alloy_rpc_types_eth::TransactionReceipt; +use alloy_simple_request_transport::SimpleRequest; +use alloy_provider::{Provider, RootProvider}; + +use ethereum_primitives::{keccak256, deterministically_sign}; + +fn address(point: &ProjectivePoint) -> [u8; 20] { + let encoded_point = point.to_encoded_point(false); + // Last 20 bytes of the hash of the concatenated x and y coordinates + // We obtain the concatenated x and y coordinates via the uncompressed encoding of the point + keccak256(&encoded_point.as_ref()[1 .. 65])[12 ..].try_into().unwrap() +} + +/// Fund an account. +pub async fn fund_account(provider: &RootProvider, address: Address, value: U256) { + let _: () = provider + .raw_request("anvil_setBalance".into(), [address.to_string(), value.to_string()]) + .await + .unwrap(); +} + +/// Publish an already-signed transaction. +pub async fn publish_tx( + provider: &RootProvider, + tx: Signed, +) -> TransactionReceipt { + // Fund the sender's address + fund_account( + provider, + tx.recover_signer().unwrap(), + (U256::from(tx.tx().gas_limit) * U256::from(tx.tx().gas_price)) + tx.tx().value, + ) + .await; + + let (tx, sig, _) = tx.into_parts(); + let mut bytes = vec![]; + tx.encode_with_signature_fields(&sig, &mut bytes); + let pending_tx = provider.send_raw_transaction(&bytes).await.unwrap(); + pending_tx.get_receipt().await.unwrap() +} + +/// Deploy a contract. +/// +/// The contract deployment will be done by a random account. +pub async fn deploy_contract( + provider: &RootProvider, + file_path: &str, + constructor_arguments: &[u8], +) -> Address { + let hex_bin_buf = std::fs::read_to_string(file_path).unwrap(); + let hex_bin = + if let Some(stripped) = hex_bin_buf.strip_prefix("0x") { stripped } else { &hex_bin_buf }; + let mut bin = Vec::::from(Bytes::from_hex(hex_bin).unwrap()); + bin.extend(constructor_arguments); + + let deployment_tx = TxLegacy { + chain_id: None, + nonce: 0, + // 100 gwei + gas_price: 100_000_000_000u128, + gas_limit: 1_000_000, + to: TxKind::Create, + value: U256::ZERO, + input: bin.into(), + }; + + let deployment_tx = deterministically_sign(&deployment_tx); + + let receipt = publish_tx(provider, deployment_tx).await; + assert!(receipt.status()); + + receipt.contract_address.unwrap() +} + +/// Sign and send a transaction from the specified wallet. +/// +/// This assumes the wallet is funded. 
+pub async fn send(
+  provider: &RootProvider<SimpleRequest>,
+  wallet: &k256::ecdsa::SigningKey,
+  mut tx: TxLegacy,
+) -> TransactionReceipt {
+  let verifying_key = *wallet.verifying_key().as_affine();
+  let address = Address::from(address(&verifying_key.into()));
+
+  // https://github.com/alloy-rs/alloy/issues/539
+  // let chain_id = provider.get_chain_id().await.unwrap();
+  // tx.chain_id = Some(chain_id);
+  tx.chain_id = None;
+  tx.nonce = provider.get_transaction_count(address).await.unwrap();
+  // 100 gwei
+  tx.gas_price = 100_000_000_000u128;
+
+  let sig = wallet.sign_prehash_recoverable(tx.signature_hash().as_ref()).unwrap();
+  assert_eq!(address, tx.clone().into_signed(sig.into()).recover_signer().unwrap());
+  assert!(
+    provider.get_balance(address).await.unwrap() >
+      ((U256::from(tx.gas_price) * U256::from(tx.gas_limit)) + tx.value)
+  );
+
+  let mut bytes = vec![];
+  tx.encode_with_signature_fields(&Signature::from(sig), &mut bytes);
+  let pending_tx = provider.send_raw_transaction(&bytes).await.unwrap();
+  pending_tx.get_receipt().await.unwrap()
+}

From cf4123b0f85c3d9ec5016a3b6cd503a0a2aa76e2 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sat, 2 Nov 2024 10:47:09 -0400
Subject: [PATCH 193/368] Update how signatures are handled by the Router

---
 .../ethereum/router/contracts/Router.sol   | 175 +++++++++++-------
 processor/ethereum/router/src/lib.rs        |  33 +++-
 processor/ethereum/router/src/tests/mod.rs  |  20 +-
 3 files changed, 151 insertions(+), 77 deletions(-)

diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol
index 8607e732..4ec4eb4e 100644
--- a/processor/ethereum/router/contracts/Router.sol
+++ b/processor/ethereum/router/contracts/Router.sol
@@ -126,76 +126,133 @@ contract Router {
   /// @notice Escaping when escape hatch wasn't invoked.
   error EscapeHatchNotInvoked();

-  /**
-   * @dev Updates the Serai key at the end of the current function. Executing at the end of the
-   * current function allows verifying a signature with the current key. This does not update
-   * `_nextNonce`
-   */
+  /// @dev Updates the Serai key. This does not update `_nextNonce`
  /// @param nonceUpdatedWith The nonce used to update the key
  /// @param newSeraiKey The key updated to
-  modifier updateSeraiKeyAtEndOfFn(uint256 nonceUpdatedWith, bytes32 newSeraiKey) {
-    // Run the function itself
-    _;
-
-    // Update the key
+  function _updateSeraiKey(uint256 nonceUpdatedWith, bytes32 newSeraiKey) private {
    _seraiKey = newSeraiKey;
    emit SeraiKeyUpdated(nonceUpdatedWith, newSeraiKey);
  }

  /// @notice The constructor for the relayer
  /// @param initialSeraiKey The initial key for Serai's Ethereum validators
-  constructor(bytes32 initialSeraiKey) updateSeraiKeyAtEndOfFn(0, initialSeraiKey) {
+  constructor(bytes32 initialSeraiKey) {
    // Nonces are incremented by 1 upon account creation, prior to any code execution, per EIP-161
    // This is incompatible with any networks which don't have their nonces start at 0
    _smartContractNonce = 1;

-    // We consumed nonce 0 when setting the initial Serai key
+    // Set the Serai key
+    _updateSeraiKey(0, initialSeraiKey);
+
+    // We just consumed nonce 0 when setting the initial Serai key
    _nextNonce = 1;

    // We haven't escaped to any address yet
    _escapedTo = address(0);
  }

-  /// @dev Verify a signature
-  /// @param message The message to pass to the Schnorr contract
-  /// @param signature The signature by the current key for this message
-  function verifySignature(bytes32 message, Signature calldata signature) private {
+  /**
+   * @dev
+   * Verify a signature of the calldata, placed immediately after the function selector. The calldata
+   * should be signed with the nonce taking the place of the signature's commitment to its nonce, and
+   * the signature solution zeroed.
+   */
+  function verifySignature()
+    private
+    returns (uint256 nonceUsed, bytes memory message, bytes32 messageHash)
+  {
    // If the escape hatch was triggered, reject further signatures
    if (_escapedTo != address(0)) {
      revert EscapeHatchInvoked();
    }
-    // Verify the signature
-    if (!Schnorr.verify(_seraiKey, message, signature.c, signature.s)) {
+
+    message = msg.data;
+    uint256 messageLen = message.length;
+    /*
+      68 bytes is the length of the 4-byte function selector and the 64-byte signature.
+
+      This check ensures we don't read memory we shouldn't and, as we attempt to clear portions,
+      that we don't write past it
+      (triggering undefined behavior).
+    */
+    if (messageLen < 68) {
      revert InvalidSignature();
    }
+
+    // Read _nextNonce into memory as the nonce we'll use
+    nonceUsed = _nextNonce;
+
+    // Declare memory to copy the signature out to
+    bytes32 signatureC;
+    bytes32 signatureS;
+
+    // slither-disable-next-line assembly
+    assembly {
+      // Read the signature (placed after the function signature)
+      signatureC := mload(add(message, 36))
+      signatureS := mload(add(message, 68))
+
+      // Overwrite the signature challenge with the nonce
+      mstore(add(message, 36), nonceUsed)
+      // Overwrite the signature response with 0
+      mstore(add(message, 68), 0)
+
+      // Calculate the message hash
+      messageHash := keccak256(add(message, 32), messageLen)
+    }
+
+    // Verify the signature
+    if (!Schnorr.verify(_seraiKey, messageHash, signatureC, signatureS)) {
+      revert InvalidSignature();
+    }
+
    // Set the next nonce
    unchecked {
-      _nextNonce++;
+      _nextNonce = nonceUsed + 1;
+    }
+
+    /*
+      Advance the message past the function selector, enabling decoding the arguments. Ideally, we'd
+      also advance past the signature (to simplify decoding arguments and save some memory). This
+      would transform message from:
+
+        message (pointer)
+        v
+        ------------------------------------------------------------
+        | 32-byte length | 4-byte selector | Signature | Arguments |
+        ------------------------------------------------------------
+
+      to:
+
+                        message (pointer)
+                        v
+        ----------------------------------------------
+        | Junk 68 bytes | 32-byte length | Arguments |
+        ----------------------------------------------
+
+      Unfortunately, doing so corrupts the offsets defined within the ABI itself. We settle for a
+      transform to:
+
+                       message (pointer)
+                       v
+        ---------------------------------------------------------
+        | Junk 4 bytes | 32-byte length | Signature | Arguments |
+        ---------------------------------------------------------
+    */
+    // slither-disable-next-line assembly
+    assembly {
+      message := add(message, 4)
+      mstore(message, sub(messageLen, 4))
+    }
  }

  /// @notice Update the key representing Serai's Ethereum validators
  /// @dev This assumes the key is correct. No checks on it are performed
-  /// @param newSeraiKey The key to update to
-  /// @param signature The signature by the current key authorizing this update
-  function updateSeraiKey(bytes32 newSeraiKey, Signature calldata signature)
-    external
-    updateSeraiKeyAtEndOfFn(_nextNonce, newSeraiKey)
-  {
-    /*
-      This DST needs a length prefix as well to prevent DSTs potentially being substrings of each
-      other, yet this is fine for our well-defined, extremely-limited use.
-
-      We don't encode the chain ID as Serai generates independent keys for each integration. If
-      Ethereum L2s are integrated, and they reuse the Ethereum validator set, we would use the
-      existing Serai key yet we'd apply an off-chain derivation scheme to bind it to specific
-      networks. This also lets Serai identify EVMs per however it wants, solving the edge case where
-      two instances of the EVM share a chain ID for whatever horrific reason.
-
-      This uses encodePacked as all items present here are of fixed length.
- */ - bytes32 message = keccak256(abi.encodePacked("updateSeraiKey", _nextNonce, newSeraiKey)); - verifySignature(message, signature); + // @param signature The signature by the current key authorizing this update + // @param newSeraiKey The key to update to + function updateSeraiKey() external { + (uint256 nonceUsed, bytes memory args,) = verifySignature(); + (,, bytes32 newSeraiKey) = abi.decode(args, (bytes32, bytes32, bytes32)); + _updateSeraiKey(nonceUsed, newSeraiKey); } /// @notice Transfer coins into Serai with an instruction @@ -357,26 +414,19 @@ contract Router { * @dev All `OutInstruction`s in a batch are only for a single coin to simplify handling of the * fee */ - /// @param coin The coin all of these `OutInstruction`s are for - /// @param fee The fee to pay (in coin) to the caller for their relaying of this batch - /// @param outs The `OutInstruction`s to act on - /// @param signature The signature by the current key for Serai's Ethereum validators + // @param signature The signature by the current key for Serai's Ethereum validators + // @param coin The coin all of these `OutInstruction`s are for + // @param fee The fee to pay (in coin) to the caller for their relaying of this batch + // @param outs The `OutInstruction`s to act on // Each individual call is explicitly metered to ensure there isn't a DoS here // slither-disable-next-line calls-loop - function execute( - address coin, - uint256 fee, - OutInstruction[] calldata outs, - Signature calldata signature - ) external { - // Verify the signature - // This uses `encode`, not `encodePacked`, as `outs` is of variable length - // TODO: Use a custom encode in verifySignature here with assembly (benchmarking before/after) - bytes32 message = keccak256(abi.encode("execute", _nextNonce, coin, fee, outs)); - verifySignature(message, signature); + function execute() external { + (uint256 nonceUsed, bytes memory args, bytes32 message) = verifySignature(); + (,, address coin, uint256 fee, OutInstruction[] memory outs) = + abi.decode(args, (bytes32, bytes32, address, uint256, OutInstruction[])); // TODO: Also include a bit mask here - emit Executed(_nextNonce, message); + emit Executed(nonceUsed, message); /* Since we don't have a re-entrancy guard, it is possible for instructions from later batches to @@ -439,9 +489,14 @@ contract Router { /// @notice Escapes to a new smart contract /// @dev This should be used upon an invariant being reached or new functionality being needed - /// @param escapeTo The address to escape to - /// @param signature The signature by the current key for Serai's Ethereum validators - function escapeHatch(address escapeTo, Signature calldata signature) external { + // @param signature The signature by the current key for Serai's Ethereum validators + // @param escapeTo The address to escape to + function escapeHatch() external { + // Verify the signature + (, bytes memory args,) = verifySignature(); + + (,, address escapeTo) = abi.decode(args, (bytes32, bytes32, address)); + if (escapeTo == address(0)) { revert InvalidEscapeAddress(); } @@ -454,10 +509,6 @@ contract Router { revert EscapeHatchInvoked(); } - // Verify the signature - bytes32 message = keccak256(abi.encodePacked("escapeHatch", _nextNonce, escapeTo)); - verifySignature(message, signature); - _escapedTo = escapeTo; emit EscapeHatch(escapeTo); } diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index eeee70e7..9e2d880a 100644 --- a/processor/ethereum/router/src/lib.rs +++ 
b/processor/ethereum/router/src/lib.rs
@@ -309,26 +309,36 @@ impl Router {
   /// Get the message to be signed in order to update the key for Serai.
   pub fn update_serai_key_message(nonce: u64, key: &PublicKey) -> Vec<u8> {
-    ("updateSeraiKey", U256::try_from(nonce).expect("couldn't convert u64 to u256"), key.eth_repr())
-      .abi_encode_packed()
+    [
+      abi::updateSeraiKeyCall::SELECTOR.as_slice(),
+      &(U256::try_from(nonce).unwrap(), U256::ZERO, key.eth_repr()).abi_encode_params(),
+    ]
+    .concat()
  }

  /// Construct a transaction to update the key representing Serai.
  pub fn update_serai_key(&self, public_key: &PublicKey, sig: &Signature) -> TxLegacy {
    TxLegacy {
      to: TxKind::Call(self.1),
-      input: abi::updateSeraiKeyCall::new((public_key.eth_repr().into(), sig.into()))
-        .abi_encode()
-        .into(),
-      gas_limit: 40748 * 120 / 100,
+      input: [
+        abi::updateSeraiKeyCall::SELECTOR.as_slice(),
+        &(abi::Signature::from(sig), public_key.eth_repr()).abi_encode_params(),
+      ]
+      .concat()
+      .into(),
+      gas_limit: 40927 * 120 / 100,
      ..Default::default()
    }
  }

  /// Get the message to be signed in order to execute a series of `OutInstruction`s.
  pub fn execute_message(nonce: u64, coin: Coin, fee: U256, outs: OutInstructions) -> Vec<u8> {
-    ("execute".to_string(), U256::try_from(nonce).unwrap(), coin.address(), fee, outs.0)
-      .abi_encode_sequence()
+    [
+      abi::executeCall::SELECTOR.as_slice(),
+      &(U256::try_from(nonce).unwrap(), U256::ZERO, coin.address(), fee, outs.0)
+        .abi_encode_params(),
+    ]
+    .concat()
  }

  /// Construct a transaction to execute a batch of `OutInstruction`s.
@@ -336,7 +346,12 @@ impl Router {
    let outs_len = outs.0.len();
    TxLegacy {
      to: TxKind::Call(self.1),
-      input: abi::executeCall::new((coin.address(), fee, outs.0, sig.into())).abi_encode().into(),
+      input: [
+        abi::executeCall::SELECTOR.as_slice(),
+        &(abi::Signature::from(sig), coin.address(), fee, outs.0).abi_encode_params(),
+      ]
+      .concat()
+      .into(),
      // TODO
      gas_limit: 100_000 + ((200_000 + 10_000) * u128::try_from(outs_len).unwrap()),
      ..Default::default()
diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs
index 317003e8..fcd22ec6 100644
--- a/processor/ethereum/router/src/tests/mod.rs
+++ b/processor/ethereum/router/src/tests/mod.rs
@@ -10,7 +10,7 @@ use alloy_sol_types::SolCall;

 use alloy_consensus::TxLegacy;

-use alloy_rpc_types_eth::BlockNumberOrTag;
+use alloy_rpc_types_eth::{BlockNumberOrTag, TransactionReceipt};
 use alloy_simple_request_transport::SimpleRequest;
 use alloy_rpc_client::ClientBuilder;
 use alloy_provider::RootProvider;
@@ -154,8 +154,16 @@ async fn test_erc20_in_instruction() {
   todo!("TODO")
 }

-async fn publish_outs(key: (Scalar, PublicKey), nonce: u64, coin: Coin, fee: U256, outs: OutInstructions) -> TransactionReceipt {
-  let msg = Router::execute_message(nonce, coin, fee, instructions.clone());
+async fn publish_outs(
+  provider: &RootProvider<SimpleRequest>,
+  router: &Router,
+  key: (Scalar, PublicKey),
+  nonce: u64,
+  coin: Coin,
+  fee: U256,
+  outs: OutInstructions,
+) -> TransactionReceipt {
+  let msg = Router::execute_message(nonce, coin, fee, outs.clone());

   let nonce = Scalar::random(&mut OsRng);
   let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &key.1, &msg);
@@ -163,10 +171,10 @@ async fn publish_outs(key: (Scalar, PublicKey), nonce: u64, coin: Coin, fee: U25

   let sig = Signature::new(c, s).unwrap();

-  let mut tx = router.execute(coin, fee, instructions, &sig);
+  let mut tx = router.execute(coin, fee, outs, &sig);
   tx.gas_price = 100_000_000_000u128;
   let tx =
ethereum_primitives::deterministically_sign(&tx); - ethereum_test_primitives::publish_tx(&provider, tx).await + ethereum_test_primitives::publish_tx(provider, tx).await } #[tokio::test] @@ -182,7 +190,7 @@ async fn test_eth_address_out_instruction() { ethereum_test_primitives::fund_account(&provider, router.address(), amount).await; let instructions = OutInstructions::from([].as_slice()); - let receipt = publish_outs(key, 1, Coin::Ether, fee, instructions); + let receipt = publish_outs(&provider, &router, key, 1, Coin::Ether, fee, instructions).await; assert!(receipt.status()); println!("empty execute used {} gas:", receipt.gas_used); From 8de42cc2d459456f4ada43a6324bc816dfd3e8ef Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 2 Nov 2024 13:19:07 -0400 Subject: [PATCH 194/368] Add IRouter --- processor/ethereum/router/build.rs | 8 +- .../ethereum/router/contracts/IRouter.sol | 147 ++++++++++++++++++ .../ethereum/router/contracts/Router.sol | 128 ++++----------- processor/ethereum/router/src/lib.rs | 12 +- 4 files changed, 191 insertions(+), 104 deletions(-) create mode 100644 processor/ethereum/router/contracts/IRouter.sol diff --git a/processor/ethereum/router/build.rs b/processor/ethereum/router/build.rs index 1ce6d4f5..c931b865 100644 --- a/processor/ethereum/router/build.rs +++ b/processor/ethereum/router/build.rs @@ -27,7 +27,7 @@ fn main() { } build_solidity_contracts::build( - &["../../../networks/ethereum/schnorr/contracts", "../erc20/contracts"], + &["../../../networks/ethereum/schnorr/contracts", "../erc20/contracts", "contracts"], "contracts", &artifacts_path, ) @@ -36,7 +36,11 @@ fn main() { // This cannot be handled with the sol! macro. The Solidity requires an import // https://github.com/alloy-rs/core/issues/602 sol( - &["../../../networks/ethereum/schnorr/contracts/Schnorr.sol", "contracts/Router.sol"], + &[ + "../../../networks/ethereum/schnorr/contracts/Schnorr.sol", + "contracts/IRouter.sol", + "contracts/Router.sol", + ], &(artifacts_path + "/router.rs"), ); } diff --git a/processor/ethereum/router/contracts/IRouter.sol b/processor/ethereum/router/contracts/IRouter.sol new file mode 100644 index 00000000..347553c1 --- /dev/null +++ b/processor/ethereum/router/contracts/IRouter.sol @@ -0,0 +1,147 @@ +// SPDX-License-Identifier: MIT +pragma solidity ^0.8.26; + +/// @title Serai Router +/// @author Luke Parker +/// @notice Intakes coins for the Serai network and handles relaying batches of transfers out +interface IRouter { + /// @title A signature + /// @dev Thin wrapper around `c, s` to simplify the API + struct Signature { + bytes32 c; + bytes32 s; + } + + /// @title The type of destination + /// @dev A destination is either an address or a blob of code to deploy and call + enum DestinationType { + Address, + Code + } + + /// @title A code destination + /** + * @dev If transferring an ERC20 to this destination, it will be transferred to the address the + * code will be deployed to. If transferring ETH, it will be transferred with the deployment of + * the code. `code` is deployed with CREATE (calling its constructor). The entire deployment + * (and associated sandboxing) must consume less than `gasLimit` units of gas or it will revert. 
+ */ + struct CodeDestination { + uint32 gasLimit; + bytes code; + } + + /// @title An instruction to transfer coins out + /// @dev Specifies a destination and amount but not the coin as that's assumed to be contextual + struct OutInstruction { + DestinationType destinationType; + bytes destination; + uint256 amount; + } + + /// @notice Emitted when the key for Serai's Ethereum validators is updated + /// @param nonce The nonce consumed to update this key + /// @param key The key updated to + event SeraiKeyUpdated(uint256 indexed nonce, bytes32 indexed key); + + /// @notice Emitted when an InInstruction occurs + /// @param from The address which called `inInstruction` and caused this event to be emitted + /// @param coin The coin transferred in + /// @param amount The amount of the coin transferred in + /// @param instruction The Shorthand-encoded InInstruction for Serai to decode and handle + event InInstruction( + address indexed from, address indexed coin, uint256 amount, bytes instruction + ); + + /// @notice Emitted when a batch of `OutInstruction`s occurs + /// @param nonce The nonce consumed to execute this batch of transactions + /// @param messageHash The hash of the message signed for the executed batch + event Executed(uint256 indexed nonce, bytes32 indexed messageHash); + + /// @notice Emitted when `escapeHatch` is invoked + /// @param escapeTo The address to escape to + event EscapeHatch(address indexed escapeTo); + + /// @notice Emitted when coins escape through the escape hatch + /// @param coin The coin which escaped + event Escaped(address indexed coin); + + /// @notice The contract has had its escape hatch invoked and won't accept further actions + error EscapeHatchInvoked(); + /// @notice The signature was invalid + error InvalidSignature(); + /// @notice The amount specified didn't match `msg.value` + error AmountMismatchesMsgValue(); + /// @notice The call to an ERC20's `transferFrom` failed + error TransferFromFailed(); + + /// @notice An invalid address to escape to was specified. + error InvalidEscapeAddress(); + /// @notice Escaping when escape hatch wasn't invoked. + error EscapeHatchNotInvoked(); + + /// @notice Update the key representing Serai's Ethereum validators + /// @dev This assumes the key is correct. No checks on it are performed + /// @param signature The signature by the current key authorizing this update + /// @param newSeraiKey The key to update to + function updateSeraiKey(Signature calldata signature, bytes32 newSeraiKey) external; + + /// @notice Transfer coins into Serai with an instruction + /// @param coin The coin to transfer in (address(0) if Ether) + /// @param amount The amount to transfer in (msg.value if Ether) + /** + * @param instruction The Shorthand-encoded InInstruction for Serai to associate with this + * transfer in + */ + // Re-entrancy doesn't bork this function + // slither-disable-next-line reentrancy-events + function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable; + + /// @notice Execute some arbitrary code within a secure sandbox + /** + * @dev This performs sandboxing by deploying this code with `CREATE`. This is an external + * function as we can't meter `CREATE`/internal functions. We work around this by calling this + * function with `CALL` (which we can meter). This does forward `msg.value` to the newly + * deployed contract. 
+ */ + /// @param code The code to execute + function executeArbitraryCode(bytes memory code) external payable; + + /// @notice Execute a batch of `OutInstruction`s + /** + * @dev All `OutInstruction`s in a batch are only for a single coin to simplify handling of the + * fee + */ + /// @param signature The signature by the current key for Serai's Ethereum validators + /// @param coin The coin all of these `OutInstruction`s are for + /// @param fee The fee to pay (in coin) to the caller for their relaying of this batch + /// @param outs The `OutInstruction`s to act on + function execute( + Signature calldata signature, + address coin, + uint256 fee, + OutInstruction[] calldata outs + ) external; + + /// @notice Escapes to a new smart contract + /// @dev This should be used upon an invariant being reached or new functionality being needed + /// @param signature The signature by the current key for Serai's Ethereum validators + /// @param escapeTo The address to escape to + function escapeHatch(Signature calldata signature, address escapeTo) external; + + /// @notice Escape coins after the escape hatch has been invoked + /// @param coin The coin to escape + function escape(address coin) external; + + /// @notice Fetch the next nonce to use by an action published to this contract + /// return The next nonce to use by an action published to this contract + function nextNonce() external view returns (uint256); + + /// @notice Fetch the current key for Serai's Ethereum validator set + /// @return The current key for Serai's Ethereum validator set + function seraiKey() external view returns (bytes32); + + /// @notice Fetch the address escaped to + /// @return The address which was escaped to (address(0) if the escape hatch hasn't been invoked) + function escapedTo() external view returns (address); +} diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 4ec4eb4e..4be55114 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -1,12 +1,12 @@ // SPDX-License-Identifier: AGPL-3.0-only pragma solidity ^0.8.26; -// TODO: MIT licensed interface - import "IERC20.sol"; import "Schnorr.sol"; +import "IRouter.sol"; + /* The Router directly performs low-level calls in order to fine-tune the gas settings. Since this contract is meant to relay an entire batch of transactions, the ability to exactly meter @@ -32,7 +32,7 @@ contract Router { /* We don't expose a getter for this as it shouldn't be expected to have any specific value at a given moment in time. If someone wants to know the address of their deployed contract, they can - have it emit an event and verify the emitting contract is the expected one. + have it emit IRouter.an event and verify the emitting contract is the expected one. */ uint256 private _smartContractNonce; @@ -51,87 +51,12 @@ contract Router { /// @dev The address escaped to address private _escapedTo; - /// @title The type of destination - /// @dev A destination is either an address or a blob of code to deploy and call - enum DestinationType { - Address, - Code - } - - /// @title A code destination - /** - * @dev If transferring an ERC20 to this destination, it will be transferred to the address the - * code will be deployed to. If transferring ETH, it will be transferred with the deployment of - * the code. `code` is deployed with CREATE (calling its constructor). 
The entire deployment - * (and associated sandboxing) must consume less than `gasLimit` units of gas or it will revert. - */ - struct CodeDestination { - uint32 gasLimit; - bytes code; - } - - /// @title An instruction to transfer coins out - /// @dev Specifies a destination and amount but not the coin as that's assumed to be contextual - struct OutInstruction { - DestinationType destinationType; - bytes destination; - uint256 amount; - } - - /// @title A signature - /// @dev Thin wrapper around `c, s` to simplify the API - struct Signature { - bytes32 c; - bytes32 s; - } - - /// @notice Emitted when the key for Serai's Ethereum validators is updated - /// @param nonce The nonce consumed to update this key - /// @param key The key updated to - event SeraiKeyUpdated(uint256 indexed nonce, bytes32 indexed key); - - /// @notice Emitted when an InInstruction occurs - /// @param from The address which called `inInstruction` and caused this event to be emitted - /// @param coin The coin transferred in - /// @param amount The amount of the coin transferred in - /// @param instruction The Shorthand-encoded InInstruction for Serai to decode and handle - event InInstruction( - address indexed from, address indexed coin, uint256 amount, bytes instruction - ); - - /// @notice Emitted when a batch of `OutInstruction`s occurs - /// @param nonce The nonce consumed to execute this batch of transactions - /// @param messageHash The hash of the message signed for the executed batch - event Executed(uint256 indexed nonce, bytes32 indexed messageHash); - - /// @notice Emitted when `escapeHatch` is invoked - /// @param escapeTo The address to escape to - event EscapeHatch(address indexed escapeTo); - - /// @notice Emitted when coins escape through the escape hatch - /// @param coin The coin which escaped - event Escaped(address indexed coin); - - /// @notice The contract has had its escape hatch invoked and won't accept further actions - error EscapeHatchInvoked(); - /// @notice The signature was invalid - error InvalidSignature(); - /// @notice The amount specified didn't match `msg.value` - error AmountMismatchesMsgValue(); - /// @notice The call to an ERC20's `transferFrom` failed - error TransferFromFailed(); - - /// @notice An invalid address to escape to was specified. - error InvalidEscapeAddress(); - /// @notice Escaping when escape hatch wasn't invoked. - error EscapeHatchNotInvoked(); - /// @dev Updates the Serai key. This does not update `_nextNonce` /// @param nonceUpdatedWith The nonce used to update the key /// @param newSeraiKey The key updated to function _updateSeraiKey(uint256 nonceUpdatedWith, bytes32 newSeraiKey) private { _seraiKey = newSeraiKey; - emit SeraiKeyUpdated(nonceUpdatedWith, newSeraiKey); + emit IRouter.SeraiKeyUpdated(nonceUpdatedWith, newSeraiKey); } /// @notice The constructor for the relayer @@ -153,9 +78,9 @@ contract Router { /** * @dev - * Verify a signature of the calldata, placed immediately after the function selector. The calldata - * should be signed with the nonce taking the place of the signature's commitment to its nonce, and - * the signature solution zeroed. + * Verify a signature of the calldata, placed immediately after the function selector. The + * calldata should be signed with the nonce taking the place of the signature's commitment to + * its nonce, and the signature solution zeroed. 
*/ function verifySignature() private @@ -163,7 +88,7 @@ contract Router { { // If the escape hatch was triggered, reject further signatures if (_escapedTo != address(0)) { - revert EscapeHatchInvoked(); + revert IRouter.EscapeHatchInvoked(); } message = msg.data; @@ -175,7 +100,7 @@ contract Router { (triggering undefined behavior). */ if (messageLen < 68) { - revert InvalidSignature(); + revert IRouter.InvalidSignature(); } // Read _nextNonce into memory as the nonce we'll use @@ -202,7 +127,7 @@ contract Router { // Verify the signature if (!Schnorr.verify(_seraiKey, messageHash, signatureC, signatureS)) { - revert InvalidSignature(); + revert IRouter.InvalidSignature(); } // Set the next nonce @@ -251,6 +176,10 @@ contract Router { // @param newSeraiKey The key to update to function updateSeraiKey() external { (uint256 nonceUsed, bytes memory args,) = verifySignature(); + /* + We could replace this with a length check (if we don't simply assume the calldata is valid as + it was properly signed) + mload to save 24 gas but it's not worth the complexity. + */ (,, bytes32 newSeraiKey) = abi.decode(args, (bytes32, bytes32, bytes32)); _updateSeraiKey(nonceUsed, newSeraiKey); } @@ -267,7 +196,7 @@ contract Router { function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable { // Check the transfer if (coin == address(0)) { - if (amount != msg.value) revert AmountMismatchesMsgValue(); + if (amount != msg.value) revert IRouter.AmountMismatchesMsgValue(); } else { (bool success, bytes memory res) = address(coin).call( abi.encodeWithSelector(IERC20.transferFrom.selector, msg.sender, address(this), amount) @@ -278,7 +207,7 @@ contract Router { ERC20 contract did in fact return true */ bool nonStandardResOrTrue = (res.length == 0) || abi.decode(res, (bool)); - if (!(success && nonStandardResOrTrue)) revert TransferFromFailed(); + if (!(success && nonStandardResOrTrue)) revert IRouter.TransferFromFailed(); } /* @@ -303,7 +232,7 @@ contract Router { It is the Serai network's role not to add support for any non-standard implementations. */ - emit InInstruction(msg.sender, coin, amount, instruction); + emit IRouter.InInstruction(msg.sender, coin, amount, instruction); } /// @dev Perform an ERC20 transfer out @@ -422,11 +351,11 @@ contract Router { // slither-disable-next-line calls-loop function execute() external { (uint256 nonceUsed, bytes memory args, bytes32 message) = verifySignature(); - (,, address coin, uint256 fee, OutInstruction[] memory outs) = - abi.decode(args, (bytes32, bytes32, address, uint256, OutInstruction[])); + (,, address coin, uint256 fee, IRouter.OutInstruction[] memory outs) = + abi.decode(args, (bytes32, bytes32, address, uint256, IRouter.OutInstruction[])); // TODO: Also include a bit mask here - emit Executed(nonceUsed, message); + emit IRouter.Executed(nonceUsed, message); /* Since we don't have a re-entrancy guard, it is possible for instructions from later batches to @@ -439,9 +368,9 @@ contract Router { // slither-disable-next-line reentrancy-events for (uint256 i = 0; i < outs.length; i++) { // If the destination is an address, we perform a direct transfer - if (outs[i].destinationType == DestinationType.Address) { + if (outs[i].destinationType == IRouter.DestinationType.Address) { /* - This may cause a revert if the destination isn't actually a valid address. Serai is + This may cause a revert if the destination isn't actually a valid address. 
Serai is trusted to not pass a malformed destination, yet if it ever did, it could simply re-sign a corrected batch using this nonce. */ @@ -465,7 +394,8 @@ contract Router { erc20TransferOut(nextAddress, coin, outs[i].amount); } - (CodeDestination memory destination) = abi.decode(outs[i].destination, (CodeDestination)); + (IRouter.CodeDestination memory destination) = + abi.decode(outs[i].destination, (IRouter.CodeDestination)); /* Perform the deployment with the defined gas budget. @@ -498,7 +428,7 @@ contract Router { (,, address escapeTo) = abi.decode(args, (bytes32, bytes32, address)); if (escapeTo == address(0)) { - revert InvalidEscapeAddress(); + revert IRouter.InvalidEscapeAddress(); } /* We want to define the escape hatch so coins here now, and latently received, can be forwarded. @@ -506,21 +436,21 @@ contract Router { received coins without penalty (if they update the escape hatch after unstaking). */ if (_escapedTo != address(0)) { - revert EscapeHatchInvoked(); + revert IRouter.EscapeHatchInvoked(); } _escapedTo = escapeTo; - emit EscapeHatch(escapeTo); + emit IRouter.EscapeHatch(escapeTo); } /// @notice Escape coins after the escape hatch has been invoked /// @param coin The coin to escape function escape(address coin) external { if (_escapedTo == address(0)) { - revert EscapeHatchNotInvoked(); + revert IRouter.EscapeHatchNotInvoked(); } - emit Escaped(coin); + emit IRouter.Escaped(coin); // Fetch the amount to escape uint256 amount = address(this).balance; diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index 9e2d880a..b2e78b96 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -31,7 +31,13 @@ use serai_client::networks::ethereum::Address as SeraiAddress; mod _abi { include!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-router/router.rs")); } -use _abi::Router as abi; +mod abi { + pub use super::_abi::IRouter::{ + Signature, DestinationType, CodeDestination, OutInstruction, SeraiKeyUpdated, InInstruction, + Executed, EscapeHatch, Escaped, + }; + pub use super::_abi::Router::*; +} use abi::{ SeraiKeyUpdated as SeraiKeyUpdatedEvent, InInstruction as InInstructionEvent, Executed as ExecutedEvent, @@ -326,7 +332,7 @@ impl Router { ] .concat() .into(), - gas_limit: 40927 * 120 / 100, + gas_limit: 40_889 * 120 / 100, ..Default::default() } } @@ -353,7 +359,7 @@ impl Router { .concat() .into(), // TODO - gas_limit: 100_000 + ((200_000 + 10_000) * u128::try_from(outs_len).unwrap()), + gas_limit: (45_501 + ((200_000 + 10_000) * u128::try_from(outs_len).unwrap())) * 120 / 100, ..Default::default() } } From 2f5c0c68d0ed9be95fb5084c88b43a9475d9f610 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 2 Nov 2024 18:11:09 -0400 Subject: [PATCH 195/368] Add selector collisions to Router to make it IRouter compatible --- processor/ethereum/router/Cargo.toml | 2 + processor/ethereum/router/build.rs | 2 +- .../ethereum/router/contracts/Router.sol | 22 +++++-- processor/ethereum/router/src/lib.rs | 63 ++++++++++--------- processor/ethereum/router/src/tests/mod.rs | 12 ++++ 5 files changed, 65 insertions(+), 36 deletions(-) diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index 132a9fa4..32f112c9 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -20,7 +20,9 @@ workspace = true group = { version = "0.13", default-features = false } alloy-core = { version = "0.8", default-features = false } + alloy-sol-types = { version 
= "0.8", default-features = false } +alloy-sol-macro = { version = "0.8", default-features = false } alloy-consensus = { version = "0.3", default-features = false } diff --git a/processor/ethereum/router/build.rs b/processor/ethereum/router/build.rs index c931b865..26a2bee6 100644 --- a/processor/ethereum/router/build.rs +++ b/processor/ethereum/router/build.rs @@ -33,7 +33,7 @@ fn main() { ) .unwrap(); - // This cannot be handled with the sol! macro. The Solidity requires an import + // This cannot be handled with the sol! macro. The Router requires an import // https://github.com/alloy-rs/core/issues/602 sol( &[ diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 4be55114..c908cc3e 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -171,10 +171,14 @@ contract Router { } /// @notice Update the key representing Serai's Ethereum validators - /// @dev This assumes the key is correct. No checks on it are performed + /** + * @dev This assumes the key is correct. No checks on it are performed. + * + * The hex bytes are to cause a collision with `IRouter.updateSeraiKey`. + */ // @param signature The signature by the current key authorizing this update // @param newSeraiKey The key to update to - function updateSeraiKey() external { + function updateSeraiKey5A8542A2() external { (uint256 nonceUsed, bytes memory args,) = verifySignature(); /* We could replace this with a length check (if we don't simply assume the calldata is valid as @@ -341,7 +345,9 @@ contract Router { /// @notice Execute a batch of `OutInstruction`s /** * @dev All `OutInstruction`s in a batch are only for a single coin to simplify handling of the - * fee + * fee. + * + * The hex bytes are to cause a function selector collision with `IRouter.execute`. */ // @param signature The signature by the current key for Serai's Ethereum validators // @param coin The coin all of these `OutInstruction`s are for @@ -349,7 +355,7 @@ contract Router { // @param outs The `OutInstruction`s to act on // Each individual call is explicitly metered to ensure there isn't a DoS here // slither-disable-next-line calls-loop - function execute() external { + function execute4DE42904() external { (uint256 nonceUsed, bytes memory args, bytes32 message) = verifySignature(); (,, address coin, uint256 fee, IRouter.OutInstruction[] memory outs) = abi.decode(args, (bytes32, bytes32, address, uint256, IRouter.OutInstruction[])); @@ -418,10 +424,14 @@ contract Router { } /// @notice Escapes to a new smart contract - /// @dev This should be used upon an invariant being reached or new functionality being needed + /** + * @dev This should be used upon an invariant being reached or new functionality being needed. + * + * The hex bytes are to cause a collision with `IRouter.updateSeraiKey`. 
+   */
  // @param signature The signature by the current key for Serai's Ethereum validators
  // @param escapeTo The address to escape to
-  function escapeHatch() external {
+  function escapeHatchDCDD91CC() external {
    // Verify the signature
    (, bytes memory args,) = verifySignature();
diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs
index b2e78b96..ef5bdbcf 100644
--- a/processor/ethereum/router/src/lib.rs
+++ b/processor/ethereum/router/src/lib.rs
@@ -28,15 +28,23 @@ use serai_client::networks::ethereum::Address as SeraiAddress;
 #[expect(clippy::all)]
 #[expect(clippy::ignored_unit_patterns)]
 #[expect(clippy::redundant_closure_for_method_calls)]
-mod _abi {
+pub mod _irouter_abi {
+  alloy_sol_macro::sol!("contracts/IRouter.sol");
+}
+
+#[rustfmt::skip]
+#[expect(warnings)]
+#[expect(needless_pass_by_value)]
+#[expect(clippy::all)]
+#[expect(clippy::ignored_unit_patterns)]
+#[expect(clippy::redundant_closure_for_method_calls)]
+mod _router_abi {
   include!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-router/router.rs"));
 }
+
 mod abi {
-  pub use super::_abi::IRouter::{
-    Signature, DestinationType, CodeDestination, OutInstruction, SeraiKeyUpdated, InInstruction,
-    Executed, EscapeHatch, Escaped,
-  };
-  pub use super::_abi::Router::*;
+  pub use super::_router_abi::IRouter::*;
+  pub use super::_router_abi::Router::constructorCall;
 }
 use abi::{
   SeraiKeyUpdated as SeraiKeyUpdatedEvent, InInstruction as InInstructionEvent,
@@ -315,23 +323,22 @@ impl Router {
   /// Get the message to be signed in order to update the key for Serai.
   pub fn update_serai_key_message(nonce: u64, key: &PublicKey) -> Vec<u8> {
-    [
-      abi::updateSeraiKeyCall::SELECTOR.as_slice(),
-      &(U256::try_from(nonce).unwrap(), U256::ZERO, key.eth_repr()).abi_encode_params(),
-    ]
-    .concat()
+    abi::updateSeraiKeyCall::new((
+      abi::Signature { c: U256::try_from(nonce).unwrap().into(), s: U256::ZERO.into() },
+      key.eth_repr().into(),
+    ))
+    .abi_encode()
  }

  /// Construct a transaction to update the key representing Serai.
  pub fn update_serai_key(&self, public_key: &PublicKey, sig: &Signature) -> TxLegacy {
    TxLegacy {
      to: TxKind::Call(self.1),
-      input: [
-        abi::updateSeraiKeyCall::SELECTOR.as_slice(),
-        &(abi::Signature::from(sig), public_key.eth_repr()).abi_encode_params(),
-      ]
-      .concat()
-      .into(),
+      input: abi::updateSeraiKeyCall::new((
+        abi::Signature::from(sig),
+        public_key.eth_repr().into(),
+      ))
+      .abi_encode().into(),
      gas_limit: 40_889 * 120 / 100,
      ..Default::default()
    }
@@ -339,12 +346,13 @@ impl Router {
   /// Get the message to be signed in order to execute a series of `OutInstruction`s.
   pub fn execute_message(nonce: u64, coin: Coin, fee: U256, outs: OutInstructions) -> Vec<u8> {
-    [
-      abi::executeCall::SELECTOR.as_slice(),
-      &(U256::try_from(nonce).unwrap(), U256::ZERO, coin.address(), fee, outs.0)
-        .abi_encode_params(),
-    ]
-    .concat()
+    abi::executeCall::new((
+      abi::Signature { c: U256::try_from(nonce).unwrap().into(), s: U256::ZERO.into() },
+      coin.address(),
+      fee,
+      outs.0,
+    ))
+    .abi_encode()
  }

  /// Construct a transaction to execute a batch of `OutInstruction`s.
@@ -352,12 +360,9 @@ impl Router { let outs_len = outs.0.len(); TxLegacy { to: TxKind::Call(self.1), - input: [ - abi::executeCall::SELECTOR.as_slice(), - &(abi::Signature::from(sig), coin.address(), fee, outs.0).abi_encode_params(), - ] - .concat() - .into(), + input: abi::executeCall::new((abi::Signature::from(sig), coin.address(), fee, outs.0)) + .abi_encode() + .into(), // TODO gas_limit: (45_501 + ((200_000 + 10_000) * u128::try_from(outs_len).unwrap())) * 120 / 100, ..Default::default() diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index fcd22ec6..78215d95 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -22,6 +22,18 @@ use ethereum_deployer::Deployer; use crate::{Coin, OutInstructions, Router}; +#[test] +fn selector_collisions() { + assert_eq!( + crate::_irouter_abi::IRouter::executeCall::SELECTOR, + crate::_router_abi::Router::execute4DE42904Call::SELECTOR + ); + assert_eq!( + crate::_irouter_abi::IRouter::updateSeraiKeyCall::SELECTOR, + crate::_router_abi::Router::updateSeraiKey5A8542A2Call::SELECTOR + ); +} + pub(crate) fn test_key() -> (Scalar, PublicKey) { loop { let key = Scalar::random(&mut OsRng); From 26230377b0ca3c042f4ca711a214279eef43de2e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 2 Nov 2024 19:03:47 -0400 Subject: [PATCH 196/368] Define IRouterWithoutCollisions which Router inherits from This ensures Router implements most of IRouterWithoutCollisions. It solely leaves us to confirm Router implements the extensions defined in IRouter. --- Cargo.lock | 1 + processor/ethereum/router/build.rs | 7 - .../ethereum/router/contracts/IRouter.sol | 121 +++++++++--------- .../ethereum/router/contracts/Router.sol | 32 ++--- processor/ethereum/router/src/lib.rs | 6 +- 5 files changed, 84 insertions(+), 83 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 0550b05e..df7d578e 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8724,6 +8724,7 @@ dependencies = [ "alloy-rpc-client", "alloy-rpc-types-eth", "alloy-simple-request-transport", + "alloy-sol-macro", "alloy-sol-macro-expander", "alloy-sol-macro-input", "alloy-sol-types", diff --git a/processor/ethereum/router/build.rs b/processor/ethereum/router/build.rs index 26a2bee6..dd52985d 100644 --- a/processor/ethereum/router/build.rs +++ b/processor/ethereum/router/build.rs @@ -26,13 +26,6 @@ fn main() { fs::create_dir(&artifacts_path).unwrap(); } - build_solidity_contracts::build( - &["../../../networks/ethereum/schnorr/contracts", "../erc20/contracts", "contracts"], - "contracts", - &artifacts_path, - ) - .unwrap(); - // This cannot be handled with the sol! macro. 
The Router requires an import // https://github.com/alloy-rs/core/issues/602 sol( diff --git a/processor/ethereum/router/contracts/IRouter.sol b/processor/ethereum/router/contracts/IRouter.sol index 347553c1..91bedac5 100644 --- a/processor/ethereum/router/contracts/IRouter.sol +++ b/processor/ethereum/router/contracts/IRouter.sol @@ -1,44 +1,10 @@ // SPDX-License-Identifier: MIT pragma solidity ^0.8.26; -/// @title Serai Router +/// @title Serai Router (without functions overriden by selector collisions) /// @author Luke Parker /// @notice Intakes coins for the Serai network and handles relaying batches of transfers out -interface IRouter { - /// @title A signature - /// @dev Thin wrapper around `c, s` to simplify the API - struct Signature { - bytes32 c; - bytes32 s; - } - - /// @title The type of destination - /// @dev A destination is either an address or a blob of code to deploy and call - enum DestinationType { - Address, - Code - } - - /// @title A code destination - /** - * @dev If transferring an ERC20 to this destination, it will be transferred to the address the - * code will be deployed to. If transferring ETH, it will be transferred with the deployment of - * the code. `code` is deployed with CREATE (calling its constructor). The entire deployment - * (and associated sandboxing) must consume less than `gasLimit` units of gas or it will revert. - */ - struct CodeDestination { - uint32 gasLimit; - bytes code; - } - - /// @title An instruction to transfer coins out - /// @dev Specifies a destination and amount but not the coin as that's assumed to be contextual - struct OutInstruction { - DestinationType destinationType; - bytes destination; - uint256 amount; - } - +interface IRouterWithoutCollisions { /// @notice Emitted when the key for Serai's Ethereum validators is updated /// @param nonce The nonce consumed to update this key /// @param key The key updated to @@ -80,12 +46,6 @@ interface IRouter { /// @notice Escaping when escape hatch wasn't invoked. error EscapeHatchNotInvoked(); - /// @notice Update the key representing Serai's Ethereum validators - /// @dev This assumes the key is correct. 
No checks on it are performed - /// @param signature The signature by the current key authorizing this update - /// @param newSeraiKey The key to update to - function updateSeraiKey(Signature calldata signature, bytes32 newSeraiKey) external; - /// @notice Transfer coins into Serai with an instruction /// @param coin The coin to transfer in (address(0) if Ether) /// @param amount The amount to transfer in (msg.value if Ether) @@ -107,6 +67,67 @@ interface IRouter { /// @param code The code to execute function executeArbitraryCode(bytes memory code) external payable; + /// @notice Escape coins after the escape hatch has been invoked + /// @param coin The coin to escape + function escape(address coin) external; + + /// @notice Fetch the next nonce to use by an action published to this contract + /// return The next nonce to use by an action published to this contract + function nextNonce() external view returns (uint256); + + /// @notice Fetch the current key for Serai's Ethereum validator set + /// @return The current key for Serai's Ethereum validator set + function seraiKey() external view returns (bytes32); + + /// @notice Fetch the address escaped to + /// @return The address which was escaped to (address(0) if the escape hatch hasn't been invoked) + function escapedTo() external view returns (address); +} + +/// @title Serai Router +/// @author Luke Parker +/// @notice Intakes coins for the Serai network and handles relaying batches of transfers out +interface IRouter is IRouterWithoutCollisions { + /// @title A signature + /// @dev Thin wrapper around `c, s` to simplify the API + struct Signature { + bytes32 c; + bytes32 s; + } + + /// @title The type of destination + /// @dev A destination is either an address or a blob of code to deploy and call + enum DestinationType { + Address, + Code + } + + /// @title A code destination + /** + * @dev If transferring an ERC20 to this destination, it will be transferred to the address the + * code will be deployed to. If transferring ETH, it will be transferred with the deployment of + * the code. `code` is deployed with CREATE (calling its constructor). The entire deployment + * (and associated sandboxing) must consume less than `gasLimit` units of gas or it will revert. + */ + struct CodeDestination { + uint32 gasLimit; + bytes code; + } + + /// @title An instruction to transfer coins out + /// @dev Specifies a destination and amount but not the coin as that's assumed to be contextual + struct OutInstruction { + DestinationType destinationType; + bytes destination; + uint256 amount; + } + + /// @notice Update the key representing Serai's Ethereum validators + /// @dev This assumes the key is correct. 
No checks on it are performed + /// @param signature The signature by the current key authorizing this update + /// @param newSeraiKey The key to update to + function updateSeraiKey(Signature calldata signature, bytes32 newSeraiKey) external; + /// @notice Execute a batch of `OutInstruction`s /** * @dev All `OutInstruction`s in a batch are only for a single coin to simplify handling of the @@ -128,20 +149,4 @@ interface IRouter { /// @param signature The signature by the current key for Serai's Ethereum validators /// @param escapeTo The address to escape to function escapeHatch(Signature calldata signature, address escapeTo) external; - - /// @notice Escape coins after the escape hatch has been invoked - /// @param coin The coin to escape - function escape(address coin) external; - - /// @notice Fetch the next nonce to use by an action published to this contract - /// return The next nonce to use by an action published to this contract - function nextNonce() external view returns (uint256); - - /// @notice Fetch the current key for Serai's Ethereum validator set - /// @return The current key for Serai's Ethereum validator set - function seraiKey() external view returns (bytes32); - - /// @notice Fetch the address escaped to - /// @return The address which was escaped to (address(0) if the escape hatch hasn't been invoked) - function escapedTo() external view returns (address); } diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index c908cc3e..e8f54653 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -24,7 +24,7 @@ import "IRouter.sol"; /// @title Serai Router /// @author Luke Parker /// @notice Intakes coins for the Serai network and handles relaying batches of transfers out -contract Router { +contract Router is IRouterWithoutCollisions { /** * @dev The next nonce used to determine the address of contracts deployed with CREATE. This is * used to predict the addresses of deployed contracts ahead of time. @@ -32,7 +32,7 @@ contract Router { /* We don't expose a getter for this as it shouldn't be expected to have any specific value at a given moment in time. If someone wants to know the address of their deployed contract, they can - have it emit IRouter.an event and verify the emitting contract is the expected one. + have it emit an event and verify the emitting contract is the expected one. */ uint256 private _smartContractNonce; @@ -56,7 +56,7 @@ contract Router { /// @param newSeraiKey The key updated to function _updateSeraiKey(uint256 nonceUpdatedWith, bytes32 newSeraiKey) private { _seraiKey = newSeraiKey; - emit IRouter.SeraiKeyUpdated(nonceUpdatedWith, newSeraiKey); + emit SeraiKeyUpdated(nonceUpdatedWith, newSeraiKey); } /// @notice The constructor for the relayer @@ -88,7 +88,7 @@ contract Router { { // If the escape hatch was triggered, reject further signatures if (_escapedTo != address(0)) { - revert IRouter.EscapeHatchInvoked(); + revert EscapeHatchInvoked(); } message = msg.data; @@ -100,7 +100,7 @@ contract Router { (triggering undefined behavior). 
*/ if (messageLen < 68) { - revert IRouter.InvalidSignature(); + revert InvalidSignature(); } // Read _nextNonce into memory as the nonce we'll use @@ -127,7 +127,7 @@ contract Router { // Verify the signature if (!Schnorr.verify(_seraiKey, messageHash, signatureC, signatureS)) { - revert IRouter.InvalidSignature(); + revert InvalidSignature(); } // Set the next nonce @@ -200,7 +200,7 @@ contract Router { function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable { // Check the transfer if (coin == address(0)) { - if (amount != msg.value) revert IRouter.AmountMismatchesMsgValue(); + if (amount != msg.value) revert AmountMismatchesMsgValue(); } else { (bool success, bytes memory res) = address(coin).call( abi.encodeWithSelector(IERC20.transferFrom.selector, msg.sender, address(this), amount) @@ -211,7 +211,7 @@ contract Router { ERC20 contract did in fact return true */ bool nonStandardResOrTrue = (res.length == 0) || abi.decode(res, (bool)); - if (!(success && nonStandardResOrTrue)) revert IRouter.TransferFromFailed(); + if (!(success && nonStandardResOrTrue)) revert TransferFromFailed(); } /* @@ -236,7 +236,7 @@ contract Router { It is the Serai network's role not to add support for any non-standard implementations. */ - emit IRouter.InInstruction(msg.sender, coin, amount, instruction); + emit InInstruction(msg.sender, coin, amount, instruction); } /// @dev Perform an ERC20 transfer out @@ -361,7 +361,7 @@ contract Router { abi.decode(args, (bytes32, bytes32, address, uint256, IRouter.OutInstruction[])); // TODO: Also include a bit mask here - emit IRouter.Executed(nonceUsed, message); + emit Executed(nonceUsed, message); /* Since we don't have a re-entrancy guard, it is possible for instructions from later batches to @@ -427,7 +427,7 @@ contract Router { /** * @dev This should be used upon an invariant being reached or new functionality being needed. * - * The hex bytes are to cause a collision with `IRouter.updateSeraiKey`. + * The hex bytes are to cause a collision with `IRouter.escapeHatch`. */ // @param signature The signature by the current key for Serai's Ethereum validators // @param escapeTo The address to escape to @@ -438,7 +438,7 @@ contract Router { (,, address escapeTo) = abi.decode(args, (bytes32, bytes32, address)); if (escapeTo == address(0)) { - revert IRouter.InvalidEscapeAddress(); + revert InvalidEscapeAddress(); } /* We want to define the escape hatch so coins here now, and latently received, can be forwarded. @@ -446,21 +446,21 @@ contract Router { received coins without penalty (if they update the escape hatch after unstaking). 
*/ if (_escapedTo != address(0)) { - revert IRouter.EscapeHatchInvoked(); + revert EscapeHatchInvoked(); } _escapedTo = escapeTo; - emit IRouter.EscapeHatch(escapeTo); + emit EscapeHatch(escapeTo); } /// @notice Escape coins after the escape hatch has been invoked /// @param coin The coin to escape function escape(address coin) external { if (_escapedTo == address(0)) { - revert IRouter.EscapeHatchNotInvoked(); + revert EscapeHatchNotInvoked(); } - emit IRouter.Escaped(coin); + emit Escaped(coin); // Fetch the amount to escape uint256 amount = address(this).balance; diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index ef5bdbcf..71b4bca4 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -28,7 +28,7 @@ use serai_client::networks::ethereum::Address as SeraiAddress; #[expect(clippy::all)] #[expect(clippy::ignored_unit_patterns)] #[expect(clippy::redundant_closure_for_method_calls)] -pub mod _irouter_abi { +mod _irouter_abi { alloy_sol_macro::sol!("contracts/IRouter.sol"); } @@ -43,6 +43,7 @@ mod _router_abi { } mod abi { + pub use super::_router_abi::IRouterWithoutCollisions::*; pub use super::_router_abi::IRouter::*; pub use super::_router_abi::Router::constructorCall; } @@ -338,7 +339,8 @@ impl Router { abi::Signature::from(sig), public_key.eth_repr().into(), )) - .abi_encode().into(), + .abi_encode() + .into(), gas_limit: 40_889 * 120 / 100, ..Default::default() } From 2920987173cf0bbba8c5f499470ea85415c240d7 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 2 Nov 2024 20:12:48 -0400 Subject: [PATCH 197/368] Add a re-entrancy guard to Router.execute --- .../ethereum/router/contracts/IRouter.sol | 4 +++ .../ethereum/router/contracts/Router.sol | 27 +++++++++++++++++++ processor/ethereum/router/src/tests/mod.rs | 16 +++++++++++ 3 files changed, 47 insertions(+) diff --git a/processor/ethereum/router/contracts/IRouter.sol b/processor/ethereum/router/contracts/IRouter.sol index 91bedac5..f3fab5d6 100644 --- a/processor/ethereum/router/contracts/IRouter.sol +++ b/processor/ethereum/router/contracts/IRouter.sol @@ -36,11 +36,15 @@ interface IRouterWithoutCollisions { error EscapeHatchInvoked(); /// @notice The signature was invalid error InvalidSignature(); + /// @notice The amount specified didn't match `msg.value` error AmountMismatchesMsgValue(); /// @notice The call to an ERC20's `transferFrom` failed error TransferFromFailed(); + /// @notice `execute` was re-entered + error ReenteredExecute(); + /// @notice An invalid address to escape to was specified. error InvalidEscapeAddress(); /// @notice Escaping when escape hatch wasn't invoked. diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index e8f54653..fbff77c9 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -25,6 +25,14 @@ import "IRouter.sol"; /// @author Luke Parker /// @notice Intakes coins for the Serai network and handles relaying batches of transfers out contract Router is IRouterWithoutCollisions { + /// @dev The address in transient storage used for the reentrancy guard + bytes32 constant EXECUTE_REENTRANCY_GUARD_SLOT = bytes32( + /* + keccak256("ReentrancyGuard Router.execute") - 1 + */ + 0xcf124a063de1614fedbd6b47187f98bf8873a1ae83da5c179a5881162f5b2401 + ); + /** * @dev The next nonce used to determine the address of contracts deployed with CREATE. 
This is * used to predict the addresses of deployed contracts ahead of time. @@ -356,6 +364,25 @@ contract Router is IRouterWithoutCollisions { // Each individual call is explicitly metered to ensure there isn't a DoS here // slither-disable-next-line calls-loop function execute4DE42904() external { + /* + Prevent re-entrancy. + + We emit a bitmask of which `OutInstruction`s succeeded. Doing that requires executing the + `OutInstruction`s, which may re-enter here. While our application of CEI with verifySignature + prevents replays, re-entrancy would allow out-of-order execution of batches (despite their + in-order start of execution) which isn't a headache worth dealing with. + */ + bytes32 executeReentrancyGuardSlot = EXECUTE_REENTRANCY_GUARD_SLOT; + bytes32 priorEntered; + // slither-disable-next-line assembly + assembly { + priorEntered := tload(executeReentrancyGuardSlot) + tstore(executeReentrancyGuardSlot, 1) + } + if (priorEntered != bytes32(0)) { + revert ReenteredExecute(); + } + (uint256 nonceUsed, bytes memory args, bytes32 message) = verifySignature(); (,, address coin, uint256 fee, IRouter.OutInstruction[] memory outs) = abi.decode(args, (bytes32, bytes32, address, uint256, IRouter.OutInstruction[])); diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 78215d95..2da5422d 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -22,6 +22,18 @@ use ethereum_deployer::Deployer; use crate::{Coin, OutInstructions, Router}; +#[test] +fn execute_reentrancy_guard() { + let hash = alloy_core::primitives::keccak256(b"ReentrancyGuard Router.execute"); + assert_eq!( + alloy_core::primitives::hex::encode( + (U256::from_be_slice(hash.as_ref()) - U256::from(1u8)).to_be_bytes::<32>() + ), + // Constant from the Router contract + "cf124a063de1614fedbd6b47187f98bf8873a1ae83da5c179a5881162f5b2401", + ); +} + #[test] fn selector_collisions() { assert_eq!( @@ -32,6 +44,10 @@ fn selector_collisions() { crate::_irouter_abi::IRouter::updateSeraiKeyCall::SELECTOR, crate::_router_abi::Router::updateSeraiKey5A8542A2Call::SELECTOR ); + assert_eq!( + crate::_irouter_abi::IRouter::escapeHatchCall::SELECTOR, + crate::_router_abi::Router::escapeHatchDCDD91CCCall::SELECTOR + ); } pub(crate) fn test_key() -> (Scalar, PublicKey) { From 834c16930b4fbb18edba291e6365881a61c101cb Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 2 Nov 2024 21:00:01 -0400 Subject: [PATCH 198/368] Add a bitmask of OutInstruction events to Executed Allows explorers to provide clarity on what occurred. --- .../ethereum/router/contracts/IRouter.sol | 9 +- .../ethereum/router/contracts/Router.sol | 137 ++++++++++-------- 2 files changed, 87 insertions(+), 59 deletions(-) diff --git a/processor/ethereum/router/contracts/IRouter.sol b/processor/ethereum/router/contracts/IRouter.sol index f3fab5d6..196263d6 100644 --- a/processor/ethereum/router/contracts/IRouter.sol +++ b/processor/ethereum/router/contracts/IRouter.sol @@ -22,7 +22,14 @@ interface IRouterWithoutCollisions { /// @notice Emitted when a batch of `OutInstruction`s occurs /// @param nonce The nonce consumed to execute this batch of transactions /// @param messageHash The hash of the message signed for the executed batch - event Executed(uint256 indexed nonce, bytes32 indexed messageHash); + /** + * @param results The result of each `OutInstruction` executed. This is a bitmask with true + * representing success and false representing failure. 
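Returning to the `selector_collisions` test extended above: suffixes such as `4DE42904` and `DCDD91CC` are mined so the renamed function keeps the 4-byte selector of its `IRouter` counterpart. A hedged sketch of one way such a suffix can be found (the helper and its arguments are illustrative, not the tooling actually used; expected work against a fixed target is on the order of 2^32 hashes):

```rust
use alloy_core::primitives::keccak256;

// Search for an eight-hex-digit suffix such that `name || suffix` has the
// same 4-byte selector as the target signature.
fn find_colliding_suffix(name: &str, args: &str, target_signature: &str) -> String {
  let target = keccak256(target_signature.as_bytes());
  for i in 0u64 .. {
    let candidate = format!("{name}{i:08X}");
    if keccak256(format!("{candidate}({args})").as_bytes())[.. 4] == target[.. 4] {
      return candidate;
    }
  }
  unreachable!("u64 space exhausted before a 32-bit collision")
}
```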
The high bit (1 << 7) in the first byte + * is used for the first `OutInstruction`, before the next bit, and so on, before the next byte. + * An `OutInstruction` is considered as having succeeded if the call transferring ETH doesn't + * fail, the ERC20 transfer doesn't fail, and any executed code doesn't revert. + */ + event Executed(uint256 indexed nonce, bytes32 indexed messageHash, bytes results); /// @notice Emitted when `escapeHatch` is invoked /// @param escapeTo The address to escape to diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index fbff77c9..0eb31176 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -218,7 +218,8 @@ contract Router is IRouterWithoutCollisions { Require there was nothing returned, which is done by some non-standard tokens, or that the ERC20 contract did in fact return true */ - bool nonStandardResOrTrue = (res.length == 0) || abi.decode(res, (bool)); + bool nonStandardResOrTrue = + (res.length == 0) || ((res.length == 32) && abi.decode(res, (bool))); if (!(success && nonStandardResOrTrue)) revert TransferFromFailed(); } @@ -251,7 +252,16 @@ contract Router is IRouterWithoutCollisions { /// @param to The address to transfer the coins to /// @param coin The coin to transfer /// @param amount The amount of the coin to transfer - function erc20TransferOut(address to, address coin, uint256 amount) private { + /** + * @return success If the coins were successfully transferred out. This is defined as if the + * call succeeded and returned true or nothing. + */ + // execute has this annotation yet this still flags (even when it doesn't have its own loop) + // slither-disable-next-line calls-loop + function erc20TransferOut(address to, address coin, uint256 amount) + private + returns (bool success) + { /* The ERC20s integrated are presumed to have a constant gas cost, meaning this can only be insufficient: @@ -270,48 +280,39 @@ contract Router is IRouterWithoutCollisions { */ uint256 _gas = 100_000; - bytes memory _calldata = abi.encodeWithSelector(IERC20.transfer.selector, to, amount); - bool _success; - // slither-disable-next-line assembly - assembly { - /* - `coin` is trusted so we can accept the risk of a return bomb here, yet we won't check the - return value anyways so there's no need to spend the gas decoding it. We assume failures - are the fault of the recipient, not us, the sender. We don't want to have such errors block - the queue of transfers to make. + /* + `coin` is either signed (from `execute`) or called from `escape` (which can safely be + arbitrarily called). We accordingly don't need to be worried about return bombs here. + */ + // slither-disable-next-line return-bomb + (bool erc20Success, bytes memory res) = + address(coin).call{ gas: _gas }(abi.encodeWithSelector(IERC20.transfer.selector, to, amount)); - If there ever was some invariant broken, off-chain actions is presumed to occur to move to a - new smart contract with whatever necessary changes made/response occurring. - */ - _success := - call( - _gas, - coin, - // Ether value - 0, - // calldata - add(_calldata, 0x20), - mload(_calldata), - // return data - 0, - 0 - ) - } + /* + Require there was nothing returned, which is done by some non-standard tokens, or that the + ERC20 contract did in fact return true. 
+ */ + // slither-disable-next-line incorrect-equality + bool nonStandardResOrTrue = (res.length == 0) || ((res.length == 32) && abi.decode(res, (bool))); + success = erc20Success && nonStandardResOrTrue; } /// @dev Perform an ETH/ERC20 transfer out /// @param to The address to transfer the coins to /// @param coin The coin to transfer (address(0) if Ether) /// @param amount The amount of the coin to transfer - function transferOut(address to, address coin, uint256 amount) private { + /** + * @return success If the coins were successfully transferred out. For Ethereum, this is if the + * call succeeded. For the ERC20, it's if the call succeeded and returned true or nothing. + */ + function transferOut(address to, address coin, uint256 amount) private returns (bool success) { if (coin == address(0)) { // Enough gas to service the transfer and a minimal amount of logic uint256 _gas = 5_000; // This uses assembly to prevent return bombs - bool _success; // slither-disable-next-line assembly assembly { - _success := + success := call( _gas, to, @@ -325,7 +326,7 @@ contract Router is IRouterWithoutCollisions { ) } } else { - erc20TransferOut(to, coin, amount); + success = erc20TransferOut(to, coin, amount); } } @@ -362,7 +363,7 @@ contract Router is IRouterWithoutCollisions { // @param fee The fee to pay (in coin) to the caller for their relaying of this batch // @param outs The `OutInstruction`s to act on // Each individual call is explicitly metered to ensure there isn't a DoS here - // slither-disable-next-line calls-loop + // slither-disable-next-line calls-loop,reentrancy-events function execute4DE42904() external { /* Prevent re-entrancy. @@ -387,19 +388,13 @@ contract Router is IRouterWithoutCollisions { (,, address coin, uint256 fee, IRouter.OutInstruction[] memory outs) = abi.decode(args, (bytes32, bytes32, address, uint256, IRouter.OutInstruction[])); - // TODO: Also include a bit mask here - emit Executed(nonceUsed, message); - - /* - Since we don't have a re-entrancy guard, it is possible for instructions from later batches to - be executed before these instructions. This is deemed fine. We only require later batches be - relayed after earlier batches in order to form backpressure. This means if a batch has a fee - which isn't worth relaying the batch for, so long as later batches are sufficiently - worthwhile, every batch will be relayed. - */ + // Define a bitmask to store the results of all following `OutInstruction`s + bytes memory results = new bytes((outs.length + 7) / 8); // slither-disable-next-line reentrancy-events for (uint256 i = 0; i < outs.length; i++) { + bool success = true; + // If the destination is an address, we perform a direct transfer if (outs[i].destinationType == IRouter.DestinationType.Address) { /* @@ -408,7 +403,7 @@ contract Router is IRouterWithoutCollisions { corrected batch using this nonce. 
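The `res.length == 32` guard added above matters because `abi.decode` reverts on data of the wrong length: without it, a token returning anything other than zero or thirty-two bytes would presumably revert the Router's own call frame rather than merely count as a failed transfer. In Rust terms, the acceptance rule is roughly the following (a sketch which hand-decodes the boolean; Solidity's decoder would instead revert on a 32-byte word other than 0 or 1):

```rust
// Success requires the call itself to succeed, plus return data that's either
// empty (non-standard ERC20s) or a 32-byte ABI-encoded `true`.
fn erc20_transfer_succeeded(call_ok: bool, ret: &[u8]) -> bool {
  let encoded_true = (ret.len() == 32) && ret[.. 31].iter().all(|b| *b == 0) && (ret[31] == 1);
  call_ok && (ret.is_empty() || encoded_true)
}
```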
*/ address destination = abi.decode(outs[i].destination, (address)); - transferOut(destination, coin, outs[i].amount); + success = transferOut(destination, coin, outs[i].amount); } else { // Prepare the transfer uint256 ethValue = 0; @@ -424,28 +419,54 @@ contract Router is IRouterWithoutCollisions { address nextAddress = address( uint160(uint256(keccak256(abi.encodePacked(address(this), _smartContractNonce)))) ); - erc20TransferOut(nextAddress, coin, outs[i].amount); + + success = erc20TransferOut(nextAddress, coin, outs[i].amount); } - (IRouter.CodeDestination memory destination) = - abi.decode(outs[i].destination, (IRouter.CodeDestination)); - /* - Perform the deployment with the defined gas budget. + If success is false, we presume it's a fault with the ERC20, not with us, and move on. If we + reverted here, we'd halt the execution of every single batch (now and future). By simply + moving on, we may have reached some invariant with this specific ERC20, yet the project as + a whole isn't put into a halted state. - We don't care if the following call fails as we don't want to block/retry if it does. - Failures are considered the recipient's fault. We explicitly do not want the surface - area/inefficiency of caching these for later attempted retires. - - We don't have to worry about a return bomb here as this is our own function which doesn't - return any data. + Since the recipient is a fresh account, this presumably isn't the recipient being + blacklisted (the most likely invariant upon the integration of a popular, standard ERC20). + That means there likely is some invariant with this integration to be resolved later. + Since reaching this state requires an invariant to already have been broken, and for the + reasons above, this is accepted. */ - address(this).call{ gas: destination.gasLimit, value: ethValue }( - abi.encodeWithSelector(Router.executeArbitraryCode.selector, destination.code) - ); + if (success) { + (IRouter.CodeDestination memory destination) = + abi.decode(outs[i].destination, (IRouter.CodeDestination)); + + /* + Perform the deployment with the defined gas budget. + + We don't care if the following call fails as we don't want to block/retry if it does. + Failures are considered the recipient's fault. We explicitly do not want the surface + area/inefficiency of caching these for later attempted retries. + + We don't have to worry about a return bomb here as this is our own function which + doesn't return any data. + */ + (success,) = address(this).call{ gas: destination.gasLimit, value: ethValue }( + abi.encodeWithSelector(Router.executeArbitraryCode.selector, destination.code) + ); + } + } + + if (success) { + results[i / 8] |= bytes1(uint8(1 << (7 - (i % 8)))); } } + /* + Emit the execution with the status of all included `OutInstruction`s. + + This is an effect after interactions, yet we have a reentrancy guard making this safe.
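Given the packing above, `results[i / 8] |= bytes1(uint8(1 << (7 - (i % 8))))`, an explorer can invert the mask carried by the `Executed` event. A small sketch of that inverse (hypothetical helpers, not code from this repo):

```rust
// Bit i of the mask lives in byte i / 8, counting from the byte's high bit.
fn out_instruction_succeeded(results: &[u8], i: usize) -> bool {
  ((results[i / 8] >> (7 - (i % 8))) & 1) == 1
}

// Expand the mask for a batch of n `OutInstruction`s.
fn successes(results: &[u8], n: usize) -> Vec<bool> {
  (0 .. n).map(|i| out_instruction_succeeded(results, i)).collect()
}
```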
+ */ + emit Executed(nonceUsed, message, results); + // Transfer the fee to the relayer transferOut(msg.sender, coin, fee); } From 8013c56195990f37d07f921eba6efa36f92cf355 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 8 Dec 2024 18:27:01 -0500 Subject: [PATCH 199/368] Add/correct msrv labels --- .github/workflows/msrv.yml | 253 ++++++++++++++++++ Cargo.lock | 2 +- common/db/Cargo.toml | 2 +- common/env/Cargo.toml | 2 +- common/patchable-async-sleep/Cargo.toml | 1 + common/request/Cargo.toml | 2 +- common/std-shims/Cargo.toml | 2 +- common/zalloc/Cargo.toml | 2 +- crypto/ciphersuite/Cargo.toml | 2 +- crypto/dalek-ff-group/Cargo.toml | 2 +- crypto/dkg/Cargo.toml | 2 +- crypto/ed448/Cargo.toml | 2 +- crypto/evrf/circuit-abstraction/Cargo.toml | 1 + crypto/evrf/divisors/Cargo.toml | 1 + crypto/evrf/ec-gadgets/Cargo.toml | 1 + crypto/evrf/embedwards25519/Cargo.toml | 1 + .../evrf/generalized-bulletproofs/Cargo.toml | 1 + crypto/evrf/secq256k1/Cargo.toml | 1 + crypto/frost/Cargo.toml | 2 +- crypto/multiexp/Cargo.toml | 2 +- crypto/schnorr/Cargo.toml | 2 +- crypto/schnorrkel/Cargo.toml | 2 +- crypto/transcript/Cargo.toml | 2 +- message-queue/Cargo.toml | 1 + message-queue/src/main.rs | 3 +- mini/Cargo.toml | 1 + .../alloy-simple-request-transport/Cargo.toml | 6 +- networks/ethereum/build-contracts/Cargo.toml | 1 + networks/ethereum/relayer/Cargo.toml | 1 + networks/monero/generators/Cargo.toml | 1 + networks/monero/verify-chain/Cargo.toml | 2 +- networks/monero/wallet/Cargo.toml | 1 - orchestration/Cargo.toml | 1 + patches/directories-next/Cargo.toml | 1 - patches/option-ext/Cargo.toml | 1 - patches/parking_lot/Cargo.toml | 1 - patches/parking_lot_core/Cargo.toml | 1 - patches/rocksdb/Cargo.toml | 1 - patches/zstd/Cargo.toml | 1 - processor/bin/Cargo.toml | 1 + processor/bin/src/lib.rs | 2 +- processor/bitcoin/Cargo.toml | 1 + processor/frost-attempt-manager/Cargo.toml | 1 + processor/key-gen/Cargo.toml | 1 + processor/messages/Cargo.toml | 1 + processor/monero/Cargo.toml | 1 + processor/primitives/Cargo.toml | 5 +- processor/scanner/Cargo.toml | 1 + processor/scheduler/primitives/Cargo.toml | 1 + processor/scheduler/smart-contract/Cargo.toml | 1 + .../scheduler/utxo/primitives/Cargo.toml | 1 + processor/scheduler/utxo/standard/Cargo.toml | 1 + .../utxo/transaction-chaining/Cargo.toml | 1 + processor/signers/Cargo.toml | 1 + processor/view-keys/Cargo.toml | 1 + substrate/abi/Cargo.toml | 2 +- substrate/client/Cargo.toml | 2 +- substrate/coins/pallet/Cargo.toml | 2 +- substrate/coins/primitives/Cargo.toml | 2 +- substrate/dex/pallet/Cargo.toml | 2 +- substrate/economic-security/pallet/Cargo.toml | 2 +- substrate/emissions/pallet/Cargo.toml | 4 +- substrate/emissions/primitives/Cargo.toml | 2 +- substrate/genesis-liquidity/pallet/Cargo.toml | 2 +- .../genesis-liquidity/primitives/Cargo.toml | 4 +- substrate/in-instructions/pallet/Cargo.toml | 2 +- .../in-instructions/primitives/Cargo.toml | 2 +- substrate/node/Cargo.toml | 2 +- substrate/primitives/Cargo.toml | 2 +- substrate/runtime/Cargo.toml | 2 +- substrate/signals/pallet/Cargo.toml | 2 +- substrate/signals/primitives/Cargo.toml | 2 +- substrate/validator-sets/pallet/Cargo.toml | 2 +- .../validator-sets/primitives/Cargo.toml | 2 +- 74 files changed, 326 insertions(+), 51 deletions(-) create mode 100644 .github/workflows/msrv.yml diff --git a/.github/workflows/msrv.yml b/.github/workflows/msrv.yml new file mode 100644 index 00000000..f969f755 --- /dev/null +++ b/.github/workflows/msrv.yml @@ -0,0 +1,253 @@ +name: Weekly MSRV Check + +on: 
+ schedule: + - cron: "0 0 * * 0" + workflow_dispatch: + +jobs: + msrv-common: + name: Run cargo msrv on common + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac + + - name: Install Build Dependencies + uses: ./.github/actions/build-dependencies + + - name: Install cargo msrv + run: cargo install --locked cargo-msrv + + - name: Run cargo msrv on common + run: | + cargo msrv verify --manifest-path common/zalloc/Cargo.toml + cargo msrv verify --manifest-path common/std-shims/Cargo.toml + cargo msrv verify --manifest-path common/env/Cargo.toml + cargo msrv verify --manifest-path common/db/Cargo.toml + cargo msrv verify --manifest-path common/request/Cargo.toml + cargo msrv verify --manifest-path common/patchable-async-sleep/Cargo.toml + + msrv-crypto: + name: Run cargo msrv on crypto + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac + + - name: Install Build Dependencies + uses: ./.github/actions/build-dependencies + + - name: Install cargo msrv + run: cargo install --locked cargo-msrv + + - name: Run cargo msrv on crypto + run: | + cargo msrv verify --manifest-path crypto/transcript/Cargo.toml + + cargo msrv verify --manifest-path crypto/ff-group-tests/Cargo.toml + cargo msrv verify --manifest-path crypto/dalek-ff-group/Cargo.toml + cargo msrv verify --manifest-path crypto/ed448/Cargo.toml + + cargo msrv verify --manifest-path crypto/multiexp/Cargo.toml + + cargo msrv verify --manifest-path crypto/dleq/Cargo.toml + cargo msrv verify --manifest-path crypto/ciphersuite/Cargo.toml + cargo msrv verify --manifest-path crypto/schnorr/Cargo.toml + + cargo msrv verify --manifest-path crypto/evrf/generalized-bulletproofs/Cargo.toml + cargo msrv verify --manifest-path crypto/evrf/circuit-abstraction/Cargo.toml + cargo msrv verify --manifest-path crypto/evrf/divisors/Cargo.toml + cargo msrv verify --manifest-path crypto/evrf/ec-gadgets/Cargo.toml + cargo msrv verify --manifest-path crypto/evrf/embedwards25519/Cargo.toml + cargo msrv verify --manifest-path crypto/evrf/secq256k1/Cargo.toml + + cargo msrv verify --manifest-path crypto/dkg/Cargo.toml + cargo msrv verify --manifest-path crypto/frost/Cargo.toml + cargo msrv verify --manifest-path crypto/schnorrkel/Cargo.toml + + msrv-networks: + name: Run cargo msrv on networks + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac + + - name: Install Build Dependencies + uses: ./.github/actions/build-dependencies + + - name: Install cargo msrv + run: cargo install --locked cargo-msrv + + - name: Run cargo msrv on networks + run: | + cargo msrv verify --manifest-path networks/bitcoin/Cargo.toml + + cargo msrv verify --manifest-path networks/ethereum/build-contracts/Cargo.toml + cargo msrv verify --manifest-path networks/ethereum/schnorr/Cargo.toml + cargo msrv verify --manifest-path networks/ethereum/alloy-simple-request-transport/Cargo.toml + cargo msrv verify --manifest-path networks/ethereum/relayer/Cargo.toml --features parity-db + + cargo msrv verify --manifest-path networks/monero/io/Cargo.toml + cargo msrv verify --manifest-path networks/monero/generators/Cargo.toml + cargo msrv verify --manifest-path networks/monero/primitives/Cargo.toml + cargo msrv verify --manifest-path networks/monero/ringct/mlsag/Cargo.toml + cargo msrv verify --manifest-path networks/monero/ringct/clsag/Cargo.toml + cargo msrv verify --manifest-path networks/monero/ringct/borromean/Cargo.toml + cargo msrv verify --manifest-path 
networks/monero/ringct/bulletproofs/Cargo.toml + cargo msrv verify --manifest-path networks/monero/Cargo.toml + cargo msrv verify --manifest-path networks/monero/rpc/Cargo.toml + cargo msrv verify --manifest-path networks/monero/rpc/simple-request/Cargo.toml + cargo msrv verify --manifest-path networks/monero/wallet/address/Cargo.toml + cargo msrv verify --manifest-path networks/monero/wallet/Cargo.toml + cargo msrv verify --manifest-path networks/monero/verify-chain/Cargo.toml + + msrv-message-queue: + name: Run cargo msrv on message-queue + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac + + - name: Install Build Dependencies + uses: ./.github/actions/build-dependencies + + - name: Install cargo msrv + run: cargo install --locked cargo-msrv + + - name: Run cargo msrv on message-queue + run: | + cargo msrv verify --manifest-path message-queue/Cargo.toml --features parity-db + + msrv-processor: + name: Run cargo msrv on processor + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac + + - name: Install Build Dependencies + uses: ./.github/actions/build-dependencies + + - name: Install cargo msrv + run: cargo install --locked cargo-msrv + + - name: Run cargo msrv on processor + run: | + cargo msrv verify --manifest-path processor/view-keys/Cargo.toml + + cargo msrv verify --manifest-path processor/primitives/Cargo.toml + cargo msrv verify --manifest-path processor/messages/Cargo.toml + + cargo msrv verify --manifest-path processor/scanner/Cargo.toml + + cargo msrv verify --manifest-path processor/scheduler/primitives/Cargo.toml + cargo msrv verify --manifest-path processor/scheduler/smart-contract/Cargo.toml + cargo msrv verify --manifest-path processor/scheduler/utxo/primitives/Cargo.toml + cargo msrv verify --manifest-path processor/scheduler/utxo/standard/Cargo.toml + cargo msrv verify --manifest-path processor/scheduler/utxo/transaction-chaining/Cargo.toml + + cargo msrv verify --manifest-path processor/key-gen/Cargo.toml + cargo msrv verify --manifest-path processor/frost-attempt-manager/Cargo.toml + cargo msrv verify --manifest-path processor/signers/Cargo.toml + cargo msrv verify --manifest-path processor/bin/Cargo.toml --features parity-db + + cargo msrv verify --manifest-path processor/bitcoin/Cargo.toml + + cargo msrv verify --manifest-path processor/ethereum/primitives/Cargo.toml + cargo msrv verify --manifest-path processor/ethereum/test-primitives/Cargo.toml + cargo msrv verify --manifest-path processor/ethereum/erc20/Cargo.toml + cargo msrv verify --manifest-path processor/ethereum/deployer/Cargo.toml + cargo msrv verify --manifest-path processor/ethereum/router/Cargo.toml + cargo msrv verify --manifest-path processor/ethereum/Cargo.toml + + cargo msrv verify --manifest-path processor/monero/Cargo.toml + + msrv-coordinator: + name: Run cargo msrv on coordinator + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac + + - name: Install Build Dependencies + uses: ./.github/actions/build-dependencies + + - name: Install cargo msrv + run: cargo install --locked cargo-msrv + + - name: Run cargo msrv on coordinator + run: | + cargo msrv verify --manifest-path coordinator/tributary/tendermint/Cargo.toml + cargo msrv verify --manifest-path coordinator/tributary/Cargo.toml + cargo msrv verify --manifest-path coordinator/Cargo.toml + + msrv-substrate: + name: Run cargo msrv on substrate + runs-on: ubuntu-latest + steps: + - uses: 
actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac + + - name: Install Build Dependencies + uses: ./.github/actions/build-dependencies + + - name: Install cargo msrv + run: cargo install --locked cargo-msrv + + - name: Run cargo msrv on substrate + run: | + cargo msrv verify --manifest-path substrate/primitives/Cargo.toml + + cargo msrv verify --manifest-path substrate/coins/primitives/Cargo.toml + cargo msrv verify --manifest-path substrate/coins/pallet/Cargo.toml + + cargo msrv verify --manifest-path substrate/dex/pallet/Cargo.toml + + cargo msrv verify --manifest-path substrate/economic-security/pallet/Cargo.toml + + cargo msrv verify --manifest-path substrate/genesis-liquidity/primitives/Cargo.toml + cargo msrv verify --manifest-path substrate/genesis-liquidity/pallet/Cargo.toml + + cargo msrv verify --manifest-path substrate/in-instructions/primitives/Cargo.toml + cargo msrv verify --manifest-path substrate/in-instructions/pallet/Cargo.toml + + cargo msrv verify --manifest-path substrate/validator-sets/pallet/Cargo.toml + cargo msrv verify --manifest-path substrate/validator-sets/primitives/Cargo.toml + + cargo msrv verify --manifest-path substrate/emissions/primitives/Cargo.toml + cargo msrv verify --manifest-path substrate/emissions/pallet/Cargo.toml + + cargo msrv verify --manifest-path substrate/signals/primitives/Cargo.toml + cargo msrv verify --manifest-path substrate/signals/pallet/Cargo.toml + + cargo msrv verify --manifest-path substrate/abi/Cargo.toml + cargo msrv verify --manifest-path substrate/client/Cargo.toml + + cargo msrv verify --manifest-path substrate/runtime/Cargo.toml + cargo msrv verify --manifest-path substrate/node/Cargo.toml + + msrv-orchestration: + name: Run cargo msrv on orchestration + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac + + - name: Install Build Dependencies + uses: ./.github/actions/build-dependencies + + - name: Install cargo msrv + run: cargo install --locked cargo-msrv + + - name: Run cargo msrv on orchestration + run: | + cargo msrv verify --manifest-path orchestration/Cargo.toml + + msrv-mini: + name: Run cargo msrv on mini + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac + + - name: Install Build Dependencies + uses: ./.github/actions/build-dependencies + + - name: Install cargo msrv + run: cargo install --locked cargo-msrv + + - name: Run cargo msrv on mini + run: | + cargo msrv verify --manifest-path mini/Cargo.toml diff --git a/Cargo.lock b/Cargo.lock index df7d578e..279fb59f 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -386,7 +386,7 @@ dependencies = [ [[package]] name = "alloy-simple-request-transport" -version = "0.1.0" +version = "0.1.1" dependencies = [ "alloy-json-rpc", "alloy-transport", diff --git a/common/db/Cargo.toml b/common/db/Cargo.toml index e422b346..66f798a4 100644 --- a/common/db/Cargo.toml +++ b/common/db/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/db" authors = ["Luke Parker "] keywords = [] edition = "2021" -rust-version = "1.65" +rust-version = "1.71" [package.metadata.docs.rs] all-features = true diff --git a/common/env/Cargo.toml b/common/env/Cargo.toml index 8e296a66..be34cbac 100644 --- a/common/env/Cargo.toml +++ b/common/env/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/env" authors = ["Luke Parker "] keywords = [] edition = "2021" -rust-version = "1.60" +rust-version = "1.71"
[package.metadata.docs.rs] all-features = true diff --git a/common/patchable-async-sleep/Cargo.toml b/common/patchable-async-sleep/Cargo.toml index e2d1e9cf..b4a19c5a 100644 --- a/common/patchable-async-sleep/Cargo.toml +++ b/common/patchable-async-sleep/Cargo.toml @@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/patchable-a authors = ["Luke Parker "] keywords = ["async", "sleep", "tokio", "smol", "async-std"] edition = "2021" +rust-version = "1.71" [package.metadata.docs.rs] all-features = true diff --git a/common/request/Cargo.toml b/common/request/Cargo.toml index e5018056..d960e91b 100644 --- a/common/request/Cargo.toml +++ b/common/request/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/simple-requ authors = ["Luke Parker "] keywords = ["http", "https", "async", "request", "ssl"] edition = "2021" -rust-version = "1.64" +rust-version = "1.71" [package.metadata.docs.rs] all-features = true diff --git a/common/std-shims/Cargo.toml b/common/std-shims/Cargo.toml index 534a4216..983a2522 100644 --- a/common/std-shims/Cargo.toml +++ b/common/std-shims/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/std-shims" authors = ["Luke Parker "] keywords = ["nostd", "no_std", "alloc", "io"] edition = "2021" -rust-version = "1.70" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/common/zalloc/Cargo.toml b/common/zalloc/Cargo.toml index af4e7c1c..88e59ec0 100644 --- a/common/zalloc/Cargo.toml +++ b/common/zalloc/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/zalloc" authors = ["Luke Parker "] keywords = [] edition = "2021" -rust-version = "1.77.0" +rust-version = "1.77" [package.metadata.docs.rs] all-features = true diff --git a/crypto/ciphersuite/Cargo.toml b/crypto/ciphersuite/Cargo.toml index 9fcf60a6..b666dbaa 100644 --- a/crypto/ciphersuite/Cargo.toml +++ b/crypto/ciphersuite/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/ciphersuite authors = ["Luke Parker "] keywords = ["ciphersuite", "ff", "group"] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/crypto/dalek-ff-group/Cargo.toml b/crypto/dalek-ff-group/Cargo.toml index 29b8806c..b41e1f4e 100644 --- a/crypto/dalek-ff-group/Cargo.toml +++ b/crypto/dalek-ff-group/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dalek-ff-gr authors = ["Luke Parker "] keywords = ["curve25519", "ed25519", "ristretto", "dalek", "group"] edition = "2021" -rust-version = "1.66" +rust-version = "1.71" [package.metadata.docs.rs] all-features = true diff --git a/crypto/dkg/Cargo.toml b/crypto/dkg/Cargo.toml index 39ebb6dc..337b702e 100644 --- a/crypto/dkg/Cargo.toml +++ b/crypto/dkg/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg" authors = ["Luke Parker "] keywords = ["dkg", "multisig", "threshold", "ff", "group"] edition = "2021" -rust-version = "1.79" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/crypto/ed448/Cargo.toml b/crypto/ed448/Cargo.toml index b0d0026e..64c1b243 100644 --- a/crypto/ed448/Cargo.toml +++ b/crypto/ed448/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/ed448" authors = ["Luke Parker "] keywords = ["ed448", "ff", "group"] edition = "2021" 
-rust-version = "1.66" +rust-version = "1.71" [package.metadata.docs.rs] all-features = true diff --git a/crypto/evrf/circuit-abstraction/Cargo.toml b/crypto/evrf/circuit-abstraction/Cargo.toml index 1346be49..ec2767fe 100644 --- a/crypto/evrf/circuit-abstraction/Cargo.toml +++ b/crypto/evrf/circuit-abstraction/Cargo.toml @@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/circui authors = ["Luke Parker "] keywords = ["bulletproofs", "circuit"] edition = "2021" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/crypto/evrf/divisors/Cargo.toml b/crypto/evrf/divisors/Cargo.toml index 04e820b6..a5e0663c 100644 --- a/crypto/evrf/divisors/Cargo.toml +++ b/crypto/evrf/divisors/Cargo.toml @@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/diviso authors = ["Luke Parker "] keywords = ["ciphersuite", "ff", "group"] edition = "2021" +rust-version = "1.71" [package.metadata.docs.rs] all-features = true diff --git a/crypto/evrf/ec-gadgets/Cargo.toml b/crypto/evrf/ec-gadgets/Cargo.toml index cbd35639..dad62a93 100644 --- a/crypto/evrf/ec-gadgets/Cargo.toml +++ b/crypto/evrf/ec-gadgets/Cargo.toml @@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/ec-gad authors = ["Luke Parker "] keywords = ["bulletproofs", "circuit", "divisors"] edition = "2021" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/crypto/evrf/embedwards25519/Cargo.toml b/crypto/evrf/embedwards25519/Cargo.toml index bbae482b..eae36b13 100644 --- a/crypto/evrf/embedwards25519/Cargo.toml +++ b/crypto/evrf/embedwards25519/Cargo.toml @@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/embedw authors = ["Luke Parker "] keywords = ["curve25519", "ed25519", "ristretto255", "group"] edition = "2021" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/crypto/evrf/generalized-bulletproofs/Cargo.toml b/crypto/evrf/generalized-bulletproofs/Cargo.toml index 9dfc95a5..7744f5ed 100644 --- a/crypto/evrf/generalized-bulletproofs/Cargo.toml +++ b/crypto/evrf/generalized-bulletproofs/Cargo.toml @@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/genera authors = ["Luke Parker "] keywords = ["ciphersuite", "ff", "group"] edition = "2021" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/crypto/evrf/secq256k1/Cargo.toml b/crypto/evrf/secq256k1/Cargo.toml index c363ca4f..9d7d6ef4 100644 --- a/crypto/evrf/secq256k1/Cargo.toml +++ b/crypto/evrf/secq256k1/Cargo.toml @@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/secq25 authors = ["Luke Parker "] keywords = ["secp256k1", "secq256k1", "group"] edition = "2021" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/crypto/frost/Cargo.toml b/crypto/frost/Cargo.toml index 7c32b6f0..00b18cd0 100644 --- a/crypto/frost/Cargo.toml +++ b/crypto/frost/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/frost" authors = ["Luke Parker "] keywords = ["frost", "multisig", "threshold"] edition = "2021" -rust-version = "1.79" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/crypto/multiexp/Cargo.toml b/crypto/multiexp/Cargo.toml index 228b85ab..36efbfe2 100644 --- a/crypto/multiexp/Cargo.toml +++ b/crypto/multiexp/Cargo.toml @@ -7,7 +7,7 @@ repository = 
"https://github.com/serai-dex/serai/tree/develop/crypto/multiexp" authors = ["Luke Parker "] keywords = ["multiexp", "ff", "group"] edition = "2021" -rust-version = "1.79" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/crypto/schnorr/Cargo.toml b/crypto/schnorr/Cargo.toml index 2ea04f5b..06a9710e 100644 --- a/crypto/schnorr/Cargo.toml +++ b/crypto/schnorr/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/schnorr" authors = ["Luke Parker "] keywords = ["schnorr", "ff", "group"] edition = "2021" -rust-version = "1.79" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/crypto/schnorrkel/Cargo.toml b/crypto/schnorrkel/Cargo.toml index 2508bef0..68be2135 100644 --- a/crypto/schnorrkel/Cargo.toml +++ b/crypto/schnorrkel/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/schnorrkel" authors = ["Luke Parker "] keywords = ["frost", "multisig", "threshold", "schnorrkel"] edition = "2021" -rust-version = "1.79" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/crypto/transcript/Cargo.toml b/crypto/transcript/Cargo.toml index 84e08abf..566ad56b 100644 --- a/crypto/transcript/Cargo.toml +++ b/crypto/transcript/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/transcript" authors = ["Luke Parker "] keywords = ["transcript"] edition = "2021" -rust-version = "1.79" +rust-version = "1.73" [package.metadata.docs.rs] all-features = true diff --git a/message-queue/Cargo.toml b/message-queue/Cargo.toml index 9eeaa5ce..7f180933 100644 --- a/message-queue/Cargo.toml +++ b/message-queue/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/message-queue/src/main.rs b/message-queue/src/main.rs index 03c580ce..c148f13e 100644 --- a/message-queue/src/main.rs +++ b/message-queue/src/main.rs @@ -73,7 +73,8 @@ pub(crate) fn queue_message( assert!(matches!(meta.from, Service::Coordinator) ^ matches!(meta.to, Service::Coordinator)); // Lock the queue - let queue_lock = QUEUES.read().unwrap()[&(meta.from, meta.to)].write().unwrap(); + let queue_lock = QUEUES.read().unwrap(); + let mut queue_lock = queue_lock[&(meta.from, meta.to)].write().unwrap(); // Verify (from, to, intent) hasn't been prior seen fn key(domain: &'static [u8], key: impl AsRef<[u8]>) -> Vec { diff --git a/mini/Cargo.toml b/mini/Cargo.toml index dfef7e56..075c2887 100644 --- a/mini/Cargo.toml +++ b/mini/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.71" [package.metadata.docs.rs] all-features = true diff --git a/networks/ethereum/alloy-simple-request-transport/Cargo.toml b/networks/ethereum/alloy-simple-request-transport/Cargo.toml index bd89aa0d..e87fbe5b 100644 --- a/networks/ethereum/alloy-simple-request-transport/Cargo.toml +++ b/networks/ethereum/alloy-simple-request-transport/Cargo.toml @@ -1,12 +1,12 @@ [package] name = "alloy-simple-request-transport" -version = "0.1.0" +version = "0.1.1" description = "A transport for alloy based off simple-request" license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/networks/ethereum/alloy-simple-request-transport" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.74" +rust-version = "1.78" [package.metadata.docs.rs] all-features = true @@ 
-19,7 +19,7 @@ workspace = true tower = "0.4" serde_json = { version = "1", default-features = false } -simple-request = { path = "../../../common/request", default-features = false } +simple-request = { path = "../../../common/request", version = "0.1", default-features = false } alloy-json-rpc = { version = "0.3", default-features = false } alloy-transport = { version = "0.3", default-features = false } diff --git a/networks/ethereum/build-contracts/Cargo.toml b/networks/ethereum/build-contracts/Cargo.toml index 41d1f993..6b288d72 100644 --- a/networks/ethereum/build-contracts/Cargo.toml +++ b/networks/ethereum/build-contracts/Cargo.toml @@ -6,6 +6,7 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/networks/ethereum/build-contracts" authors = ["Luke Parker "] edition = "2021" +rust-version = "1.81" [package.metadata.docs.rs] all-features = true diff --git a/networks/ethereum/relayer/Cargo.toml b/networks/ethereum/relayer/Cargo.toml index 89d8e99e..37b99827 100644 --- a/networks/ethereum/relayer/Cargo.toml +++ b/networks/ethereum/relayer/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.72" [package.metadata.docs.rs] all-features = true diff --git a/networks/monero/generators/Cargo.toml b/networks/monero/generators/Cargo.toml index af8cbcd9..11a33897 100644 --- a/networks/monero/generators/Cargo.toml +++ b/networks/monero/generators/Cargo.toml @@ -6,6 +6,7 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/networks/monero/generators" authors = ["Luke Parker "] edition = "2021" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/networks/monero/verify-chain/Cargo.toml b/networks/monero/verify-chain/Cargo.toml index e1aba16e..f7164357 100644 --- a/networks/monero/verify-chain/Cargo.toml +++ b/networks/monero/verify-chain/Cargo.toml @@ -6,8 +6,8 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/networks/monero/verify-chain" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.80" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/networks/monero/wallet/Cargo.toml b/networks/monero/wallet/Cargo.toml index af787e49..12cc3245 100644 --- a/networks/monero/wallet/Cargo.toml +++ b/networks/monero/wallet/Cargo.toml @@ -11,7 +11,6 @@ rust-version = "1.80" [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs"] -rust-version = "1.80" [package.metadata.cargo-machete] ignored = ["monero-clsag"] diff --git a/orchestration/Cargo.toml b/orchestration/Cargo.toml index a70e9936..00b0d99b 100644 --- a/orchestration/Cargo.toml +++ b/orchestration/Cargo.toml @@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/orchestration/" authors = ["Luke Parker "] keywords = [] edition = "2021" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/patches/directories-next/Cargo.toml b/patches/directories-next/Cargo.toml index 8c2b21dc..3ffcb6ce 100644 --- a/patches/directories-next/Cargo.toml +++ b/patches/directories-next/Cargo.toml @@ -7,7 +7,6 @@ repository = "https://github.com/serai-dex/serai/tree/develop/patches/directorie authors = ["Luke Parker "] keywords = [] edition = "2021" -rust-version = "1.74" [package.metadata.docs.rs] all-features = true diff --git a/patches/option-ext/Cargo.toml b/patches/option-ext/Cargo.toml index 6f039c31..64bf3838 100644 --- a/patches/option-ext/Cargo.toml 
+++ b/patches/option-ext/Cargo.toml @@ -7,7 +7,6 @@ repository = "https://github.com/serai-dex/serai/tree/develop/patches/option-ext authors = ["Luke Parker "] keywords = [] edition = "2021" -rust-version = "1.74" [package.metadata.docs.rs] all-features = true diff --git a/patches/parking_lot/Cargo.toml b/patches/parking_lot/Cargo.toml index 957b19bf..20cdd271 100644 --- a/patches/parking_lot/Cargo.toml +++ b/patches/parking_lot/Cargo.toml @@ -7,7 +7,6 @@ repository = "https://github.com/serai-dex/serai/tree/develop/patches/parking_lo authors = ["Luke Parker "] keywords = [] edition = "2021" -rust-version = "1.70" [package.metadata.docs.rs] all-features = true diff --git a/patches/parking_lot_core/Cargo.toml b/patches/parking_lot_core/Cargo.toml index 37dcc703..cafd432b 100644 --- a/patches/parking_lot_core/Cargo.toml +++ b/patches/parking_lot_core/Cargo.toml @@ -7,7 +7,6 @@ repository = "https://github.com/serai-dex/serai/tree/develop/patches/parking_lo authors = ["Luke Parker "] keywords = [] edition = "2021" -rust-version = "1.70" [package.metadata.docs.rs] all-features = true diff --git a/patches/rocksdb/Cargo.toml b/patches/rocksdb/Cargo.toml index 3a92fafc..b77ee4bc 100644 --- a/patches/rocksdb/Cargo.toml +++ b/patches/rocksdb/Cargo.toml @@ -7,7 +7,6 @@ repository = "https://github.com/serai-dex/serai/tree/develop/patches/rocksdb" authors = ["Luke Parker "] keywords = [] edition = "2021" -rust-version = "1.70" [package.metadata.docs.rs] all-features = true diff --git a/patches/zstd/Cargo.toml b/patches/zstd/Cargo.toml index 0d1368e4..488c80ff 100644 --- a/patches/zstd/Cargo.toml +++ b/patches/zstd/Cargo.toml @@ -7,7 +7,6 @@ repository = "https://github.com/serai-dex/serai/tree/develop/patches/zstd" authors = ["Luke Parker "] keywords = [] edition = "2021" -rust-version = "1.70" [package.metadata.docs.rs] all-features = true diff --git a/processor/bin/Cargo.toml b/processor/bin/Cargo.toml index 116916ab..52ebaeb9 100644 --- a/processor/bin/Cargo.toml +++ b/processor/bin/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs index 5403922e..ef83d7e0 100644 --- a/processor/bin/src/lib.rs +++ b/processor/bin/src/lib.rs @@ -36,7 +36,7 @@ db_channel! { /// The type used for the database. #[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] -pub type Db = serai_db::ParityDb; +pub type Db = std::sync::Arc<serai_db::ParityDb>; /// The type used for the database.
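One code change rides along with the metadata here: `queue_message` (in the `message-queue` hunk above) now binds the outer `QUEUES` read guard before taking the per-queue write guard, rather than chaining both in one expression. A sketch of the shape involved, with hypothetical types: the write guard borrows through the read guard, so the read guard needs a binding that outlives the statement.

```rust
use std::collections::HashMap;
use std::sync::RwLock;

fn push(queues: &RwLock<HashMap<u8, RwLock<Vec<u8>>>>, id: u8, msg: u8) {
  // Bind the read guard first; the write guard below borrows from it, so it
  // must stay alive (a temporary dropped at statement end wouldn't).
  let queues = queues.read().unwrap();
  let mut queue = queues[&id].write().unwrap();
  queue.push(msg);
}
```

Collapsing both acquisitions into one statement would leave the read guard as a temporary that's dropped while the write guard still borrows from it, which rustc rejects.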
#[cfg(feature = "rocksdb")] pub type Db = serai_db::RocksDB; diff --git a/processor/bitcoin/Cargo.toml b/processor/bitcoin/Cargo.toml index 90b9566b..bc3a1dd0 100644 --- a/processor/bitcoin/Cargo.toml +++ b/processor/bitcoin/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/frost-attempt-manager/Cargo.toml b/processor/frost-attempt-manager/Cargo.toml index ad8d2a4c..2a397bac 100644 --- a/processor/frost-attempt-manager/Cargo.toml +++ b/processor/frost-attempt-manager/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = ["frost", "multisig", "threshold"] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/key-gen/Cargo.toml b/processor/key-gen/Cargo.toml index f1f00564..d5051168 100644 --- a/processor/key-gen/Cargo.toml +++ b/processor/key-gen/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/messages/Cargo.toml b/processor/messages/Cargo.toml index dbadd9db..03dc0441 100644 --- a/processor/messages/Cargo.toml +++ b/processor/messages/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/monero/Cargo.toml b/processor/monero/Cargo.toml index 6ea49a0c..d596bb9e 100644 --- a/processor/monero/Cargo.toml +++ b/processor/monero/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/primitives/Cargo.toml b/processor/primitives/Cargo.toml index 6dd3082b..6eba2e5b 100644 --- a/processor/primitives/Cargo.toml +++ b/processor/primitives/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true @@ -19,8 +20,8 @@ workspace = true [dependencies] group = { version = "0.13", default-features = false } -serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } -serai-coins-primitives = { path = "../../substrate/coins/primitives", default-features = false, features = ["std"] } +serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std", "borsh"] } +serai-coins-primitives = { path = "../../substrate/coins/primitives", default-features = false, features = ["std", "borsh"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index 09a6a937..c46adde3 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/scheduler/primitives/Cargo.toml b/processor/scheduler/primitives/Cargo.toml index f847300a..7540dc84 100644 --- a/processor/scheduler/primitives/Cargo.toml +++ b/processor/scheduler/primitives/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] 
keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/scheduler/smart-contract/Cargo.toml b/processor/scheduler/smart-contract/Cargo.toml index c43569fb..0a2a0ff2 100644 --- a/processor/scheduler/smart-contract/Cargo.toml +++ b/processor/scheduler/smart-contract/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/scheduler/utxo/primitives/Cargo.toml b/processor/scheduler/utxo/primitives/Cargo.toml index 80b1f22e..d1f9c8cf 100644 --- a/processor/scheduler/utxo/primitives/Cargo.toml +++ b/processor/scheduler/utxo/primitives/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/scheduler/utxo/standard/Cargo.toml b/processor/scheduler/utxo/standard/Cargo.toml index d6c16161..e3d574ac 100644 --- a/processor/scheduler/utxo/standard/Cargo.toml +++ b/processor/scheduler/utxo/standard/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/scheduler/utxo/transaction-chaining/Cargo.toml b/processor/scheduler/utxo/transaction-chaining/Cargo.toml index 0b1eb155..f8a676f8 100644 --- a/processor/scheduler/utxo/transaction-chaining/Cargo.toml +++ b/processor/scheduler/utxo/transaction-chaining/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/signers/Cargo.toml b/processor/signers/Cargo.toml index 65222896..07e42052 100644 --- a/processor/signers/Cargo.toml +++ b/processor/signers/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/processor/view-keys/Cargo.toml b/processor/view-keys/Cargo.toml index 6fdd9134..d76ca32b 100644 --- a/processor/view-keys/Cargo.toml +++ b/processor/view-keys/Cargo.toml @@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/processor/view-key authors = ["Luke Parker "] keywords = [] edition = "2021" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/abi/Cargo.toml b/substrate/abi/Cargo.toml index ea26485f..772cdd32 100644 --- a/substrate/abi/Cargo.toml +++ b/substrate/abi/Cargo.toml @@ -6,7 +6,7 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/abi" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/client/Cargo.toml b/substrate/client/Cargo.toml index f59c70fe..f8672718 100644 --- a/substrate/client/Cargo.toml +++ b/substrate/client/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/substrate/client" authors = ["Luke Parker "] keywords = ["serai"] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/coins/pallet/Cargo.toml b/substrate/coins/pallet/Cargo.toml index 8c59fb3e..f1afa1c1 100644 --- a/substrate/coins/pallet/Cargo.toml +++ b/substrate/coins/pallet/Cargo.toml @@ -6,7 +6,7 
@@ license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/coins/pallet" authors = ["Akil Demir "] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/coins/primitives/Cargo.toml b/substrate/coins/primitives/Cargo.toml index ec906929..ed160f9d 100644 --- a/substrate/coins/primitives/Cargo.toml +++ b/substrate/coins/primitives/Cargo.toml @@ -5,7 +5,7 @@ description = "Serai coins primitives" license = "MIT" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/dex/pallet/Cargo.toml b/substrate/dex/pallet/Cargo.toml index 7e8a83e8..d27ffdeb 100644 --- a/substrate/dex/pallet/Cargo.toml +++ b/substrate/dex/pallet/Cargo.toml @@ -6,7 +6,7 @@ license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/dex/pallet" authors = ["Parity Technologies , Akil Demir "] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/economic-security/pallet/Cargo.toml b/substrate/economic-security/pallet/Cargo.toml index cefeee8e..033a05fc 100644 --- a/substrate/economic-security/pallet/Cargo.toml +++ b/substrate/economic-security/pallet/Cargo.toml @@ -6,7 +6,7 @@ license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/economic-security/pallet" authors = ["Akil Demir "] edition = "2021" -rust-version = "1.77" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/emissions/pallet/Cargo.toml b/substrate/emissions/pallet/Cargo.toml index a56bcee4..742dff08 100644 --- a/substrate/emissions/pallet/Cargo.toml +++ b/substrate/emissions/pallet/Cargo.toml @@ -6,7 +6,7 @@ license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/emissions/pallet" authors = ["Akil Demir "] edition = "2021" -rust-version = "1.77" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true @@ -54,7 +54,7 @@ std = [ "validator-sets-pallet/std", "dex-pallet/std", "genesis-liquidity-pallet/std", - + "economic-security-pallet/std", "serai-primitives/std", diff --git a/substrate/emissions/primitives/Cargo.toml b/substrate/emissions/primitives/Cargo.toml index 15ecbe25..077de439 100644 --- a/substrate/emissions/primitives/Cargo.toml +++ b/substrate/emissions/primitives/Cargo.toml @@ -6,7 +6,7 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/emissions/primitives" authors = ["Akil Demir "] edition = "2021" -rust-version = "1.77" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/genesis-liquidity/pallet/Cargo.toml b/substrate/genesis-liquidity/pallet/Cargo.toml index 3668b995..4162c038 100644 --- a/substrate/genesis-liquidity/pallet/Cargo.toml +++ b/substrate/genesis-liquidity/pallet/Cargo.toml @@ -6,7 +6,7 @@ license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/genesis-liquidity/pallet" authors = ["Akil Demir "] edition = "2021" -rust-version = "1.77" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/genesis-liquidity/primitives/Cargo.toml b/substrate/genesis-liquidity/primitives/Cargo.toml index ec05bbd6..d026ccf4 100644 --- a/substrate/genesis-liquidity/primitives/Cargo.toml +++ 
b/substrate/genesis-liquidity/primitives/Cargo.toml @@ -6,7 +6,7 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/genesis-liquidity/primitives" authors = ["Akil Demir "] edition = "2021" -rust-version = "1.77" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true @@ -36,7 +36,7 @@ std = [ "borsh?/std", "serde?/std", "scale-info/std", - + "serai-primitives/std", "validator-sets-primitives/std", diff --git a/substrate/in-instructions/pallet/Cargo.toml b/substrate/in-instructions/pallet/Cargo.toml index a12e38b3..27ecb7bd 100644 --- a/substrate/in-instructions/pallet/Cargo.toml +++ b/substrate/in-instructions/pallet/Cargo.toml @@ -6,7 +6,7 @@ license = "AGPL-3.0-only" authors = ["Luke Parker "] edition = "2021" publish = false -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/in-instructions/primitives/Cargo.toml b/substrate/in-instructions/primitives/Cargo.toml index 54551134..bd926749 100644 --- a/substrate/in-instructions/primitives/Cargo.toml +++ b/substrate/in-instructions/primitives/Cargo.toml @@ -5,7 +5,7 @@ description = "Serai instructions library, enabling encoding and decoding" license = "MIT" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/node/Cargo.toml b/substrate/node/Cargo.toml index 5da8ce85..5e6cd3f1 100644 --- a/substrate/node/Cargo.toml +++ b/substrate/node/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/substrate/node" authors = ["Luke Parker "] edition = "2021" publish = false -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/primitives/Cargo.toml b/substrate/primitives/Cargo.toml index 4a495b53..b9ebfcfa 100644 --- a/substrate/primitives/Cargo.toml +++ b/substrate/primitives/Cargo.toml @@ -6,7 +6,7 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/primitives" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/runtime/Cargo.toml b/substrate/runtime/Cargo.toml index 9cd0f5ab..2d914ab1 100644 --- a/substrate/runtime/Cargo.toml +++ b/substrate/runtime/Cargo.toml @@ -6,7 +6,7 @@ license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/runtime" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/signals/pallet/Cargo.toml b/substrate/signals/pallet/Cargo.toml index e06b5e6b..4c3e3407 100644 --- a/substrate/signals/pallet/Cargo.toml +++ b/substrate/signals/pallet/Cargo.toml @@ -6,7 +6,7 @@ license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/signals/pallet" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/signals/primitives/Cargo.toml b/substrate/signals/primitives/Cargo.toml index 1c338145..dbaba0a5 100644 --- a/substrate/signals/primitives/Cargo.toml +++ b/substrate/signals/primitives/Cargo.toml @@ -6,7 +6,7 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/signals/primitives" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.74" 
+rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/validator-sets/pallet/Cargo.toml b/substrate/validator-sets/pallet/Cargo.toml index e6f559e1..7f0a9850 100644 --- a/substrate/validator-sets/pallet/Cargo.toml +++ b/substrate/validator-sets/pallet/Cargo.toml @@ -6,7 +6,7 @@ license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/validator-sets/pallet" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true diff --git a/substrate/validator-sets/primitives/Cargo.toml b/substrate/validator-sets/primitives/Cargo.toml index 844e6134..8eea9d55 100644 --- a/substrate/validator-sets/primitives/Cargo.toml +++ b/substrate/validator-sets/primitives/Cargo.toml @@ -6,7 +6,7 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/substrate/validator-sets/primitives" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.74" +rust-version = "1.80" [package.metadata.docs.rs] all-features = true From 3192370484df675524ef70ee6884a2fbe9266807 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 8 Dec 2024 20:42:37 -0500 Subject: [PATCH 200/368] Add Serai key confirmation to prevent rotating to an unusable key Also updates alloy to the latest version --- Cargo.lock | 618 +++++++++++------- .../alloy-simple-request-transport/Cargo.toml | 8 +- networks/ethereum/schnorr/Cargo.toml | 8 +- processor/ethereum/Cargo.toml | 9 +- processor/ethereum/deployer/Cargo.toml | 11 +- processor/ethereum/deployer/src/lib.rs | 2 +- processor/ethereum/erc20/Cargo.toml | 8 +- processor/ethereum/erc20/src/lib.rs | 8 +- processor/ethereum/primitives/Cargo.toml | 4 +- processor/ethereum/primitives/src/lib.rs | 6 +- processor/ethereum/router/Cargo.toml | 14 +- processor/ethereum/router/build.rs | 7 + .../ethereum/router/contracts/IRouter.sol | 35 +- .../ethereum/router/contracts/Router.sol | 84 ++- processor/ethereum/router/src/lib.rs | 85 ++- processor/ethereum/router/src/tests/mod.rs | 85 ++- processor/ethereum/test-primitives/Cargo.toml | 7 +- processor/ethereum/test-primitives/src/lib.rs | 6 +- 18 files changed, 679 insertions(+), 326 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 279fb59f..34f71890 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -111,15 +111,33 @@ dependencies = [ [[package]] name = "alloy-consensus" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4177d135789e282e925092be8939d421b701c6d92c0a16679faa659d9166289d" +checksum = "a101d4d016f47f13890a74290fdd17b05dd175191d9337bc600791fb96e4dea8" dependencies = [ "alloy-eips", "alloy-primitives", "alloy-rlp", "alloy-serde", + "alloy-trie", + "auto_impl", "c-kzg", + "derive_more 1.0.0", + "k256", + "serde", +] + +[[package]] +name = "alloy-consensus-any" +version = "0.7.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fa60357dda9a3d0f738f18844bd6d0f4a5924cc5cf00bfad2ff1369897966123" +dependencies = [ + "alloy-consensus", + "alloy-eips", + "alloy-primitives", + "alloy-rlp", + "alloy-serde", "serde", ] @@ -145,21 +163,22 @@ dependencies = [ [[package]] name = "alloy-eip7702" -version = "0.1.0" +version = "0.4.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "37d319bb544ca6caeab58c39cea8921c55d924d4f68f2c60f24f914673f9a74a" +checksum = "4c986539255fb839d1533c128e190e557e52ff652c9ef62939e233a81dd93f7e" dependencies = [ "alloy-primitives", 
"alloy-rlp", + "derive_more 1.0.0", "k256", "serde", ] [[package]] name = "alloy-eips" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "499ee14d296a133d142efd215eb36bf96124829fe91cf8f5d4e5ccdd381eae00" +checksum = "8b6755b093afef5925f25079dd5a7c8d096398b804ba60cb5275397b06b31689" dependencies = [ "alloy-eip2930", "alloy-eip7702", @@ -175,40 +194,43 @@ dependencies = [ [[package]] name = "alloy-genesis" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4b85dfc693e4a1193f0372a8f789df12ab51fcbe7be0733baa04939a86dd813b" +checksum = "aeec8e6eab6e52b7c9f918748c9b811e87dbef7312a2e3a2ca1729a92966a6af" dependencies = [ "alloy-primitives", "alloy-serde", + "alloy-trie", "serde", ] [[package]] name = "alloy-json-rpc" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4207166c79cfdf7f3bed24bbc84f5c7c5d4db1970f8c82e3fcc76257f16d2166" +checksum = "4fa077efe0b834bcd89ff4ba547f48fb081e4fdc3673dd7da1b295a2cf2bb7b7" dependencies = [ "alloy-primitives", "alloy-sol-types", "serde", "serde_json", - "thiserror", + "thiserror 2.0.6", "tracing", ] [[package]] name = "alloy-network" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dfbe2802d5b8c632f18d68c352073378f02a3407c1b6a4487194e7d21ab0f002" +checksum = "209a1882a08e21aca4aac6e2a674dc6fcf614058ef8cb02947d63782b1899552" dependencies = [ "alloy-consensus", + "alloy-consensus-any", "alloy-eips", "alloy-json-rpc", "alloy-network-primitives", "alloy-primitives", + "alloy-rpc-types-any", "alloy-rpc-types-eth", "alloy-serde", "alloy-signer", @@ -216,15 +238,19 @@ dependencies = [ "async-trait", "auto_impl", "futures-utils-wasm", - "thiserror", + "serde", + "serde_json", + "thiserror 2.0.6", ] [[package]] name = "alloy-network-primitives" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "396c07726030fa0f9dab5da8c71ccd69d5eb74a7fe1072b7ae453a67e4fe553e" +checksum = "c20219d1ad261da7a6331c16367214ee7ded41d001fabbbd656fbf71898b2773" dependencies = [ + "alloy-consensus", + "alloy-eips", "alloy-primitives", "alloy-serde", "serde", @@ -232,47 +258,54 @@ dependencies = [ [[package]] name = "alloy-node-bindings" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c847311cc7386684ef38ab404069d795bee07da945f63d884265436870a17276" +checksum = "bffcf33dd319f21cd6f066d81cbdef0326d4bdaaf7cfe91110bc090707858e9f" dependencies = [ "alloy-genesis", "alloy-primitives", "k256", + "rand", "serde_json", "tempfile", - "thiserror", + "thiserror 2.0.6", "tracing", "url", ] [[package]] name = "alloy-primitives" -version = "0.8.0" +version = "0.8.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a767e59c86900dd7c3ce3ecef04f3ace5ac9631ee150beb8b7d22f7fa3bbb2d7" +checksum = "9db948902dfbae96a73c2fbf1f7abec62af034ab883e4c777c3fd29702bd6e2c" dependencies = [ "alloy-rlp", "bytes", "cfg-if", "const-hex", - "derive_more 0.99.18", + "derive_more 1.0.0", + "foldhash", + "hashbrown 0.15.2", "hex-literal", + "indexmap 2.5.0", "itoa", "k256", "keccak-asm", + "paste", "proptest", "rand", "ruint", + "rustc-hash 2.1.0", "serde", + "sha3", "tiny-keccak", ] [[package]] name = "alloy-provider" -version = "0.3.1" +version = "0.7.3" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "1376948df782ffee83a54cac4b2aba14134edd997229a3db97da0a606586eb5c" +checksum = "9eefa6f4c798ad01f9b4202d02cea75f5ec11fa180502f4701e2b47965a8c0bb" dependencies = [ "alloy-chains", "alloy-consensus", @@ -291,19 +324,22 @@ dependencies = [ "futures", "futures-utils-wasm", "lru", + "parking_lot 0.12.3", "pin-project", + "schnellru", "serde", "serde_json", - "thiserror", + "thiserror 2.0.6", "tokio", "tracing", + "wasmtimer", ] [[package]] name = "alloy-rlp" -version = "0.3.8" +version = "0.3.10" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "26154390b1d205a4a7ac7352aa2eb4f81f391399d4e2f546fb81a2f8bb383f62" +checksum = "f542548a609dca89fcd72b3b9f355928cf844d4363c5eed9c5273a3dd225e097" dependencies = [ "alloy-rlp-derive", "arrayvec", @@ -312,22 +348,23 @@ dependencies = [ [[package]] name = "alloy-rlp-derive" -version = "0.3.8" +version = "0.3.10" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4d0f2d905ebd295e7effec65e5f6868d153936130ae718352771de3e7d03c75c" +checksum = "5a833d97bf8a5f0f878daf2c8451fff7de7f9de38baa5a45d936ec718d81255a" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] name = "alloy-rpc-client" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "02378418a429f8a14a0ad8ffaa15b2d25ff34914fc4a1e366513c6a3800e03b3" +checksum = "ed30bf1041e84cabc5900f52978ca345dd9969f2194a945e6fdec25b0620705c" dependencies = [ "alloy-json-rpc", + "alloy-primitives", "alloy-transport", "alloy-transport-http", "futures", @@ -336,34 +373,47 @@ dependencies = [ "serde_json", "tokio", "tokio-stream", - "tower", + "tower 0.5.1", "tracing", + "wasmtimer", +] + +[[package]] +name = "alloy-rpc-types-any" +version = "0.7.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "200661999b6e235d9840be5d60a6e8ae2f0af9eb2a256dd378786744660e36ec" +dependencies = [ + "alloy-consensus-any", + "alloy-rpc-types-eth", + "alloy-serde", ] [[package]] name = "alloy-rpc-types-eth" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "15bb3506ab1cf415d4752778c93e102050399fb8de97b7da405a5bf3e31f5f3b" +checksum = "a0600b8b5e2dc0cab12cbf91b5a885c35871789fb7b3a57b434bd4fced5b7a8b" dependencies = [ "alloy-consensus", + "alloy-consensus-any", "alloy-eips", "alloy-network-primitives", "alloy-primitives", "alloy-rlp", "alloy-serde", "alloy-sol-types", + "derive_more 1.0.0", "itertools 0.13.0", "serde", "serde_json", - "thiserror", ] [[package]] name = "alloy-serde" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ae417978015f573b4a8c02af17f88558fb22e3fccd12e8a910cf6a2ff331cfcb" +checksum = "9afa753a97002a33b2ccb707d9f15f31c81b8c1b786c95b73cc62bb1d1fd0c3f" dependencies = [ "alloy-primitives", "serde", @@ -372,16 +422,16 @@ dependencies = [ [[package]] name = "alloy-signer" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b750c9b61ac0646f8f4a61231c2732a337b2c829866fc9a191b96b7eedf80ffe" +checksum = "9b2cbff01a673936c2efd7e00d4c0e9a4dbbd6d600e2ce298078d33efbb19cd7" dependencies = [ "alloy-primitives", "async-trait", "auto_impl", "elliptic-curve", "k256", - "thiserror", + "thiserror 2.0.6", ] [[package]] @@ -392,61 +442,61 @@ dependencies = [ "alloy-transport", "serde_json", "simple-request", - 
"tower", + "tower 0.5.1", ] [[package]] name = "alloy-sol-macro" -version = "0.8.0" +version = "0.8.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "183bcfc0f3291d9c41a3774172ee582fb2ce6eb6569085471d8f225de7bb86fc" +checksum = "3bfd7853b65a2b4f49629ec975fee274faf6dff15ab8894c620943398ef283c0" dependencies = [ "alloy-sol-macro-expander", "alloy-sol-macro-input", - "proc-macro-error", + "proc-macro-error2", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] name = "alloy-sol-macro-expander" -version = "0.8.0" +version = "0.8.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "71c4d842beb7a6686d04125603bc57614d5ed78bf95e4753274db3db4ba95214" +checksum = "82ec42f342d9a9261699f8078e57a7a4fda8aaa73c1a212ed3987080e6a9cd13" dependencies = [ "alloy-sol-macro-input", "const-hex", "heck 0.5.0", "indexmap 2.5.0", - "proc-macro-error", + "proc-macro-error2", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", "syn-solidity", "tiny-keccak", ] [[package]] name = "alloy-sol-macro-input" -version = "0.8.0" +version = "0.8.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1306e8d3c9e6e6ecf7a39ffaf7291e73a5f655a2defd366ee92c2efebcdf7fee" +checksum = "ed2c50e6a62ee2b4f7ab3c6d0366e5770a21cad426e109c2f40335a1b3aff3df" dependencies = [ "const-hex", "dunce", "heck 0.5.0", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", "syn-solidity", ] [[package]] name = "alloy-sol-types" -version = "0.8.0" +version = "0.8.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "577e262966e92112edbd15b1b2c0947cc434d6e8311df96d3329793fe8047da9" +checksum = "c9dc0fffe397aa17628160e16b89f704098bf3c9d74d5d369ebc239575936de5" dependencies = [ "alloy-primitives", "alloy-sol-macro", @@ -455,9 +505,9 @@ dependencies = [ [[package]] name = "alloy-transport" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2799749ca692ae145f54968778877afd7c95e788488f176cfdfcf2a8abeb2062" +checksum = "d69d36982b9e46075ae6b792b0f84208c6c2c15ad49f6c500304616ef67b70e0" dependencies = [ "alloy-json-rpc", "base64 0.22.1", @@ -465,23 +515,40 @@ dependencies = [ "futures-utils-wasm", "serde", "serde_json", - "thiserror", + "thiserror 2.0.6", "tokio", - "tower", + "tower 0.5.1", "tracing", "url", + "wasmtimer", ] [[package]] name = "alloy-transport-http" -version = "0.3.1" +version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bc10c4dd932f66e0db6cc5735241e0c17a6a18564b430bbc1839f7db18587a93" +checksum = "2e02ffd5d93ffc51d72786e607c97de3b60736ca3e636ead0ec1f7dce68ea3fd" dependencies = [ "alloy-transport", "url", ] +[[package]] +name = "alloy-trie" +version = "0.7.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3a5fd8fea044cc9a8c8a50bb6f28e31f0385d820f116c5b98f6f4e55d6e5590b" +dependencies = [ + "alloy-primitives", + "alloy-rlp", + "arrayvec", + "derive_more 1.0.0", + "nybbles", + "serde", + "smallvec", + "tracing", +] + [[package]] name = "android-tzdata" version = "0.1.1" @@ -717,6 +784,9 @@ name = "arrayvec" version = "0.7.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50" +dependencies = [ + "serde", +] [[package]] name = "asn1-rs" @@ -730,7 +800,7 @@ dependencies = [ "nom", "num-traits", "rusticata-macros", - "thiserror", + "thiserror 1.0.63", "time", ] @@ -817,7 +887,7 @@ 
checksum = "16e62a023e7c117e27523144c5d2459f4397fcc3cab0085af8e2224f643a0193" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -828,7 +898,7 @@ checksum = "a27b8a3a6e1a44fa4c8baf1f653e4172e81486d4941f2237e20dc2d0cf4ddff1" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -863,7 +933,7 @@ checksum = "3c87f3f15e7794432337fc718554eaa4dc8f04c9677a950ffe366f20a162ae42" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -972,9 +1042,9 @@ dependencies = [ "proc-macro2", "quote", "regex", - "rustc-hash", + "rustc-hash 1.1.0", "shlex", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -1039,7 +1109,7 @@ dependencies = [ "serde_json", "simple-request", "std-shims", - "thiserror", + "thiserror 1.0.63", "tokio", "zeroize", ] @@ -1211,7 +1281,7 @@ dependencies = [ "serde_json", "serde_repr", "serde_urlencoded", - "thiserror", + "thiserror 1.0.63", "tokio", "tokio-util", "tower-service", @@ -1250,7 +1320,7 @@ dependencies = [ "proc-macro-crate 3.2.0", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", "syn_derive", ] @@ -1386,7 +1456,7 @@ dependencies = [ "semver 1.0.23", "serde", "serde_json", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -1558,7 +1628,7 @@ dependencies = [ "heck 0.5.0", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -1594,9 +1664,9 @@ dependencies = [ [[package]] name = "const-hex" -version = "1.12.0" +version = "1.14.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "94fb8a24a26d37e1ffd45343323dc9fe6654ceea44c12f2fcb3d7ac29e610bc6" +checksum = "4b0485bab839b018a8f1723fc5391819fea5f8f0f32288ef8a735fd096b6160c" dependencies = [ "cfg-if", "cpufeatures", @@ -1637,12 +1707,6 @@ version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7c74b8349d32d297c9134b8c88677813a227df8f779daa29bfc29c183fe3dca6" -[[package]] -name = "convert_case" -version = "0.4.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6245d59a3e82a7fc217c5828a6692dbc6dfb63a0c8c90495621f7b9d79704a0e" - [[package]] name = "core-foundation" version = "0.9.4" @@ -1892,7 +1956,7 @@ checksum = "f46882e17999c6cc590af592290432be3bce0428cb0d5f8b6715e4dc7b383eb3" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -1919,7 +1983,7 @@ dependencies = [ "proc-macro2", "quote", "scratch", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -1936,7 +2000,7 @@ checksum = "98532a60dedaebc4848cb2cba5023337cc9ea3af16a5b062633fabfd9f18fb60" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -2066,11 +2130,9 @@ version = "0.99.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5f33878137e4dafd7fa914ad4e259e18a4e8e532b9617a2d0150262bf53abfce" dependencies = [ - "convert_case", "proc-macro2", "quote", - "rustc_version 0.4.1", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -2090,7 +2152,8 @@ checksum = "cb7330aeadfbe296029522e6c40f315320aba36fc43a5b3632f3795348f3bd22" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", + "unicode-xid", ] [[package]] @@ -2169,7 +2232,7 @@ checksum = "97369cbbc041bc366949bc74d34658d6cda5621039731c6310521892a3a20ae0" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -2196,7 +2259,7 @@ dependencies = [ "schnorr-signatures", "secq256k1", "std-shims", - "thiserror", + "thiserror 1.0.63", "zeroize", ] @@ 
-2215,7 +2278,7 @@ dependencies = [ "multiexp", "rand_core", "rustversion", - "thiserror", + "thiserror 1.0.63", "zeroize", ] @@ -2237,7 +2300,7 @@ dependencies = [ "serde", "serde_json", "strum 0.26.3", - "thiserror", + "thiserror 1.0.63", "tokio", "tracing", ] @@ -2422,7 +2485,7 @@ dependencies = [ "heck 0.4.1", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -2526,7 +2589,7 @@ dependencies = [ "fs-err", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -2676,6 +2739,12 @@ version = "1.0.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1" +[[package]] +name = "foldhash" +version = "0.1.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f81ec6369c545a7d40e4589b5597581fa1c441fe1cce96dd1de43159910a36a2" + [[package]] name = "fork-tree" version = "3.0.0" @@ -2801,7 +2870,7 @@ dependencies = [ "proc-macro-warning", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -2813,7 +2882,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -2823,7 +2892,7 @@ source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46 dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -2982,7 +3051,7 @@ checksum = "87750cf4b7a4c0625b1529e4c543c2182106e4dedc60a2a6455e00d212c489ac" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -3271,6 +3340,16 @@ dependencies = [ "allocator-api2", ] +[[package]] +name = "hashbrown" +version = "0.15.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bf151400ff0baff5465007dd2f3e717f3fe502074ca563069ce3a6629d07b289" +dependencies = [ + "foldhash", + "serde", +] + [[package]] name = "heck" version = "0.4.1" @@ -3535,7 +3614,7 @@ dependencies = [ "pin-project-lite", "socket2 0.5.7", "tokio", - "tower", + "tower 0.4.13", "tower-service", "tracing", ] @@ -3851,11 +3930,11 @@ dependencies = [ "jsonrpsee-types", "parking_lot 0.12.3", "rand", - "rustc-hash", + "rustc-hash 1.1.0", "serde", "serde_json", "soketto 0.7.1", - "thiserror", + "thiserror 1.0.63", "tokio", "tracing", ] @@ -3891,7 +3970,7 @@ dependencies = [ "tokio", "tokio-stream", "tokio-util", - "tower", + "tower 0.4.13", "tracing", ] @@ -3905,7 +3984,7 @@ dependencies = [ "beef", "serde", "serde_json", - "thiserror", + "thiserror 1.0.63", "tracing", ] @@ -4052,7 +4131,7 @@ dependencies = [ "multiaddr", "pin-project", "rw-stream-sink", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -4102,7 +4181,7 @@ dependencies = [ "rand", "rw-stream-sink", "smallvec", - "thiserror", + "thiserror 1.0.63", "unsigned-varint", "void", ] @@ -4174,7 +4253,7 @@ dependencies = [ "quick-protobuf", "quick-protobuf-codec", "smallvec", - "thiserror", + "thiserror 1.0.63", "void", ] @@ -4191,7 +4270,7 @@ dependencies = [ "quick-protobuf", "rand", "sha2", - "thiserror", + "thiserror 1.0.63", "tracing", "zeroize", ] @@ -4219,7 +4298,7 @@ dependencies = [ "rand", "sha2", "smallvec", - "thiserror", + "thiserror 1.0.63", "uint", "unsigned-varint", "void", @@ -4284,7 +4363,7 @@ dependencies = [ "sha2", "snow", "static_assertions", - "thiserror", + "thiserror 1.0.63", "x25519-dalek", "zeroize", ] @@ -4327,7 +4406,7 @@ dependencies = [ "ring 0.16.20", "rustls 0.21.12", "socket2 0.5.7", - "thiserror", + "thiserror 1.0.63", "tokio", ] @@ -4382,7 +4461,7 @@ dependencies = [ 
"proc-macro-warning", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -4416,7 +4495,7 @@ dependencies = [ "ring 0.16.20", "rustls 0.21.12", "rustls-webpki 0.101.7", - "thiserror", + "thiserror 1.0.63", "x509-parser", "yasna", ] @@ -4467,7 +4546,7 @@ dependencies = [ "pin-project-lite", "rw-stream-sink", "soketto 0.8.0", - "thiserror", + "thiserror 1.0.63", "url", "webpki-roots", ] @@ -4481,7 +4560,7 @@ dependencies = [ "futures", "libp2p-core", "log", - "thiserror", + "thiserror 1.0.63", "yamux", ] @@ -4647,7 +4726,7 @@ dependencies = [ "macro_magic_core", "macro_magic_macros", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -4661,7 +4740,7 @@ dependencies = [ "macro_magic_core_macros", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -4672,7 +4751,7 @@ checksum = "d710e1214dffbab3b5dacb21475dde7d6ed84c69ff722b3a47a782668d44fbac" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -4683,7 +4762,7 @@ checksum = "b8fb85ec1620619edf2984a7693497d4ec88a9665d8b87e942856884c92dbf2a" dependencies = [ "macro_magic_core", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -4882,7 +4961,7 @@ dependencies = [ "schnorr-signatures", "serde_json", "subtle", - "thiserror", + "thiserror 1.0.63", "zeroize", ] @@ -4899,7 +4978,7 @@ dependencies = [ "serde", "serde_json", "std-shims", - "thiserror", + "thiserror 1.0.63", "zeroize", ] @@ -4926,7 +5005,7 @@ dependencies = [ "monero-primitives", "rand_core", "std-shims", - "thiserror", + "thiserror 1.0.63", "zeroize", ] @@ -4946,7 +5025,7 @@ dependencies = [ "rand_core", "std-shims", "subtle", - "thiserror", + "thiserror 1.0.63", "zeroize", ] @@ -4981,7 +5060,7 @@ dependencies = [ "monero-io", "monero-primitives", "std-shims", - "thiserror", + "thiserror 1.0.63", "zeroize", ] @@ -5009,7 +5088,7 @@ dependencies = [ "serde", "serde_json", "std-shims", - "thiserror", + "thiserror 1.0.63", "zeroize", ] @@ -5022,7 +5101,7 @@ dependencies = [ "monero-primitives", "rand_core", "std-shims", - "thiserror", + "thiserror 1.0.63", "zeroize", ] @@ -5095,7 +5174,7 @@ dependencies = [ "serde", "serde_json", "std-shims", - "thiserror", + "thiserror 1.0.63", "tokio", "zeroize", ] @@ -5111,7 +5190,7 @@ dependencies = [ "polyseed", "rand_core", "std-shims", - "thiserror", + "thiserror 1.0.63", "zeroize", ] @@ -5288,7 +5367,7 @@ checksum = "254a5372af8fc138e36684761d3c0cdb758a4410e938babcff1c860ce14ddbfc" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -5335,7 +5414,7 @@ dependencies = [ "anyhow", "byteorder", "paste", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -5349,7 +5428,7 @@ dependencies = [ "log", "netlink-packet-core", "netlink-sys", - "thiserror", + "thiserror 1.0.63", "tokio", ] @@ -5501,7 +5580,18 @@ checksum = "af1844ef2428cc3e1cb900be36181049ef3d3193c63e43026cfe202983b27a56" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", +] + +[[package]] +name = "nybbles" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "95f06be0417d97f81fe4e5c86d7d01b392655a9cac9c19a848aa033e18937b23" +dependencies = [ + "const-hex", + "serde", + "smallvec", ] [[package]] @@ -5911,7 +6001,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cd53dff83f26735fdc1ca837098ccf133605d794cdae66acfc2bfac3ec809d95" dependencies = [ "memchr", - "thiserror", + "thiserror 1.0.63", "ucd-trie", ] @@ -5942,7 +6032,7 @@ checksum = 
"2f38a4412a78282e09a2cf38d195ea5420d15ba0602cb375210efbc877243965" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -6009,7 +6099,7 @@ dependencies = [ "sha3", "std-shims", "subtle", - "thiserror", + "thiserror 1.0.63", "zeroize", ] @@ -6145,6 +6235,28 @@ dependencies = [ "version_check", ] +[[package]] +name = "proc-macro-error-attr2" +version = "2.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "96de42df36bb9bba5542fe9f1a054b8cc87e172759a1868aa05c1f3acc89dfc5" +dependencies = [ + "proc-macro2", + "quote", +] + +[[package]] +name = "proc-macro-error2" +version = "2.0.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "11ec05c52be0a07b08061f7dd003e7d7092e0472bc731b4af7bb1ef876109802" +dependencies = [ + "proc-macro-error-attr2", + "proc-macro2", + "quote", + "syn 2.0.90", +] + [[package]] name = "proc-macro-warning" version = "0.4.2" @@ -6153,14 +6265,14 @@ checksum = "3d1eaa7fa0aa1929ffdf7eeb6eac234dde6268914a14ad44d23521ab6a9b258e" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] name = "proc-macro2" -version = "1.0.86" +version = "1.0.92" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5e719e8df665df0d1c8fbfd238015744736151d4445ec0836b8e628aae103b77" +checksum = "37d3544b3f2748c54e147655edb5025752e2303145b5aefb3c3ea2c78b973bb0" dependencies = [ "unicode-ident", ] @@ -6176,7 +6288,7 @@ dependencies = [ "lazy_static", "memchr", "parking_lot 0.12.3", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -6199,7 +6311,7 @@ checksum = "440f724eba9f6996b75d63681b0a92b06947f1457076d503a4d2e2c8f56442b8" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -6309,7 +6421,7 @@ dependencies = [ "asynchronous-codec", "bytes", "quick-protobuf", - "thiserror", + "thiserror 1.0.63", "unsigned-varint", ] @@ -6324,9 +6436,9 @@ dependencies = [ "pin-project-lite", "quinn-proto", "quinn-udp", - "rustc-hash", + "rustc-hash 1.1.0", "rustls 0.21.12", - "thiserror", + "thiserror 1.0.63", "tokio", "tracing", ] @@ -6340,10 +6452,10 @@ dependencies = [ "bytes", "rand", "ring 0.16.20", - "rustc-hash", + "rustc-hash 1.1.0", "rustls 0.21.12", "slab", - "thiserror", + "thiserror 1.0.63", "tinyvec", "tracing", ] @@ -6385,6 +6497,7 @@ dependencies = [ "libc", "rand_chacha", "rand_core", + "serde", ] [[package]] @@ -6489,7 +6602,7 @@ checksum = "ba009ff324d1fc1b900bd1fdb31564febe58a8ccc8a6fdbb93b543d33b13ca43" dependencies = [ "getrandom", "libredox", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -6509,7 +6622,7 @@ checksum = "bcc303e793d3734489387d205e9b186fac9c6cfacedd98cbb2e8a5943595f3e6" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -6520,7 +6633,7 @@ checksum = "ad156d539c879b7a24a363a2016d77961786e71f48f2e2fc8302a92abd2429a6" dependencies = [ "hashbrown 0.13.2", "log", - "rustc-hash", + "rustc-hash 1.1.0", "slice-group-by", "smallvec", ] @@ -6677,7 +6790,7 @@ dependencies = [ "netlink-packet-route", "netlink-proto", "nix", - "thiserror", + "thiserror 1.0.63", "tokio", ] @@ -6733,6 +6846,12 @@ version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "08d43f7aa6b08d49f382cde6a7982047c3426db949b1424bc4b7ec9ae12c6ce2" +[[package]] +name = "rustc-hash" +version = "2.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c7fb8039b3032c191086b10f11f319a6e99e1e82889c5cc6046f515c9db1d497" + 
[[package]] name = "rustc-hex" version = "2.1.0" @@ -6916,7 +7035,7 @@ dependencies = [ "log", "sp-core", "sp-wasm-interface", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -6944,7 +7063,7 @@ dependencies = [ "sp-keystore", "sp-runtime", "substrate-prometheus-endpoint", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -7012,7 +7131,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -7049,7 +7168,7 @@ dependencies = [ "sp-panic-handler", "sp-runtime", "sp-version", - "thiserror", + "thiserror 1.0.63", "tiny-bip39", "tokio", ] @@ -7127,7 +7246,7 @@ dependencies = [ "sp-runtime", "sp-state-machine", "substrate-prometheus-endpoint", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -7163,7 +7282,7 @@ dependencies = [ "sp-keystore", "sp-runtime", "substrate-prometheus-endpoint", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -7217,7 +7336,7 @@ dependencies = [ "sp-keystore", "sp-runtime", "substrate-prometheus-endpoint", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -7273,7 +7392,7 @@ dependencies = [ "sc-allocator", "sp-maybe-compressed-blob", "sp-wasm-interface", - "thiserror", + "thiserror 1.0.63", "wasm-instrument", ] @@ -7321,7 +7440,7 @@ dependencies = [ "sp-application-crypto", "sp-core", "sp-keystore", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -7359,7 +7478,7 @@ dependencies = [ "sp-core", "sp-runtime", "substrate-prometheus-endpoint", - "thiserror", + "thiserror 1.0.63", "unsigned-varint", "void", "wasm-timer", @@ -7382,7 +7501,7 @@ dependencies = [ "sc-network", "sp-blockchain", "sp-runtime", - "thiserror", + "thiserror 1.0.63", "unsigned-varint", ] @@ -7440,7 +7559,7 @@ dependencies = [ "sp-blockchain", "sp-core", "sp-runtime", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -7474,7 +7593,7 @@ dependencies = [ "sp-core", "sp-runtime", "substrate-prometheus-endpoint", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -7581,7 +7700,7 @@ dependencies = [ "sp-rpc", "sp-runtime", "sp-version", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -7595,7 +7714,7 @@ dependencies = [ "serde_json", "substrate-prometheus-endpoint", "tokio", - "tower", + "tower 0.4.13", "tower-http", ] @@ -7621,7 +7740,7 @@ dependencies = [ "sp-core", "sp-runtime", "sp-version", - "thiserror", + "thiserror 1.0.63", "tokio-stream", ] @@ -7682,7 +7801,7 @@ dependencies = [ "static_init", "substrate-prometheus-endpoint", "tempfile", - "thiserror", + "thiserror 1.0.63", "tokio", "tracing", "tracing-futures", @@ -7733,7 +7852,7 @@ dependencies = [ "sc-utils", "serde", "serde_json", - "thiserror", + "thiserror 1.0.63", "wasm-timer", ] @@ -7749,7 +7868,7 @@ dependencies = [ "log", "parking_lot 0.12.3", "regex", - "rustc-hash", + "rustc-hash 1.1.0", "sc-client-api", "sc-tracing-proc-macro", "serde", @@ -7759,7 +7878,7 @@ dependencies = [ "sp-rpc", "sp-runtime", "sp-tracing", - "thiserror", + "thiserror 1.0.63", "tracing", "tracing-log", "tracing-subscriber 0.2.25", @@ -7773,7 +7892,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -7799,7 +7918,7 @@ dependencies = [ "sp-tracing", "sp-transaction-pool", "substrate-prometheus-endpoint", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -7815,7 +7934,7 @@ dependencies = [ "sp-blockchain", "sp-core", "sp-runtime", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -8153,7 +8272,7 @@ dependencies = [ "simple-request", "sp-core", "sp-runtime", - "thiserror", + "thiserror 
1.0.63", "tokio", "zeroize", ] @@ -8739,7 +8858,7 @@ dependencies = [ "serai-processor-ethereum-deployer", "serai-processor-ethereum-erc20", "serai-processor-ethereum-primitives", - "syn 2.0.77", + "syn 2.0.90", "syn-solidity", "tokio", ] @@ -9097,7 +9216,7 @@ checksum = "a5831b979fd7b5439637af1752d535ff49f4860c0f341d1baeb6faf0f4242170" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -9120,7 +9239,7 @@ checksum = "6c64451ba24fc7a6a2d60fc75dd9c83c90903b19028d4eff35e88fc1e86564e9" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -9303,6 +9422,9 @@ name = "smallvec" version = "1.13.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3c5e1a9a646d36c3599cd173a41282daf47c44583ad367b8e6837255952e5c67" +dependencies = [ + "serde", +] [[package]] name = "snap" @@ -9396,7 +9518,7 @@ dependencies = [ "sp-std", "sp-trie", "sp-version", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -9410,7 +9532,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -9478,7 +9600,7 @@ dependencies = [ "sp-database", "sp-runtime", "sp-state-machine", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -9492,7 +9614,7 @@ dependencies = [ "sp-inherents", "sp-runtime", "sp-state-machine", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -9581,7 +9703,7 @@ dependencies = [ "sp-storage", "ss58-registry", "substrate-bip39", - "thiserror", + "thiserror 1.0.63", "tiny-bip39", "tracing", "zeroize", @@ -9606,7 +9728,7 @@ source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46 dependencies = [ "quote", "sp-core-hashing", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -9625,7 +9747,7 @@ source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46 dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -9650,7 +9772,7 @@ dependencies = [ "scale-info", "sp-runtime", "sp-std", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -9695,7 +9817,7 @@ dependencies = [ "parking_lot 0.12.3", "sp-core", "sp-externalities", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -9703,7 +9825,7 @@ name = "sp-maybe-compressed-blob" version = "4.1.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ - "thiserror", + "thiserror 1.0.63", "zstd 0.12.4", ] @@ -9743,7 +9865,7 @@ name = "sp-rpc" version = "6.0.0" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ - "rustc-hash", + "rustc-hash 1.1.0", "serde", "sp-core", ] @@ -9797,7 +9919,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -9845,7 +9967,7 @@ dependencies = [ "sp-panic-handler", "sp-std", "sp-trie", - "thiserror", + "thiserror 1.0.63", "tracing", "trie-db", ] @@ -9878,7 +10000,7 @@ dependencies = [ "sp-inherents", "sp-runtime", "sp-std", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -9919,7 +10041,7 @@ dependencies = [ "schnellru", "sp-core", "sp-std", - "thiserror", + "thiserror 1.0.63", "tracing", "trie-db", "trie-root", @@ -9939,7 +10061,7 @@ dependencies = [ "sp-runtime", "sp-std", "sp-version-proc-macro", - "thiserror", + "thiserror 1.0.63", ] [[package]] @@ -9950,7 +10072,7 @@ dependencies = [ "parity-scale-codec", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -10138,7 +10260,7 @@ 
dependencies = [ "proc-macro2", "quote", "rustversion", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -10151,7 +10273,7 @@ dependencies = [ "proc-macro2", "quote", "rustversion", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -10198,7 +10320,7 @@ dependencies = [ "hyper 0.14.30", "log", "prometheus", - "thiserror", + "thiserror 1.0.63", "tokio", ] @@ -10239,9 +10361,9 @@ dependencies = [ [[package]] name = "syn" -version = "2.0.77" +version = "2.0.90" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9f35bcdf61fd8e7be6caf75f429fdca8beb3ed76584befb503b1569faee373ed" +checksum = "919d3b74a5dd0ccd15aeb8f93e7006bd9e14c295087c9896a110f490752bcf31" dependencies = [ "proc-macro2", "quote", @@ -10250,14 +10372,14 @@ dependencies = [ [[package]] name = "syn-solidity" -version = "0.8.0" +version = "0.8.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "284c41c2919303438fcf8dede4036fd1e82d4fc0fbb2b279bd2a1442c909ca92" +checksum = "da0523f59468a2696391f2a772edc089342aacd53c3caa2ac3264e598edf119b" dependencies = [ "paste", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -10269,9 +10391,15 @@ dependencies = [ "proc-macro-error", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] +[[package]] +name = "sync_wrapper" +version = "0.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2047c6ded9c721764247e62cd3b03c09ffc529b2ba5b10ec482ae507a4a70160" + [[package]] name = "synstructure" version = "0.12.6" @@ -10342,7 +10470,7 @@ dependencies = [ "parity-scale-codec", "patchable-async-sleep", "serai-db", - "thiserror", + "thiserror 1.0.63", "tokio", ] @@ -10367,7 +10495,16 @@ version = "1.0.63" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c0342370b38b6a11b6cc11d6a805569958d54cfa061a29969c3b5ce2ea405724" dependencies = [ - "thiserror-impl", + "thiserror-impl 1.0.63", +] + +[[package]] +name = "thiserror" +version = "2.0.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8fec2a1820ebd077e2b90c4df007bebf344cd394098a13c563957d0afc83ea47" +dependencies = [ + "thiserror-impl 2.0.6", ] [[package]] @@ -10378,7 +10515,18 @@ checksum = "a4558b58466b9ad7ca0f102865eccc95938dca1a74a856f2b57b6629050da261" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", +] + +[[package]] +name = "thiserror-impl" +version = "2.0.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d65750cab40f4ff1929fb1ba509e9914eb756131cef4210da8d5d700d26f6312" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.90", ] [[package]] @@ -10442,9 +10590,9 @@ dependencies = [ "once_cell", "pbkdf2 0.11.0", "rand", - "rustc-hash", + "rustc-hash 1.1.0", "sha2", - "thiserror", + "thiserror 1.0.63", "unicode-normalization", "wasm-bindgen", "zeroize", @@ -10500,7 +10648,7 @@ checksum = "693d596312e88961bc67d7f1f97af8a70227d9f90c31bba5806eec004978d752" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -10610,6 +10758,20 @@ dependencies = [ "tracing", ] +[[package]] +name = "tower" +version = "0.5.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2873938d487c3cfb9aed7546dc9f2711d867c9f90c46b889989a2cb84eba6b4f" +dependencies = [ + "futures-core", + "futures-util", + "pin-project-lite", + "sync_wrapper", + "tower-layer", + "tower-service", +] + [[package]] name = "tower-http" version = "0.4.4" @@ -10660,7 +10822,7 @@ checksum = 
"34704c8d6ebcbc939824180af020566b01a7c01f80641264eba0999f6c2b6be7" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -10764,7 +10926,7 @@ dependencies = [ "serai-db", "subtle", "tendermint-machine", - "thiserror", + "thiserror 1.0.63", "tokio", "zeroize", ] @@ -10810,7 +10972,7 @@ dependencies = [ "rand", "smallvec", "socket2 0.4.10", - "thiserror", + "thiserror 1.0.63", "tinyvec", "tokio", "tracing", @@ -10835,7 +10997,7 @@ dependencies = [ "once_cell", "rand", "smallvec", - "thiserror", + "thiserror 1.0.63", "tinyvec", "tokio", "tracing", @@ -10857,7 +11019,7 @@ dependencies = [ "rand", "resolv-conf", "smallvec", - "thiserror", + "thiserror 1.0.63", "tokio", "tracing", "trust-dns-proto 0.23.2", @@ -11087,7 +11249,7 @@ dependencies = [ "once_cell", "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", "wasm-bindgen-shared", ] @@ -11121,7 +11283,7 @@ checksum = "afc340c74d9005395cf9dd098506f7f44e38f2b4a21c6aaacf9a105ea5e1e836" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", "wasm-bindgen-backend", "wasm-bindgen-shared", ] @@ -11161,7 +11323,7 @@ dependencies = [ "strum 0.24.1", "strum_macros 0.24.3", "tempfile", - "thiserror", + "thiserror 1.0.63", "wasm-opt-cxx-sys", "wasm-opt-sys", ] @@ -11293,7 +11455,7 @@ dependencies = [ "log", "object 0.31.1", "target-lexicon", - "thiserror", + "thiserror 1.0.63", "wasmparser", "wasmtime-cranelift-shared", "wasmtime-environ", @@ -11330,7 +11492,7 @@ dependencies = [ "object 0.31.1", "serde", "target-lexicon", - "thiserror", + "thiserror 1.0.63", "wasmparser", "wasmtime-types", ] @@ -11418,7 +11580,7 @@ checksum = "77943729d4b46141538e8d0b6168915dc5f88575ecdfea26753fd3ba8bab244a" dependencies = [ "cranelift-entity", "serde", - "thiserror", + "thiserror 1.0.63", "wasmparser", ] @@ -11430,7 +11592,21 @@ checksum = "ca7af9bb3ee875c4907835e607a275d10b04d15623d3aebe01afe8fbd3f85050" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", +] + +[[package]] +name = "wasmtimer" +version = "0.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0048ad49a55b9deb3953841fa1fc5858f0efbcb7a18868c899a360269fac1b23" +dependencies = [ + "futures", + "js-sys", + "parking_lot 0.12.3", + "pin-utils", + "slab", + "wasm-bindgen", ] [[package]] @@ -11766,7 +11942,7 @@ dependencies = [ "nom", "oid-registry", "rusticata-macros", - "thiserror", + "thiserror 1.0.63", "time", ] @@ -11835,7 +12011,7 @@ checksum = "fa4f8080344d4671fb4e831a13ad1e68092748387dfc4f55e356242fae12ce3e" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] @@ -11855,7 +12031,7 @@ checksum = "ce36e65b0d2999d2aafac989fb249189a141aee1f53c612c1f37d72631959f69" dependencies = [ "proc-macro2", "quote", - "syn 2.0.77", + "syn 2.0.90", ] [[package]] diff --git a/networks/ethereum/alloy-simple-request-transport/Cargo.toml b/networks/ethereum/alloy-simple-request-transport/Cargo.toml index e87fbe5b..b9769263 100644 --- a/networks/ethereum/alloy-simple-request-transport/Cargo.toml +++ b/networks/ethereum/alloy-simple-request-transport/Cargo.toml @@ -6,7 +6,7 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/networks/ethereum/alloy-simple-request-transport" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.78" +rust-version = "1.81" [package.metadata.docs.rs] all-features = true @@ -16,13 +16,13 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -tower = "0.4" +tower = "0.5" serde_json = { version = "1", 
default-features = false } simple-request = { path = "../../../common/request", version = "0.1", default-features = false } -alloy-json-rpc = { version = "0.3", default-features = false } -alloy-transport = { version = "0.3", default-features = false } +alloy-json-rpc = { version = "0.7", default-features = false } +alloy-transport = { version = "0.7", default-features = false } [features] default = ["tls"] diff --git a/networks/ethereum/schnorr/Cargo.toml b/networks/ethereum/schnorr/Cargo.toml index 2e9597c8..9c88e7c0 100644 --- a/networks/ethereum/schnorr/Cargo.toml +++ b/networks/ethereum/schnorr/Cargo.toml @@ -33,10 +33,10 @@ alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-rpc-types-eth = { version = "0.3", default-features = false } -alloy-rpc-client = { version = "0.3", default-features = false } -alloy-provider = { version = "0.3", default-features = false } +alloy-rpc-types-eth = { version = "0.7", default-features = false } +alloy-rpc-client = { version = "0.7", default-features = false } +alloy-provider = { version = "0.7", default-features = false } -alloy-node-bindings = { version = "0.3", default-features = false } +alloy-node-bindings = { version = "0.7", default-features = false } tokio = { version = "1", default-features = false, features = ["macros"] } diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml index 13978631..4a26e11f 100644 --- a/processor/ethereum/Cargo.toml +++ b/processor/ethereum/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.81" [package.metadata.docs.rs] all-features = true @@ -33,11 +34,11 @@ k256 = { version = "^0.13.1", default-features = false, features = ["std"] } alloy-core = { version = "0.8", default-features = false } alloy-rlp = { version = "0.3", default-features = false } -alloy-rpc-types-eth = { version = "0.3", default-features = false } -alloy-transport = { version = "0.3", default-features = false } +alloy-rpc-types-eth = { version = "0.7", default-features = false } +alloy-transport = { version = "0.7", default-features = false } alloy-simple-request-transport = { path = "../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-rpc-client = { version = "0.3", default-features = false } -alloy-provider = { version = "0.3", default-features = false } +alloy-rpc-client = { version = "0.7", default-features = false } +alloy-provider = { version = "0.7", default-features = false } serai-client = { path = "../../substrate/client", default-features = false, features = ["ethereum"] } diff --git a/processor/ethereum/deployer/Cargo.toml b/processor/ethereum/deployer/Cargo.toml index 9b0ed146..7cbd57d0 100644 --- a/processor/ethereum/deployer/Cargo.toml +++ b/processor/ethereum/deployer/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum authors = ["Luke Parker "] edition = "2021" publish = false -rust-version = "1.79" +rust-version = "1.81" [package.metadata.docs.rs] all-features = true @@ -18,15 +18,16 @@ workspace = true [dependencies] alloy-core = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.3", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } alloy-sol-macro = { version = "0.8", 
default-features = false } -alloy-rpc-types-eth = { version = "0.3", default-features = false } -alloy-transport = { version = "0.3", default-features = false } +alloy-consensus = { version = "0.7", default-features = false } + +alloy-rpc-types-eth = { version = "0.7", default-features = false } +alloy-transport = { version = "0.7", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.3", default-features = false } +alloy-provider = { version = "0.7", default-features = false } ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false } diff --git a/processor/ethereum/deployer/src/lib.rs b/processor/ethereum/deployer/src/lib.rs index 2293de47..58b0262d 100644 --- a/processor/ethereum/deployer/src/lib.rs +++ b/processor/ethereum/deployer/src/lib.rs @@ -49,7 +49,7 @@ impl Deployer { // 100 gwei gas_price: 100_000_000_000u128, // TODO: Use a more accurate gas limit - gas_limit: 1_000_000u128, + gas_limit: 1_000_000u64, to: TxKind::Create, value: U256::ZERO, input: bytecode, diff --git a/processor/ethereum/erc20/Cargo.toml b/processor/ethereum/erc20/Cargo.toml index 3c7f5101..ad1508f6 100644 --- a/processor/ethereum/erc20/Cargo.toml +++ b/processor/ethereum/erc20/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum authors = ["Luke Parker "] edition = "2021" publish = false -rust-version = "1.79" +rust-version = "1.81" [package.metadata.docs.rs] all-features = true @@ -22,9 +22,9 @@ alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } alloy-sol-macro = { version = "0.8", default-features = false } -alloy-rpc-types-eth = { version = "0.3", default-features = false } -alloy-transport = { version = "0.3", default-features = false } +alloy-rpc-types-eth = { version = "0.7", default-features = false } +alloy-transport = { version = "0.7", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.3", default-features = false } +alloy-provider = { version = "0.7", default-features = false } tokio = { version = "1", default-features = false, features = ["rt"] } diff --git a/processor/ethereum/erc20/src/lib.rs b/processor/ethereum/erc20/src/lib.rs index ec33989e..20f44b53 100644 --- a/processor/ethereum/erc20/src/lib.rs +++ b/processor/ethereum/erc20/src/lib.rs @@ -8,7 +8,7 @@ use alloy_core::primitives::{Address, B256, U256}; use alloy_sol_types::{SolInterface, SolEvent}; -use alloy_rpc_types_eth::Filter; +use alloy_rpc_types_eth::{Filter, TransactionTrait}; use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; @@ -66,7 +66,7 @@ impl Erc20 { // If this is a top-level call... 
// Don't validate the encoding as this can't be re-encoded to an identical bytestring due // to the `InInstruction` appended after the call itself - if let Ok(call) = IERC20Calls::abi_decode(&transaction.input, false) { + if let Ok(call) = IERC20Calls::abi_decode(transaction.inner.input(), false) { // Extract the top-level call's from/to/value let (from, call_to, value) = match call { IERC20Calls::transfer(transferCall { to, value }) => (transaction.from, to, value), @@ -92,7 +92,7 @@ impl Erc20 { // Find the log for this transfer for log in receipt.inner.logs() { // If this log was emitted by a different contract, continue - if Some(log.address()) != transaction.to { + if Some(log.address()) != transaction.inner.to() { continue; } @@ -122,7 +122,7 @@ impl Erc20 { // Read the data appended after let encoded = call.abi_encode(); - let data = transaction.input.as_ref()[encoded.len() ..].to_vec(); + let data = transaction.inner.input().as_ref()[encoded.len() ..].to_vec(); return Ok(Some(TopLevelTransfer { id: (*transaction_id, log_index), diff --git a/processor/ethereum/primitives/Cargo.toml b/processor/ethereum/primitives/Cargo.toml index 6c6ff886..7070d256 100644 --- a/processor/ethereum/primitives/Cargo.toml +++ b/processor/ethereum/primitives/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum authors = ["Luke Parker "] edition = "2021" publish = false -rust-version = "1.79" +rust-version = "1.81" [package.metadata.docs.rs] all-features = true @@ -21,4 +21,4 @@ group = { version = "0.13", default-features = false } k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic"] } alloy-core = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.3", default-features = false, features = ["k256"] } +alloy-consensus = { version = "0.7", default-features = false, features = ["k256"] } diff --git a/processor/ethereum/primitives/src/lib.rs b/processor/ethereum/primitives/src/lib.rs index 1fbf2834..a6da3b4d 100644 --- a/processor/ethereum/primitives/src/lib.rs +++ b/processor/ethereum/primitives/src/lib.rs @@ -5,7 +5,7 @@ use group::ff::PrimeField; use k256::Scalar; -use alloy_core::primitives::{Parity, Signature}; +use alloy_core::primitives::PrimitiveSignature; use alloy_consensus::{SignableTransaction, Signed, TxLegacy}; /// The Keccak256 hash function. 
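The primitives hunk above swaps `Parity`/`Signature` for `PrimitiveSignature`, and the deployer hunk earlier narrows `gas_limit` to `u64`: both are alloy 0.7 / alloy-core 0.8 API changes. A minimal sketch of the new shapes, assuming the crate versions this patch pins (`deploy_tx` and its constants are illustrative, not from the repo):

```rust
use alloy_core::primitives::{Bytes, PrimitiveSignature, TxKind, U256};
use alloy_consensus::{SignableTransaction, Signed, TxLegacy};

// Build and sign a contract-creation transaction with the alloy 0.7 field types
fn deploy_tx(bytecode: Bytes, r: [u8; 32], s: [u8; 32]) -> Signed<TxLegacy> {
  let tx = TxLegacy {
    chain_id: None,
    nonce: 0,
    // 100 gwei; gas_price remains a u128
    gas_price: 100_000_000_000u128,
    // Now a u64; this was a u128 under alloy 0.3
    gas_limit: 1_000_000u64,
    to: TxKind::Create,
    value: U256::ZERO,
    input: bytecode,
  };
  // Parity is now a plain bool, so this constructor is infallible (no `.unwrap()`)
  let signature = PrimitiveSignature::from_scalars_and_parity(r.into(), s.into(), false);
  tx.into_signed(signature)
}
```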
@@ -34,8 +34,8 @@ pub fn deterministically_sign(tx: &TxLegacy) -> Signed { // Create the signature let r_bytes: [u8; 32] = r.to_repr().into(); let s_bytes: [u8; 32] = s.to_repr().into(); - let v = Parity::NonEip155(false); - let signature = Signature::from_scalars_and_parity(r_bytes.into(), s_bytes.into(), v).unwrap(); + let signature = + PrimitiveSignature::from_scalars_and_parity(r_bytes.into(), s_bytes.into(), false); // Check if this is a valid signature let tx = tx.clone().into_signed(signature); diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index 32f112c9..6d83d632 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum authors = ["Luke Parker "] edition = "2021" publish = false -rust-version = "1.79" +rust-version = "1.81" [package.metadata.docs.rs] all-features = true @@ -24,12 +24,12 @@ alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } alloy-sol-macro = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.3", default-features = false } +alloy-consensus = { version = "0.7", default-features = false } -alloy-rpc-types-eth = { version = "0.3", default-features = false } -alloy-transport = { version = "0.3", default-features = false } +alloy-rpc-types-eth = { version = "0.7", default-features = false } +alloy-transport = { version = "0.7", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.3", default-features = false } +alloy-provider = { version = "0.7", default-features = false } ethereum-schnorr = { package = "ethereum-schnorr-contract", path = "../../../networks/ethereum/schnorr", default-features = false } @@ -53,8 +53,8 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] } k256 = { version = "0.13", default-features = false, features = ["std"] } -alloy-rpc-client = { version = "0.3", default-features = false } -alloy-node-bindings = { version = "0.3", default-features = false } +alloy-rpc-client = { version = "0.7", default-features = false } +alloy-node-bindings = { version = "0.7", default-features = false } tokio = { version = "1.0", default-features = false, features = ["rt-multi-thread", "macros"] } diff --git a/processor/ethereum/router/build.rs b/processor/ethereum/router/build.rs index dd52985d..26a2bee6 100644 --- a/processor/ethereum/router/build.rs +++ b/processor/ethereum/router/build.rs @@ -26,6 +26,13 @@ fn main() { fs::create_dir(&artifacts_path).unwrap(); } + build_solidity_contracts::build( + &["../../../networks/ethereum/schnorr/contracts", "../erc20/contracts", "contracts"], + "contracts", + &artifacts_path, + ) + .unwrap(); + // This cannot be handled with the sol! macro. 
The Router requires an import // https://github.com/alloy-rs/core/issues/602 sol( diff --git a/processor/ethereum/router/contracts/IRouter.sol b/processor/ethereum/router/contracts/IRouter.sol index 196263d6..cf83bc51 100644 --- a/processor/ethereum/router/contracts/IRouter.sol +++ b/processor/ethereum/router/contracts/IRouter.sol @@ -5,6 +5,11 @@ pragma solidity ^0.8.26; /// @author Luke Parker /// @notice Intakes coins for the Serai network and handles relaying batches of transfers out interface IRouterWithoutCollisions { + /// @notice Emitted when the next key for Serai's Ethereum validators is set + /// @param nonce The nonce consumed to update this key + /// @param key The key updated to + event NextSeraiKeySet(uint256 indexed nonce, bytes32 indexed key); + /// @notice Emitted when the key for Serai's Ethereum validators is updated /// @param nonce The nonce consumed to update this key /// @param key The key updated to @@ -39,6 +44,9 @@ interface IRouterWithoutCollisions { /// @param coin The coin which escaped event Escaped(address indexed coin); + /// @notice The key for Serai was invalid + /// @dev This check is incomplete and not guaranteed to be thrown upon every invalid key + error InvalidSeraiKey(); /// @notice The contract has had its escape hatch invoked and won't accept further actions error EscapeHatchInvoked(); /// @notice The signature was invalid @@ -86,8 +94,15 @@ interface IRouterWithoutCollisions { /// @return The next nonce to use by an action published to this contract function nextNonce() external view returns (uint256); + /// @notice Fetch the next key for Serai's Ethereum validator set + /// @return The next key for Serai's Ethereum validator set or bytes32(0) if none is currently set + function nextSeraiKey() external view returns (bytes32); + /// @notice Fetch the current key for Serai's Ethereum validator set - /// @return The current key for Serai's Ethereum validator set + /** + * @return The current key for Serai's Ethereum validator set or bytes32(0) if none is currently + * set + */ function seraiKey() external view returns (bytes32); /// @notice Fetch the address escaped to @@ -134,15 +149,25 @@ interface IRouter is IRouterWithoutCollisions { } /// @notice Update the key representing Serai's Ethereum validators - /// @dev This assumes the key is correct. No checks on it are performed + /** + * @dev This does not validate the passed-in key as thoroughly as possible. This is accepted, as + * the key won't actually be rotated to until it provides a signature confirming the update + * (proving signatures can be made by the key in question and verified via our Schnorr + * contract).
+ */ /// @param signature The signature by the current key authorizing this update - /// @param newSeraiKey The key to update to - function updateSeraiKey(Signature calldata signature, bytes32 newSeraiKey) external; + /// @param nextSeraiKeyVar The key to update to, once it confirms the update + function updateSeraiKey(Signature calldata signature, bytes32 nextSeraiKeyVar) external; + + /// @notice Confirm the next key representing Serai's Ethereum validators, updating to it + /// @param signature The signature by the next key confirming its validity + function confirmNextSeraiKey(Signature calldata signature) external; /// @notice Execute a batch of `OutInstruction`s /** * @dev All `OutInstruction`s in a batch are only for a single coin to simplify handling of the - * fee + * fee */ /// @param signature The signature by the current key for Serai's Ethereum validators /// @param coin The coin all of these `OutInstruction`s are for diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 0eb31176..5d8211da 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -50,6 +50,12 @@ contract Router is IRouterWithoutCollisions { */ uint256 private _nextNonce; + /** + * @dev The next public key for Serai's Ethereum validator set, in the form the Schnorr library + * expects + */ + bytes32 private _nextSeraiKey; + /** * @dev The current public key for Serai's Ethereum validator set, in the form the Schnorr library * expects @@ -59,12 +65,16 @@ contract Router is IRouterWithoutCollisions { /// @dev The address escaped to address private _escapedTo; - /// @dev Updates the Serai key. This does not update `_nextNonce` - /// @param nonceUpdatedWith The nonce used to update the key - /// @param newSeraiKey The key updated to - function _updateSeraiKey(uint256 nonceUpdatedWith, bytes32 newSeraiKey) private { - _seraiKey = newSeraiKey; - emit SeraiKeyUpdated(nonceUpdatedWith, newSeraiKey); + /// @dev Set the next Serai key. This does not read from/write to `_nextNonce` + /// @param nonceUpdatedWith The nonce used to set the next key + /// @param nextSeraiKeyVar The key to set as next + function _setNextSeraiKey(uint256 nonceUpdatedWith, bytes32 nextSeraiKeyVar) private { + // Explicitly disallow 0 so we can always consider 0 as None and non-zero as Some + if (nextSeraiKeyVar == bytes32(0)) { + revert InvalidSeraiKey(); + } + _nextSeraiKey = nextSeraiKeyVar; + emit NextSeraiKeySet(nonceUpdatedWith, nextSeraiKeyVar); } /// @notice The constructor for the relayer @@ -74,8 +84,10 @@ contract Router is IRouterWithoutCollisions { // This is incompatible with any networks which don't have their nonces start at 0 _smartContractNonce = 1; - // Set the Serai key - _updateSeraiKey(0, initialSeraiKey); + // Set the next Serai key + _setNextSeraiKey(0, initialSeraiKey); + // Set the current Serai key to None + _seraiKey = bytes32(0); // We just consumed nonce 0 when setting the initial Serai key _nextNonce = 1; @@ -90,7 +102,7 @@ contract Router is IRouterWithoutCollisions { * calldata should be signed with the nonce taking the place of the signature's commitment to * its nonce, and the signature solution zeroed.
*/ - function verifySignature() + function verifySignature(bytes32 key) private returns (uint256 nonceUsed, bytes memory message, bytes32 messageHash) { @@ -99,6 +111,15 @@ contract Router is IRouterWithoutCollisions { revert EscapeHatchInvoked(); } + /* + If this key isn't set, reject it. + + The Schnorr contract should already reject this public key, yet it's best to be explicit. + */ + if (key == bytes32(0)) { + revert InvalidSignature(); + } + message = msg.data; uint256 messageLen = message.length; /* @@ -134,7 +155,7 @@ contract Router is IRouterWithoutCollisions { } // Verify the signature - if (!Schnorr.verify(_seraiKey, messageHash, signatureC, signatureS)) { + if (!Schnorr.verify(key, messageHash, signatureC, signatureS)) { revert InvalidSignature(); } @@ -178,22 +199,38 @@ contract Router is IRouterWithoutCollisions { } } - /// @notice Update the key representing Serai's Ethereum validators + /// @notice Start updating the key representing Serai's Ethereum validators /** - * @dev This assumes the key is correct. No checks on it are performed. + * @dev This does not validate the passed-in key as thoroughly as possible. This is accepted, + * however, as the key won't actually be rotated to until it provides a signature confirming the + * update (proving signatures can be made by the key in question and verified via our Schnorr + * contract). * * The hex bytes are to cause a collision with `IRouter.updateSeraiKey`. */ // @param signature The signature by the current key authorizing this update - // @param newSeraiKey The key to update to + // @param nextSeraiKey The key to update to function updateSeraiKey5A8542A2() external { - (uint256 nonceUsed, bytes memory args,) = verifySignature(); + (uint256 nonceUsed, bytes memory args,) = verifySignature(_seraiKey); /* We could replace this with a length check (if we don't simply assume the calldata is valid as it was properly signed) + mload to save 24 gas but it's not worth the complexity. */ - (,, bytes32 newSeraiKey) = abi.decode(args, (bytes32, bytes32, bytes32)); - _updateSeraiKey(nonceUsed, newSeraiKey); + (,, bytes32 nextSeraiKeyVar) = abi.decode(args, (bytes32, bytes32, bytes32)); + _setNextSeraiKey(nonceUsed, nextSeraiKeyVar); + } + + /// @notice Confirm the next key representing Serai's Ethereum validators, updating to it + /// @dev The hex bytes are to cause a collision with `IRouter.confirmNextSeraiKey`.
+ // @param signature The signature by the next key confirming its validity + function confirmNextSeraiKey34AC53AC() external { + // Checks + bytes32 nextSeraiKeyVar = _nextSeraiKey; + (uint256 nonceUsed,,) = verifySignature(nextSeraiKeyVar); + // Effects + _nextSeraiKey = bytes32(0); + _seraiKey = nextSeraiKeyVar; + emit SeraiKeyUpdated(nonceUsed, nextSeraiKeyVar); } /// @notice Transfer coins into Serai with an instruction @@ -384,7 +421,7 @@ contract Router is IRouterWithoutCollisions { revert ReenteredExecute(); } - (uint256 nonceUsed, bytes memory args, bytes32 message) = verifySignature(); + (uint256 nonceUsed, bytes memory args, bytes32 message) = verifySignature(_seraiKey); (,, address coin, uint256 fee, IRouter.OutInstruction[] memory outs) = abi.decode(args, (bytes32, bytes32, address, uint256, IRouter.OutInstruction[])); @@ -481,7 +518,7 @@ contract Router is IRouterWithoutCollisions { // @param escapeTo The address to escape to function escapeHatchDCDD91CC() external { // Verify the signature - (, bytes memory args,) = verifySignature(); + (, bytes memory args,) = verifySignature(_seraiKey); (,, address escapeTo) = abi.decode(args, (bytes32, bytes32, address)); @@ -526,8 +563,17 @@ contract Router is IRouterWithoutCollisions { return _nextNonce; } + /// @notice Fetch the next key for Serai's Ethereum validator set + /// @return The next key for Serai's Ethereum validator set or bytes32(0) if none is currently set + function nextSeraiKey() external view returns (bytes32) { + return _nextSeraiKey; + } + /// @notice Fetch the current key for Serai's Ethereum validator set - /// @return The current key for Serai's Ethereum validator set + /** + * @return The current key for Serai's Ethereum validator set or bytes32(0) if none is currently + * set + */ function seraiKey() external view returns (bytes32) { return _seraiKey; } diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index 71b4bca4..e28fb2f5 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -275,6 +275,11 @@ impl Executed { #[derive(Clone, Debug)] pub struct Router(Arc<RootProvider<SimpleRequest>>, Address); impl Router { + const DEPLOYMENT_GAS: u64 = 995_000; + const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 58_000; + const UPDATE_SERAI_KEY_GAS: u64 = 61_000; + const EXECUTE_BASE_GAS: u64 = 48_000; + fn code() -> Vec<u8> { const BYTECODE: &[u8] = include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-router/Router.bin")); @@ -293,7 +298,7 @@ impl Router { /// This transaction assumes the `Deployer` has already been deployed. pub fn deployment_tx(initial_serai_key: &PublicKey) -> TxLegacy { let mut tx = Deployer::deploy_tx(Self::init_code(initial_serai_key)); - tx.gas_limit = 883654 * 120 / 100; + tx.gas_limit = Self::DEPLOYMENT_GAS * 120 / 100; tx } @@ -322,6 +327,25 @@ impl Router { self.1 } + /// Get the message to be signed in order to confirm the next key for Serai. + pub fn confirm_next_serai_key_message(nonce: u64) -> Vec<u8> { + abi::confirmNextSeraiKeyCall::new((abi::Signature { + c: U256::try_from(nonce).unwrap().into(), + s: U256::ZERO.into(), + },)) + .abi_encode() + } + + /// Construct a transaction to confirm the next key representing Serai.
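+ /// + /// The returned transaction is unsigned and has its gas price left at the default; the caller + /// is expected to set a gas price and sign it (as the tests do) before publishing.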
+ pub fn confirm_next_serai_key(&self, sig: &Signature) -> TxLegacy { + TxLegacy { + to: TxKind::Call(self.1), + input: abi::confirmNextSeraiKeyCall::new((abi::Signature::from(sig),)).abi_encode().into(), + gas_limit: Self::CONFIRM_NEXT_SERAI_KEY_GAS * 120 / 100, + ..Default::default() + } + } + /// Get the message to be signed in order to update the key for Serai. pub fn update_serai_key_message(nonce: u64, key: &PublicKey) -> Vec<u8> { abi::updateSeraiKeyCall::new(( @@ -341,7 +365,7 @@ impl Router { )) .abi_encode() .into(), - gas_limit: 40_889 * 120 / 100, + gas_limit: Self::UPDATE_SERAI_KEY_GAS * 120 / 100, ..Default::default() } } @@ -359,14 +383,14 @@ impl Router { /// Construct a transaction to execute a batch of `OutInstruction`s. pub fn execute(&self, coin: Coin, fee: U256, outs: OutInstructions, sig: &Signature) -> TxLegacy { - let outs_len = outs.0.len(); + // TODO + let gas_limit = Self::EXECUTE_BASE_GAS + outs.0.iter().map(|_| 200_000 + 10_000).sum::<u64>(); TxLegacy { to: TxKind::Call(self.1), input: abi::executeCall::new((abi::Signature::from(sig), coin.address(), fee, outs.0)) .abi_encode() .into(), - // TODO - gas_limit: (45_501 + ((200_000 + 10_000) * u128::try_from(outs_len).unwrap())) * 120 / 100, + gas_limit: gas_limit * 120 / 100, ..Default::default() } } @@ -536,7 +560,7 @@ impl Router { res.push(Executed::SetKey { nonce: log.nonce.try_into().map_err(|e| { - TransportErrorKind::Custom(format!("filtered to convert nonce to u64: {e:?}").into()) + TransportErrorKind::Custom(format!("failed to convert nonce to u64: {e:?}").into()) })?, key: log.key.into(), }); @@ -568,7 +592,7 @@ impl Router { res.push(Executed::Batch { nonce: log.nonce.try_into().map_err(|e| { - TransportErrorKind::Custom(format!("filtered to convert nonce to u64: {e:?}").into()) + TransportErrorKind::Custom(format!("failed to convert nonce to u64: {e:?}").into()) })?, message_hash: log.messageHash.into(), }); @@ -580,19 +604,40 @@ impl Router { Ok(res) } - /// Fetch the current key for Serai's Ethereum validators - pub async fn key(&self, block: BlockId) -> Result<PublicKey, RpcError<TransportErrorKind>> { - let call = TransactionRequest::default() - .to(self.1) - .input(TransactionInput::new(abi::seraiKeyCall::new(()).abi_encode().into())); + async fn fetch_key( + &self, + block: BlockId, + call: Vec<u8>, + ) -> Result<Option<PublicKey>, RpcError<TransportErrorKind>> { + let call = TransactionRequest::default().to(self.1).input(TransactionInput::new(call.into())); let bytes = self.0.call(&call).block(block).await?; - let res = abi::seraiKeyCall::abi_decode_returns(&bytes, true) - .map_err(|e| TransportErrorKind::Custom(format!("filtered to decode key: {e:?}").into()))?; - Ok( - PublicKey::from_eth_repr(res._0.into()).ok_or_else(|| { + // This is fine as both key calls share a return type + let res = abi::nextSeraiKeyCall::abi_decode_returns(&bytes, true) + .map_err(|e| TransportErrorKind::Custom(format!("failed to decode key: {e:?}").into()))?; + let eth_repr = <[u8; 32]>::from(res._0); + Ok(if eth_repr == [0; 32] { + None + } else { + Some(PublicKey::from_eth_repr(eth_repr).ok_or_else(|| { TransportErrorKind::Custom("invalid key set on router".to_string().into()) - })?, - ) + })?)
+ }) + } + + /// Fetch the next key for Serai's Ethereum validators + pub async fn next_key( + &self, + block: BlockId, + ) -> Result<Option<PublicKey>, RpcError<TransportErrorKind>> { + self.fetch_key(block, abi::nextSeraiKeyCall::new(()).abi_encode()).await + } + + /// Fetch the current key for Serai's Ethereum validators + pub async fn key( + &self, + block: BlockId, + ) -> Result<Option<PublicKey>, RpcError<TransportErrorKind>> { + self.fetch_key(block, abi::seraiKeyCall::new(()).abi_encode()).await } /// Fetch the nonce of the next action to execute @@ -602,7 +647,7 @@ impl Router { .input(TransactionInput::new(abi::nextNonceCall::new(()).abi_encode().into())); let bytes = self.0.call(&call).block(block).await?; let res = abi::nextNonceCall::abi_decode_returns(&bytes, true) - .map_err(|e| TransportErrorKind::Custom(format!("filtered to decode nonce: {e:?}").into()))?; + .map_err(|e| TransportErrorKind::Custom(format!("failed to decode nonce: {e:?}").into()))?; Ok(u64::try_from(res._0).map_err(|_| { TransportErrorKind::Custom("nonce returned exceeded 2**64".to_string().into()) })?) @@ -615,7 +660,7 @@ impl Router { .input(TransactionInput::new(abi::escapedToCall::new(()).abi_encode().into())); let bytes = self.0.call(&call).block(block).await?; let res = abi::escapedToCall::abi_decode_returns(&bytes, true).map_err(|e| { - TransportErrorKind::Custom(format!("filtered to decode the address escaped to: {e:?}").into()) + TransportErrorKind::Custom(format!("failed to decode the address escaped to: {e:?}").into()) })?; Ok(res._0) } diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 2da5422d..107723f8 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -37,13 +37,17 @@ fn execute_reentrancy_guard() { #[test] fn selector_collisions() { assert_eq!( - crate::_irouter_abi::IRouter::executeCall::SELECTOR, - crate::_router_abi::Router::execute4DE42904Call::SELECTOR + crate::_irouter_abi::IRouter::confirmNextSeraiKeyCall::SELECTOR, + crate::_router_abi::Router::confirmNextSeraiKey34AC53ACCall::SELECTOR ); assert_eq!( crate::_irouter_abi::IRouter::updateSeraiKeyCall::SELECTOR, crate::_router_abi::Router::updateSeraiKey5A8542A2Call::SELECTOR ); + assert_eq!( + crate::_irouter_abi::IRouter::executeCall::SELECTOR, + crate::_router_abi::Router::execute4DE42904Call::SELECTOR + ); assert_eq!( crate::_irouter_abi::IRouter::escapeHatchCall::SELECTOR, crate::_router_abi::Router::escapeHatchDCDD91CCCall::SELECTOR @@ -78,13 +82,13 @@ async fn setup_test( // Get the TX to deploy the Router let mut tx = Router::deployment_tx(&public_key); // Set a gas price (100 gwei) - tx.gas_price = 100_000_000_000u128; + tx.gas_price = 100_000_000_000; // Sign it let tx = ethereum_primitives::deterministically_sign(&tx); // Publish it let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await; assert!(receipt.status()); - println!("Router deployment used {} gas:", receipt.gas_used); + assert_eq!(u128::from(Router::DEPLOYMENT_GAS), ((receipt.gas_used + 1000) / 1000) * 1000); let router = Router::new(provider.clone(), &public_key).await.unwrap().unwrap(); @@ -94,7 +98,8 @@ async fn setup_test( #[tokio::test] async fn test_constructor() { let (_anvil, _provider, router, key) = setup_test().await; - assert_eq!(router.key(BlockNumberOrTag::Latest.into()).await.unwrap(), key.1); + assert_eq!(router.next_key(BlockNumberOrTag::Latest.into()).await.unwrap(), Some(key.1)); + assert_eq!(router.key(BlockNumberOrTag::Latest.into()).await.unwrap(), None);
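+ // The constructor only queues `key.1` as the next key; it doesn't become the current key + // (and no actions are signable) until `confirmNextSeraiKey` is called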
assert_eq!(router.next_nonce(BlockNumberOrTag::Latest.into()).await.unwrap(), 1); assert_eq!( router.escaped_to(BlockNumberOrTag::Latest.into()).await.unwrap(), @@ -102,12 +107,54 @@ ); } +async fn confirm_next_serai_key( + provider: &Arc<RootProvider<SimpleRequest>>, + router: &Router, + nonce: u64, + key: (Scalar, PublicKey), +) -> TransactionReceipt { + let msg = Router::confirm_next_serai_key_message(nonce); + + let nonce = Scalar::random(&mut OsRng); + let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &key.1, &msg); + let s = nonce + (c * key.0); + + let sig = Signature::new(c, s).unwrap(); + + let mut tx = router.confirm_next_serai_key(&sig); + tx.gas_price = 100_000_000_000; + let tx = ethereum_primitives::deterministically_sign(&tx); + let receipt = ethereum_test_primitives::publish_tx(provider, tx).await; + assert!(receipt.status()); + assert_eq!( + u128::from(Router::CONFIRM_NEXT_SERAI_KEY_GAS), + ((receipt.gas_used + 1000) / 1000) * 1000 + ); + receipt +} + +#[tokio::test] +async fn test_confirm_next_serai_key() { + let (_anvil, provider, router, key) = setup_test().await; + + assert_eq!(router.next_key(BlockNumberOrTag::Latest.into()).await.unwrap(), Some(key.1)); + assert_eq!(router.key(BlockNumberOrTag::Latest.into()).await.unwrap(), None); + assert_eq!(router.next_nonce(BlockNumberOrTag::Latest.into()).await.unwrap(), 1); + + let receipt = confirm_next_serai_key(&provider, &router, 1, key).await; + + assert_eq!(router.next_key(receipt.block_hash.unwrap().into()).await.unwrap(), None); + assert_eq!(router.key(receipt.block_hash.unwrap().into()).await.unwrap(), Some(key.1)); + assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 2); +} + #[tokio::test] async fn test_update_serai_key() { let (_anvil, provider, router, key) = setup_test().await; + confirm_next_serai_key(&provider, &router, 1, key).await; let update_to = test_key().1; - let msg = Router::update_serai_key_message(1, &update_to); + let msg = Router::update_serai_key_message(2, &update_to); let nonce = Scalar::random(&mut OsRng); let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &key.1, &msg); @@ -116,19 +163,22 @@ let sig = Signature::new(c, s).unwrap(); let mut tx = router.update_serai_key(&update_to, &sig); - tx.gas_price = 100_000_000_000u128; + tx.gas_price = 100_000_000_000; let tx = ethereum_primitives::deterministically_sign(&tx); let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await; assert!(receipt.status()); - println!("update_serai_key used {} gas:", receipt.gas_used); + assert_eq!(u128::from(Router::UPDATE_SERAI_KEY_GAS), ((receipt.gas_used + 1000) / 1000) * 1000); - assert_eq!(router.key(receipt.block_hash.unwrap().into()).await.unwrap(), update_to); - assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 2); + assert_eq!(router.key(receipt.block_hash.unwrap().into()).await.unwrap(), Some(key.1)); + assert_eq!(router.next_key(receipt.block_hash.unwrap().into()).await.unwrap(), Some(update_to)); + assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 3); } #[tokio::test] async fn test_eth_in_instruction() { - let (_anvil, provider, router, _key) = setup_test().await; + let (_anvil, provider, router, key) = setup_test().await; + // TODO: Do we want to allow InInstructions before any key has been confirmed?
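+ // For now, confirm the initial key (consuming nonce 1, as the other tests do) so the Router + // is fully initialized before the InInstruction is sent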
+ confirm_next_serai_key(&provider, &router, 1, key).await; let amount = U256::try_from(OsRng.next_u64()).unwrap(); let mut in_instruction = vec![0; usize::try_from(OsRng.next_u64() % 256).unwrap()]; @@ -138,8 +188,8 @@ async fn test_eth_in_instruction() { chain_id: None, nonce: 0, // 100 gwei - gas_price: 100_000_000_000u128, - gas_limit: 1_000_000u128, + gas_price: 100_000_000_000, + gas_limit: 1_000_000, to: TxKind::Call(router.address()), value: amount, input: crate::abi::inInstructionCall::new(( @@ -200,7 +250,7 @@ async fn publish_outs( let sig = Signature::new(c, s).unwrap(); let mut tx = router.execute(coin, fee, outs, &sig); - tx.gas_price = 100_000_000_000u128; + tx.gas_price = 100_000_000_000; let tx = ethereum_primitives::deterministically_sign(&tx); ethereum_test_primitives::publish_tx(provider, tx).await } @@ -208,6 +258,7 @@ async fn publish_outs( #[tokio::test] async fn test_eth_address_out_instruction() { let (_anvil, provider, router, key) = setup_test().await; + confirm_next_serai_key(&provider, &router, 1, key).await; let mut amount = U256::try_from(OsRng.next_u64()).unwrap(); let mut fee = U256::try_from(OsRng.next_u64()).unwrap(); @@ -218,11 +269,11 @@ async fn test_eth_address_out_instruction() { ethereum_test_primitives::fund_account(&provider, router.address(), amount).await; let instructions = OutInstructions::from([].as_slice()); - let receipt = publish_outs(&provider, &router, key, 1, Coin::Ether, fee, instructions).await; + let receipt = publish_outs(&provider, &router, key, 2, Coin::Ether, fee, instructions).await; assert!(receipt.status()); - println!("empty execute used {} gas:", receipt.gas_used); + assert_eq!(u128::from(Router::EXECUTE_BASE_GAS), ((receipt.gas_used + 1000) / 1000) * 1000); - assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 2); + assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 3); } #[tokio::test] diff --git a/processor/ethereum/test-primitives/Cargo.toml b/processor/ethereum/test-primitives/Cargo.toml index 54bc6850..3f025662 100644 --- a/processor/ethereum/test-primitives/Cargo.toml +++ b/processor/ethereum/test-primitives/Cargo.toml @@ -7,6 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/processor/ethereum authors = ["Luke Parker "] edition = "2021" publish = false +rust-version = "1.81" [package.metadata.docs.rs] all-features = true @@ -19,10 +20,10 @@ workspace = true k256 = { version = "0.13", default-features = false, features = ["std"] } alloy-core = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.3", default-features = false, features = ["std"] } +alloy-consensus = { version = "0.7", default-features = false, features = ["std"] } -alloy-rpc-types-eth = { version = "0.3", default-features = false } +alloy-rpc-types-eth = { version = "0.7", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.3", default-features = false } +alloy-provider = { version = "0.7", default-features = false } ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false } diff --git a/processor/ethereum/test-primitives/src/lib.rs b/processor/ethereum/test-primitives/src/lib.rs index b91ba97f..9f43d0a2 100644 --- a/processor/ethereum/test-primitives/src/lib.rs +++ b/processor/ethereum/test-primitives/src/lib.rs @@ -5,7 +5,7 @@ use 
k256::{elliptic_curve::sec1::ToEncodedPoint, ProjectivePoint}; use alloy_core::{ - primitives::{Address, U256, Bytes, Signature, TxKind}, + primitives::{Address, U256, Bytes, PrimitiveSignature, TxKind}, hex::FromHex, }; use alloy_consensus::{SignableTransaction, TxLegacy, Signed}; @@ -46,7 +46,7 @@ pub async fn publish_tx( let (tx, sig, _) = tx.into_parts(); let mut bytes = vec![]; - tx.encode_with_signature_fields(&sig, &mut bytes); + tx.into_signed(sig).eip2718_encode(&mut bytes); let pending_tx = provider.send_raw_transaction(&bytes).await.unwrap(); pending_tx.get_receipt().await.unwrap() } @@ -111,7 +111,7 @@ pub async fn send( ); let mut bytes = vec![]; - tx.encode_with_signature_fields(&Signature::from(sig), &mut bytes); + tx.into_signed(PrimitiveSignature::from(sig)).eip2718_encode(&mut bytes); let pending_tx = provider.send_raw_transaction(&bytes).await.unwrap(); pending_tx.get_receipt().await.unwrap() } From 18897978d0dac84c862ea8926b6b42485d544897 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 8 Dec 2024 21:55:37 -0500 Subject: [PATCH 201/368] thiserror 2.0, cargo update --- Cargo.lock | 1075 ++++++++--------- common/std-shims/Cargo.toml | 2 +- coordinator/Cargo.toml | 1 + coordinator/tributary/Cargo.toml | 3 +- coordinator/tributary/tendermint/Cargo.toml | 3 +- crypto/dkg/Cargo.toml | 6 +- crypto/dkg/src/lib.rs | 4 +- crypto/dleq/Cargo.toml | 6 +- crypto/dleq/src/cross_group/mod.rs | 2 +- crypto/evrf/embedwards25519/Cargo.toml | 2 +- crypto/frost/Cargo.toml | 2 +- networks/bitcoin/Cargo.toml | 4 +- .../monero/ringct/bulletproofs/Cargo.toml | 4 +- .../monero/ringct/bulletproofs/src/lib.rs | 7 +- networks/monero/ringct/clsag/Cargo.toml | 4 +- networks/monero/ringct/clsag/src/lib.rs | 17 +- networks/monero/ringct/mlsag/Cargo.toml | 4 +- networks/monero/ringct/mlsag/src/lib.rs | 13 +- networks/monero/rpc/Cargo.toml | 4 +- networks/monero/rpc/src/lib.rs | 19 +- networks/monero/wallet/Cargo.toml | 4 +- networks/monero/wallet/address/Cargo.toml | 4 +- networks/monero/wallet/address/src/lib.rs | 18 +- networks/monero/wallet/polyseed/Cargo.toml | 4 +- networks/monero/wallet/polyseed/src/lib.rs | 11 +- networks/monero/wallet/seed/Cargo.toml | 4 +- networks/monero/wallet/seed/src/lib.rs | 9 +- networks/monero/wallet/src/scan.rs | 7 +- networks/monero/wallet/src/send/mod.rs | 41 +- networks/monero/wallet/src/view_pair.rs | 5 +- networks/monero/wallet/util/Cargo.toml | 4 +- networks/monero/wallet/util/src/seed.rs | 11 +- substrate/client/Cargo.toml | 4 +- substrate/client/src/serai/mod.rs | 2 +- substrate/runtime/Cargo.toml | 2 +- 35 files changed, 645 insertions(+), 667 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 34f71890..12da7bf1 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -23,18 +23,18 @@ dependencies = [ [[package]] name = "addr2line" -version = "0.22.0" +version = "0.24.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6e4503c46a5c0c7844e948c9a4d6acd9f50cccb4de1c48eb9e291ea17470c678" +checksum = "dfbe277e56a376000877090da837660b4427aad530e3028d44e0bffe4f89a1c1" dependencies = [ - "gimli 0.29.0", + "gimli 0.31.1", ] [[package]] -name = "adler" -version = "1.0.2" +name = "adler2" +version = "2.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe" +checksum = "512761e0bb2578dd7380c6baaa0f4ce03e84f95e960231d1dec8bf4d7d6e2627" [[package]] name = "aead" @@ -95,16 +95,17 @@ dependencies = [ [[package]] name = "allocator-api2" -version = "0.2.18" 
+version = "0.2.21" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5c6cb57a04249c6480766f7f7cef5467412af1490f8d1e243141daddada3264f" +checksum = "683d7910e743518b0e34f1186f92494becacb047c7b6bf616c96772180fef923" [[package]] name = "alloy-chains" -version = "0.1.29" +version = "0.1.48" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bb07629a5d0645d29f68d2fb6f4d0cf15c89ec0965be915f303967180929743f" +checksum = "a0161082e0edd9013d23083465cc04b20e44b7a15646d36ba7b0cdb7cd6fe18f" dependencies = [ + "alloy-primitives", "num_enum", "strum 0.26.3", ] @@ -122,7 +123,7 @@ dependencies = [ "alloy-trie", "auto_impl", "c-kzg", - "derive_more 1.0.0", + "derive_more", "k256", "serde", ] @@ -143,9 +144,9 @@ dependencies = [ [[package]] name = "alloy-core" -version = "0.8.0" +version = "0.8.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8e6dbb79f4e3285cc87f50c0d4be9a3a812643623b2e3558d425b41cbd795ceb" +checksum = "c3d14d531c99995de71558e8e2206c27d709559ee8e5a0452b965ea82405a013" dependencies = [ "alloy-primitives", ] @@ -169,7 +170,7 @@ checksum = "4c986539255fb839d1533c128e190e557e52ff652c9ef62939e233a81dd93f7e" dependencies = [ "alloy-primitives", "alloy-rlp", - "derive_more 1.0.0", + "derive_more", "k256", "serde", ] @@ -186,7 +187,7 @@ dependencies = [ "alloy-rlp", "alloy-serde", "c-kzg", - "derive_more 1.0.0", + "derive_more", "once_cell", "serde", "sha2", @@ -283,11 +284,11 @@ dependencies = [ "bytes", "cfg-if", "const-hex", - "derive_more 1.0.0", + "derive_more", "foldhash", "hashbrown 0.15.2", "hex-literal", - "indexmap 2.5.0", + "indexmap 2.7.0", "itoa", "k256", "keccak-asm", @@ -403,7 +404,7 @@ dependencies = [ "alloy-rlp", "alloy-serde", "alloy-sol-types", - "derive_more 1.0.0", + "derive_more", "itertools 0.13.0", "serde", "serde_json", @@ -468,7 +469,7 @@ dependencies = [ "alloy-sol-macro-input", "const-hex", "heck 0.5.0", - "indexmap 2.5.0", + "indexmap 2.7.0", "proc-macro-error2", "proc-macro2", "quote", @@ -542,7 +543,7 @@ dependencies = [ "alloy-primitives", "alloy-rlp", "arrayvec", - "derive_more 1.0.0", + "derive_more", "nybbles", "serde", "smallvec", @@ -575,9 +576,9 @@ dependencies = [ [[package]] name = "anstream" -version = "0.6.15" +version = "0.6.18" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "64e15c1ab1f89faffbf04a634d5e1962e9074f2741eef6d97f3c4e322426d526" +checksum = "8acc5369981196006228e28809f761875c0327210a891e941f4c683b3a99529b" dependencies = [ "anstyle", "anstyle-parse", @@ -590,43 +591,43 @@ dependencies = [ [[package]] name = "anstyle" -version = "1.0.8" +version = "1.0.10" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1bec1de6f59aedf83baf9ff929c98f2ad654b97c9510f4e70cf6f661d49fd5b1" +checksum = "55cc3b69f167a1ef2e161439aa98aed94e6028e5f9a59be9a6ffb47aef1651f9" [[package]] name = "anstyle-parse" -version = "0.2.5" +version = "0.2.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "eb47de1e80c2b463c735db5b217a0ddc39d612e7ac9e2e96a5aed1f57616c1cb" +checksum = "3b2d16507662817a6a20a9ea92df6652ee4f94f914589377d69f3b21bc5798a9" dependencies = [ "utf8parse", ] [[package]] name = "anstyle-query" -version = "1.1.1" +version = "1.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6d36fc52c7f6c869915e99412912f22093507da8d9e942ceaf66fe4b7c14422a" +checksum = "79947af37f4177cfead1110013d678905c37501914fba0efea834c3fe9a8d60c" dependencies = [ - "windows-sys 0.52.0", 
+ "windows-sys 0.59.0", ] [[package]] name = "anstyle-wincon" -version = "3.0.4" +version = "3.0.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5bf74e1b6e971609db8ca7a9ce79fd5768ab6ae46441c572e46cf596f59e57f8" +checksum = "2109dbce0e72be3ec00bed26e6a7479ca384ad226efdd66db8fa2e3a38c83125" dependencies = [ "anstyle", - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] name = "anyhow" -version = "1.0.86" +version = "1.0.94" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b3d1d046238990b9cf5bcde22a3fb3584ee5cf65fb2765f454ed428c7a0063da" +checksum = "c1fd03a028ef38ba2276dce7e33fcd6369c158a1bca17946c4b1b701891c1ff7" [[package]] name = "approx" @@ -639,9 +640,9 @@ dependencies = [ [[package]] name = "arbitrary" -version = "1.3.2" +version = "1.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7d5a26814d8dcb93b0e5a0ff3c6d80a8843bafb21b39e8e18a6f05471870e110" +checksum = "dde20b3d026af13f561bdd0f15edf01fc734f0dafcedbaf42bba506a9517f223" [[package]] name = "ark-ff" @@ -775,9 +776,9 @@ checksum = "5d5dde061bd34119e902bbb2d9b90c5692635cf59fb91d582c2b68043f1b8293" [[package]] name = "arrayref" -version = "0.3.8" +version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9d151e35f61089500b617991b791fc8bfd237ae50cd5950803758a179b41e67a" +checksum = "76a2e8124351fda1ef8aaaa3bbd7ebbcb486bbcd4225aca0aa0d84bb2db8fecb" [[package]] name = "arrayvec" @@ -800,7 +801,7 @@ dependencies = [ "nom", "num-traits", "rusticata-macros", - "thiserror 1.0.63", + "thiserror 1.0.69", "time", ] @@ -840,9 +841,9 @@ dependencies = [ [[package]] name = "async-io" -version = "2.3.4" +version = "2.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "444b0228950ee6501b3568d3c93bf1176a1fdbc3b758dcd9475046d30f4dc7e8" +checksum = "43a2b323ccce0a1d90b449fd71f2a06ca7faa7c54c2751f06c9bd851fc061059" dependencies = [ "async-lock", "cfg-if", @@ -870,9 +871,9 @@ dependencies = [ [[package]] name = "async-stream" -version = "0.3.5" +version = "0.3.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cd56dd203fef61ac097dd65721a419ddccb106b2d2b70ba60a6b529f03961a51" +checksum = "0b5a71a6f37880a80d1d7f19efd781e4b5de42c88f0722cc13bcb6cc2cfe8476" dependencies = [ "async-stream-impl", "futures-core", @@ -881,9 +882,9 @@ dependencies = [ [[package]] name = "async-stream-impl" -version = "0.3.5" +version = "0.3.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "16e62a023e7c117e27523144c5d2459f4397fcc3cab0085af8e2224f643a0193" +checksum = "c7c24de15d275a1ecfd47a380fb4d5ec9bfe0933f309ed5e705b775596a3574d" dependencies = [ "proc-macro2", "quote", @@ -892,9 +893,9 @@ dependencies = [ [[package]] name = "async-trait" -version = "0.1.82" +version = "0.1.83" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a27b8a3a6e1a44fa4c8baf1f653e4172e81486d4941f2237e20dc2d0cf4ddff1" +checksum = "721cae7de5c34fbb2acd27e21e6d2cf7b886dce0c27388d46c4e6c47ea4318dd" dependencies = [ "proc-macro2", "quote", @@ -938,23 +939,23 @@ dependencies = [ [[package]] name = "autocfg" -version = "1.3.0" +version = "1.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0c4b4d0bd25bd0b74681c0ad21497610ce1b7c91b1022cd21c80c6fbdd9476b0" +checksum = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26" [[package]] name = "backtrace" -version = "0.3.73" +version = "0.3.74" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "5cc23269a4f8976d0a4d2e7109211a419fe30e8d88d677cd60b6bc79c5732e0a" +checksum = "8d82cb332cdfaed17ae235a638438ac4d4839913cc2af585c3c6746e8f8bee1a" dependencies = [ - "addr2line 0.22.0", - "cc", + "addr2line 0.24.2", "cfg-if", "libc", "miniz_oxide", - "object 0.36.4", + "object 0.36.5", "rustc-demangle", + "windows-targets 0.52.6", ] [[package]] @@ -1029,14 +1030,14 @@ dependencies = [ [[package]] name = "bindgen" -version = "0.69.4" +version = "0.69.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a00dc851838a2120612785d195287475a3ac45514741da670b735818822129a0" +checksum = "271383c67ccabffb7381723dea0672a673f292304fcb45c01cc648c7a8d58088" dependencies = [ "bitflags 2.6.0", "cexpr", "clang-sys", - "itertools 0.12.1", + "itertools 0.10.5", "lazy_static", "lazycell", "proc-macro2", @@ -1064,9 +1065,9 @@ checksum = "349f9b6a179ed607305526ca489b34ad0a41aed5f7980fa90eb03160b69598fb" [[package]] name = "bitcoin" -version = "0.32.2" +version = "0.32.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ea507acc1cd80fc084ace38544bbcf7ced7c2aa65b653b102de0ce718df668f6" +checksum = "ce6bc65742dea50536e35ad42492b234c27904a27f0abdcbce605015cb4ea026" dependencies = [ "base58ck", "bech32", @@ -1091,9 +1092,9 @@ dependencies = [ [[package]] name = "bitcoin-io" -version = "0.1.2" +version = "0.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "340e09e8399c7bd8912f495af6aa58bea0c9214773417ffaa8f6460f93aaee56" +checksum = "0b47c4ab7a93edb0c7198c5535ed9b52b63095f4e9b45279c6736cec4b856baf" [[package]] name = "bitcoin-serai" @@ -1109,7 +1110,7 @@ dependencies = [ "serde_json", "simple-request", "std-shims", - "thiserror 1.0.63", + "thiserror 2.0.6", "tokio", "zeroize", ] @@ -1193,9 +1194,9 @@ dependencies = [ [[package]] name = "blake3" -version = "1.5.4" +version = "1.5.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d82033247fd8e890df8f740e407ad4d038debb9eb1f40533fffb32e7d17dc6f7" +checksum = "b8ee0c1824c4dea5b5f81736aff91bae041d2c07ee1192bec91054e10e3e601e" dependencies = [ "arrayref", "arrayvec", @@ -1268,7 +1269,7 @@ dependencies = [ "futures-core", "futures-util", "hex", - "http 1.1.0", + "http 1.2.0", "http-body-util", "hyper 1.4.1", "hyper-named-pipe", @@ -1281,7 +1282,7 @@ dependencies = [ "serde_json", "serde_repr", "serde_urlencoded", - "thiserror 1.0.63", + "thiserror 1.0.69", "tokio", "tokio-util", "tower-service", @@ -1302,26 +1303,25 @@ dependencies = [ [[package]] name = "borsh" -version = "1.5.0" +version = "1.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dbe5b10e214954177fb1dc9fbd20a1a2608fe99e6c832033bdc7cea287a20d77" +checksum = "2506947f73ad44e344215ccd6403ac2ae18cd8e046e581a441bf8d199f257f03" dependencies = [ "borsh-derive", - "cfg_aliases", + "cfg_aliases 0.2.1", ] [[package]] name = "borsh-derive" -version = "1.5.1" +version = "1.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c3ef8005764f53cd4dca619f5bf64cafd4664dada50ece25e4d81de54c80cc0b" +checksum = "c2593a3b8b938bd68373196c9832f516be11fa487ef4ae745eb282e6a56a7244" dependencies = [ "once_cell", "proc-macro-crate 3.2.0", "proc-macro2", "quote", "syn 2.0.90", - "syn_derive", ] [[package]] @@ -1347,9 +1347,9 @@ dependencies = [ [[package]] name = "bstr" -version = "1.10.0" +version = "1.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"40723b8fb387abc38f4f4a37c09073622e41dd12327033091ef8950659e6dc0c" +checksum = "1a68f1f47cdf0ec8ee4b941b2eee2a80cb796db73118c0dd09ac63fbe405be22" dependencies = [ "memchr", "serde", @@ -1382,9 +1382,9 @@ checksum = "c3ac9f8b63eca6fd385229b3675f6cc0dc5c8a5c8a54a59d4f52ffd670d87b0c" [[package]] name = "bytemuck" -version = "1.17.1" +version = "1.20.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "773d90827bc3feecfb67fab12e24de0749aad83c74b9504ecde46237b5cd24e2" +checksum = "8b37c88a63ffd85d15b406896cc343916d7cf57838a847b3a6f2ca5d39a5695a" [[package]] name = "byteorder" @@ -1394,9 +1394,9 @@ checksum = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b" [[package]] name = "bytes" -version = "1.7.1" +version = "1.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8318a53db07bb3f8dca91a600466bdb3f2eaadeedfdbcf02e1accbad9271ba50" +checksum = "325918d6fe32f23b19878fe4b34794ae41fc19ddbe53b10571a4874d44ffd39b" dependencies = [ "serde", ] @@ -1438,9 +1438,9 @@ dependencies = [ [[package]] name = "cargo-platform" -version = "0.1.8" +version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "24b1f0365a6c6bb4020cd05806fd0d33c44d38046b8bd7f0e40814b9763cabfc" +checksum = "e35af189006b9c0f00a064685c727031e3ed2d8020f7ba284d78cc2671bd36ea" dependencies = [ "serde", ] @@ -1456,14 +1456,14 @@ dependencies = [ "semver 1.0.23", "serde", "serde_json", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] name = "cc" -version = "1.1.16" +version = "1.2.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e9d013ecb737093c0e86b151a7b837993cf9ec6c502946cfb44bedc392421e0b" +checksum = "27f657647bcff5394bf56c7317665bbf790a137a50eaaa5c6bfbb9e27a518f2d" dependencies = [ "jobserver", "libc", @@ -1500,6 +1500,12 @@ version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fd16c4719339c4530435d38e511904438d07cce7950afa3718a84ac36c10e89e" +[[package]] +name = "cfg_aliases" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724" + [[package]] name = "chacha20" version = "0.9.1" @@ -1599,9 +1605,9 @@ dependencies = [ [[package]] name = "clap" -version = "4.5.17" +version = "4.5.23" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3e5a21b8495e732f1b3c364c9949b201ca7bae518c502c80256c96ad79eaf6ac" +checksum = "3135e7ec2ef7b10c6ed8950f0f792ed96ee093fa088608f1c76e569722700c84" dependencies = [ "clap_builder", "clap_derive", @@ -1609,9 +1615,9 @@ dependencies = [ [[package]] name = "clap_builder" -version = "4.5.17" +version = "4.5.23" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8cf2dd12af7a047ad9d6da2b6b249759a22a7abc0f474c1dae1777afa4b21a73" +checksum = "30582fc632330df2bd26877bde0c1f4470d57c582bbc070376afcd04d8cb4838" dependencies = [ "anstream", "anstyle", @@ -1621,9 +1627,9 @@ dependencies = [ [[package]] name = "clap_derive" -version = "4.5.13" +version = "4.5.18" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "501d359d5f3dcaf6ecdeee48833ae73ec6e42723a1e52419c79abf9507eec0a0" +checksum = "4ac6a0c7b1a9e9a5186361f67dfa1b88213572f427fb9ab038efb2bd8c582dab" dependencies = [ "heck 0.5.0", "proc-macro2", @@ -1633,9 +1639,9 @@ dependencies = [ [[package]] name = "clap_lex" -version = "0.7.2" +version = "0.7.4" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "1462739cb27611015575c0c11df5df7601141071f07518d56fcc1be504cbec97" +checksum = "f46ad14479a25103f283c0f10005961cf086d8dc42205bb44c46ac563475dca6" [[package]] name = "codespan-reporting" @@ -1649,9 +1655,9 @@ dependencies = [ [[package]] name = "colorchoice" -version = "1.0.2" +version = "1.0.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d3fd119d74b830634cea2a0f58bbd0d54540518a14397557951e79340abc28c0" +checksum = "5b63caa9aa9397e2d9480a9b13673856c78d8ac123288526c37d7839f2a86990" [[package]] name = "concurrent-queue" @@ -1717,6 +1723,16 @@ dependencies = [ "libc", ] +[[package]] +name = "core-foundation" +version = "0.10.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b55271e5c8c478ad3f38ad24ef34923091e0548492a266d19b3c0b4d82574c63" +dependencies = [ + "core-foundation-sys", + "libc", +] + [[package]] name = "core-foundation-sys" version = "0.8.7" @@ -1743,9 +1759,9 @@ dependencies = [ [[package]] name = "cpufeatures" -version = "0.2.13" +version = "0.2.16" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "51e852e6dc9a5bed1fae92dd2375037bf2b768725bf3be87811edee3249d09ad" +checksum = "16b80225097f2e5ae4e7179dd2266824648f3e2f49d9134d584b76389d31c4c3" dependencies = [ "libc", ] @@ -1961,13 +1977,14 @@ dependencies = [ [[package]] name = "cxx" -version = "1.0.128" +version = "1.0.131" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "54ccead7d199d584d139148b04b4a368d1ec7556a1d9ea2548febb1b9d49f9a4" +checksum = "2568d7d2cfc051e43414fe1ef80c712cbcd60c3624d1ad1cb4b2572324d0a5d9" dependencies = [ "cc", "cxxbridge-flags", "cxxbridge-macro", + "foldhash", "link-cplusplus", ] @@ -1988,18 +2005,19 @@ dependencies = [ [[package]] name = "cxxbridge-flags" -version = "1.0.128" +version = "1.0.131" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "65777e06cc48f0cb0152024c77d6cf9e4bdb4408e7b48bea993d42fa0f5b02b6" +checksum = "0c710c27f23b7fa00c23aaee9e6fd3e79a6dffc5f5c6217487ec5213f51296b7" [[package]] name = "cxxbridge-macro" -version = "1.0.128" +version = "1.0.131" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "98532a60dedaebc4848cb2cba5023337cc9ea3af16a5b062633fabfd9f18fb60" +checksum = "0aa53ef9fc54b986272efe83e257bbb417d1c3ceab1b732411d8c634fda572be" dependencies = [ "proc-macro2", "quote", + "rustversion", "syn 2.0.90", ] @@ -2124,17 +2142,6 @@ dependencies = [ "syn 1.0.109", ] -[[package]] -name = "derive_more" -version = "0.99.18" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5f33878137e4dafd7fa914ad4e259e18a4e8e532b9617a2d0150262bf53abfce" -dependencies = [ - "proc-macro2", - "quote", - "syn 2.0.90", -] - [[package]] name = "derive_more" version = "1.0.0" @@ -2250,7 +2257,7 @@ dependencies = [ "generalized-bulletproofs", "generalized-bulletproofs-circuit-abstraction", "generalized-bulletproofs-ec-gadgets", - "generic-array 1.1.0", + "generic-array 1.1.1", "multiexp", "pasta_curves", "rand", @@ -2259,7 +2266,7 @@ dependencies = [ "schnorr-signatures", "secq256k1", "std-shims", - "thiserror 1.0.63", + "thiserror 2.0.6", "zeroize", ] @@ -2278,7 +2285,7 @@ dependencies = [ "multiexp", "rand_core", "rustversion", - "thiserror 1.0.63", + "thiserror 2.0.6", "zeroize", ] @@ -2300,7 +2307,7 @@ dependencies = [ "serde", "serde_json", "strum 0.26.3", - "thiserror 1.0.63", + "thiserror 1.0.69", "tokio", "tracing", ] @@ -2455,7 
+2462,7 @@ dependencies = [ "ec-divisors", "ff-group-tests", "generalized-bulletproofs-ec-gadgets", - "generic-array 0.14.7", + "generic-array 1.1.1", "hex", "hex-literal", "rand_core", @@ -2478,11 +2485,11 @@ dependencies = [ [[package]] name = "enum-as-inner" -version = "0.6.0" +version = "0.6.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5ffccbb6966c05b32ef8fbac435df276c4ae4d3dc55a8cd0eb9745e6c12f546a" +checksum = "a1e6a265c649f3f5979b601d26f1d05ada116434c87741c9493cb56218f76cbc" dependencies = [ - "heck 0.4.1", + "heck 0.5.0", "proc-macro2", "quote", "syn 2.0.90", @@ -2515,12 +2522,12 @@ checksum = "5443807d6dff69373d433ab9ef5378ad8df50ca6298caf15de6e52e24aaf54d5" [[package]] name = "errno" -version = "0.3.9" +version = "0.3.10" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "534c5cf6194dfab3db3242765c03bbe257cf92f22b38f6bc0c58d59108a820ba" +checksum = "33d852cb9b869c2a9b3df2f71a3074817f01e1844f839a144f5fcef059a4eb5d" dependencies = [ "libc", - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -2562,9 +2569,9 @@ dependencies = [ [[package]] name = "event-listener-strategy" -version = "0.5.2" +version = "0.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0f214dc438f977e6d4e3500aaa277f5ad94ca83fbbd9b1a15713ce2344ccc5a1" +checksum = "3c3e4e0dd3673c1139bf041f3008816d9cf2946bbfac2945c09e523b8d7b05b2" dependencies = [ "event-listener 5.3.1", "pin-project-lite", @@ -2600,9 +2607,9 @@ checksum = "4443176a9f2c162692bd3d352d745ef9413eec5782a80d8fd6f8a1ac692a07f7" [[package]] name = "fastrand" -version = "2.1.1" +version = "2.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e8c02a5121d4ea3eb16a80748c74f5549a5665e4c21333c6098f283870fbdea6" +checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be" [[package]] name = "fastrlp" @@ -2976,9 +2983,9 @@ checksum = "e6d5a32815ae3f33302d95fdcb2ce17862f8c65363dcfd29360480ba1001fc9c" [[package]] name = "futures" -version = "0.3.30" +version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "645c6916888f6cb6350d2550b80fb63e734897a8498abe35cfb732b6487804b0" +checksum = "65bc07b1a8bc7c85c5f2e110c476c7389b4554ba72af57d8445ea63a576b0876" dependencies = [ "futures-channel", "futures-core", @@ -3001,9 +3008,9 @@ dependencies = [ [[package]] name = "futures-channel" -version = "0.3.30" +version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "eac8f7d7865dcb88bd4373ab671c8cf4508703796caa2b1985a9ca867b3fcb78" +checksum = "2dff15bf788c671c1934e366d07e30c1814a8ef514e1af724a602e8a2fbe1b10" dependencies = [ "futures-core", "futures-sink", @@ -3011,15 +3018,15 @@ dependencies = [ [[package]] name = "futures-core" -version = "0.3.30" +version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dfc6580bb841c5a68e9ef15c77ccc837b40a7504914d52e47b8b0e9bbda25a1d" +checksum = "05f29059c0c2090612e8d742178b0580d2dc940c837851ad723096f87af6663e" [[package]] name = "futures-executor" -version = "0.3.30" +version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a576fc72ae164fca6b9db127eaa9a9dda0d61316034f33a0a0d4eda41f02b01d" +checksum = "1e28d1d997f585e54aebc3f97d39e72338912123a67330d723fdbb564d646c9f" dependencies = [ "futures-core", "futures-task", @@ -3029,15 +3036,15 @@ dependencies = [ [[package]] name = "futures-io" -version = "0.3.30" +version = "0.3.31" source 
= "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a44623e20b9681a318efdd71c299b6b222ed6f231972bfe2f224ebad6311f0c1" +checksum = "9e5c1b78ca4aae1ac06c48a526a655760685149f0d465d21f37abfe57ce075c6" [[package]] name = "futures-lite" -version = "2.3.0" +version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "52527eb5074e35e9339c6b4e8d12600c7128b68fb25dcb9fa9dec18f7c25f3a5" +checksum = "cef40d21ae2c515b51041df9ed313ed21e572df340ea58a922a0aefe7e8891a1" dependencies = [ "futures-core", "pin-project-lite", @@ -3045,9 +3052,9 @@ dependencies = [ [[package]] name = "futures-macro" -version = "0.3.30" +version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "87750cf4b7a4c0625b1529e4c543c2182106e4dedc60a2a6455e00d212c489ac" +checksum = "162ee34ebcb7c64a8abebc059ce0fee27c2262618d7b60ed8faf72fef13c3650" dependencies = [ "proc-macro2", "quote", @@ -3066,15 +3073,15 @@ dependencies = [ [[package]] name = "futures-sink" -version = "0.3.30" +version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9fb8e00e87438d937621c1c6269e53f536c14d3fbd6a042bb24879e57d474fb5" +checksum = "e575fab7d1e0dcb8d0c7bcf9a63ee213816ab51902e6d244a95819acacf1d4f7" [[package]] name = "futures-task" -version = "0.3.30" +version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "38d84fa142264698cdce1a9f9172cf383a0c82de1bddcf3092901442c4097004" +checksum = "f90f7dce0722e95104fcb095585910c0977252f286e354b5e3bd38902cd99988" [[package]] name = "futures-ticker" @@ -3095,9 +3102,9 @@ checksum = "f288b0a4f20f9a56b5d1da57e2227c661b7b16168e2f72365f57b63326e29b24" [[package]] name = "futures-util" -version = "0.3.30" +version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3d6401deb83407ab3da39eba7e33987a73c3df0c82b4bb5813ee871c19c41d48" +checksum = "9fa08315bb612088cc391249efdc3bc77536f16c91f6cf495e6fbe85b20a4a81" dependencies = [ "futures-channel", "futures-core", @@ -3166,21 +3173,20 @@ version = "0.1.0" dependencies = [ "ciphersuite", "generalized-bulletproofs-circuit-abstraction", - "generic-array 1.1.0", + "generic-array 1.1.1", ] [[package]] name = "generator" -version = "0.8.1" +version = "0.8.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "186014d53bc231d0090ef8d6f03e0920c54d85a5ed22f4f2f74315ec56cf83fb" +checksum = "cc6bd114ceda131d3b1d665eba35788690ad37f5916457286b32ab6fd3c438dd" dependencies = [ - "cc", "cfg-if", "libc", "log", "rustversion", - "windows 0.54.0", + "windows 0.58.0", ] [[package]] @@ -3196,9 +3202,9 @@ dependencies = [ [[package]] name = "generic-array" -version = "1.1.0" +version = "1.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "96512db27971c2c3eece70a1e106fbe6c87760234e31e8f7e5634912fe52794a" +checksum = "2cb8bc4c28d15ade99c7e90b219f30da4be5c88e586277e8cbe886beeb868ab2" dependencies = [ "typenum", ] @@ -3247,9 +3253,9 @@ dependencies = [ [[package]] name = "gimli" -version = "0.29.0" +version = "0.31.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "40ecd4077b5ae9fd2e9e169b102c6c330d0605168eb0e8bf79952b256dbefffd" +checksum = "07e28edb80900c19c28f1072f2e8aeca7fa06b23cd4169cefe1af5aa3260783f" [[package]] name = "glob" @@ -3259,15 +3265,15 @@ checksum = "d2fabcfbdc87f4758337ca535fb41a6d701b65693ce38287d856d1674551ec9b" [[package]] name = "globset" -version = "0.4.14" +version = "0.4.15" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "57da3b9b5b85bd66f31093f8c408b90a74431672542466497dcbdfdc02034be1" +checksum = "15f1ce686646e7f1e19bf7d5533fe443a45dbfb990e00629110797578b42fb19" dependencies = [ "aho-corasick", "bstr", "log", - "regex-automata 0.4.7", - "regex-syntax 0.8.4", + "regex-automata 0.4.9", + "regex-syntax 0.8.5", ] [[package]] @@ -3293,7 +3299,7 @@ dependencies = [ "futures-sink", "futures-util", "http 0.2.12", - "indexmap 2.5.0", + "indexmap 2.7.0", "slab", "tokio", "tokio-util", @@ -3346,6 +3352,8 @@ version = "0.15.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bf151400ff0baff5465007dd2f3e717f3fe502074ca563069ce3a6629d07b289" dependencies = [ + "allocator-api2", + "equivalent", "foldhash", "serde", ] @@ -3455,9 +3463,9 @@ dependencies = [ [[package]] name = "http" -version = "1.1.0" +version = "1.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "21b9ddb458710bc376481b842f5da65cdf31522de232c1ca8146abce2a358258" +checksum = "f16ca2af56261c99fba8bac40a10251ce8188205a4c448fbb745a2e4daa76fea" dependencies = [ "bytes", "fnv", @@ -3482,7 +3490,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1efedce1fb8e6913f23e0c92de8e62cd5b772a67e7b3946df930a62566c93184" dependencies = [ "bytes", - "http 1.1.0", + "http 1.2.0", ] [[package]] @@ -3493,7 +3501,7 @@ checksum = "793429d76616a256bcb62c2a2ec2bed781c8307e797e2598c50010f2bee2544f" dependencies = [ "bytes", "futures-util", - "http 1.1.0", + "http 1.2.0", "http-body 1.0.1", "pin-project-lite", ] @@ -3506,9 +3514,9 @@ checksum = "add0ab9360ddbd88cfeb3bd9574a1d85cfdfa14db10b3e21d3700dbc4328758f" [[package]] name = "httparse" -version = "1.9.4" +version = "1.9.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0fcc0b4a115bf80b728eb8ea024ad5bd707b615bfed49e0665b6e0f86fd082d9" +checksum = "7d71d3574edd2771538b901e6549113b4006ece66150fb69c0fb6d9a2adae946" [[package]] name = "httpdate" @@ -3539,7 +3547,7 @@ dependencies = [ "httpdate", "itoa", "pin-project-lite", - "socket2 0.5.7", + "socket2 0.4.10", "tokio", "tower-service", "tracing", @@ -3555,7 +3563,7 @@ dependencies = [ "bytes", "futures-channel", "futures-util", - "http 1.1.0", + "http 1.2.0", "http-body 1.0.1", "httparse", "httpdate", @@ -3588,10 +3596,10 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "08afdbb5c31130e3034af566421053ab03787c640246a446327f550d11bcb333" dependencies = [ "futures-util", - "http 1.1.0", + "http 1.2.0", "hyper 1.4.1", "hyper-util", - "rustls 0.23.12", + "rustls 0.23.19", "rustls-native-certs", "rustls-pki-types", "tokio", @@ -3601,20 +3609,19 @@ dependencies = [ [[package]] name = "hyper-util" -version = "0.1.7" +version = "0.1.10" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cde7055719c54e36e95e8719f95883f22072a48ede39db7fc17a4e1d5281e9b9" +checksum = "df2dcfbe0677734ab2f3ffa7fa7bfd4706bfdc1ef393f2ee30184aed67e631b4" dependencies = [ "bytes", "futures-channel", "futures-util", - "http 1.1.0", + "http 1.2.0", "http-body 1.0.1", "hyper 1.4.1", "pin-project-lite", - "socket2 0.5.7", + "socket2 0.5.8", "tokio", - "tower 0.4.13", "tower-service", "tracing", ] @@ -3636,9 +3643,9 @@ dependencies = [ [[package]] name = "iana-time-zone" -version = "0.1.60" +version = "0.1.61" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e7ffbb5a1b541ea2561f8c41c087286cc091e21e556a4f09a8f6cbf17b69b141" +checksum = 
"235e081f3925a06703c2d0117ea8b91f042756fd6e7a6e5d901e8ca1a996b220" dependencies = [ "android_system_properties", "core-foundation-sys", @@ -3700,17 +3707,21 @@ dependencies = [ [[package]] name = "if-watch" -version = "3.2.0" +version = "3.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d6b0422c86d7ce0e97169cc42e04ae643caf278874a7a3c87b8150a220dc7e1e" +checksum = "cdf9d64cfcf380606e64f9a0bcf493616b65331199f984151a6fa11a7b3cde38" dependencies = [ "async-io", - "core-foundation", + "core-foundation 0.9.4", "fnv", "futures", "if-addrs", "ipnet", "log", + "netlink-packet-core", + "netlink-packet-route", + "netlink-proto", + "netlink-sys", "rtnetlink", "system-configuration", "tokio", @@ -3756,13 +3767,13 @@ dependencies = [ [[package]] name = "impl-trait-for-tuples" -version = "0.2.2" +version = "0.2.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "11d7a9f6330b71fea57921c9b61c47ee6e84f72d394754eff6163ae67e7395eb" +checksum = "a0eb5a3343abf848c0984fe4604b2b105da9539376e24fc0a3b0007411ae4fd9" dependencies = [ "proc-macro2", "quote", - "syn 1.0.109", + "syn 2.0.90", ] [[package]] @@ -3778,12 +3789,12 @@ dependencies = [ [[package]] name = "indexmap" -version = "2.5.0" +version = "2.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "68b900aa2f7301e21c36462b170ee99994de34dff39a4a6a528e80e7376d07e5" +checksum = "62f822373a4fe84d4bb149bf54e584a7f4abec90e072ed49cda0edea5b95471f" dependencies = [ "equivalent", - "hashbrown 0.14.5", + "hashbrown 0.15.2", "serde", ] @@ -3827,7 +3838,7 @@ version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b58db92f96b720de98181bbbe63c831e87005ab460c1bf306eb2622b4707997f" dependencies = [ - "socket2 0.5.7", + "socket2 0.5.8", "widestring", "windows-sys 0.48.0", "winreg", @@ -3835,9 +3846,9 @@ dependencies = [ [[package]] name = "ipnet" -version = "2.9.0" +version = "2.10.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8f518f335dce6725a761382244631d86cf0ccb2863413590b31338feb467f9c3" +checksum = "ddc24109865250148c2e0f3d25d4f0f479571723792d3802153c60922a4fb708" [[package]] name = "is-terminal" @@ -3858,15 +3869,6 @@ dependencies = [ "either", ] -[[package]] -name = "itertools" -version = "0.12.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ba291022dbbd398a455acf126c1e341954079855bc60dfdda641363bd6922569" -dependencies = [ - "either", -] - [[package]] name = "itertools" version = "0.13.0" @@ -3878,9 +3880,9 @@ dependencies = [ [[package]] name = "itoa" -version = "1.0.11" +version = "1.0.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "49f1f14873335454500d59611f1cf4a4b0f786f9ac11f4312a78e4cf2566695b" +checksum = "d75a2a4b1b190afb6f5425f10f6a8f959d2ea0b9c2b1d79553551850539e4674" [[package]] name = "jobserver" @@ -3893,10 +3895,11 @@ dependencies = [ [[package]] name = "js-sys" -version = "0.3.70" +version = "0.3.76" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1868808506b929d7b0cfa8f75951347aa71bb21144b7791bae35d9bccfcfe37a" +checksum = "6717b6b5b077764fb5966237269cb3c64edddde4b14ce42647430a78ced9e7b7" dependencies = [ + "once_cell", "wasm-bindgen", ] @@ -3934,7 +3937,7 @@ dependencies = [ "serde", "serde_json", "soketto 0.7.1", - "thiserror 1.0.63", + "thiserror 1.0.69", "tokio", "tracing", ] @@ -3984,15 +3987,15 @@ dependencies = [ "beef", "serde", "serde_json", - "thiserror 1.0.63", + "thiserror 
1.0.69", "tracing", ] [[package]] name = "k256" -version = "0.13.3" +version = "0.13.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "956ff9b67e26e1a6a866cb758f12c6f8746208489e3e4a4b5580802f2f0a587b" +checksum = "f6e3919bbaa2945715f0bb6d3934a173d1e9a59ac23767fbaaef277265a7411b" dependencies = [ "cfg-if", "ecdsa", @@ -4013,9 +4016,9 @@ dependencies = [ [[package]] name = "keccak-asm" -version = "0.1.3" +version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "422fbc7ff2f2f5bdffeb07718e5a5324dca72b0c9293d50df4026652385e3314" +checksum = "505d1856a39b200489082f90d897c3f07c455563880bc5952e38eabf731c83b6" dependencies = [ "digest 0.10.7", "sha3-asm", @@ -4076,25 +4079,25 @@ checksum = "884e2677b40cc8c339eaefcb701c32ef1fd2493d71118dc0ca4b6a736c93bd67" [[package]] name = "libc" -version = "0.2.158" +version = "0.2.167" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d8adc4bb1803a324070e64a98ae98f38934d91957a99cfb3a43dcbc01bc56439" +checksum = "09d6582e104315a817dff97f75133544b2e094ee22447d2acf4a74e189ba06fc" [[package]] name = "libloading" -version = "0.8.5" +version = "0.8.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4979f22fdb869068da03c9f7528f8297c6fd2606bc3a4affe42e6a823fdb8da4" +checksum = "fc2f4eb4bc735547cfed7c0a4922cbd04a4655978c09b54f1f7b228750664c34" dependencies = [ "cfg-if", - "windows-targets 0.52.6", + "windows-targets 0.48.5", ] [[package]] name = "libm" -version = "0.2.8" +version = "0.2.11" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4ec2a862134d2a7d32d7983ddcdd1c4923530833c9f2ea1a44fc5fa473989058" +checksum = "8355be11b20d696c8f18f6cc018c4e372165b1fa8126cef092399c9951984ffa" [[package]] name = "libp2p" @@ -4131,7 +4134,7 @@ dependencies = [ "multiaddr", "pin-project", "rw-stream-sink", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -4181,7 +4184,7 @@ dependencies = [ "rand", "rw-stream-sink", "smallvec", - "thiserror 1.0.63", + "thiserror 1.0.69", "unsigned-varint", "void", ] @@ -4253,15 +4256,15 @@ dependencies = [ "quick-protobuf", "quick-protobuf-codec", "smallvec", - "thiserror 1.0.63", + "thiserror 1.0.69", "void", ] [[package]] name = "libp2p-identity" -version = "0.2.9" +version = "0.2.10" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "55cca1eb2bc1fd29f099f3daaab7effd01e1a54b7c577d0ed082521034d912e8" +checksum = "257b5621d159b32282eac446bed6670c39c7dc68a200a992d8f056afa0066f6d" dependencies = [ "bs58", "ed25519-dalek", @@ -4270,7 +4273,7 @@ dependencies = [ "quick-protobuf", "rand", "sha2", - "thiserror 1.0.63", + "thiserror 1.0.69", "tracing", "zeroize", ] @@ -4298,7 +4301,7 @@ dependencies = [ "rand", "sha2", "smallvec", - "thiserror 1.0.63", + "thiserror 1.0.69", "uint", "unsigned-varint", "void", @@ -4319,7 +4322,7 @@ dependencies = [ "log", "rand", "smallvec", - "socket2 0.5.7", + "socket2 0.5.8", "tokio", "trust-dns-proto 0.22.0", "void", @@ -4363,7 +4366,7 @@ dependencies = [ "sha2", "snow", "static_assertions", - "thiserror 1.0.63", + "thiserror 1.0.69", "x25519-dalek", "zeroize", ] @@ -4405,8 +4408,8 @@ dependencies = [ "rand", "ring 0.16.20", "rustls 0.21.12", - "socket2 0.5.7", - "thiserror 1.0.63", + "socket2 0.5.8", + "thiserror 1.0.69", "tokio", ] @@ -4477,7 +4480,7 @@ dependencies = [ "libp2p-core", "libp2p-identity", "log", - "socket2 0.5.7", + "socket2 0.5.8", "tokio", ] @@ -4495,7 +4498,7 @@ dependencies = [ "ring 0.16.20", "rustls 0.21.12", 
"rustls-webpki 0.101.7", - "thiserror 1.0.63", + "thiserror 1.0.69", "x509-parser", "yasna", ] @@ -4545,8 +4548,8 @@ dependencies = [ "parking_lot 0.12.3", "pin-project-lite", "rw-stream-sink", - "soketto 0.8.0", - "thiserror 1.0.63", + "soketto 0.8.1", + "thiserror 1.0.69", "url", "webpki-roots", ] @@ -4560,7 +4563,7 @@ dependencies = [ "futures", "libp2p-core", "log", - "thiserror 1.0.63", + "thiserror 1.0.69", "yamux", ] @@ -4628,9 +4631,9 @@ dependencies = [ [[package]] name = "linregress" -version = "0.5.3" +version = "0.5.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4de04dcecc58d366391f9920245b85ffa684558a5ef6e7736e754347c3aea9c2" +checksum = "a9eda9dcf4f2a99787827661f312ac3219292549c2ee992bf9a6248ffb066bf7" dependencies = [ "nalgebra", ] @@ -4672,11 +4675,11 @@ dependencies = [ [[package]] name = "lru" -version = "0.12.4" +version = "0.12.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "37ee39891760e7d94734f6f63fedc29a2e4a152f836120753a72503f09fcf904" +checksum = "234cf4f4a04dc1f57e24b96cc0cd600cf2af460d4161ac5ecdd0af8e1f3b2a38" dependencies = [ - "hashbrown 0.14.5", + "hashbrown 0.15.2", ] [[package]] @@ -4690,19 +4693,18 @@ dependencies = [ [[package]] name = "lz4" -version = "1.26.0" +version = "1.28.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "958b4caa893816eea05507c20cfe47574a43d9a697138a7872990bba8a0ece68" +checksum = "4d1febb2b4a79ddd1980eede06a8f7902197960aa0383ffcfdd62fe723036725" dependencies = [ - "libc", "lz4-sys", ] [[package]] name = "lz4-sys" -version = "1.10.0" +version = "1.11.1+lz4-1.10.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "109de74d5d2353660401699a4174a4ff23fcc649caf553df71933c7fb45ad868" +checksum = "6bd8c0d6c6ed0cd30b3652886bb8711dc4bb01d637a68105a3d5158039b418e6" dependencies = [ "cc", "libc", @@ -4881,7 +4883,7 @@ dependencies = [ "crypto-bigint", "ff", "ff-group-tests", - "generic-array 1.1.0", + "generic-array 1.1.1", "group", "hex", "rand_core", @@ -4898,20 +4900,19 @@ checksum = "68354c5c6bd36d73ff3feceb05efa59b6acb7626617f4962be322a825e61f79a" [[package]] name = "miniz_oxide" -version = "0.7.4" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b8a240ddb74feaf34a79a7add65a741f3167852fba007066dcac1ca548d89c08" +checksum = "e2d80299ef12ff69b16a84bb182e3b9df68b5a91574d3d4fa6e41b65deec4df1" dependencies = [ - "adler", + "adler2", ] [[package]] name = "mio" -version = "1.0.2" +version = "1.0.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "80e04d1dcff3aae0704555fe5fee3bcfaf3d1fdf8a7e521d5b9d2b42acb52cec" +checksum = "2886843bf800fba2e3377cff24abf6379b4c4d5c6681eaf9ea5b0d15090450bd" dependencies = [ - "hermit-abi", "libc", "wasi", "windows-sys 0.52.0", @@ -4961,7 +4962,7 @@ dependencies = [ "schnorr-signatures", "serde_json", "subtle", - "thiserror 1.0.63", + "thiserror 2.0.6", "zeroize", ] @@ -4978,7 +4979,7 @@ dependencies = [ "serde", "serde_json", "std-shims", - "thiserror 1.0.63", + "thiserror 2.0.6", "zeroize", ] @@ -5005,7 +5006,7 @@ dependencies = [ "monero-primitives", "rand_core", "std-shims", - "thiserror 1.0.63", + "thiserror 2.0.6", "zeroize", ] @@ -5025,7 +5026,7 @@ dependencies = [ "rand_core", "std-shims", "subtle", - "thiserror 1.0.63", + "thiserror 2.0.6", "zeroize", ] @@ -5060,7 +5061,7 @@ dependencies = [ "monero-io", "monero-primitives", "std-shims", - "thiserror 1.0.63", + "thiserror 2.0.6", "zeroize", ] @@ -5088,7 
+5089,7 @@ dependencies = [ "serde", "serde_json", "std-shims", - "thiserror 1.0.63", + "thiserror 2.0.6", "zeroize", ] @@ -5101,7 +5102,7 @@ dependencies = [ "monero-primitives", "rand_core", "std-shims", - "thiserror 1.0.63", + "thiserror 2.0.6", "zeroize", ] @@ -5174,7 +5175,7 @@ dependencies = [ "serde", "serde_json", "std-shims", - "thiserror 1.0.63", + "thiserror 2.0.6", "tokio", "zeroize", ] @@ -5190,7 +5191,7 @@ dependencies = [ "polyseed", "rand_core", "std-shims", - "thiserror 1.0.63", + "thiserror 2.0.6", "zeroize", ] @@ -5345,13 +5346,12 @@ dependencies = [ [[package]] name = "nalgebra" -version = "0.32.6" +version = "0.33.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7b5c17de023a86f59ed79891b2e5d5a94c705dbe904a5b5c9c952ea6221b03e4" +checksum = "26aecdf64b707efd1310e3544d709c5c0ac61c13756046aaaba41be5c4f66a3b" dependencies = [ "approx", "matrixmultiply", - "nalgebra-macros", "num-complex", "num-rational", "num-traits", @@ -5359,17 +5359,6 @@ dependencies = [ "typenum", ] -[[package]] -name = "nalgebra-macros" -version = "0.2.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "254a5372af8fc138e36684761d3c0cdb758a4410e938babcff1c860ce14ddbfc" -dependencies = [ - "proc-macro2", - "quote", - "syn 2.0.90", -] - [[package]] name = "names" version = "0.14.0" @@ -5381,21 +5370,20 @@ dependencies = [ [[package]] name = "netlink-packet-core" -version = "0.4.2" +version = "0.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "345b8ab5bd4e71a2986663e88c56856699d060e78e152e6e9d7966fcd5491297" +checksum = "72724faf704479d67b388da142b186f916188505e7e0b26719019c525882eda4" dependencies = [ "anyhow", "byteorder", - "libc", "netlink-packet-utils", ] [[package]] name = "netlink-packet-route" -version = "0.12.0" +version = "0.17.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d9ea4302b9759a7a88242299225ea3688e63c85ea136371bb6cf94fd674efaab" +checksum = "053998cea5a306971f88580d0829e90f270f940befd7cf928da179d4187a5a66" dependencies = [ "anyhow", "bitflags 1.3.2", @@ -5414,21 +5402,21 @@ dependencies = [ "anyhow", "byteorder", "paste", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] name = "netlink-proto" -version = "0.10.0" +version = "0.11.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "65b4b14489ab424703c092062176d52ba55485a89c076b4f9db05092b7223aa6" +checksum = "86b33524dc0968bfad349684447bfce6db937a9ac3332a1fe60c0c5a5ce63f21" dependencies = [ "bytes", "futures", "log", "netlink-packet-core", "netlink-sys", - "thiserror 1.0.63", + "thiserror 1.0.69", "tokio", ] @@ -5447,9 +5435,9 @@ dependencies = [ [[package]] name = "nix" -version = "0.24.3" +version = "0.26.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fa52e972a9a719cecb6864fb88568781eb706bac2cd1d4f04a648542dbf78069" +checksum = "598beaf3cc6fdd9a5dfb1630c2800c7acd31df7aaf0f565796fba2b53ca1af1b" dependencies = [ "bitflags 1.3.2", "cfg-if", @@ -5608,9 +5596,9 @@ dependencies = [ [[package]] name = "object" -version = "0.36.4" +version = "0.36.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "084f1a5821ac4c651660a94a7153d27ac9d8a53736203f58b31945ded098070a" +checksum = "aedf0a2d09c573ed1d8d85b30c119153926a2b36dce0ab28322c09a117a4683e" dependencies = [ "memchr", ] @@ -5626,9 +5614,9 @@ dependencies = [ [[package]] name = "once_cell" -version = "1.19.0" +version = "1.20.2" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "3fdb12b2476b595f9358c5161aa467c2438859caa136dec86c26fdd2efe17b92" +checksum = "1261fe7e33c73b354eab43b1273a57c8f967d0391e80353e51f764ac02cf6775" [[package]] name = "opaque-debug" @@ -5872,9 +5860,9 @@ checksum = "e1ad0aff30c1da14b1254fcb2af73e1fa9a28670e584a626f53a369d0e157304" [[package]] name = "parking" -version = "2.2.0" +version = "2.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bb813b8af86854136c6922af0598d719255ecb2179515e6e7730d468f05c9cae" +checksum = "f38d5652c16fde515bb1ecef450ab0f6a219d619a7274976324d5e377f7dceba" [[package]] name = "parking_lot" @@ -5996,12 +5984,12 @@ checksum = "e3148f5046208a5d56bcfc03053e3ca6334e51da8dfb19b6cdc8b306fae3283e" [[package]] name = "pest" -version = "2.7.11" +version = "2.7.15" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cd53dff83f26735fdc1ca837098ccf133605d794cdae66acfc2bfac3ec809d95" +checksum = "8b7cafe60d6cf8e62e1b9b2ea516a089c008945bb5a275416789e7db0bc199dc" dependencies = [ "memchr", - "thiserror 1.0.63", + "thiserror 2.0.6", "ucd-trie", ] @@ -6012,23 +6000,23 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b4c5cc86750666a3ed20bdaf5ca2a0344f9c67674cae0515bec2da16fbaa47db" dependencies = [ "fixedbitset", - "indexmap 2.5.0", + "indexmap 2.7.0", ] [[package]] name = "pin-project" -version = "1.1.5" +version = "1.1.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b6bf43b791c5b9e34c3d182969b4abb522f9343702850a2e57f460d00d09b4b3" +checksum = "be57f64e946e500c8ee36ef6331845d40a93055567ec57e8fae13efd33759b95" dependencies = [ "pin-project-internal", ] [[package]] name = "pin-project-internal" -version = "1.1.5" +version = "1.1.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2f38a4412a78282e09a2cf38d195ea5420d15ba0602cb375210efbc877243965" +checksum = "3c0f5fad0874fc7abcd4d750e76917eaebbecaa2c20bde22e1dbeeba8beb758c" dependencies = [ "proc-macro2", "quote", @@ -6037,9 +6025,9 @@ dependencies = [ [[package]] name = "pin-project-lite" -version = "0.2.14" +version = "0.2.15" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bda66fc9667c18cb2758a2ac84d1167245054bcf85d5d1aaa6923f45801bdd02" +checksum = "915a1e146535de9163f3987b8944ed8cf49a18bb0056bcebcdcece385cece4ff" [[package]] name = "pin-utils" @@ -6059,9 +6047,9 @@ dependencies = [ [[package]] name = "pkg-config" -version = "0.3.30" +version = "0.3.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d231b230927b5e4ad203db57bbcbee2802f6bce620b1e4a9024a07d94e2907ec" +checksum = "953ec861398dccce10c670dfeaf3ec4911ca479e9c02154b3a215178c5f566f2" [[package]] name = "polling" @@ -6099,7 +6087,7 @@ dependencies = [ "sha3", "std-shims", "subtle", - "thiserror 1.0.63", + "thiserror 2.0.6", "zeroize", ] @@ -6208,7 +6196,7 @@ version = "3.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8ecf48c7ca261d60b74ab1a7b20da18bede46776b2e55535cb958eb595c5fa7b" dependencies = [ - "toml_edit 0.22.20", + "toml_edit 0.22.22", ] [[package]] @@ -6288,7 +6276,7 @@ dependencies = [ "lazy_static", "memchr", "parking_lot 0.12.3", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -6328,7 +6316,7 @@ dependencies = [ "rand", "rand_chacha", "rand_xorshift", - "regex-syntax 0.8.4", + "regex-syntax 0.8.5", "rusty-fork", "tempfile", "unarray", @@ -6390,9 +6378,9 @@ dependencies = [ [[package]] name = 
"psm" -version = "0.1.23" +version = "0.1.24" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "aa37f80ca58604976033fae9515a8a2989fc13797d953f7c04fb8fa36a11f205" +checksum = "200b9ff220857e53e184257720a14553b2f4aa02577d2ed9842d45d4b9654810" dependencies = [ "cc", ] @@ -6421,7 +6409,7 @@ dependencies = [ "asynchronous-codec", "bytes", "quick-protobuf", - "thiserror 1.0.63", + "thiserror 1.0.69", "unsigned-varint", ] @@ -6438,7 +6426,7 @@ dependencies = [ "quinn-udp", "rustc-hash 1.1.0", "rustls 0.21.12", - "thiserror 1.0.63", + "thiserror 1.0.69", "tokio", "tracing", ] @@ -6455,7 +6443,7 @@ dependencies = [ "rustc-hash 1.1.0", "rustls 0.21.12", "slab", - "thiserror 1.0.63", + "thiserror 1.0.69", "tinyvec", "tracing", ] @@ -6468,7 +6456,7 @@ checksum = "055b4e778e8feb9f93c4e439f71dc2156ef13360b432b799e179a8c4cdf0b1d7" dependencies = [ "bytes", "libc", - "socket2 0.5.7", + "socket2 0.5.8", "tracing", "windows-sys 0.48.0", ] @@ -6587,9 +6575,9 @@ dependencies = [ [[package]] name = "redox_syscall" -version = "0.5.3" +version = "0.5.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2a908a6e00f1fdd0dfd9c0eb08ce85126f6d8bbda50017e74bc4a4b7d4a926a4" +checksum = "9b6dfecf2c74bce2466cabf93f6664d6998a69eb21e39f4207930065b27b771f" dependencies = [ "bitflags 2.6.0", ] @@ -6602,7 +6590,7 @@ checksum = "ba009ff324d1fc1b900bd1fdb31564febe58a8ccc8a6fdbb93b543d33b13ca43" dependencies = [ "getrandom", "libredox", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -6640,14 +6628,14 @@ dependencies = [ [[package]] name = "regex" -version = "1.10.6" +version = "1.11.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4219d74c6b67a3654a9fbebc4b419e22126d13d2f3c4a07ee0cb61ff79a79619" +checksum = "b544ef1b4eac5dc2db33ea63606ae9ffcfac26c1416a2806ae0bf5f56b201191" dependencies = [ "aho-corasick", "memchr", - "regex-automata 0.4.7", - "regex-syntax 0.8.4", + "regex-automata 0.4.9", + "regex-syntax 0.8.5", ] [[package]] @@ -6661,13 +6649,13 @@ dependencies = [ [[package]] name = "regex-automata" -version = "0.4.7" +version = "0.4.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "38caf58cc5ef2fed281f89292ef23f6365465ed9a41b7a7754eb4e26496c92df" +checksum = "809e8dc61f6de73b46c85f4c96486310fe304c434cfa43669d7b40f711150908" dependencies = [ "aho-corasick", "memchr", - "regex-syntax 0.8.4", + "regex-syntax 0.8.5", ] [[package]] @@ -6678,9 +6666,9 @@ checksum = "f162c6dd7b008981e4d40210aca20b4bd0f9b60ca9271061b07f78537722f2e1" [[package]] name = "regex-syntax" -version = "0.8.4" +version = "0.8.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7a66a03ae7c801facd77a29370b4faec201768915ac14a721ba36f20bc9c209b" +checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c" [[package]] name = "resolv-conf" @@ -6781,16 +6769,19 @@ dependencies = [ [[package]] name = "rtnetlink" -version = "0.10.1" +version = "0.13.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "322c53fd76a18698f1c27381d58091de3a043d356aa5bd0d510608b565f469a0" +checksum = "7a552eb82d19f38c3beed3f786bd23aa434ceb9ac43ab44419ca6d67a7e186c0" dependencies = [ "futures", "log", + "netlink-packet-core", "netlink-packet-route", + "netlink-packet-utils", "netlink-proto", + "netlink-sys", "nix", - "thiserror 1.0.63", + "thiserror 1.0.69", "tokio", ] @@ -6887,15 +6878,15 @@ dependencies = [ [[package]] name = "rustix" -version = "0.38.36" +version = "0.38.42" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "3f55e80d50763938498dd5ebb18647174e0c76dc38c5505294bb224624f30f36" +checksum = "f93dc38ecbab2eb790ff964bb77fa94faf256fd3e73285fd7ba0903b76bedb85" dependencies = [ "bitflags 2.6.0", "errno", "libc", "linux-raw-sys", - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -6912,46 +6903,35 @@ dependencies = [ [[package]] name = "rustls" -version = "0.23.12" +version = "0.23.19" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c58f8c84392efc0a126acce10fa59ff7b3d2ac06ab451a33f2741989b806b044" +checksum = "934b404430bb06b3fae2cba809eb45a1ab1aecd64491213d7c3301b88393f8d1" dependencies = [ "once_cell", "ring 0.17.8", "rustls-pki-types", - "rustls-webpki 0.102.7", + "rustls-webpki 0.102.8", "subtle", "zeroize", ] [[package]] name = "rustls-native-certs" -version = "0.8.0" +version = "0.8.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fcaf18a4f2be7326cd874a5fa579fae794320a0f388d365dca7e480e55f83f8a" +checksum = "7fcff2dd52b58a8d98a70243663a0d234c4e2b79235637849d15913394a247d3" dependencies = [ "openssl-probe", - "rustls-pemfile", "rustls-pki-types", "schannel", "security-framework", ] -[[package]] -name = "rustls-pemfile" -version = "2.1.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "196fe16b00e106300d3e45ecfcb764fa292a535d7326a29a5875c579c7417425" -dependencies = [ - "base64 0.22.1", - "rustls-pki-types", -] - [[package]] name = "rustls-pki-types" -version = "1.8.0" +version = "1.10.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fc0a2ce646f8655401bb81e7927b812614bd5d91dbc968696be50603510fcaf0" +checksum = "16f1201b3c9a7ee8039bcadc17b7e605e2945b27eee7631788c1bd2b0643674b" [[package]] name = "rustls-webpki" @@ -6965,9 +6945,9 @@ dependencies = [ [[package]] name = "rustls-webpki" -version = "0.102.7" +version = "0.102.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "84678086bd54edf2b415183ed7a94d0efb049f1b646a33e22a36f3794be6ae56" +checksum = "64ca1bc8749bd4cf37b5ce386cc146580777b4e8572c7b97baf22c83f444bee9" dependencies = [ "ring 0.17.8", "rustls-pki-types", @@ -6976,9 +6956,9 @@ dependencies = [ [[package]] name = "rustversion" -version = "1.0.17" +version = "1.0.18" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "955d28af4278de8121b7ebeb796b6a45735dc01436d898801014aced2773a3d6" +checksum = "0e819f2bc632f285be6d7cd36e25940d45b2391dd6d9b939e79de557f7014248" [[package]] name = "rusty-fork" @@ -7035,7 +7015,7 @@ dependencies = [ "log", "sp-core", "sp-wasm-interface", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -7063,7 +7043,7 @@ dependencies = [ "sp-keystore", "sp-runtime", "substrate-prometheus-endpoint", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -7168,7 +7148,7 @@ dependencies = [ "sp-panic-handler", "sp-runtime", "sp-version", - "thiserror 1.0.63", + "thiserror 1.0.69", "tiny-bip39", "tokio", ] @@ -7246,7 +7226,7 @@ dependencies = [ "sp-runtime", "sp-state-machine", "substrate-prometheus-endpoint", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -7282,7 +7262,7 @@ dependencies = [ "sp-keystore", "sp-runtime", "substrate-prometheus-endpoint", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -7336,7 +7316,7 @@ dependencies = [ "sp-keystore", "sp-runtime", "substrate-prometheus-endpoint", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -7392,7 +7372,7 @@ 
dependencies = [ "sc-allocator", "sp-maybe-compressed-blob", "sp-wasm-interface", - "thiserror 1.0.63", + "thiserror 1.0.69", "wasm-instrument", ] @@ -7440,7 +7420,7 @@ dependencies = [ "sp-application-crypto", "sp-core", "sp-keystore", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -7478,7 +7458,7 @@ dependencies = [ "sp-core", "sp-runtime", "substrate-prometheus-endpoint", - "thiserror 1.0.63", + "thiserror 1.0.69", "unsigned-varint", "void", "wasm-timer", @@ -7501,7 +7481,7 @@ dependencies = [ "sc-network", "sp-blockchain", "sp-runtime", - "thiserror 1.0.63", + "thiserror 1.0.69", "unsigned-varint", ] @@ -7559,7 +7539,7 @@ dependencies = [ "sp-blockchain", "sp-core", "sp-runtime", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -7593,7 +7573,7 @@ dependencies = [ "sp-core", "sp-runtime", "substrate-prometheus-endpoint", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -7700,7 +7680,7 @@ dependencies = [ "sp-rpc", "sp-runtime", "sp-version", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -7740,7 +7720,7 @@ dependencies = [ "sp-core", "sp-runtime", "sp-version", - "thiserror 1.0.63", + "thiserror 1.0.69", "tokio-stream", ] @@ -7801,7 +7781,7 @@ dependencies = [ "static_init", "substrate-prometheus-endpoint", "tempfile", - "thiserror 1.0.63", + "thiserror 1.0.69", "tokio", "tracing", "tracing-futures", @@ -7852,7 +7832,7 @@ dependencies = [ "sc-utils", "serde", "serde_json", - "thiserror 1.0.63", + "thiserror 1.0.69", "wasm-timer", ] @@ -7878,7 +7858,7 @@ dependencies = [ "sp-rpc", "sp-runtime", "sp-tracing", - "thiserror 1.0.63", + "thiserror 1.0.69", "tracing", "tracing-log", "tracing-subscriber 0.2.25", @@ -7918,7 +7898,7 @@ dependencies = [ "sp-tracing", "sp-transaction-pool", "substrate-prometheus-endpoint", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -7934,7 +7914,7 @@ dependencies = [ "sp-blockchain", "sp-core", "sp-runtime", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -7954,13 +7934,13 @@ dependencies = [ [[package]] name = "scale-info" -version = "2.11.3" +version = "2.11.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "eca070c12893629e2cc820a9761bedf6ce1dcddc9852984d1dc734b8bd9bd024" +checksum = "346a3b32eba2640d17a9cb5927056b08f3de90f65b72fe09402c2ad07d684d0b" dependencies = [ "bitvec", "cfg-if", - "derive_more 0.99.18", + "derive_more", "parity-scale-codec", "scale-info-derive", "serde", @@ -7968,23 +7948,23 @@ dependencies = [ [[package]] name = "scale-info-derive" -version = "2.11.3" +version = "2.11.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2d35494501194174bda522a32605929eefc9ecf7e0a326c26db1fdd85881eb62" +checksum = "c6630024bf739e2179b91fb424b28898baf819414262c5d376677dbff1fe7ebf" dependencies = [ "proc-macro-crate 3.2.0", "proc-macro2", "quote", - "syn 1.0.109", + "syn 2.0.90", ] [[package]] name = "schannel" -version = "0.1.23" +version = "0.1.27" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fbc91545643bcf3a0bbb6569265615222618bdf33ce4ffbbd13c4bbd4c093534" +checksum = "1f29ebaa345f945cec9fbbc532eb307f0fdad8161f281b6369539c8d84876b3d" dependencies = [ - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -8076,9 +8056,9 @@ dependencies = [ [[package]] name = "secp256k1" -version = "0.29.0" +version = "0.29.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0e0cc0f1cf93f4969faf3ea1c7d8a9faed25918d96affa959720823dfe86d4f3" +checksum = 
"9465315bc9d4566e1724f0fffcbcc446268cb522e60f9a27bcded6b19c108113" dependencies = [ "bitcoin_hashes", "rand", @@ -8088,9 +8068,9 @@ dependencies = [ [[package]] name = "secp256k1-sys" -version = "0.10.0" +version = "0.10.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1433bd67156263443f14d603720b082dd3121779323fce20cba2aa07b874bc1b" +checksum = "d4387882333d3aa8cb20530a17c69a3752e97837832f34f6dccc760e715001d9" dependencies = [ "cc", ] @@ -8126,12 +8106,12 @@ dependencies = [ [[package]] name = "security-framework" -version = "2.11.1" +version = "3.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "897b2245f0b511c87893af39b033e5ca9cce68824c4d7e7630b5a1d339658d02" +checksum = "e1415a607e92bec364ea2cf9264646dcce0f91e6d65281bd6f2819cca3bf39c8" dependencies = [ "bitflags 2.6.0", - "core-foundation", + "core-foundation 0.10.0", "core-foundation-sys", "libc", "security-framework-sys", @@ -8139,9 +8119,9 @@ dependencies = [ [[package]] name = "security-framework-sys" -version = "2.11.1" +version = "2.12.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "75da29fe9b9b08fe9d6b22b5b4bcbc75d8db3aa31e639aa56bb62e9d46bfceaf" +checksum = "fa39c7303dc58b5543c94d22c1766b0d31f2ee58306363ea622b10bbc075eaa2" dependencies = [ "core-foundation-sys", "libc", @@ -8162,7 +8142,7 @@ version = "0.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f301af10236f6df4160f7c3f04eec6dbc70ace82d23326abad5edee88801c6b6" dependencies = [ - "semver-parser 0.10.2", + "semver-parser 0.10.3", ] [[package]] @@ -8182,9 +8162,9 @@ checksum = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3" [[package]] name = "semver-parser" -version = "0.10.2" +version = "0.10.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "00b0bef5b7f9e0df16536d3961cfb6e84331c065b4066afb39768d0e319411f7" +checksum = "9900206b54a3527fdc7b8a938bffd94a568bac4f4aa8113b209df75a09c0dec2" dependencies = [ "pest", ] @@ -8272,7 +8252,7 @@ dependencies = [ "simple-request", "sp-core", "sp-runtime", - "thiserror 1.0.63", + "thiserror 2.0.6", "tokio", "zeroize", ] @@ -9086,7 +9066,7 @@ dependencies = [ "frame-support", "frame-system", "frame-system-rpc-runtime-api", - "hashbrown 0.14.5", + "hashbrown 0.15.2", "pallet-authorship", "pallet-babe", "pallet-grandpa", @@ -9192,9 +9172,9 @@ dependencies = [ [[package]] name = "serde" -version = "1.0.209" +version = "1.0.215" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "99fce0ffe7310761ca6bf9faf5115afbc19688edd00171d81b1bb1b116c63e09" +checksum = "6513c1ad0b11a9376da888e3e0baa0077f1aed55c17f50e7b2397136129fb88f" dependencies = [ "serde_derive", ] @@ -9210,9 +9190,9 @@ dependencies = [ [[package]] name = "serde_derive" -version = "1.0.209" +version = "1.0.215" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a5831b979fd7b5439637af1752d535ff49f4860c0f341d1baeb6faf0f4242170" +checksum = "ad1e866f866923f252f05c889987993144fb74e722403468a4ebd70c3cd756c0" dependencies = [ "proc-macro2", "quote", @@ -9221,9 +9201,9 @@ dependencies = [ [[package]] name = "serde_json" -version = "1.0.128" +version = "1.0.133" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6ff5456707a1de34e7e37f2a6fd3d3f808c318259cbd01ab6377795054b483d8" +checksum = "c7fceb2473b9166b2294ef05efcb65a3db80803f0b03ef86a5fc88a2b85ee377" dependencies = [ "itoa", "memchr", @@ -9244,9 +9224,9 @@ dependencies = [ 
[[package]] name = "serde_spanned" -version = "0.6.7" +version = "0.6.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "eb5b1b31579f3811bf615c144393417496f152e12ac8b7663bf664f4a815306d" +checksum = "87607cb1398ed59d48732e575a4c28a7a8ebf2454b964fe3f224f2afc07909e1" dependencies = [ "serde", ] @@ -9265,15 +9245,15 @@ dependencies = [ [[package]] name = "serde_with" -version = "3.9.0" +version = "3.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "69cecfa94848272156ea67b2b1a53f20fc7bc638c4a46d2f8abde08f05f4b857" +checksum = "8e28bdad6db2b8340e449f7108f020b3b092e8583a9e3fb82713e1d4e71fe817" dependencies = [ "base64 0.22.1", "chrono", "hex", "indexmap 1.9.3", - "indexmap 2.5.0", + "indexmap 2.7.0", "serde", "serde_derive", "serde_json", @@ -9327,9 +9307,9 @@ dependencies = [ [[package]] name = "sha3-asm" -version = "0.1.3" +version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "57d79b758b7cb2085612b11a235055e485605a5103faccdd633f35bd7aee69dd" +checksum = "c28efc5e327c837aa837c59eae585fc250715ef939ac32881bcc11677cd02d46" dependencies = [ "cc", "cfg-if", @@ -9371,9 +9351,9 @@ dependencies = [ [[package]] name = "simba" -version = "0.8.1" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "061507c94fc6ab4ba1c9a0305018408e312e17c041eb63bef8aa726fa33aceae" +checksum = "b3a386a501cd104797982c15ae17aafe8b9261315b5d07e3ec803f2ea26be0fa" dependencies = [ "approx", "num-complex", @@ -9461,9 +9441,9 @@ dependencies = [ [[package]] name = "socket2" -version = "0.5.7" +version = "0.5.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ce305eb0b4296696835b71df73eb912e0f1ffd2556a501fcede6e0c50349191c" +checksum = "c970269d99b64e60ec3bd6ad27270092a5394c4e309314b18ae3fe575695fbe8" dependencies = [ "libc", "windows-sys 0.52.0", @@ -9487,9 +9467,9 @@ dependencies = [ [[package]] name = "soketto" -version = "0.8.0" +version = "0.8.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "37468c595637c10857701c990f93a40ce0e357cedb0953d1c26c8d8027f9bb53" +checksum = "2e859df029d160cb88608f5d7df7fb4753fd20fdfb4de5644f3d8b8440841721" dependencies = [ "base64 0.22.1", "bytes", @@ -9518,7 +9498,7 @@ dependencies = [ "sp-std", "sp-trie", "sp-version", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -9600,7 +9580,7 @@ dependencies = [ "sp-database", "sp-runtime", "sp-state-machine", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -9614,7 +9594,7 @@ dependencies = [ "sp-inherents", "sp-runtime", "sp-state-machine", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -9703,7 +9683,7 @@ dependencies = [ "sp-storage", "ss58-registry", "substrate-bip39", - "thiserror 1.0.63", + "thiserror 1.0.69", "tiny-bip39", "tracing", "zeroize", @@ -9772,7 +9752,7 @@ dependencies = [ "scale-info", "sp-runtime", "sp-std", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -9817,7 +9797,7 @@ dependencies = [ "parking_lot 0.12.3", "sp-core", "sp-externalities", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -9825,7 +9805,7 @@ name = "sp-maybe-compressed-blob" version = "4.1.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ - "thiserror 1.0.63", + "thiserror 1.0.69", "zstd 0.12.4", ] @@ -9967,7 +9947,7 @@ dependencies = [ "sp-panic-handler", "sp-std", "sp-trie", - "thiserror 1.0.63", + "thiserror 1.0.69", "tracing", 
"trie-db", ] @@ -10000,7 +9980,7 @@ dependencies = [ "sp-inherents", "sp-runtime", "sp-std", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -10041,7 +10021,7 @@ dependencies = [ "schnellru", "sp-core", "sp-std", - "thiserror 1.0.63", + "thiserror 1.0.69", "tracing", "trie-db", "trie-root", @@ -10061,7 +10041,7 @@ dependencies = [ "sp-runtime", "sp-std", "sp-version-proc-macro", - "thiserror 1.0.63", + "thiserror 1.0.69", ] [[package]] @@ -10133,9 +10113,9 @@ checksum = "3b9b39299b249ad65f3b7e96443bad61c02ca5cd3589f46cb6d610a0fd6c0d6a" [[package]] name = "ss58-registry" -version = "1.50.0" +version = "1.51.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "43fce22ed1df64d04b262351c8f9d5c6da4f76f79f25ad15529792f893fad25d" +checksum = "19409f13998e55816d1c728395af0b52ec066206341d939e22e7766df9b494b8" dependencies = [ "Inflector", "num-format", @@ -10165,7 +10145,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8a2a1c578e98c1c16fc3b8ec1328f7659a500737d7a0c6d625e73e830ff9c1f6" dependencies = [ "bitflags 1.3.2", - "cfg_aliases", + "cfg_aliases 0.1.1", "libc", "parking_lot 0.11.2", "parking_lot_core 0.8.6", @@ -10179,7 +10159,7 @@ version = "1.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "70a2595fc3aa78f2d0e45dd425b22282dd863273761cc77780914b2cf3003acf" dependencies = [ - "cfg_aliases", + "cfg_aliases 0.1.1", "memchr", "proc-macro2", "quote", @@ -10190,7 +10170,7 @@ dependencies = [ name = "std-shims" version = "0.1.1" dependencies = [ - "hashbrown 0.14.5", + "hashbrown 0.15.2", "spin 0.9.8", ] @@ -10320,7 +10300,7 @@ dependencies = [ "hyper 0.14.30", "log", "prometheus", - "thiserror 1.0.63", + "thiserror 1.0.69", "tokio", ] @@ -10382,18 +10362,6 @@ dependencies = [ "syn 2.0.90", ] -[[package]] -name = "syn_derive" -version = "0.1.8" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1329189c02ff984e9736652b1631330da25eaa6bc639089ed4915d25446cbe7b" -dependencies = [ - "proc-macro-error", - "proc-macro2", - "quote", - "syn 2.0.90", -] - [[package]] name = "sync_wrapper" version = "0.1.2" @@ -10414,20 +10382,20 @@ dependencies = [ [[package]] name = "system-configuration" -version = "0.5.1" +version = "0.6.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ba3a3adc5c275d719af8cb4272ea1c4a6d668a777f37e115f6d11ddbc1c8e0e7" +checksum = "3c879d448e9d986b661742763247d3693ed13609438cf3d006f51f5368a5ba6b" dependencies = [ - "bitflags 1.3.2", - "core-foundation", + "bitflags 2.6.0", + "core-foundation 0.9.4", "system-configuration-sys", ] [[package]] name = "system-configuration-sys" -version = "0.5.0" +version = "0.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a75fb188eb626b924683e3b95e3a48e63551fcfb51949de2f06a9d91dbee93c9" +checksum = "8e1d1b10ced5ca923a1fcb8d03e96b8d3268065d724548c0211415ff6ac6bac4" dependencies = [ "core-foundation-sys", "libc", @@ -10447,15 +10415,15 @@ checksum = "61c41af27dd6d1e27b1b16b489db798443478cef1f06a660c96db617ba5de3b1" [[package]] name = "tempfile" -version = "3.11.0" +version = "3.14.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b8fcd239983515c23a32fb82099f97d0b11b8c72f654ed659363a95c3dad7a53" +checksum = "28cce251fcbc87fac86a866eeb0d6c2d536fc16d06f184bb61aeae11aa4cee0c" dependencies = [ "cfg-if", "fastrand", "once_cell", "rustix", - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -10470,7 +10438,7 @@ dependencies = [ 
"parity-scale-codec", "patchable-async-sleep", "serai-db", - "thiserror 1.0.63", + "thiserror 2.0.6", "tokio", ] @@ -10491,11 +10459,11 @@ checksum = "3369f5ac52d5eb6ab48c6b4ffdc8efbcad6b89c765749064ba298f2c68a16a76" [[package]] name = "thiserror" -version = "1.0.63" +version = "1.0.69" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c0342370b38b6a11b6cc11d6a805569958d54cfa061a29969c3b5ce2ea405724" +checksum = "b6aaf5339b578ea85b50e080feb250a3e8ae8cfcdff9a461c9ec2904bc923f52" dependencies = [ - "thiserror-impl 1.0.63", + "thiserror-impl 1.0.69", ] [[package]] @@ -10509,9 +10477,9 @@ dependencies = [ [[package]] name = "thiserror-impl" -version = "1.0.63" +version = "1.0.69" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a4558b58466b9ad7ca0f102865eccc95938dca1a74a856f2b57b6629050da261" +checksum = "4fee6c4efc90059e10f81e6d42c60a18f76588c3d74cb83a0b242a2b6c7504c1" dependencies = [ "proc-macro2", "quote", @@ -10550,9 +10518,9 @@ dependencies = [ [[package]] name = "time" -version = "0.3.36" +version = "0.3.37" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5dfd88e563464686c916c7e46e623e520ddc6d79fa6641390f2e3fa86e83e885" +checksum = "35e7868883861bd0e56d9ac6efcaaca0d6d5d82a2a7ec8209ff492c07cf37b21" dependencies = [ "deranged", "itoa", @@ -10571,9 +10539,9 @@ checksum = "ef927ca75afb808a4d64dd374f00a2adf8d0fcff8e7b184af886c3c87ec4a3f3" [[package]] name = "time-macros" -version = "0.2.18" +version = "0.2.19" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3f252a68540fde3a3877aeea552b832b40ab9a69e318efd078774a01ddee1ccf" +checksum = "2834e6017e3e5e4b9834939793b282bc03b37a3336245fa820e35e233e2a85de" dependencies = [ "num-conv", "time-core", @@ -10592,7 +10560,7 @@ dependencies = [ "rand", "rustc-hash 1.1.0", "sha2", - "thiserror 1.0.63", + "thiserror 1.0.69", "unicode-normalization", "wasm-bindgen", "zeroize", @@ -10624,9 +10592,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20" [[package]] name = "tokio" -version = "1.40.0" +version = "1.42.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e2b070231665d27ad9ec9b8df639893f46727666c6767db40317fbe920a5d998" +checksum = "5cec9b21b0450273377fc97bd4c33a8acffc8c996c987a7c5b319a0083707551" dependencies = [ "backtrace", "bytes", @@ -10635,7 +10603,7 @@ dependencies = [ "parking_lot 0.12.3", "pin-project-lite", "signal-hook-registry", - "socket2 0.5.7", + "socket2 0.5.8", "tokio-macros", "windows-sys 0.52.0", ] @@ -10653,20 +10621,19 @@ dependencies = [ [[package]] name = "tokio-rustls" -version = "0.26.0" +version = "0.26.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0c7bc40d0e5a97695bb96e27995cd3a08538541b0a846f65bba7a359f36700d4" +checksum = "5f6d0975eaace0cf0fcadee4e4aaa5da15b5c079146f2cffb67c113be122bf37" dependencies = [ - "rustls 0.23.12", - "rustls-pki-types", + "rustls 0.23.19", "tokio", ] [[package]] name = "tokio-stream" -version = "0.1.16" +version = "0.1.17" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4f4e6ce100d0eb49a2734f8c0812bcd324cf357d21810932c5df6b96ef2b86f1" +checksum = "eca58d7bba4a75707817a2c44174253f9236b2d5fbd055602e9d5c07c139a047" dependencies = [ "futures-core", "pin-project-lite", @@ -10676,9 +10643,9 @@ dependencies = [ [[package]] name = "tokio-util" -version = "0.7.12" +version = "0.7.13" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"61e7c3654c13bcd040d4a03abee2c75b1d14a37b423cf5a813ceae1cc903ec6a" +checksum = "d7fcaa8d55a2bdd6b83ace262b016eca0d79ee02818c5c1bcdf0305114081078" dependencies = [ "bytes", "futures-core", @@ -10724,7 +10691,7 @@ version = "0.19.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1b5bb770da30e5cbfde35a2d7b9b8a2c4b8ef89548a7a6aeab5c9a576e3e7421" dependencies = [ - "indexmap 2.5.0", + "indexmap 2.7.0", "serde", "serde_spanned", "toml_datetime", @@ -10733,13 +10700,13 @@ dependencies = [ [[package]] name = "toml_edit" -version = "0.22.20" +version = "0.22.22" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "583c44c02ad26b0c3f3066fe629275e50627026c51ac2e595cca4c230ce1ce1d" +checksum = "4ae48d6208a266e853d946088ed816055e556cc6028c5e8e2b84d9fa5dd7c7f5" dependencies = [ - "indexmap 2.5.0", + "indexmap 2.7.0", "toml_datetime", - "winnow 0.6.18", + "winnow 0.6.20", ] [[package]] @@ -10748,11 +10715,6 @@ version = "0.4.13" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b8fa9be0de6cf49e536ce1851f987bd21a43b771b09473c3549a6c853db37c1c" dependencies = [ - "futures-core", - "futures-util", - "pin-project", - "pin-project-lite", - "tokio", "tower-layer", "tower-service", "tracing", @@ -10926,7 +10888,7 @@ dependencies = [ "serai-db", "subtle", "tendermint-machine", - "thiserror 1.0.63", + "thiserror 2.0.6", "tokio", "zeroize", ] @@ -10972,7 +10934,7 @@ dependencies = [ "rand", "smallvec", "socket2 0.4.10", - "thiserror 1.0.63", + "thiserror 1.0.69", "tinyvec", "tokio", "tracing", @@ -10988,7 +10950,7 @@ dependencies = [ "async-trait", "cfg-if", "data-encoding", - "enum-as-inner 0.6.0", + "enum-as-inner 0.6.1", "futures-channel", "futures-io", "futures-util", @@ -10997,7 +10959,7 @@ dependencies = [ "once_cell", "rand", "smallvec", - "thiserror 1.0.63", + "thiserror 1.0.69", "tinyvec", "tokio", "tracing", @@ -11019,7 +10981,7 @@ dependencies = [ "rand", "resolv-conf", "smallvec", - "thiserror 1.0.63", + "thiserror 1.0.69", "tokio", "tracing", "trust-dns-proto 0.23.2", @@ -11057,9 +11019,9 @@ checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825" [[package]] name = "ucd-trie" -version = "0.1.6" +version = "0.1.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ed646292ffc8188ef8ea4d1e0e0150fb15a5c2e12ad9b8fc191ae7a8a7f3c4b9" +checksum = "2896d95c02a80c6d6a5d6e953d479f5ddf2dfdb6a244441010e373ac0fb88971" [[package]] name = "uint" @@ -11081,36 +11043,36 @@ checksum = "eaea85b334db583fe3274d12b4cd1880032beab409c0d774be044d4480ab9a94" [[package]] name = "unicode-bidi" -version = "0.3.15" +version = "0.3.17" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "08f95100a766bf4f8f28f90d77e0a5461bbdb219042e7679bebe79004fed8d75" +checksum = "5ab17db44d7388991a428b2ee655ce0c212e862eff1768a455c58f9aad6e7893" [[package]] name = "unicode-ident" -version = "1.0.12" +version = "1.0.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3354b9ac3fae1ff6755cb6db53683adb661634f67557942dea4facebec0fee4b" +checksum = "adb9e6ca4f869e1180728b7950e35922a7fc6397f7b641499e8f3ef06e50dc83" [[package]] name = "unicode-normalization" -version = "0.1.23" +version = "0.1.24" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a56d1686db2308d901306f92a263857ef59ea39678a5458e7cb17f01415101f5" +checksum = "5033c97c4262335cded6d6fc3e5c18ab755e1a3dc96376350f3d8e9f009ad956" dependencies = [ "tinyvec", ] [[package]] name = 
"unicode-width" -version = "0.1.13" +version = "0.1.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0336d538f7abc86d282a4189614dfaa90810dfc2c6f6427eaf88e16311dd225d" +checksum = "7dd6e30e90baa6f72411720665d41d89b9a3d039dc45b8faea1ddd07f617f6af" [[package]] name = "unicode-xid" -version = "0.2.5" +version = "0.2.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "229730647fbc343e3a80e463c1db7f78f3855d3f3739bee0dda773c9a037c90a" +checksum = "ebc1c04c71510c7f702b52b7c350734c9ff1295c464a03335b00bb84fc54f853" [[package]] name = "universal-hash" @@ -11165,9 +11127,9 @@ checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821" [[package]] name = "uuid" -version = "1.10.0" +version = "1.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "81dfa00651efa65069b0b6b651f4aaa31ba9e3c3ce0137aaad053604ee7e0314" +checksum = "f8c5f0a0af699448548ad1a2fbf920fb4bee257eae39953ba95cb84891a0446a" [[package]] name = "valuable" @@ -11229,9 +11191,9 @@ checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423" [[package]] name = "wasm-bindgen" -version = "0.2.93" +version = "0.2.99" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a82edfc16a6c469f5f44dc7b571814045d60404b55a0ee849f9bcfa2e63dd9b5" +checksum = "a474f6281d1d70c17ae7aa6a613c87fce69a127e2624002df63dcb39d6cf6396" dependencies = [ "cfg-if", "once_cell", @@ -11240,13 +11202,12 @@ dependencies = [ [[package]] name = "wasm-bindgen-backend" -version = "0.2.93" +version = "0.2.99" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9de396da306523044d3302746f1208fa71d7532227f15e347e2d93e4145dd77b" +checksum = "5f89bb38646b4f81674e8f5c3fb81b562be1fd936d84320f3264486418519c79" dependencies = [ "bumpalo", "log", - "once_cell", "proc-macro2", "quote", "syn 2.0.90", @@ -11255,21 +11216,22 @@ dependencies = [ [[package]] name = "wasm-bindgen-futures" -version = "0.4.43" +version = "0.4.49" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "61e9300f63a621e96ed275155c108eb6f843b6a26d053f122ab69724559dc8ed" +checksum = "38176d9b44ea84e9184eff0bc34cc167ed044f816accfe5922e54d84cf48eca2" dependencies = [ "cfg-if", "js-sys", + "once_cell", "wasm-bindgen", "web-sys", ] [[package]] name = "wasm-bindgen-macro" -version = "0.2.93" +version = "0.2.99" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "585c4c91a46b072c92e908d99cb1dcdf95c5218eeb6f3bf1efa991ee7a68cccf" +checksum = "2cc6181fd9a7492eef6fef1f33961e3695e4579b9872a6f7c83aee556666d4fe" dependencies = [ "quote", "wasm-bindgen-macro-support", @@ -11277,9 +11239,9 @@ dependencies = [ [[package]] name = "wasm-bindgen-macro-support" -version = "0.2.93" +version = "0.2.99" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "afc340c74d9005395cf9dd098506f7f44e38f2b4a21c6aaacf9a105ea5e1e836" +checksum = "30d7a95b763d3c45903ed6c81f156801839e5ee968bb07e534c44df0fcd330c2" dependencies = [ "proc-macro2", "quote", @@ -11290,9 +11252,9 @@ dependencies = [ [[package]] name = "wasm-bindgen-shared" -version = "0.2.93" +version = "0.2.99" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c62a0a307cb4a311d3a07867860911ca130c3494e8c2719593806c08bc5d0484" +checksum = "943aab3fdaaa029a6e0271b35ea10b72b943135afe9bffca82384098ad0e06a6" [[package]] name = "wasm-encoder" @@ -11323,7 +11285,7 @@ dependencies = [ "strum 0.24.1", "strum_macros 0.24.3", 
"tempfile", - "thiserror 1.0.63", + "thiserror 1.0.69", "wasm-opt-cxx-sys", "wasm-opt-sys", ] @@ -11373,7 +11335,7 @@ version = "0.110.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1dfcdb72d96f01e6c85b6bf20102e7423bdbaad5c337301bab2bbf253d26413c" dependencies = [ - "indexmap 2.5.0", + "indexmap 2.7.0", "semver 1.0.23", ] @@ -11388,7 +11350,7 @@ dependencies = [ "bumpalo", "cfg-if", "fxprof-processed-profile", - "indexmap 2.5.0", + "indexmap 2.7.0", "libc", "log", "object 0.31.1", @@ -11455,7 +11417,7 @@ dependencies = [ "log", "object 0.31.1", "target-lexicon", - "thiserror 1.0.63", + "thiserror 1.0.69", "wasmparser", "wasmtime-cranelift-shared", "wasmtime-environ", @@ -11487,12 +11449,12 @@ dependencies = [ "anyhow", "cranelift-entity", "gimli 0.27.3", - "indexmap 2.5.0", + "indexmap 2.7.0", "log", "object 0.31.1", "serde", "target-lexicon", - "thiserror 1.0.63", + "thiserror 1.0.69", "wasmparser", "wasmtime-types", ] @@ -11554,7 +11516,7 @@ dependencies = [ "anyhow", "cc", "cfg-if", - "indexmap 2.5.0", + "indexmap 2.7.0", "libc", "log", "mach", @@ -11580,7 +11542,7 @@ checksum = "77943729d4b46141538e8d0b6168915dc5f88575ecdfea26753fd3ba8bab244a" dependencies = [ "cranelift-entity", "serde", - "thiserror 1.0.63", + "thiserror 1.0.69", "wasmparser", ] @@ -11611,9 +11573,9 @@ dependencies = [ [[package]] name = "web-sys" -version = "0.3.70" +version = "0.3.76" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "26fdeaafd9bd129f65e7c031593c24d62186301e0c72c8978fa1678be7d532c0" +checksum = "04dd7223427d52553d3702c004d3b2fe07c148165faa56313cb00211e31c12bc" dependencies = [ "js-sys", "wasm-bindgen", @@ -11639,9 +11601,9 @@ dependencies = [ [[package]] name = "wide" -version = "0.7.28" +version = "0.7.30" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b828f995bf1e9622031f8009f8481a85406ce1f4d4588ff746d872043e855690" +checksum = "58e6db2670d2be78525979e9a5f9c69d296fd7d670549fe9ebf70f8708cb5019" dependencies = [ "bytemuck", "safe_arch", @@ -11696,11 +11658,11 @@ dependencies = [ [[package]] name = "windows" -version = "0.54.0" +version = "0.58.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9252e5725dbed82865af151df558e754e4a3c2c30818359eb17465f1346a1b49" +checksum = "dd04d41d93c4992d421894c18c8b43496aa748dd4c081bac0dc93eb0489272b6" dependencies = [ - "windows-core 0.54.0", + "windows-core 0.58.0", "windows-targets 0.52.6", ] @@ -11715,20 +11677,55 @@ dependencies = [ [[package]] name = "windows-core" -version = "0.54.0" +version = "0.58.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "12661b9c89351d684a50a8a643ce5f608e20243b9fb84687800163429f161d65" +checksum = "6ba6d44ec8c2591c134257ce647b7ea6b20335bf6379a27dac5f1641fcf59f99" dependencies = [ + "windows-implement", + "windows-interface", "windows-result", + "windows-strings", "windows-targets 0.52.6", ] [[package]] -name = "windows-result" -version = "0.1.2" +name = "windows-implement" +version = "0.58.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5e383302e8ec8515204254685643de10811af0ed97ea37210dc26fb0032647f8" +checksum = "2bbd5b46c938e506ecbce286b6628a02171d56153ba733b6c741fc627ec9579b" dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.90", +] + +[[package]] +name = "windows-interface" +version = "0.58.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "053c4c462dc91d3b1504c6fe5a726dd15e216ba718e84a0e46a88fbe5ded3515" 
+dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.90", +] + +[[package]] +name = "windows-result" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1d1043d8214f791817bab27572aaa8af63732e11bf84aa21a45a78d6c317ae0e" +dependencies = [ + "windows-targets 0.52.6", +] + +[[package]] +name = "windows-strings" +version = "0.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4cd9b125c486025df0eabcb585e62173c6c9eddcec5d117d3b6e8c30e2ee4d10" +dependencies = [ + "windows-result", "windows-targets 0.52.6", ] @@ -11891,9 +11888,9 @@ dependencies = [ [[package]] name = "winnow" -version = "0.6.18" +version = "0.6.20" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "68a9bda4691f099d435ad181000724da8e5899daa10713c2d432552b9ccd3a6f" +checksum = "36c1fec1a2bb5866f07c25f68c26e565c4c200aebb96d7e55710c19d3e8ac49b" dependencies = [ "memchr", ] @@ -11942,15 +11939,15 @@ dependencies = [ "nom", "oid-registry", "rusticata-macros", - "thiserror 1.0.63", + "thiserror 1.0.69", "time", ] [[package]] name = "xml-rs" -version = "0.8.21" +version = "0.8.24" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "539a77ee7c0de333dcc6da69b177380a0b81e0dacfa4f7344c465a36871ee601" +checksum = "ea8b391c9a790b496184c29f7f93b9ed5b16abb306c05415b68bcc16e4d06432" [[package]] name = "xmltree" diff --git a/common/std-shims/Cargo.toml b/common/std-shims/Cargo.toml index 983a2522..d2bda4b7 100644 --- a/common/std-shims/Cargo.toml +++ b/common/std-shims/Cargo.toml @@ -18,7 +18,7 @@ workspace = true [dependencies] spin = { version = "0.9", default-features = false, features = ["use_ticket_mutex", "lazy"] } -hashbrown = { version = "0.14", default-features = false, features = ["ahash", "inline-more"] } +hashbrown = { version = "0.15", default-features = false, features = ["default-hasher", "inline-more"] } [features] std = [] diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index 85865650..7d48fcc7 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -8,6 +8,7 @@ authors = ["Luke Parker "] keywords = [] edition = "2021" publish = false +rust-version = "1.81" [package.metadata.docs.rs] all-features = true diff --git a/coordinator/tributary/Cargo.toml b/coordinator/tributary/Cargo.toml index b6a5a251..28beb2ab 100644 --- a/coordinator/tributary/Cargo.toml +++ b/coordinator/tributary/Cargo.toml @@ -6,6 +6,7 @@ license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tributary" authors = ["Luke Parker "] edition = "2021" +rust-version = "1.81" [package.metadata.docs.rs] all-features = true @@ -16,7 +17,7 @@ workspace = true [dependencies] async-trait = { version = "0.1", default-features = false } -thiserror = { version = "1", default-features = false } +thiserror = { version = "2", default-features = false, features = ["std"] } subtle = { version = "^2", default-features = false, features = ["std"] } zeroize = { version = "^1.5", default-features = false, features = ["std"] } diff --git a/coordinator/tributary/tendermint/Cargo.toml b/coordinator/tributary/tendermint/Cargo.toml index ac6becfa..807938c8 100644 --- a/coordinator/tributary/tendermint/Cargo.toml +++ b/coordinator/tributary/tendermint/Cargo.toml @@ -6,6 +6,7 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tendermint" authors = ["Luke Parker "] edition = "2021" +rust-version = "1.81" [package.metadata.docs.rs] all-features = true 
@@ -16,7 +17,7 @@ workspace = true [dependencies] async-trait = { version = "0.1", default-features = false } -thiserror = { version = "1", default-features = false } +thiserror = { version = "2", default-features = false, features = ["std"] } hex = { version = "0.4", default-features = false, features = ["std"] } log = { version = "0.4", default-features = false, features = ["std"] } diff --git a/crypto/dkg/Cargo.toml b/crypto/dkg/Cargo.toml index 337b702e..1ff3efb8 100644 --- a/crypto/dkg/Cargo.toml +++ b/crypto/dkg/Cargo.toml @@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dkg" authors = ["Luke Parker "] keywords = ["dkg", "multisig", "threshold", "ff", "group"] edition = "2021" -rust-version = "1.80" +rust-version = "1.81" [package.metadata.docs.rs] all-features = true @@ -17,7 +17,7 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -thiserror = { version = "1", default-features = false, optional = true } +thiserror = { version = "2", default-features = false } rand_core = { version = "0.6", default-features = false } @@ -58,7 +58,7 @@ pasta_curves = "0.5" [features] std = [ - "thiserror", + "thiserror/std", "rand_core/std", diff --git a/crypto/dkg/src/lib.rs b/crypto/dkg/src/lib.rs index 48037bcd..7ab29168 100644 --- a/crypto/dkg/src/lib.rs +++ b/crypto/dkg/src/lib.rs @@ -4,7 +4,6 @@ use core::fmt::{self, Debug}; -#[cfg(feature = "std")] use thiserror::Error; use zeroize::Zeroize; @@ -67,8 +66,7 @@ impl fmt::Display for Participant { } /// Various errors possible during key generation. -#[derive(Clone, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(Error))] +#[derive(Clone, PartialEq, Eq, Debug, Error)] pub enum DkgError { /// A parameter was zero. #[cfg_attr(feature = "std", error("a parameter was 0 (threshold {0}, participants {1})"))] diff --git a/crypto/dleq/Cargo.toml b/crypto/dleq/Cargo.toml index fc25899f..a2b8ad9e 100644 --- a/crypto/dleq/Cargo.toml +++ b/crypto/dleq/Cargo.toml @@ -6,7 +6,7 @@ license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/crypto/dleq" authors = ["Luke Parker "] edition = "2021" -rust-version = "1.79" +rust-version = "1.81" [package.metadata.docs.rs] all-features = true @@ -18,7 +18,7 @@ workspace = true [dependencies] rustversion = "1" -thiserror = { version = "1", optional = true } +thiserror = { version = "2", default-features = false, optional = true } rand_core = { version = "0.6", default-features = false } zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } @@ -44,7 +44,7 @@ dalek-ff-group = { path = "../dalek-ff-group" } transcript = { package = "flexible-transcript", path = "../transcript", features = ["recommended"] } [features] -std = ["rand_core/std", "zeroize/std", "digest/std", "transcript/std", "ff/std", "multiexp?/std"] +std = ["thiserror?/std", "rand_core/std", "zeroize/std", "digest/std", "transcript/std", "ff/std", "multiexp?/std"] serialize = ["std"] # Needed for cross-group DLEqs diff --git a/crypto/dleq/src/cross_group/mod.rs b/crypto/dleq/src/cross_group/mod.rs index 8014ea9f..c530f60a 100644 --- a/crypto/dleq/src/cross_group/mod.rs +++ b/crypto/dleq/src/cross_group/mod.rs @@ -92,7 +92,7 @@ impl Generators { } /// Error for cross-group DLEq proofs. -#[derive(Error, PartialEq, Eq, Debug)] +#[derive(Clone, Copy, PartialEq, Eq, Debug, Error)] pub enum DLEqError { /// Invalid proof length. 
#[error("invalid proof length")] diff --git a/crypto/evrf/embedwards25519/Cargo.toml b/crypto/evrf/embedwards25519/Cargo.toml index eae36b13..bca1f3c2 100644 --- a/crypto/evrf/embedwards25519/Cargo.toml +++ b/crypto/evrf/embedwards25519/Cargo.toml @@ -22,7 +22,7 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] } zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] } subtle = { version = "^2.4", default-features = false, features = ["std"] } -generic-array = { version = "0.14", default-features = false } +generic-array = { version = "1", default-features = false } crypto-bigint = { version = "0.5", default-features = false, features = ["zeroize"] } dalek-ff-group = { path = "../../dalek-ff-group", version = "0.4", default-features = false } diff --git a/crypto/frost/Cargo.toml b/crypto/frost/Cargo.toml index 00b18cd0..1210c9ed 100644 --- a/crypto/frost/Cargo.toml +++ b/crypto/frost/Cargo.toml @@ -17,7 +17,7 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -thiserror = "1" +thiserror = { version = "2", default-features = false, features = ["std"] } rand_core = { version = "0.6", default-features = false, features = ["std"] } rand_chacha = { version = "0.3", default-features = false, features = ["std"] } diff --git a/networks/bitcoin/Cargo.toml b/networks/bitcoin/Cargo.toml index 5ab44cc6..5a3ce471 100644 --- a/networks/bitcoin/Cargo.toml +++ b/networks/bitcoin/Cargo.toml @@ -18,7 +18,7 @@ workspace = true [dependencies] std-shims = { version = "0.1.1", path = "../../common/std-shims", default-features = false } -thiserror = { version = "1", default-features = false, optional = true } +thiserror = { version = "2", default-features = false } zeroize = { version = "^1.5", default-features = false } rand_core = { version = "0.6", default-features = false } @@ -44,7 +44,7 @@ tokio = { version = "1", features = ["macros"] } std = [ "std-shims/std", - "thiserror", + "thiserror/std", "zeroize/std", "rand_core/std", diff --git a/networks/monero/ringct/bulletproofs/Cargo.toml b/networks/monero/ringct/bulletproofs/Cargo.toml index 9c807193..03a23f19 100644 --- a/networks/monero/ringct/bulletproofs/Cargo.toml +++ b/networks/monero/ringct/bulletproofs/Cargo.toml @@ -18,7 +18,7 @@ workspace = true [dependencies] std-shims = { path = "../../../../common/std-shims", version = "^0.1.1", default-features = false } -thiserror = { version = "1", default-features = false, optional = true } +thiserror = { version = "2", default-features = false } rand_core = { version = "0.6", default-features = false } zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } @@ -42,7 +42,7 @@ hex-literal = "0.4" std = [ "std-shims/std", - "thiserror", + "thiserror/std", "rand_core/std", "zeroize/std", diff --git a/networks/monero/ringct/bulletproofs/src/lib.rs b/networks/monero/ringct/bulletproofs/src/lib.rs index 2a789575..5ca8ec36 100644 --- a/networks/monero/ringct/bulletproofs/src/lib.rs +++ b/networks/monero/ringct/bulletproofs/src/lib.rs @@ -45,14 +45,13 @@ use crate::plus::{ mod tests; /// An error from proving/verifying Bulletproofs(+). -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(thiserror::Error))] +#[derive(Clone, Copy, PartialEq, Eq, Debug, thiserror::Error)] pub enum BulletproofError { /// Proving/verifying a Bulletproof(+) range proof with no commitments. 
- #[cfg_attr(feature = "std", error("no commitments to prove the range for"))] + #[error("no commitments to prove the range for")] NoCommitments, /// Proving/verifying a Bulletproof(+) range proof with more commitments than supported. - #[cfg_attr(feature = "std", error("too many commitments to prove the range for"))] + #[error("too many commitments to prove the range for")] TooManyCommitments, } diff --git a/networks/monero/ringct/clsag/Cargo.toml b/networks/monero/ringct/clsag/Cargo.toml index 801717c7..647125cd 100644 --- a/networks/monero/ringct/clsag/Cargo.toml +++ b/networks/monero/ringct/clsag/Cargo.toml @@ -18,7 +18,7 @@ workspace = true [dependencies] std-shims = { path = "../../../../common/std-shims", version = "^0.1.1", default-features = false } -thiserror = { version = "1", default-features = false, optional = true } +thiserror = { version = "2", default-features = false } rand_core = { version = "0.6", default-features = false } zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } @@ -46,7 +46,7 @@ frost = { package = "modular-frost", path = "../../../../crypto/frost", default- std = [ "std-shims/std", - "thiserror", + "thiserror/std", "rand_core/std", "zeroize/std", diff --git a/networks/monero/ringct/clsag/src/lib.rs b/networks/monero/ringct/clsag/src/lib.rs index 0aab537b..d21067ae 100644 --- a/networks/monero/ringct/clsag/src/lib.rs +++ b/networks/monero/ringct/clsag/src/lib.rs @@ -36,29 +36,28 @@ pub use multisig::{ClsagMultisigMaskSender, ClsagAddendum, ClsagMultisig}; mod tests; /// Errors when working with CLSAGs. -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(thiserror::Error))] +#[derive(Clone, Copy, PartialEq, Eq, Debug, thiserror::Error)] pub enum ClsagError { /// The ring was invalid (such as being too small or too large). - #[cfg_attr(feature = "std", error("invalid ring"))] + #[error("invalid ring")] InvalidRing, /// The discrete logarithm of the key, scaling G, wasn't equivalent to the signing ring member. - #[cfg_attr(feature = "std", error("invalid commitment"))] + #[error("invalid commitment")] InvalidKey, /// The commitment opening provided did not match the ring member's. - #[cfg_attr(feature = "std", error("invalid commitment"))] + #[error("invalid commitment")] InvalidCommitment, /// The key image was invalid (such as being identity or torsioned) - #[cfg_attr(feature = "std", error("invalid key image"))] + #[error("invalid key image")] InvalidImage, /// The `D` component was invalid. - #[cfg_attr(feature = "std", error("invalid D"))] + #[error("invalid D")] InvalidD, /// The `s` vector was invalid. - #[cfg_attr(feature = "std", error("invalid s"))] + #[error("invalid s")] InvalidS, /// The `c1` variable was invalid. 
- #[cfg_attr(feature = "std", error("invalid c1"))] + #[error("invalid c1")] InvalidC1, } diff --git a/networks/monero/ringct/mlsag/Cargo.toml b/networks/monero/ringct/mlsag/Cargo.toml index d666ebfa..9c2bd568 100644 --- a/networks/monero/ringct/mlsag/Cargo.toml +++ b/networks/monero/ringct/mlsag/Cargo.toml @@ -18,7 +18,7 @@ workspace = true [dependencies] std-shims = { path = "../../../../common/std-shims", version = "^0.1.1", default-features = false } -thiserror = { version = "1", default-features = false, optional = true } +thiserror = { version = "2", default-features = false } zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } @@ -34,7 +34,7 @@ monero-primitives = { path = "../../primitives", version = "0.1", default-featur std = [ "std-shims/std", - "thiserror", + "thiserror/std", "zeroize/std", diff --git a/networks/monero/ringct/mlsag/src/lib.rs b/networks/monero/ringct/mlsag/src/lib.rs index f5164b88..bda8060e 100644 --- a/networks/monero/ringct/mlsag/src/lib.rs +++ b/networks/monero/ringct/mlsag/src/lib.rs @@ -19,23 +19,22 @@ use monero_generators::{H, hash_to_point}; use monero_primitives::keccak256_to_scalar; /// Errors when working with MLSAGs. -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(thiserror::Error))] +#[derive(Clone, Copy, PartialEq, Eq, Debug, thiserror::Error)] pub enum MlsagError { /// Invalid ring (such as too small or too large). - #[cfg_attr(feature = "std", error("invalid ring"))] + #[error("invalid ring")] InvalidRing, /// Invalid amount of key images. - #[cfg_attr(feature = "std", error("invalid amount of key images"))] + #[error("invalid amount of key images")] InvalidAmountOfKeyImages, /// Invalid ss matrix. - #[cfg_attr(feature = "std", error("invalid ss"))] + #[error("invalid ss")] InvalidSs, /// Invalid key image. - #[cfg_attr(feature = "std", error("invalid key image"))] + #[error("invalid key image")] InvalidKeyImage, /// Invalid ci vector. - #[cfg_attr(feature = "std", error("invalid ci"))] + #[error("invalid ci")] InvalidCi, } diff --git a/networks/monero/rpc/Cargo.toml b/networks/monero/rpc/Cargo.toml index e5e6a650..8f1cfd8e 100644 --- a/networks/monero/rpc/Cargo.toml +++ b/networks/monero/rpc/Cargo.toml @@ -18,7 +18,7 @@ workspace = true [dependencies] std-shims = { path = "../../../common/std-shims", version = "^0.1.1", default-features = false } -thiserror = { version = "1", default-features = false, optional = true } +thiserror = { version = "2", default-features = false } zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } hex = { version = "0.4", default-features = false, features = ["alloc"] } @@ -34,7 +34,7 @@ monero-address = { path = "../wallet/address", default-features = false } std = [ "std-shims/std", - "thiserror", + "thiserror/std", "zeroize/std", "hex/std", diff --git a/networks/monero/rpc/src/lib.rs b/networks/monero/rpc/src/lib.rs index 3c8d337a..8ae8ef01 100644 --- a/networks/monero/rpc/src/lib.rs +++ b/networks/monero/rpc/src/lib.rs @@ -42,34 +42,33 @@ const GRACE_BLOCKS_FOR_FEE_ESTIMATE: u64 = 10; const TXS_PER_REQUEST: usize = 100; /// An error from the RPC. -#[derive(Clone, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(thiserror::Error))] +#[derive(Clone, PartialEq, Eq, Debug, thiserror::Error)] pub enum RpcError { /// An internal error. - #[cfg_attr(feature = "std", error("internal error ({0})"))] + #[error("internal error ({0})")] InternalError(String), /// A connection error with the node. 
- #[cfg_attr(feature = "std", error("connection error ({0})"))] + #[error("connection error ({0})")] ConnectionError(String), /// The node is invalid per the expected protocol. - #[cfg_attr(feature = "std", error("invalid node ({0})"))] + #[error("invalid node ({0})")] InvalidNode(String), /// Requested transactions weren't found. - #[cfg_attr(feature = "std", error("transactions not found"))] + #[error("transactions not found")] TransactionsNotFound(Vec<[u8; 32]>), /// The transaction was pruned. /// /// Pruned transactions are not supported at this time. - #[cfg_attr(feature = "std", error("pruned transaction"))] + #[error("pruned transaction")] PrunedTransaction, /// A transaction (sent or received) was invalid. - #[cfg_attr(feature = "std", error("invalid transaction ({0:?})"))] + #[error("invalid transaction ({0:?})")] InvalidTransaction([u8; 32]), /// The returned fee was unusable. - #[cfg_attr(feature = "std", error("unexpected fee response"))] + #[error("unexpected fee response")] InvalidFee, /// The priority intended for use wasn't usable. - #[cfg_attr(feature = "std", error("invalid priority"))] + #[error("invalid priority")] InvalidPriority, } diff --git a/networks/monero/wallet/Cargo.toml b/networks/monero/wallet/Cargo.toml index 12cc3245..0e151e0f 100644 --- a/networks/monero/wallet/Cargo.toml +++ b/networks/monero/wallet/Cargo.toml @@ -21,7 +21,7 @@ workspace = true [dependencies] std-shims = { path = "../../../common/std-shims", version = "^0.1.1", default-features = false } -thiserror = { version = "1", default-features = false, optional = true } +thiserror = { version = "2", default-features = false } zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } @@ -61,7 +61,7 @@ monero-simple-request-rpc = { path = "../rpc/simple-request", default-features = std = [ "std-shims/std", - "thiserror", + "thiserror/std", "zeroize/std", diff --git a/networks/monero/wallet/address/Cargo.toml b/networks/monero/wallet/address/Cargo.toml index a86ff73c..e2899b50 100644 --- a/networks/monero/wallet/address/Cargo.toml +++ b/networks/monero/wallet/address/Cargo.toml @@ -18,7 +18,7 @@ workspace = true [dependencies] std-shims = { path = "../../../../common/std-shims", version = "^0.1.1", default-features = false } -thiserror = { version = "1", default-features = false, optional = true } +thiserror = { version = "2", default-features = false } zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } @@ -40,7 +40,7 @@ serde_json = { version = "1", default-features = false, features = ["alloc"] } std = [ "std-shims/std", - "thiserror", + "thiserror/std", "zeroize/std", diff --git a/networks/monero/wallet/address/src/lib.rs b/networks/monero/wallet/address/src/lib.rs index 194d4469..24dba053 100644 --- a/networks/monero/wallet/address/src/lib.rs +++ b/networks/monero/wallet/address/src/lib.rs @@ -199,29 +199,25 @@ pub enum Network { } /// Errors when decoding an address. -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(thiserror::Error))] +#[derive(Clone, Copy, PartialEq, Eq, Debug, thiserror::Error)] pub enum AddressError { /// The address had an invalid (network, type) byte. - #[cfg_attr(feature = "std", error("invalid byte for the address's network/type ({0})"))] + #[error("invalid byte for the address's network/type ({0})")] InvalidTypeByte(u8), /// The address wasn't a valid Base58Check (as defined by Monero) string. 
- #[cfg_attr(feature = "std", error("invalid address encoding"))] + #[error("invalid address encoding")] InvalidEncoding, /// The data encoded wasn't the proper length. - #[cfg_attr(feature = "std", error("invalid length"))] + #[error("invalid length")] InvalidLength, /// The address had an invalid key. - #[cfg_attr(feature = "std", error("invalid key"))] + #[error("invalid key")] InvalidKey, /// The address was featured with unrecognized features. - #[cfg_attr(feature = "std", error("unknown features"))] + #[error("unknown features")] UnknownFeatures(u64), /// The network was for a different network than expected. - #[cfg_attr( - feature = "std", - error("different network ({actual:?}) than expected ({expected:?})") - )] + #[error("different network ({actual:?}) than expected ({expected:?})")] DifferentNetwork { /// The Network expected. expected: Network, diff --git a/networks/monero/wallet/polyseed/Cargo.toml b/networks/monero/wallet/polyseed/Cargo.toml index 38861481..b8394ecb 100644 --- a/networks/monero/wallet/polyseed/Cargo.toml +++ b/networks/monero/wallet/polyseed/Cargo.toml @@ -18,7 +18,7 @@ workspace = true [dependencies] std-shims = { path = "../../../../common/std-shims", version = "^0.1.1", default-features = false } -thiserror = { version = "1", default-features = false, optional = true } +thiserror = { version = "2", default-features = false } subtle = { version = "^2.4", default-features = false } zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } @@ -34,7 +34,7 @@ hex = { version = "0.4", default-features = false, features = ["std"] } std = [ "std-shims/std", - "thiserror", + "thiserror/std", "subtle/std", "zeroize/std", diff --git a/networks/monero/wallet/polyseed/src/lib.rs b/networks/monero/wallet/polyseed/src/lib.rs index 8163d3c4..933168e8 100644 --- a/networks/monero/wallet/polyseed/src/lib.rs +++ b/networks/monero/wallet/polyseed/src/lib.rs @@ -106,20 +106,19 @@ const POLYSEED_KEYGEN_ITERATIONS: u32 = 10000; const COIN: u16 = 0; /// An error when working with a Polyseed. -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(thiserror::Error))] +#[derive(Clone, Copy, PartialEq, Eq, Debug, thiserror::Error)] pub enum PolyseedError { /// The seed was invalid. - #[cfg_attr(feature = "std", error("invalid seed"))] + #[error("invalid seed")] InvalidSeed, /// The entropy was invalid. - #[cfg_attr(feature = "std", error("invalid entropy"))] + #[error("invalid entropy")] InvalidEntropy, /// The checksum did not match the data. - #[cfg_attr(feature = "std", error("invalid checksum"))] + #[error("invalid checksum")] InvalidChecksum, /// Unsupported feature bits were set. 
- #[cfg_attr(feature = "std", error("unsupported features"))] + #[error("unsupported features")] UnsupportedFeatures, } diff --git a/networks/monero/wallet/seed/Cargo.toml b/networks/monero/wallet/seed/Cargo.toml index ba28ed0c..f864b35d 100644 --- a/networks/monero/wallet/seed/Cargo.toml +++ b/networks/monero/wallet/seed/Cargo.toml @@ -18,7 +18,7 @@ workspace = true [dependencies] std-shims = { path = "../../../../common/std-shims", version = "^0.1.1", default-features = false } -thiserror = { version = "1", default-features = false, optional = true } +thiserror = { version = "2", default-features = false } zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } rand_core = { version = "0.6", default-features = false } @@ -33,7 +33,7 @@ monero-primitives = { path = "../../primitives", default-features = false, featu std = [ "std-shims/std", - "thiserror", + "thiserror/std", "zeroize/std", "rand_core/std", diff --git a/networks/monero/wallet/seed/src/lib.rs b/networks/monero/wallet/seed/src/lib.rs index 1cbdffa8..e2b9d0e6 100644 --- a/networks/monero/wallet/seed/src/lib.rs +++ b/networks/monero/wallet/seed/src/lib.rs @@ -26,19 +26,18 @@ const SEED_LENGTH: usize = 24; const SEED_LENGTH_WITH_CHECKSUM: usize = 25; /// An error when working with a seed. -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(thiserror::Error))] +#[derive(Clone, Copy, PartialEq, Eq, Debug, thiserror::Error)] pub enum SeedError { - #[cfg_attr(feature = "std", error("invalid seed"))] + #[error("invalid seed")] /// The seed was invalid. InvalidSeed, /// The checksum did not match the data. - #[cfg_attr(feature = "std", error("invalid checksum"))] + #[error("invalid checksum")] InvalidChecksum, /// The deprecated English language option was used with a checksum. /// /// The deprecated English language option did not include a checksum. - #[cfg_attr(feature = "std", error("deprecated English language option included a checksum"))] + #[error("deprecated English language option included a checksum")] DeprecatedEnglishWithChecksum, } diff --git a/networks/monero/wallet/src/scan.rs b/networks/monero/wallet/src/scan.rs index 0de35f35..2190244a 100644 --- a/networks/monero/wallet/src/scan.rs +++ b/networks/monero/wallet/src/scan.rs @@ -67,14 +67,13 @@ impl Timelocked { } /// Errors when scanning a block. -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(thiserror::Error))] +#[derive(Clone, Copy, PartialEq, Eq, Debug, thiserror::Error)] pub enum ScanError { /// The block was for an unsupported protocol version. - #[cfg_attr(feature = "std", error("unsupported protocol version ({0})"))] + #[error("unsupported protocol version ({0})")] UnsupportedProtocol(u8), /// The ScannableBlock was invalid. - #[cfg_attr(feature = "std", error("invalid scannable block ({0})"))] + #[error("invalid scannable block ({0})")] InvalidScannableBlock(&'static str), } diff --git a/networks/monero/wallet/src/send/mod.rs b/networks/monero/wallet/src/send/mod.rs index 3bd883df..f6628846 100644 --- a/networks/monero/wallet/src/send/mod.rs +++ b/networks/monero/wallet/src/send/mod.rs @@ -140,48 +140,44 @@ impl InternalPayment { } /// An error while sending Monero. -#[derive(Clone, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(thiserror::Error))] +#[derive(Clone, PartialEq, Eq, Debug, thiserror::Error)] pub enum SendError { /// The RingCT type to produce proofs for this transaction with weren't supported. 
- #[cfg_attr(feature = "std", error("this library doesn't yet support that RctType"))] + #[error("this library doesn't yet support that RctType")] UnsupportedRctType, /// The transaction had no inputs specified. - #[cfg_attr(feature = "std", error("no inputs"))] + #[error("no inputs")] NoInputs, /// The decoy quantity was invalid for the specified RingCT type. - #[cfg_attr(feature = "std", error("invalid number of decoys"))] + #[error("invalid number of decoys")] InvalidDecoyQuantity, /// The transaction had no outputs specified. - #[cfg_attr(feature = "std", error("no outputs"))] + #[error("no outputs")] NoOutputs, /// The transaction had too many outputs specified. - #[cfg_attr(feature = "std", error("too many outputs"))] + #[error("too many outputs")] TooManyOutputs, /// The transaction did not have a change output, and did not have two outputs. /// /// Monero requires all transactions have at least two outputs, assuming one payment and one /// change (or at least one dummy and one change). Accordingly, specifying no change and only /// one payment prevents creating a valid transaction - #[cfg_attr(feature = "std", error("only one output and no change address"))] + #[error("only one output and no change address")] NoChange, /// Multiple addresses had payment IDs specified. - /// + ///o /// Only one payment ID is allowed per transaction. - #[cfg_attr(feature = "std", error("multiple addresses with payment IDs"))] + #[error("multiple addresses with payment IDs")] MultiplePaymentIds, /// Too much arbitrary data was specified. - #[cfg_attr(feature = "std", error("too much data"))] + #[error("too much data")] TooMuchArbitraryData, /// The created transaction was too large. - #[cfg_attr(feature = "std", error("too large of a transaction"))] + #[error("too large of a transaction")] TooLargeTransaction, /// This transaction could not pay for itself. - #[cfg_attr( - feature = "std", - error( - "not enough funds (inputs {inputs}, outputs {outputs}, necessary_fee {necessary_fee:?})" - ) + #[error( + "not enough funds (inputs {inputs}, outputs {outputs}, necessary_fee {necessary_fee:?})" )] NotEnoughFunds { /// The amount of funds the inputs contributed. @@ -195,20 +191,17 @@ pub enum SendError { necessary_fee: Option, }, /// This transaction is being signed with the wrong private key. - #[cfg_attr(feature = "std", error("wrong spend private key"))] + #[error("wrong spend private key")] WrongPrivateKey, /// This transaction was read from a bytestream which was malicious. - #[cfg_attr( - feature = "std", - error("this SignableTransaction was created by deserializing a malicious serialization") - )] + #[error("this SignableTransaction was created by deserializing a malicious serialization")] MaliciousSerialization, /// There was an error when working with the CLSAGs. - #[cfg_attr(feature = "std", error("clsag error ({0})"))] + #[error("clsag error ({0})")] ClsagError(ClsagError), /// There was an error when working with FROST. #[cfg(feature = "multisig")] - #[cfg_attr(feature = "std", error("frost error {0}"))] + #[error("frost error {0}")] FrostError(FrostError), } diff --git a/networks/monero/wallet/src/view_pair.rs b/networks/monero/wallet/src/view_pair.rs index 3b09f088..c09f2965 100644 --- a/networks/monero/wallet/src/view_pair.rs +++ b/networks/monero/wallet/src/view_pair.rs @@ -10,8 +10,7 @@ use crate::{ }; /// An error while working with a ViewPair. 
-#[derive(Clone, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(thiserror::Error))] +#[derive(Clone, PartialEq, Eq, Debug, thiserror::Error)] pub enum ViewPairError { /// The spend key was torsioned. /// @@ -20,7 +19,7 @@ pub enum ViewPairError { // CLSAG seems to support it if the challenge does a torsion clear, FCMP++ should ship with a // torsion clear, yet it's not worth it to modify CLSAG sign to generate challenges until the // torsion clears and ensure spendability (nor can we reasonably guarantee that in the future) - #[cfg_attr(feature = "std", error("torsioned spend key"))] + #[error("torsioned spend key")] TorsionedSpendKey, } diff --git a/networks/monero/wallet/util/Cargo.toml b/networks/monero/wallet/util/Cargo.toml index 39dfe3a6..14dc5847 100644 --- a/networks/monero/wallet/util/Cargo.toml +++ b/networks/monero/wallet/util/Cargo.toml @@ -18,7 +18,7 @@ workspace = true [dependencies] std-shims = { path = "../../../../common/std-shims", version = "^0.1.1", default-features = false } -thiserror = { version = "1", default-features = false, optional = true } +thiserror = { version = "2", default-features = false } zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } rand_core = { version = "0.6", default-features = false } @@ -36,7 +36,7 @@ curve25519-dalek = { version = "4", default-features = false, features = ["alloc std = [ "std-shims/std", - "thiserror", + "thiserror/std", "zeroize/std", "rand_core/std", diff --git a/networks/monero/wallet/util/src/seed.rs b/networks/monero/wallet/util/src/seed.rs index 77ca3358..50852ea5 100644 --- a/networks/monero/wallet/util/src/seed.rs +++ b/networks/monero/wallet/util/src/seed.rs @@ -11,20 +11,19 @@ use original::{SeedError as OriginalSeedError, Seed as OriginalSeed}; use polyseed::{PolyseedError, Polyseed}; /// An error from working with seeds. -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -#[cfg_attr(feature = "std", derive(thiserror::Error))] +#[derive(Clone, Copy, PartialEq, Eq, Debug, thiserror::Error)] pub enum SeedError { /// The seed was invalid. - #[cfg_attr(feature = "std", error("invalid seed"))] + #[error("invalid seed")] InvalidSeed, /// The entropy was invalid. - #[cfg_attr(feature = "std", error("invalid entropy"))] + #[error("invalid entropy")] InvalidEntropy, /// The checksum did not match the data. - #[cfg_attr(feature = "std", error("invalid checksum"))] + #[error("invalid checksum")] InvalidChecksum, /// Unsupported features were enabled. 
- #[cfg_attr(feature = "std", error("unsupported features"))] + #[error("unsupported features")] UnsupportedFeatures, } diff --git a/substrate/client/Cargo.toml b/substrate/client/Cargo.toml index f8672718..b872335d 100644 --- a/substrate/client/Cargo.toml +++ b/substrate/client/Cargo.toml @@ -18,7 +18,7 @@ workspace = true [dependencies] zeroize = "^1.5" -thiserror = { version = "1", optional = true } +thiserror = { version = "2", default-features = false, optional = true } bitvec = { version = "1", default-features = false, features = ["alloc", "serde"] } @@ -60,7 +60,7 @@ dockertest = "0.5" serai-docker-tests = { path = "../../tests/docker" } [features] -serai = ["thiserror", "serde", "serde_json", "serai-abi/serde", "multiaddr", "sp-core", "sp-runtime", "frame-system", "simple-request"] +serai = ["thiserror/std", "serde", "serde_json", "serai-abi/serde", "multiaddr", "sp-core", "sp-runtime", "frame-system", "simple-request"] borsh = ["serai-abi/borsh"] networks = [] diff --git a/substrate/client/src/serai/mod.rs b/substrate/client/src/serai/mod.rs index c688bf36..8b17d5d1 100644 --- a/substrate/client/src/serai/mod.rs +++ b/substrate/client/src/serai/mod.rs @@ -55,7 +55,7 @@ impl Block { } } -#[derive(Error, Debug)] +#[derive(Debug, Error)] pub enum SeraiError { #[error("failed to communicate with serai")] ConnectionError, diff --git a/substrate/runtime/Cargo.toml b/substrate/runtime/Cargo.toml index 2d914ab1..c718a3a3 100644 --- a/substrate/runtime/Cargo.toml +++ b/substrate/runtime/Cargo.toml @@ -19,7 +19,7 @@ ignored = ["scale", "scale-info"] workspace = true [dependencies] -hashbrown = { version = "0.14", default-features = false, features = ["ahash", "inline-more"] } +hashbrown = { version = "0.15", default-features = false, features = ["default-hasher", "inline-more"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] } scale-info = { version = "2", default-features = false, features = ["derive"] } From 9ccfa8a9f58f0eaa871735d1c71740e0386c6b1d Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 8 Dec 2024 22:01:43 -0500 Subject: [PATCH 202/368] Fix deny --- deny.toml | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/deny.toml b/deny.toml index 51bffbc0..7581a9db 100644 --- a/deny.toml +++ b/deny.toml @@ -11,6 +11,7 @@ ignore = [ "RUSTSEC-2021-0139", # https://github.com/serai-dex/serai/228 "RUSTSEC-2022-0061", # https://github.com/serai-dex/serai/227 "RUSTSEC-2024-0370", # proc-macro-error is unmaintained + "RUSTSEC-2024-0384", # instant is unmaintained ] [licenses] @@ -27,7 +28,8 @@ allow = [ "BSD-2-Clause", "BSD-3-Clause", "ISC", - "Unicode-DFS-2016", + "Zlib", + "Unicode-3.0", "OpenSSL", # Non-invasive copyleft @@ -46,6 +48,7 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-message-queue" }, { allow = ["AGPL-3.0"], name = "serai-processor-messages" }, + { allow = ["AGPL-3.0"], name = "serai-processor-primitives" }, { allow = ["AGPL-3.0"], name = "serai-processor-key-gen" }, { allow = ["AGPL-3.0"], name = "serai-processor-frost-attempt-manager" }, @@ -53,14 +56,15 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-processor-scanner" }, { allow = ["AGPL-3.0"], name = "serai-processor-scheduler-primitives" }, { allow = ["AGPL-3.0"], name = "serai-processor-utxo-scheduler-primitives" }, - { allow = ["AGPL-3.0"], name = "serai-processor-standard-scheduler" }, + { allow = ["AGPL-3.0"], name = "serai-processor-utxo-scheduler" }, { allow = ["AGPL-3.0"], name = 
"serai-processor-transaction-chaining-scheduler" }, { allow = ["AGPL-3.0"], name = "serai-processor-smart-contract-scheduler" }, { allow = ["AGPL-3.0"], name = "serai-processor-signers" }, { allow = ["AGPL-3.0"], name = "serai-bitcoin-processor" }, + { allow = ["AGPL-3.0"], name = "serai-processor-bin" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-primitives" }, - { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-test-primitives" }, + { allow = ["AGPL-3.0"], name = "serai-ethereum-test-primitives" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-deployer" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-router" }, { allow = ["AGPL-3.0"], name = "serai-processor-ethereum-erc20" }, From 5b3c5ec02bbbde03f257010d67d7cc5a9460f578 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 9 Dec 2024 02:00:17 -0500 Subject: [PATCH 203/368] Basic Ethereum escapeHatch test --- .../ethereum/router/contracts/Router.sol | 13 ++++- processor/ethereum/router/src/lib.rs | 52 +++++++++++++++---- processor/ethereum/router/src/tests/mod.rs | 48 ++++++++++++++++- processor/ethereum/src/primitives/mod.rs | 9 ++-- processor/ethereum/src/rpc.rs | 2 +- 5 files changed, 106 insertions(+), 18 deletions(-) diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 5d8211da..12d4fa9c 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -243,6 +243,16 @@ contract Router is IRouterWithoutCollisions { // Re-entrancy doesn't bork this function // slither-disable-next-line reentrancy-events function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable { + // Check there is an active key + if (_seraiKey == bytes32(0)) { + revert InvalidSeraiKey(); + } + + // Don't allow further InInstructions once the escape hatch has been invoked + if (_escapedTo != address(0)) { + revert EscapeHatchInvoked(); + } + // Check the transfer if (coin == address(0)) { if (amount != msg.value) revert AmountMismatchesMsgValue(); @@ -313,7 +323,8 @@ contract Router is IRouterWithoutCollisions { This should be in such excess of the gas requirements of integrated tokens we'll survive repricing, so long as the repricing doesn't revolutionize EVM gas costs as we know it. In such - a case, Serai would have to migrate to a new smart contract using `escapeHatch`. + a case, Serai would have to migrate to a new smart contract using `escapeHatch`. That also + covers all other potential exceptional cases. */ uint256 _gas = 100_000; diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index e28fb2f5..1531a5b9 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -70,16 +70,15 @@ pub enum Coin { /// Ether, the native coin of Ethereum. Ether, /// An ERC20 token. - Erc20([u8; 20]), + Erc20(Address), } impl Coin { fn address(&self) -> Address { - (match self { - Coin::Ether => [0; 20], + match self { + Coin::Ether => [0; 20].into(), Coin::Erc20(address) => *address, - }) - .into() + } } /// Read a `Coin`. 
@@ -91,7 +90,7 @@ impl Coin {
       1 => {
         let mut address = [0; 20];
         reader.read_exact(&mut address)?;
-        Coin::Erc20(address)
+        Coin::Erc20(address.into())
       }
       _ => Err(io::Error::other("unrecognized Coin type"))?,
     })
@@ -103,7 +102,7 @@
       Coin::Ether => writer.write_all(&[0]),
       Coin::Erc20(token) => {
         writer.write_all(&[1])?;
-        writer.write_all(token)
+        writer.write_all(token.as_ref())
       }
     }
   }
@@ -275,10 +274,12 @@ impl Executed {
 #[derive(Clone, Debug)]
 pub struct Router(Arc<RootProvider<SimpleRequest>>, Address);
 impl Router {
-  const DEPLOYMENT_GAS: u64 = 995_000;
+  const DEPLOYMENT_GAS: u64 = 1_000_000;
   const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 58_000;
   const UPDATE_SERAI_KEY_GAS: u64 = 61_000;
   const EXECUTE_BASE_GAS: u64 = 48_000;
+  const ESCAPE_HATCH_GAS: u64 = 58_000;
+  const ESCAPE_GAS: u64 = 200_000;
 
   fn code() -> Vec<u8> {
     const BYTECODE: &[u8] =
@@ -395,11 +396,40 @@
     }
   }
 
+  /// Get the message to be signed in order to trigger the escape hatch.
+  pub fn escape_hatch_message(nonce: u64, escape_to: Address) -> Vec<u8> {
+    abi::escapeHatchCall::new((
+      abi::Signature { c: U256::try_from(nonce).unwrap().into(), s: U256::ZERO.into() },
+      escape_to,
+    ))
+    .abi_encode()
+  }
+
+  /// Construct a transaction to trigger the escape hatch.
+  pub fn escape_hatch(&self, escape_to: Address, sig: &Signature) -> TxLegacy {
+    TxLegacy {
+      to: TxKind::Call(self.1),
+      input: abi::escapeHatchCall::new((abi::Signature::from(sig), escape_to)).abi_encode().into(),
+      gas_limit: Self::ESCAPE_HATCH_GAS * 120 / 100,
+      ..Default::default()
+    }
+  }
+
+  /// Construct a transaction to escape coins via the escape hatch.
+  pub fn escape(&self, coin: Address) -> TxLegacy {
+    TxLegacy {
+      to: TxKind::Call(self.1),
+      input: abi::escapeCall::new((coin,)).abi_encode().into(),
+      gas_limit: Self::ESCAPE_GAS,
+      ..Default::default()
+    }
+  }
+
   /// Fetch the `InInstruction`s emitted by the Router from this block.
   pub async fn in_instructions(
     &self,
     block: u64,
-    allowed_tokens: &HashSet<[u8; 20]>,
+    allowed_tokens: &HashSet<Address>,
  ) -> Result<Vec<InInstruction>, RpcError<TransportErrorKind>> {
     // The InInstruction events for this block
     let filter = Filter::new().from_block(block).to_block(block).address(self.1);
@@ -451,7 +481,7 @@
       let coin = if log.coin.0 == [0; 20] {
         Coin::Ether
       } else {
-        let token = *log.coin.0;
+        let token = log.coin;
 
         if !allowed_tokens.contains(&token) {
           continue;
@@ -490,7 +520,7 @@
         }
 
         // Check if this log is from the token we expected to be transferred
-        if tx_log.address().0 != token {
+        if tx_log.address() != token {
           continue;
         }
         // Check if this is a transfer log
diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs
index 107723f8..e5f8f41e 100644
--- a/processor/ethereum/router/src/tests/mod.rs
+++ b/processor/ethereum/router/src/tests/mod.rs
@@ -177,7 +177,6 @@ async fn test_update_serai_key() {
 #[tokio::test]
 async fn test_eth_in_instruction() {
   let (_anvil, provider, router, key) = setup_test().await;
-  // TODO: Do we want to allow InInstructions before any key has been confirmed?
   confirm_next_serai_key(&provider, &router, 1, key).await;
 
   let amount = U256::try_from(OsRng.next_u64()).unwrap();
@@ -291,7 +290,52 @@ async fn test_erc20_code_out_instruction() {
   todo!("TODO")
 }
 
+async fn escape_hatch(
+  provider: &Arc<RootProvider<SimpleRequest>>,
+  router: &Router,
+  nonce: u64,
+  key: (Scalar, PublicKey),
+  escape_to: Address,
+) -> TransactionReceipt {
+  let msg = Router::escape_hatch_message(nonce, escape_to);
+
+  let nonce = Scalar::random(&mut OsRng);
+  let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &key.1, &msg);
+  let s = nonce + (c * key.0);
+
+  let sig = Signature::new(c, s).unwrap();
+
+  let mut tx = router.escape_hatch(escape_to, &sig);
+  tx.gas_price = 100_000_000_000;
+  let tx = ethereum_primitives::deterministically_sign(&tx);
+  let receipt = ethereum_test_primitives::publish_tx(provider, tx).await;
+  assert!(receipt.status());
+  assert_eq!(u128::from(Router::ESCAPE_HATCH_GAS), ((receipt.gas_used + 1000) / 1000) * 1000);
+  receipt
+}
+
+async fn escape(
+  provider: &Arc<RootProvider<SimpleRequest>>,
+  router: &Router,
+  coin: Coin,
+) -> TransactionReceipt {
+  let mut tx = router.escape(coin.address());
+  tx.gas_price = 100_000_000_000;
+  let tx = ethereum_primitives::deterministically_sign(&tx);
+  let receipt = ethereum_test_primitives::publish_tx(provider, tx).await;
+  assert!(receipt.status());
+  receipt
+}
+
 #[tokio::test]
 async fn test_escape_hatch() {
-  todo!("TODO")
+  let (_anvil, provider, router, key) = setup_test().await;
+  confirm_next_serai_key(&provider, &router, 1, key).await;
+  let escape_to: Address = {
+    let mut escape_to = [0; 20];
+    OsRng.fill_bytes(&mut escape_to);
+    escape_to.into()
+  };
+  escape_hatch(&provider, &router, 2, key, escape_to).await;
+  escape(&provider, &router, Coin::Ether).await;
 }
diff --git a/processor/ethereum/src/primitives/mod.rs b/processor/ethereum/src/primitives/mod.rs
index 197acf8f..39f1eb94 100644
--- a/processor/ethereum/src/primitives/mod.rs
+++ b/processor/ethereum/src/primitives/mod.rs
@@ -1,3 +1,5 @@
+use alloy_core::primitives::{FixedBytes, Address};
+
 use serai_client::primitives::Amount;
 
 pub(crate) mod output;
@@ -5,13 +7,14 @@ pub(crate) mod transaction;
 pub(crate) mod machine;
 pub(crate) mod block;
 
-pub(crate) const DAI: [u8; 20] =
+pub(crate) const DAI: Address = Address(FixedBytes(
   match const_hex::const_decode_to_array(b"0x6B175474E89094C44Da98b954EedeAC495271d0F") {
     Ok(res) => res,
     Err(_) => panic!("invalid non-test DAI hex address"),
-  };
+  },
+));
 
-pub(crate) const TOKENS: [[u8; 20]; 1] = [DAI];
+pub(crate) const
TOKENS: [Address; 1] = [DAI]; // 8 decimals, so 1_000_000_00 would be 1 ETH. This is 0.0015 ETH (5 USD if Ether is ~3300 USD). #[allow(clippy::inconsistent_digit_grouping)] diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs index 7f8a422b..610eb491 100644 --- a/processor/ethereum/src/rpc.rs +++ b/processor/ethereum/src/rpc.rs @@ -165,7 +165,7 @@ impl ScannerFeed for Rpc { let mut instructions = router.in_instructions(block.number, &HashSet::from(TOKENS)).await?; for token in TOKENS { - for TopLevelTransfer { id, from, amount, data } in Erc20::new(provider.clone(), token) + for TopLevelTransfer { id, from, amount, data } in Erc20::new(provider.clone(), **token) .top_level_transfers(block.number, router.address()) .await? { From 9593a428e37a0d2f61bf31ca0e389691018b7c06 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 11 Dec 2024 01:02:58 -0500 Subject: [PATCH 204/368] alloy 0.8 --- Cargo.lock | 92 +++++++++---------- .../alloy-simple-request-transport/Cargo.toml | 4 +- networks/ethereum/schnorr/Cargo.toml | 8 +- processor/ethereum/Cargo.toml | 8 +- processor/ethereum/deployer/Cargo.toml | 8 +- processor/ethereum/erc20/Cargo.toml | 6 +- processor/ethereum/primitives/Cargo.toml | 2 +- processor/ethereum/router/Cargo.toml | 12 +-- processor/ethereum/test-primitives/Cargo.toml | 6 +- 9 files changed, 73 insertions(+), 73 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 12da7bf1..93a8a9db 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -112,9 +112,9 @@ dependencies = [ [[package]] name = "alloy-consensus" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a101d4d016f47f13890a74290fdd17b05dd175191d9337bc600791fb96e4dea8" +checksum = "8ba14856660f31807ebb26ce8f667e814c72694e1077e97ef102e326ad580f3f" dependencies = [ "alloy-eips", "alloy-primitives", @@ -130,9 +130,9 @@ dependencies = [ [[package]] name = "alloy-consensus-any" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fa60357dda9a3d0f738f18844bd6d0f4a5924cc5cf00bfad2ff1369897966123" +checksum = "28666307e76441e7af37a2b90cde7391c28112121bea59f4e0d804df8b20057e" dependencies = [ "alloy-consensus", "alloy-eips", @@ -177,9 +177,9 @@ dependencies = [ [[package]] name = "alloy-eips" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8b6755b093afef5925f25079dd5a7c8d096398b804ba60cb5275397b06b31689" +checksum = "47e922d558006ba371681d484d12aa73fe673d84884f83747730af7433c0e86d" dependencies = [ "alloy-eip2930", "alloy-eip7702", @@ -195,9 +195,9 @@ dependencies = [ [[package]] name = "alloy-genesis" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "aeec8e6eab6e52b7c9f918748c9b811e87dbef7312a2e3a2ca1729a92966a6af" +checksum = "5dca170827a7ca156b43588faebf9e9d27c27d0fb07cab82cfd830345e2b24f5" dependencies = [ "alloy-primitives", "alloy-serde", @@ -207,9 +207,9 @@ dependencies = [ [[package]] name = "alloy-json-rpc" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4fa077efe0b834bcd89ff4ba547f48fb081e4fdc3673dd7da1b295a2cf2bb7b7" +checksum = "9335278f50b0273e0a187680ee742bb6b154a948adf036f448575bacc5ccb315" dependencies = [ "alloy-primitives", "alloy-sol-types", @@ -221,9 +221,9 @@ dependencies = [ [[package]] name = "alloy-network" -version = "0.7.3" +version = "0.8.0" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "209a1882a08e21aca4aac6e2a674dc6fcf614058ef8cb02947d63782b1899552" +checksum = "ad4e6ad4230df8c4a254c20f8d6a84ab9df151bfca13f463177dbc96571cc1f8" dependencies = [ "alloy-consensus", "alloy-consensus-any", @@ -246,9 +246,9 @@ dependencies = [ [[package]] name = "alloy-network-primitives" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c20219d1ad261da7a6331c16367214ee7ded41d001fabbbd656fbf71898b2773" +checksum = "c4df88a2f8020801e0fefce79471d3946d39ca3311802dbbd0ecfdeee5e972e3" dependencies = [ "alloy-consensus", "alloy-eips", @@ -259,9 +259,9 @@ dependencies = [ [[package]] name = "alloy-node-bindings" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bffcf33dd319f21cd6f066d81cbdef0326d4bdaaf7cfe91110bc090707858e9f" +checksum = "2db5cefbc736b2b26a960dcf82279c70a03695dd11a0032a6dc27601eeb29182" dependencies = [ "alloy-genesis", "alloy-primitives", @@ -276,9 +276,9 @@ dependencies = [ [[package]] name = "alloy-primitives" -version = "0.8.14" +version = "0.8.15" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9db948902dfbae96a73c2fbf1f7abec62af034ab883e4c777c3fd29702bd6e2c" +checksum = "6259a506ab13e1d658796c31e6e39d2e2ee89243bcc505ddc613b35732e0a430" dependencies = [ "alloy-rlp", "bytes", @@ -304,9 +304,9 @@ dependencies = [ [[package]] name = "alloy-provider" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9eefa6f4c798ad01f9b4202d02cea75f5ec11fa180502f4701e2b47965a8c0bb" +checksum = "5115c74c037714e1b02a86f742289113afa5d494b5ea58308ba8aa378e739101" dependencies = [ "alloy-chains", "alloy-consensus", @@ -360,9 +360,9 @@ dependencies = [ [[package]] name = "alloy-rpc-client" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ed30bf1041e84cabc5900f52978ca345dd9969f2194a945e6fdec25b0620705c" +checksum = "5c6a0bd0ce5660ac48e4f3bb0c7c5c3a94db287a0be94971599d83928476cbcd" dependencies = [ "alloy-json-rpc", "alloy-primitives", @@ -381,9 +381,9 @@ dependencies = [ [[package]] name = "alloy-rpc-types-any" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "200661999b6e235d9840be5d60a6e8ae2f0af9eb2a256dd378786744660e36ec" +checksum = "ea98f81bcd759dbfa3601565f9d7a02220d8ef1d294ec955948b90aaafbfd857" dependencies = [ "alloy-consensus-any", "alloy-rpc-types-eth", @@ -392,9 +392,9 @@ dependencies = [ [[package]] name = "alloy-rpc-types-eth" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a0600b8b5e2dc0cab12cbf91b5a885c35871789fb7b3a57b434bd4fced5b7a8b" +checksum = "0e518b0a7771e00728f18be0708f828b18a1cfc542a7153bef630966a26388e0" dependencies = [ "alloy-consensus", "alloy-consensus-any", @@ -412,9 +412,9 @@ dependencies = [ [[package]] name = "alloy-serde" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9afa753a97002a33b2ccb707d9f15f31c81b8c1b786c95b73cc62bb1d1fd0c3f" +checksum = "ed3dc8d4a08ffc90c1381d39a4afa2227668259a42c97ab6eecf51cbd82a8761" dependencies = [ "alloy-primitives", "serde", @@ -423,9 +423,9 @@ dependencies = [ [[package]] name = "alloy-signer" -version = "0.7.3" +version = "0.8.0" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "9b2cbff01a673936c2efd7e00d4c0e9a4dbbd6d600e2ce298078d33efbb19cd7" +checksum = "16188684100f6e0f2a2b949968fe3007749c5be431549064a1bce4e7b3a196a9" dependencies = [ "alloy-primitives", "async-trait", @@ -448,9 +448,9 @@ dependencies = [ [[package]] name = "alloy-sol-macro" -version = "0.8.14" +version = "0.8.15" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3bfd7853b65a2b4f49629ec975fee274faf6dff15ab8894c620943398ef283c0" +checksum = "d9d64f851d95619233f74b310f12bcf16e0cbc27ee3762b6115c14a84809280a" dependencies = [ "alloy-sol-macro-expander", "alloy-sol-macro-input", @@ -462,9 +462,9 @@ dependencies = [ [[package]] name = "alloy-sol-macro-expander" -version = "0.8.14" +version = "0.8.15" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "82ec42f342d9a9261699f8078e57a7a4fda8aaa73c1a212ed3987080e6a9cd13" +checksum = "6bf7ed1574b699f48bf17caab4e6e54c6d12bc3c006ab33d58b1e227c1c3559f" dependencies = [ "alloy-sol-macro-input", "const-hex", @@ -480,9 +480,9 @@ dependencies = [ [[package]] name = "alloy-sol-macro-input" -version = "0.8.14" +version = "0.8.15" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ed2c50e6a62ee2b4f7ab3c6d0366e5770a21cad426e109c2f40335a1b3aff3df" +checksum = "8c02997ccef5f34f9c099277d4145f183b422938ed5322dc57a089fe9b9ad9ee" dependencies = [ "const-hex", "dunce", @@ -495,9 +495,9 @@ dependencies = [ [[package]] name = "alloy-sol-types" -version = "0.8.14" +version = "0.8.15" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c9dc0fffe397aa17628160e16b89f704098bf3c9d74d5d369ebc239575936de5" +checksum = "1174cafd6c6d810711b4e00383037bdb458efc4fe3dbafafa16567e0320c54d8" dependencies = [ "alloy-primitives", "alloy-sol-macro", @@ -506,9 +506,9 @@ dependencies = [ [[package]] name = "alloy-transport" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d69d36982b9e46075ae6b792b0f84208c6c2c15ad49f6c500304616ef67b70e0" +checksum = "628be5b9b75e4f4c4f2a71d985bbaca4f23de356dc83f1625454c505f5eef4df" dependencies = [ "alloy-json-rpc", "base64 0.22.1", @@ -526,9 +526,9 @@ dependencies = [ [[package]] name = "alloy-transport-http" -version = "0.7.3" +version = "0.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2e02ffd5d93ffc51d72786e607c97de3b60736ca3e636ead0ec1f7dce68ea3fd" +checksum = "4e24412cf72f79c95cd9b1d9482e3a31f9d94c24b43c4b3b710cc8d4341eaab0" dependencies = [ "alloy-transport", "url", @@ -3547,7 +3547,7 @@ dependencies = [ "httpdate", "itoa", "pin-project-lite", - "socket2 0.4.10", + "socket2 0.5.8", "tokio", "tower-service", "tracing", @@ -4090,7 +4090,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fc2f4eb4bc735547cfed7c0a4922cbd04a4655978c09b54f1f7b228750664c34" dependencies = [ "cfg-if", - "windows-targets 0.48.5", + "windows-targets 0.52.6", ] [[package]] @@ -10352,9 +10352,9 @@ dependencies = [ [[package]] name = "syn-solidity" -version = "0.8.14" +version = "0.8.15" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "da0523f59468a2696391f2a772edc089342aacd53c3caa2ac3264e598edf119b" +checksum = "219389c1ebe89f8333df8bdfb871f6631c552ff399c23cac02480b6088aad8f0" dependencies = [ "paste", "proc-macro2", diff --git a/networks/ethereum/alloy-simple-request-transport/Cargo.toml b/networks/ethereum/alloy-simple-request-transport/Cargo.toml 
index b9769263..c470c727 100644 --- a/networks/ethereum/alloy-simple-request-transport/Cargo.toml +++ b/networks/ethereum/alloy-simple-request-transport/Cargo.toml @@ -21,8 +21,8 @@ tower = "0.5" serde_json = { version = "1", default-features = false } simple-request = { path = "../../../common/request", version = "0.1", default-features = false } -alloy-json-rpc = { version = "0.7", default-features = false } -alloy-transport = { version = "0.7", default-features = false } +alloy-json-rpc = { version = "0.8", default-features = false } +alloy-transport = { version = "0.8", default-features = false } [features] default = ["tls"] diff --git a/networks/ethereum/schnorr/Cargo.toml b/networks/ethereum/schnorr/Cargo.toml index 9c88e7c0..90c9d418 100644 --- a/networks/ethereum/schnorr/Cargo.toml +++ b/networks/ethereum/schnorr/Cargo.toml @@ -33,10 +33,10 @@ alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-rpc-types-eth = { version = "0.7", default-features = false } -alloy-rpc-client = { version = "0.7", default-features = false } -alloy-provider = { version = "0.7", default-features = false } +alloy-rpc-types-eth = { version = "0.8", default-features = false } +alloy-rpc-client = { version = "0.8", default-features = false } +alloy-provider = { version = "0.8", default-features = false } -alloy-node-bindings = { version = "0.7", default-features = false } +alloy-node-bindings = { version = "0.8", default-features = false } tokio = { version = "1", default-features = false, features = ["macros"] } diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml index 4a26e11f..282af928 100644 --- a/processor/ethereum/Cargo.toml +++ b/processor/ethereum/Cargo.toml @@ -34,11 +34,11 @@ k256 = { version = "^0.13.1", default-features = false, features = ["std"] } alloy-core = { version = "0.8", default-features = false } alloy-rlp = { version = "0.3", default-features = false } -alloy-rpc-types-eth = { version = "0.7", default-features = false } -alloy-transport = { version = "0.7", default-features = false } +alloy-rpc-types-eth = { version = "0.8", default-features = false } +alloy-transport = { version = "0.8", default-features = false } alloy-simple-request-transport = { path = "../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-rpc-client = { version = "0.7", default-features = false } -alloy-provider = { version = "0.7", default-features = false } +alloy-rpc-client = { version = "0.8", default-features = false } +alloy-provider = { version = "0.8", default-features = false } serai-client = { path = "../../substrate/client", default-features = false, features = ["ethereum"] } diff --git a/processor/ethereum/deployer/Cargo.toml b/processor/ethereum/deployer/Cargo.toml index 7cbd57d0..ba523b42 100644 --- a/processor/ethereum/deployer/Cargo.toml +++ b/processor/ethereum/deployer/Cargo.toml @@ -22,12 +22,12 @@ alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } alloy-sol-macro = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.7", default-features = false } +alloy-consensus = { version = "0.8", default-features = false } -alloy-rpc-types-eth = { version = "0.7", default-features = false } -alloy-transport = { version = "0.7", default-features 
= false } +alloy-rpc-types-eth = { version = "0.8", default-features = false } +alloy-transport = { version = "0.8", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.7", default-features = false } +alloy-provider = { version = "0.8", default-features = false } ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false } diff --git a/processor/ethereum/erc20/Cargo.toml b/processor/ethereum/erc20/Cargo.toml index ad1508f6..ad6ca6ad 100644 --- a/processor/ethereum/erc20/Cargo.toml +++ b/processor/ethereum/erc20/Cargo.toml @@ -22,9 +22,9 @@ alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } alloy-sol-macro = { version = "0.8", default-features = false } -alloy-rpc-types-eth = { version = "0.7", default-features = false } -alloy-transport = { version = "0.7", default-features = false } +alloy-rpc-types-eth = { version = "0.8", default-features = false } +alloy-transport = { version = "0.8", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.7", default-features = false } +alloy-provider = { version = "0.8", default-features = false } tokio = { version = "1", default-features = false, features = ["rt"] } diff --git a/processor/ethereum/primitives/Cargo.toml b/processor/ethereum/primitives/Cargo.toml index 7070d256..33740c7f 100644 --- a/processor/ethereum/primitives/Cargo.toml +++ b/processor/ethereum/primitives/Cargo.toml @@ -21,4 +21,4 @@ group = { version = "0.13", default-features = false } k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic"] } alloy-core = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.7", default-features = false, features = ["k256"] } +alloy-consensus = { version = "0.8", default-features = false, features = ["k256"] } diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index 6d83d632..97594de0 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -24,12 +24,12 @@ alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } alloy-sol-macro = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.7", default-features = false } +alloy-consensus = { version = "0.8", default-features = false } -alloy-rpc-types-eth = { version = "0.7", default-features = false } -alloy-transport = { version = "0.7", default-features = false } +alloy-rpc-types-eth = { version = "0.8", default-features = false } +alloy-transport = { version = "0.8", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.7", default-features = false } +alloy-provider = { version = "0.8", default-features = false } ethereum-schnorr = { package = "ethereum-schnorr-contract", path = "../../../networks/ethereum/schnorr", default-features = false } @@ -53,8 +53,8 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] } k256 = { version = "0.13", default-features = false, features = ["std"] } -alloy-rpc-client = { version = 
"0.7", default-features = false } -alloy-node-bindings = { version = "0.7", default-features = false } +alloy-rpc-client = { version = "0.8", default-features = false } +alloy-node-bindings = { version = "0.8", default-features = false } tokio = { version = "1.0", default-features = false, features = ["rt-multi-thread", "macros"] } diff --git a/processor/ethereum/test-primitives/Cargo.toml b/processor/ethereum/test-primitives/Cargo.toml index 3f025662..d77e7d6e 100644 --- a/processor/ethereum/test-primitives/Cargo.toml +++ b/processor/ethereum/test-primitives/Cargo.toml @@ -20,10 +20,10 @@ workspace = true k256 = { version = "0.13", default-features = false, features = ["std"] } alloy-core = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.7", default-features = false, features = ["std"] } +alloy-consensus = { version = "0.8", default-features = false, features = ["std"] } -alloy-rpc-types-eth = { version = "0.7", default-features = false } +alloy-rpc-types-eth = { version = "0.8", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.7", default-features = false } +alloy-provider = { version = "0.8", default-features = false } ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false } From 066aa9eda4236fe26587e0980c5cff6455c2336e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 12 Dec 2024 00:45:19 -0500 Subject: [PATCH 205/368] cargo update Resolves RUSTSEC-2024-0421 --- Cargo.lock | 105 +++++++++++++++++++++++++++++++++++------------------ 1 file changed, 69 insertions(+), 36 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 93a8a9db..444b1756 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -144,9 +144,9 @@ dependencies = [ [[package]] name = "alloy-core" -version = "0.8.14" +version = "0.8.15" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c3d14d531c99995de71558e8e2206c27d709559ee8e5a0452b965ea82405a013" +checksum = "c618bd382f0bc2ac26a7e4bfae01c9b015ca8f21b37ca40059ae35a7e62b3dc6" dependencies = [ "alloy-primitives", ] @@ -374,7 +374,7 @@ dependencies = [ "serde_json", "tokio", "tokio-stream", - "tower 0.5.1", + "tower 0.5.2", "tracing", "wasmtimer", ] @@ -443,7 +443,7 @@ dependencies = [ "alloy-transport", "serde_json", "simple-request", - "tower 0.5.1", + "tower 0.5.2", ] [[package]] @@ -518,7 +518,7 @@ dependencies = [ "serde_json", "thiserror 2.0.6", "tokio", - "tower 0.5.1", + "tower 0.5.2", "tracing", "url", "wasmtimer", @@ -1347,9 +1347,9 @@ dependencies = [ [[package]] name = "bstr" -version = "1.11.0" +version = "1.11.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1a68f1f47cdf0ec8ee4b941b2eee2a80cb796db73118c0dd09ac63fbe405be22" +checksum = "786a307d683a5bf92e6fd5fd69a7eb613751668d1d8d67d802846dfe367c62c8" dependencies = [ "memchr", "serde", @@ -1532,9 +1532,9 @@ dependencies = [ [[package]] name = "chrono" -version = "0.4.38" +version = "0.4.39" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a21f936df1771bf62b77f047b726c4625ff2e8aa607c01ec06e5a05bd8463401" +checksum = "7e36cc9d416881d2e24f9a963be5fb1cd90966419ac844274161d10488b3e825" dependencies = [ "android-tzdata", "iana-time-zone", @@ -3547,7 +3547,7 @@ dependencies = [ "httpdate", "itoa", "pin-project-lite", - "socket2 0.5.8", + "socket2 0.4.10", "tokio", "tower-service", "tracing", @@ 
-3599,7 +3599,7 @@ dependencies = [ "http 1.2.0", "hyper 1.4.1", "hyper-util", - "rustls 0.23.19", + "rustls 0.23.20", "rustls-native-certs", "rustls-pki-types", "tokio", @@ -3687,14 +3687,35 @@ dependencies = [ [[package]] name = "idna" -version = "0.5.0" +version = "1.0.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "634d9b1461af396cad843f47fdba5597a4f9e6ddd4bfb6ff5d85028c25cb12f6" +checksum = "686f825264d630750a544639377bae737628043f20d38bbc029e8f29ea968a7e" dependencies = [ + "idna_adapter", + "smallvec", + "utf8_iter", +] + +[[package]] +name = "idna_adapter" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "279259b0ac81c89d11c290495fdcfa96ea3643b7df311c138b6fe8ca5237f0f8" +dependencies = [ + "idna_mapping", "unicode-bidi", "unicode-normalization", ] +[[package]] +name = "idna_mapping" +version = "1.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d5422cc5bc64289a77dbb45e970b86b5e9a04cb500abc7240505aedc1bf40f38" +dependencies = [ + "unicode-joining-type", +] + [[package]] name = "if-addrs" version = "0.10.2" @@ -4079,9 +4100,9 @@ checksum = "884e2677b40cc8c339eaefcb701c32ef1fd2493d71118dc0ca4b6a736c93bd67" [[package]] name = "libc" -version = "0.2.167" +version = "0.2.168" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "09d6582e104315a817dff97f75133544b2e094ee22447d2acf4a74e189ba06fc" +checksum = "5aaeb2981e0606ca11d79718f8bb01164f1d6ed75080182d3abf017e6d244b6d" [[package]] name = "libloading" @@ -5422,9 +5443,9 @@ dependencies = [ [[package]] name = "netlink-sys" -version = "0.8.6" +version = "0.8.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "416060d346fbaf1f23f9512963e3e878f1a78e707cb699ba9215761754244307" +checksum = "16c903aa70590cb93691bf97a767c8d1d6122d2cc9070433deb3bbf36ce8bd23" dependencies = [ "bytes", "futures", @@ -6575,9 +6596,9 @@ dependencies = [ [[package]] name = "redox_syscall" -version = "0.5.7" +version = "0.5.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9b6dfecf2c74bce2466cabf93f6664d6998a69eb21e39f4207930065b27b771f" +checksum = "03a862b389f93e68874fbf580b9de08dd02facb9a788ebadaf4a3fd33cf58834" dependencies = [ "bitflags 2.6.0", ] @@ -6903,9 +6924,9 @@ dependencies = [ [[package]] name = "rustls" -version = "0.23.19" +version = "0.23.20" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "934b404430bb06b3fae2cba809eb45a1ab1aecd64491213d7c3301b88393f8d1" +checksum = "5065c3f250cbd332cd894be57c40fa52387247659b14a2d6041d121547903b1b" dependencies = [ "once_cell", "ring 0.17.8", @@ -9172,9 +9193,9 @@ dependencies = [ [[package]] name = "serde" -version = "1.0.215" +version = "1.0.216" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6513c1ad0b11a9376da888e3e0baa0077f1aed55c17f50e7b2397136129fb88f" +checksum = "0b9781016e935a97e8beecf0c933758c97a5520d32930e460142b4cd80c6338e" dependencies = [ "serde_derive", ] @@ -9190,9 +9211,9 @@ dependencies = [ [[package]] name = "serde_derive" -version = "1.0.215" +version = "1.0.216" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ad1e866f866923f252f05c889987993144fb74e722403468a4ebd70c3cd756c0" +checksum = "46f859dbbf73865c6627ed570e78961cd3ac92407a2d117204c49232485da55e" dependencies = [ "proc-macro2", "quote", @@ -10155,9 +10176,9 @@ dependencies = [ [[package]] name = "static_init_macro" -version = "1.0.2" +version = "1.0.3" 
source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "70a2595fc3aa78f2d0e45dd425b22282dd863273761cc77780914b2cf3003acf" +checksum = "b07d15a19b60c12b1a4d927f86bf124037e899c962017d8a198d59997cf12f1b" dependencies = [ "cfg_aliases 0.1.1", "memchr", @@ -10364,9 +10385,9 @@ dependencies = [ [[package]] name = "sync_wrapper" -version = "0.1.2" +version = "1.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2047c6ded9c721764247e62cd3b03c09ffc529b2ba5b10ec482ae507a4a70160" +checksum = "0bf256ce5efdfa370213c1dabab5935a12e49f2c58d15e9eac2870d3b4f27263" [[package]] name = "synstructure" @@ -10625,7 +10646,7 @@ version = "0.26.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5f6d0975eaace0cf0fcadee4e4aaa5da15b5c079146f2cffb67c113be122bf37" dependencies = [ - "rustls 0.23.19", + "rustls 0.23.20", "tokio", ] @@ -10722,9 +10743,9 @@ dependencies = [ [[package]] name = "tower" -version = "0.5.1" +version = "0.5.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2873938d487c3cfb9aed7546dc9f2711d867c9f90c46b889989a2cb84eba6b4f" +checksum = "d039ad9159c98b70ecfd540b2573b97f7f52c3e8d9f8ad57a24b916a536975f9" dependencies = [ "futures-core", "futures-util", @@ -11053,6 +11074,12 @@ version = "1.0.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "adb9e6ca4f869e1180728b7950e35922a7fc6397f7b641499e8f3ef06e50dc83" +[[package]] +name = "unicode-joining-type" +version = "0.7.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "22f8cb47ccb8bc750808755af3071da4a10dcd147b68fc874b7ae4b12543f6f5" + [[package]] name = "unicode-normalization" version = "0.1.24" @@ -11110,15 +11137,21 @@ checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1" [[package]] name = "url" -version = "2.5.2" +version = "2.5.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "22784dbdf76fdde8af1aeda5622b546b422b6fc585325248a2bf9f5e41e94d6c" +checksum = "32f8b686cadd1473f4bd0117a5d28d36b1ade384ea9b5069a1c40aefed7fda60" dependencies = [ "form_urlencoded", - "idna 0.5.0", + "idna 1.0.3", "percent-encoding", ] +[[package]] +name = "utf8_iter" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be" + [[package]] name = "utf8parse" version = "0.2.2" @@ -11637,7 +11670,7 @@ version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb" dependencies = [ - "windows-sys 0.59.0", + "windows-sys 0.48.0", ] [[package]] From 147a6e43d02526da59c36ce375d69fc9c04c3703 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 19 Dec 2024 10:07:24 -0500 Subject: [PATCH 206/368] Split task from serai-processor-primitives into serai-task --- .github/workflows/common-tests.yml | 1 + .github/workflows/msrv.yml | 1 + Cargo.lock | 9 ++++++++ Cargo.toml | 1 + common/task/Cargo.toml | 22 +++++++++++++++++++ common/task/LICENSE | 15 +++++++++++++ common/task/README.md | 3 +++ .../src/task.rs => common/task/src/lib.rs | 4 ++++ deny.toml | 1 + processor/primitives/Cargo.toml | 2 ++ processor/primitives/src/lib.rs | 2 +- 11 files changed, 60 insertions(+), 1 deletion(-) create mode 100644 common/task/Cargo.toml create mode 100644 common/task/LICENSE create mode 100644 common/task/README.md rename processor/primitives/src/task.rs => common/task/src/lib.rs 
(98%) diff --git a/.github/workflows/common-tests.yml b/.github/workflows/common-tests.yml index 117b5858..b93db510 100644 --- a/.github/workflows/common-tests.yml +++ b/.github/workflows/common-tests.yml @@ -30,4 +30,5 @@ jobs: -p patchable-async-sleep \ -p serai-db \ -p serai-env \ + -p serai-task \ -p simple-request diff --git a/.github/workflows/msrv.yml b/.github/workflows/msrv.yml index f969f755..409f5a9b 100644 --- a/.github/workflows/msrv.yml +++ b/.github/workflows/msrv.yml @@ -24,6 +24,7 @@ jobs: cargo msrv verify --manifest-path common/std-shims/Cargo.toml cargo msrv verify --manifest-path common/env/Cargo.toml cargo msrv verify --manifest-path common/db/Cargo.toml + cargo msrv verify --manifest-path common/task/Cargo.toml cargo msrv verify --manifest-path common/request/Cargo.toml cargo msrv verify --manifest-path common/patchable-async-sleep/Cargo.toml diff --git a/Cargo.lock b/Cargo.lock index 444b1756..4266d05a 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8922,6 +8922,7 @@ dependencies = [ "parity-scale-codec", "serai-coins-primitives", "serai-primitives", + "serai-task", "tokio", ] @@ -9150,6 +9151,14 @@ dependencies = [ "zeroize", ] +[[package]] +name = "serai-task" +version = "0.1.0" +dependencies = [ + "log", + "tokio", +] + [[package]] name = "serai-validator-sets-pallet" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index 16c12262..6fe5fe89 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -20,6 +20,7 @@ members = [ "common/patchable-async-sleep", "common/db", "common/env", + "common/task", "common/request", "crypto/transcript", diff --git a/common/task/Cargo.toml b/common/task/Cargo.toml new file mode 100644 index 00000000..f96e4557 --- /dev/null +++ b/common/task/Cargo.toml @@ -0,0 +1,22 @@ +[package] +name = "serai-task" +version = "0.1.0" +description = "A task schema for Serai services" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/common/task" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false +rust-version = "1.75" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +log = { version = "0.4", default-features = false, features = ["std"] } +tokio = { version = "1", default-features = false, features = ["macros", "sync", "time"] } diff --git a/common/task/LICENSE b/common/task/LICENSE new file mode 100644 index 00000000..41d5a261 --- /dev/null +++ b/common/task/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/common/task/README.md b/common/task/README.md new file mode 100644 index 00000000..db1d02ba --- /dev/null +++ b/common/task/README.md @@ -0,0 +1,3 @@ +# Task + +A schema to define tasks to be run ad infinitum. 
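+
+A rough sketch of the intended usage, hedged as this README doesn't pin the
+API down (the `ContinuallyRan` trait, `Task::new`, and `continually_run` are
+the items exercised by this crate's consumers later in this series;
+`ExampleTask` is hypothetical):
+
+```rust
+use serai_task::{Task, ContinuallyRan};
+
+// A hypothetical task, run ad infinitum by `continually_run`
+struct ExampleTask;
+
+impl ContinuallyRan for ExampleTask {
+  fn run_iteration(
+    &mut self,
+  ) -> impl Send + core::future::Future<Output = Result<bool, String>> {
+    async move {
+      // Ok(true) means progress was made (so the task is promptly re-run),
+      // Ok(false) means there was nothing to do, and Err(_) causes the task to
+      // be retried with an increasing delay
+      Ok(false)
+    }
+  }
+}
+
+async fn spawn_example() {
+  // `Task::new` yields the task's definition and a handle usable to trigger it
+  let (task, _handle) = Task::new();
+  // The `Vec` lists handles of tasks to run after this task makes progress
+  tokio::spawn(ExampleTask.continually_run(task, vec![]));
+}
+```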
diff --git a/processor/primitives/src/task.rs b/common/task/src/lib.rs
similarity index 98%
rename from processor/primitives/src/task.rs
rename to common/task/src/lib.rs
index e8efc64c..b5523cda 100644
--- a/processor/primitives/src/task.rs
+++ b/common/task/src/lib.rs
@@ -1,3 +1,7 @@
+#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![doc = include_str!("../README.md")]
+#![deny(missing_docs)]
+
 use core::{future::Future, time::Duration};
 use std::sync::Arc;
 
diff --git a/deny.toml b/deny.toml
index 7581a9db..66bd4bc6 100644
--- a/deny.toml
+++ b/deny.toml
@@ -41,6 +41,7 @@ allow = [
 
 exceptions = [
   { allow = ["AGPL-3.0"], name = "serai-env" },
+  { allow = ["AGPL-3.0"], name = "serai-task" },
 
   { allow = ["AGPL-3.0"], name = "ethereum-schnorr-contract" },
   { allow = ["AGPL-3.0"], name = "serai-ethereum-relayer" },
diff --git a/processor/primitives/Cargo.toml b/processor/primitives/Cargo.toml
index 6eba2e5b..a950a61b 100644
--- a/processor/primitives/Cargo.toml
+++ b/processor/primitives/Cargo.toml
@@ -28,3 +28,5 @@ borsh = { version = "1", default-features = false, features = ["std", "derive",
 log = { version = "0.4", default-features = false, features = ["std"] }
 
 tokio = { version = "1", default-features = false, features = ["macros", "sync", "time"] }
+
+serai-task = { path = "../../common/task", default-features = false }
diff --git a/processor/primitives/src/lib.rs b/processor/primitives/src/lib.rs
index cc915ca2..371bdafb 100644
--- a/processor/primitives/src/lib.rs
+++ b/processor/primitives/src/lib.rs
@@ -10,7 +10,7 @@ use scale::{Encode, Decode};
 use borsh::{BorshSerialize, BorshDeserialize};
 
 /// A module for task-related structs and functionality.
-pub mod task;
+pub use serai_task as task;
 
 mod output;
 pub use output::*;

From 4de1a5804dde95aef30d64c1935de3349967fddb Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sun, 22 Dec 2024 06:41:55 -0500
Subject: [PATCH 207/368] Dedicated library for intending and evaluating cosigns

This not only cleans the existing cosign code, it enables parties other than
Serai coordinators to evaluate cosigns if they gain access to a feed of them
(such as over an RPC). This would let centralized services track not only the
finalized chain but also the cosigned chain, without directly running a
coordinator.

Still being wrapped up.
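
As a sketch of that use case (hedged: `feed` stands in for however a service
receives cosigns, which this library deliberately leaves undefined):

```rust
use serai_cosign::{Cosign, Cosigning};

async fn track_cosigned_chain<D: serai_db::Db>(
  mut cosigning: Cosigning<D>,
  mut feed: tokio::sync::mpsc::UnboundedReceiver<Cosign>,
) {
  while let Some(cosign) = feed.recv().await {
    match cosigning.intake_cosign(cosign).await {
      // The cosign was handled (or safely ignored)
      Ok(true) => {}
      Ok(false) => log::warn!("feed relayed an invalid cosign"),
      // An error means the cosign couldn't be validated right now and should
      // be retried later
      Err(e) => log::warn!("couldn't evaluate cosign at this time: {e}"),
    }
    // Only blocks at or before this number should be treated as cosigned
    match cosigning.latest_cosigned_block_number() {
      Ok(number) => log::info!("cosigned chain tip: {number}"),
      Err(_faulted) => panic!("the cosigning protocol faulted"),
    }
  }
}
```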
--- .github/workflows/msrv.yml | 1 + .github/workflows/tests.yml | 1 + Cargo.lock | 17 ++ Cargo.toml | 3 +- coordinator/cosign/Cargo.toml | 36 +++ coordinator/cosign/LICENSE | 15 + coordinator/cosign/README.md | 121 ++++++++ coordinator/cosign/src/evaluator.rs | 188 +++++++++++++ coordinator/cosign/src/intend.rs | 116 ++++++++ coordinator/cosign/src/lib.rs | 416 ++++++++++++++++++++++++++++ coordinator/src/cosign_evaluator.rs | 336 ---------------------- coordinator/src/substrate/cosign.rs | 332 ---------------------- deny.toml | 1 + 13 files changed, 914 insertions(+), 669 deletions(-) create mode 100644 coordinator/cosign/Cargo.toml create mode 100644 coordinator/cosign/LICENSE create mode 100644 coordinator/cosign/README.md create mode 100644 coordinator/cosign/src/evaluator.rs create mode 100644 coordinator/cosign/src/intend.rs create mode 100644 coordinator/cosign/src/lib.rs delete mode 100644 coordinator/src/cosign_evaluator.rs delete mode 100644 coordinator/src/substrate/cosign.rs diff --git a/.github/workflows/msrv.yml b/.github/workflows/msrv.yml index 409f5a9b..75fcdd79 100644 --- a/.github/workflows/msrv.yml +++ b/.github/workflows/msrv.yml @@ -175,6 +175,7 @@ jobs: run: | cargo msrv verify --manifest-path coordinator/tributary/tendermint/Cargo.toml cargo msrv verify --manifest-path coordinator/tributary/Cargo.toml + cargo msrv verify --manifest-path coordinator/cosign/Cargo.toml cargo msrv verify --manifest-path coordinator/Cargo.toml msrv-substrate: diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 3adc3ac5..9f1b0a1f 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -61,6 +61,7 @@ jobs: -p serai-monero-processor \ -p tendermint-machine \ -p tributary-chain \ + -p serai-cosign \ -p serai-coordinator \ -p serai-orchestrator \ -p serai-docker-tests diff --git a/Cargo.lock b/Cargo.lock index 4266d05a..8965bf2d 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8365,6 +8365,23 @@ dependencies = [ "zeroize", ] +[[package]] +name = "serai-cosign" +version = "0.1.0" +dependencies = [ + "blake2", + "borsh", + "ciphersuite", + "log", + "parity-scale-codec", + "schnorr-signatures", + "serai-client", + "serai-db", + "serai-task", + "sp-application-crypto", + "tokio", +] + [[package]] name = "serai-db" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index 6fe5fe89..eea39f37 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -98,6 +98,7 @@ members = [ "coordinator/tributary/tendermint", "coordinator/tributary", + "coordinator/cosign", "coordinator", "substrate/primitives", @@ -208,6 +209,7 @@ pasta_curves = { git = "https://github.com/kayabaNerve/pasta_curves", rev = "a46 [workspace.lints.clippy] unwrap_or_default = "allow" +map_unwrap_or = "allow" borrow_as_ptr = "deny" cast_lossless = "deny" cast_possible_truncation = "deny" @@ -235,7 +237,6 @@ manual_instant_elapsed = "deny" manual_let_else = "deny" manual_ok_or = "deny" manual_string_new = "deny" -map_unwrap_or = "deny" match_bool = "deny" match_same_arms = "deny" missing_fields_in_debug = "deny" diff --git a/coordinator/cosign/Cargo.toml b/coordinator/cosign/Cargo.toml new file mode 100644 index 00000000..b6177121 --- /dev/null +++ b/coordinator/cosign/Cargo.toml @@ -0,0 +1,36 @@ +[package] +name = "serai-cosign" +version = "0.1.0" +description = "Evaluator of cosigns for the Serai network" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/cosign" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false 
+rust-version = "1.81" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +blake2 = { version = "0.10", default-features = false, features = ["std"] } +ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] } +schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] } + +sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] } +serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] } + +log = { version = "0.4", default-features = false, features = ["std"] } + +tokio = { version = "1", default-features = false, features = [] } + +serai-db = { path = "../../common/db" } +serai-task = { path = "../../common/task" } diff --git a/coordinator/cosign/LICENSE b/coordinator/cosign/LICENSE new file mode 100644 index 00000000..26d57cbb --- /dev/null +++ b/coordinator/cosign/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2023-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/coordinator/cosign/README.md b/coordinator/cosign/README.md new file mode 100644 index 00000000..96fe0f92 --- /dev/null +++ b/coordinator/cosign/README.md @@ -0,0 +1,121 @@ +# Serai Cosign + +The Serai blockchain is controlled by a set of validators referred to as the +Serai validators. These validators could attempt to double-spend, even if every +node on the network is a full node, via equivocating. + +Posit: + - The Serai validators control X SRI + - The Serai validators produce block A swapping X SRI to Y XYZ + - The Serai validators produce block B swapping X SRI to Z ABC + - The Serai validators finalize block A and send to the validators for XYZ + - The Serai validators finalize block B and send to the validators for ABC + +This is solved via the cosigning protocol. The validators for XYZ and the +validators for ABC each sign their view of the Serai blockchain, communicating +amongst each other to ensure consistency. + +The security of the cosigning protocol is not formally proven, and there are no +claims it achieves Byzantine Fault Tolerance. This protocol is meant to be +practical and make such attacks infeasible, when they could already be argued +difficult to perform. + +### Definitions + +- Cosign: A signature from a non-Serai validator set for a Serai block +- Cosign Commit: A collection of cosigns which achieve the necessary weight + +### Methodology + +Finalized blocks from the Serai network are intended to be cosigned if they +contain burn events. Only once cosigned should non-Serai validators process +them. 
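+
+As a sketch (the concrete interface is this crate's `Cosigning` type), a
+non-Serai validator would gate processing a block on its being cosigned:
+
+```rust
+fn should_process_block<D: serai_db::Db>(
+  cosigning: &serai_cosign::Cosigning<D>,
+  block_number: u64,
+) -> bool {
+  // An Err(_) means the cosigning protocol faulted, at which point nothing
+  // further should be processed
+  match cosigning.latest_cosigned_block_number() {
+    Ok(latest_cosigned) => block_number <= latest_cosigned,
+    Err(_faulted) => false,
+  }
+}
+```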
+
+Cosigning is performed by the non-Serai validator sets, using their threshold
+keys declared on the Serai blockchain. Once 83% of non-Serai validator sets,
+by weight, cosign a block, a cosign commit is formed. A cosign commit for a
+block is considered to also cosign all blocks preceding it.
+
+### Bounds Under Asynchrony
+
+Assuming an asynchronous environment fully controlled by the adversary, 34% of
+a validator set may cause an equivocation. Control of 67% of non-Serai
+validator sets, by weight, is sufficient to produce two distinct cosign commits
+at the same position. This is due to the honest stake, 33%, being split across
+the two candidates (67% + 16.5% = 83.5%, just over the threshold). This means
+the cosigning protocol may produce multiple cosign commits if 34% of 67% (just
+22.78%) of the non-Serai validator sets is malicious. This would be in
+conjunction with 34% of the Serai validator set (assumed 20% of total stake),
+for a total stake requirement of 34% of 20% + 22.78% of 80% (25.024%). This is
+an increase from the 6.8% required without the cosigning protocol.
+
+### Bounds Under Synchrony
+
+Assuming the honest stake within the non-Serai validator sets detects the
+malicious stake within their set prior to assisting in producing a cosign for
+their set, for which there is a multi-second window, 67% of 67% of non-Serai
+validator sets is required to produce cosigns for those sets. This raises the
+total stake requirement to 42.712% (past the usual 34% threshold).
+
+### Behavior Reliant on Synchrony
+
+If the Serai blockchain node detects an equivocation, it will stop responding
+to all RPC requests and stop participating in finalizing further blocks. This
+lets the node communicate the equivocating commits to other nodes (causing them
+to exhibit the same behavior), yet prevents interaction with it.
+
+If cosigns representing 17% of the non-Serai validator sets by weight are
+detected for distinct blocks at the same position, the protocol halts. An
+explicit latency period of five seconds is enacted after receiving a cosign
+commit for the detection of such an equivocation. This is largely redundant
+given how the Serai blockchain node will presumably have halted itself by this
+time.
+
+### Equivocation-Detection Avoidance
+
+Malicious Serai validators could avoid detection of their equivocating if they
+produced two distinct blockchains, A and B, with different keys declared for
+the same non-Serai validator set. While the validators following A may detect
+the cosigns for distinct blocks by validators following B, the cosigns would be
+assumed invalid due to their signatures being verified against distinct keys.
+
+This is prevented by requiring cosigns on the blocks which declare new keys,
+ensuring all validators have a consistent view of the keys used within the
+cosigning protocol (per the bounds of the cosigning protocol). These blocks are
+exempt from the general policy of cosign commits cosigning all prior blocks,
+preventing the newly declared keys (which aren't yet cosigned) from being used
+to cosign themselves. These cosigns are flagged as "notable", are permanently
+archived, and must be synced before a validator will move forward.
+
+Cosigning the block which declares new keys also ensures agreement on the
+preceding block which declared the new set, with an exact specification of the
+participants and their weight, before it impacts the cosigning protocol.
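+
+In code terms, the 83% commit threshold and the 17% fault threshold used for
+halts reduce to the following integer arithmetic (simplified from the checks
+this crate's evaluator actually performs over stake):
+
+```rust
+/// Whether cosigns carrying `weight_cosigned` stake, out of `total_weight`
+/// stake, form a cosign commit (a >83% supermajority)
+fn forms_cosign_commit(weight_cosigned: u64, total_weight: u64) -> bool {
+  weight_cosigned >= (((total_weight * 83) / 100) + 1)
+}
+
+/// Whether faulty cosigns carrying `faulty_weight` stake suffice to halt the
+/// protocol (the 17% fault threshold)
+fn faulted(faulty_weight: u64, total_weight: u64) -> bool {
+  faulty_weight >= ((total_weight * 17) / 100)
+}
+```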
+
+### Denial of Service Concerns
+
+Any historical Serai validator set may trigger a chain halt by producing an
+equivocation after its retirement. This requires 67% of that set to be
+malicious. 34% of the active Serai validator set may also trigger a chain
+halt.
+
+Since 17% of non-Serai validator sets equivocating causes a halt, 5.67% of
+non-Serai validator sets' stake may cause a halt (in an asynchronous
+environment fully controlled by the adversary). In a synchronous environment
+where the honest stake cannot be split across two candidates, 11.33% of
+non-Serai validator sets' stake is required.
+
+The more practical attack is for one to obtain 5.67% of non-Serai validator
+sets' stake, under any network conditions, and simply go offline. This will
+take 17% of validator sets offline with it, preventing any cosign commits
+from being performed. A fallback protocol where validators individually produce
+cosigns, removing the network's horizontal scalability but ensuring liveness,
+prevents this, restoring the additional requirements for control of an
+asynchronous network or 11.33% of non-Serai validator sets' stake.
+
+### TODO
+
+The Serai node no longer responding to RPC requests upon detecting any
+equivocation, and the fallback protocol where validators individually produce
+signatures, are not implemented at this time. The former means the detection of
+equivocating cosigns is not redundant, and the latter makes 5.67% of non-Serai
+validator sets' stake the DoS threshold, even without control of an
+asynchronous network.
diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs
new file mode 100644
index 00000000..db77281a
--- /dev/null
+++ b/coordinator/cosign/src/evaluator.rs
@@ -0,0 +1,188 @@
+use core::future::Future;
+
+use serai_client::{primitives::Amount, Serai};
+
+use serai_db::*;
+use serai_task::ContinuallyRan;
+
+use crate::{*, intend::BlockHasEvents};
+
+create_db!(
+  SubstrateCosignEvaluator {
+    LatestCosignedBlockNumber: () -> u64,
+  }
+);
+
+/// A task to determine if a block has been cosigned and we should handle it.
+pub(crate) struct CosignEvaluatorTask<D: Db, R: RequestNotableCosigns> {
+  pub(crate) db: D,
+  pub(crate) serai: Serai,
+  pub(crate) request: R,
+}
+
+// TODO: Add a cache for the stake values
+
+impl<D: Db, R: RequestNotableCosigns> ContinuallyRan for CosignEvaluatorTask<D, R> {
+  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
+    async move {
+      let mut known_cosign = None;
+      let mut made_progress = false;
+      loop {
+        let mut txn = self.db.txn();
+        let Some((block_number, has_events)) = BlockHasEvents::try_recv(&mut txn) else { break };
+        // Make sure these two feeds haven't desynchronized somehow
+        // We could remove our `LatestCosignedBlockNumber`, making the latest cosigned block number
+        // the next message in the channel's block number minus one, but that'd only work when the
+        // channel isn't empty
+        let latest_cosigned_block_number = LatestCosignedBlockNumber::get(&txn).unwrap_or(0);
+        assert_eq!(block_number, latest_cosigned_block_number + 1);
+
+        let cosigns_for_block = Cosigns::get(&txn, block_number).unwrap_or(vec![]);
+
+        match has_events {
+          // Because this had notable events, we require an explicit cosign for this block by a
+          // supermajority of the prior block's validator sets
+          HasEvents::Notable => {
+            let mut weight_cosigned = 0;
+            let mut total_weight = 0;
+            let (_block, sets) = cosigning_sets_for_block(&self.serai, block_number).await?;
+            let global_session = GlobalSession::new(sets.clone()).id();
+            let (_, global_session_start_block) = GlobalSessions::get(&txn, global_session).expect(
+              "checking if intended cosign was satisfied within an unrecognized global session",
+            );
+            for set in sets {
+              // Fetch the weight for this set, as of the start of the global session
+              // This simplifies the logic around which set of stakes to use when evaluating
+              // cosigns, even if it's lossy as it isn't accurate to how stake may fluctuate within
+              // a session
+              let stake = self
+                .serai
+                .as_of(global_session_start_block)
+                .validator_sets()
+                .total_allocated_stake(set.network)
+                .await
+                .map_err(|e| format!("{e:?}"))?
+                .unwrap_or(Amount(0))
+                .0;
+              total_weight += stake;
+
+              // Check if we have the cosign from this set
+              if cosigns_for_block
+                .iter()
+                .any(|cosign| cosign.cosigner == Cosigner::ValidatorSet(set.network))
+              {
+                // Since we have this cosign, add the set's weight to the weight which has cosigned
+                weight_cosigned += stake;
+              }
+            }
+            // Check if the sum weight doesn't cross the required threshold
+            if weight_cosigned < (((total_weight * 83) / 100) + 1) {
+              // Request the necessary cosigns over the network
+              // TODO: Add a timer to ensure this isn't called too often
+              self
+                .request
+                .request_notable_cosigns(global_session)
+                .await
+                .map_err(|e| format!("{e:?}"))?;
+              // We return an error so the delay before this task is run again increases
+              return Err(format!(
+                "notable block (#{block_number}) wasn't yet cosigned. this should resolve shortly",
+              ));
+            }
+          }
+          // Since this block didn't have any notable events, we simply require a cosign for this
+          // block or a greater block by the current validator sets
+          HasEvents::NonNotable => {
+            // Check if this was satisfied by a cached result which wasn't calculated incrementally
+            let known_cosigned = match known_cosign {
+              Some(known_cosign) if known_cosign >= block_number => true,
+              _ => {
+                // Clear `known_cosign`, which is no longer helpful
+                known_cosign = None;
+                false
+              }
+            };
+
+            // If it isn't already known to be cosigned, evaluate the latest cosigns
+            if !known_cosigned {
+              /*
+                LatestCosign is populated with the latest cosigns for each network which don't
+                exceed the latest global session we've evaluated the start of. This current block
+                is during the latest global session we've evaluated the start of.
+              */
+
+              // Get the global session for this block
+              let (_block, sets) = cosigning_sets_for_block(&self.serai, block_number).await?;
+              let global_session = GlobalSession::new(sets.clone()).id();
+              let (_, global_session_start_block) = GlobalSessions::get(&txn, global_session)
+                .expect(
+                  "checking if intended cosign was satisfied within an unrecognized global session",
+                );
+
+              let mut weight_cosigned = 0;
+              let mut total_weight = 0;
+              let mut lowest_common_block: Option<u64> = None;
+              for set in sets {
+                let stake = self
+                  .serai
+                  .as_of(global_session_start_block)
+                  .validator_sets()
+                  .total_allocated_stake(set.network)
+                  .await
+                  .map_err(|e| format!("{e:?}"))?
+                  .unwrap_or(Amount(0))
+                  .0;
+                // Increment total_weight with this set's stake
+                total_weight += stake;
+
+                // Check if this set cosigned this block or not
+                let Some(cosign) = NetworksLatestCosignedBlock::get(&txn, set.network) else {
+                  continue;
+                };
+                if cosign.block_number >= block_number {
+                  weight_cosigned += stake;
+                }
+
+                // Update the lowest block common to all of these cosigns
+                lowest_common_block = lowest_common_block
+                  .map(|existing| existing.min(cosign.block_number))
+                  .or(Some(cosign.block_number));
+              }
+
+              // Check if the sum weight doesn't cross the required threshold
+              if weight_cosigned < (((total_weight * 83) / 100) + 1) {
+                // Request the superseding notable cosigns over the network
+                // If this session hasn't yet produced notable cosigns, then we presume we'll see
+                // the desired non-notable cosigns as part of normal operations, without needing to
+                // explicitly request them
+                self
+                  .request
+                  .request_notable_cosigns(global_session)
+                  .await
+                  .map_err(|e| format!("{e:?}"))?;
+                // We return an error so the delay before this task is run again increases
+                return Err(format!(
+                  "block (#{block_number}) wasn't yet cosigned. this should resolve shortly",
+                ));
+              }
+
+              // Update the cached result for the block we know is cosigned
+              known_cosign = lowest_common_block;
+            }
+          }
+          // If this block has no events necessitating cosigning, we can immediately consider the
+          // block cosigned (making this block a NOP)
+          HasEvents::No => {}
+        }
+
+        // Since we checked we had the necessary cosigns, increment the latest cosigned block
+        LatestCosignedBlockNumber::set(&mut txn, &block_number);
+        txn.commit();
+
+        made_progress = true;
+      }
+
+      Ok(made_progress)
+    }
+  }
+}
diff --git a/coordinator/cosign/src/intend.rs b/coordinator/cosign/src/intend.rs
new file mode 100644
index 00000000..76863925
--- /dev/null
+++ b/coordinator/cosign/src/intend.rs
@@ -0,0 +1,116 @@
+use core::future::Future;
+
+use serai_client::{SeraiError, Serai, validator_sets::primitives::ValidatorSet};
+
+use serai_db::*;
+use serai_task::ContinuallyRan;
+
+use crate::*;
+
+create_db!(
+  CosignIntend {
+    ScanCosignFrom: () -> u64,
+  }
+);
+
+db_channel! {
+  CosignIntendChannels {
+    BlockHasEvents: () -> (u64, HasEvents),
+    IntendedCosigns: (set: ValidatorSet) -> CosignIntent,
+  }
+}
+
+async fn block_has_events_justifying_a_cosign(
+  serai: &Serai,
+  block: u64,
+) -> Result<HasEvents, SeraiError> {
+  let serai = serai.as_of(
+    serai
+      .finalized_block_by_number(block)
+      .await?
+      .expect("couldn't get block which should've been finalized")
+      .hash(),
+  );
+
+  if !serai.validator_sets().key_gen_events().await?.is_empty() {
+    return Ok(HasEvents::Notable);
+  }
+
+  if !serai.coins().burn_with_instruction_events().await?.is_empty() {
+    return Ok(HasEvents::NonNotable);
+  }
+
+  Ok(HasEvents::No)
+}
+
+/// A task to determine which blocks we should intend to cosign.
+pub(crate) struct CosignIntendTask<D: Db> {
+  pub(crate) db: D,
+  pub(crate) serai: Serai,
+}
+
+impl<D: Db> ContinuallyRan for CosignIntendTask<D> {
+  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
+    async move {
+      let start_block_number = ScanCosignFrom::get(&self.db).unwrap_or(1);
+      let latest_block_number =
+        self.serai.latest_finalized_block().await.map_err(|e| format!("{e:?}"))?.number();
+
+      for block_number in start_block_number ..= latest_block_number {
+        let mut txn = self.db.txn();
+
+        let mut has_events = block_has_events_justifying_a_cosign(&self.serai, block_number)
+          .await
+          .map_err(|e| format!("{e:?}"))?;
+
+        match has_events {
+          HasEvents::Notable | HasEvents::NonNotable => {
+            let (block, sets) = cosigning_sets_for_block(&self.serai, block_number).await?;
+
+            // If this is notable, it creates a new global session, which we index into the
+            // database now
+            if has_events == HasEvents::Notable {
+              let sets = cosigning_sets(&self.serai.as_of(block.hash())).await?;
+              GlobalSessions::set(
+                &mut txn,
+                GlobalSession::new(sets).id(),
+                &(block.number(), block.hash()),
+              );
+            }
+
+            // If this block doesn't have any cosigners, meaning it'll never be cosigned, we flag it
+            // as not having any events requiring cosigning so we don't attempt to sign/require a
+            // cosign for it
+            if sets.is_empty() {
+              has_events = HasEvents::No;
+            } else {
+              let global_session = GlobalSession::new(sets.clone()).id();
+              // Tell each set of their expectation to cosign this block
+              for set in sets {
+                log::debug!("{:?} will be cosigning block #{block_number}", set);
+                IntendedCosigns::send(
+                  &mut txn,
+                  set,
+                  &CosignIntent {
+                    global_session,
+                    block_number,
+                    block_hash: block.hash(),
+                    notable: has_events == HasEvents::Notable,
+                  },
+                );
+              }
+            }
+          }
+          HasEvents::No => {}
+        }
+        // Populate a singular feed with every block's status for the evaluator to work off of
+        BlockHasEvents::send(&mut txn, &(block_number, has_events));
+        // Mark this block as handled, meaning we should scan from the next block moving on
+        ScanCosignFrom::set(&mut txn, &(block_number + 1));
+        txn.commit();
+      }
+
+      Ok(start_block_number <= latest_block_number)
+    }
+  }
+}
diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs
new file mode 100644
index 00000000..345334a4
--- /dev/null
+++ b/coordinator/cosign/src/lib.rs
@@ -0,0 +1,416 @@
+#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![doc = include_str!("../README.md")]
+#![deny(missing_docs)]
+
+use core::{fmt::Debug, future::Future};
+
+use blake2::{Digest, Blake2s256};
+
+use borsh::{BorshSerialize, BorshDeserialize};
+
+use serai_client::{
+  primitives::{Amount, NetworkId, SeraiAddress},
+  validator_sets::primitives::{Session, ValidatorSet, KeyPair},
+  Block, Serai, TemporalSerai,
+};
+
+use serai_db::*;
+use serai_task::*;
+
+/// The cosigns which are intended to be performed.
+mod intend;
+/// The evaluator of the cosigns.
+mod evaluator;
+use evaluator::LatestCosignedBlockNumber;
+
+/// A 'global session', defined as all validator sets used for cosigning at a given moment.
+///
+/// We evaluate cosign faults within a global session. This ensures even if cosigners cosign
+/// distinct blocks at distinct positions within a global session, we still identify the faults.
+/*
+  There is the attack where a validator set is given an alternate blockchain with a key generation
+  event at block #n, while most validator sets are given a blockchain with a key generation event
+  at block number #(n+1). This prevents whoever has the alternate blockchain from verifying the
+  cosigns on the primary blockchain, and detecting the faults, if they use the keys as of the block
+  prior to the block being cosigned.
+
+  We solve this by binding cosigns to a global session ID, which has a specific start block, and
+  reading the keys from the start block. This means that so long as all validator sets agree on the
+  start of a global session, they can verify all cosigns produced by that session, regardless of
+  how it advances. Since agreeing on the start of a global session is mandated, there's no way to
+  have validator sets follow two distinct global sessions without breaking the bounds of the
+  cosigning protocol.
+*/
+#[derive(Debug, BorshSerialize, BorshDeserialize)]
+struct GlobalSession {
+  cosigners: Vec<ValidatorSet>,
+}
+impl GlobalSession {
+  fn new(mut cosigners: Vec<ValidatorSet>) -> Self {
+    cosigners.sort_by_key(|a| borsh::to_vec(a).unwrap());
+    Self { cosigners }
+  }
+  fn id(&self) -> [u8; 32] {
+    Blake2s256::digest(borsh::to_vec(self).unwrap()).into()
+  }
+}
+
+create_db! {
+  Cosign {
+    // A mapping from a global session's ID to its start block (number, hash).
+    GlobalSessions: (global_session: [u8; 32]) -> (u64, [u8; 32]),
+    // An archive of all cosigns ever received.
+    //
+    // This will only be populated with cosigns predating or during the most recent global session
+    // to have its start cosigned.
+    Cosigns: (block_number: u64) -> Vec<Cosign>,
+    // The latest cosigned block for each network.
+    //
+    // This will only be populated with cosigns predating or during the most recent global session
+    // to have its start cosigned.
+    NetworksLatestCosignedBlock: (network: NetworkId) -> Cosign,
+    // Cosigns received for blocks not locally recognized as finalized.
+    Faults: (global_session: [u8; 32]) -> Vec<Cosign>,
+    // The global session which faulted.
+    FaultedSession: () -> [u8; 32],
+  }
+}
+
+/// If the block has events.
+#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+enum HasEvents {
+  /// The block had a notable event.
+  ///
+  /// This is a special case as blocks with key gen events change the keys used for cosigning, and
+  /// accordingly must be cosigned before we advance past them.
+  Notable,
+  /// The block had a non-notable event justifying a cosign.
+  NonNotable,
+  /// The block didn't have an event justifying a cosign.
+  No,
+}
+
+/// An intended cosign.
+#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+struct CosignIntent {
+  /// The global session this cosign is being performed under.
+  global_session: [u8; 32],
+  /// The number of the block to cosign.
+  block_number: u64,
+  /// The hash of the block to cosign.
+  block_hash: [u8; 32],
+  /// If this cosign must be handled before further cosigns are.
+  notable: bool,
+}
+
+/// The identification of a cosigner.
+#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+pub enum Cosigner {
+  /// The network which produced this cosign.
+  ValidatorSet(NetworkId),
+  /// The individual validator which produced this cosign.
+  Validator(SeraiAddress),
+}
+
+/// A cosign.
+#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+pub struct Cosign {
+  /// The global session this cosign is being performed under.
+  global_session: [u8; 32],
+  /// The number of the block to cosign.
+  block_number: u64,
+  /// The hash of the block to cosign.
+  block_hash: [u8; 32],
+  /// The actual cosigner.
+  cosigner: Cosigner,
+}
+
+/// Construct a `TemporalSerai` bound to the time used for cosigning this block.
+async fn temporal_serai_used_for_cosigning(
+  serai: &Serai,
+  block_number: u64,
+) -> Result<(Block, TemporalSerai<'_>), String> {
+  let block = serai
+    .finalized_block_by_number(block_number)
+    .await
+    .map_err(|e| format!("{e:?}"))?
+    .ok_or("block wasn't finalized".to_string())?;
+
+  // If we're cosigning block `n`, it's cosigned by the sets as of block `n-1`
+  // (as block `n` may update the sets declared but that update shouldn't take effect here
+  // until it's cosigned)
+  let serai = serai.as_of(block.header.parent_hash.into());
+
+  Ok((block, serai))
+}
+
+/// Fetch the keys used for cosigning by a specific network.
+async fn keys_for_network(
+  serai: &TemporalSerai<'_>,
+  network: NetworkId,
+) -> Result<Option<(Session, KeyPair)>, String> {
+  let Some(latest_session) =
+    serai.validator_sets().session(network).await.map_err(|e| format!("{e:?}"))?
+  else {
+    // If this network hasn't had a session declared, move on
+    return Ok(None);
+  };
+
+  // Get the keys for the latest session
+  if let Some(keys) = serai
+    .validator_sets()
+    .keys(ValidatorSet { network, session: latest_session })
+    .await
+    .map_err(|e| format!("{e:?}"))?
+  {
+    return Ok(Some((latest_session, keys)));
+  }
+
+  // If the latest session has yet to set keys, use the prior session
+  if let Some(prior_session) = latest_session.0.checked_sub(1).map(Session) {
+    if let Some(keys) = serai
+      .validator_sets()
+      .keys(ValidatorSet { network, session: prior_session })
+      .await
+      .map_err(|e| format!("{e:?}"))?
+    {
+      return Ok(Some((prior_session, keys)));
+    }
+  }
+
+  Ok(None)
+}
+
+/// Fetch the `ValidatorSet`s used for cosigning as of this block.
+async fn cosigning_sets(serai: &TemporalSerai<'_>) -> Result<Vec<ValidatorSet>, String> {
+  let mut sets = Vec::with_capacity(serai_client::primitives::NETWORKS.len());
+  for network in serai_client::primitives::NETWORKS {
+    let Some((session, _)) = keys_for_network(serai, network).await? else {
+      // If this network doesn't have usable keys, move on
+      continue;
+    };
+
+    sets.push(ValidatorSet { network, session });
+  }
+  Ok(sets)
+}
+
+/// Fetch the `ValidatorSet`s used for cosigning this block.
+async fn cosigning_sets_for_block(
+  serai: &Serai,
+  block_number: u64,
+) -> Result<(Block, Vec<ValidatorSet>), String> {
+  let (block, serai) = temporal_serai_used_for_cosigning(serai, block_number).await?;
+  cosigning_sets(&serai).await.map(|sets| (block, sets))
+}
+
+/// An object usable to request notable cosigns for a block.
+pub trait RequestNotableCosigns: 'static + Send {
+  /// The error type which may be encountered when requesting notable cosigns.
+  type Error: Debug;
+
+  /// Request the notable cosigns for this global session.
+  fn request_notable_cosigns(
+    &self,
+    global_session: [u8; 32],
+  ) -> impl Send + Future<Output = Result<(), Self::Error>>;
+}
+
+/// An error used to indicate the cosigning protocol has faulted.
+pub struct Faulted;
+
+/// The interface to manage cosigning with.
+pub struct Cosigning<D: Db> {
+  db: D,
+  serai: Serai,
+}
+impl<D: Db> Cosigning<D> {
+  /// Spawn the tasks to intend and evaluate cosigns.
+  ///
+  /// The database specified must only be used with a singular instance of the Serai network, and
+  /// only used once at any given time.
+  pub fn spawn<R: RequestNotableCosigns>(
+    db: D,
+    serai: Serai,
+    request: R,
+    tasks_to_run_upon_cosigning: Vec<TaskHandle>,
+  ) -> Self {
+    let (intend_task, _intend_task_handle) = Task::new();
+    let (evaluator_task, evaluator_task_handle) = Task::new();
+    tokio::spawn(
+      (intend::CosignIntendTask { db: db.clone(), serai: serai.clone() })
+        .continually_run(intend_task, vec![evaluator_task_handle]),
+    );
+    tokio::spawn(
+      (evaluator::CosignEvaluatorTask { db: db.clone(), serai: serai.clone(), request })
+        .continually_run(evaluator_task, tasks_to_run_upon_cosigning),
+    );
+    Self { db, serai }
+  }
+
+  /// The latest cosigned block number.
+  pub fn latest_cosigned_block_number(&self) -> Result<u64, Faulted> {
+    if FaultedSession::get(&self.db).is_some() {
+      Err(Faulted)?;
+    }
+
+    Ok(LatestCosignedBlockNumber::get(&self.db).unwrap_or(0))
+  }
+
+  /// Fetch the notable cosigns for a global session in order to respond to requests.
+  pub fn notable_cosigns(&self, global_session: [u8; 32]) -> Vec<Cosign> {
+    todo!("TODO")
+  }
+
+  /// The cosigns to rebroadcast every so often.
+  ///
+  /// This will be the most recent cosigns, in case the initial broadcast failed, or the faulty
+  /// cosigns, in case of a fault, to induce identification of the fault by others.
+  pub fn cosigns_to_rebroadcast(&self) -> Vec<Cosign> {
+    if let Some(faulted) = FaultedSession::get(&self.db) {
+      let mut cosigns = Faults::get(&self.db, faulted).unwrap();
+      // Also include all of our recognized-as-honest cosigns in an attempt to induce fault
+      // identification in those who see the faulty cosigns as honest
+      for network in serai_client::primitives::NETWORKS {
+        if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, network) {
+          if cosign.global_session == faulted {
+            cosigns.push(cosign);
+          }
+        }
+      }
+      cosigns
+    } else {
+      let mut cosigns = Vec::with_capacity(serai_client::primitives::NETWORKS.len());
+      for network in serai_client::primitives::NETWORKS {
+        if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, network) {
+          cosigns.push(cosign);
+        }
+      }
+      cosigns
+    }
+  }
+
+  /// Intake a cosign from the Serai network.
+  ///
+  /// - Returns Err(_) if there was an error trying to validate the cosign and it should be
+  ///   retried later.
+  /// - Returns Ok(true) if the cosign was successfully handled or could not be handled at this
+  ///   time.
+  /// - Returns Ok(false) if the cosign was invalid.
+  //
+  // We collapse a cosign which shouldn't be handled yet into a valid cosign (`Ok(true)`) as we
+  // assume we'll either explicitly request it if we need it or we'll naturally see it (or a later,
+  // more relevant, cosign) again.
+  //
+  // Takes `&mut self` as this should only be called once at any given moment.
+  pub async fn intake_cosign(&mut self, cosign: Cosign) -> Result<bool, String> {
+    // Check if we've prior handled this cosign
+    let mut txn = self.db.txn();
+    let mut cosigns_for_this_block_position =
+      Cosigns::get(&txn, cosign.block_number).unwrap_or(vec![]);
+    if cosigns_for_this_block_position.iter().any(|existing| *existing == cosign) {
+      return Ok(true);
+    }
+
+    // Check we can verify this cosign's signature
+    let Some((global_session_start_block_number, global_session_start_block_hash)) =
+      GlobalSessions::get(&txn, cosign.global_session)
+    else {
+      // Unrecognized global session
+      return Ok(true);
+    };
+
+    // Check the cosign's signature
+    let network = match cosign.cosigner {
+      Cosigner::ValidatorSet(network) => {
+        let Some((_session, keys)) =
+          keys_for_network(&self.serai.as_of(global_session_start_block_hash), network).await?
+        else {
+          return Ok(false);
+        };
+
+        todo!("TODO");
+
+        network
+      }
+      Cosigner::Validator(_) => return Ok(false),
+    };
+
+    // Check our finalized blockchain exceeds this block number
+    if self.serai.latest_finalized_block().await.map_err(|e| format!("{e:?}"))?.number() <
+      cosign.block_number
+    {
+      // Unrecognized block number
+      return Ok(true);
+    }
+
+    // Since we verified this cosign's signature, and have a chain sufficiently long, handle the
+    // cosign
+
+    // Save the cosign to the database
+    cosigns_for_this_block_position.push(cosign);
+    Cosigns::set(&mut txn, cosign.block_number, &cosigns_for_this_block_position);
+
+    let our_block_hash = self
+      .serai
+      .block_hash(cosign.block_number)
+      .await
+      .map_err(|e| format!("{e:?}"))?
+      .expect("requested hash of a finalized block yet received None");
+    if our_block_hash == cosign.block_hash {
+      // If this is for a future global session, we don't acknowledge this cosign at this time
+      if global_session_start_block_number > LatestCosignedBlockNumber::get(&self.db).unwrap_or(0) {
+        drop(txn);
+        return Ok(true);
+      }
+
+      if NetworksLatestCosignedBlock::get(&txn, network)
+        .map(|cosign| cosign.block_number)
+        .unwrap_or(0) <
+        cosign.block_number
+      {
+        NetworksLatestCosignedBlock::set(&mut txn, network, &cosign);
+      }
+    } else {
+      let mut faults = Faults::get(&txn, cosign.global_session).unwrap_or(vec![]);
+      // Only handle this as a fault if this set wasn't prior faulty
+      if !faults.iter().any(|cosign| cosign.cosigner == Cosigner::ValidatorSet(network)) {
+        faults.push(cosign);
+        Faults::set(&mut txn, cosign.global_session, &faults);
+
+        let mut weight_cosigned = 0;
+        let mut total_weight = 0;
+        for set in cosigning_sets(&self.serai.as_of(global_session_start_block_hash)).await? {
+          let stake = self
+            .serai
+            .as_of(global_session_start_block_hash)
+            .validator_sets()
+            .total_allocated_stake(set.network)
+            .await
+            .map_err(|e| format!("{e:?}"))?
+            .unwrap_or(Amount(0))
+            .0;
+          // Increment total_weight with this set's stake
+          total_weight += stake;
+
+          // Check if this set submitted a faulty cosign or not
+          if faults.iter().any(|cosign| cosign.cosigner == Cosigner::ValidatorSet(set.network)) {
+            weight_cosigned += stake;
+          }
+        }
+
+        // Check if the sum weight means a fault has occurred
+        assert!(
+          total_weight != 0,
+          "evaluating valid cosign when no stake was present in the system"
+        );
+        if weight_cosigned >= ((total_weight * 17) / 100) {
+          FaultedSession::set(&mut txn, &cosign.global_session);
+        }
+      }
+    }
+
+    txn.commit();
+    Ok(true)
+  }
+}
diff --git a/coordinator/src/cosign_evaluator.rs b/coordinator/src/cosign_evaluator.rs
deleted file mode 100644
index 29d9cc4b..00000000
--- a/coordinator/src/cosign_evaluator.rs
+++ /dev/null
@@ -1,336 +0,0 @@
-use core::time::Duration;
-use std::{
-  sync::Arc,
-  collections::{HashSet, HashMap},
-};
-
-use tokio::{
-  sync::{mpsc, Mutex, RwLock},
-  time::sleep,
-};
-
-use borsh::BorshSerialize;
-use sp_application_crypto::RuntimePublic;
-use serai_client::{
-  primitives::{NETWORKS, NetworkId, Signature},
-  validator_sets::primitives::{Session, ValidatorSet},
-  SeraiError, TemporalSerai, Serai,
-};
-
-use serai_db::{Get, DbTxn, Db, create_db};
-
-use processor_messages::coordinator::cosign_block_msg;
-
-use crate::{
-  p2p::{CosignedBlock, GossipMessageKind, P2p},
-  substrate::LatestCosignedBlock,
-};
-
-create_db! {
-  CosignDb {
-    ReceivedCosign: (set: ValidatorSet, block: [u8; 32]) -> CosignedBlock,
-    LatestCosign: (network: NetworkId) -> CosignedBlock,
-    DistinctChain: (set: ValidatorSet) -> (),
-  }
-}
-
-pub struct CosignEvaluator<D: Db> {
-  db: Mutex<D>,
-  serai: Arc<Serai>,
-  stakes: RwLock<Option<HashMap<NetworkId, u64>>>,
-  latest_cosigns: RwLock<HashMap<NetworkId, CosignedBlock>>,
-}
-
-impl<D: Db> CosignEvaluator<D> {
-  async fn update_latest_cosign(&self) {
-    let stakes_lock = self.stakes.read().await;
-    // If we haven't gotten the stake data yet, return
-    let Some(stakes) = stakes_lock.as_ref() else { return };
-
-    let total_stake = stakes.values().copied().sum::<u64>();
-
-    let latest_cosigns = self.latest_cosigns.read().await;
-    let mut highest_block = 0;
-    for cosign in latest_cosigns.values() {
-      let mut networks = HashSet::new();
-      for (network, sub_cosign) in &*latest_cosigns {
-        if sub_cosign.block_number >= cosign.block_number {
-          networks.insert(network);
-        }
-      }
-      let sum_stake =
-        networks.into_iter().map(|network| stakes.get(network).unwrap_or(&0)).sum::<u64>();
-      let needed_stake = ((total_stake * 2) / 3) + 1;
-      if (total_stake == 0) || (sum_stake > needed_stake) {
-        highest_block = highest_block.max(cosign.block_number);
-      }
-    }
-
-    let mut db_lock = self.db.lock().await;
-    let mut txn = db_lock.txn();
-    if highest_block > LatestCosignedBlock::latest_cosigned_block(&txn) {
-      log::info!("setting latest cosigned block to {}", highest_block);
-      LatestCosignedBlock::set(&mut txn, &highest_block);
-    }
-    txn.commit();
-  }
-
-  async fn update_stakes(&self) -> Result<(), SeraiError> {
-    let serai = self.serai.as_of_latest_finalized_block().await?;
-
-    let mut stakes = HashMap::new();
-    for network in NETWORKS {
-      // Use if this network has published a Batch for a short-circuit of if they've ever set a key
-      let set_key = serai.in_instructions().last_batch_for_network(network).await?.is_some();
-      if set_key {
-        stakes.insert(
-          network,
-          serai
-            .validator_sets()
-            .total_allocated_stake(network)
-            .await?
- .expect("network which published a batch didn't have a stake set") - .0, - ); - } - } - - // Since we've successfully built stakes, set it - *self.stakes.write().await = Some(stakes); - - self.update_latest_cosign().await; - - Ok(()) - } - - // Uses Err to signify a message should be retried - async fn handle_new_cosign(&self, cosign: CosignedBlock) -> Result<(), SeraiError> { - // If we already have this cosign or a newer cosign, return - if let Some(latest) = self.latest_cosigns.read().await.get(&cosign.network) { - if latest.block_number >= cosign.block_number { - return Ok(()); - } - } - - // If this an old cosign (older than a day), drop it - let latest_block = self.serai.latest_finalized_block().await?; - if (cosign.block_number + (24 * 60 * 60 / 6)) < latest_block.number() { - log::debug!("received old cosign supposedly signed by {:?}", cosign.network); - return Ok(()); - } - - let Some(block) = self.serai.finalized_block_by_number(cosign.block_number).await? else { - log::warn!("received cosign with a block number which doesn't map to a block"); - return Ok(()); - }; - - async fn set_with_keys_fn( - serai: &TemporalSerai<'_>, - network: NetworkId, - ) -> Result, SeraiError> { - let Some(latest_session) = serai.validator_sets().session(network).await? else { - log::warn!("received cosign from {:?}, which doesn't yet have a session", network); - return Ok(None); - }; - let prior_session = Session(latest_session.0.saturating_sub(1)); - Ok(Some( - if serai - .validator_sets() - .keys(ValidatorSet { network, session: prior_session }) - .await? - .is_some() - { - ValidatorSet { network, session: prior_session } - } else { - ValidatorSet { network, session: latest_session } - }, - )) - } - - // Get the key for this network as of the prior block - // If we have two chains, this value may be different across chains depending on if one chain - // included the set_keys and one didn't - // Because set_keys will force a cosign, it will force detection of distinct blocks - // re: set_keys using keys prior to set_keys (assumed amenable to all) - let serai = self.serai.as_of(block.header.parent_hash.into()); - - let Some(set_with_keys) = set_with_keys_fn(&serai, cosign.network).await? else { - return Ok(()); - }; - let Some(keys) = serai.validator_sets().keys(set_with_keys).await? else { - log::warn!("received cosign for a block we didn't have keys for"); - return Ok(()); - }; - - if !keys - .0 - .verify(&cosign_block_msg(cosign.block_number, cosign.block), &Signature(cosign.signature)) - { - log::warn!("received cosigned block with an invalid signature"); - return Ok(()); - } - - log::info!( - "received cosign for block {} ({}) by {:?}", - block.number(), - hex::encode(cosign.block), - cosign.network - ); - - // Save this cosign to the DB - { - let mut db = self.db.lock().await; - let mut txn = db.txn(); - ReceivedCosign::set(&mut txn, set_with_keys, cosign.block, &cosign); - LatestCosign::set(&mut txn, set_with_keys.network, &(cosign)); - txn.commit(); - } - - if cosign.block != block.hash() { - log::error!( - "received cosign for a distinct block at {}. we have {}. 
cosign had {}", - cosign.block_number, - hex::encode(block.hash()), - hex::encode(cosign.block) - ); - - let serai = self.serai.as_of(latest_block.hash()); - - let mut db = self.db.lock().await; - // Save this set as being on a different chain - let mut txn = db.txn(); - DistinctChain::set(&mut txn, set_with_keys, &()); - txn.commit(); - - let mut total_stake = 0; - let mut total_on_distinct_chain = 0; - for network in NETWORKS { - if network == NetworkId::Serai { - continue; - } - - // Get the current set for this network - let set_with_keys = { - let mut res; - while { - res = set_with_keys_fn(&serai, cosign.network).await; - res.is_err() - } { - log::error!( - "couldn't get the set with keys when checking for a distinct chain: {:?}", - res - ); - tokio::time::sleep(core::time::Duration::from_secs(3)).await; - } - res.unwrap() - }; - - // Get its stake - // Doesn't use the stakes inside self to prevent deadlocks re: multi-lock acquisition - if let Some(set_with_keys) = set_with_keys { - let stake = { - let mut res; - while { - res = serai.validator_sets().total_allocated_stake(set_with_keys.network).await; - res.is_err() - } { - log::error!( - "couldn't get total allocated stake when checking for a distinct chain: {:?}", - res - ); - tokio::time::sleep(core::time::Duration::from_secs(3)).await; - } - res.unwrap() - }; - - if let Some(stake) = stake { - total_stake += stake.0; - - if DistinctChain::get(&*db, set_with_keys).is_some() { - total_on_distinct_chain += stake.0; - } - } - } - } - - // See https://github.com/serai-dex/serai/issues/339 for the reasoning on 17% - if (total_stake * 17 / 100) <= total_on_distinct_chain { - panic!("17% of validator sets (by stake) have co-signed a distinct chain"); - } - } else { - { - let mut latest_cosigns = self.latest_cosigns.write().await; - latest_cosigns.insert(cosign.network, cosign); - } - self.update_latest_cosign().await; - } - - Ok(()) - } - - #[allow(clippy::new_ret_no_self)] - pub fn new(db: D, p2p: P, serai: Arc) -> mpsc::UnboundedSender { - let mut latest_cosigns = HashMap::new(); - for network in NETWORKS { - if let Some(cosign) = LatestCosign::get(&db, network) { - latest_cosigns.insert(network, cosign); - } - } - - let evaluator = Arc::new(Self { - db: Mutex::new(db), - serai, - stakes: RwLock::new(None), - latest_cosigns: RwLock::new(latest_cosigns), - }); - - // Spawn a task to update stakes regularly - tokio::spawn({ - let evaluator = evaluator.clone(); - async move { - loop { - // Run this until it passes - while evaluator.update_stakes().await.is_err() { - log::warn!("couldn't update stakes in the cosign evaluator"); - // Try again in 10 seconds - sleep(Duration::from_secs(10)).await; - } - // Run it every 10 minutes as we don't need the exact stake data for this to be valid - sleep(Duration::from_secs(10 * 60)).await; - } - } - }); - - // Spawn a task to receive cosigns and handle them - let (send, mut recv) = mpsc::unbounded_channel(); - tokio::spawn({ - let evaluator = evaluator.clone(); - async move { - while let Some(msg) = recv.recv().await { - while evaluator.handle_new_cosign(msg).await.is_err() { - // Try again in 10 seconds - sleep(Duration::from_secs(10)).await; - } - } - } - }); - - // Spawn a task to rebroadcast the most recent cosigns - tokio::spawn({ - async move { - loop { - let cosigns = evaluator.latest_cosigns.read().await.values().copied().collect::>(); - for cosign in cosigns { - let mut buf = vec![]; - cosign.serialize(&mut buf).unwrap(); - P2p::broadcast(&p2p, GossipMessageKind::CosignedBlock, buf).await; 
- } - sleep(Duration::from_secs(60)).await; - } - } - }); - - // Return the channel to send cosigns - send - } -} diff --git a/coordinator/src/substrate/cosign.rs b/coordinator/src/substrate/cosign.rs deleted file mode 100644 index 00560763..00000000 --- a/coordinator/src/substrate/cosign.rs +++ /dev/null @@ -1,332 +0,0 @@ -/* - If: - A) This block has events and it's been at least X blocks since the last cosign or - B) This block doesn't have events but it's been X blocks since a skipped block which did - have events or - C) This block key gens (which changes who the cosigners are) - cosign this block. - - This creates both a minimum and maximum delay of X blocks before a block's cosigning begins, - barring key gens which are exceptional. The minimum delay is there to ensure we don't constantly - spawn new protocols every 6 seconds, overwriting the old ones. The maximum delay is there to - ensure any block needing cosigned is consigned within a reasonable amount of time. -*/ - -use zeroize::Zeroizing; - -use ciphersuite::{Ciphersuite, Ristretto}; - -use borsh::{BorshSerialize, BorshDeserialize}; - -use serai_client::{ - SeraiError, Serai, - primitives::NetworkId, - validator_sets::primitives::{Session, ValidatorSet}, -}; - -use serai_db::*; - -use crate::{Db, substrate::in_set, tributary::SeraiBlockNumber}; - -// 5 minutes, expressed in blocks -// TODO: Pull a constant for block time -const COSIGN_DISTANCE: u64 = 5 * 60 / 6; - -#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] -enum HasEvents { - KeyGen, - Yes, - No, -} - -create_db!( - SubstrateCosignDb { - ScanCosignFrom: () -> u64, - IntendedCosign: () -> (u64, Option), - BlockHasEventsCache: (block: u64) -> HasEvents, - LatestCosignedBlock: () -> u64, - } -); - -impl IntendedCosign { - // Sets the intended to cosign block, clearing the prior value entirely. - pub fn set_intended_cosign(txn: &mut impl DbTxn, intended: u64) { - Self::set(txn, &(intended, None::)); - } - - // Sets the cosign skipped since the last intended to cosign block. - pub fn set_skipped_cosign(txn: &mut impl DbTxn, skipped: u64) { - let (intended, prior_skipped) = Self::get(txn).unwrap(); - assert!(prior_skipped.is_none()); - Self::set(txn, &(intended, Some(skipped))); - } -} - -impl LatestCosignedBlock { - pub fn latest_cosigned_block(getter: &impl Get) -> u64 { - Self::get(getter).unwrap_or_default().max(1) - } -} - -db_channel! { - SubstrateDbChannels { - CosignTransactions: (network: NetworkId) -> (Session, u64, [u8; 32]), - } -} - -impl CosignTransactions { - // Append a cosign transaction. - pub fn append_cosign(txn: &mut impl DbTxn, set: ValidatorSet, number: u64, hash: [u8; 32]) { - CosignTransactions::send(txn, set.network, &(set.session, number, hash)) - } -} - -async fn block_has_events( - txn: &mut impl DbTxn, - serai: &Serai, - block: u64, -) -> Result { - let cached = BlockHasEventsCache::get(txn, block); - match cached { - None => { - let serai = serai.as_of( - serai - .finalized_block_by_number(block) - .await? 
- .expect("couldn't get block which should've been finalized") - .hash(), - ); - - if !serai.validator_sets().key_gen_events().await?.is_empty() { - return Ok(HasEvents::KeyGen); - } - - let has_no_events = serai.coins().burn_with_instruction_events().await?.is_empty() && - serai.in_instructions().batch_events().await?.is_empty() && - serai.validator_sets().new_set_events().await?.is_empty() && - serai.validator_sets().set_retired_events().await?.is_empty(); - - let has_events = if has_no_events { HasEvents::No } else { HasEvents::Yes }; - - BlockHasEventsCache::set(txn, block, &has_events); - Ok(has_events) - } - Some(code) => Ok(code), - } -} - -async fn potentially_cosign_block( - txn: &mut impl DbTxn, - serai: &Serai, - block: u64, - skipped_block: Option, - window_end_exclusive: u64, -) -> Result { - // The following code regarding marking cosigned if prior block is cosigned expects this block to - // not be zero - // While we could perform this check there, there's no reason not to optimize the entire function - // as such - if block == 0 { - return Ok(false); - } - - let block_has_events = block_has_events(txn, serai, block).await?; - - // If this block had no events and immediately follows a cosigned block, mark it as cosigned - if (block_has_events == HasEvents::No) && - (LatestCosignedBlock::latest_cosigned_block(txn) == (block - 1)) - { - log::debug!("automatically co-signing next block ({block}) since it has no events"); - LatestCosignedBlock::set(txn, &block); - } - - // If we skipped a block, we're supposed to sign it plus the COSIGN_DISTANCE if no other blocks - // trigger a cosigning protocol covering it - // This means there will be the maximum delay allowed from a block needing cosigning occurring - // and a cosign for it triggering - let maximally_latent_cosign_block = - skipped_block.map(|skipped_block| skipped_block + COSIGN_DISTANCE); - - // If this block is within the window, - if block < window_end_exclusive { - // and set a key, cosign it - if block_has_events == HasEvents::KeyGen { - IntendedCosign::set_intended_cosign(txn, block); - // Carry skipped if it isn't included by cosigning this block - if let Some(skipped) = skipped_block { - if skipped > block { - IntendedCosign::set_skipped_cosign(txn, block); - } - } - return Ok(true); - } - } else if (Some(block) == maximally_latent_cosign_block) || (block_has_events != HasEvents::No) { - // Since this block was outside the window and had events/was maximally latent, cosign it - IntendedCosign::set_intended_cosign(txn, block); - return Ok(true); - } - Ok(false) -} - -/* - Advances the cosign protocol as should be done per the latest block. - - A block is considered cosigned if: - A) It was cosigned - B) It's the parent of a cosigned block - C) It immediately follows a cosigned block and has no events requiring cosigning - - This only actually performs advancement within a limited bound (generally until it finds a block - which should be cosigned). Accordingly, it is necessary to call multiple times even if - `latest_number` doesn't change. 
-*/ -async fn advance_cosign_protocol_inner( - db: &mut impl Db, - key: &Zeroizing<::F>, - serai: &Serai, - latest_number: u64, -) -> Result<(), SeraiError> { - let mut txn = db.txn(); - - const INITIAL_INTENDED_COSIGN: u64 = 1; - let (last_intended_to_cosign_block, mut skipped_block) = { - let intended_cosign = IntendedCosign::get(&txn); - // If we haven't prior intended to cosign a block, set the intended cosign to 1 - if let Some(intended_cosign) = intended_cosign { - intended_cosign - } else { - IntendedCosign::set_intended_cosign(&mut txn, INITIAL_INTENDED_COSIGN); - IntendedCosign::get(&txn).unwrap() - } - }; - - // "windows" refers to the window of blocks where even if there's a block which should be - // cosigned, it won't be due to proximity due to the prior cosign - let mut window_end_exclusive = last_intended_to_cosign_block + COSIGN_DISTANCE; - // If we've never triggered a cosign, don't skip any cosigns based on proximity - if last_intended_to_cosign_block == INITIAL_INTENDED_COSIGN { - window_end_exclusive = 1; - } - - // The consensus rules for this are `last_intended_to_cosign_block + 1` - let scan_start_block = last_intended_to_cosign_block + 1; - // As a practical optimization, we don't re-scan old blocks since old blocks are independent to - // new state - let scan_start_block = scan_start_block.max(ScanCosignFrom::get(&txn).unwrap_or(1)); - - // Check all blocks within the window to see if they should be cosigned - // If so, we're skipping them and need to flag them as skipped so that once the window closes, we - // do cosign them - // We only perform this check if we haven't already marked a block as skipped since the cosign - // the skipped block will cause will cosign all other blocks within this window - if skipped_block.is_none() { - let window_end_inclusive = window_end_exclusive - 1; - for b in scan_start_block ..= window_end_inclusive.min(latest_number) { - if block_has_events(&mut txn, serai, b).await? == HasEvents::Yes { - skipped_block = Some(b); - log::debug!("skipping cosigning {b} due to proximity to prior cosign"); - IntendedCosign::set_skipped_cosign(&mut txn, b); - break; - } - } - } - - // A block which should be cosigned - let mut to_cosign = None; - // A list of sets which are cosigning, along with a boolean of if we're in the set - let mut cosigning = vec![]; - - for block in scan_start_block ..= latest_number { - let actual_block = serai - .finalized_block_by_number(block) - .await? - .expect("couldn't get block which should've been finalized"); - - // Save the block number for this block, as needed by the cosigner to perform cosigning - SeraiBlockNumber::set(&mut txn, actual_block.hash(), &block); - - if potentially_cosign_block(&mut txn, serai, block, skipped_block, window_end_exclusive).await? - { - to_cosign = Some((block, actual_block.hash())); - - // Get the keys as of the prior block - // If this key sets new keys, the coordinator won't acknowledge so until we process this - // block - // We won't process this block until its co-signed - // Using the keys of the prior block ensures this deadlock isn't reached - let serai = serai.as_of(actual_block.header.parent_hash.into()); - - for network in serai_client::primitives::NETWORKS { - // Get the latest session to have set keys - let set_with_keys = { - let Some(latest_session) = serai.validator_sets().session(network).await? 
else { - continue; - }; - let prior_session = Session(latest_session.0.saturating_sub(1)); - if serai - .validator_sets() - .keys(ValidatorSet { network, session: prior_session }) - .await? - .is_some() - { - ValidatorSet { network, session: prior_session } - } else { - let set = ValidatorSet { network, session: latest_session }; - if serai.validator_sets().keys(set).await?.is_none() { - continue; - } - set - } - }; - - log::debug!("{:?} will be cosigning {block}", set_with_keys.network); - cosigning.push((set_with_keys, in_set(key, &serai, set_with_keys).await?.unwrap())); - } - - break; - } - - // If this TX is committed, always start future scanning from the next block - ScanCosignFrom::set(&mut txn, &(block + 1)); - // Since we're scanning *from* the next block, tidy the cache - BlockHasEventsCache::del(&mut txn, block); - } - - if let Some((number, hash)) = to_cosign { - // If this block doesn't have cosigners, yet does have events, automatically mark it as - // cosigned - if cosigning.is_empty() { - log::debug!("{} had no cosigners available, marking as cosigned", number); - LatestCosignedBlock::set(&mut txn, &number); - } else { - for (set, in_set) in cosigning { - if in_set { - log::debug!("cosigning {number} with {:?} {:?}", set.network, set.session); - CosignTransactions::append_cosign(&mut txn, set, number, hash); - } - } - } - } - txn.commit(); - - Ok(()) -} - -pub async fn advance_cosign_protocol( - db: &mut impl Db, - key: &Zeroizing<::F>, - serai: &Serai, - latest_number: u64, -) -> Result<(), SeraiError> { - loop { - let scan_from = ScanCosignFrom::get(db).unwrap_or(1); - // Only scan 1000 blocks at a time to limit a massive txn from forming - let scan_to = latest_number.min(scan_from + 1000); - advance_cosign_protocol_inner(db, key, serai, scan_to).await?; - // If we didn't limit the scan_to, break - if scan_to == latest_number { - break; - } - } - Ok(()) -} diff --git a/deny.toml b/deny.toml index 66bd4bc6..cc45984a 100644 --- a/deny.toml +++ b/deny.toml @@ -73,6 +73,7 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-monero-processor" }, { allow = ["AGPL-3.0"], name = "tributary-chain" }, + { allow = ["AGPL-3.0"], name = "serai-cosign" }, { allow = ["AGPL-3.0"], name = "serai-coordinator" }, { allow = ["AGPL-3.0"], name = "serai-coins-pallet" }, From ef972b26589789790de8897687ad4e184080fd4b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 25 Dec 2024 00:06:46 -0500 Subject: [PATCH 208/368] Add cosign signature verification --- coordinator/cosign/Cargo.toml | 8 +--- coordinator/cosign/README.md | 10 ++-- coordinator/cosign/src/evaluator.rs | 8 ++-- coordinator/cosign/src/lib.rs | 74 ++++++++++++++++++++--------- 4 files changed, 62 insertions(+), 38 deletions(-) diff --git a/coordinator/cosign/Cargo.toml b/coordinator/cosign/Cargo.toml index b6177121..4aab6348 100644 --- a/coordinator/cosign/Cargo.toml +++ b/coordinator/cosign/Cargo.toml @@ -18,14 +18,10 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } -borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } - blake2 = { version = "0.10", default-features = false, features = ["std"] } -ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] } -schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] } +schnorrkel = { version = 
"0.11", default-features = false, features = ["std"] } -sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] } log = { version = "0.4", default-features = false, features = ["std"] } diff --git a/coordinator/cosign/README.md b/coordinator/cosign/README.md index 96fe0f92..10f31378 100644 --- a/coordinator/cosign/README.md +++ b/coordinator/cosign/README.md @@ -114,8 +114,8 @@ asynchronous network or 11.33% of non-Serai validator sets' stake. ### TODO The Serai node no longer responding to RPC requests upon detecting any -equivocation, and the fallback protocol where validators individually produce -signatures, are not implemented at this time. The former means the detection of -equivocating cosigns not redundant and the latter makes 5.67% of non-Serai -validator sets' stake the DoS threshold, even without control of an -asynchronous network. +equivocation, the delayed acknowledgement of cosigns, and the fallback protocol +where validators individually produce signatures, are not implemented at this +time. The former means the detection of equivocating cosigns not redundant and +the latter makes 5.67% of non-Serai validator sets' stake the DoS threshold, +even without control of an asynchronous network. diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs index db77281a..5be7f924 100644 --- a/coordinator/cosign/src/evaluator.rs +++ b/coordinator/cosign/src/evaluator.rs @@ -70,7 +70,7 @@ impl ContinuallyRan for CosignEvaluatorTask ContinuallyRan for CosignEvaluatorTask= block_number { + if cosign.cosign.block_number >= block_number { weight_cosigned += total_weight } // Update the lowest block common to all of these cosigns lowest_common_block = lowest_common_block - .map(|existing| existing.min(cosign.block_number)) - .or(Some(cosign.block_number)); + .map(|existing| existing.min(cosign.cosign.block_number)) + .or(Some(cosign.cosign.block_number)); } // Check if the sum weight doesn't cross the required threshold diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index 345334a4..f42ffb7c 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -23,6 +23,9 @@ mod intend; mod evaluator; use evaluator::LatestCosignedBlockNumber; +/// The schnorrkel context to used when signing a cosign. +pub const COSIGN_CONTEXT: &[u8] = b"serai-cosign"; + /// A 'global session', defined as all validator sets used for cosigning at a given moment. /// /// We evaluate cosign faults within a global session. This ensures even if cosigners cosign @@ -63,14 +66,14 @@ create_db! { // // This will only be populated with cosigns predating or during the most recent global session // to have its start cosigned. - Cosigns: (block_number: u64) -> Vec, + Cosigns: (block_number: u64) -> Vec, // The latest cosigned block for each network. // // This will only be populated with cosigns predating or during the most recent global session // to have its start cosigned. - NetworksLatestCosignedBlock: (network: NetworkId) -> Cosign, + NetworksLatestCosignedBlock: (network: NetworkId) -> SignedCosign, // Cosigns received for blocks not locally recognized as finalized. 
- Faults: (global_session: [u8; 32]) -> Vec, + Faults: (global_session: [u8; 32]) -> Vec, // The global session which faulted. FaultedSession: () -> [u8; 32], } @@ -105,7 +108,7 @@ struct CosignIntent { /// The identification of a cosigner. #[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] -enum Cosigner { +pub enum Cosigner { /// The network which produced this cosign. ValidatorSet(NetworkId), /// The individual validator which produced this cosign. @@ -113,16 +116,34 @@ enum Cosigner { } /// A cosign. -#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] -struct Cosign { +#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] +pub struct Cosign { /// The global session this cosign is being performed under. - global_session: [u8; 32], + pub global_session: [u8; 32], /// The number of the block to cosign. - block_number: u64, + pub block_number: u64, /// The hash of the block to cosign. - block_hash: [u8; 32], + pub block_hash: [u8; 32], /// The actual cosigner. - cosigner: Cosigner, + pub cosigner: Cosigner, +} + +/// A signed cosign. +#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] +pub struct SignedCosign { + /// The cosign. + pub cosign: Cosign, + /// The signature for the cosign. + pub signature: [u8; 64], +} + +impl SignedCosign { + fn verify_signature(&self, signer: serai_client::Public) -> bool { + let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false }; + let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false }; + + signer.verify_simple(COSIGN_CONTEXT, &borsh::to_vec(&self.cosign).unwrap(), &signature).is_ok() + } } /// Construct a `TemporalSerai` bound to the time used for cosigning this block. @@ -258,7 +279,7 @@ impl Cosigning { } /// Fetch the notable cosigns for a global session in order to respond to requests. - pub fn notable_cosigns(&self, global_session: [u8; 32]) -> Vec { + pub fn notable_cosigns(&self, global_session: [u8; 32]) -> Vec { todo!("TODO") } @@ -266,14 +287,14 @@ impl Cosigning { /// /// This will be the most recent cosigns, in case the initial broadcast failed, or the faulty /// cosigns, in case of a fault, to induce identification of the fault by others. - pub fn cosigns_to_rebroadcast(&self) -> Vec { + pub fn cosigns_to_rebroadcast(&self) -> Vec { if let Some(faulted) = FaultedSession::get(&self.db) { let mut cosigns = Faults::get(&self.db, faulted).unwrap(); // Also include all of our recognized-as-honest cosigns in an attempt to induce fault // identification in those who see the faulty cosigns as honest for network in serai_client::primitives::NETWORKS { if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, network) { - if cosign.global_session == faulted { + if cosign.cosign.global_session == faulted { cosigns.push(cosign); } } @@ -303,12 +324,14 @@ impl Cosigning { // more relevant, cosign) again. // // Takes `&mut self` as this should only be called once at any given moment. 
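For `verify_signature` above to pass, the signer must produce a schnorrkel signature over the borsh-encoded `Cosign` under `COSIGN_CONTEXT`. A sketch of that producing side, assuming a `schnorrkel::Keypair` is in hand (this helper is illustrative, not part of the crate):

  use schnorrkel::Keypair;

  // Sign the borsh-encoded cosign under the cosigning context, mirroring the
  // `verify_simple` call in `verify_signature`.
  fn sign_cosign(keypair: &Keypair, cosign: Cosign) -> SignedCosign {
    let signature =
      keypair.sign_simple(COSIGN_CONTEXT, &borsh::to_vec(&cosign).unwrap()).to_bytes();
    SignedCosign { cosign, signature }
  }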
- pub async fn intake_cosign(&mut self, cosign: Cosign) -> Result { + pub async fn intake_cosign(&mut self, signed_cosign: SignedCosign) -> Result { + let cosign = &signed_cosign.cosign; + // Check if we've prior handled this cosign let mut txn = self.db.txn(); let mut cosigns_for_this_block_position = Cosigns::get(&txn, cosign.block_number).unwrap_or(vec![]); - if cosigns_for_this_block_position.iter().any(|existing| *existing == cosign) { + if cosigns_for_this_block_position.iter().any(|existing| existing.cosign == *cosign) { return Ok(true); } @@ -329,7 +352,9 @@ impl Cosigning { return Ok(false); }; - todo!("TODO"); + if !signed_cosign.verify_signature(keys.0) { + return Ok(false); + } network } @@ -348,7 +373,7 @@ impl Cosigning { // cosign // Save the cosign to the database - cosigns_for_this_block_position.push(cosign); + cosigns_for_this_block_position.push(signed_cosign.clone()); Cosigns::set(&mut txn, cosign.block_number, &cosigns_for_this_block_position); let our_block_hash = self @@ -359,23 +384,23 @@ impl Cosigning { .expect("requested hash of a finalized block yet received None"); if our_block_hash == cosign.block_hash { // If this is for a future global session, we don't acknowledge this cosign at this time - if global_session_start_block_number > LatestCosignedBlockNumber::get(&self.db).unwrap_or(0) { + if global_session_start_block_number > LatestCosignedBlockNumber::get(&txn).unwrap_or(0) { drop(txn); return Ok(true); } if NetworksLatestCosignedBlock::get(&txn, network) - .map(|cosign| cosign.block_number) + .map(|cosign| cosign.cosign.block_number) .unwrap_or(0) < cosign.block_number { - NetworksLatestCosignedBlock::set(&mut txn, network, &cosign); + NetworksLatestCosignedBlock::set(&mut txn, network, &signed_cosign); } } else { let mut faults = Faults::get(&txn, cosign.global_session).unwrap_or(vec![]); // Only handle this as a fault if this set wasn't prior faulty - if !faults.iter().any(|cosign| cosign.cosigner == Cosigner::ValidatorSet(network)) { - faults.push(cosign); + if !faults.iter().any(|cosign| cosign.cosign.cosigner == Cosigner::ValidatorSet(network)) { + faults.push(signed_cosign.clone()); Faults::set(&mut txn, cosign.global_session, &faults); let mut weight_cosigned = 0; @@ -394,7 +419,10 @@ impl Cosigning { total_weight += stake; // Check if this set cosigned this block or not - if faults.iter().any(|cosign| cosign.cosigner == Cosigner::ValidatorSet(set.network)) { + if faults + .iter() + .any(|cosign| cosign.cosign.cosigner == Cosigner::ValidatorSet(set.network)) + { weight_cosigned += total_weight } } From e119fb4c16607d8f9598da2142adf57856d72898 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 25 Dec 2024 01:45:37 -0500 Subject: [PATCH 209/368] Replace Cosigns by extending NetworksLatestCosignedBlock Cosigns was an archive of every single cosign ever received. By scoping NetworksLatestCosignedBlock to the global session, we have the latest cosign for each network in a session (valid to replace all prior cosigns by that network within that session, even for the purposes of fault) and automatically have the notable cosigns indexed (as they are the latest ones within their session). This not only saves space but also allows optimizing evaluation a bit.
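The global session doing this scoping is identified by hashing the sorted list of cosigning sets, per `GlobalSession::id` (in the form it takes later in this series); a minimal sketch of that derivation:

  use blake2::{Digest, Blake2s256};
  use serai_client::validator_sets::primitives::ValidatorSet;

  // Sort the cosigners by their borsh encoding, then hash the borsh encoding
  // of the sorted list, yielding a deterministic session ID.
  fn global_session_id(mut cosigners: Vec<ValidatorSet>) -> [u8; 32] {
    cosigners.sort_by_key(|a| borsh::to_vec(a).unwrap());
    Blake2s256::digest(borsh::to_vec(&cosigners).unwrap()).into()
  }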
--- coordinator/cosign/Cargo.toml | 4 + coordinator/cosign/src/evaluator.rs | 66 +++++++++++--- coordinator/cosign/src/intend.rs | 59 +++++++----- coordinator/cosign/src/lib.rs | 135 ++++++++++++++++------------ 4 files changed, 169 insertions(+), 95 deletions(-) diff --git a/coordinator/cosign/Cargo.toml b/coordinator/cosign/Cargo.toml index 4aab6348..bbd96399 100644 --- a/coordinator/cosign/Cargo.toml +++ b/coordinator/cosign/Cargo.toml @@ -14,6 +14,9 @@ rust-version = "1.81" all-features = true rustdoc-args = ["--cfg", "docsrs"] +[package.metadata.cargo-machete] +ignored = ["scale"] + [lints] workspace = true @@ -21,6 +24,7 @@ workspace = true blake2 = { version = "0.10", default-features = false, features = ["std"] } schnorrkel = { version = "0.11", default-features = false, features = ["std"] } +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] } diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs index 5be7f924..f38b18b1 100644 --- a/coordinator/cosign/src/evaluator.rs +++ b/coordinator/cosign/src/evaluator.rs @@ -5,11 +5,18 @@ use serai_client::{primitives::Amount, Serai}; use serai_db::*; use serai_task::ContinuallyRan; -use crate::{*, intend::BlockHasEvents}; +use crate::{ + *, + intend::{BlockEventData, BlockEvents}, +}; create_db!( SubstrateCosignEvaluator { + // The latest cosigned block number. LatestCosignedBlockNumber: () -> u64, + // The latest global session evaluated. + // TODO: Also include the weights here + LatestGlobalSessionEvaluated: () -> ([u8; 32], Vec), } ); @@ -20,7 +27,23 @@ pub(crate) struct CosignEvaluatorTask { pub(crate) request: R, } -// TODO: Add a cache for the stake values +async fn get_latest_global_session_evaluated( + txn: &mut impl DbTxn, + serai: &Serai, + parent_hash: [u8; 32], +) -> Result<([u8; 32], Vec), String> { + Ok(match LatestGlobalSessionEvaluated::get(txn) { + Some(res) => res, + None => { + // This is the initial global session + // Fetch the sets participating and declare it the latest value recognized + let sets = cosigning_sets_by_parent_hash(serai, parent_hash).await?; + let initial_global_session = GlobalSession::new(sets.clone()).id(); + LatestGlobalSessionEvaluated::set(txn, &(initial_global_session, sets.clone())); + (initial_global_session, sets) + } + }) +} impl ContinuallyRan for CosignEvaluatorTask { fn run_iteration(&mut self) -> impl Send + Future> { @@ -31,23 +54,26 @@ impl ContinuallyRan for CosignEvaluatorTask { + let (global_session, sets) = + get_latest_global_session_evaluated(&mut txn, &self.serai, parent_hash).await?; + let mut weight_cosigned = 0; let mut total_weight = 0; - let (_block, sets) = cosigning_sets_for_block(&self.serai, block_number).await?; - let global_session = GlobalSession::new(sets.clone()).id(); let (_, global_session_start_block) = GlobalSessions::get(&txn, global_session).expect( "checking if intended cosign was satisfied within an unrecognized global session", ); @@ -68,9 +94,9 @@ impl ContinuallyRan for CosignEvaluatorTask ContinuallyRan for CosignEvaluatorTask ContinuallyRan for CosignEvaluatorTask ContinuallyRan for CosignEvaluatorTask= block_number { @@ -167,6 +202,11 @@ impl ContinuallyRan for CosignEvaluatorTask (u64, HasEvents), + BlockEvents: () -> BlockEventData, IntendedCosigns: 
(set: ValidatorSet) -> CosignIntent, } } async fn block_has_events_justifying_a_cosign( serai: &Serai, - block: u64, -) -> Result { - let serai = serai.as_of( - serai - .finalized_block_by_number(block) - .await? - .expect("couldn't get block which should've been finalized") - .hash(), - ); + block_number: u64, +) -> Result<(Block, HasEvents), SeraiError> { + let block = serai + .finalized_block_by_number(block_number) + .await? + .expect("couldn't get block which should've been finalized"); + let serai = serai.as_of(block.hash()); if !serai.validator_sets().key_gen_events().await?.is_empty() { - return Ok(HasEvents::Notable); + return Ok((block, HasEvents::Notable)); } if !serai.coins().burn_with_instruction_events().await?.is_empty() { - return Ok(HasEvents::NonNotable); + return Ok((block, HasEvents::NonNotable)); } - Ok(HasEvents::No) + Ok((block, HasEvents::No)) } /// A task to determine which blocks we should intend to cosign. @@ -59,23 +65,22 @@ impl ContinuallyRan for CosignIntendTask { for block_number in start_block_number ..= latest_block_number { let mut txn = self.db.txn(); - let mut has_events = block_has_events_justifying_a_cosign(&self.serai, block_number) - .await - .map_err(|e| format!("{e:?}"))?; + let (block, mut has_events) = + block_has_events_justifying_a_cosign(&self.serai, block_number) + .await + .map_err(|e| format!("{e:?}"))?; match has_events { HasEvents::Notable | HasEvents::NonNotable => { - let (block, sets) = cosigning_sets_for_block(&self.serai, block_number).await?; + let sets = cosigning_sets_for_block(&self.serai, &block).await?; // If this is notable, it creates a new global session, which we index into the // database now if has_events == HasEvents::Notable { let sets = cosigning_sets(&self.serai.as_of(block.hash())).await?; - GlobalSessions::set( - &mut txn, - GlobalSession::new(sets).id(), - &(block.number(), block.hash()), - ); + let global_session = GlobalSession::new(sets).id(); + GlobalSessions::set(&mut txn, global_session, &(block.number(), block.hash())); + LatestGlobalSessionIntended::set(&mut txn, &global_session); } // If this block doesn't have any cosigners, meaning it'll never be cosigned, we flag it @@ -104,7 +109,15 @@ impl ContinuallyRan for CosignIntendTask { HasEvents::No => {} } // Populate a singular feed with every block's status for the evluator to work off of - BlockHasEvents::send(&mut txn, &(block_number, has_events)); + BlockEvents::send( + &mut txn, + &(BlockEventData { + block_number, + parent_hash: block.header.parent_hash.into(), + block_hash: block.hash(), + has_events, + }), + ); // Mark this block as handled, meaning we should scan from the next block moving on ScanCosignFrom::set(&mut txn, &(block_number + 1)); txn.commit(); diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index f42ffb7c..7dc5a3a1 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -62,16 +62,20 @@ create_db! { Cosign { // A mapping from a global session's ID to its start block (number, hash). GlobalSessions: (global_session: [u8; 32]) -> (u64, [u8; 32]), - // An archive of all cosigns ever received. + // The latest global session intended. // - // This will only be populated with cosigns predating or during the most recent global session - // to have its start cosigned. - Cosigns: (block_number: u64) -> Vec, + // This is distinct from the latest global session for which we've evaluated the cosigns for. 
+ LatestGlobalSessionIntended: () -> [u8; 32], // The latest cosigned block for each network. // // This will only be populated with cosigns predating or during the most recent global session // to have its start cosigned. - NetworksLatestCosignedBlock: (network: NetworkId) -> SignedCosign, + // + // The global session changes upon a notable block, causing each global session to have exactly + // one notable block. All validator sets will explicitly produce a cosign for their notable + // block, causing the latest cosigned block for a global session to either be the global + // session's notable cosigns or the network's latest cosigns. + NetworksLatestCosignedBlock: (global_session: [u8; 32], network: NetworkId) -> SignedCosign, // Cosigns received for blocks not locally recognized as finalized. Faults: (global_session: [u8; 32]) -> Vec, // The global session which faulted. @@ -146,25 +150,6 @@ impl SignedCosign { } } -/// Construct a `TemporalSerai` bound to the time used for cosigning this block. -async fn temporal_serai_used_for_cosigning( - serai: &Serai, - block_number: u64, -) -> Result<(Block, TemporalSerai<'_>), String> { - let block = serai - .finalized_block_by_number(block_number) - .await - .map_err(|e| format!("{e:?}"))? - .ok_or("block wasn't finalized".to_string())?; - - // If we're cosigning block `n`, it's cosigned by the sets as of block `n-1` - // (as block `n` may update the sets declared but that update shouldn't take effect here - // until it's cosigned) - let serai = serai.as_of(block.header.parent_hash.into()); - - Ok((block, serai)) -} - /// Fetch the keys used for cosigning by a specific network. async fn keys_for_network( serai: &TemporalSerai<'_>, @@ -216,13 +201,25 @@ async fn cosigning_sets(serai: &TemporalSerai<'_>) -> Result, Ok(sets) } +/// Fetch the `ValidatorSet`s used for cosigning a block by the block's parent hash. +async fn cosigning_sets_by_parent_hash( + serai: &Serai, + parent_hash: [u8; 32], +) -> Result, String> { + /* + If we're cosigning block `n`, it's cosigned by the sets as of block `n-1` (as block `n` may + update the sets declared but that update shouldn't take effect until block `n` is cosigned). + That's why fetching the cosigning sets for a block by its parent hash is valid. + */ + cosigning_sets(&serai.as_of(parent_hash)).await +} + /// Fetch the `ValidatorSet`s used for cosigning this block. async fn cosigning_sets_for_block( serai: &Serai, - block_number: u64, -) -> Result<(Block, Vec), String> { - let (block, serai) = temporal_serai_used_for_cosigning(serai, block_number).await?; - cosigning_sets(&serai).await.map(|sets| (block, sets)) + block: &Block, +) -> Result, String> { + cosigning_sets_by_parent_hash(serai, block.header.parent_hash.into()).await } /// An object usable to request notable cosigns for a block. @@ -279,8 +276,17 @@ impl Cosigning { } /// Fetch the notable cosigns for a global session in order to respond to requests. + /// + /// If this global session hasn't produced any notable cosigns, this will return the latest + /// cosigns for this session. pub fn notable_cosigns(&self, global_session: [u8; 32]) -> Vec { - todo!("TODO") + let mut cosigns = Vec::with_capacity(serai_client::primitives::NETWORKS.len()); + for network in serai_client::primitives::NETWORKS { + if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, global_session, network) { + cosigns.push(cosign); + } + } + cosigns } /// The cosigns to rebroadcast ever so often. 
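The faults referenced below are declared once cosigners holding at least 17% of a global session's stake sign a block hash diverging from the locally finalized one. A sketch of that accounting, with field names mirroring the `GlobalSession` snapshot introduced later in this series:

  use std::collections::HashMap;
  use serai_client::primitives::NetworkId;

  // Sum the stake of every faulting cosigner, then compare against 17% of the
  // session's total stake (integer math, so the threshold rounds down).
  fn session_faulted(
    faulting: &[NetworkId],
    stakes: &HashMap<NetworkId, u64>,
    total_stake: u64,
  ) -> bool {
    let weight_faulted: u64 =
      faulting.iter().filter_map(|network| stakes.get(network).copied()).sum();
    weight_faulted >= ((total_stake * 17) / 100)
  }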
@@ -293,7 +299,7 @@ impl Cosigning { // Also include all of our recognized-as-honest cosigns in an attempt to induce fault // identification in those who see the faulty cosigns as honest for network in serai_client::primitives::NETWORKS { - if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, network) { + if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, faulted, network) { if cosign.cosign.global_session == faulted { cosigns.push(cosign); } @@ -301,9 +307,14 @@ impl Cosigning { } cosigns } else { + let Some(latest_global_session) = LatestGlobalSessionIntended::get(&self.db) else { + return vec![]; + }; let mut cosigns = Vec::with_capacity(serai_client::primitives::NETWORKS.len()); for network in serai_client::primitives::NETWORKS { - if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, network) { + if let Some(cosign) = + NetworksLatestCosignedBlock::get(&self.db, latest_global_session, network) + { cosigns.push(cosign); } } @@ -324,15 +335,24 @@ impl Cosigning { // more relevant, cosign) again. // // Takes `&mut self` as this should only be called once at any given moment. + // TODO: Don't overload bool here pub async fn intake_cosign(&mut self, signed_cosign: SignedCosign) -> Result { let cosign = &signed_cosign.cosign; - // Check if we've prior handled this cosign + let Cosigner::ValidatorSet(network) = cosign.cosigner else { + // TODO + // Individually signed cosign despite that protocol not being implemented + return Ok(false); + }; + + // Check if we should even bother handling this cosign let mut txn = self.db.txn(); - let mut cosigns_for_this_block_position = - Cosigns::get(&txn, cosign.block_number).unwrap_or(vec![]); - if cosigns_for_this_block_position.iter().any(|existing| existing.cosign == *cosign) { - return Ok(true); + // TODO: A malicious validator set can sign a block after their notable block to erase a + // notable cosign + if let Some(existing) = NetworksLatestCosignedBlock::get(&txn, cosign.global_session, network) { + if existing.cosign.block_number >= cosign.block_number { + return Ok(true); + } } // Check we can verify this cosign's signature @@ -342,24 +362,31 @@ impl Cosigning { // Unrecognized global session return Ok(true); }; + if cosign.block_number <= global_session_start_block_number { + // Cosign is for a block predating the global session + return Ok(false); + } // Check the cosign's signature - let network = match cosign.cosigner { - Cosigner::ValidatorSet(network) => { - let Some((_session, keys)) = - keys_for_network(&self.serai.as_of(global_session_start_block_hash), network).await? - else { - return Ok(false); - }; + { + let key = match cosign.cosigner { + Cosigner::ValidatorSet(network) => { + // TODO: Cache this + let Some((_session, keys)) = + keys_for_network(&self.serai.as_of(global_session_start_block_hash), network).await? 
+ else { + return Ok(false); + }; - if !signed_cosign.verify_signature(keys.0) { - return Ok(false); + keys.0 } + Cosigner::Validator(signer) => signer.into(), + }; - network + if !signed_cosign.verify_signature(key) { + return Ok(false); } - Cosigner::Validator(_) => return Ok(false), - }; + } // Check our finalized blockchain exceeds this block number if self.serai.latest_finalized_block().await.map_err(|e| format!("{e:?}"))?.number() < @@ -372,10 +399,6 @@ impl Cosigning { // Since we verified this cosign's signature, and have a chain sufficiently long, handle the // cosign - // Save the cosign to the database - cosigns_for_this_block_position.push(signed_cosign.clone()); - Cosigns::set(&mut txn, cosign.block_number, &cosigns_for_this_block_position); - let our_block_hash = self .serai .block_hash(cosign.block_number) @@ -389,13 +412,7 @@ impl Cosigning { return Ok(true); } - if NetworksLatestCosignedBlock::get(&txn, network) - .map(|cosign| cosign.cosign.block_number) - .unwrap_or(0) < - cosign.block_number - { - NetworksLatestCosignedBlock::set(&mut txn, network, &signed_cosign); - } + NetworksLatestCosignedBlock::set(&mut txn, cosign.global_session, network, &signed_cosign); } else { let mut faults = Faults::get(&txn, cosign.global_session).unwrap_or(vec![]); // Only handle this as a fault if this set wasn't prior faulty From 5b337c3ce8a9c69ab0a7e6482be6a8dfa216485a Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 25 Dec 2024 02:11:05 -0500 Subject: [PATCH 210/368] Prevent a malicious validator set from overwriting a notable cosign Also prevents panics from an invalid Serai node (removing the assumption of an honest Serai node). --- coordinator/cosign/src/evaluator.rs | 15 ++++++----- coordinator/cosign/src/intend.rs | 18 ++++++++----- coordinator/cosign/src/lib.rs | 41 ++++++++++++++--------------- 3 files changed, 40 insertions(+), 34 deletions(-) diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs index f38b18b1..152830f0 100644 --- a/coordinator/cosign/src/evaluator.rs +++ b/coordinator/cosign/src/evaluator.rs @@ -74,9 +74,11 @@ impl ContinuallyRan for CosignEvaluatorTask ContinuallyRan for CosignEvaluatorTask Result<(Block, HasEvents), SeraiError> { +) -> Result<(Block, HasEvents), String> { let block = serai .finalized_block_by_number(block_number) - .await? - .expect("couldn't get block which should've been finalized"); + .await + .map_err(|e| format!("{e:?}"))? 
+ .ok_or_else(|| "couldn't get block which should've been finalized".to_string())?; let serai = serai.as_of(block.hash()); - if !serai.validator_sets().key_gen_events().await?.is_empty() { + if !serai.validator_sets().key_gen_events().await.map_err(|e| format!("{e:?}"))?.is_empty() { return Ok((block, HasEvents::Notable)); } - if !serai.coins().burn_with_instruction_events().await?.is_empty() { + if !serai.coins().burn_with_instruction_events().await.map_err(|e| format!("{e:?}"))?.is_empty() { return Ok((block, HasEvents::NonNotable)); } @@ -79,7 +80,10 @@ impl ContinuallyRan for CosignIntendTask { if has_events == HasEvents::Notable { let sets = cosigning_sets(&self.serai.as_of(block.hash())).await?; let global_session = GlobalSession::new(sets).id(); - GlobalSessions::set(&mut txn, global_session, &(block.number(), block.hash())); + GlobalSessions::set(&mut txn, global_session, &(block_number, block.hash())); + if let Some(ending_global_session) = LatestGlobalSessionIntended::get(&txn) { + GlobalSessionLastBlock::set(&mut txn, ending_global_session, &block_number); + } LatestGlobalSessionIntended::set(&mut txn, &global_session); } diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index 7dc5a3a1..d0abee08 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -62,6 +62,8 @@ create_db! { Cosign { // A mapping from a global session's ID to its start block (number, hash). GlobalSessions: (global_session: [u8; 32]) -> (u64, [u8; 32]), + // The last block to be cosigned by a global session. + GlobalSessionLastBlock: (global_session: [u8; 32]) -> u64, // The latest global session intended. // // This is distinct from the latest global session for which we've evaluated the cosigns for. @@ -295,7 +297,7 @@ impl Cosigning { /// cosigns, in case of a fault, to induce identification of the fault by others. 
pub fn cosigns_to_rebroadcast(&self) -> Vec { if let Some(faulted) = FaultedSession::get(&self.db) { - let mut cosigns = Faults::get(&self.db, faulted).unwrap(); + let mut cosigns = Faults::get(&self.db, faulted).expect("faulted with no faults"); // Also include all of our recognized-as-honest cosigns in an attempt to induce fault // identification in those who see the faulty cosigns as honest for network in serai_client::primitives::NETWORKS { @@ -345,19 +347,22 @@ impl Cosigning { return Ok(false); }; - // Check if we should even bother handling this cosign - let mut txn = self.db.txn(); - // TODO: A malicious validator set can sign a block after their notable block to erase a - // notable cosign - if let Some(existing) = NetworksLatestCosignedBlock::get(&txn, cosign.global_session, network) { + // Check this isn't a dated cosign + if let Some(existing) = + NetworksLatestCosignedBlock::get(&self.db, cosign.global_session, network) + { if existing.cosign.block_number >= cosign.block_number { return Ok(true); } } - // Check we can verify this cosign's signature + // Check our finalized (and indexed by intend) blockchain exceeds this block number + if cosign.block_number >= intend::ScanCosignFrom::get(&self.db).unwrap_or(0) { + return Ok(true); + } + let Some((global_session_start_block_number, global_session_start_block_hash)) = - GlobalSessions::get(&txn, cosign.global_session) + GlobalSessions::get(&self.db, cosign.global_session) else { // Unrecognized global session return Ok(true); @@ -366,6 +371,10 @@ impl Cosigning { // Cosign is for a block predating the global session return Ok(false); } + if Some(cosign.block_number) > GlobalSessionLastBlock::get(&self.db, cosign.global_session) { + // Cosign is for a block after the last block this global session should have signed + return Ok(false); + } // Check the cosign's signature { @@ -388,23 +397,17 @@ impl Cosigning { } } - // Check our finalized blockchain exceeds this block number - if self.serai.latest_finalized_block().await.map_err(|e| format!("{e:?}"))?.number() < - cosign.block_number - { - // Unrecognized block number - return Ok(true); - } - // Since we verified this cosign's signature, and have a chain sufficiently long, handle the // cosign + let mut txn = self.db.txn(); + let our_block_hash = self .serai .block_hash(cosign.block_number) .await .map_err(|e| format!("{e:?}"))? 
- .expect("requested hash of a finalized block yet received None"); + .ok_or_else(|| "requested hash of a finalized block yet received None".to_string())?; if our_block_hash == cosign.block_hash { // If this is for a future global session, we don't acknowledge this cosign at this time if global_session_start_block_number > LatestCosignedBlockNumber::get(&txn).unwrap_or(0) { @@ -445,10 +448,6 @@ impl Cosigning { } // Check if the sum weight means a fault has occurred - assert!( - total_weight != 0, - "evaluating valid cosign when no stake was present in the system" - ); if weight_cosigned >= ((total_weight * 17) / 100) { FaultedSession::set(&mut txn, &cosign.global_session); } From 4b34be05bf991d9c76e45387d21321dc7f68d4e6 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 25 Dec 2024 19:48:48 -0500 Subject: [PATCH 211/368] rocksdb 0.23 --- Cargo.lock | 21 +++++++++------------ patches/rocksdb/Cargo.toml | 4 ++-- 2 files changed, 11 insertions(+), 14 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 8965bf2d..c53057dc 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -3547,7 +3547,7 @@ dependencies = [ "httpdate", "itoa", "pin-project-lite", - "socket2 0.4.10", + "socket2 0.5.8", "tokio", "tower-service", "tracing", @@ -4111,7 +4111,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fc2f4eb4bc735547cfed7c0a4922cbd04a4655978c09b54f1f7b228750664c34" dependencies = [ "cfg-if", - "windows-targets 0.52.6", + "windows-targets 0.48.5", ] [[package]] @@ -4601,14 +4601,13 @@ dependencies = [ [[package]] name = "librocksdb-sys" -version = "0.16.0+8.10.0" +version = "0.17.1+9.9.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ce3d60bc059831dc1c83903fb45c103f75db65c5a7bf22272764d9cc683e348c" +checksum = "2b7869a512ae9982f4d46ba482c2a304f1efd80c6412a3d4bf57bb79a619679f" dependencies = [ "bindgen", "bzip2-sys", "cc", - "glob", "libc", "libz-sys", "lz4-sys", @@ -6764,14 +6763,14 @@ dependencies = [ name = "rocksdb" version = "0.21.0" dependencies = [ - "rocksdb 0.22.0", + "rocksdb 0.23.0", ] [[package]] name = "rocksdb" -version = "0.22.0" +version = "0.23.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6bd13e55d6d7b8cd0ea569161127567cd587676c99f4472f779a0279aa60a7a7" +checksum = "26ec73b20525cb235bad420f911473b69f9fe27cc856c5461bccd7e4af037f43" dependencies = [ "libc", "librocksdb-sys", @@ -8371,14 +8370,12 @@ version = "0.1.0" dependencies = [ "blake2", "borsh", - "ciphersuite", "log", "parity-scale-codec", - "schnorr-signatures", + "schnorrkel", "serai-client", "serai-db", "serai-task", - "sp-application-crypto", "tokio", ] @@ -11696,7 +11693,7 @@ version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb" dependencies = [ - "windows-sys 0.48.0", + "windows-sys 0.59.0", ] [[package]] diff --git a/patches/rocksdb/Cargo.toml b/patches/rocksdb/Cargo.toml index b77ee4bc..6622694a 100644 --- a/patches/rocksdb/Cargo.toml +++ b/patches/rocksdb/Cargo.toml @@ -13,10 +13,10 @@ all-features = true rustdoc-args = ["--cfg", "docsrs"] [dependencies] -rocksdb = { version = "0.22", default-features = false } +rocksdb = { version = "0.23", default-features = false, features = ["bindgen-runtime"] } [features] -jemalloc = [] +jemalloc = [] # Dropped as this causes a compilation failure on windows snappy = ["rocksdb/snappy"] lz4 = ["rocksdb/lz4"] zstd = ["rocksdb/zstd"] From 56af6c44ebedd3a37d977e9048cde07345d1cb6e Mon Sep 17 
00:00:00 2001 From: Luke Parker Date: Wed, 25 Dec 2024 21:19:04 -0500 Subject: [PATCH 212/368] Remove usage of serai from `intake_cosign` --- coordinator/cosign/src/evaluator.rs | 60 ++++--------- coordinator/cosign/src/intend.rs | 71 ++++++++++++--- coordinator/cosign/src/lib.rs | 131 +++++++++++++--------------- 3 files changed, 137 insertions(+), 125 deletions(-) diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs index 152830f0..06a3019f 100644 --- a/coordinator/cosign/src/evaluator.rs +++ b/coordinator/cosign/src/evaluator.rs @@ -1,6 +1,6 @@ use core::future::Future; -use serai_client::{primitives::Amount, Serai}; +use serai_client::Serai; use serai_db::*; use serai_task::ContinuallyRan; @@ -15,12 +15,12 @@ create_db!( // The latest cosigned block number. LatestCosignedBlockNumber: () -> u64, // The latest global session evaluated. - // TODO: Also include the weights here LatestGlobalSessionEvaluated: () -> ([u8; 32], Vec), } ); /// A task to determine if a block has been cosigned and we should handle it. +// TODO: Remove `serai` from this pub(crate) struct CosignEvaluatorTask { pub(crate) db: D, pub(crate) serai: Serai, @@ -38,7 +38,7 @@ async fn get_latest_global_session_evaluated( // This is the initial global session // Fetch the sets participating and declare it the latest value recognized let sets = cosigning_sets_by_parent_hash(serai, parent_hash).await?; - let initial_global_session = GlobalSession::new(sets.clone()).id(); + let initial_global_session = GlobalSession::id(sets.clone()); LatestGlobalSessionEvaluated::set(txn, &(initial_global_session, sets.clone())); (initial_global_session, sets) } @@ -73,39 +73,26 @@ impl ContinuallyRan for CosignEvaluatorTask ContinuallyRan for CosignEvaluatorTask>(); + let global_session = GlobalSession::id(sets.clone()); LatestGlobalSessionEvaluated::set(&mut txn, &(global_session, sets)); } } @@ -149,28 +137,15 @@ impl ContinuallyRan for CosignEvaluatorTask = None; for set in sets { - let stake = self - .serai - .as_of(global_session_start_block) - .validator_sets() - .total_allocated_stake(set.network) - .await - .map_err(|e| format!("{e:?}"))? 
- .unwrap_or(Amount(0)) - .0; - // Increment total_weight with this set's stake - total_weight += stake; - // Check if this set cosigned this block or not let Some(cosign) = NetworksLatestCosignedBlock::get(&txn, global_session, set.network) @@ -178,7 +153,10 @@ impl ContinuallyRan for CosignEvaluatorTask= block_number { - weight_cosigned += total_weight + weight_cosigned += + global_session_info.stakes.get(&set.network).ok_or_else(|| { + "ValidatorSet in global session yet didn't have its stake".to_string() + })?; } // Update the lowest block common to all of these cosigns @@ -188,7 +166,7 @@ impl ContinuallyRan for CosignEvaluatorTask ContinuallyRan for CosignIntendTask { .await .map_err(|e| format!("{e:?}"))?; + // Check we are indexing a linear chain + if (block_number > 1) && + (<[u8; 32]>::from(block.header.parent_hash) != + SubstrateBlocks::get(&txn, block_number - 1) + .expect("indexing a block but haven't indexed its parent")) + { + Err(format!( + "node's block #{block_number} doesn't build upon the block #{} prior indexed", + block_number - 1 + ))?; + } + + SubstrateBlocks::set(&mut txn, block_number, &block.hash()); + match has_events { HasEvents::Notable | HasEvents::NonNotable => { + // TODO: Replace with LatestGlobalSessionIntended, GlobalSessions let sets = cosigning_sets_for_block(&self.serai, &block).await?; + // If this block doesn't have any cosigners, meaning it'll never be cosigned, we flag + // it as not having any events requiring cosigning so we don't attempt to sign/require + // a cosign for it + if sets.is_empty() { + has_events = HasEvents::No; + } + // If this is notable, it creates a new global session, which we index into the // database now if has_events == HasEvents::Notable { - let sets = cosigning_sets(&self.serai.as_of(block.hash())).await?; - let global_session = GlobalSession::new(sets).id(); - GlobalSessions::set(&mut txn, global_session, &(block_number, block.hash())); + let serai = self.serai.as_of(block.hash()); + let sets = cosigning_sets(&serai).await?; + let global_session = GlobalSession::id(sets.iter().map(|(set, _key)| *set).collect()); + + let mut keys = HashMap::new(); + let mut stakes = HashMap::new(); + let mut total_stake = 0; + for (set, key) in &sets { + keys.insert(set.network, SeraiAddress::from(*key)); + let stake = serai + .validator_sets() + .total_allocated_stake(set.network) + .await + .map_err(|e| format!("{e:?}"))? 
+ .unwrap_or(Amount(0)) + .0; + stakes.insert(set.network, stake); + total_stake += stake; + } + if total_stake == 0 { + Err(format!("cosigning sets for block #{block_number} had 0 stake in total"))?; + } + + GlobalSessions::set( + &mut txn, + global_session, + &(GlobalSession { start_block_number: block_number, keys, stakes, total_stake }), + ); if let Some(ending_global_session) = LatestGlobalSessionIntended::get(&txn) { - GlobalSessionLastBlock::set(&mut txn, ending_global_session, &block_number); + GlobalSessionsLastBlock::set(&mut txn, ending_global_session, &block_number); } LatestGlobalSessionIntended::set(&mut txn, &global_session); } - // If this block doesn't have any cosigners, meaning it'll never be cosigned, we flag it - // as not having any events requiring cosigning so we don't attempt to sign/require a - // cosign for it - if sets.is_empty() { - has_events = HasEvents::No; - } else { - let global_session = GlobalSession::new(sets.clone()).id(); + if has_events != HasEvents::No { + let global_session = GlobalSession::id(sets.clone()); // Tell each set of their expectation to cosign this block for set in sets { log::debug!("{:?} will be cosigning block #{block_number}", set); diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index d0abee08..baeee8ea 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -3,15 +3,16 @@ #![deny(missing_docs)] use core::{fmt::Debug, future::Future}; +use std::collections::HashMap; use blake2::{Digest, Blake2s256}; use borsh::{BorshSerialize, BorshDeserialize}; use serai_client::{ - primitives::{Amount, NetworkId, SeraiAddress}, + primitives::{NetworkId, SeraiAddress}, validator_sets::primitives::{Session, ValidatorSet, KeyPair}, - Block, Serai, TemporalSerai, + Public, Block, Serai, TemporalSerai, }; use serai_db::*; @@ -45,29 +46,36 @@ pub const COSIGN_CONTEXT: &[u8] = b"serai-cosign"; cosigning protocol. */ #[derive(Debug, BorshSerialize, BorshDeserialize)] -struct GlobalSession { - cosigners: Vec, +pub(crate) struct GlobalSession { + pub(crate) start_block_number: u64, + pub(crate) keys: HashMap, + pub(crate) stakes: HashMap, + pub(crate) total_stake: u64, } impl GlobalSession { - fn new(mut cosigners: Vec) -> Self { + fn id(mut cosigners: Vec) -> [u8; 32] { cosigners.sort_by_key(|a| borsh::to_vec(a).unwrap()); - Self { cosigners } - } - fn id(&self) -> [u8; 32] { - Blake2s256::digest(borsh::to_vec(self).unwrap()).into() + Blake2s256::digest(borsh::to_vec(&cosigners).unwrap()).into() } } create_db! { Cosign { - // A mapping from a global session's ID to its start block (number, hash). - GlobalSessions: (global_session: [u8; 32]) -> (u64, [u8; 32]), + // The following are populated by the intend task and used throughout the library + + // An index of Substrate blocks + SubstrateBlocks: (block_number: u64) -> [u8; 32], + // A mapping from a global session's ID to its relevant information. + GlobalSessions: (global_session: [u8; 32]) -> GlobalSession, // The last block to be cosigned by a global session. - GlobalSessionLastBlock: (global_session: [u8; 32]) -> u64, + GlobalSessionsLastBlock: (global_session: [u8; 32]) -> u64, // The latest global session intended. // // This is distinct from the latest global session for which we've evaluated the cosigns for. LatestGlobalSessionIntended: () -> [u8; 32], + + // The following are managed by the `intake_cosign` function present in this file + // The latest cosigned block for each network. 
// // This will only be populated with cosigns predating or during the most recent global session @@ -189,21 +197,22 @@ async fn keys_for_network( Ok(None) } -/// Fetch the `ValidatorSet`s used for cosigning as of this block. -async fn cosigning_sets(serai: &TemporalSerai<'_>) -> Result, String> { +/// Fetch the `ValidatorSet`s, and their associated keys, used for cosigning as of this block. +async fn cosigning_sets(serai: &TemporalSerai<'_>) -> Result, String> { let mut sets = Vec::with_capacity(serai_client::primitives::NETWORKS.len()); for network in serai_client::primitives::NETWORKS { - let Some((session, _)) = keys_for_network(serai, network).await? else { + let Some((session, keys)) = keys_for_network(serai, network).await? else { // If this network doesn't have usable keys, move on continue; }; - sets.push(ValidatorSet { network, session }); + sets.push((ValidatorSet { network, session }, keys.0)); } Ok(sets) } -/// Fetch the `ValidatorSet`s used for cosigning a block by the block's parent hash. +/// Fetch the `ValidatorSet`s, and their associated keys, used for cosigning a block by the block's +/// parent hash. async fn cosigning_sets_by_parent_hash( serai: &Serai, parent_hash: [u8; 32], @@ -213,10 +222,11 @@ async fn cosigning_sets_by_parent_hash( update the sets declared but that update shouldn't take effect until block `n` is cosigned). That's why fetching the cosigning sets for a block by its parent hash is valid. */ - cosigning_sets(&serai.as_of(parent_hash)).await + let sets = cosigning_sets(&serai.as_of(parent_hash)).await?; + Ok(sets.into_iter().map(|(set, _key)| set).collect::>()) } -/// Fetch the `ValidatorSet`s used for cosigning this block. +/// Fetch the `ValidatorSet`s, and their associated keys, used for cosigning this block. async fn cosigning_sets_for_block( serai: &Serai, block: &Block, @@ -242,7 +252,6 @@ pub struct Faulted; /// The interface to manage cosigning with. pub struct Cosigning { db: D, - serai: Serai, } impl Cosigning { /// Spawn the tasks to intend and evaluate cosigns. @@ -262,10 +271,10 @@ impl Cosigning { .continually_run(intend_task, vec![evaluator_task_handle]), ); tokio::spawn( - (evaluator::CosignEvaluatorTask { db: db.clone(), serai: serai.clone(), request }) + (evaluator::CosignEvaluatorTask { db: db.clone(), serai, request }) .continually_run(evaluator_task, tasks_to_run_upon_cosigning), ); - Self { db, serai } + Self { db } } /// The latest cosigned block number. @@ -338,7 +347,7 @@ impl Cosigning { // // Takes `&mut self` as this should only be called once at any given moment. 
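With each session's keys and stakes snapshotted into `GlobalSessions` at intend time, resolving a cosigner's key inside `intake_cosign` becomes a pure database lookup, with no Serai node involved. A sketch of that resolution, condensed from the match in the hunk below:

  // Validator sets use the session's snapshotted key for their network, while
  // individual validators sign under their own address.
  fn signer_key(session: &GlobalSession, cosigner: Cosigner) -> Option<Public> {
    Some(Public::from(match cosigner {
      Cosigner::ValidatorSet(network) => *session.keys.get(&network)?,
      Cosigner::Validator(signer) => signer,
    }))
  }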
// TODO: Don't overload bool here - pub async fn intake_cosign(&mut self, signed_cosign: SignedCosign) -> Result { + pub fn intake_cosign(&mut self, signed_cosign: &SignedCosign) -> Result { let cosign = &signed_cosign.cosign; let Cosigner::ValidatorSet(network) = cosign.cosigner else { @@ -356,41 +365,37 @@ impl Cosigning { } } - // Check our finalized (and indexed by intend) blockchain exceeds this block number - if cosign.block_number >= intend::ScanCosignFrom::get(&self.db).unwrap_or(0) { + // Check our indexed blockchain includes a block with this block number + let Some(our_block_hash) = SubstrateBlocks::get(&self.db, cosign.block_number) else { return Ok(true); - } + }; - let Some((global_session_start_block_number, global_session_start_block_hash)) = - GlobalSessions::get(&self.db, cosign.global_session) - else { + let Some(global_session) = GlobalSessions::get(&self.db, cosign.global_session) else { // Unrecognized global session return Ok(true); }; - if cosign.block_number <= global_session_start_block_number { + if cosign.block_number <= global_session.start_block_number { // Cosign is for a block predating the global session return Ok(false); } - if Some(cosign.block_number) > GlobalSessionLastBlock::get(&self.db, cosign.global_session) { - // Cosign is for a block after the last block this global session should have signed - return Ok(false); + if let Some(last_block) = GlobalSessionsLastBlock::get(&self.db, cosign.global_session) { + if cosign.block_number > last_block { + // Cosign is for a block after the last block this global session should have signed + return Ok(false); + } } // Check the cosign's signature { - let key = match cosign.cosigner { + let key = Public::from(match cosign.cosigner { Cosigner::ValidatorSet(network) => { - // TODO: Cache this - let Some((_session, keys)) = - keys_for_network(&self.serai.as_of(global_session_start_block_hash), network).await? - else { + let Some(key) = global_session.keys.get(&network) else { return Ok(false); }; - - keys.0 + *key } - Cosigner::Validator(signer) => signer.into(), - }; + Cosigner::Validator(signer) => signer, + }); if !signed_cosign.verify_signature(key) { return Ok(false); @@ -402,20 +407,14 @@ impl Cosigning { let mut txn = self.db.txn(); - let our_block_hash = self - .serai - .block_hash(cosign.block_number) - .await - .map_err(|e| format!("{e:?}"))? - .ok_or_else(|| "requested hash of a finalized block yet received None".to_string())?; if our_block_hash == cosign.block_hash { // If this is for a future global session, we don't acknowledge this cosign at this time - if global_session_start_block_number > LatestCosignedBlockNumber::get(&txn).unwrap_or(0) { + if global_session.start_block_number > LatestCosignedBlockNumber::get(&txn).unwrap_or(0) { drop(txn); return Ok(true); } - NetworksLatestCosignedBlock::set(&mut txn, cosign.global_session, network, &signed_cosign); + NetworksLatestCosignedBlock::set(&mut txn, cosign.global_session, network, signed_cosign); } else { let mut faults = Faults::get(&txn, cosign.global_session).unwrap_or(vec![]); // Only handle this as a fault if this set wasn't prior faulty @@ -424,31 +423,19 @@ impl Cosigning { Faults::set(&mut txn, cosign.global_session, &faults); let mut weight_cosigned = 0; - let mut total_weight = 0; - for set in cosigning_sets(&self.serai.as_of(global_session_start_block_hash)).await? { - let stake = self - .serai - .as_of(global_session_start_block_hash) - .validator_sets() - .total_allocated_stake(set.network) - .await - .map_err(|e| format!("{e:?}"))? 
- .unwrap_or(Amount(0)) - .0; - // Increment total_weight with this set's stake - total_weight += stake; - - // Check if this set cosigned this block or not - if faults - .iter() - .any(|cosign| cosign.cosign.cosigner == Cosigner::ValidatorSet(set.network)) - { - weight_cosigned += total_weight - } + for fault in &faults { + let Cosigner::ValidatorSet(network) = fault.cosign.cosigner else { + // TODO when we implement the non-ValidatorSet cosigner protocol + Err("non-ValidatorSet cosigner had a fault".to_string())? + }; + let Some(stake) = global_session.stakes.get(&network) else { + Err("cosigner with recognized key didn't have a stake entry saved".to_string())? + }; + weight_cosigned += stake; } // Check if the sum weight means a fault has occurred - if weight_cosigned >= ((total_weight * 17) / 100) { + if weight_cosigned >= ((global_session.total_stake * 17) / 100) { FaultedSession::set(&mut txn, &cosign.global_session); } } From 2aebfb21af5036828c4b10d10b01e342d3f5bdf6 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 25 Dec 2024 23:21:25 -0500 Subject: [PATCH 213/368] Remove serai from the cosign evaluator --- common/db/src/create_db.rs | 13 +++ coordinator/cosign/src/evaluator.rs | 90 +++++++++-------- coordinator/cosign/src/intend.rs | 145 ++++++++++++++-------------- coordinator/cosign/src/lib.rs | 74 ++++---------- 4 files changed, 146 insertions(+), 176 deletions(-) diff --git a/common/db/src/create_db.rs b/common/db/src/create_db.rs index 50fe51f7..c2917e58 100644 --- a/common/db/src/create_db.rs +++ b/common/db/src/create_db.rs @@ -143,6 +143,19 @@ macro_rules! db_channel { Self::set(txn, $($arg,)* index_to_use, value); } + pub(crate) fn peek( + txn: &mut impl DbTxn + $(, $arg: $arg_type)* + ) -> Option<$field_type> { + let messages_recvd_key = Self::key($($arg,)* 1); + let messages_recvd = txn.get(&messages_recvd_key).map(|counter| { + u32::from_le_bytes(counter.try_into().unwrap()) + }).unwrap_or(0); + + let index_to_read = messages_recvd + 2; + + Self::get(txn, $($arg,)* index_to_read) + } pub(crate) fn try_recv( txn: &mut impl DbTxn $(, $arg: $arg_type)* diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs index 06a3019f..64094f0a 100644 --- a/coordinator/cosign/src/evaluator.rs +++ b/coordinator/cosign/src/evaluator.rs @@ -1,13 +1,11 @@ use core::future::Future; -use serai_client::Serai; - use serai_db::*; use serai_task::ContinuallyRan; use crate::{ - *, - intend::{BlockEventData, BlockEvents}, + HasEvents, GlobalSession, NetworksLatestCosignedBlock, RequestNotableCosigns, + intend::{GlobalSessionsChannel, BlockEventData, BlockEvents}, }; create_db!( @@ -15,34 +13,51 @@ create_db!( // The latest cosigned block number. LatestCosignedBlockNumber: () -> u64, // The latest global session evaluated. - LatestGlobalSessionEvaluated: () -> ([u8; 32], Vec), + LatestGlobalSessionEvaluated: () -> ([u8; 32], GlobalSession), } ); /// A task to determine if a block has been cosigned and we should handle it. 
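 //
 // This commit leans on the `peek` primitive added to `db_channel!` above: `peek` returns
 // the next unreceived message without bumping the receive counter, while `try_recv`
 // consumes it. A rough usage sketch, mirroring `get_latest_global_session_evaluated`
 // below:
 //
 //   if let Some(next) = GlobalSessionsChannel::peek(txn) {
 //     // Inspect the pending session without consuming it
 //     if next.1.start_block_number == block_number {
 //       // Its start block was reached, so now actually consume it
 //       GlobalSessionsChannel::try_recv(txn).unwrap();
 //     }
 //   }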
-// TODO: Remove `serai` from this pub(crate) struct CosignEvaluatorTask { pub(crate) db: D, - pub(crate) serai: Serai, pub(crate) request: R, } -async fn get_latest_global_session_evaluated( +fn get_latest_global_session_evaluated( txn: &mut impl DbTxn, - serai: &Serai, - parent_hash: [u8; 32], -) -> Result<([u8; 32], Vec), String> { - Ok(match LatestGlobalSessionEvaluated::get(txn) { - Some(res) => res, - None => { - // This is the initial global session - // Fetch the sets participating and declare it the latest value recognized - let sets = cosigning_sets_by_parent_hash(serai, parent_hash).await?; - let initial_global_session = GlobalSession::id(sets.clone()); - LatestGlobalSessionEvaluated::set(txn, &(initial_global_session, sets.clone())); - (initial_global_session, sets) + block_number: u64, +) -> ([u8; 32], GlobalSession) { + let mut res = { + let existing = match LatestGlobalSessionEvaluated::get(txn) { + Some(existing) => existing, + None => { + let first = GlobalSessionsChannel::try_recv(txn) + .expect("fetching latest global session yet none declared"); + LatestGlobalSessionEvaluated::set(txn, &first); + first + } + }; + assert!( + existing.1.start_block_number <= block_number, + "candidate's start block number exceeds our block number" + ); + existing + }; + + if let Some(next) = GlobalSessionsChannel::peek(txn) { + assert!( + block_number <= next.1.start_block_number, + "get_latest_global_session_evaluated wasn't called incrementally" + ); + // If it's time for this session to activate, take it from the channel and set it + if block_number == next.1.start_block_number { + GlobalSessionsChannel::try_recv(txn).unwrap(); + LatestGlobalSessionEvaluated::set(txn, &next); + res = next; } - }) + } + + res } impl ContinuallyRan for CosignEvaluatorTask { @@ -54,8 +69,7 @@ impl ContinuallyRan for CosignEvaluatorTask ContinuallyRan for CosignEvaluatorTask { - let (global_session, sets) = - get_latest_global_session_evaluated(&mut txn, &self.serai, parent_hash).await?; + let (global_session, global_session_info) = + get_latest_global_session_evaluated(&mut txn, block_number); let mut weight_cosigned = 0; - let global_session_info = - GlobalSessions::get(&txn, global_session).ok_or_else(|| { - "checking if intended cosign was satisfied within an unrecognized global session" - .to_string() - })?; - for set in sets { + for set in global_session_info.sets { // Check if we have the cosign from this set if NetworksLatestCosignedBlock::get(&txn, global_session, set.network) .map(|signed_cosign| signed_cosign.cosign.block_number) == @@ -105,14 +114,6 @@ impl ContinuallyRan for CosignEvaluatorTask>(); - let global_session = GlobalSession::id(sets.clone()); - LatestGlobalSessionEvaluated::set(&mut txn, &(global_session, sets)); - } } // Since this block didn't have any notable events, we simply require a cosign for this // block or a greater block by the current validator sets @@ -135,17 +136,12 @@ impl ContinuallyRan for CosignEvaluatorTask = None; - for set in sets { + for set in global_session_info.sets { // Check if this set cosigned this block or not let Some(cosign) = NetworksLatestCosignedBlock::get(&txn, global_session, set.network) diff --git a/coordinator/cosign/src/intend.rs b/coordinator/cosign/src/intend.rs index 03c9d5c4..7466ae5a 100644 --- a/coordinator/cosign/src/intend.rs +++ b/coordinator/cosign/src/intend.rs @@ -21,13 +21,12 @@ create_db!( #[derive(Debug, BorshSerialize, BorshDeserialize)] pub(crate) struct BlockEventData { pub(crate) block_number: u64, - pub(crate) 
parent_hash: [u8; 32], - pub(crate) block_hash: [u8; 32], pub(crate) has_events: HasEvents, } db_channel! { CosignIntendChannels { + GlobalSessionsChannel: () -> ([u8; 32], GlobalSession), BlockEvents: () -> BlockEventData, IntendedCosigns: (set: ValidatorSet) -> CosignIntent, } @@ -87,88 +86,90 @@ impl ContinuallyRan for CosignIntendTask { block_number - 1 ))?; } - SubstrateBlocks::set(&mut txn, block_number, &block.hash()); + let global_session_for_this_block = LatestGlobalSessionIntended::get(&txn); + + // If this is notable, it creates a new global session, which we index into the database + // now + if has_events == HasEvents::Notable { + let serai = self.serai.as_of(block.hash()); + let sets_and_keys = cosigning_sets(&serai).await?; + let global_session = + GlobalSession::id(sets_and_keys.iter().map(|(set, _key)| *set).collect()); + + let mut sets = Vec::with_capacity(sets_and_keys.len()); + let mut keys = HashMap::with_capacity(sets_and_keys.len()); + let mut stakes = HashMap::with_capacity(sets_and_keys.len()); + let mut total_stake = 0; + for (set, key) in &sets_and_keys { + sets.push(*set); + keys.insert(set.network, SeraiAddress::from(*key)); + let stake = serai + .validator_sets() + .total_allocated_stake(set.network) + .await + .map_err(|e| format!("{e:?}"))? + .unwrap_or(Amount(0)) + .0; + stakes.insert(set.network, stake); + total_stake += stake; + } + if total_stake == 0 { + Err(format!("cosigning sets for block #{block_number} had 0 stake in total"))?; + } + + let global_session_info = GlobalSession { + // This session starts cosigning after this block, as this block must be cosigned by + // the existing validators + start_block_number: block_number + 1, + sets, + keys, + stakes, + total_stake, + }; + GlobalSessions::set(&mut txn, global_session, &global_session_info); + if let Some(ending_global_session) = global_session_for_this_block { + GlobalSessionsLastBlock::set(&mut txn, ending_global_session, &block_number); + } + LatestGlobalSessionIntended::set(&mut txn, &global_session); + GlobalSessionsChannel::send(&mut txn, &(global_session, global_session_info)); + } + + // If there isn't anyone available to cosign this block, meaning it'll never be cosigned, + // we flag it as not having any events requiring cosigning so we don't attempt to + // sign/require a cosign for it + if global_session_for_this_block.is_none() { + has_events = HasEvents::No; + } + match has_events { HasEvents::Notable | HasEvents::NonNotable => { - // TODO: Replace with LatestGlobalSessionIntended, GlobalSessions - let sets = cosigning_sets_for_block(&self.serai, &block).await?; + let global_session_for_this_block = global_session_for_this_block + .expect("global session for this block was None but still attempting to cosign it"); + let global_session_info = GlobalSessions::get(&txn, global_session_for_this_block) + .expect("last global session intended wasn't saved to the database"); - // If this block doesn't have any cosigners, meaning it'll never be cosigned, we flag - // it as not having any events requiring cosigning so we don't attempt to sign/require - // a cosign for it - if sets.is_empty() { - has_events = HasEvents::No; - } - - // If this is notable, it creates a new global session, which we index into the - // database now - if has_events == HasEvents::Notable { - let serai = self.serai.as_of(block.hash()); - let sets = cosigning_sets(&serai).await?; - let global_session = GlobalSession::id(sets.iter().map(|(set, _key)| *set).collect()); - - let mut keys = HashMap::new(); - let mut 
stakes = HashMap::new(); - let mut total_stake = 0; - for (set, key) in &sets { - keys.insert(set.network, SeraiAddress::from(*key)); - let stake = serai - .validator_sets() - .total_allocated_stake(set.network) - .await - .map_err(|e| format!("{e:?}"))? - .unwrap_or(Amount(0)) - .0; - stakes.insert(set.network, stake); - total_stake += stake; - } - if total_stake == 0 { - Err(format!("cosigning sets for block #{block_number} had 0 stake in total"))?; - } - - GlobalSessions::set( + // Tell each set of their expectation to cosign this block + for set in global_session_info.sets { + log::debug!("{:?} will be cosigning block #{block_number}", set); + IntendedCosigns::send( &mut txn, - global_session, - &(GlobalSession { start_block_number: block_number, keys, stakes, total_stake }), + set, + &CosignIntent { + global_session: global_session_for_this_block, + block_number, + block_hash: block.hash(), + notable: has_events == HasEvents::Notable, + }, ); - if let Some(ending_global_session) = LatestGlobalSessionIntended::get(&txn) { - GlobalSessionsLastBlock::set(&mut txn, ending_global_session, &block_number); - } - LatestGlobalSessionIntended::set(&mut txn, &global_session); - } - - if has_events != HasEvents::No { - let global_session = GlobalSession::id(sets.clone()); - // Tell each set of their expectation to cosign this block - for set in sets { - log::debug!("{:?} will be cosigning block #{block_number}", set); - IntendedCosigns::send( - &mut txn, - set, - &CosignIntent { - global_session, - block_number, - block_hash: block.hash(), - notable: has_events == HasEvents::Notable, - }, - ); - } } } HasEvents::No => {} } + // Populate a singular feed with every block's status for the evluator to work off of - BlockEvents::send( - &mut txn, - &(BlockEventData { - block_number, - parent_hash: block.header.parent_hash.into(), - block_hash: block.hash(), - has_events, - }), - ); + BlockEvents::send(&mut txn, &(BlockEventData { block_number, has_events })); // Mark this block as handled, meaning we should scan from the next block moving on ScanCosignFrom::set(&mut txn, &(block_number + 1)); txn.commit(); diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index baeee8ea..b6f163ab 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -48,6 +48,7 @@ pub const COSIGN_CONTEXT: &[u8] = b"serai-cosign"; #[derive(Debug, BorshSerialize, BorshDeserialize)] pub(crate) struct GlobalSession { pub(crate) start_block_number: u64, + pub(crate) sets: Vec, pub(crate) keys: HashMap, pub(crate) stakes: HashMap, pub(crate) total_stake: u64, @@ -120,15 +121,6 @@ struct CosignIntent { notable: bool, } -/// The identification of a cosigner. -#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] -pub enum Cosigner { - /// The network which produced this cosign. - ValidatorSet(NetworkId), - /// The individual validator which produced this cosign. - Validator(SeraiAddress), -} - /// A cosign. #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub struct Cosign { @@ -139,7 +131,7 @@ pub struct Cosign { /// The hash of the block to cosign. pub block_hash: [u8; 32], /// The actual cosigner. - pub cosigner: Cosigner, + pub cosigner: NetworkId, } /// A signed cosign. 
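
As a worked example of the stake bookkeeping introduced above (with illustrative
numbers, not values from the source), the integer fault threshold this file computes
as `(total_stake * 17) / 100` behaves as follows:

    let total_stake: u64 = 600;
    let threshold = (total_stake * 17) / 100; // 102
    // Two faulty sets staking 51 each sum to 102 >= 102, crossing the threshold
    assert!(51u64 + 51 >= threshold);
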
@@ -211,29 +203,6 @@ async fn cosigning_sets(serai: &TemporalSerai<'_>) -> Result Result, String> { - /* - If we're cosigning block `n`, it's cosigned by the sets as of block `n-1` (as block `n` may - update the sets declared but that update shouldn't take effect until block `n` is cosigned). - That's why fetching the cosigning sets for a block by its parent hash is valid. - */ - let sets = cosigning_sets(&serai.as_of(parent_hash)).await?; - Ok(sets.into_iter().map(|(set, _key)| set).collect::>()) -} - -/// Fetch the `ValidatorSet`s, and their associated keys, used for cosigning this block. -async fn cosigning_sets_for_block( - serai: &Serai, - block: &Block, -) -> Result, String> { - cosigning_sets_by_parent_hash(serai, block.header.parent_hash.into()).await -} - /// An object usable to request notable cosigns for a block. pub trait RequestNotableCosigns: 'static + Send { /// The error type which may be encountered when requesting notable cosigns. @@ -267,11 +236,11 @@ impl Cosigning { let (intend_task, _intend_task_handle) = Task::new(); let (evaluator_task, evaluator_task_handle) = Task::new(); tokio::spawn( - (intend::CosignIntendTask { db: db.clone(), serai: serai.clone() }) + (intend::CosignIntendTask { db: db.clone(), serai }) .continually_run(intend_task, vec![evaluator_task_handle]), ); tokio::spawn( - (evaluator::CosignEvaluatorTask { db: db.clone(), serai, request }) + (evaluator::CosignEvaluatorTask { db: db.clone(), request }) .continually_run(evaluator_task, tasks_to_run_upon_cosigning), ); Self { db } @@ -349,12 +318,7 @@ impl Cosigning { // TODO: Don't overload bool here pub fn intake_cosign(&mut self, signed_cosign: &SignedCosign) -> Result { let cosign = &signed_cosign.cosign; - - let Cosigner::ValidatorSet(network) = cosign.cosigner else { - // TODO - // Individually signed cosign despite that protocol not being implemented - return Ok(false); - }; + let network = cosign.cosigner; // Check this isn't a dated cosign if let Some(existing) = @@ -374,7 +338,7 @@ impl Cosigning { // Unrecognized global session return Ok(true); }; - if cosign.block_number <= global_session.start_block_number { + if cosign.block_number < global_session.start_block_number { // Cosign is for a block predating the global session return Ok(false); } @@ -387,14 +351,11 @@ impl Cosigning { // Check the cosign's signature { - let key = Public::from(match cosign.cosigner { - Cosigner::ValidatorSet(network) => { - let Some(key) = global_session.keys.get(&network) else { - return Ok(false); - }; - *key - } - Cosigner::Validator(signer) => signer, + let key = Public::from({ + let Some(key) = global_session.keys.get(&network) else { + return Ok(false); + }; + *key }); if !signed_cosign.verify_signature(key) { @@ -409,7 +370,10 @@ impl Cosigning { if our_block_hash == cosign.block_hash { // If this is for a future global session, we don't acknowledge this cosign at this time - if global_session.start_block_number > LatestCosignedBlockNumber::get(&txn).unwrap_or(0) { + let latest_cosigned_block_number = LatestCosignedBlockNumber::get(&txn).unwrap_or(0); + // This global session starts the block *after* its declaration, so we want to check if the + // block declaring it was cosigned + if (global_session.start_block_number - 1) > latest_cosigned_block_number { drop(txn); return Ok(true); } @@ -418,17 +382,13 @@ impl Cosigning { } else { let mut faults = Faults::get(&txn, cosign.global_session).unwrap_or(vec![]); // Only handle this as a fault if this set wasn't prior faulty - if !faults.iter().any(|cosign| 
cosign.cosign.cosigner == Cosigner::ValidatorSet(network)) { + if !faults.iter().any(|cosign| cosign.cosign.cosigner == network) { faults.push(signed_cosign.clone()); Faults::set(&mut txn, cosign.global_session, &faults); let mut weight_cosigned = 0; for fault in &faults { - let Cosigner::ValidatorSet(network) = fault.cosign.cosigner else { - // TODO when we implement the non-ValidatorSet cosigner protocol - Err("non-ValidatorSet cosigner had a fault".to_string())? - }; - let Some(stake) = global_session.stakes.get(&network) else { + let Some(stake) = global_session.stakes.get(&fault.cosign.cosigner) else { Err("cosigner with recognized key didn't have a stake entry saved".to_string())? }; weight_cosigned += stake; From f336ab1ece470a2e6b90382eb70d9a54de1668fb Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 25 Dec 2024 23:51:24 -0500 Subject: [PATCH 214/368] Remove GlobalSessions DB entry If we read the currently-being-evaluated session from the evaluator, we can avoid paying the storage costs on all sessions ad-infinitum. --- common/db/src/create_db.rs | 6 +-- coordinator/cosign/src/evaluator.rs | 66 ++++++++++++++++++++--------- coordinator/cosign/src/intend.rs | 19 +++++---- coordinator/cosign/src/lib.rs | 41 +++++++++--------- 4 files changed, 79 insertions(+), 53 deletions(-) diff --git a/common/db/src/create_db.rs b/common/db/src/create_db.rs index c2917e58..f5bd6e91 100644 --- a/common/db/src/create_db.rs +++ b/common/db/src/create_db.rs @@ -144,17 +144,17 @@ macro_rules! db_channel { Self::set(txn, $($arg,)* index_to_use, value); } pub(crate) fn peek( - txn: &mut impl DbTxn + getter: &impl Get $(, $arg: $arg_type)* ) -> Option<$field_type> { let messages_recvd_key = Self::key($($arg,)* 1); - let messages_recvd = txn.get(&messages_recvd_key).map(|counter| { + let messages_recvd = getter.get(&messages_recvd_key).map(|counter| { u32::from_le_bytes(counter.try_into().unwrap()) }).unwrap_or(0); let index_to_read = messages_recvd + 2; - Self::get(txn, $($arg,)* index_to_read) + Self::get(getter, $($arg,)* index_to_read) } pub(crate) fn try_recv( txn: &mut impl DbTxn diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs index 64094f0a..8f72b536 100644 --- a/coordinator/cosign/src/evaluator.rs +++ b/coordinator/cosign/src/evaluator.rs @@ -5,35 +5,38 @@ use serai_task::ContinuallyRan; use crate::{ HasEvents, GlobalSession, NetworksLatestCosignedBlock, RequestNotableCosigns, - intend::{GlobalSessionsChannel, BlockEventData, BlockEvents}, + intend::{GlobalSessions, BlockEventData, BlockEvents}, }; create_db!( SubstrateCosignEvaluator { // The latest cosigned block number. LatestCosignedBlockNumber: () -> u64, - // The latest global session evaluated. - LatestGlobalSessionEvaluated: () -> ([u8; 32], GlobalSession), + // The global session currently being evaluated. + CurrentlyEvaluatedGlobalSession: () -> ([u8; 32], GlobalSession), } ); -/// A task to determine if a block has been cosigned and we should handle it. 
-pub(crate) struct CosignEvaluatorTask { - pub(crate) db: D, - pub(crate) request: R, -} - -fn get_latest_global_session_evaluated( +// This is a strict function which won't panic, even with a malicious Serai node, so long as: +// - It's called incrementally +// - It's only called for block numbers we've completed indexing on within the intend task +// - It's only called for block numbers after a global session has started +// - The global sessions channel is populated as the block declaring the session is indexed +// Which all hold true within the context of this task and the intend task. +// +// This function will also ensure the currently evaluated global session is incremented once we +// finish evaluation of the prior session. +fn currently_evaluated_global_session_strict( txn: &mut impl DbTxn, block_number: u64, ) -> ([u8; 32], GlobalSession) { let mut res = { - let existing = match LatestGlobalSessionEvaluated::get(txn) { + let existing = match CurrentlyEvaluatedGlobalSession::get(txn) { Some(existing) => existing, None => { - let first = GlobalSessionsChannel::try_recv(txn) - .expect("fetching latest global session yet none declared"); - LatestGlobalSessionEvaluated::set(txn, &first); + let first = + GlobalSessions::try_recv(txn).expect("fetching latest global session yet none declared"); + CurrentlyEvaluatedGlobalSession::set(txn, &first); first } }; @@ -44,15 +47,15 @@ fn get_latest_global_session_evaluated( existing }; - if let Some(next) = GlobalSessionsChannel::peek(txn) { + if let Some(next) = GlobalSessions::peek(txn) { assert!( block_number <= next.1.start_block_number, - "get_latest_global_session_evaluated wasn't called incrementally" + "currently_evaluated_global_session_strict wasn't called incrementally" ); // If it's time for this session to activate, take it from the channel and set it if block_number == next.1.start_block_number { - GlobalSessionsChannel::try_recv(txn).unwrap(); - LatestGlobalSessionEvaluated::set(txn, &next); + GlobalSessions::try_recv(txn).unwrap(); + CurrentlyEvaluatedGlobalSession::set(txn, &next); res = next; } } @@ -60,6 +63,29 @@ fn get_latest_global_session_evaluated( res } +// This is a non-strict function which won't panic, and also won't increment the session as needed. +pub(crate) fn currently_evaluated_global_session( + getter: &impl Get, +) -> Option<([u8; 32], GlobalSession)> { + // If there's a next session... + if let Some(next_global_session) = GlobalSessions::peek(getter) { + // and we've already evaluated the cosigns for the block declaring it... + if LatestCosignedBlockNumber::get(getter) == Some(next_global_session.1.start_block_number - 1) + { + // return it as the current session. + return Some(next_global_session); + } + } + // Else, return the current session + CurrentlyEvaluatedGlobalSession::get(getter) +} + +/// A task to determine if a block has been cosigned and we should handle it. 
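 //
 // Rationale for the split above: the strict variant drives evaluation and advances
 // `CurrentlyEvaluatedGlobalSession` once a pending session's start block is reached,
 // while the non-strict variant only reads. Its one-session look-ahead exists because,
 // once block `start_block_number - 1` (the block declaring the next session) has been
 // cosigned, the next session's cosigns become the ones worth intaking.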
+pub(crate) struct CosignEvaluatorTask { + pub(crate) db: D, + pub(crate) request: R, +} + impl ContinuallyRan for CosignEvaluatorTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { @@ -84,7 +110,7 @@ impl ContinuallyRan for CosignEvaluatorTask { let (global_session, global_session_info) = - get_latest_global_session_evaluated(&mut txn, block_number); + currently_evaluated_global_session_strict(&mut txn, block_number); let mut weight_cosigned = 0; for set in global_session_info.sets { @@ -137,7 +163,7 @@ impl ContinuallyRan for CosignEvaluatorTask = None; diff --git a/coordinator/cosign/src/intend.rs b/coordinator/cosign/src/intend.rs index 7466ae5a..38d8b9a8 100644 --- a/coordinator/cosign/src/intend.rs +++ b/coordinator/cosign/src/intend.rs @@ -26,7 +26,7 @@ pub(crate) struct BlockEventData { db_channel! { CosignIntendChannels { - GlobalSessionsChannel: () -> ([u8; 32], GlobalSession), + GlobalSessions: () -> ([u8; 32], GlobalSession), BlockEvents: () -> BlockEventData, IntendedCosigns: (set: ValidatorSet) -> CosignIntent, } @@ -128,12 +128,14 @@ impl ContinuallyRan for CosignIntendTask { stakes, total_stake, }; - GlobalSessions::set(&mut txn, global_session, &global_session_info); - if let Some(ending_global_session) = global_session_for_this_block { + if let Some((ending_global_session, _ending_global_session_info)) = global_session_for_this_block { GlobalSessionsLastBlock::set(&mut txn, ending_global_session, &block_number); } - LatestGlobalSessionIntended::set(&mut txn, &global_session); - GlobalSessionsChannel::send(&mut txn, &(global_session, global_session_info)); + LatestGlobalSessionIntended::set( + &mut txn, + &(global_session, global_session_info.clone()), + ); + GlobalSessions::send(&mut txn, &(global_session, global_session_info)); } // If there isn't anyone available to cosign this block, meaning it'll never be cosigned, @@ -145,10 +147,9 @@ impl ContinuallyRan for CosignIntendTask { match has_events { HasEvents::Notable | HasEvents::NonNotable => { - let global_session_for_this_block = global_session_for_this_block - .expect("global session for this block was None but still attempting to cosign it"); - let global_session_info = GlobalSessions::get(&txn, global_session_for_this_block) - .expect("last global session intended wasn't saved to the database"); + let (global_session_for_this_block, global_session_info) = + global_session_for_this_block + .expect("global session for this block was None but still attempting to cosign it"); // Tell each set of their expectation to cosign this block for set in global_session_info.sets { diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index b6f163ab..3ef802e6 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -45,7 +45,7 @@ pub const COSIGN_CONTEXT: &[u8] = b"serai-cosign"; have validator sets follow two distinct global sessions without breaking the bounds of the cosigning protocol. */ -#[derive(Debug, BorshSerialize, BorshDeserialize)] +#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] pub(crate) struct GlobalSession { pub(crate) start_block_number: u64, pub(crate) sets: Vec, @@ -66,14 +66,12 @@ create_db! { // An index of Substrate blocks SubstrateBlocks: (block_number: u64) -> [u8; 32], - // A mapping from a global session's ID to its relevant information. - GlobalSessions: (global_session: [u8; 32]) -> GlobalSession, // The last block to be cosigned by a global session. 
GlobalSessionsLastBlock: (global_session: [u8; 32]) -> u64, // The latest global session intended. // // This is distinct from the latest global session for which we've evaluated the cosigns for. - LatestGlobalSessionIntended: () -> [u8; 32], + LatestGlobalSessionIntended: () -> ([u8; 32], GlobalSession), // The following are managed by the `intake_cosign` function present in this file @@ -287,7 +285,9 @@ impl Cosigning { } cosigns } else { - let Some(latest_global_session) = LatestGlobalSessionIntended::get(&self.db) else { + let Some((latest_global_session, _latest_global_session_info)) = + LatestGlobalSessionIntended::get(&self.db) + else { return vec![]; }; let mut cosigns = Vec::with_capacity(serai_client::primitives::NETWORKS.len()); @@ -320,7 +320,7 @@ impl Cosigning { let cosign = &signed_cosign.cosign; let network = cosign.cosigner; - // Check this isn't a dated cosign + // Check this isn't a dated cosign within its global session (as it would be if rebroadcasted) if let Some(existing) = NetworksLatestCosignedBlock::get(&self.db, cosign.global_session, network) { @@ -334,11 +334,19 @@ impl Cosigning { return Ok(true); }; - let Some(global_session) = GlobalSessions::get(&self.db, cosign.global_session) else { - // Unrecognized global session + // Check the cosign aligns with the global session we're currently working on + let Some((global_session, global_session_info)) = + evaluator::currently_evaluated_global_session(&self.db) + else { + // We haven't recognized any global sessions yet return Ok(true); }; - if cosign.block_number < global_session.start_block_number { + if cosign.global_session != global_session { + return Ok(true); + } + + // Check the cosigned block number is in range to the global session + if cosign.block_number < global_session_info.start_block_number { // Cosign is for a block predating the global session return Ok(false); } @@ -352,7 +360,7 @@ impl Cosigning { // Check the cosign's signature { let key = Public::from({ - let Some(key) = global_session.keys.get(&network) else { + let Some(key) = global_session_info.keys.get(&network) else { return Ok(false); }; *key @@ -369,15 +377,6 @@ impl Cosigning { let mut txn = self.db.txn(); if our_block_hash == cosign.block_hash { - // If this is for a future global session, we don't acknowledge this cosign at this time - let latest_cosigned_block_number = LatestCosignedBlockNumber::get(&txn).unwrap_or(0); - // This global session starts the block *after* its declaration, so we want to check if the - // block declaring it was cosigned - if (global_session.start_block_number - 1) > latest_cosigned_block_number { - drop(txn); - return Ok(true); - } - NetworksLatestCosignedBlock::set(&mut txn, cosign.global_session, network, signed_cosign); } else { let mut faults = Faults::get(&txn, cosign.global_session).unwrap_or(vec![]); @@ -388,14 +387,14 @@ impl Cosigning { let mut weight_cosigned = 0; for fault in &faults { - let Some(stake) = global_session.stakes.get(&fault.cosign.cosigner) else { + let Some(stake) = global_session_info.stakes.get(&fault.cosign.cosigner) else { Err("cosigner with recognized key didn't have a stake entry saved".to_string())? 
}; weight_cosigned += stake; } // Check if the sum weight means a fault has occurred - if weight_cosigned >= ((global_session.total_stake * 17) / 100) { + if weight_cosigned >= ((global_session_info.total_stake * 17) / 100) { FaultedSession::set(&mut txn, &cosign.global_session); } } From cef5bc95b0e7b0e4f85d04feefa146cc03cbe6c8 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 26 Dec 2024 00:15:49 -0500 Subject: [PATCH 215/368] Revert prior commit An archive of all GlobalSessions is necessary to check for faults. The storage cost is also minimal. While it should be avoided if it can be, it can't be here. --- coordinator/cosign/src/evaluator.rs | 27 +++------------- coordinator/cosign/src/intend.rs | 19 ++++++------ coordinator/cosign/src/lib.rs | 48 ++++++++++++++++------------- 3 files changed, 40 insertions(+), 54 deletions(-) diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs index 8f72b536..91f92b44 100644 --- a/coordinator/cosign/src/evaluator.rs +++ b/coordinator/cosign/src/evaluator.rs @@ -5,7 +5,7 @@ use serai_task::ContinuallyRan; use crate::{ HasEvents, GlobalSession, NetworksLatestCosignedBlock, RequestNotableCosigns, - intend::{GlobalSessions, BlockEventData, BlockEvents}, + intend::{GlobalSessionsChannel, BlockEventData, BlockEvents}, }; create_db!( @@ -34,8 +34,8 @@ fn currently_evaluated_global_session_strict( let existing = match CurrentlyEvaluatedGlobalSession::get(txn) { Some(existing) => existing, None => { - let first = - GlobalSessions::try_recv(txn).expect("fetching latest global session yet none declared"); + let first = GlobalSessionsChannel::try_recv(txn) + .expect("fetching latest global session yet none declared"); CurrentlyEvaluatedGlobalSession::set(txn, &first); first } @@ -47,14 +47,14 @@ fn currently_evaluated_global_session_strict( existing }; - if let Some(next) = GlobalSessions::peek(txn) { + if let Some(next) = GlobalSessionsChannel::peek(txn) { assert!( block_number <= next.1.start_block_number, "currently_evaluated_global_session_strict wasn't called incrementally" ); // If it's time for this session to activate, take it from the channel and set it if block_number == next.1.start_block_number { - GlobalSessions::try_recv(txn).unwrap(); + GlobalSessionsChannel::try_recv(txn).unwrap(); CurrentlyEvaluatedGlobalSession::set(txn, &next); res = next; } @@ -63,23 +63,6 @@ fn currently_evaluated_global_session_strict( res } -// This is a non-strict function which won't panic, and also won't increment the session as needed. -pub(crate) fn currently_evaluated_global_session( - getter: &impl Get, -) -> Option<([u8; 32], GlobalSession)> { - // If there's a next session... - if let Some(next_global_session) = GlobalSessions::peek(getter) { - // and we've already evaluated the cosigns for the block declaring it... - if LatestCosignedBlockNumber::get(getter) == Some(next_global_session.1.start_block_number - 1) - { - // return it as the current session. - return Some(next_global_session); - } - } - // Else, return the current session - CurrentlyEvaluatedGlobalSession::get(getter) -} - /// A task to determine if a block has been cosigned and we should handle it. pub(crate) struct CosignEvaluatorTask { pub(crate) db: D, diff --git a/coordinator/cosign/src/intend.rs b/coordinator/cosign/src/intend.rs index 38d8b9a8..7466ae5a 100644 --- a/coordinator/cosign/src/intend.rs +++ b/coordinator/cosign/src/intend.rs @@ -26,7 +26,7 @@ pub(crate) struct BlockEventData { db_channel! 
{ CosignIntendChannels { - GlobalSessions: () -> ([u8; 32], GlobalSession), + GlobalSessionsChannel: () -> ([u8; 32], GlobalSession), BlockEvents: () -> BlockEventData, IntendedCosigns: (set: ValidatorSet) -> CosignIntent, } @@ -128,14 +128,12 @@ impl ContinuallyRan for CosignIntendTask { stakes, total_stake, }; - if let Some((ending_global_session, _ending_global_session_info)) = global_session_for_this_block { + GlobalSessions::set(&mut txn, global_session, &global_session_info); + if let Some(ending_global_session) = global_session_for_this_block { GlobalSessionsLastBlock::set(&mut txn, ending_global_session, &block_number); } - LatestGlobalSessionIntended::set( - &mut txn, - &(global_session, global_session_info.clone()), - ); - GlobalSessions::send(&mut txn, &(global_session, global_session_info)); + LatestGlobalSessionIntended::set(&mut txn, &global_session); + GlobalSessionsChannel::send(&mut txn, &(global_session, global_session_info)); } // If there isn't anyone available to cosign this block, meaning it'll never be cosigned, @@ -147,9 +145,10 @@ impl ContinuallyRan for CosignIntendTask { match has_events { HasEvents::Notable | HasEvents::NonNotable => { - let (global_session_for_this_block, global_session_info) = - global_session_for_this_block - .expect("global session for this block was None but still attempting to cosign it"); + let global_session_for_this_block = global_session_for_this_block + .expect("global session for this block was None but still attempting to cosign it"); + let global_session_info = GlobalSessions::get(&txn, global_session_for_this_block) + .expect("last global session intended wasn't saved to the database"); // Tell each set of their expectation to cosign this block for set in global_session_info.sets { diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index 3ef802e6..aacd4c96 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -45,7 +45,7 @@ pub const COSIGN_CONTEXT: &[u8] = b"serai-cosign"; have validator sets follow two distinct global sessions without breaking the bounds of the cosigning protocol. */ -#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] +#[derive(Debug, BorshSerialize, BorshDeserialize)] pub(crate) struct GlobalSession { pub(crate) start_block_number: u64, pub(crate) sets: Vec, @@ -66,12 +66,14 @@ create_db! { // An index of Substrate blocks SubstrateBlocks: (block_number: u64) -> [u8; 32], + // A mapping from a global session's ID to its relevant information. + GlobalSessions: (global_session: [u8; 32]) -> GlobalSession, // The last block to be cosigned by a global session. GlobalSessionsLastBlock: (global_session: [u8; 32]) -> u64, // The latest global session intended. // // This is distinct from the latest global session for which we've evaluated the cosigns for. 
- LatestGlobalSessionIntended: () -> ([u8; 32], GlobalSession), + LatestGlobalSessionIntended: () -> [u8; 32], // The following are managed by the `intake_cosign` function present in this file @@ -285,9 +287,7 @@ impl Cosigning { } cosigns } else { - let Some((latest_global_session, _latest_global_session_info)) = - LatestGlobalSessionIntended::get(&self.db) - else { + let Some(latest_global_session) = LatestGlobalSessionIntended::get(&self.db) else { return vec![]; }; let mut cosigns = Vec::with_capacity(serai_client::primitives::NETWORKS.len()); @@ -320,6 +320,12 @@ impl Cosigning { let cosign = &signed_cosign.cosign; let network = cosign.cosigner; + // Check our indexed blockchain includes a block with this block number + let Some(our_block_hash) = SubstrateBlocks::get(&self.db, cosign.block_number) else { + return Ok(true); + }; + let faulty = our_block_hash == cosign.block_hash; + // Check this isn't a dated cosign within its global session (as it would be if rebroadcasted) if let Some(existing) = NetworksLatestCosignedBlock::get(&self.db, cosign.global_session, network) @@ -329,24 +335,13 @@ impl Cosigning { } } - // Check our indexed blockchain includes a block with this block number - let Some(our_block_hash) = SubstrateBlocks::get(&self.db, cosign.block_number) else { + let Some(global_session) = GlobalSessions::get(&self.db, cosign.global_session) else { + // Unrecognized global session return Ok(true); }; - // Check the cosign aligns with the global session we're currently working on - let Some((global_session, global_session_info)) = - evaluator::currently_evaluated_global_session(&self.db) - else { - // We haven't recognized any global sessions yet - return Ok(true); - }; - if cosign.global_session != global_session { - return Ok(true); - } - // Check the cosigned block number is in range to the global session - if cosign.block_number < global_session_info.start_block_number { + if cosign.block_number < global_session.start_block_number { // Cosign is for a block predating the global session return Ok(false); } @@ -360,7 +355,7 @@ impl Cosigning { // Check the cosign's signature { let key = Public::from({ - let Some(key) = global_session_info.keys.get(&network) else { + let Some(key) = global_session.keys.get(&network) else { return Ok(false); }; *key @@ -377,6 +372,15 @@ impl Cosigning { let mut txn = self.db.txn(); if our_block_hash == cosign.block_hash { + // If this is for a future global session, we don't acknowledge this cosign at this time + let latest_cosigned_block_number = LatestCosignedBlockNumber::get(&txn).unwrap_or(0); + // This global session starts the block *after* its declaration, so we want to check if the + // block declaring it was cosigned + if (global_session.start_block_number - 1) > latest_cosigned_block_number { + drop(txn); + return Ok(true); + } + NetworksLatestCosignedBlock::set(&mut txn, cosign.global_session, network, signed_cosign); } else { let mut faults = Faults::get(&txn, cosign.global_session).unwrap_or(vec![]); @@ -387,14 +391,14 @@ impl Cosigning { let mut weight_cosigned = 0; for fault in &faults { - let Some(stake) = global_session_info.stakes.get(&fault.cosign.cosigner) else { + let Some(stake) = global_session.stakes.get(&fault.cosign.cosigner) else { Err("cosigner with recognized key didn't have a stake entry saved".to_string())? 
}; weight_cosigned += stake; } // Check if the sum weight means a fault has occurred - if weight_cosigned >= ((global_session_info.total_stake * 17) / 100) { + if weight_cosigned >= ((global_session.total_stake * 17) / 100) { FaultedSession::set(&mut txn, &cosign.global_session); } } From df06da55528dbe7b5ff8eebb18c0fa9e60f61e11 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 26 Dec 2024 00:24:48 -0500 Subject: [PATCH 216/368] Only check if the cosign is stale if it isn't faulty If it is faulty, we want to archive it regardless. --- coordinator/cosign/src/lib.rs | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index aacd4c96..2c145cf0 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -324,14 +324,16 @@ impl Cosigning { let Some(our_block_hash) = SubstrateBlocks::get(&self.db, cosign.block_number) else { return Ok(true); }; - let faulty = our_block_hash == cosign.block_hash; + let faulty = cosign.block_hash != our_block_hash; // Check this isn't a dated cosign within its global session (as it would be if rebroadcasted) - if let Some(existing) = - NetworksLatestCosignedBlock::get(&self.db, cosign.global_session, network) - { - if existing.cosign.block_number >= cosign.block_number { - return Ok(true); + if !faulty { + if let Some(existing) = + NetworksLatestCosignedBlock::get(&self.db, cosign.global_session, network) + { + if existing.cosign.block_number >= cosign.block_number { + return Ok(true); + } } } @@ -371,7 +373,7 @@ impl Cosigning { let mut txn = self.db.txn(); - if our_block_hash == cosign.block_hash { + if !faulty { // If this is for a future global session, we don't acknowledge this cosign at this time let latest_cosigned_block_number = LatestCosignedBlockNumber::get(&txn).unwrap_or(0); // This global session starts the block *after* its declaration, so we want to check if the @@ -381,6 +383,7 @@ impl Cosigning { return Ok(true); } + // This is safe as it's in-range and newer, as prior checked since it isn't faulty NetworksLatestCosignedBlock::set(&mut txn, cosign.global_session, network, signed_cosign); } else { let mut faults = Faults::get(&txn, cosign.global_session).unwrap_or(vec![]); From 3d15710a4356fff4ee69b74d82f61cebd53534ef Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 26 Dec 2024 00:26:48 -0500 Subject: [PATCH 217/368] Only check the cosign is after its start block if faulty We don't have consensus on the session's last block, so we shouldn't check if the cosign is before the session ends. What matters is that network, within its set, claims it's still active at that block (on its view of the blockchain). 
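
For clarity, a condensed sketch of the resulting bounds checks (a standalone form with a
hypothetical `cosign_in_bounds` helper and illustrative parameters; the real checks live
inline in `intake_cosign`):

    fn cosign_in_bounds(faulty: bool, block: u64, start: u64, last: Option<u64>) -> bool {
      // A cosign predating its global session is always rejected
      if block < start {
        return false;
      }
      // A faulty cosign is archived regardless of the session's last block, as there's
      // no consensus on where the equivocating fork ended
      if faulty {
        return true;
      }
      // An honest cosign must not exceed the session's last cosigned block
      last.map_or(true, |last| block <= last)
    }
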
--- coordinator/cosign/src/lib.rs | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index 2c145cf0..3f771983 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -347,10 +347,12 @@ impl Cosigning { // Cosign is for a block predating the global session return Ok(false); } - if let Some(last_block) = GlobalSessionsLastBlock::get(&self.db, cosign.global_session) { - if cosign.block_number > last_block { - // Cosign is for a block after the last block this global session should have signed - return Ok(false); + if !faulty { + if let Some(last_block) = GlobalSessionsLastBlock::get(&self.db, cosign.global_session) { + if cosign.block_number > last_block { + // Cosign is for a block after the last block this global session should have signed + return Ok(false); + } } } From 9c92709e6244d29c9f31b4cc3c0d13cd639c5770 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 26 Dec 2024 01:02:44 -0500 Subject: [PATCH 218/368] Delay cosign acknowledgments --- coordinator/cosign/README.md | 12 +++---- coordinator/cosign/src/delay.rs | 55 +++++++++++++++++++++++++++++ coordinator/cosign/src/evaluator.rs | 30 ++++++++++------ coordinator/cosign/src/lib.rs | 16 +++++++-- 4 files changed, 93 insertions(+), 20 deletions(-) create mode 100644 coordinator/cosign/src/delay.rs diff --git a/coordinator/cosign/README.md b/coordinator/cosign/README.md index 10f31378..50ce52a6 100644 --- a/coordinator/cosign/README.md +++ b/coordinator/cosign/README.md @@ -66,7 +66,7 @@ to exhibit the same behavior), yet prevents interaction with it. If cosigns representing 17% of the non-Serai validators sets by weight are detected for distinct blocks at the same position, the protocol halts. An -explicit latency period of five seconds is enacted after receiving a cosign +explicit latency period of seventy seconds is enacted after receiving a cosign commit for the detection of such an equivocation. This is largely redundant given how the Serai blockchain node will presumably have halted itself by this time. @@ -114,8 +114,8 @@ asynchronous network or 11.33% of non-Serai validator sets' stake. ### TODO The Serai node no longer responding to RPC requests upon detecting any -equivocation, the delayed acknowledgement of cosigns, and the fallback protocol -where validators individually produce signatures, are not implemented at this -time. The former means the detection of equivocating cosigns not redundant and -the latter makes 5.67% of non-Serai validator sets' stake the DoS threshold, -even without control of an asynchronous network. +equivocation, and the fallback protocol where validators individually produce +signatures, are not implemented at this time. The former means the detection of +equivocating cosigns is not redundant and the latter makes 5.67% of non-Serai +validator sets' stake the DoS threshold, even without control of an +asynchronous network. diff --git a/coordinator/cosign/src/delay.rs b/coordinator/cosign/src/delay.rs new file mode 100644 index 00000000..5593eaf7 --- /dev/null +++ b/coordinator/cosign/src/delay.rs @@ -0,0 +1,55 @@ +use core::future::Future; +use std::time::{Duration, SystemTime}; + +use serai_db::*; +use serai_task::ContinuallyRan; + +use crate::evaluator::CosignedBlocks; + +/// How often callers should broadcast the cosigns flagged for rebroadcasting. 
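+// The two constants below sum to the seventy-second acknowledgement delay the README
+// above now describes: a 60s broadcast frequency plus a 10s synchrony expectation.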
+pub const BROADCAST_FREQUENCY: Duration = Duration::from_secs(60);
+const SYNCHRONY_EXPECTATION: Duration = Duration::from_secs(10);
+const ACKNOWLEDGEMENT_DELAY: Duration =
+  Duration::from_secs(BROADCAST_FREQUENCY.as_secs() + SYNCHRONY_EXPECTATION.as_secs());
+
+create_db!(
+  SubstrateCosignDelay {
+    // The latest cosigned block number.
+    LatestCosignedBlockNumber: () -> u64,
+  }
+);
+
+/// A task to delay acknowledgement of cosigns.
+pub(crate) struct CosignDelayTask {
+  pub(crate) db: D,
+}
+
+impl ContinuallyRan for CosignDelayTask {
+  fn run_iteration(&mut self) -> impl Send + Future> {
+    async move {
+      let mut made_progress = false;
+      loop {
+        let mut txn = self.db.txn();
+
+        // Receive the next block to mark as cosigned
+        let Some((block_number, time_evaluated)) = CosignedBlocks::try_recv(&mut txn) else {
+          break;
+        };
+        // Calculate when we should mark it as valid
+        let time_valid =
+          SystemTime::UNIX_EPOCH + Duration::from_secs(time_evaluated) + ACKNOWLEDGEMENT_DELAY;
+        // Sleep until then (saturating to zero if that time has already passed)
+        tokio::time::sleep(time_valid.duration_since(SystemTime::now()).unwrap_or(Duration::ZERO))
+          .await;
+
+        // Set the cosigned block
+        LatestCosignedBlockNumber::set(&mut txn, &block_number);
+        txn.commit();
+
+        made_progress = true;
+      }
+
+      Ok(made_progress)
+    }
+  }
+}
diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs
index 91f92b44..856a6e00 100644
--- a/coordinator/cosign/src/evaluator.rs
+++ b/coordinator/cosign/src/evaluator.rs
@@ -1,4 +1,5 @@
 use core::future::Future;
+use std::time::{Duration, SystemTime};
 
 use serai_db::*;
 use serai_task::ContinuallyRan;
@@ -10,13 +11,18 @@ use crate::{
 
 create_db!(
   SubstrateCosignEvaluator {
-    // The latest cosigned block number.
-    LatestCosignedBlockNumber: () -> u64,
     // The global session currently being evaluated.
     CurrentlyEvaluatedGlobalSession: () -> ([u8; 32], GlobalSession),
   }
 );
 
+db_channel!(
+  SubstrateCosignEvaluatorChannels {
+    // (cosigned block, time cosign was evaluated)
+    CosignedBlocks: () -> (u64, u64),
+  }
+);
+
 // This is a strict function which won't panic, even with a malicious Serai node, so long as:
 // - It's called incrementally
@@ -72,8 +78,6 @@ pub(crate) struct CosignEvaluatorTask {
 impl ContinuallyRan for CosignEvaluatorTask {
   fn run_iteration(&mut self) -> impl Send + Future> {
     async move {
-      let latest_cosigned_block_number = LatestCosignedBlockNumber::get(&self.db).unwrap_or(0);
-
       let mut known_cosign = None;
 
       let mut made_progress = false;
       loop {
@@ -82,11 +86,6 @@ impl ContinuallyRan for CosignEvaluatorTask
 ContinuallyRan for CosignEvaluatorTask
           {}
         }
-        // Since we checked we had the necessary cosigns, increment the latest cosigned block
-        LatestCosignedBlockNumber::set(&mut txn, &block_number);
+        // Since we checked we had the necessary cosigns, send it for delay before acknowledgement
+        CosignedBlocks::send(
+          &mut txn,
+          &(
+            block_number,
+            SystemTime::now()
+              .duration_since(SystemTime::UNIX_EPOCH)
+              .unwrap_or(Duration::ZERO)
+              .as_secs(),
+          ),
+        );
 
         txn.commit();
 
         made_progress = true;
diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs
index 3f771983..dd8eb833 100644
--- a/coordinator/cosign/src/lib.rs
+++ b/coordinator/cosign/src/lib.rs
@@ -22,7 +22,10 @@ use serai_task::*;
 mod intend;
 /// The evaluator of the cosigns.
 mod evaluator;
-use evaluator::LatestCosignedBlockNumber;
+/// The task to delay acknowledgement of the cosigns.
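+// Pipeline after this commit: intend -> evaluator -> delay -> the tasks to run upon
+// cosigning, per the spawn wiring below.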
+mod delay; +pub use delay::BROADCAST_FREQUENCY; +use delay::LatestCosignedBlockNumber; /// The schnorrkel context to used when signing a cosign. pub const COSIGN_CONTEXT: &[u8] = b"serai-cosign"; @@ -235,13 +238,18 @@ impl Cosigning { ) -> Self { let (intend_task, _intend_task_handle) = Task::new(); let (evaluator_task, evaluator_task_handle) = Task::new(); + let (delay_task, delay_task_handle) = Task::new(); tokio::spawn( (intend::CosignIntendTask { db: db.clone(), serai }) .continually_run(intend_task, vec![evaluator_task_handle]), ); tokio::spawn( (evaluator::CosignEvaluatorTask { db: db.clone(), request }) - .continually_run(evaluator_task, tasks_to_run_upon_cosigning), + .continually_run(evaluator_task, vec![delay_task_handle]), + ); + tokio::spawn( + (delay::CosignDelayTask { db: db.clone() }) + .continually_run(delay_task, tasks_to_run_upon_cosigning), ); Self { db } } @@ -269,7 +277,7 @@ impl Cosigning { cosigns } - /// The cosigns to rebroadcast ever so often. + /// The cosigns to rebroadcast every `BROADCAST_FREQUENCY` seconds. /// /// This will be the most recent cosigns, in case the initial broadcast failed, or the faulty /// cosigns, in case of a fault, to induce identification of the fault by others. @@ -348,6 +356,8 @@ impl Cosigning { return Ok(false); } if !faulty { + // This prevents a malicious validator set, on the same chain, from producing a cosign after + // their final block, replacing their notable cosign if let Some(last_block) = GlobalSessionsLastBlock::get(&self.db, cosign.global_session) { if cosign.block_number > last_block { // Cosign is for a block after the last block this global session should have signed From 1d50792eedf76ce4955a9c6135d0e94c491cc74b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 26 Dec 2024 02:32:14 -0500 Subject: [PATCH 219/368] Document serai-db with bounds and intent --- Cargo.lock | 376 +++++++++++++++++++++---------------------- common/db/Cargo.toml | 4 +- common/db/README.md | 8 + common/db/src/lib.rs | 23 ++- common/db/src/mem.rs | 4 +- 5 files changed, 220 insertions(+), 195 deletions(-) create mode 100644 common/db/README.md diff --git a/Cargo.lock b/Cargo.lock index c53057dc..4711b68e 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -101,9 +101,9 @@ checksum = "683d7910e743518b0e34f1186f92494becacb047c7b6bf616c96772180fef923" [[package]] name = "alloy-chains" -version = "0.1.48" +version = "0.1.51" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a0161082e0edd9013d23083465cc04b20e44b7a15646d36ba7b0cdb7cd6fe18f" +checksum = "d4e0f0136c085132939da6b753452ebed4efaa73fe523bb855b10c199c2ebfaf" dependencies = [ "alloy-primitives", "num_enum", @@ -112,9 +112,9 @@ dependencies = [ [[package]] name = "alloy-consensus" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8ba14856660f31807ebb26ce8f667e814c72694e1077e97ef102e326ad580f3f" +checksum = "e88e1edea70787c33e11197d3f32ae380f3db19e6e061e539a5bcf8184a6b326" dependencies = [ "alloy-eips", "alloy-primitives", @@ -130,9 +130,9 @@ dependencies = [ [[package]] name = "alloy-consensus-any" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "28666307e76441e7af37a2b90cde7391c28112121bea59f4e0d804df8b20057e" +checksum = "57b1bb53f40c0273cd1975573cd457b39213e68584e36d1401d25fd0398a1d65" dependencies = [ "alloy-consensus", "alloy-eips", @@ -177,9 +177,9 @@ dependencies = [ [[package]] name = "alloy-eips" -version = "0.8.0" +version = 
"0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "47e922d558006ba371681d484d12aa73fe673d84884f83747730af7433c0e86d" +checksum = "5f9fadfe089e9ccc0650473f2d4ef0a28bc015bbca5631d9f0f09e49b557fdb3" dependencies = [ "alloy-eip2930", "alloy-eip7702", @@ -195,9 +195,9 @@ dependencies = [ [[package]] name = "alloy-genesis" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5dca170827a7ca156b43588faebf9e9d27c27d0fb07cab82cfd830345e2b24f5" +checksum = "2b2a4cf7b70f3495788e74ce1c765260ffe38820a2a774ff4aacb62e31ea73f9" dependencies = [ "alloy-primitives", "alloy-serde", @@ -207,23 +207,23 @@ dependencies = [ [[package]] name = "alloy-json-rpc" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9335278f50b0273e0a187680ee742bb6b154a948adf036f448575bacc5ccb315" +checksum = "e29040b9d5fe2fb70415531882685b64f8efd08dfbd6cc907120650504821105" dependencies = [ "alloy-primitives", "alloy-sol-types", "serde", "serde_json", - "thiserror 2.0.6", + "thiserror 2.0.9", "tracing", ] [[package]] name = "alloy-network" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ad4e6ad4230df8c4a254c20f8d6a84ab9df151bfca13f463177dbc96571cc1f8" +checksum = "510cc00b318db0dfccfdd2d032411cfae64fc144aef9679409e014145d3dacc4" dependencies = [ "alloy-consensus", "alloy-consensus-any", @@ -241,14 +241,14 @@ dependencies = [ "futures-utils-wasm", "serde", "serde_json", - "thiserror 2.0.6", + "thiserror 2.0.9", ] [[package]] name = "alloy-network-primitives" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c4df88a2f8020801e0fefce79471d3946d39ca3311802dbbd0ecfdeee5e972e3" +checksum = "9081c099e798b8a2bba2145eb82a9a146f01fc7a35e9ab6e7b43305051f97550" dependencies = [ "alloy-consensus", "alloy-eips", @@ -259,9 +259,9 @@ dependencies = [ [[package]] name = "alloy-node-bindings" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2db5cefbc736b2b26a960dcf82279c70a03695dd11a0032a6dc27601eeb29182" +checksum = "aef9849fb8bbb28f69f2cbdb4b0dac2f0e35c04f6078a00dfb8486469aed02de" dependencies = [ "alloy-genesis", "alloy-primitives", @@ -269,7 +269,7 @@ dependencies = [ "rand", "serde_json", "tempfile", - "thiserror 2.0.6", + "thiserror 2.0.9", "tracing", "url", ] @@ -304,9 +304,9 @@ dependencies = [ [[package]] name = "alloy-provider" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5115c74c037714e1b02a86f742289113afa5d494b5ea58308ba8aa378e739101" +checksum = "dc2dfaddd9a30aa870a78a4e1316e3e115ec1e12e552cbc881310456b85c1f24" dependencies = [ "alloy-chains", "alloy-consensus", @@ -330,7 +330,7 @@ dependencies = [ "schnellru", "serde", "serde_json", - "thiserror 2.0.6", + "thiserror 2.0.9", "tokio", "tracing", "wasmtimer", @@ -355,14 +355,14 @@ checksum = "5a833d97bf8a5f0f878daf2c8451fff7de7f9de38baa5a45d936ec718d81255a" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] name = "alloy-rpc-client" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5c6a0bd0ce5660ac48e4f3bb0c7c5c3a94db287a0be94971599d83928476cbcd" +checksum = "531137b283547d5b9a5cafc96b006c64ef76810c681d606f28be9781955293b6" dependencies = [ "alloy-json-rpc", 
"alloy-primitives", @@ -381,9 +381,9 @@ dependencies = [ [[package]] name = "alloy-rpc-types-any" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ea98f81bcd759dbfa3601565f9d7a02220d8ef1d294ec955948b90aaafbfd857" +checksum = "ed98e1af55a7d856bfa385f30f63d8d56be2513593655c904a8f4a7ec963aa3e" dependencies = [ "alloy-consensus-any", "alloy-rpc-types-eth", @@ -392,9 +392,9 @@ dependencies = [ [[package]] name = "alloy-rpc-types-eth" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0e518b0a7771e00728f18be0708f828b18a1cfc542a7153bef630966a26388e0" +checksum = "8737d7a6e37ca7bba9c23e9495c6534caec6760eb24abc9d5ffbaaba147818e1" dependencies = [ "alloy-consensus", "alloy-consensus-any", @@ -412,9 +412,9 @@ dependencies = [ [[package]] name = "alloy-serde" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ed3dc8d4a08ffc90c1381d39a4afa2227668259a42c97ab6eecf51cbd82a8761" +checksum = "5851bf8d5ad33014bd0c45153c603303e730acc8a209450a7ae6b4a12c2789e2" dependencies = [ "alloy-primitives", "serde", @@ -423,16 +423,16 @@ dependencies = [ [[package]] name = "alloy-signer" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "16188684100f6e0f2a2b949968fe3007749c5be431549064a1bce4e7b3a196a9" +checksum = "7e10ca565da6500cca015ba35ee424d59798f2e1b85bc0dd8f81dafd401f029a" dependencies = [ "alloy-primitives", "async-trait", "auto_impl", "elliptic-curve", "k256", - "thiserror 2.0.6", + "thiserror 2.0.9", ] [[package]] @@ -457,7 +457,7 @@ dependencies = [ "proc-macro-error2", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -473,7 +473,7 @@ dependencies = [ "proc-macro-error2", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", "syn-solidity", "tiny-keccak", ] @@ -489,7 +489,7 @@ dependencies = [ "heck 0.5.0", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", "syn-solidity", ] @@ -506,9 +506,9 @@ dependencies = [ [[package]] name = "alloy-transport" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "628be5b9b75e4f4c4f2a71d985bbaca4f23de356dc83f1625454c505f5eef4df" +checksum = "538a04a37221469cac0ce231b737fd174de2fdfcdd843bdd068cb39ed3e066ad" dependencies = [ "alloy-json-rpc", "base64 0.22.1", @@ -516,7 +516,7 @@ dependencies = [ "futures-utils-wasm", "serde", "serde_json", - "thiserror 2.0.6", + "thiserror 2.0.9", "tokio", "tower 0.5.2", "tracing", @@ -526,9 +526,9 @@ dependencies = [ [[package]] name = "alloy-transport-http" -version = "0.8.0" +version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4e24412cf72f79c95cd9b1d9482e3a31f9d94c24b43c4b3b710cc8d4341eaab0" +checksum = "2ed40eb1e1265b2911512f6aa1dcece9702d078f5a646730c45e39e2be00ac1c" dependencies = [ "alloy-transport", "url", @@ -536,9 +536,9 @@ dependencies = [ [[package]] name = "alloy-trie" -version = "0.7.6" +version = "0.7.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3a5fd8fea044cc9a8c8a50bb6f28e31f0385d820f116c5b98f6f4e55d6e5590b" +checksum = "1e428104b2445a4f929030891b3dbf8c94433a8349ba6480946bf6af7975c2f6" dependencies = [ "alloy-primitives", "alloy-rlp", @@ -625,9 +625,9 @@ dependencies = [ [[package]] name = "anyhow" -version = "1.0.94" +version = "1.0.95" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "c1fd03a028ef38ba2276dce7e33fcd6369c158a1bca17946c4b1b701891c1ff7" +checksum = "34ac096ce696dc2fcabef30516bb13c0a68a11d30131d3df6f04711467681b04" [[package]] name = "approx" @@ -888,7 +888,7 @@ checksum = "c7c24de15d275a1ecfd47a380fb4d5ec9bfe0933f309ed5e705b775596a3574d" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -899,7 +899,7 @@ checksum = "721cae7de5c34fbb2acd27e21e6d2cf7b886dce0c27388d46c4e6c47ea4318dd" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -934,7 +934,7 @@ checksum = "3c87f3f15e7794432337fc718554eaa4dc8f04c9677a950ffe366f20a162ae42" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -953,7 +953,7 @@ dependencies = [ "cfg-if", "libc", "miniz_oxide", - "object 0.36.5", + "object 0.36.7", "rustc-demangle", "windows-targets 0.52.6", ] @@ -1045,7 +1045,7 @@ dependencies = [ "regex", "rustc-hash 1.1.0", "shlex", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -1110,7 +1110,7 @@ dependencies = [ "serde_json", "simple-request", "std-shims", - "thiserror 2.0.6", + "thiserror 2.0.9", "tokio", "zeroize", ] @@ -1321,7 +1321,7 @@ dependencies = [ "proc-macro-crate 3.2.0", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -1382,9 +1382,9 @@ checksum = "c3ac9f8b63eca6fd385229b3675f6cc0dc5c8a5c8a54a59d4f52ffd670d87b0c" [[package]] name = "bytemuck" -version = "1.20.0" +version = "1.21.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8b37c88a63ffd85d15b406896cc343916d7cf57838a847b3a6f2ca5d39a5695a" +checksum = "ef657dfab802224e671f5818e9a4935f9b1957ed18e58292690cc39e7a4092a3" [[package]] name = "byteorder" @@ -1453,7 +1453,7 @@ checksum = "e7daec1a2a2129eeba1644b220b4647ec537b0b5d4bfd6876fcc5a540056b592" dependencies = [ "camino", "cargo-platform", - "semver 1.0.23", + "semver 1.0.24", "serde", "serde_json", "thiserror 1.0.69", @@ -1634,7 +1634,7 @@ dependencies = [ "heck 0.5.0", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -1885,9 +1885,9 @@ dependencies = [ [[package]] name = "crossbeam-deque" -version = "0.8.5" +version = "0.8.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "613f8cc01fe9cf1a3eb3d7f488fd2fa8388403e97039e2f73692932e291a770d" +checksum = "9dd111b7b7f7d55b72c0a6ae361660ee5853c9af73f70c3c2ef6858b950e2e51" dependencies = [ "crossbeam-epoch", "crossbeam-utils", @@ -1904,9 +1904,9 @@ dependencies = [ [[package]] name = "crossbeam-utils" -version = "0.8.20" +version = "0.8.21" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "22ec99545bb0ed0ea7bb9b8e1e9122ea386ff8a48c0922e43f36d45ab09e0e80" +checksum = "d0a5c400df2834b80a4c3327b3aad3a4c4cd4de0629063962b03235697506a28" [[package]] name = "crunchy" @@ -1972,7 +1972,7 @@ checksum = "f46882e17999c6cc590af592290432be3bce0428cb0d5f8b6715e4dc7b383eb3" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -2000,7 +2000,7 @@ dependencies = [ "proc-macro2", "quote", "scratch", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -2018,7 +2018,7 @@ dependencies = [ "proc-macro2", "quote", "rustversion", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -2159,7 +2159,7 @@ checksum = "cb7330aeadfbe296029522e6c40f315320aba36fc43a5b3632f3795348f3bd22" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", "unicode-xid", ] @@ -2239,7 +2239,7 @@ checksum = 
"97369cbbc041bc366949bc74d34658d6cda5621039731c6310521892a3a20ae0" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -2266,7 +2266,7 @@ dependencies = [ "schnorr-signatures", "secq256k1", "std-shims", - "thiserror 2.0.6", + "thiserror 2.0.9", "zeroize", ] @@ -2285,7 +2285,7 @@ dependencies = [ "multiexp", "rand_core", "rustversion", - "thiserror 2.0.6", + "thiserror 2.0.9", "zeroize", ] @@ -2492,7 +2492,7 @@ dependencies = [ "heck 0.5.0", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -2527,7 +2527,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "33d852cb9b869c2a9b3df2f71a3074817f01e1844f839a144f5fcef059a4eb5d" dependencies = [ "libc", - "windows-sys 0.59.0", + "windows-sys 0.52.0", ] [[package]] @@ -2596,7 +2596,7 @@ dependencies = [ "fs-err", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -2748,9 +2748,9 @@ checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1" [[package]] name = "foldhash" -version = "0.1.3" +version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f81ec6369c545a7d40e4589b5597581fa1c441fe1cce96dd1de43159910a36a2" +checksum = "a0d2fde1f7b3d48b8395d5f2de76c18a528bd6a9cdde438df747bfcba3e05d6f" [[package]] name = "fork-tree" @@ -2877,7 +2877,7 @@ dependencies = [ "proc-macro-warning", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -2889,7 +2889,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -2899,7 +2899,7 @@ source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46 dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -3058,7 +3058,7 @@ checksum = "162ee34ebcb7c64a8abebc059ce0fee27c2262618d7b60ed8faf72fef13c3650" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -3432,11 +3432,11 @@ dependencies = [ [[package]] name = "home" -version = "0.5.9" +version = "0.5.11" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e3d1354bf6b7235cb4a0576c2619fd4ed18183f689b12b006a0ee7329eeff9a5" +checksum = "589533453244b0995c858700322199b2becb13b627df2851f64a2775d024abcf" dependencies = [ - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -3547,7 +3547,7 @@ dependencies = [ "httpdate", "itoa", "pin-project-lite", - "socket2 0.5.8", + "socket2 0.4.10", "tokio", "tower-service", "tracing", @@ -3591,9 +3591,9 @@ dependencies = [ [[package]] name = "hyper-rustls" -version = "0.27.3" +version = "0.27.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "08afdbb5c31130e3034af566421053ab03787c640246a446327f550d11bcb333" +checksum = "2d191583f3da1305256f22463b9bb0471acad48a4e534a5218b9963e9c1f59b2" dependencies = [ "futures-util", "http 1.2.0", @@ -3794,7 +3794,7 @@ checksum = "a0eb5a3343abf848c0984fe4604b2b105da9539376e24fc0a3b0007411ae4fd9" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -4100,9 +4100,9 @@ checksum = "884e2677b40cc8c339eaefcb701c32ef1fd2493d71118dc0ca4b6a736c93bd67" [[package]] name = "libc" -version = "0.2.168" +version = "0.2.169" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5aaeb2981e0606ca11d79718f8bb01164f1d6ed75080182d3abf017e6d244b6d" +checksum = "b5aba8db14291edd000dfcc4d620c7ebfb122c613afb886ca8803fa4e128a20a" [[package]] name = "libloading" @@ 
-4485,7 +4485,7 @@ dependencies = [ "proc-macro-warning", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -4642,9 +4642,9 @@ checksum = "0717cef1bc8b636c6e1c1bbdefc09e6322da8a9321966e8928ef80d20f7f770f" [[package]] name = "linked_hash_set" -version = "0.1.4" +version = "0.1.5" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "47186c6da4d81ca383c7c47c1bfc80f4b95f4720514d860a5407aaf4233f9588" +checksum = "bae85b5be22d9843c80e5fc80e9b64c8a3b1f98f867c709956eca3efff4e92e2" dependencies = [ "linked-hash-map", ] @@ -4748,7 +4748,7 @@ dependencies = [ "macro_magic_core", "macro_magic_macros", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -4762,7 +4762,7 @@ dependencies = [ "macro_magic_core_macros", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -4773,7 +4773,7 @@ checksum = "d710e1214dffbab3b5dacb21475dde7d6ed84c69ff722b3a47a782668d44fbac" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -4784,7 +4784,7 @@ checksum = "b8fb85ec1620619edf2984a7693497d4ec88a9665d8b87e942856884c92dbf2a" dependencies = [ "macro_magic_core", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -4920,9 +4920,9 @@ checksum = "68354c5c6bd36d73ff3feceb05efa59b6acb7626617f4962be322a825e61f79a" [[package]] name = "miniz_oxide" -version = "0.8.0" +version = "0.8.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e2d80299ef12ff69b16a84bb182e3b9df68b5a91574d3d4fa6e41b65deec4df1" +checksum = "4ffbe83022cedc1d264172192511ae958937694cd57ce297164951b8b3568394" dependencies = [ "adler2", ] @@ -4982,7 +4982,7 @@ dependencies = [ "schnorr-signatures", "serde_json", "subtle", - "thiserror 2.0.6", + "thiserror 2.0.9", "zeroize", ] @@ -4999,7 +4999,7 @@ dependencies = [ "serde", "serde_json", "std-shims", - "thiserror 2.0.6", + "thiserror 2.0.9", "zeroize", ] @@ -5026,7 +5026,7 @@ dependencies = [ "monero-primitives", "rand_core", "std-shims", - "thiserror 2.0.6", + "thiserror 2.0.9", "zeroize", ] @@ -5046,7 +5046,7 @@ dependencies = [ "rand_core", "std-shims", "subtle", - "thiserror 2.0.6", + "thiserror 2.0.9", "zeroize", ] @@ -5081,7 +5081,7 @@ dependencies = [ "monero-io", "monero-primitives", "std-shims", - "thiserror 2.0.6", + "thiserror 2.0.9", "zeroize", ] @@ -5109,7 +5109,7 @@ dependencies = [ "serde", "serde_json", "std-shims", - "thiserror 2.0.6", + "thiserror 2.0.9", "zeroize", ] @@ -5122,7 +5122,7 @@ dependencies = [ "monero-primitives", "rand_core", "std-shims", - "thiserror 2.0.6", + "thiserror 2.0.9", "zeroize", ] @@ -5195,7 +5195,7 @@ dependencies = [ "serde", "serde_json", "std-shims", - "thiserror 2.0.6", + "thiserror 2.0.9", "tokio", "zeroize", ] @@ -5211,7 +5211,7 @@ dependencies = [ "polyseed", "rand_core", "std-shims", - "thiserror 2.0.6", + "thiserror 2.0.9", "zeroize", ] @@ -5588,14 +5588,14 @@ checksum = "af1844ef2428cc3e1cb900be36181049ef3d3193c63e43026cfe202983b27a56" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] name = "nybbles" -version = "0.2.1" +version = "0.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "95f06be0417d97f81fe4e5c86d7d01b392655a9cac9c19a848aa033e18937b23" +checksum = "55a62e678a89501192cc5ebf47dcbc656b608ae5e1c61c9251fe35230f119fe3" dependencies = [ "const-hex", "serde", @@ -5616,9 +5616,9 @@ dependencies = [ [[package]] name = "object" -version = "0.36.5" +version = "0.36.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"aedf0a2d09c573ed1d8d85b30c119153926a2b36dce0ab28322c09a117a4683e" +checksum = "62948e14d923ea95ea2c7c86c71013138b66525b86bdc08d2dcc262bdb497b87" dependencies = [ "memchr", ] @@ -6009,7 +6009,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8b7cafe60d6cf8e62e1b9b2ea516a089c008945bb5a275416789e7db0bc199dc" dependencies = [ "memchr", - "thiserror 2.0.6", + "thiserror 2.0.9", "ucd-trie", ] @@ -6040,7 +6040,7 @@ checksum = "3c0f5fad0874fc7abcd4d750e76917eaebbecaa2c20bde22e1dbeeba8beb758c" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -6107,7 +6107,7 @@ dependencies = [ "sha3", "std-shims", "subtle", - "thiserror 2.0.6", + "thiserror 2.0.9", "zeroize", ] @@ -6154,15 +6154,15 @@ dependencies = [ [[package]] name = "predicates-core" -version = "1.0.8" +version = "1.0.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ae8177bee8e75d6846599c6b9ff679ed51e882816914eec639944d7c9aa11931" +checksum = "727e462b119fe9c93fd0eb1429a5f7647394014cf3c04ab2c0350eeb09095ffa" [[package]] name = "predicates-tree" -version = "1.0.11" +version = "1.0.12" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "41b740d195ed3166cd147c8047ec98db0e22ec019eb8eeb76d343b795304fb13" +checksum = "72dd2d6d381dfb73a193c7fca536518d7caee39fc8503f74e7dc0be0531b425c" dependencies = [ "predicates-core", "termtree", @@ -6262,7 +6262,7 @@ dependencies = [ "proc-macro-error-attr2", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -6273,7 +6273,7 @@ checksum = "3d1eaa7fa0aa1929ffdf7eeb6eac234dde6268914a14ad44d23521ab6a9b258e" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -6319,7 +6319,7 @@ checksum = "440f724eba9f6996b75d63681b0a92b06947f1457076d503a4d2e2c8f56442b8" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -6483,9 +6483,9 @@ dependencies = [ [[package]] name = "quote" -version = "1.0.37" +version = "1.0.38" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b5b9d34b8991d19d98081b46eacdd8eb58c6f2b201139f7c5f643cc155a633af" +checksum = "0e4dccaaaf89514f546c693ddc140f729f958c247918a13380cccc6078391acc" dependencies = [ "proc-macro2", ] @@ -6630,7 +6630,7 @@ checksum = "bcc303e793d3734489387d205e9b186fac9c6cfacedd98cbb2e8a5943595f3e6" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -6884,7 +6884,7 @@ version = "0.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cfcb3a22ef46e85b45de6ee7e79d063319ebb6594faafcf1c225ea92ab6e9b92" dependencies = [ - "semver 1.0.23", + "semver 1.0.24", ] [[package]] @@ -6906,7 +6906,7 @@ dependencies = [ "errno", "libc", "linux-raw-sys", - "windows-sys 0.59.0", + "windows-sys 0.52.0", ] [[package]] @@ -6949,9 +6949,9 @@ dependencies = [ [[package]] name = "rustls-pki-types" -version = "1.10.0" +version = "1.10.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "16f1201b3c9a7ee8039bcadc17b7e605e2945b27eee7631788c1bd2b0643674b" +checksum = "d2bf47e6ff922db3825eb750c4e2ff784c6ff8fb9e13046ef6a1d1c5401b0b37" [[package]] name = "rustls-webpki" @@ -7011,9 +7011,9 @@ checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f" [[package]] name = "safe_arch" -version = "0.7.2" +version = "0.7.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"c3460605018fdc9612bce72735cba0d27efbcd9904780d44c7e3a9948f96148a" +checksum = "96b02de82ddbe1b636e6170c21be622223aea188ef2e139be0a5b219ec215323" dependencies = [ "bytemuck", ] @@ -7131,7 +7131,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -7892,7 +7892,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -7975,7 +7975,7 @@ dependencies = [ "proc-macro-crate 3.2.0", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -8126,9 +8126,9 @@ dependencies = [ [[package]] name = "security-framework" -version = "3.0.1" +version = "3.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e1415a607e92bec364ea2cf9264646dcce0f91e6d65281bd6f2819cca3bf39c8" +checksum = "81d3f8c9bfcc3cbb6b0179eb57042d75b1582bdc65c3cb95f3fa999509c03cbc" dependencies = [ "bitflags 2.6.0", "core-foundation 0.10.0", @@ -8139,9 +8139,9 @@ dependencies = [ [[package]] name = "security-framework-sys" -version = "2.12.1" +version = "2.13.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fa39c7303dc58b5543c94d22c1766b0d31f2ee58306363ea622b10bbc075eaa2" +checksum = "1863fd3768cd83c56a7f60faa4dc0d403f1b6df0a38c3c25f44b7894e45370d5" dependencies = [ "core-foundation-sys", "libc", @@ -8167,9 +8167,9 @@ dependencies = [ [[package]] name = "semver" -version = "1.0.23" +version = "1.0.24" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "61697e0a1c7e512e84a621326239844a24d8207b4669b41bc18b32ea5cbf988b" +checksum = "3cb6eb87a131f756572d7fb904f6e7b68633f09cca868c5df1c4b8d1a694bbba" dependencies = [ "serde", ] @@ -8272,7 +8272,7 @@ dependencies = [ "simple-request", "sp-core", "sp-runtime", - "thiserror 2.0.6", + "thiserror 2.0.9", "tokio", "zeroize", ] @@ -8381,10 +8381,10 @@ dependencies = [ [[package]] name = "serai-db" -version = "0.1.0" +version = "0.1.1" dependencies = [ "parity-db", - "rocksdb 0.21.0", + "rocksdb 0.23.0", ] [[package]] @@ -8873,7 +8873,7 @@ dependencies = [ "serai-processor-ethereum-deployer", "serai-processor-ethereum-erc20", "serai-processor-ethereum-primitives", - "syn 2.0.90", + "syn 2.0.91", "syn-solidity", "tokio", ] @@ -9240,14 +9240,14 @@ checksum = "46f859dbbf73865c6627ed570e78961cd3ac92407a2d117204c49232485da55e" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] name = "serde_json" -version = "1.0.133" +version = "1.0.134" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c7fceb2473b9166b2294ef05efcb65a3db80803f0b03ef86a5fc88a2b85ee377" +checksum = "d00f4175c42ee48b15416f6193a959ba3a0d67fc699a0db9ad12df9f83991c7d" dependencies = [ "itoa", "memchr", @@ -9263,7 +9263,7 @@ checksum = "6c64451ba24fc7a6a2d60fc75dd9c83c90903b19028d4eff35e88fc1e86564e9" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -9289,9 +9289,9 @@ dependencies = [ [[package]] name = "serde_with" -version = "3.11.0" +version = "3.12.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8e28bdad6db2b8340e449f7108f020b3b092e8583a9e3fb82713e1d4e71fe817" +checksum = "d6b6f7f2fcb69f747921f79f3926bd1e203fce4fef62c268dd3abfb6d86029aa" dependencies = [ "base64 0.22.1", "chrono", @@ -9556,7 +9556,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -9752,7 +9752,7 @@ source = 
"git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46 dependencies = [ "quote", "sp-core-hashing", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -9771,7 +9771,7 @@ source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46 dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -9943,7 +9943,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -10096,7 +10096,7 @@ dependencies = [ "parity-scale-codec", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -10199,9 +10199,9 @@ dependencies = [ [[package]] name = "static_init_macro" -version = "1.0.3" +version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b07d15a19b60c12b1a4d927f86bf124037e899c962017d8a198d59997cf12f1b" +checksum = "1389c88ddd739ec6d3f8f83343764a0e944cd23cfbf126a9796a714b0b6edd6f" dependencies = [ "cfg_aliases 0.1.1", "memchr", @@ -10284,7 +10284,7 @@ dependencies = [ "proc-macro2", "quote", "rustversion", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -10297,7 +10297,7 @@ dependencies = [ "proc-macro2", "quote", "rustversion", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -10385,9 +10385,9 @@ dependencies = [ [[package]] name = "syn" -version = "2.0.90" +version = "2.0.91" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "919d3b74a5dd0ccd15aeb8f93e7006bd9e14c295087c9896a110f490752bcf31" +checksum = "d53cbcb5a243bd33b7858b1d7f4aca2153490815872d86d955d6ea29f743c035" dependencies = [ "proc-macro2", "quote", @@ -10403,7 +10403,7 @@ dependencies = [ "paste", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -10467,7 +10467,7 @@ dependencies = [ "fastrand", "once_cell", "rustix", - "windows-sys 0.59.0", + "windows-sys 0.52.0", ] [[package]] @@ -10482,7 +10482,7 @@ dependencies = [ "parity-scale-codec", "patchable-async-sleep", "serai-db", - "thiserror 2.0.6", + "thiserror 2.0.9", "tokio", ] @@ -10497,9 +10497,9 @@ dependencies = [ [[package]] name = "termtree" -version = "0.4.1" +version = "0.5.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3369f5ac52d5eb6ab48c6b4ffdc8efbcad6b89c765749064ba298f2c68a16a76" +checksum = "8f50febec83f5ee1df3015341d8bd429f2d1cc62bcba7ea2076759d315084683" [[package]] name = "thiserror" @@ -10512,11 +10512,11 @@ dependencies = [ [[package]] name = "thiserror" -version = "2.0.6" +version = "2.0.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8fec2a1820ebd077e2b90c4df007bebf344cd394098a13c563957d0afc83ea47" +checksum = "f072643fd0190df67a8bab670c20ef5d8737177d6ac6b2e9a236cb096206b2cc" dependencies = [ - "thiserror-impl 2.0.6", + "thiserror-impl 2.0.9", ] [[package]] @@ -10527,18 +10527,18 @@ checksum = "4fee6c4efc90059e10f81e6d42c60a18f76588c3d74cb83a0b242a2b6c7504c1" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] name = "thiserror-impl" -version = "2.0.6" +version = "2.0.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d65750cab40f4ff1929fb1ba509e9914eb756131cef4210da8d5d700d26f6312" +checksum = "7b50fa271071aae2e6ee85f842e2e28ba8cd2c5fb67f11fcb1fd70b276f9e7d4" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -10621,9 +10621,9 @@ dependencies = [ [[package]] name = "tinyvec" -version = "1.8.0" +version = "1.8.1" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "445e881f4f6d382d5f27c034e25eb92edd7c784ceab92a0937db7f2e9471b938" +checksum = "022db8904dfa342efe721985167e9fcd16c29b226db4397ed752a761cfce81e8" dependencies = [ "tinyvec_macros", ] @@ -10660,7 +10660,7 @@ checksum = "693d596312e88961bc67d7f1f97af8a70227d9f90c31bba5806eec004978d752" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -10828,7 +10828,7 @@ checksum = "34704c8d6ebcbc939824180af020566b01a7c01f80641264eba0999f6c2b6be7" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -10932,7 +10932,7 @@ dependencies = [ "serai-db", "subtle", "tendermint-machine", - "thiserror 2.0.6", + "thiserror 2.0.9", "tokio", "zeroize", ] @@ -11087,9 +11087,9 @@ checksum = "eaea85b334db583fe3274d12b4cd1880032beab409c0d774be044d4480ab9a94" [[package]] name = "unicode-bidi" -version = "0.3.17" +version = "0.3.18" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5ab17db44d7388991a428b2ee655ce0c212e862eff1768a455c58f9aad6e7893" +checksum = "5c1cb5db39152898a79168971543b1cb5020dff7fe43c8dc468b0885f5e29df5" [[package]] name = "unicode-ident" @@ -11266,7 +11266,7 @@ dependencies = [ "log", "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", "wasm-bindgen-shared", ] @@ -11301,7 +11301,7 @@ checksum = "30d7a95b763d3c45903ed6c81f156801839e5ee968bb07e534c44df0fcd330c2" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", "wasm-bindgen-backend", "wasm-bindgen-shared", ] @@ -11392,7 +11392,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1dfcdb72d96f01e6c85b6bf20102e7423bdbaad5c337301bab2bbf253d26413c" dependencies = [ "indexmap 2.7.0", - "semver 1.0.23", + "semver 1.0.24", ] [[package]] @@ -11610,7 +11610,7 @@ checksum = "ca7af9bb3ee875c4907835e607a275d10b04d15623d3aebe01afe8fbd3f85050" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -11693,7 +11693,7 @@ version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb" dependencies = [ - "windows-sys 0.59.0", + "windows-sys 0.48.0", ] [[package]] @@ -11752,7 +11752,7 @@ checksum = "2bbd5b46c938e506ecbce286b6628a02171d56153ba733b6c741fc627ec9579b" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -11763,7 +11763,7 @@ checksum = "053c4c462dc91d3b1504c6fe5a726dd15e216ba718e84a0e46a88fbe5ded3515" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -12064,7 +12064,7 @@ checksum = "fa4f8080344d4671fb4e831a13ad1e68092748387dfc4f55e356242fae12ce3e" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] @@ -12084,7 +12084,7 @@ checksum = "ce36e65b0d2999d2aafac989fb249189a141aee1f53c612c1f37d72631959f69" dependencies = [ "proc-macro2", "quote", - "syn 2.0.90", + "syn 2.0.91", ] [[package]] diff --git a/common/db/Cargo.toml b/common/db/Cargo.toml index 66f798a4..53ff012a 100644 --- a/common/db/Cargo.toml +++ b/common/db/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "serai-db" -version = "0.1.0" +version = "0.1.1" description = "A simple database trait and backends for it" license = "MIT" repository = "https://github.com/serai-dex/serai/tree/develop/common/db" @@ -18,7 +18,7 @@ workspace = true [dependencies] parity-db = { version = "0.4", default-features = false, optional = true } -rocksdb = { version = 
"0.21", default-features = false, features = ["zstd"], optional = true } +rocksdb = { version = "0.23", default-features = false, features = ["zstd"], optional = true } [features] parity-db = ["dep:parity-db"] diff --git a/common/db/README.md b/common/db/README.md new file mode 100644 index 00000000..83d4735f --- /dev/null +++ b/common/db/README.md @@ -0,0 +1,8 @@ +# Serai DB + +An inefficient, minimal abstraction around databases. + +The abstraction offers `get`, `put`, and `del` with helper functions and macros +built on top. Database iteration is not offered, forcing the caller to manually +implement indexing schemes. This ensures wide compatibility across abstracted +databases. diff --git a/common/db/src/lib.rs b/common/db/src/lib.rs index 1c08fe3d..72ff4367 100644 --- a/common/db/src/lib.rs +++ b/common/db/src/lib.rs @@ -14,26 +14,43 @@ mod parity_db; #[cfg(feature = "parity-db")] pub use parity_db::{ParityDb, new_parity_db}; -/// An object implementing get. +/// An object implementing `get`. pub trait Get { + /// Get a value from the database. fn get(&self, key: impl AsRef<[u8]>) -> Option>; } -/// An atomic database operation. +/// An atomic database transaction. +/// +/// A transaction is only required to atomically commit. It is not required that two `Get` calls +/// made with the same transaction return the same result, if another transaction wrote to that +/// key. +/// +/// If two transactions are created, and both write (including deletions) to the same key, behavior +/// is undefined. The transaction may block, deadlock, panic, overwrite one of the two values +/// randomly, or any other action, at time of write or at time of commit. #[must_use] pub trait DbTxn: Send + Get { + /// Write a value to this key. fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>); + /// Delete the value from this key. fn del(&mut self, key: impl AsRef<[u8]>); + /// Commit this transaction. fn commit(self); } -/// A database supporting atomic operations. +/// A database supporting atomic transaction. pub trait Db: 'static + Send + Sync + Clone + Get { + /// The type representing a database transaction. type Transaction<'a>: DbTxn; + /// Calculate a key for a database entry. + /// + /// Keys are separated by the database, the item within the database, and the item's key itself. fn key(db_dst: &'static [u8], item_dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec { let db_len = u8::try_from(db_dst.len()).unwrap(); let dst_len = u8::try_from(item_dst.len()).unwrap(); [[db_len].as_ref(), db_dst, [dst_len].as_ref(), item_dst, key.as_ref()].concat() } + /// Open a new transaction. 
  fn txn(&mut self) -> Self::Transaction<'_>;
}
diff --git a/common/db/src/mem.rs b/common/db/src/mem.rs
index ecac300e..d24aa109 100644
--- a/common/db/src/mem.rs
+++ b/common/db/src/mem.rs
@@ -11,7 +11,7 @@ use crate::*;
#[derive(PartialEq, Eq, Debug)]
pub struct MemDbTxn<'a>(&'a MemDb, HashMap<Vec<u8>, Vec<u8>>, HashSet<Vec<u8>>);

-impl<'a> Get for MemDbTxn<'a> {
+impl Get for MemDbTxn<'_> {
  fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
    if self.2.contains(key.as_ref()) {
      return None;
@@ -23,7 +23,7 @@
      .or_else(|| self.0 .0.read().unwrap().get(key.as_ref()).cloned())
  }
}
-impl<'a> DbTxn for MemDbTxn<'a> {
+impl DbTxn for MemDbTxn<'_> {
  fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
    self.2.remove(key.as_ref());
    self.1.insert(key.as_ref().to_vec(), value.as_ref().to_vec());

From e67e301fc2ca071917970e64491ec1e010c1970a Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Mon, 30 Dec 2024 05:21:26 -0500
Subject: [PATCH 220/368] Have the processor verify the published Batches match expectations

---
 processor/bin/src/lib.rs                      | 12 ++++-
 processor/scanner/Cargo.toml                  |  2 +
 processor/scanner/src/db.rs                   | 30 +++++++++---
 processor/scanner/src/lib.rs                  |  6 +++
 processor/scanner/src/report/db.rs            | 30 +++++++++---
 processor/scanner/src/report/mod.rs           | 49 +++++++++----------
 processor/scanner/src/scan/mod.rs             |  1 +
 processor/scanner/src/substrate/db.rs         | 13 +++++
 processor/scanner/src/substrate/mod.rs        | 21 +++++++-
 substrate/client/src/serai/in_instructions.rs |  7 ---
 substrate/client/tests/batch.rs               |  7 ---
 .../client/tests/common/in_instructions.rs    |  1 -
 substrate/in-instructions/pallet/src/lib.rs   |  8 ---
 .../in-instructions/primitives/src/lib.rs     |  4 +-
 14 files changed, 124 insertions(+), 67 deletions(-)

diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs
index ef83d7e0..0fc7257e 100644
--- a/processor/bin/src/lib.rs
+++ b/processor/bin/src/lib.rs
@@ -280,7 +280,13 @@ pub async fn main_loop<
    // Substrate sets this limit to prevent DoSs from malicious validator sets
    // That bound lets us consume this txn in the following loop body, as an optimization
    assert!(batches.len() <= 1);
-    for messages::substrate::ExecutedBatch { id, in_instructions } in batches {
+    for messages::substrate::ExecutedBatch {
+      id,
+      publisher,
+      in_instructions_hash,
+      in_instruction_results,
+    } in batches
+    {
      let key_to_activate =
        KeyToActivate::<KeyFor<S>>::try_recv(txn.as_mut().unwrap()).map(|key| key.0);

@@ -288,7 +294,9 @@
      let _: () = scanner.acknowledge_batch(
        txn.take().unwrap(),
        id,
-        in_instructions,
+        publisher,
+        in_instructions_hash,
+        in_instruction_results,
        /* `acknowledge_batch` takes burns to optimize handling returns with standard
           payments. That's why handling these with a Batch (and not waiting until the
That's why handling these with a Batch (and not waiting until the diff --git a/processor/scanner/Cargo.toml b/processor/scanner/Cargo.toml index c46adde3..1fc70e0f 100644 --- a/processor/scanner/Cargo.toml +++ b/processor/scanner/Cargo.toml @@ -24,6 +24,7 @@ scale = { package = "parity-scale-codec", version = "3", default-features = fals borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } # Cryptography +blake2 = { version = "0.10", default-features = false, features = ["std"] } group = { version = "0.13", default-features = false } # Application @@ -35,6 +36,7 @@ serai-db = { path = "../../common/db" } messages = { package = "serai-processor-messages", path = "../messages" } serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] } +serai-validator-sets-primitives = { path = "../../substrate/validator-sets/primitives", default-features = false, features = ["std", "borsh"] } serai-in-instructions-primitives = { path = "../../substrate/in-instructions/primitives", default-features = false, features = ["std", "borsh"] } serai-coins-primitives = { path = "../../substrate/coins/primitives", default-features = false, features = ["std", "borsh"] } diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index a985ba43..80b716ae 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -7,8 +7,9 @@ use scale::{Encode, Decode, IoReader}; use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db, db_channel}; -use serai_in_instructions_primitives::{InInstructionWithBalance, Batch}; use serai_coins_primitives::OutInstructionWithBalance; +use serai_validator_sets_primitives::Session; +use serai_in_instructions_primitives::{InInstructionWithBalance, Batch}; use primitives::{EncodableG, ReceivedOutput}; @@ -25,11 +26,13 @@ impl Borshy for T {} #[derive(BorshSerialize, BorshDeserialize)] struct SeraiKeyDbEntry { activation_block_number: u64, + session: Session, key: K, } #[derive(Clone)] pub(crate) struct SeraiKey { + pub(crate) session: Session, pub(crate) key: K, pub(crate) stage: LifetimeStage, pub(crate) activation_block_number: u64, @@ -165,7 +168,7 @@ impl ScannerGlobalDb { // If this new key retires a key, mark the block at which forwarding explicitly occurs notable // This lets us obtain synchrony over the transactions we'll make to accomplish this - if let Some(key_retired_by_this) = keys.last() { + let this_keys_session = if let Some(key_retired_by_this) = keys.last() { NotableBlock::set( txn, Lifetime::calculate::( @@ -182,10 +185,17 @@ impl ScannerGlobalDb { ), &(), ); - } + Session(key_retired_by_this.session.0 + 1) + } else { + Session(0) + }; // Push and save the next key - keys.push(SeraiKeyDbEntry { activation_block_number, key: EncodableG(key) }); + keys.push(SeraiKeyDbEntry { + activation_block_number, + session: this_keys_session, + key: EncodableG(key), + }); ActiveKeys::set(txn, &keys); // Now tidy the keys, ensuring this has a maximum length of 2 @@ -236,6 +246,7 @@ impl ScannerGlobalDb { raw_keys.get(i + 1).map(|key| key.activation_block_number), ); keys.push(SeraiKey { + session: raw_keys[i].session, key: raw_keys[i].key.0, stage, activation_block_number: raw_keys[i].activation_block_number, @@ -477,6 +488,7 @@ db_channel! 
{ } pub(crate) struct InInstructionData { + pub(crate) session_to_sign_batch: Session, pub(crate) external_key_for_session_to_sign_batch: KeyFor, pub(crate) returnable_in_instructions: Vec>, } @@ -488,7 +500,8 @@ impl ScanToReportDb { block_number: u64, data: &InInstructionData, ) { - let mut buf = data.external_key_for_session_to_sign_batch.to_bytes().as_ref().to_vec(); + let mut buf = data.session_to_sign_batch.encode(); + buf.extend(data.external_key_for_session_to_sign_batch.to_bytes().as_ref()); for returnable_in_instruction in &data.returnable_in_instructions { returnable_in_instruction.write(&mut buf).unwrap(); } @@ -510,6 +523,7 @@ impl ScanToReportDb { ); let mut buf = data.returnable_in_instructions.as_slice(); + let session_to_sign_batch = Session::decode(&mut buf).unwrap(); let external_key_for_session_to_sign_batch = { let mut external_key_for_session_to_sign_batch = as GroupEncoding>::Repr::default(); @@ -523,7 +537,11 @@ impl ScanToReportDb { while !buf.is_empty() { returnable_in_instructions.push(Returnable::read(&mut buf).unwrap()); } - InInstructionData { external_key_for_session_to_sign_batch, returnable_in_instructions } + InInstructionData { + session_to_sign_batch, + external_key_for_session_to_sign_batch, + returnable_in_instructions, + } } } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 5046753c..1ef4f8c2 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -11,6 +11,7 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, Db}; use serai_primitives::{NetworkId, Coin, Amount}; +use serai_validator_sets_primitives::Session; use serai_coins_primitives::OutInstructionWithBalance; use primitives::{task::*, Address, ReceivedOutput, Block, Payment}; @@ -437,10 +438,13 @@ impl Scanner { /// `queue_burns`. Doing so will cause them to be executed multiple times. /// /// The calls to this function must be ordered with regards to `queue_burns`. 
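+  ///
+  /// `publisher` and `in_instructions_hash` are compared against the values recorded when the
+  /// `Batch` was created, ensuring the `Batch` acknowledged on-chain is the `Batch` expected.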
+ #[allow(clippy::too_many_arguments)] pub fn acknowledge_batch( &mut self, mut txn: impl DbTxn, batch_id: u32, + publisher: Session, + in_instructions_hash: [u8; 32], in_instruction_results: Vec, burns: Vec, key_to_activate: Option>, @@ -451,6 +455,8 @@ impl Scanner { substrate::queue_acknowledge_batch::( &mut txn, batch_id, + publisher, + in_instructions_hash, in_instruction_results, burns, key_to_activate, diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs index 186accac..b6503e86 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/report/db.rs @@ -8,9 +8,17 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db}; use serai_primitives::Balance; +use serai_validator_sets_primitives::Session; use crate::{ScannerFeed, KeyFor, AddressFor}; +#[derive(BorshSerialize, BorshDeserialize)] +pub(crate) struct BatchInfo { + pub(crate) block_number: u64, + pub(crate) publisher: Session, + pub(crate) in_instructions_hash: [u8; 32], +} + create_db!( ScannerReport { // The next block to potentially report @@ -18,10 +26,11 @@ create_db!( // The next Batch ID to use NextBatchId: () -> u32, - // The block number which caused a batch - BlockNumberForBatch: (batch: u32) -> u64, + // The information needed to verify a batch + InfoForBatch: (batch: u32) -> BatchInfo, // The external key for the session which should sign a batch + // TODO: Merge this with InfoForBatch ExternalKeyForSessionToSignBatch: (batch: u32) -> Vec, // The return addresses for the InInstructions within a Batch @@ -46,15 +55,24 @@ impl ReportDb { NextToPotentiallyReportBlock::get(getter) } - pub(crate) fn acquire_batch_id(txn: &mut impl DbTxn, block_number: u64) -> u32 { + pub(crate) fn acquire_batch_id(txn: &mut impl DbTxn) -> u32 { let id = NextBatchId::get(txn).unwrap_or(0); NextBatchId::set(txn, &(id + 1)); - BlockNumberForBatch::set(txn, id, &block_number); id } - pub(crate) fn take_block_number_for_batch(txn: &mut impl DbTxn, id: u32) -> Option { - BlockNumberForBatch::take(txn, id) + pub(crate) fn save_batch_info( + txn: &mut impl DbTxn, + id: u32, + block_number: u64, + publisher: Session, + in_instructions_hash: [u8; 32], + ) { + InfoForBatch::set(txn, id, &BatchInfo { block_number, publisher, in_instructions_hash }); + } + + pub(crate) fn take_info_for_batch(txn: &mut impl DbTxn, id: u32) -> Option { + InfoForBatch::take(txn, id) } pub(crate) fn save_external_key_for_session_to_sign_batch( diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index afb1b672..f5208460 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -1,28 +1,28 @@ use core::{marker::PhantomData, future::Future}; +use blake2::{digest::typenum::U32, Digest, Blake2b}; + use scale::Encode; use serai_db::{DbTxn, Db}; -use serai_primitives::BlockHash; use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; use primitives::task::ContinuallyRan; use crate::{ db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToReportDb, Batches, BatchesToSign}, - index, scan::next_to_scan_for_outputs_block, ScannerFeed, KeyFor, }; mod db; -pub(crate) use db::ReturnInformation; +pub(crate) use db::{BatchInfo, ReturnInformation}; use db::ReportDb; -pub(crate) fn take_block_number_for_batch( +pub(crate) fn take_info_for_batch( txn: &mut impl DbTxn, id: u32, -) -> Option { - ReportDb::::take_block_number_for_batch(txn, id) +) -> Option { + ReportDb::::take_info_for_batch(txn, id) } pub(crate) fn 
take_external_key_for_session_to_sign_batch( @@ -88,33 +88,28 @@ impl ContinuallyRan for ReportTask { let next_to_potentially_report = ReportDb::::next_to_potentially_report_block(&self.db) .expect("ReportTask run before writing the start block"); - for b in next_to_potentially_report ..= highest_reportable { + for block_number in next_to_potentially_report ..= highest_reportable { let mut txn = self.db.txn(); // Receive the InInstructions for this block // We always do this as we can't trivially tell if we should recv InInstructions before we // do let InInstructionData { + session_to_sign_batch, external_key_for_session_to_sign_batch, returnable_in_instructions: in_instructions, - } = ScanToReportDb::::recv_in_instructions(&mut txn, b); - let notable = ScannerGlobalDb::::is_block_notable(&txn, b); + } = ScanToReportDb::::recv_in_instructions(&mut txn, block_number); + let notable = ScannerGlobalDb::::is_block_notable(&txn, block_number); if !notable { assert!(in_instructions.is_empty(), "block wasn't notable yet had InInstructions"); } // If this block is notable, create the Batch(s) for it if notable { let network = S::NETWORK; - let block_hash = index::block_id(&txn, b); - let mut batch_id = ReportDb::::acquire_batch_id(&mut txn, b); + let mut batch_id = ReportDb::::acquire_batch_id(&mut txn); // start with empty batch - let mut batches = vec![Batch { - network, - id: batch_id, - block: BlockHash(block_hash), - instructions: vec![], - }]; + let mut batches = vec![Batch { network, id: batch_id, instructions: vec![] }]; // We also track the return information for the InInstructions within a Batch in case // they error let mut return_information = vec![vec![]]; @@ -131,15 +126,10 @@ impl ContinuallyRan for ReportTask { let in_instruction = batch.instructions.pop().unwrap(); // bump the id for the new batch - batch_id = ReportDb::::acquire_batch_id(&mut txn, b); + batch_id = ReportDb::::acquire_batch_id(&mut txn); // make a new batch with this instruction included - batches.push(Batch { - network, - id: batch_id, - block: BlockHash(block_hash), - instructions: vec![in_instruction], - }); + batches.push(Batch { network, id: batch_id, instructions: vec![in_instruction] }); // Since we're allocating a new batch, allocate a new set of return addresses for it return_information.push(vec![]); } @@ -152,10 +142,17 @@ impl ContinuallyRan for ReportTask { .push(return_address.map(|address| ReturnInformation { address, balance })); } - // Save the return addresses to the database + // Now that we've finalized the Batches, save the information for each to the database assert_eq!(batches.len(), return_information.len()); for (batch, return_information) in batches.iter().zip(&return_information) { assert_eq!(batch.instructions.len(), return_information.len()); + ReportDb::::save_batch_info( + &mut txn, + batch.id, + block_number, + session_to_sign_batch, + Blake2b::::digest(batch.instructions.encode()).into(), + ); ReportDb::::save_external_key_for_session_to_sign_batch( &mut txn, batch.id, @@ -171,7 +168,7 @@ impl ContinuallyRan for ReportTask { } // Update the next to potentially report block - ReportDb::::set_next_to_potentially_report_block(&mut txn, b + 1); + ReportDb::::set_next_to_potentially_report_block(&mut txn, block_number + 1); txn.commit(); } diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 0ebdf992..14506092 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -349,6 +349,7 @@ impl ContinuallyRan for ScanTask { 
&mut txn, b, &InInstructionData { + session_to_sign_batch: keys[0].session, external_key_for_session_to_sign_batch: keys[0].key, returnable_in_instructions: in_instructions, }, diff --git a/processor/scanner/src/substrate/db.rs b/processor/scanner/src/substrate/db.rs index c1a1b0e2..d0037ac8 100644 --- a/processor/scanner/src/substrate/db.rs +++ b/processor/scanner/src/substrate/db.rs @@ -6,12 +6,15 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db, db_channel}; use serai_coins_primitives::OutInstructionWithBalance; +use serai_validator_sets_primitives::Session; use crate::{ScannerFeed, KeyFor}; #[derive(BorshSerialize, BorshDeserialize)] struct AcknowledgeBatchEncodable { batch_id: u32, + publisher: Session, + in_instructions_hash: [u8; 32], in_instruction_results: Vec, burns: Vec, key_to_activate: Option>, @@ -25,6 +28,8 @@ enum ActionEncodable { pub(crate) struct AcknowledgeBatch { pub(crate) batch_id: u32, + pub(crate) publisher: Session, + pub(crate) in_instructions_hash: [u8; 32], pub(crate) in_instruction_results: Vec, pub(crate) burns: Vec, pub(crate) key_to_activate: Option>, @@ -46,6 +51,8 @@ impl SubstrateDb { pub(crate) fn queue_acknowledge_batch( txn: &mut impl DbTxn, batch_id: u32, + publisher: Session, + in_instructions_hash: [u8; 32], in_instruction_results: Vec, burns: Vec, key_to_activate: Option>, @@ -54,6 +61,8 @@ impl SubstrateDb { txn, &ActionEncodable::AcknowledgeBatch(AcknowledgeBatchEncodable { batch_id, + publisher, + in_instructions_hash, in_instruction_results, burns, key_to_activate: key_to_activate.map(|key| key.to_bytes().as_ref().to_vec()), @@ -69,11 +78,15 @@ impl SubstrateDb { Some(match action_encodable { ActionEncodable::AcknowledgeBatch(AcknowledgeBatchEncodable { batch_id, + publisher, + in_instructions_hash, in_instruction_results, burns, key_to_activate, }) => Action::AcknowledgeBatch(AcknowledgeBatch { batch_id, + publisher, + in_instructions_hash, in_instruction_results, burns, key_to_activate: key_to_activate.map(|key| { diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index ce28470d..aced7d53 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -3,6 +3,7 @@ use core::{marker::PhantomData, future::Future}; use serai_db::{DbTxn, Db}; use serai_coins_primitives::{OutInstruction, OutInstructionWithBalance}; +use serai_validator_sets_primitives::Session; use primitives::task::ContinuallyRan; use crate::{ @@ -16,6 +17,8 @@ use db::*; pub(crate) fn queue_acknowledge_batch( txn: &mut impl DbTxn, batch_id: u32, + publisher: Session, + in_instructions_hash: [u8; 32], in_instruction_results: Vec, burns: Vec, key_to_activate: Option>, @@ -23,6 +26,8 @@ pub(crate) fn queue_acknowledge_batch( SubstrateDb::::queue_acknowledge_batch( txn, batch_id, + publisher, + in_instructions_hash, in_instruction_results, burns, key_to_activate, @@ -67,17 +72,31 @@ impl ContinuallyRan for SubstrateTask { match action { Action::AcknowledgeBatch(AcknowledgeBatch { batch_id, + publisher, + in_instructions_hash, in_instruction_results, mut burns, key_to_activate, }) => { // Check if we have the information for this batch - let Some(block_number) = report::take_block_number_for_batch::(&mut txn, batch_id) + let Some(report::BatchInfo { + block_number, + publisher: expected_publisher, + in_instructions_hash: expected_in_instructions_hash, + }) = report::take_info_for_batch::(&mut txn, batch_id) else { // If we don't, drop this txn (restoring the action to 
the database) drop(txn); return Ok(made_progress); }; + assert_eq!( + publisher, expected_publisher, + "batch acknowledged on-chain was acknowledged by an unexpected publisher" + ); + assert_eq!( + in_instructions_hash, expected_in_instructions_hash, + "batch acknowledged on-chain was distinct" + ); { let external_key_for_session_to_sign_batch = diff --git a/substrate/client/src/serai/in_instructions.rs b/substrate/client/src/serai/in_instructions.rs index a8b47bfc..50c9ed96 100644 --- a/substrate/client/src/serai/in_instructions.rs +++ b/substrate/client/src/serai/in_instructions.rs @@ -13,13 +13,6 @@ const PALLET: &str = "InInstructions"; #[derive(Clone, Copy)] pub struct SeraiInInstructions<'a>(pub(crate) &'a TemporalSerai<'a>); impl<'a> SeraiInInstructions<'a> { - pub async fn latest_block_for_network( - &self, - network: NetworkId, - ) -> Result, SeraiError> { - self.0.storage(PALLET, "LatestNetworkBlock", network).await - } - pub async fn last_batch_for_network( &self, network: NetworkId, diff --git a/substrate/client/tests/batch.rs b/substrate/client/tests/batch.rs index 17e4d374..c19a4422 100644 --- a/substrate/client/tests/batch.rs +++ b/substrate/client/tests/batch.rs @@ -25,9 +25,6 @@ serai_test!( let network = NetworkId::Bitcoin; let id = 0; - let mut block_hash = BlockHash([0; 32]); - OsRng.fill_bytes(&mut block_hash.0); - let mut address = SeraiAddress::new([0; 32]); OsRng.fill_bytes(&mut address.0); @@ -38,7 +35,6 @@ serai_test!( let batch = Batch { network, id, - block: block_hash, instructions: vec![InInstructionWithBalance { instruction: InInstruction::Transfer(address), balance, @@ -50,15 +46,12 @@ serai_test!( let serai = serai.as_of(block); { let serai = serai.in_instructions(); - let latest_finalized = serai.latest_block_for_network(network).await.unwrap(); - assert_eq!(latest_finalized, Some(block_hash)); let batches = serai.batch_events().await.unwrap(); assert_eq!( batches, vec![InInstructionsEvent::Batch { network, id, - block: block_hash, instructions_hash: Blake2b::::digest(batch.instructions.encode()).into(), }] ); diff --git a/substrate/client/tests/common/in_instructions.rs b/substrate/client/tests/common/in_instructions.rs index 103940ab..5f29f2ba 100644 --- a/substrate/client/tests/common/in_instructions.rs +++ b/substrate/client/tests/common/in_instructions.rs @@ -52,7 +52,6 @@ pub async fn provide_batch(serai: &Serai, batch: Batch) -> [u8; 32] { vec![InInstructionsEvent::Batch { network: batch.network, id: batch.id, - block: batch.block, instructions_hash: Blake2b::::digest(batch.instructions.encode()).into(), }], ); diff --git a/substrate/in-instructions/pallet/src/lib.rs b/substrate/in-instructions/pallet/src/lib.rs index 1cb05c40..5b394c3d 100644 --- a/substrate/in-instructions/pallet/src/lib.rs +++ b/substrate/in-instructions/pallet/src/lib.rs @@ -89,12 +89,6 @@ pub mod pallet { #[pallet::storage] pub(crate) type Halted = StorageMap<_, Identity, NetworkId, (), OptionQuery>; - // The latest block a network has acknowledged as finalized - #[pallet::storage] - #[pallet::getter(fn latest_network_block)] - pub(crate) type LatestNetworkBlock = - StorageMap<_, Identity, NetworkId, BlockHash, OptionQuery>; - impl Pallet { // Use a dedicated transaction layer when executing this InInstruction // This lets it individually error without causing any storage modifications @@ -262,11 +256,9 @@ pub mod pallet { let batch = batch.batch; - LatestNetworkBlock::::insert(batch.network, batch.block); Self::deposit_event(Event::Batch { network: batch.network, id: batch.id, 
- block: batch.block, instructions_hash: blake2_256(&batch.instructions.encode()), }); for (i, instruction) in batch.instructions.into_iter().enumerate() { diff --git a/substrate/in-instructions/primitives/src/lib.rs b/substrate/in-instructions/primitives/src/lib.rs index 1455e423..ef88061b 100644 --- a/substrate/in-instructions/primitives/src/lib.rs +++ b/substrate/in-instructions/primitives/src/lib.rs @@ -19,8 +19,7 @@ use sp_application_crypto::sr25519::Signature; use sp_std::vec::Vec; use sp_runtime::RuntimeDebug; -#[rustfmt::skip] -use serai_primitives::{BlockHash, Balance, NetworkId, SeraiAddress, ExternalAddress, system_address}; +use serai_primitives::{Balance, NetworkId, SeraiAddress, ExternalAddress, system_address}; mod shorthand; pub use shorthand::*; @@ -107,7 +106,6 @@ pub struct InInstructionWithBalance { pub struct Batch { pub network: NetworkId, pub id: u32, - pub block: BlockHash, pub instructions: Vec, } From 5b74fc8ac1dfd363c83e73df978fe18d3fcbe126 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 30 Dec 2024 05:33:53 -0500 Subject: [PATCH 221/368] Merge ExternalKeyForSessionToSignBatch into InfoForBatch --- processor/scanner/src/report/db.rs | 46 ++++++++++---------------- processor/scanner/src/report/mod.rs | 17 ++-------- processor/scanner/src/substrate/mod.rs | 20 +++++------ 3 files changed, 28 insertions(+), 55 deletions(-) diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs index b6503e86..209150d6 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/report/db.rs @@ -10,12 +10,14 @@ use serai_db::{Get, DbTxn, create_db}; use serai_primitives::Balance; use serai_validator_sets_primitives::Session; +use primitives::EncodableG; use crate::{ScannerFeed, KeyFor, AddressFor}; #[derive(BorshSerialize, BorshDeserialize)] -pub(crate) struct BatchInfo { +pub(crate) struct BatchInfo { pub(crate) block_number: u64, - pub(crate) publisher: Session, + pub(crate) session_to_sign_batch: Session, + pub(crate) external_key_for_session_to_sign_batch: K, pub(crate) in_instructions_hash: [u8; 32], } @@ -27,11 +29,7 @@ create_db!( NextBatchId: () -> u32, // The information needed to verify a batch - InfoForBatch: (batch: u32) -> BatchInfo, - - // The external key for the session which should sign a batch - // TODO: Merge this with InfoForBatch - ExternalKeyForSessionToSignBatch: (batch: u32) -> Vec, + InfoForBatch: (batch: u32) -> BatchInfo>, // The return addresses for the InInstructions within a Batch SerializedReturnAddresses: (batch: u32) -> Vec, @@ -65,37 +63,27 @@ impl ReportDb { txn: &mut impl DbTxn, id: u32, block_number: u64, - publisher: Session, + session_to_sign_batch: Session, + external_key_for_session_to_sign_batch: KeyFor, in_instructions_hash: [u8; 32], ) { - InfoForBatch::set(txn, id, &BatchInfo { block_number, publisher, in_instructions_hash }); - } - - pub(crate) fn take_info_for_batch(txn: &mut impl DbTxn, id: u32) -> Option { - InfoForBatch::take(txn, id) - } - - pub(crate) fn save_external_key_for_session_to_sign_batch( - txn: &mut impl DbTxn, - id: u32, - external_key_for_session_to_sign_batch: &KeyFor, - ) { - ExternalKeyForSessionToSignBatch::set( + InfoForBatch::set( txn, id, - &external_key_for_session_to_sign_batch.to_bytes().as_ref().to_vec(), + &BatchInfo { + block_number, + session_to_sign_batch, + external_key_for_session_to_sign_batch: EncodableG(external_key_for_session_to_sign_batch), + in_instructions_hash, + }, ); } - pub(crate) fn take_external_key_for_session_to_sign_batch( + pub(crate) 
fn take_info_for_batch( txn: &mut impl DbTxn, id: u32, - ) -> Option> { - ExternalKeyForSessionToSignBatch::get(txn, id).map(|key_vec| { - let mut key = as GroupEncoding>::Repr::default(); - key.as_mut().copy_from_slice(&key_vec); - KeyFor::::from_bytes(&key).unwrap() - }) + ) -> Option>>> { + InfoForBatch::take(txn, id) } pub(crate) fn save_return_information( diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index f5208460..1747fde3 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -7,7 +7,7 @@ use serai_db::{DbTxn, Db}; use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; -use primitives::task::ContinuallyRan; +use primitives::{EncodableG, task::ContinuallyRan}; use crate::{ db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToReportDb, Batches, BatchesToSign}, scan::next_to_scan_for_outputs_block, @@ -21,17 +21,10 @@ use db::ReportDb; pub(crate) fn take_info_for_batch( txn: &mut impl DbTxn, id: u32, -) -> Option { +) -> Option>>> { ReportDb::::take_info_for_batch(txn, id) } -pub(crate) fn take_external_key_for_session_to_sign_batch( - txn: &mut impl DbTxn, - id: u32, -) -> Option> { - ReportDb::::take_external_key_for_session_to_sign_batch(txn, id) -} - pub(crate) fn take_return_information( txn: &mut impl DbTxn, id: u32, @@ -151,13 +144,9 @@ impl ContinuallyRan for ReportTask { batch.id, block_number, session_to_sign_batch, + external_key_for_session_to_sign_batch, Blake2b::::digest(batch.instructions.encode()).into(), ); - ReportDb::::save_external_key_for_session_to_sign_batch( - &mut txn, - batch.id, - &external_key_for_session_to_sign_batch, - ); ReportDb::::save_return_information(&mut txn, batch.id, return_information); } diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index aced7d53..106397b0 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -81,7 +81,8 @@ impl ContinuallyRan for SubstrateTask { // Check if we have the information for this batch let Some(report::BatchInfo { block_number, - publisher: expected_publisher, + session_to_sign_batch, + external_key_for_session_to_sign_batch, in_instructions_hash: expected_in_instructions_hash, }) = report::take_info_for_batch::(&mut txn, batch_id) else { @@ -90,7 +91,7 @@ impl ContinuallyRan for SubstrateTask { return Ok(made_progress); }; assert_eq!( - publisher, expected_publisher, + publisher, session_to_sign_batch, "batch acknowledged on-chain was acknowledged by an unexpected publisher" ); assert_eq!( @@ -98,16 +99,11 @@ impl ContinuallyRan for SubstrateTask { "batch acknowledged on-chain was distinct" ); - { - let external_key_for_session_to_sign_batch = - report::take_external_key_for_session_to_sign_batch::(&mut txn, batch_id) - .unwrap(); - AcknowledgedBatches::send( - &mut txn, - &external_key_for_session_to_sign_batch, - batch_id, - ); - } + AcknowledgedBatches::send( + &mut txn, + &external_key_for_session_to_sign_batch.0, + batch_id, + ); // Mark we made progress and handle this made_progress = true; From 445c49f0303a3793627be818f09ea8e418180e17 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 30 Dec 2024 06:11:47 -0500 Subject: [PATCH 222/368] Have the scanner's report task ensure handovers only occur if `Batch`s are valid This is incomplete at this time. The logic is fine, but needs to be moved to a distinct location to handle singular blocks which produce multiple Batches. 
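
(A session's "handover Batch" is its first Batch, the one which retires the prior validator set;
see the comment added to `report/mod.rs` below.)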
--- processor/scanner/src/report/db.rs | 17 ++++++++ processor/scanner/src/report/mod.rs | 60 +++++++++++++++++++++++++- processor/scanner/src/substrate/db.rs | 14 ++++++ processor/scanner/src/substrate/mod.rs | 6 ++- 4 files changed, 95 insertions(+), 2 deletions(-) diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs index 209150d6..20a78337 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/report/db.rs @@ -25,6 +25,10 @@ create_db!( ScannerReport { // The next block to potentially report NextToPotentiallyReportBlock: () -> u64, + + // The last session to sign a Batch and their first Batch signed + LastSessionToSignBatchAndFirstBatch: () -> (Session, u32), + // The next Batch ID to use NextBatchId: () -> u32, @@ -43,6 +47,19 @@ pub(crate) struct ReturnInformation { pub(crate) struct ReportDb(PhantomData); impl ReportDb { + pub(crate) fn set_last_session_to_sign_batch_and_first_batch( + txn: &mut impl DbTxn, + session: Session, + id: u32, + ) { + LastSessionToSignBatchAndFirstBatch::set(txn, &(session, id)); + } + pub(crate) fn last_session_to_sign_batch_and_first_batch( + getter: &impl Get, + ) -> Option<(Session, u32)> { + LastSessionToSignBatchAndFirstBatch::get(getter) + } + pub(crate) fn set_next_to_potentially_report_block( txn: &mut impl DbTxn, next_to_potentially_report_block: u64, diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index 1747fde3..1e1be868 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -5,13 +5,14 @@ use blake2::{digest::typenum::U32, Digest, Blake2b}; use scale::Encode; use serai_db::{DbTxn, Db}; +use serai_validator_sets_primitives::Session; use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; use primitives::{EncodableG, task::ContinuallyRan}; use crate::{ db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToReportDb, Batches, BatchesToSign}, scan::next_to_scan_for_outputs_block, - ScannerFeed, KeyFor, + substrate, ScannerFeed, KeyFor, }; mod db; @@ -92,6 +93,7 @@ impl ContinuallyRan for ReportTask { external_key_for_session_to_sign_batch, returnable_in_instructions: in_instructions, } = ScanToReportDb::::recv_in_instructions(&mut txn, block_number); + let notable = ScannerGlobalDb::::is_block_notable(&txn, block_number); if !notable { assert!(in_instructions.is_empty(), "block wasn't notable yet had InInstructions"); @@ -101,6 +103,62 @@ impl ContinuallyRan for ReportTask { let network = S::NETWORK; let mut batch_id = ReportDb::::acquire_batch_id(&mut txn); + /* + If this is the handover Batch, the first Batch signed by a session which retires the + prior validator set, then this should only be signed after the prior validator set's + actions are fully validated. + + The new session will only be responsible for signing this Batch if the prior key has + retired, successfully completed all its on-external-network actions. + + We check here the prior session has successfully completed all its on-Serai-network + actions by ensuring we've validated all Batches expected from it. Only then do we sign + the Batch confirming the handover. + + We also wait for the Batch confirming the handover to be accepted on-chain, ensuring we + don't verify the prior session's Batches, sign the handover Batch and the following + Batch, have the prior session publish a malicious Batch where our handover Batch should + be, before our following Batch becomes our handover Batch. 
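+
+          In short: the handover Batch, and the Batch immediately following it, are only signed
+          once every Batch expected of the prior session has been verified on-chain.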
+ */ + if session_to_sign_batch != Session(0) { + // We may have Session(1)'s first Batch be Batch 0 if Session(0) never publishes a + // Batch. This is fine as we'll hit the distinct Session check and then set the correct + // values into this DB entry. All other sessions must complete the handover process, + // which requires having published at least one Batch + let (last_session, first_batch) = + ReportDb::::last_session_to_sign_batch_and_first_batch(&txn) + .unwrap_or((Session(0), 0)); + // Because this boolean was expanded, we lose short-circuiting. That's fine + let handover_batch = last_session != session_to_sign_batch; + let batch_after_handover_batch = + (last_session == session_to_sign_batch) && ((first_batch + 1) == batch_id); + if handover_batch || batch_after_handover_batch { + let verified_prior_batch = substrate::last_acknowledged_batch::(&txn) + // Since `batch_id = 0` in the Session(0)-never-published-a-Batch case, we don't + // check `last_acknowledged_batch >= (batch_id - 1)` but instead this + .map(|last_acknowledged_batch| (last_acknowledged_batch + 1) >= batch_id) + // We've never verified any Batches + .unwrap_or(false); + if !verified_prior_batch { + // Drop this txn, restoring the Batch to be worked on in the future + drop(txn); + return Ok(block_number > next_to_potentially_report); + } + } + + // If this is the handover Batch, update the last session to sign a Batch + if handover_batch { + ReportDb::::set_last_session_to_sign_batch_and_first_batch( + &mut txn, + session_to_sign_batch, + batch_id, + ); + } + } + + // TODO: The above code doesn't work if we end up with two Batches (the handover and the + // following) within this one Block due to Batch size limits + // start with empty batch let mut batches = vec![Batch { network, id: batch_id, instructions: vec![] }]; // We also track the return information for the InInstructions within a Batch in case diff --git a/processor/scanner/src/substrate/db.rs b/processor/scanner/src/substrate/db.rs index d0037ac8..af6679d4 100644 --- a/processor/scanner/src/substrate/db.rs +++ b/processor/scanner/src/substrate/db.rs @@ -40,6 +40,12 @@ pub(crate) enum Action { QueueBurns(Vec), } +create_db!( + ScannerSubstrate { + LastAcknowledgedBatch: () -> u32, + } +); + db_channel!( ScannerSubstrate { Actions: () -> ActionEncodable, @@ -48,6 +54,14 @@ db_channel!( pub(crate) struct SubstrateDb(PhantomData); impl SubstrateDb { + pub(crate) fn last_acknowledged_batch(getter: &impl Get) -> Option { + LastAcknowledgedBatch::get(getter) + } + + pub(crate) fn set_last_acknowledged_batch(txn: &mut impl DbTxn, id: u32) { + LastAcknowledgedBatch::set(txn, &id) + } + pub(crate) fn queue_acknowledge_batch( txn: &mut impl DbTxn, batch_id: u32, diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index 106397b0..fddc7453 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -1,6 +1,6 @@ use core::{marker::PhantomData, future::Future}; -use serai_db::{DbTxn, Db}; +use serai_db::{Get, DbTxn, Db}; use serai_coins_primitives::{OutInstruction, OutInstructionWithBalance}; use serai_validator_sets_primitives::Session; @@ -14,6 +14,9 @@ use crate::{ mod db; use db::*; +pub(crate) fn last_acknowledged_batch(getter: &impl Get) -> Option { + SubstrateDb::::last_acknowledged_batch(getter) +} pub(crate) fn queue_acknowledge_batch( txn: &mut impl DbTxn, batch_id: u32, @@ -99,6 +102,7 @@ impl ContinuallyRan for SubstrateTask { "batch acknowledged on-chain was distinct" ); + 
SubstrateDb::<S>::set_last_acknowledged_batch(&mut txn, batch_id);
          AcknowledgedBatches::send(
            &mut txn,
            &external_key_for_session_to_sign_batch.0,

From 1de8136739417caaec1c27289af6293066d4e4ae Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Mon, 30 Dec 2024 06:16:03 -0500
Subject: [PATCH 223/368] Remove Session from VariantSignId::SlashReport

It's only there to make the VariantSignId unique across Sessions. By
localizing the VariantSignId to a Session, we avoid this, and can better
ensure we don't queue work for historic sessions.
---
 .../frost-attempt-manager/src/individual.rs | 10 +++++-----
 processor/frost-attempt-manager/src/lib.rs | 4 ++--
 processor/messages/src/lib.rs | 12 +++++++-----
 processor/signers/src/lib.rs | 18 +++++++++++++++++-
 processor/signers/src/slash_report.rs | 14 +++++++-------
 5 files changed, 38 insertions(+), 20 deletions(-)

diff --git a/processor/frost-attempt-manager/src/individual.rs b/processor/frost-attempt-manager/src/individual.rs
index 6a8b3352..e918ff02 100644
--- a/processor/frost-attempt-manager/src/individual.rs
+++ b/processor/frost-attempt-manager/src/individual.rs
@@ -14,7 +14,7 @@ use messages::sign::{VariantSignId, SignId, ProcessorMessage};

 create_db!(
   FrostAttemptManager {
-    Attempted: (id: VariantSignId) -> u32,
+    Attempted: (session: Session, id: VariantSignId) -> u32,
   }
 );

@@ -92,11 +92,11 @@ impl SigningProtocol {
     */
     {
       let mut txn = self.db.txn();
-      let prior_attempted = Attempted::get(&txn, self.id);
+      let prior_attempted = Attempted::get(&txn, self.session, self.id);
       if Some(attempt) <= prior_attempted {
         return vec![];
       }
-      Attempted::set(&mut txn, self.id, &attempt);
+      Attempted::set(&mut txn, self.session, self.id, &attempt);
       txn.commit();
     }

@@ -278,7 +278,7 @@ impl SigningProtocol {
   }

   /// Cleanup the database entries for a specified signing protocol.
-  pub(crate) fn cleanup(txn: &mut impl DbTxn, id: VariantSignId) {
-    Attempted::del(txn, id);
+  pub(crate) fn cleanup(txn: &mut impl DbTxn, session: Session, id: VariantSignId) {
+    Attempted::del(txn, session, id);
   }
 }
diff --git a/processor/frost-attempt-manager/src/lib.rs b/processor/frost-attempt-manager/src/lib.rs
index db8b0861..670d8d9f 100644
--- a/processor/frost-attempt-manager/src/lib.rs
+++ b/processor/frost-attempt-manager/src/lib.rs
@@ -45,7 +45,7 @@ impl AttemptManager {
   /// Register a signing protocol to attempt.
   ///
-  /// This ID must be unique across all sessions, attempt managers, protocols, etc.
+  /// This ID must be unique to the session, across all attempt managers, protocols, etc.
   pub fn register(&mut self, id: VariantSignId, machines: Vec<M>) -> Vec<ProcessorMessage> {
     let mut protocol =
       SigningProtocol::new(self.db.clone(), self.session, self.start_i, id, machines);
@@ -66,7 +66,7 @@ impl AttemptManager {
     } else {
       log::info!("retired signing protocol {id:?}");
     }
-    SigningProtocol::<D, M>::cleanup(txn, id);
+    SigningProtocol::<D, M>::cleanup(txn, self.session, id);
   }

   /// Handle a message for a signing protocol.
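The motivation is visible in the `Attempted` key change above: once the session is part of the database key, a `VariantSignId` only needs to be unique within its session, so `SlashReport` no longer needs a `Session` payload. A toy sketch of that keying, with a `HashMap` standing in for the `create_db!`-generated storage and simplified types (not the real serai-db API):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Session(u16);

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum VariantSignId {
  Cosign(u64),
  Batch(u32),
  // No Session payload: there's one slash report per session, and the session
  // is already part of every key this ID is stored under
  SlashReport,
  Transaction([u8; 32]),
}

fn main() {
  // (Session, VariantSignId) -> attempt number, as with the Attempted entry
  let mut attempted: HashMap<(Session, VariantSignId), u32> = HashMap::new();
  attempted.insert((Session(0), VariantSignId::SlashReport), 2);
  attempted.insert((Session(1), VariantSignId::SlashReport), 0);
  // Distinct sessions' slash-report protocols no longer collide
  assert_eq!(attempted[&(Session(0), VariantSignId::SlashReport)], 2);
  assert_eq!(attempted[&(Session(1), VariantSignId::SlashReport)], 0);
}
```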
diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index 659491d4..748cf39b 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -84,7 +84,7 @@ pub mod sign { pub enum VariantSignId { Cosign(u64), Batch(u32), - SlashReport(Session), + SlashReport, Transaction([u8; 32]), } impl fmt::Debug for VariantSignId { @@ -94,9 +94,7 @@ pub mod sign { f.debug_struct("VariantSignId::Cosign").field("0", &cosign).finish() } Self::Batch(batch) => f.debug_struct("VariantSignId::Batch").field("0", &batch).finish(), - Self::SlashReport(session) => { - f.debug_struct("VariantSignId::SlashReport").field("0", &session).finish() - } + Self::SlashReport => f.debug_struct("VariantSignId::SlashReport").finish(), Self::Transaction(tx) => { f.debug_struct("VariantSignId::Transaction").field("0", &hex::encode(tx)).finish() } @@ -189,7 +187,9 @@ pub mod substrate { #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub struct ExecutedBatch { pub id: u32, - pub in_instructions: Vec, + pub publisher: Session, + pub in_instructions_hash: [u8; 32], + pub in_instruction_results: Vec, } #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] @@ -197,6 +197,8 @@ pub mod substrate { /// Keys set on the Serai blockchain. SetKeys { serai_time: u64, session: Session, key_pair: KeyPair }, /// Slashes reported on the Serai blockchain OR the process timed out. + /// + /// This is the final message for a session, SlashesReported { session: Session }, /// A block from Serai with relevance to this processor. Block { diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index a6714fdf..d247cf8f 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -376,6 +376,12 @@ impl< /// This is a cheap call and able to be done inline from a higher-level loop. 
pub fn queue_message(&mut self, txn: &mut impl DbTxn, message: &CoordinatorMessage) { let sign_id = message.sign_id(); + + // Don't queue messages for already retired keys + if Some(sign_id.session.0) <= db::LatestRetiredSession::get(txn).map(|session| session.0) { + return; + } + let tasks = self.tasks.get(&sign_id.session); match sign_id.id { VariantSignId::Cosign(_) => { @@ -390,7 +396,7 @@ impl< tasks.batch.run_now(); } } - VariantSignId::SlashReport(_) => { + VariantSignId::SlashReport => { db::CoordinatorToSlashReportSignerMessages::send(txn, sign_id.session, message); if let Some(tasks) = tasks { tasks.slash_report.run_now(); @@ -415,6 +421,11 @@ impl< block_number: u64, block: [u8; 32], ) { + // Don't cosign blocks with already retired keys + if Some(session.0) <= db::LatestRetiredSession::get(txn).map(|session| session.0) { + return; + } + db::ToCosign::set(&mut txn, session, &(block_number, block)); txn.commit(); @@ -432,6 +443,11 @@ impl< session: Session, slash_report: &Vec, ) { + // Don't sign slash reports with already retired keys + if Some(session.0) <= db::LatestRetiredSession::get(txn).map(|session| session.0) { + return; + } + db::SlashReport::send(&mut txn, session, slash_report); txn.commit(); diff --git a/processor/signers/src/slash_report.rs b/processor/signers/src/slash_report.rs index e040798c..577ec90b 100644 --- a/processor/signers/src/slash_report.rs +++ b/processor/signers/src/slash_report.rs @@ -79,8 +79,7 @@ impl ContinuallyRan for SlashReportSignerTask { } } let mut txn = self.db.txn(); - for msg in self.attempt_manager.register(VariantSignId::SlashReport(self.session), machines) - { + for msg in self.attempt_manager.register(VariantSignId::SlashReport, machines) { SlashReportSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); } txn.commit(); @@ -102,14 +101,15 @@ impl ContinuallyRan for SlashReportSignerTask { } } Response::Signature { id, signature } => { - let VariantSignId::SlashReport(session) = id else { - panic!("SlashReportSignerTask signed a non-SlashReport") - }; - assert_eq!(session, self.session); + assert_eq!(id, VariantSignId::SlashReport); // Drain the channel SlashReport::try_recv(&mut txn, self.session).unwrap(); // Send the signature - SlashReportSignature::send(&mut txn, session, &Signature::from(signature).encode()); + SlashReportSignature::send( + &mut txn, + self.session, + &Signature::from(signature).encode(), + ); } } From 458f4fe170d94a39ccc6259b9de7aca190e27620 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 30 Dec 2024 10:18:38 -0500 Subject: [PATCH 224/368] Move where we check if we should delay reporting of Batches --- processor/scanner/src/report/db.rs | 9 +- processor/scanner/src/report/mod.rs | 131 +++++++++++++++------------- processor/signers/src/lib.rs | 4 +- 3 files changed, 82 insertions(+), 62 deletions(-) diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs index 20a78337..d5b8fbd1 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/report/db.rs @@ -5,10 +5,11 @@ use group::GroupEncoding; use scale::{Encode, Decode, IoReader}; use borsh::{BorshSerialize, BorshDeserialize}; -use serai_db::{Get, DbTxn, create_db}; +use serai_db::{Get, DbTxn, create_db, db_channel}; use serai_primitives::Balance; use serai_validator_sets_primitives::Session; +use serai_in_instructions_primitives::Batch; use primitives::EncodableG; use crate::{ScannerFeed, KeyFor, AddressFor}; @@ -40,6 +41,12 @@ create_db!( } ); +db_channel!( + ScannerReport { + InternalBatches: () 
-> (Session, EncodableG, Batch), + } +); + pub(crate) struct ReturnInformation { pub(crate) address: AddressFor, pub(crate) balance: Balance, diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index 1e1be868..d3c2995a 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -16,7 +16,7 @@ use crate::{ }; mod db; -pub(crate) use db::{BatchInfo, ReturnInformation}; +pub(crate) use db::{BatchInfo, ReturnInformation, InternalBatches}; use db::ReportDb; pub(crate) fn take_info_for_batch( @@ -103,62 +103,6 @@ impl ContinuallyRan for ReportTask { let network = S::NETWORK; let mut batch_id = ReportDb::::acquire_batch_id(&mut txn); - /* - If this is the handover Batch, the first Batch signed by a session which retires the - prior validator set, then this should only be signed after the prior validator set's - actions are fully validated. - - The new session will only be responsible for signing this Batch if the prior key has - retired, successfully completed all its on-external-network actions. - - We check here the prior session has successfully completed all its on-Serai-network - actions by ensuring we've validated all Batches expected from it. Only then do we sign - the Batch confirming the handover. - - We also wait for the Batch confirming the handover to be accepted on-chain, ensuring we - don't verify the prior session's Batches, sign the handover Batch and the following - Batch, have the prior session publish a malicious Batch where our handover Batch should - be, before our following Batch becomes our handover Batch. - */ - if session_to_sign_batch != Session(0) { - // We may have Session(1)'s first Batch be Batch 0 if Session(0) never publishes a - // Batch. This is fine as we'll hit the distinct Session check and then set the correct - // values into this DB entry. All other sessions must complete the handover process, - // which requires having published at least one Batch - let (last_session, first_batch) = - ReportDb::::last_session_to_sign_batch_and_first_batch(&txn) - .unwrap_or((Session(0), 0)); - // Because this boolean was expanded, we lose short-circuiting. 
That's fine - let handover_batch = last_session != session_to_sign_batch; - let batch_after_handover_batch = - (last_session == session_to_sign_batch) && ((first_batch + 1) == batch_id); - if handover_batch || batch_after_handover_batch { - let verified_prior_batch = substrate::last_acknowledged_batch::(&txn) - // Since `batch_id = 0` in the Session(0)-never-published-a-Batch case, we don't - // check `last_acknowledged_batch >= (batch_id - 1)` but instead this - .map(|last_acknowledged_batch| (last_acknowledged_batch + 1) >= batch_id) - // We've never verified any Batches - .unwrap_or(false); - if !verified_prior_batch { - // Drop this txn, restoring the Batch to be worked on in the future - drop(txn); - return Ok(block_number > next_to_potentially_report); - } - } - - // If this is the handover Batch, update the last session to sign a Batch - if handover_batch { - ReportDb::::set_last_session_to_sign_batch_and_first_batch( - &mut txn, - session_to_sign_batch, - batch_id, - ); - } - } - - // TODO: The above code doesn't work if we end up with two Batches (the handover and the - // following) within this one Block due to Batch size limits - // start with empty batch let mut batches = vec![Batch { network, id: batch_id, instructions: vec![] }]; // We also track the return information for the InInstructions within a Batch in case @@ -209,8 +153,10 @@ impl ContinuallyRan for ReportTask { } for batch in batches { - Batches::send(&mut txn, &batch); - BatchesToSign::send(&mut txn, &external_key_for_session_to_sign_batch, &batch); + InternalBatches::send( + &mut txn, + &(session_to_sign_batch, EncodableG(external_key_for_session_to_sign_batch), batch), + ); } } @@ -220,6 +166,73 @@ impl ContinuallyRan for ReportTask { txn.commit(); } + // TODO: This should be its own task. The above doesn't error, doesn't return early, so this + // is fine, but this is precarious and would be better as its own task + { + let mut txn = self.db.txn(); + while let Some((session_to_sign_batch, external_key_for_session_to_sign_batch, batch)) = + InternalBatches::>::peek(&txn) + { + /* + If this is the handover Batch, the first Batch signed by a session which retires the + prior validator set, then this should only be signed after the prior validator set's + actions are fully validated. + + The new session will only be responsible for signing this Batch if the prior key has + retired, successfully completed all its on-external-network actions. + + We check here the prior session has successfully completed all its on-Serai-network + actions by ensuring we've validated all Batches expected from it. Only then do we sign + the Batch confirming the handover. + + We also wait for the Batch confirming the handover to be accepted on-chain, ensuring we + don't verify the prior session's Batches, sign the handover Batch and the following + Batch, have the prior session publish a malicious Batch where our handover Batch should + be, before our following Batch becomes our handover Batch. + */ + if session_to_sign_batch != Session(0) { + // We may have Session(1)'s first Batch be Batch 0 if Session(0) never publishes a + // Batch. This is fine as we'll hit the distinct Session check and then set the correct + // values into this DB entry. 
All other sessions must complete the handover process, + // which requires having published at least one Batch + let (last_session, first_batch) = + ReportDb::::last_session_to_sign_batch_and_first_batch(&txn) + .unwrap_or((Session(0), 0)); + // Because this boolean was expanded, we lose short-circuiting. That's fine + let handover_batch = last_session != session_to_sign_batch; + let batch_after_handover_batch = + (last_session == session_to_sign_batch) && ((first_batch + 1) == batch.id); + if handover_batch || batch_after_handover_batch { + let verified_prior_batch = substrate::last_acknowledged_batch::(&txn) + // Since `batch.id = 0` in the Session(0)-never-published-a-Batch case, we don't + // check `last_acknowledged_batch >= (batch.id - 1)` but instead this + .map(|last_acknowledged_batch| (last_acknowledged_batch + 1) >= batch.id) + // We've never verified any Batches + .unwrap_or(false); + if !verified_prior_batch { + break; + } + } + + // If this is the handover Batch, update the last session to sign a Batch + if handover_batch { + ReportDb::::set_last_session_to_sign_batch_and_first_batch( + &mut txn, + session_to_sign_batch, + batch.id, + ); + } + } + + // Since we should handle this batch now, recv it from the channel + InternalBatches::>::try_recv(&mut txn).unwrap(); + + Batches::send(&mut txn, &batch); + BatchesToSign::send(&mut txn, &external_key_for_session_to_sign_batch.0, &batch); + } + txn.commit(); + } + // Run dependents if we decided to report any blocks Ok(next_to_potentially_report <= highest_reportable) } diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index d247cf8f..4943e91d 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -422,7 +422,7 @@ impl< block: [u8; 32], ) { // Don't cosign blocks with already retired keys - if Some(session.0) <= db::LatestRetiredSession::get(txn).map(|session| session.0) { + if Some(session.0) <= db::LatestRetiredSession::get(&txn).map(|session| session.0) { return; } @@ -444,7 +444,7 @@ impl< slash_report: &Vec, ) { // Don't sign slash reports with already retired keys - if Some(session.0) <= db::LatestRetiredSession::get(txn).map(|session| session.0) { + if Some(session.0) <= db::LatestRetiredSession::get(&txn).map(|session| session.0) { return; } From f0094b3c7c2e10ce18dc21017e6b5577c35ef5a7 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 30 Dec 2024 10:49:35 -0500 Subject: [PATCH 225/368] Rename Report task to Batch task --- processor/bin/src/coordinator.rs | 12 - processor/messages/src/lib.rs | 26 +- processor/scanner/src/{report => batch}/db.rs | 30 +-- processor/scanner/src/batch/mod.rs | 252 ++++++++++++++++++ processor/scanner/src/db.rs | 46 ++-- processor/scanner/src/lib.rs | 20 +- processor/scanner/src/report/mod.rs | 240 ----------------- processor/scanner/src/scan/mod.rs | 4 +- processor/scanner/src/substrate/mod.rs | 10 +- processor/signers/src/coordinator/mod.rs | 14 - processor/signers/src/lib.rs | 6 - 11 files changed, 319 insertions(+), 341 deletions(-) rename processor/scanner/src/{report => batch}/db.rs (83%) create mode 100644 processor/scanner/src/batch/mod.rs delete mode 100644 processor/scanner/src/report/mod.rs diff --git a/processor/bin/src/coordinator.rs b/processor/bin/src/coordinator.rs index e05712cf..e5d0e23b 100644 --- a/processor/bin/src/coordinator.rs +++ b/processor/bin/src/coordinator.rs @@ -196,18 +196,6 @@ impl signers::Coordinator for CoordinatorSend { } } - fn publish_batch( - &mut self, - batch: Batch, - ) -> impl Send + Future> { - 
async move { - self.send(&messages::ProcessorMessage::Substrate( - messages::substrate::ProcessorMessage::Batch { batch }, - )); - Ok(()) - } - } - fn publish_signed_batch( &mut self, batch: SignedBatch, diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index 748cf39b..bbab3186 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -9,7 +9,7 @@ use dkg::Participant; use serai_primitives::BlockHash; use validator_sets_primitives::{Session, KeyPair, Slash}; use coins_primitives::OutInstructionWithBalance; -use in_instructions_primitives::{Batch, SignedBatch}; +use in_instructions_primitives::SignedBatch; #[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub struct SubstrateContext { @@ -208,9 +208,17 @@ pub mod substrate { }, } - #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] - pub enum ProcessorMessage { - Batch { batch: Batch }, + #[derive(Clone, PartialEq, Eq, Debug)] + pub enum ProcessorMessage {} + impl BorshSerialize for ProcessorMessage { + fn serialize(&self, _writer: &mut W) -> borsh::io::Result<()> { + unimplemented!() + } + } + impl BorshDeserialize for ProcessorMessage { + fn deserialize_reader(_reader: &mut R) -> borsh::io::Result { + unimplemented!() + } } } @@ -383,15 +391,7 @@ impl ProcessorMessage { res.extend(&id); res } - ProcessorMessage::Substrate(msg) => { - let (sub, id) = match msg { - substrate::ProcessorMessage::Batch { batch } => (0, batch.id.encode()), - }; - - let mut res = vec![PROCESSOR_UID, TYPE_SUBSTRATE_UID, sub]; - res.extend(&id); - res - } + ProcessorMessage::Substrate(_) => panic!("requesting intent for empty message type"), } } } diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/batch/db.rs similarity index 83% rename from processor/scanner/src/report/db.rs rename to processor/scanner/src/batch/db.rs index d5b8fbd1..edca6d4a 100644 --- a/processor/scanner/src/report/db.rs +++ b/processor/scanner/src/batch/db.rs @@ -5,11 +5,10 @@ use group::GroupEncoding; use scale::{Encode, Decode, IoReader}; use borsh::{BorshSerialize, BorshDeserialize}; -use serai_db::{Get, DbTxn, create_db, db_channel}; +use serai_db::{Get, DbTxn, create_db}; use serai_primitives::Balance; use serai_validator_sets_primitives::Session; -use serai_in_instructions_primitives::Batch; use primitives::EncodableG; use crate::{ScannerFeed, KeyFor, AddressFor}; @@ -23,9 +22,9 @@ pub(crate) struct BatchInfo { } create_db!( - ScannerReport { - // The next block to potentially report - NextToPotentiallyReportBlock: () -> u64, + ScannerBatch { + // The next block to create batches for + NextBlockToBatch: () -> u64, // The last session to sign a Batch and their first Batch signed LastSessionToSignBatchAndFirstBatch: () -> (Session, u32), @@ -41,19 +40,13 @@ create_db!( } ); -db_channel!( - ScannerReport { - InternalBatches: () -> (Session, EncodableG, Batch), - } -); - pub(crate) struct ReturnInformation { pub(crate) address: AddressFor, pub(crate) balance: Balance, } -pub(crate) struct ReportDb(PhantomData); -impl ReportDb { +pub(crate) struct BatchDb(PhantomData); +impl BatchDb { pub(crate) fn set_last_session_to_sign_batch_and_first_batch( txn: &mut impl DbTxn, session: Session, @@ -67,14 +60,11 @@ impl ReportDb { LastSessionToSignBatchAndFirstBatch::get(getter) } - pub(crate) fn set_next_to_potentially_report_block( - txn: &mut impl DbTxn, - next_to_potentially_report_block: u64, - ) { - NextToPotentiallyReportBlock::set(txn, &next_to_potentially_report_block); + 
pub(crate) fn set_next_block_to_batch(txn: &mut impl DbTxn, next_block_to_batch: u64) { + NextBlockToBatch::set(txn, &next_block_to_batch); } - pub(crate) fn next_to_potentially_report_block(getter: &impl Get) -> Option { - NextToPotentiallyReportBlock::get(getter) + pub(crate) fn next_block_to_batch(getter: &impl Get) -> Option { + NextBlockToBatch::get(getter) } pub(crate) fn acquire_batch_id(txn: &mut impl DbTxn) -> u32 { diff --git a/processor/scanner/src/batch/mod.rs b/processor/scanner/src/batch/mod.rs new file mode 100644 index 00000000..e12cda89 --- /dev/null +++ b/processor/scanner/src/batch/mod.rs @@ -0,0 +1,252 @@ +use core::{marker::PhantomData, future::Future}; + +use blake2::{digest::typenum::U32, Digest, Blake2b}; + +use scale::Encode; +use serai_db::{DbTxn, Db}; + +use serai_validator_sets_primitives::Session; +use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; + +use primitives::{EncodableG, task::ContinuallyRan}; +use crate::{ + db::{ + Returnable, ScannerGlobalDb, InInstructionData, ScanToBatchDb, BatchData, BatchToReportDb, + BatchesToSign, + }, + scan::next_to_scan_for_outputs_block, + substrate, ScannerFeed, KeyFor, +}; + +mod db; +pub(crate) use db::{BatchInfo, ReturnInformation}; +use db::BatchDb; + +pub(crate) fn take_info_for_batch( + txn: &mut impl DbTxn, + id: u32, +) -> Option>>> { + BatchDb::::take_info_for_batch(txn, id) +} + +pub(crate) fn take_return_information( + txn: &mut impl DbTxn, + id: u32, +) -> Option>>> { + BatchDb::::take_return_information(txn, id) +} + +/* + This task produces Batches for notable blocks, with all InInstructions, in an ordered fashion. + + We only produce batches once both tasks, scanning for received outputs and checking for resolved + Eventualities, have processed the block. This ensures we know if this block is notable, and have + the InInstructions for it. 
+*/ +#[allow(non_snake_case)] +pub(crate) struct BatchTask { + db: D, + _S: PhantomData, +} + +impl BatchTask { + pub(crate) fn new(mut db: D, start_block: u64) -> Self { + if BatchDb::::next_block_to_batch(&db).is_none() { + // Initialize the DB + let mut txn = db.txn(); + BatchDb::::set_next_block_to_batch(&mut txn, start_block); + txn.commit(); + } + + Self { db, _S: PhantomData } + } +} + +impl ContinuallyRan for BatchTask { + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let highest_batchable = { + // Fetch the next to scan block + let next_to_scan = next_to_scan_for_outputs_block::(&self.db) + .expect("BatchTask run before writing the start block"); + // If we haven't done any work, return + if next_to_scan == 0 { + return Ok(false); + } + // The last scanned block is the block prior to this + #[allow(clippy::let_and_return)] + let last_scanned = next_to_scan - 1; + // The last scanned block is the highest batchable block as we only scan blocks within a + // window where it's safe to immediately report the block + // See `eventuality.rs` for more info + last_scanned + }; + + let next_block_to_batch = BatchDb::::next_block_to_batch(&self.db) + .expect("BatchTask run before writing the start block"); + + for block_number in next_block_to_batch ..= highest_batchable { + let mut txn = self.db.txn(); + + // Receive the InInstructions for this block + // We always do this as we can't trivially tell if we should recv InInstructions before we + // do + let InInstructionData { + session_to_sign_batch, + external_key_for_session_to_sign_batch, + returnable_in_instructions: in_instructions, + } = ScanToBatchDb::::recv_in_instructions(&mut txn, block_number); + + let notable = ScannerGlobalDb::::is_block_notable(&txn, block_number); + if !notable { + assert!(in_instructions.is_empty(), "block wasn't notable yet had InInstructions"); + } + // If this block is notable, create the Batch(s) for it + if notable { + let network = S::NETWORK; + let mut batch_id = BatchDb::::acquire_batch_id(&mut txn); + + // start with empty batch + let mut batches = vec![Batch { network, id: batch_id, instructions: vec![] }]; + // We also track the return information for the InInstructions within a Batch in case + // they error + let mut return_information = vec![vec![]]; + + for Returnable { return_address, in_instruction } in in_instructions { + let balance = in_instruction.balance; + + let batch = batches.last_mut().unwrap(); + batch.instructions.push(in_instruction); + + // check if batch is over-size + if batch.encode().len() > MAX_BATCH_SIZE { + // pop the last instruction so it's back in size + let in_instruction = batch.instructions.pop().unwrap(); + + // bump the id for the new batch + batch_id = BatchDb::::acquire_batch_id(&mut txn); + + // make a new batch with this instruction included + batches.push(Batch { network, id: batch_id, instructions: vec![in_instruction] }); + // Since we're allocating a new batch, allocate a new set of return addresses for it + return_information.push(vec![]); + } + + // For the set of return addresses for the InInstructions for the batch we just pushed + // onto, push this InInstruction's return addresses + return_information + .last_mut() + .unwrap() + .push(return_address.map(|address| ReturnInformation { address, balance })); + } + + // Now that we've finalized the Batches, save the information for each to the database + assert_eq!(batches.len(), return_information.len()); + for (batch, return_information) in batches.iter().zip(&return_information) { 
+ assert_eq!(batch.instructions.len(), return_information.len()); + BatchDb::::save_batch_info( + &mut txn, + batch.id, + block_number, + session_to_sign_batch, + external_key_for_session_to_sign_batch, + Blake2b::::digest(batch.instructions.encode()).into(), + ); + BatchDb::::save_return_information(&mut txn, batch.id, return_information); + } + + for batch in batches { + BatchToReportDb::::send_batch( + &mut txn, + &BatchData { + session_to_sign_batch, + external_key_for_session_to_sign_batch: EncodableG( + external_key_for_session_to_sign_batch, + ), + batch, + }, + ); + } + } + + // Update the next block to batch + BatchDb::::set_next_block_to_batch(&mut txn, block_number + 1); + + txn.commit(); + } + + // TODO: This should be its own task. The above doesn't error, doesn't return early, so this + // is fine, but this is precarious and would be better as its own task + loop { + let mut txn = self.db.txn(); + let Some(BatchData { + session_to_sign_batch, + external_key_for_session_to_sign_batch, + batch, + }) = BatchToReportDb::::try_recv_batch(&mut txn) + else { + break; + }; + + /* + If this is the handover Batch, the first Batch signed by a session which retires the + prior validator set, then this should only be signed after the prior validator set's + actions are fully validated. + + The new session will only be responsible for signing this Batch if the prior key has + retired, successfully completed all its on-external-network actions. + + We check here the prior session has successfully completed all its on-Serai-network + actions by ensuring we've validated all Batches expected from it. Only then do we sign + the Batch confirming the handover. + + We also wait for the Batch confirming the handover to be accepted on-chain, ensuring we + don't verify the prior session's Batches, sign the handover Batch and the following + Batch, have the prior session publish a malicious Batch where our handover Batch should + be, before our following Batch becomes our handover Batch. + */ + if session_to_sign_batch != Session(0) { + // We may have Session(1)'s first Batch be Batch 0 if Session(0) never publishes a + // Batch. This is fine as we'll hit the distinct Session check and then set the correct + // values into this DB entry. All other sessions must complete the handover process, + // which requires having published at least one Batch + let (last_session, first_batch) = + BatchDb::::last_session_to_sign_batch_and_first_batch(&txn) + .unwrap_or((Session(0), 0)); + // Because this boolean was expanded, we lose short-circuiting. 
That's fine + let handover_batch = last_session != session_to_sign_batch; + let batch_after_handover_batch = + (last_session == session_to_sign_batch) && ((first_batch + 1) == batch.id); + if handover_batch || batch_after_handover_batch { + let verified_prior_batch = substrate::last_acknowledged_batch::(&txn) + // Since `batch.id = 0` in the Session(0)-never-published-a-Batch case, we don't + // check `last_acknowledged_batch >= (batch.id - 1)` but instead this + .map(|last_acknowledged_batch| (last_acknowledged_batch + 1) >= batch.id) + // We've never verified any Batches + .unwrap_or(false); + if !verified_prior_batch { + // Drop the txn to restore the Batch to report to the DB + drop(txn); + break; + } + } + + // If this is the handover Batch, update the last session to sign a Batch + if handover_batch { + BatchDb::::set_last_session_to_sign_batch_and_first_batch( + &mut txn, + session_to_sign_batch, + batch.id, + ); + } + } + + BatchesToSign::send(&mut txn, &external_key_for_session_to_sign_batch.0, &batch); + txn.commit(); + } + + // Run dependents if were able to batch any blocks + Ok(next_block_to_batch <= highest_batchable) + } + } +} diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs index 80b716ae..8790da31 100644 --- a/processor/scanner/src/db.rs +++ b/processor/scanner/src/db.rs @@ -482,7 +482,7 @@ struct BlockBoundInInstructions { } db_channel! { - ScannerScanReport { + ScannerScanBatch { InInstructions: () -> BlockBoundInInstructions, } } @@ -493,8 +493,8 @@ pub(crate) struct InInstructionData { pub(crate) returnable_in_instructions: Vec>, } -pub(crate) struct ScanToReportDb(PhantomData); -impl ScanToReportDb { +pub(crate) struct ScanToBatchDb(PhantomData); +impl ScanToBatchDb { pub(crate) fn send_in_instructions( txn: &mut impl DbTxn, block_number: u64, @@ -545,6 +545,30 @@ impl ScanToReportDb { } } +#[derive(BorshSerialize, BorshDeserialize)] +pub(crate) struct BatchData { + pub(crate) session_to_sign_batch: Session, + pub(crate) external_key_for_session_to_sign_batch: K, + pub(crate) batch: Batch, +} + +db_channel! { + ScannerBatchReport { + BatchToReport: () -> BatchData, + } +} + +pub(crate) struct BatchToReportDb(PhantomData); +impl BatchToReportDb { + pub(crate) fn send_batch(txn: &mut impl DbTxn, batch_data: &BatchData>>) { + BatchToReport::send(txn, batch_data); + } + + pub(crate) fn try_recv_batch(txn: &mut impl DbTxn) -> Option>>> { + BatchToReport::try_recv(txn) + } +} + db_channel! { ScannerSubstrateEventuality { Burns: (acknowledged_block: u64) -> Vec, @@ -583,7 +607,6 @@ mod _public_db { db_channel! { ScannerPublic { - Batches: () -> Batch, BatchesToSign: (key: &[u8]) -> Batch, AcknowledgedBatches: (key: &[u8]) -> u32, CompletedEventualities: (key: &[u8]) -> [u8; 32], @@ -591,21 +614,6 @@ mod _public_db { } } -/// The batches to publish. -/// -/// This is used for auditing the Batches published to Serai. -pub struct Batches; -impl Batches { - pub(crate) fn send(txn: &mut impl DbTxn, batch: &Batch) { - _public_db::Batches::send(txn, batch); - } - - /// Receive a batch to publish. - pub fn try_recv(txn: &mut impl DbTxn) -> Option { - _public_db::Batches::try_recv(txn) - } -} - /// The batches to sign and publish. /// /// This is used for publishing Batches onto Serai. 
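`BatchToReportDb` is built on the repository's `db_channel!` transactional channels: a send or receive only takes effect if its transaction commits, so a task which receives a message and then drops the transaction restores that message for a future iteration. A toy model of those semantics, with a `VecDeque` standing in for the database rather than the real serai-db API:

```rust
use std::collections::VecDeque;

struct Db<T: Clone> {
  queue: VecDeque<T>,
}
struct Txn<'a, T: Clone> {
  db: &'a mut Db<T>,
  pending: VecDeque<T>,
}

impl<T: Clone> Db<T> {
  fn new() -> Self {
    Self { queue: VecDeque::new() }
  }
  fn txn(&mut self) -> Txn<'_, T> {
    let pending = self.queue.clone();
    Txn { db: self, pending }
  }
}

impl<T: Clone> Txn<'_, T> {
  fn send(&mut self, msg: T) {
    self.pending.push_back(msg);
  }
  fn try_recv(&mut self) -> Option<T> {
    self.pending.pop_front()
  }
  // Dropping a Txn without calling commit discards its writes and receives
  fn commit(self) {
    self.db.queue = self.pending;
  }
}

fn main() {
  let mut db = Db::new();

  let mut txn = db.txn();
  txn.send("batch 0");
  txn.commit();

  // Dropping the txn without committing restores the message
  let mut txn = db.txn();
  assert_eq!(txn.try_recv(), Some("batch 0"));
  drop(txn);

  // It's still present, and is only consumed once a txn commits
  let mut txn = db.txn();
  assert_eq!(txn.try_recv(), Some("batch 0"));
  txn.commit();
  assert!(db.txn().try_recv().is_none());
}
```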
diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 1ef4f8c2..e5b37969 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -23,13 +23,13 @@ pub use lifetime::LifetimeStage; // Database schema definition and associated functions. mod db; use db::ScannerGlobalDb; -pub use db::{Batches, BatchesToSign, AcknowledgedBatches, CompletedEventualities}; +pub use db::{BatchesToSign, AcknowledgedBatches, CompletedEventualities}; // Task to index the blockchain, ensuring we don't reorganize finalized blocks. mod index; // Scans blocks for received coins. mod scan; -/// Task which reports Batches to Substrate. -mod report; +/// Task which creates Batches for Substrate. +mod batch; /// Task which handles events from Substrate once we can. mod substrate; /// Check blocks for transactions expected to eventually occur. @@ -379,24 +379,24 @@ impl Scanner { let index_task = index::IndexTask::new(db.clone(), feed.clone(), start_block).await; let scan_task = scan::ScanTask::new(db.clone(), feed.clone(), start_block); - let report_task = report::ReportTask::<_, S>::new(db.clone(), start_block); + let batch_task = batch::BatchTask::<_, S>::new(db.clone(), start_block); let substrate_task = substrate::SubstrateTask::<_, S>::new(db.clone()); let eventuality_task = eventuality::EventualityTask::<_, _, _>::new(db, feed, scheduler, start_block); let (index_task_def, _index_handle) = Task::new(); let (scan_task_def, scan_handle) = Task::new(); - let (report_task_def, report_handle) = Task::new(); + let (batch_task_def, batch_handle) = Task::new(); let (substrate_task_def, substrate_handle) = Task::new(); let (eventuality_task_def, eventuality_handle) = Task::new(); // Upon indexing a new block, scan it tokio::spawn(index_task.continually_run(index_task_def, vec![scan_handle.clone()])); - // Upon scanning a block, report it - tokio::spawn(scan_task.continually_run(scan_task_def, vec![report_handle])); - // Upon reporting a block, we do nothing (as the burden is on Substrate which won't be - // immediately ready) - tokio::spawn(report_task.continually_run(report_task_def, vec![])); + // Upon scanning a block, creates the batches for it + tokio::spawn(scan_task.continually_run(scan_task_def, vec![batch_handle])); + // Upon creating batches for a block, we do nothing (as the burden is on Substrate which won't + // be immediately ready) + tokio::spawn(batch_task.continually_run(batch_task_def, vec![])); // Upon handling an event from Substrate, we run the Eventuality task (as it's what's affected) tokio::spawn(substrate_task.continually_run(substrate_task_def, vec![eventuality_handle])); // Upon handling the Eventualities in a block, we run the scan task as we've advanced the diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs deleted file mode 100644 index d3c2995a..00000000 --- a/processor/scanner/src/report/mod.rs +++ /dev/null @@ -1,240 +0,0 @@ -use core::{marker::PhantomData, future::Future}; - -use blake2::{digest::typenum::U32, Digest, Blake2b}; - -use scale::Encode; -use serai_db::{DbTxn, Db}; - -use serai_validator_sets_primitives::Session; -use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; - -use primitives::{EncodableG, task::ContinuallyRan}; -use crate::{ - db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToReportDb, Batches, BatchesToSign}, - scan::next_to_scan_for_outputs_block, - substrate, ScannerFeed, KeyFor, -}; - -mod db; -pub(crate) use db::{BatchInfo, ReturnInformation, InternalBatches}; -use 
db::ReportDb; - -pub(crate) fn take_info_for_batch( - txn: &mut impl DbTxn, - id: u32, -) -> Option>>> { - ReportDb::::take_info_for_batch(txn, id) -} - -pub(crate) fn take_return_information( - txn: &mut impl DbTxn, - id: u32, -) -> Option>>> { - ReportDb::::take_return_information(txn, id) -} - -/* - This task produces Batches for notable blocks, with all InInstructions, in an ordered fashion. - - We only report blocks once both tasks, scanning for received outputs and checking for resolved - Eventualities, have processed the block. This ensures we know if this block is notable, and have - the InInstructions for it. -*/ -#[allow(non_snake_case)] -pub(crate) struct ReportTask { - db: D, - _S: PhantomData, -} - -impl ReportTask { - pub(crate) fn new(mut db: D, start_block: u64) -> Self { - if ReportDb::::next_to_potentially_report_block(&db).is_none() { - // Initialize the DB - let mut txn = db.txn(); - ReportDb::::set_next_to_potentially_report_block(&mut txn, start_block); - txn.commit(); - } - - Self { db, _S: PhantomData } - } -} - -impl ContinuallyRan for ReportTask { - fn run_iteration(&mut self) -> impl Send + Future> { - async move { - let highest_reportable = { - // Fetch the next to scan block - let next_to_scan = next_to_scan_for_outputs_block::(&self.db) - .expect("ReportTask run before writing the start block"); - // If we haven't done any work, return - if next_to_scan == 0 { - return Ok(false); - } - // The last scanned block is the block prior to this - #[allow(clippy::let_and_return)] - let last_scanned = next_to_scan - 1; - // The last scanned block is the highest reportable block as we only scan blocks within a - // window where it's safe to immediately report the block - // See `eventuality.rs` for more info - last_scanned - }; - - let next_to_potentially_report = ReportDb::::next_to_potentially_report_block(&self.db) - .expect("ReportTask run before writing the start block"); - - for block_number in next_to_potentially_report ..= highest_reportable { - let mut txn = self.db.txn(); - - // Receive the InInstructions for this block - // We always do this as we can't trivially tell if we should recv InInstructions before we - // do - let InInstructionData { - session_to_sign_batch, - external_key_for_session_to_sign_batch, - returnable_in_instructions: in_instructions, - } = ScanToReportDb::::recv_in_instructions(&mut txn, block_number); - - let notable = ScannerGlobalDb::::is_block_notable(&txn, block_number); - if !notable { - assert!(in_instructions.is_empty(), "block wasn't notable yet had InInstructions"); - } - // If this block is notable, create the Batch(s) for it - if notable { - let network = S::NETWORK; - let mut batch_id = ReportDb::::acquire_batch_id(&mut txn); - - // start with empty batch - let mut batches = vec![Batch { network, id: batch_id, instructions: vec![] }]; - // We also track the return information for the InInstructions within a Batch in case - // they error - let mut return_information = vec![vec![]]; - - for Returnable { return_address, in_instruction } in in_instructions { - let balance = in_instruction.balance; - - let batch = batches.last_mut().unwrap(); - batch.instructions.push(in_instruction); - - // check if batch is over-size - if batch.encode().len() > MAX_BATCH_SIZE { - // pop the last instruction so it's back in size - let in_instruction = batch.instructions.pop().unwrap(); - - // bump the id for the new batch - batch_id = ReportDb::::acquire_batch_id(&mut txn); - - // make a new batch with this instruction included - 
batches.push(Batch { network, id: batch_id, instructions: vec![in_instruction] }); - // Since we're allocating a new batch, allocate a new set of return addresses for it - return_information.push(vec![]); - } - - // For the set of return addresses for the InInstructions for the batch we just pushed - // onto, push this InInstruction's return addresses - return_information - .last_mut() - .unwrap() - .push(return_address.map(|address| ReturnInformation { address, balance })); - } - - // Now that we've finalized the Batches, save the information for each to the database - assert_eq!(batches.len(), return_information.len()); - for (batch, return_information) in batches.iter().zip(&return_information) { - assert_eq!(batch.instructions.len(), return_information.len()); - ReportDb::::save_batch_info( - &mut txn, - batch.id, - block_number, - session_to_sign_batch, - external_key_for_session_to_sign_batch, - Blake2b::::digest(batch.instructions.encode()).into(), - ); - ReportDb::::save_return_information(&mut txn, batch.id, return_information); - } - - for batch in batches { - InternalBatches::send( - &mut txn, - &(session_to_sign_batch, EncodableG(external_key_for_session_to_sign_batch), batch), - ); - } - } - - // Update the next to potentially report block - ReportDb::::set_next_to_potentially_report_block(&mut txn, block_number + 1); - - txn.commit(); - } - - // TODO: This should be its own task. The above doesn't error, doesn't return early, so this - // is fine, but this is precarious and would be better as its own task - { - let mut txn = self.db.txn(); - while let Some((session_to_sign_batch, external_key_for_session_to_sign_batch, batch)) = - InternalBatches::>::peek(&txn) - { - /* - If this is the handover Batch, the first Batch signed by a session which retires the - prior validator set, then this should only be signed after the prior validator set's - actions are fully validated. - - The new session will only be responsible for signing this Batch if the prior key has - retired, successfully completed all its on-external-network actions. - - We check here the prior session has successfully completed all its on-Serai-network - actions by ensuring we've validated all Batches expected from it. Only then do we sign - the Batch confirming the handover. - - We also wait for the Batch confirming the handover to be accepted on-chain, ensuring we - don't verify the prior session's Batches, sign the handover Batch and the following - Batch, have the prior session publish a malicious Batch where our handover Batch should - be, before our following Batch becomes our handover Batch. - */ - if session_to_sign_batch != Session(0) { - // We may have Session(1)'s first Batch be Batch 0 if Session(0) never publishes a - // Batch. This is fine as we'll hit the distinct Session check and then set the correct - // values into this DB entry. All other sessions must complete the handover process, - // which requires having published at least one Batch - let (last_session, first_batch) = - ReportDb::::last_session_to_sign_batch_and_first_batch(&txn) - .unwrap_or((Session(0), 0)); - // Because this boolean was expanded, we lose short-circuiting. 
That's fine - let handover_batch = last_session != session_to_sign_batch; - let batch_after_handover_batch = - (last_session == session_to_sign_batch) && ((first_batch + 1) == batch.id); - if handover_batch || batch_after_handover_batch { - let verified_prior_batch = substrate::last_acknowledged_batch::(&txn) - // Since `batch.id = 0` in the Session(0)-never-published-a-Batch case, we don't - // check `last_acknowledged_batch >= (batch.id - 1)` but instead this - .map(|last_acknowledged_batch| (last_acknowledged_batch + 1) >= batch.id) - // We've never verified any Batches - .unwrap_or(false); - if !verified_prior_batch { - break; - } - } - - // If this is the handover Batch, update the last session to sign a Batch - if handover_batch { - ReportDb::::set_last_session_to_sign_batch_and_first_batch( - &mut txn, - session_to_sign_batch, - batch.id, - ); - } - } - - // Since we should handle this batch now, recv it from the channel - InternalBatches::>::try_recv(&mut txn).unwrap(); - - Batches::send(&mut txn, &batch); - BatchesToSign::send(&mut txn, &external_key_for_session_to_sign_batch.0, &batch); - } - txn.commit(); - } - - // Run dependents if we decided to report any blocks - Ok(next_to_potentially_report <= highest_reportable) - } - } -} diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 14506092..25127ace 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -14,7 +14,7 @@ use crate::{ lifetime::LifetimeStage, db::{ OutputWithInInstruction, Returnable, SenderScanData, ScannerGlobalDb, InInstructionData, - ScanToReportDb, ScanToEventualityDb, + ScanToBatchDb, ScanToEventualityDb, }, BlockExt, ScannerFeed, AddressFor, OutputFor, Return, sort_outputs, eventuality::latest_scannable_block, @@ -345,7 +345,7 @@ impl ContinuallyRan for ScanTask { // We need to also specify which key is responsible for signing the Batch for these, which // will always be the oldest key (as the new key signing the Batch signifies handover // acceptance) - ScanToReportDb::::send_in_instructions( + ScanToBatchDb::::send_in_instructions( &mut txn, b, &InInstructionData { diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index fddc7453..506debd4 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -8,7 +8,7 @@ use serai_validator_sets_primitives::Session; use primitives::task::ContinuallyRan; use crate::{ db::{ScannerGlobalDb, SubstrateToEventualityDb, AcknowledgedBatches}, - report, ScannerFeed, KeyFor, + batch, ScannerFeed, KeyFor, }; mod db; @@ -82,12 +82,12 @@ impl ContinuallyRan for SubstrateTask { key_to_activate, }) => { // Check if we have the information for this batch - let Some(report::BatchInfo { + let Some(batch::BatchInfo { block_number, session_to_sign_batch, external_key_for_session_to_sign_batch, in_instructions_hash: expected_in_instructions_hash, - }) = report::take_info_for_batch::(&mut txn, batch_id) + }) = batch::take_info_for_batch::(&mut txn, batch_id) else { // If we don't, drop this txn (restoring the action to the database) drop(txn); @@ -143,7 +143,7 @@ impl ContinuallyRan for SubstrateTask { // Return the balances for any InInstructions which failed to execute { - let return_information = report::take_return_information::(&mut txn, batch_id) + let return_information = batch::take_return_information::(&mut txn, batch_id) .expect("didn't save the return information for Batch we published"); assert_eq!( 
in_instruction_results.len(),
@@ -159,7 +159,7 @@ impl ContinuallyRan for SubstrateTask {
               continue;
             }

-            if let Some(report::ReturnInformation { address, balance }) = return_information {
+            if let Some(batch::ReturnInformation { address, balance }) = return_information {
               burns.push(OutInstructionWithBalance {
                 instruction: OutInstruction { address: address.into() },
                 balance,
diff --git a/processor/signers/src/coordinator/mod.rs b/processor/signers/src/coordinator/mod.rs
index 1e3c84d2..b57742a5 100644
--- a/processor/signers/src/coordinator/mod.rs
+++ b/processor/signers/src/coordinator/mod.rs
@@ -136,20 +136,6 @@ impl ContinuallyRan for CoordinatorTask {
       }
     }

-    // Publish the Batches
-    {
-      let mut txn = self.db.txn();
-      while let Some(batch) = scanner::Batches::try_recv(&mut txn) {
-        iterated = true;
-        self
-          .coordinator
-          .publish_batch(batch)
-          .await
-          .map_err(|e| format!("couldn't publish Batch: {e:?}"))?;
-      }
-      txn.commit();
-    }
-
     // Publish the signed Batches
     {
       let mut txn = self.db.txn();
diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs
index 4943e91d..40e538aa 100644
--- a/processor/signers/src/lib.rs
+++ b/processor/signers/src/lib.rs
@@ -64,12 +64,6 @@ pub trait Coordinator: 'static + Send + Sync {
     signature: Signature,
   ) -> impl Send + Future>;

-  /// Publish a `Batch`.
-  fn publish_batch(
-    &mut self,
-    batch: Batch,
-  ) -> impl Send + Future>;
-
   /// Publish a `SignedBatch`.
   fn publish_signed_batch(
     &mut self,

From 26ccff25a1c097f36a7d4086ad65a05c76c81989 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Mon, 30 Dec 2024 11:03:52 -0500
Subject: [PATCH 226/368] Split reporting Batches to the signer from the Batch task
---
 processor/scanner/src/batch/db.rs | 16 -----
 processor/scanner/src/batch/mod.rs | 78 +--------------------
 processor/scanner/src/lib.rs | 12 +++-
 processor/scanner/src/report/db.rs | 26 +++++++
 processor/scanner/src/report/mod.rs | 105 ++++++++++++++++++++++++++++
 5 files changed, 142 insertions(+), 95 deletions(-)
 create mode 100644 processor/scanner/src/report/db.rs
 create mode 100644 processor/scanner/src/report/mod.rs

diff --git a/processor/scanner/src/batch/db.rs b/processor/scanner/src/batch/db.rs
index edca6d4a..88ca2882 100644
--- a/processor/scanner/src/batch/db.rs
+++ b/processor/scanner/src/batch/db.rs
@@ -26,9 +26,6 @@ create_db!(
     // The next block to create batches for
     NextBlockToBatch: () -> u64,

-    // The last session to sign a Batch and their first Batch signed
-    LastSessionToSignBatchAndFirstBatch: () -> (Session, u32),
-
     // The next Batch ID to use
     NextBatchId: () -> u32,

@@ -47,19 +44,6 @@ pub(crate) struct ReturnInformation {

 pub(crate) struct BatchDb(PhantomData);
 impl BatchDb {
-  pub(crate) fn set_last_session_to_sign_batch_and_first_batch(
-    txn: &mut impl DbTxn,
-    session: Session,
-    id: u32,
-  ) {
-    LastSessionToSignBatchAndFirstBatch::set(txn, &(session, id));
-  }
-  pub(crate) fn last_session_to_sign_batch_and_first_batch(
-    getter: &impl Get,
-  ) -> Option<(Session, u32)> {
-    LastSessionToSignBatchAndFirstBatch::get(getter)
-  }
-
   pub(crate) fn set_next_block_to_batch(txn: &mut impl DbTxn, next_block_to_batch: u64) {
     NextBlockToBatch::set(txn, &next_block_to_batch);
   }
diff --git a/processor/scanner/src/batch/mod.rs b/processor/scanner/src/batch/mod.rs
index e12cda89..8563ac4e 100644
--- a/processor/scanner/src/batch/mod.rs
+++ b/processor/scanner/src/batch/mod.rs
@@ -5,17 +5,13 @@ use blake2::{digest::typenum::U32, Digest, Blake2b};
 use scale::Encode;
 use serai_db::{DbTxn, Db};

-use
serai_validator_sets_primitives::Session; use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; use primitives::{EncodableG, task::ContinuallyRan}; use crate::{ - db::{ - Returnable, ScannerGlobalDb, InInstructionData, ScanToBatchDb, BatchData, BatchToReportDb, - BatchesToSign, - }, + db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToBatchDb, BatchData, BatchToReportDb}, scan::next_to_scan_for_outputs_block, - substrate, ScannerFeed, KeyFor, + ScannerFeed, KeyFor, }; mod db; @@ -175,76 +171,6 @@ impl ContinuallyRan for BatchTask { txn.commit(); } - // TODO: This should be its own task. The above doesn't error, doesn't return early, so this - // is fine, but this is precarious and would be better as its own task - loop { - let mut txn = self.db.txn(); - let Some(BatchData { - session_to_sign_batch, - external_key_for_session_to_sign_batch, - batch, - }) = BatchToReportDb::::try_recv_batch(&mut txn) - else { - break; - }; - - /* - If this is the handover Batch, the first Batch signed by a session which retires the - prior validator set, then this should only be signed after the prior validator set's - actions are fully validated. - - The new session will only be responsible for signing this Batch if the prior key has - retired, successfully completed all its on-external-network actions. - - We check here the prior session has successfully completed all its on-Serai-network - actions by ensuring we've validated all Batches expected from it. Only then do we sign - the Batch confirming the handover. - - We also wait for the Batch confirming the handover to be accepted on-chain, ensuring we - don't verify the prior session's Batches, sign the handover Batch and the following - Batch, have the prior session publish a malicious Batch where our handover Batch should - be, before our following Batch becomes our handover Batch. - */ - if session_to_sign_batch != Session(0) { - // We may have Session(1)'s first Batch be Batch 0 if Session(0) never publishes a - // Batch. This is fine as we'll hit the distinct Session check and then set the correct - // values into this DB entry. All other sessions must complete the handover process, - // which requires having published at least one Batch - let (last_session, first_batch) = - BatchDb::::last_session_to_sign_batch_and_first_batch(&txn) - .unwrap_or((Session(0), 0)); - // Because this boolean was expanded, we lose short-circuiting. 
That's fine - let handover_batch = last_session != session_to_sign_batch; - let batch_after_handover_batch = - (last_session == session_to_sign_batch) && ((first_batch + 1) == batch.id); - if handover_batch || batch_after_handover_batch { - let verified_prior_batch = substrate::last_acknowledged_batch::(&txn) - // Since `batch.id = 0` in the Session(0)-never-published-a-Batch case, we don't - // check `last_acknowledged_batch >= (batch.id - 1)` but instead this - .map(|last_acknowledged_batch| (last_acknowledged_batch + 1) >= batch.id) - // We've never verified any Batches - .unwrap_or(false); - if !verified_prior_batch { - // Drop the txn to restore the Batch to report to the DB - drop(txn); - break; - } - } - - // If this is the handover Batch, update the last session to sign a Batch - if handover_batch { - BatchDb::::set_last_session_to_sign_batch_and_first_batch( - &mut txn, - session_to_sign_batch, - batch.id, - ); - } - } - - BatchesToSign::send(&mut txn, &external_key_for_session_to_sign_batch.0, &batch); - txn.commit(); - } - // Run dependents if were able to batch any blocks Ok(next_block_to_batch <= highest_batchable) } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index e5b37969..3575f0d7 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -30,6 +30,8 @@ mod index; mod scan; /// Task which creates Batches for Substrate. mod batch; +/// Task which reports Batches for signing. +mod report; /// Task which handles events from Substrate once we can. mod substrate; /// Check blocks for transactions expected to eventually occur. @@ -380,6 +382,7 @@ impl Scanner { let index_task = index::IndexTask::new(db.clone(), feed.clone(), start_block).await; let scan_task = scan::ScanTask::new(db.clone(), feed.clone(), start_block); let batch_task = batch::BatchTask::<_, S>::new(db.clone(), start_block); + let report_task = report::ReportTask::<_, S>::new(db.clone()); let substrate_task = substrate::SubstrateTask::<_, S>::new(db.clone()); let eventuality_task = eventuality::EventualityTask::<_, _, _>::new(db, feed, scheduler, start_block); @@ -387,6 +390,7 @@ impl Scanner { let (index_task_def, _index_handle) = Task::new(); let (scan_task_def, scan_handle) = Task::new(); let (batch_task_def, batch_handle) = Task::new(); + let (report_task_def, report_handle) = Task::new(); let (substrate_task_def, substrate_handle) = Task::new(); let (eventuality_task_def, eventuality_handle) = Task::new(); @@ -394,9 +398,11 @@ impl Scanner { tokio::spawn(index_task.continually_run(index_task_def, vec![scan_handle.clone()])); // Upon scanning a block, creates the batches for it tokio::spawn(scan_task.continually_run(scan_task_def, vec![batch_handle])); - // Upon creating batches for a block, we do nothing (as the burden is on Substrate which won't - // be immediately ready) - tokio::spawn(batch_task.continually_run(batch_task_def, vec![])); + // Upon creating batches for a block, we run the report task + tokio::spawn(batch_task.continually_run(batch_task_def, vec![report_handle])); + // Upon reporting the batches for signing, we do nothing (as the burden is on a tributary which + // won't immediately yield a result) + tokio::spawn(report_task.continually_run(report_task_def, vec![])); // Upon handling an event from Substrate, we run the Eventuality task (as it's what's affected) tokio::spawn(substrate_task.continually_run(substrate_task_def, vec![eventuality_handle])); // Upon handling the Eventualities in a block, we run the scan task as we've advanced the 
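The spawns above follow the repository's task pattern: each task's `run_iteration` reports whether it made progress, and progress wakes the listed dependent tasks (index, then scan, then batch, then report) instead of having them poll. A simplified, synchronous sketch of that pattern; the real `ContinuallyRan` is async, returns a `Result`, and runs under tokio, so everything here is illustrative only:

```rust
trait ContinuallyRan {
  // The real trait returns a Future of a Result; a bool suffices for this sketch
  fn run_iteration(&mut self) -> bool;
}

struct Stage {
  name: &'static str,
  work: u32,
}
impl ContinuallyRan for Stage {
  fn run_iteration(&mut self) -> bool {
    if self.work == 0 {
      return false;
    }
    self.work -= 1;
    println!("{} made progress", self.name);
    true
  }
}

struct Pipeline {
  tasks: Vec<Box<dyn ContinuallyRan>>,
  // dependents[i] lists the tasks woken when task i makes progress
  dependents: Vec<Vec<usize>>,
}
impl Pipeline {
  fn run_once(&mut self, task: usize) {
    if self.tasks[task].run_iteration() {
      for dependent in self.dependents[task].clone() {
        self.run_once(dependent);
      }
    }
  }
}

fn main() {
  let mut pipeline = Pipeline {
    tasks: vec![
      Box::new(Stage { name: "index", work: 1 }),
      Box::new(Stage { name: "scan", work: 1 }),
      Box::new(Stage { name: "batch", work: 1 }),
      Box::new(Stage { name: "report", work: 1 }),
    ],
    // index wakes scan, scan wakes batch, batch wakes report
    dependents: vec![vec![1], vec![2], vec![3], vec![]],
  };
  // One unit of indexing work cascades through the whole pipeline
  pipeline.run_once(0);
}
```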
diff --git a/processor/scanner/src/report/db.rs b/processor/scanner/src/report/db.rs
new file mode 100644
index 00000000..a97a6b39
--- /dev/null
+++ b/processor/scanner/src/report/db.rs
@@ -0,0 +1,26 @@
+use serai_db::{Get, DbTxn, create_db};
+
+use serai_validator_sets_primitives::Session;
+
+create_db!(
+  ScannerBatch {
+    // The last session to sign a Batch and their first Batch signed
+    LastSessionToSignBatchAndFirstBatch: () -> (Session, u32),
+  }
+);
+
+pub(crate) struct BatchDb;
+impl BatchDb {
+  pub(crate) fn set_last_session_to_sign_batch_and_first_batch(
+    txn: &mut impl DbTxn,
+    session: Session,
+    id: u32,
+  ) {
+    LastSessionToSignBatchAndFirstBatch::set(txn, &(session, id));
+  }
+  pub(crate) fn last_session_to_sign_batch_and_first_batch(
+    getter: &impl Get,
+  ) -> Option<(Session, u32)> {
+    LastSessionToSignBatchAndFirstBatch::get(getter)
+  }
+}
diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs
new file mode 100644
index 00000000..2a7ab6a1
--- /dev/null
+++ b/processor/scanner/src/report/mod.rs
@@ -0,0 +1,105 @@
+use core::{marker::PhantomData, future::Future};
+
+use serai_db::{DbTxn, Db};
+
+use serai_validator_sets_primitives::Session;
+
+use primitives::task::ContinuallyRan;
+use crate::{
+  db::{BatchData, BatchToReportDb, BatchesToSign},
+  substrate, ScannerFeed,
+};
+
+mod db;
+use db::BatchDb;
+
+// This task begins reporting Batches for signing once the prerequisites are met.
+#[allow(non_snake_case)]
+pub(crate) struct ReportTask<D: Db, S: ScannerFeed> {
+  db: D,
+  _S: PhantomData<S>,
+}
+
+impl<D: Db, S: ScannerFeed> ReportTask<D, S> {
+  pub(crate) fn new(db: D) -> Self {
+    Self { db, _S: PhantomData }
+  }
+}
+
+impl<D: Db, S: ScannerFeed> ContinuallyRan for ReportTask<D, S> {
+  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
+    async move {
+      let mut made_progress = false;
+      loop {
+        let mut txn = self.db.txn();
+        let Some(BatchData {
+          session_to_sign_batch,
+          external_key_for_session_to_sign_batch,
+          batch,
+        }) = BatchToReportDb::<S>::try_recv_batch(&mut txn)
+        else {
+          break;
+        };
+
+        /*
+          If this is the handover Batch, the first Batch signed by a session which retires the
+          prior validator set, then this should only be signed after the prior validator set's
+          actions are fully validated.
+
+          The new session will only be responsible for signing this Batch if the prior key has
+          retired, having successfully completed all its on-external-network actions.
+
+          We check here that the prior session has successfully completed all its
+          on-Serai-network actions by ensuring we've validated all Batches expected from it.
+          Only then do we sign the Batch confirming the handover.
+
+          We also wait for the Batch confirming the handover to be accepted on-chain, ensuring
+          we don't verify the prior session's Batches, sign the handover Batch and the following
+          Batch, have the prior session publish a malicious Batch where our handover Batch
+          should be, before our following Batch becomes our handover Batch.
+        */
+        if session_to_sign_batch != Session(0) {
+          // We may have Session(1)'s first Batch be Batch 0 if Session(0) never publishes a
+          // Batch. This is fine as we'll hit the distinct Session check and then set the
+          // correct values into this DB entry. All other sessions must complete the handover
+          // process, which requires having published at least one Batch
+          let (last_session, first_batch) =
+            BatchDb::last_session_to_sign_batch_and_first_batch(&txn).unwrap_or((Session(0), 0));
+          // Because this boolean was expanded into two named variables, we lose
+          // short-circuiting. That's fine
+          let handover_batch = last_session != session_to_sign_batch;
+          let batch_after_handover_batch =
+            (last_session == session_to_sign_batch) && ((first_batch + 1) == batch.id);
+          if handover_batch || batch_after_handover_batch {
+            let verified_prior_batch = substrate::last_acknowledged_batch::<S>(&txn)
+              // Since `batch.id = 0` in the Session(0)-never-published-a-Batch case, we don't
+              // check `last_acknowledged_batch >= (batch.id - 1)` but instead this
+              .map(|last_acknowledged_batch| (last_acknowledged_batch + 1) >= batch.id)
+              // We've never verified any Batches
+              .unwrap_or(false);
+            if !verified_prior_batch {
+              // Drop the txn to restore the Batch to report to the DB
+              drop(txn);
+              break;
+            }
+          }
+
+          // If this is the handover Batch, update the last session to sign a Batch
+          if handover_batch {
+            BatchDb::set_last_session_to_sign_batch_and_first_batch(
+              &mut txn,
+              session_to_sign_batch,
+              batch.id,
+            );
+          }
+        }
+
+        BatchesToSign::send(&mut txn, &external_key_for_session_to_sign_batch.0, &batch);
+        txn.commit();
+
+        made_progress = true;
+      }
+
+      Ok(made_progress)
+    }
+  }
+}

From b584a2beab5df0b8afc8ecd96283e43e170ebde7 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Mon, 30 Dec 2024 11:07:03 -0500
Subject: [PATCH 227/368] Remove old DB entry from the scanner

We read from it but never wrote to it. It was used to check we didn't flag a
block as notable after reporting it, yet the check ran within the scan task as
it scanned a block. We only batch/report blocks after the scan task has
scanned them, so it was redundant.
---
 processor/scanner/src/db.rs | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/processor/scanner/src/db.rs b/processor/scanner/src/db.rs
index 8790da31..e7fd464b 100644
--- a/processor/scanner/src/db.rs
+++ b/processor/scanner/src/db.rs
@@ -81,8 +81,6 @@ create_db!(
     ActiveKeys: () -> Vec>,
     RetireAt: (key: K) -> u64,

-    // The next block to potentially report
-    NextToPotentiallyReportBlock: () -> u64,
     // Highest acknowledged block
     HighestAcknowledgedBlock: () -> u64,

@@ -277,10 +275,6 @@ impl ScannerGlobalDb {
     blocks in which we receive outputs is notable).
*/ pub(crate) fn flag_notable_due_to_non_external_output(txn: &mut impl DbTxn, block_number: u64) { - assert!( - NextToPotentiallyReportBlock::get(txn).unwrap() <= block_number, - "already potentially reported a block we're only now flagging as notable" - ); NotableBlock::set(txn, block_number, &()); } From 5a42f66dc26a229567d892a3a78ec106772409de Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 30 Dec 2024 11:09:09 -0500 Subject: [PATCH 228/368] alloy 0.9 --- Cargo.lock | 100 +++++++++++------- .../alloy-simple-request-transport/Cargo.toml | 4 +- networks/ethereum/schnorr/Cargo.toml | 8 +- processor/ethereum/Cargo.toml | 8 +- processor/ethereum/deployer/Cargo.toml | 8 +- processor/ethereum/erc20/Cargo.toml | 6 +- processor/ethereum/primitives/Cargo.toml | 2 +- processor/ethereum/router/Cargo.toml | 12 +-- processor/ethereum/test-primitives/Cargo.toml | 6 +- 9 files changed, 86 insertions(+), 68 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 4711b68e..fe3c80bb 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -112,9 +112,9 @@ dependencies = [ [[package]] name = "alloy-consensus" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e88e1edea70787c33e11197d3f32ae380f3db19e6e061e539a5bcf8184a6b326" +checksum = "db66918860ff33920fb9e6d648d1e8cee275321406ea255ac9320f6562e26fec" dependencies = [ "alloy-eips", "alloy-primitives", @@ -130,9 +130,9 @@ dependencies = [ [[package]] name = "alloy-consensus-any" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "57b1bb53f40c0273cd1975573cd457b39213e68584e36d1401d25fd0398a1d65" +checksum = "04519b5157de8a2166bddb07d84a63590100f1d3e2b3682144e787f1c27ccdac" dependencies = [ "alloy-consensus", "alloy-eips", @@ -164,9 +164,9 @@ dependencies = [ [[package]] name = "alloy-eip7702" -version = "0.4.2" +version = "0.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4c986539255fb839d1533c128e190e557e52ff652c9ef62939e233a81dd93f7e" +checksum = "cabf647eb4650c91a9d38cb6f972bb320009e7e9d61765fb688a86f1563b33e8" dependencies = [ "alloy-primitives", "alloy-rlp", @@ -177,9 +177,9 @@ dependencies = [ [[package]] name = "alloy-eips" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5f9fadfe089e9ccc0650473f2d4ef0a28bc015bbca5631d9f0f09e49b557fdb3" +checksum = "e56518f46b074d562ac345238343e2231b672a13aca18142d285f95cc055980b" dependencies = [ "alloy-eip2930", "alloy-eip7702", @@ -195,10 +195,11 @@ dependencies = [ [[package]] name = "alloy-genesis" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2b2a4cf7b70f3495788e74ce1c765260ffe38820a2a774ff4aacb62e31ea73f9" +checksum = "2cf200fd4c28435995e47b26d4761a4cf6e1011a13b81f9a9afaf16a93d9fd09" dependencies = [ + "alloy-eips", "alloy-primitives", "alloy-serde", "alloy-trie", @@ -207,9 +208,9 @@ dependencies = [ [[package]] name = "alloy-json-rpc" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e29040b9d5fe2fb70415531882685b64f8efd08dfbd6cc907120650504821105" +checksum = "b17c5ada5faf0f9d2921e8b20971eced68abbc92a272b0502cac8b1d00f56777" dependencies = [ "alloy-primitives", "alloy-sol-types", @@ -221,9 +222,9 @@ dependencies = [ [[package]] name = "alloy-network" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" 
-checksum = "510cc00b318db0dfccfdd2d032411cfae64fc144aef9679409e014145d3dacc4" +checksum = "24f3117647e3262f6db9e18b371bf67c5810270c0cf915786c30fad3b1739561" dependencies = [ "alloy-consensus", "alloy-consensus-any", @@ -246,9 +247,9 @@ dependencies = [ [[package]] name = "alloy-network-primitives" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9081c099e798b8a2bba2145eb82a9a146f01fc7a35e9ab6e7b43305051f97550" +checksum = "1535a4577648ec2fd3c446d4644d9b8e9e01e5816be53a5d515dc1624e2227b2" dependencies = [ "alloy-consensus", "alloy-eips", @@ -259,9 +260,9 @@ dependencies = [ [[package]] name = "alloy-node-bindings" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "aef9849fb8bbb28f69f2cbdb4b0dac2f0e35c04f6078a00dfb8486469aed02de" +checksum = "bf741e871fb62c80e0007041e8bc1e81978abfd98aafea8354472f06bfd4d309" dependencies = [ "alloy-genesis", "alloy-primitives", @@ -304,9 +305,9 @@ dependencies = [ [[package]] name = "alloy-provider" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dc2dfaddd9a30aa870a78a4e1316e3e115ec1e12e552cbc881310456b85c1f24" +checksum = "fcfa2db03d4221b5ca14bff7dbed4712689cb87a3e826af522468783ff05ec5d" dependencies = [ "alloy-chains", "alloy-consensus", @@ -360,9 +361,9 @@ dependencies = [ [[package]] name = "alloy-rpc-client" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "531137b283547d5b9a5cafc96b006c64ef76810c681d606f28be9781955293b6" +checksum = "d2ec6963b08f1c6ef8eacc01dbba20f2c6a1533550403f6b52dbbe0da0360834" dependencies = [ "alloy-json-rpc", "alloy-primitives", @@ -381,9 +382,9 @@ dependencies = [ [[package]] name = "alloy-rpc-types-any" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ed98e1af55a7d856bfa385f30f63d8d56be2513593655c904a8f4a7ec963aa3e" +checksum = "c64a83112b09bd293ef522bfa3800fa2d2df4d72f2bcd3a84b08490503b22e55" dependencies = [ "alloy-consensus-any", "alloy-rpc-types-eth", @@ -392,9 +393,9 @@ dependencies = [ [[package]] name = "alloy-rpc-types-eth" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8737d7a6e37ca7bba9c23e9495c6534caec6760eb24abc9d5ffbaaba147818e1" +checksum = "5fc1892a1ac0d2a49c063f0791aa6bde342f020c5d37aaaec14832b661802cb4" dependencies = [ "alloy-consensus", "alloy-consensus-any", @@ -404,17 +405,17 @@ dependencies = [ "alloy-rlp", "alloy-serde", "alloy-sol-types", - "derive_more", "itertools 0.13.0", "serde", "serde_json", + "thiserror 2.0.9", ] [[package]] name = "alloy-serde" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5851bf8d5ad33014bd0c45153c603303e730acc8a209450a7ae6b4a12c2789e2" +checksum = "17939f6bef49268e4494158fce1ab8913cd6164ec3f9a4ada2c677b9b5a77f2f" dependencies = [ "alloy-primitives", "serde", @@ -423,9 +424,9 @@ dependencies = [ [[package]] name = "alloy-signer" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7e10ca565da6500cca015ba35ee424d59798f2e1b85bc0dd8f81dafd401f029a" +checksum = "77d1f0762a44338f0e05987103bd5919df52170d949080bfebfeb6aaaa867c39" dependencies = [ "alloy-primitives", "async-trait", @@ -506,9 +507,9 @@ dependencies = [ [[package]] name = "alloy-transport" 
-version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "538a04a37221469cac0ce231b737fd174de2fdfcdd843bdd068cb39ed3e066ad" +checksum = "3a3827275a4eed3431ce876a59c76fd19effc2a8c09566b2603e3a3376d38af0" dependencies = [ "alloy-json-rpc", "base64 0.22.1", @@ -526,9 +527,9 @@ dependencies = [ [[package]] name = "alloy-transport-http" -version = "0.8.3" +version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2ed40eb1e1265b2911512f6aa1dcece9702d078f5a646730c45e39e2be00ac1c" +checksum = "958417ddf333c55b0627cb7fbee7c6666895061dee79f50404dd6dbdd8e9eba0" dependencies = [ "alloy-transport", "url", @@ -2527,7 +2528,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "33d852cb9b869c2a9b3df2f71a3074817f01e1844f839a144f5fcef059a4eb5d" dependencies = [ "libc", - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -3547,7 +3548,7 @@ dependencies = [ "httpdate", "itoa", "pin-project-lite", - "socket2 0.4.10", + "socket2 0.5.8", "tokio", "tower-service", "tracing", @@ -4111,7 +4112,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fc2f4eb4bc735547cfed7c0a4922cbd04a4655978c09b54f1f7b228750664c34" dependencies = [ "cfg-if", - "windows-targets 0.48.5", + "windows-targets 0.52.6", ] [[package]] @@ -6906,7 +6907,7 @@ dependencies = [ "errno", "libc", "linux-raw-sys", - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -8339,6 +8340,21 @@ dependencies = [ "zeroize", ] +[[package]] +name = "serai-coordinator-substrate" +version = "0.1.0" +dependencies = [ + "blake2", + "borsh", + "log", + "parity-scale-codec", + "serai-client", + "serai-cosign", + "serai-db", + "serai-task", + "tokio", +] + [[package]] name = "serai-coordinator-tests" version = "0.1.0" @@ -8944,6 +8960,7 @@ dependencies = [ name = "serai-processor-scanner" version = "0.1.0" dependencies = [ + "blake2", "borsh", "group", "hex", @@ -8956,6 +8973,7 @@ dependencies = [ "serai-processor-messages", "serai-processor-primitives", "serai-processor-scheduler-primitives", + "serai-validator-sets-primitives", "tokio", ] @@ -10467,7 +10485,7 @@ dependencies = [ "fastrand", "once_cell", "rustix", - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -11693,7 +11711,7 @@ version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb" dependencies = [ - "windows-sys 0.48.0", + "windows-sys 0.59.0", ] [[package]] diff --git a/networks/ethereum/alloy-simple-request-transport/Cargo.toml b/networks/ethereum/alloy-simple-request-transport/Cargo.toml index c470c727..de5925df 100644 --- a/networks/ethereum/alloy-simple-request-transport/Cargo.toml +++ b/networks/ethereum/alloy-simple-request-transport/Cargo.toml @@ -21,8 +21,8 @@ tower = "0.5" serde_json = { version = "1", default-features = false } simple-request = { path = "../../../common/request", version = "0.1", default-features = false } -alloy-json-rpc = { version = "0.8", default-features = false } -alloy-transport = { version = "0.8", default-features = false } +alloy-json-rpc = { version = "0.9", default-features = false } +alloy-transport = { version = "0.9", default-features = false } [features] default = ["tls"] diff --git a/networks/ethereum/schnorr/Cargo.toml b/networks/ethereum/schnorr/Cargo.toml index 90c9d418..42797bb7 100644 --- a/networks/ethereum/schnorr/Cargo.toml +++ b/networks/ethereum/schnorr/Cargo.toml @@ 
-33,10 +33,10 @@ alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-rpc-types-eth = { version = "0.8", default-features = false } -alloy-rpc-client = { version = "0.8", default-features = false } -alloy-provider = { version = "0.8", default-features = false } +alloy-rpc-types-eth = { version = "0.9", default-features = false } +alloy-rpc-client = { version = "0.9", default-features = false } +alloy-provider = { version = "0.9", default-features = false } -alloy-node-bindings = { version = "0.8", default-features = false } +alloy-node-bindings = { version = "0.9", default-features = false } tokio = { version = "1", default-features = false, features = ["macros"] } diff --git a/processor/ethereum/Cargo.toml b/processor/ethereum/Cargo.toml index 282af928..94594b93 100644 --- a/processor/ethereum/Cargo.toml +++ b/processor/ethereum/Cargo.toml @@ -34,11 +34,11 @@ k256 = { version = "^0.13.1", default-features = false, features = ["std"] } alloy-core = { version = "0.8", default-features = false } alloy-rlp = { version = "0.3", default-features = false } -alloy-rpc-types-eth = { version = "0.8", default-features = false } -alloy-transport = { version = "0.8", default-features = false } +alloy-rpc-types-eth = { version = "0.9", default-features = false } +alloy-transport = { version = "0.9", default-features = false } alloy-simple-request-transport = { path = "../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-rpc-client = { version = "0.8", default-features = false } -alloy-provider = { version = "0.8", default-features = false } +alloy-rpc-client = { version = "0.9", default-features = false } +alloy-provider = { version = "0.9", default-features = false } serai-client = { path = "../../substrate/client", default-features = false, features = ["ethereum"] } diff --git a/processor/ethereum/deployer/Cargo.toml b/processor/ethereum/deployer/Cargo.toml index ba523b42..1b8e191e 100644 --- a/processor/ethereum/deployer/Cargo.toml +++ b/processor/ethereum/deployer/Cargo.toml @@ -22,12 +22,12 @@ alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } alloy-sol-macro = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.8", default-features = false } +alloy-consensus = { version = "0.9", default-features = false } -alloy-rpc-types-eth = { version = "0.8", default-features = false } -alloy-transport = { version = "0.8", default-features = false } +alloy-rpc-types-eth = { version = "0.9", default-features = false } +alloy-transport = { version = "0.9", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.8", default-features = false } +alloy-provider = { version = "0.9", default-features = false } ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false } diff --git a/processor/ethereum/erc20/Cargo.toml b/processor/ethereum/erc20/Cargo.toml index ad6ca6ad..befa0f29 100644 --- a/processor/ethereum/erc20/Cargo.toml +++ b/processor/ethereum/erc20/Cargo.toml @@ -22,9 +22,9 @@ alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", 
default-features = false } alloy-sol-macro = { version = "0.8", default-features = false } -alloy-rpc-types-eth = { version = "0.8", default-features = false } -alloy-transport = { version = "0.8", default-features = false } +alloy-rpc-types-eth = { version = "0.9", default-features = false } +alloy-transport = { version = "0.9", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.8", default-features = false } +alloy-provider = { version = "0.9", default-features = false } tokio = { version = "1", default-features = false, features = ["rt"] } diff --git a/processor/ethereum/primitives/Cargo.toml b/processor/ethereum/primitives/Cargo.toml index 33740c7f..180f5db7 100644 --- a/processor/ethereum/primitives/Cargo.toml +++ b/processor/ethereum/primitives/Cargo.toml @@ -21,4 +21,4 @@ group = { version = "0.13", default-features = false } k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic"] } alloy-core = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.8", default-features = false, features = ["k256"] } +alloy-consensus = { version = "0.9", default-features = false, features = ["k256"] } diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index 97594de0..46b1f203 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -24,12 +24,12 @@ alloy-core = { version = "0.8", default-features = false } alloy-sol-types = { version = "0.8", default-features = false } alloy-sol-macro = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.8", default-features = false } +alloy-consensus = { version = "0.9", default-features = false } -alloy-rpc-types-eth = { version = "0.8", default-features = false } -alloy-transport = { version = "0.8", default-features = false } +alloy-rpc-types-eth = { version = "0.9", default-features = false } +alloy-transport = { version = "0.9", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.8", default-features = false } +alloy-provider = { version = "0.9", default-features = false } ethereum-schnorr = { package = "ethereum-schnorr-contract", path = "../../../networks/ethereum/schnorr", default-features = false } @@ -53,8 +53,8 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] } k256 = { version = "0.13", default-features = false, features = ["std"] } -alloy-rpc-client = { version = "0.8", default-features = false } -alloy-node-bindings = { version = "0.8", default-features = false } +alloy-rpc-client = { version = "0.9", default-features = false } +alloy-node-bindings = { version = "0.9", default-features = false } tokio = { version = "1.0", default-features = false, features = ["rt-multi-thread", "macros"] } diff --git a/processor/ethereum/test-primitives/Cargo.toml b/processor/ethereum/test-primitives/Cargo.toml index d77e7d6e..6d3b4c1d 100644 --- a/processor/ethereum/test-primitives/Cargo.toml +++ b/processor/ethereum/test-primitives/Cargo.toml @@ -20,10 +20,10 @@ workspace = true k256 = { version = "0.13", default-features = false, features = ["std"] } alloy-core = { version = "0.8", default-features = false } -alloy-consensus = { version = "0.8", default-features = false, features = ["std"] } 
+alloy-consensus = { version = "0.9", default-features = false, features = ["std"] } -alloy-rpc-types-eth = { version = "0.8", default-features = false } +alloy-rpc-types-eth = { version = "0.9", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } -alloy-provider = { version = "0.8", default-features = false } +alloy-provider = { version = "0.9", default-features = false } ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false } From 8c9441a1a5b30435aa35c06b9d12ae4f2a68f1a5 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 31 Dec 2024 10:37:19 -0500 Subject: [PATCH 229/368] Redo coordinator's Substrate scanner --- .github/workflows/msrv.yml | 1 + .github/workflows/tests.yml | 1 + Cargo.lock | 2 + Cargo.toml | 1 + coordinator/cosign/Cargo.toml | 9 +- coordinator/cosign/src/evaluator.rs | 8 + coordinator/cosign/src/lib.rs | 150 +++-- coordinator/src/substrate/db.rs | 32 - coordinator/src/substrate/mod.rs | 583 ------------------ coordinator/substrate/Cargo.toml | 35 ++ coordinator/substrate/LICENSE | 15 + coordinator/substrate/README.md | 14 + coordinator/substrate/src/canonical.rs | 216 +++++++ coordinator/substrate/src/ephemeral.rs | 240 +++++++ coordinator/substrate/src/lib.rs | 109 ++++ deny.toml | 1 + processor/bin/src/coordinator.rs | 5 +- processor/bin/src/lib.rs | 9 +- processor/messages/src/lib.rs | 4 +- processor/signers/src/lib.rs | 2 +- substrate/abi/src/in_instructions.rs | 15 +- substrate/client/src/serai/in_instructions.rs | 5 +- substrate/client/src/serai/mod.rs | 6 +- .../client/tests/common/genesis_liquidity.rs | 3 +- substrate/in-instructions/pallet/src/lib.rs | 69 +-- 25 files changed, 792 insertions(+), 743 deletions(-) delete mode 100644 coordinator/src/substrate/db.rs delete mode 100644 coordinator/src/substrate/mod.rs create mode 100644 coordinator/substrate/Cargo.toml create mode 100644 coordinator/substrate/LICENSE create mode 100644 coordinator/substrate/README.md create mode 100644 coordinator/substrate/src/canonical.rs create mode 100644 coordinator/substrate/src/ephemeral.rs create mode 100644 coordinator/substrate/src/lib.rs diff --git a/.github/workflows/msrv.yml b/.github/workflows/msrv.yml index 75fcdd79..4d37fab7 100644 --- a/.github/workflows/msrv.yml +++ b/.github/workflows/msrv.yml @@ -176,6 +176,7 @@ jobs: cargo msrv verify --manifest-path coordinator/tributary/tendermint/Cargo.toml cargo msrv verify --manifest-path coordinator/tributary/Cargo.toml cargo msrv verify --manifest-path coordinator/cosign/Cargo.toml + cargo msrv verify --manifest-path coordinator/substrate/Cargo.toml cargo msrv verify --manifest-path coordinator/Cargo.toml msrv-substrate: diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 9f1b0a1f..65a35cc3 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -62,6 +62,7 @@ jobs: -p tendermint-machine \ -p tributary-chain \ -p serai-cosign \ + -p serai-coordinator-substrate \ -p serai-coordinator \ -p serai-orchestrator \ -p serai-docker-tests diff --git a/Cargo.lock b/Cargo.lock index fe3c80bb..40f0e276 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8346,11 +8346,13 @@ version = "0.1.0" dependencies = [ "blake2", "borsh", + "futures", "log", "parity-scale-codec", "serai-client", "serai-cosign", "serai-db", + "serai-processor-messages", "serai-task", "tokio", ] diff --git a/Cargo.toml b/Cargo.toml index 
eea39f37..688537b1 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -99,6 +99,7 @@ members = [ "coordinator/tributary/tendermint", "coordinator/tributary", "coordinator/cosign", + "coordinator/substrate", "coordinator", "substrate/primitives", diff --git a/coordinator/cosign/Cargo.toml b/coordinator/cosign/Cargo.toml index bbd96399..fa5bd8ee 100644 --- a/coordinator/cosign/Cargo.toml +++ b/coordinator/cosign/Cargo.toml @@ -14,9 +14,6 @@ rust-version = "1.81" all-features = true rustdoc-args = ["--cfg", "docsrs"] -[package.metadata.cargo-machete] -ignored = ["scale"] - [lints] workspace = true @@ -30,7 +27,7 @@ serai-client = { path = "../../substrate/client", default-features = false, feat log = { version = "0.4", default-features = false, features = ["std"] } -tokio = { version = "1", default-features = false, features = [] } +tokio = { version = "1", default-features = false } -serai-db = { path = "../../common/db" } -serai-task = { path = "../../common/task" } +serai-db = { version = "0.1.1", path = "../../common/db" } +serai-task = { version = "0.1", path = "../../common/task" } diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs index 856a6e00..fc606ecc 100644 --- a/coordinator/cosign/src/evaluator.rs +++ b/coordinator/cosign/src/evaluator.rs @@ -122,6 +122,8 @@ impl ContinuallyRan for CosignEvaluatorTask ContinuallyRan for CosignEvaluatorTask ContinuallyRan for CosignEvaluatorTask bool { + let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false }; + let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false }; + + signer.verify_simple(COSIGN_CONTEXT, &self.cosign.encode(), &signature).is_ok() + } +} + create_db! { Cosign { // The following are populated by the intend task and used throughout the library @@ -97,64 +156,6 @@ create_db! { } } -/// If the block has events. -#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] -enum HasEvents { - /// The block had a notable event. - /// - /// This is a special case as blocks with key gen events change the keys used for cosigning, and - /// accordingly must be cosigned before we advance past them. - Notable, - /// The block had an non-notable event justifying a cosign. - NonNotable, - /// The block didn't have an event justifying a cosign. - No, -} - -/// An intended cosign. -#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] -struct CosignIntent { - /// The global session this cosign is being performed under. - global_session: [u8; 32], - /// The number of the block to cosign. - block_number: u64, - /// The hash of the block to cosign. - block_hash: [u8; 32], - /// If this cosign must be handled before further cosigns are. - notable: bool, -} - -/// A cosign. -#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] -pub struct Cosign { - /// The global session this cosign is being performed under. - pub global_session: [u8; 32], - /// The number of the block to cosign. - pub block_number: u64, - /// The hash of the block to cosign. - pub block_hash: [u8; 32], - /// The actual cosigner. - pub cosigner: NetworkId, -} - -/// A signed cosign. -#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] -pub struct SignedCosign { - /// The cosign. - pub cosign: Cosign, - /// The signature for the cosign. 
-  pub signature: [u8; 64],
-}
-
-impl SignedCosign {
-  fn verify_signature(&self, signer: serai_client::Public) -> bool {
-    let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false };
-    let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false };
-
-    signer.verify_simple(COSIGN_CONTEXT, &borsh::to_vec(&self.cosign).unwrap(), &signature).is_ok()
-  }
-}
-
 /// Fetch the keys used for cosigning by a specific network.
 async fn keys_for_network(
   serai: &TemporalSerai<'_>,
@@ -219,6 +220,7 @@ pub trait RequestNotableCosigns: 'static + Send {
 }

 /// An error used to indicate the cosigning protocol has faulted.
+#[derive(Debug)]
 pub struct Faulted;

 /// The interface to manage cosigning with.
@@ -255,12 +257,23 @@ impl<D: Db> Cosigning<D> {
   }

   /// The latest cosigned block number.
-  pub fn latest_cosigned_block_number(&self) -> Result<u64, Faulted> {
-    if FaultedSession::get(&self.db).is_some() {
+  pub fn latest_cosigned_block_number(getter: &impl Get) -> Result<u64, Faulted> {
+    if FaultedSession::get(getter).is_some() {
       Err(Faulted)?;
     }

-    Ok(LatestCosignedBlockNumber::get(&self.db).unwrap_or(0))
+    Ok(LatestCosignedBlockNumber::get(getter).unwrap_or(0))
+  }
+
+  /// Fetch a cosigned Substrate block by its block number.
+  pub fn cosigned_block(
+    getter: &impl Get,
+    block_number: u64,
+  ) -> Result<Option<[u8; 32]>, Faulted> {
+    if block_number > Self::latest_cosigned_block_number(getter)? {
+      return Ok(None);
+    }
+
+    Ok(Some(
+      SubstrateBlocks::get(getter, block_number).expect("cosigned block but didn't index it"),
+    ))
+  }

   /// Fetch the notable cosigns for a global session in order to respond to requests.
@@ -422,4 +435,19 @@ impl<D: Db> Cosigning<D> {
     txn.commit();
     Ok(true)
   }
+
+  /// Receive the intended cosigns to produce for this ValidatorSet.
+  ///
+  /// All cosigns intended, up to and including the next notable cosign, are returned.
+  ///
+  /// This will drain the internal channel and not re-yield these intentions again.
+  pub fn intended_cosigns(txn: &mut impl DbTxn, set: ValidatorSet) -> Vec<CosignIntent> {
+    let mut res: Vec<CosignIntent> = vec![];
+    // While we have yet to find a notable cosign...
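+    // (e.g., if the channel holds intents A, B, and C, where B is notable, this returns
+    // [A, B] and leaves C queued for a future call)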
+ while !res.last().map(|cosign| cosign.notable).unwrap_or(false) { + let Some(intent) = intend::IntendedCosigns::try_recv(txn, set) else { break }; + res.push(intent); + } + res + } } diff --git a/coordinator/src/substrate/db.rs b/coordinator/src/substrate/db.rs deleted file mode 100644 index 2621e5ef..00000000 --- a/coordinator/src/substrate/db.rs +++ /dev/null @@ -1,32 +0,0 @@ -use serai_client::primitives::NetworkId; - -pub use serai_db::*; - -mod inner_db { - use super::*; - - create_db!( - SubstrateDb { - NextBlock: () -> u64, - HandledEvent: (block: [u8; 32]) -> u32, - BatchInstructionsHashDb: (network: NetworkId, id: u32) -> [u8; 32] - } - ); -} -pub(crate) use inner_db::{NextBlock, BatchInstructionsHashDb}; - -pub struct HandledEvent; -impl HandledEvent { - fn next_to_handle_event(getter: &impl Get, block: [u8; 32]) -> u32 { - inner_db::HandledEvent::get(getter, block).map_or(0, |last| last + 1) - } - pub fn is_unhandled(getter: &impl Get, block: [u8; 32], event_id: u32) -> bool { - let next = Self::next_to_handle_event(getter, block); - assert!(next >= event_id); - next == event_id - } - pub fn handle_event(txn: &mut impl DbTxn, block: [u8; 32], index: u32) { - assert!(Self::next_to_handle_event(txn, block) == index); - inner_db::HandledEvent::set(txn, block, &index); - } -} diff --git a/coordinator/src/substrate/mod.rs b/coordinator/src/substrate/mod.rs deleted file mode 100644 index d1946b7e..00000000 --- a/coordinator/src/substrate/mod.rs +++ /dev/null @@ -1,583 +0,0 @@ -use core::{ops::Deref, time::Duration}; -use std::{ - sync::Arc, - collections::{HashSet, HashMap}, -}; - -use zeroize::Zeroizing; - -use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; - -use serai_client::{ - SeraiError, Block, Serai, TemporalSerai, - primitives::{BlockHash, EmbeddedEllipticCurve, NetworkId}, - validator_sets::{primitives::ValidatorSet, ValidatorSetsEvent}, - in_instructions::InInstructionsEvent, - coins::CoinsEvent, -}; - -use serai_db::DbTxn; - -use processor_messages::SubstrateContext; - -use tokio::{sync::mpsc, time::sleep}; - -use crate::{ - Db, - processors::Processors, - tributary::{TributarySpec, SeraiDkgCompleted}, -}; - -mod db; -pub use db::*; - -mod cosign; -pub use cosign::*; - -async fn in_set( - key: &Zeroizing<::F>, - serai: &TemporalSerai<'_>, - set: ValidatorSet, -) -> Result, SeraiError> { - let Some(participants) = serai.validator_sets().participants(set.network).await? else { - return Ok(None); - }; - let key = (Ristretto::generator() * key.deref()).to_bytes(); - Ok(Some(participants.iter().any(|(participant, _)| participant.0 == key))) -} - -async fn handle_new_set( - txn: &mut D::Transaction<'_>, - key: &Zeroizing<::F>, - new_tributary_spec: &mpsc::UnboundedSender, - serai: &Serai, - block: &Block, - set: ValidatorSet, -) -> Result<(), SeraiError> { - if in_set(key, &serai.as_of(block.hash()), set) - .await? 
- .expect("NewSet for set which doesn't exist") - { - log::info!("present in set {:?}", set); - - let validators; - let mut evrf_public_keys = vec![]; - { - let serai = serai.as_of(block.hash()); - let serai = serai.validator_sets(); - let set_participants = - serai.participants(set.network).await?.expect("NewSet for set which doesn't exist"); - - validators = set_participants - .iter() - .map(|(k, w)| { - ( - ::read_G::<&[u8]>(&mut k.0.as_ref()) - .expect("invalid key registered as participant"), - u16::try_from(*w).unwrap(), - ) - }) - .collect::>(); - for (validator, _) in set_participants { - // This is only run for external networks which always do a DKG for Serai - let substrate = serai - .embedded_elliptic_curve_key(validator, EmbeddedEllipticCurve::Embedwards25519) - .await? - .expect("Serai called NewSet on a validator without an Embedwards25519 key"); - // `embedded_elliptic_curves` is documented to have the second entry be the - // network-specific curve (if it exists and is distinct from Embedwards25519) - let network = - if let Some(embedded_elliptic_curve) = set.network.embedded_elliptic_curves().get(1) { - serai.embedded_elliptic_curve_key(validator, *embedded_elliptic_curve).await?.expect( - "Serai called NewSet on a validator without the embedded key required for the network", - ) - } else { - substrate.clone() - }; - evrf_public_keys.push(( - <[u8; 32]>::try_from(substrate) - .expect("validator-sets pallet accepted a key of an invalid length"), - network, - )); - } - }; - - let time = if let Ok(time) = block.time() { - time - } else { - assert_eq!(block.number(), 0); - // Use the next block's time - loop { - let Ok(Some(res)) = serai.finalized_block_by_number(1).await else { - sleep(Duration::from_secs(5)).await; - continue; - }; - break res.time().unwrap(); - } - }; - // The block time is in milliseconds yet the Tributary is in seconds - let time = time / 1000; - // Since this block is in the past, and Tendermint doesn't play nice with starting chains after - // their start time (though it does eventually work), delay the start time by 120 seconds - // This is meant to handle ~20 blocks of lack of finalization for this first block - const SUBSTRATE_TO_TRIBUTARY_TIME_DELAY: u64 = 120; - let time = time + SUBSTRATE_TO_TRIBUTARY_TIME_DELAY; - - let spec = TributarySpec::new(block.hash(), time, set, validators, evrf_public_keys); - - log::info!("creating new tributary for {:?}", spec.set()); - - // Save it to the database now, not on the channel receiver's side, so this is safe against - // reboots - // If this txn finishes, and we reboot, then this'll be reloaded from active Tributaries - // If this txn doesn't finish, this will be re-fired - // If we waited to save to the DB, this txn may be finished, preventing re-firing, yet the - // prior fired event may have not been received yet - crate::ActiveTributaryDb::add_participating_in_tributary(txn, &spec); - - new_tributary_spec.send(spec).unwrap(); - } else { - log::info!("not present in new set {:?}", set); - } - - Ok(()) -} - -async fn handle_batch_and_burns( - txn: &mut impl DbTxn, - processors: &Pro, - serai: &Serai, - block: &Block, -) -> Result<(), SeraiError> { - // Track which networks had events with a Vec in ordr to preserve the insertion order - // While that shouldn't be needed, ensuring order never hurts, and may enable design choices - // with regards to Processor <-> Coordinator message passing - let mut networks_with_event = vec![]; - let mut network_had_event = |burns: &mut HashMap<_, _>, batches: &mut 
HashMap<_, _>, network| { - // Don't insert this network multiple times - // A Vec is still used in order to maintain the insertion order - if !networks_with_event.contains(&network) { - networks_with_event.push(network); - burns.insert(network, vec![]); - batches.insert(network, vec![]); - } - }; - - let mut batch_block = HashMap::new(); - let mut batches = HashMap::>::new(); - let mut burns = HashMap::new(); - - let serai = serai.as_of(block.hash()); - for batch in serai.in_instructions().batch_events().await? { - if let InInstructionsEvent::Batch { network, id, block: network_block, instructions_hash } = - batch - { - network_had_event(&mut burns, &mut batches, network); - - BatchInstructionsHashDb::set(txn, network, id, &instructions_hash); - - // Make sure this is the only Batch event for this network in this Block - assert!(batch_block.insert(network, network_block).is_none()); - - // Add the batch included by this block - batches.get_mut(&network).unwrap().push(id); - } else { - panic!("Batch event wasn't Batch: {batch:?}"); - } - } - - for burn in serai.coins().burn_with_instruction_events().await? { - if let CoinsEvent::BurnWithInstruction { from: _, instruction } = burn { - let network = instruction.balance.coin.network(); - network_had_event(&mut burns, &mut batches, network); - - // network_had_event should register an entry in burns - burns.get_mut(&network).unwrap().push(instruction); - } else { - panic!("Burn event wasn't Burn: {burn:?}"); - } - } - - assert_eq!(HashSet::<&_>::from_iter(networks_with_event.iter()).len(), networks_with_event.len()); - - for network in networks_with_event { - let network_latest_finalized_block = if let Some(block) = batch_block.remove(&network) { - block - } else { - // If it's had a batch or a burn, it must have had a block acknowledged - serai - .in_instructions() - .latest_block_for_network(network) - .await? - .expect("network had a batch/burn yet never set a latest block") - }; - - processors - .send( - network, - processor_messages::substrate::CoordinatorMessage::SubstrateBlock { - context: SubstrateContext { - serai_time: block.time().unwrap() / 1000, - network_latest_finalized_block, - }, - block: block.number(), - burns: burns.remove(&network).unwrap(), - batches: batches.remove(&network).unwrap(), - }, - ) - .await; - } - - Ok(()) -} - -// Handle a specific Substrate block, returning an error when it fails to get data -// (not blocking / holding) -#[allow(clippy::too_many_arguments)] -async fn handle_block( - db: &mut D, - key: &Zeroizing<::F>, - new_tributary_spec: &mpsc::UnboundedSender, - perform_slash_report: &mpsc::UnboundedSender, - tributary_retired: &mpsc::UnboundedSender, - processors: &Pro, - serai: &Serai, - block: Block, -) -> Result<(), SeraiError> { - let hash = block.hash(); - - // Define an indexed event ID. - let mut event_id = 0; - - // If a new validator set was activated, create tributary/inform processor to do a DKG - for new_set in serai.as_of(hash).validator_sets().new_set_events().await? 
{ - // Individually mark each event as handled so on reboot, we minimize duplicates - // Additionally, if the Serai connection also fails 1/100 times, this means a block with 1000 - // events will successfully be incrementally handled - // (though the Serai connection should be stable, making this unnecessary) - let ValidatorSetsEvent::NewSet { set } = new_set else { - panic!("NewSet event wasn't NewSet: {new_set:?}"); - }; - - // If this is Serai, do nothing - // We only coordinate/process external networks - if set.network == NetworkId::Serai { - continue; - } - - if HandledEvent::is_unhandled(db, hash, event_id) { - log::info!("found fresh new set event {:?}", new_set); - let mut txn = db.txn(); - handle_new_set::(&mut txn, key, new_tributary_spec, serai, &block, set).await?; - HandledEvent::handle_event(&mut txn, hash, event_id); - txn.commit(); - } - event_id += 1; - } - - // If a key pair was confirmed, inform the processor - for key_gen in serai.as_of(hash).validator_sets().key_gen_events().await? { - if HandledEvent::is_unhandled(db, hash, event_id) { - log::info!("found fresh key gen event {:?}", key_gen); - let ValidatorSetsEvent::KeyGen { set, key_pair } = key_gen else { - panic!("KeyGen event wasn't KeyGen: {key_gen:?}"); - }; - let substrate_key = key_pair.0 .0; - processors - .send( - set.network, - processor_messages::substrate::CoordinatorMessage::ConfirmKeyPair { - context: SubstrateContext { - serai_time: block.time().unwrap() / 1000, - network_latest_finalized_block: serai - .as_of(block.hash()) - .in_instructions() - .latest_block_for_network(set.network) - .await? - // The processor treats this as a magic value which will cause it to find a network - // block which has a time greater than or equal to the Serai time - .unwrap_or(BlockHash([0; 32])), - }, - session: set.session, - key_pair, - }, - ) - .await; - - // TODO: If we were in the set, yet were removed, drop the tributary - - let mut txn = db.txn(); - SeraiDkgCompleted::set(&mut txn, set, &substrate_key); - HandledEvent::handle_event(&mut txn, hash, event_id); - txn.commit(); - } - event_id += 1; - } - - for accepted_handover in serai.as_of(hash).validator_sets().accepted_handover_events().await? { - let ValidatorSetsEvent::AcceptedHandover { set } = accepted_handover else { - panic!("AcceptedHandover event wasn't AcceptedHandover: {accepted_handover:?}"); - }; - - if set.network == NetworkId::Serai { - continue; - } - - if HandledEvent::is_unhandled(db, hash, event_id) { - log::info!("found fresh accepted handover event {:?}", accepted_handover); - // TODO: This isn't atomic with the event handling - // Send a oneshot receiver so we can await the response? - perform_slash_report.send(set).unwrap(); - let mut txn = db.txn(); - HandledEvent::handle_event(&mut txn, hash, event_id); - txn.commit(); - } - event_id += 1; - } - - for retired_set in serai.as_of(hash).validator_sets().set_retired_events().await? 
{ - let ValidatorSetsEvent::SetRetired { set } = retired_set else { - panic!("SetRetired event wasn't SetRetired: {retired_set:?}"); - }; - - if set.network == NetworkId::Serai { - continue; - } - - if HandledEvent::is_unhandled(db, hash, event_id) { - log::info!("found fresh set retired event {:?}", retired_set); - let mut txn = db.txn(); - crate::ActiveTributaryDb::retire_tributary(&mut txn, set); - tributary_retired.send(set).unwrap(); - HandledEvent::handle_event(&mut txn, hash, event_id); - txn.commit(); - } - event_id += 1; - } - - // Finally, tell the processor of acknowledged blocks/burns - // This uses a single event as unlike prior events which individually executed code, all - // following events share data collection - if HandledEvent::is_unhandled(db, hash, event_id) { - let mut txn = db.txn(); - handle_batch_and_burns(&mut txn, processors, serai, &block).await?; - HandledEvent::handle_event(&mut txn, hash, event_id); - txn.commit(); - } - - Ok(()) -} - -#[allow(clippy::too_many_arguments)] -async fn handle_new_blocks( - db: &mut D, - key: &Zeroizing<::F>, - new_tributary_spec: &mpsc::UnboundedSender, - perform_slash_report: &mpsc::UnboundedSender, - tributary_retired: &mpsc::UnboundedSender, - processors: &Pro, - serai: &Serai, - next_block: &mut u64, -) -> Result<(), SeraiError> { - // Check if there's been a new Substrate block - let latest_number = serai.latest_finalized_block().await?.number(); - - // Advance the cosigning protocol - advance_cosign_protocol(db, key, serai, latest_number).await?; - - // Reduce to the latest cosigned block - let latest_number = latest_number.min(LatestCosignedBlock::latest_cosigned_block(db)); - - if latest_number < *next_block { - return Ok(()); - } - - for b in *next_block ..= latest_number { - let block = serai - .finalized_block_by_number(b) - .await? 
- .expect("couldn't get block before the latest finalized block"); - - log::info!("handling substrate block {b}"); - handle_block( - db, - key, - new_tributary_spec, - perform_slash_report, - tributary_retired, - processors, - serai, - block, - ) - .await?; - *next_block += 1; - - let mut txn = db.txn(); - NextBlock::set(&mut txn, next_block); - txn.commit(); - - log::info!("handled substrate block {b}"); - } - - Ok(()) -} - -pub async fn scan_task( - mut db: D, - key: Zeroizing<::F>, - processors: Pro, - serai: Arc, - new_tributary_spec: mpsc::UnboundedSender, - perform_slash_report: mpsc::UnboundedSender, - tributary_retired: mpsc::UnboundedSender, -) { - log::info!("scanning substrate"); - let mut next_substrate_block = NextBlock::get(&db).unwrap_or_default(); - - /* - let new_substrate_block_notifier = { - let serai = &serai; - move || async move { - loop { - match serai.newly_finalized_block().await { - Ok(sub) => return sub, - Err(e) => { - log::error!("couldn't communicate with serai node: {e}"); - sleep(Duration::from_secs(5)).await; - } - } - } - } - }; - */ - // TODO: Restore the above subscription-based system - // That would require moving serai-client from HTTP to websockets - let new_substrate_block_notifier = { - let serai = &serai; - move |next_substrate_block| async move { - loop { - match serai.latest_finalized_block().await { - Ok(latest) => { - if latest.header.number >= next_substrate_block { - return latest; - } - sleep(Duration::from_secs(3)).await; - } - Err(e) => { - log::error!("couldn't communicate with serai node: {e}"); - sleep(Duration::from_secs(5)).await; - } - } - } - } - }; - - loop { - // await the next block, yet if our notifier had an error, re-create it - { - let Ok(_) = tokio::time::timeout( - Duration::from_secs(60), - new_substrate_block_notifier(next_substrate_block), - ) - .await - else { - // Timed out, which may be because Serai isn't finalizing or may be some issue with the - // notifier - if serai.latest_finalized_block().await.map(|block| block.number()).ok() == - Some(next_substrate_block.saturating_sub(1)) - { - log::info!("serai hasn't finalized a block in the last 60s..."); - } - continue; - }; - - /* - // next_block is a Option - if next_block.and_then(Result::ok).is_none() { - substrate_block_notifier = new_substrate_block_notifier(next_substrate_block); - continue; - } - */ - } - - match handle_new_blocks( - &mut db, - &key, - &new_tributary_spec, - &perform_slash_report, - &tributary_retired, - &processors, - &serai, - &mut next_substrate_block, - ) - .await - { - Ok(()) => {} - Err(e) => { - log::error!("couldn't communicate with serai node: {e}"); - sleep(Duration::from_secs(5)).await; - } - } - } -} - -/// Gets the expected ID for the next Batch. -/// -/// Will log an error and apply a slight sleep on error, letting the caller simply immediately -/// retry. -pub(crate) async fn expected_next_batch( - serai: &Serai, - network: NetworkId, -) -> Result { - async fn expected_next_batch_inner(serai: &Serai, network: NetworkId) -> Result { - let serai = serai.as_of_latest_finalized_block().await?; - let last = serai.in_instructions().last_batch_for_network(network).await?; - Ok(if let Some(last) = last { last + 1 } else { 0 }) - } - match expected_next_batch_inner(serai, network).await { - Ok(next) => Ok(next), - Err(e) => { - log::error!("couldn't get the expected next batch from substrate: {e:?}"); - sleep(Duration::from_millis(100)).await; - Err(e) - } - } -} - -/// Verifies `Batch`s which have already been indexed from Substrate. 
-/// -/// Spins if a distinct `Batch` is detected on-chain. -/// -/// This has a slight malleability in that doesn't verify *who* published a `Batch` is as expected. -/// This is deemed fine. -pub(crate) async fn verify_published_batches( - txn: &mut D::Transaction<'_>, - network: NetworkId, - optimistic_up_to: u32, -) -> Option { - // TODO: Localize from MainDb to SubstrateDb - let last = crate::LastVerifiedBatchDb::get(txn, network); - for id in last.map_or(0, |last| last + 1) ..= optimistic_up_to { - let Some(on_chain) = BatchInstructionsHashDb::get(txn, network, id) else { - break; - }; - let off_chain = crate::ExpectedBatchDb::get(txn, network, id).unwrap(); - if on_chain != off_chain { - // Halt operations on this network and spin, as this is a critical fault - loop { - log::error!( - "{}! network: {:?} id: {} off-chain: {} on-chain: {}", - "on-chain batch doesn't match off-chain", - network, - id, - hex::encode(off_chain), - hex::encode(on_chain), - ); - sleep(Duration::from_secs(60)).await; - } - } - crate::LastVerifiedBatchDb::set(txn, network, &id); - } - - crate::LastVerifiedBatchDb::get(txn, network) -} diff --git a/coordinator/substrate/Cargo.toml b/coordinator/substrate/Cargo.toml new file mode 100644 index 00000000..4d66c05e --- /dev/null +++ b/coordinator/substrate/Cargo.toml @@ -0,0 +1,35 @@ +[package] +name = "serai-coordinator-substrate" +version = "0.1.0" +description = "Serai Coordinator's Substrate Scanner" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/substrate" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false +rust-version = "1.81" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } +serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] } + +log = { version = "0.4", default-features = false, features = ["std"] } + +futures = { version = "0.3", default-features = false, features = ["std"] } +tokio = { version = "1", default-features = false } + +serai-db = { version = "0.1.1", path = "../../common/db" } +serai-task = { version = "0.1", path = "../../common/task" } + +serai-cosign = { path = "../cosign" } + +messages = { package = "serai-processor-messages", path = "../../processor/messages" } diff --git a/coordinator/substrate/LICENSE b/coordinator/substrate/LICENSE new file mode 100644 index 00000000..26d57cbb --- /dev/null +++ b/coordinator/substrate/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2023-2024 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . 
diff --git a/coordinator/substrate/README.md b/coordinator/substrate/README.md
new file mode 100644
index 00000000..83d217aa
--- /dev/null
+++ b/coordinator/substrate/README.md
@@ -0,0 +1,14 @@
+# Serai Coordinator Substrate Scanner
+
+This is the scanner of the Serai blockchain used by Serai's coordinator.
+
+Two event streams are defined:
+
+- Canonical events, which must be handled by every validator, regardless of the sets they're
+  present in. These are represented by `serai_processor_messages::substrate::CoordinatorMessage`.
+- Ephemeral events, which only need to be handled by the validators present within the sets they
+  relate to. These are represented by two channels, `NewSet` and `SignSlashReport`.
+
+The canonical event stream is available without providing a validator's public key; the ephemeral
+event stream requires one. Each stream is ordered within itself, yet there are no ordering
+guarantees across the two.
diff --git a/coordinator/substrate/src/canonical.rs b/coordinator/substrate/src/canonical.rs
new file mode 100644
index 00000000..d778bc7c
--- /dev/null
+++ b/coordinator/substrate/src/canonical.rs
@@ -0,0 +1,216 @@
+use std::future::Future;
+
+use futures::stream::{StreamExt, FuturesOrdered};
+
+use serai_client::Serai;
+
+use messages::substrate::{InInstructionResult, ExecutedBatch, CoordinatorMessage};
+
+use serai_db::*;
+use serai_task::ContinuallyRan;
+
+use serai_cosign::Cosigning;
+
+create_db!(
+  CoordinatorSubstrateCanonical {
+    NextBlock: () -> u64,
+  }
+);
+
+/// The event stream for canonical events.
+pub struct CanonicalEventStream<D: Db> {
+  db: D,
+  serai: Serai,
+}
+
+impl<D: Db> CanonicalEventStream<D> {
+  /// Create a new canonical event stream.
+  ///
+  /// Only one of these may exist over the provided database.
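+  ///
+  /// A usage sketch (illustrative, not part of this patch; assumes `serai_task::Task`,
+  /// wired as elsewhere in this codebase):
+  ///
+  /// ```ignore
+  /// let (task_def, _handle) = Task::new();
+  /// // Continually forwards canonical events to the `Canonical` channel as blocks are cosigned
+  /// tokio::spawn(CanonicalEventStream::new(db, serai).continually_run(task_def, vec![]));
+  /// ```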
+ pub fn new(db: D, serai: Serai) -> Self { + Self { db, serai } + } +} + +impl ContinuallyRan for CanonicalEventStream { + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let next_block = NextBlock::get(&self.db).unwrap_or(0); + let latest_finalized_block = + Cosigning::::latest_cosigned_block_number(&self.db).map_err(|e| format!("{e:?}"))?; + + // These are all the events which generate canonical messages + struct CanonicalEvents { + time: u64, + key_gen_events: Vec, + set_retired_events: Vec, + batch_events: Vec, + burn_events: Vec, + } + + // For a cosigned block, fetch all relevant events + let scan = { + let db = self.db.clone(); + let serai = &self.serai; + move |block_number| { + let block_hash = Cosigning::::cosigned_block(&db, block_number); + + async move { + let block_hash = match block_hash { + Ok(Some(block_hash)) => block_hash, + Ok(None) => { + panic!("iterating to latest cosigned block but couldn't get cosigned block") + } + Err(serai_cosign::Faulted) => return Err("cosigning process faulted".to_string()), + }; + let temporal_serai = serai.as_of(block_hash); + let temporal_serai_validators = temporal_serai.validator_sets(); + let temporal_serai_instructions = temporal_serai.in_instructions(); + let temporal_serai_coins = temporal_serai.coins(); + + let (block, key_gen_events, set_retired_events, batch_events, burn_events) = + tokio::try_join!( + serai.block(block_hash), + temporal_serai_validators.key_gen_events(), + temporal_serai_validators.set_retired_events(), + temporal_serai_instructions.batch_events(), + temporal_serai_coins.burn_with_instruction_events(), + ) + .map_err(|e| format!("{e:?}"))?; + let Some(block) = block else { + Err(format!("Serai node didn't have cosigned block #{block_number}"))? + }; + + let time = if block_number == 0 { + block.time().unwrap_or(0) + } else { + // Serai's block time is in milliseconds + block + .time() + .ok_or_else(|| "non-genesis Serai block didn't have a time".to_string())? 
+                1000
+            };
+
+            Ok((
+              block_number,
+              CanonicalEvents {
+                time,
+                key_gen_events,
+                set_retired_events,
+                batch_events,
+                burn_events,
+              },
+            ))
+          }
+        }
+      };
+
+      // Sync the next set of upcoming blocks all at once to minimize latency
+      const BLOCKS_TO_SYNC_AT_ONCE: u64 = 10;
+      let mut set = FuturesOrdered::new();
+      for block_number in
+        next_block ..= latest_finalized_block.min(next_block + BLOCKS_TO_SYNC_AT_ONCE)
+      {
+        set.push_back(scan(block_number));
+      }
+
+      for block_number in next_block ..= latest_finalized_block {
+        // Get the next block in our queue
+        let (popped_block_number, block) = set.next().await.unwrap()?;
+        assert_eq!(block_number, popped_block_number);
+        // Re-populate the queue
+        if (block_number + BLOCKS_TO_SYNC_AT_ONCE) <= latest_finalized_block {
+          set.push_back(scan(block_number + BLOCKS_TO_SYNC_AT_ONCE));
+        }
+
+        let mut txn = self.db.txn();
+
+        for key_gen in block.key_gen_events {
+          let serai_client::validator_sets::ValidatorSetsEvent::KeyGen { set, key_pair } = &key_gen
+          else {
+            panic!("KeyGen event wasn't a KeyGen event: {key_gen:?}");
+          };
+          crate::Canonical::send(
+            &mut txn,
+            set.network,
+            &CoordinatorMessage::SetKeys {
+              serai_time: block.time,
+              session: set.session,
+              key_pair: key_pair.clone(),
+            },
+          );
+        }
+
+        for set_retired in block.set_retired_events {
+          let serai_client::validator_sets::ValidatorSetsEvent::SetRetired { set } = &set_retired
+          else {
+            panic!("SetRetired event wasn't a SetRetired event: {set_retired:?}");
+          };
+          crate::Canonical::send(
+            &mut txn,
+            set.network,
+            &CoordinatorMessage::SlashesReported { session: set.session },
+          );
+        }
+
+        for network in serai_client::primitives::NETWORKS {
+          let mut batch = None;
+          for this_batch in &block.batch_events {
+            let serai_client::in_instructions::InInstructionsEvent::Batch {
+              network: batch_network,
+              publishing_session,
+              id,
+              in_instructions_hash,
+              in_instruction_results,
+            } = this_batch
+            else {
+              panic!("Batch event wasn't a Batch event: {this_batch:?}");
+            };
+            if network == *batch_network {
+              if batch.is_some() {
+                Err("Serai block had multiple batches for the same network".to_string())?;
+              }
+              batch = Some(ExecutedBatch {
+                id: *id,
+                publisher: *publishing_session,
+                in_instructions_hash: *in_instructions_hash,
+                in_instruction_results: in_instruction_results
+                  .iter()
+                  .map(|bit| {
+                    if *bit {
+                      InInstructionResult::Succeeded
+                    } else {
+                      InInstructionResult::Failed
+                    }
+                  })
+                  .collect(),
+              });
+            }
+          }
+
+          let mut burns = vec![];
+          for burn in &block.burn_events {
+            let serai_client::coins::CoinsEvent::BurnWithInstruction { from: _, instruction } =
+              &burn
+            else {
+              panic!("Burn event wasn't a BurnWithInstruction event: {burn:?}");
+            };
+            if instruction.balance.coin.network() == network {
+              burns.push(instruction.clone());
+            }
+          }
+
+          crate::Canonical::send(
+            &mut txn,
+            network,
+            &CoordinatorMessage::Block { serai_block_number: block_number, batch, burns },
+          );
+        }
+
+        txn.commit();
+      }
+
+      Ok(next_block <= latest_finalized_block)
+    }
+  }
+}
diff --git a/coordinator/substrate/src/ephemeral.rs b/coordinator/substrate/src/ephemeral.rs
new file mode 100644
index 00000000..858b5895
--- /dev/null
+++ b/coordinator/substrate/src/ephemeral.rs
@@ -0,0 +1,240 @@
+use std::future::Future;
+
+use futures::stream::{StreamExt, FuturesOrdered};
+
+use serai_client::{
+  primitives::{PublicKey, NetworkId, EmbeddedEllipticCurve},
+  validator_sets::primitives::MAX_KEY_SHARES_PER_SET,
+  Serai,
+};
+
+use serai_db::*;
+use serai_task::ContinuallyRan;
+
+use serai_cosign::Cosigning;
+
+use crate::NewSetInformation;
+
+create_db!(
+  CoordinatorSubstrateEphemeral {
+    NextBlock: () -> u64,
+  }
+);
+
+/// The event stream for ephemeral events.
+pub struct EphemeralEventStream<D: Db> {
+  db: D,
+  serai: Serai,
+  validator: PublicKey,
+}
+
+impl<D: Db> EphemeralEventStream<D> {
+  /// Create a new ephemeral event stream.
+  ///
+  /// Only one of these may exist over the provided database.
+  pub fn new(db: D, serai: Serai, validator: PublicKey) -> Self {
+    Self { db, serai, validator }
+  }
+}
+
+impl<D: Db> ContinuallyRan for EphemeralEventStream<D> {
+  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
+    async move {
+      let next_block = NextBlock::get(&self.db).unwrap_or(0);
+      let latest_finalized_block =
+        Cosigning::<D>::latest_cosigned_block_number(&self.db).map_err(|e| format!("{e:?}"))?;
+
+      // These are all the events which generate ephemeral messages
+      struct EphemeralEvents {
+        block_hash: [u8; 32],
+        time: u64,
+        new_set_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
+        accepted_handover_events: Vec<serai_client::validator_sets::ValidatorSetsEvent>,
+      }
+
+      // For a cosigned block, fetch all relevant events
+      let scan = {
+        let db = self.db.clone();
+        let serai = &self.serai;
+        move |block_number| {
+          let block_hash = Cosigning::<D>::cosigned_block(&db, block_number);
+
+          async move {
+            let block_hash = match block_hash {
+              Ok(Some(block_hash)) => block_hash,
+              Ok(None) => {
+                panic!("iterating to latest cosigned block but couldn't get cosigned block")
+              }
+              Err(serai_cosign::Faulted) => return Err("cosigning process faulted".to_string()),
+            };
+
+            let temporal_serai = serai.as_of(block_hash);
+            let temporal_serai_validators = temporal_serai.validator_sets();
+            let (block, new_set_events, accepted_handover_events) = tokio::try_join!(
+              serai.block(block_hash),
+              temporal_serai_validators.new_set_events(),
+              temporal_serai_validators.accepted_handover_events(),
+            )
+            .map_err(|e| format!("{e:?}"))?;
+            let Some(block) = block else {
+              Err(format!("Serai node didn't have cosigned block #{block_number}"))?
+            };
+
+            let time = if block_number == 0 {
+              block.time().unwrap_or(0)
+            } else {
+              // Serai's block time is in milliseconds
+              block
+                .time()
+                .ok_or_else(|| "non-genesis Serai block didn't have a time".to_string())? /
+                1000
+            };
+
+            Ok((
+              block_number,
+              EphemeralEvents { block_hash, time, new_set_events, accepted_handover_events },
+            ))
+          }
+        }
+      };
+
+      // Sync the next set of upcoming blocks all at once to minimize latency
+      const BLOCKS_TO_SYNC_AT_ONCE: u64 = 50;
+      let mut set = FuturesOrdered::new();
+      for block_number in
+        next_block ..= latest_finalized_block.min(next_block + BLOCKS_TO_SYNC_AT_ONCE)
+      {
+        set.push_back(scan(block_number));
+      }
+
+      for block_number in next_block ..= latest_finalized_block {
+        // Get the next block in our queue
+        let (popped_block_number, block) = set.next().await.unwrap()?;
+        assert_eq!(block_number, popped_block_number);
+        // Re-populate the queue
+        if (block_number + BLOCKS_TO_SYNC_AT_ONCE) <= latest_finalized_block {
+          set.push_back(scan(block_number + BLOCKS_TO_SYNC_AT_ONCE));
+        }
+
+        let mut txn = self.db.txn();
+
+        for new_set in block.new_set_events {
+          let serai_client::validator_sets::ValidatorSetsEvent::NewSet { set } = &new_set else {
+            panic!("NewSet event wasn't a NewSet event: {new_set:?}");
+          };
+
+          // We only coordinate over external networks
+          if set.network == NetworkId::Serai {
+            continue;
+          }
+
+          let serai = self.serai.as_of(block.block_hash);
+          let serai = serai.validator_sets();
+          let Some(validators) =
+            serai.participants(set.network).await.map_err(|e| format!("{e:?}"))?
+          else {
+            Err(format!(
+              "block #{block_number} declared a new set but didn't have the participants"
+            ))?
+          };
+          let in_set = validators.iter().any(|(validator, _)| *validator == self.validator);
+          if in_set {
+            if u16::try_from(validators.len()).is_err() {
+              Err("more than u16::MAX validators sent")?;
+            }
+
+            let Ok(validators) = validators
+              .into_iter()
+              .map(|(validator, weight)| u16::try_from(weight).map(|weight| (validator, weight)))
+              .collect::<Result<Vec<_>, _>>()
+            else {
+              Err("validator's weight exceeded u16::MAX".to_string())?
+            };
+
+            let total_weight =
+              validators.iter().map(|(_, weight)| u32::from(*weight)).sum::<u32>();
+            if total_weight > MAX_KEY_SHARES_PER_SET {
+              Err(format!(
+                "{set:?} has {total_weight} key shares when the max is {MAX_KEY_SHARES_PER_SET}"
+              ))?;
+            }
+            let total_weight = u16::try_from(total_weight).unwrap();
+
+            // Fetch all of the validators' embedded elliptic curve keys
+            let mut embedded_elliptic_curve_keys = FuturesOrdered::new();
+            for (validator, _) in &validators {
+              let validator = *validator;
+              // try_join doesn't return a future so we need to wrap it in this additional async
+              // block
+              embedded_elliptic_curve_keys.push_back(async move {
+                tokio::try_join!(
+                  // One future to fetch the substrate embedded key
+                  serai
+                    .embedded_elliptic_curve_key(validator, EmbeddedEllipticCurve::Embedwards25519),
+                  // One future to fetch the external embedded key, if there is a distinct curve
+                  async {
+                    // `embedded_elliptic_curves` is documented to have the second entry be the
+                    // network-specific curve (if it exists and is distinct from Embedwards25519)
+                    if let Some(curve) = set.network.embedded_elliptic_curves().get(1) {
+                      serai.embedded_elliptic_curve_key(validator, *curve).await.map(Some)
+                    } else {
+                      Ok(None)
+                    }
+                  }
+                )
+                .map(|(substrate_embedded_key, external_embedded_key)| {
+                  (validator, substrate_embedded_key, external_embedded_key)
+                })
+              });
+            }
+
+            let mut evrf_public_keys = Vec::with_capacity(usize::from(total_weight));
+            for (validator, weight) in &validators {
+              let (future_validator, substrate_embedded_key, external_embedded_key) =
+                embedded_elliptic_curve_keys.next().await.unwrap().map_err(|e| format!("{e:?}"))?;
+              assert_eq!(*validator, future_validator);
+              let external_embedded_key =
+                external_embedded_key.unwrap_or(substrate_embedded_key.clone());
+              match (substrate_embedded_key, external_embedded_key) {
+                (Some(substrate_embedded_key), Some(external_embedded_key)) => {
+                  let substrate_embedded_key = <[u8; 32]>::try_from(substrate_embedded_key)
+                    .map_err(|_| "Embedwards25519 key wasn't 32 bytes".to_string())?;
+                  for _ in 0 .. *weight {
+                    evrf_public_keys.push((substrate_embedded_key, external_embedded_key.clone()));
+                  }
+                }
+                _ => Err("NewSet with validator missing an embedded key".to_string())?,
+              }
+            }
+
+            crate::NewSet::send(
+              &mut txn,
+              &NewSetInformation {
+                set: *set,
+                serai_block: block.block_hash,
+                start_time: block.time,
+                // TODO: Why do we have this as an explicit field here?
+                // Shouldn't this be inlined into the Processor's key gen code, where it's used?
+                threshold: ((total_weight * 2) / 3) + 1,
+                validators,
+                evrf_public_keys,
+              },
+            );
+          }
+        }
+
+        for accepted_handover in block.accepted_handover_events {
+          let serai_client::validator_sets::ValidatorSetsEvent::AcceptedHandover { set } =
+            &accepted_handover
+          else {
+            panic!("AcceptedHandover event wasn't an AcceptedHandover event: {accepted_handover:?}");
+          };
+          crate::SignSlashReport::send(&mut txn, set);
+        }
+
+        txn.commit();
+      }
+
+      Ok(next_block <= latest_finalized_block)
+    }
+  }
+}
diff --git a/coordinator/substrate/src/lib.rs b/coordinator/substrate/src/lib.rs
new file mode 100644
index 00000000..9c3c8863
--- /dev/null
+++ b/coordinator/substrate/src/lib.rs
@@ -0,0 +1,109 @@
+#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![doc = include_str!("../README.md")]
+#![deny(missing_docs)]
+
+use scale::{Encode, Decode};
+use borsh::{io, BorshSerialize, BorshDeserialize};
+
+use serai_client::{
+  primitives::{PublicKey, NetworkId},
+  validator_sets::primitives::ValidatorSet,
+};
+
+use serai_db::*;
+
+mod canonical;
+mod ephemeral;
+
+fn borsh_serialize_validators<W: io::Write>(
+  validators: &Vec<(PublicKey, u16)>,
+  writer: &mut W,
+) -> Result<(), io::Error> {
+  // This doesn't use `encode_to` as `encode_to` panics if the writer returns an error
+  writer.write_all(&validators.encode())
+}
+
+fn borsh_deserialize_validators<R: io::Read>(
+  reader: &mut R,
+) -> Result<Vec<(PublicKey, u16)>, io::Error> {
+  Decode::decode(&mut scale::IoReader(reader)).map_err(io::Error::other)
+}
+
+/// The information for a new set.
+#[derive(Debug, BorshSerialize, BorshDeserialize)]
+pub struct NewSetInformation {
+  set: ValidatorSet,
+  serai_block: [u8; 32],
+  start_time: u64,
+  threshold: u16,
+  #[borsh(
+    serialize_with = "borsh_serialize_validators",
+    deserialize_with = "borsh_deserialize_validators"
+  )]
+  validators: Vec<(PublicKey, u16)>,
+  evrf_public_keys: Vec<([u8; 32], Vec<u8>)>,
+}
+
+mod _public_db {
+  use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet};
+
+  use serai_db::*;
+
+  use crate::NewSetInformation;
+
+  db_channel!(
+    CoordinatorSubstrate {
+      // Canonical messages to send to the processor
+      Canonical: (network: NetworkId) -> messages::substrate::CoordinatorMessage,
+
+      // Relevant new set, from an ephemeral event stream
+      NewSet: () -> NewSetInformation,
+      // Relevant sign slash report, from an ephemeral event stream
+      SignSlashReport: () -> ValidatorSet,
+    }
+  );
+}
+
+/// The canonical event stream.
+pub struct Canonical;
+impl Canonical {
+  pub(crate) fn send(
+    txn: &mut impl DbTxn,
+    network: NetworkId,
+    msg: &messages::substrate::CoordinatorMessage,
+  ) {
+    _public_db::Canonical::send(txn, network, msg);
+  }
+  /// Try to receive a canonical event, returning `None` if there is none to receive.
+  pub fn try_recv(
+    txn: &mut impl DbTxn,
+    network: NetworkId,
+  ) -> Option<messages::substrate::CoordinatorMessage> {
+    _public_db::Canonical::try_recv(txn, network)
+  }
+}
+
+/// The channel for new set events emitted by an ephemeral event stream.
+pub struct NewSet;
+impl NewSet {
+  pub(crate) fn send(txn: &mut impl DbTxn, msg: &NewSetInformation) {
+    _public_db::NewSet::send(txn, msg);
+  }
+  /// Try to receive a new set's information, returning `None` if there is none to receive.
+  pub fn try_recv(txn: &mut impl DbTxn) -> Option<NewSetInformation> {
+    _public_db::NewSet::try_recv(txn)
+  }
+}
+
+/// The channel for notifications to sign a slash report, as emitted by an ephemeral event stream.
+pub struct SignSlashReport; +impl SignSlashReport { + pub(crate) fn send(txn: &mut impl DbTxn, set: &ValidatorSet) { + _public_db::SignSlashReport::send(txn, set); + } + /// Try to receive a notification to sign a slash report, returning `None` if there is none to + /// receive. + pub fn try_recv(txn: &mut impl DbTxn) -> Option { + _public_db::SignSlashReport::try_recv(txn) + } +} diff --git a/deny.toml b/deny.toml index cc45984a..fa12461c 100644 --- a/deny.toml +++ b/deny.toml @@ -74,6 +74,7 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "tributary-chain" }, { allow = ["AGPL-3.0"], name = "serai-cosign" }, + { allow = ["AGPL-3.0"], name = "serai-coordinator-substrate" }, { allow = ["AGPL-3.0"], name = "serai-coordinator" }, { allow = ["AGPL-3.0"], name = "serai-coins-pallet" }, diff --git a/processor/bin/src/coordinator.rs b/processor/bin/src/coordinator.rs index e5d0e23b..255525a2 100644 --- a/processor/bin/src/coordinator.rs +++ b/processor/bin/src/coordinator.rs @@ -5,9 +5,8 @@ use tokio::sync::mpsc; use scale::Encode; use serai_client::{ - primitives::Signature, - validator_sets::primitives::Session, - in_instructions::primitives::{Batch, SignedBatch}, + primitives::Signature, validator_sets::primitives::Session, + in_instructions::primitives::SignedBatch, }; use serai_db::{Get, DbTxn, Db, create_db, db_channel}; diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs index 0fc7257e..7dc794bf 100644 --- a/processor/bin/src/lib.rs +++ b/processor/bin/src/lib.rs @@ -272,20 +272,17 @@ pub async fn main_loop< } messages::substrate::CoordinatorMessage::Block { serai_block_number: _, - batches, + batch, mut burns, } => { let scanner = scanner.as_mut().unwrap(); - // Substrate sets this limit to prevent DoSs from malicious validator sets - // That bound lets us consume this txn in the following loop body, as an optimization - assert!(batches.len() <= 1); - for messages::substrate::ExecutedBatch { + if let Some(messages::substrate::ExecutedBatch { id, publisher, in_instructions_hash, in_instruction_results, - } in batches + }) = batch { let key_to_activate = KeyToActivate::>::try_recv(txn.as_mut().unwrap()).map(|key| key.0); diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index bbab3186..1b6e1996 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -145,7 +145,7 @@ pub mod sign { pub mod coordinator { use super::*; - // TODO: Why does this not simply take the block hash? + // TODO: Remove this for the one defined in serai-cosign pub fn cosign_block_msg(block_number: u64, block: [u8; 32]) -> Vec { const DST: &[u8] = b"Cosign"; let mut res = vec![u8::try_from(DST.len()).unwrap()]; @@ -203,7 +203,7 @@ pub mod substrate { /// A block from Serai with relevance to this processor. 
Block { serai_block_number: u64, - batches: Vec, + batch: Option, burns: Vec, }, } diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index 40e538aa..116f7b9e 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -12,7 +12,7 @@ use frost::dkg::{ThresholdCore, ThresholdKeys}; use serai_primitives::Signature; use serai_validator_sets_primitives::{Session, Slash}; -use serai_in_instructions_primitives::{Batch, SignedBatch}; +use serai_in_instructions_primitives::SignedBatch; use serai_db::{DbTxn, Db}; diff --git a/substrate/abi/src/in_instructions.rs b/substrate/abi/src/in_instructions.rs index d3ab5ca3..89729f7a 100644 --- a/substrate/abi/src/in_instructions.rs +++ b/substrate/abi/src/in_instructions.rs @@ -2,6 +2,7 @@ use serai_primitives::*; pub use serai_in_instructions_primitives as primitives; use primitives::SignedBatch; +use serai_validator_sets_primitives::Session; #[derive(Clone, PartialEq, Eq, Debug, scale::Encode, scale::Decode, scale_info::TypeInfo)] #[cfg_attr(feature = "borsh", derive(borsh::BorshSerialize, borsh::BorshDeserialize))] @@ -12,11 +13,17 @@ pub enum Call { } #[derive(Clone, PartialEq, Eq, Debug, scale::Encode, scale::Decode, scale_info::TypeInfo)] -#[cfg_attr(feature = "borsh", derive(borsh::BorshSerialize, borsh::BorshDeserialize))] #[cfg_attr(feature = "serde", derive(serde::Serialize))] #[cfg_attr(all(feature = "std", feature = "serde"), derive(serde::Deserialize))] pub enum Event { - Batch { network: NetworkId, id: u32, block: BlockHash, instructions_hash: [u8; 32] }, - InstructionFailure { network: NetworkId, id: u32, index: u32 }, - Halt { network: NetworkId }, + Batch { + network: NetworkId, + publishing_session: Session, + id: u32, + in_instructions_hash: [u8; 32], + in_instruction_results: bitvec::vec::BitVec, + }, + Halt { + network: NetworkId, + }, } diff --git a/substrate/client/src/serai/in_instructions.rs b/substrate/client/src/serai/in_instructions.rs index 50c9ed96..29f9b1a2 100644 --- a/substrate/client/src/serai/in_instructions.rs +++ b/substrate/client/src/serai/in_instructions.rs @@ -1,10 +1,7 @@ pub use serai_abi::in_instructions::primitives; use primitives::SignedBatch; -use crate::{ - primitives::{BlockHash, NetworkId}, - Transaction, SeraiError, Serai, TemporalSerai, -}; +use crate::{primitives::NetworkId, Transaction, SeraiError, Serai, TemporalSerai}; pub type InInstructionsEvent = serai_abi::in_instructions::Event; diff --git a/substrate/client/src/serai/mod.rs b/substrate/client/src/serai/mod.rs index 8b17d5d1..f99e9a39 100644 --- a/substrate/client/src/serai/mod.rs +++ b/substrate/client/src/serai/mod.rs @@ -45,13 +45,13 @@ impl Block { } /// Returns the time of this block, set by its producer, in milliseconds since the epoch. 
- pub fn time(&self) -> Result { + pub fn time(&self) -> Option { for transaction in &self.transactions { if let Call::Timestamp(timestamp::Call::set { now }) = transaction.call() { - return Ok(*now); + return Some(*now); } } - Err(SeraiError::InvalidNode("no time was present in block".to_string())) + None } } diff --git a/substrate/client/tests/common/genesis_liquidity.rs b/substrate/client/tests/common/genesis_liquidity.rs index 0c0cd269..c8c613f5 100644 --- a/substrate/client/tests/common/genesis_liquidity.rs +++ b/substrate/client/tests/common/genesis_liquidity.rs @@ -65,8 +65,7 @@ pub async fn set_up_genesis( }) .or_insert(0); - let batch = - Batch { network: coin.network(), id: batch_ids[&coin.network()], block, instructions }; + let batch = Batch { network: coin.network(), id: batch_ids[&coin.network()], instructions }; provide_batch(serai, batch).await; } diff --git a/substrate/in-instructions/pallet/src/lib.rs b/substrate/in-instructions/pallet/src/lib.rs index 5b394c3d..79d4c717 100644 --- a/substrate/in-instructions/pallet/src/lib.rs +++ b/substrate/in-instructions/pallet/src/lib.rs @@ -60,9 +60,16 @@ pub mod pallet { #[pallet::event] #[pallet::generate_deposit(fn deposit_event)] pub enum Event { - Batch { network: NetworkId, id: u32, block: BlockHash, instructions_hash: [u8; 32] }, - InstructionFailure { network: NetworkId, id: u32, index: u32 }, - Halt { network: NetworkId }, + Batch { + network: NetworkId, + publishing_session: Session, + id: u32, + in_instructions_hash: [u8; 32], + in_instruction_results: BitVec, + }, + Halt { + network: NetworkId, + }, } #[pallet::error] @@ -254,22 +261,7 @@ pub mod pallet { pub fn execute_batch(origin: OriginFor, batch: SignedBatch) -> DispatchResult { ensure_none(origin)?; - let batch = batch.batch; - - Self::deposit_event(Event::Batch { - network: batch.network, - id: batch.id, - instructions_hash: blake2_256(&batch.instructions.encode()), - }); - for (i, instruction) in batch.instructions.into_iter().enumerate() { - if Self::execute(instruction).is_err() { - Self::deposit_event(Event::InstructionFailure { - network: batch.network, - id: batch.id, - index: u32::try_from(i).unwrap(), - }); - } - } + // The entire Batch execution is handled in pre_dispatch Ok(()) } @@ -300,6 +292,7 @@ pub mod pallet { // verify the signature let (current_session, prior, current) = keys_for_network::(network)?; + let prior_session = Session(current_session.0 - 1); let batch_message = batch_message(&batch.batch); // Check the prior key first since only a single `Batch` (the last one) will be when prior is // Some yet prior wasn't the signing key @@ -315,6 +308,8 @@ pub mod pallet { Err(InvalidTransaction::BadProof)?; } + let batch = batch.batch; + if Halted::::contains_key(network) { Err(InvalidTransaction::Custom(1))?; } @@ -323,10 +318,7 @@ pub mod pallet { // key is publishing `Batch`s. This should only happen once the current key has verified all // `Batch`s published by the prior key, meaning they are accepting the hand-over. 
if prior.is_some() && (!valid_by_prior) { - ValidatorSets::::retire_set(ValidatorSet { - network, - session: Session(current_session.0 - 1), - }); + ValidatorSets::::retire_set(ValidatorSet { network, session: prior_session }); } // check that this validator set isn't publishing a batch more than once per block @@ -335,34 +327,39 @@ pub mod pallet { if last_block >= current_block { Err(InvalidTransaction::Future)?; } - LastBatchBlock::::insert(batch.batch.network, frame_system::Pallet::::block_number()); + LastBatchBlock::::insert(batch.network, frame_system::Pallet::::block_number()); // Verify the batch is sequential // LastBatch has the last ID set. The next ID should be it + 1 // If there's no ID, the next ID should be 0 let expected = LastBatch::::get(network).map_or(0, |prev| prev + 1); - if batch.batch.id < expected { + if batch.id < expected { Err(InvalidTransaction::Stale)?; } - if batch.batch.id > expected { + if batch.id > expected { Err(InvalidTransaction::Future)?; } - LastBatch::::insert(batch.batch.network, batch.batch.id); + LastBatch::::insert(batch.network, batch.id); - // Verify all Balances in this Batch are for this network - for instruction in &batch.batch.instructions { + let in_instructions_hash = blake2_256(&batch.instructions.encode()); + let mut in_instruction_results = BitVec::new(); + for (i, instruction) in batch.instructions.into_iter().enumerate() { // Verify this coin is for this network - // If this is ever hit, it means the validator set has turned malicious and should be fully - // slashed - // Because we have an error here, no validator set which turns malicious should execute - // this code path - // Accordingly, there's no value in writing code to fully slash the network, when such an - // even would require a runtime upgrade to fully resolve anyways - if instruction.balance.coin.network() != batch.batch.network { + if instruction.balance.coin.network() != batch.network { Err(InvalidTransaction::Custom(2))?; } + + in_instruction_results.push(Self::execute(instruction).is_ok()); } + Self::deposit_event(Event::Batch { + network: batch.network, + publishing_session: if valid_by_prior { prior_session } else { current_session }, + id: batch.id, + in_instructions_hash, + in_instruction_results, + }); + ValidTransaction::with_tag_prefix("in-instructions") .and_provides((batch.batch.network, batch.batch.id)) // Set a 10 block longevity, though this should be included in the next block From 7e2b31e5da559d5a064635014b2045a283109fed Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 31 Dec 2024 12:14:32 -0500 Subject: [PATCH 230/368] Clean the transaction definitions in the coordinator Moves to borsh for serialization. No longer includes nonces anywhere in the TX. 
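As context for this change (an illustration, not part of the patch itself): with borsh, the wire
format is derived directly from the type definition, and the nonce is no longer serialized; it's
instead derived from the transaction's contents. A minimal round-trip sketch, assuming borsh 1.x
with the `derive` feature enabled; `Label` and its nonce mapping mirror the definitions in the
diff below.

```rust
// A minimal sketch of deriving borsh (de)serialization for a transaction
// field, and of deriving the nonce from the data rather than encoding it.
use borsh::{BorshSerialize, BorshDeserialize};

#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
enum Label {
  Preprocess,
  Share,
}

impl Label {
  // The nonce is deterministic: preprocesses use nonce 0, shares use nonce 1
  fn nonce(&self) -> u32 {
    match self {
      Label::Preprocess => 0,
      Label::Share => 1,
    }
  }
}

fn main() -> Result<(), std::io::Error> {
  // Round-trip through borsh; the encoding contains the label, never the nonce
  let bytes = borsh::to_vec(&Label::Share)?;
  let label: Label = borsh::from_slice(&bytes)?;
  assert_eq!(label.nonce(), 1);
  Ok(())
}
```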
---
 Cargo.lock                                    |   1 -
 coordinator/LICENSE                           |   2 +-
 coordinator/README.md                         |  20 +-
 coordinator/cosign/Cargo.toml                 |   4 +-
 coordinator/src/tributary/transaction.rs      | 632 +++++-------------
 coordinator/substrate/Cargo.toml              |  10 +-
 coordinator/substrate/src/lib.rs              |   3 +
 coordinator/tributary/src/block.rs            |   2 +-
 coordinator/tributary/src/blockchain.rs       |   2 +-
 coordinator/tributary/src/lib.rs              |   2 +-
 coordinator/tributary/src/mempool.rs          |  26 +-
 coordinator/tributary/src/tendermint/tx.rs    |   2 +-
 coordinator/tributary/src/tests/block.rs      |   4 +-
 coordinator/tributary/src/tests/blockchain.rs |   2 +-
 .../tributary/src/tests/transaction/mod.rs    |   6 +-
 coordinator/tributary/src/transaction.rs      |  12 +-
 16 files changed, 220 insertions(+), 510 deletions(-)

diff --git a/Cargo.lock b/Cargo.lock
index 40f0e276..7e26d78a 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -8344,7 +8344,6 @@ dependencies = [
 name = "serai-coordinator-substrate"
 version = "0.1.0"
 dependencies = [
- "blake2",
  "borsh",
  "futures",
  "log",
diff --git a/coordinator/LICENSE b/coordinator/LICENSE
index f684d027..26d57cbb 100644
--- a/coordinator/LICENSE
+++ b/coordinator/LICENSE
@@ -1,6 +1,6 @@
 AGPL-3.0-only license
 
-Copyright (c) 2023 Luke Parker
+Copyright (c) 2023-2024 Luke Parker
 
 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU Affero General Public License Version 3 as
diff --git a/coordinator/README.md b/coordinator/README.md
index ed41ef71..27552a5a 100644
--- a/coordinator/README.md
+++ b/coordinator/README.md
@@ -1,7 +1,19 @@
 # Coordinator
 
-The Serai coordinator communicates with other coordinators to prepare batches
-for Serai and sign transactions.
+- [`tendermint`](/tributary/tendermint) is an implementation of the Tendermint BFT algorithm.
 
-In order to achieve consensus over gossip, and order certain events, a
-micro-blockchain is instantiated.
+- [`tributary`](./tributary) is a micro-blockchain framework. Instead of producing a blockchain
+  daemon like the Polkadot SDK or Cosmos SDK intend to, `tributary` is solely intended to be an
+  embedded asynchronous task within an application.
+
+  The Serai coordinator spawns a tributary for each validator set it's coordinating. This allows
+  the participating validators to communicate in a byzantine-fault-tolerant manner (relying on
+  Tendermint for consensus).
+
+- [`cosign`](./cosign) contains a library to decide which Substrate blocks should be cosigned and
+  to evaluate cosigns.
+
+- [`substrate`](./substrate) contains a library to index the Substrate blockchain and handle its
+  events (a sketch of the task pattern its event streams follow appears after this list).
+
+- [`src`](./src) contains the source code for the Coordinator binary itself.
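The `substrate` library's event streams (referenced above) are written against `serai-task`'s
`ContinuallyRan` trait. The following is a hypothetical re-statement of that pattern, inferred
from the `run_iteration` signatures appearing in this patch series rather than taken from
`serai-task` itself: an iteration returns `Ok(true)` when it made progress (so the caller may
immediately iterate again) and `Ok(false)` when it should wait for more work.

```rust
// A sketch of the task pattern, under the assumption the trait is shaped as
// inferred from this series; `Counter` is a stand-in for an event stream.
use std::future::Future;

trait ContinuallyRan {
  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>>;
}

struct Counter {
  next_block: u64,
  latest_finalized_block: u64,
}

impl ContinuallyRan for Counter {
  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
    async move {
      let had_work = self.next_block <= self.latest_finalized_block;
      if had_work {
        // An actual event stream would scan block `self.next_block` here
        self.next_block += 1;
      }
      Ok(had_work)
    }
  }
}

fn main() -> Result<(), String> {
  let mut task = Counter { next_block: 0, latest_finalized_block: 2 };
  // Drive the task until it reports there's no more work to do
  while futures::executor::block_on(task.run_iteration())? {}
  assert_eq!(task.next_block, 3);
  Ok(())
}
```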
diff --git a/coordinator/cosign/Cargo.toml b/coordinator/cosign/Cargo.toml index fa5bd8ee..bf111f85 100644 --- a/coordinator/cosign/Cargo.toml +++ b/coordinator/cosign/Cargo.toml @@ -29,5 +29,5 @@ log = { version = "0.4", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false } -serai-db = { version = "0.1.1", path = "../../common/db" } -serai-task = { version = "0.1", path = "../../common/task" } +serai-db = { path = "../../common/db", version = "0.1.1" } +serai-task = { path = "../../common/task", version = "0.1" } diff --git a/coordinator/src/tributary/transaction.rs b/coordinator/src/tributary/transaction.rs index 860dbd0f..fd8126ce 100644 --- a/coordinator/src/tributary/transaction.rs +++ b/coordinator/src/tributary/transaction.rs @@ -4,9 +4,7 @@ use std::io; use zeroize::Zeroizing; use rand_core::{RngCore, CryptoRng}; -use blake2::{Digest, Blake2s256}; -use transcript::{Transcript, RecommendedTranscript}; - +use blake2::{digest::typenum::U32, Digest, Blake2b}; use ciphersuite::{ group::{ff::Field, GroupEncoding}, Ciphersuite, Ristretto, @@ -14,22 +12,30 @@ use ciphersuite::{ use schnorr::SchnorrSignature; use scale::{Encode, Decode}; -use processor_messages::coordinator::SubstrateSignableId; +use borsh::{BorshSerialize, BorshDeserialize}; + +use serai_client::primitives::PublicKey; + +use processor_messages::sign::VariantSignId; use tributary::{ - TRANSACTION_SIZE_LIMIT, ReadWrite, - transaction::{Signed, TransactionError, TransactionKind, Transaction as TransactionTrait}, + ReadWrite, + transaction::{ + Signed as TributarySigned, TransactionError, TransactionKind, Transaction as TransactionTrait, + }, }; -#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode)] +/// The label for data from a signing protocol. +#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)] pub enum Label { + /// A preprocess. Preprocess, + /// A signature share. Share, } impl Label { - // TODO: Should nonces be u8 thanks to our use of topics? - pub fn nonce(&self) -> u32 { + fn nonce(&self) -> u32 { match self { Label::Preprocess => 0, Label::Share => 1, @@ -37,474 +43,202 @@ impl Label { } } -#[derive(Clone, PartialEq, Eq)] -pub struct SignData { - pub plan: Id, - pub attempt: u32, - pub label: Label, - - pub data: Vec>, - - pub signed: Signed, +fn borsh_serialize_public( + public: &PublicKey, + writer: &mut W, +) -> Result<(), io::Error> { + // This doesn't use `encode_to` as `encode_to` panics if the writer returns an error + writer.write_all(&public.encode()) +} +fn borsh_deserialize_public(reader: &mut R) -> Result { + Decode::decode(&mut scale::IoReader(reader)).map_err(io::Error::other) } -impl Debug for SignData { - fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> { - fmt - .debug_struct("SignData") - .field("id", &hex::encode(self.plan.encode())) - .field("attempt", &self.attempt) - .field("label", &self.label) - .field("signer", &hex::encode(self.signed.signer.to_bytes())) - .finish_non_exhaustive() +/// `tributary::Signed` without the nonce. +/// +/// All of our nonces are deterministic to the type of transaction and fields within. 
+#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub struct Signed { + pub signer: ::G, + pub signature: SchnorrSignature, +} + +impl BorshSerialize for Signed { + fn serialize(&self, writer: &mut W) -> Result<(), io::Error> { + writer.write_all(self.signer.to_bytes().as_ref())?; + self.signature.write(writer) + } +} +impl BorshDeserialize for Signed { + fn deserialize_reader(reader: &mut R) -> Result { + let signer = Ristretto::read_G(reader)?; + let signature = SchnorrSignature::read(reader)?; + Ok(Self { signer, signature }) } } -impl SignData { - pub(crate) fn read(reader: &mut R) -> io::Result { - let plan = Id::decode(&mut scale::IoReader(&mut *reader)) - .map_err(|_| io::Error::other("invalid plan in SignData"))?; - - let mut attempt = [0; 4]; - reader.read_exact(&mut attempt)?; - let attempt = u32::from_le_bytes(attempt); - - let mut label = [0; 1]; - reader.read_exact(&mut label)?; - let label = match label[0] { - 0 => Label::Preprocess, - 1 => Label::Share, - _ => Err(io::Error::other("invalid label in SignData"))?, - }; - - let data = { - let mut data_pieces = [0]; - reader.read_exact(&mut data_pieces)?; - if data_pieces[0] == 0 { - Err(io::Error::other("zero pieces of data in SignData"))?; - } - let mut all_data = vec![]; - for _ in 0 .. data_pieces[0] { - let mut data_len = [0; 2]; - reader.read_exact(&mut data_len)?; - let mut data = vec![0; usize::from(u16::from_le_bytes(data_len))]; - reader.read_exact(&mut data)?; - all_data.push(data); - } - all_data - }; - - let signed = Signed::read_without_nonce(reader, label.nonce())?; - - Ok(SignData { plan, attempt, label, data, signed }) - } - - pub(crate) fn write(&self, writer: &mut W) -> io::Result<()> { - writer.write_all(&self.plan.encode())?; - writer.write_all(&self.attempt.to_le_bytes())?; - writer.write_all(&[match self.label { - Label::Preprocess => 0, - Label::Share => 1, - }])?; - - writer.write_all(&[u8::try_from(self.data.len()).unwrap()])?; - for data in &self.data { - if data.len() > u16::MAX.into() { - // Currently, the largest individual preprocess is a Monero transaction - // It provides 4 commitments per input (128 bytes), a 64-byte proof for them, along with a - // key image and proof (96 bytes) - // Even with all of that, we could support 227 inputs in a single TX - // Monero is limited to ~120 inputs per TX - // - // Bitcoin has a much higher input count of 520, yet it only uses 64 bytes per preprocess - Err(io::Error::other("signing data exceeded 65535 bytes"))?; - } - writer.write_all(&u16::try_from(data.len()).unwrap().to_le_bytes())?; - writer.write_all(data)?; - } - - self.signed.write_without_nonce(writer) +impl Signed { + /// Provide a nonce to convert a `Signed` into a `tributary::Signed`. 
+  fn nonce(&self, nonce: u32) -> TributarySigned {
+    TributarySigned { signer: self.signer, nonce, signature: self.signature }
   }
 }
 
-#[derive(Clone, PartialEq, Eq)]
+/// The Tributary transaction definition used by Serai
+#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
 pub enum Transaction {
+  /// A vote to remove a participant for invalid behavior
   RemoveParticipant {
-    participant: <Ristretto as Ciphersuite>::G,
+    /// The participant to remove
+    #[borsh(
+      serialize_with = "borsh_serialize_public",
+      deserialize_with = "borsh_deserialize_public"
+    )]
+    participant: PublicKey,
+    /// The transaction's signer and signature
     signed: Signed,
   },
+  /// A participation in the DKG
   DkgParticipation {
     participation: Vec<u8>,
+    /// The transaction's signer and signature
     signed: Signed,
   },
-  DkgConfirmationNonces {
-    // The confirmation attempt
+  /// The preprocess to confirm the DKG results on-chain
+  DkgConfirmationPreprocess {
+    /// The attempt number of this signing protocol
     attempt: u32,
-    // The nonces for DKG confirmation attempt #attempt
-    confirmation_nonces: [u8; 64],
+    // The preprocess
+    preprocess: [u8; 64],
+    /// The transaction's signer and signature
     signed: Signed,
   },
+  /// The signature share to confirm the DKG results on-chain
   DkgConfirmationShare {
-    // The confirmation attempt
+    /// The attempt number of this signing protocol
     attempt: u32,
-    // The share for DKG confirmation attempt #attempt
+    // The signature share
     confirmation_share: [u8; 32],
+    /// The transaction's signer and signature
     signed: Signed,
   },
 
-  // Co-sign a Substrate block.
-  CosignSubstrateBlock([u8; 32]),
+  /// Intend to co-sign a finalized Substrate block
+  ///
+  /// When the time comes to start a new co-signing protocol, the most recent Substrate block will
+  /// be the one selected to be cosigned.
+  CosignSubstrateBlock {
+    /// The hash of the Substrate block to sign
+    hash: [u8; 32],
+  },
 
-  // When we have synchrony on a batch, we can allow signing it
-  // TODO (never?): This is less efficient compared to an ExternalBlock provided transaction,
-  // which would be binding over the block hash and automatically achieve synchrony on all
-  // relevant batches. ExternalBlock was removed for this due to complexity around the pipeline
-  // with the current processor, yet it would still be an improvement.
+  /// Acknowledge a Substrate block
+  ///
+  /// This is provided after the block has been cosigned.
+  ///
+  /// With the acknowledgement of a Substrate block, we can whitelist all the `VariantSignId`s
+  /// resulting from its handling.
+  SubstrateBlock {
+    /// The hash of the Substrate block
+    hash: [u8; 32],
+  },
+
+  /// Acknowledge a Batch
+  ///
+  /// Once everyone has acknowledged the Batch, we can begin signing it.
   Batch {
-    block: [u8; 32],
-    batch: u32,
-  },
-  // When a Serai block is finalized, with the contained batches, we can allow the associated plan
-  // IDs
-  SubstrateBlock(u64),
-
-  SubstrateSign(SignData<SubstrateSignableId>),
-  Sign(SignData<[u8; 32]>),
-  // This is defined as an Unsigned transaction in order to de-duplicate SignCompleted amongst
-  // reporters (who should all report the same thing)
-  // We do still track the signer in order to prevent a single signer from publishing arbitrarily
-  // many TXs without penalty
-  // Here, they're denoted as the first_signer, as only the signer of the first TX to be included
-  // with this pairing will be remembered on-chain
-  SignCompleted {
-    plan: [u8; 32],
-    tx_hash: Vec<u8>,
-    first_signer: <Ristretto as Ciphersuite>::G,
-    signature: SchnorrSignature<Ristretto>,
+    /// The hash of the Batch's serialization.
+ /// + /// Generally, we refer to a Batch by its ID/the hash of its instructions. Here, we want to + /// ensure consensus on the Batch, and achieving consensus on its hash is the most effective + /// way to do that. + hash: [u8; 32], }, - SlashReport(Vec, Signed), -} + /// The local view of slashes observed by the transaction's sender + SlashReport { + /// The slash points accrued by each validator + slash_points: Vec, + /// The transaction's signer and signature + signed: Signed, + }, -impl Debug for Transaction { - fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> { - match self { - Transaction::RemoveParticipant { participant, signed } => fmt - .debug_struct("Transaction::RemoveParticipant") - .field("participant", &hex::encode(participant.to_bytes())) - .field("signer", &hex::encode(signed.signer.to_bytes())) - .finish_non_exhaustive(), - Transaction::DkgParticipation { signed, .. } => fmt - .debug_struct("Transaction::DkgParticipation") - .field("signer", &hex::encode(signed.signer.to_bytes())) - .finish_non_exhaustive(), - Transaction::DkgConfirmationNonces { attempt, signed, .. } => fmt - .debug_struct("Transaction::DkgConfirmationNonces") - .field("attempt", attempt) - .field("signer", &hex::encode(signed.signer.to_bytes())) - .finish_non_exhaustive(), - Transaction::DkgConfirmationShare { attempt, signed, .. } => fmt - .debug_struct("Transaction::DkgConfirmationShare") - .field("attempt", attempt) - .field("signer", &hex::encode(signed.signer.to_bytes())) - .finish_non_exhaustive(), - Transaction::CosignSubstrateBlock(block) => fmt - .debug_struct("Transaction::CosignSubstrateBlock") - .field("block", &hex::encode(block)) - .finish(), - Transaction::Batch { block, batch } => fmt - .debug_struct("Transaction::Batch") - .field("block", &hex::encode(block)) - .field("batch", &batch) - .finish(), - Transaction::SubstrateBlock(block) => { - fmt.debug_struct("Transaction::SubstrateBlock").field("block", block).finish() - } - Transaction::SubstrateSign(sign_data) => { - fmt.debug_struct("Transaction::SubstrateSign").field("sign_data", sign_data).finish() - } - Transaction::Sign(sign_data) => { - fmt.debug_struct("Transaction::Sign").field("sign_data", sign_data).finish() - } - Transaction::SignCompleted { plan, tx_hash, .. } => fmt - .debug_struct("Transaction::SignCompleted") - .field("plan", &hex::encode(plan)) - .field("tx_hash", &hex::encode(tx_hash)) - .finish_non_exhaustive(), - Transaction::SlashReport(points, signed) => fmt - .debug_struct("Transaction::SignCompleted") - .field("points", points) - .field("signed", signed) - .finish(), - } - } + Sign { + /// The ID of the object being signed + id: VariantSignId, + /// The attempt number of this signing protocol + attempt: u32, + /// The label for this data within the signing protocol + label: Label, + /// The data itself + /// + /// There will be `n` blobs of data where `n` is the amount of key shares the validator sending + /// this transaction has. 
+ data: Vec>, + /// The transaction's signer and signature + signed: Signed, + }, } impl ReadWrite for Transaction { fn read(reader: &mut R) -> io::Result { - let mut kind = [0]; - reader.read_exact(&mut kind)?; - - match kind[0] { - 0 => Ok(Transaction::RemoveParticipant { - participant: Ristretto::read_G(reader)?, - signed: Signed::read_without_nonce(reader, 0)?, - }), - - 1 => { - let participation = { - let mut participation_len = [0; 4]; - reader.read_exact(&mut participation_len)?; - let participation_len = u32::from_le_bytes(participation_len); - - if participation_len > u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() { - Err(io::Error::other( - "participation present in transaction exceeded transaction size limit", - ))?; - } - let participation_len = usize::try_from(participation_len).unwrap(); - - let mut participation = vec![0; participation_len]; - reader.read_exact(&mut participation)?; - participation - }; - - let signed = Signed::read_without_nonce(reader, 0)?; - - Ok(Transaction::DkgParticipation { participation, signed }) - } - - 2 => { - let mut attempt = [0; 4]; - reader.read_exact(&mut attempt)?; - let attempt = u32::from_le_bytes(attempt); - - let mut confirmation_nonces = [0; 64]; - reader.read_exact(&mut confirmation_nonces)?; - - let signed = Signed::read_without_nonce(reader, 0)?; - - Ok(Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed }) - } - - 3 => { - let mut attempt = [0; 4]; - reader.read_exact(&mut attempt)?; - let attempt = u32::from_le_bytes(attempt); - - let mut confirmation_share = [0; 32]; - reader.read_exact(&mut confirmation_share)?; - - let signed = Signed::read_without_nonce(reader, 1)?; - - Ok(Transaction::DkgConfirmationShare { attempt, confirmation_share, signed }) - } - - 4 => { - let mut block = [0; 32]; - reader.read_exact(&mut block)?; - Ok(Transaction::CosignSubstrateBlock(block)) - } - - 5 => { - let mut block = [0; 32]; - reader.read_exact(&mut block)?; - let mut batch = [0; 4]; - reader.read_exact(&mut batch)?; - Ok(Transaction::Batch { block, batch: u32::from_le_bytes(batch) }) - } - - 6 => { - let mut block = [0; 8]; - reader.read_exact(&mut block)?; - Ok(Transaction::SubstrateBlock(u64::from_le_bytes(block))) - } - - 7 => SignData::read(reader).map(Transaction::SubstrateSign), - 8 => SignData::read(reader).map(Transaction::Sign), - - 9 => { - let mut plan = [0; 32]; - reader.read_exact(&mut plan)?; - - let mut tx_hash_len = [0]; - reader.read_exact(&mut tx_hash_len)?; - let mut tx_hash = vec![0; usize::from(tx_hash_len[0])]; - reader.read_exact(&mut tx_hash)?; - - let first_signer = Ristretto::read_G(reader)?; - let signature = SchnorrSignature::::read(reader)?; - - Ok(Transaction::SignCompleted { plan, tx_hash, first_signer, signature }) - } - - 10 => { - let mut len = [0]; - reader.read_exact(&mut len)?; - let len = len[0]; - // If the set has as many validators as MAX_KEY_SHARES_PER_SET, then the amount of distinct - // validators (the amount of validators reported on) will be at most - // `MAX_KEY_SHARES_PER_SET - 1` - if u32::from(len) > (serai_client::validator_sets::primitives::MAX_KEY_SHARES_PER_SET - 1) { - Err(io::Error::other("more points reported than allowed validator"))?; - } - let mut points = vec![0u32; len.into()]; - for points in &mut points { - let mut these_points = [0; 4]; - reader.read_exact(&mut these_points)?; - *points = u32::from_le_bytes(these_points); - } - Ok(Transaction::SlashReport(points, Signed::read_without_nonce(reader, 0)?)) - } - - _ => Err(io::Error::other("invalid 
transaction type")), - } + borsh::from_reader(reader) } fn write(&self, writer: &mut W) -> io::Result<()> { - match self { - Transaction::RemoveParticipant { participant, signed } => { - writer.write_all(&[0])?; - writer.write_all(&participant.to_bytes())?; - signed.write_without_nonce(writer) - } - - Transaction::DkgParticipation { participation, signed } => { - writer.write_all(&[1])?; - writer.write_all(&u32::try_from(participation.len()).unwrap().to_le_bytes())?; - writer.write_all(participation)?; - signed.write_without_nonce(writer) - } - - Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed } => { - writer.write_all(&[2])?; - writer.write_all(&attempt.to_le_bytes())?; - writer.write_all(confirmation_nonces)?; - signed.write_without_nonce(writer) - } - - Transaction::DkgConfirmationShare { attempt, confirmation_share, signed } => { - writer.write_all(&[3])?; - writer.write_all(&attempt.to_le_bytes())?; - writer.write_all(confirmation_share)?; - signed.write_without_nonce(writer) - } - - Transaction::CosignSubstrateBlock(block) => { - writer.write_all(&[4])?; - writer.write_all(block) - } - - Transaction::Batch { block, batch } => { - writer.write_all(&[5])?; - writer.write_all(block)?; - writer.write_all(&batch.to_le_bytes()) - } - - Transaction::SubstrateBlock(block) => { - writer.write_all(&[6])?; - writer.write_all(&block.to_le_bytes()) - } - - Transaction::SubstrateSign(data) => { - writer.write_all(&[7])?; - data.write(writer) - } - Transaction::Sign(data) => { - writer.write_all(&[8])?; - data.write(writer) - } - Transaction::SignCompleted { plan, tx_hash, first_signer, signature } => { - writer.write_all(&[9])?; - writer.write_all(plan)?; - writer - .write_all(&[u8::try_from(tx_hash.len()).expect("tx hash length exceed 255 bytes")])?; - writer.write_all(tx_hash)?; - writer.write_all(&first_signer.to_bytes())?; - signature.write(writer) - } - Transaction::SlashReport(points, signed) => { - writer.write_all(&[10])?; - writer.write_all(&[u8::try_from(points.len()).unwrap()])?; - for points in points { - writer.write_all(&points.to_le_bytes())?; - } - signed.write_without_nonce(writer) - } - } + borsh::to_writer(writer, self) } } impl TransactionTrait for Transaction { - fn kind(&self) -> TransactionKind<'_> { + fn kind(&self) -> TransactionKind { match self { Transaction::RemoveParticipant { participant, signed } => { - TransactionKind::Signed((b"remove", participant.to_bytes()).encode(), signed) + TransactionKind::Signed((b"RemoveParticipant", participant).encode(), signed.nonce(0)) } Transaction::DkgParticipation { signed, .. } => { - TransactionKind::Signed(b"dkg".to_vec(), signed) + TransactionKind::Signed(b"DkgParticipation".encode(), signed.nonce(0)) + } + Transaction::DkgConfirmationPreprocess { attempt, signed, .. } => { + TransactionKind::Signed((b"DkgConfirmation", attempt).encode(), signed.nonce(0)) } - Transaction::DkgConfirmationNonces { attempt, signed, .. } | Transaction::DkgConfirmationShare { attempt, signed, .. } => { - TransactionKind::Signed((b"dkg_confirmation", attempt).encode(), signed) + TransactionKind::Signed((b"DkgConfirmation", attempt).encode(), signed.nonce(1)) } - Transaction::CosignSubstrateBlock(_) => TransactionKind::Provided("cosign"), + Transaction::CosignSubstrateBlock { .. } => TransactionKind::Provided("CosignSubstrateBlock"), + Transaction::SubstrateBlock { .. } => TransactionKind::Provided("SubstrateBlock"), + Transaction::Batch { .. } => TransactionKind::Provided("Batch"), - Transaction::Batch { .. 
} => TransactionKind::Provided("batch"), - Transaction::SubstrateBlock(_) => TransactionKind::Provided("serai"), - - Transaction::SubstrateSign(data) => { - TransactionKind::Signed((b"substrate", data.plan, data.attempt).encode(), &data.signed) + Transaction::Sign { id, attempt, label, signed, .. } => { + TransactionKind::Signed((b"Sign", id, attempt).encode(), signed.nonce(label.nonce())) } - Transaction::Sign(data) => { - TransactionKind::Signed((b"sign", data.plan, data.attempt).encode(), &data.signed) - } - Transaction::SignCompleted { .. } => TransactionKind::Unsigned, - Transaction::SlashReport(_, signed) => { - TransactionKind::Signed(b"slash_report".to_vec(), signed) + Transaction::SlashReport { signed, .. } => { + TransactionKind::Signed(b"SlashReport".encode(), signed.nonce(0)) } } } fn hash(&self) -> [u8; 32] { - let mut tx = self.serialize(); + let mut tx = ReadWrite::serialize(self); if let TransactionKind::Signed(_, signed) = self.kind() { // Make sure the part we're cutting off is the signature assert_eq!(tx.drain((tx.len() - 64) ..).collect::>(), signed.signature.serialize()); } - Blake2s256::digest([b"Coordinator Tributary Transaction".as_slice(), &tx].concat()).into() + Blake2b::::digest(&tx).into() } + // We don't have any verification logic embedded into the transaction. We just slash anyone who + // publishes an invalid transaction. fn verify(&self) -> Result<(), TransactionError> { - // TODO: Check SubstrateSign's lengths here - - if let Transaction::SignCompleted { first_signer, signature, .. } = self { - if !signature.verify(*first_signer, self.sign_completed_challenge()) { - Err(TransactionError::InvalidContent)?; - } - } - Ok(()) } } impl Transaction { - // Used to initially construct transactions so we can then get sig hashes and perform signing - pub fn empty_signed() -> Signed { - Signed { - signer: Ristretto::generator(), - nonce: 0, - signature: SchnorrSignature:: { - R: Ristretto::generator(), - s: ::F::ZERO, - }, - } - } - // Sign a transaction pub fn sign( &mut self, @@ -512,76 +246,38 @@ impl Transaction { genesis: [u8; 32], key: &Zeroizing<::F>, ) { - fn signed(tx: &mut Transaction) -> (u32, &mut Signed) { - #[allow(clippy::match_same_arms)] // Doesn't make semantic sense here - let nonce = match tx { - Transaction::RemoveParticipant { .. } => 0, - - Transaction::DkgParticipation { .. } => 0, - // Uses a nonce of 0 as it has an internal attempt counter we distinguish by - Transaction::DkgConfirmationNonces { .. } => 0, - // Uses a nonce of 1 due to internal attempt counter and due to following - // DkgConfirmationNonces - Transaction::DkgConfirmationShare { .. } => 1, - - Transaction::CosignSubstrateBlock(_) => panic!("signing CosignSubstrateBlock"), + fn signed(tx: &mut Transaction) -> &mut Signed { + #[allow(clippy::match_same_arms)] // This doesn't make semantic sense here + match tx { + Transaction::RemoveParticipant { ref mut signed, .. } | + Transaction::DkgParticipation { ref mut signed, .. } | + Transaction::DkgConfirmationPreprocess { ref mut signed, .. } => signed, + Transaction::DkgConfirmationShare { ref mut signed, .. } => signed, + Transaction::CosignSubstrateBlock { .. } => panic!("signing CosignSubstrateBlock"), + Transaction::SubstrateBlock { .. } => panic!("signing SubstrateBlock"), Transaction::Batch { .. 
} => panic!("signing Batch"), - Transaction::SubstrateBlock(_) => panic!("signing SubstrateBlock"), - Transaction::SubstrateSign(data) => data.label.nonce(), - Transaction::Sign(data) => data.label.nonce(), + Transaction::Sign { ref mut signed, .. } => signed, - Transaction::SignCompleted { .. } => panic!("signing SignCompleted"), - - Transaction::SlashReport(_, _) => 0, - }; - - ( - nonce, - #[allow(clippy::match_same_arms)] - match tx { - Transaction::RemoveParticipant { ref mut signed, .. } | - Transaction::DkgParticipation { ref mut signed, .. } | - Transaction::DkgConfirmationNonces { ref mut signed, .. } => signed, - Transaction::DkgConfirmationShare { ref mut signed, .. } => signed, - - Transaction::CosignSubstrateBlock(_) => panic!("signing CosignSubstrateBlock"), - - Transaction::Batch { .. } => panic!("signing Batch"), - Transaction::SubstrateBlock(_) => panic!("signing SubstrateBlock"), - - Transaction::SubstrateSign(ref mut data) => &mut data.signed, - Transaction::Sign(ref mut data) => &mut data.signed, - - Transaction::SignCompleted { .. } => panic!("signing SignCompleted"), - - Transaction::SlashReport(_, ref mut signed) => signed, - }, - ) + Transaction::SlashReport { ref mut signed, .. } => signed, + } } - let (nonce, signed_ref) = signed(self); - signed_ref.signer = Ristretto::generator() * key.deref(); - signed_ref.nonce = nonce; - + // Decide the nonce to sign with let sig_nonce = Zeroizing::new(::F::random(rng)); - signed(self).1.signature.R = ::generator() * sig_nonce.deref(); - let sig_hash = self.sig_hash(genesis); - signed(self).1.signature = SchnorrSignature::::sign(key, sig_nonce, sig_hash); - } - pub fn sign_completed_challenge(&self) -> ::F { - if let Transaction::SignCompleted { plan, tx_hash, first_signer, signature } = self { - let mut transcript = - RecommendedTranscript::new(b"Coordinator Tributary Transaction SignCompleted"); - transcript.append_message(b"plan", plan); - transcript.append_message(b"tx_hash", tx_hash); - transcript.append_message(b"signer", first_signer.to_bytes()); - transcript.append_message(b"nonce", signature.R.to_bytes()); - Ristretto::hash_to_F(b"SignCompleted signature", &transcript.challenge(b"challenge")) - } else { - panic!("sign_completed_challenge called on transaction which wasn't SignCompleted") + { + // Set the signer and the nonce + let signed = signed(self); + signed.signer = Ristretto::generator() * key.deref(); + signed.signature.R = ::generator() * sig_nonce.deref(); } + + // Get the signature hash (which now includes `R || A` making it valid as the challenge) + let sig_hash = self.sig_hash(genesis); + + // Sign the signature + signed(self).signature = SchnorrSignature::::sign(key, sig_nonce, sig_hash); } } diff --git a/coordinator/substrate/Cargo.toml b/coordinator/substrate/Cargo.toml index 4d66c05e..21577d62 100644 --- a/coordinator/substrate/Cargo.toml +++ b/coordinator/substrate/Cargo.toml @@ -20,16 +20,16 @@ workspace = true [dependencies] scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } -serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] } +serai-client = { path = "../../substrate/client", version = "0.1", default-features = false, features = ["serai", "borsh"] } log = { version = "0.4", default-features = false, features = ["std"] } futures = { version = "0.3", default-features = false, features 
= ["std"] } tokio = { version = "1", default-features = false } -serai-db = { version = "0.1.1", path = "../../common/db" } -serai-task = { version = "0.1", path = "../../common/task" } +serai-db = { path = "../../common/db", version = "0.1.1" } +serai-task = { path = "../../common/task", version = "0.1" } -serai-cosign = { path = "../cosign" } +serai-cosign = { path = "../cosign", version = "0.1" } -messages = { package = "serai-processor-messages", path = "../../processor/messages" } +messages = { package = "serai-processor-messages", version = "0.1", path = "../../processor/messages" } diff --git a/coordinator/substrate/src/lib.rs b/coordinator/substrate/src/lib.rs index 9c3c8863..41378508 100644 --- a/coordinator/substrate/src/lib.rs +++ b/coordinator/substrate/src/lib.rs @@ -96,6 +96,9 @@ impl NewSet { } /// The channel for notifications to sign a slash report, as emitted by an ephemeral event stream. +/// +/// These notifications MAY be for irrelevant validator sets. The only guarantee is the +/// notifications for all relevant validator sets will be included. pub struct SignSlashReport; impl SignSlashReport { pub(crate) fn send(txn: &mut impl DbTxn, set: &ValidatorSet) { diff --git a/coordinator/tributary/src/block.rs b/coordinator/tributary/src/block.rs index 6f3374bd..d632ce57 100644 --- a/coordinator/tributary/src/block.rs +++ b/coordinator/tributary/src/block.rs @@ -135,7 +135,7 @@ impl Block { // Check TXs are sorted by nonce. let nonce = |tx: &Transaction| { if let TransactionKind::Signed(_, Signed { nonce, .. }) = tx.kind() { - *nonce + nonce } else { 0 } diff --git a/coordinator/tributary/src/blockchain.rs b/coordinator/tributary/src/blockchain.rs index 1664860b..0eee391b 100644 --- a/coordinator/tributary/src/blockchain.rs +++ b/coordinator/tributary/src/blockchain.rs @@ -323,7 +323,7 @@ impl Blockchain { } TransactionKind::Signed(order, Signed { signer, nonce, .. }) => { let next_nonce = nonce + 1; - txn.put(Self::next_nonce_key(&self.genesis, signer, &order), next_nonce.to_le_bytes()); + txn.put(Self::next_nonce_key(&self.genesis, &signer, &order), next_nonce.to_le_bytes()); self.mempool.remove(&tx.hash()); } } diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 9b23dc6c..476dbf93 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -110,7 +110,7 @@ impl Transaction { } } - pub fn kind(&self) -> TransactionKind<'_> { + pub fn kind(&self) -> TransactionKind { match self { Transaction::Tendermint(tx) => tx.kind(), Transaction::Application(tx) => tx.kind(), diff --git a/coordinator/tributary/src/mempool.rs b/coordinator/tributary/src/mempool.rs index 7558bae0..e83c3acb 100644 --- a/coordinator/tributary/src/mempool.rs +++ b/coordinator/tributary/src/mempool.rs @@ -81,11 +81,11 @@ impl Mempool { } Transaction::Application(tx) => match tx.kind() { TransactionKind::Signed(order, Signed { signer, nonce, .. }) => { - let amount = *res.txs_per_signer.get(signer).unwrap_or(&0) + 1; - res.txs_per_signer.insert(*signer, amount); + let amount = *res.txs_per_signer.get(&signer).unwrap_or(&0) + 1; + res.txs_per_signer.insert(signer, amount); if let Some(prior_nonce) = - res.last_nonce_in_mempool.insert((*signer, order.clone()), *nonce) + res.last_nonce_in_mempool.insert((signer, order.clone()), nonce) { assert_eq!(prior_nonce, nonce - 1); } @@ -133,14 +133,14 @@ impl Mempool { match app_tx.kind() { TransactionKind::Signed(order, Signed { signer, .. 
}) => { // Get the nonce from the blockchain - let Some(blockchain_next_nonce) = blockchain_next_nonce(*signer, order.clone()) else { + let Some(blockchain_next_nonce) = blockchain_next_nonce(signer, order.clone()) else { // Not a participant Err(TransactionError::InvalidSigner)? }; let mut next_nonce = blockchain_next_nonce; if let Some(mempool_last_nonce) = - self.last_nonce_in_mempool.get(&(*signer, order.clone())) + self.last_nonce_in_mempool.get(&(signer, order.clone())) { assert!(*mempool_last_nonce >= blockchain_next_nonce); next_nonce = *mempool_last_nonce + 1; @@ -148,14 +148,14 @@ impl Mempool { // If we have too many transactions from this sender, don't add this yet UNLESS we are // this sender - let amount_in_pool = *self.txs_per_signer.get(signer).unwrap_or(&0) + 1; + let amount_in_pool = *self.txs_per_signer.get(&signer).unwrap_or(&0) + 1; if !internal && (amount_in_pool > ACCOUNT_MEMPOOL_LIMIT) { Err(TransactionError::TooManyInMempool)?; } verify_transaction(app_tx, self.genesis, &mut |_, _| Some(next_nonce))?; - self.last_nonce_in_mempool.insert((*signer, order.clone()), next_nonce); - self.txs_per_signer.insert(*signer, amount_in_pool); + self.last_nonce_in_mempool.insert((signer, order.clone()), next_nonce); + self.txs_per_signer.insert(signer, amount_in_pool); } TransactionKind::Unsigned => { // check we have the tx in the pool/chain @@ -205,7 +205,7 @@ impl Mempool { // Sort signed by nonce let nonce = |tx: &Transaction| { if let TransactionKind::Signed(_, Signed { nonce, .. }) = tx.kind() { - *nonce + nonce } else { unreachable!() } @@ -242,11 +242,11 @@ impl Mempool { if let Some(tx) = self.txs.remove(tx) { if let TransactionKind::Signed(order, Signed { signer, nonce, .. }) = tx.kind() { - let amount = *self.txs_per_signer.get(signer).unwrap() - 1; - self.txs_per_signer.insert(*signer, amount); + let amount = *self.txs_per_signer.get(&signer).unwrap() - 1; + self.txs_per_signer.insert(signer, amount); - if self.last_nonce_in_mempool.get(&(*signer, order.clone())) == Some(nonce) { - self.last_nonce_in_mempool.remove(&(*signer, order)); + if self.last_nonce_in_mempool.get(&(signer, order.clone())) == Some(&nonce) { + self.last_nonce_in_mempool.remove(&(signer, order)); } } } diff --git a/coordinator/tributary/src/tendermint/tx.rs b/coordinator/tributary/src/tendermint/tx.rs index 8af40708..ea2a7256 100644 --- a/coordinator/tributary/src/tendermint/tx.rs +++ b/coordinator/tributary/src/tendermint/tx.rs @@ -39,7 +39,7 @@ impl ReadWrite for TendermintTx { } impl Transaction for TendermintTx { - fn kind(&self) -> TransactionKind<'_> { + fn kind(&self) -> TransactionKind { // There's an assert elsewhere in the codebase expecting this behavior // If we do want to add Provided/Signed TendermintTxs, review the implications carefully TransactionKind::Unsigned diff --git a/coordinator/tributary/src/tests/block.rs b/coordinator/tributary/src/tests/block.rs index c5bf19c6..03493e21 100644 --- a/coordinator/tributary/src/tests/block.rs +++ b/coordinator/tributary/src/tests/block.rs @@ -60,8 +60,8 @@ impl ReadWrite for NonceTransaction { } impl TransactionTrait for NonceTransaction { - fn kind(&self) -> TransactionKind<'_> { - TransactionKind::Signed(vec![], &self.2) + fn kind(&self) -> TransactionKind { + TransactionKind::Signed(vec![], self.2.clone()) } fn hash(&self) -> [u8; 32] { diff --git a/coordinator/tributary/src/tests/blockchain.rs b/coordinator/tributary/src/tests/blockchain.rs index 6103a62f..3c4df327 100644 --- a/coordinator/tributary/src/tests/blockchain.rs +++ 
b/coordinator/tributary/src/tests/blockchain.rs @@ -425,7 +425,7 @@ async fn block_tx_ordering() { } impl TransactionTrait for SignedTx { - fn kind(&self) -> TransactionKind<'_> { + fn kind(&self) -> TransactionKind { match self { SignedTx::Signed(signed) => signed.kind(), SignedTx::Provided(pro) => pro.kind(), diff --git a/coordinator/tributary/src/tests/transaction/mod.rs b/coordinator/tributary/src/tests/transaction/mod.rs index 1f85947a..eeaa0acb 100644 --- a/coordinator/tributary/src/tests/transaction/mod.rs +++ b/coordinator/tributary/src/tests/transaction/mod.rs @@ -67,7 +67,7 @@ impl ReadWrite for ProvidedTransaction { } impl Transaction for ProvidedTransaction { - fn kind(&self) -> TransactionKind<'_> { + fn kind(&self) -> TransactionKind { match self.0[0] { 1 => TransactionKind::Provided("order1"), 2 => TransactionKind::Provided("order2"), @@ -119,8 +119,8 @@ impl ReadWrite for SignedTransaction { } impl Transaction for SignedTransaction { - fn kind(&self) -> TransactionKind<'_> { - TransactionKind::Signed(vec![], &self.1) + fn kind(&self) -> TransactionKind { + TransactionKind::Signed(vec![], self.1.clone()) } fn hash(&self) -> [u8; 32] { diff --git a/coordinator/tributary/src/transaction.rs b/coordinator/tributary/src/transaction.rs index 8e9342d7..d7ff4092 100644 --- a/coordinator/tributary/src/transaction.rs +++ b/coordinator/tributary/src/transaction.rs @@ -109,7 +109,7 @@ impl Signed { #[allow(clippy::large_enum_variant)] #[derive(Clone, PartialEq, Eq, Debug)] -pub enum TransactionKind<'a> { +pub enum TransactionKind { /// This transaction should be provided by every validator, in an exact order. /// /// The contained static string names the orderer to use. This allows two distinct provided @@ -137,14 +137,14 @@ pub enum TransactionKind<'a> { Unsigned, /// A signed transaction. - Signed(Vec, &'a Signed), + Signed(Vec, Signed), } // TODO: Should this be renamed TransactionTrait now that a literal Transaction exists? // Or should the literal Transaction be renamed to Event? pub trait Transaction: 'static + Send + Sync + Clone + Eq + Debug + ReadWrite { /// Return what type of transaction this is. - fn kind(&self) -> TransactionKind<'_>; + fn kind(&self) -> TransactionKind; /// Return the hash of this transaction. /// @@ -198,8 +198,8 @@ pub(crate) fn verify_transaction( match tx.kind() { TransactionKind::Provided(_) | TransactionKind::Unsigned => {} TransactionKind::Signed(order, Signed { signer, nonce, signature }) => { - if let Some(next_nonce) = get_and_increment_nonce(signer, &order) { - if *nonce != next_nonce { + if let Some(next_nonce) = get_and_increment_nonce(&signer, &order) { + if nonce != next_nonce { Err(TransactionError::InvalidNonce)?; } } else { @@ -208,7 +208,7 @@ pub(crate) fn verify_transaction( } // TODO: Use a batch verification here - if !signature.verify(*signer, tx.sig_hash(genesis)) { + if !signature.verify(signer, tx.sig_hash(genesis)) { Err(TransactionError::InvalidSignature)?; } } From 2240a50a0cdce4f01fcca8d41f48dbf36cd8226b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 31 Dec 2024 17:17:12 -0500 Subject: [PATCH 231/368] Rebroadcast cosigns for the currently evaluated session, not the latest intended If Substrate has a block 500 with a key gen, and a block 600 with a key gen, and the session starting on 500 never cosigns everything, everyone up-to-date will want the cosigns for the session starting on block 500. Everyone up-to-date will also be rebroadcasting the non-existent cosigns for the session which has yet to start. 
This wouldn't cause a stall, as eventually each individual set would cosign the latest notable block, and then that would be explicitly synced, but it's still not the intended behavior. We also won't even intake the cosigns for the latest intended session if it exceeds the session we're currently evaluating. This does mean those behind on the cosigning protocol wouldn't have rebroadcasted their historical cosigns, and now will, but that's valuable as we don't actually know if we're behind or up-to-date (per the issue posited above). --- coordinator/cosign/src/evaluator.rs | 17 +++++++++-------- coordinator/cosign/src/lib.rs | 6 ++---- 2 files changed, 11 insertions(+), 12 deletions(-) diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs index fc606ecc..db286a4f 100644 --- a/coordinator/cosign/src/evaluator.rs +++ b/coordinator/cosign/src/evaluator.rs @@ -24,7 +24,7 @@ db_channel!( ); // This is a strict function which won't panic, even with a malicious Serai node, so long as: -// - It's called incrementally +// - It's called incrementally (with an increment of 1) // - It's only called for block numbers we've completed indexing on within the intend task // - It's only called for block numbers after a global session has started // - The global sessions channel is populated as the block declaring the session is indexed @@ -69,6 +69,10 @@ fn currently_evaluated_global_session_strict( res } +pub(crate) fn currently_evaluated_global_session(getter: &impl Get) -> Option<[u8; 32]> { + CurrentlyEvaluatedGlobalSession::get(getter).map(|(id, _info)| id) +} + /// A task to determine if a block has been cosigned and we should handle it. pub(crate) struct CosignEvaluatorTask { pub(crate) db: D, @@ -87,13 +91,14 @@ impl ContinuallyRan for CosignEvaluatorTask { - let (global_session, global_session_info) = - currently_evaluated_global_session_strict(&mut txn, block_number); - let mut weight_cosigned = 0; for set in global_session_info.sets { // Check if we have the cosign from this set @@ -145,10 +150,6 @@ impl ContinuallyRan for CosignEvaluatorTask = None; for set in global_session_info.sets { diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index 076b56c1..6409b56f 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -308,14 +308,12 @@ impl Cosigning { } cosigns } else { - let Some(latest_global_session) = LatestGlobalSessionIntended::get(&self.db) else { + let Some(global_session) = evaluator::currently_evaluated_global_session(&self.db) else { return vec![]; }; let mut cosigns = Vec::with_capacity(serai_client::primitives::NETWORKS.len()); for network in serai_client::primitives::NETWORKS { - if let Some(cosign) = - NetworksLatestCosignedBlock::get(&self.db, latest_global_session, network) - { + if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, global_session, network) { cosigns.push(cosign); } } From 6272c40561f71894ba33f7834611b6fa56d841b5 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 31 Dec 2024 18:10:47 -0500 Subject: [PATCH 232/368] Restore `block_hash` to Batch It's not only helpful (to easily check where Serai's view of the external network is) but it's necessary in case of a non-trivial chain fork to determine which blockchain Serai considers canonical.
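As a rough sketch of what this restores (condensed from the hunks below, with derives, the other attributes, and surrounding logic omitted, and the `Vec` element type spelled out for clarity):

  // substrate/in-instructions/primitives: `Batch` once again carries the hash of
  // the external network block it results from.
  pub struct Batch {
    pub network: NetworkId,
    pub id: u32,
    pub external_network_block_hash: [u8; 32],
    pub instructions: Vec<InInstructionWithBalance>,
  }

  // processor/scanner, upon a Batch being acknowledged on-chain: check the block
  // the Batch was for is the block we locally indexed at that number, detecting a
  // non-trivial fork of the external network instead of silently continuing past it.
  assert_eq!(
    batch.external_network_block_hash,
    index::block_id(&txn, block_number),
    "batch acknowledged on-chain was for a distinct block"
  );

Checking the block hash, and not merely `in_instructions_hash`, distinguishes two forks which happen to yield identical instruction sets.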
--- coordinator/substrate/src/canonical.rs | 2 + processor/bin/src/lib.rs | 13 +---- processor/messages/src/lib.rs | 1 + processor/scanner/src/batch/mod.rs | 16 +++++- processor/scanner/src/lib.rs | 20 ++----- processor/scanner/src/substrate/db.rs | 33 +++--------- processor/scanner/src/substrate/mod.rs | 53 +++++++------------ substrate/abi/src/in_instructions.rs | 1 + substrate/in-instructions/pallet/src/lib.rs | 2 + .../in-instructions/primitives/src/lib.rs | 1 + 10 files changed, 55 insertions(+), 87 deletions(-) diff --git a/coordinator/substrate/src/canonical.rs b/coordinator/substrate/src/canonical.rs index d778bc7c..b9479aeb 100644 --- a/coordinator/substrate/src/canonical.rs +++ b/coordinator/substrate/src/canonical.rs @@ -160,6 +160,7 @@ impl ContinuallyRan for CanonicalEventStream { network: batch_network, publishing_session, id, + external_network_block_hash, in_instructions_hash, in_instruction_results, } = this_batch @@ -173,6 +174,7 @@ impl ContinuallyRan for CanonicalEventStream { batch = Some(ExecutedBatch { id: *id, publisher: *publishing_session, + external_network_block_hash: *external_network_block_hash, in_instructions_hash: *in_instructions_hash, in_instruction_results: in_instruction_results .iter() diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs index 7dc794bf..119c4f40 100644 --- a/processor/bin/src/lib.rs +++ b/processor/bin/src/lib.rs @@ -277,23 +277,14 @@ pub async fn main_loop< } => { let scanner = scanner.as_mut().unwrap(); - if let Some(messages::substrate::ExecutedBatch { - id, - publisher, - in_instructions_hash, - in_instruction_results, - }) = batch - { + if let Some(batch) = batch { let key_to_activate = KeyToActivate::>::try_recv(txn.as_mut().unwrap()).map(|key| key.0); // This is a cheap call as it internally just queues this to be done later let _: () = scanner.acknowledge_batch( txn.take().unwrap(), - id, - publisher, - in_instructions_hash, - in_instruction_results, + batch, /* `acknowledge_batch` takes burns to optimize handling returns with standard payments. 
That's why handling these with a Batch (and not waiting until the diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index 1b6e1996..438beb5b 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -188,6 +188,7 @@ pub mod substrate { pub struct ExecutedBatch { pub id: u32, pub publisher: Session, + pub external_network_block_hash: [u8; 32], pub in_instructions_hash: [u8; 32], pub in_instruction_results: Vec, } diff --git a/processor/scanner/src/batch/mod.rs b/processor/scanner/src/batch/mod.rs index 8563ac4e..158306ca 100644 --- a/processor/scanner/src/batch/mod.rs +++ b/processor/scanner/src/batch/mod.rs @@ -10,6 +10,7 @@ use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; use primitives::{EncodableG, task::ContinuallyRan}; use crate::{ db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToBatchDb, BatchData, BatchToReportDb}, + index, scan::next_to_scan_for_outputs_block, ScannerFeed, KeyFor, }; @@ -100,10 +101,16 @@ impl ContinuallyRan for BatchTask { // If this block is notable, create the Batch(s) for it if notable { let network = S::NETWORK; + let external_network_block_hash = index::block_id(&txn, block_number); let mut batch_id = BatchDb::::acquire_batch_id(&mut txn); // start with empty batch - let mut batches = vec![Batch { network, id: batch_id, instructions: vec![] }]; + let mut batches = vec![Batch { + network, + id: batch_id, + external_network_block_hash, + instructions: vec![], + }]; // We also track the return information for the InInstructions within a Batch in case // they error let mut return_information = vec![vec![]]; @@ -123,7 +130,12 @@ impl ContinuallyRan for BatchTask { batch_id = BatchDb::::acquire_batch_id(&mut txn); // make a new batch with this instruction included - batches.push(Batch { network, id: batch_id, instructions: vec![in_instruction] }); + batches.push(Batch { + network, + id: batch_id, + external_network_block_hash, + instructions: vec![in_instruction], + }); // Since we're allocating a new batch, allocate a new set of return addresses for it return_information.push(vec![]); } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 3575f0d7..510af61b 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -11,9 +11,9 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, Db}; use serai_primitives::{NetworkId, Coin, Amount}; -use serai_validator_sets_primitives::Session; use serai_coins_primitives::OutInstructionWithBalance; +use messages::substrate::ExecutedBatch; use primitives::{task::*, Address, ReceivedOutput, Block, Payment}; // Logic for deciding where in its lifetime a multisig is. @@ -444,29 +444,17 @@ impl Scanner { /// `queue_burns`. Doing so will cause them to be executed multiple times. /// /// The calls to this function must be ordered with regards to `queue_burns`. 
- #[allow(clippy::too_many_arguments)] pub fn acknowledge_batch( &mut self, mut txn: impl DbTxn, - batch_id: u32, - publisher: Session, - in_instructions_hash: [u8; 32], - in_instruction_results: Vec, + batch: ExecutedBatch, burns: Vec, key_to_activate: Option>, ) { - log::info!("acknowledging batch {batch_id}"); + log::info!("acknowledging batch {}", batch.id); // Queue acknowledging this block via the Substrate task - substrate::queue_acknowledge_batch::( - &mut txn, - batch_id, - publisher, - in_instructions_hash, - in_instruction_results, - burns, - key_to_activate, - ); + substrate::queue_acknowledge_batch::(&mut txn, batch, burns, key_to_activate); // Commit this txn so this data is flushed txn.commit(); // Then run the Substrate task diff --git a/processor/scanner/src/substrate/db.rs b/processor/scanner/src/substrate/db.rs index af6679d4..1e0181b8 100644 --- a/processor/scanner/src/substrate/db.rs +++ b/processor/scanner/src/substrate/db.rs @@ -6,16 +6,14 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db, db_channel}; use serai_coins_primitives::OutInstructionWithBalance; -use serai_validator_sets_primitives::Session; + +use messages::substrate::ExecutedBatch; use crate::{ScannerFeed, KeyFor}; #[derive(BorshSerialize, BorshDeserialize)] struct AcknowledgeBatchEncodable { - batch_id: u32, - publisher: Session, - in_instructions_hash: [u8; 32], - in_instruction_results: Vec, + batch: ExecutedBatch, burns: Vec, key_to_activate: Option>, } @@ -27,10 +25,7 @@ enum ActionEncodable { } pub(crate) struct AcknowledgeBatch { - pub(crate) batch_id: u32, - pub(crate) publisher: Session, - pub(crate) in_instructions_hash: [u8; 32], - pub(crate) in_instruction_results: Vec, + pub(crate) batch: ExecutedBatch, pub(crate) burns: Vec, pub(crate) key_to_activate: Option>, } @@ -64,20 +59,14 @@ impl SubstrateDb { pub(crate) fn queue_acknowledge_batch( txn: &mut impl DbTxn, - batch_id: u32, - publisher: Session, - in_instructions_hash: [u8; 32], - in_instruction_results: Vec, + batch: ExecutedBatch, burns: Vec, key_to_activate: Option>, ) { Actions::send( txn, &ActionEncodable::AcknowledgeBatch(AcknowledgeBatchEncodable { - batch_id, - publisher, - in_instructions_hash, - in_instruction_results, + batch, burns, key_to_activate: key_to_activate.map(|key| key.to_bytes().as_ref().to_vec()), }), @@ -91,17 +80,11 @@ impl SubstrateDb { let action_encodable = Actions::try_recv(txn)?; Some(match action_encodable { ActionEncodable::AcknowledgeBatch(AcknowledgeBatchEncodable { - batch_id, - publisher, - in_instructions_hash, - in_instruction_results, + batch, burns, key_to_activate, }) => Action::AcknowledgeBatch(AcknowledgeBatch { - batch_id, - publisher, - in_instructions_hash, - in_instruction_results, + batch, burns, key_to_activate: key_to_activate.map(|key| { let mut repr = as GroupEncoding>::Repr::default(); diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index 506debd4..6b22a259 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -3,12 +3,12 @@ use core::{marker::PhantomData, future::Future}; use serai_db::{Get, DbTxn, Db}; use serai_coins_primitives::{OutInstruction, OutInstructionWithBalance}; -use serai_validator_sets_primitives::Session; +use messages::substrate::ExecutedBatch; use primitives::task::ContinuallyRan; use crate::{ db::{ScannerGlobalDb, SubstrateToEventualityDb, AcknowledgedBatches}, - batch, ScannerFeed, KeyFor, + index, batch, ScannerFeed, KeyFor, }; mod db; 
@@ -19,22 +19,11 @@ pub(crate) fn last_acknowledged_batch(getter: &impl Get) -> Opti } pub(crate) fn queue_acknowledge_batch( txn: &mut impl DbTxn, - batch_id: u32, - publisher: Session, - in_instructions_hash: [u8; 32], - in_instruction_results: Vec, + batch: ExecutedBatch, burns: Vec, key_to_activate: Option>, ) { - SubstrateDb::::queue_acknowledge_batch( - txn, - batch_id, - publisher, - in_instructions_hash, - in_instruction_results, - burns, - key_to_activate, - ) + SubstrateDb::::queue_acknowledge_batch(txn, batch, burns, key_to_activate) } pub(crate) fn queue_queue_burns( txn: &mut impl DbTxn, @@ -73,40 +62,38 @@ impl ContinuallyRan for SubstrateTask { }; match action { - Action::AcknowledgeBatch(AcknowledgeBatch { - batch_id, - publisher, - in_instructions_hash, - in_instruction_results, - mut burns, - key_to_activate, - }) => { + Action::AcknowledgeBatch(AcknowledgeBatch { batch, mut burns, key_to_activate }) => { // Check if we have the information for this batch let Some(batch::BatchInfo { block_number, session_to_sign_batch, external_key_for_session_to_sign_batch, - in_instructions_hash: expected_in_instructions_hash, - }) = batch::take_info_for_batch::(&mut txn, batch_id) + in_instructions_hash, + }) = batch::take_info_for_batch::(&mut txn, batch.id) else { // If we don't, drop this txn (restoring the action to the database) drop(txn); return Ok(made_progress); }; assert_eq!( - publisher, session_to_sign_batch, + batch.publisher, session_to_sign_batch, "batch acknowledged on-chain was acknowledged by an unexpected publisher" ); assert_eq!( - in_instructions_hash, expected_in_instructions_hash, - "batch acknowledged on-chain was distinct" + batch.external_network_block_hash, + index::block_id(&txn, block_number), + "batch acknowledged on-chain was for a distinct block" + ); + assert_eq!( + batch.in_instructions_hash, in_instructions_hash, + "batch acknowledged on-chain had distinct InInstructions" ); - SubstrateDb::::set_last_acknowledged_batch(&mut txn, batch_id); + SubstrateDb::::set_last_acknowledged_batch(&mut txn, batch.id); AcknowledgedBatches::send( &mut txn, &external_key_for_session_to_sign_batch.0, - batch_id, + batch.id, ); // Mark we made progress and handle this @@ -143,17 +130,17 @@ impl ContinuallyRan for SubstrateTask { // Return the balances for any InInstructions which failed to execute { - let return_information = batch::take_return_information::(&mut txn, batch_id) + let return_information = batch::take_return_information::(&mut txn, batch.id) .expect("didn't save the return information for Batch we published"); assert_eq!( - in_instruction_results.len(), + batch.in_instruction_results.len(), return_information.len(), "amount of InInstruction succeededs differed from amount of return information saved" ); // We map these into standard Burns for (result, return_information) in - in_instruction_results.into_iter().zip(return_information) + batch.in_instruction_results.into_iter().zip(return_information) { if result == messages::substrate::InInstructionResult::Succeeded { continue; diff --git a/substrate/abi/src/in_instructions.rs b/substrate/abi/src/in_instructions.rs index 89729f7a..ddf8c657 100644 --- a/substrate/abi/src/in_instructions.rs +++ b/substrate/abi/src/in_instructions.rs @@ -20,6 +20,7 @@ pub enum Event { network: NetworkId, publishing_session: Session, id: u32, + external_network_block_hash: [u8; 32], in_instructions_hash: [u8; 32], in_instruction_results: bitvec::vec::BitVec, }, diff --git a/substrate/in-instructions/pallet/src/lib.rs 
b/substrate/in-instructions/pallet/src/lib.rs index 79d4c717..2666f5b2 100644 --- a/substrate/in-instructions/pallet/src/lib.rs +++ b/substrate/in-instructions/pallet/src/lib.rs @@ -63,6 +63,7 @@ pub mod pallet { Batch { network: NetworkId, publishing_session: Session, + external_network_block_hash: [u8; 32], id: u32, in_instructions_hash: [u8; 32], in_instruction_results: BitVec, @@ -356,6 +357,7 @@ pub mod pallet { network: batch.network, publishing_session: if valid_by_prior { prior_session } else { current_session }, id: batch.id, + external_network_block_hash: batch.external_network_block_hash, in_instructions_hash, in_instruction_results, }); diff --git a/substrate/in-instructions/primitives/src/lib.rs b/substrate/in-instructions/primitives/src/lib.rs index ef88061b..a4d97db8 100644 --- a/substrate/in-instructions/primitives/src/lib.rs +++ b/substrate/in-instructions/primitives/src/lib.rs @@ -106,6 +106,7 @@ pub struct InInstructionWithBalance { pub struct Batch { pub network: NetworkId, pub id: u32, + pub external_network_block_hash: [u8; 32], pub instructions: Vec, } From bcd3f14f4f95739523965f3b82b6d9a1d07d0a0d Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 2 Jan 2025 09:11:04 -0500 Subject: [PATCH 233/368] Start work on cleaning up the coordinator's tributary handling --- Cargo.lock | 1 + coordinator/Cargo.toml | 1 + coordinator/src/db.rs | 134 -- coordinator/src/main.rs | 1285 +---------------- coordinator/src/p2p.rs | 1042 ------------- coordinator/src/processors.rs | 46 - coordinator/src/tests/mod.rs | 125 -- coordinator/src/tests/tributary/chain.rs | 243 ---- coordinator/src/tests/tributary/dkg.rs | 282 ---- coordinator/src/tests/tributary/handle_p2p.rs | 74 - coordinator/src/tests/tributary/mod.rs | 245 ---- coordinator/src/tests/tributary/sync.rs | 165 --- coordinator/src/tests/tributary/tx.rs | 62 - coordinator/src/tributary/db.rs | 456 ++++-- coordinator/src/tributary/handle.rs | 554 ------- coordinator/src/tributary/mod.rs | 63 +- coordinator/src/tributary/scan.rs | 203 +++ coordinator/src/tributary/scanner.rs | 685 --------- coordinator/src/tributary/signing_protocol.rs | 361 ----- coordinator/src/tributary/spec.rs | 124 -- coordinator/src/tributary/transaction.rs | 99 +- crypto/dkg/src/evrf/mod.rs | 2 +- processor/messages/src/lib.rs | 2 +- substrate/primitives/src/account.rs | 2 +- 24 files changed, 582 insertions(+), 5674 deletions(-) delete mode 100644 coordinator/src/db.rs delete mode 100644 coordinator/src/p2p.rs delete mode 100644 coordinator/src/processors.rs delete mode 100644 coordinator/src/tests/mod.rs delete mode 100644 coordinator/src/tests/tributary/chain.rs delete mode 100644 coordinator/src/tests/tributary/dkg.rs delete mode 100644 coordinator/src/tests/tributary/handle_p2p.rs delete mode 100644 coordinator/src/tests/tributary/mod.rs delete mode 100644 coordinator/src/tests/tributary/sync.rs delete mode 100644 coordinator/src/tests/tributary/tx.rs delete mode 100644 coordinator/src/tributary/handle.rs create mode 100644 coordinator/src/tributary/scan.rs delete mode 100644 coordinator/src/tributary/scanner.rs delete mode 100644 coordinator/src/tributary/signing_protocol.rs delete mode 100644 coordinator/src/tributary/spec.rs diff --git a/Cargo.lock b/Cargo.lock index 7e26d78a..c2dc5284 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8332,6 +8332,7 @@ dependencies = [ "serai-env", "serai-message-queue", "serai-processor-messages", + "serai-task", "sp-application-crypto", "sp-runtime", "tokio", diff --git a/coordinator/Cargo.toml 
b/coordinator/Cargo.toml index 7d48fcc7..8dcd4cd5 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -37,6 +37,7 @@ scale = { package = "parity-scale-codec", version = "3", default-features = fals zalloc = { path = "../common/zalloc" } serai-db = { path = "../common/db" } serai-env = { path = "../common/env" } +serai-task = { path = "../common/task", version = "0.1" } processor-messages = { package = "serai-processor-messages", path = "../processor/messages" } message-queue = { package = "serai-message-queue", path = "../message-queue" } diff --git a/coordinator/src/db.rs b/coordinator/src/db.rs deleted file mode 100644 index 04ee9d35..00000000 --- a/coordinator/src/db.rs +++ /dev/null @@ -1,134 +0,0 @@ -use blake2::{ - digest::{consts::U32, Digest}, - Blake2b, -}; - -use scale::Encode; -use borsh::{BorshSerialize, BorshDeserialize}; -use serai_client::{ - primitives::NetworkId, - validator_sets::primitives::{Session, ValidatorSet}, - in_instructions::primitives::{Batch, SignedBatch}, -}; - -pub use serai_db::*; - -use ::tributary::ReadWrite; -use crate::tributary::{TributarySpec, Transaction, scanner::RecognizedIdType}; - -create_db!( - MainDb { - HandledMessageDb: (network: NetworkId) -> u64, - ActiveTributaryDb: () -> Vec, - RetiredTributaryDb: (set: ValidatorSet) -> (), - FirstPreprocessDb: ( - network: NetworkId, - id_type: RecognizedIdType, - id: &[u8] - ) -> Vec>, - LastReceivedBatchDb: (network: NetworkId) -> u32, - ExpectedBatchDb: (network: NetworkId, id: u32) -> [u8; 32], - BatchDb: (network: NetworkId, id: u32) -> SignedBatch, - LastVerifiedBatchDb: (network: NetworkId) -> u32, - HandoverBatchDb: (set: ValidatorSet) -> u32, - LookupHandoverBatchDb: (network: NetworkId, batch: u32) -> Session, - QueuedBatchesDb: (set: ValidatorSet) -> Vec - } -); - -impl ActiveTributaryDb { - pub fn active_tributaries(getter: &G) -> (Vec, Vec) { - let bytes = Self::get(getter).unwrap_or_default(); - let mut bytes_ref: &[u8] = bytes.as_ref(); - - let mut tributaries = vec![]; - while !bytes_ref.is_empty() { - tributaries.push(TributarySpec::deserialize_reader(&mut bytes_ref).unwrap()); - } - - (bytes, tributaries) - } - - pub fn add_participating_in_tributary(txn: &mut impl DbTxn, spec: &TributarySpec) { - let (mut existing_bytes, existing) = ActiveTributaryDb::active_tributaries(txn); - for tributary in &existing { - if tributary == spec { - return; - } - } - - spec.serialize(&mut existing_bytes).unwrap(); - ActiveTributaryDb::set(txn, &existing_bytes); - } - - pub fn retire_tributary(txn: &mut impl DbTxn, set: ValidatorSet) { - let mut active = Self::active_tributaries(txn).1; - for i in 0 .. 
active.len() { - if active[i].set() == set { - active.remove(i); - break; - } - } - - let mut bytes = vec![]; - for active in active { - active.serialize(&mut bytes).unwrap(); - } - Self::set(txn, &bytes); - RetiredTributaryDb::set(txn, set, &()); - } -} - -impl FirstPreprocessDb { - pub fn save_first_preprocess( - txn: &mut impl DbTxn, - network: NetworkId, - id_type: RecognizedIdType, - id: &[u8], - preprocess: &Vec>, - ) { - if let Some(existing) = FirstPreprocessDb::get(txn, network, id_type, id) { - assert_eq!(&existing, preprocess, "saved a distinct first preprocess"); - return; - } - FirstPreprocessDb::set(txn, network, id_type, id, preprocess); - } -} - -impl ExpectedBatchDb { - pub fn save_expected_batch(txn: &mut impl DbTxn, batch: &Batch) { - LastReceivedBatchDb::set(txn, batch.network, &batch.id); - Self::set( - txn, - batch.network, - batch.id, - &Blake2b::::digest(batch.instructions.encode()).into(), - ); - } -} - -impl HandoverBatchDb { - pub fn set_handover_batch(txn: &mut impl DbTxn, set: ValidatorSet, batch: u32) { - Self::set(txn, set, &batch); - LookupHandoverBatchDb::set(txn, set.network, batch, &set.session); - } -} -impl QueuedBatchesDb { - pub fn queue(txn: &mut impl DbTxn, set: ValidatorSet, batch: &Transaction) { - let mut batches = Self::get(txn, set).unwrap_or_default(); - batch.write(&mut batches).unwrap(); - Self::set(txn, set, &batches); - } - - pub fn take(txn: &mut impl DbTxn, set: ValidatorSet) -> Vec { - let batches_vec = Self::get(txn, set).unwrap_or_default(); - txn.del(Self::key(set)); - - let mut batches: &[u8] = &batches_vec; - let mut res = vec![]; - while !batches.is_empty() { - res.push(Transaction::read(&mut batches).unwrap()); - } - res - } -} diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 87db0135..c3eb8d80 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -1,1286 +1,5 @@ -use core::ops::Deref; -use std::{ - sync::{OnceLock, Arc}, - time::Duration, - collections::{VecDeque, HashSet, HashMap}, -}; - -use zeroize::{Zeroize, Zeroizing}; -use rand_core::OsRng; - -use ciphersuite::{ - group::{ - ff::{Field, PrimeField}, - GroupEncoding, - }, - Ciphersuite, Ristretto, -}; -use schnorr::SchnorrSignature; - -use serai_db::{DbTxn, Db}; - -use scale::Encode; -use borsh::BorshSerialize; -use serai_client::{ - primitives::NetworkId, - validator_sets::primitives::{Session, ValidatorSet, KeyPair}, - Public, Serai, SeraiInInstructions, -}; - -use message_queue::{Service, client::MessageQueue}; - -use tokio::{ - sync::{Mutex, RwLock, mpsc, broadcast}, - time::sleep, -}; - -use ::tributary::{ProvidedError, TransactionKind, TransactionTrait, Block, Tributary}; - mod tributary; -use crate::tributary::{ - TributarySpec, Label, SignData, Transaction, scanner::RecognizedIdType, PlanIds, -}; -mod db; -use db::*; - -mod p2p; -pub use p2p::*; - -use processor_messages::{ - key_gen, sign, - coordinator::{self, SubstrateSignableId}, - ProcessorMessage, -}; - -pub mod processors; -use processors::Processors; - -mod substrate; -use substrate::CosignTransactions; - -mod cosign_evaluator; -use cosign_evaluator::CosignEvaluator; - -#[cfg(test)] -pub mod tests; - -#[global_allocator] -static ALLOCATOR: zalloc::ZeroizingAlloc = - zalloc::ZeroizingAlloc(std::alloc::System); - -#[derive(Clone)] -pub struct ActiveTributary { - pub spec: TributarySpec, - pub tributary: Arc>, -} - -#[derive(Clone)] -pub enum TributaryEvent { - NewTributary(ActiveTributary), - TributaryRetired(ValidatorSet), -} - -// Creates a new tributary and sends it to 
all listeners. -async fn add_tributary( - db: D, - key: Zeroizing<::F>, - processors: &Pro, - p2p: P, - tributaries: &broadcast::Sender>, - spec: TributarySpec, -) { - if RetiredTributaryDb::get(&db, spec.set()).is_some() { - log::info!("not adding tributary {:?} since it's been retired", spec.set()); - } - - log::info!("adding tributary {:?}", spec.set()); - - let tributary = Tributary::<_, Transaction, _>::new( - // TODO2: Use a db on a distinct volume to protect against DoS attacks - // TODO2: Delete said db once the Tributary is dropped - db, - spec.genesis(), - spec.start_time(), - key.clone(), - spec.validators(), - p2p, - ) - .await - .unwrap(); - - // Trigger a DKG for the newly added Tributary - // If we're rebooting, we'll re-fire this message - // This is safe due to the message-queue deduplicating based off the intent system - let set = spec.set(); - - processors - .send( - set.network, - processor_messages::key_gen::CoordinatorMessage::GenerateKey { - session: set.session, - threshold: spec.t(), - evrf_public_keys: spec.evrf_public_keys(), - // TODO - // params: frost::ThresholdParams::new(spec.t(), spec.n(&[]), our_i.start).unwrap(), - // shares: u16::from(our_i.end) - u16::from(our_i.start), - }, - ) - .await; - - tributaries - .send(TributaryEvent::NewTributary(ActiveTributary { spec, tributary: Arc::new(tributary) })) - .map_err(|_| "all ActiveTributary recipients closed") - .unwrap(); -} - -// TODO: Find a better pattern for this -static HANDOVER_VERIFY_QUEUE_LOCK: OnceLock> = OnceLock::new(); - -#[allow(clippy::too_many_arguments)] -async fn handle_processor_message( - db: &mut D, - key: &Zeroizing<::F>, - serai: &Serai, - p2p: &P, - cosign_channel: &mpsc::UnboundedSender, - tributaries: &HashMap>, - network: NetworkId, - msg: &processors::Message, -) -> bool { - #[allow(clippy::nonminimal_bool)] - if let Some(already_handled) = HandledMessageDb::get(db, msg.network) { - assert!(!(already_handled > msg.id)); - assert!((already_handled == msg.id) || (already_handled == msg.id - 1)); - if already_handled == msg.id { - return true; - } - } else { - assert_eq!(msg.id, 0); - } - - let _hvq_lock = HANDOVER_VERIFY_QUEUE_LOCK.get_or_init(|| Mutex::new(())).lock().await; - let mut txn = db.txn(); - - let mut relevant_tributary = match &msg.msg { - // We'll only receive these if we fired GenerateKey, which we'll only do if we're - in-set, making the Tributary relevant - ProcessorMessage::KeyGen(inner_msg) => match inner_msg { - key_gen::ProcessorMessage::Participation { session, .. } | - key_gen::ProcessorMessage::GeneratedKeyPair { session, .. } | - key_gen::ProcessorMessage::Blame { session, .. } => Some(*session), - }, - ProcessorMessage::Sign(inner_msg) => match inner_msg { - // We'll only receive InvalidParticipant/Preprocess/Share if we're actively signing - sign::ProcessorMessage::InvalidParticipant { id, .. } | - sign::ProcessorMessage::Preprocess { id, .. } | - sign::ProcessorMessage::Share { id, .. } => Some(id.session), - // While the Processor's Scanner will always emit Completed, that's routed through the - Signer and only becomes a ProcessorMessage::Completed if the Signer is present and - confirms it - sign::ProcessorMessage::Completed { session, ..
} => Some(*session), - }, - ProcessorMessage::Coordinator(inner_msg) => match inner_msg { - // This is a special case as it's relevant to *all* Tributaries for this network we're - // signing in - // It doesn't return a Tributary to become `relevant_tributary` though - coordinator::ProcessorMessage::SubstrateBlockAck { block, plans } => { - // Get the sessions for these keys - let sessions = plans - .iter() - .map(|plan| plan.session) - .filter(|session| { - RetiredTributaryDb::get(&txn, ValidatorSet { network, session: *session }).is_none() - }) - .collect::>(); - - // Ensure we have the Tributaries - for session in &sessions { - if !tributaries.contains_key(session) { - return false; - } - } - - for session in sessions { - let tributary = &tributaries[&session]; - let plans = plans - .iter() - .filter_map(|plan| Some(plan.id).filter(|_| plan.session == session)) - .collect::>(); - PlanIds::set(&mut txn, &tributary.spec.genesis(), *block, &plans); - - let tx = Transaction::SubstrateBlock(*block); - log::trace!( - "processor message effected transaction {} {:?}", - hex::encode(tx.hash()), - &tx - ); - log::trace!("providing transaction {}", hex::encode(tx.hash())); - let res = tributary.tributary.provide_transaction(tx).await; - if !(res.is_ok() || (res == Err(ProvidedError::AlreadyProvided))) { - if res == Err(ProvidedError::LocalMismatchesOnChain) { - // Spin, since this is a crit for this Tributary - loop { - log::error!( - "{}. tributary: {}, provided: SubstrateBlock({})", - "tributary added distinct provided to delayed locally provided TX", - hex::encode(tributary.spec.genesis()), - block, - ); - sleep(Duration::from_secs(60)).await; - } - } - panic!("provided an invalid transaction: {res:?}"); - } - } - - None - } - // We'll only fire these if we are the Substrate signer, making the Tributary relevant - coordinator::ProcessorMessage::InvalidParticipant { id, .. } | - coordinator::ProcessorMessage::CosignPreprocess { id, .. } | - coordinator::ProcessorMessage::BatchPreprocess { id, .. } | - coordinator::ProcessorMessage::SlashReportPreprocess { id, .. } | - coordinator::ProcessorMessage::SubstrateShare { id, .. 
} => Some(id.session), - // This causes an action on our P2P net yet not on any Tributary - coordinator::ProcessorMessage::CosignedBlock { block_number, block, signature } => { - let cosigned_block = CosignedBlock { - network, - block_number: *block_number, - block: *block, - signature: { - let mut arr = [0; 64]; - arr.copy_from_slice(signature); - arr - }, - }; - cosign_channel.send(cosigned_block).unwrap(); - let mut buf = vec![]; - cosigned_block.serialize(&mut buf).unwrap(); - P2p::broadcast(p2p, GossipMessageKind::CosignedBlock, buf).await; - None - } - // This causes an action on Substrate yet not on any Tributary - coordinator::ProcessorMessage::SignedSlashReport { session, signature } => { - let set = ValidatorSet { network, session: *session }; - let signature: &[u8] = signature.as_ref(); - let signature = serai_client::Signature(signature.try_into().unwrap()); - - let slashes = crate::tributary::SlashReport::get(&txn, set) - .expect("signed slash report despite not having slash report locally"); - let slashes_pubs = - slashes.iter().map(|(address, points)| (Public(*address), *points)).collect::>(); - - let tx = serai_client::SeraiValidatorSets::report_slashes( - network, - slashes - .into_iter() - .map(|(address, points)| (serai_client::SeraiAddress(address), points)) - .collect::>() - .try_into() - .unwrap(), - signature.clone(), - ); - - loop { - if serai.publish(&tx).await.is_ok() { - break None; - } - - // Check if the slashes shouldn't still be reported. If not, break. - let Ok(serai) = serai.as_of_latest_finalized_block().await else { - tokio::time::sleep(core::time::Duration::from_secs(5)).await; - continue; - }; - let Ok(key) = serai.validator_sets().key_pending_slash_report(network).await else { - tokio::time::sleep(core::time::Duration::from_secs(5)).await; - continue; - }; - let Some(key) = key else { - break None; - }; - // If this is the key for this slash report, then this will verify - use sp_application_crypto::RuntimePublic; - if !key.verify( - &serai_client::validator_sets::primitives::report_slashes_message(&set, &slashes_pubs), - &signature, - ) { - break None; - } - } - } - }, - // These don't return a relevant Tributary as there's no Tributary with action expected - ProcessorMessage::Substrate(inner_msg) => match inner_msg { - processor_messages::substrate::ProcessorMessage::Batch { batch } => { - assert_eq!( - batch.network, msg.network, - "processor sent us a batch for a different network than it was for", - ); - ExpectedBatchDb::save_expected_batch(&mut txn, batch); - None - } - // If this is a new Batch, immediately publish it (if we can) - processor_messages::substrate::ProcessorMessage::SignedBatch { batch } => { - assert_eq!( - batch.batch.network, msg.network, - "processor sent us a signed batch for a different network than it was for", - ); - - log::debug!("received batch {:?} {}", batch.batch.network, batch.batch.id); - - // Save this batch to the disk - BatchDb::set(&mut txn, batch.batch.network, batch.batch.id, &batch.clone()); - - // Get the next-to-execute batch ID - let Ok(mut next) = substrate::expected_next_batch(serai, network).await else { - return false; - }; - - // Since we have a new batch, publish all batches yet to be published to Serai - // This handles the edge-case where batch n+1 is signed before batch n is - let mut batches = VecDeque::new(); - while let Some(batch) = BatchDb::get(&txn, network, next) { - batches.push_back(batch); - next += 1; - } - - while let Some(batch) = batches.pop_front() { - // If this Batch should no 
longer be published, continue - let Ok(expected_next_batch) = substrate::expected_next_batch(serai, network).await else { - return false; - }; - if expected_next_batch > batch.batch.id { - continue; - } - - let tx = SeraiInInstructions::execute_batch(batch.clone()); - log::debug!("attempting to publish batch {:?} {}", batch.batch.network, batch.batch.id,); - // This publish may fail if this transactions already exists in the mempool, which is - // possible, or if this batch was already executed on-chain - // Either case will have eventual resolution and be handled by the above check on if - // this batch should execute - let res = serai.publish(&tx).await; - if res.is_ok() { - log::info!( - "published batch {network:?} {} (block {})", - batch.batch.id, - hex::encode(batch.batch.block), - ); - } else { - log::debug!( - "couldn't publish batch {:?} {}: {:?}", - batch.batch.network, - batch.batch.id, - res, - ); - // If we failed to publish it, restore it - batches.push_front(batch); - // Sleep for a few seconds before retrying to prevent hammering the node - sleep(Duration::from_secs(5)).await; - } - } - - None - } - }, - }; - - // If we have a relevant Tributary, check it's actually still relevant and has yet to be retired - if let Some(relevant_tributary_value) = relevant_tributary { - if RetiredTributaryDb::get( - &txn, - ValidatorSet { network: msg.network, session: relevant_tributary_value }, - ) - .is_some() - { - relevant_tributary = None; - } - } - - // If there's a relevant Tributary... - if let Some(relevant_tributary) = relevant_tributary { - // Make sure we have it - // Per the reasoning above, we only return a Tributary as relevant if we're a participant - // Accordingly, we do *need* to have this Tributary now to handle it UNLESS the Tributary has - // already completed and this is simply an old message (which we prior checked) - let Some(ActiveTributary { spec, tributary }) = tributaries.get(&relevant_tributary) else { - // Since we don't, sleep for a fraction of a second and return false, signaling we didn't - // handle this message - // At the start of the loop which calls this function, we'll check for new tributaries, - // making this eventually resolve - sleep(Duration::from_millis(100)).await; - return false; - }; - - let genesis = spec.genesis(); - let pub_key = Ristretto::generator() * key.deref(); - - let txs = match msg.msg.clone() { - ProcessorMessage::KeyGen(inner_msg) => match inner_msg { - key_gen::ProcessorMessage::Participation { session, participation } => { - assert_eq!(session, spec.set().session); - vec![Transaction::DkgParticipation { participation, signed: Transaction::empty_signed() }] - } - key_gen::ProcessorMessage::GeneratedKeyPair { session, substrate_key, network_key } => { - assert_eq!(session, spec.set().session); - crate::tributary::generated_key_pair::( - &mut txn, - genesis, - &KeyPair(Public(substrate_key), network_key.try_into().unwrap()), - ); - - // Create a MuSig-based machine to inform Substrate of this key generation - let confirmation_nonces = - crate::tributary::dkg_confirmation_nonces(key, spec, &mut txn, 0); - - vec![Transaction::DkgConfirmationNonces { - attempt: 0, - confirmation_nonces, - signed: Transaction::empty_signed(), - }] - } - key_gen::ProcessorMessage::Blame { session, participant } => { - assert_eq!(session, spec.set().session); - let participant = spec.reverse_lookup_i(participant).unwrap(); - vec![Transaction::RemoveParticipant { participant, signed: Transaction::empty_signed() }] - } - }, - 
ProcessorMessage::Sign(msg) => match msg { - sign::ProcessorMessage::InvalidParticipant { .. } => { - // TODO: Locally increase slash points to maximum (distinct from an explicitly fatal - // slash) and censor transactions (yet don't explicitly ban) - vec![] - } - sign::ProcessorMessage::Preprocess { id, preprocesses } => { - if id.attempt == 0 { - FirstPreprocessDb::save_first_preprocess( - &mut txn, - network, - RecognizedIdType::Plan, - &id.id, - &preprocesses, - ); - - vec![] - } else { - vec![Transaction::Sign(SignData { - plan: id.id, - attempt: id.attempt, - label: Label::Preprocess, - data: preprocesses, - signed: Transaction::empty_signed(), - })] - } - } - sign::ProcessorMessage::Share { id, shares } => { - vec![Transaction::Sign(SignData { - plan: id.id, - attempt: id.attempt, - label: Label::Share, - data: shares, - signed: Transaction::empty_signed(), - })] - } - sign::ProcessorMessage::Completed { session: _, id, tx } => { - let r = Zeroizing::new(::F::random(&mut OsRng)); - #[allow(non_snake_case)] - let R = ::generator() * r.deref(); - let mut tx = Transaction::SignCompleted { - plan: id, - tx_hash: tx, - first_signer: pub_key, - signature: SchnorrSignature { R, s: ::F::ZERO }, - }; - let signed = SchnorrSignature::sign(key, r, tx.sign_completed_challenge()); - match &mut tx { - Transaction::SignCompleted { signature, .. } => { - *signature = signed; - } - _ => unreachable!(), - } - vec![tx] - } - }, - ProcessorMessage::Coordinator(inner_msg) => match inner_msg { - coordinator::ProcessorMessage::SubstrateBlockAck { .. } => unreachable!(), - coordinator::ProcessorMessage::InvalidParticipant { .. } => { - // TODO: Locally increase slash points to maximum (distinct from an explicitly fatal - // slash) and censor transactions (yet don't explicitly ban) - vec![] - } - coordinator::ProcessorMessage::CosignPreprocess { id, preprocesses } | - coordinator::ProcessorMessage::SlashReportPreprocess { id, preprocesses } => { - vec![Transaction::SubstrateSign(SignData { - plan: id.id, - attempt: id.attempt, - label: Label::Preprocess, - data: preprocesses.into_iter().map(Into::into).collect(), - signed: Transaction::empty_signed(), - })] - } - coordinator::ProcessorMessage::BatchPreprocess { id, block, preprocesses } => { - log::info!( - "informed of batch (sign ID {}, attempt {}) for block {}", - hex::encode(id.id.encode()), - id.attempt, - hex::encode(block), - ); - - // If this is the first attempt instance, wait until we synchronize around the batch - // first - if id.attempt == 0 { - FirstPreprocessDb::save_first_preprocess( - &mut txn, - spec.set().network, - RecognizedIdType::Batch, - &{ - let SubstrateSignableId::Batch(id) = id.id else { - panic!("BatchPreprocess SubstrateSignableId wasn't Batch") - }; - id.to_le_bytes() - }, - &preprocesses.into_iter().map(Into::into).collect::>(), - ); - - let intended = Transaction::Batch { - block: block.0, - batch: match id.id { - SubstrateSignableId::Batch(id) => id, - _ => panic!("BatchPreprocess did not contain Batch ID"), - }, - }; - - // If this is the new key's first Batch, only create this TX once we verify all - // all prior published `Batch`s - // TODO: This assumes BatchPreprocess is immediately after Batch - // Ensure that assumption - let last_received = LastReceivedBatchDb::get(&txn, msg.network).unwrap(); - let handover_batch = HandoverBatchDb::get(&txn, spec.set()); - let mut queue = false; - if let Some(handover_batch) = handover_batch { - // There is a race condition here. 
We may verify all `Batch`s from the prior set, - // start signing the handover `Batch` `n`, start signing `n+1`, have `n+1` signed - // before `n` (or at the same time), yet then the prior set forges a malicious - // `Batch` `n`. - // - // The malicious `Batch` `n` would be publishable to Serai, as Serai can't - // distinguish what's intended to be a handover `Batch`, yet then anyone could - // publish the new set's `n+1`, causing their acceptance of the handover. - // - // To fix this, if this is after the handover `Batch` and we have yet to verify - // publication of the handover `Batch`, don't yet yield the provided. - if last_received > handover_batch { - if let Some(last_verified) = LastVerifiedBatchDb::get(&txn, msg.network) { - if last_verified < handover_batch { - queue = true; - } - } else { - queue = true; - } - } - } else { - HandoverBatchDb::set_handover_batch(&mut txn, spec.set(), last_received); - // If this isn't the first batch, meaning we do have to verify all prior batches, and - // the prior Batch hasn't been verified yet... - if (last_received != 0) && - LastVerifiedBatchDb::get(&txn, msg.network) - .map_or(true, |last_verified| last_verified < (last_received - 1)) - { - // Withhold this TX until we verify all prior `Batch`s - queue = true; - } - } - - if queue { - QueuedBatchesDb::queue(&mut txn, spec.set(), &intended); - vec![] - } else { - // Because this is post-verification of the handover batch, take all queued `Batch`s - // now to ensure we don't provide this before an already queued Batch - // This *may* be an unreachable case due to how last_verified_batch is set, yet it - // doesn't hurt to have as a defensive pattern - let mut res = QueuedBatchesDb::take(&mut txn, spec.set()); - res.push(intended); - res - } - } else { - vec![Transaction::SubstrateSign(SignData { - plan: id.id, - attempt: id.attempt, - label: Label::Preprocess, - data: preprocesses.into_iter().map(Into::into).collect(), - signed: Transaction::empty_signed(), - })] - } - } - coordinator::ProcessorMessage::SubstrateShare { id, shares } => { - vec![Transaction::SubstrateSign(SignData { - plan: id.id, - attempt: id.attempt, - label: Label::Share, - data: shares.into_iter().map(|share| share.to_vec()).collect(), - signed: Transaction::empty_signed(), - })] - } - #[allow(clippy::match_same_arms)] // Allowed to preserve layout - coordinator::ProcessorMessage::CosignedBlock { .. } => unreachable!(), - #[allow(clippy::match_same_arms)] - coordinator::ProcessorMessage::SignedSlashReport { .. } => unreachable!(), - }, - ProcessorMessage::Substrate(inner_msg) => match inner_msg { - processor_messages::substrate::ProcessorMessage::Batch { .. } | - processor_messages::substrate::ProcessorMessage::SignedBatch { .. } => unreachable!(), - }, - }; - - // If this created transactions, publish them - for mut tx in txs { - log::trace!("processor message effected transaction {} {:?}", hex::encode(tx.hash()), &tx); - - match tx.kind() { - TransactionKind::Provided(_) => { - log::trace!("providing transaction {}", hex::encode(tx.hash())); - let res = tributary.provide_transaction(tx.clone()).await; - if !(res.is_ok() || (res == Err(ProvidedError::AlreadyProvided))) { - if res == Err(ProvidedError::LocalMismatchesOnChain) { - // Spin, since this is a crit for this Tributary - loop { - log::error!( - "{}. 
tributary: {}, provided: {:?}", - "tributary added distinct provided to delayed locally provided TX", - hex::encode(spec.genesis()), - &tx, - ); - sleep(Duration::from_secs(60)).await; - } - } - panic!("provided an invalid transaction: {res:?}"); - } - } - TransactionKind::Unsigned => { - log::trace!("publishing unsigned transaction {}", hex::encode(tx.hash())); - match tributary.add_transaction(tx.clone()).await { - Ok(_) => {} - Err(e) => panic!("created an invalid unsigned transaction: {e:?}"), - } - } - TransactionKind::Signed(_, _) => { - tx.sign(&mut OsRng, genesis, key); - tributary::publish_signed_transaction(&mut txn, tributary, tx).await; - } - } - } - } - - HandledMessageDb::set(&mut txn, msg.network, &msg.id); - txn.commit(); - - true -} - -#[allow(clippy::too_many_arguments)] -async fn handle_processor_messages( - mut db: D, - key: Zeroizing<::F>, - serai: Arc, - processors: Pro, - p2p: P, - cosign_channel: mpsc::UnboundedSender, - network: NetworkId, - mut tributary_event: mpsc::UnboundedReceiver>, -) { - let mut tributaries = HashMap::new(); - loop { - match tributary_event.try_recv() { - Ok(event) => match event { - TributaryEvent::NewTributary(tributary) => { - let set = tributary.spec.set(); - assert_eq!(set.network, network); - tributaries.insert(set.session, tributary); - } - TributaryEvent::TributaryRetired(set) => { - tributaries.remove(&set.session); - } - }, - Err(mpsc::error::TryRecvError::Empty) => {} - Err(mpsc::error::TryRecvError::Disconnected) => { - panic!("handle_processor_messages tributary_event sender closed") - } - } - - // TODO: Check this ID is sane (last handled ID or expected next ID) - let Ok(msg) = tokio::time::timeout(Duration::from_secs(1), processors.recv(network)).await - else { - continue; - }; - log::trace!("entering handle_processor_message for {:?}", network); - if handle_processor_message( - &mut db, - &key, - &serai, - &p2p, - &cosign_channel, - &tributaries, - network, - &msg, - ) - .await - { - processors.ack(msg).await; - } - log::trace!("exited handle_processor_message for {:?}", network); - } -} - -#[allow(clippy::too_many_arguments)] -async fn handle_cosigns_and_batch_publication( - mut db: D, - network: NetworkId, - mut tributary_event: mpsc::UnboundedReceiver>, -) { - let mut tributaries = HashMap::new(); - 'outer: loop { - // TODO: Create a better async flow for this - tokio::time::sleep(core::time::Duration::from_millis(100)).await; - - match tributary_event.try_recv() { - Ok(event) => match event { - TributaryEvent::NewTributary(tributary) => { - let set = tributary.spec.set(); - assert_eq!(set.network, network); - tributaries.insert(set.session, tributary); - } - TributaryEvent::TributaryRetired(set) => { - tributaries.remove(&set.session); - } - }, - Err(mpsc::error::TryRecvError::Empty) => {} - Err(mpsc::error::TryRecvError::Disconnected) => { - panic!("handle_processor_messages tributary_event sender closed") - } - } - - // Handle pending cosigns - { - let mut txn = db.txn(); - while let Some((session, block, hash)) = CosignTransactions::try_recv(&mut txn, network) { - let Some(ActiveTributary { spec, tributary }) = tributaries.get(&session) else { - log::warn!("didn't yet have tributary we're supposed to cosign with"); - break; - }; - log::info!( - "{network:?} {session:?} cosigning block #{block} (hash {}...)", - hex::encode(&hash[.. 
8]) - ); - let tx = Transaction::CosignSubstrateBlock(hash); - let res = tributary.provide_transaction(tx.clone()).await; - if !(res.is_ok() || (res == Err(ProvidedError::AlreadyProvided))) { - if res == Err(ProvidedError::LocalMismatchesOnChain) { - // Spin, since this is a crit for this Tributary - loop { - log::error!( - "{}. tributary: {}, provided: {:?}", - "tributary added distinct CosignSubstrateBlock", - hex::encode(spec.genesis()), - &tx, - ); - sleep(Duration::from_secs(60)).await; - } - } - panic!("provided an invalid CosignSubstrateBlock: {res:?}"); - } - } - txn.commit(); - } - - // Verify any published `Batch`s - { - let _hvq_lock = HANDOVER_VERIFY_QUEUE_LOCK.get_or_init(|| Mutex::new(())).lock().await; - let mut txn = db.txn(); - let mut to_publish = vec![]; - let start_id = - LastVerifiedBatchDb::get(&txn, network).map_or(0, |already_verified| already_verified + 1); - if let Some(last_id) = - substrate::verify_published_batches::(&mut txn, network, u32::MAX).await - { - // Check if any of these `Batch`s were a handover `Batch` or the `Batch` before a handover - // `Batch` - // If so, we need to publish queued provided `Batch` transactions - for batch in start_id ..= last_id { - let is_pre_handover = LookupHandoverBatchDb::get(&txn, network, batch + 1); - if let Some(session) = is_pre_handover { - let set = ValidatorSet { network, session }; - let mut queued = QueuedBatchesDb::take(&mut txn, set); - // is_handover_batch is only set for handover `Batch`s we're participating in, making - this safe - if queued.is_empty() { - panic!("knew the next Batch was a handover yet didn't queue it"); - } - - // Only publish the handover Batch - to_publish.push((set.session, queued.remove(0))); - // Re-queue the remaining batches - for remaining in queued { - QueuedBatchesDb::queue(&mut txn, set, &remaining); - } - } - - let is_handover = LookupHandoverBatchDb::get(&txn, network, batch); - if let Some(session) = is_handover { - for queued in QueuedBatchesDb::take(&mut txn, ValidatorSet { network, session }) { - to_publish.push((session, queued)); - } - } - } - } - - for (session, tx) in to_publish { - let Some(ActiveTributary { spec, tributary }) = tributaries.get(&session) else { - log::warn!("didn't yet have tributary we're supposed to provide a queued Batch for"); - // Safe since this will drop the txn updating the most recently queued batch - continue 'outer; - }; - log::debug!("providing Batch transaction {:?}", &tx); - let res = tributary.provide_transaction(tx.clone()).await; - if !(res.is_ok() || (res == Err(ProvidedError::AlreadyProvided))) { - if res == Err(ProvidedError::LocalMismatchesOnChain) { - // Spin, since this is a crit for this Tributary - loop { - log::error!( - "{}.
tributary: {}, provided: {:?}", - "tributary added distinct Batch", - hex::encode(spec.genesis()), - &tx, - ); - sleep(Duration::from_secs(60)).await; - } - } - panic!("provided an invalid Batch: {res:?}"); - } - } - - txn.commit(); - } - } -} - -pub async fn handle_processors( - db: D, - key: Zeroizing<::F>, - serai: Arc, - processors: Pro, - p2p: P, - cosign_channel: mpsc::UnboundedSender, - mut tributary_event: broadcast::Receiver>, -) { - let mut channels = HashMap::new(); - for network in serai_client::primitives::NETWORKS { - if network == NetworkId::Serai { - continue; - } - let (processor_send, processor_recv) = mpsc::unbounded_channel(); - tokio::spawn(handle_processor_messages( - db.clone(), - key.clone(), - serai.clone(), - processors.clone(), - p2p.clone(), - cosign_channel.clone(), - network, - processor_recv, - )); - let (cosign_send, cosign_recv) = mpsc::unbounded_channel(); - tokio::spawn(handle_cosigns_and_batch_publication(db.clone(), network, cosign_recv)); - channels.insert(network, (processor_send, cosign_send)); - } - - // Listen to new tributary events - loop { - match tributary_event.recv().await.unwrap() { - TributaryEvent::NewTributary(tributary) => { - let (c1, c2) = &channels[&tributary.spec.set().network]; - c1.send(TributaryEvent::NewTributary(tributary.clone())).unwrap(); - c2.send(TributaryEvent::NewTributary(tributary)).unwrap(); - } - TributaryEvent::TributaryRetired(set) => { - let (c1, c2) = &channels[&set.network]; - c1.send(TributaryEvent::TributaryRetired(set)).unwrap(); - c2.send(TributaryEvent::TributaryRetired(set)).unwrap(); - } - }; - } -} - -pub async fn run( - raw_db: D, - key: Zeroizing<::F>, - p2p: P, - processors: Pro, - serai: Arc, -) { - let (new_tributary_spec_send, mut new_tributary_spec_recv) = mpsc::unbounded_channel(); - // Reload active tributaries from the database - for spec in ActiveTributaryDb::active_tributaries(&raw_db).1 { - new_tributary_spec_send.send(spec).unwrap(); - } - - let (perform_slash_report_send, mut perform_slash_report_recv) = mpsc::unbounded_channel(); - - let (tributary_retired_send, mut tributary_retired_recv) = mpsc::unbounded_channel(); - - // Handle new Substrate blocks - tokio::spawn(crate::substrate::scan_task( - raw_db.clone(), - key.clone(), - processors.clone(), - serai.clone(), - new_tributary_spec_send, - perform_slash_report_send, - tributary_retired_send, - )); - - // Handle the Tributaries - - // This should be large enough for an entire rotation of all tributaries - // If it's too small, the coordinator fail to boot, which is a decent sanity check - let (tributary_event, mut tributary_event_listener_1) = broadcast::channel(32); - let tributary_event_listener_2 = tributary_event.subscribe(); - let tributary_event_listener_3 = tributary_event.subscribe(); - let tributary_event_listener_4 = tributary_event.subscribe(); - let tributary_event_listener_5 = tributary_event.subscribe(); - - // Emit TributaryEvent::TributaryRetired - tokio::spawn({ - let tributary_event = tributary_event.clone(); - async move { - loop { - let retired = tributary_retired_recv.recv().await.unwrap(); - tributary_event.send(TributaryEvent::TributaryRetired(retired)).map_err(|_| ()).unwrap(); - } - } - }); - - // Spawn a task to further add Tributaries as needed - tokio::spawn({ - let raw_db = raw_db.clone(); - let key = key.clone(); - let processors = processors.clone(); - let p2p = p2p.clone(); - async move { - loop { - let spec = new_tributary_spec_recv.recv().await.unwrap(); - // Uses an inner task as Tributary::new may 
take several seconds - tokio::spawn({ - let raw_db = raw_db.clone(); - let key = key.clone(); - let processors = processors.clone(); - let p2p = p2p.clone(); - let tributary_event = tributary_event.clone(); - async move { - add_tributary(raw_db, key, &processors, p2p, &tributary_event, spec).await; - } - }); - } - } - }); - - // When we reach synchrony on an event requiring signing, send our preprocess for it - // TODO: Properly place this into the Tributary scanner, as it's a mess out here - let recognized_id = { - let raw_db = raw_db.clone(); - let key = key.clone(); - - let specs = Arc::new(RwLock::new(HashMap::new())); - let tributaries = Arc::new(RwLock::new(HashMap::new())); - // Spawn a task to maintain a local view of the tributaries for whenever recognized_id is - // called - tokio::spawn({ - let specs = specs.clone(); - let tributaries = tributaries.clone(); - let mut set_to_genesis = HashMap::new(); - async move { - loop { - match tributary_event_listener_1.recv().await { - Ok(TributaryEvent::NewTributary(tributary)) => { - set_to_genesis.insert(tributary.spec.set(), tributary.spec.genesis()); - tributaries.write().await.insert(tributary.spec.genesis(), tributary.tributary); - specs.write().await.insert(tributary.spec.set(), tributary.spec); - } - Ok(TributaryEvent::TributaryRetired(set)) => { - if let Some(genesis) = set_to_genesis.remove(&set) { - specs.write().await.remove(&set); - tributaries.write().await.remove(&genesis); - } - } - Err(broadcast::error::RecvError::Lagged(_)) => { - panic!("recognized_id lagged to handle tributary_event") - } - Err(broadcast::error::RecvError::Closed) => panic!("tributary_event sender closed"), - } - } - } - }); - - // Also spawn a task to handle slash reports, as this needs such a view of tributaries - tokio::spawn({ - let mut raw_db = raw_db.clone(); - let key = key.clone(); - let tributaries = tributaries.clone(); - async move { - 'task_loop: loop { - match perform_slash_report_recv.recv().await { - Some(set) => { - let (genesis, validators) = loop { - let specs = specs.read().await; - let Some(spec) = specs.get(&set) else { - // If we don't have this Tributary because it's retired, break and move on - if RetiredTributaryDb::get(&raw_db, set).is_some() { - continue 'task_loop; - } - - // This may happen if the task above is simply slow - log::warn!("tributary we don't have yet is supposed to perform a slash report"); - continue; - }; - break (spec.genesis(), spec.validators()); - }; - - let mut slashes = vec![]; - for (validator, _) in validators { - if validator == (::generator() * key.deref()) { - continue; - } - let validator = validator.to_bytes(); - - let fatally = tributary::FatallySlashed::get(&raw_db, genesis, validator).is_some(); - // TODO: Properly type this - let points = if fatally { - u32::MAX - } else { - tributary::SlashPoints::get(&raw_db, genesis, validator).unwrap_or(0) - }; - slashes.push(points); - } - - let mut tx = Transaction::SlashReport(slashes, Transaction::empty_signed()); - tx.sign(&mut OsRng, genesis, &key); - - let mut first = true; - loop { - if !first { - sleep(Duration::from_millis(100)).await; - } - first = false; - - let tributaries = tributaries.read().await; - let Some(tributary) = tributaries.get(&genesis) else { - // If we don't have this Tributary because it's retired, break and move on - if RetiredTributaryDb::get(&raw_db, set).is_some() { - break; - } - - // This may happen if the task above is simply slow - log::warn!("tributary we don't have yet is supposed to perform a slash report"); - 
continue; - }; - // This is safe to perform multiple times and solely needs atomicity with regards - // to itself - // TODO: Should this not take a txn accordingly? It's best practice to take a txn, - // yet taking a txn fails to declare its achieved independence - let mut txn = raw_db.txn(); - tributary::publish_signed_transaction(&mut txn, tributary, tx).await; - txn.commit(); - break; - } - } - None => panic!("perform slash report sender closed"), - } - } - } - }); - - move |set: ValidatorSet, genesis, id_type, id: Vec| { - log::debug!("recognized ID {:?} {}", id_type, hex::encode(&id)); - let mut raw_db = raw_db.clone(); - let key = key.clone(); - let tributaries = tributaries.clone(); - async move { - // The transactions for these are fired before the preprocesses are actually - // received/saved, creating a race between Tributary ack and the availability of all - // Preprocesses - // This waits until the necessary preprocess is available - let get_preprocess = |raw_db, id_type, id| async move { - loop { - let Some(preprocess) = FirstPreprocessDb::get(raw_db, set.network, id_type, id) else { - log::warn!("waiting for preprocess for recognized ID"); - sleep(Duration::from_millis(100)).await; - continue; - }; - return preprocess; - } - }; - - let mut tx = match id_type { - RecognizedIdType::Batch => Transaction::SubstrateSign(SignData { - data: get_preprocess(&raw_db, id_type, &id).await, - plan: SubstrateSignableId::Batch(u32::from_le_bytes(id.try_into().unwrap())), - label: Label::Preprocess, - attempt: 0, - signed: Transaction::empty_signed(), - }), - - RecognizedIdType::Plan => Transaction::Sign(SignData { - data: get_preprocess(&raw_db, id_type, &id).await, - plan: id.try_into().unwrap(), - label: Label::Preprocess, - attempt: 0, - signed: Transaction::empty_signed(), - }), - }; - - tx.sign(&mut OsRng, genesis, &key); - - let mut first = true; - loop { - if !first { - sleep(Duration::from_millis(100)).await; - } - first = false; - - let tributaries = tributaries.read().await; - let Some(tributary) = tributaries.get(&genesis) else { - // If we don't have this Tributary because it's retired, break and move on - if RetiredTributaryDb::get(&raw_db, set).is_some() { - break; - } - - // This may happen if the task above is simply slow - log::warn!("tributary we don't have yet came to consensus on a Batch"); - continue; - }; - // This is safe to perform multiple times and solely needs atomicity with regards to - // itself - // TODO: Should this not take a txn accordingly?
It's best practice to take a txn, yet - // taking a txn fails to declare its achieved independence - let mut txn = raw_db.txn(); - tributary::publish_signed_transaction(&mut txn, tributary, tx).await; - txn.commit(); - break; - } - } - } - }; - - // Handle new blocks for each Tributary - { - let raw_db = raw_db.clone(); - tokio::spawn(tributary::scanner::scan_tributaries_task( - raw_db, - key.clone(), - recognized_id, - processors.clone(), - serai.clone(), - tributary_event_listener_2, - )); - } - - // Spawn the heartbeat task, which will trigger syncing if there hasn't been a Tributary block - // in a while (presumably because we're behind) - tokio::spawn(p2p::heartbeat_tributaries_task(p2p.clone(), tributary_event_listener_3)); - - // Create the Cosign evaluator - let cosign_channel = CosignEvaluator::new(raw_db.clone(), p2p.clone(), serai.clone()); - - // Handle P2P messages - tokio::spawn(p2p::handle_p2p_task( - p2p.clone(), - cosign_channel.clone(), - tributary_event_listener_4, - )); - - // Handle all messages from processors - handle_processors( - raw_db, - key, - serai, - processors, - p2p, - cosign_channel, - tributary_event_listener_5, - ) - .await; -} - -#[tokio::main] -async fn main() { - // Override the panic handler with one which will exit the process if any tokio task panics - { - let existing = std::panic::take_hook(); - std::panic::set_hook(Box::new(move |panic| { - existing(panic); - const MSG: &str = "exiting the process due to a task panicking"; - println!("{MSG}"); - log::error!("{MSG}"); - std::process::exit(1); - })); - } - - if std::env::var("RUST_LOG").is_err() { - std::env::set_var("RUST_LOG", serai_env::var("RUST_LOG").unwrap_or_else(|| "info".to_string())); - } - env_logger::init(); - - log::info!("starting coordinator service..."); - - #[allow(unused_variables, unreachable_code)] - let db = { - #[cfg(all(feature = "parity-db", feature = "rocksdb"))] - panic!("built with parity-db and rocksdb"); - #[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] - let db = - serai_db::new_parity_db(&serai_env::var("DB_PATH").expect("path to DB wasn't specified")); - #[cfg(feature = "rocksdb")] - let db = - serai_db::new_rocksdb(&serai_env::var("DB_PATH").expect("path to DB wasn't specified")); - db - }; - - let key = { - let mut key_hex = serai_env::var("SERAI_KEY").expect("Serai key wasn't provided"); - let mut key_vec = hex::decode(&key_hex).map_err(|_| ()).expect("Serai key wasn't hex-encoded"); - key_hex.zeroize(); - if key_vec.len() != 32 { - key_vec.zeroize(); - panic!("Serai key had an invalid length"); - } - let mut key_bytes = [0; 32]; - key_bytes.copy_from_slice(&key_vec); - key_vec.zeroize(); - let key = Zeroizing::new(::F::from_repr(key_bytes).unwrap()); - key_bytes.zeroize(); - key - }; - - let processors = Arc::new(MessageQueue::from_env(Service::Coordinator)); - - let serai = (async { - loop { - let Ok(serai) = Serai::new(format!( - "http://{}:9944", - serai_env::var("SERAI_HOSTNAME").expect("Serai hostname wasn't provided") - )) - .await - else { - log::error!("couldn't connect to the Serai node"); - sleep(Duration::from_secs(5)).await; - continue; - }; - log::info!("made initial connection to Serai node"); - return Arc::new(serai); - } - }) - .await; - let p2p = LibP2p::new(serai.clone()); - run(db, key, p2p, processors, serai).await +fn main() { + todo!("TODO") } diff --git a/coordinator/src/p2p.rs b/coordinator/src/p2p.rs deleted file mode 100644 index cecb3517..00000000 --- a/coordinator/src/p2p.rs +++ /dev/null @@ -1,1042 +0,0 @@ -use
core::{time::Duration, fmt}; -use std::{ - sync::Arc, - io::{self, Read}, - collections::{HashSet, HashMap}, - time::{SystemTime, Instant}, -}; - -use async_trait::async_trait; -use rand_core::{RngCore, OsRng}; - -use scale::{Decode, Encode}; -use borsh::{BorshSerialize, BorshDeserialize}; -use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet, Serai}; - -use serai_db::Db; - -use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt, StreamExt}; -use tokio::{ - sync::{Mutex, RwLock, mpsc, broadcast}, - time::sleep, -}; - -use libp2p::{ - core::multiaddr::{Protocol, Multiaddr}, - identity::Keypair, - PeerId, - tcp::Config as TcpConfig, - noise, yamux, - request_response::{ - Codec as RrCodecTrait, Message as RrMessage, Event as RrEvent, Config as RrConfig, - Behaviour as RrBehavior, ProtocolSupport, - }, - gossipsub::{ - IdentTopic, FastMessageId, MessageId, MessageAuthenticity, ValidationMode, ConfigBuilder, - IdentityTransform, AllowAllSubscriptionFilter, Event as GsEvent, PublishError, - Behaviour as GsBehavior, - }, - swarm::{NetworkBehaviour, SwarmEvent}, - SwarmBuilder, -}; - -pub(crate) use tributary::{ReadWrite, P2p as TributaryP2p}; - -use crate::{Transaction, Block, Tributary, ActiveTributary, TributaryEvent}; - -// Block size limit + 1 KB of space for signatures/metadata -const MAX_LIBP2P_GOSSIP_MESSAGE_SIZE: usize = tributary::BLOCK_SIZE_LIMIT + 1024; - -const MAX_LIBP2P_REQRES_MESSAGE_SIZE: usize = - (tributary::BLOCK_SIZE_LIMIT * BLOCKS_PER_BATCH) + 1024; - -const MAX_LIBP2P_MESSAGE_SIZE: usize = { - // Manual `max` since `max` isn't a const fn - if MAX_LIBP2P_GOSSIP_MESSAGE_SIZE > MAX_LIBP2P_REQRES_MESSAGE_SIZE { - MAX_LIBP2P_GOSSIP_MESSAGE_SIZE - } else { - MAX_LIBP2P_REQRES_MESSAGE_SIZE - } -}; - -const LIBP2P_TOPIC: &str = "serai-coordinator"; - -// Amount of blocks in a minute -const BLOCKS_PER_MINUTE: usize = (60 / (tributary::tendermint::TARGET_BLOCK_TIME / 1000)) as usize; - -// Maximum amount of blocks to send in a batch -const BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1; - -#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)] -pub struct CosignedBlock { - pub network: NetworkId, - pub block_number: u64, - pub block: [u8; 32], - pub signature: [u8; 64], -} - -#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] -pub enum ReqResMessageKind { - KeepAlive, - Heartbeat([u8; 32]), - Block([u8; 32]), -} - -impl ReqResMessageKind { - pub fn read(reader: &mut R) -> Option { - let mut kind = [0; 1]; - reader.read_exact(&mut kind).ok()?; - match kind[0] { - 0 => Some(ReqResMessageKind::KeepAlive), - 1 => Some({ - let mut genesis = [0; 32]; - reader.read_exact(&mut genesis).ok()?; - ReqResMessageKind::Heartbeat(genesis) - }), - 2 => Some({ - let mut genesis = [0; 32]; - reader.read_exact(&mut genesis).ok()?; - ReqResMessageKind::Block(genesis) - }), - _ => None, - } - } - - pub fn serialize(&self) -> Vec { - match self { - ReqResMessageKind::KeepAlive => vec![0], - ReqResMessageKind::Heartbeat(genesis) => { - let mut res = vec![1]; - res.extend(genesis); - res - } - ReqResMessageKind::Block(genesis) => { - let mut res = vec![2]; - res.extend(genesis); - res - } - } - } -} - -#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] -pub enum GossipMessageKind { - Tributary([u8; 32]), - CosignedBlock, -} - -impl GossipMessageKind { - pub fn read(reader: &mut R) -> Option { - let mut kind = [0; 1]; - reader.read_exact(&mut kind).ok()?; - match kind[0] { - 0 => Some({ - let mut genesis = [0; 32]; - 
reader.read_exact(&mut genesis).ok()?; - GossipMessageKind::Tributary(genesis) - }), - 1 => Some(GossipMessageKind::CosignedBlock), - _ => None, - } - } - - pub fn serialize(&self) -> Vec { - match self { - GossipMessageKind::Tributary(genesis) => { - let mut res = vec![0]; - res.extend(genesis); - res - } - GossipMessageKind::CosignedBlock => { - vec![1] - } - } - } -} - -#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] -pub enum P2pMessageKind { - ReqRes(ReqResMessageKind), - Gossip(GossipMessageKind), -} - -impl P2pMessageKind { - fn genesis(&self) -> Option<[u8; 32]> { - match self { - P2pMessageKind::ReqRes(ReqResMessageKind::KeepAlive) | - P2pMessageKind::Gossip(GossipMessageKind::CosignedBlock) => None, - P2pMessageKind::ReqRes( - ReqResMessageKind::Heartbeat(genesis) | ReqResMessageKind::Block(genesis), - ) | - P2pMessageKind::Gossip(GossipMessageKind::Tributary(genesis)) => Some(*genesis), - } - } -} - -impl From for P2pMessageKind { - fn from(kind: ReqResMessageKind) -> P2pMessageKind { - P2pMessageKind::ReqRes(kind) - } -} - -impl From for P2pMessageKind { - fn from(kind: GossipMessageKind) -> P2pMessageKind { - P2pMessageKind::Gossip(kind) - } -} - -#[derive(Clone, Debug)] -pub struct Message { - pub sender: P::Id, - pub kind: P2pMessageKind, - pub msg: Vec, -} - -#[derive(Clone, Debug, Encode, Decode)] -pub struct BlockCommit { - pub block: Vec, - pub commit: Vec, -} - -#[derive(Clone, Debug, Encode, Decode)] -pub struct HeartbeatBatch { - pub blocks: Vec, - pub timestamp: u64, -} - -#[async_trait] -pub trait P2p: Send + Sync + Clone + fmt::Debug + TributaryP2p { - type Id: Send + Sync + Clone + Copy + fmt::Debug; - - async fn subscribe(&self, set: ValidatorSet, genesis: [u8; 32]); - async fn unsubscribe(&self, set: ValidatorSet, genesis: [u8; 32]); - - async fn send_raw(&self, to: Self::Id, msg: Vec); - async fn broadcast_raw(&self, kind: P2pMessageKind, msg: Vec); - async fn receive(&self) -> Message; - - async fn send(&self, to: Self::Id, kind: ReqResMessageKind, msg: Vec) { - let mut actual_msg = kind.serialize(); - actual_msg.extend(msg); - self.send_raw(to, actual_msg).await; - } - async fn broadcast(&self, kind: impl Send + Into, msg: Vec) { - let kind = kind.into(); - let mut actual_msg = match kind { - P2pMessageKind::ReqRes(kind) => kind.serialize(), - P2pMessageKind::Gossip(kind) => kind.serialize(), - }; - actual_msg.extend(msg); - /* - log::trace!( - "broadcasting p2p message (kind {})", - match kind { - P2pMessageKind::KeepAlive => "KeepAlive".to_string(), - P2pMessageKind::Tributary(genesis) => format!("Tributary({})", hex::encode(genesis)), - P2pMessageKind::Heartbeat(genesis) => format!("Heartbeat({})", hex::encode(genesis)), - P2pMessageKind::Block(genesis) => format!("Block({})", hex::encode(genesis)), - P2pMessageKind::CosignedBlock => "CosignedBlock".to_string(), - } - ); - */ - self.broadcast_raw(kind, actual_msg).await; - } -} - -#[derive(Default, Clone, Copy, PartialEq, Eq, Debug)] -struct RrCodec; -#[async_trait] -impl RrCodecTrait for RrCodec { - type Protocol = &'static str; - type Request = Vec; - type Response = Vec; - - async fn read_request( - &mut self, - _: &Self::Protocol, - io: &mut R, - ) -> io::Result> { - let mut len = [0; 4]; - io.read_exact(&mut len).await?; - let len = usize::try_from(u32::from_le_bytes(len)).expect("not at least a 32-bit platform?"); - if len > MAX_LIBP2P_REQRES_MESSAGE_SIZE { - Err(io::Error::other("request length exceeded MAX_LIBP2P_REQRES_MESSAGE_SIZE"))?; - } - // This may be a non-trivial allocation easily 
causable - // While we could chunk the read, meaning we only perform the allocation as bandwidth is used, - // the max message size should be sufficiently sane - let mut buf = vec![0; len]; - io.read_exact(&mut buf).await?; - Ok(buf) - } - async fn read_response( - &mut self, - proto: &Self::Protocol, - io: &mut R, - ) -> io::Result> { - self.read_request(proto, io).await - } - async fn write_request( - &mut self, - _: &Self::Protocol, - io: &mut W, - req: Vec, - ) -> io::Result<()> { - io.write_all( - &u32::try_from(req.len()) - .map_err(|_| io::Error::other("request length exceeded 2**32"))? - .to_le_bytes(), - ) - .await?; - io.write_all(&req).await - } - async fn write_response( - &mut self, - proto: &Self::Protocol, - io: &mut W, - res: Vec, - ) -> io::Result<()> { - self.write_request(proto, io, res).await - } -} - -#[derive(NetworkBehaviour)] -struct Behavior { - reqres: RrBehavior, - gossipsub: GsBehavior, -} - -#[allow(clippy::type_complexity)] -#[derive(Clone)] -pub struct LibP2p { - subscribe: Arc>>, - send: Arc)>>>, - broadcast: Arc)>>>, - receive: Arc>>>, -} -impl fmt::Debug for LibP2p { - fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { - fmt.debug_struct("LibP2p").finish_non_exhaustive() - } -} - -impl LibP2p { - #[allow(clippy::new_without_default)] - pub fn new(serai: Arc) -> Self { - log::info!("creating a libp2p instance"); - - let throwaway_key_pair = Keypair::generate_ed25519(); - - let behavior = Behavior { - reqres: { RrBehavior::new([("/coordinator", ProtocolSupport::Full)], RrConfig::default()) }, - gossipsub: { - let heartbeat_interval = tributary::tendermint::LATENCY_TIME / 2; - let heartbeats_per_block = - usize::try_from(tributary::tendermint::TARGET_BLOCK_TIME / heartbeat_interval).unwrap(); - - use blake2::{Digest, Blake2s256}; - let config = ConfigBuilder::default() - .heartbeat_interval(Duration::from_millis(heartbeat_interval.into())) - .history_length(heartbeats_per_block * 2) - .history_gossip(heartbeats_per_block) - .max_transmit_size(MAX_LIBP2P_GOSSIP_MESSAGE_SIZE) - // We send KeepAlive after 80s - .idle_timeout(Duration::from_secs(85)) - .validation_mode(ValidationMode::Strict) - // Uses a content based message ID to avoid duplicates as much as possible - .message_id_fn(|msg| { - MessageId::new(&Blake2s256::digest([msg.topic.as_str().as_bytes(), &msg.data].concat())) - }) - // Re-defines for fast ID to prevent needing to convert into a Message to run - // message_id_fn - // This function is valid for both - .fast_message_id_fn(|msg| { - FastMessageId::new(&Blake2s256::digest( - [msg.topic.as_str().as_bytes(), &msg.data].concat(), - )) - }) - .build(); - let mut gossipsub = GsBehavior::::new( - MessageAuthenticity::Signed(throwaway_key_pair.clone()), - config.unwrap(), - ) - .unwrap(); - - // Subscribe to the base topic - let topic = IdentTopic::new(LIBP2P_TOPIC); - gossipsub.subscribe(&topic).unwrap(); - - gossipsub - }, - }; - - // Uses noise for authentication, yamux for multiplexing - // TODO: Do we want to add a custom authentication protocol to only accept connections from - // fellow validators? Doing so would reduce the potential for spam - // TODO: Relay client? 
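[Aside: a minimal sketch of the content-based gossipsub message ID configured above, assuming only the `blake2` crate. `topic` and `data` are hypothetical stand-ins for the fields of gossipsub's message type; the point is that the ID is a Blake2s-256 digest of the topic concatenated with the payload, so identical messages deduplicate regardless of which peer relayed them.]

use blake2::{Digest, Blake2s256};

// Mirrors the `message_id_fn`/`fast_message_id_fn` closures above: derive the
// message ID from the message's content, not its sender or sequence number.
fn content_based_id(topic: &str, data: &[u8]) -> Vec<u8> {
  Blake2s256::digest([topic.as_bytes(), data].concat()).to_vec()
}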
- let mut swarm = SwarmBuilder::with_existing_identity(throwaway_key_pair) - .with_tokio() - .with_tcp(TcpConfig::default().nodelay(true), noise::Config::new, || { - let mut config = yamux::Config::default(); - // 1 MiB default + max message size - config.set_max_buffer_size((1024 * 1024) + MAX_LIBP2P_MESSAGE_SIZE); - // 256 KiB default + max message size - config - .set_receive_window_size(((256 * 1024) + MAX_LIBP2P_MESSAGE_SIZE).try_into().unwrap()); - config - }) - .unwrap() - .with_behaviour(|_| behavior) - .unwrap() - .build(); - const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o') - swarm.listen_on(format!("/ip4/0.0.0.0/tcp/{PORT}").parse().unwrap()).unwrap(); - - let (send_send, mut send_recv) = mpsc::unbounded_channel(); - let (broadcast_send, mut broadcast_recv) = mpsc::unbounded_channel(); - let (receive_send, receive_recv) = mpsc::unbounded_channel(); - let (subscribe_send, mut subscribe_recv) = mpsc::unbounded_channel(); - - fn topic_for_set(set: ValidatorSet) -> IdentTopic { - IdentTopic::new(format!("{LIBP2P_TOPIC}-{}", hex::encode(set.encode()))) - } - - // TODO: If a network has less than TARGET_PEERS, this will cause retries ad infinitum - const TARGET_PEERS: usize = 5; - - // The addrs we're currently dialing, and the networks associated with them - let dialing_peers = Arc::new(RwLock::new(HashMap::new())); - // The peers we're currently connected to, and the networks associated with them - let connected_peers = Arc::new(RwLock::new(HashMap::>::new())); - - // Find and connect to peers - let (connect_to_network_send, mut connect_to_network_recv) = - tokio::sync::mpsc::unbounded_channel(); - let (to_dial_send, mut to_dial_recv) = tokio::sync::mpsc::unbounded_channel(); - tokio::spawn({ - let dialing_peers = dialing_peers.clone(); - let connected_peers = connected_peers.clone(); - - let connect_to_network_send = connect_to_network_send.clone(); - async move { - loop { - let connect = |network: NetworkId, addr: Multiaddr| { - let dialing_peers = dialing_peers.clone(); - let connected_peers = connected_peers.clone(); - let to_dial_send = to_dial_send.clone(); - let connect_to_network_send = connect_to_network_send.clone(); - async move { - log::info!("found peer from substrate: {addr}"); - - let protocols = addr.iter().filter_map(|piece| match piece { - // Drop PeerIds from the Substrate P2p network - Protocol::P2p(_) => None, - // Use our own TCP port - Protocol::Tcp(_) => Some(Protocol::Tcp(PORT)), - other => Some(other), - }); - - let mut new_addr = Multiaddr::empty(); - for protocol in protocols { - new_addr.push(protocol); - } - let addr = new_addr; - log::debug!("transformed found peer: {addr}"); - - let (is_fresh_dial, nets) = { - let mut dialing_peers = dialing_peers.write().await; - let is_fresh_dial = !dialing_peers.contains_key(&addr); - if is_fresh_dial { - dialing_peers.insert(addr.clone(), HashSet::new()); - } - // Associate this network with this peer - dialing_peers.get_mut(&addr).unwrap().insert(network); - - let nets = dialing_peers.get(&addr).unwrap().clone(); - (is_fresh_dial, nets) - }; - - // Spawn a task to remove this peer from 'dialing' in sixty seconds, in case dialing - // fails - // This performs cleanup and bounds the size of the map to whatever growth occurs - // within a temporal window - tokio::spawn({ - let dialing_peers = dialing_peers.clone(); - let connected_peers = connected_peers.clone(); - let connect_to_network_send = connect_to_network_send.clone(); - let addr = addr.clone(); - async move { - 
tokio::time::sleep(core::time::Duration::from_secs(60)).await; - let mut dialing_peers = dialing_peers.write().await; - if let Some(expected_nets) = dialing_peers.remove(&addr) { - log::debug!("removed addr from dialing upon timeout: {addr}"); - - // TODO: De-duplicate this below instance - // If we failed to dial and haven't gotten enough actual connections, retry - let connected_peers = connected_peers.read().await; - for net in expected_nets { - let mut remaining_peers = 0; - for nets in connected_peers.values() { - if nets.contains(&net) { - remaining_peers += 1; - } - } - // If we do not, start connecting to this network again - if remaining_peers < TARGET_PEERS { - connect_to_network_send.send(net).expect( - "couldn't send net to connect to due to disconnects (receiver dropped?)", - ); - } - } - } - } - }); - - if is_fresh_dial { - to_dial_send.send((addr, nets)).unwrap(); - } - } - }; - - // TODO: We should also connect to random peers from random nets as needed for - // cosigning - - // Drain the channel, de-duplicating any networks in it - let mut connect_to_network_networks = HashSet::new(); - while let Ok(network) = connect_to_network_recv.try_recv() { - connect_to_network_networks.insert(network); - } - for network in connect_to_network_networks { - if let Ok(mut nodes) = serai.p2p_validators(network).await { - // If there's an insufficient number of nodes known, connect to all yet add it - // back and continue - if nodes.len() < TARGET_PEERS { - log::warn!( - "insufficient amount of P2P nodes known for {:?}: {}", - network, - nodes.len() - ); - // Retry this later - connect_to_network_send.send(network).unwrap(); - for node in nodes { - connect(network, node).await; - } - continue; - } - - // Randomly select up to 150% of the TARGET_PEERS - for _ in 0 .. ((3 * TARGET_PEERS) / 2) { - if !nodes.is_empty() { - let to_connect = nodes.swap_remove( - usize::try_from(OsRng.next_u64() % u64::try_from(nodes.len()).unwrap()) - .unwrap(), - ); - connect(network, to_connect).await; - } - } - } - } - // Sleep 60 seconds before moving to the next iteration - tokio::time::sleep(core::time::Duration::from_secs(60)).await; - } - } - }); - - // Manage the actual swarm - tokio::spawn({ - let mut time_of_last_p2p_message = Instant::now(); - - async move { - let connected_peers = connected_peers.clone(); - - let mut set_for_genesis = HashMap::new(); - loop { - let time_since_last = Instant::now().duration_since(time_of_last_p2p_message); - tokio::select! { - biased; - - // Subscribe to any new topics - set = subscribe_recv.recv() => { - let (subscribe, set, genesis): (_, ValidatorSet, [u8; 32]) = - set.expect("subscribe_recv closed. are we shutting down?"); - let topic = topic_for_set(set); - if subscribe { - log::info!("subscribing to p2p messages for {set:?}"); - connect_to_network_send.send(set.network).unwrap(); - set_for_genesis.insert(genesis, set); - swarm.behaviour_mut().gossipsub.subscribe(&topic).unwrap(); - } else { - log::info!("unsubscribing from p2p messages for {set:?}"); - set_for_genesis.remove(&genesis); - swarm.behaviour_mut().gossipsub.unsubscribe(&topic).unwrap(); - } - } - - msg = send_recv.recv() => { - let (peer, msg): (PeerId, Vec) = - msg.expect("send_recv closed.
are we shutting down?"); - swarm.behaviour_mut().reqres.send_request(&peer, msg); - }, - - // Handle any queued outbound messages - msg = broadcast_recv.recv() => { - // Update the time of last message - time_of_last_p2p_message = Instant::now(); - - let (kind, msg): (P2pMessageKind, Vec) = - msg.expect("broadcast_recv closed. are we shutting down?"); - - if matches!(kind, P2pMessageKind::ReqRes(_)) { - // Use request/response, yet send to all connected peers - for peer_id in swarm.connected_peers().copied().collect::>() { - swarm.behaviour_mut().reqres.send_request(&peer_id, msg.clone()); - } - } else { - // Use gossipsub - - let set = - kind.genesis().and_then(|genesis| set_for_genesis.get(&genesis).copied()); - let topic = if let Some(set) = set { - topic_for_set(set) - } else { - IdentTopic::new(LIBP2P_TOPIC) - }; - - match swarm.behaviour_mut().gossipsub.publish(topic, msg.clone()) { - Err(PublishError::SigningError(e)) => { - panic!("signing error when broadcasting: {e}") - }, - Err(PublishError::InsufficientPeers) => { - log::warn!("failed to send p2p message due to insufficient peers") - } - Err(PublishError::MessageTooLarge) => { - panic!("tried to send a too large message: {}", hex::encode(msg)) - } - Err(PublishError::TransformFailed(e)) => panic!("IdentityTransform failed: {e}"), - Err(PublishError::Duplicate) | Ok(_) => {} - } - } - } - - // Handle new incoming messages - event = swarm.next() => { - match event { - Some(SwarmEvent::Dialing { connection_id, .. }) => { - log::debug!("dialing to peer in connection ID {}", &connection_id); - } - Some(SwarmEvent::ConnectionEstablished { - peer_id, - connection_id, - endpoint, - .. - }) => { - if &peer_id == swarm.local_peer_id() { - log::warn!("established a libp2p connection to ourselves"); - swarm.close_connection(connection_id); - continue; - } - - let addr = endpoint.get_remote_address(); - let nets = { - let mut dialing_peers = dialing_peers.write().await; - if let Some(nets) = dialing_peers.remove(addr) { - nets - } else { - log::debug!("connected to a peer who we didn't have within dialing"); - HashSet::new() - } - }; - { - let mut connected_peers = connected_peers.write().await; - connected_peers.insert(addr.clone(), nets); - - log::debug!( - "connection established to peer {} in connection ID {}, connected peers: {}", - &peer_id, - &connection_id, - connected_peers.len(), - ); - } - } - Some(SwarmEvent::ConnectionClosed { peer_id, endpoint, .. }) => { - let mut connected_peers = connected_peers.write().await; - let Some(nets) = connected_peers.remove(endpoint.get_remote_address()) else { - log::debug!("closed connection to peer which wasn't in connected_peers"); - continue; - }; - // Downgrade to a read lock - let connected_peers = connected_peers.downgrade(); - - // For each net we lost a peer for, check if we still have sufficient peers - // overall - for net in nets { - let mut remaining_peers = 0; - for nets in connected_peers.values() { - if nets.contains(&net) { - remaining_peers += 1; - } - } - // If we do not, start connecting to this network again - if remaining_peers < TARGET_PEERS { - connect_to_network_send - .send(net) - .expect( - "couldn't send net to connect to due to disconnects (receiver dropped?)" - ); - } - } - - log::debug!( - "connection with peer {peer_id} closed, connected peers: {}", - connected_peers.len(), - ); - } - Some(SwarmEvent::Behaviour(BehaviorEvent::Reqres( - RrEvent::Message { peer, message }, - ))) => { - let message = match message { - RrMessage::Request { request, .. 
} => request, - RrMessage::Response { response, .. } => response, - }; - - let mut msg_ref = message.as_slice(); - let Some(kind) = ReqResMessageKind::read(&mut msg_ref) else { continue }; - let message = Message { - sender: peer, - kind: P2pMessageKind::ReqRes(kind), - msg: msg_ref.to_vec(), - }; - receive_send.send(message).expect("receive_send closed. are we shutting down?"); - } - Some(SwarmEvent::Behaviour(BehaviorEvent::Gossipsub( - GsEvent::Message { propagation_source, message, .. }, - ))) => { - let mut msg_ref = message.data.as_slice(); - let Some(kind) = GossipMessageKind::read(&mut msg_ref) else { continue }; - let message = Message { - sender: propagation_source, - kind: P2pMessageKind::Gossip(kind), - msg: msg_ref.to_vec(), - }; - receive_send.send(message).expect("receive_send closed. are we shutting down?"); - } - _ => {} - } - } - - // Handle peers to dial - addr_and_nets = to_dial_recv.recv() => { - let (addr, nets) = - addr_and_nets.expect("received address was None (sender dropped?)"); - // If we've already dialed and connected to this address, don't further dial them - // Just associate these networks with them - if let Some(existing_nets) = connected_peers.write().await.get_mut(&addr) { - for net in nets { - existing_nets.insert(net); - } - continue; - } - - if let Err(e) = swarm.dial(addr) { - log::warn!("dialing peer failed: {e:?}"); - } - } - - // If it's been >80s since we've published a message, publish a KeepAlive since we're - // still an active service - // This is useful when we have no active tributaries and accordingly aren't sending - // heartbeats - // If we are sending heartbeats, we should've sent one after 60s of no finalized blocks - // (where a finalized block only occurs due to network activity), meaning this won't be - // run - () = tokio::time::sleep(Duration::from_secs(80).saturating_sub(time_since_last)) => { - time_of_last_p2p_message = Instant::now(); - for peer_id in swarm.connected_peers().copied().collect::>() { - swarm - .behaviour_mut() - .reqres - .send_request(&peer_id, ReqResMessageKind::KeepAlive.serialize()); - } - } - } - } - } - }); - - LibP2p { - subscribe: Arc::new(Mutex::new(subscribe_send)), - send: Arc::new(Mutex::new(send_send)), - broadcast: Arc::new(Mutex::new(broadcast_send)), - receive: Arc::new(Mutex::new(receive_recv)), - } - } -} - -#[async_trait] -impl P2p for LibP2p { - type Id = PeerId; - - async fn subscribe(&self, set: ValidatorSet, genesis: [u8; 32]) { - self - .subscribe - .lock() - .await - .send((true, set, genesis)) - .expect("subscribe_send closed. are we shutting down?"); - } - - async fn unsubscribe(&self, set: ValidatorSet, genesis: [u8; 32]) { - self - .subscribe - .lock() - .await - .send((false, set, genesis)) - .expect("subscribe_send closed. are we shutting down?"); - } - - async fn send_raw(&self, peer: Self::Id, msg: Vec) { - self.send.lock().await.send((peer, msg)).expect("send_send closed. are we shutting down?"); - } - - async fn broadcast_raw(&self, kind: P2pMessageKind, msg: Vec) { - self - .broadcast - .lock() - .await - .send((kind, msg)) - .expect("broadcast_send closed. are we shutting down?"); - } - - // TODO: We only have a single handle call this. Differentiate Send/Recv to remove this constant - // lock acquisition? - async fn receive(&self) -> Message { - self.receive.lock().await.recv().await.expect("receive_recv closed. 
are we shutting down?") - } -} - -#[async_trait] -impl TributaryP2p for LibP2p { - async fn broadcast(&self, genesis: [u8; 32], msg: Vec) { - ::broadcast(self, GossipMessageKind::Tributary(genesis), msg).await - } -} - -pub async fn heartbeat_tributaries_task( - p2p: P, - mut tributary_event: broadcast::Receiver>, -) { - let ten_blocks_of_time = - Duration::from_secs((10 * Tributary::::block_time()).into()); - - let mut readers = HashMap::new(); - loop { - loop { - match tributary_event.try_recv() { - Ok(TributaryEvent::NewTributary(ActiveTributary { spec, tributary })) => { - readers.insert(spec.set(), tributary.reader()); - } - Ok(TributaryEvent::TributaryRetired(set)) => { - readers.remove(&set); - } - Err(broadcast::error::TryRecvError::Empty) => break, - Err(broadcast::error::TryRecvError::Lagged(_)) => { - panic!("heartbeat_tributaries lagged to handle tributary_event") - } - Err(broadcast::error::TryRecvError::Closed) => panic!("tributary_event sender closed"), - } - } - - for tributary in readers.values() { - let tip = tributary.tip(); - let block_time = - SystemTime::UNIX_EPOCH + Duration::from_secs(tributary.time_of_block(&tip).unwrap_or(0)); - - // Only trigger syncing if the block is more than a minute behind - if SystemTime::now() > (block_time + Duration::from_secs(60)) { - log::warn!("last known tributary block was over a minute ago"); - let mut msg = tip.to_vec(); - let time: u64 = SystemTime::now() - .duration_since(SystemTime::UNIX_EPOCH) - .expect("system clock is wrong") - .as_secs(); - msg.extend(time.to_le_bytes()); - P2p::broadcast(&p2p, ReqResMessageKind::Heartbeat(tributary.genesis()), msg).await; - } - } - - // Only check once every 10 blocks of time - sleep(ten_blocks_of_time).await; - } -} - -pub async fn handle_p2p_task( - p2p: P, - cosign_channel: mpsc::UnboundedSender, - mut tributary_event: broadcast::Receiver>, -) { - let channels = Arc::new(RwLock::new(HashMap::<_, mpsc::UnboundedSender>>::new())); - tokio::spawn({ - let p2p = p2p.clone(); - let channels = channels.clone(); - let mut set_to_genesis = HashMap::new(); - async move { - loop { - match tributary_event.recv().await.unwrap() { - TributaryEvent::NewTributary(tributary) => { - let genesis = tributary.spec.genesis(); - set_to_genesis.insert(tributary.spec.set(), genesis); - - let (send, mut recv) = mpsc::unbounded_channel(); - channels.write().await.insert(genesis, send); - - // Subscribe to the topic for this tributary - p2p.subscribe(tributary.spec.set(), genesis).await; - - let spec_set = tributary.spec.set(); - - // Per-Tributary P2P message handler - tokio::spawn({ - let p2p = p2p.clone(); - async move { - loop { - let Some(msg) = recv.recv().await else { - // Channel closure happens when the tributary retires - break; - }; - match msg.kind { - P2pMessageKind::ReqRes(ReqResMessageKind::KeepAlive) => {} - - // TODO: Slash on Heartbeat which justifies a response, since the node - // obviously was offline and we must now use our bandwidth to compensate for - // them? - P2pMessageKind::ReqRes(ReqResMessageKind::Heartbeat(msg_genesis)) => { - assert_eq!(msg_genesis, genesis); - if msg.msg.len() != 40 { - log::error!("validator sent invalid heartbeat"); - continue; - } - // Only respond to recent heartbeats - let msg_time = u64::from_le_bytes(msg.msg[32 .. 
40].try_into().expect( - "length-checked heartbeat message didn't have 8 bytes for the u64", - )); - if SystemTime::now() - .duration_since(SystemTime::UNIX_EPOCH) - .expect("system clock is wrong") - .as_secs() - .saturating_sub(msg_time) > - 10 - { - continue; - } - - log::debug!("received heartbeat with a recent timestamp"); - - let reader = tributary.tributary.reader(); - - let p2p = p2p.clone(); - // Spawn a dedicated task as this may require loading large amounts of data - // from disk and take a notable amount of time - tokio::spawn(async move { - let mut latest = msg.msg[.. 32].try_into().unwrap(); - let mut to_send = vec![]; - while let Some(next) = reader.block_after(&latest) { - to_send.push(next); - latest = next; - } - if to_send.len() > 3 { - // prepare the batch to send - let mut blocks = vec![]; - for (i, next) in to_send.iter().enumerate() { - if i >= BLOCKS_PER_BATCH { - break; - } - - blocks.push(BlockCommit { - block: reader.block(next).unwrap().serialize(), - commit: reader.commit(next).unwrap(), - }); - } - let batch = HeartbeatBatch { blocks, timestamp: msg_time }; - - p2p - .send(msg.sender, ReqResMessageKind::Block(genesis), batch.encode()) - .await; - } - }); - } - - P2pMessageKind::ReqRes(ReqResMessageKind::Block(msg_genesis)) => { - assert_eq!(msg_genesis, genesis); - // decode the batch - let Ok(batch) = HeartbeatBatch::decode(&mut msg.msg.as_ref()) else { - log::error!( - "received HeartbeatBatch message with an invalidly serialized batch" - ); - continue; - }; - - // sync blocks - for bc in batch.blocks { - // TODO: why do we use ReadWrite instead of Encode/Decode for blocks? - // Should we use the same for batches so we can read both at the same time? - let Ok(block) = Block::::read(&mut bc.block.as_slice()) else { - log::error!("received block message with an invalidly serialized block"); - continue; - }; - - let res = tributary.tributary.sync_block(block, bc.commit).await; - log::debug!( - "received block from {:?}, sync_block returned {}", - msg.sender, - res - ); - } - } - - P2pMessageKind::Gossip(GossipMessageKind::Tributary(msg_genesis)) => { - assert_eq!(msg_genesis, genesis); - log::trace!("handling message for tributary {:?}", spec_set); - if tributary.tributary.handle_message(&msg.msg).await { - P2p::broadcast(&p2p, msg.kind, msg.msg).await; - } - } - - P2pMessageKind::Gossip(GossipMessageKind::CosignedBlock) => unreachable!(), - } - } - } - }); - } - TributaryEvent::TributaryRetired(set) => { - if let Some(genesis) = set_to_genesis.remove(&set) { - p2p.unsubscribe(set, genesis).await; - channels.write().await.remove(&genesis); - } - } - } - } - } - }); - - loop { - let msg = p2p.receive().await; - match msg.kind { - P2pMessageKind::ReqRes(ReqResMessageKind::KeepAlive) => {} - P2pMessageKind::Gossip(GossipMessageKind::Tributary(genesis)) | - P2pMessageKind::ReqRes( - ReqResMessageKind::Heartbeat(genesis) | ReqResMessageKind::Block(genesis), - ) => { - if let Some(channel) = channels.read().await.get(&genesis) { - channel.send(msg).unwrap(); - } - } - P2pMessageKind::Gossip(GossipMessageKind::CosignedBlock) => { - let Ok(msg) = CosignedBlock::deserialize_reader(&mut msg.msg.as_slice()) else { - log::error!("received CosignedBlock message with invalidly serialized contents"); - continue; - }; - cosign_channel.send(msg).unwrap(); - } - } - } -} diff --git a/coordinator/src/processors.rs b/coordinator/src/processors.rs deleted file mode 100644 index 9157e2a6..00000000 --- a/coordinator/src/processors.rs +++ /dev/null @@ -1,46 +0,0 @@ -use
std::sync::Arc; - -use serai_client::primitives::NetworkId; -use processor_messages::{ProcessorMessage, CoordinatorMessage}; - -use message_queue::{Service, Metadata, client::MessageQueue}; - -#[derive(Clone, PartialEq, Eq, Debug)] -pub struct Message { - pub id: u64, - pub network: NetworkId, - pub msg: ProcessorMessage, -} - -#[async_trait::async_trait] -pub trait Processors: 'static + Send + Sync + Clone { - async fn send(&self, network: NetworkId, msg: impl Send + Into); - async fn recv(&self, network: NetworkId) -> Message; - async fn ack(&self, msg: Message); -} - -#[async_trait::async_trait] -impl Processors for Arc { - async fn send(&self, network: NetworkId, msg: impl Send + Into) { - let msg: CoordinatorMessage = msg.into(); - let metadata = - Metadata { from: self.service, to: Service::Processor(network), intent: msg.intent() }; - let msg = borsh::to_vec(&msg).unwrap(); - self.queue(metadata, msg).await; - } - async fn recv(&self, network: NetworkId) -> Message { - let msg = self.next(Service::Processor(network)).await; - assert_eq!(msg.from, Service::Processor(network)); - - let id = msg.id; - - // Deserialize it into a ProcessorMessage - let msg: ProcessorMessage = - borsh::from_slice(&msg.msg).expect("message wasn't a borsh-encoded ProcessorMessage"); - - return Message { id, network, msg }; - } - async fn ack(&self, msg: Message) { - MessageQueue::ack(self, Service::Processor(msg.network), msg.id).await - } -} diff --git a/coordinator/src/tests/mod.rs b/coordinator/src/tests/mod.rs deleted file mode 100644 index db4c158f..00000000 --- a/coordinator/src/tests/mod.rs +++ /dev/null @@ -1,125 +0,0 @@ -use core::fmt::Debug; -use std::{ - sync::Arc, - collections::{VecDeque, HashSet, HashMap}, -}; - -use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet}; - -use processor_messages::CoordinatorMessage; - -use async_trait::async_trait; - -use tokio::sync::RwLock; - -use crate::{ - processors::{Message, Processors}, - TributaryP2p, ReqResMessageKind, GossipMessageKind, P2pMessageKind, Message as P2pMessage, P2p, -}; - -pub mod tributary; - -#[derive(Clone)] -pub struct MemProcessors(pub Arc>>>); -impl MemProcessors { - #[allow(clippy::new_without_default)] - pub fn new() -> MemProcessors { - MemProcessors(Arc::new(RwLock::new(HashMap::new()))) - } -} - -#[async_trait::async_trait] -impl Processors for MemProcessors { - async fn send(&self, network: NetworkId, msg: impl Send + Into) { - let mut processors = self.0.write().await; - let processor = processors.entry(network).or_insert_with(VecDeque::new); - processor.push_back(msg.into()); - } - async fn recv(&self, _: NetworkId) -> Message { - todo!() - } - async fn ack(&self, _: Message) { - todo!() - } -} - -#[allow(clippy::type_complexity)] -#[derive(Clone, Debug)] -pub struct LocalP2p( - usize, - pub Arc>, Vec)>>)>>, -); - -impl LocalP2p { - pub fn new(validators: usize) -> Vec { - let shared = Arc::new(RwLock::new((HashSet::new(), vec![VecDeque::new(); validators]))); - let mut res = vec![]; - for i in 0 .. 
validators { - res.push(LocalP2p(i, shared.clone())); - } - res - } -} - -#[async_trait] -impl P2p for LocalP2p { - type Id = usize; - - async fn subscribe(&self, _set: ValidatorSet, _genesis: [u8; 32]) {} - async fn unsubscribe(&self, _set: ValidatorSet, _genesis: [u8; 32]) {} - - async fn send_raw(&self, to: Self::Id, msg: Vec) { - let mut msg_ref = msg.as_slice(); - let kind = ReqResMessageKind::read(&mut msg_ref).unwrap(); - self.1.write().await.1[to].push_back((self.0, P2pMessageKind::ReqRes(kind), msg_ref.to_vec())); - } - - async fn broadcast_raw(&self, kind: P2pMessageKind, msg: Vec) { - // Content-based deduplication - let mut lock = self.1.write().await; - { - let already_sent = &mut lock.0; - if already_sent.contains(&msg) { - return; - } - already_sent.insert(msg.clone()); - } - let queues = &mut lock.1; - - let kind_len = (match kind { - P2pMessageKind::ReqRes(kind) => kind.serialize(), - P2pMessageKind::Gossip(kind) => kind.serialize(), - }) - .len(); - let msg = msg[kind_len ..].to_vec(); - - for (i, msg_queue) in queues.iter_mut().enumerate() { - if i == self.0 { - continue; - } - msg_queue.push_back((self.0, kind, msg.clone())); - } - } - - async fn receive(&self) -> P2pMessage { - // This is a cursed way to implement an async read from a Vec - loop { - if let Some((sender, kind, msg)) = self.1.write().await.1[self.0].pop_front() { - return P2pMessage { sender, kind, msg }; - } - tokio::time::sleep(std::time::Duration::from_millis(100)).await; - } - } -} - -#[async_trait] -impl TributaryP2p for LocalP2p { - async fn broadcast(&self, genesis: [u8; 32], msg: Vec) { - ::broadcast( - self, - P2pMessageKind::Gossip(GossipMessageKind::Tributary(genesis)), - msg, - ) - .await - } -} diff --git a/coordinator/src/tests/tributary/chain.rs b/coordinator/src/tests/tributary/chain.rs deleted file mode 100644 index 746c611b..00000000 --- a/coordinator/src/tests/tributary/chain.rs +++ /dev/null @@ -1,243 +0,0 @@ -use std::{ - time::{Duration, SystemTime}, - collections::HashSet, -}; - -use zeroize::Zeroizing; -use rand_core::{RngCore, CryptoRng, OsRng}; -use futures_util::{task::Poll, poll}; - -use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto}; - -use borsh::BorshDeserialize; -use serai_client::{ - primitives::NetworkId, - validator_sets::primitives::{Session, ValidatorSet}, -}; - -use tokio::time::sleep; - -use serai_db::MemDb; - -use tributary::Tributary; - -use crate::{ - GossipMessageKind, P2pMessageKind, P2p, - tributary::{Transaction, TributarySpec}, - tests::LocalP2p, -}; - -pub fn new_keys( - rng: &mut R, -) -> Vec::F>> { - let mut keys = vec![]; - for _ in 0 .. 5 { - keys.push(Zeroizing::new(::F::random(&mut *rng))); - } - keys -} - -pub fn new_spec( - rng: &mut R, - keys: &[Zeroizing<::F>], -) -> TributarySpec { - let mut serai_block = [0; 32]; - rng.fill_bytes(&mut serai_block); - - let start_time = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).unwrap().as_secs(); - - let set = ValidatorSet { session: Session(0), network: NetworkId::Bitcoin }; - - let validators = keys - .iter() - .map(|key| ((::generator() * **key), 1)) - .collect::>(); - - // Generate random eVRF keys as none of these tests rely on them to have any structure - let mut evrf_keys = vec![]; - for _ in 0 ..
keys.len() { - let mut substrate = [0; 32]; - OsRng.fill_bytes(&mut substrate); - let mut network = vec![0; 64]; - OsRng.fill_bytes(&mut network); - evrf_keys.push((substrate, network)); - } - - let res = TributarySpec::new(serai_block, start_time, set, validators, evrf_keys); - assert_eq!( - TributarySpec::deserialize_reader(&mut borsh::to_vec(&res).unwrap().as_slice()).unwrap(), - res, - ); - res -} - -pub async fn new_tributaries( - keys: &[Zeroizing<::F>], - spec: &TributarySpec, -) -> Vec<(MemDb, LocalP2p, Tributary)> { - let p2p = LocalP2p::new(keys.len()); - let mut res = vec![]; - for (i, key) in keys.iter().enumerate() { - let db = MemDb::new(); - res.push(( - db.clone(), - p2p[i].clone(), - Tributary::<_, Transaction, _>::new( - db, - spec.genesis(), - spec.start_time(), - key.clone(), - spec.validators(), - p2p[i].clone(), - ) - .await - .unwrap(), - )); - } - res -} - -pub async fn run_tributaries( - mut tributaries: Vec<(LocalP2p, Tributary)>, -) { - loop { - for (p2p, tributary) in &mut tributaries { - while let Poll::Ready(msg) = poll!(p2p.receive()) { - match msg.kind { - P2pMessageKind::Gossip(GossipMessageKind::Tributary(genesis)) => { - assert_eq!(genesis, tributary.genesis()); - if tributary.handle_message(&msg.msg).await { - p2p.broadcast(msg.kind, msg.msg).await; - } - } - _ => panic!("unexpected p2p message found"), - } - } - } - - sleep(Duration::from_millis(100)).await; - } -} - -pub async fn wait_for_tx_inclusion( - tributary: &Tributary, - mut last_checked: [u8; 32], - hash: [u8; 32], -) -> [u8; 32] { - let reader = tributary.reader(); - loop { - let tip = tributary.tip().await; - if tip == last_checked { - sleep(Duration::from_secs(1)).await; - continue; - } - - let mut queue = vec![reader.block(&tip).unwrap()]; - let mut block = None; - while { - let parent = queue.last().unwrap().parent(); - if parent == tributary.genesis() { - false - } else { - block = Some(reader.block(&parent).unwrap()); - block.as_ref().unwrap().hash() != last_checked - } - } { - queue.push(block.take().unwrap()); - } - - while let Some(block) = queue.pop() { - for tx in &block.transactions { - if tx.hash() == hash { - return block.hash(); - } - } - } - - last_checked = tip; - } -} - -#[tokio::test] -async fn tributary_test() { - let keys = new_keys(&mut OsRng); - let spec = new_spec(&mut OsRng, &keys); - - let mut tributaries = new_tributaries(&keys, &spec) - .await - .into_iter() - .map(|(_, p2p, tributary)| (p2p, tributary)) - .collect::>(); - - let mut blocks = 0; - let mut last_block = spec.genesis(); - - // Doesn't use run_tributaries as we want to wind these down at a certain point - // run_tributaries will run them ad infinitum - let timeout = SystemTime::now() + Duration::from_secs(65); - while (blocks < 10) && (SystemTime::now().duration_since(timeout).is_err()) { - for (p2p, tributary) in &mut tributaries { - while let Poll::Ready(msg) = poll!(p2p.receive()) { - match msg.kind { - P2pMessageKind::Gossip(GossipMessageKind::Tributary(genesis)) => { - assert_eq!(genesis, tributary.genesis()); - tributary.handle_message(&msg.msg).await; - } - _ => panic!("unexpected p2p message found"), - } - } - } - - let tip = tributaries[0].1.tip().await; - if tip != last_block { - last_block = tip; - blocks += 1; - } - - sleep(Duration::from_millis(100)).await; - } - - if blocks != 10 { - panic!("tributary chain test hit timeout"); - } - - // Handle all existing messages - for (p2p, tributary) in &mut tributaries { - while let Poll::Ready(msg) = poll!(p2p.receive()) { - match msg.kind { - 
P2pMessageKind::Gossip(GossipMessageKind::Tributary(genesis)) => { - assert_eq!(genesis, tributary.genesis()); - tributary.handle_message(&msg.msg).await; - } - _ => panic!("unexpected p2p message found"), - } - } - } - - // handle_message informed the Tendermint machine, yet it still has to process it - // Sleep for a second accordingly - // TODO: Is there a better way to handle this? - sleep(Duration::from_secs(1)).await; - - // All tributaries should agree on the tip, within a block - let mut tips = HashSet::new(); - for (_, tributary) in &tributaries { - tips.insert(tributary.tip().await); - } - assert!(tips.len() <= 2); - if tips.len() == 2 { - for tip in &tips { - // Find a Tributary where this isn't the tip - for (_, tributary) in &tributaries { - let Some(after) = tributary.reader().block_after(tip) else { continue }; - // Make sure the block after is the other tip - assert!(tips.contains(&after)); - return; - } - } - } else { - assert_eq!(tips.len(), 1); - return; - } - panic!("tributary had different tip with a variance exceeding one block"); -} diff --git a/coordinator/src/tests/tributary/dkg.rs b/coordinator/src/tests/tributary/dkg.rs deleted file mode 100644 index aafa9a33..00000000 --- a/coordinator/src/tests/tributary/dkg.rs +++ /dev/null @@ -1,282 +0,0 @@ -use core::time::Duration; - -use zeroize::Zeroizing; -use rand_core::{RngCore, OsRng}; - -use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; -use frost::Participant; - -use sp_runtime::traits::Verify; -use serai_client::{ - primitives::Signature, - validator_sets::primitives::{ValidatorSet, KeyPair}, -}; - -use tokio::time::sleep; - -use serai_db::{Get, DbTxn, Db, MemDb}; - -use processor_messages::{key_gen, CoordinatorMessage}; - -use tributary::{TransactionTrait, Tributary}; - -use crate::{ - tributary::{ - Transaction, TributarySpec, - scanner::{PublishSeraiTransaction, handle_new_blocks}, - }, - tests::{ - MemProcessors, LocalP2p, - tributary::{new_keys, new_spec, new_tributaries, run_tributaries, wait_for_tx_inclusion}, - }, -}; - -#[tokio::test] -async fn dkg_test() { - env_logger::init(); - - let keys = new_keys(&mut OsRng); - let spec = new_spec(&mut OsRng, &keys); - - let full_tributaries = new_tributaries(&keys, &spec).await; - let mut dbs = vec![]; - let mut tributaries = vec![]; - for (db, p2p, tributary) in full_tributaries { - dbs.push(db); - tributaries.push((p2p, tributary)); - } - - // Run the tributaries in the background - tokio::spawn(run_tributaries(tributaries.clone())); - - let mut txs = vec![]; - // Create DKG participation for each key - for key in &keys { - let mut participation = vec![0; 4096]; - OsRng.fill_bytes(&mut participation); - - let mut tx = - Transaction::DkgParticipation { participation, signed: Transaction::empty_signed() }; - tx.sign(&mut OsRng, spec.genesis(), key); - txs.push(tx); - } - - let block_before_tx = tributaries[0].1.tip().await; - - // Publish t-1 participations - let t = ((keys.len() * 2) / 3) + 1; - for (i, tx) in txs.iter().take(t - 1).enumerate() { - assert_eq!(tributaries[i].1.add_transaction(tx.clone()).await, Ok(true)); - wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await; - } - - let expected_participations = txs - .iter() - .enumerate() - .map(|(i, tx)| { - if let Transaction::DkgParticipation { participation, .. 
} = tx { - CoordinatorMessage::KeyGen(key_gen::CoordinatorMessage::Participation { - session: spec.set().session, - participant: Participant::new((i + 1).try_into().unwrap()).unwrap(), - participation: participation.clone(), - }) - } else { - panic!("txs wasn't a DkgParticipation"); - } - }) - .collect::>(); - - async fn new_processors( - db: &mut MemDb, - key: &Zeroizing<::F>, - spec: &TributarySpec, - tributary: &Tributary, - ) -> MemProcessors { - let processors = MemProcessors::new(); - handle_new_blocks::<_, _, _, _, _, LocalP2p>( - db, - key, - &|_, _, _, _| async { - panic!("provided TX caused recognized_id to be called in new_processors") - }, - &processors, - &(), - &|_| async { - panic!( - "test tried to publish a new Tributary TX from handle_application_tx in new_processors" - ) - }, - spec, - &tributary.reader(), - ) - .await; - processors - } - - // Instantiate a scanner and verify it has the first two participations to report (and isn't - // waiting for `t`) - let processors = new_processors(&mut dbs[0], &keys[0], &spec, &tributaries[0].1).await; - assert_eq!(processors.0.read().await.get(&spec.set().network).unwrap().len(), t - 1); - - // Publish the rest of the participations - let block_before_tx = tributaries[0].1.tip().await; - for tx in txs.iter().skip(t - 1) { - assert_eq!(tributaries[0].1.add_transaction(tx.clone()).await, Ok(true)); - wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await; - } - - // Verify the scanner emits all KeyGen::Participations messages - handle_new_blocks::<_, _, _, _, _, LocalP2p>( - &mut dbs[0], - &keys[0], - &|_, _, _, _| async { - panic!("provided TX caused recognized_id to be called after DkgParticipation") - }, - &processors, - &(), - &|_| async { - panic!( - "test tried to publish a new Tributary TX from handle_application_tx after DkgParticipation" - ) - }, - &spec, - &tributaries[0].1.reader(), - ) - .await; - { - let mut msgs = processors.0.write().await; - let msgs = msgs.get_mut(&spec.set().network).unwrap(); - assert_eq!(msgs.len(), keys.len()); - for expected in &expected_participations { - assert_eq!(&msgs.pop_front().unwrap(), expected); - } - assert!(msgs.is_empty()); - } - - // Verify all keys exhibit this scanner behavior - for (i, key) in keys.iter().enumerate().skip(1) { - let processors = new_processors(&mut dbs[i], key, &spec, &tributaries[i].1).await; - let mut msgs = processors.0.write().await; - let msgs = msgs.get_mut(&spec.set().network).unwrap(); - assert_eq!(msgs.len(), keys.len()); - for expected in &expected_participations { - assert_eq!(&msgs.pop_front().unwrap(), expected); - } - assert!(msgs.is_empty()); - } - - let mut substrate_key = [0; 32]; - OsRng.fill_bytes(&mut substrate_key); - let mut network_key = vec![0; usize::try_from((OsRng.next_u64() % 32) + 32).unwrap()]; - OsRng.fill_bytes(&mut network_key); - let key_pair = KeyPair(serai_client::Public(substrate_key), network_key.try_into().unwrap()); - - let mut txs = vec![]; - for (i, key) in keys.iter().enumerate() { - let mut txn = dbs[i].txn(); - - // Claim we've generated the key pair - crate::tributary::generated_key_pair::(&mut txn, spec.genesis(), &key_pair); - - // Publish the nonces - let attempt = 0; - let mut tx = Transaction::DkgConfirmationNonces { - attempt, - confirmation_nonces: crate::tributary::dkg_confirmation_nonces(key, &spec, &mut txn, 0), - signed: Transaction::empty_signed(), - }; - txn.commit(); - tx.sign(&mut OsRng, spec.genesis(), key); - txs.push(tx); - } - let block_before_tx = tributaries[0].1.tip().await; 
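// For illustration — a sketch, not part of this patch. The test above publishes t - 1
// participations before instantiating a scanner, where t is the threshold used throughout:
// t = floor((n * 2) / 3) + 1. `bft_threshold` is a hypothetical helper, and 150 is assumed
// to be MAX_KEY_SHARES_PER_SET, purely for illustration:
//
//   fn bft_threshold(n: usize) -> usize {
//     ((n * 2) / 3) + 1
//   }
//
//   assert_eq!(bft_threshold(5), 4); // a 5-validator set requires 4 participations
//   assert_eq!(bft_threshold(150), 101); // a maximally-sized set requires 101
//
// Also note how the test maps 0-indexed validator positions to 1-indexed FROST `Participant`s
// via `Participant::new((i + 1).try_into().unwrap())` above.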
- for (i, tx) in txs.iter().enumerate() { - assert_eq!(tributaries[i].1.add_transaction(tx.clone()).await, Ok(true)); - } - for tx in &txs { - wait_for_tx_inclusion(&tributaries[0].1, block_before_tx, tx.hash()).await; - } - - // This should not cause any new processor event as the processor doesn't handle DKG confirming - for (i, key) in keys.iter().enumerate() { - handle_new_blocks::<_, _, _, _, _, LocalP2p>( - &mut dbs[i], - key, - &|_, _, _, _| async { - panic!("provided TX caused recognized_id to be called after DkgConfirmationNonces") - }, - &processors, - &(), - // The Tributary handler should publish ConfirmationShare itself after ConfirmationNonces - &|tx| async { assert_eq!(tributaries[i].1.add_transaction(tx).await, Ok(true)) }, - &spec, - &tributaries[i].1.reader(), - ) - .await; - { - assert!(processors.0.read().await.get(&spec.set().network).unwrap().is_empty()); - } - } - - // Yet once these TXs are on-chain, the tributary should itself publish the confirmation shares - // This means in the block after the next block, the keys should be set onto Serai - // Sleep twice as long as two blocks, in case there's some stability issue - sleep(Duration::from_secs( - 2 * 2 * u64::from(Tributary::::block_time()), - )) - .await; - - struct CheckPublishSetKeys { - spec: TributarySpec, - key_pair: KeyPair, - } - #[async_trait::async_trait] - impl PublishSeraiTransaction for CheckPublishSetKeys { - async fn publish_set_keys( - &self, - _db: &(impl Sync + Get), - set: ValidatorSet, - key_pair: KeyPair, - signature_participants: bitvec::vec::BitVec, - signature: Signature, - ) { - assert_eq!(set, self.spec.set()); - assert_eq!(self.key_pair, key_pair); - assert!(signature.verify( - &*serai_client::validator_sets::primitives::set_keys_message(&set, &key_pair), - &serai_client::Public( - frost::dkg::musig::musig_key::( - &serai_client::validator_sets::primitives::musig_context(set), - &self - .spec - .validators() - .into_iter() - .zip(signature_participants) - .filter_map(|((validator, _), included)| included.then_some(validator)) - .collect::>() - ) - .unwrap() - .to_bytes() - ), - )); - } - } - - // The scanner should successfully try to publish a transaction with a validly signed signature - handle_new_blocks::<_, _, _, _, _, LocalP2p>( - &mut dbs[0], - &keys[0], - &|_, _, _, _| async { - panic!("provided TX caused recognized_id to be called after DKG confirmation") - }, - &processors, - &CheckPublishSetKeys { spec: spec.clone(), key_pair: key_pair.clone() }, - &|_| async { panic!("test tried to publish a new Tributary TX from handle_application_tx") }, - &spec, - &tributaries[0].1.reader(), - ) - .await; - { - assert!(processors.0.read().await.get(&spec.set().network).unwrap().is_empty()); - } -} diff --git a/coordinator/src/tests/tributary/handle_p2p.rs b/coordinator/src/tests/tributary/handle_p2p.rs deleted file mode 100644 index 756f4561..00000000 --- a/coordinator/src/tests/tributary/handle_p2p.rs +++ /dev/null @@ -1,74 +0,0 @@ -use core::time::Duration; -use std::sync::Arc; - -use rand_core::OsRng; - -use tokio::{ - sync::{mpsc, broadcast}, - time::sleep, -}; - -use serai_db::MemDb; - -use tributary::Tributary; - -use crate::{ - tributary::Transaction, - ActiveTributary, TributaryEvent, - p2p::handle_p2p_task, - tests::{ - LocalP2p, - tributary::{new_keys, new_spec, new_tributaries}, - }, -}; - -#[tokio::test] -async fn handle_p2p_test() { - let keys = new_keys(&mut OsRng); - let spec = new_spec(&mut OsRng, &keys); - - let mut tributaries = new_tributaries(&keys, &spec) - .await - 
.into_iter() - .map(|(_, p2p, tributary)| (p2p, tributary)) - .collect::>(); - - let mut tributary_senders = vec![]; - let mut tributary_arcs = vec![]; - for (p2p, tributary) in tributaries.drain(..) { - let tributary = Arc::new(tributary); - tributary_arcs.push(tributary.clone()); - let (new_tributary_send, new_tributary_recv) = broadcast::channel(5); - let (cosign_send, _) = mpsc::unbounded_channel(); - tokio::spawn(handle_p2p_task(p2p, cosign_send, new_tributary_recv)); - new_tributary_send - .send(TributaryEvent::NewTributary(ActiveTributary { spec: spec.clone(), tributary })) - .map_err(|_| "failed to send ActiveTributary") - .unwrap(); - tributary_senders.push(new_tributary_send); - } - let tributaries = tributary_arcs; - - // After two blocks of time, we should have a new block - // We don't wait one block of time as we may have missed the chance for this block - sleep(Duration::from_secs((2 * Tributary::::block_time()).into())) - .await; - let tip = tributaries[0].tip().await; - assert!(tip != spec.genesis()); - - // Sleep one second to make sure this block propagates - sleep(Duration::from_secs(1)).await; - // Make sure every tributary has it - for tributary in &tributaries { - assert!(tributary.reader().block(&tip).is_some()); - } - - // Then after another block of time, we should have yet another new block - sleep(Duration::from_secs(Tributary::::block_time().into())).await; - let new_tip = tributaries[0].tip().await; - assert!(new_tip != tip); - sleep(Duration::from_secs(1)).await; - for tributary in tributaries { - assert!(tributary.reader().block(&new_tip).is_some()); - } -} diff --git a/coordinator/src/tests/tributary/mod.rs b/coordinator/src/tests/tributary/mod.rs deleted file mode 100644 index 340809e1..00000000 --- a/coordinator/src/tests/tributary/mod.rs +++ /dev/null @@ -1,245 +0,0 @@ -use core::fmt::Debug; - -use rand_core::{RngCore, OsRng}; - -use ciphersuite::{group::Group, Ciphersuite, Ristretto}; - -use scale::{Encode, Decode}; -use serai_client::{ - primitives::Signature, - validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ValidatorSet, KeyPair}, -}; -use processor_messages::coordinator::SubstrateSignableId; - -use tributary::{ReadWrite, tests::random_signed_with_nonce}; - -use crate::tributary::{Label, SignData, Transaction, scanner::PublishSeraiTransaction}; - -mod chain; -pub use chain::*; - -mod tx; - -mod dkg; -// TODO: Test the other transactions - -mod handle_p2p; -mod sync; - -#[async_trait::async_trait] -impl PublishSeraiTransaction for () { - async fn publish_set_keys( - &self, - _db: &(impl Sync + serai_db::Get), - _set: ValidatorSet, - _key_pair: KeyPair, - _signature_participants: bitvec::vec::BitVec, - _signature: Signature, - ) { - panic!("publish_set_keys was called in test") - } -} - -fn random_u32(rng: &mut R) -> u32 { - u32::try_from(rng.next_u64() >> 32).unwrap() -} - -fn random_vec(rng: &mut R, limit: usize) -> Vec { - let len = usize::try_from(rng.next_u64() % u64::try_from(limit).unwrap()).unwrap(); - let mut res = vec![0; len]; - rng.fill_bytes(&mut res); - res -} - -fn random_sign_data( - rng: &mut R, - plan: Id, - label: Label, -) -> SignData { - SignData { - plan, - attempt: random_u32(&mut OsRng), - label, - - data: { - let mut res = vec![]; - for _ in 0 ..= (rng.next_u64() % 255) { - res.push(random_vec(&mut OsRng, 512)); - } - res - }, - - signed: random_signed_with_nonce(&mut OsRng, label.nonce()), - } -} - -fn test_read_write(value: &RW) { - assert_eq!(value, &RW::read::<&[u8]>(&mut value.serialize().as_ref()).unwrap()); -} - 
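// For illustration — a worked instance of the size estimate in the `tx_size_limit` test below,
// assuming MAX_KEY_SHARES_PER_SET = 150 and MAX_KEY_LEN = 96 (values assumed for illustration;
// the test itself only relies on the symbolic formula):
//
//   max_dkg_coefficients      = ceil((150 * 2) / 3) + 1               = 101
//   elements_outside_of_proof = 101 + ((2 + 1) * 150)                 = 551
//   vector_commitments        = (2 * 101) + (2 * 150)                 = 502
//   t_commitments             = 2 + (2 * 502)                         = 1006
//   proof_elements            =                                         30
//   handwaved_dkg_size        = ((551 + 502 + 1006 + 30) * 96) + 1024 = 201_568 bytes
//
// so the final assertion requires TRANSACTION_SIZE_LIMIT >= 2 * 201_568 = 403_136 bytes under
// these assumptions.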
-#[test] -fn tx_size_limit() { - use serai_client::validator_sets::primitives::MAX_KEY_LEN; - - use tributary::TRANSACTION_SIZE_LIMIT; - - let max_dkg_coefficients = (MAX_KEY_SHARES_PER_SET * 2).div_ceil(3) + 1; - // n coefficients - // 2 ECDH values per recipient, and the encrypted share - let elements_outside_of_proof = max_dkg_coefficients + ((2 + 1) * MAX_KEY_SHARES_PER_SET); - // Then Pedersen Vector Commitments for each DH done, and the associated overhead in the proof - // It's handwaved as one commitment per DH, where we do 2 per coefficient and 1 for the explicit - // ECDHs - let vector_commitments = (2 * max_dkg_coefficients) + (2 * MAX_KEY_SHARES_PER_SET); - // Then we have commitments to the `t` polynomial of length 2 + 2 nc, where nc is the amount of - // commitments - let t_commitments = 2 + (2 * vector_commitments); - // The remainder of the proof should be ~30 elements - let proof_elements = 30; - - let handwaved_dkg_size = - ((elements_outside_of_proof + vector_commitments + t_commitments + proof_elements) * - MAX_KEY_LEN) + - 1024; - // Further scale by two in case of any errors in the above - assert!(u32::try_from(TRANSACTION_SIZE_LIMIT).unwrap() >= (2 * handwaved_dkg_size)); -} - -#[test] -fn serialize_sign_data() { - fn test_read_write(value: &SignData) { - let mut buf = vec![]; - value.write(&mut buf).unwrap(); - assert_eq!(value, &SignData::read(&mut buf.as_slice()).unwrap()) - } - - let mut plan = [0; 3]; - OsRng.fill_bytes(&mut plan); - test_read_write(&random_sign_data::<_, _>( - &mut OsRng, - plan, - if (OsRng.next_u64() % 2) == 0 { Label::Preprocess } else { Label::Share }, - )); - let mut plan = [0; 5]; - OsRng.fill_bytes(&mut plan); - test_read_write(&random_sign_data::<_, _>( - &mut OsRng, - plan, - if (OsRng.next_u64() % 2) == 0 { Label::Preprocess } else { Label::Share }, - )); - let mut plan = [0; 8]; - OsRng.fill_bytes(&mut plan); - test_read_write(&random_sign_data::<_, _>( - &mut OsRng, - plan, - if (OsRng.next_u64() % 2) == 0 { Label::Preprocess } else { Label::Share }, - )); - let mut plan = [0; 24]; - OsRng.fill_bytes(&mut plan); - test_read_write(&random_sign_data::<_, _>( - &mut OsRng, - plan, - if (OsRng.next_u64() % 2) == 0 { Label::Preprocess } else { Label::Share }, - )); -} - -#[test] -fn serialize_transaction() { - test_read_write(&Transaction::RemoveParticipant { - participant: ::G::random(&mut OsRng), - signed: random_signed_with_nonce(&mut OsRng, 0), - }); - - test_read_write(&Transaction::DkgParticipation { - participation: random_vec(&mut OsRng, 4096), - signed: random_signed_with_nonce(&mut OsRng, 0), - }); - - test_read_write(&Transaction::DkgConfirmationNonces { - attempt: random_u32(&mut OsRng), - confirmation_nonces: { - let mut nonces = [0; 64]; - OsRng.fill_bytes(&mut nonces); - nonces - }, - signed: random_signed_with_nonce(&mut OsRng, 0), - }); - - test_read_write(&Transaction::DkgConfirmationShare { - attempt: random_u32(&mut OsRng), - confirmation_share: { - let mut share = [0; 32]; - OsRng.fill_bytes(&mut share); - share - }, - signed: random_signed_with_nonce(&mut OsRng, 1), - }); - - { - let mut block = [0; 32]; - OsRng.fill_bytes(&mut block); - test_read_write(&Transaction::CosignSubstrateBlock(block)); - } - - { - let mut block = [0; 32]; - OsRng.fill_bytes(&mut block); - let batch = u32::try_from(OsRng.next_u64() >> 32).unwrap(); - test_read_write(&Transaction::Batch { block, batch }); - } - test_read_write(&Transaction::SubstrateBlock(OsRng.next_u64())); - - { - let batch = u32::try_from(OsRng.next_u64() >> 
32).unwrap(); - test_read_write(&Transaction::SubstrateSign(random_sign_data( - &mut OsRng, - SubstrateSignableId::Batch(batch), - Label::Preprocess, - ))); - } - { - let batch = u32::try_from(OsRng.next_u64() >> 32).unwrap(); - test_read_write(&Transaction::SubstrateSign(random_sign_data( - &mut OsRng, - SubstrateSignableId::Batch(batch), - Label::Share, - ))); - } - - { - let mut plan = [0; 32]; - OsRng.fill_bytes(&mut plan); - test_read_write(&Transaction::Sign(random_sign_data(&mut OsRng, plan, Label::Preprocess))); - } - { - let mut plan = [0; 32]; - OsRng.fill_bytes(&mut plan); - test_read_write(&Transaction::Sign(random_sign_data(&mut OsRng, plan, Label::Share))); - } - - { - let mut plan = [0; 32]; - OsRng.fill_bytes(&mut plan); - let mut tx_hash = vec![0; (OsRng.next_u64() % 64).try_into().unwrap()]; - OsRng.fill_bytes(&mut tx_hash); - test_read_write(&Transaction::SignCompleted { - plan, - tx_hash, - first_signer: random_signed_with_nonce(&mut OsRng, 2).signer, - signature: random_signed_with_nonce(&mut OsRng, 2).signature, - }); - } - - test_read_write(&Transaction::SlashReport( - { - let amount = - usize::try_from(OsRng.next_u64() % u64::from(MAX_KEY_SHARES_PER_SET - 1)).unwrap(); - let mut points = vec![]; - for _ in 0 .. amount { - points.push((OsRng.next_u64() >> 32).try_into().unwrap()); - } - points - }, - random_signed_with_nonce(&mut OsRng, 0), - )); -} diff --git a/coordinator/src/tests/tributary/sync.rs b/coordinator/src/tests/tributary/sync.rs deleted file mode 100644 index a0b68839..00000000 --- a/coordinator/src/tests/tributary/sync.rs +++ /dev/null @@ -1,165 +0,0 @@ -use core::time::Duration; -use std::{sync::Arc, collections::HashSet}; - -use rand_core::OsRng; - -use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; - -use tokio::{ - sync::{mpsc, broadcast}, - time::sleep, -}; - -use serai_db::MemDb; - -use tributary::Tributary; - -use crate::{ - tributary::Transaction, - ActiveTributary, TributaryEvent, - p2p::{heartbeat_tributaries_task, handle_p2p_task}, - tests::{ - LocalP2p, - tributary::{new_keys, new_spec, new_tributaries}, - }, -}; - -#[tokio::test] -async fn sync_test() { - let mut keys = new_keys(&mut OsRng); - let spec = new_spec(&mut OsRng, &keys); - // Ensure this can have a node fail - assert!(spec.n() > spec.t()); - - let mut tributaries = new_tributaries(&keys, &spec) - .await - .into_iter() - .map(|(_, p2p, tributary)| (p2p, tributary)) - .collect::>(); - - // Keep a Tributary back, effectively having it offline - let syncer_key = keys.pop().unwrap(); - let (syncer_p2p, syncer_tributary) = tributaries.pop().unwrap(); - - // Have the rest form a P2P net - let mut tributary_senders = vec![]; - let mut tributary_arcs = vec![]; - let mut p2p_threads = vec![]; - for (p2p, tributary) in tributaries.drain(..) 
{ - let tributary = Arc::new(tributary); - tributary_arcs.push(tributary.clone()); - let (new_tributary_send, new_tributary_recv) = broadcast::channel(5); - let (cosign_send, _) = mpsc::unbounded_channel(); - let thread = tokio::spawn(handle_p2p_task(p2p, cosign_send, new_tributary_recv)); - new_tributary_send - .send(TributaryEvent::NewTributary(ActiveTributary { spec: spec.clone(), tributary })) - .map_err(|_| "failed to send ActiveTributary") - .unwrap(); - tributary_senders.push(new_tributary_send); - p2p_threads.push(thread); - } - let tributaries = tributary_arcs; - - // After four blocks of time, we should have a new block - // We don't wait one block of time as we may have missed the chance for the first block - // We don't wait two blocks because we may have missed the chance, and then had a failure to - // propose by our 'offline' validator, which would cause the Tendermint round time to increase, - // requiring a longer delay - let block_time = u64::from(Tributary::::block_time()); - sleep(Duration::from_secs(4 * block_time)).await; - let tip = tributaries[0].tip().await; - assert!(tip != spec.genesis()); - - // Sleep one second to make sure this block propagates - sleep(Duration::from_secs(1)).await; - // Make sure every tributary has it - for tributary in &tributaries { - assert!(tributary.reader().block(&tip).is_some()); - } - - // Now that we've confirmed the other tributaries formed a net without issue, drop the syncer's - // pending P2P messages - syncer_p2p.1.write().await.1.last_mut().unwrap().clear(); - - // Have it join the net - let syncer_key = Ristretto::generator() * *syncer_key; - let syncer_tributary = Arc::new(syncer_tributary); - let (syncer_tributary_send, syncer_tributary_recv) = broadcast::channel(5); - let (cosign_send, _) = mpsc::unbounded_channel(); - tokio::spawn(handle_p2p_task(syncer_p2p.clone(), cosign_send, syncer_tributary_recv)); - syncer_tributary_send - .send(TributaryEvent::NewTributary(ActiveTributary { - spec: spec.clone(), - tributary: syncer_tributary.clone(), - })) - .map_err(|_| "failed to send ActiveTributary to syncer") - .unwrap(); - - // It shouldn't automatically catch up. 
If it somehow was, our test would be broken - // Sanity check this - let tip = tributaries[0].tip().await; - // Wait until a new block occurs - sleep(Duration::from_secs(3 * block_time)).await; - // Make sure a new block actually occurred - assert!(tributaries[0].tip().await != tip); - // Make sure the new block alone didn't trigger catching up - assert_eq!(syncer_tributary.tip().await, spec.genesis()); - - // Start the heartbeat protocol - let (syncer_heartbeat_tributary_send, syncer_heartbeat_tributary_recv) = broadcast::channel(5); - tokio::spawn(heartbeat_tributaries_task(syncer_p2p, syncer_heartbeat_tributary_recv)); - syncer_heartbeat_tributary_send - .send(TributaryEvent::NewTributary(ActiveTributary { - spec: spec.clone(), - tributary: syncer_tributary.clone(), - })) - .map_err(|_| "failed to send ActiveTributary to heartbeat") - .unwrap(); - - // The heartbeat is once every 10 blocks, with some limitations - sleep(Duration::from_secs(20 * block_time)).await; - assert!(syncer_tributary.tip().await != spec.genesis()); - - // Verify it synced to the tip - let syncer_tip = { - let tributary = &tributaries[0]; - - let tip = tributary.tip().await; - let syncer_tip = syncer_tributary.tip().await; - // Allow a one block tolerance in case of race conditions - assert!( - HashSet::from([tip, tributary.reader().block(&tip).unwrap().parent()]).contains(&syncer_tip) - ); - syncer_tip - }; - - sleep(Duration::from_secs(block_time)).await; - - // Verify it's now keeping up - assert!(syncer_tributary.tip().await != syncer_tip); - - // Verify it's now participating in consensus - // Because only `t` validators are used in a commit, take n - t nodes offline - // leaving only `t` nodes. Which should force it to participate in the consensus - // of next blocks. 
- let spares = usize::from(spec.n() - spec.t()); - for thread in p2p_threads.iter().take(spares) { - thread.abort(); - } - - // wait for a block - sleep(Duration::from_secs(block_time)).await; - - if syncer_tributary - .reader() - .parsed_commit(&syncer_tributary.tip().await) - .unwrap() - .validators - .iter() - .any(|signer| signer == &syncer_key.to_bytes()) - { - return; - } - - panic!("synced tributary didn't start participating in consensus"); -} diff --git a/coordinator/src/tests/tributary/tx.rs b/coordinator/src/tests/tributary/tx.rs deleted file mode 100644 index 9b948f36..00000000 --- a/coordinator/src/tests/tributary/tx.rs +++ /dev/null @@ -1,62 +0,0 @@ -use core::time::Duration; - -use rand_core::{RngCore, OsRng}; - -use tokio::time::sleep; - -use serai_db::MemDb; - -use tributary::{ - transaction::Transaction as TransactionTrait, Transaction as TributaryTransaction, Tributary, -}; - -use crate::{ - tributary::Transaction, - tests::{ - LocalP2p, - tributary::{new_keys, new_spec, new_tributaries, run_tributaries, wait_for_tx_inclusion}, - }, -}; - -#[tokio::test] -async fn tx_test() { - let keys = new_keys(&mut OsRng); - let spec = new_spec(&mut OsRng, &keys); - - let tributaries = new_tributaries(&keys, &spec) - .await - .into_iter() - .map(|(_, p2p, tributary)| (p2p, tributary)) - .collect::>(); - - // Run the tributaries in the background - tokio::spawn(run_tributaries(tributaries.clone())); - - // Send a TX from a random Tributary - let sender = - usize::try_from(OsRng.next_u64() % u64::try_from(tributaries.len()).unwrap()).unwrap(); - let key = keys[sender].clone(); - - let block_before_tx = tributaries[sender].1.tip().await; - // Create the TX with a null signature so we can get its sig hash - let mut tx = Transaction::DkgParticipation { - participation: { - let mut participation = vec![0; 4096]; - OsRng.fill_bytes(&mut participation); - participation - }, - signed: Transaction::empty_signed(), - }; - tx.sign(&mut OsRng, spec.genesis(), &key); - - assert_eq!(tributaries[sender].1.add_transaction(tx.clone()).await, Ok(true)); - let included_in = wait_for_tx_inclusion(&tributaries[sender].1, block_before_tx, tx.hash()).await; - // Also sleep for the block time to ensure the block is synced around before we run checks on it - sleep(Duration::from_secs(Tributary::::block_time().into())).await; - - // All tributaries should have acknowledged this transaction in a block - for (_, tributary) in tributaries { - let block = tributary.reader().block(&included_in).unwrap(); - assert_eq!(block.transactions, vec![TributaryTransaction::Application(tx.clone())]); - } -} diff --git a/coordinator/src/tributary/db.rs b/coordinator/src/tributary/db.rs index 095f18af..008cd5c8 100644 --- a/coordinator/src/tributary/db.rs +++ b/coordinator/src/tributary/db.rs @@ -3,186 +3,344 @@ use std::collections::HashMap; use scale::Encode; use borsh::{BorshSerialize, BorshDeserialize}; -use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; -use frost::Participant; +use serai_client::{primitives::SeraiAddress, validator_sets::primitives::ValidatorSet}; -use serai_client::validator_sets::primitives::{KeyPair, ValidatorSet}; +use processor_messages::sign::VariantSignId; -use processor_messages::coordinator::SubstrateSignableId; +use serai_db::*; -pub use serai_db::*; - -use tributary::ReadWrite; - -use crate::tributary::{Label, Transaction}; +use crate::tributary::transaction::SigningProtocolRound; +/// A topic within the database which the group participates in #[derive(Clone, Copy, PartialEq, Eq, 
Debug, Encode, BorshSerialize, BorshDeserialize)]
 pub enum Topic {
-  DkgConfirmation,
-  SubstrateSign(SubstrateSignableId),
-  Sign([u8; 32]),
+  /// Vote to remove a participant
+  RemoveParticipant { participant: SeraiAddress },
+
+  // DkgParticipation isn't represented here as participations are immediately sent to the
+  // processor, not accumulated within this database
+  /// Participation in the signing protocol to confirm the DKG results on Substrate
+  DkgConfirmation { attempt: u32, label: SigningProtocolRound },
+
+  /// The local view of the SlashReport, to be aggregated into the final SlashReport
+  SlashReport,
+
+  /// Participation in a signing protocol
+  Sign { id: VariantSignId, attempt: u32, label: SigningProtocolRound },
 }

-// A struct to refer to a piece of data all validators will presumably provide a value for.
-#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode)]
-pub struct DataSpecification {
-  pub topic: Topic,
-  pub label: Label,
-  pub attempt: u32,
+enum Participating {
+  Participated,
+  Everyone,
 }

-pub enum DataSet {
-  Participating(HashMap<Participant, Vec<u8>>),
-  NotParticipating,
+impl Topic {
+  // The topic used by the next attempt of this protocol
+  fn next_attempt_topic(self) -> Option<Topic> {
+    #[allow(clippy::match_same_arms)]
+    match self {
+      Topic::RemoveParticipant { .. } => None,
+      Topic::DkgConfirmation { attempt, label: _ } => Some(Topic::DkgConfirmation {
+        attempt: attempt + 1,
+        label: SigningProtocolRound::Preprocess,
+      }),
+      Topic::SlashReport { .. } => None,
+      Topic::Sign { id, attempt, label: _ } => {
+        Some(Topic::Sign { id, attempt: attempt + 1, label: SigningProtocolRound::Preprocess })
+      }
+    }
+  }
+
+  // The topic for the re-attempt to schedule
+  fn reattempt_topic(self) -> Option<(u32, Topic)> {
+    #[allow(clippy::match_same_arms)]
+    match self {
+      Topic::RemoveParticipant { .. } => None,
+      Topic::DkgConfirmation { attempt, label } => match label {
+        SigningProtocolRound::Preprocess => {
+          let attempt = attempt + 1;
+          Some((
+            attempt,
+            Topic::DkgConfirmation { attempt, label: SigningProtocolRound::Preprocess },
+          ))
+        }
+        SigningProtocolRound::Share => None,
+      },
+      Topic::SlashReport { .. } => None,
+      Topic::Sign { id, attempt, label } => match label {
+        SigningProtocolRound::Preprocess => {
+          let attempt = attempt + 1;
+          Some((attempt, Topic::Sign { id, attempt, label: SigningProtocolRound::Preprocess }))
+        }
+        SigningProtocolRound::Share => None,
+      },
+    }
+  }
+
+  /// The topic which precedes this topic as a prerequisite
+  ///
+  /// The preceding topic must define this topic as succeeding
+  fn preceding_topic(self) -> Option<Topic> {
+    #[allow(clippy::match_same_arms)]
+    match self {
+      Topic::RemoveParticipant { .. } => None,
+      Topic::DkgConfirmation { attempt, label } => match label {
+        SigningProtocolRound::Preprocess => None,
+        SigningProtocolRound::Share => {
+          Some(Topic::DkgConfirmation { attempt, label: SigningProtocolRound::Preprocess })
+        }
+      },
+      Topic::SlashReport { .. } => None,
+      Topic::Sign { id, attempt, label } => match label {
+        SigningProtocolRound::Preprocess => None,
+        SigningProtocolRound::Share => {
+          Some(Topic::Sign { id, attempt, label: SigningProtocolRound::Preprocess })
+        }
+      },
+    }
+  }
+
+  /// The topic which succeeds this topic, with this topic as a prerequisite
+  ///
+  /// The succeeding topic must define this topic as preceding
+  fn succeeding_topic(self) -> Option<Topic> {
+    #[allow(clippy::match_same_arms)]
+    match self {
+      Topic::RemoveParticipant { .. } => None,
+      Topic::DkgConfirmation { attempt, label } => match label {
+        SigningProtocolRound::Preprocess => {
+          Some(Topic::DkgConfirmation { attempt, label: SigningProtocolRound::Share })
+        }
+        SigningProtocolRound::Share => None,
+      },
+      Topic::SlashReport { .. } => None,
+      Topic::Sign { id, attempt, label } => match label {
+        SigningProtocolRound::Preprocess => {
+          Some(Topic::Sign { id, attempt, label: SigningProtocolRound::Share })
+        }
+        SigningProtocolRound::Share => None,
+      },
+    }
+  }
+
+  fn requires_whitelisting(&self) -> bool {
+    #[allow(clippy::match_same_arms)]
+    match self {
+      // We don't require whitelisting to remove a participant
+      Topic::RemoveParticipant { .. } => false,
+      // We don't require whitelisting for the first attempt, solely the re-attempts
+      Topic::DkgConfirmation { attempt, .. } => *attempt != 0,
+      // We don't require whitelisting for the slash report
+      Topic::SlashReport { .. } => false,
+      // We do require whitelisting for every sign protocol
+      Topic::Sign { .. } => true,
+    }
+  }
+
+  fn required_participation(&self, n: u64) -> u64 {
+    let _ = self;
+    // All of our topics require 2/3rds participation
+    ((2 * n) / 3) + 1
+  }
+
+  fn participating(&self) -> Participating {
+    #[allow(clippy::match_same_arms)]
+    match self {
+      Topic::RemoveParticipant { .. } => Participating::Everyone,
+      Topic::DkgConfirmation { .. } => Participating::Participated,
+      Topic::SlashReport { .. } => Participating::Everyone,
+      Topic::Sign { .. } => Participating::Participated,
+    }
+  }
 }
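// For illustration — a sketch, not part of this patch, walking the state machine defined by
// the four methods above for a signing topic. `walkthrough` is a hypothetical function:
//
//   fn walkthrough(id: VariantSignId) {
//     let preprocess = Topic::Sign { id, attempt: 0, label: SigningProtocolRound::Preprocess };
//     // Crossing the participation threshold recognizes the share round...
//     assert_eq!(
//       preprocess.succeeding_topic(),
//       Some(Topic::Sign { id, attempt: 0, label: SigningProtocolRound::Share })
//     );
//     // ...and queues a re-attempt, which later starts over from a fresh preprocess round
//     assert_eq!(
//       preprocess.reattempt_topic(),
//       Some((1, Topic::Sign { id, attempt: 1, label: SigningProtocolRound::Preprocess }))
//     );
//     // The share round names the preprocess round of the same attempt as its prerequisite
//     assert_eq!(preprocess.succeeding_topic().unwrap().preceding_topic(), Some(preprocess));
//   }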
-pub enum Accumulation {
-  Ready(DataSet),
-  NotReady,
+/// The resulting data set from an accumulation
+pub enum DataSet<D: Borshy> {
+  /// Accumulating this did not produce a data set to act on
+  /// (non-existent, not ready, prior handled, not participating, etc.)
+  None,
+  /// The data set was ready and we are participating in this event
+  Participating(HashMap<SeraiAddress, D>),
 }

-// TODO: Move from genesis to set for indexing
+trait Borshy: BorshSerialize + BorshDeserialize {}
+impl<T: BorshSerialize + BorshDeserialize> Borshy for T {}
+
 create_db!(
-  Tributary {
-    SeraiBlockNumber: (hash: [u8; 32]) -> u64,
-    SeraiDkgCompleted: (set: ValidatorSet) -> [u8; 32],
+  CoordinatorTributary {
+    // The last handled tributary block's (number, hash)
+    LastHandledTributaryBlock: (set: ValidatorSet) -> (u64, [u8; 32]),

-    TributaryBlockNumber: (block: [u8; 32]) -> u32,
-    LastHandledBlock: (genesis: [u8; 32]) -> [u8; 32],
+    // The slash points a validator has accrued, with u64::MAX representing a fatal slash.
+    SlashPoints: (set: ValidatorSet, validator: SeraiAddress) -> u64,

-    // TODO: Revisit the point of this
-    FatalSlashes: (genesis: [u8; 32]) -> Vec<[u8; 32]>,
-    // TODO: Combine these two
-    FatallySlashed: (genesis: [u8; 32], account: [u8; 32]) -> (),
-    SlashPoints: (genesis: [u8; 32], account: [u8; 32]) -> u32,
+    // The latest Substrate block to cosign.
+    LatestSubstrateBlockToCosign: (set: ValidatorSet) -> [u8; 32],

-    VotedToRemove: (genesis: [u8; 32], voter: [u8; 32], to_remove: [u8; 32]) -> (),
-    VotesToRemove: (genesis: [u8; 32], to_remove: [u8; 32]) -> u16,
+    // The weight accumulated for a topic.
+    AccumulatedWeight: (set: ValidatorSet, topic: Topic) -> u64,
+    // The entries accumulated for a topic, by validator.
+    Accumulated: <D: Borshy>(set: ValidatorSet, topic: Topic, validator: SeraiAddress) -> D,

-    AttemptDb: (genesis: [u8; 32], topic: &Topic) -> u32,
-    ReattemptDb: (genesis: [u8; 32], block: u32) -> Vec<Topic>,
-    DataReceived: (genesis: [u8; 32], data_spec: &DataSpecification) -> u16,
-    DataDb: (genesis: [u8; 32], data_spec: &DataSpecification, signer_bytes: &[u8; 32]) -> Vec<u8>,
-
-    DkgParticipation: (genesis: [u8; 32], from: u16) -> Vec<u8>,
-    ConfirmationNonces: (genesis: [u8; 32], attempt: u32) -> HashMap<Participant, Vec<u8>>,
-    DkgKeyPair: (genesis: [u8; 32]) -> KeyPair,
-
-    PlanIds: (genesis: &[u8], block: u64) -> Vec<[u8; 32]>,
-
-    SignedTransactionDb: (order: &[u8], nonce: u32) -> Vec<u8>,
-
-    SlashReports: (genesis: [u8; 32], signer: [u8; 32]) -> Vec<u32>,
-    SlashReported: (genesis: [u8; 32]) -> u16,
-    SlashReportCutOff: (genesis: [u8; 32]) -> u64,
-    SlashReport: (set: ValidatorSet) -> Vec<([u8; 32], u32)>,
+    // Topics to be recognized as of a certain block number due to the reattempt protocol.
+    Reattempt: (set: ValidatorSet, block_number: u64) -> Vec<Topic>,
   }
 );

-impl FatalSlashes {
-  pub fn get_as_keys(getter: &impl Get, genesis: [u8; 32]) -> Vec<<Ristretto as Ciphersuite>::G> {
-    FatalSlashes::get(getter, genesis)
-      .unwrap_or(vec![])
-      .iter()
-      .map(|key| <Ristretto as Ciphersuite>::G::from_bytes(key).unwrap())
-      .collect::<Vec<_>>()
+pub struct TributaryDb;
+impl TributaryDb {
+  pub fn last_handled_tributary_block(
+    getter: &impl Get,
+    set: ValidatorSet,
+  ) -> Option<(u64, [u8; 32])> {
+    LastHandledTributaryBlock::get(getter, set)
   }
-}
-
-impl FatallySlashed {
-  pub fn set_fatally_slashed(txn: &mut impl DbTxn, genesis: [u8; 32], account: [u8; 32]) {
-    Self::set(txn, genesis, account, &());
-    let mut existing = FatalSlashes::get(txn, genesis).unwrap_or_default();
-
-    // Don't append if we already have it, which can occur upon multiple faults
-    if existing.iter().any(|existing| existing == &account) {
-      return;
-    }
-
-    existing.push(account);
-    FatalSlashes::set(txn, genesis, &existing);
-  }
-}
-
-impl AttemptDb {
-  pub fn recognize_topic(txn: &mut impl DbTxn, genesis: [u8; 32], topic: Topic) {
-    Self::set(txn, genesis, &topic, &0u32);
-  }
-
-  pub fn start_next_attempt(txn: &mut impl DbTxn, genesis: [u8; 32], topic: Topic) -> u32 {
-    let next =
-      Self::attempt(txn, genesis, topic).expect("starting next attempt for unknown topic") + 1;
-    Self::set(txn, genesis, &topic, &next);
-    next
-  }
-
-  pub fn attempt(getter: &impl Get, genesis: [u8; 32], topic: Topic) -> Option<u32> {
-    let attempt = Self::get(getter, genesis, &topic);
-    // Don't require explicit recognition of the DkgConfirmation topic as it starts when the chain
-    // does
-    // Don't require explicit recognition of the SlashReport topic as it isn't a DoS risk and it
-    // should always happen (eventually)
-    if attempt.is_none() &&
-      ((topic == Topic::DkgConfirmation) ||
-        (topic == Topic::SubstrateSign(SubstrateSignableId::SlashReport)))
-    {
-      return Some(0);
-    }
-    attempt
-  }
-}
-
-impl ReattemptDb {
-  pub fn schedule_reattempt(
+  pub fn set_last_handled_tributary_block(
     txn: &mut impl DbTxn,
-    genesis: [u8; 32],
-    current_block_number: u32,
-    topic: Topic,
+    set: ValidatorSet,
+    block_number: u64,
+    block_hash: [u8; 32],
   ) {
-    // 5 minutes
-    #[cfg(not(feature = "longer-reattempts"))]
-    const BASE_REATTEMPT_DELAY: u32 = (5 * 60 * 1000) / tributary::tendermint::TARGET_BLOCK_TIME;
-
-    // 10 minutes, intended for latent environments like the GitHub CI
-    #[cfg(feature = "longer-reattempts")]
-    const BASE_REATTEMPT_DELAY: u32 = (10 * 60 * 1000) / tributary::tendermint::TARGET_BLOCK_TIME;
-
-    // 5 minutes for attempts 0 ..= 2, 10 minutes for attempts 3 ..= 5, 15 minutes for attempts > 5
-    // Assumes no event will take longer than 15 minutes, yet grows the time in case there are
-    // network bandwidth issues
-    let reattempt_delay = BASE_REATTEMPT_DELAY *
-      ((AttemptDb::attempt(txn, genesis, topic)
-        .expect("scheduling re-attempt for unknown topic") /
-        3) +
-        1)
-      .min(3);
-    let upon_block = current_block_number + reattempt_delay;
-
-    let mut reattempts = Self::get(txn, genesis, upon_block).unwrap_or(vec![]);
-    reattempts.push(topic);
-    Self::set(txn, genesis, upon_block, &reattempts);
+    LastHandledTributaryBlock::set(txn, set, &(block_number, block_hash));
   }

-  pub fn take(txn: &mut impl DbTxn, genesis: [u8; 32], block_number: u32) -> Vec<Topic> {
-    let res = Self::get(txn, genesis, block_number).unwrap_or(vec![]);
-    if !res.is_empty() {
-      Self::del(txn, genesis, block_number);
+  pub fn recognize_topic(txn: &mut impl DbTxn, set: ValidatorSet, topic: Topic) {
+    AccumulatedWeight::set(txn, set, topic, &0);
+  }
+
+  pub fn start_of_block(txn: &mut impl DbTxn, set: ValidatorSet, block_number: u64) {
+    for topic in Reattempt::take(txn, set, block_number).unwrap_or(vec![]) {
+      // TODO: Slash all people who preprocessed but didn't share
+      Self::recognize_topic(txn, set, topic);
     }
-    res
   }
-}

-impl SignedTransactionDb {
-  pub fn take_signed_transaction(
+  pub fn fatal_slash(
     txn: &mut impl DbTxn,
-    order: &[u8],
-    nonce: u32,
-  ) -> Option<Transaction> {
-    let res = SignedTransactionDb::get(txn, order, nonce)
-      .map(|bytes| Transaction::read(&mut bytes.as_slice()).unwrap());
-    if res.is_some() {
-      Self::del(txn, order, nonce);
+    set: ValidatorSet,
+    validator: SeraiAddress,
+    reason: &str,
+  ) {
+    log::warn!("{validator} fatally slashed: {reason}");
+    SlashPoints::set(txn, set, validator, &u64::MAX);
+  }
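// For illustration — a minimal sketch, not part of this patch, of the sentinel convention
// above: u64::MAX slash points mark a fatal slash, so one lookup answers both questions.
// Assumes a `txn`, `set`, and `validator` are in scope:
//
//   TributaryDb::fatal_slash(txn, set, validator, "example reason");
//   assert!(TributaryDb::is_fatally_slashed(txn, set, validator));
//
// Any code which increments SlashPoints presumably needs to saturate below u64::MAX so an
// accumulation of non-fatal slashes never aliases a fatal one.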
+
+  pub fn is_fatally_slashed(getter: &impl Get, set: ValidatorSet, validator: SeraiAddress) -> bool {
+    SlashPoints::get(getter, set, validator).unwrap_or(0) == u64::MAX
+  }
+
+  #[allow(clippy::too_many_arguments)]
+  pub fn accumulate<D: Borshy>(
+    txn: &mut impl DbTxn,
+    set: ValidatorSet,
+    validators: &[SeraiAddress],
+    total_weight: u64,
+    block_number: u64,
+    topic: Topic,
+    validator: SeraiAddress,
+    validator_weight: u64,
+    data: &D,
+  ) -> DataSet<D> {
+    // This function will only be called once for a (validator, topic) tuple due to how we handle
+    // nonces on transactions (allocated deterministically per topic)
+
+    let accumulated_weight = AccumulatedWeight::get(txn, set, topic);
+    if topic.requires_whitelisting() && accumulated_weight.is_none() {
+      Self::fatal_slash(txn, set, validator, "participated in unrecognized topic");
+      return DataSet::None;
+    }
+    let mut accumulated_weight = accumulated_weight.unwrap_or(0);
+
+    // Check if there's a preceding topic and, if so, that this validator participated in it
+    let preceding_topic = topic.preceding_topic();
+    if let Some(preceding_topic) = preceding_topic {
+      if Accumulated::<D>::get(txn, set, preceding_topic, validator).is_none() {
+        Self::fatal_slash(
+          txn,
+          set,
+          validator,
+          "participated in topic without participating in prior",
+        );
+        return DataSet::None;
+      }
+    }
+
+    // The complete lack of validation on the data by these NOPs opens the potential for spam here
+
+    // If we've already accumulated past the threshold, NOP
+    if accumulated_weight >= topic.required_participation(total_weight) {
+      return DataSet::None;
+    }
+    // If this is for an old attempt, NOP
+    if let Some(next_attempt_topic) = topic.next_attempt_topic() {
+      if AccumulatedWeight::get(txn, set, next_attempt_topic).is_some() {
+        return DataSet::None;
+      }
+    }
+
+    // Accumulate the data
+    accumulated_weight += validator_weight;
+    AccumulatedWeight::set(txn, set, topic, &accumulated_weight);
+    Accumulated::set(txn, set, topic, validator, data);
+
+    // Check if we now cross the weight threshold
+    if accumulated_weight >= topic.required_participation(total_weight) {
+      // Queue this for re-attempt after enough time passes
+      if let Some((attempt, reattempt_topic)) = topic.reattempt_topic() {
+        // 5 minutes
+        #[cfg(not(feature = "longer-reattempts"))]
+        const BASE_REATTEMPT_DELAY: u32 =
+          (5u32 * 60 * 1000).div_ceil(tributary::tendermint::TARGET_BLOCK_TIME);
+
+        // 10 minutes, intended for latent environments like the GitHub CI
+        #[cfg(feature = "longer-reattempts")]
+        const BASE_REATTEMPT_DELAY: u32 =
+          (10u32 * 60 * 1000).div_ceil(tributary::tendermint::TARGET_BLOCK_TIME);
+
+        // Linearly scale the time for the protocol with the attempt number
+        let blocks_till_reattempt = u64::from(attempt * BASE_REATTEMPT_DELAY);
+
+        let recognize_at = block_number + blocks_till_reattempt;
+        let mut queued = Reattempt::get(txn, set, recognize_at).unwrap_or(Vec::with_capacity(1));
+        queued.push(reattempt_topic);
+        Reattempt::set(txn, set, recognize_at, &queued);
+      }
+
+      // Register the succeeding topic
+      let succeeding_topic = topic.succeeding_topic();
+      if let Some(succeeding_topic) = succeeding_topic {
+        Self::recognize_topic(txn, set, succeeding_topic);
+      }
+
+      // Fetch and return all participations
+      let mut data_set = HashMap::with_capacity(validators.len());
+      for validator in validators {
+        if let Some(data) = Accumulated::<D>::get(txn, set, topic, *validator) {
+          // Clean this data up if there's not a succeeding topic
+          // If there is, we wait as the succeeding topic checks our participation in this topic
+          if succeeding_topic.is_none() {
+            Accumulated::<D>::del(txn, set, topic, *validator);
+          }
+          // If this *was* the succeeding topic, clean up the preceding topic's data
+          if let Some(preceding_topic) = preceding_topic {
+            Accumulated::<D>::del(txn, set, preceding_topic, *validator);
+          }
+          data_set.insert(*validator, data);
+        }
+      }
+      let participated = data_set.contains_key(&validator);
+      match topic.participating() {
+        Participating::Participated => {
+          if participated {
+            DataSet::Participating(data_set)
+          } else {
+            DataSet::None
+          }
+        }
+        Participating::Everyone => DataSet::Participating(data_set),
+      }
+    } else {
+      DataSet::None
     }
-    res
   }
 }
diff --git a/coordinator/src/tributary/handle.rs b/coordinator/src/tributary/handle.rs
deleted file mode 100644
index c5378cc7..00000000
--- a/coordinator/src/tributary/handle.rs
+++ /dev/null
@@ -1,554 +0,0 @@
-use core::ops::Deref;
-use std::collections::HashMap;
-
-use zeroize::Zeroizing;
-use rand_core::OsRng;
-
-use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
-use frost::dkg::Participant;
-
-use scale::{Encode, Decode};
-use serai_client::{Signature, validator_sets::primitives::KeyPair};
-
-use tributary::{Signed, TransactionKind, TransactionTrait};
-
-use processor_messages::{
-  key_gen::self,
-  coordinator::{self, SubstrateSignableId, SubstrateSignId},
-  sign::{self, SignId},
-};
-
-use serai_db::*;
-
-use crate::{
-  processors::Processors,
-  tributary::{
-    *,
-    signing_protocol::DkgConfirmer,
-    scanner::{
-      RecognizedIdType, RIDTrait, PublishSeraiTransaction, PTTTrait, TributaryBlockHandler,
-    },
-  },
-  P2p,
-};
-
-pub fn dkg_confirmation_nonces(
-  key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
-  spec: &TributarySpec,
-  txn: &mut impl DbTxn,
-  attempt: u32,
-) -> [u8; 64] {
-  DkgConfirmer::new(key, spec, txn, attempt).preprocess()
-} - -pub fn generated_key_pair( - txn: &mut D::Transaction<'_>, - genesis: [u8; 32], - key_pair: &KeyPair, -) { - DkgKeyPair::set(txn, genesis, key_pair); -} - -fn unflatten(spec: &TributarySpec, data: &mut HashMap>) { - for (validator, _) in spec.validators() { - let Some(range) = spec.i(validator) else { continue }; - let Some(all_segments) = data.remove(&range.start) else { - continue; - }; - let mut data_vec = Vec::<_>::decode(&mut all_segments.as_slice()).unwrap(); - for i in u16::from(range.start) .. u16::from(range.end) { - let i = Participant::new(i).unwrap(); - data.insert(i, data_vec.remove(0)); - } - } -} - -impl< - D: Db, - T: DbTxn, - Pro: Processors, - PST: PublishSeraiTransaction, - PTT: PTTTrait, - RID: RIDTrait, - P: P2p, - > TributaryBlockHandler<'_, D, T, Pro, PST, PTT, RID, P> -{ - fn accumulate( - &mut self, - data_spec: &DataSpecification, - signer: ::G, - data: &Vec, - ) -> Accumulation { - log::debug!("accumulating entry for {:?} attempt #{}", &data_spec.topic, &data_spec.attempt); - let genesis = self.spec.genesis(); - if DataDb::get(self.txn, genesis, data_spec, &signer.to_bytes()).is_some() { - panic!("accumulating data for a participant multiple times"); - } - let signer_shares = { - let signer_i = self.spec.i(signer).expect("transaction signer wasn't a member of the set"); - u16::from(signer_i.end) - u16::from(signer_i.start) - }; - - let prior_received = DataReceived::get(self.txn, genesis, data_spec).unwrap_or_default(); - let now_received = prior_received + signer_shares; - DataReceived::set(self.txn, genesis, data_spec, &now_received); - DataDb::set(self.txn, genesis, data_spec, &signer.to_bytes(), data); - - let received_range = (prior_received + 1) ..= now_received; - - // If 2/3rds of the network participated in this preprocess, queue it for an automatic - // re-attempt - if (data_spec.label == Label::Preprocess) && received_range.contains(&self.spec.t()) { - // Double check the attempt on this entry, as we don't want to schedule a re-attempt if this - // is an old entry - // This is an assert, not part of the if check, as old data shouldn't be here in the first - // place - assert_eq!(AttemptDb::attempt(self.txn, genesis, data_spec.topic), Some(data_spec.attempt)); - ReattemptDb::schedule_reattempt(self.txn, genesis, self.block_number, data_spec.topic); - } - - // If we have all the needed commitments/preprocesses/shares, tell the processor - if received_range.contains(&self.spec.t()) { - log::debug!( - "accumulation for entry {:?} attempt #{} is ready", - &data_spec.topic, - &data_spec.attempt - ); - - let mut data = HashMap::new(); - for validator in self.spec.validators().iter().map(|validator| validator.0) { - let Some(i) = self.spec.i(validator) else { continue }; - data.insert( - i.start, - if let Some(data) = DataDb::get(self.txn, genesis, data_spec, &validator.to_bytes()) { - data - } else { - continue; - }, - ); - } - - assert_eq!(data.len(), usize::from(self.spec.t())); - - // Remove our own piece of data, if we were involved - if let Some(i) = self.spec.i(Ristretto::generator() * self.our_key.deref()) { - if data.remove(&i.start).is_some() { - return Accumulation::Ready(DataSet::Participating(data)); - } - } - return Accumulation::Ready(DataSet::NotParticipating); - } - Accumulation::NotReady - } - - fn handle_data( - &mut self, - data_spec: &DataSpecification, - bytes: &Vec, - signed: &Signed, - ) -> Accumulation { - let genesis = self.spec.genesis(); - - let Some(curr_attempt) = AttemptDb::attempt(self.txn, genesis, data_spec.topic) else { 
- // Premature publication of a valid ID/publication of an invalid ID - self.fatal_slash(signed.signer.to_bytes(), "published data for ID without an attempt"); - return Accumulation::NotReady; - }; - - // If they've already published a TX for this attempt, slash - // This shouldn't be reachable since nonces were made inserted by the coordinator, yet it's a - // cheap check to leave in for safety - if DataDb::get(self.txn, genesis, data_spec, &signed.signer.to_bytes()).is_some() { - self.fatal_slash(signed.signer.to_bytes(), "published data multiple times"); - return Accumulation::NotReady; - } - - // If the attempt is lesser than the blockchain's, return - if data_spec.attempt < curr_attempt { - log::debug!( - "dated attempt published onto tributary for topic {:?} (used attempt {}, current {})", - data_spec.topic, - data_spec.attempt, - curr_attempt - ); - return Accumulation::NotReady; - } - // If the attempt is greater, this is a premature publication, full slash - if data_spec.attempt > curr_attempt { - self.fatal_slash( - signed.signer.to_bytes(), - "published data with an attempt which hasn't started", - ); - return Accumulation::NotReady; - } - - // TODO: We can also full slash if shares before all commitments, or share before the - // necessary preprocesses - - // TODO: If this is shares, we need to check they are part of the selected signing set - - // Accumulate this data - self.accumulate(data_spec, signed.signer, bytes) - } - - fn check_sign_data_len( - &mut self, - signer: ::G, - len: usize, - ) -> Result<(), ()> { - let signer_i = self.spec.i(signer).expect("signer wasn't a member of the set"); - if len != usize::from(u16::from(signer_i.end) - u16::from(signer_i.start)) { - self.fatal_slash( - signer.to_bytes(), - "signer published a distinct amount of sign data than they had shares", - ); - Err(())?; - } - Ok(()) - } - - // TODO: Don't call fatal_slash in here, return the party to fatal_slash to ensure no further - // execution occurs - pub(crate) async fn handle_application_tx(&mut self, tx: Transaction) { - let genesis = self.spec.genesis(); - - // Don't handle transactions from fatally slashed participants - // This prevents removed participants from sabotaging the removal signing sessions and so on - // TODO: Because fatally slashed participants can still publish onto the blockchain, they have - // a notable DoS ability - if let TransactionKind::Signed(_, signed) = tx.kind() { - if FatallySlashed::get(self.txn, genesis, signed.signer.to_bytes()).is_some() { - return; - } - } - - match tx { - Transaction::RemoveParticipant { participant, signed } => { - if self.spec.i(participant).is_none() { - self.fatal_slash(participant.to_bytes(), "RemoveParticipant vote for non-validator"); - return; - } - - let participant = participant.to_bytes(); - let signer = signed.signer.to_bytes(); - - assert!( - VotedToRemove::get(self.txn, genesis, signer, participant).is_none(), - "VotedToRemove multiple times despite a single nonce being allocated", - ); - VotedToRemove::set(self.txn, genesis, signer, participant, &()); - - let prior_votes = VotesToRemove::get(self.txn, genesis, participant).unwrap_or(0); - let signer_votes = - self.spec.i(signed.signer).expect("signer wasn't a validator for this network?"); - let new_votes = prior_votes + u16::from(signer_votes.end) - u16::from(signer_votes.start); - VotesToRemove::set(self.txn, genesis, participant, &new_votes); - if ((prior_votes + 1) ..= new_votes).contains(&self.spec.t()) { - self.fatal_slash(participant, "RemoveParticipant vote") - } 
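// For illustration — the replacement scanner (coordinator/src/tributary/scan.rs, later in this
// patch) expresses the vote-counting of the deleted handler above through
// `TributaryDb::accumulate`, weighted by key shares rather than counted per-validator:
//
//   match TributaryDb::accumulate(
//     txn, set, validators, total_weight, block_number,
//     Topic::RemoveParticipant { participant },
//     signer, validator_weight, &(),
//   ) {
//     DataSet::None => {}
//     DataSet::Participating(_) => {
//       TributaryDb::fatal_slash(txn, set, participant, "voted to remove")
//     }
//   }
//
// The 2/3-weight-plus-one threshold, previously checked inline against `spec.t()`, now lives
// in `Topic::required_participation`.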
- } - - Transaction::DkgParticipation { participation, signed } => { - // Send the participation to the processor - self - .processors - .send( - self.spec.set().network, - key_gen::CoordinatorMessage::Participation { - session: self.spec.set().session, - participant: self - .spec - .i(signed.signer) - .expect("signer wasn't a validator for this network?") - .start, - participation, - }, - ) - .await; - } - - Transaction::DkgConfirmationNonces { attempt, confirmation_nonces, signed } => { - let data_spec = - DataSpecification { topic: Topic::DkgConfirmation, label: Label::Preprocess, attempt }; - match self.handle_data(&data_spec, &confirmation_nonces.to_vec(), &signed) { - Accumulation::Ready(DataSet::Participating(confirmation_nonces)) => { - log::info!( - "got all DkgConfirmationNonces for {}, attempt {attempt}", - hex::encode(genesis) - ); - - ConfirmationNonces::set(self.txn, genesis, attempt, &confirmation_nonces); - - // Send the expected DkgConfirmationShare - // TODO: Slight race condition here due to set, publish tx, then commit txn - let key_pair = DkgKeyPair::get(self.txn, genesis) - .expect("participating in confirming key we don't have"); - let mut tx = match DkgConfirmer::new(self.our_key, self.spec, self.txn, attempt) - .share(confirmation_nonces, &key_pair) - { - Ok(confirmation_share) => Transaction::DkgConfirmationShare { - attempt, - confirmation_share, - signed: Transaction::empty_signed(), - }, - Err(participant) => Transaction::RemoveParticipant { - participant: self.spec.reverse_lookup_i(participant).unwrap(), - signed: Transaction::empty_signed(), - }, - }; - tx.sign(&mut OsRng, genesis, self.our_key); - self.publish_tributary_tx.publish_tributary_tx(tx).await; - } - Accumulation::Ready(DataSet::NotParticipating) | Accumulation::NotReady => {} - } - } - - Transaction::DkgConfirmationShare { attempt, confirmation_share, signed } => { - let data_spec = - DataSpecification { topic: Topic::DkgConfirmation, label: Label::Share, attempt }; - match self.handle_data(&data_spec, &confirmation_share.to_vec(), &signed) { - Accumulation::Ready(DataSet::Participating(shares)) => { - log::info!( - "got all DkgConfirmationShare for {}, attempt {attempt}", - hex::encode(genesis) - ); - - let preprocesses = ConfirmationNonces::get(self.txn, genesis, attempt).unwrap(); - - // TODO: This can technically happen under very very very specific timing as the txn - // put happens before DkgConfirmationShare, yet the txn isn't guaranteed to be - // committed - let key_pair = DkgKeyPair::get(self.txn, genesis).expect( - "in DkgConfirmationShare handling, which happens after everyone \ - (including us) fires DkgConfirmationShare, yet no confirming key pair", - ); - - // Determine the bitstring representing who participated before we move `shares` - let validators = self.spec.validators(); - let mut signature_participants = bitvec::vec::BitVec::with_capacity(validators.len()); - for (participant, _) in validators { - signature_participants.push( - (participant == (::generator() * self.our_key.deref())) || - shares.contains_key(&self.spec.i(participant).unwrap().start), - ); - } - - // Produce the final signature - let mut confirmer = DkgConfirmer::new(self.our_key, self.spec, self.txn, attempt); - let sig = match confirmer.complete(preprocesses, &key_pair, shares) { - Ok(sig) => sig, - Err(p) => { - let mut tx = Transaction::RemoveParticipant { - participant: self.spec.reverse_lookup_i(p).unwrap(), - signed: Transaction::empty_signed(), - }; - tx.sign(&mut OsRng, genesis, self.our_key); - 
self.publish_tributary_tx.publish_tributary_tx(tx).await; - return; - } - }; - - self - .publish_serai_tx - .publish_set_keys( - self.db, - self.spec.set(), - key_pair, - signature_participants, - Signature(sig), - ) - .await; - } - Accumulation::Ready(DataSet::NotParticipating) | Accumulation::NotReady => {} - } - } - - Transaction::CosignSubstrateBlock(hash) => { - AttemptDb::recognize_topic( - self.txn, - genesis, - Topic::SubstrateSign(SubstrateSignableId::CosigningSubstrateBlock(hash)), - ); - - let block_number = SeraiBlockNumber::get(self.txn, hash) - .expect("CosignSubstrateBlock yet didn't save Serai block number"); - let msg = coordinator::CoordinatorMessage::CosignSubstrateBlock { - id: SubstrateSignId { - session: self.spec.set().session, - id: SubstrateSignableId::CosigningSubstrateBlock(hash), - attempt: 0, - }, - block_number, - }; - self.processors.send(self.spec.set().network, msg).await; - } - - Transaction::Batch { block: _, batch } => { - // Because this Batch has achieved synchrony, its batch ID should be authorized - AttemptDb::recognize_topic( - self.txn, - genesis, - Topic::SubstrateSign(SubstrateSignableId::Batch(batch)), - ); - self - .recognized_id - .recognized_id( - self.spec.set(), - genesis, - RecognizedIdType::Batch, - batch.to_le_bytes().to_vec(), - ) - .await; - } - - Transaction::SubstrateBlock(block) => { - let plan_ids = PlanIds::get(self.txn, &genesis, block).expect( - "synced a tributary block finalizing a substrate block in a provided transaction \ - despite us not providing that transaction", - ); - - for id in plan_ids { - AttemptDb::recognize_topic(self.txn, genesis, Topic::Sign(id)); - self - .recognized_id - .recognized_id(self.spec.set(), genesis, RecognizedIdType::Plan, id.to_vec()) - .await; - } - } - - Transaction::SubstrateSign(data) => { - let signer = data.signed.signer; - let Ok(()) = self.check_sign_data_len(signer, data.data.len()) else { - return; - }; - let expected_len = match data.label { - Label::Preprocess => 64, - Label::Share => 32, - }; - for data in &data.data { - if data.len() != expected_len { - self.fatal_slash( - signer.to_bytes(), - "unexpected length data for substrate signing protocol", - ); - return; - } - } - - let data_spec = DataSpecification { - topic: Topic::SubstrateSign(data.plan), - label: data.label, - attempt: data.attempt, - }; - let Accumulation::Ready(DataSet::Participating(mut results)) = - self.handle_data(&data_spec, &data.data.encode(), &data.signed) - else { - return; - }; - unflatten(self.spec, &mut results); - - let id = SubstrateSignId { - session: self.spec.set().session, - id: data.plan, - attempt: data.attempt, - }; - let msg = match data.label { - Label::Preprocess => coordinator::CoordinatorMessage::SubstratePreprocesses { - id, - preprocesses: results.into_iter().map(|(v, p)| (v, p.try_into().unwrap())).collect(), - }, - Label::Share => coordinator::CoordinatorMessage::SubstrateShares { - id, - shares: results.into_iter().map(|(v, p)| (v, p.try_into().unwrap())).collect(), - }, - }; - self.processors.send(self.spec.set().network, msg).await; - } - - Transaction::Sign(data) => { - let Ok(()) = self.check_sign_data_len(data.signed.signer, data.data.len()) else { - return; - }; - - let data_spec = DataSpecification { - topic: Topic::Sign(data.plan), - label: data.label, - attempt: data.attempt, - }; - if let Accumulation::Ready(DataSet::Participating(mut results)) = - self.handle_data(&data_spec, &data.data.encode(), &data.signed) - { - unflatten(self.spec, &mut results); - let id = - SignId { 
session: self.spec.set().session, id: data.plan, attempt: data.attempt }; - self - .processors - .send( - self.spec.set().network, - match data.label { - Label::Preprocess => { - sign::CoordinatorMessage::Preprocesses { id, preprocesses: results } - } - Label::Share => sign::CoordinatorMessage::Shares { id, shares: results }, - }, - ) - .await; - } - } - - Transaction::SignCompleted { plan, tx_hash, first_signer, signature: _ } => { - log::info!( - "on-chain SignCompleted claims {} completes {}", - hex::encode(&tx_hash), - hex::encode(plan) - ); - - if AttemptDb::attempt(self.txn, genesis, Topic::Sign(plan)).is_none() { - self.fatal_slash(first_signer.to_bytes(), "claimed an unrecognized plan was completed"); - return; - }; - - // TODO: Confirm this signer hasn't prior published a completion - - let msg = sign::CoordinatorMessage::Completed { - session: self.spec.set().session, - id: plan, - tx: tx_hash, - }; - self.processors.send(self.spec.set().network, msg).await; - } - - Transaction::SlashReport(points, signed) => { - let signer_range = self.spec.i(signed.signer).unwrap(); - let signer_len = u16::from(signer_range.end) - u16::from(signer_range.start); - if points.len() != (self.spec.validators().len() - 1) { - self.fatal_slash( - signed.signer.to_bytes(), - "submitted a distinct amount of slash points to participants", - ); - return; - } - - if SlashReports::get(self.txn, genesis, signed.signer.to_bytes()).is_some() { - self.fatal_slash(signed.signer.to_bytes(), "submitted multiple slash points"); - return; - } - SlashReports::set(self.txn, genesis, signed.signer.to_bytes(), &points); - - let prior_reported = SlashReported::get(self.txn, genesis).unwrap_or(0); - let now_reported = prior_reported + signer_len; - SlashReported::set(self.txn, genesis, &now_reported); - - if (prior_reported < self.spec.t()) && (now_reported >= self.spec.t()) { - SlashReportCutOff::set( - self.txn, - genesis, - // 30 minutes into the future - &(u64::from(self.block_number) + - ((30 * 60 * 1000) / u64::from(tributary::tendermint::TARGET_BLOCK_TIME))), - ); - } - } - } - } -} diff --git a/coordinator/src/tributary/mod.rs b/coordinator/src/tributary/mod.rs index 6e2f2661..6d748940 100644 --- a/coordinator/src/tributary/mod.rs +++ b/coordinator/src/tributary/mod.rs @@ -1,63 +1,6 @@ -use tributary::{ - ReadWrite, - transaction::{TransactionError, TransactionKind, Transaction as TransactionTrait}, - Tributary, -}; +mod transaction; +pub use transaction::Transaction; mod db; -pub use db::*; -mod spec; -pub use spec::TributarySpec; - -mod transaction; -pub use transaction::{Label, SignData, Transaction}; - -mod signing_protocol; - -mod handle; -pub use handle::*; - -pub mod scanner; - -pub async fn publish_signed_transaction( - txn: &mut D::Transaction<'_>, - tributary: &Tributary, - tx: Transaction, -) { - log::debug!("publishing transaction {}", hex::encode(tx.hash())); - - let (order, signer) = if let TransactionKind::Signed(order, signed) = tx.kind() { - let signer = signed.signer; - - // Safe as we should deterministically create transactions, meaning if this is already on-disk, - // it's what we're saving now - SignedTransactionDb::set(txn, &order, signed.nonce, &tx.serialize()); - - (order, signer) - } else { - panic!("non-signed transaction passed to publish_signed_transaction"); - }; - - // If we're trying to publish 5, when the last transaction published was 3, this will delay - // publication until the point in time we publish 4 - while let Some(tx) = SignedTransactionDb::take_signed_transaction( - txn, 
- &order, - tributary - .next_nonce(&signer, &order) - .await - .expect("we don't have a nonce, meaning we aren't a participant on this tributary"), - ) { - // We need to return a proper error here to enable that, due to a race condition around - // multiple publications - match tributary.add_transaction(tx.clone()).await { - Ok(_) => {} - // Some asynchonicity if InvalidNonce, assumed safe to deterministic nonces - Err(TransactionError::InvalidNonce) => { - log::warn!("publishing TX {tx:?} returned InvalidNonce. was it already added?") - } - Err(e) => panic!("created an invalid transaction: {e:?}"), - } - } -} +mod scan; diff --git a/coordinator/src/tributary/scan.rs b/coordinator/src/tributary/scan.rs new file mode 100644 index 00000000..47e1103d --- /dev/null +++ b/coordinator/src/tributary/scan.rs @@ -0,0 +1,203 @@ +use core::future::Future; +use std::collections::HashMap; + +use ciphersuite::group::GroupEncoding; + +use serai_client::{primitives::SeraiAddress, validator_sets::primitives::ValidatorSet}; + +use tributary::{ + Signed as TributarySigned, TransactionError, TransactionKind, TransactionTrait, + Transaction as TributaryTransaction, Block, TributaryReader, + tendermint::{ + tx::{TendermintTx, Evidence, decode_signed_message}, + TendermintNetwork, + }, +}; + +use serai_db::*; +use serai_task::ContinuallyRan; + +use crate::tributary::{ + db::*, + transaction::{Signed, Transaction}, +}; + +struct ScanBlock<'a, D: DbTxn, TD: Db> { + txn: &'a mut D, + set: ValidatorSet, + validators: &'a [SeraiAddress], + total_weight: u64, + validator_weights: &'a HashMap, + tributary: &'a TributaryReader, +} +impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> { + fn handle_application_tx(&mut self, block_number: u64, tx: Transaction) { + let signer = |signed: Signed| SeraiAddress(signed.signer.to_bytes()); + + if let TransactionKind::Signed(_, TributarySigned { signer, .. 
}) = tx.kind() { + // Don't handle transactions from those fatally slashed + // TODO: The fact they can publish these TXs makes this a notable spam vector + if TributaryDb::is_fatally_slashed(self.txn, self.set, SeraiAddress(signer.to_bytes())) { + return; + } + } + + match tx { + Transaction::RemoveParticipant { participant, signed } => { + // Accumulate this vote and fatally slash the participant if past the threshold + let signer = signer(signed); + match TributaryDb::accumulate( + self.txn, + self.set, + self.validators, + self.total_weight, + block_number, + Topic::RemoveParticipant { participant }, + signer, + self.validator_weights[&signer], + &(), + ) { + DataSet::None => {} + DataSet::Participating(_) => { + TributaryDb::fatal_slash(self.txn, self.set, participant, "voted to remove") + } + } + } + + Transaction::DkgParticipation { participation, signed } => { + // Send the participation to the processor + todo!("TODO") + } + Transaction::DkgConfirmationPreprocess { attempt, preprocess, signed } => { + // Accumulate the preprocesses into our own FROST attempt manager + todo!("TODO") + } + Transaction::DkgConfirmationShare { attempt, share, signed } => { + // Accumulate the shares into our own FROST attempt manager + todo!("TODO") + } + + Transaction::Cosign { substrate_block_hash } => { + // Update the latest intended-to-be-cosigned Substrate block + todo!("TODO") + } + Transaction::Cosigned { substrate_block_hash } => { + // Start cosigning the latest intended-to-be-cosigned block + todo!("TODO") + } + Transaction::SubstrateBlock { hash } => { + // Whitelist all of the IDs this Substrate block causes to be signed + todo!("TODO") + } + Transaction::Batch { hash } => { + // Whitelist the signing of this batch, publishing our own preprocess + todo!("TODO") + } + + Transaction::SlashReport { slash_points, signed } => { + // Accumulate, and if past the threshold, calculate *the* slash report and start signing it + todo!("TODO") + } + + Transaction::Sign { id, attempt, label, data, signed } => todo!("TODO"), + } + } + + fn handle_block(mut self, block_number: u64, block: Block) { + TributaryDb::start_of_block(self.txn, self.set, block_number); + + for tx in block.transactions { + match tx { + TributaryTransaction::Tendermint(TendermintTx::SlashEvidence(ev)) => { + // Since the evidence is on the chain, it will have already been validated + // We can just punish the signer + let data = match ev { + Evidence::ConflictingMessages(first, second) => (first, Some(second)), + Evidence::InvalidPrecommit(first) | Evidence::InvalidValidRound(first) => (first, None), + }; + /* TODO + let msgs = ( + decode_signed_message::>(&data.0).unwrap(), + if data.1.is_some() { + Some( + decode_signed_message::>(&data.1.unwrap()) + .unwrap(), + ) + } else { + None + }, + ); + + // Since anything with evidence is fundamentally faulty behavior, not just temporal + // errors, mark the node as fatally slashed + TributaryDb::fatal_slash( + self.txn, msgs.0.msg.sender, &format!("invalid tendermint messages: {msgs:?}")); + */ + todo!("TODO") + } + TributaryTransaction::Application(tx) => { + self.handle_application_tx(block_number, tx); + } + } + } + } +} + +struct ScanTributaryTask { + db: D, + set: ValidatorSet, + validators: Vec, + total_weight: u64, + validator_weights: HashMap, + tributary: TributaryReader, +} +impl ContinuallyRan for ScanTributaryTask { + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let (mut last_block_number, mut last_block_hash) = + 
TributaryDb::last_handled_tributary_block(&self.db, self.set) + .unwrap_or((0, self.tributary.genesis())); + + let mut made_progess = false; + while let Some(next) = self.tributary.block_after(&last_block_hash) { + let block = self.tributary.block(&next).unwrap(); + let block_number = last_block_number + 1; + let block_hash = block.hash(); + + // Make sure we have all of the provided transactions for this block + for tx in &block.transactions { + let TransactionKind::Provided(order) = tx.kind() else { + continue; + }; + + // make sure we have all the provided txs in this block locally + if !self.tributary.locally_provided_txs_in_block(&block_hash, order) { + return Err(format!( + "didn't have the provided Transactions on-chain for set (ephemeral error): {:?}", + self.set + )); + } + } + + let mut txn = self.db.txn(); + (ScanBlock { + txn: &mut txn, + set: self.set, + validators: &self.validators, + total_weight: self.total_weight, + validator_weights: &self.validator_weights, + tributary: &self.tributary, + }) + .handle_block(block_number, block); + TributaryDb::set_last_handled_tributary_block(&mut txn, self.set, block_number, block_hash); + last_block_number = block_number; + last_block_hash = block_hash; + txn.commit(); + + made_progess = true; + } + + Ok(made_progess) + } + } +} diff --git a/coordinator/src/tributary/scanner.rs b/coordinator/src/tributary/scanner.rs deleted file mode 100644 index c0b906ed..00000000 --- a/coordinator/src/tributary/scanner.rs +++ /dev/null @@ -1,685 +0,0 @@ -use core::{marker::PhantomData, future::Future, time::Duration}; -use std::sync::Arc; - -use zeroize::Zeroizing; - -use rand_core::OsRng; - -use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; - -use tokio::sync::broadcast; - -use scale::{Encode, Decode}; -use serai_client::{ - primitives::Signature, - validator_sets::primitives::{KeyPair, ValidatorSet}, - Serai, -}; - -use serai_db::DbTxn; - -use processor_messages::coordinator::{SubstrateSignId, SubstrateSignableId}; - -use tributary::{ - TransactionKind, Transaction as TributaryTransaction, TransactionError, Block, TributaryReader, - tendermint::{ - tx::{TendermintTx, Evidence, decode_signed_message}, - TendermintNetwork, - }, -}; - -use crate::{Db, processors::Processors, substrate::BatchInstructionsHashDb, tributary::*, P2p}; - -#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, Decode)] -pub enum RecognizedIdType { - Batch, - Plan, -} - -#[async_trait::async_trait] -pub trait RIDTrait { - async fn recognized_id( - &self, - set: ValidatorSet, - genesis: [u8; 32], - kind: RecognizedIdType, - id: Vec, - ); -} -#[async_trait::async_trait] -impl< - FRid: Send + Future, - F: Sync + Fn(ValidatorSet, [u8; 32], RecognizedIdType, Vec) -> FRid, - > RIDTrait for F -{ - async fn recognized_id( - &self, - set: ValidatorSet, - genesis: [u8; 32], - kind: RecognizedIdType, - id: Vec, - ) { - (self)(set, genesis, kind, id).await - } -} - -#[async_trait::async_trait] -pub trait PublishSeraiTransaction { - async fn publish_set_keys( - &self, - db: &(impl Sync + Get), - set: ValidatorSet, - key_pair: KeyPair, - signature_participants: bitvec::vec::BitVec, - signature: Signature, - ); -} - -mod impl_pst_for_serai { - use super::*; - - use serai_client::SeraiValidatorSets; - - // Uses a macro because Rust can't resolve the lifetimes/generics around the check function - // check is expected to return true if the effect has already occurred - // The generated publish function will return true if *we* published the transaction - macro_rules! 
common_pst { - ($Meta: ty, $check: ident) => { - async fn publish( - serai: &Serai, - db: &impl Get, - set: ValidatorSet, - tx: serai_client::Transaction, - meta: $Meta, - ) -> bool { - loop { - match serai.publish(&tx).await { - Ok(_) => return true, - // This is assumed to be some ephemeral error due to the assumed fault-free - // creation - // TODO2: Differentiate connection errors from invariants - Err(e) => { - // The following block is irrelevant, and can/likely will fail, if we're publishing - // a TX for an old session - // If we're on a newer session, move on - if crate::RetiredTributaryDb::get(db, set).is_some() { - log::warn!("trying to publish a TX relevant to set {set:?} which isn't the latest"); - return false; - } - - if let Ok(serai) = serai.as_of_latest_finalized_block().await { - let serai = serai.validator_sets(); - - // Check if someone else published the TX in question - if $check(serai, set, meta).await { - return false; - } - } - - log::error!("couldn't connect to Serai node to publish TX: {e:?}"); - tokio::time::sleep(core::time::Duration::from_secs(5)).await; - } - } - } - } - }; - } - - #[async_trait::async_trait] - impl PublishSeraiTransaction for Serai { - async fn publish_set_keys( - &self, - db: &(impl Sync + Get), - set: ValidatorSet, - key_pair: KeyPair, - signature_participants: bitvec::vec::BitVec, - signature: Signature, - ) { - let tx = - SeraiValidatorSets::set_keys(set.network, key_pair, signature_participants, signature); - async fn check(serai: SeraiValidatorSets<'_>, set: ValidatorSet, (): ()) -> bool { - if matches!(serai.keys(set).await, Ok(Some(_))) { - log::info!("another coordinator set key pair for {:?}", set); - return true; - } - false - } - common_pst!((), check); - if publish(self, db, set, tx, ()).await { - log::info!("published set keys for {set:?}"); - } - } - } -} - -#[async_trait::async_trait] -pub trait PTTTrait { - async fn publish_tributary_tx(&self, tx: Transaction); -} -#[async_trait::async_trait] -impl, F: Sync + Fn(Transaction) -> FPtt> PTTTrait for F { - async fn publish_tributary_tx(&self, tx: Transaction) { - (self)(tx).await - } -} - -pub struct TributaryBlockHandler< - 'a, - D: Db, - T: DbTxn, - Pro: Processors, - PST: PublishSeraiTransaction, - PTT: PTTTrait, - RID: RIDTrait, - P: P2p, -> { - pub db: &'a D, - pub txn: &'a mut T, - pub our_key: &'a Zeroizing<::F>, - pub recognized_id: &'a RID, - pub processors: &'a Pro, - pub publish_serai_tx: &'a PST, - pub publish_tributary_tx: &'a PTT, - pub spec: &'a TributarySpec, - block: Block, - pub block_number: u32, - _p2p: PhantomData
<P>
, -} - -impl< - D: Db, - T: DbTxn, - Pro: Processors, - PST: PublishSeraiTransaction, - PTT: PTTTrait, - RID: RIDTrait, - P: P2p, - > TributaryBlockHandler<'_, D, T, Pro, PST, PTT, RID, P> -{ - pub fn fatal_slash(&mut self, slashing: [u8; 32], reason: &str) { - let genesis = self.spec.genesis(); - - log::warn!("fatally slashing {}. reason: {}", hex::encode(slashing), reason); - FatallySlashed::set_fatally_slashed(self.txn, genesis, slashing); - - // TODO: disconnect the node from network/ban from further participation in all Tributaries - } - - // TODO: Once Substrate confirms a key, we need to rotate our validator set OR form a second - // Tributary post-DKG - // https://github.com/serai-dex/serai/issues/426 - - async fn handle(mut self) { - log::info!("found block for Tributary {:?}", self.spec.set()); - - let transactions = self.block.transactions.clone(); - for tx in transactions { - match tx { - TributaryTransaction::Tendermint(TendermintTx::SlashEvidence(ev)) => { - // Since the evidence is on the chain, it should already have been validated - // We can just punish the signer - let data = match ev { - Evidence::ConflictingMessages(first, second) => (first, Some(second)), - Evidence::InvalidPrecommit(first) | Evidence::InvalidValidRound(first) => (first, None), - }; - let msgs = ( - decode_signed_message::>(&data.0).unwrap(), - if data.1.is_some() { - Some( - decode_signed_message::>(&data.1.unwrap()) - .unwrap(), - ) - } else { - None - }, - ); - - // Since anything with evidence is fundamentally faulty behavior, not just temporal - // errors, mark the node as fatally slashed - self.fatal_slash(msgs.0.msg.sender, &format!("invalid tendermint messages: {msgs:?}")); - } - TributaryTransaction::Application(tx) => { - self.handle_application_tx(tx).await; - } - } - } - - let genesis = self.spec.genesis(); - - // Calculate the shares still present, spinning if not enough are - { - // Start with the original n value - let mut present_shares = self.spec.n(); - // Remove everyone fatally slashed - let current_fatal_slashes = FatalSlashes::get_as_keys(self.txn, genesis); - for removed in ¤t_fatal_slashes { - let original_i_for_removed = - self.spec.i(*removed).expect("removed party was never present"); - let removed_shares = - u16::from(original_i_for_removed.end) - u16::from(original_i_for_removed.start); - present_shares -= removed_shares; - } - - // Spin if the present shares don't satisfy the required threshold - if present_shares < self.spec.t() { - loop { - log::error!( - "fatally slashed so many participants for {:?} we no longer meet the threshold", - self.spec.set() - ); - tokio::time::sleep(core::time::Duration::from_secs(60)).await; - } - } - } - - for topic in ReattemptDb::take(self.txn, genesis, self.block_number) { - let attempt = AttemptDb::start_next_attempt(self.txn, genesis, topic); - log::info!("potentially re-attempting {topic:?} with attempt {attempt}"); - - // Slash people who failed to participate as expected in the prior attempt - { - let prior_attempt = attempt - 1; - // TODO: If 67% sent preprocesses, this should be them. 
Else, this should be vec![] - let expected_participants: Vec<::G> = vec![]; - - let mut did_not_participate = vec![]; - for expected_participant in expected_participants { - if DataDb::get( - self.txn, - genesis, - &DataSpecification { - topic, - // Since we got the preprocesses, we were supposed to get the shares - label: Label::Share, - attempt: prior_attempt, - }, - &expected_participant.to_bytes(), - ) - .is_none() - { - did_not_participate.push(expected_participant); - } - } - - // If a supermajority didn't participate as expected, the protocol was likely aborted due - // to detection of a completion or some larger networking error - // Accordingly, clear did_not_participate - // TODO - - // TODO: Increment the slash points of people who didn't preprocess in some expected window - // of time - - // Slash everyone who didn't participate as expected - // This may be overzealous as if a minority detects a completion, they'll abort yet the - // supermajority will cause the above allowance to not trigger, causing an honest minority - // to be slashed - // At the end of the protocol, the accumulated slashes are reduced by the amount obtained - // by the worst-performing member of the supermajority, and this is expected to - // sufficiently compensate for slashes which occur under normal operation - // TODO - } - - /* - All of these have the same common flow: - - 1) Check if this re-attempt is actually needed - 2) If so, dispatch whatever events as needed - - This is because we *always* re-attempt any protocol which had participation. That doesn't - mean we *should* re-attempt this protocol. - - The alternatives were: - 1) Note on-chain we completed a protocol, halting re-attempts upon 34%. - 2) Vote on-chain to re-attempt a protocol. - - This schema doesn't have any additional messages upon the success case (whereas - alternative #1 does) and doesn't have overhead (as alternative #2 does, sending votes and - then preprocesses. This only sends preprocesses). 
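-
-    Editor's illustration (not in the original comment): for a Batch, "actually needed" is
-    the check against BatchInstructionsHashDb below. If the Batch already appeared on-chain,
-    the re-attempt is dropped instead of being dispatched to the processor.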
- */ - match topic { - Topic::DkgConfirmation => { - if SeraiDkgCompleted::get(self.txn, self.spec.set()).is_none() { - log::info!("re-attempting DKG confirmation with attempt {attempt}"); - - // Since it wasn't completed, publish our nonces for the next attempt - let confirmation_nonces = - crate::tributary::dkg_confirmation_nonces(self.our_key, self.spec, self.txn, attempt); - let mut tx = Transaction::DkgConfirmationNonces { - attempt, - confirmation_nonces, - signed: Transaction::empty_signed(), - }; - tx.sign(&mut OsRng, genesis, self.our_key); - self.publish_tributary_tx.publish_tributary_tx(tx).await; - } - } - Topic::SubstrateSign(inner_id) => { - let id = processor_messages::coordinator::SubstrateSignId { - session: self.spec.set().session, - id: inner_id, - attempt, - }; - match inner_id { - SubstrateSignableId::CosigningSubstrateBlock(block) => { - let block_number = SeraiBlockNumber::get(self.txn, block) - .expect("couldn't get the block number for prior attempted cosign"); - - // Check if the cosigner has a signature from our set for this block/a newer one - let latest_cosign = - crate::cosign_evaluator::LatestCosign::get(self.txn, self.spec.set().network) - .map_or(0, |cosign| cosign.block_number); - if latest_cosign < block_number { - log::info!("re-attempting cosigning {block_number:?} with attempt {attempt}"); - - // Instruct the processor to start the next attempt - self - .processors - .send( - self.spec.set().network, - processor_messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { - id, - block_number, - }, - ) - .await; - } - } - SubstrateSignableId::Batch(batch) => { - // If the Batch hasn't appeared on-chain... - if BatchInstructionsHashDb::get(self.txn, self.spec.set().network, batch).is_none() { - log::info!("re-attempting signing batch {batch:?} with attempt {attempt}"); - - // Instruct the processor to start the next attempt - // The processor won't continue if it's already signed a Batch - // Prior checking if the Batch is on-chain just may reduce the non-participating - // 33% from publishing their re-attempt messages - self - .processors - .send( - self.spec.set().network, - processor_messages::coordinator::CoordinatorMessage::BatchReattempt { id }, - ) - .await; - } - } - SubstrateSignableId::SlashReport => { - // If this Tributary hasn't been retired... 
- // (published SlashReport/took too long to do so) - if crate::RetiredTributaryDb::get(self.txn, self.spec.set()).is_none() { - log::info!( - "re-attempting signing slash report for {:?} with attempt {attempt}", - self.spec.set() - ); - - let report = SlashReport::get(self.txn, self.spec.set()) - .expect("re-attempting signing a SlashReport we don't have?"); - self - .processors - .send( - self.spec.set().network, - processor_messages::coordinator::CoordinatorMessage::SignSlashReport { - id, - report, - }, - ) - .await; - } - } - } - } - Topic::Sign(id) => { - // Instruct the processor to start the next attempt - // If it has already noted a completion, it won't send a preprocess and will simply drop - // the re-attempt message - self - .processors - .send( - self.spec.set().network, - processor_messages::sign::CoordinatorMessage::Reattempt { - id: processor_messages::sign::SignId { - session: self.spec.set().session, - id, - attempt, - }, - }, - ) - .await; - } - } - } - - if Some(u64::from(self.block_number)) == SlashReportCutOff::get(self.txn, genesis) { - // Grab every slash report - let mut all_reports = vec![]; - for (i, (validator, _)) in self.spec.validators().into_iter().enumerate() { - let Some(mut report) = SlashReports::get(self.txn, genesis, validator.to_bytes()) else { - continue; - }; - // Assign them 0 points for themselves - report.insert(i, 0); - let signer_i = self.spec.i(validator).unwrap(); - let signer_len = u16::from(signer_i.end) - u16::from(signer_i.start); - // Push `n` copies, one for each of their shares - for _ in 0 .. signer_len { - all_reports.push(report.clone()); - } - } - - // For each participant, grab their median - let mut medians = vec![]; - for p in 0 .. self.spec.validators().len() { - let mut median_calc = vec![]; - for report in &all_reports { - median_calc.push(report[p]); - } - median_calc.sort_unstable(); - medians.push(median_calc[median_calc.len() / 2]); - } - - // Grab the points of the last party within the best-performing threshold - // This is done by first expanding the point values by the amount of shares - let mut sorted_medians = vec![]; - for (i, (_, shares)) in self.spec.validators().into_iter().enumerate() { - for _ in 0 .. 
shares { - sorted_medians.push(medians[i]); - } - } - // Then performing the sort - sorted_medians.sort_unstable(); - let worst_points_by_party_within_threshold = sorted_medians[usize::from(self.spec.t()) - 1]; - - // Reduce everyone's points by this value - for median in &mut medians { - *median = median.saturating_sub(worst_points_by_party_within_threshold); - } - - // The threshold now has the proper incentive to report this as they no longer suffer - // negative effects - // - // Additionally, if all validators had degraded performance, they don't all get penalized for - // what's likely outside their control (as it occurred universally) - - // Mark everyone fatally slashed with u32::MAX - for (i, (validator, _)) in self.spec.validators().into_iter().enumerate() { - if FatallySlashed::get(self.txn, genesis, validator.to_bytes()).is_some() { - medians[i] = u32::MAX; - } - } - - let mut report = vec![]; - for (i, (validator, _)) in self.spec.validators().into_iter().enumerate() { - if medians[i] != 0 { - report.push((validator.to_bytes(), medians[i])); - } - } - - // This does lock in the report, meaning further slash point accumulations won't be reported - // They still have value to be locally tracked due to local decisions made based off - // accumulated slash reports - SlashReport::set(self.txn, self.spec.set(), &report); - - // Start a signing protocol for this - self - .processors - .send( - self.spec.set().network, - processor_messages::coordinator::CoordinatorMessage::SignSlashReport { - id: SubstrateSignId { - session: self.spec.set().session, - id: SubstrateSignableId::SlashReport, - attempt: 0, - }, - report, - }, - ) - .await; - } - } -} - -#[allow(clippy::too_many_arguments)] -pub(crate) async fn handle_new_blocks< - D: Db, - Pro: Processors, - PST: PublishSeraiTransaction, - PTT: PTTTrait, - RID: RIDTrait, - P: P2p, ->( - db: &mut D, - key: &Zeroizing<::F>, - recognized_id: &RID, - processors: &Pro, - publish_serai_tx: &PST, - publish_tributary_tx: &PTT, - spec: &TributarySpec, - tributary: &TributaryReader, -) { - let genesis = tributary.genesis(); - let mut last_block = LastHandledBlock::get(db, genesis).unwrap_or(genesis); - let mut block_number = TributaryBlockNumber::get(db, last_block).unwrap_or(0); - while let Some(next) = tributary.block_after(&last_block) { - let block = tributary.block(&next).unwrap(); - block_number += 1; - - // Make sure we have all of the provided transactions for this block - for tx in &block.transactions { - // Provided TXs will appear first in the Block, so we can break after we hit a non-Provided - let TransactionKind::Provided(order) = tx.kind() else { - break; - }; - - // make sure we have all the provided txs in this block locally - if !tributary.locally_provided_txs_in_block(&block.hash(), order) { - return; - } - } - - let mut db_clone = db.clone(); - let mut txn = db_clone.txn(); - TributaryBlockNumber::set(&mut txn, next, &block_number); - (TributaryBlockHandler { - db, - txn: &mut txn, - spec, - our_key: key, - recognized_id, - processors, - publish_serai_tx, - publish_tributary_tx, - block, - block_number, - _p2p: PhantomData::
<P>
, - }) - .handle() - .await; - last_block = next; - LastHandledBlock::set(&mut txn, genesis, &next); - txn.commit(); - } -} - -pub(crate) async fn scan_tributaries_task< - D: Db, - Pro: Processors, - P: P2p, - RID: 'static + Send + Sync + Clone + RIDTrait, ->( - raw_db: D, - key: Zeroizing<::F>, - recognized_id: RID, - processors: Pro, - serai: Arc, - mut tributary_event: broadcast::Receiver>, -) { - log::info!("scanning tributaries"); - - loop { - match tributary_event.recv().await { - Ok(crate::TributaryEvent::NewTributary(crate::ActiveTributary { spec, tributary })) => { - // For each Tributary, spawn a dedicated scanner task - tokio::spawn({ - let raw_db = raw_db.clone(); - let key = key.clone(); - let recognized_id = recognized_id.clone(); - let processors = processors.clone(); - let serai = serai.clone(); - async move { - let spec = &spec; - let reader = tributary.reader(); - let mut tributary_db = raw_db.clone(); - loop { - // Check if the set was retired, and if so, don't further operate - if crate::db::RetiredTributaryDb::get(&raw_db, spec.set()).is_some() { - break; - } - - // Obtain the next block notification now to prevent obtaining it immediately after - // the next block occurs - let next_block_notification = tributary.next_block_notification().await; - - handle_new_blocks::<_, _, _, _, _, P>( - &mut tributary_db, - &key, - &recognized_id, - &processors, - &*serai, - &|tx: Transaction| { - let tributary = tributary.clone(); - async move { - match tributary.add_transaction(tx.clone()).await { - Ok(_) => {} - // Can happen as this occurs on a distinct DB TXN - Err(TransactionError::InvalidNonce) => { - log::warn!( - "publishing TX {tx:?} returned InvalidNonce. was it already added?" - ) - } - Err(e) => panic!("created an invalid transaction: {e:?}"), - } - } - }, - spec, - &reader, - ) - .await; - - // Run either when the notification fires, or every interval of block_time - let _ = tokio::time::timeout( - Duration::from_secs(tributary::Tributary::::block_time().into()), - next_block_notification, - ) - .await; - } - } - }); - } - // The above loop simply checks the DB every few seconds, voiding the need for this event - Ok(crate::TributaryEvent::TributaryRetired(_)) => {} - Err(broadcast::error::RecvError::Lagged(_)) => { - panic!("scan_tributaries lagged to handle tributary_event") - } - Err(broadcast::error::RecvError::Closed) => panic!("tributary_event sender closed"), - } - } -} diff --git a/coordinator/src/tributary/signing_protocol.rs b/coordinator/src/tributary/signing_protocol.rs deleted file mode 100644 index af334149..00000000 --- a/coordinator/src/tributary/signing_protocol.rs +++ /dev/null @@ -1,361 +0,0 @@ -/* - A MuSig-based signing protocol executed with the validators' keys. - - This is used for confirming the results of a DKG on-chain, an operation requiring all validators - which aren't specified as removed while still satisfying a supermajority. - - Since we're using the validator's keys, as needed for their being the root of trust, the - coordinator must perform the signing. This is distinct from all other group-signing operations, - as they're all done by the processor. - - The MuSig-aggregation achieves on-chain efficiency and enables a more secure design pattern. - While we could individually tack votes, that'd require logic to prevent voting multiple times and - tracking the accumulated votes. MuSig-aggregation simply requires checking the list is sorted and - the list's weight exceeds the threshold. 
- - Instead of maintaining state in memory, a combination of the DB and re-execution are used. This - is deemed acceptable re: performance as: - - 1) This is only done prior to a DKG being confirmed on Substrate and is assumed infrequent. - 2) This is an O(n) algorithm. - 3) The size of the validator set is bounded by MAX_KEY_SHARES_PER_SET. - - Accordingly, this should be tolerable. - - As for safety, it is explicitly unsafe to reuse nonces across signing sessions. This raises - concerns regarding our re-execution which is dependent on fixed nonces. Safety is derived from - the nonces being context-bound under a BFT protocol. The flow is as follows: - - 1) Decide the nonce. - 2) Publish the nonces' commitments, receiving everyone elses *and potentially the message to be - signed*. - 3) Sign and publish the signature share. - - In order for nonce re-use to occur, the received nonce commitments (or the message to be signed) - would have to be distinct and sign would have to be called again. - - Before we act on any received messages, they're ordered and finalized by a BFT algorithm. The - only way to operate on distinct received messages would be if: - - 1) A logical flaw exists, letting new messages over write prior messages - 2) A reorganization occurred from chain A to chain B, and with it, different messages - - Reorganizations are not supported, as BFT is assumed by the presence of a BFT algorithm. While - a significant amount of processes may be byzantine, leading to BFT being broken, that still will - not trigger a reorganization. The only way to move to a distinct chain, with distinct messages, - would be by rebuilding the local process (this time following chain B). Upon any complete - rebuild, we'd re-decide nonces, achieving safety. This does set a bound preventing partial - rebuilds which is accepted. - - Additionally, to ensure a rebuilt service isn't flagged as malicious, we have to check the - commitments generated from the decided nonces are in fact its commitments on-chain (TODO). - - TODO: We also need to review how we're handling Processor preprocesses and likely implement the - same on-chain-preprocess-matches-presumed-preprocess check before publishing shares. 
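-
-    Editor's illustration (not in the original comment) of the context-binding described
-    above. The cached preprocess is keyed by the protocol's context, so re-executing this
-    code resumes with the same nonce commitments rather than deciding fresh ones:
-
-      let context = (b"DkgConfirmer", spec.set(), attempt);
-      if CachedPreprocesses::get(txn, &context).is_none() {
-        // generate, encrypt, and persist a fresh preprocess
-      }
-      // from here on, always resume from the preprocess persisted under this context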
-*/ - -use core::ops::Deref; -use std::collections::{HashSet, HashMap}; - -use zeroize::{Zeroize, Zeroizing}; - -use rand_core::OsRng; - -use blake2::{Digest, Blake2s256}; - -use ciphersuite::{group::ff::PrimeField, Ciphersuite, Ristretto}; -use frost::{ - FrostError, - dkg::{Participant, musig::musig}, - ThresholdKeys, - sign::*, -}; -use frost_schnorrkel::Schnorrkel; - -use scale::Encode; - -#[rustfmt::skip] -use serai_client::validator_sets::primitives::{ValidatorSet, KeyPair, musig_context, set_keys_message}; - -use serai_db::*; - -use crate::tributary::TributarySpec; - -create_db!( - SigningProtocolDb { - CachedPreprocesses: (context: &impl Encode) -> [u8; 32] - DataSignedWith: (context: &impl Encode) -> (Vec, HashMap>), - } -); - -struct SigningProtocol<'a, T: DbTxn, C: Encode> { - pub(crate) key: &'a Zeroizing<::F>, - pub(crate) spec: &'a TributarySpec, - pub(crate) txn: &'a mut T, - pub(crate) context: C, -} - -impl SigningProtocol<'_, T, C> { - fn preprocess_internal( - &mut self, - participants: &[::G], - ) -> (AlgorithmSignMachine, [u8; 64]) { - // Encrypt the cached preprocess as recovery of it will enable recovering the private key - // While the DB isn't expected to be arbitrarily readable, it isn't a proper secret store and - // shouldn't be trusted as one - let mut encryption_key = { - let mut encryption_key_preimage = - Zeroizing::new(b"Cached Preprocess Encryption Key".to_vec()); - encryption_key_preimage.extend(self.context.encode()); - let repr = Zeroizing::new(self.key.to_repr()); - encryption_key_preimage.extend(repr.deref()); - Blake2s256::digest(&encryption_key_preimage) - }; - let encryption_key_slice: &mut [u8] = encryption_key.as_mut(); - - // Create the MuSig keys - let keys: ThresholdKeys = - musig(&musig_context(self.spec.set()), self.key, participants) - .expect("signing for a set we aren't in/validator present multiple times") - .into(); - - // Define the algorithm - let algorithm = Schnorrkel::new(b"substrate"); - - // Check if we've prior preprocessed - if CachedPreprocesses::get(self.txn, &self.context).is_none() { - // If we haven't, we create a machine solely to obtain the preprocess with - let (machine, _) = - AlgorithmMachine::new(algorithm.clone(), keys.clone()).preprocess(&mut OsRng); - - // Cache and save the preprocess to disk - let mut cache = machine.cache(); - assert_eq!(cache.0.len(), 32); - #[allow(clippy::needless_range_loop)] - for b in 0 .. 32 { - cache.0[b] ^= encryption_key_slice[b]; - } - - CachedPreprocesses::set(self.txn, &self.context, &cache.0); - } - - // We're now guaranteed to have the preprocess, hence why this `unwrap` is safe - let cached = CachedPreprocesses::get(self.txn, &self.context).unwrap(); - let mut cached = Zeroizing::new(cached); - #[allow(clippy::needless_range_loop)] - for b in 0 .. 
32 { - cached[b] ^= encryption_key_slice[b]; - } - encryption_key_slice.zeroize(); - // Create the machine from the cached preprocess - let (machine, preprocess) = - AlgorithmSignMachine::from_cache(algorithm, keys, CachedPreprocess(cached)); - - (machine, preprocess.serialize().try_into().unwrap()) - } - - fn share_internal( - &mut self, - participants: &[::G], - mut serialized_preprocesses: HashMap>, - msg: &[u8], - ) -> Result<(AlgorithmSignatureMachine, [u8; 32]), Participant> { - // We can't clear the preprocess as we sitll need it to accumulate all of the shares - // We do save the message we signed so any future calls with distinct messages panic - // This assumes the txn deciding this data is committed before the share is broaadcast - if let Some((existing_msg, existing_preprocesses)) = - DataSignedWith::get(self.txn, &self.context) - { - assert_eq!(msg, &existing_msg, "obtaining a signature share for a distinct message"); - assert_eq!( - &serialized_preprocesses, &existing_preprocesses, - "obtaining a signature share with a distinct set of preprocesses" - ); - } else { - DataSignedWith::set( - self.txn, - &self.context, - &(msg.to_vec(), serialized_preprocesses.clone()), - ); - } - - // Get the preprocessed machine - let (machine, _) = self.preprocess_internal(participants); - - // Deserialize all the preprocesses - let mut participants = serialized_preprocesses.keys().copied().collect::>(); - participants.sort(); - let mut preprocesses = HashMap::new(); - for participant in participants { - preprocesses.insert( - participant, - machine - .read_preprocess(&mut serialized_preprocesses.remove(&participant).unwrap().as_slice()) - .map_err(|_| participant)?, - ); - } - - // Sign the share - let (machine, share) = machine.sign(preprocesses, msg).map_err(|e| match e { - FrostError::InternalError(e) => unreachable!("FrostError::InternalError {e}"), - FrostError::InvalidParticipant(_, _) | - FrostError::InvalidSigningSet(_) | - FrostError::InvalidParticipantQuantity(_, _) | - FrostError::DuplicatedParticipant(_) | - FrostError::MissingParticipant(_) => panic!("unexpected error during sign: {e:?}"), - FrostError::InvalidPreprocess(p) | FrostError::InvalidShare(p) => p, - })?; - - Ok((machine, share.serialize().try_into().unwrap())) - } - - fn complete_internal( - machine: AlgorithmSignatureMachine, - shares: HashMap>, - ) -> Result<[u8; 64], Participant> { - let shares = shares - .into_iter() - .map(|(p, share)| { - machine.read_share(&mut share.as_slice()).map(|share| (p, share)).map_err(|_| p) - }) - .collect::, _>>()?; - let signature = machine.complete(shares).map_err(|e| match e { - FrostError::InternalError(e) => unreachable!("FrostError::InternalError {e}"), - FrostError::InvalidParticipant(_, _) | - FrostError::InvalidSigningSet(_) | - FrostError::InvalidParticipantQuantity(_, _) | - FrostError::DuplicatedParticipant(_) | - FrostError::MissingParticipant(_) => unreachable!("{e:?}"), - FrostError::InvalidPreprocess(p) | FrostError::InvalidShare(p) => p, - })?; - Ok(signature.to_bytes()) - } -} - -// Get the keys of the participants, noted by their threshold is, and return a new map indexed by -// their MuSig is. 
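-// Editor's illustration (not in the original file): with validators A, B, and C holding 2, 1,
-// and 3 key shares, their threshold `i` ranges are 1..3, 3..4, and 4..7. If A and C submit
-// data, the sorted range starts [1, 4] are re-indexed to MuSig participants 1 and 2, and our
-// own (empty) entry is removed from the returned map.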
-fn threshold_i_map_to_keys_and_musig_i_map( - spec: &TributarySpec, - our_key: &Zeroizing<::F>, - mut map: HashMap>, -) -> (Vec<::G>, HashMap>) { - // Insert our own index so calculations aren't offset - let our_threshold_i = spec - .i(::generator() * our_key.deref()) - .expect("not in a set we're signing for") - .start; - // Asserts we weren't unexpectedly already present - assert!(map.insert(our_threshold_i, vec![]).is_none()); - - let spec_validators = spec.validators(); - let key_from_threshold_i = |threshold_i| { - for (key, _) in &spec_validators { - if threshold_i == spec.i(*key).expect("validator wasn't in a set they're in").start { - return *key; - } - } - panic!("requested info for threshold i which doesn't exist") - }; - - let mut sorted = vec![]; - let mut threshold_is = map.keys().copied().collect::>(); - threshold_is.sort(); - for threshold_i in threshold_is { - sorted.push(( - threshold_i, - key_from_threshold_i(threshold_i), - map.remove(&threshold_i).unwrap(), - )); - } - - // Now that signers are sorted, with their shares, create a map with the is needed for MuSig - let mut participants = vec![]; - let mut map = HashMap::new(); - let mut our_musig_i = None; - for (raw_i, (threshold_i, key, share)) in sorted.into_iter().enumerate() { - let musig_i = Participant::new(u16::try_from(raw_i).unwrap() + 1).unwrap(); - if threshold_i == our_threshold_i { - our_musig_i = Some(musig_i); - } - participants.push(key); - map.insert(musig_i, share); - } - - map.remove(&our_musig_i.unwrap()).unwrap(); - - (participants, map) -} - -type DkgConfirmerSigningProtocol<'a, T> = - SigningProtocol<'a, T, (&'static [u8; 12], ValidatorSet, u32)>; - -pub(crate) struct DkgConfirmer<'a, T: DbTxn> { - key: &'a Zeroizing<::F>, - spec: &'a TributarySpec, - txn: &'a mut T, - attempt: u32, -} - -impl DkgConfirmer<'_, T> { - pub(crate) fn new<'a>( - key: &'a Zeroizing<::F>, - spec: &'a TributarySpec, - txn: &'a mut T, - attempt: u32, - ) -> DkgConfirmer<'a, T> { - DkgConfirmer { key, spec, txn, attempt } - } - - fn signing_protocol(&mut self) -> DkgConfirmerSigningProtocol<'_, T> { - let context = (b"DkgConfirmer", self.spec.set(), self.attempt); - SigningProtocol { key: self.key, spec: self.spec, txn: self.txn, context } - } - - fn preprocess_internal(&mut self) -> (AlgorithmSignMachine, [u8; 64]) { - // This preprocesses with just us as we only decide the participants after obtaining - // preprocesses - let participants = vec![::generator() * self.key.deref()]; - self.signing_protocol().preprocess_internal(&participants) - } - // Get the preprocess for this confirmation. - pub(crate) fn preprocess(&mut self) -> [u8; 64] { - self.preprocess_internal().1 - } - - fn share_internal( - &mut self, - preprocesses: HashMap>, - key_pair: &KeyPair, - ) -> Result<(AlgorithmSignatureMachine, [u8; 32]), Participant> { - let (participants, preprocesses) = - threshold_i_map_to_keys_and_musig_i_map(self.spec, self.key, preprocesses); - let msg = set_keys_message(&self.spec.set(), key_pair); - self.signing_protocol().share_internal(&participants, preprocesses, &msg) - } - // Get the share for this confirmation, if the preprocesses are valid. 
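-  // Editor's sketch of the intended call flow (illustrative; `preprocesses` and `shares` are
-  // hypothetical maps keyed by threshold `Participant`):
-  //
-  //   let mut confirmer = DkgConfirmer::new(key, spec, txn, attempt);
-  //   let our_preprocess = confirmer.preprocess();
-  //   // ... exchange preprocesses via the Tributary ...
-  //   let our_share = confirmer.share(preprocesses.clone(), &key_pair)?;
-  //   // ... exchange shares via the Tributary ...
-  //   let signature = confirmer.complete(preprocesses, &key_pair, shares)?;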
- pub(crate) fn share( - &mut self, - preprocesses: HashMap>, - key_pair: &KeyPair, - ) -> Result<[u8; 32], Participant> { - self.share_internal(preprocesses, key_pair).map(|(_, share)| share) - } - - pub(crate) fn complete( - &mut self, - preprocesses: HashMap>, - key_pair: &KeyPair, - shares: HashMap>, - ) -> Result<[u8; 64], Participant> { - assert_eq!(preprocesses.keys().collect::>(), shares.keys().collect::>()); - - let shares = threshold_i_map_to_keys_and_musig_i_map(self.spec, self.key, shares).1; - - let machine = self - .share_internal(preprocesses, key_pair) - .expect("trying to complete a machine which failed to preprocess") - .0; - - DkgConfirmerSigningProtocol::<'_, T>::complete_internal(machine, shares) - } -} diff --git a/coordinator/src/tributary/spec.rs b/coordinator/src/tributary/spec.rs deleted file mode 100644 index efc792e6..00000000 --- a/coordinator/src/tributary/spec.rs +++ /dev/null @@ -1,124 +0,0 @@ -use core::{ops::Range, fmt::Debug}; -use std::{io, collections::HashMap}; - -use transcript::{Transcript, RecommendedTranscript}; - -use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; -use frost::Participant; - -use scale::Encode; -use borsh::{BorshSerialize, BorshDeserialize}; - -use serai_client::validator_sets::primitives::ValidatorSet; - -fn borsh_serialize_validators( - validators: &Vec<(::G, u16)>, - writer: &mut W, -) -> Result<(), io::Error> { - let len = u16::try_from(validators.len()).unwrap(); - BorshSerialize::serialize(&len, writer)?; - for validator in validators { - BorshSerialize::serialize(&validator.0.to_bytes(), writer)?; - BorshSerialize::serialize(&validator.1, writer)?; - } - Ok(()) -} - -fn borsh_deserialize_validators( - reader: &mut R, -) -> Result::G, u16)>, io::Error> { - let len: u16 = BorshDeserialize::deserialize_reader(reader)?; - let mut res = vec![]; - for _ in 0 .. len { - let compressed: [u8; 32] = BorshDeserialize::deserialize_reader(reader)?; - let point = Option::from(::G::from_bytes(&compressed)) - .ok_or_else(|| io::Error::other("invalid point for validator"))?; - let weight: u16 = BorshDeserialize::deserialize_reader(reader)?; - res.push((point, weight)); - } - Ok(res) -} - -#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] -pub struct TributarySpec { - serai_block: [u8; 32], - start_time: u64, - set: ValidatorSet, - #[borsh( - serialize_with = "borsh_serialize_validators", - deserialize_with = "borsh_deserialize_validators" - )] - validators: Vec<(::G, u16)>, - evrf_public_keys: Vec<([u8; 32], Vec)>, -} - -impl TributarySpec { - pub fn new( - serai_block: [u8; 32], - start_time: u64, - set: ValidatorSet, - validators: Vec<(::G, u16)>, - evrf_public_keys: Vec<([u8; 32], Vec)>, - ) -> TributarySpec { - Self { serai_block, start_time, set, validators, evrf_public_keys } - } - - pub fn set(&self) -> ValidatorSet { - self.set - } - - pub fn genesis(&self) -> [u8; 32] { - // Calculate the genesis for this Tributary - let mut genesis = RecommendedTranscript::new(b"Serai Tributary Genesis"); - // This locks it to a specific Serai chain - genesis.append_message(b"serai_block", self.serai_block); - genesis.append_message(b"session", self.set.session.0.to_le_bytes()); - genesis.append_message(b"network", self.set.network.encode()); - let genesis = genesis.challenge(b"genesis"); - let genesis_ref: &[u8] = genesis.as_ref(); - genesis_ref[.. 
32].try_into().unwrap() - } - - pub fn start_time(&self) -> u64 { - self.start_time - } - - pub fn n(&self) -> u16 { - self.validators.iter().map(|(_, weight)| *weight).sum() - } - - pub fn t(&self) -> u16 { - ((2 * self.n()) / 3) + 1 - } - - pub fn i(&self, key: ::G) -> Option> { - let mut all_is = HashMap::new(); - let mut i = 1; - for (validator, weight) in &self.validators { - all_is.insert( - *validator, - Range { start: Participant::new(i).unwrap(), end: Participant::new(i + weight).unwrap() }, - ); - i += weight; - } - - Some(all_is.get(&key)?.clone()) - } - - pub fn reverse_lookup_i(&self, i: Participant) -> Option<::G> { - for (validator, _) in &self.validators { - if self.i(*validator).map_or(false, |range| range.contains(&i)) { - return Some(*validator); - } - } - None - } - - pub fn validators(&self) -> Vec<(::G, u64)> { - self.validators.iter().map(|(validator, weight)| (*validator, u64::from(*weight))).collect() - } - - pub fn evrf_public_keys(&self) -> Vec<([u8; 32], Vec)> { - self.evrf_public_keys.clone() - } -} diff --git a/coordinator/src/tributary/transaction.rs b/coordinator/src/tributary/transaction.rs index fd8126ce..c5d00e30 100644 --- a/coordinator/src/tributary/transaction.rs +++ b/coordinator/src/tributary/transaction.rs @@ -11,10 +11,10 @@ use ciphersuite::{ }; use schnorr::SchnorrSignature; -use scale::{Encode, Decode}; +use scale::Encode; use borsh::{BorshSerialize, BorshDeserialize}; -use serai_client::primitives::PublicKey; +use serai_client::primitives::SeraiAddress; use processor_messages::sign::VariantSignId; @@ -27,33 +27,22 @@ use tributary::{ /// The label for data from a signing protocol. #[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)] -pub enum Label { +pub enum SigningProtocolRound { /// A preprocess. Preprocess, /// A signature share. Share, } -impl Label { +impl SigningProtocolRound { fn nonce(&self) -> u32 { match self { - Label::Preprocess => 0, - Label::Share => 1, + SigningProtocolRound::Preprocess => 0, + SigningProtocolRound::Share => 1, } } } -fn borsh_serialize_public( - public: &PublicKey, - writer: &mut W, -) -> Result<(), io::Error> { - // This doesn't use `encode_to` as `encode_to` panics if the writer returns an error - writer.write_all(&public.encode()) -} -fn borsh_deserialize_public(reader: &mut R) -> Result { - Decode::decode(&mut scale::IoReader(reader)).map_err(io::Error::other) -} - /// `tributary::Signed` without the nonce. /// /// All of our nonces are deterministic to the type of transaction and fields within. @@ -90,11 +79,7 @@ pub enum Transaction { /// A vote to remove a participant for invalid behavior RemoveParticipant { /// The participant to remove - #[borsh( - serialize_with = "borsh_serialize_public", - deserialize_with = "borsh_deserialize_public" - )] - participant: PublicKey, + participant: SeraiAddress, /// The transaction's signer and signature signed: Signed, }, @@ -119,7 +104,7 @@ pub enum Transaction { /// The attempt number of this signing protocol attempt: u32, // The signature share - confirmation_share: [u8; 32], + share: [u8; 32], /// The transaction's signer and signature signed: Signed, }, @@ -128,11 +113,46 @@ pub enum Transaction { /// /// When the time comes to start a new co-signing protocol, the most recent Substrate block will /// be the one selected to be cosigned. 
- CosignSubstrateBlock { - /// THe hash of the Substrate block to sign - hash: [u8; 32], + Cosign { + /// The hash of the Substrate block to sign + substrate_block_hash: [u8; 32], }, + /// The cosign for a Substrate block + /// + /// After producing this cosign, we need to start work on the latest intended-to-be cosigned + /// block. That requires agreement on when this cosign was produced, which we solve by embedding + /// this cosign on chain. + /// + /// We ideally don't have this transaction at all. The coordinator, without access to any of the + /// key shares, could observe the FROST signing session and determine a successful completion. + /// Unfortunately, that functionality is not present in modular-frost, so we do need to support + /// *some* asynchronous flow (where the processor or P2P network informs us of the successful + /// completion). + /// + /// If we use a `Provided` transaction, that requires everyone observe this cosign. + /// + /// If we use an `Unsigned` transaction, we can't verify the cosign signature inside + /// `Transaction::verify` unless we embedded the full `SignedCosign` on-chain. The issue is since + /// a Tributary is stateless with regards to the on-chain logic, including `Transaction::verify`, + /// we can't verify the signature against the group's public key unless we also include that (but + /// then we open a DoS where arbitrary group keys are specified to cause inclusion of arbitrary + /// blobs on chain). + /// + /// If we use a `Signed` transaction, we mitigate the DoS risk by having someone to fatally + /// slash. We have horrible performance though as for 100 validators, all 100 will publish this + /// transaction. + /// + /// We could use a signed `Unsigned` transaction, where it includes a signer and signature but + /// isn't technically a Signed transaction. This lets us de-duplicate the transaction premised on + /// its contents. + /// + /// The optimal choice is likely to use a `Provided` transaction. We don't actually need to + /// observe the produced cosign (which is ephemeral). As long as it's agreed the cosign in + /// question no longer needs to produced, which would mean the cosigning protocol at-large + /// cosigning the block in question, it'd be safe to provide this and move on to the next cosign. + Cosigned { substrate_block_hash: [u8; 32] }, + /// Acknowledge a Substrate block /// /// This is provided after the block has been cosigned. @@ -156,21 +176,14 @@ pub enum Transaction { hash: [u8; 32], }, - /// The local view of slashes observed by the transaction's sender - SlashReport { - /// The slash points accrued by each validator - slash_points: Vec, - /// The transaction's signer and signature - signed: Signed, - }, - + /// Data from a signing protocol. 
Sign { /// The ID of the object being signed id: VariantSignId, /// The attempt number of this signing protocol attempt: u32, /// The label for this data within the signing protocol - label: Label, + label: SigningProtocolRound, /// The data itself /// /// There will be `n` blobs of data where `n` is the amount of key shares the validator sending @@ -179,6 +192,14 @@ pub enum Transaction { /// The transaction's signer and signature signed: Signed, }, + + /// The local view of slashes observed by the transaction's sender + SlashReport { + /// The slash points accrued by each validator + slash_points: Vec, + /// The transaction's signer and signature + signed: Signed, + }, } impl ReadWrite for Transaction { @@ -208,7 +229,8 @@ impl TransactionTrait for Transaction { TransactionKind::Signed((b"DkgConfirmation", attempt).encode(), signed.nonce(1)) } - Transaction::CosignSubstrateBlock { .. } => TransactionKind::Provided("CosignSubstrateBlock"), + Transaction::Cosign { .. } => TransactionKind::Provided("CosignSubstrateBlock"), + Transaction::Cosigned { .. } => TransactionKind::Provided("Cosigned"), Transaction::SubstrateBlock { .. } => TransactionKind::Provided("SubstrateBlock"), Transaction::Batch { .. } => TransactionKind::Provided("Batch"), @@ -240,6 +262,8 @@ impl TransactionTrait for Transaction { impl Transaction { // Sign a transaction + // + // Panics if signing a transaction type which isn't `TransactionKind::Signed` pub fn sign( &mut self, rng: &mut R, @@ -254,7 +278,8 @@ impl Transaction { Transaction::DkgConfirmationPreprocess { ref mut signed, .. } => signed, Transaction::DkgConfirmationShare { ref mut signed, .. } => signed, - Transaction::CosignSubstrateBlock { .. } => panic!("signing CosignSubstrateBlock"), + Transaction::Cosign { .. } => panic!("signing CosignSubstrateBlock"), + Transaction::Cosigned { .. } => panic!("signing Cosigned"), Transaction::SubstrateBlock { .. } => panic!("signing SubstrateBlock"), Transaction::Batch { .. } => panic!("signing Batch"), diff --git a/crypto/dkg/src/evrf/mod.rs b/crypto/dkg/src/evrf/mod.rs index 3d043138..343c6141 100644 --- a/crypto/dkg/src/evrf/mod.rs +++ b/crypto/dkg/src/evrf/mod.rs @@ -106,7 +106,7 @@ pub struct Participation { impl Participation { pub fn read(reader: &mut R, n: u16) -> io::Result { - // TODO: Replace `len` with some calculcation deterministic to the params + // TODO: Replace `len` with some calculation deterministic to the params let mut len = [0; 4]; reader.read_exact(&mut len)?; let len = usize::try_from(u32::from_le_bytes(len)).expect("<32-bit platform?"); diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index 438beb5b..7c964ebc 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -83,7 +83,7 @@ pub mod sign { #[derive(Clone, Copy, PartialEq, Eq, Hash, Encode, Decode, BorshSerialize, BorshDeserialize)] pub enum VariantSignId { Cosign(u64), - Batch(u32), + Batch([u8; 32]), SlashReport, Transaction([u8; 32]), } diff --git a/substrate/primitives/src/account.rs b/substrate/primitives/src/account.rs index 5c77c28f..61940f29 100644 --- a/substrate/primitives/src/account.rs +++ b/substrate/primitives/src/account.rs @@ -52,7 +52,7 @@ pub fn borsh_deserialize_signature( // TODO: Remove this for solely Public? 
#[derive( - Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug, Encode, Decode, MaxEncodedLen, TypeInfo, + Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Debug, Encode, Decode, MaxEncodedLen, TypeInfo, )] #[cfg_attr(feature = "std", derive(Zeroize))] #[cfg_attr(feature = "borsh", derive(BorshSerialize, BorshDeserialize))] From 0a611cb1557a892ee83c3031e76f4c22bbb74fc1 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 3 Jan 2025 06:57:28 -0500 Subject: [PATCH 234/368] Further flesh out tributary scanning Renames `label` to `round` since `Label` was renamed to `SigningProtocolRound`. Adds some more context-less validation to transactions which used to be done within the custom decode function which was simplified via the usage of borsh. Documents in processor-messages where the Coordinator sends each of its messages. --- coordinator/Cargo.toml | 2 +- coordinator/src/tributary/db.rs | 119 +++++++--- coordinator/src/tributary/scan.rs | 217 +++++++++++++++++- coordinator/src/tributary/transaction.rs | 46 +++- processor/messages/src/lib.rs | 28 ++- .../validator-sets/primitives/src/lib.rs | 4 +- 6 files changed, 352 insertions(+), 64 deletions(-) diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index 8dcd4cd5..2af0f822 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -39,7 +39,7 @@ serai-db = { path = "../common/db" } serai-env = { path = "../common/env" } serai-task = { path = "../common/task", version = "0.1" } -processor-messages = { package = "serai-processor-messages", path = "../processor/messages" } +messages = { package = "serai-processor-messages", path = "../processor/messages" } message-queue = { package = "serai-message-queue", path = "../message-queue" } tributary = { package = "tributary-chain", path = "./tributary" } diff --git a/coordinator/src/tributary/db.rs b/coordinator/src/tributary/db.rs index 008cd5c8..31ae2e1e 100644 --- a/coordinator/src/tributary/db.rs +++ b/coordinator/src/tributary/db.rs @@ -5,7 +5,7 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_client::{primitives::SeraiAddress, validator_sets::primitives::ValidatorSet}; -use processor_messages::sign::VariantSignId; +use messages::sign::{VariantSignId, SignId}; use serai_db::*; @@ -13,20 +13,20 @@ use crate::tributary::transaction::SigningProtocolRound; /// A topic within the database which the group participates in #[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)] -pub enum Topic { +pub(crate) enum Topic { /// Vote to remove a participant RemoveParticipant { participant: SeraiAddress }, // DkgParticipation isn't represented here as participations are immediately sent to the // processor, not accumulated within this databse /// Participation in the signing protocol to confirm the DKG results on Substrate - DkgConfirmation { attempt: u32, label: SigningProtocolRound }, + DkgConfirmation { attempt: u32, round: SigningProtocolRound }, /// The local view of the SlashReport, to be aggregated into the final SlashReport SlashReport, /// Participation in a signing protocol - Sign { id: VariantSignId, attempt: u32, label: SigningProtocolRound }, + Sign { id: VariantSignId, attempt: u32, round: SigningProtocolRound }, } enum Participating { @@ -40,13 +40,13 @@ impl Topic { #[allow(clippy::match_same_arms)] match self { Topic::RemoveParticipant { .. 
} => None, - Topic::DkgConfirmation { attempt, label: _ } => Some(Topic::DkgConfirmation { + Topic::DkgConfirmation { attempt, round: _ } => Some(Topic::DkgConfirmation { attempt: attempt + 1, - label: SigningProtocolRound::Preprocess, + round: SigningProtocolRound::Preprocess, }), Topic::SlashReport { .. } => None, - Topic::Sign { id, attempt, label: _ } => { - Some(Topic::Sign { id, attempt: attempt + 1, label: SigningProtocolRound::Preprocess }) + Topic::Sign { id, attempt, round: _ } => { + Some(Topic::Sign { id, attempt: attempt + 1, round: SigningProtocolRound::Preprocess }) } } } @@ -56,27 +56,40 @@ impl Topic { #[allow(clippy::match_same_arms)] match self { Topic::RemoveParticipant { .. } => None, - Topic::DkgConfirmation { attempt, label } => match label { + Topic::DkgConfirmation { attempt, round } => match round { SigningProtocolRound::Preprocess => { let attempt = attempt + 1; Some(( attempt, - Topic::DkgConfirmation { attempt, label: SigningProtocolRound::Preprocess }, + Topic::DkgConfirmation { attempt, round: SigningProtocolRound::Preprocess }, )) } SigningProtocolRound::Share => None, }, Topic::SlashReport { .. } => None, - Topic::Sign { id, attempt, label } => match label { + Topic::Sign { id, attempt, round } => match round { SigningProtocolRound::Preprocess => { let attempt = attempt + 1; - Some((attempt, Topic::Sign { id, attempt, label: SigningProtocolRound::Preprocess })) + Some((attempt, Topic::Sign { id, attempt, round: SigningProtocolRound::Preprocess })) } SigningProtocolRound::Share => None, }, } } + // The SignId for this topic + // + // Returns None if Topic isn't Topic::Sign + pub(crate) fn sign_id(self, set: ValidatorSet) -> Option { + #[allow(clippy::match_same_arms)] + match self { + Topic::RemoveParticipant { .. } => None, + Topic::DkgConfirmation { .. } => None, + Topic::SlashReport { .. } => None, + Topic::Sign { id, attempt, round: _ } => Some(SignId { session: set.session, id, attempt }), + } + } + /// The topic which precedes this topic as a prerequisite /// /// The preceding topic must define this topic as succeeding @@ -84,17 +97,17 @@ impl Topic { #[allow(clippy::match_same_arms)] match self { Topic::RemoveParticipant { .. } => None, - Topic::DkgConfirmation { attempt, label } => match label { + Topic::DkgConfirmation { attempt, round } => match round { SigningProtocolRound::Preprocess => None, SigningProtocolRound::Share => { - Some(Topic::DkgConfirmation { attempt, label: SigningProtocolRound::Preprocess }) + Some(Topic::DkgConfirmation { attempt, round: SigningProtocolRound::Preprocess }) } }, Topic::SlashReport { .. } => None, - Topic::Sign { id, attempt, label } => match label { + Topic::Sign { id, attempt, round } => match round { SigningProtocolRound::Preprocess => None, SigningProtocolRound::Share => { - Some(Topic::Sign { id, attempt, label: SigningProtocolRound::Preprocess }) + Some(Topic::Sign { id, attempt, round: SigningProtocolRound::Preprocess }) } }, } @@ -107,16 +120,16 @@ impl Topic { #[allow(clippy::match_same_arms)] match self { Topic::RemoveParticipant { .. } => None, - Topic::DkgConfirmation { attempt, label } => match label { + Topic::DkgConfirmation { attempt, round } => match round { SigningProtocolRound::Preprocess => { - Some(Topic::DkgConfirmation { attempt, label: SigningProtocolRound::Share }) + Some(Topic::DkgConfirmation { attempt, round: SigningProtocolRound::Share }) } SigningProtocolRound::Share => None, }, Topic::SlashReport { .. 
} => None, - Topic::Sign { id, attempt, label } => match label { + Topic::Sign { id, attempt, round } => match round { SigningProtocolRound::Preprocess => { - Some(Topic::Sign { id, attempt, label: SigningProtocolRound::Share }) + Some(Topic::Sign { id, attempt, round: SigningProtocolRound::Share }) } SigningProtocolRound::Share => None, }, @@ -155,7 +168,7 @@ impl Topic { } /// The resulting data set from an accumulation -pub enum DataSet { +pub(crate) enum DataSet { /// Accumulating this did not produce a data set to act on /// (non-existent, not ready, prior handled, not participating, etc.) None, @@ -187,15 +200,21 @@ create_db!( } ); -pub struct TributaryDb; +db_channel!( + CoordinatorTributary { + ProcessorMessages: (set: ValidatorSet) -> messages::CoordinatorMessage, + } +); + +pub(crate) struct TributaryDb; impl TributaryDb { - pub fn last_handled_tributary_block( + pub(crate) fn last_handled_tributary_block( getter: &impl Get, set: ValidatorSet, ) -> Option<(u64, [u8; 32])> { LastHandledTributaryBlock::get(getter, set) } - pub fn set_last_handled_tributary_block( + pub(crate) fn set_last_handled_tributary_block( txn: &mut impl DbTxn, set: ValidatorSet, block_number: u64, @@ -204,18 +223,35 @@ impl TributaryDb { LastHandledTributaryBlock::set(txn, set, &(block_number, block_hash)); } - pub fn recognize_topic(txn: &mut impl DbTxn, set: ValidatorSet, topic: Topic) { + pub(crate) fn latest_substrate_block_to_cosign( + getter: &impl Get, + set: ValidatorSet, + ) -> Option<[u8; 32]> { + LatestSubstrateBlockToCosign::get(getter, set) + } + pub(crate) fn set_latest_substrate_block_to_cosign( + txn: &mut impl DbTxn, + set: ValidatorSet, + substrate_block_hash: [u8; 32], + ) { + LatestSubstrateBlockToCosign::set(txn, set, &substrate_block_hash); + } + + pub(crate) fn recognize_topic(txn: &mut impl DbTxn, set: ValidatorSet, topic: Topic) { AccumulatedWeight::set(txn, set, topic, &0); } - pub fn start_of_block(txn: &mut impl DbTxn, set: ValidatorSet, block_number: u64) { + pub(crate) fn start_of_block(txn: &mut impl DbTxn, set: ValidatorSet, block_number: u64) { for topic in Reattempt::take(txn, set, block_number).unwrap_or(vec![]) { // TODO: Slash all people who preprocessed but didn't share Self::recognize_topic(txn, set, topic); + if let Some(id) = topic.sign_id(set) { + Self::send_message(txn, set, messages::sign::CoordinatorMessage::Reattempt { id }); + } } } - pub fn fatal_slash( + pub(crate) fn fatal_slash( txn: &mut impl DbTxn, set: ValidatorSet, validator: SeraiAddress, @@ -225,12 +261,16 @@ impl TributaryDb { SlashPoints::set(txn, set, validator, &u64::MAX); } - pub fn is_fatally_slashed(getter: &impl Get, set: ValidatorSet, validator: SeraiAddress) -> bool { + pub(crate) fn is_fatally_slashed( + getter: &impl Get, + set: ValidatorSet, + validator: SeraiAddress, + ) -> bool { SlashPoints::get(getter, set, validator).unwrap_or(0) == u64::MAX } #[allow(clippy::too_many_arguments)] - pub fn accumulate( + pub(crate) fn accumulate( txn: &mut impl DbTxn, set: ValidatorSet, validators: &[SeraiAddress], @@ -286,7 +326,8 @@ impl TributaryDb { // Check if we now cross the weight threshold if accumulated_weight >= topic.required_participation(total_weight) { // Queue this for re-attempt after enough time passes - if let Some((attempt, reattempt_topic)) = topic.reattempt_topic() { + let reattempt_topic = topic.reattempt_topic(); + if let Some((attempt, reattempt_topic)) = reattempt_topic { // 5 minutes #[cfg(not(feature = "longer-reattempts"))] const BASE_REATTEMPT_DELAY: u32 = @@ -316,15 
+357,11 @@ impl TributaryDb {
       let mut data_set = HashMap::with_capacity(validators.len());
       for validator in validators {
         if let Some(data) = Accumulated::<D>::get(txn, set, topic, *validator) {
-          // Clean this data up if there's not a succeeding topic
-          // If there is, we wait as the succeeding topic checks our participation in this topic
-          if succeeding_topic.is_none() {
+          // Clean this data up if there's not a re-attempt topic
+          // If there is a re-attempt topic, we clean it up upon re-attempt
+          if reattempt_topic.is_none() {
             Accumulated::<D>::del(txn, set, topic, *validator);
           }
-          // If this *was* the succeeding topic, clean up the preceding topic's data
-          if let Some(preceding_topic) = preceding_topic {
-            Accumulated::<D>::del(txn, set, preceding_topic, *validator);
-          }
           data_set.insert(*validator, data);
         }
       }
@@ -343,4 +380,12 @@ impl TributaryDb {
       DataSet::None
     }
   }
+
+  pub(crate) fn send_message(
+    txn: &mut impl DbTxn,
+    set: ValidatorSet,
+    message: impl Into<messages::CoordinatorMessage>,
+  ) {
+    ProcessorMessages::send(txn, set, &message.into());
+  }
 }
diff --git a/coordinator/src/tributary/scan.rs b/coordinator/src/tributary/scan.rs
index 47e1103d..1a53bdda 100644
--- a/coordinator/src/tributary/scan.rs
+++ b/coordinator/src/tributary/scan.rs
@@ -3,10 +3,13 @@ use std::collections::HashMap;

 use ciphersuite::group::GroupEncoding;

-use serai_client::{primitives::SeraiAddress, validator_sets::primitives::ValidatorSet};
+use serai_client::{
+  primitives::SeraiAddress,
+  validator_sets::primitives::{ValidatorSet, Slash},
+};

 use tributary::{
-  Signed as TributarySigned, TransactionError, TransactionKind, TransactionTrait,
+  Signed as TributarySigned, TransactionKind, TransactionTrait,
   Transaction as TributaryTransaction, Block, TributaryReader,
   tendermint::{
     tx::{TendermintTx, Evidence, decode_signed_message},
@@ -17,9 +20,11 @@ use tributary::{
 use serai_db::*;
 use serai_task::ContinuallyRan;

+use messages::sign::VariantSignId;
+
 use crate::tributary::{
   db::*,
-  transaction::{Signed, Transaction},
+  transaction::{SigningProtocolRound, Signed, Transaction},
 };

 struct ScanBlock<'a, D: DbTxn, TD: Db> {
@@ -43,9 +48,21 @@ impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> {
     }

     match tx {
+      // Accumulate this vote and fatally slash the participant if past the threshold
       Transaction::RemoveParticipant { participant, signed } => {
-        // Accumulate this vote and fatally slash the participant if past the threshold
         let signer = signer(signed);
+
+        // Check that the participant voted to be removed actually exists
+        if !self.validators.iter().any(|validator| *validator == participant) {
+          TributaryDb::fatal_slash(
+            self.txn,
+            self.set,
+            signer,
+            "voted to remove non-existent participant",
+          );
+          return;
+        }
+
         match TributaryDb::accumulate(
           self.txn,
           self.set,
@@ -59,14 +76,22 @@ impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> {
         ) {
           DataSet::None => {}
           DataSet::Participating(_) => {
-            TributaryDb::fatal_slash(self.txn, self.set, participant, "voted to remove")
+            TributaryDb::fatal_slash(self.txn, self.set, participant, "voted to remove");
           }
-        }
+        };
       }
+      // Send the participation to the processor
       Transaction::DkgParticipation { participation, signed } => {
-        // Send the participation to the processor
-        todo!("TODO")
+        TributaryDb::send_message(
+          self.txn,
+          self.set,
+          messages::key_gen::CoordinatorMessage::Participation {
+            session: self.set.session,
+            participant: todo!("TODO"),
+            participation,
+          },
+        );
       }
       Transaction::DkgConfirmationPreprocess { attempt, preprocess, signed } => {
        // Accumulate the preprocesses into our own FROST attempt
manager @@ -79,11 +104,42 @@ impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> { Transaction::Cosign { substrate_block_hash } => { // Update the latest intended-to-be-cosigned Substrate block - todo!("TODO") + TributaryDb::set_latest_substrate_block_to_cosign(self.txn, self.set, substrate_block_hash); + + // TODO: If we aren't currently cosigning a block, start cosigning this one } Transaction::Cosigned { substrate_block_hash } => { // Start cosigning the latest intended-to-be-cosigned block - todo!("TODO") + let Some(latest_substrate_block_to_cosign) = + TributaryDb::latest_substrate_block_to_cosign(self.txn, self.set) + else { + return; + }; + // If this is the block we just cosigned, return + if latest_substrate_block_to_cosign == substrate_block_hash { + return; + } + let substrate_block_number = todo!("TODO"); + // Whitelist the topic + TributaryDb::recognize_topic( + self.txn, + self.set, + Topic::Sign { + id: VariantSignId::Cosign(substrate_block_number), + attempt: 0, + round: SigningProtocolRound::Preprocess, + }, + ); + // Send the message for the processor to start signing + TributaryDb::send_message( + self.txn, + self.set, + messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { + session: self.set.session, + block_number: substrate_block_number, + block: substrate_block_hash, + }, + ); } Transaction::SubstrateBlock { hash } => { // Whitelist all of the IDs this Substrate block causes to be signed @@ -95,11 +151,148 @@ impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> { } Transaction::SlashReport { slash_points, signed } => { + let signer = signer(signed); + + if slash_points.len() != self.validators.len() { + TributaryDb::fatal_slash( + self.txn, + self.set, + signer, + "slash report was for a distinct amount of signers", + ); + return; + } + // Accumulate, and if past the threshold, calculate *the* slash report and start signing it - todo!("TODO") + match TributaryDb::accumulate( + self.txn, + self.set, + self.validators, + self.total_weight, + block_number, + Topic::SlashReport, + signer, + self.validator_weights[&signer], + &slash_points, + ) { + DataSet::None => {} + DataSet::Participating(data_set) => { + // Find the median reported slashes for this validator + // TODO: This lets 34% perform a fatal slash. Should that be allowed? + let mut median_slash_report = Vec::with_capacity(self.validators.len()); + for i in 0 .. 
self.validators.len() {
+                let mut this_validator =
+                  data_set.values().map(|report| report[i]).collect::<Vec<_>>();
+                this_validator.sort_unstable();
+                // Choose the median, where if there are two median values, the lower one is chosen
+                let median_index = if (this_validator.len() % 2) == 1 {
+                  this_validator.len() / 2
+                } else {
+                  (this_validator.len() / 2) - 1
+                };
+                median_slash_report.push(this_validator[median_index]);
+              }
+
+              // We only publish slashes for the `f` worst performers to:
+              // 1) Effect amnesty if there were network disruptions which affected everyone
+              // 2) Ensure the signing threshold doesn't have a disincentive to do their job
+
+              // Find the worst performer within the signing threshold's slash points
+              let f = (self.validators.len() - 1) / 3;
+              let worst_validator_in_supermajority_slash_points = {
+                let mut sorted_slash_points = median_slash_report.clone();
+                sorted_slash_points.sort_unstable();
+                // This won't be a valid index if `f == 0`, which means we don't have any validators
+                // to slash
+                let index_of_first_validator_to_slash = self.validators.len() - f;
+                let index_of_worst_validator_in_supermajority = index_of_first_validator_to_slash - 1;
+                sorted_slash_points[index_of_worst_validator_in_supermajority]
+              };
+
+              // Perform the amortization
+              for slash_points in &mut median_slash_report {
+                *slash_points =
+                  slash_points.saturating_sub(worst_validator_in_supermajority_slash_points)
+              }
+              let amortized_slash_report = median_slash_report;
+
+              // Create the resulting slash report
+              let mut slash_report = vec![];
+              for (validator, points) in self.validators.iter().copied().zip(amortized_slash_report) {
+                if points != 0 {
+                  slash_report.push(Slash { key: validator.into(), points });
+                }
+              }
+              assert!(slash_report.len() <= f);
+
+              // Recognize the topic for signing the slash report
+              TributaryDb::recognize_topic(
+                self.txn,
+                self.set,
+                Topic::Sign {
+                  id: VariantSignId::SlashReport,
+                  attempt: 0,
+                  round: SigningProtocolRound::Preprocess,
+                },
+              );
+              // Send the message for the processor to start signing
+              TributaryDb::send_message(
+                self.txn,
+                self.set,
+                messages::coordinator::CoordinatorMessage::SignSlashReport {
+                  session: self.set.session,
+                  report: slash_report,
+                },
+              );
+            }
+          };
       }
-      Transaction::Sign { id, attempt, label, data, signed } => todo!("TODO"),
+      Transaction::Sign { id, attempt, round, data, signed } => {
+        let topic = Topic::Sign { id, attempt, round };
+        let signer = signer(signed);
+
+        if u64::try_from(data.len()).unwrap() != self.validator_weights[&signer] {
+          TributaryDb::fatal_slash(
+            self.txn,
+            self.set,
+            signer,
+            "signer signed with a distinct amount of data than their key shares",
+          );
+          return;
+        }
+
+        match TributaryDb::accumulate(
+          self.txn,
+          self.set,
+          self.validators,
+          self.total_weight,
+          block_number,
+          topic,
+          signer,
+          self.validator_weights[&signer],
+          &data,
+        ) {
+          DataSet::None => {}
+          DataSet::Participating(data_set) => {
+            let id = topic.sign_id(self.set).expect("Topic::Sign didn't have SignId");
+            let flatten_data_set = |data_set| todo!("TODO");
+            let data_set = flatten_data_set(data_set);
+            TributaryDb::send_message(
+              self.txn,
+              self.set,
+              match round {
+                SigningProtocolRound::Preprocess => {
+                  messages::sign::CoordinatorMessage::Preprocesses { id, preprocesses: data_set }
+                }
+                SigningProtocolRound::Share => {
+                  messages::sign::CoordinatorMessage::Shares { id, shares: data_set }
+                }
+              },
+            )
+          }
+        };
+      }
     }
   }
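An aside on the median-and-amortize logic above, using hypothetical numbers: with seven
validators (so `f = 2`) and per-validator median slash points as below, the computation runs:

    // median_slash_report:      [0, 5, 10, 20, 50, 3, 8]
    // sorted:                   [0, 3, 5, 8, 10, 20, 50]
    // index_of_first_validator_to_slash = 7 - f = 5, so the worst validator within the
    // supermajority has sorted[4] = 10 slash points
    // after saturating_sub(10): [0, 0, 0, 10, 40, 0, 0]
    // only two validators keep non-zero points, satisfying the `slash_report.len() <= f` assertion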
diff --git a/coordinator/src/tributary/transaction.rs b/coordinator/src/tributary/transaction.rs
index c5d00e30..65391296 100644
--- a/coordinator/src/tributary/transaction.rs
+++ b/coordinator/src/tributary/transaction.rs
@@ -14,9 +14,9 @@ use schnorr::SchnorrSignature;
 use scale::Encode;
 use borsh::{BorshSerialize, BorshDeserialize};

-use serai_client::primitives::SeraiAddress;
+use serai_client::{primitives::SeraiAddress, validator_sets::primitives::MAX_KEY_SHARES_PER_SET};

-use processor_messages::sign::VariantSignId;
+use messages::sign::VariantSignId;

 use tributary::{
   ReadWrite,
@@ -25,7 +25,7 @@ use tributary::{
   },
 };

-/// The label for data from a signing protocol.
+/// The round this data is for, within a signing protocol.
 #[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)]
 pub enum SigningProtocolRound {
   /// A preprocess.
@@ -182,8 +182,8 @@ pub enum Transaction {
     id: VariantSignId,
     /// The attempt number of this signing protocol
     attempt: u32,
-    /// The label for this data within the signing protocol
-    label: SigningProtocolRound,
+    /// The round this data is for, within the signing protocol
+    round: SigningProtocolRound,
     /// The data itself
     ///
     /// There will be `n` blobs of data where `n` is the amount of key shares the validator sending
@@ -234,8 +234,8 @@ impl TransactionTrait for Transaction {
       Transaction::SubstrateBlock { .. } => TransactionKind::Provided("SubstrateBlock"),
       Transaction::Batch { .. } => TransactionKind::Provided("Batch"),

-      Transaction::Sign { id, attempt, label, signed, .. } => {
-        TransactionKind::Signed((b"Sign", id, attempt).encode(), signed.nonce(label.nonce()))
+      Transaction::Sign { id, attempt, round, signed, .. } => {
+        TransactionKind::Signed((b"Sign", id, attempt).encode(), signed.nonce(round.nonce()))
       }

       Transaction::SlashReport { signed, .. } => {
@@ -253,9 +253,37 @@ impl TransactionTrait for Transaction {
     Blake2b::<U32>::digest(&tx).into()
   }

-  // We don't have any verification logic embedded into the transaction. We just slash anyone who
-  // publishes an invalid transaction.
+  // This is a stateless verification which we use to enforce some size limits.
   fn verify(&self) -> Result<(), TransactionError> {
+    #[allow(clippy::match_same_arms)]
+    match self {
+      // Fixed-length TX
+      Transaction::RemoveParticipant { .. } => {}
+
+      // TODO: MAX_DKG_PARTICIPATION_LEN
+      Transaction::DkgParticipation { .. } => {}
+      // These are fixed-length TXs
+      Transaction::DkgConfirmationPreprocess { .. } | Transaction::DkgConfirmationShare { .. } => {}
+
+      // Provided TXs
+      Transaction::Cosign { .. } |
+      Transaction::Cosigned { .. } |
+      Transaction::SubstrateBlock { .. } |
+      Transaction::Batch { .. } => {}
+
+      Transaction::Sign { data, .. } => {
+        if data.len() > usize::try_from(MAX_KEY_SHARES_PER_SET).unwrap() {
+          Err(TransactionError::InvalidContent)?
+        }
+        // TODO: MAX_SIGN_LEN
+      }
+
+      Transaction::SlashReport { slash_points, .. } => {
+        if slash_points.len() > usize::try_from(MAX_KEY_SHARES_PER_SET).unwrap() {
+          Err(TransactionError::InvalidContent)?
+        }
+      }
+    };
     Ok(())
   }
 }
diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs
index 7c964ebc..ee6ed8ac 100644
--- a/processor/messages/src/lib.rs
+++ b/processor/messages/src/lib.rs
@@ -22,9 +22,13 @@ pub mod key_gen {
   #[derive(Clone, PartialEq, Eq, BorshSerialize, BorshDeserialize)]
   pub enum CoordinatorMessage {
-    // Instructs the Processor to begin the key generation process.
+    /// Instructs the Processor to begin the key generation process.
+    ///
+    /// This is sent by the Coordinator when it creates the Tributary (TODO).
     GenerateKey { session: Session, threshold: u16, evrf_public_keys: Vec<([u8; 32], Vec<u8>)> },
-    // Received participations for the specified key generation protocol.
+    /// Received participations for the specified key generation protocol.
+    ///
+    /// This is sent by the Coordinator's Tributary scanner.
     Participation { session: Session, participant: Participant, participation: Vec<u8> },
   }
@@ -113,11 +117,17 @@ pub mod sign {
   #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
   pub enum CoordinatorMessage {
-    // Received preprocesses for the specified signing protocol.
+    /// Received preprocesses for the specified signing protocol.
+    ///
+    /// This is sent by the Coordinator's Tributary scanner.
     Preprocesses { id: SignId, preprocesses: HashMap<Participant, Vec<u8>> },
-    // Received shares for the specified signing protocol.
+    /// Received shares for the specified signing protocol.
+    ///
+    /// This is sent by the Coordinator's Tributary scanner.
     Shares { id: SignId, shares: HashMap<Participant, Vec<u8>> },
-    // Re-attempt a signing protocol.
+    /// Re-attempt a signing protocol.
+    ///
+    /// This is sent by the Coordinator's Tributary re-attempt scheduling logic.
     Reattempt { id: SignId },
   }
@@ -157,7 +167,13 @@ pub mod coordinator {
   #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
   pub enum CoordinatorMessage {
+    /// Cosign the specified Substrate block.
+    ///
+    /// This is sent by the Coordinator's Tributary scanner.
     CosignSubstrateBlock { session: Session, block_number: u64, block: [u8; 32] },
+    /// Sign the slash report for this session.
+    ///
+    /// This is sent by the Coordinator's Tributary scanner.
     SignSlashReport { session: Session, report: Vec<Slash> },
   }
@@ -196,12 +212,18 @@ pub mod substrate {
   #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
   pub enum CoordinatorMessage {
     /// Keys set on the Serai blockchain.
+    ///
+    /// This is sent by the Coordinator's Substrate canonical event stream.
     SetKeys { serai_time: u64, session: Session, key_pair: KeyPair },
     /// Slashes reported on the Serai blockchain OR the process timed out.
     ///
     /// This is the final message for a session.
+    ///
+    /// This is sent by the Coordinator's Substrate canonical event stream.
     SlashesReported { session: Session },
     /// A block from Serai with relevance to this processor.
+    ///
+    /// This is sent by the Coordinator's Substrate canonical event stream.
Block { serai_block_number: u64, batch: Option, diff --git a/substrate/validator-sets/primitives/src/lib.rs b/substrate/validator-sets/primitives/src/lib.rs index 341d211f..fe78fbca 100644 --- a/substrate/validator-sets/primitives/src/lib.rs +++ b/substrate/validator-sets/primitives/src/lib.rs @@ -114,8 +114,8 @@ pub struct Slash { deserialize_with = "serai_primitives::borsh_deserialize_public" ) )] - key: Public, - points: u32, + pub key: Public, + pub points: u32, } #[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, TypeInfo, MaxEncodedLen)] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] From ce676efb1fa59d4fade186d200feee5feeb03647 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 3 Jan 2025 07:01:06 -0500 Subject: [PATCH 235/368] cargo update --- Cargo.lock | 271 +++++++++++++++++++++++++++-------------------------- 1 file changed, 136 insertions(+), 135 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index c2dc5284..a0d77739 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -112,9 +112,9 @@ dependencies = [ [[package]] name = "alloy-consensus" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "db66918860ff33920fb9e6d648d1e8cee275321406ea255ac9320f6562e26fec" +checksum = "f4138dc275554afa6f18c4217262ac9388790b2fc393c2dfe03c51d357abf013" dependencies = [ "alloy-eips", "alloy-primitives", @@ -130,9 +130,9 @@ dependencies = [ [[package]] name = "alloy-consensus-any" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "04519b5157de8a2166bddb07d84a63590100f1d3e2b3682144e787f1c27ccdac" +checksum = "0fa04e1882c31288ce1028fdf31b6ea94cfa9eafa2e497f903ded631c8c6a42c" dependencies = [ "alloy-consensus", "alloy-eips", @@ -144,9 +144,9 @@ dependencies = [ [[package]] name = "alloy-core" -version = "0.8.15" +version = "0.8.16" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c618bd382f0bc2ac26a7e4bfae01c9b015ca8f21b37ca40059ae35a7e62b3dc6" +checksum = "5e3fdddfc89197319b1be19875a70ced62a72bebb67e2276dad688cd59f40e70" dependencies = [ "alloy-primitives", ] @@ -177,9 +177,9 @@ dependencies = [ [[package]] name = "alloy-eips" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e56518f46b074d562ac345238343e2231b672a13aca18142d285f95cc055980b" +checksum = "52dd5869ed09e399003e0e0ec6903d981b2a92e74c5d37e6b40890bad2517526" dependencies = [ "alloy-eip2930", "alloy-eip7702", @@ -195,9 +195,9 @@ dependencies = [ [[package]] name = "alloy-genesis" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2cf200fd4c28435995e47b26d4761a4cf6e1011a13b81f9a9afaf16a93d9fd09" +checksum = "e7d2a7fe5c1a9bd6793829ea21a636f30fc2b3f5d2e7418ba86d96e41dd1f460" dependencies = [ "alloy-eips", "alloy-primitives", @@ -208,9 +208,9 @@ dependencies = [ [[package]] name = "alloy-json-rpc" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b17c5ada5faf0f9d2921e8b20971eced68abbc92a272b0502cac8b1d00f56777" +checksum = "2008bedb8159a255b46b7c8614516eda06679ea82f620913679afbd8031fea72" dependencies = [ "alloy-primitives", "alloy-sol-types", @@ -222,9 +222,9 @@ dependencies = [ [[package]] name = "alloy-network" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"24f3117647e3262f6db9e18b371bf67c5810270c0cf915786c30fad3b1739561" +checksum = "4556f01fe41d0677495df10a648ddcf7ce118b0e8aa9642a0e2b6dd1fb7259de" dependencies = [ "alloy-consensus", "alloy-consensus-any", @@ -247,9 +247,9 @@ dependencies = [ [[package]] name = "alloy-network-primitives" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1535a4577648ec2fd3c446d4644d9b8e9e01e5816be53a5d515dc1624e2227b2" +checksum = "f31c3c6b71340a1d076831823f09cb6e02de01de5c6630a9631bdb36f947ff80" dependencies = [ "alloy-consensus", "alloy-eips", @@ -260,9 +260,9 @@ dependencies = [ [[package]] name = "alloy-node-bindings" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bf741e871fb62c80e0007041e8bc1e81978abfd98aafea8354472f06bfd4d309" +checksum = "4520cd4bc5cec20c32c98e4bc38914c7fb96bf4a712105e44da186a54e65e3ba" dependencies = [ "alloy-genesis", "alloy-primitives", @@ -277,9 +277,9 @@ dependencies = [ [[package]] name = "alloy-primitives" -version = "0.8.15" +version = "0.8.16" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6259a506ab13e1d658796c31e6e39d2e2ee89243bcc505ddc613b35732e0a430" +checksum = "0540fd0355d400b59633c27bd4b42173e59943f28e9d3376b77a24771d432d04" dependencies = [ "alloy-rlp", "bytes", @@ -305,9 +305,9 @@ dependencies = [ [[package]] name = "alloy-provider" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fcfa2db03d4221b5ca14bff7dbed4712689cb87a3e826af522468783ff05ec5d" +checksum = "5a22c4441b3ebe2d77fa9cf629ba68c3f713eb91779cff84275393db97eddd82" dependencies = [ "alloy-chains", "alloy-consensus", @@ -356,14 +356,14 @@ checksum = "5a833d97bf8a5f0f878daf2c8451fff7de7f9de38baa5a45d936ec718d81255a" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] name = "alloy-rpc-client" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d2ec6963b08f1c6ef8eacc01dbba20f2c6a1533550403f6b52dbbe0da0360834" +checksum = "d06a292b37e182e514903ede6e623b9de96420e8109ce300da288a96d88b7e4b" dependencies = [ "alloy-json-rpc", "alloy-primitives", @@ -382,9 +382,9 @@ dependencies = [ [[package]] name = "alloy-rpc-types-any" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c64a83112b09bd293ef522bfa3800fa2d2df4d72f2bcd3a84b08490503b22e55" +checksum = "ca445cef0eb6c2cf51cfb4e214fbf1ebd00893ae2e6f3b944c8101b07990f988" dependencies = [ "alloy-consensus-any", "alloy-rpc-types-eth", @@ -393,9 +393,9 @@ dependencies = [ [[package]] name = "alloy-rpc-types-eth" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5fc1892a1ac0d2a49c063f0791aa6bde342f020c5d37aaaec14832b661802cb4" +checksum = "0938bc615c02421bd86c1733ca7205cc3d99a122d9f9bff05726bd604b76a5c2" dependencies = [ "alloy-consensus", "alloy-consensus-any", @@ -413,9 +413,9 @@ dependencies = [ [[package]] name = "alloy-serde" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "17939f6bef49268e4494158fce1ab8913cd6164ec3f9a4ada2c677b9b5a77f2f" +checksum = "ae0465c71d4dced7525f408d84873aeebb71faf807d22d74c4a426430ccd9b55" dependencies = [ "alloy-primitives", "serde", @@ -424,9 +424,9 @@ dependencies = [ [[package]] name = "alloy-signer" -version = "0.9.0" 
+version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "77d1f0762a44338f0e05987103bd5919df52170d949080bfebfeb6aaaa867c39" +checksum = "9bfa395ad5cc952c82358d31e4c68b27bf4a89a5456d9b27e226e77dac50e4ff" dependencies = [ "alloy-primitives", "async-trait", @@ -449,23 +449,23 @@ dependencies = [ [[package]] name = "alloy-sol-macro" -version = "0.8.15" +version = "0.8.16" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d9d64f851d95619233f74b310f12bcf16e0cbc27ee3762b6115c14a84809280a" +checksum = "c6d1a14b4a9f6078ad9132775a2ebb465b06b387d60f7413ddc86d7bf7453408" dependencies = [ "alloy-sol-macro-expander", "alloy-sol-macro-input", "proc-macro-error2", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] name = "alloy-sol-macro-expander" -version = "0.8.15" +version = "0.8.16" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6bf7ed1574b699f48bf17caab4e6e54c6d12bc3c006ab33d58b1e227c1c3559f" +checksum = "4436b4b96d265eb17daea26eb31525c3076d024d10901e446790afbd2f7eeaf5" dependencies = [ "alloy-sol-macro-input", "const-hex", @@ -474,31 +474,31 @@ dependencies = [ "proc-macro-error2", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", "syn-solidity", "tiny-keccak", ] [[package]] name = "alloy-sol-macro-input" -version = "0.8.15" +version = "0.8.16" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8c02997ccef5f34f9c099277d4145f183b422938ed5322dc57a089fe9b9ad9ee" +checksum = "e5f58698a18b96faa8513519de112b79a96010b4ff84264ce54a217c52a8e98b" dependencies = [ "const-hex", "dunce", "heck 0.5.0", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", "syn-solidity", ] [[package]] name = "alloy-sol-types" -version = "0.8.15" +version = "0.8.16" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1174cafd6c6d810711b4e00383037bdb458efc4fe3dbafafa16567e0320c54d8" +checksum = "c766e4979fc19d70057150befe8e3ea3f0c4cbc6839b8eaaa250803451692305" dependencies = [ "alloy-primitives", "alloy-sol-macro", @@ -507,9 +507,9 @@ dependencies = [ [[package]] name = "alloy-transport" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3a3827275a4eed3431ce876a59c76fd19effc2a8c09566b2603e3a3376d38af0" +checksum = "d17722a198f33bbd25337660787aea8b8f57814febb7c746bc30407bdfc39448" dependencies = [ "alloy-json-rpc", "base64 0.22.1", @@ -527,9 +527,9 @@ dependencies = [ [[package]] name = "alloy-transport-http" -version = "0.9.0" +version = "0.9.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "958417ddf333c55b0627cb7fbee7c6666895061dee79f50404dd6dbdd8e9eba0" +checksum = "6e1509599021330a31c4a6816b655e34bf67acb1cc03c564e09fd8754ff6c5de" dependencies = [ "alloy-transport", "url", @@ -537,9 +537,9 @@ dependencies = [ [[package]] name = "alloy-trie" -version = "0.7.7" +version = "0.7.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1e428104b2445a4f929030891b3dbf8c94433a8349ba6480946bf6af7975c2f6" +checksum = "6917c79e837aa7b77b7a6dae9f89cbe15313ac161c4d3cfaf8909ef21f3d22d8" dependencies = [ "alloy-primitives", "alloy-rlp", @@ -889,18 +889,18 @@ checksum = "c7c24de15d275a1ecfd47a380fb4d5ec9bfe0933f309ed5e705b775596a3574d" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] name = "async-trait" -version = "0.1.83" +version = "0.1.84" source = "registry+https://github.com/rust-lang/crates.io-index" 
-checksum = "721cae7de5c34fbb2acd27e21e6d2cf7b886dce0c27388d46c4e6c47ea4318dd" +checksum = "1b1244b10dcd56c92219da4e14caa97e312079e185f04ba3eea25061561dc0a0" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -935,7 +935,7 @@ checksum = "3c87f3f15e7794432337fc718554eaa4dc8f04c9677a950ffe366f20a162ae42" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -1046,7 +1046,7 @@ dependencies = [ "regex", "rustc-hash 1.1.0", "shlex", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -1322,7 +1322,7 @@ dependencies = [ "proc-macro-crate 3.2.0", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -1348,9 +1348,9 @@ dependencies = [ [[package]] name = "bstr" -version = "1.11.1" +version = "1.11.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "786a307d683a5bf92e6fd5fd69a7eb613751668d1d8d67d802846dfe367c62c8" +checksum = "531a9155a481e2ee699d4f98f43c0ca4ff8ee1bfd55c31e9e98fb29d2b176fe0" dependencies = [ "memchr", "serde", @@ -1635,7 +1635,7 @@ dependencies = [ "heck 0.5.0", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -1973,7 +1973,7 @@ checksum = "f46882e17999c6cc590af592290432be3bce0428cb0d5f8b6715e4dc7b383eb3" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -2001,7 +2001,7 @@ dependencies = [ "proc-macro2", "quote", "scratch", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -2019,7 +2019,7 @@ dependencies = [ "proc-macro2", "quote", "rustversion", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -2160,7 +2160,7 @@ checksum = "cb7330aeadfbe296029522e6c40f315320aba36fc43a5b3632f3795348f3bd22" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", "unicode-xid", ] @@ -2240,7 +2240,7 @@ checksum = "97369cbbc041bc366949bc74d34658d6cda5621039731c6310521892a3a20ae0" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -2493,7 +2493,7 @@ dependencies = [ "heck 0.5.0", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -2528,7 +2528,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "33d852cb9b869c2a9b3df2f71a3074817f01e1844f839a144f5fcef059a4eb5d" dependencies = [ "libc", - "windows-sys 0.59.0", + "windows-sys 0.52.0", ] [[package]] @@ -2597,7 +2597,7 @@ dependencies = [ "fs-err", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -2878,7 +2878,7 @@ dependencies = [ "proc-macro-warning", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -2890,7 +2890,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -2900,7 +2900,7 @@ source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46 dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -3059,7 +3059,7 @@ checksum = "162ee34ebcb7c64a8abebc059ce0fee27c2262618d7b60ed8faf72fef13c3650" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -3260,9 +3260,9 @@ checksum = "07e28edb80900c19c28f1072f2e8aeca7fa06b23cd4169cefe1af5aa3260783f" [[package]] name = "glob" -version = "0.3.1" +version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d2fabcfbdc87f4758337ca535fb41a6d701b65693ce38287d856d1674551ec9b" +checksum = "a8d1add55171497b4705a648c6b583acafb01d58050a51727785f0b2c8e0a2b2" [[package]] name = "globset" @@ -3548,7 +3548,7 
@@ dependencies = [ "httpdate", "itoa", "pin-project-lite", - "socket2 0.5.8", + "socket2 0.4.10", "tokio", "tower-service", "tracing", @@ -3795,7 +3795,7 @@ checksum = "a0eb5a3343abf848c0984fe4604b2b105da9539376e24fc0a3b0007411ae4fd9" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -4112,7 +4112,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fc2f4eb4bc735547cfed7c0a4922cbd04a4655978c09b54f1f7b228750664c34" dependencies = [ "cfg-if", - "windows-targets 0.52.6", + "windows-targets 0.48.5", ] [[package]] @@ -4486,7 +4486,7 @@ dependencies = [ "proc-macro-warning", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -4749,7 +4749,7 @@ dependencies = [ "macro_magic_core", "macro_magic_macros", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -4763,7 +4763,7 @@ dependencies = [ "macro_magic_core_macros", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -4774,7 +4774,7 @@ checksum = "d710e1214dffbab3b5dacb21475dde7d6ed84c69ff722b3a47a782668d44fbac" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -4785,7 +4785,7 @@ checksum = "b8fb85ec1620619edf2984a7693497d4ec88a9665d8b87e942856884c92dbf2a" dependencies = [ "macro_magic_core", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -5589,14 +5589,14 @@ checksum = "af1844ef2428cc3e1cb900be36181049ef3d3193c63e43026cfe202983b27a56" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] name = "nybbles" -version = "0.3.0" +version = "0.3.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "55a62e678a89501192cc5ebf47dcbc656b608ae5e1c61c9251fe35230f119fe3" +checksum = "a3409fc85ac27b27d971ea7cd1aabafd2eefa6de7e481c8d4f707225c117e81a" dependencies = [ "const-hex", "serde", @@ -6041,7 +6041,7 @@ checksum = "3c0f5fad0874fc7abcd4d750e76917eaebbecaa2c20bde22e1dbeeba8beb758c" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -6263,7 +6263,7 @@ dependencies = [ "proc-macro-error-attr2", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -6274,7 +6274,7 @@ checksum = "3d1eaa7fa0aa1929ffdf7eeb6eac234dde6268914a14ad44d23521ab6a9b258e" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -6320,7 +6320,7 @@ checksum = "440f724eba9f6996b75d63681b0a92b06947f1457076d503a4d2e2c8f56442b8" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -6631,7 +6631,7 @@ checksum = "bcc303e793d3734489387d205e9b186fac9c6cfacedd98cbb2e8a5943595f3e6" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -6907,7 +6907,7 @@ dependencies = [ "errno", "libc", "linux-raw-sys", - "windows-sys 0.59.0", + "windows-sys 0.52.0", ] [[package]] @@ -6977,9 +6977,9 @@ dependencies = [ [[package]] name = "rustversion" -version = "1.0.18" +version = "1.0.19" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0e819f2bc632f285be6d7cd36e25940d45b2391dd6d9b939e79de557f7014248" +checksum = "f7c45b9784283f1b2e7fb61b42047c2fd678ef0960d4f6f1eba131594cc369d4" [[package]] name = "rusty-fork" @@ -7132,7 +7132,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -7893,7 +7893,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -7976,7 +7976,7 @@ 
dependencies = [ "proc-macro-crate 3.2.0", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -7990,9 +7990,9 @@ dependencies = [ [[package]] name = "schnellru" -version = "0.2.3" +version = "0.2.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c9a8ef13a93c54d20580de1e5c413e624e53121d42fc7e2c11d10ef7f8b02367" +checksum = "356285bbf17bea63d9e52e96bd18f039672ac92b55b8cb997d6162a2a37d1649" dependencies = [ "ahash", "cfg-if", @@ -8891,7 +8891,7 @@ dependencies = [ "serai-processor-ethereum-deployer", "serai-processor-ethereum-erc20", "serai-processor-ethereum-primitives", - "syn 2.0.91", + "syn 2.0.94", "syn-solidity", "tokio", ] @@ -9236,9 +9236,9 @@ dependencies = [ [[package]] name = "serde" -version = "1.0.216" +version = "1.0.217" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0b9781016e935a97e8beecf0c933758c97a5520d32930e460142b4cd80c6338e" +checksum = "02fc4265df13d6fa1d00ecff087228cc0a2b5f3c0e87e258d8b94a156e984c70" dependencies = [ "serde_derive", ] @@ -9254,13 +9254,13 @@ dependencies = [ [[package]] name = "serde_derive" -version = "1.0.216" +version = "1.0.217" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "46f859dbbf73865c6627ed570e78961cd3ac92407a2d117204c49232485da55e" +checksum = "5a9bf7cf98d04a2b28aead066b7496853d4779c9cc183c440dbac457641e19a0" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -9283,7 +9283,7 @@ checksum = "6c64451ba24fc7a6a2d60fc75dd9c83c90903b19028d4eff35e88fc1e86564e9" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -9576,7 +9576,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -9772,7 +9772,7 @@ source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46 dependencies = [ "quote", "sp-core-hashing", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -9791,7 +9791,7 @@ source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46 dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -9963,7 +9963,7 @@ dependencies = [ "proc-macro-crate 1.3.1", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -10116,7 +10116,7 @@ dependencies = [ "parity-scale-codec", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -10304,7 +10304,7 @@ dependencies = [ "proc-macro2", "quote", "rustversion", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -10317,7 +10317,7 @@ dependencies = [ "proc-macro2", "quote", "rustversion", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -10405,9 +10405,9 @@ dependencies = [ [[package]] name = "syn" -version = "2.0.91" +version = "2.0.94" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d53cbcb5a243bd33b7858b1d7f4aca2153490815872d86d955d6ea29f743c035" +checksum = "987bc0be1cdea8b10216bd06e2ca407d40b9543468fafd3ddfb02f36e77f71f3" dependencies = [ "proc-macro2", "quote", @@ -10416,14 +10416,14 @@ dependencies = [ [[package]] name = "syn-solidity" -version = "0.8.15" +version = "0.8.16" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "219389c1ebe89f8333df8bdfb871f6631c552ff399c23cac02480b6088aad8f0" +checksum = "c74af950d86ec0f5b2ae2d7f1590bbfbcf4603a0a15742d8f98132ac4fe3efd4" dependencies = [ "paste", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -10479,15 +10479,16 @@ checksum = 
"61c41af27dd6d1e27b1b16b489db798443478cef1f06a660c96db617ba5de3b1" [[package]] name = "tempfile" -version = "3.14.0" +version = "3.15.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "28cce251fcbc87fac86a866eeb0d6c2d536fc16d06f184bb61aeae11aa4cee0c" +checksum = "9a8a559c81686f576e8cd0290cd2a24a2a9ad80c98b3478856500fcbd7acd704" dependencies = [ "cfg-if", "fastrand", + "getrandom", "once_cell", "rustix", - "windows-sys 0.59.0", + "windows-sys 0.52.0", ] [[package]] @@ -10547,7 +10548,7 @@ checksum = "4fee6c4efc90059e10f81e6d42c60a18f76588c3d74cb83a0b242a2b6c7504c1" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -10558,7 +10559,7 @@ checksum = "7b50fa271071aae2e6ee85f842e2e28ba8cd2c5fb67f11fcb1fd70b276f9e7d4" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -10680,7 +10681,7 @@ checksum = "693d596312e88961bc67d7f1f97af8a70227d9f90c31bba5806eec004978d752" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -10770,7 +10771,7 @@ checksum = "4ae48d6208a266e853d946088ed816055e556cc6028c5e8e2b84d9fa5dd7c7f5" dependencies = [ "indexmap 2.7.0", "toml_datetime", - "winnow 0.6.20", + "winnow 0.6.21", ] [[package]] @@ -10848,7 +10849,7 @@ checksum = "34704c8d6ebcbc939824180af020566b01a7c01f80641264eba0999f6c2b6be7" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -11286,7 +11287,7 @@ dependencies = [ "log", "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", "wasm-bindgen-shared", ] @@ -11321,7 +11322,7 @@ checksum = "30d7a95b763d3c45903ed6c81f156801839e5ee968bb07e534c44df0fcd330c2" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", "wasm-bindgen-backend", "wasm-bindgen-shared", ] @@ -11630,7 +11631,7 @@ checksum = "ca7af9bb3ee875c4907835e607a275d10b04d15623d3aebe01afe8fbd3f85050" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -11713,7 +11714,7 @@ version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb" dependencies = [ - "windows-sys 0.59.0", + "windows-sys 0.48.0", ] [[package]] @@ -11772,7 +11773,7 @@ checksum = "2bbd5b46c938e506ecbce286b6628a02171d56153ba733b6c741fc627ec9579b" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -11783,7 +11784,7 @@ checksum = "053c4c462dc91d3b1504c6fe5a726dd15e216ba718e84a0e46a88fbe5ded3515" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -11964,9 +11965,9 @@ dependencies = [ [[package]] name = "winnow" -version = "0.6.20" +version = "0.6.21" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "36c1fec1a2bb5866f07c25f68c26e565c4c200aebb96d7e55710c19d3e8ac49b" +checksum = "e6f5bb5257f2407a5425c6e749bfd9692192a73e70a6060516ac04f889087d68" dependencies = [ "memchr", ] @@ -12084,7 +12085,7 @@ checksum = "fa4f8080344d4671fb4e831a13ad1e68092748387dfc4f55e356242fae12ce3e" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] @@ -12104,7 +12105,7 @@ checksum = "ce36e65b0d2999d2aafac989fb249189a141aee1f53c612c1f37d72631959f69" dependencies = [ "proc-macro2", "quote", - "syn 2.0.91", + "syn 2.0.94", ] [[package]] From 906e2fb6699f1730a770c010858ba9bcbd29d568 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 3 Jan 2025 10:30:39 -0500 Subject: [PATCH 236/368] Start cosigning on 
Cosign or Cosigned, not just on Cosigned

---
 coordinator/src/tributary/db.rs | 29 +++++++++++++++
 coordinator/src/tributary/scan.rs | 62 ++++++++++++++++++-------------
 2 files changed, 66 insertions(+), 25 deletions(-)

diff --git a/coordinator/src/tributary/db.rs b/coordinator/src/tributary/db.rs
index 31ae2e1e..fbcfcc01 100644
--- a/coordinator/src/tributary/db.rs
+++ b/coordinator/src/tributary/db.rs
@@ -189,6 +189,8 @@ create_db!(
     // The latest Substrate block to cosign.
     LatestSubstrateBlockToCosign: (set: ValidatorSet) -> [u8; 32],
+    // If we're actively cosigning or not.
+    ActivelyCosigning: (set: ValidatorSet) -> (),
     // The weight accumulated for a topic.
     AccumulatedWeight: (set: ValidatorSet, topic: Topic) -> u64,
@@ -236,6 +238,33 @@ impl TributaryDb {
   ) {
     LatestSubstrateBlockToCosign::set(txn, set, &substrate_block_hash);
   }
+  pub(crate) fn actively_cosigning(txn: &mut impl DbTxn, set: ValidatorSet) -> bool {
+    ActivelyCosigning::get(txn, set).is_some()
+  }
+  pub(crate) fn start_cosigning(
+    txn: &mut impl DbTxn,
+    set: ValidatorSet,
+    substrate_block_number: u64,
+  ) {
+    assert!(
+      ActivelyCosigning::get(txn, set).is_none(),
+      "starting cosigning while already cosigning"
+    );
+    ActivelyCosigning::set(txn, set, &());
+
+    TributaryDb::recognize_topic(
+      txn,
+      set,
+      Topic::Sign {
+        id: VariantSignId::Cosign(substrate_block_number),
+        attempt: 0,
+        round: SigningProtocolRound::Preprocess,
+      },
+    );
+  }
+  pub(crate) fn finish_cosigning(txn: &mut impl DbTxn, set: ValidatorSet) {
+    assert!(ActivelyCosigning::take(txn, set).is_some(), "finished cosigning but not cosigning");
+  }

   pub(crate) fn recognize_topic(txn: &mut impl DbTxn, set: ValidatorSet, topic: Topic) {
     AccumulatedWeight::set(txn, set, topic, &0);
diff --git a/coordinator/src/tributary/scan.rs b/coordinator/src/tributary/scan.rs
index 1a53bdda..bfc9760b 100644
--- a/coordinator/src/tributary/scan.rs
+++ b/coordinator/src/tributary/scan.rs
@@ -36,6 +36,34 @@ struct ScanBlock<'a, D: DbTxn, TD: Db> {
   tributary: &'a TributaryReader<TD, Transaction>,
 }
 impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> {
+  fn potentially_start_cosign(&mut self) {
+    // Don't start a new cosigning instance if we're actively running one
+    if TributaryDb::actively_cosigning(self.txn, self.set) {
+      return;
+    }
+
+    // Start cosigning the latest intended-to-be-cosigned block
+    let Some(latest_substrate_block_to_cosign) =
+      TributaryDb::latest_substrate_block_to_cosign(self.txn, self.set)
+    else {
+      return;
+    };
+
+    let substrate_block_number = todo!("TODO");
+
+    // Mark us as actively cosigning
+    TributaryDb::start_cosigning(self.txn, self.set, substrate_block_number);
+    // Send the message for the processor to start signing
+    TributaryDb::send_message(
+      self.txn,
+      self.set,
+      messages::coordinator::CoordinatorMessage::CosignSubstrateBlock {
+        session: self.set.session,
+        block_number: substrate_block_number,
+        block: latest_substrate_block_to_cosign,
+      },
+    );
+  }

   fn handle_application_tx(&mut self, block_number: u64, tx: Transaction) {
     let signer = |signed: Signed| SeraiAddress(signed.signer.to_bytes());
@@ -105,41 +133,25 @@ impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> {
       Transaction::Cosign { substrate_block_hash } => {
         // Update the latest intended-to-be-cosigned Substrate block
         TributaryDb::set_latest_substrate_block_to_cosign(self.txn, self.set, substrate_block_hash);
-
-        // TODO: If we aren't currently cosigning a block, start cosigning this one
+        // Start a new cosign if we weren't already working on one
+        self.potentially_start_cosign();
       }
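      // Illustrative only, not part of the patch: a trace of the cosign flow defined above,
      // assuming Substrate blocks A and then B are ordered to be cosigned:
      //
      //   Cosign { A }   => latest-to-cosign = A; not actively cosigning, so start cosigning A
      //   Cosign { B }   => latest-to-cosign = B; still cosigning A, so only the marker advances
      //   Cosigned { A } => finish A; the latest block (B) != A, so cosigning B starts
      //   Cosigned { B } => finish B; the latest block (B) == B, so no new cosign is started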
Transaction::Cosigned { substrate_block_hash } => { - // Start cosigning the latest intended-to-be-cosigned block + TributaryDb::finish_cosigning(self.txn, self.set); + + // Fetch the latest intended-to-be-cosigned block let Some(latest_substrate_block_to_cosign) = TributaryDb::latest_substrate_block_to_cosign(self.txn, self.set) else { return; }; - // If this is the block we just cosigned, return + // If this is the block we just cosigned, return, preventing us from signing it again if latest_substrate_block_to_cosign == substrate_block_hash { return; } - let substrate_block_number = todo!("TODO"); - // Whitelist the topic - TributaryDb::recognize_topic( - self.txn, - self.set, - Topic::Sign { - id: VariantSignId::Cosign(substrate_block_number), - attempt: 0, - round: SigningProtocolRound::Preprocess, - }, - ); - // Send the message for the processor to start signing - TributaryDb::send_message( - self.txn, - self.set, - messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { - session: self.set.session, - block_number: substrate_block_number, - block: substrate_block_hash, - }, - ); + + // Since we do have a new cosign to work on, start it + self.potentially_start_cosign(); } Transaction::SubstrateBlock { hash } => { // Whitelist all of the IDs this Substrate block causes to be signed From 49c221cca2774b071710fefc9851ef13229686ec Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 3 Jan 2025 13:02:29 -0500 Subject: [PATCH 237/368] Restore request-response code to the coordinator --- Cargo.lock | 1 + coordinator/Cargo.toml | 2 + coordinator/src/main.rs | 1 + coordinator/src/p2p/reqres.rs | 126 +++++++++++++++++++++++ coordinator/src/tributary/transaction.rs | 4 +- 5 files changed, 133 insertions(+), 1 deletion(-) create mode 100644 coordinator/src/p2p/reqres.rs diff --git a/Cargo.lock b/Cargo.lock index a0d77739..802c5f00 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8328,6 +8328,7 @@ dependencies = [ "rand_core", "schnorr-signatures", "serai-client", + "serai-cosign", "serai-db", "serai-env", "serai-message-queue", diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index 2af0f822..9515bd74 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -56,6 +56,8 @@ futures-util = { version = "0.3", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "request-response", "gossipsub", "macros"] } +serai-cosign = { path = "./cosign" } + [dev-dependencies] tributary = { package = "tributary-chain", path = "./tributary", features = ["tests"] } sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] } diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index c3eb8d80..2316af2e 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -1,4 +1,5 @@ mod tributary; +mod p2p; fn main() { todo!("TODO") diff --git a/coordinator/src/p2p/reqres.rs b/coordinator/src/p2p/reqres.rs new file mode 100644 index 00000000..fbd0388d --- /dev/null +++ b/coordinator/src/p2p/reqres.rs @@ -0,0 +1,126 @@ +use core::time::Duration; +use std::io::{self, Read}; + +use async_trait::async_trait; + +use borsh::{BorshSerialize, BorshDeserialize}; +use serai_client::validator_sets::primitives::ValidatorSet; + +use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt}; + +use 
libp2p::request_response::{Codec as CodecTrait, Config, Behaviour, ProtocolSupport};
+
+use serai_cosign::SignedCosign;
+
+/// The maximum message size for the request-response protocol
+// This is derived from the heartbeat message size as it's our largest message
+const MAX_LIBP2P_REQRES_MESSAGE_SIZE: usize =
+  (tributary::BLOCK_SIZE_LIMIT * crate::p2p::heartbeat::BLOCKS_PER_BATCH) + 1024;
+
+const PROTOCOL: &str = "/serai/coordinator";
+
+/// Requests which can be made via the request-response protocol.
+#[derive(Clone, Copy, Debug, BorshSerialize, BorshDeserialize)]
+pub(crate) enum Request {
+  /// A keep-alive to prevent our connections from being dropped.
+  KeepAlive,
+  /// A heartbeat informing our peers of our latest block, for the specified blockchain, at
+  /// regular intervals.
+  ///
+  /// If our peers have more blocks than us, they're expected to respond with those blocks.
+  Heartbeat { set: ValidatorSet, latest_block_hash: [u8; 32] },
+  /// A request for the notable cosigns for a global session.
+  NotableCosigns { global_session: [u8; 32] },
+}
+
+/// A tributary block and its commit.
+#[derive(Clone, BorshSerialize, BorshDeserialize)]
+pub(crate) struct TributaryBlockWithCommit {
+  pub(crate) block: Vec<u8>,
+  pub(crate) commit: Vec<u8>,
+}
+
+/// Responses which can be received via the request-response protocol.
+#[derive(Clone, BorshSerialize, BorshDeserialize)]
+pub(crate) enum Response {
+  Blocks(Vec<TributaryBlockWithCommit>),
+  NotableCosigns(Vec<SignedCosign>),
+}
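// Illustrative only, not part of the patch: the framing the codec below applies is a
// little-endian u32 length prefix followed by the borsh serialization. A hypothetical request
// would be framed as:
//
//   let msg = borsh::to_vec(&Request::KeepAlive).unwrap();
//   let mut framed = u32::try_from(msg.len()).unwrap().to_le_bytes().to_vec();
//   framed.extend(&msg);
//
// `framed` is what's written to the wire; reads enforce MAX_LIBP2P_REQRES_MESSAGE_SIZE and
// reject any trailing bytes left after deserialization.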
+/// The codec used for the request-response protocol.
+///
+/// We don't use CBOR or JSON, but use borsh to create `Vec<u8>`s we then length-prefix. While
+/// ideally we'd use borsh directly with the `io` traits defined here, they're async and there
+/// isn't an amenable API within borsh for incremental deserialization.
+#[derive(Default, Clone, Copy, Debug)]
+struct Codec;
+impl Codec {
+  async fn read<M: BorshDeserialize>(io: &mut (impl Unpin + AsyncRead)) -> io::Result<M> {
+    let mut len = [0; 4];
+    io.read_exact(&mut len).await?;
+    let len = usize::try_from(u32::from_le_bytes(len)).expect("not at least a 32-bit platform?");
+    if len > MAX_LIBP2P_REQRES_MESSAGE_SIZE {
+      Err(io::Error::other("request length exceeded MAX_LIBP2P_REQRES_MESSAGE_SIZE"))?;
+    }
+    // This may be a non-trivial allocation easily causable
+    // While we could chunk the read, meaning we only perform the allocation as bandwidth is used,
+    // the max message size should be sufficiently sane
+    let mut buf = vec![0; len];
+    io.read_exact(&mut buf).await?;
+    let mut buf = buf.as_slice();
+    let res = M::deserialize(&mut buf)?;
+    if !buf.is_empty() {
+      Err(io::Error::other("p2p message had extra data appended to it"))?;
+    }
+    Ok(res)
+  }
+  async fn write(io: &mut (impl Unpin + AsyncWrite), msg: &impl BorshSerialize) -> io::Result<()> {
+    let msg = borsh::to_vec(msg).unwrap();
+    io.write_all(&u32::try_from(msg.len()).unwrap().to_le_bytes()).await?;
+    io.write_all(&msg).await
+  }
+}
+#[async_trait]
+impl CodecTrait for Codec {
+  type Protocol = &'static str;
+  type Request = Request;
+  type Response = Response;
+
+  async fn read_request<R: AsyncRead + Unpin + Send>(
+    &mut self,
+    _: &Self::Protocol,
+    io: &mut R,
+  ) -> io::Result<Request> {
+    Self::read(io).await
+  }
+  async fn read_response<R: AsyncRead + Unpin + Send>(
+    &mut self,
+    proto: &Self::Protocol,
+    io: &mut R,
+  ) -> io::Result<Response> {
+    Self::read(io).await
+  }
+  async fn write_request<W: AsyncWrite + Unpin + Send>(
+    &mut self,
+    _: &Self::Protocol,
+    io: &mut W,
+    req: Request,
+  ) -> io::Result<()> {
+    Self::write(io, &req).await
+  }
+  async fn write_response<W: AsyncWrite + Unpin + Send>(
+    &mut self,
+    proto: &Self::Protocol,
+    io: &mut W,
+    res: Response,
+  ) -> io::Result<()> {
+    Self::write(io, &res).await
+  }
+}
+
+pub(crate) type Behavior = Behaviour<Codec>;
+pub(crate) fn new_behavior() -> Behavior {
+  let mut config = Config::default();
+  config.set_request_timeout(Duration::from_secs(5));
+  Behavior::new([(PROTOCOL, ProtocolSupport::Full)], config)
+}
diff --git a/coordinator/src/tributary/transaction.rs b/coordinator/src/tributary/transaction.rs
index 65391296..0befbf36 100644
--- a/coordinator/src/tributary/transaction.rs
+++ b/coordinator/src/tributary/transaction.rs
@@ -43,12 +43,14 @@ impl SigningProtocolRound {
   }
 }

-/// `tributary::Signed` without the nonce.
+/// `tributary::Signed` but without the nonce.
 ///
 /// All of our nonces are deterministic to the type of transaction and fields within.
 #[derive(Clone, Copy, PartialEq, Eq, Debug)]
 pub struct Signed {
+  /// The signer.
   pub signer: <Ristretto as Ciphersuite>::G,
+  /// The signature.
   pub signature: SchnorrSignature<Ristretto>,
 }

From 5fc8500f8d788de2e24e8dcf6dc0c67ca46081ee Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Fri, 3 Jan 2025 13:04:27 -0500
Subject: [PATCH 238/368] Add task to heartbeat a tributary to the P2P code

---
 coordinator/src/p2p/gossip.rs | 1 +
 coordinator/src/p2p/heartbeat.rs | 118 +++++++++++++++++++++++++++++++
 coordinator/src/p2p/mod.rs | 30 ++++++++
 3 files changed, 149 insertions(+)
 create mode 100644 coordinator/src/p2p/gossip.rs
 create mode 100644 coordinator/src/p2p/heartbeat.rs
 create mode 100644 coordinator/src/p2p/mod.rs

diff --git a/coordinator/src/p2p/gossip.rs b/coordinator/src/p2p/gossip.rs
new file mode 100644
index 00000000..8b137891
--- /dev/null
+++ b/coordinator/src/p2p/gossip.rs
@@ -0,0 +1 @@
+
diff --git a/coordinator/src/p2p/heartbeat.rs b/coordinator/src/p2p/heartbeat.rs
new file mode 100644
index 00000000..2080a6e3
--- /dev/null
+++ b/coordinator/src/p2p/heartbeat.rs
@@ -0,0 +1,118 @@
+use core::future::Future;
+
+use std::time::{Duration, SystemTime};
+
+use serai_client::validator_sets::primitives::ValidatorSet;
+
+use tributary::{ReadWrite, Block, Tributary, TributaryReader};
+
+use serai_db::*;
+use serai_task::ContinuallyRan;
+
+use crate::{
+  tributary::Transaction,
+  p2p::{
+    reqres::{Request, Response},
+    P2p,
+  },
+};
+
+// Amount of blocks in a minute
+const BLOCKS_PER_MINUTE: usize = (60 / (tributary::tendermint::TARGET_BLOCK_TIME / 1000)) as usize;
+
+// Maximum amount of blocks to send in a batch of blocks
+pub const BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1;
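// Illustrative only, not part of the patch: BLOCKS_PER_BATCH is one minute of blocks plus one.
// Assuming a hypothetical TARGET_BLOCK_TIME of 6000 (milliseconds), BLOCKS_PER_MINUTE = 10 and
// BLOCKS_PER_BATCH = 11, so a single response can catch a peer up on slightly more than a minute
// of chain growth, and MAX_LIBP2P_REQRES_MESSAGE_SIZE (in reqres.rs) is sized to fit one batch.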
+struct HeartbeatTask { + set: ValidatorSet, + tributary: Tributary, + reader: TributaryReader, + p2p: P2p, +} + +impl ContinuallyRan for HeartbeatTask { + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + // If our blockchain hasn't had a block in the past minute, trigger the heartbeat protocol + const TIME_TO_TRIGGER_SYNCING: Duration = Duration::from_secs(60); + + // Fetch the state from the tip of the blockchain + let state = |reader: &TributaryReader<_, _>| { + let tip = reader.tip(); + let block_time = if let Some(time_of_block) = reader.time_of_block(&tip) { + SystemTime::UNIX_EPOCH + Duration::from_secs(time_of_block) + } else { + // If we couldn't fetch this block's time, assume it's old + // We don't want to declare its unix time as 0 and claim it's 50+ years old though + SystemTime::now() - TIME_TO_TRIGGER_SYNCING + }; + (tip, SystemTime::now().duration_since(block_time).unwrap_or(Duration::ZERO)) + }; + + // The current state, and a boolean of it's stale + let (mut tip, mut time_since) = state(&self.reader); + let mut state_is_stale = false; + + let mut synced_block = false; + if TIME_TO_TRIGGER_SYNCING <= time_since { + log::warn!( + "last known tributary block for {:?} was {} seconds ago", + self.set, + time_since.as_secs() + ); + + // This requests all peers for this network, without differentiating by session + // This should be fine as most validators should overlap across sessions + 'peer: for peer in self.p2p.peers(self.set.network).await { + loop { + // Create the request for blocks + if state_is_stale { + (tip, time_since) = state(&self.reader); + state_is_stale = false; + } + let request = Request::Heartbeat { set: self.set, latest_block_hash: tip }; + let Ok(Response::Blocks(blocks)) = peer.send(request).await else { continue 'peer }; + + // This is the final batch if it has less than the maximum amount of blocks + // (signifying there weren't more blocks after this to fill the batch with) + let final_batch = blocks.len() < BLOCKS_PER_BATCH; + + // Sync each block + for block_with_commit in blocks { + let Ok(block) = Block::read(&mut block_with_commit.block.as_slice()) else { + // TODO: Disconnect/slash this peer + log::warn!("received invalid Block inside response to heartbeat"); + continue 'peer; + }; + + // Attempt to sync the block + if !self.tributary.sync_block(block, block_with_commit.commit).await { + // The block may be invalid or may simply be stale + continue 'peer; + } + + // Because we synced a block, flag the state as stale + state_is_stale = true; + // And that we did sync a block + synced_block = true; + } + + // If this was the final batch, move on from this peer + // We could assume they were honest and we are done syncing the chain, but this is a + // bit more robust + if final_batch { + continue 'peer; + } + } + } + } + + Ok(synced_block) + } + } +} diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs new file mode 100644 index 00000000..09b7402d --- /dev/null +++ b/coordinator/src/p2p/mod.rs @@ -0,0 +1,30 @@ +use serai_client::primitives::NetworkId; + +mod reqres; +use reqres::{Request, Response}; + +mod gossip; + +mod heartbeat; + +struct Peer; +impl Peer { + async fn send(&self, request: Request) -> Result { + (async move { todo!("TODO") }).await + } +} + +#[derive(Clone, Debug)] +struct P2p; +impl P2p { + async fn peers(&self, set: NetworkId) -> Vec { + (async move { todo!("TODO") }).await + } +} + +#[async_trait::async_trait] +impl tributary::P2p for P2p { + async fn broadcast(&self, genesis: [u8; 32], msg: 
Vec<u8>) {
+    todo!("TODO")
+  }
+}

From 3f3b0255f8cd00fd61744ff6b6f2a0dc643506b9 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Fri, 3 Jan 2025 13:59:14 -0500
Subject: [PATCH 239/368] Tweak heartbeat task to run less often if there's no
 progress to be made

---
 coordinator/src/p2p/heartbeat.rs | 37 ++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 14 deletions(-)

diff --git a/coordinator/src/p2p/heartbeat.rs b/coordinator/src/p2p/heartbeat.rs
index 2080a6e3..85b07dc6 100644
--- a/coordinator/src/p2p/heartbeat.rs
+++ b/coordinator/src/p2p/heartbeat.rs
@@ -41,22 +41,21 @@ impl ContinuallyRan for HeartbeatTask {
       // If our blockchain hasn't had a block in the past minute, trigger the heartbeat protocol
       const TIME_TO_TRIGGER_SYNCING: Duration = Duration::from_secs(60);
 
-      // Fetch the state from the tip of the blockchain
-      let state = |reader: &TributaryReader<_, _>| {
-        let tip = reader.tip();
-        let block_time = if let Some(time_of_block) = reader.time_of_block(&tip) {
+      let mut tip = self.reader.tip();
+      let time_since = {
+        let block_time = if let Some(time_of_block) = self.reader.time_of_block(&tip) {
           SystemTime::UNIX_EPOCH + Duration::from_secs(time_of_block)
         } else {
           // If we couldn't fetch this block's time, assume it's old
           // We don't want to declare its unix time as 0 and claim it's 50+ years old though
+          log::warn!(
+            "heartbeat task couldn't fetch the time of a block, flagging it as a minute old"
+          );
           SystemTime::now() - TIME_TO_TRIGGER_SYNCING
         };
-        (tip, SystemTime::now().duration_since(block_time).unwrap_or(Duration::ZERO))
+        SystemTime::now().duration_since(block_time).unwrap_or(Duration::ZERO)
       };
-
-      // The current state, and a boolean of it's stale
-      let (mut tip, mut time_since) = state(&self.reader);
-      let mut state_is_stale = false;
+      let mut tip_is_stale = false;
 
       let mut synced_block = false;
       if TIME_TO_TRIGGER_SYNCING <= time_since {
@@ -71,9 +70,9 @@ impl ContinuallyRan for HeartbeatTask {
         'peer: for peer in self.p2p.peers(self.set.network).await {
           loop {
             // Create the request for blocks
-            if state_is_stale {
-              (tip, time_since) = state(&self.reader);
-              state_is_stale = false;
+            if tip_is_stale {
+              tip = self.reader.tip();
+              tip_is_stale = false;
             }
             let request = Request::Heartbeat { set: self.set, latest_block_hash: tip };
             let Ok(Response::Blocks(blocks)) = peer.send(request).await else { continue 'peer };
@@ -96,8 +95,8 @@ impl ContinuallyRan for HeartbeatTask {
               continue 'peer;
             }
 
-            // Because we synced a block, flag the state as stale
-            state_is_stale = true;
+            // Because we synced a block, flag the tip as stale
+            tip_is_stale = true;
             // And that we did sync a block
             synced_block = true;
           }
@@ -110,6 +109,16 @@ impl ContinuallyRan for HeartbeatTask {
           }
         }
       }
+
+      // This will cause the task to be run less and less often, ensuring we aren't spamming the
+      // network if we legitimately aren't making progress
+      if !synced_block {
+        Err(format!(
+          "tried to sync blocks for {:?} since we haven't seen one in {} seconds but didn't",
+          self.set,
+          time_since.as_secs(),
+        ))?;
+      }
     }
 
     Ok(synced_block)

From 985261574ceffd6c6822babefa847baf3a2feb2b Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Fri, 3 Jan 2025 14:00:20 -0500
Subject: [PATCH 240/368] Add gossip behavior back to the coordinator

---
 coordinator/src/p2p/gossip.rs | 69 +++++++++++++++++++++++++++++++
 coordinator/src/p2p/reqres.rs |  2 +-
 2 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/coordinator/src/p2p/gossip.rs b/coordinator/src/p2p/gossip.rs
index 8b137891..b5b8ebd9 100644
---
a/coordinator/src/p2p/gossip.rs +++ b/coordinator/src/p2p/gossip.rs @@ -1 +1,70 @@ +use core::time::Duration; +use blake2::{Digest, Blake2s256}; + +use scale::Encode; +use borsh::{BorshSerialize, BorshDeserialize}; +use serai_client::validator_sets::primitives::ValidatorSet; + +use libp2p::gossipsub::{ + IdentTopic, MessageId, MessageAuthenticity, ValidationMode, ConfigBuilder, IdentityTransform, + AllowAllSubscriptionFilter, Behaviour, +}; + +use serai_cosign::SignedCosign; + +// Block size limit + 16 KB of space for signatures/metadata +const MAX_LIBP2P_GOSSIP_MESSAGE_SIZE: usize = tributary::BLOCK_SIZE_LIMIT + 16384; + +const KEEP_ALIVE_INTERVAL: Duration = Duration::from_secs(80); + +const LIBP2P_PROTOCOL: &str = "/serai/coordinator/gossip/1.0.0"; +const BASE_TOPIC: &str = "/"; + +fn topic_for_set(set: ValidatorSet) -> IdentTopic { + IdentTopic::new(format!("/set/{}", hex::encode(set.encode()))) +} + +#[derive(Clone, BorshSerialize, BorshDeserialize)] +pub(crate) enum Message { + Tribuary { genesis: [u8; 32], message: Vec }, + Cosign(SignedCosign), +} + +pub(crate) type Behavior = Behaviour; + +pub(crate) fn new_behavior() -> Behavior { + // The latency used by the Tendermint protocol, used here as the gossip epoch duration + // libp2p-rs defaults to 1 second, whereas ours will be ~2 + let heartbeat_interval = tributary::tendermint::LATENCY_TIME; + // The amount of heartbeats which will occur within a single Tributary block + let heartbeats_per_block = tributary::tendermint::TARGET_BLOCK_TIME.div_ceil(heartbeat_interval); + // libp2p-rs defaults to 5, whereas ours will be ~8 + let heartbeats_to_keep = 2 * heartbeats_per_block; + // libp2p-rs defaults to 3 whereas ours will be ~4 + let heartbeats_to_gossip = heartbeats_per_block; + + let config = ConfigBuilder::default() + .protocol_id_prefix(LIBP2P_PROTOCOL) + .history_length(usize::try_from(heartbeats_to_keep).unwrap()) + .history_gossip(usize::try_from(heartbeats_to_gossip).unwrap()) + .heartbeat_interval(Duration::from_millis(heartbeat_interval.into())) + .max_transmit_size(MAX_LIBP2P_GOSSIP_MESSAGE_SIZE) + .idle_timeout(KEEP_ALIVE_INTERVAL + Duration::from_secs(5)) + .duplicate_cache_time(Duration::from_millis((heartbeats_to_keep * heartbeat_interval).into())) + .validation_mode(ValidationMode::Anonymous) + // Uses a content based message ID to avoid duplicates as much as possible + .message_id_fn(|msg| { + MessageId::new(&Blake2s256::digest([msg.topic.as_str().as_bytes(), &msg.data].concat())) + }) + .build(); + + // TODO: Don't use IdentityTransform here. Authenticate using validator keys + let mut gossipsub = Behavior::new(MessageAuthenticity::Anonymous, config.unwrap()).unwrap(); + + // Subscribe to the base topic + let topic = IdentTopic::new(BASE_TOPIC); + let _ = gossipsub.subscribe(&topic); + + gossipsub +} diff --git a/coordinator/src/p2p/reqres.rs b/coordinator/src/p2p/reqres.rs index fbd0388d..d1b1a2ec 100644 --- a/coordinator/src/p2p/reqres.rs +++ b/coordinator/src/p2p/reqres.rs @@ -17,7 +17,7 @@ use serai_cosign::SignedCosign; const MAX_LIBP2P_REQRES_MESSAGE_SIZE: usize = (tributary::BLOCK_SIZE_LIMIT * crate::p2p::heartbeat::BLOCKS_PER_BATCH) + 1024; -const PROTOCOL: &str = "/serai/coordinator"; +const PROTOCOL: &str = "/serai/coordinator/reqres/1.0.0"; /// Requests which can be made via the request-response protocol. 
#[derive(Clone, Copy, Debug, BorshSerialize, BorshDeserialize)] From 4836c1676b60a969b80fa8dc23d7631de3af5287 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 4 Jan 2025 13:52:17 -0500 Subject: [PATCH 241/368] Don't consider the Serai set in the cosigning protocol The Serai set SHOULD be banned from setting keys so this SHOULD be unreachable. It's now explicitly unreachable. --- coordinator/cosign/src/lib.rs | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index 6409b56f..faa449dd 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -161,6 +161,11 @@ async fn keys_for_network( serai: &TemporalSerai<'_>, network: NetworkId, ) -> Result, String> { + // The Serai network never cosigns so it has no keys for cosigning + if network == NetworkId::Serai { + return Ok(None); + } + let Some(latest_session) = serai.validator_sets().session(network).await.map_err(|e| format!("{e:?}"))? else { From f9f6d406956470f4ef595ae3b7fc399c28eadb20 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 4 Jan 2025 18:03:37 -0500 Subject: [PATCH 242/368] Use Serai validator keys as PeerIds --- coordinator/src/p2p/authenticate.rs | 168 ++++++++++++++++++++++++++++ 1 file changed, 168 insertions(+) create mode 100644 coordinator/src/p2p/authenticate.rs diff --git a/coordinator/src/p2p/authenticate.rs b/coordinator/src/p2p/authenticate.rs new file mode 100644 index 00000000..4b61d381 --- /dev/null +++ b/coordinator/src/p2p/authenticate.rs @@ -0,0 +1,168 @@ +use core::{pin::Pin, future::Future}; +use std::io; + +use zeroize::Zeroizing; +use rand_core::{RngCore, OsRng}; + +use blake2::{Digest, Blake2s256}; +use schnorrkel::{Keypair, PublicKey, Signature}; + +use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt, StreamExt}; +use libp2p::{ + core::UpgradeInfo, InboundUpgrade, OutboundUpgrade, multihash::Multihash, identity::PeerId, noise, +}; + +const PROTOCOL: &str = "/serai/coordinator/validators"; + +struct OnlyValidators { + serai_key: Zeroizing, + our_peer_id: PeerId, +} + +impl OnlyValidators { + /// The ephemeral challenge protocol for authentication. + /// + /// We use ephemeral challenges to prevent replaying signatures from historic sessions. + /// + /// We don't immediately send the challenge. We only send a commitment to it. This prevents our + /// remote peer from choosing their challenge in response to our challenge, in case there was any + /// benefit to doing so. + async fn challenges( + socket: &mut noise::Output, + ) -> io::Result<([u8; 32], [u8; 32])> { + let mut our_challenge = [0; 32]; + OsRng.fill_bytes(&mut our_challenge); + + // Write the hash of our challenge + socket.write_all(&Blake2s256::digest(our_challenge)).await?; + + // Read the hash of their challenge + let mut their_challenge_commitment = [0; 32]; + socket.read_exact(&mut their_challenge_commitment).await?; + + // Reveal our challenge + socket.write_all(&our_challenge).await?; + + // Read their challenge + let mut their_challenge = [0; 32]; + socket.read_exact(&mut their_challenge).await?; + + // Verify their challenge + if <[u8; 32]>::from(Blake2s256::digest(their_challenge)) != their_challenge_commitment { + Err(io::Error::other("challenge didn't match challenge commitment"))?; + } + + Ok((our_challenge, their_challenge)) + } + + // We sign the two noise peer IDs and the ephemeral challenges. + // + // Signing the noise peer IDs ensures we're authenticating this noise connection. 
The only + // expectations placed on noise are for it to prevent a MITM from impersonating the other end or + // modifying any messages sent. + // + // Signing the ephemeral challenges prevents any replays. While that should be unnecessary, as + // noise MAY prevent replays across sessions (even when the same key is used), and noise IDs + // shouldn't be reused (so it should be fine to reuse an existing signature for these noise IDs), + // it doesn't hurt. + async fn authenticate( + &self, + socket: &mut noise::Output, + dialer_peer_id: PeerId, + dialer_challenge: [u8; 32], + listener_peer_id: PeerId, + listener_challenge: [u8; 32], + ) -> io::Result { + // Write our public key + socket.write_all(&self.serai_key.public.to_bytes()).await?; + + let msg = borsh::to_vec(&( + dialer_peer_id.to_bytes(), + dialer_challenge, + listener_peer_id.to_bytes(), + listener_challenge, + )) + .unwrap(); + let signature = self.serai_key.sign_simple(PROTOCOL.as_bytes(), &msg); + socket.write_all(&signature.to_bytes()).await?; + + let mut public_key_and_sig = [0; 96]; + socket.read_exact(&mut public_key_and_sig).await?; + let public_key = PublicKey::from_bytes(&public_key_and_sig[.. 32]) + .map_err(|_| io::Error::other("invalid public key"))?; + let sig = Signature::from_bytes(&public_key_and_sig[32 ..]) + .map_err(|_| io::Error::other("invalid signature serialization"))?; + + public_key + .verify_simple(PROTOCOL.as_bytes(), &msg, &sig) + .map_err(|_| io::Error::other("invalid signature"))?; + + // 0 represents the identity Multihash, that no hash was performed + // It's an internal constant so we can't refer to the constant inside libp2p + Ok(PeerId::from_multihash(Multihash::wrap(0, &public_key.to_bytes()).unwrap()).unwrap()) + } +} + +impl UpgradeInfo for OnlyValidators { + type Info = &'static str; + type InfoIter = [&'static str; 1]; + fn protocol_info(&self) -> [&'static str; 1] { + [PROTOCOL] + } +} + +impl InboundUpgrade<(PeerId, noise::Output)> + for OnlyValidators +{ + type Output = (PeerId, noise::Output); + type Error = io::Error; + type Future = Pin>>>; + + fn upgrade_inbound( + self, + (dialer_noise_peer_id, mut socket): (PeerId, noise::Output), + _: Self::Info, + ) -> Self::Future { + Box::pin(async move { + let (our_challenge, dialer_challenge) = OnlyValidators::challenges(&mut socket).await?; + let dialer_serai_validator = self + .authenticate( + &mut socket, + dialer_noise_peer_id, + dialer_challenge, + self.our_peer_id, + our_challenge, + ) + .await?; + Ok((dialer_serai_validator, socket)) + }) + } +} + +impl OutboundUpgrade<(PeerId, noise::Output)> + for OnlyValidators +{ + type Output = (PeerId, noise::Output); + type Error = io::Error; + type Future = Pin>>>; + + fn upgrade_outbound( + self, + (listener_noise_peer_id, mut socket): (PeerId, noise::Output), + _: Self::Info, + ) -> Self::Future { + Box::pin(async move { + let (our_challenge, listener_challenge) = OnlyValidators::challenges(&mut socket).await?; + let listener_serai_validator = self + .authenticate( + &mut socket, + self.our_peer_id, + our_challenge, + listener_noise_peer_id, + listener_challenge, + ) + .await?; + Ok((listener_serai_validator, socket)) + }) + } +} From a64e2004ab0d95bef140624855f2cae414a8a39b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 4 Jan 2025 18:04:24 -0500 Subject: [PATCH 243/368] Dial new peers when we don't have the target amount --- coordinator/src/p2p/dial.rs | 126 ++++++++++++++++++++++++++++++++++++ 1 file changed, 126 insertions(+) create mode 100644 coordinator/src/p2p/dial.rs diff 
--git a/coordinator/src/p2p/dial.rs b/coordinator/src/p2p/dial.rs new file mode 100644 index 00000000..94ee664e --- /dev/null +++ b/coordinator/src/p2p/dial.rs @@ -0,0 +1,126 @@ +use core::future::Future; +use std::collections::HashMap; + +use rand_core::{RngCore, OsRng}; + +use tokio::sync::mpsc; + +use serai_client::{ + primitives::{NetworkId, PublicKey}, + validator_sets::primitives::Session, + Serai, +}; + +use libp2p::{ + core::multiaddr::{Protocol, Multiaddr}, + swarm::dial_opts::DialOpts, +}; + +use serai_task::ContinuallyRan; + +use crate::p2p::{PORT, Peers}; + +const TARGET_PEERS_PER_NETWORK: usize = 5; + +struct DialTask { + serai: Serai, + + sessions: HashMap, + validators: HashMap>, + + peers: Peers, + to_dial: mpsc::UnboundedSender, +} + +impl ContinuallyRan for DialTask { + const DELAY_BETWEEN_ITERATIONS: u64 = 30; + + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let temporal_serai = + self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; + let temporal_serai = temporal_serai.validator_sets(); + for network in serai_client::primitives::NETWORKS { + if network == NetworkId::Serai { + continue; + } + let Some(session) = temporal_serai.session(network).await.map_err(|e| format!("{e:?}"))? + else { + continue; + }; + // If the session has changed, populate it with the current validators + if self.sessions.get(&network) != Some(&session) { + self.validators.insert( + network, + temporal_serai + .active_network_validators(network) + .await + .map_err(|e| format!("{e:?}"))?, + ); + self.sessions.insert(network, session); + } + } + + // If any of our peers is lacking, try to connect to more + let mut dialed = false; + let peer_counts = self + .peers + .peers + .read() + .unwrap() + .iter() + .map(|(network, peers)| (*network, peers.len())) + .collect::>(); + for (network, peer_count) in peer_counts { + /* + If we don't have the target amount of peers, and we don't have all the validators in the + set but one, attempt to connect to more validators within this set. + + The latter clause is so if there's a set with only 3 validators, we don't infinitely try + to connect to the target amount of peers for this network as we never will. Instead, we + only try to connect to most of the validators actually present. + */ + if (peer_count < TARGET_PEERS_PER_NETWORK) && + (peer_count < self.validators[&network].len().saturating_sub(1)) + { + let mut potential_peers = + self.serai.p2p_validators(network).await.map_err(|e| format!("{e:?}"))?; + for _ in 0 .. 
(TARGET_PEERS_PER_NETWORK - peer_count) { + if potential_peers.is_empty() { + break; + } + let index_to_dial = + usize::try_from(OsRng.next_u64() % u64::try_from(potential_peers.len()).unwrap()) + .unwrap(); + let randomly_selected_peer = potential_peers.swap_remove(index_to_dial); + + log::info!("found peer from substrate: {randomly_selected_peer}"); + + // Map the peer from a Substrate P2P network peer to a Coordinator P2P network peer + let mapped_peer = randomly_selected_peer + .into_iter() + .filter_map(|protocol| match protocol { + // Drop PeerIds from the Substrate P2p network + Protocol::P2p(_) => None, + // Use our own TCP port + Protocol::Tcp(_) => Some(Protocol::Tcp(PORT)), + // Pass-through any other specifications (IPv4, IPv6, etc) + other => Some(other), + }) + .collect::(); + + log::debug!("mapped found peer: {mapped_peer}"); + + self + .to_dial + .send(DialOpts::unknown_peer_id().address(mapped_peer).build()) + .expect("dial receiver closed?"); + dialed = true; + } + } + } + + Ok(dialed) + } + } +} From 3daeea09e630623cb58965ca1292069779ca2051 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 4 Jan 2025 22:21:23 -0500 Subject: [PATCH 244/368] Only let active Serai validators connect over P2P --- coordinator/src/p2p/authenticate.rs | 24 ++++++---- coordinator/src/p2p/dial.rs | 47 +++++--------------- coordinator/src/p2p/gossip.rs | 7 ++- coordinator/src/p2p/mod.rs | 47 +++++++++++++++++++- coordinator/src/p2p/reqres.rs | 13 +++++- coordinator/src/p2p/validators.rs | 69 +++++++++++++++++++++++++++++ 6 files changed, 157 insertions(+), 50 deletions(-) create mode 100644 coordinator/src/p2p/validators.rs diff --git a/coordinator/src/p2p/authenticate.rs b/coordinator/src/p2p/authenticate.rs index 4b61d381..99d98515 100644 --- a/coordinator/src/p2p/authenticate.rs +++ b/coordinator/src/p2p/authenticate.rs @@ -1,5 +1,5 @@ use core::{pin::Pin, future::Future}; -use std::io; +use std::{sync::Arc, io}; use zeroize::Zeroizing; use rand_core::{RngCore, OsRng}; @@ -7,14 +7,19 @@ use rand_core::{RngCore, OsRng}; use blake2::{Digest, Blake2s256}; use schnorrkel::{Keypair, PublicKey, Signature}; -use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt, StreamExt}; -use libp2p::{ - core::UpgradeInfo, InboundUpgrade, OutboundUpgrade, multihash::Multihash, identity::PeerId, noise, -}; +use serai_client::primitives::PublicKey as Public; + +use tokio::sync::RwLock; + +use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt}; +use libp2p::{core::UpgradeInfo, InboundUpgrade, OutboundUpgrade, identity::PeerId, noise}; + +use crate::p2p::{validators::Validators, peer_id_from_public}; const PROTOCOL: &str = "/serai/coordinator/validators"; struct OnlyValidators { + validators: Arc>, serai_key: Zeroizing, our_peer_id: PeerId, } @@ -97,9 +102,12 @@ impl OnlyValidators { .verify_simple(PROTOCOL.as_bytes(), &msg, &sig) .map_err(|_| io::Error::other("invalid signature"))?; - // 0 represents the identity Multihash, that no hash was performed - // It's an internal constant so we can't refer to the constant inside libp2p - Ok(PeerId::from_multihash(Multihash::wrap(0, &public_key.to_bytes()).unwrap()).unwrap()) + let peer_id = peer_id_from_public(Public::from_raw(public_key.to_bytes())); + if !self.validators.read().await.contains(&peer_id) { + Err(io::Error::other("peer which tried to connect isn't a known active validator"))?; + } + + Ok(peer_id) } } diff --git a/coordinator/src/p2p/dial.rs b/coordinator/src/p2p/dial.rs index 94ee664e..2e427ee7 100644 --- 
a/coordinator/src/p2p/dial.rs +++ b/coordinator/src/p2p/dial.rs @@ -1,15 +1,10 @@ use core::future::Future; -use std::collections::HashMap; use rand_core::{RngCore, OsRng}; use tokio::sync::mpsc; -use serai_client::{ - primitives::{NetworkId, PublicKey}, - validator_sets::primitives::Session, - Serai, -}; +use serai_client::Serai; use libp2p::{ core::multiaddr::{Protocol, Multiaddr}, @@ -18,16 +13,13 @@ use libp2p::{ use serai_task::ContinuallyRan; -use crate::p2p::{PORT, Peers}; +use crate::p2p::{PORT, Peers, validators::Validators}; const TARGET_PEERS_PER_NETWORK: usize = 5; struct DialTask { serai: Serai, - - sessions: HashMap, - validators: HashMap>, - + validators: Validators, peers: Peers, to_dial: mpsc::UnboundedSender, } @@ -37,29 +29,7 @@ impl ContinuallyRan for DialTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { - let temporal_serai = - self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; - let temporal_serai = temporal_serai.validator_sets(); - for network in serai_client::primitives::NETWORKS { - if network == NetworkId::Serai { - continue; - } - let Some(session) = temporal_serai.session(network).await.map_err(|e| format!("{e:?}"))? - else { - continue; - }; - // If the session has changed, populate it with the current validators - if self.sessions.get(&network) != Some(&session) { - self.validators.insert( - network, - temporal_serai - .active_network_validators(network) - .await - .map_err(|e| format!("{e:?}"))?, - ); - self.sessions.insert(network, session); - } - } + self.validators.update().await?; // If any of our peers is lacking, try to connect to more let mut dialed = false; @@ -81,7 +51,14 @@ impl ContinuallyRan for DialTask { only try to connect to most of the validators actually present. */ if (peer_count < TARGET_PEERS_PER_NETWORK) && - (peer_count < self.validators[&network].len().saturating_sub(1)) + (peer_count < + self + .validators + .validators() + .get(&network) + .map(Vec::len) + .unwrap_or(0) + .saturating_sub(1)) { let mut potential_peers = self.serai.p2p_validators(network).await.map_err(|e| format!("{e:?}"))?; diff --git a/coordinator/src/p2p/gossip.rs b/coordinator/src/p2p/gossip.rs index b5b8ebd9..8e32180b 100644 --- a/coordinator/src/p2p/gossip.rs +++ b/coordinator/src/p2p/gossip.rs @@ -59,12 +59,11 @@ pub(crate) fn new_behavior() -> Behavior { }) .build(); - // TODO: Don't use IdentityTransform here. Authenticate using validator keys - let mut gossipsub = Behavior::new(MessageAuthenticity::Anonymous, config.unwrap()).unwrap(); + let mut gossip = Behavior::new(MessageAuthenticity::Anonymous, config.unwrap()).unwrap(); // Subscribe to the base topic let topic = IdentTopic::new(BASE_TOPIC); - let _ = gossipsub.subscribe(&topic); + let _ = gossip.subscribe(&topic); - gossipsub + gossip } diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index 09b7402d..97f00cbf 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -1,12 +1,46 @@ -use serai_client::primitives::NetworkId; +use std::{ + sync::{Arc, RwLock}, + collections::{HashSet, HashMap}, +}; +use serai_client::primitives::{NetworkId, PublicKey}; + +use tokio::sync::mpsc; + +use futures_util::StreamExt; +use libp2p::{ + multihash::Multihash, + identity::PeerId, + swarm::{dial_opts::DialOpts, NetworkBehaviour, SwarmEvent, Swarm}, +}; + +/// A struct to sync the validators from the Serai node in order to keep track of them. 
+mod validators;
+
+/// The authentication protocol upgrade to limit the P2P network to active validators.
+mod authenticate;
+
+/// The dial task, to find new peers to connect to
+mod dial;
+
+/// The request-response messages and behavior
 mod reqres;
 use reqres::{Request, Response};
 
+/// The gossip messages and behavior
 mod gossip;
 
+/// The heartbeat task, effecting sync of Tributaries
 mod heartbeat;
 
+const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o')
+
+fn peer_id_from_public(public: PublicKey) -> PeerId {
+  // 0 represents the identity Multihash, that no hash was performed
+  // It's an internal constant so we can't refer to the constant inside libp2p
+  PeerId::from_multihash(Multihash::wrap(0, &public.0).unwrap()).unwrap()
+}
+
 struct Peer;
 impl Peer {
   async fn send(&self, request: Request) -> Result {
@@ -14,6 +48,11 @@ impl Peer {
   }
 }
 
+#[derive(Clone)]
+struct Peers {
+  peers: Arc<RwLock<HashMap<NetworkId, HashSet<PeerId>>>>,
+}
+
 #[derive(Clone, Debug)]
 struct P2p;
 impl P2p {
@@ -28,3 +67,9 @@ impl tributary::P2p for P2p {
     todo!("TODO")
   }
 }
+
+#[derive(NetworkBehaviour)]
+struct Behavior {
+  reqres: reqres::Behavior,
+  gossip: gossip::Behavior,
+}
diff --git a/coordinator/src/p2p/reqres.rs b/coordinator/src/p2p/reqres.rs
index d1b1a2ec..b5d87c1c 100644
--- a/coordinator/src/p2p/reqres.rs
+++ b/coordinator/src/p2p/reqres.rs
@@ -1,4 +1,4 @@
-use core::time::Duration;
+use core::{fmt, time::Duration};
 use std::io::{self, Read};
 
 use async_trait::async_trait;
@@ -46,6 +46,15 @@ pub(crate) enum Response {
   Blocks(Vec),
   NotableCosigns(Vec),
 }
+impl fmt::Debug for Response {
+  fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
+    (match self {
+      Response::Blocks(_) => fmt.debug_struct("Response::Blocks"),
+      Response::NotableCosigns(_) => fmt.debug_struct("Response::NotableCosigns"),
+    })
+    .finish_non_exhaustive()
+  }
+}
 
 /// The codec used for the request-response protocol.
 ///
@@ -53,7 +62,7 @@ pub(crate) enum Response {
 /// ideally, we'd use borsh directly with the `io` traits defined here, they're async and there
 /// isn't an amenable API within borsh for incremental deserialization.
 #[derive(Default, Clone, Copy, Debug)]
-struct Codec;
+pub(crate) struct Codec;
 impl Codec {
   async fn read<M: BorshDeserialize>(io: &mut (impl Unpin + AsyncRead)) -> io::Result<M> {
     let mut len = [0; 4];
diff --git a/coordinator/src/p2p/validators.rs b/coordinator/src/p2p/validators.rs
new file mode 100644
index 00000000..1bd10110
--- /dev/null
+++ b/coordinator/src/p2p/validators.rs
@@ -0,0 +1,69 @@
+use std::collections::HashMap;
+
+use serai_client::{primitives::NetworkId, validator_sets::primitives::Session, Serai};
+
+use libp2p::PeerId;
+
+use crate::p2p::peer_id_from_public;
+
+pub(crate) struct Validators {
+  serai: Serai,
+
+  // A cache for which session we're populated with the validators of
+  sessions: HashMap,
+  // The validators by network
+  by_network: HashMap>,
+  // The set of all validators (as a HashMap to represent the amount of inclusions)
+  set: HashMap,
+}
+
+impl Validators {
+  pub(crate) async fn update(&mut self) -> Result<(), String> {
+    let temporal_serai =
+      self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?;
+    let temporal_serai = temporal_serai.validator_sets();
+    for network in serai_client::primitives::NETWORKS {
+      if network == NetworkId::Serai {
+        continue;
+      }
+      let Some(session) = temporal_serai.session(network).await.map_err(|e| format!("{e:?}"))?
+ else { + continue; + }; + // If the session has changed, populate it with the current validators + if self.sessions.get(&network) != Some(&session) { + let new_validators = + temporal_serai.active_network_validators(network).await.map_err(|e| format!("{e:?}"))?; + let new_validators = + new_validators.into_iter().map(peer_id_from_public).collect::>(); + + // Remove the existing validators + for validator in self.by_network.remove(&network).unwrap_or(vec![]) { + let mut inclusions = self.set.remove(&validator).unwrap(); + inclusions -= 1; + if inclusions != 0 { + self.set.insert(validator, inclusions); + } + } + + // Add the new validators + for validator in new_validators.iter().copied() { + *self.set.entry(validator).or_insert(0) += 1; + } + self.by_network.insert(network, new_validators); + + // Update the session we have populated + self.sessions.insert(network, session); + } + } + Ok(()) + } + + pub(crate) fn validators(&self) -> &HashMap> { + &self.by_network + } + + pub(crate) fn contains(&self, peer_id: &PeerId) -> bool { + self.set.contains_key(peer_id) + } +} From 9a5a661d04215e319f20851449ad3f9aa174bcab Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 4 Jan 2025 23:28:29 -0500 Subject: [PATCH 245/368] Start on the task to manage the swarm --- coordinator/Cargo.toml | 1 + coordinator/src/p2p/dial.rs | 8 +- coordinator/src/p2p/mod.rs | 136 +++++++++++++++++++++++++++++- coordinator/src/p2p/reqres.rs | 2 +- coordinator/src/p2p/validators.rs | 42 +++++---- 5 files changed, 168 insertions(+), 21 deletions(-) diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index 9515bd74..d0f8cb24 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -25,6 +25,7 @@ bitvec = { version = "1", default-features = false, features = ["std"] } rand_core = { version = "0.6", default-features = false, features = ["std"] } blake2 = { version = "0.10", default-features = false, features = ["std"] } +schnorrkel = { version = "0.11", default-features = false, features = ["std"] } transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std", "recommended"] } ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std"] } diff --git a/coordinator/src/p2p/dial.rs b/coordinator/src/p2p/dial.rs index 2e427ee7..59c976cd 100644 --- a/coordinator/src/p2p/dial.rs +++ b/coordinator/src/p2p/dial.rs @@ -1,4 +1,5 @@ use core::future::Future; +use std::collections::HashSet; use rand_core::{RngCore, OsRng}; @@ -25,6 +26,7 @@ struct DialTask { } impl ContinuallyRan for DialTask { + // Only run every thirty seconds, not the default of every five const DELAY_BETWEEN_ITERATIONS: u64 = 30; fn run_iteration(&mut self) -> impl Send + Future> { @@ -37,7 +39,7 @@ impl ContinuallyRan for DialTask { .peers .peers .read() - .unwrap() + .await .iter() .map(|(network, peers)| (*network, peers.len())) .collect::>(); @@ -54,9 +56,9 @@ impl ContinuallyRan for DialTask { (peer_count < self .validators - .validators() + .by_network() .get(&network) - .map(Vec::len) + .map(HashSet::len) .unwrap_or(0) .saturating_sub(1)) { diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index 97f00cbf..c1f6eca5 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -1,11 +1,12 @@ use std::{ - sync::{Arc, RwLock}, + sync::Arc, collections::{HashSet, HashMap}, + time::{Duration, Instant}, }; use serai_client::primitives::{NetworkId, PublicKey}; -use tokio::sync::mpsc; +use tokio::sync::{mpsc, RwLock}; 
use futures_util::StreamExt; use libp2p::{ @@ -16,6 +17,7 @@ use libp2p::{ /// A struct to sync the validators from the Serai node in order to keep track of them. mod validators; +use validators::Validators; /// The authentication protocol upgrade to limit the P2P network to active validators. mod authenticate; @@ -73,3 +75,133 @@ struct Behavior { reqres: reqres::Behavior, gossip: gossip::Behavior, } + +struct SwarmTask { + to_dial: mpsc::UnboundedReceiver, + + validators: Arc>, + last_refreshed_validators: Instant, + next_refresh_validators: Instant, + + peers: Peers, + rebuild_peers_at: Instant, + + swarm: Swarm, +} + +impl SwarmTask { + async fn run(mut self) { + loop { + let time_till_refresh_validators = + self.next_refresh_validators.saturating_duration_since(Instant::now()); + let time_till_rebuild_peers = self.rebuild_peers_at.saturating_duration_since(Instant::now()); + + tokio::select! { + biased; + + // Refresh the instance of validators we use to track peers/share with authenticate + () = tokio::time::sleep(time_till_refresh_validators) => { + const TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(5); + const MAX_TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(120); + + let update = self.validators.write().await.update().await; + match update { + Ok(removed) => { + for removed in removed { + let _: Result<_, _> = self.swarm.disconnect_peer_id(removed); + } + self.last_refreshed_validators = Instant::now(); + self.next_refresh_validators = Instant::now() + TIME_BETWEEN_REFRESH_VALIDATORS; + } + Err(e) => { + log::warn!("couldn't refresh validators: {e:?}"); + // Increase the delay before the next refresh by using the time since the last + // refresh. This will be 5 seconds, then 5 seconds, then 10 seconds, then 20... 
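+                // (Each retry thus waits as long as the gap since the last successful refresh,
+                // doubling per consecutive failure, until the cap applied just below.)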
+ let time_since_last = self + .next_refresh_validators + .saturating_duration_since(self.last_refreshed_validators); + // But limit the delay + self.next_refresh_validators = + Instant::now() + time_since_last.min(MAX_TIME_BETWEEN_REFRESH_VALIDATORS); + }, + } + } + + // Rebuild the peers every 10 minutes + // + // This handles edge cases such as when a validator changes the networks they're present + // in, race conditions, or any other edge cases/quirks which would otherwise risk spiraling + // out of control + () = tokio::time::sleep(time_till_rebuild_peers) => { + const TIME_BETWEEN_REBUILD_PEERS: Duration = Duration::from_secs(10 * 60); + + let validators_by_network = self.validators.read().await.by_network().clone(); + let connected = self.swarm.connected_peers().copied().collect::>(); + let mut peers = HashMap::new(); + for (network, validators) in validators_by_network { + peers.insert(network, validators.intersection(&connected).copied().collect()); + } + *self.peers.peers.write().await = peers; + + self.rebuild_peers_at = Instant::now() + TIME_BETWEEN_REBUILD_PEERS; + } + + // Dial peers we're instructed to + dial_opts = self.to_dial.recv() => { + let dial_opts = dial_opts.expect("DialTask was closed?"); + let _: Result<_, _> = self.swarm.dial(dial_opts); + } + + // Handle swarm events + event = self.swarm.next() => { + // `Swarm::next` will never return `Poll::Ready(None)` + // https://docs.rs/ + // libp2p/0.54.1/libp2p/struct.Swarm.html#impl-Stream-for-Swarm%3CTBehaviour%3E + let event = event.unwrap(); + match event { + SwarmEvent::Behaviour(BehaviorEvent::Reqres(event)) => todo!("TODO"), + SwarmEvent::Behaviour(BehaviorEvent::Gossip(event)) => todo!("TODO"), + // New connection, so update peers + SwarmEvent::ConnectionEstablished { peer_id, .. } => { + let Some(networks) = + self.validators.read().await.networks(&peer_id).cloned() else { continue }; + for network in networks { + self + .peers + .peers + .write() + .await + .entry(network) + .or_insert_with(HashSet::new) + .insert(peer_id); + } + }, + // Connection closed, so update peers + SwarmEvent::ConnectionClosed { peer_id, .. } => { + let Some(networks) = + self.validators.read().await.networks(&peer_id).cloned() else { continue }; + for network in networks { + self + .peers + .peers + .write() + .await + .entry(network) + .or_insert_with(HashSet::new) + .remove(&peer_id); + } + }, + SwarmEvent::IncomingConnection { .. } | + SwarmEvent::IncomingConnectionError { .. } | + SwarmEvent::OutgoingConnectionError { .. } | + SwarmEvent::NewListenAddr { .. } | + SwarmEvent::ExpiredListenAddr { .. } | + SwarmEvent::ListenerClosed { .. } | + SwarmEvent::ListenerError { .. } | + SwarmEvent::Dialing { .. 
} => {} + } + } + } + } + } +} diff --git a/coordinator/src/p2p/reqres.rs b/coordinator/src/p2p/reqres.rs index b5d87c1c..7faf2f8b 100644 --- a/coordinator/src/p2p/reqres.rs +++ b/coordinator/src/p2p/reqres.rs @@ -1,5 +1,5 @@ use core::{fmt, time::Duration}; -use std::io::{self, Read}; +use std::io; use async_trait::async_trait; diff --git a/coordinator/src/p2p/validators.rs b/coordinator/src/p2p/validators.rs index 1bd10110..26487a59 100644 --- a/coordinator/src/p2p/validators.rs +++ b/coordinator/src/p2p/validators.rs @@ -1,4 +1,4 @@ -use std::collections::HashMap; +use std::collections::{HashSet, HashMap}; use serai_client::{primitives::NetworkId, validator_sets::primitives::Session, Serai}; @@ -12,13 +12,18 @@ pub(crate) struct Validators { // A cache for which session we're populated with the validators of sessions: HashMap, // The validators by network - by_network: HashMap>, - // The set of all validators (as a HashMap to represent the amount of inclusions) - set: HashMap, + by_network: HashMap>, + // The validators and their networks + validators: HashMap>, } impl Validators { - pub(crate) async fn update(&mut self) -> Result<(), String> { + /// Update the view of the validators. + /// + /// Returns all validators removed from the active validator set. + pub(crate) async fn update(&mut self) -> Result, String> { + let mut removed = HashSet::new(); + let temporal_serai = self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; let temporal_serai = temporal_serai.validator_sets(); @@ -35,20 +40,22 @@ impl Validators { let new_validators = temporal_serai.active_network_validators(network).await.map_err(|e| format!("{e:?}"))?; let new_validators = - new_validators.into_iter().map(peer_id_from_public).collect::>(); + new_validators.into_iter().map(peer_id_from_public).collect::>(); // Remove the existing validators - for validator in self.by_network.remove(&network).unwrap_or(vec![]) { - let mut inclusions = self.set.remove(&validator).unwrap(); - inclusions -= 1; - if inclusions != 0 { - self.set.insert(validator, inclusions); + for validator in self.by_network.remove(&network).unwrap_or_else(HashSet::new) { + let mut networks = self.validators.remove(&validator).unwrap(); + networks.remove(&network); + if networks.is_empty() { + removed.insert(validator); + } else { + self.validators.insert(validator, networks); } } // Add the new validators for validator in new_validators.iter().copied() { - *self.set.entry(validator).or_insert(0) += 1; + self.validators.entry(validator).or_insert_with(HashSet::new).insert(network); } self.by_network.insert(network, new_validators); @@ -56,14 +63,19 @@ impl Validators { self.sessions.insert(network, session); } } - Ok(()) + + Ok(removed) } - pub(crate) fn validators(&self) -> &HashMap> { + pub(crate) fn by_network(&self) -> &HashMap> { &self.by_network } pub(crate) fn contains(&self, peer_id: &PeerId) -> bool { - self.set.contains_key(peer_id) + self.validators.contains_key(peer_id) + } + + pub(crate) fn networks(&self, peer_id: &PeerId) -> Option<&HashSet> { + self.validators.get(peer_id) } } From 479ca0410a6c64f00b5bd26229fd8898b4f89703 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 4 Jan 2025 23:28:54 -0500 Subject: [PATCH 246/368] Add commentary on the use of FuturesOrdered --- coordinator/substrate/src/canonical.rs | 3 +++ coordinator/substrate/src/ephemeral.rs | 5 +++++ 2 files changed, 8 insertions(+) diff --git a/coordinator/substrate/src/canonical.rs b/coordinator/substrate/src/canonical.rs index 
b9479aeb..f333e11f 100644
--- a/coordinator/substrate/src/canonical.rs
+++ b/coordinator/substrate/src/canonical.rs
@@ -107,6 +107,9 @@ impl ContinuallyRan for CanonicalEventStream {
       // Sync the next set of upcoming blocks all at once to minimize latency
       const BLOCKS_TO_SYNC_AT_ONCE: u64 = 10;
 
+      // FuturesOrdered can be bad practice due to potentially causing timeouts if it isn't
+      // sufficiently polled. Considering our processing loop is minimal and it does poll this,
+      // it's fine.
       let mut set = FuturesOrdered::new();
       for block_number in
         next_block ..= latest_finalized_block.min(next_block + BLOCKS_TO_SYNC_AT_ONCE)
diff --git a/coordinator/substrate/src/ephemeral.rs b/coordinator/substrate/src/ephemeral.rs
index 858b5895..703d5b3a 100644
--- a/coordinator/substrate/src/ephemeral.rs
+++ b/coordinator/substrate/src/ephemeral.rs
@@ -100,6 +100,11 @@ impl ContinuallyRan for EphemeralEventStream {
       // Sync the next set of upcoming blocks all at once to minimize latency
       const BLOCKS_TO_SYNC_AT_ONCE: u64 = 50;
 
+      // FuturesOrdered can be bad practice due to potentially causing timeouts if it isn't
+      // sufficiently polled. Our processing loop isn't minimal, itself making multiple requests,
+      // but the loop body should only be executed a few times a week. It's better to get through
+      // most blocks with this optimization, and have timeouts a few times a week, than not have
+      // this at all.
       let mut set = FuturesOrdered::new();
       for block_number in
         next_block ..= latest_finalized_block.min(next_block + BLOCKS_TO_SYNC_AT_ONCE)

From 2b8f481364a7598bd244cba833a2160a0345eb3c Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sun, 5 Jan 2025 00:17:05 -0500
Subject: [PATCH 247/368] Parallelize requests within Validators::update

---
 coordinator/src/p2p/dial.rs       |  5 +-
 coordinator/src/p2p/mod.rs        |  4 +-
 coordinator/src/p2p/validators.rs | 86 ++++++++++++++++++++-----------
 3 files changed, 60 insertions(+), 35 deletions(-)

diff --git a/coordinator/src/p2p/dial.rs b/coordinator/src/p2p/dial.rs
index 59c976cd..13d7f7ff 100644
--- a/coordinator/src/p2p/dial.rs
+++ b/coordinator/src/p2p/dial.rs
@@ -26,8 +26,9 @@ struct DialTask {
 }
 
 impl ContinuallyRan for DialTask {
-  // Only run every thirty seconds, not the default of every five
-  const DELAY_BETWEEN_ITERATIONS: u64 = 30;
+  // Only run every five minutes, not the default of every five seconds
+  const DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60;
+  const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 10 * 60;
 
   fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
     async move {
diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs
index c1f6eca5..6ca71816 100644
--- a/coordinator/src/p2p/mod.rs
+++ b/coordinator/src/p2p/mod.rs
@@ -101,8 +101,8 @@ impl SwarmTask {
         // Refresh the instance of validators we use to track peers/share with authenticate
         () = tokio::time::sleep(time_till_refresh_validators) => {
-          const TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(5);
-          const MAX_TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(120);
+          const TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(60);
+          const MAX_TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(5 * 60);
 
           let update = self.validators.write().await.update().await;
           match update {
diff --git a/coordinator/src/p2p/validators.rs b/coordinator/src/p2p/validators.rs
index 26487a59..3956a547 100644
--- a/coordinator/src/p2p/validators.rs
+++ b/coordinator/src/p2p/validators.rs
@@ -4,6 +4,8 @@ use serai_client::{primitives::NetworkId,
validator_sets::primitives::Session, S use libp2p::PeerId; +use futures_util::stream::{StreamExt, FuturesUnordered}; + use crate::p2p::peer_id_from_public; pub(crate) struct Validators { @@ -27,41 +29,63 @@ impl Validators { let temporal_serai = self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; let temporal_serai = temporal_serai.validator_sets(); - for network in serai_client::primitives::NETWORKS { - if network == NetworkId::Serai { - continue; - } - let Some(session) = temporal_serai.session(network).await.map_err(|e| format!("{e:?}"))? - else { - continue; - }; - // If the session has changed, populate it with the current validators - if self.sessions.get(&network) != Some(&session) { - let new_validators = - temporal_serai.active_network_validators(network).await.map_err(|e| format!("{e:?}"))?; - let new_validators = - new_validators.into_iter().map(peer_id_from_public).collect::>(); - // Remove the existing validators - for validator in self.by_network.remove(&network).unwrap_or_else(HashSet::new) { - let mut networks = self.validators.remove(&validator).unwrap(); - networks.remove(&network); - if networks.is_empty() { - removed.insert(validator); + let mut session_changes = vec![]; + { + // FuturesUnordered can be bad practice as it'll cause timeouts if infrequently polled, but + // we poll it till it yields all futures with the most minimal processing possible + let mut futures = FuturesUnordered::new(); + for network in serai_client::primitives::NETWORKS { + if network == NetworkId::Serai { + continue; + } + let sessions = &self.sessions; + futures.push(async move { + let session = match temporal_serai.session(network).await { + Ok(Some(session)) => session, + Ok(None) => return Ok(None), + Err(e) => return Err(format!("{e:?}")), + }; + + if sessions.get(&network) == Some(&session) { + Ok(None) } else { - self.validators.insert(validator, networks); + match temporal_serai.active_network_validators(network).await { + Ok(validators) => Ok(Some((network, session, validators))), + Err(e) => Err(format!("{e:?}")), + } } - } - - // Add the new validators - for validator in new_validators.iter().copied() { - self.validators.entry(validator).or_insert_with(HashSet::new).insert(network); - } - self.by_network.insert(network, new_validators); - - // Update the session we have populated - self.sessions.insert(network, session); + }); } + while let Some(session_change) = futures.next().await { + if let Some(session_change) = session_change? 
{ + session_changes.push(session_change); + } + } + } + + for (network, session, validators) in session_changes { + let validators = validators.into_iter().map(peer_id_from_public).collect::>(); + + // Remove the existing validators + for validator in self.by_network.remove(&network).unwrap_or_else(HashSet::new) { + let mut networks = self.validators.remove(&validator).unwrap(); + networks.remove(&network); + if networks.is_empty() { + removed.insert(validator); + } else { + self.validators.insert(validator, networks); + } + } + + // Add the new validators + for validator in validators.iter().copied() { + self.validators.entry(validator).or_insert_with(HashSet::new).insert(network); + } + self.by_network.insert(network, validators); + + // Update the session we have populated + self.sessions.insert(network, session); } Ok(removed) From 96518500b19a2e02aff925846c3fa920b04ea5bb Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 5 Jan 2025 00:29:11 -0500 Subject: [PATCH 248/368] Don't hold the shared Validators write lock while making requests to Serai --- coordinator/src/p2p/mod.rs | 4 +- coordinator/src/p2p/validators.rs | 61 ++++++++++++++++++++++++------- 2 files changed, 50 insertions(+), 15 deletions(-) diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index 6ca71816..dbecc7c7 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -17,7 +17,7 @@ use libp2p::{ /// A struct to sync the validators from the Serai node in order to keep track of them. mod validators; -use validators::Validators; +use validators::{Validators, update_shared_validators}; /// The authentication protocol upgrade to limit the P2P network to active validators. mod authenticate; @@ -104,7 +104,7 @@ impl SwarmTask { const TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(60); const MAX_TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(5 * 60); - let update = self.validators.write().await.update().await; + let update = update_shared_validators(&self.validators).await; match update { Ok(removed) => { for removed in removed { diff --git a/coordinator/src/p2p/validators.rs b/coordinator/src/p2p/validators.rs index 3956a547..c3d3b2ba 100644 --- a/coordinator/src/p2p/validators.rs +++ b/coordinator/src/p2p/validators.rs @@ -1,10 +1,15 @@ -use std::collections::{HashSet, HashMap}; +use core::borrow::Borrow; +use std::{ + sync::Arc, + collections::{HashSet, HashMap}, +}; use serai_client::{primitives::NetworkId, validator_sets::primitives::Session, Serai}; use libp2p::PeerId; use futures_util::stream::{StreamExt, FuturesUnordered}; +use tokio::sync::RwLock; use crate::p2p::peer_id_from_public; @@ -20,14 +25,12 @@ pub(crate) struct Validators { } impl Validators { - /// Update the view of the validators. - /// - /// Returns all validators removed from the active validator set. 
- pub(crate) async fn update(&mut self) -> Result, String> { - let mut removed = HashSet::new(); - + async fn session_changes( + serai: impl Borrow, + sessions: impl Borrow>, + ) -> Result)>, String> { let temporal_serai = - self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; + serai.borrow().as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; let temporal_serai = temporal_serai.validator_sets(); let mut session_changes = vec![]; @@ -39,7 +42,7 @@ impl Validators { if network == NetworkId::Serai { continue; } - let sessions = &self.sessions; + let sessions = sessions.borrow(); futures.push(async move { let session = match temporal_serai.session(network).await { Ok(Some(session)) => session, @@ -51,7 +54,11 @@ impl Validators { Ok(None) } else { match temporal_serai.active_network_validators(network).await { - Ok(validators) => Ok(Some((network, session, validators))), + Ok(validators) => Ok(Some(( + network, + session, + validators.into_iter().map(peer_id_from_public).collect(), + ))), Err(e) => Err(format!("{e:?}")), } } @@ -64,9 +71,16 @@ impl Validators { } } - for (network, session, validators) in session_changes { - let validators = validators.into_iter().map(peer_id_from_public).collect::>(); + Ok(session_changes) + } + fn incorporate_session_changes( + &mut self, + session_changes: Vec<(NetworkId, Session, HashSet)>, + ) -> HashSet { + let mut removed = HashSet::new(); + + for (network, session, validators) in session_changes { // Remove the existing validators for validator in self.by_network.remove(&network).unwrap_or_else(HashSet::new) { let mut networks = self.validators.remove(&validator).unwrap(); @@ -88,7 +102,15 @@ impl Validators { self.sessions.insert(network, session); } - Ok(removed) + removed + } + + /// Update the view of the validators. + /// + /// Returns all validators removed from the active validator set. + pub(crate) async fn update(&mut self) -> Result, String> { + let session_changes = Self::session_changes(&self.serai, &self.sessions).await?; + Ok(self.incorporate_session_changes(session_changes)) } pub(crate) fn by_network(&self) -> &HashMap> { @@ -103,3 +125,16 @@ impl Validators { self.validators.get(peer_id) } } + +/// Update the view of the validators. +/// +/// Returns all validators removed from the active validator set. +pub(crate) async fn update_shared_validators( + validators: &Arc>, +) -> Result, String> { + let session_changes = { + let validators = validators.read().await; + Validators::session_changes(validators.serai.clone(), validators.sessions.clone()).await? + }; + Ok(validators.write().await.incorporate_session_changes(session_changes)) +} From c6d0fb477cce4840662ae36781b937a6c5549c26 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 5 Jan 2025 00:55:25 -0500 Subject: [PATCH 249/368] Inline noise into OnlyValidators libp2p does support (noise, OnlyValidators) but it'll interpret it as either, not a chain. This will act as the desired chain. 
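With noise inlined, the upgrade now runs as a sequence: the noise handshake (yielding the encrypted socket and the noise PeerId), then the commit-reveal challenge exchange from the earlier authentication patch, then the signed authentication checked against the active validators. As a minimal, self-contained sketch of just the commit-reveal step (the helper names are illustrative, not from this codebase):

use blake2::{Blake2s256, Digest};
use rand_core::{OsRng, RngCore};

// Commit to a challenge by first sending only its hash; preimages are revealed
// once both commitments have been exchanged, so neither side can choose its
// challenge as a function of the other's
fn commit(challenge: &[u8; 32]) -> [u8; 32] {
  Blake2s256::digest(challenge).into()
}

// On reveal, check the received challenge against the earlier commitment
fn verify_reveal(commitment: &[u8; 32], challenge: &[u8; 32]) -> bool {
  commit(challenge) == *commitment
}

fn main() {
  let mut our_challenge = [0; 32];
  OsRng.fill_bytes(&mut our_challenge);
  let our_commitment = commit(&our_challenge);
  // send `our_commitment`, read theirs, send `our_challenge`, read their reveal,
  // then verify their reveal before signing over both challenges
  assert!(verify_reveal(&our_commitment, &our_challenge));
}

The chaining itself follows in `upgrade_inbound`/`upgrade_outbound` below, which run the noise upgrade and then `authenticate` within a single future.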
--- Cargo.lock | 1 + coordinator/cosign/src/lib.rs | 2 +- coordinator/src/p2p/authenticate.rs | 55 ++++++++++++++++------------- coordinator/src/p2p/mod.rs | 1 + substrate/node/src/rpc.rs | 2 +- 5 files changed, 35 insertions(+), 26 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 802c5f00..eb3c9b4f 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8327,6 +8327,7 @@ dependencies = [ "parity-scale-codec", "rand_core", "schnorr-signatures", + "schnorrkel", "serai-client", "serai-cosign", "serai-db", diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index faa449dd..3d2ab5ab 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -29,7 +29,7 @@ pub use delay::BROADCAST_FREQUENCY; use delay::LatestCosignedBlockNumber; /// The schnorrkel context to used when signing a cosign. -pub const COSIGN_CONTEXT: &[u8] = b"serai-cosign"; +pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign"; /// A 'global session', defined as all validator sets used for cosigning at a given moment. /// diff --git a/coordinator/src/p2p/authenticate.rs b/coordinator/src/p2p/authenticate.rs index 99d98515..ffa8a33b 100644 --- a/coordinator/src/p2p/authenticate.rs +++ b/coordinator/src/p2p/authenticate.rs @@ -12,7 +12,12 @@ use serai_client::primitives::PublicKey as Public; use tokio::sync::RwLock; use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt}; -use libp2p::{core::UpgradeInfo, InboundUpgrade, OutboundUpgrade, identity::PeerId, noise}; +use libp2p::{ + core::UpgradeInfo, + InboundUpgrade, OutboundUpgrade, + identity::{self, PeerId}, + noise, +}; use crate::p2p::{validators::Validators, peer_id_from_public}; @@ -21,7 +26,7 @@ const PROTOCOL: &str = "/serai/coordinator/validators"; struct OnlyValidators { validators: Arc>, serai_key: Zeroizing, - our_peer_id: PeerId, + noise_keypair: identity::Keypair, } impl OnlyValidators { @@ -112,33 +117,35 @@ impl OnlyValidators { } impl UpgradeInfo for OnlyValidators { - type Info = &'static str; - type InfoIter = [&'static str; 1]; - fn protocol_info(&self) -> [&'static str; 1] { - [PROTOCOL] + type Info = ::Info; + type InfoIter = ::InfoIter; + fn protocol_info(&self) -> Self::InfoIter { + // A keypair only causes an error if its sign operation fails, which is only possible with RSA, + // which isn't used within this codebase + noise::Config::new(&self.noise_keypair).unwrap().protocol_info() } } -impl InboundUpgrade<(PeerId, noise::Output)> - for OnlyValidators -{ +impl InboundUpgrade for OnlyValidators { type Output = (PeerId, noise::Output); type Error = io::Error; type Future = Pin>>>; - fn upgrade_inbound( - self, - (dialer_noise_peer_id, mut socket): (PeerId, noise::Output), - _: Self::Info, - ) -> Self::Future { + fn upgrade_inbound(self, socket: S, info: Self::Info) -> Self::Future { Box::pin(async move { + let (dialer_noise_peer_id, mut socket) = noise::Config::new(&self.noise_keypair) + .unwrap() + .upgrade_inbound(socket, info) + .await + .map_err(io::Error::other)?; + let (our_challenge, dialer_challenge) = OnlyValidators::challenges(&mut socket).await?; let dialer_serai_validator = self .authenticate( &mut socket, dialer_noise_peer_id, dialer_challenge, - self.our_peer_id, + PeerId::from_public_key(&self.noise_keypair.public()), our_challenge, ) .await?; @@ -147,24 +154,24 @@ impl InboundUpgrade<(PeerId, } } -impl OutboundUpgrade<(PeerId, noise::Output)> - for OnlyValidators -{ +impl OutboundUpgrade for OnlyValidators { type Output = (PeerId, noise::Output); type Error = io::Error; type 
Future = Pin>>>; - fn upgrade_outbound( - self, - (listener_noise_peer_id, mut socket): (PeerId, noise::Output), - _: Self::Info, - ) -> Self::Future { + fn upgrade_outbound(self, socket: S, info: Self::Info) -> Self::Future { Box::pin(async move { + let (listener_noise_peer_id, mut socket) = noise::Config::new(&self.noise_keypair) + .unwrap() + .upgrade_outbound(socket, info) + .await + .map_err(io::Error::other)?; + let (our_challenge, listener_challenge) = OnlyValidators::challenges(&mut socket).await?; let listener_serai_validator = self .authenticate( &mut socket, - self.our_peer_id, + PeerId::from_public_key(&self.noise_keypair.public()), our_challenge, listener_noise_peer_id, listener_challenge, diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index dbecc7c7..71984b8c 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -100,6 +100,7 @@ impl SwarmTask { biased; // Refresh the instance of validators we use to track peers/share with authenticate + // TODO: Move this to a task () = tokio::time::sleep(time_till_refresh_validators) => { const TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(60); const MAX_TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(5 * 60); diff --git a/substrate/node/src/rpc.rs b/substrate/node/src/rpc.rs index b818c798..63330419 100644 --- a/substrate/node/src/rpc.rs +++ b/substrate/node/src/rpc.rs @@ -65,7 +65,7 @@ where let validators = client.runtime_api().validators(latest_block, network).map_err(|_| { jsonrpsee::core::Error::to_call_error(std::io::Error::other(format!( "couldn't get validators from the latest block, which is likely a fatal bug. {}", - "please report this at https://github.com/serai-dex/serai", + "please report this at https://github.com/serai-dex/serai/issues", ))) })?; // Always return the protocol's bootnodes From 257f69127787401fba13dc5a2ecb4757251df978 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 5 Jan 2025 01:23:28 -0500 Subject: [PATCH 250/368] Start filling out message handling in SwarmTask --- coordinator/cosign/src/lib.rs | 4 +-- coordinator/src/p2p/gossip.rs | 3 +- coordinator/src/p2p/mod.rs | 52 ++++++++++++++++++++++++++++++++--- coordinator/src/p2p/reqres.rs | 3 +- 4 files changed, 54 insertions(+), 8 deletions(-) diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index 3d2ab5ab..29420d56 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -285,10 +285,10 @@ impl Cosigning { /// /// If this global session hasn't produced any notable cosigns, this will return the latest /// cosigns for this session. 
- pub fn notable_cosigns(&self, global_session: [u8; 32]) -> Vec { + pub fn notable_cosigns(getter: &impl Get, global_session: [u8; 32]) -> Vec { let mut cosigns = Vec::with_capacity(serai_client::primitives::NETWORKS.len()); for network in serai_client::primitives::NETWORKS { - if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, global_session, network) { + if let Some(cosign) = NetworksLatestCosignedBlock::get(getter, global_session, network) { cosigns.push(cosign); } } diff --git a/coordinator/src/p2p/gossip.rs b/coordinator/src/p2p/gossip.rs index 8e32180b..dc6a5849 100644 --- a/coordinator/src/p2p/gossip.rs +++ b/coordinator/src/p2p/gossip.rs @@ -10,6 +10,7 @@ use libp2p::gossipsub::{ IdentTopic, MessageId, MessageAuthenticity, ValidationMode, ConfigBuilder, IdentityTransform, AllowAllSubscriptionFilter, Behaviour, }; +pub use libp2p::gossipsub::Event; use serai_cosign::SignedCosign; @@ -27,7 +28,7 @@ fn topic_for_set(set: ValidatorSet) -> IdentTopic { #[derive(Clone, BorshSerialize, BorshDeserialize)] pub(crate) enum Message { - Tribuary { genesis: [u8; 32], message: Vec }, + Tributary { set: ValidatorSet, message: Vec }, Cosign(SignedCosign), } diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index 71984b8c..55cc311b 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -4,10 +4,15 @@ use std::{ time::{Duration, Instant}, }; +use borsh::BorshDeserialize; + use serai_client::primitives::{NetworkId, PublicKey}; use tokio::sync::{mpsc, RwLock}; +use serai_db::Db; +use serai_cosign::Cosigning; + use futures_util::StreamExt; use libp2p::{ multihash::Multihash, @@ -76,7 +81,7 @@ struct Behavior { gossip: gossip::Behavior, } -struct SwarmTask { +struct SwarmTask { to_dial: mpsc::UnboundedReceiver, validators: Arc>, @@ -86,10 +91,11 @@ struct SwarmTask { peers: Peers, rebuild_peers_at: Instant, + db: D, swarm: Swarm, } -impl SwarmTask { +impl SwarmTask { async fn run(mut self) { loop { let time_till_refresh_validators = @@ -160,8 +166,42 @@ impl SwarmTask { // libp2p/0.54.1/libp2p/struct.Swarm.html#impl-Stream-for-Swarm%3CTBehaviour%3E let event = event.unwrap(); match event { - SwarmEvent::Behaviour(BehaviorEvent::Reqres(event)) => todo!("TODO"), - SwarmEvent::Behaviour(BehaviorEvent::Gossip(event)) => todo!("TODO"), + SwarmEvent::Behaviour(BehaviorEvent::Reqres(event)) => match event { + reqres::Event::Message { message, .. } => match message { + reqres::Message::Request { request_id: _, request, channel } => { + match request { + // TODO: Send these + reqres::Request::KeepAlive => {}, + reqres::Request::Heartbeat { set, latest_block_hash } => todo!("TODO"), + reqres::Request::NotableCosigns { global_session } => { + let cosigns = Cosigning::::notable_cosigns(&self.db, global_session); + let res = reqres::Response::NotableCosigns(cosigns); + let _: Result<_, _> = + self.swarm.behaviour_mut().reqres.send_response(channel, res); + }, + } + } + reqres::Message::Response { request_id, response } => todo!("TODO"), + } + reqres::Event::OutboundFailure { request_id, .. } => todo!("TODO"), + reqres::Event::InboundFailure { .. } | reqres::Event::ResponseSent { .. } => {}, + }, + SwarmEvent::Behaviour(BehaviorEvent::Gossip(event)) => match event { + gossip::Event::Message { message, .. 
} => {
+              let Ok(message) = gossip::Message::deserialize(&mut message.data.as_slice()) else {
+                continue
+              };
+              match message {
+                gossip::Message::Tributary { set, message } => todo!("TODO"),
+                gossip::Message::Cosign(signed_cosign) => todo!("TODO"),
+              }
+            }
+            gossip::Event::Subscribed { .. } | gossip::Event::Unsubscribed { .. } => {},
+            gossip::Event::GossipsubNotSupported { peer_id } => {
+              let _: Result<_, _> = self.swarm.disconnect_peer_id(peer_id);
+            }
+          },
+
           // New connection, so update peers
           SwarmEvent::ConnectionEstablished { peer_id, .. } => {
             let Some(networks) =
@@ -177,6 +217,7 @@ impl SwarmTask {
                 .insert(peer_id);
             }
           },
+
           // Connection closed, so update peers
           SwarmEvent::ConnectionClosed { peer_id, .. } => {
             let Some(networks) =
@@ -191,7 +232,10 @@ impl SwarmTask {
                 .or_insert_with(HashSet::new)
                 .remove(&peer_id);
             }
+            // TODO: dial_task.run_now() if haven't in past minute
           },
+
+          // We don't handle any of these
           SwarmEvent::IncomingConnection { .. } |
           SwarmEvent::IncomingConnectionError { .. } |
           SwarmEvent::OutgoingConnectionError { .. } |
diff --git a/coordinator/src/p2p/reqres.rs b/coordinator/src/p2p/reqres.rs
index 7faf2f8b..0793e839 100644
--- a/coordinator/src/p2p/reqres.rs
+++ b/coordinator/src/p2p/reqres.rs
@@ -8,7 +8,8 @@ use serai_client::validator_sets::primitives::ValidatorSet;

 use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};

-use libp2p::request_response::{Codec as CodecTrait, Config, Behaviour, ProtocolSupport};
+use libp2p::request_response::{self, Codec as CodecTrait, Config, Behaviour, ProtocolSupport};
+pub use request_response::{Message, Event};

 use serai_cosign::SignedCosign;

From 47a4e534ef625817521931af057e790208e23020 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Tue, 7 Jan 2025 15:25:32 -0500
Subject: [PATCH 251/368] Update serai-processor-signers to
 VariantSignId::Batch([u8; 32])

---
 Cargo.lock                               |  1 +
 processor/signers/Cargo.toml             |  1 +
 processor/signers/src/batch/db.rs        |  5 +-
 processor/signers/src/batch/mod.rs       | 77 ++++++++++++++----------
 processor/signers/src/coordinator/mod.rs |  2 +-
 5 files changed, 51 insertions(+), 35 deletions(-)

diff --git a/Cargo.lock b/Cargo.lock
index eb3c9b4f..fcb9b442 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -8996,6 +8996,7 @@ dependencies = [
 name = "serai-processor-signers"
 version = "0.1.0"
 dependencies = [
+ "blake2",
  "borsh",
  "ciphersuite",
  "frost-schnorrkel",
diff --git a/processor/signers/Cargo.toml b/processor/signers/Cargo.toml
index 07e42052..ddd295d3 100644
--- a/processor/signers/Cargo.toml
+++ b/processor/signers/Cargo.toml
@@ -24,6 +24,7 @@ workspace = true
 rand_core = { version = "0.6", default-features = false }
 zeroize = { version = "1", default-features = false, features = ["std"] }

+blake2 = { version = "0.10", default-features = false, features = ["std"] }
 ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] }
 frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false }
 frost-schnorrkel = { path = "../../crypto/schnorrkel", default-features = false }
diff --git a/processor/signers/src/batch/db.rs b/processor/signers/src/batch/db.rs
index a895e0bb..8d9bc605 100644
--- a/processor/signers/src/batch/db.rs
+++ b/processor/signers/src/batch/db.rs
@@ -5,8 +5,9 @@ use serai_db::{Get, DbTxn, create_db};

 create_db!
{ SignersBatch { - ActiveSigningProtocols: (session: Session) -> Vec, - Batches: (id: u32) -> Batch, + ActiveSigningProtocols: (session: Session) -> Vec<[u8; 32]>, + BatchHash: (id: u32) -> [u8; 32], + Batches: (hash: [u8; 32]) -> Batch, SignedBatches: (id: u32) -> SignedBatch, LastAcknowledgedBatch: () -> u32, } diff --git a/processor/signers/src/batch/mod.rs b/processor/signers/src/batch/mod.rs index b8ad7ccb..c791f4e0 100644 --- a/processor/signers/src/batch/mod.rs +++ b/processor/signers/src/batch/mod.rs @@ -1,9 +1,12 @@ use core::future::Future; use std::collections::HashSet; +use blake2::{digest::typenum::U32, Digest, Blake2b}; use ciphersuite::{group::GroupEncoding, Ristretto}; use frost::dkg::ThresholdKeys; +use scale::Encode; + use serai_validator_sets_primitives::Session; use serai_in_instructions_primitives::{SignedBatch, batch_message}; @@ -40,7 +43,7 @@ pub(crate) struct BatchSignerTask { external_key: E, keys: Vec>, - active_signing_protocols: HashSet, + active_signing_protocols: HashSet<[u8; 32]>, attempt_manager: AttemptManager, } @@ -63,7 +66,6 @@ impl BatchSignerTask { active_signing_protocols.insert(id); let batch = Batches::get(&db, id).unwrap(); - assert_eq!(batch.id, id); let mut machines = Vec::with_capacity(keys.len()); for keys in &keys { @@ -90,19 +92,21 @@ impl ContinuallyRan for BatchSignerTask { iterated = true; // Save this to the database as a transaction to sign - self.active_signing_protocols.insert(batch.id); + let batch_hash = <[u8; 32]>::from(Blake2b::::digest(batch.encode())); + self.active_signing_protocols.insert(batch_hash); ActiveSigningProtocols::set( &mut txn, self.session, &self.active_signing_protocols.iter().copied().collect(), ); - Batches::set(&mut txn, batch.id, &batch); + BatchHash::set(&mut txn, batch.id, &batch_hash); + Batches::set(&mut txn, batch_hash, &batch); let mut machines = Vec::with_capacity(self.keys.len()); for keys in &self.keys { machines.push(WrappedSchnorrkelMachine::new(keys.clone(), batch_message(&batch))); } - for msg in self.attempt_manager.register(VariantSignId::Batch(batch.id), machines) { + for msg in self.attempt_manager.register(VariantSignId::Batch(batch_hash), machines) { BatchSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); } @@ -112,48 +116,57 @@ impl ContinuallyRan for BatchSignerTask { // Check for acknowledged Batches (meaning we should no longer sign for these Batches) loop { let mut txn = self.db.txn(); - let Some(id) = AcknowledgedBatches::try_recv(&mut txn, &self.external_key) else { - break; - }; + let batch_hash = { + let Some(batch_id) = AcknowledgedBatches::try_recv(&mut txn, &self.external_key) else { + break; + }; + /* + We may have yet to register this signing protocol. + + While `BatchesToSign` is populated before `AcknowledgedBatches`, we could theoretically + have `BatchesToSign` populated with a new batch _while iterating over + `AcknowledgedBatches`_, and then have `AcknowledgedBatched` populated. In that edge + case, we will see the acknowledgement notification before we see the transaction. + + In such a case, we break (dropping the txn, re-queueing the acknowledgement + notification). On the task's next iteration, we'll process the Batch from + `BatchesToSign` and be able to make progress. 
+ */ + let Some(batch_hash) = BatchHash::take(&mut txn, batch_id) else { + drop(txn); + break; + }; + batch_hash + }; + let batch = + Batches::take(&mut txn, batch_hash).expect("BatchHash populated but not Batches"); + + iterated = true; + + // Update the last acknowledged Batch { let last_acknowledged = LastAcknowledgedBatch::get(&txn); - if Some(id) > last_acknowledged { - LastAcknowledgedBatch::set(&mut txn, &id); + if Some(batch.id) > last_acknowledged { + LastAcknowledgedBatch::set(&mut txn, &batch.id); } } - /* - We may have yet to register this signing protocol. - - While `BatchesToSign` is populated before `AcknowledgedBatches`, we could theoretically - have `BatchesToSign` populated with a new batch _while iterating over - `AcknowledgedBatches`_, and then have `AcknowledgedBatched` populated. In that edge case, - we will see the acknowledgement notification before we see the transaction. - - In such a case, we break (dropping the txn, re-queueing the acknowledgement notification). - On the task's next iteration, we'll process the Batch from `BatchesToSign` and be - able to make progress. - */ - if !self.active_signing_protocols.remove(&id) { - break; - } - iterated = true; - - // Since it was, remove this as an active signing protocol + // Remove this as an active signing protocol + assert!(self.active_signing_protocols.remove(&batch_hash)); ActiveSigningProtocols::set( &mut txn, self.session, &self.active_signing_protocols.iter().copied().collect(), ); - // Clean up the database - Batches::del(&mut txn, id); - SignedBatches::del(&mut txn, id); + + // Clean up SignedBatches + SignedBatches::del(&mut txn, batch.id); // We retire with a txn so we either successfully flag this Batch as acknowledged, and // won't re-register it (making this retire safe), or we don't flag it, meaning we will // re-register it, yet that's safe as we have yet to retire it - self.attempt_manager.retire(&mut txn, VariantSignId::Batch(id)); + self.attempt_manager.retire(&mut txn, VariantSignId::Batch(batch_hash)); txn.commit(); } diff --git a/processor/signers/src/coordinator/mod.rs b/processor/signers/src/coordinator/mod.rs index b57742a5..319f098c 100644 --- a/processor/signers/src/coordinator/mod.rs +++ b/processor/signers/src/coordinator/mod.rs @@ -143,7 +143,7 @@ impl ContinuallyRan for CoordinatorTask { // the prior Batch(es) (and accordingly didn't publish them) let last_batch = crate::batch::last_acknowledged_batch(&txn).max(db::LastPublishedBatch::get(&txn)); - let mut next_batch = last_batch.map_or(0, |id| id + 1); + let mut next_batch = last_batch.map(|id| id + 1).unwrap_or(0); while let Some(batch) = crate::batch::signed_batch(&txn, next_batch) { iterated = true; db::LastPublishedBatch::set(&mut txn, &batch.batch.id); From 052388285b0b78f00dfc3ddc3430faf87d9fec03 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 7 Jan 2025 15:26:41 -0500 Subject: [PATCH 252/368] Remove TaskHandle::close TaskHandle::close meant run_now may panic if the task was closed. Now, tasks are only closed when all handles are dropped, causing all handles to point to running tasks (ensuring run_now won't panic). 
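For illustration only, and not this crate's API: a minimal sketch (assuming tokio) of the
close-on-drop mechanism described above. Handles hold a `Sender` they never send on, and
`recv` yielding `None` tells the task every handle was dropped:

use tokio::sync::mpsc;

#[derive(Clone)]
struct Handle {
  // Never sent on; exists solely so the task can observe when all handles
  // (all clones of this Sender) have been dropped
  _close: mpsc::Sender<()>,
}

async fn run_task(mut close: mpsc::Receiver<()>) {
  loop {
    tokio::select! {
      // `recv` yields `None` once every `Sender` clone is gone
      msg = close.recv() => {
        if msg.is_none() { break }
      }
      // The task's actual work (run_now, the iteration timer) would be
      // further branches here
      () = tokio::time::sleep(core::time::Duration::from_secs(5)) => {}
    }
  }
}

#[tokio::main]
async fn main() {
  let (close_send, close_recv) = mpsc::channel(1);
  let handle = Handle { _close: close_send };
  let task = tokio::spawn(run_task(close_recv));
  drop(handle); // dropping the last handle lets the task exit
  task.await.unwrap();
}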
--- common/task/src/lib.rs | 49 +++++++++++------------------------------- 1 file changed, 13 insertions(+), 36 deletions(-) diff --git a/common/task/src/lib.rs b/common/task/src/lib.rs index b5523cda..2a061c10 100644 --- a/common/task/src/lib.rs +++ b/common/task/src/lib.rs @@ -3,27 +3,29 @@ #![deny(missing_docs)] use core::{future::Future, time::Duration}; -use std::sync::Arc; -use tokio::sync::{mpsc, oneshot, Mutex}; - -enum Closed { - NotClosed(Option>), - Closed, -} +use tokio::sync::mpsc; /// A handle for a task. +/// +/// The task will only stop running once all handles for it are dropped. +// +// `run_now` isn't infallible if the task may have been closed. `run_now` on a closed task would +// either need to panic (historic behavior), silently drop the fact the task can't be run, or +// return an error. Instead of having a potential panic, and instead of modeling the error +// behavior, this task can't be closed unless all handles are dropped, ensuring calls to `run_now` +// are infallible. #[derive(Clone)] pub struct TaskHandle { run_now: mpsc::Sender<()>, + #[allow(dead_code)] // This is used to track if all handles have been dropped close: mpsc::Sender<()>, - closed: Arc>, } + /// A task's internal structures. pub struct Task { run_now: mpsc::Receiver<()>, close: mpsc::Receiver<()>, - closed: oneshot::Sender<()>, } impl Task { @@ -34,14 +36,9 @@ impl Task { let (run_now_send, run_now_recv) = mpsc::channel(1); // And any call to close satisfies all calls to close let (close_send, close_recv) = mpsc::channel(1); - let (closed_send, closed_recv) = oneshot::channel(); ( - Self { run_now: run_now_recv, close: close_recv, closed: closed_send }, - TaskHandle { - run_now: run_now_send, - close: close_send, - closed: Arc::new(Mutex::new(Closed::NotClosed(Some(closed_recv)))), - }, + Self { run_now: run_now_recv, close: close_recv }, + TaskHandle { run_now: run_now_send, close: close_send }, ) } } @@ -61,24 +58,6 @@ impl TaskHandle { } } } - - /// Close the task. - /// - /// Returns once the task shuts down after it finishes its current iteration (which may be of - /// unbounded time). - pub async fn close(self) { - // If another instance of the handle called tfhis, don't error - let _ = self.close.send(()).await; - // Wait until we receive the closed message - let mut closed = self.closed.lock().await; - match &mut *closed { - Closed::NotClosed(ref mut recv) => { - assert_eq!(recv.take().unwrap().await, Ok(()), "continually ran task dropped itself?"); - *closed = Closed::Closed; - } - Closed::Closed => {} - } - } } /// A task to be continually ran. @@ -152,8 +131,6 @@ pub trait ContinuallyRan: Sized + Send { }, } } - - task.closed.send(()).unwrap(); } } } From 82e753db30aa4a2f940a314a5530d4967b0a2413 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 7 Jan 2025 15:35:34 -0500 Subject: [PATCH 253/368] Document risk of eclipse in the dial task --- coordinator/src/p2p/dial.rs | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/coordinator/src/p2p/dial.rs b/coordinator/src/p2p/dial.rs index 13d7f7ff..6fc6cb50 100644 --- a/coordinator/src/p2p/dial.rs +++ b/coordinator/src/p2p/dial.rs @@ -17,6 +17,16 @@ use serai_task::ContinuallyRan; use crate::p2p::{PORT, Peers, validators::Validators}; const TARGET_PEERS_PER_NETWORK: usize = 5; +/* + If we only tracked the target amount of peers per network, we'd risk being eclipsed by an + adversary who immediately connects to us with their array of validators upon our boot. 
Their + array would satisfy our target amount of peers, so we'd never seek more, enabling the adversary + to be the only entity we peered with. + + We solve this by additionally requiring an explicit amount of peers we dialed. That means we + randomly chose to connect to these peers. +*/ +// TODO const TARGET_DIALED_PEERS_PER_NETWORK: usize = 3; struct DialTask { serai: Serai, From d9e9887d34b3ac5dd5ac204ecef21b38893cf743 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 7 Jan 2025 15:36:06 -0500 Subject: [PATCH 254/368] Run the dial task whenever we have a peer disconnect --- coordinator/src/p2p/mod.rs | 24 +++++++++++++++++++++++- coordinator/src/p2p/validators.rs | 3 +++ 2 files changed, 26 insertions(+), 1 deletion(-) diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index 55cc311b..55c14cdb 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -11,6 +11,8 @@ use serai_client::primitives::{NetworkId, PublicKey}; use tokio::sync::{mpsc, RwLock}; use serai_db::Db; +use serai_task::TaskHandle; + use serai_cosign::Cosigning; use futures_util::StreamExt; @@ -82,7 +84,9 @@ struct Behavior { } struct SwarmTask { + dial_task: TaskHandle, to_dial: mpsc::UnboundedReceiver, + last_dial_task_run: Instant, validators: Arc>, last_refreshed_validators: Instant, @@ -232,7 +236,25 @@ impl SwarmTask { .or_insert_with(HashSet::new) .remove(&peer_id); } - // TODO: dial_task.run_now() if haven't in past minute + + /* + We want to re-run the dial task, since we lost a peer, in case we should find new + peers. This opens a DoS where a validator repeatedly opens/closes connections to + force iterations of the dial task. We prevent this by setting a minimum distance + since the last explicit iteration. + + This is suboptimal. If we have several disconnects in immediate proximity, we'll + trigger the dial task upon the first (where we may still have enough peers we + shouldn't dial more) but not the last (where we may have so few peers left we + should dial more). This is accepted as the dial task will eventually run on its + natural timer. + */ + const MINIMUM_TIME_SINCE_LAST_EXPLICIT_DIAL: Duration = Duration::from_secs(60); + let now = Instant::now(); + if (self.last_dial_task_run + MINIMUM_TIME_SINCE_LAST_EXPLICIT_DIAL) < now { + self.dial_task.run_now(); + self.last_dial_task_run = now; + } }, // We don't handle any of these diff --git a/coordinator/src/p2p/validators.rs b/coordinator/src/p2p/validators.rs index c3d3b2ba..95cbe8b1 100644 --- a/coordinator/src/p2p/validators.rs +++ b/coordinator/src/p2p/validators.rs @@ -128,6 +128,9 @@ impl Validators { /// Update the view of the validators. /// +/// This minimizes the time an exclusive lock is held over the validators to minimize the +/// disruption to functioning. +/// /// Returns all validators removed from the active validator set. 
pub(crate) async fn update_shared_validators( validators: &Arc>, From f55165e01607f6ea71047b7118342d7ce0a10558 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 7 Jan 2025 15:51:15 -0500 Subject: [PATCH 255/368] Add channels to send requests/recv responses --- coordinator/src/p2p/mod.rs | 28 +++++++++++++++++++++++++--- 1 file changed, 25 insertions(+), 3 deletions(-) diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index 55c14cdb..cedaed3e 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -8,7 +8,7 @@ use borsh::BorshDeserialize; use serai_client::primitives::{NetworkId, PublicKey}; -use tokio::sync::{mpsc, RwLock}; +use tokio::sync::{mpsc, oneshot, RwLock}; use serai_db::Db; use serai_task::TaskHandle; @@ -19,6 +19,7 @@ use futures_util::StreamExt; use libp2p::{ multihash::Multihash, identity::PeerId, + request_response::RequestId, swarm::{dial_opts::DialOpts, NetworkBehaviour, SwarmEvent, Swarm}, }; @@ -97,6 +98,9 @@ struct SwarmTask { db: D, swarm: Swarm, + + request_recv: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender>)>, + request_resp: HashMap>>, } impl SwarmTask { @@ -163,6 +167,13 @@ impl SwarmTask { let _: Result<_, _> = self.swarm.dial(dial_opts); } + request = self.request_recv.recv() => { + let (peer, request, response_channel) = + request.expect("channel for requests was closed?"); + let request_id = self.swarm.behaviour_mut().reqres.send_request(&peer, request); + self.request_resp.insert(request_id, response_channel); + } + // Handle swarm events event = self.swarm.next() => { // `Swarm::next` will never return `Poll::Ready(None)` @@ -178,6 +189,7 @@ impl SwarmTask { reqres::Request::KeepAlive => {}, reqres::Request::Heartbeat { set, latest_block_hash } => todo!("TODO"), reqres::Request::NotableCosigns { global_session } => { + // TODO: Move this out let cosigns = Cosigning::::notable_cosigns(&self.db, global_session); let res = reqres::Response::NotableCosigns(cosigns); let _: Result<_, _> = @@ -185,9 +197,19 @@ impl SwarmTask { }, } } - reqres::Message::Response { request_id, response } => todo!("TODO"), + reqres::Message::Response { request_id, response } => { + // Send Some(response) as the response for the request + if let Some(channel) = self.request_resp.remove(&request_id) { + let _: Result<_, _> = channel.send(Some(response)); + } + }, } - reqres::Event::OutboundFailure { request_id, .. } => todo!("TODO"), + reqres::Event::OutboundFailure { request_id, .. } => { + // Send None as the response for the request + if let Some(channel) = self.request_resp.remove(&request_id) { + let _: Result<_, _> = channel.send(None); + } + }, reqres::Event::InboundFailure { .. } | reqres::Event::ResponseSent { .. 
} => {}, }, SwarmEvent::Behaviour(BehaviorEvent::Gossip(event)) => match event { From f27e4e320262a4d431150163339f1d29c5b30d02 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 7 Jan 2025 16:34:19 -0500 Subject: [PATCH 256/368] Move the WIP SwarmTask to its own file --- coordinator/src/p2p/gossip.rs | 13 +- coordinator/src/p2p/mod.rs | 234 ++-------------------------- coordinator/src/p2p/reqres.rs | 8 +- coordinator/src/p2p/swarm.rs | 247 ++++++++++++++++++++++++++++++ coordinator/src/p2p/validators.rs | 23 ++- 5 files changed, 291 insertions(+), 234 deletions(-) create mode 100644 coordinator/src/p2p/swarm.rs diff --git a/coordinator/src/p2p/gossip.rs b/coordinator/src/p2p/gossip.rs index dc6a5849..7f5a078c 100644 --- a/coordinator/src/p2p/gossip.rs +++ b/coordinator/src/p2p/gossip.rs @@ -7,8 +7,8 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_client::validator_sets::primitives::ValidatorSet; use libp2p::gossipsub::{ - IdentTopic, MessageId, MessageAuthenticity, ValidationMode, ConfigBuilder, IdentityTransform, - AllowAllSubscriptionFilter, Behaviour, + TopicHash, IdentTopic, MessageId, MessageAuthenticity, ValidationMode, ConfigBuilder, + IdentityTransform, AllowAllSubscriptionFilter, Behaviour, }; pub use libp2p::gossipsub::Event; @@ -32,6 +32,15 @@ pub(crate) enum Message { Cosign(SignedCosign), } +impl Message { + pub(crate) fn topic(&self) -> TopicHash { + match self { + Message::Tributary { set, .. } => topic_for_set(*set).hash(), + Message::Cosign(_) => IdentTopic::new(BASE_TOPIC).hash(), + } + } +} + pub(crate) type Behavior = Behaviour; pub(crate) fn new_behavior() -> Behavior { diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index cedaed3e..7ccb46a3 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -1,27 +1,16 @@ +use core::future::Future; use std::{ sync::Arc, collections::{HashSet, HashMap}, - time::{Duration, Instant}, }; -use borsh::BorshDeserialize; - use serai_client::primitives::{NetworkId, PublicKey}; -use tokio::sync::{mpsc, oneshot, RwLock}; +use tokio::sync::RwLock; -use serai_db::Db; -use serai_task::TaskHandle; +use serai_task::ContinuallyRan; -use serai_cosign::Cosigning; - -use futures_util::StreamExt; -use libp2p::{ - multihash::Multihash, - identity::PeerId, - request_response::RequestId, - swarm::{dial_opts::DialOpts, NetworkBehaviour, SwarmEvent, Swarm}, -}; +use libp2p::{multihash::Multihash, identity::PeerId, swarm::NetworkBehaviour}; /// A struct to sync the validators from the Serai node in order to keep track of them. 
mod validators; @@ -43,6 +32,9 @@ mod gossip; /// The heartbeat task, effecting sync of Tributaries mod heartbeat; +/// The swarm task, running it and dispatching to/from it +mod swarm; + const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o') fn peer_id_from_public(public: PublicKey) -> PeerId { @@ -84,213 +76,19 @@ struct Behavior { gossip: gossip::Behavior, } -struct SwarmTask { - dial_task: TaskHandle, - to_dial: mpsc::UnboundedReceiver, - last_dial_task_run: Instant, - +struct UpdateSharedValidatorsTask { validators: Arc>, - last_refreshed_validators: Instant, - next_refresh_validators: Instant, - - peers: Peers, - rebuild_peers_at: Instant, - - db: D, - swarm: Swarm, - - request_recv: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender>)>, - request_resp: HashMap>>, } -impl SwarmTask { - async fn run(mut self) { - loop { - let time_till_refresh_validators = - self.next_refresh_validators.saturating_duration_since(Instant::now()); - let time_till_rebuild_peers = self.rebuild_peers_at.saturating_duration_since(Instant::now()); +impl ContinuallyRan for UpdateSharedValidatorsTask { + // Only run every minute, not the default of every five seconds + const DELAY_BETWEEN_ITERATIONS: u64 = 60; + const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60; - tokio::select! { - biased; - - // Refresh the instance of validators we use to track peers/share with authenticate - // TODO: Move this to a task - () = tokio::time::sleep(time_till_refresh_validators) => { - const TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(60); - const MAX_TIME_BETWEEN_REFRESH_VALIDATORS: Duration = Duration::from_secs(5 * 60); - - let update = update_shared_validators(&self.validators).await; - match update { - Ok(removed) => { - for removed in removed { - let _: Result<_, _> = self.swarm.disconnect_peer_id(removed); - } - self.last_refreshed_validators = Instant::now(); - self.next_refresh_validators = Instant::now() + TIME_BETWEEN_REFRESH_VALIDATORS; - } - Err(e) => { - log::warn!("couldn't refresh validators: {e:?}"); - // Increase the delay before the next refresh by using the time since the last - // refresh. This will be 5 seconds, then 5 seconds, then 10 seconds, then 20... 
- let time_since_last = self - .next_refresh_validators - .saturating_duration_since(self.last_refreshed_validators); - // But limit the delay - self.next_refresh_validators = - Instant::now() + time_since_last.min(MAX_TIME_BETWEEN_REFRESH_VALIDATORS); - }, - } - } - - // Rebuild the peers every 10 minutes - // - // This handles edge cases such as when a validator changes the networks they're present - // in, race conditions, or any other edge cases/quirks which would otherwise risk spiraling - // out of control - () = tokio::time::sleep(time_till_rebuild_peers) => { - const TIME_BETWEEN_REBUILD_PEERS: Duration = Duration::from_secs(10 * 60); - - let validators_by_network = self.validators.read().await.by_network().clone(); - let connected = self.swarm.connected_peers().copied().collect::>(); - let mut peers = HashMap::new(); - for (network, validators) in validators_by_network { - peers.insert(network, validators.intersection(&connected).copied().collect()); - } - *self.peers.peers.write().await = peers; - - self.rebuild_peers_at = Instant::now() + TIME_BETWEEN_REBUILD_PEERS; - } - - // Dial peers we're instructed to - dial_opts = self.to_dial.recv() => { - let dial_opts = dial_opts.expect("DialTask was closed?"); - let _: Result<_, _> = self.swarm.dial(dial_opts); - } - - request = self.request_recv.recv() => { - let (peer, request, response_channel) = - request.expect("channel for requests was closed?"); - let request_id = self.swarm.behaviour_mut().reqres.send_request(&peer, request); - self.request_resp.insert(request_id, response_channel); - } - - // Handle swarm events - event = self.swarm.next() => { - // `Swarm::next` will never return `Poll::Ready(None)` - // https://docs.rs/ - // libp2p/0.54.1/libp2p/struct.Swarm.html#impl-Stream-for-Swarm%3CTBehaviour%3E - let event = event.unwrap(); - match event { - SwarmEvent::Behaviour(BehaviorEvent::Reqres(event)) => match event { - reqres::Event::Message { message, .. } => match message { - reqres::Message::Request { request_id: _, request, channel } => { - match request { - // TODO: Send these - reqres::Request::KeepAlive => {}, - reqres::Request::Heartbeat { set, latest_block_hash } => todo!("TODO"), - reqres::Request::NotableCosigns { global_session } => { - // TODO: Move this out - let cosigns = Cosigning::::notable_cosigns(&self.db, global_session); - let res = reqres::Response::NotableCosigns(cosigns); - let _: Result<_, _> = - self.swarm.behaviour_mut().reqres.send_response(channel, res); - }, - } - } - reqres::Message::Response { request_id, response } => { - // Send Some(response) as the response for the request - if let Some(channel) = self.request_resp.remove(&request_id) { - let _: Result<_, _> = channel.send(Some(response)); - } - }, - } - reqres::Event::OutboundFailure { request_id, .. } => { - // Send None as the response for the request - if let Some(channel) = self.request_resp.remove(&request_id) { - let _: Result<_, _> = channel.send(None); - } - }, - reqres::Event::InboundFailure { .. } | reqres::Event::ResponseSent { .. } => {}, - }, - SwarmEvent::Behaviour(BehaviorEvent::Gossip(event)) => match event { - gossip::Event::Message { message, .. } => { - let Ok(message) = gossip::Message::deserialize(&mut message.data.as_slice()) else { - continue - }; - match message { - gossip::Message::Tributary { set, message } => todo!("TODO"), - gossip::Message::Cosign(signed_cosign) => todo!("TODO"), - } - } - gossip::Event::Subscribed { .. } | gossip::Event::Unsubscribed { .. 
} => {}, - gossip::Event::GossipsubNotSupported { peer_id } => { - let _: Result<_, _> = self.swarm.disconnect_peer_id(peer_id); - } - }, - - // New connection, so update peers - SwarmEvent::ConnectionEstablished { peer_id, .. } => { - let Some(networks) = - self.validators.read().await.networks(&peer_id).cloned() else { continue }; - for network in networks { - self - .peers - .peers - .write() - .await - .entry(network) - .or_insert_with(HashSet::new) - .insert(peer_id); - } - }, - - // Connection closed, so update peers - SwarmEvent::ConnectionClosed { peer_id, .. } => { - let Some(networks) = - self.validators.read().await.networks(&peer_id).cloned() else { continue }; - for network in networks { - self - .peers - .peers - .write() - .await - .entry(network) - .or_insert_with(HashSet::new) - .remove(&peer_id); - } - - /* - We want to re-run the dial task, since we lost a peer, in case we should find new - peers. This opens a DoS where a validator repeatedly opens/closes connections to - force iterations of the dial task. We prevent this by setting a minimum distance - since the last explicit iteration. - - This is suboptimal. If we have several disconnects in immediate proximity, we'll - trigger the dial task upon the first (where we may still have enough peers we - shouldn't dial more) but not the last (where we may have so few peers left we - should dial more). This is accepted as the dial task will eventually run on its - natural timer. - */ - const MINIMUM_TIME_SINCE_LAST_EXPLICIT_DIAL: Duration = Duration::from_secs(60); - let now = Instant::now(); - if (self.last_dial_task_run + MINIMUM_TIME_SINCE_LAST_EXPLICIT_DIAL) < now { - self.dial_task.run_now(); - self.last_dial_task_run = now; - } - }, - - // We don't handle any of these - SwarmEvent::IncomingConnection { .. } | - SwarmEvent::IncomingConnectionError { .. } | - SwarmEvent::OutgoingConnectionError { .. } | - SwarmEvent::NewListenAddr { .. } | - SwarmEvent::ExpiredListenAddr { .. } | - SwarmEvent::ListenerClosed { .. } | - SwarmEvent::ListenerError { .. } | - SwarmEvent::Dialing { .. 
} => {} - } - } - } + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + update_shared_validators(&self.validators).await.map_err(|e| format!("{e:?}"))?; + Ok(true) } } } diff --git a/coordinator/src/p2p/reqres.rs b/coordinator/src/p2p/reqres.rs index 0793e839..ad9075d7 100644 --- a/coordinator/src/p2p/reqres.rs +++ b/coordinator/src/p2p/reqres.rs @@ -8,8 +8,10 @@ use serai_client::validator_sets::primitives::ValidatorSet; use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt}; -use libp2p::request_response::{self, Codec as CodecTrait, Config, Behaviour, ProtocolSupport}; -pub use request_response::{Message, Event}; +use libp2p::request_response::{ + self, Codec as CodecTrait, Event as GenericEvent, Config, Behaviour, ProtocolSupport, +}; +pub use request_response::Message; use serai_cosign::SignedCosign; @@ -128,6 +130,8 @@ impl CodecTrait for Codec { } } +pub(crate) type Event = GenericEvent; + pub(crate) type Behavior = Behaviour; pub(crate) fn new_behavior() -> Behavior { let mut config = Config::default(); diff --git a/coordinator/src/p2p/swarm.rs b/coordinator/src/p2p/swarm.rs new file mode 100644 index 00000000..8aab3d90 --- /dev/null +++ b/coordinator/src/p2p/swarm.rs @@ -0,0 +1,247 @@ +use std::{ + sync::Arc, + collections::{HashSet, HashMap}, + time::{Duration, Instant}, +}; + +use borsh::BorshDeserialize; + +use tokio::sync::{mpsc, oneshot, RwLock}; + +use serai_db::Db; +use serai_task::TaskHandle; + +use serai_cosign::Cosigning; + +use futures_util::StreamExt; +use libp2p::{ + identity::PeerId, + request_response::RequestId, + swarm::{dial_opts::DialOpts, SwarmEvent, Swarm}, +}; + +use crate::p2p::{ + Peers, BehaviorEvent, Behavior, + validators::Validators, + reqres::{self, Request, Response}, + gossip, +}; + +/* + `SwarmTask` handles everything we need the `Swarm` object for. The goal is to minimize the + contention on this task. Unfortunately, the `Swarm` object itself is needed for a variety of + purposes making this a rather large task. + + Responsibilities include: + - Actually dialing new peers (the selection process occurs in another task) + - Maintaining the peers structure (as we need the Swarm object to see who our peers are) + - Gossiping messages + - Dispatching gossiped messages + - Sending requests + - Dispatching responses to requests + - Dispatching received requests + - Sending responses +*/ +struct SwarmTask { + dial_task: TaskHandle, + to_dial: mpsc::UnboundedReceiver, + last_dial_task_run: Instant, + + validators: Arc>, + + peers: Peers, + rebuild_peers_at: Instant, + + db: D, + swarm: Swarm, + + gossip: mpsc::UnboundedReceiver, + + outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender>)>, + outbound_requests_responses: HashMap>>, +} + +impl SwarmTask { + fn handle_reqres(&mut self, event: reqres::Event) { + match event { + reqres::Event::Message { message, .. 
} => match message { + reqres::Message::Request { request_id: _, request, channel } => { + match request { + // TODO: Send these + reqres::Request::KeepAlive => {} + reqres::Request::Heartbeat { set, latest_block_hash } => todo!("TODO"), + reqres::Request::NotableCosigns { global_session } => { + // TODO: Move this out + let cosigns = Cosigning::::notable_cosigns(&self.db, global_session); + let res = reqres::Response::NotableCosigns(cosigns); + let _: Result<_, _> = self.swarm.behaviour_mut().reqres.send_response(channel, res); + } + } + } + reqres::Message::Response { request_id, response } => { + // Send Some(response) as the response for the request + if let Some(channel) = self.outbound_requests_responses.remove(&request_id) { + let _: Result<_, _> = channel.send(Some(response)); + } + } + }, + reqres::Event::OutboundFailure { request_id, .. } => { + // Send None as the response for the request + if let Some(channel) = self.outbound_requests_responses.remove(&request_id) { + let _: Result<_, _> = channel.send(None); + } + } + reqres::Event::InboundFailure { .. } | reqres::Event::ResponseSent { .. } => {} + } + } + + fn handle_gossip(&mut self, event: gossip::Event) { + match event { + gossip::Event::Message { message, .. } => { + let Ok(message) = gossip::Message::deserialize(&mut message.data.as_slice()) else { + // TODO: Penalize the PeerId which sent this message + return; + }; + match message { + gossip::Message::Tributary { set, message } => todo!("TODO"), + gossip::Message::Cosign(signed_cosign) => todo!("TODO"), + } + } + gossip::Event::Subscribed { .. } | gossip::Event::Unsubscribed { .. } => {} + gossip::Event::GossipsubNotSupported { peer_id } => { + let _: Result<_, _> = self.swarm.disconnect_peer_id(peer_id); + } + } + } + + async fn run(mut self) { + loop { + let time_till_rebuild_peers = self.rebuild_peers_at.saturating_duration_since(Instant::now()); + + tokio::select! { + // Dial peers we're instructed to + dial_opts = self.to_dial.recv() => { + let dial_opts = dial_opts.expect("DialTask was closed?"); + let _: Result<_, _> = self.swarm.dial(dial_opts); + } + + /* + Rebuild the peers every 10 minutes. + + This protects against any race conditions/edge cases we have in our logic to track peers, + along with unrepresented behavior such as when a peer changes the networks they're active + in. This lets the peer tracking logic simply be 'good enough' to not become horribly + corrupt over the span of `TIME_BETWEEN_REBUILD_PEERS`. + + We also use this to disconnect all peers who are no longer active in any network. 
+ */ + () = tokio::time::sleep(time_till_rebuild_peers) => { + const TIME_BETWEEN_REBUILD_PEERS: Duration = Duration::from_secs(10 * 60); + + let validators_by_network = self.validators.read().await.by_network().clone(); + let connected_peers = self.swarm.connected_peers().copied().collect::>(); + + // We initially populate the list of peers to disconnect with all peers + let mut to_disconnect = connected_peers.clone(); + + // Build the new peers object + let mut peers = HashMap::new(); + for (network, validators) in validators_by_network { + peers.insert(network, validators.intersection(&connected_peers).copied().collect()); + + // If this peer is in this validator set, don't keep it flagged for disconnection + to_disconnect.retain(|peer| !validators.contains(peer)); + } + + // Write the new peers object + *self.peers.peers.write().await = peers; + self.rebuild_peers_at = Instant::now() + TIME_BETWEEN_REBUILD_PEERS; + + // Disconnect all peers marked for disconnection + for peer in to_disconnect { + let _: Result<_, _> = self.swarm.disconnect_peer_id(peer); + } + } + + // Handle swarm events + event = self.swarm.next() => { + // `Swarm::next` will never return `Poll::Ready(None)` + // https://docs.rs/ + // libp2p/0.54.1/libp2p/struct.Swarm.html#impl-Stream-for-Swarm%3CTBehaviour%3E + let event = event.unwrap(); + match event { + // New connection, so update peers + SwarmEvent::ConnectionEstablished { peer_id, .. } => { + let Some(networks) = + self.validators.read().await.networks(&peer_id).cloned() else { continue }; + let mut peers = self.peers.peers.write().await; + for network in networks { + peers.entry(network).or_insert_with(HashSet::new).insert(peer_id); + } + } + + // Connection closed, so update peers + SwarmEvent::ConnectionClosed { peer_id, .. } => { + let Some(networks) = + self.validators.read().await.networks(&peer_id).cloned() else { continue }; + let mut peers = self.peers.peers.write().await; + for network in networks { + peers.entry(network).or_insert_with(HashSet::new).remove(&peer_id); + } + + /* + We want to re-run the dial task, since we lost a peer, in case we should find new + peers. This opens a DoS where a validator repeatedly opens/closes connections to + force iterations of the dial task. We prevent this by setting a minimum distance + since the last explicit iteration. + + This is suboptimal. If we have several disconnects in immediate proximity, we'll + trigger the dial task upon the first (where we may still have enough peers we + shouldn't dial more) but not the last (where we may have so few peers left we + should dial more). This is accepted as the dial task will eventually run on its + natural timer. + */ + const MINIMUM_TIME_SINCE_LAST_EXPLICIT_DIAL: Duration = Duration::from_secs(60); + let now = Instant::now(); + if (self.last_dial_task_run + MINIMUM_TIME_SINCE_LAST_EXPLICIT_DIAL) < now { + self.dial_task.run_now(); + self.last_dial_task_run = now; + } + } + + SwarmEvent::Behaviour(BehaviorEvent::Reqres(event)) => { + self.handle_reqres(event) + } + SwarmEvent::Behaviour(BehaviorEvent::Gossip(event)) => { + self.handle_gossip(event) + } + + // We don't handle any of these + SwarmEvent::IncomingConnection { .. } | + SwarmEvent::IncomingConnectionError { .. } | + SwarmEvent::OutgoingConnectionError { .. } | + SwarmEvent::NewListenAddr { .. } | + SwarmEvent::ExpiredListenAddr { .. } | + SwarmEvent::ListenerClosed { .. } | + SwarmEvent::ListenerError { .. } | + SwarmEvent::Dialing { .. 
} => {} + } + } + + request = self.outbound_requests.recv() => { + let (peer, request, response_channel) = + request.expect("channel for requests was closed?"); + let request_id = self.swarm.behaviour_mut().reqres.send_request(&peer, request); + self.outbound_requests_responses.insert(request_id, response_channel); + } + + message = self.gossip.recv() => { + let message = message.expect("channel for messages to gossip was closed?"); + let topic = message.topic(); + let message = borsh::to_vec(&message).unwrap(); + let _: Result<_, _> = self.swarm.behaviour_mut().gossip.publish(topic, message); + } + } + } + } +} diff --git a/coordinator/src/p2p/validators.rs b/coordinator/src/p2p/validators.rs index 95cbe8b1..4b3d3870 100644 --- a/coordinator/src/p2p/validators.rs +++ b/coordinator/src/p2p/validators.rs @@ -77,17 +77,16 @@ impl Validators { fn incorporate_session_changes( &mut self, session_changes: Vec<(NetworkId, Session, HashSet)>, - ) -> HashSet { - let mut removed = HashSet::new(); - + ) { for (network, session, validators) in session_changes { // Remove the existing validators for validator in self.by_network.remove(&network).unwrap_or_else(HashSet::new) { + // Get all networks this validator is in let mut networks = self.validators.remove(&validator).unwrap(); + // Remove this one networks.remove(&network); - if networks.is_empty() { - removed.insert(validator); - } else { + // Insert the networks back if the validator was present in other networks + if !networks.is_empty() { self.validators.insert(validator, networks); } } @@ -101,16 +100,15 @@ impl Validators { // Update the session we have populated self.sessions.insert(network, session); } - - removed } /// Update the view of the validators. /// /// Returns all validators removed from the active validator set. - pub(crate) async fn update(&mut self) -> Result, String> { + pub(crate) async fn update(&mut self) -> Result<(), String> { let session_changes = Self::session_changes(&self.serai, &self.sessions).await?; - Ok(self.incorporate_session_changes(session_changes)) + self.incorporate_session_changes(session_changes); + Ok(()) } pub(crate) fn by_network(&self) -> &HashMap> { @@ -134,10 +132,11 @@ impl Validators { /// Returns all validators removed from the active validator set. pub(crate) async fn update_shared_validators( validators: &Arc>, -) -> Result, String> { +) -> Result<(), String> { let session_changes = { let validators = validators.read().await; Validators::session_changes(validators.serai.clone(), validators.sessions.clone()).await? }; - Ok(validators.write().await.incorporate_session_changes(session_changes)) + validators.write().await.incorporate_session_changes(session_changes); + Ok(()) } From a731c0005d39ffe1b55c6926443b7be59c17c9bd Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 7 Jan 2025 16:51:56 -0500 Subject: [PATCH 257/368] Finish routing our own channel abstraction around the Swarm event stream --- coordinator/src/p2p/reqres.rs | 13 ++-- coordinator/src/p2p/swarm.rs | 129 +++++++++++++++++++++------------- 2 files changed, 87 insertions(+), 55 deletions(-) diff --git a/coordinator/src/p2p/reqres.rs b/coordinator/src/p2p/reqres.rs index ad9075d7..fd6efcbd 100644 --- a/coordinator/src/p2p/reqres.rs +++ b/coordinator/src/p2p/reqres.rs @@ -46,16 +46,19 @@ pub(crate) struct TributaryBlockWithCommit { /// Responses which can be received via the request-response protocol. 
#[derive(Clone, BorshSerialize, BorshDeserialize)] pub(crate) enum Response { + NoResponse, Blocks(Vec), NotableCosigns(Vec), } impl fmt::Debug for Response { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { - (match self { - Response::Blocks(_) => fmt.debug_struct("Response::Block"), - Response::NotableCosigns(_) => fmt.debug_struct("Response::NotableCosigns"), - }) - .finish_non_exhaustive() + match self { + Response::NoResponse => fmt.debug_struct("Response::NoResponse").finish(), + Response::Blocks(_) => fmt.debug_struct("Response::Block").finish_non_exhaustive(), + Response::NotableCosigns(_) => { + fmt.debug_struct("Response::NotableCosigns").finish_non_exhaustive() + } + } } } diff --git a/coordinator/src/p2p/swarm.rs b/coordinator/src/p2p/swarm.rs index 8aab3d90..10d14a6d 100644 --- a/coordinator/src/p2p/swarm.rs +++ b/coordinator/src/p2p/swarm.rs @@ -6,17 +6,18 @@ use std::{ use borsh::BorshDeserialize; +use serai_client::validator_sets::primitives::ValidatorSet; + use tokio::sync::{mpsc, oneshot, RwLock}; -use serai_db::Db; use serai_task::TaskHandle; -use serai_cosign::Cosigning; +use serai_cosign::SignedCosign; use futures_util::StreamExt; use libp2p::{ identity::PeerId, - request_response::RequestId, + request_response::{RequestId, ResponseChannel}, swarm::{dial_opts::DialOpts, SwarmEvent, Swarm}, }; @@ -42,59 +43,36 @@ use crate::p2p::{ - Dispatching received requests - Sending responses */ -struct SwarmTask { +struct SwarmTask { dial_task: TaskHandle, to_dial: mpsc::UnboundedReceiver, last_dial_task_run: Instant, validators: Arc>, - peers: Peers, rebuild_peers_at: Instant, - db: D, swarm: Swarm, gossip: mpsc::UnboundedReceiver, + signed_cosigns: mpsc::UnboundedSender, + tributary_gossip: mpsc::UnboundedSender<(ValidatorSet, Vec)>, outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender>)>, - outbound_requests_responses: HashMap>>, + outbound_request_responses: HashMap>>, + + inbound_request_response_channels: HashMap>, + heartbeat_requests: mpsc::UnboundedSender<(RequestId, ValidatorSet, [u8; 32])>, + /* TODO + let cosigns = Cosigning::::notable_cosigns(&self.db, global_session); + let res = reqres::Response::NotableCosigns(cosigns); + let _: Result<_, _> = self.swarm.behaviour_mut().reqres.send_response(channel, res); + */ + notable_cosign_requests: mpsc::UnboundedSender<(RequestId, [u8; 32])>, + inbound_request_responses: mpsc::UnboundedReceiver<(RequestId, Response)>, } -impl SwarmTask { - fn handle_reqres(&mut self, event: reqres::Event) { - match event { - reqres::Event::Message { message, .. } => match message { - reqres::Message::Request { request_id: _, request, channel } => { - match request { - // TODO: Send these - reqres::Request::KeepAlive => {} - reqres::Request::Heartbeat { set, latest_block_hash } => todo!("TODO"), - reqres::Request::NotableCosigns { global_session } => { - // TODO: Move this out - let cosigns = Cosigning::::notable_cosigns(&self.db, global_session); - let res = reqres::Response::NotableCosigns(cosigns); - let _: Result<_, _> = self.swarm.behaviour_mut().reqres.send_response(channel, res); - } - } - } - reqres::Message::Response { request_id, response } => { - // Send Some(response) as the response for the request - if let Some(channel) = self.outbound_requests_responses.remove(&request_id) { - let _: Result<_, _> = channel.send(Some(response)); - } - } - }, - reqres::Event::OutboundFailure { request_id, .. 
} => { - // Send None as the response for the request - if let Some(channel) = self.outbound_requests_responses.remove(&request_id) { - let _: Result<_, _> = channel.send(None); - } - } - reqres::Event::InboundFailure { .. } | reqres::Event::ResponseSent { .. } => {} - } - } - +impl SwarmTask { fn handle_gossip(&mut self, event: gossip::Event) { match event { gossip::Event::Message { message, .. } => { @@ -103,8 +81,12 @@ impl SwarmTask { return; }; match message { - gossip::Message::Tributary { set, message } => todo!("TODO"), - gossip::Message::Cosign(signed_cosign) => todo!("TODO"), + gossip::Message::Tributary { set, message } => { + let _: Result<_, _> = self.tributary_gossip.send((set, message)); + } + gossip::Message::Cosign(signed_cosign) => { + let _: Result<_, _> = self.signed_cosigns.send(signed_cosign); + } } } gossip::Event::Subscribed { .. } | gossip::Event::Unsubscribed { .. } => {} @@ -114,6 +96,44 @@ impl SwarmTask { } } + fn handle_reqres(&mut self, event: reqres::Event) { + match event { + reqres::Event::Message { message, .. } => match message { + reqres::Message::Request { request_id, request, channel } => { + match request { + // TODO: Send these + reqres::Request::KeepAlive => { + let _: Result<_, _> = + self.swarm.behaviour_mut().reqres.send_response(channel, Response::NoResponse); + } + reqres::Request::Heartbeat { set, latest_block_hash } => { + self.inbound_request_response_channels.insert(request_id, channel); + let _: Result<_, _> = + self.heartbeat_requests.send((request_id, set, latest_block_hash)); + } + reqres::Request::NotableCosigns { global_session } => { + self.inbound_request_response_channels.insert(request_id, channel); + let _: Result<_, _> = self.notable_cosign_requests.send((request_id, global_session)); + } + } + } + reqres::Message::Response { request_id, response } => { + // Send Some(response) as the response for the request + if let Some(channel) = self.outbound_request_responses.remove(&request_id) { + let _: Result<_, _> = channel.send(Some(response)); + } + } + }, + reqres::Event::OutboundFailure { request_id, .. } => { + // Send None as the response for the request + if let Some(channel) = self.outbound_request_responses.remove(&request_id) { + let _: Result<_, _> = channel.send(None); + } + } + reqres::Event::InboundFailure { .. } | reqres::Event::ResponseSent { .. 
} => {}
    }
  }

  async fn run(mut self) {
    loop {
      let time_till_rebuild_peers = self.rebuild_peers_at.saturating_duration_since(Instant::now());
@@ -228,19 +248,28 @@ impl SwarmTask {
          }
        }

-        request = self.outbound_requests.recv() => {
-          let (peer, request, response_channel) =
-            request.expect("channel for requests was closed?");
-          let request_id = self.swarm.behaviour_mut().reqres.send_request(&peer, request);
-          self.outbound_requests_responses.insert(request_id, response_channel);
-        }
-
        message = self.gossip.recv() => {
          let message = message.expect("channel for messages to gossip was closed?");
          let topic = message.topic();
          let message = borsh::to_vec(&message).unwrap();
          let _: Result<_, _> = self.swarm.behaviour_mut().gossip.publish(topic, message);
        }
+
+        request = self.outbound_requests.recv() => {
+          let (peer, request, response_channel) =
+            request.expect("channel for requests was closed?");
+          let request_id = self.swarm.behaviour_mut().reqres.send_request(&peer, request);
+          self.outbound_request_responses.insert(request_id, response_channel);
+        }
+
+        response = self.inbound_request_responses.recv() => {
+          let (request_id, response) =
+            response.expect("channel for inbound request responses was closed?");
+          if let Some(channel) = self.inbound_request_response_channels.remove(&request_id) {
+            let _: Result<_, _> =
+              self.swarm.behaviour_mut().reqres.send_response(channel, response);
+          }
+        }
      }
    }
  }

From 419223c54ef8013b428584adb8915fb23ad90766 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Tue, 7 Jan 2025 18:09:25 -0500
Subject: [PATCH 258/368] Build the swarm

Moves UpdateSharedValidatorsTask to validators.rs.

While it was previously planned to re-use one validators object across
connection authentication and peer state management, the current plan is to
use an independent validators object for each, minimizing any contention.
These objects should be built infrequently enough, and be cheap enough to
update in the majority case (due to quickly checking whether an update is
needed at all), that this is fine.
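For illustration only, with simplified stand-in types: the quick "is an update needed" check
referenced above amounts to comparing cached sessions against the chain's current sessions,
so the common no-change case does no rebuilding. `current_session` here is a hypothetical
closure in place of the actual Serai RPC call:

use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct NetworkId(u8);
#[derive(Clone, Copy, PartialEq, Eq)]
struct Session(u32);

// Returns the networks whose validator sets actually need rebuilding. In the
// majority case no session changed, this returns an empty Vec, and each
// independent validators object's update is cheap
fn networks_needing_update(
  cached_sessions: &HashMap<NetworkId, Session>,
  current_session: impl Fn(NetworkId) -> Option<Session>,
  networks: &[NetworkId],
) -> Vec<NetworkId> {
  let mut changed = vec![];
  for network in networks {
    if current_session(*network) != cached_sessions.get(network).copied() {
      changed.push(*network);
    }
  }
  changed
}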
--- coordinator/src/p2p/authenticate.rs | 9 ++- coordinator/src/p2p/gossip.rs | 2 +- coordinator/src/p2p/mod.rs | 114 +++++++++++++++++++++++----- coordinator/src/p2p/reqres.rs | 6 +- coordinator/src/p2p/swarm.rs | 105 ++++++++++++++++++++----- coordinator/src/p2p/validators.rs | 71 ++++++++++++----- 6 files changed, 243 insertions(+), 64 deletions(-) diff --git a/coordinator/src/p2p/authenticate.rs b/coordinator/src/p2p/authenticate.rs index ffa8a33b..c678a034 100644 --- a/coordinator/src/p2p/authenticate.rs +++ b/coordinator/src/p2p/authenticate.rs @@ -23,10 +23,11 @@ use crate::p2p::{validators::Validators, peer_id_from_public}; const PROTOCOL: &str = "/serai/coordinator/validators"; -struct OnlyValidators { - validators: Arc>, - serai_key: Zeroizing, - noise_keypair: identity::Keypair, +#[derive(Clone)] +pub(crate) struct OnlyValidators { + pub(crate) validators: Arc>, + pub(crate) serai_key: Zeroizing, + pub(crate) noise_keypair: identity::Keypair, } impl OnlyValidators { diff --git a/coordinator/src/p2p/gossip.rs b/coordinator/src/p2p/gossip.rs index 7f5a078c..99196fb6 100644 --- a/coordinator/src/p2p/gossip.rs +++ b/coordinator/src/p2p/gossip.rs @@ -15,7 +15,7 @@ pub use libp2p::gossipsub::Event; use serai_cosign::SignedCosign; // Block size limit + 16 KB of space for signatures/metadata -const MAX_LIBP2P_GOSSIP_MESSAGE_SIZE: usize = tributary::BLOCK_SIZE_LIMIT + 16384; +pub(crate) const MAX_LIBP2P_GOSSIP_MESSAGE_SIZE: usize = tributary::BLOCK_SIZE_LIMIT + 16384; const KEEP_ALIVE_INTERVAL: Duration = Duration::from_secs(80); diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index 7ccb46a3..b104e94f 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -1,23 +1,36 @@ -use core::future::Future; use std::{ sync::Arc, collections::{HashSet, HashMap}, }; -use serai_client::primitives::{NetworkId, PublicKey}; +use zeroize::Zeroizing; +use schnorrkel::Keypair; -use tokio::sync::RwLock; +use serai_client::{ + primitives::{NetworkId, PublicKey}, + Serai, +}; -use serai_task::ContinuallyRan; +use tokio::sync::{mpsc, RwLock}; -use libp2p::{multihash::Multihash, identity::PeerId, swarm::NetworkBehaviour}; +use serai_task::Task; + +use libp2p::{ + multihash::Multihash, + identity::{self, PeerId}, + tcp::Config as TcpConfig, + yamux, + swarm::NetworkBehaviour, + SwarmBuilder, +}; /// A struct to sync the validators from the Serai node in order to keep track of them. mod validators; -use validators::{Validators, update_shared_validators}; +use validators::{Validators, UpdateValidatorsTask}; /// The authentication protocol upgrade to limit the P2P network to active validators. 
mod authenticate; +use authenticate::OnlyValidators; /// The dial task, to find new peers to connect to mod dial; @@ -34,9 +47,18 @@ mod heartbeat; /// The swarm task, running it and dispatching to/from it mod swarm; +use swarm::SwarmTask; const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o') +// usize::max, manually implemented, as max isn't a const fn +const MAX_LIBP2P_MESSAGE_SIZE: usize = + if gossip::MAX_LIBP2P_GOSSIP_MESSAGE_SIZE > reqres::MAX_LIBP2P_REQRES_MESSAGE_SIZE { + gossip::MAX_LIBP2P_GOSSIP_MESSAGE_SIZE + } else { + reqres::MAX_LIBP2P_REQRES_MESSAGE_SIZE + }; + fn peer_id_from_public(public: PublicKey) -> PeerId { // 0 represents the identity Multihash, that no hash was performed // It's an internal constant so we can't refer to the constant inside libp2p @@ -76,19 +98,73 @@ struct Behavior { gossip: gossip::Behavior, } -struct UpdateSharedValidatorsTask { - validators: Arc>, -} +pub(crate) fn new(serai_key: &Zeroizing, serai: Serai) -> P2p { + // Define the object we track peers with + let peers = Peers { peers: Arc::new(RwLock::new(HashMap::new())) }; -impl ContinuallyRan for UpdateSharedValidatorsTask { - // Only run every minute, not the default of every five seconds - const DELAY_BETWEEN_ITERATIONS: u64 = 60; - const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60; + // Define the dial task + let (dial_task_def, dial_task) = Task::new(); + let (to_dial_send, to_dial_recv) = mpsc::unbounded_channel(); + todo!("TODO: Dial task"); - fn run_iteration(&mut self) -> impl Send + Future> { - async move { - update_shared_validators(&self.validators).await.map_err(|e| format!("{e:?}"))?; - Ok(true) - } - } + // Define the Validators object used for validating new connections + let connection_validators = UpdateValidatorsTask::spawn(serai.clone()); + let new_only_validators = |noise_keypair: &identity::Keypair| -> Result<_, ()> { + Ok(OnlyValidators { + serai_key: serai_key.clone(), + validators: connection_validators.clone(), + noise_keypair: noise_keypair.clone(), + }) + }; + + let new_yamux = || { + let mut config = yamux::Config::default(); + // 1 MiB default + max message size + config.set_max_buffer_size((1024 * 1024) + MAX_LIBP2P_MESSAGE_SIZE); + // 256 KiB default + max message size + config.set_receive_window_size(((256 * 1024) + MAX_LIBP2P_MESSAGE_SIZE).try_into().unwrap()); + config + }; + + let behavior = Behavior { reqres: reqres::new_behavior(), gossip: gossip::new_behavior() }; + + let mut swarm = SwarmBuilder::with_existing_identity(identity::Keypair::generate_ed25519()) + .with_tokio() + .with_tcp(TcpConfig::default().nodelay(false), new_only_validators, new_yamux) + .unwrap() + .with_behaviour(|_| behavior) + .unwrap() + .build(); + swarm.listen_on(format!("/ip4/0.0.0.0/tcp/{PORT}").parse().unwrap()).unwrap(); + swarm.listen_on(format!("/ip6/::/tcp/{PORT}").parse().unwrap()).unwrap(); + + let swarm_validators = UpdateValidatorsTask::spawn(serai); + + let (gossip_send, gossip_recv) = mpsc::unbounded_channel(); + let (signed_cosigns_send, signed_cosigns_recv) = mpsc::unbounded_channel(); + let (tributary_gossip_send, tributary_gossip_recv) = mpsc::unbounded_channel(); + + let (outbound_requests_send, outbound_requests_recv) = mpsc::unbounded_channel(); + + let (heartbeat_requests_send, heartbeat_requests_recv) = mpsc::unbounded_channel(); + let (notable_cosign_requests_send, notable_cosign_requests_recv) = mpsc::unbounded_channel(); + let (inbound_request_responses_send, inbound_request_responses_recv) = mpsc::unbounded_channel(); + + // Create the swarm task + 
SwarmTask::new( + dial_task, + to_dial_recv, + swarm_validators, + peers, + swarm, + gossip_recv, + signed_cosigns_send, + tributary_gossip_send, + outbound_requests_recv, + heartbeat_requests_send, + notable_cosign_requests_send, + inbound_request_responses_recv, + ); + + todo!("TODO") } diff --git a/coordinator/src/p2p/reqres.rs b/coordinator/src/p2p/reqres.rs index fd6efcbd..cf7575e4 100644 --- a/coordinator/src/p2p/reqres.rs +++ b/coordinator/src/p2p/reqres.rs @@ -17,7 +17,7 @@ use serai_cosign::SignedCosign; /// The maximum message size for the request-response protocol // This is derived from the heartbeat message size as it's our largest message -const MAX_LIBP2P_REQRES_MESSAGE_SIZE: usize = +pub(crate) const MAX_LIBP2P_REQRES_MESSAGE_SIZE: usize = (tributary::BLOCK_SIZE_LIMIT * crate::p2p::heartbeat::BLOCKS_PER_BATCH) + 1024; const PROTOCOL: &str = "/serai/coordinator/reqres/1.0.0"; @@ -46,14 +46,14 @@ pub(crate) struct TributaryBlockWithCommit { /// Responses which can be received via the request-response protocol. #[derive(Clone, BorshSerialize, BorshDeserialize)] pub(crate) enum Response { - NoResponse, + None, Blocks(Vec), NotableCosigns(Vec), } impl fmt::Debug for Response { fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result { match self { - Response::NoResponse => fmt.debug_struct("Response::NoResponse").finish(), + Response::None => fmt.debug_struct("Response::None").finish(), Response::Blocks(_) => fmt.debug_struct("Response::Block").finish_non_exhaustive(), Response::NotableCosigns(_) => { fmt.debug_struct("Response::NotableCosigns").finish_non_exhaustive() diff --git a/coordinator/src/p2p/swarm.rs b/coordinator/src/p2p/swarm.rs index 10d14a6d..87440c92 100644 --- a/coordinator/src/p2p/swarm.rs +++ b/coordinator/src/p2p/swarm.rs @@ -28,6 +28,9 @@ use crate::p2p::{ gossip, }; +const KEEP_ALIVE_INTERVAL: Duration = Duration::from_secs(80); +const TIME_BETWEEN_REBUILD_PEERS: Duration = Duration::from_secs(10 * 60); + /* `SwarmTask` handles everything we need the `Swarm` object for. The goal is to minimize the contention on this task. Unfortunately, the `Swarm` object itself is needed for a variety of @@ -43,7 +46,7 @@ use crate::p2p::{ - Dispatching received requests - Sending responses */ -struct SwarmTask { +pub(crate) struct SwarmTask { dial_task: TaskHandle, to_dial: mpsc::UnboundedReceiver, last_dial_task_run: Instant, @@ -54,6 +57,8 @@ struct SwarmTask { swarm: Swarm, + last_message: Instant, + gossip: mpsc::UnboundedReceiver, signed_cosigns: mpsc::UnboundedSender, tributary_gossip: mpsc::UnboundedSender<(ValidatorSet, Vec)>, @@ -99,24 +104,21 @@ impl SwarmTask { fn handle_reqres(&mut self, event: reqres::Event) { match event { reqres::Event::Message { message, .. 
} => match message { - reqres::Message::Request { request_id, request, channel } => { - match request { - // TODO: Send these - reqres::Request::KeepAlive => { - let _: Result<_, _> = - self.swarm.behaviour_mut().reqres.send_response(channel, Response::NoResponse); - } - reqres::Request::Heartbeat { set, latest_block_hash } => { - self.inbound_request_response_channels.insert(request_id, channel); - let _: Result<_, _> = - self.heartbeat_requests.send((request_id, set, latest_block_hash)); - } - reqres::Request::NotableCosigns { global_session } => { - self.inbound_request_response_channels.insert(request_id, channel); - let _: Result<_, _> = self.notable_cosign_requests.send((request_id, global_session)); - } + reqres::Message::Request { request_id, request, channel } => match request { + reqres::Request::KeepAlive => { + let _: Result<_, _> = + self.swarm.behaviour_mut().reqres.send_response(channel, Response::None); } - } + reqres::Request::Heartbeat { set, latest_block_hash } => { + self.inbound_request_response_channels.insert(request_id, channel); + let _: Result<_, _> = + self.heartbeat_requests.send((request_id, set, latest_block_hash)); + } + reqres::Request::NotableCosigns { global_session } => { + self.inbound_request_response_channels.insert(request_id, channel); + let _: Result<_, _> = self.notable_cosign_requests.send((request_id, global_session)); + } + }, reqres::Message::Response { request_id, response } => { // Send Some(response) as the response for the request if let Some(channel) = self.outbound_request_responses.remove(&request_id) { @@ -136,9 +138,19 @@ impl SwarmTask { async fn run(mut self) { loop { + let time_till_keep_alive = Instant::now().saturating_duration_since(self.last_message); let time_till_rebuild_peers = self.rebuild_peers_at.saturating_duration_since(Instant::now()); tokio::select! { + () = tokio::time::sleep(time_till_keep_alive) => { + let peers = self.swarm.connected_peers().copied().collect::>(); + let behavior = self.swarm.behaviour_mut(); + for peer in peers { + behavior.reqres.send_request(&peer, Request::KeepAlive); + } + self.last_message = Instant::now(); + } + // Dial peers we're instructed to dial_opts = self.to_dial.recv() => { let dial_opts = dial_opts.expect("DialTask was closed?"); @@ -156,8 +168,6 @@ impl SwarmTask { We also use this to disconnect all peers who are no longer active in any network. 
*/ () = tokio::time::sleep(time_till_rebuild_peers) => { - const TIME_BETWEEN_REBUILD_PEERS: Duration = Duration::from_secs(10 * 60); - let validators_by_network = self.validators.read().await.by_network().clone(); let connected_peers = self.swarm.connected_peers().copied().collect::>(); @@ -253,6 +263,7 @@ impl SwarmTask { let topic = message.topic(); let message = borsh::to_vec(&message).unwrap(); let _: Result<_, _> = self.swarm.behaviour_mut().gossip.publish(topic, message); + self.last_message = Instant::now(); } request = self.outbound_requests.recv() => { @@ -273,4 +284,58 @@ impl SwarmTask { } } } + + #[allow(clippy::too_many_arguments)] + pub(crate) fn new( + dial_task: TaskHandle, + to_dial: mpsc::UnboundedReceiver, + + validators: Arc>, + peers: Peers, + + swarm: Swarm, + + gossip: mpsc::UnboundedReceiver, + signed_cosigns: mpsc::UnboundedSender, + tributary_gossip: mpsc::UnboundedSender<(ValidatorSet, Vec)>, + + outbound_requests: mpsc::UnboundedReceiver<( + PeerId, + Request, + oneshot::Sender>, + )>, + + heartbeat_requests: mpsc::UnboundedSender<(RequestId, ValidatorSet, [u8; 32])>, + notable_cosign_requests: mpsc::UnboundedSender<(RequestId, [u8; 32])>, + inbound_request_responses: mpsc::UnboundedReceiver<(RequestId, Response)>, + ) { + tokio::spawn( + SwarmTask { + dial_task, + to_dial, + last_dial_task_run: Instant::now(), + + validators, + peers, + rebuild_peers_at: Instant::now() + TIME_BETWEEN_REBUILD_PEERS, + + swarm, + + last_message: Instant::now(), + + gossip, + signed_cosigns, + tributary_gossip, + + outbound_requests, + outbound_request_responses: HashMap::new(), + + inbound_request_response_channels: HashMap::new(), + heartbeat_requests, + notable_cosign_requests, + inbound_request_responses, + } + .run(), + ); + } } diff --git a/coordinator/src/p2p/validators.rs b/coordinator/src/p2p/validators.rs index 4b3d3870..5d802f4b 100644 --- a/coordinator/src/p2p/validators.rs +++ b/coordinator/src/p2p/validators.rs @@ -1,4 +1,4 @@ -use core::borrow::Borrow; +use core::{borrow::Borrow, future::Future}; use std::{ sync::Arc, collections::{HashSet, HashMap}, @@ -6,6 +6,8 @@ use std::{ use serai_client::{primitives::NetworkId, validator_sets::primitives::Session, Serai}; +use serai_task::{Task, ContinuallyRan}; + use libp2p::PeerId; use futures_util::stream::{StreamExt, FuturesUnordered}; @@ -103,8 +105,6 @@ impl Validators { } /// Update the view of the validators. - /// - /// Returns all validators removed from the active validator set. pub(crate) async fn update(&mut self) -> Result<(), String> { let session_changes = Self::session_changes(&self.serai, &self.sessions).await?; self.incorporate_session_changes(session_changes); @@ -124,19 +124,56 @@ impl Validators { } } -/// Update the view of the validators. +/// A task which updates a set of validators. /// -/// This minimizes the time an exclusive lock is held over the validators to minimize the -/// disruption to functioning. -/// -/// Returns all validators removed from the active validator set. -pub(crate) async fn update_shared_validators( - validators: &Arc>, -) -> Result<(), String> { - let session_changes = { - let validators = validators.read().await; - Validators::session_changes(validators.serai.clone(), validators.sessions.clone()).await? 
-  };
-  validators.write().await.incorporate_session_changes(session_changes);
-  Ok(())
+/// A task which updates a set of validators.
+///
+/// The validators managed by this task will have their exclusive lock held for a minimal amount of
+/// time while the update occurs to minimize the disruption to the services relying on it.
+pub(crate) struct UpdateValidatorsTask {
+  validators: Arc<RwLock<Validators>>,
+}
+
+impl UpdateValidatorsTask {
+  /// Spawn a new instance of the UpdateValidatorsTask.
+  ///
+  /// This returns a reference to the Validators it updates after spawning itself.
+  pub(crate) fn spawn(serai: Serai) -> Arc<RwLock<Validators>> {
+    // The validators which will be updated
+    let validators = Arc::new(RwLock::new(Validators {
+      serai,
+      sessions: HashMap::new(),
+      by_network: HashMap::new(),
+      validators: HashMap::new(),
+    }));
+
+    // Define the task
+    let (update_validators_task, update_validators_task_handle) = Task::new();
+    // Forget the handle, as dropping the handle would stop the task
+    core::mem::forget(update_validators_task_handle);
+    // Spawn the task
+    tokio::spawn(
+      (Self { validators: validators.clone() }).continually_run(update_validators_task, vec![]),
+    );
+
+    // Return the validators
+    validators
+  }
+}
+
+impl ContinuallyRan for UpdateValidatorsTask {
+  // Only run every minute, not the default of every five seconds
+  const DELAY_BETWEEN_ITERATIONS: u64 = 60;
+  const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60;
+
+  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
+    async move {
+      let session_changes = {
+        let validators = self.validators.read().await;
+        Validators::session_changes(validators.serai.clone(), validators.sessions.clone())
+          .await
+          .map_err(|e| format!("{e:?}"))?
+      };
+      self.validators.write().await.incorporate_session_changes(session_changes);
+      Ok(true)
+    }
+  }
 }

From 2121a9b131d6f13f269addf14268dd3b49e98769 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Tue, 7 Jan 2025 18:16:34 -0500
Subject: [PATCH 259/368] Spawn the task to select validators to dial

---
 coordinator/src/p2p/dial.rs       |  8 +++++++-
 coordinator/src/p2p/mod.rs        | 14 ++++++++++----
 coordinator/src/p2p/validators.rs | 16 ++++++++++------
 3 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/coordinator/src/p2p/dial.rs b/coordinator/src/p2p/dial.rs
index 6fc6cb50..74eaba9a 100644
--- a/coordinator/src/p2p/dial.rs
+++ b/coordinator/src/p2p/dial.rs
@@ -28,13 +28,19 @@ const TARGET_PEERS_PER_NETWORK: usize = 5;
 */
 // TODO
 const TARGET_DIALED_PEERS_PER_NETWORK: usize = 3;
-struct DialTask {
+pub(crate) struct DialTask {
   serai: Serai,
   validators: Validators,
   peers: Peers,
   to_dial: mpsc::UnboundedSender<DialOpts>,
 }

+impl DialTask {
+  pub(crate) fn new(serai: Serai, peers: Peers, to_dial: mpsc::UnboundedSender<DialOpts>) -> Self {
+    DialTask { serai: serai.clone(), validators: Validators::new(serai), peers, to_dial }
+  }
+}
+
 impl ContinuallyRan for DialTask {
   // Only run every five minutes, not the default of every five seconds
   const DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60;
diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs
index b104e94f..ba09b273 100644
--- a/coordinator/src/p2p/mod.rs
+++ b/coordinator/src/p2p/mod.rs
@@ -13,7 +13,7 @@ use serai_client::{
 use tokio::sync::{mpsc, RwLock};

-use serai_task::Task;
+use serai_task::{Task, ContinuallyRan};

 use libp2p::{
   multihash::Multihash,
@@ -26,7 +26,7 @@ use libp2p::{

 /// A struct to sync the validators from the Serai node in order to keep track of them.
mod validators; -use validators::{Validators, UpdateValidatorsTask}; +use validators::UpdateValidatorsTask; /// The authentication protocol upgrade to limit the P2P network to active validators. mod authenticate; @@ -34,6 +34,7 @@ use authenticate::OnlyValidators; /// The dial task, to find new peers to connect to mod dial; +use dial::DialTask; /// The request-response messages and behavior mod reqres; @@ -105,7 +106,10 @@ pub(crate) fn new(serai_key: &Zeroizing, serai: Serai) -> P2p { // Define the dial task let (dial_task_def, dial_task) = Task::new(); let (to_dial_send, to_dial_recv) = mpsc::unbounded_channel(); - todo!("TODO: Dial task"); + tokio::spawn( + DialTask::new(serai.clone(), peers.clone(), to_dial_send) + .continually_run(dial_task_def, vec![]), + ); // Define the Validators object used for validating new connections let connection_validators = UpdateValidatorsTask::spawn(serai.clone()); @@ -166,5 +170,7 @@ pub(crate) fn new(serai_key: &Zeroizing, serai: Serai) -> P2p { inbound_request_responses_recv, ); - todo!("TODO") + // gossip_send, signed_cosigns_recv, tributary_gossip_recv, outbound_requests_send, + // heartbeat_requests_recv, notable_cosign_requests_recv, inbound_request_responses_send + todo!("TODO"); } diff --git a/coordinator/src/p2p/validators.rs b/coordinator/src/p2p/validators.rs index 5d802f4b..5a639148 100644 --- a/coordinator/src/p2p/validators.rs +++ b/coordinator/src/p2p/validators.rs @@ -27,6 +27,15 @@ pub(crate) struct Validators { } impl Validators { + pub(crate) fn new(serai: Serai) -> Self { + Validators { + serai, + sessions: HashMap::new(), + by_network: HashMap::new(), + validators: HashMap::new(), + } + } + async fn session_changes( serai: impl Borrow, sessions: impl Borrow>, @@ -138,12 +147,7 @@ impl UpdateValidatorsTask { /// This returns a reference to the Validators it updates after spawning itself. 
 pub(crate) fn spawn(serai: Serai) -> Arc<RwLock<Validators>> {
   // The validators which will be updated
-  let validators = Arc::new(RwLock::new(Validators {
-    serai,
-    sessions: HashMap::new(),
-    by_network: HashMap::new(),
-    validators: HashMap::new(),
-  }));
+  let validators = Arc::new(RwLock::new(Validators::new(serai)));

   // Define the task
   let (update_validators_task, update_validators_task_handle) = Task::new();

From 376a66b0003537a23fcefa61d3f6278b72eb6387 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Wed, 8 Jan 2025 16:41:11 -0500
Subject: [PATCH 260/368] Remove async-trait from tendermint-machine, tributary-chain

---
 Cargo.lock                                    |   2 -
 coordinator/tributary/Cargo.toml              |   1 -
 coordinator/tributary/src/lib.rs              |  12 +-
 coordinator/tributary/src/tendermint/mod.rs   | 262 +++++++++---------
 coordinator/tributary/src/tests/p2p.rs        |   7 +-
 coordinator/tributary/src/tests/tendermint.rs |   8 +-
 coordinator/tributary/tendermint/Cargo.toml   |   1 -
 coordinator/tributary/tendermint/src/ext.rs   |  33 +--
 coordinator/tributary/tendermint/tests/ext.rs |  21 +-
 9 files changed, 178 insertions(+), 169 deletions(-)

diff --git a/Cargo.lock b/Cargo.lock
index fcb9b442..3902b794 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -10498,7 +10498,6 @@ dependencies = [
 name = "tendermint-machine"
 version = "0.2.0"
 dependencies = [
- "async-trait",
 "futures-channel",
 "futures-util",
 "hex",
@@ -10941,7 +10940,6 @@ dependencies = [
 name = "tributary-chain"
 version = "0.1.0"
 dependencies = [
- "async-trait",
 "blake2",
 "ciphersuite",
 "flexible-transcript",
diff --git a/coordinator/tributary/Cargo.toml b/coordinator/tributary/Cargo.toml
index 28beb2ab..d88c3b33 100644
--- a/coordinator/tributary/Cargo.toml
+++ b/coordinator/tributary/Cargo.toml
@@ -16,7 +16,6 @@ rustdoc-args = ["--cfg", "docsrs"]
 workspace = true

 [dependencies]
-async-trait = { version = "0.1", default-features = false }
 thiserror = { version = "2", default-features = false, features = ["std"] }
 subtle = { version = "^2", default-features = false, features = ["std"] }
diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs
index 476dbf93..3e946381 100644
--- a/coordinator/tributary/src/lib.rs
+++ b/coordinator/tributary/src/lib.rs
@@ -1,8 +1,6 @@
-use core::{marker::PhantomData, fmt::Debug};
+use core::{marker::PhantomData, fmt::Debug, future::Future};
 use std::{sync::Arc, io};

-use async_trait::async_trait;
-
 use zeroize::Zeroizing;
 use ciphersuite::{Ciphersuite, Ristretto};
@@ -131,20 +129,18 @@ pub trait ReadWrite: Sized {
 }
 }

-#[async_trait]
 pub trait P2p: 'static + Send + Sync + Clone + Debug {
 /// Broadcast a message to all other members of the Tributary with the specified genesis.
 ///
 /// The Tributary will re-broadcast consensus messages on a fixed interval to ensure they aren't
 /// prematurely dropped from the P2P layer. The P2P layer SHOULD perform content-based
 /// deduplication to ensure a sane amount of load.
- async fn broadcast(&self, genesis: [u8; 32], msg: Vec<u8>);
+ fn broadcast(&self, genesis: [u8; 32], msg: Vec<u8>) -> impl Send + Future<Output = ()>;
}

-#[async_trait]
impl<P: P2p> P2p for Arc<P>
{ - async fn broadcast(&self, genesis: [u8; 32], msg: Vec) { - (*self).broadcast(genesis, msg).await + fn broadcast(&self, genesis: [u8; 32], msg: Vec) -> impl Send + Future { + P::broadcast(self, genesis, msg) } } diff --git a/coordinator/tributary/src/tendermint/mod.rs b/coordinator/tributary/src/tendermint/mod.rs index 0ce6232c..0fd618ca 100644 --- a/coordinator/tributary/src/tendermint/mod.rs +++ b/coordinator/tributary/src/tendermint/mod.rs @@ -1,8 +1,6 @@ -use core::ops::Deref; +use core::{ops::Deref, future::Future}; use std::{sync::Arc, collections::HashMap}; -use async_trait::async_trait; - use subtle::ConstantTimeEq; use zeroize::{Zeroize, Zeroizing}; @@ -74,50 +72,52 @@ impl Signer { } } -#[async_trait] impl SignerTrait for Signer { type ValidatorId = [u8; 32]; type Signature = [u8; 64]; /// Returns the validator's current ID. Returns None if they aren't a current validator. - async fn validator_id(&self) -> Option { - Some((Ristretto::generator() * self.key.deref()).to_bytes()) + fn validator_id(&self) -> impl Send + Future> { + async move { Some((Ristretto::generator() * self.key.deref()).to_bytes()) } } /// Sign a signature with the current validator's private key. - async fn sign(&self, msg: &[u8]) -> Self::Signature { - let mut nonce = Zeroizing::new(RecommendedTranscript::new(b"Tributary Chain Tendermint Nonce")); - nonce.append_message(b"genesis", self.genesis); - nonce.append_message(b"key", Zeroizing::new(self.key.deref().to_repr()).as_ref()); - nonce.append_message(b"message", msg); - let mut nonce = nonce.challenge(b"nonce"); + fn sign(&self, msg: &[u8]) -> impl Send + Future { + async move { + let mut nonce = + Zeroizing::new(RecommendedTranscript::new(b"Tributary Chain Tendermint Nonce")); + nonce.append_message(b"genesis", self.genesis); + nonce.append_message(b"key", Zeroizing::new(self.key.deref().to_repr()).as_ref()); + nonce.append_message(b"message", msg); + let mut nonce = nonce.challenge(b"nonce"); - let mut nonce_arr = [0; 64]; - nonce_arr.copy_from_slice(nonce.as_ref()); + let mut nonce_arr = [0; 64]; + nonce_arr.copy_from_slice(nonce.as_ref()); - let nonce_ref: &mut [u8] = nonce.as_mut(); - nonce_ref.zeroize(); - let nonce_ref: &[u8] = nonce.as_ref(); - assert_eq!(nonce_ref, [0; 64].as_ref()); + let nonce_ref: &mut [u8] = nonce.as_mut(); + nonce_ref.zeroize(); + let nonce_ref: &[u8] = nonce.as_ref(); + assert_eq!(nonce_ref, [0; 64].as_ref()); - let nonce = - Zeroizing::new(::F::from_bytes_mod_order_wide(&nonce_arr)); - nonce_arr.zeroize(); + let nonce = + Zeroizing::new(::F::from_bytes_mod_order_wide(&nonce_arr)); + nonce_arr.zeroize(); - assert!(!bool::from(nonce.ct_eq(&::F::ZERO))); + assert!(!bool::from(nonce.ct_eq(&::F::ZERO))); - let challenge = challenge( - self.genesis, - (Ristretto::generator() * self.key.deref()).to_bytes(), - (Ristretto::generator() * nonce.deref()).to_bytes().as_ref(), - msg, - ); + let challenge = challenge( + self.genesis, + (Ristretto::generator() * self.key.deref()).to_bytes(), + (Ristretto::generator() * nonce.deref()).to_bytes().as_ref(), + msg, + ); - let sig = SchnorrSignature::::sign(&self.key, nonce, challenge).serialize(); + let sig = SchnorrSignature::::sign(&self.key, nonce, challenge).serialize(); - let mut res = [0; 64]; - res.copy_from_slice(&sig); - res + let mut res = [0; 64]; + res.copy_from_slice(&sig); + res + } } } @@ -274,7 +274,6 @@ pub const BLOCK_PROCESSING_TIME: u32 = 999; pub const LATENCY_TIME: u32 = 1667; pub const TARGET_BLOCK_TIME: u32 = BLOCK_PROCESSING_TIME + (3 * LATENCY_TIME); -#[async_trait] 
impl Network for TendermintNetwork { type Db = D; @@ -300,111 +299,126 @@ impl Network for TendermintNetwork self.validators.clone() } - async fn broadcast(&mut self, msg: SignedMessageFor) { - let mut to_broadcast = vec![TENDERMINT_MESSAGE]; - to_broadcast.extend(msg.encode()); - self.p2p.broadcast(self.genesis, to_broadcast).await - } - - async fn slash(&mut self, validator: Self::ValidatorId, slash_event: SlashEvent) { - log::error!( - "validator {} triggered a slash event on tributary {} (with evidence: {})", - hex::encode(validator), - hex::encode(self.genesis), - matches!(slash_event, SlashEvent::WithEvidence(_)), - ); - - let signer = self.signer(); - let Some(tx) = (match slash_event { - SlashEvent::WithEvidence(evidence) => { - // create an unsigned evidence tx - Some(TendermintTx::SlashEvidence(evidence)) - } - SlashEvent::Id(_reason, _block, _round) => { - // TODO: Increase locally observed slash points - None - } - }) else { - return; - }; - - // add tx to blockchain and broadcast to peers - let mut to_broadcast = vec![TRANSACTION_MESSAGE]; - tx.write(&mut to_broadcast).unwrap(); - if self.blockchain.write().await.add_transaction::( - true, - Transaction::Tendermint(tx), - &self.signature_scheme(), - ) == Ok(true) - { - self.p2p.broadcast(signer.genesis, to_broadcast).await; + fn broadcast(&mut self, msg: SignedMessageFor) -> impl Send + Future { + async move { + let mut to_broadcast = vec![TENDERMINT_MESSAGE]; + to_broadcast.extend(msg.encode()); + self.p2p.broadcast(self.genesis, to_broadcast).await } } - async fn validate(&self, block: &Self::Block) -> Result<(), TendermintBlockError> { - let block = - Block::read::<&[u8]>(&mut block.0.as_ref()).map_err(|_| TendermintBlockError::Fatal)?; - self - .blockchain - .read() - .await - .verify_block::(&block, &self.signature_scheme(), false) - .map_err(|e| match e { - BlockError::NonLocalProvided(_) => TendermintBlockError::Temporal, - _ => { - log::warn!("Tributary Tendermint validate returning BlockError::Fatal due to {e:?}"); - TendermintBlockError::Fatal + fn slash( + &mut self, + validator: Self::ValidatorId, + slash_event: SlashEvent, + ) -> impl Send + Future { + async move { + log::error!( + "validator {} triggered a slash event on tributary {} (with evidence: {})", + hex::encode(validator), + hex::encode(self.genesis), + matches!(slash_event, SlashEvent::WithEvidence(_)), + ); + + let signer = self.signer(); + let Some(tx) = (match slash_event { + SlashEvent::WithEvidence(evidence) => { + // create an unsigned evidence tx + Some(TendermintTx::SlashEvidence(evidence)) } - }) + SlashEvent::Id(_reason, _block, _round) => { + // TODO: Increase locally observed slash points + None + } + }) else { + return; + }; + + // add tx to blockchain and broadcast to peers + let mut to_broadcast = vec![TRANSACTION_MESSAGE]; + tx.write(&mut to_broadcast).unwrap(); + if self.blockchain.write().await.add_transaction::( + true, + Transaction::Tendermint(tx), + &self.signature_scheme(), + ) == Ok(true) + { + self.p2p.broadcast(signer.genesis, to_broadcast).await; + } + } } - async fn add_block( + fn validate( + &self, + block: &Self::Block, + ) -> impl Send + Future> { + async move { + let block = + Block::read::<&[u8]>(&mut block.0.as_ref()).map_err(|_| TendermintBlockError::Fatal)?; + self + .blockchain + .read() + .await + .verify_block::(&block, &self.signature_scheme(), false) + .map_err(|e| match e { + BlockError::NonLocalProvided(_) => TendermintBlockError::Temporal, + _ => { + log::warn!("Tributary Tendermint validate returning 
BlockError::Fatal due to {e:?}"); + TendermintBlockError::Fatal + } + }) + } + } + + fn add_block( &mut self, serialized_block: Self::Block, commit: Commit, - ) -> Option { - let invalid_block = || { - // There's a fatal flaw in the code, it's behind a hard fork, or the validators turned - // malicious - // All justify a halt to then achieve social consensus from - // TODO: Under multiple validator sets, a small validator set turning malicious knocks - // off the entire network. That's an unacceptable DoS. - panic!("validators added invalid block to tributary {}", hex::encode(self.genesis)); - }; + ) -> impl Send + Future> { + async move { + let invalid_block = || { + // There's a fatal flaw in the code, it's behind a hard fork, or the validators turned + // malicious + // All justify a halt to then achieve social consensus from + // TODO: Under multiple validator sets, a small validator set turning malicious knocks + // off the entire network. That's an unacceptable DoS. + panic!("validators added invalid block to tributary {}", hex::encode(self.genesis)); + }; - // Tendermint should only produce valid commits - assert!(self.verify_commit(serialized_block.id(), &commit)); + // Tendermint should only produce valid commits + assert!(self.verify_commit(serialized_block.id(), &commit)); - let Ok(block) = Block::read::<&[u8]>(&mut serialized_block.0.as_ref()) else { - return invalid_block(); - }; + let Ok(block) = Block::read::<&[u8]>(&mut serialized_block.0.as_ref()) else { + return invalid_block(); + }; - let encoded_commit = commit.encode(); - loop { - let block_res = self.blockchain.write().await.add_block::( - &block, - encoded_commit.clone(), - &self.signature_scheme(), - ); - match block_res { - Ok(()) => { - // If we successfully added this block, break - break; + let encoded_commit = commit.encode(); + loop { + let block_res = self.blockchain.write().await.add_block::( + &block, + encoded_commit.clone(), + &self.signature_scheme(), + ); + match block_res { + Ok(()) => { + // If we successfully added this block, break + break; + } + Err(BlockError::NonLocalProvided(hash)) => { + log::error!( + "missing provided transaction {} which other validators on tributary {} had", + hex::encode(hash), + hex::encode(self.genesis) + ); + tokio::time::sleep(core::time::Duration::from_secs(5)).await; + } + _ => return invalid_block(), } - Err(BlockError::NonLocalProvided(hash)) => { - log::error!( - "missing provided transaction {} which other validators on tributary {} had", - hex::encode(hash), - hex::encode(self.genesis) - ); - tokio::time::sleep(core::time::Duration::from_secs(5)).await; - } - _ => return invalid_block(), } - } - Some(TendermintBlock( - self.blockchain.write().await.build_block::(&self.signature_scheme()).serialize(), - )) + Some(TendermintBlock( + self.blockchain.write().await.build_block::(&self.signature_scheme()).serialize(), + )) + } } } diff --git a/coordinator/tributary/src/tests/p2p.rs b/coordinator/tributary/src/tests/p2p.rs index d3e3b74c..32bca7d1 100644 --- a/coordinator/tributary/src/tests/p2p.rs +++ b/coordinator/tributary/src/tests/p2p.rs @@ -1,11 +1,12 @@ +use core::future::Future; + pub use crate::P2p; #[derive(Clone, Debug)] pub struct DummyP2p; -#[async_trait::async_trait] impl P2p for DummyP2p { - async fn broadcast(&self, _: [u8; 32], _: Vec) { - unimplemented!() + fn broadcast(&self, _: [u8; 32], _: Vec) -> impl Send + Future { + async move { unimplemented!() } } } diff --git a/coordinator/tributary/src/tests/tendermint.rs 
b/coordinator/tributary/src/tests/tendermint.rs index 77dfc9e5..fc8f190e 100644 --- a/coordinator/tributary/src/tests/tendermint.rs +++ b/coordinator/tributary/src/tests/tendermint.rs @@ -1,4 +1,7 @@ +use core::future::Future; + use tendermint::ext::Network; + use crate::{ P2p, TendermintTx, tendermint::{TARGET_BLOCK_TIME, TendermintNetwork}, @@ -11,10 +14,9 @@ fn assert_target_block_time() { #[derive(Clone, Debug)] pub struct DummyP2p; - #[async_trait::async_trait] impl P2p for DummyP2p { - async fn broadcast(&self, _: [u8; 32], _: Vec) { - unimplemented!() + fn broadcast(&self, _: [u8; 32], _: Vec) -> impl Send + Future { + async move { unimplemented!() } } } diff --git a/coordinator/tributary/tendermint/Cargo.toml b/coordinator/tributary/tendermint/Cargo.toml index 807938c8..7f7e2186 100644 --- a/coordinator/tributary/tendermint/Cargo.toml +++ b/coordinator/tributary/tendermint/Cargo.toml @@ -16,7 +16,6 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -async-trait = { version = "0.1", default-features = false } thiserror = { version = "2", default-features = false, features = ["std"] } hex = { version = "0.4", default-features = false, features = ["std"] } diff --git a/coordinator/tributary/tendermint/src/ext.rs b/coordinator/tributary/tendermint/src/ext.rs index 3869d9d9..67b8b07d 100644 --- a/coordinator/tributary/tendermint/src/ext.rs +++ b/coordinator/tributary/tendermint/src/ext.rs @@ -1,7 +1,6 @@ -use core::{hash::Hash, fmt::Debug}; +use core::{hash::Hash, fmt::Debug, future::Future}; use std::{sync::Arc, collections::HashSet}; -use async_trait::async_trait; use thiserror::Error; use parity_scale_codec::{Encode, Decode}; @@ -34,7 +33,6 @@ pub struct BlockNumber(pub u64); pub struct RoundNumber(pub u32); /// A signer for a validator. -#[async_trait] pub trait Signer: Send + Sync { // Type used to identify validators. type ValidatorId: ValidatorId; @@ -42,22 +40,21 @@ pub trait Signer: Send + Sync { type Signature: Signature; /// Returns the validator's current ID. Returns None if they aren't a current validator. - async fn validator_id(&self) -> Option; + fn validator_id(&self) -> impl Send + Future>; /// Sign a signature with the current validator's private key. - async fn sign(&self, msg: &[u8]) -> Self::Signature; + fn sign(&self, msg: &[u8]) -> impl Send + Future; } -#[async_trait] impl Signer for Arc { type ValidatorId = S::ValidatorId; type Signature = S::Signature; - async fn validator_id(&self) -> Option { - self.as_ref().validator_id().await + fn validator_id(&self) -> impl Send + Future> { + self.as_ref().validator_id() } - async fn sign(&self, msg: &[u8]) -> Self::Signature { - self.as_ref().sign(msg).await + fn sign(&self, msg: &[u8]) -> impl Send + Future { + self.as_ref().sign(msg) } } @@ -210,7 +207,6 @@ pub trait Block: Send + Sync + Clone + PartialEq + Eq + Debug + Encode + Decode } /// Trait representing the distributed system Tendermint is providing consensus over. -#[async_trait] pub trait Network: Sized + Send + Sync { /// The database used to back this. type Db: serai_db::Db; @@ -229,6 +225,7 @@ pub trait Network: Sized + Send + Sync { /// This should include both the time to download the block and the actual processing time. /// /// BLOCK_PROCESSING_TIME + (3 * LATENCY_TIME) must be divisible by 1000. + // TODO: Redefine as Duration const BLOCK_PROCESSING_TIME: u32; /// Network latency time in milliseconds. 
/// @@ -280,15 +277,19 @@ pub trait Network: Sized + Send + Sync { /// Switching to unauthenticated channels in a system already providing authenticated channels is /// not recommended as this is a minor, temporal inefficiency, while downgrading channels may /// have wider implications. - async fn broadcast(&mut self, msg: SignedMessageFor); + fn broadcast(&mut self, msg: SignedMessageFor) -> impl Send + Future; /// Trigger a slash for the validator in question who was definitively malicious. /// /// The exact process of triggering a slash is undefined and left to the network as a whole. - async fn slash(&mut self, validator: Self::ValidatorId, slash_event: SlashEvent); + fn slash( + &mut self, + validator: Self::ValidatorId, + slash_event: SlashEvent, + ) -> impl Send + Future; /// Validate a block. - async fn validate(&self, block: &Self::Block) -> Result<(), BlockError>; + fn validate(&self, block: &Self::Block) -> impl Send + Future>; /// Add a block, returning the proposal for the next one. /// @@ -298,9 +299,9 @@ pub trait Network: Sized + Send + Sync { /// This deviates from the paper which will have a local node refuse to decide on a block it /// considers invalid. This library acknowledges the network did decide on it, leaving handling /// of it to the network, and outside of this scope. - async fn add_block( + fn add_block( &mut self, block: Self::Block, commit: Commit, - ) -> Option; + ) -> impl Send + Future>; } diff --git a/coordinator/tributary/tendermint/tests/ext.rs b/coordinator/tributary/tendermint/tests/ext.rs index bec95ddc..58a5d468 100644 --- a/coordinator/tributary/tendermint/tests/ext.rs +++ b/coordinator/tributary/tendermint/tests/ext.rs @@ -1,10 +1,9 @@ +use core::future::Future; use std::{ sync::Arc, time::{UNIX_EPOCH, SystemTime, Duration}, }; -use async_trait::async_trait; - use parity_scale_codec::{Encode, Decode}; use futures_util::sink::SinkExt; @@ -21,20 +20,21 @@ type TestValidatorId = u16; type TestBlockId = [u8; 4]; struct TestSigner(u16); -#[async_trait] impl Signer for TestSigner { type ValidatorId = TestValidatorId; type Signature = [u8; 32]; - async fn validator_id(&self) -> Option { - Some(self.0) + fn validator_id(&self) -> impl Send + Future> { + async move { Some(self.0) } } - async fn sign(&self, msg: &[u8]) -> [u8; 32] { - let mut sig = [0; 32]; - sig[.. 2].copy_from_slice(&self.0.to_le_bytes()); - sig[2 .. (2 + 30.min(msg.len()))].copy_from_slice(&msg[.. 30.min(msg.len())]); - sig + fn sign(&self, msg: &[u8]) -> impl Send + Future { + async move { + let mut sig = [0; 32]; + sig[.. 2].copy_from_slice(&self.0.to_le_bytes()); + sig[2 .. (2 + 30.min(msg.len()))].copy_from_slice(&msg[.. 30.min(msg.len())]); + sig + } } } @@ -111,7 +111,6 @@ struct TestNetwork( Arc, SyncedBlockSender, SyncedBlockResultReceiver)>>>, ); -#[async_trait] impl Network for TestNetwork { type Db = MemDb; From fd9b464b359eeaa035b8e919a274766504c33e9c Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 8 Jan 2025 17:01:37 -0500 Subject: [PATCH 261/368] Add a trait for the P2p network used in the coordinator Moves all of the Libp2p code to a dedicated directory. Makes the Heartbeat task abstract over any P2p network. 
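To illustrate the shape of this abstraction, a minimal sketch follows. It
assumes simplified signatures: the actual traits also thread `NetworkId`,
`ValidatorSet`, and `TributaryBlockWithCommit`, while here peers receive and
return plain bytes.

```rust
use core::future::Future;

// A simplified peer: the real trait sends a heartbeat for a `ValidatorSet`
// and receives `TributaryBlockWithCommit`s; both are reduced to bytes here.
trait Peer<'a>: Send {
  fn send_heartbeat(
    &self,
    latest_block_hash: [u8; 32],
  ) -> impl Send + Future<Output = Option<Vec<Vec<u8>>>>;
}

// A simplified P2p network, generic over its peer type via a
// lifetime-parameterized associated type so peers may borrow from it.
trait P2p: Send + Sync {
  type Peer<'a>: Peer<'a>
  where
    Self: 'a;
  fn peers(&self) -> impl Send + Future<Output = Vec<Self::Peer<'_>>>;
}

// Generic code, such as the heartbeat task, only needs the traits and never
// names a concrete backend like libp2p.
async fn heartbeat<P: P2p>(p2p: &P, tip: [u8; 32]) {
  for peer in p2p.peers().await {
    if let Some(_blocks) = peer.send_heartbeat(tip).await {
      // Sync the received blocks...
    }
  }
}
```

The return-position `impl Future` style matches the preceding patch's removal
of async-trait, so the abstraction adds no boxing over the concrete backend.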
--- coordinator/src/p2p/heartbeat.rs | 16 +- .../src/p2p/{ => libp2p}/authenticate.rs | 2 +- coordinator/src/p2p/{ => libp2p}/dial.rs | 2 +- coordinator/src/p2p/{ => libp2p}/gossip.rs | 0 coordinator/src/p2p/libp2p/mod.rs | 163 ++++++++++++++++ coordinator/src/p2p/{ => libp2p}/reqres.rs | 0 coordinator/src/p2p/{ => libp2p}/swarm.rs | 4 +- .../src/p2p/{ => libp2p}/validators.rs | 2 +- coordinator/src/p2p/mod.rs | 183 ++---------------- 9 files changed, 194 insertions(+), 178 deletions(-) rename coordinator/src/p2p/{ => libp2p}/authenticate.rs (98%) rename coordinator/src/p2p/{ => libp2p}/dial.rs (98%) rename coordinator/src/p2p/{ => libp2p}/gossip.rs (100%) create mode 100644 coordinator/src/p2p/libp2p/mod.rs rename coordinator/src/p2p/{ => libp2p}/reqres.rs (100%) rename coordinator/src/p2p/{ => libp2p}/swarm.rs (99%) rename coordinator/src/p2p/{ => libp2p}/validators.rs (99%) diff --git a/coordinator/src/p2p/heartbeat.rs b/coordinator/src/p2p/heartbeat.rs index 85b07dc6..0f000dcc 100644 --- a/coordinator/src/p2p/heartbeat.rs +++ b/coordinator/src/p2p/heartbeat.rs @@ -11,10 +11,7 @@ use serai_task::ContinuallyRan; use crate::{ tributary::Transaction, - p2p::{ - reqres::{Request, Response}, - P2p, - }, + p2p::{Peer, P2p}, }; // Amount of blocks in a minute @@ -28,14 +25,14 @@ pub const BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1; /// /// If the other validator has more blocks then we do, they're expected to inform us. This forms /// the sync protocol for our Tributaries. -struct HeartbeatTask { +struct HeartbeatTask { set: ValidatorSet, - tributary: Tributary, + tributary: Tributary, reader: TributaryReader, - p2p: P2p, + p2p: P, } -impl ContinuallyRan for HeartbeatTask { +impl ContinuallyRan for HeartbeatTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { // If our blockchain hasn't had a block in the past minute, trigger the heartbeat protocol @@ -74,8 +71,7 @@ impl ContinuallyRan for HeartbeatTask { tip = self.reader.tip(); tip_is_stale = false; } - let request = Request::Heartbeat { set: self.set, latest_block_hash: tip }; - let Ok(Response::Blocks(blocks)) = peer.send(request).await else { continue 'peer }; + let Ok(blocks) = peer.send_heartbeat(self.set, tip).await else { continue 'peer }; // This is the final batch if it has less than the maximum amount of blocks // (signifying there weren't more blocks after this to fill the batch with) diff --git a/coordinator/src/p2p/authenticate.rs b/coordinator/src/p2p/libp2p/authenticate.rs similarity index 98% rename from coordinator/src/p2p/authenticate.rs rename to coordinator/src/p2p/libp2p/authenticate.rs index c678a034..d00d0dac 100644 --- a/coordinator/src/p2p/authenticate.rs +++ b/coordinator/src/p2p/libp2p/authenticate.rs @@ -19,7 +19,7 @@ use libp2p::{ noise, }; -use crate::p2p::{validators::Validators, peer_id_from_public}; +use crate::p2p::libp2p::{validators::Validators, peer_id_from_public}; const PROTOCOL: &str = "/serai/coordinator/validators"; diff --git a/coordinator/src/p2p/dial.rs b/coordinator/src/p2p/libp2p/dial.rs similarity index 98% rename from coordinator/src/p2p/dial.rs rename to coordinator/src/p2p/libp2p/dial.rs index 74eaba9a..03795a51 100644 --- a/coordinator/src/p2p/dial.rs +++ b/coordinator/src/p2p/libp2p/dial.rs @@ -14,7 +14,7 @@ use libp2p::{ use serai_task::ContinuallyRan; -use crate::p2p::{PORT, Peers, validators::Validators}; +use crate::p2p::libp2p::{PORT, Peers, validators::Validators}; const TARGET_PEERS_PER_NETWORK: usize = 5; /* diff --git a/coordinator/src/p2p/gossip.rs 
b/coordinator/src/p2p/libp2p/gossip.rs similarity index 100% rename from coordinator/src/p2p/gossip.rs rename to coordinator/src/p2p/libp2p/gossip.rs diff --git a/coordinator/src/p2p/libp2p/mod.rs b/coordinator/src/p2p/libp2p/mod.rs new file mode 100644 index 00000000..b103a63f --- /dev/null +++ b/coordinator/src/p2p/libp2p/mod.rs @@ -0,0 +1,163 @@ +use core::future::Future; +use std::{ + sync::Arc, + collections::{HashSet, HashMap}, +}; + +use zeroize::Zeroizing; +use schnorrkel::Keypair; + +use serai_client::{ + primitives::{NetworkId, PublicKey}, + Serai, +}; + +use tokio::sync::{mpsc, RwLock}; + +use serai_task::{Task, ContinuallyRan}; + +use libp2p::{ + multihash::Multihash, + identity::{self, PeerId}, + tcp::Config as TcpConfig, + yamux, + swarm::NetworkBehaviour, + SwarmBuilder, +}; + +/// A struct to sync the validators from the Serai node in order to keep track of them. +mod validators; +use validators::UpdateValidatorsTask; + +/// The authentication protocol upgrade to limit the P2P network to active validators. +mod authenticate; +use authenticate::OnlyValidators; + +/// The dial task, to find new peers to connect to +mod dial; +use dial::DialTask; + +/// The request-response messages and behavior +mod reqres; +use reqres::{Request, Response}; + +/// The gossip messages and behavior +mod gossip; + +/// The swarm task, running it and dispatching to/from it +mod swarm; +use swarm::SwarmTask; + +const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o') + +// usize::max, manually implemented, as max isn't a const fn +const MAX_LIBP2P_MESSAGE_SIZE: usize = + if gossip::MAX_LIBP2P_GOSSIP_MESSAGE_SIZE > reqres::MAX_LIBP2P_REQRES_MESSAGE_SIZE { + gossip::MAX_LIBP2P_GOSSIP_MESSAGE_SIZE + } else { + reqres::MAX_LIBP2P_REQRES_MESSAGE_SIZE + }; + +fn peer_id_from_public(public: PublicKey) -> PeerId { + // 0 represents the identity Multihash, that no hash was performed + // It's an internal constant so we can't refer to the constant inside libp2p + PeerId::from_multihash(Multihash::wrap(0, &public.0).unwrap()).unwrap() +} + +struct Peer; +impl Peer { + async fn send(&self, request: Request) -> Result { + (async move { todo!("TODO") }).await + } +} + +#[derive(Clone)] +struct Peers { + peers: Arc>>>, +} + +#[derive(NetworkBehaviour)] +struct Behavior { + reqres: reqres::Behavior, + gossip: gossip::Behavior, +} + +struct LibP2p; +impl LibP2p { + pub(crate) fn new(serai_key: &Zeroizing, serai: Serai) -> LibP2p { + // Define the object we track peers with + let peers = Peers { peers: Arc::new(RwLock::new(HashMap::new())) }; + + // Define the dial task + let (dial_task_def, dial_task) = Task::new(); + let (to_dial_send, to_dial_recv) = mpsc::unbounded_channel(); + tokio::spawn( + DialTask::new(serai.clone(), peers.clone(), to_dial_send) + .continually_run(dial_task_def, vec![]), + ); + + // Define the Validators object used for validating new connections + let connection_validators = UpdateValidatorsTask::spawn(serai.clone()); + let new_only_validators = |noise_keypair: &identity::Keypair| -> Result<_, ()> { + Ok(OnlyValidators { + serai_key: serai_key.clone(), + validators: connection_validators.clone(), + noise_keypair: noise_keypair.clone(), + }) + }; + + let new_yamux = || { + let mut config = yamux::Config::default(); + // 1 MiB default + max message size + config.set_max_buffer_size((1024 * 1024) + MAX_LIBP2P_MESSAGE_SIZE); + // 256 KiB default + max message size + config.set_receive_window_size(((256 * 1024) + MAX_LIBP2P_MESSAGE_SIZE).try_into().unwrap()); + config + }; + + let behavior = 
Behavior { reqres: reqres::new_behavior(), gossip: gossip::new_behavior() }; + + let mut swarm = SwarmBuilder::with_existing_identity(identity::Keypair::generate_ed25519()) + .with_tokio() + .with_tcp(TcpConfig::default().nodelay(false), new_only_validators, new_yamux) + .unwrap() + .with_behaviour(|_| behavior) + .unwrap() + .build(); + swarm.listen_on(format!("/ip4/0.0.0.0/tcp/{PORT}").parse().unwrap()).unwrap(); + swarm.listen_on(format!("/ip6/::/tcp/{PORT}").parse().unwrap()).unwrap(); + + let swarm_validators = UpdateValidatorsTask::spawn(serai); + + let (gossip_send, gossip_recv) = mpsc::unbounded_channel(); + let (signed_cosigns_send, signed_cosigns_recv) = mpsc::unbounded_channel(); + let (tributary_gossip_send, tributary_gossip_recv) = mpsc::unbounded_channel(); + + let (outbound_requests_send, outbound_requests_recv) = mpsc::unbounded_channel(); + + let (heartbeat_requests_send, heartbeat_requests_recv) = mpsc::unbounded_channel(); + let (notable_cosign_requests_send, notable_cosign_requests_recv) = mpsc::unbounded_channel(); + let (inbound_request_responses_send, inbound_request_responses_recv) = + mpsc::unbounded_channel(); + + // Create the swarm task + SwarmTask::spawn( + dial_task, + to_dial_recv, + swarm_validators, + peers, + swarm, + gossip_recv, + signed_cosigns_send, + tributary_gossip_send, + outbound_requests_recv, + heartbeat_requests_send, + notable_cosign_requests_send, + inbound_request_responses_recv, + ); + + // gossip_send, signed_cosigns_recv, tributary_gossip_recv, outbound_requests_send, + // heartbeat_requests_recv, notable_cosign_requests_recv, inbound_request_responses_send + todo!("TODO"); + } +} diff --git a/coordinator/src/p2p/reqres.rs b/coordinator/src/p2p/libp2p/reqres.rs similarity index 100% rename from coordinator/src/p2p/reqres.rs rename to coordinator/src/p2p/libp2p/reqres.rs diff --git a/coordinator/src/p2p/swarm.rs b/coordinator/src/p2p/libp2p/swarm.rs similarity index 99% rename from coordinator/src/p2p/swarm.rs rename to coordinator/src/p2p/libp2p/swarm.rs index 87440c92..3962e81b 100644 --- a/coordinator/src/p2p/swarm.rs +++ b/coordinator/src/p2p/libp2p/swarm.rs @@ -21,7 +21,7 @@ use libp2p::{ swarm::{dial_opts::DialOpts, SwarmEvent, Swarm}, }; -use crate::p2p::{ +use crate::p2p::libp2p::{ Peers, BehaviorEvent, Behavior, validators::Validators, reqres::{self, Request, Response}, @@ -286,7 +286,7 @@ impl SwarmTask { } #[allow(clippy::too_many_arguments)] - pub(crate) fn new( + pub(crate) fn spawn( dial_task: TaskHandle, to_dial: mpsc::UnboundedReceiver, diff --git a/coordinator/src/p2p/validators.rs b/coordinator/src/p2p/libp2p/validators.rs similarity index 99% rename from coordinator/src/p2p/validators.rs rename to coordinator/src/p2p/libp2p/validators.rs index 5a639148..b5be7c9e 100644 --- a/coordinator/src/p2p/validators.rs +++ b/coordinator/src/p2p/libp2p/validators.rs @@ -13,7 +13,7 @@ use libp2p::PeerId; use futures_util::stream::{StreamExt, FuturesUnordered}; use tokio::sync::RwLock; -use crate::p2p::peer_id_from_public; +use crate::p2p::libp2p::peer_id_from_public; pub(crate) struct Validators { serai: Serai, diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index ba09b273..534e44dc 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -1,176 +1,33 @@ -use std::{ - sync::Arc, - collections::{HashSet, HashMap}, -}; +use core::future::Future; -use zeroize::Zeroizing; -use schnorrkel::Keypair; +use tokio::time::error::Elapsed; -use serai_client::{ - primitives::{NetworkId, PublicKey}, - Serai, 
-}; +use borsh::{BorshSerialize, BorshDeserialize}; -use tokio::sync::{mpsc, RwLock}; +use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet}; -use serai_task::{Task, ContinuallyRan}; - -use libp2p::{ - multihash::Multihash, - identity::{self, PeerId}, - tcp::Config as TcpConfig, - yamux, - swarm::NetworkBehaviour, - SwarmBuilder, -}; - -/// A struct to sync the validators from the Serai node in order to keep track of them. -mod validators; -use validators::UpdateValidatorsTask; - -/// The authentication protocol upgrade to limit the P2P network to active validators. -mod authenticate; -use authenticate::OnlyValidators; - -/// The dial task, to find new peers to connect to -mod dial; -use dial::DialTask; - -/// The request-response messages and behavior -mod reqres; -use reqres::{Request, Response}; - -/// The gossip messages and behavior -mod gossip; +/// The libp2p-backed P2p network +mod libp2p; /// The heartbeat task, effecting sync of Tributaries mod heartbeat; -/// The swarm task, running it and dispatching to/from it -mod swarm; -use swarm::SwarmTask; - -const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o') - -// usize::max, manually implemented, as max isn't a const fn -const MAX_LIBP2P_MESSAGE_SIZE: usize = - if gossip::MAX_LIBP2P_GOSSIP_MESSAGE_SIZE > reqres::MAX_LIBP2P_REQRES_MESSAGE_SIZE { - gossip::MAX_LIBP2P_GOSSIP_MESSAGE_SIZE - } else { - reqres::MAX_LIBP2P_REQRES_MESSAGE_SIZE - }; - -fn peer_id_from_public(public: PublicKey) -> PeerId { - // 0 represents the identity Multihash, that no hash was performed - // It's an internal constant so we can't refer to the constant inside libp2p - PeerId::from_multihash(Multihash::wrap(0, &public.0).unwrap()).unwrap() +/// A tributary block and its commit. +#[derive(Clone, BorshSerialize, BorshDeserialize)] +pub(crate) struct TributaryBlockWithCommit { + pub(crate) block: Vec, + pub(crate) commit: Vec, } -struct Peer; -impl Peer { - async fn send(&self, request: Request) -> Result { - (async move { todo!("TODO") }).await - } +trait Peer: Send { + fn send_heartbeat( + &self, + set: ValidatorSet, + latest_block_hash: [u8; 32], + ) -> impl Send + Future, Elapsed>>; } -#[derive(Clone)] -struct Peers { - peers: Arc>>>, -} - -#[derive(Clone, Debug)] -struct P2p; -impl P2p { - async fn peers(&self, set: NetworkId) -> Vec { - (async move { todo!("TODO") }).await - } -} - -#[async_trait::async_trait] -impl tributary::P2p for P2p { - async fn broadcast(&self, genesis: [u8; 32], msg: Vec) { - todo!("TODO") - } -} - -#[derive(NetworkBehaviour)] -struct Behavior { - reqres: reqres::Behavior, - gossip: gossip::Behavior, -} - -pub(crate) fn new(serai_key: &Zeroizing, serai: Serai) -> P2p { - // Define the object we track peers with - let peers = Peers { peers: Arc::new(RwLock::new(HashMap::new())) }; - - // Define the dial task - let (dial_task_def, dial_task) = Task::new(); - let (to_dial_send, to_dial_recv) = mpsc::unbounded_channel(); - tokio::spawn( - DialTask::new(serai.clone(), peers.clone(), to_dial_send) - .continually_run(dial_task_def, vec![]), - ); - - // Define the Validators object used for validating new connections - let connection_validators = UpdateValidatorsTask::spawn(serai.clone()); - let new_only_validators = |noise_keypair: &identity::Keypair| -> Result<_, ()> { - Ok(OnlyValidators { - serai_key: serai_key.clone(), - validators: connection_validators.clone(), - noise_keypair: noise_keypair.clone(), - }) - }; - - let new_yamux = || { - let mut config = yamux::Config::default(); - // 1 MiB default + max 
message size - config.set_max_buffer_size((1024 * 1024) + MAX_LIBP2P_MESSAGE_SIZE); - // 256 KiB default + max message size - config.set_receive_window_size(((256 * 1024) + MAX_LIBP2P_MESSAGE_SIZE).try_into().unwrap()); - config - }; - - let behavior = Behavior { reqres: reqres::new_behavior(), gossip: gossip::new_behavior() }; - - let mut swarm = SwarmBuilder::with_existing_identity(identity::Keypair::generate_ed25519()) - .with_tokio() - .with_tcp(TcpConfig::default().nodelay(false), new_only_validators, new_yamux) - .unwrap() - .with_behaviour(|_| behavior) - .unwrap() - .build(); - swarm.listen_on(format!("/ip4/0.0.0.0/tcp/{PORT}").parse().unwrap()).unwrap(); - swarm.listen_on(format!("/ip6/::/tcp/{PORT}").parse().unwrap()).unwrap(); - - let swarm_validators = UpdateValidatorsTask::spawn(serai); - - let (gossip_send, gossip_recv) = mpsc::unbounded_channel(); - let (signed_cosigns_send, signed_cosigns_recv) = mpsc::unbounded_channel(); - let (tributary_gossip_send, tributary_gossip_recv) = mpsc::unbounded_channel(); - - let (outbound_requests_send, outbound_requests_recv) = mpsc::unbounded_channel(); - - let (heartbeat_requests_send, heartbeat_requests_recv) = mpsc::unbounded_channel(); - let (notable_cosign_requests_send, notable_cosign_requests_recv) = mpsc::unbounded_channel(); - let (inbound_request_responses_send, inbound_request_responses_recv) = mpsc::unbounded_channel(); - - // Create the swarm task - SwarmTask::new( - dial_task, - to_dial_recv, - swarm_validators, - peers, - swarm, - gossip_recv, - signed_cosigns_send, - tributary_gossip_send, - outbound_requests_recv, - heartbeat_requests_send, - notable_cosign_requests_send, - inbound_request_responses_recv, - ); - - // gossip_send, signed_cosigns_recv, tributary_gossip_recv, outbound_requests_send, - // heartbeat_requests_recv, notable_cosign_requests_recv, inbound_request_responses_send - todo!("TODO"); +trait P2p: Send + Sync + tributary::P2p { + type Peer: Peer; + fn peers(&self, network: NetworkId) -> impl Send + Future>; } From de2d6568a410bd64714cc2b01742453e6a01785e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 8 Jan 2025 17:40:08 -0500 Subject: [PATCH 262/368] Actually implement the Peer abstraction for Libp2p --- coordinator/src/p2p/heartbeat.rs | 8 +++- coordinator/src/p2p/libp2p/mod.rs | 69 ++++++++++++++++++++++++---- coordinator/src/p2p/libp2p/reqres.rs | 9 +--- coordinator/src/p2p/libp2p/swarm.rs | 15 ++---- coordinator/src/p2p/mod.rs | 10 ++-- 5 files changed, 77 insertions(+), 34 deletions(-) diff --git a/coordinator/src/p2p/heartbeat.rs b/coordinator/src/p2p/heartbeat.rs index 0f000dcc..025bfd73 100644 --- a/coordinator/src/p2p/heartbeat.rs +++ b/coordinator/src/p2p/heartbeat.rs @@ -1,9 +1,10 @@ use core::future::Future; - use std::time::{Duration, SystemTime}; use serai_client::validator_sets::primitives::ValidatorSet; +use futures_util::FutureExt; + use tributary::{ReadWrite, Block, Tributary, TributaryReader}; use serai_db::*; @@ -71,7 +72,10 @@ impl ContinuallyRan for HeartbeatTask { tip = self.reader.tip(); tip_is_stale = false; } - let Ok(blocks) = peer.send_heartbeat(self.set, tip).await else { continue 'peer }; + // Necessary due to https://github.com/rust-lang/rust/issues/100013 + let Some(blocks) = peer.send_heartbeat(self.set, tip).boxed().await else { + continue 'peer; + }; // This is the final batch if it has less than the maximum amount of blocks // (signifying there weren't more blocks after this to fill the batch with) diff --git a/coordinator/src/p2p/libp2p/mod.rs 
b/coordinator/src/p2p/libp2p/mod.rs index b103a63f..79f06c19 100644 --- a/coordinator/src/p2p/libp2p/mod.rs +++ b/coordinator/src/p2p/libp2p/mod.rs @@ -1,4 +1,4 @@ -use core::future::Future; +use core::{future::Future, time::Duration}; use std::{ sync::Arc, collections::{HashSet, HashMap}, @@ -9,10 +9,11 @@ use schnorrkel::Keypair; use serai_client::{ primitives::{NetworkId, PublicKey}, + validator_sets::primitives::ValidatorSet, Serai, }; -use tokio::sync::{mpsc, RwLock}; +use tokio::sync::{mpsc, oneshot, RwLock}; use serai_task::{Task, ContinuallyRan}; @@ -25,6 +26,8 @@ use libp2p::{ SwarmBuilder, }; +use crate::p2p::TributaryBlockWithCommit; + /// A struct to sync the validators from the Serai node in order to keep track of them. mod validators; use validators::UpdateValidatorsTask; @@ -64,10 +67,31 @@ fn peer_id_from_public(public: PublicKey) -> PeerId { PeerId::from_multihash(Multihash::wrap(0, &public.0).unwrap()).unwrap() } -struct Peer; -impl Peer { - async fn send(&self, request: Request) -> Result { - (async move { todo!("TODO") }).await +struct Peer<'a> { + outbound_requests: &'a mpsc::UnboundedSender<(PeerId, Request, oneshot::Sender)>, + id: PeerId, +} +impl crate::p2p::Peer<'_> for Peer<'_> { + fn send_heartbeat( + &self, + set: ValidatorSet, + latest_block_hash: [u8; 32], + ) -> impl Send + Future>> { + const HEARBEAT_TIMEOUT: Duration = Duration::from_secs(5); + async move { + let request = Request::Heartbeat { set, latest_block_hash }; + let (sender, receiver) = oneshot::channel(); + self + .outbound_requests + .send((self.id, request, sender)) + .expect("outbound requests recv channel was dropped?"); + match tokio::time::timeout(HEARBEAT_TIMEOUT, receiver).await.ok()?.ok()? { + Response::None => Some(vec![]), + Response::Blocks(blocks) => Some(blocks), + // TODO: Disconnect this peer + Response::NotableCosigns(_) => None, + } + } } } @@ -82,9 +106,14 @@ struct Behavior { gossip: gossip::Behavior, } -struct LibP2p; -impl LibP2p { - pub(crate) fn new(serai_key: &Zeroizing, serai: Serai) -> LibP2p { +#[derive(Clone)] +struct Libp2p { + peers: Peers, + outbound_requests: mpsc::UnboundedSender<(PeerId, Request, oneshot::Sender)>, +} + +impl Libp2p { + pub(crate) fn new(serai_key: &Zeroizing, serai: Serai) -> Libp2p { // Define the object we track peers with let peers = Peers { peers: Arc::new(RwLock::new(HashMap::new())) }; @@ -161,3 +190,25 @@ impl LibP2p { todo!("TODO"); } } + +impl tributary::P2p for Libp2p { + fn broadcast(&self, genesis: [u8; 32], msg: Vec) -> impl Send + Future { + async move { todo!("TODO") } + } +} + +impl crate::p2p::P2p for Libp2p { + type Peer<'a> = Peer<'a>; + fn peers(&self, network: NetworkId) -> impl Send + Future>> { + async move { + let Some(peer_ids) = self.peers.peers.read().await.get(&network).cloned() else { + return vec![]; + }; + let mut res = vec![]; + for id in peer_ids { + res.push(Peer { outbound_requests: &self.outbound_requests, id }); + } + res + } + } +} diff --git a/coordinator/src/p2p/libp2p/reqres.rs b/coordinator/src/p2p/libp2p/reqres.rs index cf7575e4..e3d761e5 100644 --- a/coordinator/src/p2p/libp2p/reqres.rs +++ b/coordinator/src/p2p/libp2p/reqres.rs @@ -15,6 +15,8 @@ pub use request_response::Message; use serai_cosign::SignedCosign; +use crate::p2p::TributaryBlockWithCommit; + /// The maximum message size for the request-response protocol // This is derived from the heartbeat message size as it's our largest message pub(crate) const MAX_LIBP2P_REQRES_MESSAGE_SIZE: usize = @@ -36,13 +38,6 @@ pub(crate) enum Request { 
NotableCosigns { global_session: [u8; 32] }, } -/// A tributary block and its commit. -#[derive(Clone, BorshSerialize, BorshDeserialize)] -pub(crate) struct TributaryBlockWithCommit { - pub(crate) block: Vec, - pub(crate) commit: Vec, -} - /// Responses which can be received via the request-response protocol. #[derive(Clone, BorshSerialize, BorshDeserialize)] pub(crate) enum Response { diff --git a/coordinator/src/p2p/libp2p/swarm.rs b/coordinator/src/p2p/libp2p/swarm.rs index 3962e81b..615295f4 100644 --- a/coordinator/src/p2p/libp2p/swarm.rs +++ b/coordinator/src/p2p/libp2p/swarm.rs @@ -63,8 +63,8 @@ pub(crate) struct SwarmTask { signed_cosigns: mpsc::UnboundedSender, tributary_gossip: mpsc::UnboundedSender<(ValidatorSet, Vec)>, - outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender>)>, - outbound_request_responses: HashMap>>, + outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender)>, + outbound_request_responses: HashMap>, inbound_request_response_channels: HashMap>, heartbeat_requests: mpsc::UnboundedSender<(RequestId, ValidatorSet, [u8; 32])>, @@ -120,16 +120,15 @@ impl SwarmTask { } }, reqres::Message::Response { request_id, response } => { - // Send Some(response) as the response for the request if let Some(channel) = self.outbound_request_responses.remove(&request_id) { - let _: Result<_, _> = channel.send(Some(response)); + let _: Result<_, _> = channel.send(response); } } }, reqres::Event::OutboundFailure { request_id, .. } => { // Send None as the response for the request if let Some(channel) = self.outbound_request_responses.remove(&request_id) { - let _: Result<_, _> = channel.send(None); + let _: Result<_, _> = channel.send(Response::None); } } reqres::Event::InboundFailure { .. } | reqres::Event::ResponseSent { .. 
} => {} @@ -299,11 +298,7 @@ impl SwarmTask { signed_cosigns: mpsc::UnboundedSender, tributary_gossip: mpsc::UnboundedSender<(ValidatorSet, Vec)>, - outbound_requests: mpsc::UnboundedReceiver<( - PeerId, - Request, - oneshot::Sender>, - )>, + outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender)>, heartbeat_requests: mpsc::UnboundedSender<(RequestId, ValidatorSet, [u8; 32])>, notable_cosign_requests: mpsc::UnboundedSender<(RequestId, [u8; 32])>, diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index 534e44dc..414e4ec3 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -1,7 +1,5 @@ use core::future::Future; -use tokio::time::error::Elapsed; - use borsh::{BorshSerialize, BorshDeserialize}; use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet}; @@ -19,15 +17,15 @@ pub(crate) struct TributaryBlockWithCommit { pub(crate) commit: Vec, } -trait Peer: Send { +trait Peer<'a>: Send { fn send_heartbeat( &self, set: ValidatorSet, latest_block_hash: [u8; 32], - ) -> impl Send + Future, Elapsed>>; + ) -> impl Send + Future>>; } trait P2p: Send + Sync + tributary::P2p { - type Peer: Peer; - fn peers(&self, network: NetworkId) -> impl Send + Future>; + type Peer<'a>: Peer<'a>; + fn peers(&self, network: NetworkId) -> impl Send + Future>>; } From b2bd5d3a44d9f4c0a47e3c0ea3a0a39d028a14ff Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 8 Jan 2025 17:40:32 -0500 Subject: [PATCH 263/368] Remove Debug bound on tributary::P2p --- coordinator/tributary/src/lib.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 3e946381..2e4a6115 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -129,7 +129,7 @@ pub trait ReadWrite: Sized { } } -pub trait P2p: 'static + Send + Sync + Clone + Debug { +pub trait P2p: 'static + Send + Sync + Clone { /// Broadcast a message to all other members of the Tributary with the specified genesis. 
/// /// The Tributary will re-broadcast consensus messages on a fixed interval to ensure they aren't From ce83b41712b5cdb3fdaf4fbe655daf2bddeab2e5 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 8 Jan 2025 19:39:09 -0500 Subject: [PATCH 264/368] Finish mapping Libp2p to the P2p trait API --- coordinator/src/p2p/libp2p/gossip.rs | 10 +- coordinator/src/p2p/libp2p/mod.rs | 195 ++++++++++++++++++++++++--- coordinator/src/p2p/libp2p/reqres.rs | 2 +- coordinator/src/p2p/libp2p/swarm.rs | 11 +- coordinator/src/p2p/mod.rs | 32 ++++- 5 files changed, 222 insertions(+), 28 deletions(-) diff --git a/coordinator/src/p2p/libp2p/gossip.rs b/coordinator/src/p2p/libp2p/gossip.rs index 99196fb6..66a0b24a 100644 --- a/coordinator/src/p2p/libp2p/gossip.rs +++ b/coordinator/src/p2p/libp2p/gossip.rs @@ -2,9 +2,7 @@ use core::time::Duration; use blake2::{Digest, Blake2s256}; -use scale::Encode; use borsh::{BorshSerialize, BorshDeserialize}; -use serai_client::validator_sets::primitives::ValidatorSet; use libp2p::gossipsub::{ TopicHash, IdentTopic, MessageId, MessageAuthenticity, ValidationMode, ConfigBuilder, @@ -22,20 +20,20 @@ const KEEP_ALIVE_INTERVAL: Duration = Duration::from_secs(80); const LIBP2P_PROTOCOL: &str = "/serai/coordinator/gossip/1.0.0"; const BASE_TOPIC: &str = "/"; -fn topic_for_set(set: ValidatorSet) -> IdentTopic { - IdentTopic::new(format!("/set/{}", hex::encode(set.encode()))) +fn topic_for_tributary(tributary: [u8; 32]) -> IdentTopic { + IdentTopic::new(format!("/tributary/{}", hex::encode(tributary))) } #[derive(Clone, BorshSerialize, BorshDeserialize)] pub(crate) enum Message { - Tributary { set: ValidatorSet, message: Vec }, + Tributary { tributary: [u8; 32], message: Vec }, Cosign(SignedCosign), } impl Message { pub(crate) fn topic(&self) -> TopicHash { match self { - Message::Tributary { set, .. } => topic_for_set(*set).hash(), + Message::Tributary { tributary, .. } => topic_for_tributary(*tributary).hash(), Message::Cosign(_) => IdentTopic::new(BASE_TOPIC).hash(), } } diff --git a/coordinator/src/p2p/libp2p/mod.rs b/coordinator/src/p2p/libp2p/mod.rs index 79f06c19..93db7c88 100644 --- a/coordinator/src/p2p/libp2p/mod.rs +++ b/coordinator/src/p2p/libp2p/mod.rs @@ -4,6 +4,8 @@ use std::{ collections::{HashSet, HashMap}, }; +use rand_core::{RngCore, OsRng}; + use zeroize::Zeroizing; use schnorrkel::Keypair; @@ -13,10 +15,12 @@ use serai_client::{ Serai, }; -use tokio::sync::{mpsc, oneshot, RwLock}; +use tokio::sync::{mpsc, oneshot, Mutex, RwLock}; use serai_task::{Task, ContinuallyRan}; +use serai_cosign::SignedCosign; + use libp2p::{ multihash::Multihash, identity::{self, PeerId}, @@ -42,10 +46,11 @@ use dial::DialTask; /// The request-response messages and behavior mod reqres; -use reqres::{Request, Response}; +use reqres::{RequestId, Request, Response}; /// The gossip messages and behavior mod gossip; +use gossip::Message; /// The swarm task, running it and dispatching to/from it mod swarm; @@ -77,19 +82,21 @@ impl crate::p2p::Peer<'_> for Peer<'_> { set: ValidatorSet, latest_block_hash: [u8; 32], ) -> impl Send + Future>> { - const HEARBEAT_TIMEOUT: Duration = Duration::from_secs(5); async move { + const HEARTBEAT_TIMEOUT: Duration = Duration::from_secs(5); + let request = Request::Heartbeat { set, latest_block_hash }; let (sender, receiver) = oneshot::channel(); self .outbound_requests .send((self.id, request, sender)) .expect("outbound requests recv channel was dropped?"); - match tokio::time::timeout(HEARBEAT_TIMEOUT, receiver).await.ok()?.ok()? 
{ - Response::None => Some(vec![]), - Response::Blocks(blocks) => Some(blocks), - // TODO: Disconnect this peer - Response::NotableCosigns(_) => None, + if let Ok(Ok(Response::Blocks(blocks))) = + tokio::time::timeout(HEARTBEAT_TIMEOUT, receiver).await + { + Some(blocks) + } else { + None } } } @@ -109,7 +116,18 @@ struct Behavior { #[derive(Clone)] struct Libp2p { peers: Peers, + + gossip: mpsc::UnboundedSender, outbound_requests: mpsc::UnboundedSender<(PeerId, Request, oneshot::Sender)>, + + tributary_gossip: Arc)>>>, + + signed_cosigns: Arc>>, + signed_cosigns_send: mpsc::UnboundedSender, + + heartbeat_requests: Arc>>, + notable_cosign_requests: Arc>>, + inbound_request_responses: mpsc::UnboundedSender<(RequestId, Response)>, } impl Libp2p { @@ -174,10 +192,10 @@ impl Libp2p { dial_task, to_dial_recv, swarm_validators, - peers, + peers.clone(), swarm, gossip_recv, - signed_cosigns_send, + signed_cosigns_send.clone(), tributary_gossip_send, outbound_requests_recv, heartbeat_requests_send, @@ -185,20 +203,92 @@ impl Libp2p { inbound_request_responses_recv, ); - // gossip_send, signed_cosigns_recv, tributary_gossip_recv, outbound_requests_send, - // heartbeat_requests_recv, notable_cosign_requests_recv, inbound_request_responses_send - todo!("TODO"); + Libp2p { + peers, + + gossip: gossip_send, + outbound_requests: outbound_requests_send, + + tributary_gossip: Arc::new(Mutex::new(tributary_gossip_recv)), + + signed_cosigns: Arc::new(Mutex::new(signed_cosigns_recv)), + signed_cosigns_send, + + heartbeat_requests: Arc::new(Mutex::new(heartbeat_requests_recv)), + notable_cosign_requests: Arc::new(Mutex::new(notable_cosign_requests_recv)), + inbound_request_responses: inbound_request_responses_send, + } } } impl tributary::P2p for Libp2p { - fn broadcast(&self, genesis: [u8; 32], msg: Vec) -> impl Send + Future { - async move { todo!("TODO") } + fn broadcast(&self, tributary: [u8; 32], message: Vec) -> impl Send + Future { + async move { + self + .gossip + .send(Message::Tributary { tributary, message }) + .expect("gossip recv channel was dropped?"); + } + } +} + +impl serai_cosign::RequestNotableCosigns for Libp2p { + type Error = (); + + fn request_notable_cosigns( + &self, + global_session: [u8; 32], + ) -> impl Send + Future> { + async move { + const AMOUNT_OF_PEERS_TO_REQUEST_FROM: usize = 3; + const NOTABLE_COSIGNS_TIMEOUT: Duration = Duration::from_secs(5); + + let request = Request::NotableCosigns { global_session }; + + let peers = self.peers.peers.read().await.clone(); + // HashSet of all peers + let peers = peers.into_values().flat_map(<_>::into_iter).collect::>(); + // Vec of all peers + let mut peers = peers.into_iter().collect::>(); + + let mut channels = Vec::with_capacity(AMOUNT_OF_PEERS_TO_REQUEST_FROM); + for _ in 0 .. 
AMOUNT_OF_PEERS_TO_REQUEST_FROM { + if peers.is_empty() { + break; + } + let i = usize::try_from(OsRng.next_u64() % u64::try_from(peers.len()).unwrap()).unwrap(); + let peer = peers.swap_remove(i); + + let (sender, receiver) = oneshot::channel(); + self + .outbound_requests + .send((peer, request, sender)) + .expect("outbound requests recv channel was dropped?"); + channels.push(receiver); + } + + // We could reduce our latency by using FuturesUnordered here but the latency isn't a concern + for channel in channels { + if let Ok(Ok(Response::NotableCosigns(cosigns))) = + tokio::time::timeout(NOTABLE_COSIGNS_TIMEOUT, channel).await + { + for cosign in cosigns { + self + .signed_cosigns_send + .send(cosign) + .expect("signed_cosigns recv in this object was dropped?"); + } + } + } + + Ok(()) + } } } impl crate::p2p::P2p for Libp2p { type Peer<'a> = Peer<'a>; + fn peers(&self, network: NetworkId) -> impl Send + Future>> { async move { let Some(peer_ids) = self.peers.peers.read().await.get(&network).cloned() else { @@ -211,4 +301,79 @@ impl crate::p2p::P2p for Libp2p { res } } + + fn heartbeat( + &self, + ) -> impl Send + + Future>)> + { + async move { + let (request_id, set, latest_block_hash) = self + .heartbeat_requests + .lock() + .await + .recv() + .await + .expect("heartbeat_requests_send was dropped?"); + let (sender, receiver) = oneshot::channel(); + tokio::spawn({ + let respond = self.inbound_request_responses.clone(); + async move { + let response = + if let Ok(blocks) = receiver.await { Response::Blocks(blocks) } else { Response::None }; + respond + .send((request_id, response)) + .expect("inbound_request_responses_recv was dropped?"); + } + }); + (set, latest_block_hash, sender) + } + } + + fn notable_cosigns_request( + &self, + ) -> impl Send + Future>)> { + async move { + let (request_id, global_session) = self + .notable_cosign_requests + .lock() + .await + .recv() + .await + .expect("notable_cosign_requests_send was dropped?"); + let (sender, receiver) = oneshot::channel(); + tokio::spawn({ + let respond = self.inbound_request_responses.clone(); + async move { + let response = if let Ok(notable_cosigns) = receiver.await { + Response::NotableCosigns(notable_cosigns) + } else { + Response::None + }; + respond + .send((request_id, response)) + .expect("inbound_request_responses_recv was dropped?"); + } + }); + (global_session, sender) + } + } + + fn tributary_message(&self) -> impl Send + Future)> { + async move { + self.tributary_gossip.lock().await.recv().await.expect("tributary_gossip send was dropped?") + } + } + + fn cosign(&self) -> impl Send + Future { + async move { + self + .signed_cosigns + .lock() + .await + .recv() + .await + .expect("signed_cosigns couldn't recv despite send in same object?") + } + } } diff --git a/coordinator/src/p2p/libp2p/reqres.rs b/coordinator/src/p2p/libp2p/reqres.rs index e3d761e5..f58abc8b 100644 --- a/coordinator/src/p2p/libp2p/reqres.rs +++ b/coordinator/src/p2p/libp2p/reqres.rs @@ -11,7 +11,7 @@ use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt}; use libp2p::request_response::{ self, Codec as CodecTrait, Event as GenericEvent, Config, Behaviour, ProtocolSupport, }; -pub use request_response::Message; +pub use request_response::{RequestId, Message}; use serai_cosign::SignedCosign; diff --git a/coordinator/src/p2p/libp2p/swarm.rs b/coordinator/src/p2p/libp2p/swarm.rs index 615295f4..148e615f 100644 --- a/coordinator/src/p2p/libp2p/swarm.rs +++ b/coordinator/src/p2p/libp2p/swarm.rs @@ -61,7 +61,7 @@ pub(crate) struct 
SwarmTask { gossip: mpsc::UnboundedReceiver, signed_cosigns: mpsc::UnboundedSender, - tributary_gossip: mpsc::UnboundedSender<(ValidatorSet, Vec)>, + tributary_gossip: mpsc::UnboundedSender<([u8; 32], Vec)>, outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender)>, outbound_request_responses: HashMap>, @@ -82,12 +82,13 @@ impl SwarmTask { match event { gossip::Event::Message { message, .. } => { let Ok(message) = gossip::Message::deserialize(&mut message.data.as_slice()) else { - // TODO: Penalize the PeerId which sent this message + // TODO: Penalize the PeerId which created this message, which requires authenticating + // each message OR moving to explicit acknowledgement before re-gossiping return; }; match message { - gossip::Message::Tributary { set, message } => { - let _: Result<_, _> = self.tributary_gossip.send((set, message)); + gossip::Message::Tributary { tributary, message } => { + let _: Result<_, _> = self.tributary_gossip.send((tributary, message)); } gossip::Message::Cosign(signed_cosign) => { let _: Result<_, _> = self.signed_cosigns.send(signed_cosign); @@ -296,7 +297,7 @@ impl SwarmTask { gossip: mpsc::UnboundedReceiver, signed_cosigns: mpsc::UnboundedSender, - tributary_gossip: mpsc::UnboundedSender<(ValidatorSet, Vec)>, + tributary_gossip: mpsc::UnboundedSender<([u8; 32], Vec)>, outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender)>, diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index 414e4ec3..9c501973 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -4,6 +4,10 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet}; +use tokio::sync::oneshot; + +use serai_cosign::SignedCosign; + /// The libp2p-backed P2p network mod libp2p; /// The heartbeat task, effecting sync of Tributaries @@ -25,7 +29,33 @@ trait Peer<'a>: Send { ) -> impl Send + Future>>; } -trait P2p: Send + Sync + tributary::P2p { +trait P2p: Send + Sync + tributary::P2p + serai_cosign::RequestNotableCosigns { type Peer<'a>: Peer<'a>; + + /// Fetch the peers for this network. fn peers(&self, network: NetworkId) -> impl Send + Future>>; + + /// A cancel-safe future for the next heartbeat received over the P2P network. + /// + /// Yields the validator set it's for, the latest block hash observed, and a channel to return the + /// descending blocks. + fn heartbeat( + &self, + ) -> impl Send + + Future>)>; + + /// A cancel-safe future for the next request for the notable cosigns of a global session. + /// + /// Yields the global session the request is for and a channel to return the notable cosigns. + fn notable_cosigns_request( + &self, + ) -> impl Send + Future>)>; + + /// A cancel-safe future for the next message regarding a Tributary. + /// + /// Yields the message's Tributary's genesis block hash and the message. + fn tributary_message(&self) -> impl Send + Future)>; + + /// A cancel-safe future for the next cosign received. + fn cosign(&self) -> impl Send + Future; } From 20326bba733ea87c1c6e15767c4507190325e2ab Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 8 Jan 2025 23:01:09 -0500 Subject: [PATCH 265/368] Replace KeepAlive with ping This is more standard and allows measuring latency.
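For illustration, the ping behaviour reports a result per round-trip: the measured RTT on success, or a failure (such as a timeout) otherwise. A minimal sketch of consuming those results, with a hypothetical `handle_ping_event` helper (not from this patch) deciding whether to keep the connection:

use libp2p::ping;

// Sketch: returns true if the connection should be kept, false if it should
// be closed. On success, `event.result` carries the measured round-trip time.
fn handle_ping_event(event: &ping::Event) -> bool {
  match &event.result {
    Ok(rtt) => {
      // The reported RTT is what makes latency measurable per connection
      log::trace!("ping RTT for {}: {:?}", event.peer, rtt);
      true
    }
    // Failures include pings which exceeded the configured timeout
    Err(_) => false,
  }
}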
--- coordinator/Cargo.toml | 2 +- coordinator/src/p2p/libp2p/gossip.rs | 1 - coordinator/src/p2p/libp2p/mod.rs | 15 ++++++++++++--- coordinator/src/p2p/libp2p/ping.rs | 17 +++++++++++++++++ coordinator/src/p2p/libp2p/reqres.rs | 2 -- coordinator/src/p2p/libp2p/swarm.rs | 23 ++++++++--------------- 6 files changed, 38 insertions(+), 22 deletions(-) create mode 100644 coordinator/src/p2p/libp2p/ping.rs diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index d0f8cb24..2fc373aa 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -55,7 +55,7 @@ env_logger = { version = "0.10", default-features = false, features = ["humantim futures-util = { version = "0.3", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } -libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "request-response", "gossipsub", "macros"] } +libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] } serai-cosign = { path = "./cosign" } diff --git a/coordinator/src/p2p/libp2p/gossip.rs b/coordinator/src/p2p/libp2p/gossip.rs index 66a0b24a..4d75d9ea 100644 --- a/coordinator/src/p2p/libp2p/gossip.rs +++ b/coordinator/src/p2p/libp2p/gossip.rs @@ -58,7 +58,6 @@ pub(crate) fn new_behavior() -> Behavior { .history_gossip(usize::try_from(heartbeats_to_gossip).unwrap()) .heartbeat_interval(Duration::from_millis(heartbeat_interval.into())) .max_transmit_size(MAX_LIBP2P_GOSSIP_MESSAGE_SIZE) - .idle_timeout(KEEP_ALIVE_INTERVAL + Duration::from_secs(5)) .duplicate_cache_time(Duration::from_millis((heartbeats_to_keep * heartbeat_interval).into())) .validation_mode(ValidationMode::Anonymous) // Uses a content based message ID to avoid duplicates as much as possible diff --git a/coordinator/src/p2p/libp2p/mod.rs b/coordinator/src/p2p/libp2p/mod.rs index 93db7c88..ce60d285 100644 --- a/coordinator/src/p2p/libp2p/mod.rs +++ b/coordinator/src/p2p/libp2p/mod.rs @@ -44,6 +44,9 @@ use authenticate::OnlyValidators; mod dial; use dial::DialTask; +/// The ping behavior, used to ensure connection latency is below the limit +mod ping; + /// The request-response messages and behavior mod reqres; use reqres::{RequestId, Request, Response}; @@ -109,6 +112,7 @@ struct Peers { #[derive(NetworkBehaviour)] struct Behavior { + ping: ping::Behavior, reqres: reqres::Behavior, gossip: gossip::Behavior, } @@ -162,14 +166,19 @@ impl Libp2p { config }; - let behavior = Behavior { reqres: reqres::new_behavior(), gossip: gossip::new_behavior() }; - let mut swarm = SwarmBuilder::with_existing_identity(identity::Keypair::generate_ed25519()) .with_tokio() .with_tcp(TcpConfig::default().nodelay(false), new_only_validators, new_yamux) .unwrap() - .with_behaviour(|_| behavior) + .with_behaviour(|_| Behavior { + ping: ping::new_behavior(), + reqres: reqres::new_behavior(), + gossip: gossip::new_behavior(), + }) .unwrap() + .with_swarm_config(|config| { + config.with_idle_connection_timeout(ping::INTERVAL + ping::TIMEOUT + Duration::from_secs(5)) + }) .build(); swarm.listen_on(format!("/ip4/0.0.0.0/tcp/{PORT}").parse().unwrap()).unwrap(); swarm.listen_on(format!("/ip6/::/tcp/{PORT}").parse().unwrap()).unwrap(); diff --git a/coordinator/src/p2p/libp2p/ping.rs b/coordinator/src/p2p/libp2p/ping.rs new file mode 100644 index 00000000..d579af05 --- /dev/null +++ b/coordinator/src/p2p/libp2p/ping.rs @@ -0,0 +1,17 @@ 
+use core::time::Duration; + +use tributary::tendermint::LATENCY_TIME; + +use libp2p::ping::{self, Config, Behaviour}; +pub use ping::Event; + +pub(crate) const INTERVAL: Duration = Duration::from_secs(30); +// LATENCY_TIME represents the maximum latency for message delivery. Sending the ping, and +// receiving the pong, each have to occur within this time bound to validate the connection. We +// enforce that, as best we can, by requiring the round-trip be within twice the allowed latency. +pub(crate) const TIMEOUT: Duration = Duration::from_millis((2 * LATENCY_TIME) as u64); + +pub(crate) type Behavior = Behaviour; +pub(crate) fn new_behavior() -> Behavior { + Behavior::new(Config::default().with_interval(INTERVAL).with_timeout(TIMEOUT)) +} diff --git a/coordinator/src/p2p/libp2p/reqres.rs b/coordinator/src/p2p/libp2p/reqres.rs index f58abc8b..8fe02c30 100644 --- a/coordinator/src/p2p/libp2p/reqres.rs +++ b/coordinator/src/p2p/libp2p/reqres.rs @@ -27,8 +27,6 @@ const PROTOCOL: &str = "/serai/coordinator/reqres/1.0.0"; /// Requests which can be made via the request-response protocol. #[derive(Clone, Copy, Debug, BorshSerialize, BorshDeserialize)] pub(crate) enum Request { - /// A keep-alive to prevent our connections from being dropped. - KeepAlive, /// A heartbeat informing our peers of our latest block, for the specified blockchain, on regular /// intervals. /// diff --git a/coordinator/src/p2p/libp2p/swarm.rs b/coordinator/src/p2p/libp2p/swarm.rs index 148e615f..63f8f734 100644 --- a/coordinator/src/p2p/libp2p/swarm.rs +++ b/coordinator/src/p2p/libp2p/swarm.rs @@ -24,11 +24,11 @@ use libp2p::{ use crate::p2p::libp2p::{ Peers, BehaviorEvent, Behavior, validators::Validators, + ping, reqres::{self, Request, Response}, gossip, }; -const KEEP_ALIVE_INTERVAL: Duration = Duration::from_secs(80); const TIME_BETWEEN_REBUILD_PEERS: Duration = Duration::from_secs(10 * 60); /* @@ -106,10 +106,6 @@ impl SwarmTask { match event { reqres::Event::Message { message, .. } => match message { reqres::Message::Request { request_id, request, channel } => match request { - reqres::Request::KeepAlive => { - let _: Result<_, _> = - self.swarm.behaviour_mut().reqres.send_response(channel, Response::None); - } reqres::Request::Heartbeat { set, latest_block_hash } => { self.inbound_request_response_channels.insert(request_id, channel); let _: Result<_, _> = @@ -138,19 +134,9 @@ impl SwarmTask { async fn run(mut self) { loop { - let time_till_keep_alive = Instant::now().saturating_duration_since(self.last_message); let time_till_rebuild_peers = self.rebuild_peers_at.saturating_duration_since(Instant::now()); tokio::select! 
{ - () = tokio::time::sleep(time_till_keep_alive) => { - let peers = self.swarm.connected_peers().copied().collect::>(); - let behavior = self.swarm.behaviour_mut(); - for peer in peers { - behavior.reqres.send_request(&peer, Request::KeepAlive); - } - self.last_message = Instant::now(); - } - // Dial peers we're instructed to dial_opts = self.to_dial.recv() => { let dial_opts = dial_opts.expect("DialTask was closed?"); @@ -239,6 +225,13 @@ impl SwarmTask { } } + SwarmEvent::Behaviour( + BehaviorEvent::Ping(ping::Event { peer: _, connection, result, }) + ) => { + if result.is_err() { + self.swarm.close_connection(connection); + } + } SwarmEvent::Behaviour(BehaviorEvent::Reqres(event)) => { self.handle_reqres(event) } From 6cde2bb6ef8be11fe239bd61f7148b07e93527eb Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 8 Jan 2025 23:16:04 -0500 Subject: [PATCH 266/368] Correct and document topic subscription --- coordinator/src/p2p/libp2p/gossip.rs | 6 +++--- coordinator/src/p2p/libp2p/swarm.rs | 31 ++++++++++++++++++++++------ 2 files changed, 28 insertions(+), 9 deletions(-) diff --git a/coordinator/src/p2p/libp2p/gossip.rs b/coordinator/src/p2p/libp2p/gossip.rs index 4d75d9ea..db5af299 100644 --- a/coordinator/src/p2p/libp2p/gossip.rs +++ b/coordinator/src/p2p/libp2p/gossip.rs @@ -31,10 +31,10 @@ pub(crate) enum Message { } impl Message { - pub(crate) fn topic(&self) -> TopicHash { + pub(crate) fn topic(&self) -> IdentTopic { match self { - Message::Tributary { tributary, .. } => topic_for_tributary(*tributary).hash(), - Message::Cosign(_) => IdentTopic::new(BASE_TOPIC).hash(), + Message::Tributary { tributary, .. } => topic_for_tributary(*tributary), + Message::Cosign(_) => IdentTopic::new(BASE_TOPIC), } } } diff --git a/coordinator/src/p2p/libp2p/swarm.rs b/coordinator/src/p2p/libp2p/swarm.rs index 63f8f734..f4c5d7fe 100644 --- a/coordinator/src/p2p/libp2p/swarm.rs +++ b/coordinator/src/p2p/libp2p/swarm.rs @@ -57,8 +57,6 @@ pub(crate) struct SwarmTask { swarm: Swarm, - last_message: Instant, - gossip: mpsc::UnboundedReceiver, signed_cosigns: mpsc::UnboundedSender, tributary_gossip: mpsc::UnboundedSender<([u8; 32], Vec)>, @@ -255,8 +253,31 @@ impl SwarmTask { let message = message.expect("channel for messages to gossip was closed?"); let topic = message.topic(); let message = borsh::to_vec(&message).unwrap(); - let _: Result<_, _> = self.swarm.behaviour_mut().gossip.publish(topic, message); - self.last_message = Instant::now(); + + /* + If we're sending a message for this topic, it's because this topic is relevant to us. + Subscribe to it. + + We create topics roughly weekly, one per validator set/session. Once present in a + topic, we're interested in all messages for it until the validator set/session retires. + Then there should no longer be any messages for the topic as we should drop the + Tributary which creates the messages. + + We use this as an argument to not bother implementing unsubscribing from topics. They're + incredibly infrequently created and old topics shouldn't still have messages published + to them. Having the coordinator reboot be our method of unsubscribing is fine. + + Alternatively, we could route an API to determine when a topic is retired, or retire + any topics we haven't sent messages on in the past hour. + */ + let behavior = self.swarm.behaviour_mut(); + let _: Result<_, _> = behavior.gossip.subscribe(&topic); + /* + This may error with `InsufficientPeers`. If so, we could ask DialTask to dial more + peers for this network.
We don't as we assume DialTask will detect the lack of peers + for this network, and will already successfully handle this. + */ + let _: Result<_, _> = behavior.gossip.publish(topic.hash(), message); } request = self.outbound_requests.recv() => { @@ -310,8 +331,6 @@ impl SwarmTask { swarm, - last_message: Instant::now(), - gossip, signed_cosigns, tributary_gossip, From 75a00f2a1a3147f90350d6e31dbbf15035205d41 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 8 Jan 2025 23:54:27 -0500 Subject: [PATCH 267/368] Add allow_block_list to libp2p The check in validators prevented connections from non-validators. Non-validators could still participate in the network if they laundered their connection through a malicious validator. allow_block_list ensures that peers, not connections, are explicitly limited to validators. --- coordinator/src/p2p/libp2p/authenticate.rs | 10 +-- coordinator/src/p2p/libp2p/dial.rs | 2 +- coordinator/src/p2p/libp2p/mod.rs | 74 +++++++++++----------- coordinator/src/p2p/libp2p/swarm.rs | 32 ++++++---- coordinator/src/p2p/libp2p/validators.rs | 41 +++++++++--- 5 files changed, 94 insertions(+), 65 deletions(-) diff --git a/coordinator/src/p2p/libp2p/authenticate.rs b/coordinator/src/p2p/libp2p/authenticate.rs index d00d0dac..a8167d9e 100644 --- a/coordinator/src/p2p/libp2p/authenticate.rs +++ b/coordinator/src/p2p/libp2p/authenticate.rs @@ -19,13 +19,12 @@ use libp2p::{ noise, }; -use crate::p2p::libp2p::{validators::Validators, peer_id_from_public}; +use crate::p2p::libp2p::peer_id_from_public; const PROTOCOL: &str = "/serai/coordinator/validators"; #[derive(Clone)] pub(crate) struct OnlyValidators { - pub(crate) validators: Arc>, pub(crate) serai_key: Zeroizing, pub(crate) noise_keypair: identity::Keypair, } @@ -108,12 +107,7 @@ impl OnlyValidators { .verify_simple(PROTOCOL.as_bytes(), &msg, &sig) .map_err(|_| io::Error::other("invalid signature"))?; - let peer_id = peer_id_from_public(Public::from_raw(public_key.to_bytes())); - if !self.validators.read().await.contains(&peer_id) { - Err(io::Error::other("peer which tried to connect isn't a known active validator"))?; - } - - Ok(peer_id) + Ok(peer_id_from_public(Public::from_raw(public_key.to_bytes()))) } } diff --git a/coordinator/src/p2p/libp2p/dial.rs b/coordinator/src/p2p/libp2p/dial.rs index 03795a51..e8611797 100644 --- a/coordinator/src/p2p/libp2p/dial.rs +++ b/coordinator/src/p2p/libp2p/dial.rs @@ -37,7 +37,7 @@ pub(crate) struct DialTask { impl DialTask { pub(crate) fn new(serai: Serai, peers: Peers, to_dial: mpsc::UnboundedSender) -> Self { - DialTask { serai: serai.clone(), validators: Validators::new(serai), peers, to_dial } + DialTask { serai: serai.clone(), validators: Validators::new(serai).0, peers, to_dial } } } diff --git a/coordinator/src/p2p/libp2p/mod.rs b/coordinator/src/p2p/libp2p/mod.rs index ce60d285..fccf7ce1 100644 --- a/coordinator/src/p2p/libp2p/mod.rs +++ b/coordinator/src/p2p/libp2p/mod.rs @@ -25,7 +25,7 @@ use libp2p::{ multihash::Multihash, identity::{self, PeerId}, tcp::Config as TcpConfig, - yamux, + yamux, allow_block_list, swarm::NetworkBehaviour, SwarmBuilder, }; @@ -112,6 +112,7 @@ struct Peers { #[derive(NetworkBehaviour)] struct Behavior { + allow_list: allow_block_list::Behaviour, ping: ping::Behavior, reqres: reqres::Behavior, gossip: gossip::Behavior, @@ -147,43 +148,43 @@ impl Libp2p { .continually_run(dial_task_def, vec![]), ); - // Define the Validators object used for validating new connections - let connection_validators = UpdateValidatorsTask::spawn(serai.clone()); 
- let new_only_validators = |noise_keypair: &identity::Keypair| -> Result<_, ()> { - Ok(OnlyValidators { - serai_key: serai_key.clone(), - validators: connection_validators.clone(), - noise_keypair: noise_keypair.clone(), - }) + let swarm = { + let new_only_validators = |noise_keypair: &identity::Keypair| -> Result<_, ()> { + Ok(OnlyValidators { serai_key: serai_key.clone(), noise_keypair: noise_keypair.clone() }) + }; + + let new_yamux = || { + let mut config = yamux::Config::default(); + // 1 MiB default + max message size + config.set_max_buffer_size((1024 * 1024) + MAX_LIBP2P_MESSAGE_SIZE); + // 256 KiB default + max message size + config + .set_receive_window_size(((256 * 1024) + MAX_LIBP2P_MESSAGE_SIZE).try_into().unwrap()); + config + }; + + let mut swarm = SwarmBuilder::with_existing_identity(identity::Keypair::generate_ed25519()) + .with_tokio() + .with_tcp(TcpConfig::default().nodelay(false), new_only_validators, new_yamux) + .unwrap() + .with_behaviour(|_| Behavior { + allow_list: allow_block_list::Behaviour::default(), + ping: ping::new_behavior(), + reqres: reqres::new_behavior(), + gossip: gossip::new_behavior(), + }) + .unwrap() + .with_swarm_config(|config| { + config + .with_idle_connection_timeout(ping::INTERVAL + ping::TIMEOUT + Duration::from_secs(5)) + }) + .build(); + swarm.listen_on(format!("/ip4/0.0.0.0/tcp/{PORT}").parse().unwrap()).unwrap(); + swarm.listen_on(format!("/ip6/::/tcp/{PORT}").parse().unwrap()).unwrap(); + swarm }; - let new_yamux = || { - let mut config = yamux::Config::default(); - // 1 MiB default + max message size - config.set_max_buffer_size((1024 * 1024) + MAX_LIBP2P_MESSAGE_SIZE); - // 256 KiB default + max message size - config.set_receive_window_size(((256 * 1024) + MAX_LIBP2P_MESSAGE_SIZE).try_into().unwrap()); - config - }; - - let mut swarm = SwarmBuilder::with_existing_identity(identity::Keypair::generate_ed25519()) - .with_tokio() - .with_tcp(TcpConfig::default().nodelay(false), new_only_validators, new_yamux) - .unwrap() - .with_behaviour(|_| Behavior { - ping: ping::new_behavior(), - reqres: reqres::new_behavior(), - gossip: gossip::new_behavior(), - }) - .unwrap() - .with_swarm_config(|config| { - config.with_idle_connection_timeout(ping::INTERVAL + ping::TIMEOUT + Duration::from_secs(5)) - }) - .build(); - swarm.listen_on(format!("/ip4/0.0.0.0/tcp/{PORT}").parse().unwrap()).unwrap(); - swarm.listen_on(format!("/ip6/::/tcp/{PORT}").parse().unwrap()).unwrap(); - - let swarm_validators = UpdateValidatorsTask::spawn(serai); + let (swarm_validators, validator_changes) = UpdateValidatorsTask::spawn(serai); let (gossip_send, gossip_recv) = mpsc::unbounded_channel(); let (signed_cosigns_send, signed_cosigns_recv) = mpsc::unbounded_channel(); @@ -201,6 +202,7 @@ impl Libp2p { dial_task, to_dial_recv, swarm_validators, + validator_changes, peers.clone(), swarm, gossip_recv, diff --git a/coordinator/src/p2p/libp2p/swarm.rs b/coordinator/src/p2p/libp2p/swarm.rs index f4c5d7fe..10d91818 100644 --- a/coordinator/src/p2p/libp2p/swarm.rs +++ b/coordinator/src/p2p/libp2p/swarm.rs @@ -23,7 +23,7 @@ use libp2p::{ use crate::p2p::libp2p::{ Peers, BehaviorEvent, Behavior, - validators::Validators, + validators::{self, Validators}, ping, reqres::{self, Request, Response}, gossip, @@ -52,6 +52,7 @@ pub(crate) struct SwarmTask { last_dial_task_run: Instant, validators: Arc>, + validator_changes: mpsc::UnboundedReceiver, peers: Peers, rebuild_peers_at: Instant, @@ -135,6 +136,18 @@ impl SwarmTask { let time_till_rebuild_peers = 
self.rebuild_peers_at.saturating_duration_since(Instant::now()); tokio::select! { + // If the validators have changed, update the allow list + validator_changes = self.validator_changes.recv() => { + let validator_changes = validator_changes.expect("validators update task shut down?"); + let behavior = &mut self.swarm.behaviour_mut().allow_list; + for removed in validator_changes.removed { + behavior.disallow_peer(removed); + } + for added in validator_changes.added { + behavior.allow_peer(added); + } + } + // Dial peers we're instructed to dial_opts = self.to_dial.recv() => { let dial_opts = dial_opts.expect("DialTask was closed?"); @@ -155,26 +168,15 @@ impl SwarmTask { let validators_by_network = self.validators.read().await.by_network().clone(); let connected_peers = self.swarm.connected_peers().copied().collect::>(); - // We initially populate the list of peers to disconnect with all peers - let mut to_disconnect = connected_peers.clone(); - // Build the new peers object let mut peers = HashMap::new(); for (network, validators) in validators_by_network { peers.insert(network, validators.intersection(&connected_peers).copied().collect()); - - // If this peer is in this validator set, don't keep it flagged for disconnection - to_disconnect.retain(|peer| !validators.contains(peer)); } // Write the new peers object *self.peers.peers.write().await = peers; self.rebuild_peers_at = Instant::now() + TIME_BETWEEN_REBUILD_PEERS; - - // Disconnect all peers marked for disconnection - for peer in to_disconnect { - let _: Result<_, _> = self.swarm.disconnect_peer_id(peer); - } } // Handle swarm events @@ -223,6 +225,10 @@ impl SwarmTask { } } + SwarmEvent::Behaviour(BehaviorEvent::AllowList(event)) => { + // Ensure this is an unreachable case, not an actual event + let _: void::Void = event; + } SwarmEvent::Behaviour( BehaviorEvent::Ping(ping::Event { peer: _, connection, result, }) ) => { @@ -305,6 +311,7 @@ impl SwarmTask { to_dial: mpsc::UnboundedReceiver, validators: Arc>, + validator_changes: mpsc::UnboundedReceiver, peers: Peers, swarm: Swarm, @@ -326,6 +333,7 @@ impl SwarmTask { last_dial_task_run: Instant::now(), validators, + validator_changes, peers, rebuild_peers_at: Instant::now() + TIME_BETWEEN_REBUILD_PEERS, diff --git a/coordinator/src/p2p/libp2p/validators.rs b/coordinator/src/p2p/libp2p/validators.rs index b5be7c9e..7eb2e996 100644 --- a/coordinator/src/p2p/libp2p/validators.rs +++ b/coordinator/src/p2p/libp2p/validators.rs @@ -11,10 +11,15 @@ use serai_task::{Task, ContinuallyRan}; use libp2p::PeerId; use futures_util::stream::{StreamExt, FuturesUnordered}; -use tokio::sync::RwLock; +use tokio::sync::{mpsc, RwLock}; use crate::p2p::libp2p::peer_id_from_public; +pub(crate) struct Changes { + pub(crate) removed: HashSet, + pub(crate) added: HashSet, +} + pub(crate) struct Validators { serai: Serai, @@ -24,16 +29,22 @@ pub(crate) struct Validators { by_network: HashMap>, // The validators and their networks validators: HashMap>, + + // The channel to send the changes down + changes: mpsc::UnboundedSender, } impl Validators { - pub(crate) fn new(serai: Serai) -> Self { - Validators { + pub(crate) fn new(serai: Serai) -> (Self, mpsc::UnboundedReceiver) { + let (send, recv) = mpsc::unbounded_channel(); + let validators = Validators { serai, sessions: HashMap::new(), by_network: HashMap::new(), validators: HashMap::new(), - } + changes: send, + }; + (validators, recv) } async fn session_changes( @@ -89,6 +100,9 @@ impl Validators { &mut self, session_changes: Vec<(NetworkId, Session, 
HashSet)>, ) { + let mut removed = HashSet::new(); + let mut added = HashSet::new(); + for (network, session, validators) in session_changes { // Remove the existing validators for validator in self.by_network.remove(&network).unwrap_or_else(HashSet::new) { @@ -96,21 +110,31 @@ impl Validators { let mut networks = self.validators.remove(&validator).unwrap(); // Remove this one networks.remove(&network); - // Insert the networks back if the validator was present in other networks if !networks.is_empty() { + // Insert the networks back if the validator was present in other networks self.validators.insert(validator, networks); + } else { + // Because this validator is no longer present in any network, mark them as removed + removed.insert(validator); } } // Add the new validators for validator in validators.iter().copied() { self.validators.entry(validator).or_insert_with(HashSet::new).insert(network); + added.insert(validator); } self.by_network.insert(network, validators); // Update the session we have populated self.sessions.insert(network, session); } + + // Only flag validators for removal if they weren't simultaneously added by these changes + removed.retain(|validator| !added.contains(validator)); + // Send the changes, dropping the error + // This lets the caller opt-out of change notifications by dropping the receiver + let _: Result<_, _> = self.changes.send(Changes { removed, added }); } /// Update the view of the validators. @@ -145,9 +169,10 @@ impl UpdateValidatorsTask { /// Spawn a new instance of the UpdateValidatorsTask. /// /// This returns a reference to the Validators it updates after spawning itself. - pub(crate) fn spawn(serai: Serai) -> Arc> { + pub(crate) fn spawn(serai: Serai) -> (Arc>, mpsc::UnboundedReceiver) { // The validators which will be updated - let validators = Arc::new(RwLock::new(Validators::new(serai))); + let (validators, changes) = Validators::new(serai); + let validators = Arc::new(RwLock::new(validators)); // Define the task let (update_validators_task, update_validators_task_handle) = Task::new(); @@ -159,7 +184,7 @@ impl UpdateValidatorsTask { ); // Return the validators - validators + (validators, changes) } } From dda6e3e899acb7c94f3bb64363a764139aa044a2 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 9 Jan 2025 00:06:51 -0500 Subject: [PATCH 268/368] Limit each peer to one connection Prevents dialing the same peer multiple times (successfully). 
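To make the mechanism concrete: libp2p's connection_limits behaviour rejects, at establishment time, any connection which would exceed the configured limits. A minimal standalone sketch of a behaviour limited this way (for illustration; in the actual patch the field is one of several in the composed `Behavior`):

use libp2p::{
  connection_limits::{self, ConnectionLimits},
  swarm::NetworkBehaviour,
};

#[derive(NetworkBehaviour)]
struct LimitedBehavior {
  connection_limits: connection_limits::Behaviour,
}

fn new_limited_behavior() -> LimitedBehavior {
  LimitedBehavior {
    // A second established connection to the same peer is denied, so
    // redundant dials to an already-connected peer fail
    connection_limits: connection_limits::Behaviour::new(
      ConnectionLimits::default().with_max_established_per_peer(Some(1)),
    ),
  }
}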
--- Cargo.lock | 1 + coordinator/Cargo.toml | 1 + coordinator/src/p2p/libp2p/authenticate.rs | 4 +--- coordinator/src/p2p/libp2p/gossip.rs | 4 ++-- coordinator/src/p2p/libp2p/mod.rs | 14 ++++++++++---- coordinator/src/p2p/libp2p/swarm.rs | 6 ++++-- 6 files changed, 19 insertions(+), 11 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 3902b794..49bbf65d 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8339,6 +8339,7 @@ dependencies = [ "sp-runtime", "tokio", "tributary-chain", + "void", "zalloc", "zeroize", ] diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index 2fc373aa..e0b84346 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -55,6 +55,7 @@ env_logger = { version = "0.10", default-features = false, features = ["humantim futures-util = { version = "0.3", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } +void = { version = "1", default-features = false } libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] } serai-cosign = { path = "./cosign" } diff --git a/coordinator/src/p2p/libp2p/authenticate.rs b/coordinator/src/p2p/libp2p/authenticate.rs index a8167d9e..56e5336b 100644 --- a/coordinator/src/p2p/libp2p/authenticate.rs +++ b/coordinator/src/p2p/libp2p/authenticate.rs @@ -1,5 +1,5 @@ use core::{pin::Pin, future::Future}; -use std::{sync::Arc, io}; +use std::io; use zeroize::Zeroizing; use rand_core::{RngCore, OsRng}; @@ -9,8 +9,6 @@ use schnorrkel::{Keypair, PublicKey, Signature}; use serai_client::primitives::PublicKey as Public; -use tokio::sync::RwLock; - use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt}; use libp2p::{ core::UpgradeInfo, diff --git a/coordinator/src/p2p/libp2p/gossip.rs b/coordinator/src/p2p/libp2p/gossip.rs index db5af299..f64fddb5 100644 --- a/coordinator/src/p2p/libp2p/gossip.rs +++ b/coordinator/src/p2p/libp2p/gossip.rs @@ -5,8 +5,8 @@ use blake2::{Digest, Blake2s256}; use borsh::{BorshSerialize, BorshDeserialize}; use libp2p::gossipsub::{ - TopicHash, IdentTopic, MessageId, MessageAuthenticity, ValidationMode, ConfigBuilder, - IdentityTransform, AllowAllSubscriptionFilter, Behaviour, + IdentTopic, MessageId, MessageAuthenticity, ValidationMode, ConfigBuilder, IdentityTransform, + AllowAllSubscriptionFilter, Behaviour, }; pub use libp2p::gossipsub::Event; diff --git a/coordinator/src/p2p/libp2p/mod.rs b/coordinator/src/p2p/libp2p/mod.rs index fccf7ce1..bce1006d 100644 --- a/coordinator/src/p2p/libp2p/mod.rs +++ b/coordinator/src/p2p/libp2p/mod.rs @@ -26,6 +26,7 @@ use libp2p::{ identity::{self, PeerId}, tcp::Config as TcpConfig, yamux, allow_block_list, + connection_limits::{self, ConnectionLimits}, swarm::NetworkBehaviour, SwarmBuilder, }; @@ -40,10 +41,6 @@ use validators::UpdateValidatorsTask; mod authenticate; use authenticate::OnlyValidators; -/// The dial task, to find new peers to connect to -mod dial; -use dial::DialTask; - /// The ping behavior, used to ensure connection latency is below the limit mod ping; @@ -59,6 +56,10 @@ use gossip::Message; mod swarm; use swarm::SwarmTask; +/// The dial task, to find new peers to connect to +mod dial; +use dial::DialTask; + const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o') // usize::max, manually implemented, as max isn't a const fn @@ -113,6 +114,7 @@ struct Peers { #[derive(NetworkBehaviour)] struct Behavior { allow_list: allow_block_list::Behaviour, 
+ connection_limits: connection_limits::Behaviour, ping: ping::Behavior, reqres: reqres::Behavior, gossip: gossip::Behavior, @@ -169,6 +171,10 @@ impl Libp2p { .unwrap() .with_behaviour(|_| Behavior { allow_list: allow_block_list::Behaviour::default(), + // Limit each peer to a single connection + connection_limits: connection_limits::Behaviour::new( + ConnectionLimits::default().with_max_established_per_peer(Some(1)), + ), ping: ping::new_behavior(), reqres: reqres::new_behavior(), gossip: gossip::new_behavior(), diff --git a/coordinator/src/p2p/libp2p/swarm.rs b/coordinator/src/p2p/libp2p/swarm.rs index 10d91818..f62cb659 100644 --- a/coordinator/src/p2p/libp2p/swarm.rs +++ b/coordinator/src/p2p/libp2p/swarm.rs @@ -225,8 +225,10 @@ impl SwarmTask { } } - SwarmEvent::Behaviour(BehaviorEvent::AllowList(event)) => { - // Ensure this is an unreachable case, not an actual event + SwarmEvent::Behaviour( + BehaviorEvent::AllowList(event) | BehaviorEvent::ConnectionLimits(event) + ) => { + // Ensure these are unreachable cases, not actual events let _: void::Void = event; } SwarmEvent::Behaviour( BehaviorEvent::Ping(ping::Event { peer: _, connection, result, }) ) => { From 295c1bd044b45b21ca6920a7c657cf5154fc99a9 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 9 Jan 2025 00:16:45 -0500 Subject: [PATCH 269/368] Document improper handling of session rotation in P2P allow list --- coordinator/src/p2p/libp2p/validators.rs | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/coordinator/src/p2p/libp2p/validators.rs b/coordinator/src/p2p/libp2p/validators.rs index 7eb2e996..7ba48907 100644 --- a/coordinator/src/p2p/libp2p/validators.rs +++ b/coordinator/src/p2p/libp2p/validators.rs @@ -115,6 +115,15 @@ impl Validators { self.validators.insert(validator, networks); } else { // Because this validator is no longer present in any network, mark them as removed + /* + This isn't accurate. The validator isn't present in the latest session for this + network. The validator was present in the prior session which has yet to retire. Our + lack of explicit inclusion for both the prior session and the current session causes + only the validators mutually present in both sessions to be responsible for all actions + still ongoing as the prior validator set retires. + + TODO: Fix this + */ removed.insert(validator); } } From adf20773acc78a7faab7c7dbc9ebd8217cde3fe9 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 9 Jan 2025 00:40:07 -0500 Subject: [PATCH 270/368] Add libp2p module documentation --- coordinator/src/p2p/libp2p/mod.rs | 17 +++++++++++++++++ coordinator/src/p2p/mod.rs | 2 +- 2 files changed, 18 insertions(+), 1 deletion(-) diff --git a/coordinator/src/p2p/libp2p/mod.rs b/coordinator/src/p2p/libp2p/mod.rs index bce1006d..3e799ab7 100644 --- a/coordinator/src/p2p/libp2p/mod.rs +++ b/coordinator/src/p2p/libp2p/mod.rs @@ -1,3 +1,15 @@ +//! A libp2p-based backend for P2p. +//! +//! The libp2p swarm is limited to validators from the Serai network. The swarm does not maintain +//! any of its own peer finding/routing infrastructure, instead relying on the Serai network's +//! connection information to dial peers. This does limit the listening peers to only the peers +//! immediately reachable via the same IP address (despite the two distinct services), not hidden +//! behind a NAT, yet is also quite simple and gives us full control over who to connect to. +//! +//! Peers are decided via the `DialTask` which aims to maintain a target amount of peers from each +//! external network.
+// TODO: Consider adding that infrastructure, leaving the Serai network solely for bootstrapping + use core::{future::Future, time::Duration}; use std::{ sync::Arc, @@ -113,10 +125,15 @@ struct Peers { #[derive(NetworkBehaviour)] struct Behavior { + // Used to only allow Serai validators as peers allow_list: allow_block_list::Behaviour, + // Used to limit each peer to a single connection connection_limits: connection_limits::Behaviour, + // Used to ensure connection latency is within tolerances ping: ping::Behavior, + // Used to request data from specific peers reqres: reqres::Behavior, + // Used to broadcast messages to all other peers subscribed to a topic gossip: gossip::Behavior, } diff --git a/coordinator/src/p2p/mod.rs b/coordinator/src/p2p/mod.rs index 9c501973..4bb657f4 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/src/p2p/mod.rs @@ -8,7 +8,7 @@ use tokio::sync::oneshot; use serai_cosign::SignedCosign; -/// The libp2p-backed P2p network +/// The libp2p-backed P2P network mod libp2p; /// The heartbeat task, effecting sync of Tributaries From 465e8498c48c17e991a9ed215721c8a95788afbf Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 9 Jan 2025 01:26:25 -0500 Subject: [PATCH 271/368] Make the coordinator's P2P modules their own crates --- .github/workflows/msrv.yml | 2 + .github/workflows/tests.yml | 2 + Cargo.lock | 44 +++++++++++++++++-- Cargo.toml | 2 + coordinator/Cargo.toml | 10 ++--- coordinator/LICENSE | 2 +- coordinator/p2p/Cargo.toml | 33 ++++++++++++++ coordinator/p2p/LICENSE | 15 +++++++ coordinator/p2p/README.md | 3 ++ coordinator/p2p/libp2p/Cargo.toml | 43 ++++++++++++++++++ coordinator/p2p/libp2p/LICENSE | 15 +++++++ coordinator/p2p/libp2p/README.md | 14 ++++++ .../libp2p => p2p/libp2p/src}/authenticate.rs | 2 +- .../p2p/libp2p => p2p/libp2p/src}/dial.rs | 2 +- .../p2p/libp2p => p2p/libp2p/src}/gossip.rs | 2 - .../libp2p/mod.rs => p2p/libp2p/src/lib.rs} | 40 ++++++++++------- .../p2p/libp2p => p2p/libp2p/src}/ping.rs | 0 .../p2p/libp2p => p2p/libp2p/src}/reqres.rs | 8 ++-- .../p2p/libp2p => p2p/libp2p/src}/swarm.rs | 2 +- .../libp2p => p2p/libp2p/src}/validators.rs | 6 +-- coordinator/{src/p2p => p2p/src}/heartbeat.rs | 17 +++---- .../{src/p2p/mod.rs => p2p/src/lib.rs} | 25 +++++++---- coordinator/src/main.rs | 6 ++- deny.toml | 2 + 24 files changed, 234 insertions(+), 63 deletions(-) create mode 100644 coordinator/p2p/Cargo.toml create mode 100644 coordinator/p2p/LICENSE create mode 100644 coordinator/p2p/README.md create mode 100644 coordinator/p2p/libp2p/Cargo.toml create mode 100644 coordinator/p2p/libp2p/LICENSE create mode 100644 coordinator/p2p/libp2p/README.md rename coordinator/{src/p2p/libp2p => p2p/libp2p/src}/authenticate.rs (99%) rename coordinator/{src/p2p/libp2p => p2p/libp2p/src}/dial.rs (98%) rename coordinator/{src/p2p/libp2p => p2p/libp2p/src}/gossip.rs (97%) rename coordinator/{src/p2p/libp2p/mod.rs => p2p/libp2p/src/lib.rs} (91%) rename coordinator/{src/p2p/libp2p => p2p/libp2p/src}/ping.rs (100%) rename coordinator/{src/p2p/libp2p => p2p/libp2p/src}/reqres.rs (95%) rename coordinator/{src/p2p/libp2p => p2p/libp2p/src}/swarm.rs (99%) rename coordinator/{src/p2p/libp2p => p2p/libp2p/src}/validators.rs (97%) rename coordinator/{src/p2p => p2p/src}/heartbeat.rs (91%) rename coordinator/{src/p2p/mod.rs => p2p/src/lib.rs} (76%) diff --git a/.github/workflows/msrv.yml b/.github/workflows/msrv.yml index 4d37fab7..e1636482 100644 --- a/.github/workflows/msrv.yml +++ b/.github/workflows/msrv.yml @@ -177,6 +177,8 @@ jobs: cargo msrv verify 
--manifest-path coordinator/tributary/Cargo.toml cargo msrv verify --manifest-path coordinator/cosign/Cargo.toml cargo msrv verify --manifest-path coordinator/substrate/Cargo.toml + cargo msrv verify --manifest-path coordinator/p2p/Cargo.toml + cargo msrv verify --manifest-path coordinator/p2p/libp2p/Cargo.toml cargo msrv verify --manifest-path coordinator/Cargo.toml msrv-substrate: diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 65a35cc3..0c311b99 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -63,6 +63,8 @@ jobs: -p tributary-chain \ -p serai-cosign \ -p serai-coordinator-substrate \ + -p serai-coordinator-p2p \ + -p serai-coordinator-libp2p-p2p \ -p serai-coordinator \ -p serai-orchestrator \ -p serai-docker-tests diff --git a/Cargo.lock b/Cargo.lock index 49bbf65d..dfeea6f6 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8311,7 +8311,6 @@ dependencies = [ name = "serai-coordinator" version = "0.1.0" dependencies = [ - "async-trait", "bitvec", "blake2", "borsh", @@ -8319,9 +8318,7 @@ dependencies = [ "env_logger", "flexible-transcript", "frost-schnorrkel", - "futures-util", "hex", - "libp2p", "log", "modular-frost", "parity-scale-codec", @@ -8329,6 +8326,9 @@ dependencies = [ "schnorr-signatures", "schnorrkel", "serai-client", + "serai-coordinator-libp2p-p2p", + "serai-coordinator-p2p", + "serai-coordinator-substrate", "serai-cosign", "serai-db", "serai-env", @@ -8337,13 +8337,49 @@ dependencies = [ "serai-task", "sp-application-crypto", "sp-runtime", + "tributary-chain", + "zalloc", + "zeroize", +] + +[[package]] +name = "serai-coordinator-libp2p-p2p" +version = "0.1.0" +dependencies = [ + "async-trait", + "blake2", + "borsh", + "futures-util", + "hex", + "libp2p", + "log", + "rand_core", + "schnorrkel", + "serai-client", + "serai-coordinator-p2p", + "serai-cosign", + "serai-task", "tokio", "tributary-chain", "void", - "zalloc", "zeroize", ] +[[package]] +name = "serai-coordinator-p2p" +version = "0.1.0" +dependencies = [ + "borsh", + "futures-util", + "log", + "serai-client", + "serai-cosign", + "serai-db", + "serai-task", + "tokio", + "tributary-chain", +] + [[package]] name = "serai-coordinator-substrate" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index 688537b1..39507b16 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -100,6 +100,8 @@ members = [ "coordinator/tributary", "coordinator/cosign", "coordinator/substrate", + "coordinator/p2p", + "coordinator/p2p/libp2p", "coordinator", "substrate/primitives", diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index e0b84346..3ecce4be 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -18,8 +18,6 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -async-trait = { version = "0.1", default-features = false } - zeroize = { version = "^1.5", default-features = false, features = ["std"] } bitvec = { version = "1", default-features = false, features = ["std"] } rand_core = { version = "0.6", default-features = false, features = ["std"] } @@ -53,12 +51,10 @@ borsh = { version = "1", default-features = false, features = ["std", "derive", log = { version = "0.4", default-features = false, features = ["std"] } env_logger = { version = "0.10", default-features = false, features = ["humantime"] } -futures-util = { version = "0.3", default-features = false, features = ["std"] } -tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } -void = { version = "1", default-features = false } 
-libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] } - serai-cosign = { path = "./cosign" } +serai-coordinator-substrate = { path = "./substrate" } +serai-coordinator-p2p = { path = "./p2p" } +serai-coordinator-libp2p-p2p = { path = "./p2p/libp2p" } [dev-dependencies] tributary = { package = "tributary-chain", path = "./tributary", features = ["tests"] } diff --git a/coordinator/LICENSE b/coordinator/LICENSE index 26d57cbb..621233a9 100644 --- a/coordinator/LICENSE +++ b/coordinator/LICENSE @@ -1,6 +1,6 @@ AGPL-3.0-only license -Copyright (c) 2023-2024 Luke Parker +Copyright (c) 2023-2025 Luke Parker This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License Version 3 as diff --git a/coordinator/p2p/Cargo.toml b/coordinator/p2p/Cargo.toml new file mode 100644 index 00000000..44183258 --- /dev/null +++ b/coordinator/p2p/Cargo.toml @@ -0,0 +1,33 @@ +[package] +name = "serai-coordinator-p2p" +version = "0.1.0" +description = "Serai coordinator's P2P abstraction" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/p2p" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false +rust-version = "1.81" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +serai-db = { path = "../../common/db", version = "0.1" } + +serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] } +serai-cosign = { path = "../cosign" } +tributary = { package = "tributary-chain", path = "../tributary" } + +futures-util = { version = "0.3", default-features = false, features = ["std"] } +tokio = { version = "1", default-features = false, features = ["sync"] } + +log = { version = "0.4", default-features = false, features = ["std"] } +serai-task = { path = "../../common/task", version = "0.1" } diff --git a/coordinator/p2p/LICENSE b/coordinator/p2p/LICENSE new file mode 100644 index 00000000..621233a9 --- /dev/null +++ b/coordinator/p2p/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2023-2025 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/coordinator/p2p/README.md b/coordinator/p2p/README.md new file mode 100644 index 00000000..9d6d9aef --- /dev/null +++ b/coordinator/p2p/README.md @@ -0,0 +1,3 @@ +# Serai Coordinator P2P + +The P2P abstraction used by Serai's coordinator. 
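To illustrate what this abstraction asks of a backend, here is a hypothetical no-op implementation of the crate's `Peer` trait (a sketch for illustration only, not code from this patch):

use core::future::Future;

use serai_client::validator_sets::primitives::ValidatorSet;

// Sketch: a peer which fails every heartbeat, standing in for a real backend
struct NullPeer;

impl<'a> Peer<'a> for NullPeer {
  fn send_heartbeat(
    &self,
    _set: ValidatorSet,
    _latest_block_hash: [u8; 32],
  ) -> impl Send + Future<Output = Option<Vec<TributaryBlockWithCommit>>> {
    // A real backend sends Request::Heartbeat to the peer and yields the
    // blocks descending from the provided tip; this stub reports failure
    async move { None }
  }
}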
diff --git a/coordinator/p2p/libp2p/Cargo.toml b/coordinator/p2p/libp2p/Cargo.toml new file mode 100644 index 00000000..8916d961 --- /dev/null +++ b/coordinator/p2p/libp2p/Cargo.toml @@ -0,0 +1,43 @@ +[package] +name = "serai-coordinator-libp2p-p2p" +version = "0.1.0" +description = "Serai coordinator's libp2p-based P2P backend" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/p2p/libp2p" +authors = ["Luke Parker "] +keywords = [] +edition = "2021" +publish = false +rust-version = "1.81" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +async-trait = { version = "0.1", default-features = false } + +rand_core = { version = "0.6", default-features = false, features = ["std"] } + +zeroize = { version = "^1.5", default-features = false, features = ["std"] } +blake2 = { version = "0.10", default-features = false, features = ["std"] } +schnorrkel = { version = "0.11", default-features = false, features = ["std"] } + +hex = { version = "0.4", default-features = false, features = ["std"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +serai-client = { path = "../../../substrate/client", default-features = false, features = ["serai", "borsh"] } +serai-cosign = { path = "../../cosign" } +tributary = { package = "tributary-chain", path = "../../tributary" } + +void = { version = "1", default-features = false } +futures-util = { version = "0.3", default-features = false, features = ["std"] } +tokio = { version = "1", default-features = false, features = ["sync"] } +libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] } + +log = { version = "0.4", default-features = false, features = ["std"] } +serai-task = { path = "../../../common/task", version = "0.1" } +serai-coordinator-p2p = { path = "../" } diff --git a/coordinator/p2p/libp2p/LICENSE b/coordinator/p2p/libp2p/LICENSE new file mode 100644 index 00000000..621233a9 --- /dev/null +++ b/coordinator/p2p/libp2p/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2023-2025 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . diff --git a/coordinator/p2p/libp2p/README.md b/coordinator/p2p/libp2p/README.md new file mode 100644 index 00000000..82edec80 --- /dev/null +++ b/coordinator/p2p/libp2p/README.md @@ -0,0 +1,14 @@ +# Serai Coordinator libp2p P2P + +A libp2p-backed P2P instantiation for Serai's coordinator. + +The libp2p swarm is limited to validators from the Serai network. The swarm +does not maintain any of its own peer finding/routing infrastructure, instead +relying on the Serai network's connection information to dial peers. 
This does +limit the listening peers to only the peers immediately reachable via the same +IP address (despite the two distinct services), not hidden behind a NAT, yet is +also quite simple and gives full control of who to connect to to us. + +Peers are decided via the internal `DialTask` which aims to maintain a target +amount of peers for each external network. This ensures cosigns are able to +propagate across the external networks which sign them. diff --git a/coordinator/src/p2p/libp2p/authenticate.rs b/coordinator/p2p/libp2p/src/authenticate.rs similarity index 99% rename from coordinator/src/p2p/libp2p/authenticate.rs rename to coordinator/p2p/libp2p/src/authenticate.rs index 56e5336b..fbdcf7c9 100644 --- a/coordinator/src/p2p/libp2p/authenticate.rs +++ b/coordinator/p2p/libp2p/src/authenticate.rs @@ -17,7 +17,7 @@ use libp2p::{ noise, }; -use crate::p2p::libp2p::peer_id_from_public; +use crate::peer_id_from_public; const PROTOCOL: &str = "/serai/coordinator/validators"; diff --git a/coordinator/src/p2p/libp2p/dial.rs b/coordinator/p2p/libp2p/src/dial.rs similarity index 98% rename from coordinator/src/p2p/libp2p/dial.rs rename to coordinator/p2p/libp2p/src/dial.rs index e8611797..f8576217 100644 --- a/coordinator/src/p2p/libp2p/dial.rs +++ b/coordinator/p2p/libp2p/src/dial.rs @@ -14,7 +14,7 @@ use libp2p::{ use serai_task::ContinuallyRan; -use crate::p2p::libp2p::{PORT, Peers, validators::Validators}; +use crate::{PORT, Peers, validators::Validators}; const TARGET_PEERS_PER_NETWORK: usize = 5; /* diff --git a/coordinator/src/p2p/libp2p/gossip.rs b/coordinator/p2p/libp2p/src/gossip.rs similarity index 97% rename from coordinator/src/p2p/libp2p/gossip.rs rename to coordinator/p2p/libp2p/src/gossip.rs index f64fddb5..f48c1c4e 100644 --- a/coordinator/src/p2p/libp2p/gossip.rs +++ b/coordinator/p2p/libp2p/src/gossip.rs @@ -15,8 +15,6 @@ use serai_cosign::SignedCosign; // Block size limit + 16 KB of space for signatures/metadata pub(crate) const MAX_LIBP2P_GOSSIP_MESSAGE_SIZE: usize = tributary::BLOCK_SIZE_LIMIT + 16384; -const KEEP_ALIVE_INTERVAL: Duration = Duration::from_secs(80); - const LIBP2P_PROTOCOL: &str = "/serai/coordinator/gossip/1.0.0"; const BASE_TOPIC: &str = "/"; diff --git a/coordinator/src/p2p/libp2p/mod.rs b/coordinator/p2p/libp2p/src/lib.rs similarity index 91% rename from coordinator/src/p2p/libp2p/mod.rs rename to coordinator/p2p/libp2p/src/lib.rs index 3e799ab7..0778813f 100644 --- a/coordinator/src/p2p/libp2p/mod.rs +++ b/coordinator/p2p/libp2p/src/lib.rs @@ -1,14 +1,6 @@ -//! A libp2p-based backend for P2p. -//! -//! The libp2p swarm is limited to validators from the Serai network. The swarm does not maintain -//! any of its own peer finding/routing infrastructure, instead relying on the Serai network's -//! connection information to dial peers. This does limit the listening peers to only the peers -//! immediately reachable via the same IP address (despite the two distinct services), not hidden -//! behind a NAT, yet is also quite simple and gives full control of who to connect to to us. -//! -//! Peers are decided via the `DialTask` which aims to maintain a target amount of peers from each -//! external network. 
-// TODO: Consider adding that infrastructure, leaving the Serai network solely for bootstrapping +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] use core::{future::Future, time::Duration}; use std::{ @@ -43,7 +35,7 @@ use libp2p::{ SwarmBuilder, }; -use crate::p2p::TributaryBlockWithCommit; +use serai_coordinator_p2p::TributaryBlockWithCommit; /// A struct to sync the validators from the Serai node in order to keep track of them. mod validators; @@ -88,11 +80,12 @@ fn peer_id_from_public(public: PublicKey) -> PeerId { PeerId::from_multihash(Multihash::wrap(0, &public.0).unwrap()).unwrap() } -struct Peer<'a> { +/// The representation of a peer. +pub struct Peer<'a> { outbound_requests: &'a mpsc::UnboundedSender<(PeerId, Request, oneshot::Sender)>, id: PeerId, } -impl crate::p2p::Peer<'_> for Peer<'_> { +impl serai_coordinator_p2p::Peer<'_> for Peer<'_> { fn send_heartbeat( &self, set: ValidatorSet, @@ -123,6 +116,8 @@ struct Peers { peers: Arc>>>, } +// Consider adding identify/kad/autonat/rendevous/(relay + dcutr). While we currently use the Serai +// network for peers, we could use it solely for bootstrapping/as a fallback. #[derive(NetworkBehaviour)] struct Behavior { // Used to only allow Serai validators as peers @@ -137,8 +132,13 @@ struct Behavior { gossip: gossip::Behavior, } +/// The libp2p-backed P2P implementation. +/// +/// The P2p trait implementation does not support backpressure and is expected to be fully +/// utilized. Failure to poll the entire API will cause unbounded memory growth. +#[allow(clippy::type_complexity)] #[derive(Clone)] -struct Libp2p { +pub struct Libp2p { peers: Peers, gossip: mpsc::UnboundedSender, @@ -155,7 +155,10 @@ struct Libp2p { } impl Libp2p { - pub(crate) fn new(serai_key: &Zeroizing, serai: Serai) -> Libp2p { + /// Create a new libp2p-backed P2P instance. + /// + /// This will spawn all of the internal tasks necessary for functioning. + pub fn new(serai_key: &Zeroizing, serai: Serai) -> Libp2p { // Define the object we track peers with let peers = Peers { peers: Arc::new(RwLock::new(HashMap::new())) }; @@ -320,7 +323,7 @@ impl serai_cosign::RequestNotableCosigns for Libp2p { } } -impl crate::p2p::P2p for Libp2p { +impl serai_coordinator_p2p::P2p for Libp2p { type Peer<'a> = Peer<'a>; fn peers(&self, network: NetworkId) -> impl Send + Future>> { @@ -353,6 +356,9 @@ impl crate::p2p::P2p for Libp2p { tokio::spawn({ let respond = self.inbound_request_responses.clone(); async move { + // The swarm task expects us to respond to every request. 
If the caller drops this + // channel, we'll receive `Err` and respond with `None`, safely satisfying that bound + // without requiring the caller send a value down this channel let response = if let Ok(blocks) = receiver.await { Response::Blocks(blocks) } else { Response::None }; respond diff --git a/coordinator/src/p2p/libp2p/ping.rs b/coordinator/p2p/libp2p/src/ping.rs similarity index 100% rename from coordinator/src/p2p/libp2p/ping.rs rename to coordinator/p2p/libp2p/src/ping.rs diff --git a/coordinator/src/p2p/libp2p/reqres.rs b/coordinator/p2p/libp2p/src/reqres.rs similarity index 95% rename from coordinator/src/p2p/libp2p/reqres.rs rename to coordinator/p2p/libp2p/src/reqres.rs index 8fe02c30..4f8fa236 100644 --- a/coordinator/src/p2p/libp2p/reqres.rs +++ b/coordinator/p2p/libp2p/src/reqres.rs @@ -15,12 +15,12 @@ pub use request_response::{RequestId, Message}; use serai_cosign::SignedCosign; -use crate::p2p::TributaryBlockWithCommit; +use serai_coordinator_p2p::TributaryBlockWithCommit; /// The maximum message size for the request-response protocol // This is derived from the heartbeat message size as it's our largest message pub(crate) const MAX_LIBP2P_REQRES_MESSAGE_SIZE: usize = - (tributary::BLOCK_SIZE_LIMIT * crate::p2p::heartbeat::BLOCKS_PER_BATCH) + 1024; + (tributary::BLOCK_SIZE_LIMIT * serai_coordinator_p2p::heartbeat::BLOCKS_PER_BATCH) + 1024; const PROTOCOL: &str = "/serai/coordinator/reqres/1.0.0"; @@ -103,7 +103,7 @@ impl CodecTrait for Codec { } async fn read_response( &mut self, - proto: &Self::Protocol, + _: &Self::Protocol, io: &mut R, ) -> io::Result { Self::read(io).await @@ -118,7 +118,7 @@ impl CodecTrait for Codec { } async fn write_response( &mut self, - proto: &Self::Protocol, + _: &Self::Protocol, io: &mut W, res: Response, ) -> io::Result<()> { diff --git a/coordinator/src/p2p/libp2p/swarm.rs b/coordinator/p2p/libp2p/src/swarm.rs similarity index 99% rename from coordinator/src/p2p/libp2p/swarm.rs rename to coordinator/p2p/libp2p/src/swarm.rs index f62cb659..e0a6762b 100644 --- a/coordinator/src/p2p/libp2p/swarm.rs +++ b/coordinator/p2p/libp2p/src/swarm.rs @@ -21,7 +21,7 @@ use libp2p::{ swarm::{dial_opts::DialOpts, SwarmEvent, Swarm}, }; -use crate::p2p::libp2p::{ +use crate::{ Peers, BehaviorEvent, Behavior, validators::{self, Validators}, ping, diff --git a/coordinator/src/p2p/libp2p/validators.rs b/coordinator/p2p/libp2p/src/validators.rs similarity index 97% rename from coordinator/src/p2p/libp2p/validators.rs rename to coordinator/p2p/libp2p/src/validators.rs index 7ba48907..0ce4c91b 100644 --- a/coordinator/src/p2p/libp2p/validators.rs +++ b/coordinator/p2p/libp2p/src/validators.rs @@ -13,7 +13,7 @@ use libp2p::PeerId; use futures_util::stream::{StreamExt, FuturesUnordered}; use tokio::sync::{mpsc, RwLock}; -use crate::p2p::libp2p::peer_id_from_public; +use crate::peer_id_from_public; pub(crate) struct Changes { pub(crate) removed: HashSet, @@ -157,10 +157,6 @@ impl Validators { &self.by_network } - pub(crate) fn contains(&self, peer_id: &PeerId) -> bool { - self.validators.contains_key(peer_id) - } - pub(crate) fn networks(&self, peer_id: &PeerId) -> Option<&HashSet> { self.validators.get(peer_id) } diff --git a/coordinator/src/p2p/heartbeat.rs b/coordinator/p2p/src/heartbeat.rs similarity index 91% rename from coordinator/src/p2p/heartbeat.rs rename to coordinator/p2p/src/heartbeat.rs index 025bfd73..87827e7f 100644 --- a/coordinator/src/p2p/heartbeat.rs +++ b/coordinator/p2p/src/heartbeat.rs @@ -5,20 +5,17 @@ use 
serai_client::validator_sets::primitives::ValidatorSet; use futures_util::FutureExt; -use tributary::{ReadWrite, Block, Tributary, TributaryReader}; +use tributary::{ReadWrite, TransactionTrait, Block, Tributary, TributaryReader}; use serai_db::*; use serai_task::ContinuallyRan; -use crate::{ - tributary::Transaction, - p2p::{Peer, P2p}, -}; +use crate::{Peer, P2p}; // Amount of blocks in a minute const BLOCKS_PER_MINUTE: usize = (60 / (tributary::tendermint::TARGET_BLOCK_TIME / 1000)) as usize; -// Maximum amount of blocks to send in a batch of blocks +/// The maximum amount of blocks to include/included within a batch. pub const BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1; /// Sends a heartbeat to other validators on regular intervals informing them of our Tributary's @@ -26,14 +23,14 @@ pub const BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1; /// /// If the other validator has more blocks then we do, they're expected to inform us. This forms /// the sync protocol for our Tributaries. -struct HeartbeatTask { +pub struct HeartbeatTask { set: ValidatorSet, - tributary: Tributary, - reader: TributaryReader, + tributary: Tributary, + reader: TributaryReader, p2p: P, } -impl ContinuallyRan for HeartbeatTask { +impl ContinuallyRan for HeartbeatTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { // If our blockchain hasn't had a block in the past minute, trigger the heartbeat protocol diff --git a/coordinator/src/p2p/mod.rs b/coordinator/p2p/src/lib.rs similarity index 76% rename from coordinator/src/p2p/mod.rs rename to coordinator/p2p/src/lib.rs index 4bb657f4..26529fef 100644 --- a/coordinator/src/p2p/mod.rs +++ b/coordinator/p2p/src/lib.rs @@ -1,3 +1,7 @@ +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] + use core::future::Future; use borsh::{BorshSerialize, BorshDeserialize}; @@ -8,20 +12,21 @@ use tokio::sync::oneshot; use serai_cosign::SignedCosign; -/// The libp2p-backed P2P network -mod libp2p; - /// The heartbeat task, effecting sync of Tributaries -mod heartbeat; +pub mod heartbeat; /// A tributary block and its commit. #[derive(Clone, BorshSerialize, BorshDeserialize)] -pub(crate) struct TributaryBlockWithCommit { - pub(crate) block: Vec, - pub(crate) commit: Vec, +pub struct TributaryBlockWithCommit { + /// The serialized block. + pub block: Vec, + /// The serialized commit. + pub commit: Vec, } -trait Peer<'a>: Send { +/// A representation of a peer. +pub trait Peer<'a>: Send { + /// Send a heartbeat to this peer. fn send_heartbeat( &self, set: ValidatorSet, @@ -29,7 +34,9 @@ trait Peer<'a>: Send { ) -> impl Send + Future>>; } -trait P2p: Send + Sync + tributary::P2p + serai_cosign::RequestNotableCosigns { +/// The representation of the P2P network. +pub trait P2p: Send + Sync + Clone + tributary::P2p + serai_cosign::RequestNotableCosigns { + /// The representation of a peer. type Peer<'a>: Peer<'a>; /// Fetch the peers for this network. 
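The `TributaryBlockWithCommit` type above is the wire format for heartbeat responses, and is a plain borsh struct. As a quick illustration of the codec (a sketch, not part of this patch; `round_trip` is a hypothetical helper and `borsh::to_vec` assumes the borsh 1 API):

    use borsh::BorshDeserialize;
    use serai_coordinator_p2p::TributaryBlockWithCommit;

    // Encode a block-with-commit as sent in heartbeat responses, then decode
    // it, demonstrating the derive-based borsh codec round-trips
    fn round_trip(block: Vec<u8>, commit: Vec<u8>) -> TributaryBlockWithCommit {
      let original = TributaryBlockWithCommit { block, commit };
      let encoded = borsh::to_vec(&original).expect("serializing to a Vec can't fail");
      TributaryBlockWithCommit::try_from_slice(&encoded).expect("decoding what was just encoded")
    }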
diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 2316af2e..9d555b71 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -1,5 +1,9 @@ mod tributary; -mod p2p; + +mod p2p { + use serai_coordinator_p2p::*; + pub use serai_coordinator_libp2p_p2p::Libp2p; +} fn main() { todo!("TODO") diff --git a/deny.toml b/deny.toml index fa12461c..f530b6a2 100644 --- a/deny.toml +++ b/deny.toml @@ -75,6 +75,8 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "tributary-chain" }, { allow = ["AGPL-3.0"], name = "serai-cosign" }, { allow = ["AGPL-3.0"], name = "serai-coordinator-substrate" }, + { allow = ["AGPL-3.0"], name = "serai-coordinator-p2p" }, + { allow = ["AGPL-3.0"], name = "serai-coordinator-libp2p-p2p" }, { allow = ["AGPL-3.0"], name = "serai-coordinator" }, { allow = ["AGPL-3.0"], name = "serai-coins-pallet" }, From 9833911e0630a598db547f73ef10eab5c392c1ad Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 9 Jan 2025 01:41:42 -0500 Subject: [PATCH 272/368] Promote Request::Heartbeat from an enum variant to a struct --- coordinator/p2p/libp2p/src/lib.rs | 24 ++++++++++++------------ coordinator/p2p/libp2p/src/reqres.rs | 5 ++--- coordinator/p2p/libp2p/src/swarm.rs | 4 +++- coordinator/p2p/src/cosign.rs | 0 coordinator/p2p/src/heartbeat.rs | 17 ++++++++++++++--- coordinator/p2p/src/lib.rs | 15 +++++++++++---- 6 files changed, 42 insertions(+), 23 deletions(-) create mode 100644 coordinator/p2p/src/cosign.rs diff --git a/coordinator/p2p/libp2p/src/lib.rs b/coordinator/p2p/libp2p/src/lib.rs index 0778813f..2f6defa7 100644 --- a/coordinator/p2p/libp2p/src/lib.rs +++ b/coordinator/p2p/libp2p/src/lib.rs @@ -35,7 +35,7 @@ use libp2p::{ SwarmBuilder, }; -use serai_coordinator_p2p::TributaryBlockWithCommit; +use serai_coordinator_p2p::{Heartbeat, TributaryBlockWithCommit}; /// A struct to sync the validators from the Serai node in order to keep track of them. mod validators; @@ -88,13 +88,12 @@ pub struct Peer<'a> { impl serai_coordinator_p2p::Peer<'_> for Peer<'_> { fn send_heartbeat( &self, - set: ValidatorSet, - latest_block_hash: [u8; 32], + heartbeat: Heartbeat, ) -> impl Send + Future>> { async move { const HEARTBEAT_TIMEOUT: Duration = Duration::from_secs(5); - let request = Request::Heartbeat { set, latest_block_hash }; + let request = Request::Heartbeat(heartbeat); let (sender, receiver) = oneshot::channel(); self .outbound_requests @@ -341,9 +340,7 @@ impl serai_coordinator_p2p::P2p for Libp2p { fn heartbeat( &self, - ) -> impl Send - + Future>)> - { + ) -> impl Send + Future>)> { async move { let (request_id, set, latest_block_hash) = self .heartbeat_requests @@ -357,16 +354,19 @@ impl serai_coordinator_p2p::P2p for Libp2p { let respond = self.inbound_request_responses.clone(); async move { // The swarm task expects us to respond to every request. 
If the caller drops this - // channel, we'll receive `Err` and respond with `None`, safely satisfying that bound + // channel, we'll receive `Err` and respond with `vec![]`, safely satisfying that bound // without requiring the caller send a value down this channel - let response = - if let Ok(blocks) = receiver.await { Response::Blocks(blocks) } else { Response::None }; + let response = if let Ok(blocks) = receiver.await { + Response::Blocks(blocks) + } else { + Response::Blocks(vec![]) + }; respond .send((request_id, response)) .expect("inbound_request_responses_recv was dropped?"); } }); - (set, latest_block_hash, sender) + (Heartbeat { set, latest_block_hash }, sender) } } @@ -388,7 +388,7 @@ impl serai_coordinator_p2p::P2p for Libp2p { let response = if let Ok(notable_cosigns) = receiver.await { Response::NotableCosigns(notable_cosigns) } else { - Response::None + Response::NotableCosigns(vec![]) }; respond .send((request_id, response)) diff --git a/coordinator/p2p/libp2p/src/reqres.rs b/coordinator/p2p/libp2p/src/reqres.rs index 4f8fa236..617e1027 100644 --- a/coordinator/p2p/libp2p/src/reqres.rs +++ b/coordinator/p2p/libp2p/src/reqres.rs @@ -4,7 +4,6 @@ use std::io; use async_trait::async_trait; use borsh::{BorshSerialize, BorshDeserialize}; -use serai_client::validator_sets::primitives::ValidatorSet; use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt}; @@ -15,7 +14,7 @@ pub use request_response::{RequestId, Message}; use serai_cosign::SignedCosign; -use serai_coordinator_p2p::TributaryBlockWithCommit; +use serai_coordinator_p2p::{Heartbeat, TributaryBlockWithCommit}; /// The maximum message size for the request-response protocol // This is derived from the heartbeat message size as it's our largest message @@ -31,7 +30,7 @@ pub(crate) enum Request { /// intervals. /// /// If our peers have more blocks than us, they're expected to respond with those blocks. - Heartbeat { set: ValidatorSet, latest_block_hash: [u8; 32] }, + Heartbeat(Heartbeat), /// A request for the notable cosigns for a global session. NotableCosigns { global_session: [u8; 32] }, } diff --git a/coordinator/p2p/libp2p/src/swarm.rs b/coordinator/p2p/libp2p/src/swarm.rs index e0a6762b..0c3e2664 100644 --- a/coordinator/p2p/libp2p/src/swarm.rs +++ b/coordinator/p2p/libp2p/src/swarm.rs @@ -21,6 +21,8 @@ use libp2p::{ swarm::{dial_opts::DialOpts, SwarmEvent, Swarm}, }; +use serai_coordinator_p2p::Heartbeat; + use crate::{ Peers, BehaviorEvent, Behavior, validators::{self, Validators}, @@ -105,7 +107,7 @@ impl SwarmTask { match event { reqres::Event::Message { message, .. 
} => match message { reqres::Message::Request { request_id, request, channel } => match request { - reqres::Request::Heartbeat { set, latest_block_hash } => { + reqres::Request::Heartbeat(Heartbeat { set, latest_block_hash }) => { self.inbound_request_response_channels.insert(request_id, channel); let _: Result<_, _> = self.heartbeat_requests.send((request_id, set, latest_block_hash)); diff --git a/coordinator/p2p/src/cosign.rs b/coordinator/p2p/src/cosign.rs new file mode 100644 index 00000000..e69de29b diff --git a/coordinator/p2p/src/heartbeat.rs b/coordinator/p2p/src/heartbeat.rs index 87827e7f..4966c471 100644 --- a/coordinator/p2p/src/heartbeat.rs +++ b/coordinator/p2p/src/heartbeat.rs @@ -10,7 +10,7 @@ use tributary::{ReadWrite, TransactionTrait, Block, Tributary, TributaryReader}; use serai_db::*; use serai_task::ContinuallyRan; -use crate::{Peer, P2p}; +use crate::{Heartbeat, Peer, P2p}; // Amount of blocks in a minute const BLOCKS_PER_MINUTE: usize = (60 / (tributary::tendermint::TARGET_BLOCK_TIME / 1000)) as usize; @@ -70,7 +70,11 @@ impl ContinuallyRan for HeartbeatTask ContinuallyRan for HeartbeatTask: Send { /// Send a heartbeat to this peer. fn send_heartbeat( &self, - set: ValidatorSet, - latest_block_hash: [u8; 32], + heartbeat: Heartbeat, ) -> impl Send + Future>>; } @@ -48,8 +56,7 @@ pub trait P2p: Send + Sync + Clone + tributary::P2p + serai_cosign::RequestNotab /// descending blocks. fn heartbeat( &self, - ) -> impl Send - + Future>)>; + ) -> impl Send + Future>)>; /// A cancel-safe future for the next request for the notable cosigns of a gloabl session. /// From 201a444e8927c80bdafc856280af1d3637364249 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 9 Jan 2025 02:16:05 -0500 Subject: [PATCH 273/368] Remove tokio dependency from serai-coordinator-p2p Re-implements tokio::mpsc::oneshot with a thin wrapper around async-channel. Also replaces futures-util with futures-lite. 
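As a sketch of the intended usage (the `demo` function is illustrative, and `channel::<u8>()` assumes the module's `channel` is generic over the message type, per the module added below):

    use serai_coordinator_p2p::oneshot;

    // The Sender consumes itself on send, enforcing at most one message, and
    // the Receiver is directly a Future of the result
    async fn demo() {
      let (sender, receiver) = oneshot::channel::<u8>();
      // Backed by async_channel::bounded(1), so this send never blocks
      sender.send(5).expect("receiver was dropped");
      assert_eq!(receiver.await.expect("sender was dropped"), 5);
    }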
--- Cargo.lock | 41 ++++++++++++++++++++--------- coordinator/p2p/Cargo.toml | 4 +-- coordinator/p2p/libp2p/src/lib.rs | 4 +-- coordinator/p2p/libp2p/src/swarm.rs | 4 +-- coordinator/p2p/src/cosign.rs | 0 coordinator/p2p/src/heartbeat.rs | 2 +- coordinator/p2p/src/lib.rs | 5 ++-- coordinator/p2p/src/oneshot.rs | 35 ++++++++++++++++++++++++ 8 files changed, 73 insertions(+), 22 deletions(-) delete mode 100644 coordinator/p2p/src/cosign.rs create mode 100644 coordinator/p2p/src/oneshot.rs diff --git a/Cargo.lock b/Cargo.lock index dfeea6f6..379c81ba 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -840,6 +840,18 @@ dependencies = [ "futures-core", ] +[[package]] +name = "async-channel" +version = "2.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "89b47800b0be77592da0afd425cc03468052844aff33b84e33cc696f64e77b6a" +dependencies = [ + "concurrent-queue", + "event-listener-strategy", + "futures-core", + "pin-project-lite", +] + [[package]] name = "async-io" version = "2.4.0" @@ -2528,7 +2540,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "33d852cb9b869c2a9b3df2f71a3074817f01e1844f839a144f5fcef059a4eb5d" dependencies = [ "libc", - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -3047,7 +3059,10 @@ version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cef40d21ae2c515b51041df9ed313ed21e572df340ea58a922a0aefe7e8891a1" dependencies = [ + "fastrand", "futures-core", + "futures-io", + "parking", "pin-project-lite", ] @@ -3548,7 +3563,7 @@ dependencies = [ "httpdate", "itoa", "pin-project-lite", - "socket2 0.4.10", + "socket2 0.5.8", "tokio", "tower-service", "tracing", @@ -4112,7 +4127,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fc2f4eb4bc735547cfed7c0a4922cbd04a4655978c09b54f1f7b228750664c34" dependencies = [ "cfg-if", - "windows-targets 0.48.5", + "windows-targets 0.52.6", ] [[package]] @@ -6907,7 +6922,7 @@ dependencies = [ "errno", "libc", "linux-raw-sys", - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -7450,7 +7465,7 @@ version = "0.10.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ "array-bytes", - "async-channel", + "async-channel 1.9.0", "async-trait", "asynchronous-codec", "bytes", @@ -7491,7 +7506,7 @@ name = "sc-network-bitswap" version = "0.10.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ - "async-channel", + "async-channel 1.9.0", "cid", "futures", "libp2p-identity", @@ -7548,7 +7563,7 @@ version = "0.10.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ "array-bytes", - "async-channel", + "async-channel 1.9.0", "futures", "libp2p-identity", "log", @@ -7569,7 +7584,7 @@ version = "0.10.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ "array-bytes", - "async-channel", + "async-channel 1.9.0", "async-trait", "fork-tree", "futures", @@ -7943,7 +7958,7 @@ name = "sc-utils" version = "4.0.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ - "async-channel", + "async-channel 1.9.0", "futures", "futures-timer", "lazy_static", @@ -8369,14 +8384,14 @@ dependencies = [ name = "serai-coordinator-p2p" version = "0.1.0" dependencies = [ + "async-channel 2.3.1", "borsh", - 
"futures-util", + "futures-lite", "log", "serai-client", "serai-cosign", "serai-db", "serai-task", - "tokio", "tributary-chain", ] @@ -10528,7 +10543,7 @@ dependencies = [ "getrandom", "once_cell", "rustix", - "windows-sys 0.52.0", + "windows-sys 0.59.0", ] [[package]] @@ -11752,7 +11767,7 @@ version = "0.1.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb" dependencies = [ - "windows-sys 0.48.0", + "windows-sys 0.59.0", ] [[package]] diff --git a/coordinator/p2p/Cargo.toml b/coordinator/p2p/Cargo.toml index 44183258..afb0b483 100644 --- a/coordinator/p2p/Cargo.toml +++ b/coordinator/p2p/Cargo.toml @@ -26,8 +26,8 @@ serai-client = { path = "../../substrate/client", default-features = false, feat serai-cosign = { path = "../cosign" } tributary = { package = "tributary-chain", path = "../tributary" } -futures-util = { version = "0.3", default-features = false, features = ["std"] } -tokio = { version = "1", default-features = false, features = ["sync"] } +async-channel = { version = "2", default-features = false, features = ["std"] } +futures-lite = { version = "2", default-features = false, features = ["std"] } log = { version = "0.4", default-features = false, features = ["std"] } serai-task = { path = "../../common/task", version = "0.1" } diff --git a/coordinator/p2p/libp2p/src/lib.rs b/coordinator/p2p/libp2p/src/lib.rs index 2f6defa7..f15606d0 100644 --- a/coordinator/p2p/libp2p/src/lib.rs +++ b/coordinator/p2p/libp2p/src/lib.rs @@ -19,7 +19,7 @@ use serai_client::{ Serai, }; -use tokio::sync::{mpsc, oneshot, Mutex, RwLock}; +use tokio::sync::{mpsc, Mutex, RwLock}; use serai_task::{Task, ContinuallyRan}; @@ -35,7 +35,7 @@ use libp2p::{ SwarmBuilder, }; -use serai_coordinator_p2p::{Heartbeat, TributaryBlockWithCommit}; +use serai_coordinator_p2p::{oneshot, Heartbeat, TributaryBlockWithCommit}; /// A struct to sync the validators from the Serai node in order to keep track of them. 
mod validators; diff --git a/coordinator/p2p/libp2p/src/swarm.rs b/coordinator/p2p/libp2p/src/swarm.rs index 0c3e2664..16200954 100644 --- a/coordinator/p2p/libp2p/src/swarm.rs +++ b/coordinator/p2p/libp2p/src/swarm.rs @@ -8,7 +8,7 @@ use borsh::BorshDeserialize; use serai_client::validator_sets::primitives::ValidatorSet; -use tokio::sync::{mpsc, oneshot, RwLock}; +use tokio::sync::{mpsc, RwLock}; use serai_task::TaskHandle; @@ -21,7 +21,7 @@ use libp2p::{ swarm::{dial_opts::DialOpts, SwarmEvent, Swarm}, }; -use serai_coordinator_p2p::Heartbeat; +use serai_coordinator_p2p::{oneshot, Heartbeat}; use crate::{ Peers, BehaviorEvent, Behavior, diff --git a/coordinator/p2p/src/cosign.rs b/coordinator/p2p/src/cosign.rs deleted file mode 100644 index e69de29b..00000000 diff --git a/coordinator/p2p/src/heartbeat.rs b/coordinator/p2p/src/heartbeat.rs index 4966c471..f79151ad 100644 --- a/coordinator/p2p/src/heartbeat.rs +++ b/coordinator/p2p/src/heartbeat.rs @@ -3,7 +3,7 @@ use std::time::{Duration, SystemTime}; use serai_client::validator_sets::primitives::ValidatorSet; -use futures_util::FutureExt; +use futures_lite::FutureExt; use tributary::{ReadWrite, TransactionTrait, Block, Tributary, TributaryReader}; diff --git a/coordinator/p2p/src/lib.rs b/coordinator/p2p/src/lib.rs index 262cb3bf..11c9cf53 100644 --- a/coordinator/p2p/src/lib.rs +++ b/coordinator/p2p/src/lib.rs @@ -8,10 +8,11 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet}; -use tokio::sync::oneshot; - use serai_cosign::SignedCosign; +/// A oneshot channel. +pub mod oneshot; + /// The heartbeat task, effecting sync of Tributaries pub mod heartbeat; diff --git a/coordinator/p2p/src/oneshot.rs b/coordinator/p2p/src/oneshot.rs new file mode 100644 index 00000000..bced6a1a --- /dev/null +++ b/coordinator/p2p/src/oneshot.rs @@ -0,0 +1,35 @@ +use core::{ + pin::Pin, + task::{Poll, Context}, + future::Future, +}; + +pub use async_channel::{SendError, RecvError}; + +/// The sender for a oneshot channel. +pub struct Sender(async_channel::Sender); +impl Sender { + /// Send a value down the channel. + /// + /// Returns an error if the channel's receiver was dropped. + pub fn send(self, msg: T) -> Result<(), SendError> { + self.0.send_blocking(msg) + } +} + +/// The receiver for a oneshot channel. +pub struct Receiver(async_channel::Receiver); +impl Future for Receiver { + type Output = Result; + fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { + let recv = self.0.recv(); + futures_lite::pin!(recv); + recv.poll(cx) + } +} + +/// Create a new oneshot channel. 
+pub fn channel() -> (Sender, Receiver) { + let (send, recv) = async_channel::bounded(1); + (Sender(send), Receiver(recv)) +} From b101e2211a0b9163abd71c4ff036cb9a22f0789d Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 9 Jan 2025 06:23:14 -0500 Subject: [PATCH 274/368] Complete serai-coordinator-p2p --- Cargo.lock | 24 ++---- coordinator/p2p/Cargo.toml | 2 +- coordinator/p2p/README.md | 2 +- coordinator/p2p/libp2p/src/lib.rs | 4 +- coordinator/p2p/libp2p/src/reqres.rs | 2 +- coordinator/p2p/libp2p/src/swarm.rs | 9 +- coordinator/p2p/src/heartbeat.rs | 31 +++++-- coordinator/p2p/src/lib.rs | 124 ++++++++++++++++++++++++++- coordinator/p2p/src/oneshot.rs | 35 -------- 9 files changed, 156 insertions(+), 77 deletions(-) delete mode 100644 coordinator/p2p/src/oneshot.rs diff --git a/Cargo.lock b/Cargo.lock index 379c81ba..840a6c53 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -840,18 +840,6 @@ dependencies = [ "futures-core", ] -[[package]] -name = "async-channel" -version = "2.3.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "89b47800b0be77592da0afd425cc03468052844aff33b84e33cc696f64e77b6a" -dependencies = [ - "concurrent-queue", - "event-listener-strategy", - "futures-core", - "pin-project-lite", -] - [[package]] name = "async-io" version = "2.4.0" @@ -7465,7 +7453,7 @@ version = "0.10.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ "array-bytes", - "async-channel 1.9.0", + "async-channel", "async-trait", "asynchronous-codec", "bytes", @@ -7506,7 +7494,7 @@ name = "sc-network-bitswap" version = "0.10.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ - "async-channel 1.9.0", + "async-channel", "cid", "futures", "libp2p-identity", @@ -7563,7 +7551,7 @@ version = "0.10.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ "array-bytes", - "async-channel 1.9.0", + "async-channel", "futures", "libp2p-identity", "log", @@ -7584,7 +7572,7 @@ version = "0.10.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ "array-bytes", - "async-channel 1.9.0", + "async-channel", "async-trait", "fork-tree", "futures", @@ -7958,7 +7946,7 @@ name = "sc-utils" version = "4.0.0-dev" source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" dependencies = [ - "async-channel 1.9.0", + "async-channel", "futures", "futures-timer", "lazy_static", @@ -8384,7 +8372,6 @@ dependencies = [ name = "serai-coordinator-p2p" version = "0.1.0" dependencies = [ - "async-channel 2.3.1", "borsh", "futures-lite", "log", @@ -8392,6 +8379,7 @@ dependencies = [ "serai-cosign", "serai-db", "serai-task", + "tokio", "tributary-chain", ] diff --git a/coordinator/p2p/Cargo.toml b/coordinator/p2p/Cargo.toml index afb0b483..7b7c055c 100644 --- a/coordinator/p2p/Cargo.toml +++ b/coordinator/p2p/Cargo.toml @@ -26,8 +26,8 @@ serai-client = { path = "../../substrate/client", default-features = false, feat serai-cosign = { path = "../cosign" } tributary = { package = "tributary-chain", path = "../tributary" } -async-channel = { version = "2", default-features = false, features = ["std"] } futures-lite = { version = "2", default-features = false, features = ["std"] } +tokio = { version = "1", default-features = false, features = ["sync", "macros"] } log = { version = "0.4", default-features = false, features = 
["std"] } serai-task = { path = "../../common/task", version = "0.1" } diff --git a/coordinator/p2p/README.md b/coordinator/p2p/README.md index 9d6d9aef..7a8de210 100644 --- a/coordinator/p2p/README.md +++ b/coordinator/p2p/README.md @@ -1,3 +1,3 @@ # Serai Coordinator P2P -The P2P abstraction used by Serai's coordinator. +The P2P abstraction used by Serai's coordinator, and tasks over it. diff --git a/coordinator/p2p/libp2p/src/lib.rs b/coordinator/p2p/libp2p/src/lib.rs index f15606d0..2f6defa7 100644 --- a/coordinator/p2p/libp2p/src/lib.rs +++ b/coordinator/p2p/libp2p/src/lib.rs @@ -19,7 +19,7 @@ use serai_client::{ Serai, }; -use tokio::sync::{mpsc, Mutex, RwLock}; +use tokio::sync::{mpsc, oneshot, Mutex, RwLock}; use serai_task::{Task, ContinuallyRan}; @@ -35,7 +35,7 @@ use libp2p::{ SwarmBuilder, }; -use serai_coordinator_p2p::{oneshot, Heartbeat, TributaryBlockWithCommit}; +use serai_coordinator_p2p::{Heartbeat, TributaryBlockWithCommit}; /// A struct to sync the validators from the Serai node in order to keep track of them. mod validators; diff --git a/coordinator/p2p/libp2p/src/reqres.rs b/coordinator/p2p/libp2p/src/reqres.rs index 617e1027..221cbdf3 100644 --- a/coordinator/p2p/libp2p/src/reqres.rs +++ b/coordinator/p2p/libp2p/src/reqres.rs @@ -19,7 +19,7 @@ use serai_coordinator_p2p::{Heartbeat, TributaryBlockWithCommit}; /// The maximum message size for the request-response protocol // This is derived from the heartbeat message size as it's our largest message pub(crate) const MAX_LIBP2P_REQRES_MESSAGE_SIZE: usize = - (tributary::BLOCK_SIZE_LIMIT * serai_coordinator_p2p::heartbeat::BLOCKS_PER_BATCH) + 1024; + 1024 + serai_coordinator_p2p::heartbeat::BATCH_SIZE_LIMIT; const PROTOCOL: &str = "/serai/coordinator/reqres/1.0.0"; diff --git a/coordinator/p2p/libp2p/src/swarm.rs b/coordinator/p2p/libp2p/src/swarm.rs index 16200954..a9b13bf0 100644 --- a/coordinator/p2p/libp2p/src/swarm.rs +++ b/coordinator/p2p/libp2p/src/swarm.rs @@ -8,7 +8,7 @@ use borsh::BorshDeserialize; use serai_client::validator_sets::primitives::ValidatorSet; -use tokio::sync::{mpsc, RwLock}; +use tokio::sync::{mpsc, oneshot, RwLock}; use serai_task::TaskHandle; @@ -21,7 +21,7 @@ use libp2p::{ swarm::{dial_opts::DialOpts, SwarmEvent, Swarm}, }; -use serai_coordinator_p2p::{oneshot, Heartbeat}; +use serai_coordinator_p2p::Heartbeat; use crate::{ Peers, BehaviorEvent, Behavior, @@ -69,11 +69,6 @@ pub(crate) struct SwarmTask { inbound_request_response_channels: HashMap>, heartbeat_requests: mpsc::UnboundedSender<(RequestId, ValidatorSet, [u8; 32])>, - /* TODO - let cosigns = Cosigning::::notable_cosigns(&self.db, global_session); - let res = reqres::Response::NotableCosigns(cosigns); - let _: Result<_, _> = self.swarm.behaviour_mut().reqres.send_response(channel, res); - */ notable_cosign_requests: mpsc::UnboundedSender<(RequestId, [u8; 32])>, inbound_request_responses: mpsc::UnboundedReceiver<(RequestId, Response)>, } diff --git a/coordinator/p2p/src/heartbeat.rs b/coordinator/p2p/src/heartbeat.rs index f79151ad..76d160ea 100644 --- a/coordinator/p2p/src/heartbeat.rs +++ b/coordinator/p2p/src/heartbeat.rs @@ -1,7 +1,7 @@ use core::future::Future; use std::time::{Duration, SystemTime}; -use serai_client::validator_sets::primitives::ValidatorSet; +use serai_client::validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ValidatorSet}; use futures_lite::FutureExt; @@ -15,19 +15,32 @@ use crate::{Heartbeat, Peer, P2p}; // Amount of blocks in a minute const BLOCKS_PER_MINUTE: usize = (60 / 
(tributary::tendermint::TARGET_BLOCK_TIME / 1000)) as usize; -/// The maximum amount of blocks to include/included within a batch. -pub const BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1; +/// The minimum amount of blocks to include/included within a batch, assuming there's blocks to +/// include in the batch. +/// +/// This decides the size limit of the Batch (the Block size limit multiplied by the minimum amount +/// of blocks we'll send). The actual amount of blocks sent will be the amount which fits within +/// the size limit. +pub const MIN_BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1; + +/// The size limit for a batch of blocks sent in response to a Heartbeat. +/// +/// This estimates the size of a commit as `32 + (MAX_VALIDATORS * 128)`. At the time of writing, a +/// commit is `8 + (validators * 32) + (32 + (validators * 32))` (for the time, list of validators, +/// and aggregate signature). Accordingly, this should be a safe over-estimate. +pub const BATCH_SIZE_LIMIT: usize = MIN_BLOCKS_PER_BATCH * + (tributary::BLOCK_SIZE_LIMIT + 32 + ((MAX_KEY_SHARES_PER_SET as usize) * 128)); /// Sends a heartbeat to other validators on regular intervals informing them of our Tributary's /// tip. /// /// If the other validator has more blocks then we do, they're expected to inform us. This forms /// the sync protocol for our Tributaries. -pub struct HeartbeatTask { - set: ValidatorSet, - tributary: Tributary, - reader: TributaryReader, - p2p: P, +pub(crate) struct HeartbeatTask { + pub(crate) set: ValidatorSet, + pub(crate) tributary: Tributary, + pub(crate) reader: TributaryReader, + pub(crate) p2p: P, } impl ContinuallyRan for HeartbeatTask { @@ -80,7 +93,7 @@ impl ContinuallyRan for HeartbeatTask impl Send + Future; } + +fn handle_notable_cosigns_request( + db: &D, + global_session: [u8; 32], + channel: oneshot::Sender>, +) { + let cosigns = Cosigning::::notable_cosigns(db, global_session); + channel.send(cosigns).expect("channel listening for cosign oneshot response was dropped?"); +} + +fn handle_heartbeat( + reader: &TributaryReader, + mut latest_block_hash: [u8; 32], + channel: oneshot::Sender>, +) { + let mut res_size = 8; + let mut res = vec![]; + // This former case should be covered by this latter case + while (res.len() < heartbeat::MIN_BLOCKS_PER_BATCH) || (res_size < heartbeat::BATCH_SIZE_LIMIT) { + let Some(block_after) = reader.block_after(&latest_block_hash) else { break }; + + let block = reader.block(&block_after).unwrap().serialize(); + let commit = reader.commit(&block_after).unwrap(); + res_size += 8 + block.len() + 8 + commit.len(); + res.push(TributaryBlockWithCommit { block, commit }); + + latest_block_hash = block_after; + } + channel + .send(res) + .map_err(|_| ()) + .expect("channel listening for heartbeat oneshot response was dropped?"); +} + +/// Run the P2P instance. +/// +/// `add_tributary`'s and `retire_tributary's senders, along with `send_cosigns`'s receiver, must +/// never be dropped. `retire_tributary` is not required to only be instructed with added +/// Tributaries. +pub async fn run( + db: impl Db, + p2p: P, + mut add_tributary: mpsc::UnboundedReceiver<(ValidatorSet, Tributary)>, + mut retire_tributary: mpsc::UnboundedReceiver, + send_cosigns: mpsc::UnboundedSender, +) { + let mut readers = HashMap::>::new(); + let mut tributaries = HashMap::<[u8; 32], mpsc::UnboundedSender>>::new(); + let mut heartbeat_tasks = HashMap::::new(); + + loop { + tokio::select! 
{ + tributary = add_tributary.recv() => { + let (set, tributary) = tributary.expect("add_tributary send was dropped?"); + let reader = tributary.reader(); + readers.insert(set, reader.clone()); + + let (heartbeat_task_def, heartbeat_task) = Task::new(); + tokio::spawn( + (HeartbeatTask { + set, + tributary: tributary.clone(), + reader: reader.clone(), + p2p: p2p.clone(), + }).continually_run(heartbeat_task_def, vec![]) + ); + heartbeat_tasks.insert(set, heartbeat_task); + + let (tributary_message_send, mut tributary_message_recv) = mpsc::unbounded_channel(); + tributaries.insert(tributary.genesis(), tributary_message_send); + // For as long as this sender exists, handle the messages from it on a dedicated task + tokio::spawn(async move { + while let Some(message) = tributary_message_recv.recv().await { + tributary.handle_message(&message).await; + } + }); + } + set = retire_tributary.recv() => { + let set = set.expect("retire_tributary send was dropped?"); + let Some(reader) = readers.remove(&set) else { continue }; + tributaries.remove(&reader.genesis()).expect("tributary reader but no tributary"); + heartbeat_tasks.remove(&set).expect("tributary but no heartbeat task"); + } + + (heartbeat, channel) = p2p.heartbeat() => { + if let Some(reader) = readers.get(&heartbeat.set) { + let reader = reader.clone(); // This is a cheap clone + // We spawn this on a task due to the DB reads needed + tokio::spawn(async move { + handle_heartbeat(&reader, heartbeat.latest_block_hash, channel) + }); + } + } + (global_session, channel) = p2p.notable_cosigns_request() => { + tokio::spawn({ + let db = db.clone(); + async move { handle_notable_cosigns_request(&db, global_session, channel) } + }); + } + (tributary, message) = p2p.tributary_message() => { + if let Some(tributary) = tributaries.get(&tributary) { + tributary.send(message).expect("tributary message recv was dropped?"); + } + } + cosign = p2p.cosign() => { + // We don't call `Cosigning::intake_cosign` here as that can only be called from a single + // location. We also need to intake the cosigns we produce, which means we need to merge + // these streams (signing, network) somehow. That's done with this mpsc channel + send_cosigns.send(cosign).expect("channel receiving cosigns was dropped?"); + } + } + } +} diff --git a/coordinator/p2p/src/oneshot.rs b/coordinator/p2p/src/oneshot.rs deleted file mode 100644 index bced6a1a..00000000 --- a/coordinator/p2p/src/oneshot.rs +++ /dev/null @@ -1,35 +0,0 @@ -use core::{ - pin::Pin, - task::{Poll, Context}, - future::Future, -}; - -pub use async_channel::{SendError, RecvError}; - -/// The sender for a oneshot channel. -pub struct Sender(async_channel::Sender); -impl Sender { - /// Send a value down the channel. - /// - /// Returns an error if the channel's receiver was dropped. - pub fn send(self, msg: T) -> Result<(), SendError> { - self.0.send_blocking(msg) - } -} - -/// The receiver for a oneshot channel. -pub struct Receiver(async_channel::Receiver); -impl Future for Receiver { - type Output = Result; - fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { - let recv = self.0.recv(); - futures_lite::pin!(recv); - recv.poll(cx) - } -} - -/// Create a new oneshot channel. 
-pub fn channel() -> (Sender, Receiver) {
-  let (send, recv) = async_channel::bounded(1);
-  (Sender(send), Receiver(recv))
-}

From 893a24a1cc615c57a1dc6d80e9f1ea21ba7c70fa Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Thu, 9 Jan 2025 06:57:12 -0500
Subject: [PATCH 275/368] Better document bounds in serai-coordinator-p2p

---
 coordinator/p2p/src/lib.rs | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/coordinator/p2p/src/lib.rs b/coordinator/p2p/src/lib.rs
index 1f7743cb..d285c8f0 100644
--- a/coordinator/p2p/src/lib.rs
+++ b/coordinator/p2p/src/lib.rs
@@ -59,7 +59,8 @@ pub trait P2p: Send + Sync + Clone + tributary::P2p + serai_cosign::RequestNotab
   /// A cancel-safe future for the next heartbeat received over the P2P network.
   ///
   /// Yields the validator set it's for, the latest block hash observed, and a channel to return the
-  /// descending blocks.
+  /// descending blocks. This channel MUST NOT and will not have its receiver dropped before a
+  /// message is sent.
   fn heartbeat(
     &self,
   ) -> impl Send + Future>)>;
@@ -67,6 +68,7 @@
   /// A cancel-safe future for the next request for the notable cosigns of a global session.
   ///
   /// Yields the global session the request is for and a channel to return the notable cosigns.
+  /// This channel MUST NOT and will not have its receiver dropped before a message is sent.
   fn notable_cosigns_request(
     &self,
   ) -> impl Send + Future>)>;
@@ -100,8 +102,11 @@ fn handle_heartbeat(
   while (res.len() < heartbeat::MIN_BLOCKS_PER_BATCH) || (res_size < heartbeat::BATCH_SIZE_LIMIT) {
     let Some(block_after) = reader.block_after(&latest_block_hash) else { break };

-    let block = reader.block(&block_after).unwrap().serialize();
-    let commit = reader.commit(&block_after).unwrap();
+    // These `break` conditions should only occur under edge cases, such as if we're actively
+    // deleting this Tributary due to being done with it
+    let Some(block) = reader.block(&block_after) else { break };
+    let block = block.serialize();
+    let Some(commit) = reader.commit(&block_after) else { break };
     res_size += 8 + block.len() + 8 + commit.len();
     res.push(TributaryBlockWithCommit { block, commit });

@@ -132,7 +137,7 @@ pub async fn run(
   loop {
     tokio::select! {
       tributary = add_tributary.recv() => {
-        let (set, tributary) = tributary.expect("add_tributary send was dropped?");
+        let (set, tributary) = tributary.expect("add_tributary send was dropped");
         let reader = tributary.reader();
         readers.insert(set, reader.clone());
@@ -157,7 +162,7 @@ pub async fn run(
         });
       }
       set = retire_tributary.recv() => {
-        let set = set.expect("retire_tributary send was dropped?");
+        let set = set.expect("retire_tributary send was dropped");
         let Some(reader) = readers.remove(&set) else { continue };
         tributaries.remove(&reader.genesis()).expect("tributary reader but no tributary");
         heartbeat_tasks.remove(&set).expect("tributary but no heartbeat task");
@@ -187,7 +192,7 @@ pub async fn run(
         // We don't call `Cosigning::intake_cosign` here as that can only be called from a single
         // location. We also need to intake the cosigns we produce, which means we need to merge
         // these streams (signing, network) somehow.
That's done with this mpsc channel - send_cosigns.send(cosign).expect("channel receiving cosigns was dropped?"); + send_cosigns.send(cosign).expect("channel receiving cosigns was dropped"); } } } From 9b0b5fd1e2cc88ae4eb097853819d48d73d894a5 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 9 Jan 2025 06:57:26 -0500 Subject: [PATCH 276/368] Have serai-cosign index finalized blocks' numbers --- coordinator/cosign/src/intend.rs | 10 ++++++---- coordinator/cosign/src/lib.rs | 16 ++++++++++++---- 2 files changed, 18 insertions(+), 8 deletions(-) diff --git a/coordinator/cosign/src/intend.rs b/coordinator/cosign/src/intend.rs index 7466ae5a..db38c905 100644 --- a/coordinator/cosign/src/intend.rs +++ b/coordinator/cosign/src/intend.rs @@ -78,7 +78,7 @@ impl ContinuallyRan for CosignIntendTask { // Check we are indexing a linear chain if (block_number > 1) && (<[u8; 32]>::from(block.header.parent_hash) != - SubstrateBlocks::get(&txn, block_number - 1) + SubstrateBlockHash::get(&txn, block_number - 1) .expect("indexing a block but haven't indexed its parent")) { Err(format!( @@ -86,14 +86,16 @@ impl ContinuallyRan for CosignIntendTask { block_number - 1 ))?; } - SubstrateBlocks::set(&mut txn, block_number, &block.hash()); + let block_hash = block.hash(); + SubstrateBlockHash::set(&mut txn, block_number, &block_hash); + SubstrateBlockNumber::set(&mut txn, block_hash, &block_number); let global_session_for_this_block = LatestGlobalSessionIntended::get(&txn); // If this is notable, it creates a new global session, which we index into the database // now if has_events == HasEvents::Notable { - let serai = self.serai.as_of(block.hash()); + let serai = self.serai.as_of(block_hash); let sets_and_keys = cosigning_sets(&serai).await?; let global_session = GlobalSession::id(sets_and_keys.iter().map(|(set, _key)| *set).collect()); @@ -159,7 +161,7 @@ impl ContinuallyRan for CosignIntendTask { &CosignIntent { global_session: global_session_for_this_block, block_number, - block_hash: block.hash(), + block_hash, notable: has_events == HasEvents::Notable, }, ); diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index 29420d56..7d909712 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -127,7 +127,8 @@ create_db! { // The following are populated by the intend task and used throughout the library // An index of Substrate blocks - SubstrateBlocks: (block_number: u64) -> [u8; 32], + SubstrateBlockHash: (block_number: u64) -> [u8; 32], + SubstrateBlockNumber: (block_hash: [u8; 32]) -> u64, // A mapping from a global session's ID to its relevant information. GlobalSessions: (global_session: [u8; 32]) -> GlobalSession, // The last block to be cosigned by a global session. @@ -270,17 +271,24 @@ impl Cosigning { Ok(LatestCosignedBlockNumber::get(getter).unwrap_or(0)) } - /// Fetch an cosigned Substrate block by its block number. + /// Fetch a cosigned Substrate block's hash by its block number. pub fn cosigned_block(getter: &impl Get, block_number: u64) -> Result, Faulted> { if block_number > Self::latest_cosigned_block_number(getter)? { return Ok(None); } Ok(Some( - SubstrateBlocks::get(getter, block_number).expect("cosigned block but didn't index it"), + SubstrateBlockHash::get(getter, block_number).expect("cosigned block but didn't index it"), )) } + /// Fetch a finalized block's number by its hash. + /// + /// This block is not guaranteed to be cosigned. 
+ pub fn finalized_block_number(getter: &impl Get, block_hash: [u8; 32]) -> Option { + SubstrateBlockNumber::get(getter, block_hash) + } + /// Fetch the notable cosigns for a global session in order to respond to requests. /// /// If this global session hasn't produced any notable cosigns, this will return the latest @@ -345,7 +353,7 @@ impl Cosigning { let network = cosign.cosigner; // Check our indexed blockchain includes a block with this block number - let Some(our_block_hash) = SubstrateBlocks::get(&self.db, cosign.block_number) else { + let Some(our_block_hash) = SubstrateBlockHash::get(&self.db, cosign.block_number) else { return Ok(true); }; let faulty = cosign.block_hash != our_block_hash; From 47eb793ce9062fc3f7f4af429488ba4b63e2925e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 9 Jan 2025 06:58:00 -0500 Subject: [PATCH 277/368] Slash upon Tendermint evidence Decoding slash evidence requires specifying the instantiated generic `TendermintNetwork`. While irrelevant, that generic includes a type satisfying `tributary::P2p`. It was only possible to route now that we've redone the P2P API. --- coordinator/src/main.rs | 2 +- coordinator/src/tributary/scan.rs | 47 ++++++++++++++++++++----------- 2 files changed, 32 insertions(+), 17 deletions(-) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 9d555b71..8af0f824 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -1,7 +1,7 @@ mod tributary; mod p2p { - use serai_coordinator_p2p::*; + pub use serai_coordinator_p2p::*; pub use serai_coordinator_libp2p_p2p::Libp2p; } diff --git a/coordinator/src/tributary/scan.rs b/coordinator/src/tributary/scan.rs index bfc9760b..2c39cd13 100644 --- a/coordinator/src/tributary/scan.rs +++ b/coordinator/src/tributary/scan.rs @@ -1,4 +1,4 @@ -use core::future::Future; +use core::{marker::PhantomData, future::Future}; use std::collections::HashMap; use ciphersuite::group::GroupEncoding; @@ -22,20 +22,27 @@ use serai_task::ContinuallyRan; use messages::sign::VariantSignId; -use crate::tributary::{ - db::*, - transaction::{SigningProtocolRound, Signed, Transaction}, +use serai_cosign::Cosigning; + +use crate::{ + p2p::P2p, + tributary::{ + db::*, + transaction::{SigningProtocolRound, Signed, Transaction}, + }, }; -struct ScanBlock<'a, D: DbTxn, TD: Db> { - txn: &'a mut D, +struct ScanBlock<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> { + _db: PhantomData, + _p2p: PhantomData
<P>
, + txn: &'a mut DT, set: ValidatorSet, validators: &'a [SeraiAddress], total_weight: u64, validator_weights: &'a HashMap, tributary: &'a TributaryReader, } -impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> { +impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { fn potentially_start_cosign(&mut self) { // Don't start a new cosigning instance if we're actively running one if TributaryDb::actively_cosigning(self.txn, self.set) { @@ -49,7 +56,11 @@ impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> { return; }; - let substrate_block_number = todo!("TODO"); + let Some(substrate_block_number) = + Cosigning::::finalized_block_number(self.txn, latest_substrate_block_to_cosign) + else { + panic!("cosigning a block our cosigner didn't index") + }; // Mark us as actively cosigning TributaryDb::start_cosigning(self.txn, self.set, substrate_block_number); @@ -320,12 +331,11 @@ impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> { Evidence::ConflictingMessages(first, second) => (first, Some(second)), Evidence::InvalidPrecommit(first) | Evidence::InvalidValidRound(first) => (first, None), }; - /* TODO let msgs = ( - decode_signed_message::>(&data.0).unwrap(), + decode_signed_message::>(&data.0).unwrap(), if data.1.is_some() { Some( - decode_signed_message::>(&data.1.unwrap()) + decode_signed_message::>(&data.1.unwrap()) .unwrap(), ) } else { @@ -336,9 +346,11 @@ impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> { // Since anything with evidence is fundamentally faulty behavior, not just temporal // errors, mark the node as fatally slashed TributaryDb::fatal_slash( - self.txn, msgs.0.msg.sender, &format!("invalid tendermint messages: {msgs:?}")); - */ - todo!("TODO") + self.txn, + self.set, + SeraiAddress(msgs.0.msg.sender), + &format!("invalid tendermint messages: {msgs:?}"), + ); } TributaryTransaction::Application(tx) => { self.handle_application_tx(block_number, tx); @@ -348,15 +360,16 @@ impl<'a, D: DbTxn, TD: Db> ScanBlock<'a, D, TD> { } } -struct ScanTributaryTask { +struct ScanTributaryTask { db: D, set: ValidatorSet, validators: Vec, total_weight: u64, validator_weights: HashMap, tributary: TributaryReader, + _p2p: PhantomData
<P>
, } -impl ContinuallyRan for ScanTributaryTask { +impl ContinuallyRan for ScanTributaryTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { let (mut last_block_number, mut last_block_hash) = @@ -386,6 +399,8 @@ impl ContinuallyRan for ScanTributaryTask { let mut txn = self.db.txn(); (ScanBlock { + _db: PhantomData::, + _p2p: PhantomData::
<P>,
      txn: &mut txn,
      set: self.set,
      validators: &self.validators,

From 23122712cb73003b9ab21e96c6eb001f46068f97 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Thu, 9 Jan 2025 19:50:20 -0500
Subject: [PATCH 278/368] Document validator jailing upon participation failures and slash report determination

These are TODOs. I just wanted to ensure this was written down and each
seemed too small for GH issues.

---
 coordinator/src/tributary/db.rs   | 14 +++++++++++++-
 coordinator/src/tributary/scan.rs | 12 +++++++++++-
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/coordinator/src/tributary/db.rs b/coordinator/src/tributary/db.rs
index fbcfcc01..0d85110d 100644
--- a/coordinator/src/tributary/db.rs
+++ b/coordinator/src/tributary/db.rs
@@ -272,7 +272,19 @@ impl TributaryDb {
   pub(crate) fn start_of_block(txn: &mut impl DbTxn, set: ValidatorSet, block_number: u64) {
     for topic in Reattempt::take(txn, set, block_number).unwrap_or(vec![]) {
-      // TODO: Slash all people who preprocessed but didn't share
+      /*
+        TODO: Slash all people who preprocessed but didn't share, and add a delay to their
+        participations in future protocols. When we call accumulate, if the participant has no
+        delay, their accumulation occurs immediately. Else, the accumulation occurs after the
+        specified delay.
+
+        This means even if faulty validators are first to preprocess, they won't be selected for
+        the signing set unless there's a lack of less faulty validators available.
+
+        We need to decrease this delay upon successful participations, and set it to the maximum upon
+        `f + 1` validators voting to fatally slash the validator in question. This won't issue the
+        fatal slash but should still be effective.
+      */
       Self::recognize_topic(txn, set, topic);
       if let Some(id) = topic.sign_id(set) {
         Self::send_message(txn, set, messages::sign::CoordinatorMessage::Reattempt { id });
diff --git a/coordinator/src/tributary/scan.rs b/coordinator/src/tributary/scan.rs
index 2c39cd13..fec89f28 100644
--- a/coordinator/src/tributary/scan.rs
+++ b/coordinator/src/tributary/scan.rs
@@ -201,7 +201,17 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> {
         DataSet::None => {}
         DataSet::Participating(data_set) => {
           // Find the median reported slashes for this validator
-          // TODO: This lets 34% perform a fatal slash. Should that be allowed?
+          /*
+            TODO: This lets 34% perform a fatal slash. That shouldn't be allowed. We need
+            to accept slash reports for a period past the threshold, and only fatally slash if we
+            have a supermajority agree the slash should be fatal. If there isn't a supermajority,
+            but the median believes the slash should be fatal, we need to fall back to a large
+            constant.
+
+            Also, TODO, each slash point should probably be considered as
+            `MAX_KEY_SHARES_PER_SET * BLOCK_TIME` seconds of downtime. As this time crosses
+            various thresholds (1 day, 3 days, etc), a multiplier should be attached.
+          */
           let mut median_slash_report = Vec::with_capacity(self.validators.len());
           for i in 0 .. self.validators.len() {
             let mut this_validator =

From 2a3eaf4d7ef2330bf802820e8ad8d3fc249a3c38 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Fri, 10 Jan 2025 01:20:26 -0500
Subject: [PATCH 279/368] Wrap the entire Libp2p object in an Arc

Makes `Clone` calls significantly cheaper as now only the outer Arc is cloned
(the inner ones have been removed).

Also wraps uses of Serai in an Arc as we shouldn't actually need/want
multiple caller connection pools.
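In miniature, the pattern this applies is (names here are illustrative, not the actual `Libp2pInner` layout):

    use std::sync::Arc;

    // All shared state moves into a private inner struct
    struct Inner {
      connections: u64,
    }

    // The public handle is a single Arc, so Clone is one reference-count bump
    // instead of cloning several inner Arcs field-by-field
    #[derive(Clone)]
    pub struct Handle(Arc<Inner>);

    impl Handle {
      pub fn new() -> Self {
        Handle(Arc::new(Inner { connections: 0 }))
      }
      pub fn connections(&self) -> u64 {
        self.0.connections
      }
    }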
--- coordinator/cosign/src/intend.rs | 4 +- coordinator/cosign/src/lib.rs | 9 ++-- coordinator/p2p/libp2p/src/dial.rs | 10 ++-- coordinator/p2p/libp2p/src/lib.rs | 60 +++++++++++++++--------- coordinator/p2p/libp2p/src/validators.rs | 10 ++-- coordinator/p2p/src/lib.rs | 3 ++ 6 files changed, 59 insertions(+), 37 deletions(-) diff --git a/coordinator/cosign/src/intend.rs b/coordinator/cosign/src/intend.rs index db38c905..9fa229c5 100644 --- a/coordinator/cosign/src/intend.rs +++ b/coordinator/cosign/src/intend.rs @@ -1,5 +1,5 @@ use core::future::Future; -use std::collections::HashMap; +use std::{sync::Arc, collections::HashMap}; use serai_client::{ primitives::{SeraiAddress, Amount}, @@ -57,7 +57,7 @@ async fn block_has_events_justifying_a_cosign( /// A task to determine which blocks we should intend to cosign. pub(crate) struct CosignIntendTask { pub(crate) db: D, - pub(crate) serai: Serai, + pub(crate) serai: Arc, } impl ContinuallyRan for CosignIntendTask { diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index 7d909712..aa2883aa 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -3,7 +3,7 @@ #![deny(missing_docs)] use core::{fmt::Debug, future::Future}; -use std::collections::HashMap; +use std::{sync::Arc, collections::HashMap}; use blake2::{Digest, Blake2s256}; @@ -240,7 +240,7 @@ impl Cosigning { /// only used once at any given time. pub fn spawn( db: D, - serai: Serai, + serai: Arc, request: R, tasks_to_run_upon_cosigning: Vec, ) -> Self { @@ -334,10 +334,9 @@ impl Cosigning { } } - /// Intake a cosign from the Serai network. + /// Intake a cosign. /// - /// - Returns Err(_) if there was an error trying to validate the cosign and it should be retired - /// later. + /// - Returns Err(_) if there was an error trying to validate the cosign. /// - Returns Ok(true) if the cosign was successfully handled or could not be handled at this /// time. /// - Returns Ok(false) if the cosign was invalid. diff --git a/coordinator/p2p/libp2p/src/dial.rs b/coordinator/p2p/libp2p/src/dial.rs index f8576217..1530e34b 100644 --- a/coordinator/p2p/libp2p/src/dial.rs +++ b/coordinator/p2p/libp2p/src/dial.rs @@ -1,5 +1,5 @@ use core::future::Future; -use std::collections::HashSet; +use std::{sync::Arc, collections::HashSet}; use rand_core::{RngCore, OsRng}; @@ -29,14 +29,18 @@ const TARGET_PEERS_PER_NETWORK: usize = 5; // TODO const TARGET_DIALED_PEERS_PER_NETWORK: usize = 3; pub(crate) struct DialTask { - serai: Serai, + serai: Arc, validators: Validators, peers: Peers, to_dial: mpsc::UnboundedSender, } impl DialTask { - pub(crate) fn new(serai: Serai, peers: Peers, to_dial: mpsc::UnboundedSender) -> Self { + pub(crate) fn new( + serai: Arc, + peers: Peers, + to_dial: mpsc::UnboundedSender, + ) -> Self { DialTask { serai: serai.clone(), validators: Validators::new(serai).0, peers, to_dial } } } diff --git a/coordinator/p2p/libp2p/src/lib.rs b/coordinator/p2p/libp2p/src/lib.rs index 2f6defa7..d3f09e61 100644 --- a/coordinator/p2p/libp2p/src/lib.rs +++ b/coordinator/p2p/libp2p/src/lib.rs @@ -131,33 +131,35 @@ struct Behavior { gossip: gossip::Behavior, } -/// The libp2p-backed P2P implementation. -/// -/// The P2p trait implementation does not support backpressure and is expected to be fully -/// utilized. Failure to poll the entire API will cause unbounded memory growth. 
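The doc comment above (relocated by this diff) warns that the P2p API must be fully polled. A small self-contained sketch of why, assuming nothing beyond tokio's unbounded channels (which these types are built on): sends never block, so a receiver left unpolled accumulates messages without bound.

use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
  let (send, mut recv) = mpsc::unbounded_channel::<Vec<u8>>();
  // Sends never block nor fail while the receiver exists, so if nothing drains
  // `recv`, every one of these buffers stays resident
  for i in 0u8 .. 100 {
    send.send(vec![i; 1024]).unwrap();
  }
  // Polling the receiver is what bounds memory
  let mut drained = 0;
  while let Ok(msg) = recv.try_recv() {
    drained += msg.len();
  }
  assert_eq!(drained, 100 * 1024);
}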
#[allow(clippy::type_complexity)] -#[derive(Clone)] -pub struct Libp2p { +struct Libp2pInner { peers: Peers, gossip: mpsc::UnboundedSender, outbound_requests: mpsc::UnboundedSender<(PeerId, Request, oneshot::Sender)>, - tributary_gossip: Arc)>>>, + tributary_gossip: Mutex)>>, - signed_cosigns: Arc>>, + signed_cosigns: Mutex>, signed_cosigns_send: mpsc::UnboundedSender, - heartbeat_requests: Arc>>, - notable_cosign_requests: Arc>>, + heartbeat_requests: Mutex>, + notable_cosign_requests: Mutex>, inbound_request_responses: mpsc::UnboundedSender<(RequestId, Response)>, } +/// The libp2p-backed P2P implementation. +/// +/// The P2p trait implementation does not support backpressure and is expected to be fully +/// utilized. Failure to poll the entire API will cause unbounded memory growth. +#[derive(Clone)] +pub struct Libp2p(Arc); + impl Libp2p { /// Create a new libp2p-backed P2P instance. /// /// This will spawn all of the internal tasks necessary for functioning. - pub fn new(serai_key: &Zeroizing, serai: Serai) -> Libp2p { + pub fn new(serai_key: &Zeroizing, serai: Arc) -> Libp2p { // Define the object we track peers with let peers = Peers { peers: Arc::new(RwLock::new(HashMap::new())) }; @@ -239,21 +241,21 @@ impl Libp2p { inbound_request_responses_recv, ); - Libp2p { + Libp2p(Arc::new(Libp2pInner { peers, gossip: gossip_send, outbound_requests: outbound_requests_send, - tributary_gossip: Arc::new(Mutex::new(tributary_gossip_recv)), + tributary_gossip: Mutex::new(tributary_gossip_recv), - signed_cosigns: Arc::new(Mutex::new(signed_cosigns_recv)), + signed_cosigns: Mutex::new(signed_cosigns_recv), signed_cosigns_send, - heartbeat_requests: Arc::new(Mutex::new(heartbeat_requests_recv)), - notable_cosign_requests: Arc::new(Mutex::new(notable_cosign_requests_recv)), + heartbeat_requests: Mutex::new(heartbeat_requests_recv), + notable_cosign_requests: Mutex::new(notable_cosign_requests_recv), inbound_request_responses: inbound_request_responses_send, - } + })) } } @@ -261,6 +263,7 @@ impl tributary::P2p for Libp2p { fn broadcast(&self, tributary: [u8; 32], message: Vec) -> impl Send + Future { async move { self + .0 .gossip .send(Message::Tributary { tributary, message }) .expect("gossip recv channel was dropped?"); @@ -281,7 +284,7 @@ impl serai_cosign::RequestNotableCosigns for Libp2p { let request = Request::NotableCosigns { global_session }; - let peers = self.peers.peers.read().await.clone(); + let peers = self.0.peers.peers.read().await.clone(); // HashSet of all peers let peers = peers.into_values().flat_map(<_>::into_iter).collect::>(); // Vec of all peers @@ -297,6 +300,7 @@ impl serai_cosign::RequestNotableCosigns for Libp2p { let (sender, receiver) = oneshot::channel(); self + .0 .outbound_requests .send((peer, request, sender)) .expect("outbound requests recv channel was dropped?"); @@ -310,6 +314,7 @@ impl serai_cosign::RequestNotableCosigns for Libp2p { { for cosign in cosigns { self + .0 .signed_cosigns_send .send(cosign) .expect("signed_cosigns recv in this object was dropped?"); @@ -327,22 +332,29 @@ impl serai_coordinator_p2p::P2p for Libp2p { fn peers(&self, network: NetworkId) -> impl Send + Future>> { async move { - let Some(peer_ids) = self.peers.peers.read().await.get(&network).cloned() else { + let Some(peer_ids) = self.0.peers.peers.read().await.get(&network).cloned() else { return vec![]; }; let mut res = vec![]; for id in peer_ids { - res.push(Peer { outbound_requests: &self.outbound_requests, id }); + res.push(Peer { outbound_requests: &self.0.outbound_requests, id 
}); } res } } + fn publish_cosign(&self, cosign: SignedCosign) -> impl Send + Future { + async move { + self.0.gossip.send(Message::Cosign(cosign)).expect("gossip recv channel was dropped?"); + } + } + fn heartbeat( &self, ) -> impl Send + Future>)> { async move { let (request_id, set, latest_block_hash) = self + .0 .heartbeat_requests .lock() .await @@ -351,7 +363,7 @@ impl serai_coordinator_p2p::P2p for Libp2p { .expect("heartbeat_requests_send was dropped?"); let (sender, receiver) = oneshot::channel(); tokio::spawn({ - let respond = self.inbound_request_responses.clone(); + let respond = self.0.inbound_request_responses.clone(); async move { // The swarm task expects us to respond to every request. If the caller drops this // channel, we'll receive `Err` and respond with `vec![]`, safely satisfying that bound @@ -375,6 +387,7 @@ impl serai_coordinator_p2p::P2p for Libp2p { ) -> impl Send + Future>)> { async move { let (request_id, global_session) = self + .0 .notable_cosign_requests .lock() .await @@ -383,7 +396,7 @@ impl serai_coordinator_p2p::P2p for Libp2p { .expect("notable_cosign_requests_send was dropped?"); let (sender, receiver) = oneshot::channel(); tokio::spawn({ - let respond = self.inbound_request_responses.clone(); + let respond = self.0.inbound_request_responses.clone(); async move { let response = if let Ok(notable_cosigns) = receiver.await { Response::NotableCosigns(notable_cosigns) @@ -401,13 +414,14 @@ impl serai_coordinator_p2p::P2p for Libp2p { fn tributary_message(&self) -> impl Send + Future)> { async move { - self.tributary_gossip.lock().await.recv().await.expect("tributary_gossip send was dropped?") + self.0.tributary_gossip.lock().await.recv().await.expect("tributary_gossip send was dropped?") } } fn cosign(&self) -> impl Send + Future { async move { self + .0 .signed_cosigns .lock() .await diff --git a/coordinator/p2p/libp2p/src/validators.rs b/coordinator/p2p/libp2p/src/validators.rs index 0ce4c91b..951a5e99 100644 --- a/coordinator/p2p/libp2p/src/validators.rs +++ b/coordinator/p2p/libp2p/src/validators.rs @@ -21,7 +21,7 @@ pub(crate) struct Changes { } pub(crate) struct Validators { - serai: Serai, + serai: Arc, // A cache for which session we're populated with the validators of sessions: HashMap, @@ -35,7 +35,7 @@ pub(crate) struct Validators { } impl Validators { - pub(crate) fn new(serai: Serai) -> (Self, mpsc::UnboundedReceiver) { + pub(crate) fn new(serai: Arc) -> (Self, mpsc::UnboundedReceiver) { let (send, recv) = mpsc::unbounded_channel(); let validators = Validators { serai, @@ -148,7 +148,7 @@ impl Validators { /// Update the view of the validators. pub(crate) async fn update(&mut self) -> Result<(), String> { - let session_changes = Self::session_changes(&self.serai, &self.sessions).await?; + let session_changes = Self::session_changes(&*self.serai, &self.sessions).await?; self.incorporate_session_changes(session_changes); Ok(()) } @@ -174,7 +174,9 @@ impl UpdateValidatorsTask { /// Spawn a new instance of the UpdateValidatorsTask. /// /// This returns a reference to the Validators it updates after spawning itself. 
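The `&*self.serai` call above is a reborrow through `Arc`'s `Deref`, letting callees keep taking plain references while callers share one `Arc<Serai>`. A minimal sketch with a hypothetical `Client` type:

use std::sync::Arc;

struct Client(u8);

// The callee is oblivious to how the caller owns the Client
fn version(client: &Client) -> u8 {
  client.0
}

fn main() {
  let shared = Arc::new(Client(7));
  // Explicit reborrow through Deref, as in `Self::session_changes(&*self.serai, ..)`
  assert_eq!(version(&*shared), 7);
  // Auto-deref coercion achieves the same for a single level of Arc
  assert_eq!(version(&shared), 7);
}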
- pub(crate) fn spawn(serai: Serai) -> (Arc>, mpsc::UnboundedReceiver) { + pub(crate) fn spawn( + serai: Arc, + ) -> (Arc>, mpsc::UnboundedReceiver) { // The validators which will be updated let (validators, changes) = Validators::new(serai); let validators = Arc::new(RwLock::new(validators)); diff --git a/coordinator/p2p/src/lib.rs b/coordinator/p2p/src/lib.rs index d285c8f0..71eb8f2c 100644 --- a/coordinator/p2p/src/lib.rs +++ b/coordinator/p2p/src/lib.rs @@ -56,6 +56,9 @@ pub trait P2p: Send + Sync + Clone + tributary::P2p + serai_cosign::RequestNotab /// Fetch the peers for this network. fn peers(&self, network: NetworkId) -> impl Send + Future>>; + /// Broadcast a cosign. + fn publish_cosign(&self, cosign: SignedCosign) -> impl Send + Future; + /// A cancel-safe future for the next heartbeat received over the P2P network. /// /// Yields the validator set its for, the latest block hash observed, and a channel to return the From 091d485fd88eaf04732d901f65880eb6aa301b1b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 10 Jan 2025 02:22:58 -0500 Subject: [PATCH 280/368] Have the Tributary scanner DB be distinct from the cosign DB Allows deleting the entire Tributary scanner DB upon retiry. --- coordinator/src/tributary/mod.rs | 1 + coordinator/src/tributary/scan.rs | 98 ++++++++++++++++++------------- 2 files changed, 58 insertions(+), 41 deletions(-) diff --git a/coordinator/src/tributary/mod.rs b/coordinator/src/tributary/mod.rs index 6d748940..60f005e3 100644 --- a/coordinator/src/tributary/mod.rs +++ b/coordinator/src/tributary/mod.rs @@ -4,3 +4,4 @@ pub use transaction::Transaction; mod db; mod scan; +pub(crate) use scan::ScanTributaryTask; diff --git a/coordinator/src/tributary/scan.rs b/coordinator/src/tributary/scan.rs index fec89f28..9da982e5 100644 --- a/coordinator/src/tributary/scan.rs +++ b/coordinator/src/tributary/scan.rs @@ -32,41 +32,43 @@ use crate::{ }, }; -struct ScanBlock<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> { - _db: PhantomData, +struct ScanBlock<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> { _p2p: PhantomData
<P>
, - txn: &'a mut DT, + cosign_db: &'a CD, + tributary_txn: &'a mut TDT, set: ValidatorSet, validators: &'a [SeraiAddress], total_weight: u64, validator_weights: &'a HashMap, tributary: &'a TributaryReader, } -impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { +impl<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, CD, TD, TDT, P> { fn potentially_start_cosign(&mut self) { // Don't start a new cosigning instance if we're actively running one - if TributaryDb::actively_cosigning(self.txn, self.set) { + if TributaryDb::actively_cosigning(self.tributary_txn, self.set) { return; } // Start cosigning the latest intended-to-be-cosigned block let Some(latest_substrate_block_to_cosign) = - TributaryDb::latest_substrate_block_to_cosign(self.txn, self.set) + TributaryDb::latest_substrate_block_to_cosign(self.tributary_txn, self.set) else { return; }; let Some(substrate_block_number) = - Cosigning::::finalized_block_number(self.txn, latest_substrate_block_to_cosign) + Cosigning::::finalized_block_number(self.cosign_db, latest_substrate_block_to_cosign) else { + // This is a valid panic as we shouldn't be scanning this block if we didn't provide all + // Provided transactions within it, and the block to cosign is a Provided transaction panic!("cosigning a block our cosigner didn't index") }; // Mark us as actively cosigning - TributaryDb::start_cosigning(self.txn, self.set, substrate_block_number); + TributaryDb::start_cosigning(self.tributary_txn, self.set, substrate_block_number); // Send the message for the processor to start signing TributaryDb::send_message( - self.txn, + self.tributary_txn, self.set, messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { session: self.set.session, @@ -81,7 +83,11 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { if let TransactionKind::Signed(_, TributarySigned { signer, .. 
}) = tx.kind() { // Don't handle transactions from those fatally slashed // TODO: The fact they can publish these TXs makes this a notable spam vector - if TributaryDb::is_fatally_slashed(self.txn, self.set, SeraiAddress(signer.to_bytes())) { + if TributaryDb::is_fatally_slashed( + self.tributary_txn, + self.set, + SeraiAddress(signer.to_bytes()), + ) { return; } } @@ -94,7 +100,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { // Check the participant voted to be removed actually exists if !self.validators.iter().any(|validator| *validator == participant) { TributaryDb::fatal_slash( - self.txn, + self.tributary_txn, self.set, signer, "voted to remove non-existent participant", @@ -103,7 +109,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { } match TributaryDb::accumulate( - self.txn, + self.tributary_txn, self.set, self.validators, self.total_weight, @@ -115,7 +121,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { ) { DataSet::None => {} DataSet::Participating(_) => { - TributaryDb::fatal_slash(self.txn, self.set, participant, "voted to remove"); + TributaryDb::fatal_slash(self.tributary_txn, self.set, participant, "voted to remove"); } }; } @@ -123,7 +129,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { // Send the participation to the processor Transaction::DkgParticipation { participation, signed } => { TributaryDb::send_message( - self.txn, + self.tributary_txn, self.set, messages::key_gen::CoordinatorMessage::Participation { session: self.set.session, @@ -143,16 +149,20 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { Transaction::Cosign { substrate_block_hash } => { // Update the latest intended-to-be-cosigned Substrate block - TributaryDb::set_latest_substrate_block_to_cosign(self.txn, self.set, substrate_block_hash); + TributaryDb::set_latest_substrate_block_to_cosign( + self.tributary_txn, + self.set, + substrate_block_hash, + ); // Start a new cosign if we weren't already working on one self.potentially_start_cosign(); } Transaction::Cosigned { substrate_block_hash } => { - TributaryDb::finish_cosigning(self.txn, self.set); + TributaryDb::finish_cosigning(self.tributary_txn, self.set); // Fetch the latest intended-to-be-cosigned block let Some(latest_substrate_block_to_cosign) = - TributaryDb::latest_substrate_block_to_cosign(self.txn, self.set) + TributaryDb::latest_substrate_block_to_cosign(self.tributary_txn, self.set) else { return; }; @@ -178,7 +188,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { if slash_points.len() != self.validators.len() { TributaryDb::fatal_slash( - self.txn, + self.tributary_txn, self.set, signer, "slash report was for a distinct amount of signers", @@ -188,7 +198,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { // Accumulate, and if past the threshold, calculate *the* slash report and start signing it match TributaryDb::accumulate( - self.txn, + self.tributary_txn, self.set, self.validators, self.total_weight, @@ -260,7 +270,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { // Recognize the topic for signing the slash report TributaryDb::recognize_topic( - self.txn, + self.tributary_txn, self.set, Topic::Sign { id: VariantSignId::SlashReport, @@ -270,7 +280,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { ); // Send the message for the processor to start signing TributaryDb::send_message( - 
self.txn, + self.tributary_txn, self.set, messages::coordinator::CoordinatorMessage::SignSlashReport { session: self.set.session, @@ -287,7 +297,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { if u64::try_from(data.len()).unwrap() != self.validator_weights[&signer] { TributaryDb::fatal_slash( - self.txn, + self.tributary_txn, self.set, signer, "signer signed with a distinct amount of key shares than they had key shares", @@ -296,7 +306,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { } match TributaryDb::accumulate( - self.txn, + self.tributary_txn, self.set, self.validators, self.total_weight, @@ -312,7 +322,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { let flatten_data_set = |data_set| todo!("TODO"); let data_set = flatten_data_set(data_set); TributaryDb::send_message( - self.txn, + self.tributary_txn, self.set, match round { SigningProtocolRound::Preprocess => { @@ -330,7 +340,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { } fn handle_block(mut self, block_number: u64, block: Block) { - TributaryDb::start_of_block(self.txn, self.set, block_number); + TributaryDb::start_of_block(self.tributary_txn, self.set, block_number); for tx in block.transactions { match tx { @@ -356,7 +366,7 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { // Since anything with evidence is fundamentally faulty behavior, not just temporal // errors, mark the node as fatally slashed TributaryDb::fatal_slash( - self.txn, + self.tributary_txn, self.set, SeraiAddress(msgs.0.msg.sender), &format!("invalid tendermint messages: {msgs:?}"), @@ -370,20 +380,21 @@ impl<'a, D: Db, DT: DbTxn, TD: Db, P: P2p> ScanBlock<'a, D, DT, TD, P> { } } -struct ScanTributaryTask { - db: D, - set: ValidatorSet, - validators: Vec, - total_weight: u64, - validator_weights: HashMap, - tributary: TributaryReader, - _p2p: PhantomData
<P>
, +pub(crate) struct ScanTributaryTask { + pub(crate) cosign_db: CD, + pub(crate) tributary_db: TD, + pub(crate) set: ValidatorSet, + pub(crate) validators: Vec, + pub(crate) total_weight: u64, + pub(crate) validator_weights: HashMap, + pub(crate) tributary: TributaryReader, + pub(crate) _p2p: PhantomData
<P>
, } -impl ContinuallyRan for ScanTributaryTask { +impl ContinuallyRan for ScanTributaryTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { let (mut last_block_number, mut last_block_hash) = - TributaryDb::last_handled_tributary_block(&self.db, self.set) + TributaryDb::last_handled_tributary_block(&self.tributary_db, self.set) .unwrap_or((0, self.tributary.genesis())); let mut made_progess = false; @@ -407,11 +418,11 @@ impl ContinuallyRan for ScanTributaryTask { } } - let mut txn = self.db.txn(); + let mut tributary_txn = self.tributary_db.txn(); (ScanBlock { - _db: PhantomData::, _p2p: PhantomData::
<P>
, - txn: &mut txn, + cosign_db: &self.cosign_db, + tributary_txn: &mut tributary_txn, set: self.set, validators: &self.validators, total_weight: self.total_weight, @@ -419,10 +430,15 @@ impl ContinuallyRan for ScanTributaryTask { tributary: &self.tributary, }) .handle_block(block_number, block); - TributaryDb::set_last_handled_tributary_block(&mut txn, self.set, block_number, block_hash); + TributaryDb::set_last_handled_tributary_block( + &mut tributary_txn, + self.set, + block_number, + block_hash, + ); last_block_number = block_number; last_block_hash = block_hash; - txn.commit(); + tributary_txn.commit(); made_progess = true; } From cbe83956aadcd477b67d1c2ea6e624cf749ca50c Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 10 Jan 2025 02:24:24 -0500 Subject: [PATCH 281/368] Flesh out Coordinator main Lot of TODOs as the APIs are all being routed together. --- Cargo.lock | 2 +- coordinator/Cargo.toml | 11 +- coordinator/src/main.rs | 321 ++++++++++++++++++++++++- coordinator/substrate/src/canonical.rs | 7 +- coordinator/substrate/src/ephemeral.rs | 9 +- coordinator/substrate/src/lib.rs | 20 +- 6 files changed, 347 insertions(+), 23 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 840a6c53..1f67f626 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8319,7 +8319,6 @@ dependencies = [ "borsh", "ciphersuite", "env_logger", - "flexible-transcript", "frost-schnorrkel", "hex", "log", @@ -8340,6 +8339,7 @@ dependencies = [ "serai-task", "sp-application-crypto", "sp-runtime", + "tokio", "tributary-chain", "zalloc", "zeroize", diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index 3ecce4be..38adbf15 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -25,7 +25,6 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] } blake2 = { version = "0.10", default-features = false, features = ["std"] } schnorrkel = { version = "0.11", default-features = false, features = ["std"] } -transcript = { package = "flexible-transcript", path = "../crypto/transcript", default-features = false, features = ["std", "recommended"] } ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std"] } schnorr = { package = "schnorr-signatures", path = "../crypto/schnorr", default-features = false, features = ["std"] } frost = { package = "modular-frost", path = "../crypto/frost" } @@ -42,7 +41,6 @@ messages = { package = "serai-processor-messages", path = "../processor/messages message-queue = { package = "serai-message-queue", path = "../message-queue" } tributary = { package = "tributary-chain", path = "./tributary" } -sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] } serai-client = { path = "../substrate/client", default-features = false, features = ["serai", "borsh"] } hex = { version = "0.4", default-features = false, features = ["std"] } @@ -51,17 +49,14 @@ borsh = { version = "1", default-features = false, features = ["std", "derive", log = { version = "0.4", default-features = false, features = ["std"] } env_logger = { version = "0.10", default-features = false, features = ["humantime"] } +tokio = { version = "1", default-features = false, features = ["time", "sync", "macros", "rt-multi-thread"] } + serai-cosign = { path = "./cosign" } serai-coordinator-substrate = { path = "./substrate" } serai-coordinator-p2p = { path = "./p2p" } serai-coordinator-libp2p-p2p = { path = "./p2p/libp2p" } -[dev-dependencies] -tributary = { package = "tributary-chain", path = 
"./tributary", features = ["tests"] } -sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] } -sp-runtime = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] } - [features] -longer-reattempts = [] +longer-reattempts = [] # TODO parity-db = ["serai-db/parity-db"] rocksdb = ["serai-db/rocksdb"] diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 8af0f824..f1090284 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -1,10 +1,329 @@ +use core::{marker::PhantomData, ops::Deref, time::Duration}; +use std::{sync::Arc, time::Instant, collections::HashMap}; + +use zeroize::{Zeroize, Zeroizing}; +use rand_core::{RngCore, OsRng}; + +use blake2::{digest::typenum::U32, Digest, Blake2s}; +use ciphersuite::{ + group::{ff::PrimeField, GroupEncoding}, + Ciphersuite, Ristretto, +}; + +use tokio::sync::mpsc; + +use scale::Encode; +use serai_client::{ + primitives::{NetworkId, PublicKey, SeraiAddress}, + validator_sets::primitives::ValidatorSet, + Serai, +}; +use message_queue::{Service, client::MessageQueue}; + +use ::tributary::Tributary; + +use serai_task::{Task, TaskHandle, ContinuallyRan}; + +use serai_cosign::{SignedCosign, Cosigning}; +use serai_coordinator_substrate::{NewSetInformation, CanonicalEventStream, EphemeralEventStream}; + mod tributary; +use tributary::{Transaction, ScanTributaryTask}; mod p2p { pub use serai_coordinator_p2p::*; pub use serai_coordinator_libp2p_p2p::Libp2p; } -fn main() { +// Use a zeroizing allocator for this entire application +// While secrets should already be zeroized, the presence of secret keys in a networked application +// (at increased risk of OOB reads) justifies the performance hit in case any secrets weren't +// already +#[global_allocator] +static ALLOCATOR: zalloc::ZeroizingAlloc = + zalloc::ZeroizingAlloc(std::alloc::System); + +#[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] +type Db = serai_db::ParityDb; +#[cfg(feature = "rocksdb")] +type Db = serai_db::RocksDB; + +#[allow(unused_variables, unreachable_code)] +fn db(path: &str) -> Db { + #[cfg(all(feature = "parity-db", feature = "rocksdb"))] + panic!("built with parity-db and rocksdb"); + #[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] + let db = serai_db::new_parity_db(path); + #[cfg(feature = "rocksdb")] + let db = serai_db::new_rocksdb(path); + db +} + +fn coordinator_db() -> Db { + let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified"); + db(&format!("{root_path}/coordinator")) +} + +fn tributary_db(set: ValidatorSet) -> Db { + let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified"); + let network = match set.network { + NetworkId::Serai => panic!("creating Tributary for the Serai network"), + NetworkId::Bitcoin => "Bitcoin", + NetworkId::Ethereum => "Ethereum", + NetworkId::Monero => "Monero", + }; + db(&format!("{root_path}/tributary-{network}-{}", set.session.0)) +} + +async fn serai() -> Arc { + const SERAI_CONNECTION_DELAY: Duration = Duration::from_secs(10); + const MAX_SERAI_CONNECTION_DELAY: Duration = Duration::from_secs(300); + + let mut delay = SERAI_CONNECTION_DELAY; + loop { + let Ok(serai) = Serai::new(format!( + "http://{}:9944", + serai_env::var("SERAI_HOSTNAME").expect("Serai hostname wasn't provided") + )) + .await + else { + log::error!("couldn't connect to the Serai node"); + tokio::time::sleep(delay).await; + delay = (delay + 
SERAI_CONNECTION_DELAY).min(MAX_SERAI_CONNECTION_DELAY); + continue; + }; + log::info!("made initial connection to Serai node"); + return Arc::new(serai); + } +} + +// TODO: intended_cosigns +fn spawn_cosigning( + db: impl serai_db::Db, + serai: Arc, + p2p: impl p2p::P2p, + tasks_to_run_upon_cosigning: Vec, + mut p2p_cosigns: mpsc::UnboundedReceiver, + mut signed_cosigns: mpsc::UnboundedReceiver, +) { + let mut cosigning = Cosigning::spawn(db, serai, p2p.clone(), tasks_to_run_upon_cosigning); + tokio::spawn(async move { + let last_cosign_rebroadcast = Instant::now(); + loop { + let time_till_cosign_rebroadcast = (last_cosign_rebroadcast + + serai_cosign::BROADCAST_FREQUENCY) + .saturating_duration_since(Instant::now()); + tokio::select! { + () = tokio::time::sleep(time_till_cosign_rebroadcast) => { + for cosign in cosigning.cosigns_to_rebroadcast() { + p2p.publish_cosign(cosign).await; + } + } + cosign = p2p_cosigns.recv() => { + let cosign = cosign.expect("p2p cosigns channel was dropped?"); + let _: Result<_, _> = cosigning.intake_cosign(&cosign); + } + cosign = signed_cosigns.recv() => { + let cosign = cosign.expect("signed cosigns channel was dropped?"); + // TODO: Handle this error + let _: Result<_, _> = cosigning.intake_cosign(&cosign); + p2p.publish_cosign(cosign).await; + } + } + } + }); +} + +/// Spawn an existing Tributary. +/// +/// This will spawn the Tributary, the Tributary scanning task, and inform the P2P network. +async fn spawn_tributary( + db: Db, + p2p: P, + p2p_add_tributary: mpsc::UnboundedSender>, + set: NewSetInformation, + serai_key: Zeroizing<::F>, +) { + let genesis = <[u8; 32]>::from(Blake2s::::digest((set.serai_block, set.set).encode())); + + // Since the Serai block will be finalized, then cosigned, before we handle this, this time will + // be a couple of minutes stale. While the Tributary will still function with a start time in the + // past, the Tributary will immediately incur round timeouts. We reduce these by adding a + // constant delay of a couple of minutes. + const TRIBUTARY_START_TIME_DELAY: u64 = 120; + let start_time = set.declaration_time + TRIBUTARY_START_TIME_DELAY; + + let mut tributary_validators = Vec::with_capacity(set.validators.len()); + let mut validators = Vec::with_capacity(set.validators.len()); + let mut total_weight = 0; + let mut validator_weights = HashMap::with_capacity(set.validators.len()); + for (validator, weight) in set.validators { + let validator_key = ::read_G(&mut validator.0.as_slice()) + .expect("Serai validator had an invalid public key"); + let validator = SeraiAddress::from(validator); + let weight = u64::from(weight); + tributary_validators.push((validator_key, weight)); + validators.push(validator); + total_weight += weight; + validator_weights.insert(validator, weight); + } + + let tributary_db = tributary_db(set.set); + let tributary = Tributary::<_, Transaction, _>::new( + tributary_db.clone(), + genesis, + start_time, + serai_key, + tributary_validators, + p2p, + ) + .await + .unwrap(); + let reader = tributary.reader(); + + p2p_add_tributary.send(tributary).expect("p2p's add_tributary channel was closed?"); + + let (scan_tributary_task_def, scan_tributary_task) = Task::new(); + tokio::spawn( + (ScanTributaryTask { + cosign_db: db, + tributary_db, + set: set.set, + validators, + total_weight, + validator_weights, + tributary: reader, + _p2p: PhantomData::
<P>
, + }) + .continually_run(scan_tributary_task_def, vec![todo!("TODO")]), + ); + // TODO^ On Tributary block, drain this task's ProcessorMessages + + // Have the tributary scanner run as soon as there's a new block + // TODO: Implement retiry, this will hold the tributary/handle indefinitely + tokio::spawn(async move { + loop { + tributary + .next_block_notification() + .await + .await + .map_err(|_| ()) + // unreachable since this owns the tributary object and doesn't drop it + .expect("tributary was dropped causing notification to error"); + scan_tributary_task.run_now(); + } + }); +} + +#[tokio::main] +async fn main() { + // Override the panic handler with one which will panic if any tokio task panics + { + let existing = std::panic::take_hook(); + std::panic::set_hook(Box::new(move |panic| { + existing(panic); + const MSG: &str = "exiting the process due to a task panicking"; + println!("{MSG}"); + log::error!("{MSG}"); + std::process::exit(1); + })); + } + + // Initialize the logger + if std::env::var("RUST_LOG").is_err() { + std::env::set_var("RUST_LOG", serai_env::var("RUST_LOG").unwrap_or_else(|| "info".to_string())); + } + env_logger::init(); + log::info!("starting coordinator service..."); + + // Read the Serai key from the env + let serai_key = { + let mut key_hex = serai_env::var("SERAI_KEY").expect("Serai key wasn't provided"); + let mut key_vec = hex::decode(&key_hex).map_err(|_| ()).expect("Serai key wasn't hex-encoded"); + key_hex.zeroize(); + if key_vec.len() != 32 { + key_vec.zeroize(); + panic!("Serai key had an invalid length"); + } + let mut key_bytes = [0; 32]; + key_bytes.copy_from_slice(&key_vec); + key_vec.zeroize(); + let key = Zeroizing::new(::F::from_repr(key_bytes).unwrap()); + key_bytes.zeroize(); + key + }; + + // Open the database + let db = coordinator_db(); + + // Connect to the message-queue + let message_queue = MessageQueue::from_env(Service::Coordinator); + + // Connect to the Serai node + let serai = serai().await; + + let (p2p_add_tributary_send, p2p_add_tributary_recv) = mpsc::unbounded_channel(); + let (p2p_retire_tributary_send, p2p_retire_tributary_recv) = mpsc::unbounded_channel(); + let (p2p_cosigns_send, p2p_cosigns_recv) = mpsc::unbounded_channel(); + + // Spawn the P2P network + let p2p = { + let serai_keypair = { + let mut key_bytes = serai_key.to_bytes(); + // Schnorrkel SecretKey is the key followed by 32 bytes of entropy for nonces + let mut expanded_key = Zeroizing::new([0; 64]); + expanded_key.as_mut_slice()[.. 
32].copy_from_slice(&key_bytes); + OsRng.fill_bytes(&mut expanded_key.as_mut_slice()[32 ..]); + key_bytes.zeroize(); + Zeroizing::new( + schnorrkel::SecretKey::from_bytes(expanded_key.as_slice()).unwrap().to_keypair(), + ) + }; + let p2p = p2p::Libp2p::new(&serai_keypair, serai.clone()); + tokio::spawn(p2p::run::( + db.clone(), + p2p.clone(), + p2p_add_tributary_recv, + p2p_retire_tributary_recv, + p2p_cosigns_send, + )); + p2p + }; + + // TODO: p2p_add_tributary_send, p2p_retire_tributary_send + + // Spawn the Substrate scanners + // TODO: Canonical, NewSet, SignSlashReport + let (substrate_canonical_task_def, substrate_canonical_task) = Task::new(); + tokio::spawn( + CanonicalEventStream::new(db.clone(), serai.clone()) + .continually_run(substrate_canonical_task_def, todo!("TODO")), + ); + let (substrate_ephemeral_task_def, substrate_ephemeral_task) = Task::new(); + tokio::spawn( + EphemeralEventStream::new( + db.clone(), + serai.clone(), + PublicKey::from_raw((::generator() * serai_key.deref()).to_bytes()), + ) + .continually_run(substrate_ephemeral_task_def, todo!("TODO")), + ); + + // Spawn the cosign handler + let (signed_cosigns_send, signed_cosigns_recv) = mpsc::unbounded_channel(); + spawn_cosigning( + db.clone(), + serai.clone(), + p2p.clone(), + // Run the Substrate scanners once we cosign new blocks + vec![substrate_canonical_task, substrate_ephemeral_task], + p2p_cosigns_recv, + signed_cosigns_recv, + ); + + // TODO: Reload tributaries from disk, handle processor messages + + // TODO: On NewSet, save to DB, send KeyGen, spawn tributary task, inform P2P network + todo!("TODO") } diff --git a/coordinator/substrate/src/canonical.rs b/coordinator/substrate/src/canonical.rs index f333e11f..e1bbe6c2 100644 --- a/coordinator/substrate/src/canonical.rs +++ b/coordinator/substrate/src/canonical.rs @@ -1,4 +1,5 @@ -use std::future::Future; +use core::future::Future; +use std::sync::Arc; use futures::stream::{StreamExt, FuturesOrdered}; @@ -20,14 +21,14 @@ create_db!( /// The event stream for canonical events. pub struct CanonicalEventStream { db: D, - serai: Serai, + serai: Arc, } impl CanonicalEventStream { /// Create a new canonical event stream. /// /// Only one of these may exist over the provided database. - pub fn new(db: D, serai: Serai) -> Self { + pub fn new(db: D, serai: Arc) -> Self { Self { db, serai } } } diff --git a/coordinator/substrate/src/ephemeral.rs b/coordinator/substrate/src/ephemeral.rs index 703d5b3a..d889d59f 100644 --- a/coordinator/substrate/src/ephemeral.rs +++ b/coordinator/substrate/src/ephemeral.rs @@ -1,4 +1,5 @@ -use std::future::Future; +use core::future::Future; +use std::sync::Arc; use futures::stream::{StreamExt, FuturesOrdered}; @@ -24,7 +25,7 @@ create_db!( /// The event stream for ephemeral events. pub struct EphemeralEventStream { db: D, - serai: Serai, + serai: Arc, validator: PublicKey, } @@ -32,7 +33,7 @@ impl EphemeralEventStream { /// Create a new ephemeral event stream. /// /// Only one of these may exist over the provided database. - pub fn new(db: D, serai: Serai, validator: PublicKey) -> Self { + pub fn new(db: D, serai: Arc, validator: PublicKey) -> Self { Self { db, serai, validator } } } @@ -216,7 +217,7 @@ impl ContinuallyRan for EphemeralEventStream { &NewSetInformation { set: *set, serai_block: block.block_hash, - start_time: block.time, + declaration_time: block.time, // TODO: Why do we have this as an explicit field here? // Shouldn't thiis be inlined into the Processor's key gen code, where it's used? 
threshold: ((total_weight * 2) / 3) + 1, diff --git a/coordinator/substrate/src/lib.rs b/coordinator/substrate/src/lib.rs index 41378508..f723332d 100644 --- a/coordinator/substrate/src/lib.rs +++ b/coordinator/substrate/src/lib.rs @@ -13,7 +13,9 @@ use serai_client::{ use serai_db::*; mod canonical; +pub use canonical::CanonicalEventStream; mod ephemeral; +pub use ephemeral::EphemeralEventStream; fn borsh_serialize_validators( validators: &Vec<(PublicKey, u16)>, @@ -32,16 +34,22 @@ fn borsh_deserialize_validators( /// The information for a new set. #[derive(Debug, BorshSerialize, BorshDeserialize)] pub struct NewSetInformation { - set: ValidatorSet, - serai_block: [u8; 32], - start_time: u64, - threshold: u16, + /// The set. + pub set: ValidatorSet, + /// The Serai block which declared it. + pub serai_block: [u8; 32], + /// The time of the block which declared it, in seconds. + pub declaration_time: u64, + /// The threshold to use. + pub threshold: u16, + /// The validators, with the amount of key shares they have. #[borsh( serialize_with = "borsh_serialize_validators", deserialize_with = "borsh_deserialize_validators" )] - validators: Vec<(PublicKey, u16)>, - evrf_public_keys: Vec<([u8; 32], Vec)>, + pub validators: Vec<(PublicKey, u16)>, + /// The eVRF public keys. + pub evrf_public_keys: Vec<([u8; 32], Vec)>, } mod _public_db { From 378d6b90cfc896cede7583f84de1077effbb587f Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 10 Jan 2025 20:10:05 -0500 Subject: [PATCH 282/368] Delete old Tributaries on reboot --- coordinator/src/main.rs | 104 +++++++++++++++++++++++++++++++++++----- 1 file changed, 91 insertions(+), 13 deletions(-) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index f1090284..fb96dc76 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -1,5 +1,5 @@ use core::{marker::PhantomData, ops::Deref, time::Duration}; -use std::{sync::Arc, time::Instant, collections::HashMap}; +use std::{sync::Arc, collections::HashMap, time::Instant, path::Path, fs}; use zeroize::{Zeroize, Zeroizing}; use rand_core::{RngCore, OsRng}; @@ -15,11 +15,13 @@ use tokio::sync::mpsc; use scale::Encode; use serai_client::{ primitives::{NetworkId, PublicKey, SeraiAddress}, - validator_sets::primitives::ValidatorSet, + validator_sets::primitives::{Session, ValidatorSet}, Serai, }; use message_queue::{Service, client::MessageQueue}; +use serai_db::{*, Db as DbTrait}; + use ::tributary::Tributary; use serai_task::{Task, TaskHandle, ContinuallyRan}; @@ -30,6 +32,19 @@ use serai_coordinator_substrate::{NewSetInformation, CanonicalEventStream, Ephem mod tributary; use tributary::{Transaction, ScanTributaryTask}; +create_db! { + Coordinator { + ActiveTributaries: () -> Vec, + RetiredTributary: (set: ValidatorSet) -> (), + } +} + +db_channel! { + Coordinator { + TributaryCleanup: () -> ValidatorSet, + } +} + mod p2p { pub use serai_coordinator_p2p::*; pub use serai_coordinator_libp2p_p2p::Libp2p; @@ -50,6 +65,14 @@ type Db = serai_db::RocksDB; #[allow(unused_variables, unreachable_code)] fn db(path: &str) -> Db { + { + let path: &Path = path.as_ref(); + // This may error if this path already exists, which we shouldn't propagate/panic on. If this + // is a problem (such as we don't have the necessary permissions to write to this path), we + // expect the following DB opening to error. 
+ let _: Result<_, _> = fs::create_dir_all(path.parent().unwrap()); + } + #[cfg(all(feature = "parity-db", feature = "rocksdb"))] panic!("built with parity-db and rocksdb"); #[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] @@ -61,10 +84,10 @@ fn db(path: &str) -> Db { fn coordinator_db() -> Db { let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified"); - db(&format!("{root_path}/coordinator")) + db(&format!("{root_path}/coordinator/db")) } -fn tributary_db(set: ValidatorSet) -> Db { +fn tributary_db_folder(set: ValidatorSet) -> String { let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified"); let network = match set.network { NetworkId::Serai => panic!("creating Tributary for the Serai network"), @@ -72,7 +95,11 @@ fn tributary_db(set: ValidatorSet) -> Db { NetworkId::Ethereum => "Ethereum", NetworkId::Monero => "Monero", }; - db(&format!("{root_path}/tributary-{network}-{}", set.session.0)) + format!("{root_path}/tributary-{network}-{}", set.session.0) +} + +fn tributary_db(set: ValidatorSet) -> Db { + db(&format!("{}/db", tributary_db_folder(set))) } async fn serai() -> Arc { @@ -138,12 +165,32 @@ fn spawn_cosigning( /// /// This will spawn the Tributary, the Tributary scanning task, and inform the P2P network. async fn spawn_tributary( - db: Db, + mut db: Db, p2p: P, - p2p_add_tributary: mpsc::UnboundedSender>, + p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary)>, set: NewSetInformation, serai_key: Zeroizing<::F>, ) { + // Don't spawn retired Tributaries + if RetiredTributary::get(&db, set.set).is_some() { + return; + } + + // TODO: Move from spawn_tributary to on NewSet + // Queue the historical Tributary for this network for deletion + // We explicitly don't queue this upon Tributary retire to give time to investigate retired + // Tributaries if questions are raised post-retiry. This gives a week after the Tributary has + // been retired to make a backup of the data directory for any investigations. 
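A hypothetical helper (not part of the patch) making the retention policy in the comment above concrete: spawning session S's Tributary only queues session S - 2 for deletion, so the most recently retired data directory stays on disk for the week that follows.

// Sketch only: session numbers are u16s, mirroring serai_client's Session(u16)
fn session_to_prune(new_session: u16) -> Option<u16> {
  new_session.checked_sub(2)
}

fn main() {
  assert_eq!(session_to_prune(0), None); // no historic Tributary exists yet
  assert_eq!(session_to_prune(1), None);
  // When session 5 spawns, session 3 is pruned while session 4 remains on disk
  // for any investigations
  assert_eq!(session_to_prune(5), Some(3));
}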
+ if let Some(historic_session) = set.set.session.0.checked_sub(2) { + // This may get fired several times but that isn't an issue + let mut txn = db.txn(); + TributaryCleanup::send( + &mut txn, + &ValidatorSet { network: set.set.network, session: Session(historic_session) }, + ); + txn.commit(); + } + let genesis = <[u8; 32]>::from(Blake2s::::digest((set.serai_block, set.set).encode())); // Since the Serai block will be finalized, then cosigned, before we handle this, this time will @@ -181,12 +228,12 @@ async fn spawn_tributary( .unwrap(); let reader = tributary.reader(); - p2p_add_tributary.send(tributary).expect("p2p's add_tributary channel was closed?"); + p2p_add_tributary.send((set.set, tributary)).expect("p2p's add_tributary channel was closed?"); let (scan_tributary_task_def, scan_tributary_task) = Task::new(); tokio::spawn( (ScanTributaryTask { - cosign_db: db, + cosign_db: db.clone(), tributary_db, set: set.set, validators, @@ -200,9 +247,12 @@ async fn spawn_tributary( // TODO^ On Tributary block, drain this task's ProcessorMessages // Have the tributary scanner run as soon as there's a new block - // TODO: Implement retiry, this will hold the tributary/handle indefinitely tokio::spawn(async move { loop { + if RetiredTributary::get(&db, set.set).is_some() { + break; + } + tributary .next_block_notification() .await @@ -254,7 +304,28 @@ async fn main() { }; // Open the database - let db = coordinator_db(); + let mut db = coordinator_db(); + + let existing_tributaries_at_boot = { + let mut txn = db.txn(); + + // Cleanup all historic Tributaries + while let Some(to_cleanup) = TributaryCleanup::try_recv(&mut txn) { + // TributaryCleanup may fire this message multiple times so this may fail if we've already + // performed cleanup + log::info!("pruning data directory for historic tributary {to_cleanup:?}"); + let _: Result<_, _> = fs::remove_dir_all(tributary_db_folder(to_cleanup)); + } + + // Remove retired Tributaries from ActiveTributaries + let mut active_tributaries = ActiveTributaries::get(&txn).unwrap_or(vec![]); + active_tributaries.retain(|tributary| RetiredTributary::get(&txn, tributary.set).is_none()); + ActiveTributaries::set(&mut txn, &active_tributaries); + + txn.commit(); + + active_tributaries + }; // Connect to the message-queue let message_queue = MessageQueue::from_env(Service::Coordinator); @@ -321,9 +392,16 @@ async fn main() { signed_cosigns_recv, ); - // TODO: Reload tributaries from disk, handle processor messages + // Spawn all Tributaries on-disk + for tributary in existing_tributaries_at_boot { + spawn_tributary(db.clone(), p2p.clone(), &p2p_add_tributary_send, tributary, serai_key.clone()) + .await; + } - // TODO: On NewSet, save to DB, send KeyGen, spawn tributary task, inform P2P network + // TODO: Hndle processor messages + + // TODO: On NewSet, queue historical for deletionn, save to DB, send KeyGen, spawn tributary + // task, inform P2P network todo!("TODO") } From 542bf2170ae1db01e7ff80f2bbc852d171771d7a Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 11 Jan 2025 01:31:28 -0500 Subject: [PATCH 283/368] Provide Cosign/CosignIntent for Tributaries --- Cargo.lock | 2 - coordinator/cosign/src/lib.rs | 8 +- coordinator/src/db.rs | 79 +++++++++ coordinator/src/main.rs | 211 +++++++++++++---------- coordinator/src/tributary/db.rs | 27 ++- coordinator/src/tributary/scan.rs | 45 +++-- coordinator/src/tributary/transaction.rs | 4 +- processor/messages/src/lib.rs | 6 +- 8 files changed, 265 insertions(+), 117 deletions(-) create mode 100644 
coordinator/src/db.rs diff --git a/Cargo.lock b/Cargo.lock index 1f67f626..ede4518f 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8337,8 +8337,6 @@ dependencies = [ "serai-message-queue", "serai-processor-messages", "serai-task", - "sp-application-crypto", - "sp-runtime", "tokio", "tributary-chain", "zalloc", diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index aa2883aa..c4428a39 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -82,13 +82,13 @@ enum HasEvents { #[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub struct CosignIntent { /// The global session this cosign is being performed under. - global_session: [u8; 32], + pub global_session: [u8; 32], /// The number of the block to cosign. - block_number: u64, + pub block_number: u64, /// The hash of the block to cosign. - block_hash: [u8; 32], + pub block_hash: [u8; 32], /// If this cosign must be handled before further cosigns are. - notable: bool, + pub notable: bool, } /// A cosign. diff --git a/coordinator/src/db.rs b/coordinator/src/db.rs new file mode 100644 index 00000000..4e932306 --- /dev/null +++ b/coordinator/src/db.rs @@ -0,0 +1,79 @@ +use std::{path::Path, fs}; + +pub(crate) use serai_db::{Get, DbTxn, Db as DbTrait}; +use serai_db::{create_db, db_channel}; + +use serai_client::{ + primitives::NetworkId, + validator_sets::primitives::{Session, ValidatorSet}, +}; + +use serai_cosign::CosignIntent; + +use serai_coordinator_substrate::NewSetInformation; + +#[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] +pub(crate) type Db = serai_db::ParityDb; +#[cfg(feature = "rocksdb")] +pub(crate) type Db = serai_db::RocksDB; + +#[allow(unused_variables, unreachable_code)] +fn db(path: &str) -> Db { + { + let path: &Path = path.as_ref(); + // This may error if this path already exists, which we shouldn't propagate/panic on. If this + // is a problem (such as we don't have the necessary permissions to write to this path), we + // expect the following DB opening to error. + let _: Result<_, _> = fs::create_dir_all(path.parent().unwrap()); + } + + #[cfg(all(feature = "parity-db", feature = "rocksdb"))] + panic!("built with parity-db and rocksdb"); + #[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] + let db = serai_db::new_parity_db(path); + #[cfg(feature = "rocksdb")] + let db = serai_db::new_rocksdb(path); + db +} + +pub(crate) fn coordinator_db() -> Db { + let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified"); + db(&format!("{root_path}/coordinator/db")) +} + +fn tributary_db_folder(set: ValidatorSet) -> String { + let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified"); + let network = match set.network { + NetworkId::Serai => panic!("creating Tributary for the Serai network"), + NetworkId::Bitcoin => "Bitcoin", + NetworkId::Ethereum => "Ethereum", + NetworkId::Monero => "Monero", + }; + format!("{root_path}/tributary-{network}-{}", set.session.0) +} + +pub(crate) fn tributary_db(set: ValidatorSet) -> Db { + db(&format!("{}/db", tributary_db_folder(set))) +} + +pub(crate) fn prune_tributary_db(set: ValidatorSet) { + log::info!("pruning data directory for tributary {set:?}"); + let db = tributary_db_folder(set); + if fs::exists(&db).expect("couldn't check if tributary DB exists") { + fs::remove_dir_all(db).unwrap(); + } +} + +create_db! { + Coordinator { + ActiveTributaries: () -> Vec, + RetiredTributary: (network: NetworkId) -> Session, + } +} + +db_channel! 
{ + Coordinator { + TributaryCleanup: () -> ValidatorSet, + PendingCosigns: (set: ValidatorSet) -> CosignIntent, + } +} diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index fb96dc76..dbe185fb 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -1,5 +1,5 @@ use core::{marker::PhantomData, ops::Deref, time::Duration}; -use std::{sync::Arc, collections::HashMap, time::Instant, path::Path, fs}; +use std::{sync::Arc, collections::HashMap, time::Instant}; use zeroize::{Zeroize, Zeroizing}; use rand_core::{RngCore, OsRng}; @@ -20,31 +20,19 @@ use serai_client::{ }; use message_queue::{Service, client::MessageQueue}; -use serai_db::{*, Db as DbTrait}; - -use ::tributary::Tributary; +use ::tributary::ProvidedError; use serai_task::{Task, TaskHandle, ContinuallyRan}; use serai_cosign::{SignedCosign, Cosigning}; use serai_coordinator_substrate::{NewSetInformation, CanonicalEventStream, EphemeralEventStream}; +mod db; +use db::*; + mod tributary; use tributary::{Transaction, ScanTributaryTask}; -create_db! { - Coordinator { - ActiveTributaries: () -> Vec, - RetiredTributary: (set: ValidatorSet) -> (), - } -} - -db_channel! { - Coordinator { - TributaryCleanup: () -> ValidatorSet, - } -} - mod p2p { pub use serai_coordinator_p2p::*; pub use serai_coordinator_libp2p_p2p::Libp2p; @@ -58,49 +46,7 @@ mod p2p { static ALLOCATOR: zalloc::ZeroizingAlloc = zalloc::ZeroizingAlloc(std::alloc::System); -#[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] -type Db = serai_db::ParityDb; -#[cfg(feature = "rocksdb")] -type Db = serai_db::RocksDB; - -#[allow(unused_variables, unreachable_code)] -fn db(path: &str) -> Db { - { - let path: &Path = path.as_ref(); - // This may error if this path already exists, which we shouldn't propagate/panic on. If this - // is a problem (such as we don't have the necessary permissions to write to this path), we - // expect the following DB opening to error. - let _: Result<_, _> = fs::create_dir_all(path.parent().unwrap()); - } - - #[cfg(all(feature = "parity-db", feature = "rocksdb"))] - panic!("built with parity-db and rocksdb"); - #[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] - let db = serai_db::new_parity_db(path); - #[cfg(feature = "rocksdb")] - let db = serai_db::new_rocksdb(path); - db -} - -fn coordinator_db() -> Db { - let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified"); - db(&format!("{root_path}/coordinator/db")) -} - -fn tributary_db_folder(set: ValidatorSet) -> String { - let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified"); - let network = match set.network { - NetworkId::Serai => panic!("creating Tributary for the Serai network"), - NetworkId::Bitcoin => "Bitcoin", - NetworkId::Ethereum => "Ethereum", - NetworkId::Monero => "Monero", - }; - format!("{root_path}/tributary-{network}-{}", set.session.0) -} - -fn tributary_db(set: ValidatorSet) -> Db { - db(&format!("{}/db", tributary_db_folder(set))) -} +type Tributary
<P>
= ::tributary::Tributary; async fn serai() -> Arc { const SERAI_CONNECTION_DELAY: Duration = Duration::from_secs(10); @@ -124,7 +70,6 @@ async fn serai() -> Arc { } } -// TODO: intended_cosigns fn spawn_cosigning( db: impl serai_db::Db, serai: Arc, @@ -167,12 +112,13 @@ fn spawn_cosigning( async fn spawn_tributary( mut db: Db, p2p: P, - p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary)>, + p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary
<P>
)>, set: NewSetInformation, serai_key: Zeroizing<::F>, ) { // Don't spawn retired Tributaries - if RetiredTributary::get(&db, set.set).is_some() { + if RetiredTributary::get(&db, set.set.network).map(|session| session.0) >= Some(set.set.session.0) + { return; } @@ -216,21 +162,15 @@ async fn spawn_tributary( } let tributary_db = tributary_db(set.set); - let tributary = Tributary::<_, Transaction, _>::new( - tributary_db.clone(), - genesis, - start_time, - serai_key, - tributary_validators, - p2p, - ) - .await - .unwrap(); + let mut tributary = + Tributary::new(tributary_db.clone(), genesis, start_time, serai_key, tributary_validators, p2p) + .await + .unwrap(); let reader = tributary.reader(); p2p_add_tributary.send((set.set, tributary)).expect("p2p's add_tributary channel was closed?"); - let (scan_tributary_task_def, scan_tributary_task) = Task::new(); + let (scan_tributary_task_def, mut scan_tributary_task) = Task::new(); tokio::spawn( (ScanTributaryTask { cosign_db: db.clone(), @@ -246,21 +186,114 @@ async fn spawn_tributary( ); // TODO^ On Tributary block, drain this task's ProcessorMessages - // Have the tributary scanner run as soon as there's a new block tokio::spawn(async move { loop { - if RetiredTributary::get(&db, set.set).is_some() { + // Break once this Tributary is retired + if RetiredTributary::get(&db, set.set.network).map(|session| session.0) >= + Some(set.set.session.0) + { break; } - tributary - .next_block_notification() - .await - .await - .map_err(|_| ()) + let provide = |tributary: Tributary<_>, scan_tributary_task, tx: Transaction| async move { + match tributary.provide_transaction(tx.clone()).await { + // The Tributary uses its own DB, so we may provide this multiple times if we reboot + // before committing the txn which provoked this + Ok(()) | Err(ProvidedError::AlreadyProvided) => {} + Err(ProvidedError::NotProvided) => { + panic!("providing a Transaction which wasn't a Provided transaction?"); + } + Err(ProvidedError::InvalidProvided(e)) => { + panic!("providing an invalid Provided transaction: {e:?}") + } + Err(ProvidedError::LocalMismatchesOnChain) => { + // Drop the Tributary and scan Tributary task so we don't continue running them here + drop(tributary); + drop(scan_tributary_task); + + loop { + // We're actually only halting the Tributary's scan task (which already only scans + // if all Provided transactions align) as the P2P task is still maintaining a clone + // of the Tributary handle + log::error!( + "Tributary {:?} was supposed to provide {:?} but peers disagree, halting Tributary", + set.set, + tx, + ); + // Print this every five minutes as this does need to be handled + tokio::time::sleep(Duration::from_secs(5 * 60)).await; + } + + // Declare this unreachable so Rust will let us perform the above drops + unreachable!(); + } + } + (tributary, scan_tributary_task) + }; + + // Check if we produced any cosigns we were supposed to + let mut pending_notable_cosign = false; + loop { + let mut txn = db.txn(); + + // Fetch the next cosign this tributary should handle + let Some(cosign) = PendingCosigns::try_recv(&mut txn, set.set) else { break }; + pending_notable_cosign = cosign.notable; + + // If we (Serai) haven't cosigned this block, break as this is still pending + let Ok(latest) = Cosigning::::latest_cosigned_block_number(&txn) else { break }; + if latest < cosign.block_number { + break; + } + + // Because we've cosigned it, provide the TX for that + (tributary, scan_tributary_task) = provide( + tributary, + scan_tributary_task, + 
Transaction::Cosigned { substrate_block_hash: cosign.block_hash }, + ) + .await; + // Clear pending_notable_cosign since this cosign isn't pending + pending_notable_cosign = false; + + // Commit the txn to clear this from PendingCosigns + txn.commit(); + } + + // If we don't have any notable cosigns pending, provide the next set of cosign intents + if pending_notable_cosign { + let mut txn = db.txn(); + // intended_cosigns will only yield up to and including the next notable cosign + for cosign in Cosigning::::intended_cosigns(&mut txn, set.set) { + // Flag this cosign as pending + PendingCosigns::send(&mut txn, set.set, &cosign); + // Provide the transaction to queue it for work + (tributary, scan_tributary_task) = provide( + tributary, + scan_tributary_task, + Transaction::Cosign { substrate_block_hash: cosign.block_hash }, + ) + .await; + } + txn.commit(); + } + + // Have the tributary scanner run as soon as there's a new block + // This is wrapped in a timeout so we don't go too long without running the above code + match tokio::time::timeout( + Duration::from_millis(::tributary::tendermint::TARGET_BLOCK_TIME.into()), + tributary.next_block_notification().await, + ) + .await + { + // Future resolved within the timeout, notification + Ok(Ok(())) => scan_tributary_task.run_now(), + // Future resolved within the timeout, notification failed due to sender being dropped // unreachable since this owns the tributary object and doesn't drop it - .expect("tributary was dropped causing notification to error"); - scan_tributary_task.run_now(); + Ok(Err(_)) => panic!("tributary was dropped causing notification to error"), + // Future didn't resolve within the timeout + Err(_) => {} + } } }); } @@ -311,15 +344,17 @@ async fn main() { // Cleanup all historic Tributaries while let Some(to_cleanup) = TributaryCleanup::try_recv(&mut txn) { - // TributaryCleanup may fire this message multiple times so this may fail if we've already - // performed cleanup - log::info!("pruning data directory for historic tributary {to_cleanup:?}"); - let _: Result<_, _> = fs::remove_dir_all(tributary_db_folder(to_cleanup)); + prune_tributary_db(to_cleanup); + // Drain the cosign intents created for this set + while !Cosigning::::intended_cosigns(&mut txn, to_cleanup).is_empty() {} } // Remove retired Tributaries from ActiveTributaries let mut active_tributaries = ActiveTributaries::get(&txn).unwrap_or(vec![]); - active_tributaries.retain(|tributary| RetiredTributary::get(&txn, tributary.set).is_none()); + active_tributaries.retain(|tributary| { + RetiredTributary::get(&txn, tributary.set.network).map(|session| session.0) < + Some(tributary.set.session.0) + }); ActiveTributaries::set(&mut txn, &active_tributaries); txn.commit(); diff --git a/coordinator/src/tributary/db.rs b/coordinator/src/tributary/db.rs index 0d85110d..a3eab8db 100644 --- a/coordinator/src/tributary/db.rs +++ b/coordinator/src/tributary/db.rs @@ -189,8 +189,10 @@ create_db!( // The latest Substrate block to cosign. LatestSubstrateBlockToCosign: (set: ValidatorSet) -> [u8; 32], - // If we're actively cosigning or not. - ActivelyCosigning: (set: ValidatorSet) -> (), + // The hash of the block we're actively cosigning. + ActivelyCosigning: (set: ValidatorSet) -> [u8; 32], + // If this block has already been cosigned. + Cosigned: (set: ValidatorSet, substrate_block_hash: [u8; 32]) -> (), // The weight accumulated for a topic. 
AccumulatedWeight: (set: ValidatorSet, topic: Topic) -> u64, @@ -238,19 +240,20 @@ impl TributaryDb { ) { LatestSubstrateBlockToCosign::set(txn, set, &substrate_block_hash); } - pub(crate) fn actively_cosigning(txn: &mut impl DbTxn, set: ValidatorSet) -> bool { - ActivelyCosigning::get(txn, set).is_some() + pub(crate) fn actively_cosigning(txn: &mut impl DbTxn, set: ValidatorSet) -> Option<[u8; 32]> { + ActivelyCosigning::get(txn, set) } pub(crate) fn start_cosigning( txn: &mut impl DbTxn, set: ValidatorSet, + substrate_block_hash: [u8; 32], substrate_block_number: u64, ) { assert!( ActivelyCosigning::get(txn, set).is_none(), "starting cosigning while already cosigning" ); - ActivelyCosigning::set(txn, set, &()); + ActivelyCosigning::set(txn, set, &substrate_block_hash); TributaryDb::recognize_topic( txn, @@ -265,6 +268,20 @@ impl TributaryDb { pub(crate) fn finish_cosigning(txn: &mut impl DbTxn, set: ValidatorSet) { assert!(ActivelyCosigning::take(txn, set).is_some(), "finished cosigning but not cosigning"); } + pub(crate) fn mark_cosigned( + txn: &mut impl DbTxn, + set: ValidatorSet, + substrate_block_hash: [u8; 32], + ) { + Cosigned::set(txn, set, substrate_block_hash, &()); + } + pub(crate) fn cosigned( + txn: &mut impl DbTxn, + set: ValidatorSet, + substrate_block_hash: [u8; 32], + ) -> bool { + Cosigned::get(txn, set, substrate_block_hash).is_some() + } pub(crate) fn recognize_topic(txn: &mut impl DbTxn, set: ValidatorSet, topic: Topic) { AccumulatedWeight::set(txn, set, topic, &0); diff --git a/coordinator/src/tributary/scan.rs b/coordinator/src/tributary/scan.rs index 9da982e5..6e4d8d3f 100644 --- a/coordinator/src/tributary/scan.rs +++ b/coordinator/src/tributary/scan.rs @@ -45,17 +45,22 @@ struct ScanBlock<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> { impl<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, CD, TD, TDT, P> { fn potentially_start_cosign(&mut self) { // Don't start a new cosigning instance if we're actively running one - if TributaryDb::actively_cosigning(self.tributary_txn, self.set) { + if TributaryDb::actively_cosigning(self.tributary_txn, self.set).is_some() { return; } - // Start cosigning the latest intended-to-be-cosigned block + // Fetch the latest intended-to-be-cosigned block let Some(latest_substrate_block_to_cosign) = TributaryDb::latest_substrate_block_to_cosign(self.tributary_txn, self.set) else { return; }; + // If it was already cosigned, return + if TributaryDb::cosigned(self.tributary_txn, self.set, latest_substrate_block_to_cosign) { + return; + } + let Some(substrate_block_number) = Cosigning::::finalized_block_number(self.cosign_db, latest_substrate_block_to_cosign) else { @@ -65,7 +70,12 @@ impl<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, CD, TD, TDT, P> { }; // Mark us as actively cosigning - TributaryDb::start_cosigning(self.tributary_txn, self.set, substrate_block_number); + TributaryDb::start_cosigning( + self.tributary_txn, + self.set, + latest_substrate_block_to_cosign, + substrate_block_number, + ); // Send the message for the processor to start signing TributaryDb::send_message( self.tributary_txn, @@ -154,24 +164,31 @@ impl<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, CD, TD, TDT, P> { self.set, substrate_block_hash, ); - // Start a new cosign if we weren't already working on one + // Start a new cosign if we aren't already working on one self.potentially_start_cosign(); } Transaction::Cosigned { substrate_block_hash } => { - TributaryDb::finish_cosigning(self.tributary_txn, self.set); + /* + We provide one Cosigned 
per Cosign transaction, but they have independent orders. This + means we may receive Cosigned before Cosign. In order to ensure we only start work on + not-yet-Cosigned cosigns, we flag all cosigned blocks as cosigned. Then, when we choose + the next block to work on, we won't if it's already been cosigned. + */ + TributaryDb::mark_cosigned(self.tributary_txn, self.set, substrate_block_hash); - // Fetch the latest intended-to-be-cosigned block - let Some(latest_substrate_block_to_cosign) = - TributaryDb::latest_substrate_block_to_cosign(self.tributary_txn, self.set) - else { - return; - }; - // If this is the block we just cosigned, return, preventing us from signing it again - if latest_substrate_block_to_cosign == substrate_block_hash { + // If we aren't actively cosigning this block, return + // This occurs when we have Cosign TXs A, B, C, we received Cosigned for A and start on C, + // and then receive Cosigned for B + if TributaryDb::actively_cosigning(self.tributary_txn, self.set) != + Some(substrate_block_hash) + { return; } - // Since we do have a new cosign to work on, start it + // Since this is the block we were cosigning, mark us as having finished cosigning + TributaryDb::finish_cosigning(self.tributary_txn, self.set); + + // Start working on the next cosign self.potentially_start_cosign(); } Transaction::SubstrateBlock { hash } => { diff --git a/coordinator/src/tributary/transaction.rs b/coordinator/src/tributary/transaction.rs index 0befbf36..34528cb9 100644 --- a/coordinator/src/tributary/transaction.rs +++ b/coordinator/src/tributary/transaction.rs @@ -231,9 +231,11 @@ impl TransactionTrait for Transaction { TransactionKind::Signed((b"DkgConfirmation", attempt).encode(), signed.nonce(1)) } - Transaction::Cosign { .. } => TransactionKind::Provided("CosignSubstrateBlock"), + Transaction::Cosign { .. } => TransactionKind::Provided("Cosign"), Transaction::Cosigned { .. } => TransactionKind::Provided("Cosigned"), + // TODO: Provide this Transaction::SubstrateBlock { .. } => TransactionKind::Provided("SubstrateBlock"), + // TODO: Provide this Transaction::Batch { .. } => TransactionKind::Provided("Batch"), Transaction::Sign { id, attempt, round, signed, .. } => { diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index ee6ed8ac..5b3d325f 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -213,17 +213,17 @@ pub mod substrate { pub enum CoordinatorMessage { /// Keys set on the Serai blockchain. /// - /// This is set by the Coordinator's Substrate canonical event stream. + /// This is sent by the Coordinator's Substrate canonical event stream. SetKeys { serai_time: u64, session: Session, key_pair: KeyPair }, /// Slashes reported on the Serai blockchain OR the process timed out. /// /// This is the final message for a session, /// - /// This is set by the Coordinator's Substrate canonical event stream. + /// This is sent by the Coordinator's Substrate canonical event stream. SlashesReported { session: Session }, /// A block from Serai with relevance to this processor. /// - /// This is set by the Coordinator's Substrate canonical event stream. + /// This is sent by the Coordinator's Substrate canonical event stream. 
Block { serai_block_number: u64, batch: Option, From 1419ba570a9f79194e452de65bc9b23f7f39f494 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 11 Jan 2025 01:55:36 -0500 Subject: [PATCH 284/368] Route from tributary scanner to message-queue --- coordinator/src/main.rs | 32 +++++++++++++++++------ coordinator/src/tributary/db.rs | 7 ++++++ coordinator/src/tributary/mod.rs | 42 +++++++++++++++++++++++++++++++ coordinator/src/tributary/scan.rs | 6 ++--- 4 files changed, 77 insertions(+), 10 deletions(-) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index dbe185fb..496d9ac1 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -31,7 +31,7 @@ mod db; use db::*; mod tributary; -use tributary::{Transaction, ScanTributaryTask}; +use tributary::{Transaction, ScanTributaryTask, ScanTributaryMessagesTask}; mod p2p { pub use serai_coordinator_p2p::*; @@ -111,6 +111,7 @@ fn spawn_cosigning( /// This will spawn the Tributary, the Tributary scanning task, and inform the P2P network. async fn spawn_tributary( mut db: Db, + message_queue: Arc, p2p: P, p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary
<P>
)>, set: NewSetInformation, @@ -168,7 +169,16 @@ async fn spawn_tributary( .unwrap(); let reader = tributary.reader(); - p2p_add_tributary.send((set.set, tributary)).expect("p2p's add_tributary channel was closed?"); + p2p_add_tributary + .send((set.set, tributary.clone())) + .expect("p2p's add_tributary channel was closed?"); + + // Spawn the task to send all messages from the Tributary scanner to the message-queue + let (scan_tributary_messages_task_def, scan_tributary_messages_task) = Task::new(); + tokio::spawn( + (ScanTributaryMessagesTask { tributary_db: tributary_db.clone(), set: set.set, message_queue }) + .continually_run(scan_tributary_messages_task_def, vec![]), + ); let (scan_tributary_task_def, mut scan_tributary_task) = Task::new(); tokio::spawn( @@ -182,9 +192,10 @@ async fn spawn_tributary( tributary: reader, _p2p: PhantomData::
<P>
, }) - .continually_run(scan_tributary_task_def, vec![todo!("TODO")]), + // This is the only handle for this ScanTributaryMessagesTask, so when this task is dropped, it + // will be too + .continually_run(scan_tributary_task_def, vec![scan_tributary_messages_task]), ); - // TODO^ On Tributary block, drain this task's ProcessorMessages tokio::spawn(async move { loop { @@ -363,7 +374,7 @@ async fn main() { }; // Connect to the message-queue - let message_queue = MessageQueue::from_env(Service::Coordinator); + let message_queue = Arc::new(MessageQueue::from_env(Service::Coordinator)); // Connect to the Serai node let serai = serai().await; @@ -429,8 +440,15 @@ async fn main() { // Spawn all Tributaries on-disk for tributary in existing_tributaries_at_boot { - spawn_tributary(db.clone(), p2p.clone(), &p2p_add_tributary_send, tributary, serai_key.clone()) - .await; + spawn_tributary( + db.clone(), + message_queue.clone(), + p2p.clone(), + &p2p_add_tributary_send, + tributary, + serai_key.clone(), + ) + .await; } // TODO: Hndle processor messages diff --git a/coordinator/src/tributary/db.rs b/coordinator/src/tributary/db.rs index a3eab8db..99fbe69a 100644 --- a/coordinator/src/tributary/db.rs +++ b/coordinator/src/tributary/db.rs @@ -446,4 +446,11 @@ impl TributaryDb { ) { ProcessorMessages::send(txn, set, &message.into()); } + + pub(crate) fn try_recv_message( + txn: &mut impl DbTxn, + set: ValidatorSet, + ) -> Option { + ProcessorMessages::try_recv(txn, set) + } } diff --git a/coordinator/src/tributary/mod.rs b/coordinator/src/tributary/mod.rs index 60f005e3..6b7e3dbb 100644 --- a/coordinator/src/tributary/mod.rs +++ b/coordinator/src/tributary/mod.rs @@ -1,3 +1,17 @@ +use core::future::Future; +use std::sync::Arc; + +use serai_db::{DbTxn, Db}; + +use serai_client::validator_sets::primitives::ValidatorSet; + +use serai_task::ContinuallyRan; + +use message_queue::{Service, Metadata, client::MessageQueue}; + +use serai_coordinator_substrate::NewSetInformation; +use serai_coordinator_p2p::P2p; + mod transaction; pub use transaction::Transaction; @@ -5,3 +19,31 @@ mod db; mod scan; pub(crate) use scan::ScanTributaryTask; + +pub(crate) struct ScanTributaryMessagesTask { + pub(crate) tributary_db: TD, + pub(crate) set: ValidatorSet, + pub(crate) message_queue: Arc, +} +impl ContinuallyRan for ScanTributaryMessagesTask { + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut made_progress = false; + loop { + let mut txn = self.tributary_db.txn(); + let Some(msg) = db::TributaryDb::try_recv_message(&mut txn, self.set) else { break }; + let metadata = Metadata { + from: Service::Coordinator, + to: Service::Processor(self.set.network), + intent: msg.intent(), + }; + let msg = borsh::to_vec(&msg).unwrap(); + // TODO: Make this fallible + self.message_queue.queue(metadata, msg).await; + txn.commit(); + made_progress = true; + } + Ok(made_progress) + } + } +} diff --git a/coordinator/src/tributary/scan.rs b/coordinator/src/tributary/scan.rs index 6e4d8d3f..ac7fd43b 100644 --- a/coordinator/src/tributary/scan.rs +++ b/coordinator/src/tributary/scan.rs @@ -414,7 +414,7 @@ impl ContinuallyRan for ScanTributaryTask { TributaryDb::last_handled_tributary_block(&self.tributary_db, self.set) .unwrap_or((0, self.tributary.genesis())); - let mut made_progess = false; + let mut made_progress = false; while let Some(next) = self.tributary.block_after(&last_block_hash) { let block = self.tributary.block(&next).unwrap(); let block_number = last_block_number + 1; @@ -457,10 +457,10 @@ impl 
ContinuallyRan for ScanTributaryTask { last_block_hash = block_hash; tributary_txn.commit(); - made_progess = true; + made_progress = true; } - Ok(made_progess) + Ok(made_progress) } } } From 6d5049cab26872e9e3d28acc7dcf2b698822542e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 11 Jan 2025 02:10:15 -0500 Subject: [PATCH 285/368] Move the task providing transactions onto the Tributary to the Tributary module Slims down the main file a bit --- coordinator/src/main.rs | 113 +--------------------------- coordinator/src/tributary/mod.rs | 124 ++++++++++++++++++++++++++++++- 2 files changed, 124 insertions(+), 113 deletions(-) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 496d9ac1..842f7f9c 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -151,7 +151,7 @@ async fn spawn_tributary( let mut validators = Vec::with_capacity(set.validators.len()); let mut total_weight = 0; let mut validator_weights = HashMap::with_capacity(set.validators.len()); - for (validator, weight) in set.validators { + for (validator, weight) in set.validators.iter().copied() { let validator_key = ::read_G(&mut validator.0.as_slice()) .expect("Serai validator had an invalid public key"); let validator = SeraiAddress::from(validator); @@ -197,116 +197,7 @@ async fn spawn_tributary( .continually_run(scan_tributary_task_def, vec![scan_tributary_messages_task]), ); - tokio::spawn(async move { - loop { - // Break once this Tributary is retired - if RetiredTributary::get(&db, set.set.network).map(|session| session.0) >= - Some(set.set.session.0) - { - break; - } - - let provide = |tributary: Tributary<_>, scan_tributary_task, tx: Transaction| async move { - match tributary.provide_transaction(tx.clone()).await { - // The Tributary uses its own DB, so we may provide this multiple times if we reboot - // before committing the txn which provoked this - Ok(()) | Err(ProvidedError::AlreadyProvided) => {} - Err(ProvidedError::NotProvided) => { - panic!("providing a Transaction which wasn't a Provided transaction?"); - } - Err(ProvidedError::InvalidProvided(e)) => { - panic!("providing an invalid Provided transaction: {e:?}") - } - Err(ProvidedError::LocalMismatchesOnChain) => { - // Drop the Tributary and scan Tributary task so we don't continue running them here - drop(tributary); - drop(scan_tributary_task); - - loop { - // We're actually only halting the Tributary's scan task (which already only scans - // if all Provided transactions align) as the P2P task is still maintaining a clone - // of the Tributary handle - log::error!( - "Tributary {:?} was supposed to provide {:?} but peers disagree, halting Tributary", - set.set, - tx, - ); - // Print this every five minutes as this does need to be handled - tokio::time::sleep(Duration::from_secs(5 * 60)).await; - } - - // Declare this unreachable so Rust will let us perform the above drops - unreachable!(); - } - } - (tributary, scan_tributary_task) - }; - - // Check if we produced any cosigns we were supposed to - let mut pending_notable_cosign = false; - loop { - let mut txn = db.txn(); - - // Fetch the next cosign this tributary should handle - let Some(cosign) = PendingCosigns::try_recv(&mut txn, set.set) else { break }; - pending_notable_cosign = cosign.notable; - - // If we (Serai) haven't cosigned this block, break as this is still pending - let Ok(latest) = Cosigning::::latest_cosigned_block_number(&txn) else { break }; - if latest < cosign.block_number { - break; - } - - // Because we've cosigned it, provide the TX for that - 
(tributary, scan_tributary_task) = provide( - tributary, - scan_tributary_task, - Transaction::Cosigned { substrate_block_hash: cosign.block_hash }, - ) - .await; - // Clear pending_notable_cosign since this cosign isn't pending - pending_notable_cosign = false; - - // Commit the txn to clear this from PendingCosigns - txn.commit(); - } - - // If we don't have any notable cosigns pending, provide the next set of cosign intents - if pending_notable_cosign { - let mut txn = db.txn(); - // intended_cosigns will only yield up to and including the next notable cosign - for cosign in Cosigning::::intended_cosigns(&mut txn, set.set) { - // Flag this cosign as pending - PendingCosigns::send(&mut txn, set.set, &cosign); - // Provide the transaction to queue it for work - (tributary, scan_tributary_task) = provide( - tributary, - scan_tributary_task, - Transaction::Cosign { substrate_block_hash: cosign.block_hash }, - ) - .await; - } - txn.commit(); - } - - // Have the tributary scanner run as soon as there's a new block - // This is wrapped in a timeout so we don't go too long without running the above code - match tokio::time::timeout( - Duration::from_millis(::tributary::tendermint::TARGET_BLOCK_TIME.into()), - tributary.next_block_notification().await, - ) - .await - { - // Future resolved within the timeout, notification - Ok(Ok(())) => scan_tributary_task.run_now(), - // Future resolved within the timeout, notification failed due to sender being dropped - // unreachable since this owns the tributary object and doesn't drop it - Ok(Err(_)) => panic!("tributary was dropped causing notification to error"), - // Future didn't resolve within the timeout - Err(_) => {} - } - } - }); + tokio::spawn(tributary::run(db, set, tributary, scan_tributary_task)); } #[tokio::main] diff --git a/coordinator/src/tributary/mod.rs b/coordinator/src/tributary/mod.rs index 6b7e3dbb..4ca3cbbe 100644 --- a/coordinator/src/tributary/mod.rs +++ b/coordinator/src/tributary/mod.rs @@ -1,14 +1,17 @@ -use core::future::Future; +use core::{future::Future, time::Duration}; use std::sync::Arc; use serai_db::{DbTxn, Db}; use serai_client::validator_sets::primitives::ValidatorSet; -use serai_task::ContinuallyRan; +use ::tributary::{ProvidedError, Tributary}; + +use serai_task::{TaskHandle, ContinuallyRan}; use message_queue::{Service, Metadata, client::MessageQueue}; +use serai_cosign::Cosigning; use serai_coordinator_substrate::NewSetInformation; use serai_coordinator_p2p::P2p; @@ -25,6 +28,7 @@ pub(crate) struct ScanTributaryMessagesTask { pub(crate) set: ValidatorSet, pub(crate) message_queue: Arc, } + impl ContinuallyRan for ScanTributaryMessagesTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { @@ -47,3 +51,119 @@ impl ContinuallyRan for ScanTributaryMessagesTask { } } } + +async fn provide_transaction( + set: ValidatorSet, + tributary: &Tributary, + tx: Transaction, +) { + match tributary.provide_transaction(tx.clone()).await { + // The Tributary uses its own DB, so we may provide this multiple times if we reboot before + // committing the txn which provoked this + Ok(()) | Err(ProvidedError::AlreadyProvided) => {} + Err(ProvidedError::NotProvided) => { + panic!("providing a Transaction which wasn't a Provided transaction: {tx:?}"); + } + Err(ProvidedError::InvalidProvided(e)) => { + panic!("providing an invalid Provided transaction, tx: {tx:?}, error: {e:?}") + } + Err(ProvidedError::LocalMismatchesOnChain) => loop { + // The Tributary's scan task won't advance if we don't have the Provided 
transactions + // present on-chain, and this enters an infinite loop to block the calling task from + // advancing + log::error!( + "Tributary {:?} was supposed to provide {:?} but peers disagree, halting Tributary", + set, + tx, + ); + // Print this every five minutes as this does need to be handled + tokio::time::sleep(Duration::from_secs(5 * 60)).await; + }, + } +} + +/// Run a Tributary. +/// +/// The Tributary handle existing causes the Tributary's consensus engine to be run. We distinctly +/// have `ScanTributaryTask` to scan the produced blocks. This function provides Provided +/// transactions onto the Tributary and invokes ScanTributaryTask whenver a new Tributary block is +/// produced (instead of only on the standard interval). +pub(crate) async fn run( + mut db: CD, + set: NewSetInformation, + tributary: Tributary, + scan_tributary_task: TaskHandle, +) { + loop { + // Break once this Tributary is retired + if crate::RetiredTributary::get(&db, set.set.network).map(|session| session.0) >= + Some(set.set.session.0) + { + break; + } + + // Check if we produced any cosigns we were supposed to + let mut pending_notable_cosign = false; + loop { + let mut txn = db.txn(); + + // Fetch the next cosign this tributary should handle + let Some(cosign) = crate::PendingCosigns::try_recv(&mut txn, set.set) else { break }; + pending_notable_cosign = cosign.notable; + + // If we (Serai) haven't cosigned this block, break as this is still pending + let Ok(latest) = Cosigning::::latest_cosigned_block_number(&txn) else { break }; + if latest < cosign.block_number { + break; + } + + // Because we've cosigned it, provide the TX for that + provide_transaction( + set.set, + &tributary, + Transaction::Cosigned { substrate_block_hash: cosign.block_hash }, + ) + .await; + // Clear pending_notable_cosign since this cosign isn't pending + pending_notable_cosign = false; + + // Commit the txn to clear this from PendingCosigns + txn.commit(); + } + + // If we don't have any notable cosigns pending, provide the next set of cosign intents + if pending_notable_cosign { + let mut txn = db.txn(); + // intended_cosigns will only yield up to and including the next notable cosign + for cosign in Cosigning::::intended_cosigns(&mut txn, set.set) { + // Flag this cosign as pending + crate::PendingCosigns::send(&mut txn, set.set, &cosign); + // Provide the transaction to queue it for work + provide_transaction( + set.set, + &tributary, + Transaction::Cosign { substrate_block_hash: cosign.block_hash }, + ) + .await; + } + txn.commit(); + } + + // Have the tributary scanner run as soon as there's a new block + // This is wrapped in a timeout so we don't go too long without running the above code + match tokio::time::timeout( + Duration::from_millis(::tributary::tendermint::TARGET_BLOCK_TIME.into()), + tributary.next_block_notification().await, + ) + .await + { + // Future resolved within the timeout, notification + Ok(Ok(())) => scan_tributary_task.run_now(), + // Future resolved within the timeout, notification failed due to sender being dropped + // unreachable since this owns the tributary object and doesn't drop it + Ok(Err(_)) => panic!("tributary was dropped causing notification to error"), + // Future didn't resolve within the timeout + Err(_) => {} + } + } +} From c05b0c9eba6c677f172f9d004089eac2452ea5d7 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 11 Jan 2025 03:07:15 -0500 Subject: [PATCH 286/368] Handle Canonical, NewSet from serai-coordinator-substrate --- coordinator/src/main.rs | 191 
++++++++++++++++++++++++++----- coordinator/substrate/src/lib.rs | 2 +- processor/messages/src/lib.rs | 2 +- 3 files changed, 162 insertions(+), 33 deletions(-) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 842f7f9c..71f73d65 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -1,4 +1,4 @@ -use core::{marker::PhantomData, ops::Deref, time::Duration}; +use core::{marker::PhantomData, ops::Deref, future::Future, time::Duration}; use std::{sync::Arc, collections::HashMap, time::Instant}; use zeroize::{Zeroize, Zeroizing}; @@ -14,13 +14,11 @@ use tokio::sync::mpsc; use scale::Encode; use serai_client::{ - primitives::{NetworkId, PublicKey, SeraiAddress}, + primitives::{PublicKey, SeraiAddress}, validator_sets::primitives::{Session, ValidatorSet}, Serai, }; -use message_queue::{Service, client::MessageQueue}; - -use ::tributary::ProvidedError; +use message_queue::{Service, Metadata, client::MessageQueue}; use serai_task::{Task, TaskHandle, ContinuallyRan}; @@ -110,7 +108,7 @@ fn spawn_cosigning( /// /// This will spawn the Tributary, the Tributary scanning task, and inform the P2P network. async fn spawn_tributary( - mut db: Db, + db: Db, message_queue: Arc, p2p: P, p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary
<P>
)>, @@ -123,21 +121,6 @@ async fn spawn_tributary( return; } - // TODO: Move from spawn_tributary to on NewSet - // Queue the historical Tributary for this network for deletion - // We explicitly don't queue this upon Tributary retire to give time to investigate retired - // Tributaries if questions are raised post-retiry. This gives a week after the Tributary has - // been retired to make a backup of the data directory for any investigations. - if let Some(historic_session) = set.set.session.0.checked_sub(2) { - // This may get fired several times but that isn't an issue - let mut txn = db.txn(); - TributaryCleanup::send( - &mut txn, - &ValidatorSet { network: set.set.network, session: Session(historic_session) }, - ); - txn.commit(); - } - let genesis = <[u8; 32]>::from(Blake2s::::digest((set.serai_block, set.set).encode())); // Since the Serai block will be finalized, then cosigned, before we handle this, this time will @@ -163,7 +146,7 @@ async fn spawn_tributary( } let tributary_db = tributary_db(set.set); - let mut tributary = + let tributary = Tributary::new(tributary_db.clone(), genesis, start_time, serai_key, tributary_validators, p2p) .await .unwrap(); @@ -180,7 +163,7 @@ async fn spawn_tributary( .continually_run(scan_tributary_messages_task_def, vec![]), ); - let (scan_tributary_task_def, mut scan_tributary_task) = Task::new(); + let (scan_tributary_task_def, scan_tributary_task) = Task::new(); tokio::spawn( (ScanTributaryTask { cosign_db: db.clone(), @@ -200,6 +183,143 @@ async fn spawn_tributary( tokio::spawn(tributary::run(db, set, tributary, scan_tributary_task)); } +struct SubstrateTask { + serai_key: Zeroizing<::F>, + db: Db, + message_queue: Arc, + p2p: P, + p2p_add_tributary: mpsc::UnboundedSender<(ValidatorSet, Tributary
<P>
)>,
+  p2p_retire_tributary: mpsc::UnboundedSender<ValidatorSet>,
+}
+
+impl<P: P2p> ContinuallyRan for SubstrateTask<P>
{ + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut made_progress = false; + + // Handle the Canonical events + for network in serai_client::primitives::NETWORKS { + loop { + let mut txn = self.db.txn(); + let Some(msg) = serai_coordinator_substrate::Canonical::try_recv(&mut txn, network) + else { + break; + }; + + match msg { + // TODO: Stop trying to confirm the DKG + messages::substrate::CoordinatorMessage::SetKeys { .. } => todo!("TODO"), + messages::substrate::CoordinatorMessage::SlashesReported { session } => { + let prior_retired = RetiredTributary::get(&txn, network); + let next_to_be_retired = + prior_retired.map(|session| Session(session.0 + 1)).unwrap_or(Session(0)); + assert_eq!(session, next_to_be_retired); + RetiredTributary::set(&mut txn, network, &session); + self + .p2p_retire_tributary + .send(ValidatorSet { network, session }) + .expect("p2p retire_tributary channel dropped?"); + } + messages::substrate::CoordinatorMessage::Block { .. } => {} + } + + let msg = messages::CoordinatorMessage::from(msg); + let metadata = Metadata { + from: Service::Coordinator, + to: Service::Processor(network), + intent: msg.intent(), + }; + let msg = borsh::to_vec(&msg).unwrap(); + // TODO: Make this fallible + self.message_queue.queue(metadata, msg).await; + txn.commit(); + made_progress = true; + } + } + + // Handle the NewSet events + loop { + let mut txn = self.db.txn(); + let Some(new_set) = serai_coordinator_substrate::NewSet::try_recv(&mut txn) else { break }; + + if let Some(historic_session) = new_set.set.session.0.checked_sub(2) { + // We should have retired this session if we're here + if RetiredTributary::get(&txn, new_set.set.network).map(|session| session.0) < + Some(historic_session) + { + /* + If we haven't, it's because we're processing the NewSet event before the retiry + event from the Canonical event stream. This happens if the Canonical event, and + then the NewSet event, is fired while we're already iterating over NewSet events. + + We break, dropping the txn, restoring this NewSet to the database, so we'll only + handle it once a future iteration of this loop handles the retiry event. + */ + break; + } + + /* + Queue this historical Tributary for deletion. + + We explicitly don't queue this upon Tributary retire, instead here, to give time to + investigate retired Tributaries if questions are raised post-retiry. This gives a + week (the duration of the following session) after the Tributary has been retired to + make a backup of the data directory for any investigations. 
+ */ + TributaryCleanup::send( + &mut txn, + &ValidatorSet { network: new_set.set.network, session: Session(historic_session) }, + ); + } + + // Save this Tributary as active to the database + { + let mut active_tributaries = + ActiveTributaries::get(&txn).unwrap_or(Vec::with_capacity(1)); + active_tributaries.push(new_set.clone()); + ActiveTributaries::set(&mut txn, &active_tributaries); + } + + // Send GenerateKey to the processor + let msg = messages::key_gen::CoordinatorMessage::GenerateKey { + session: new_set.set.session, + threshold: new_set.threshold, + evrf_public_keys: new_set.evrf_public_keys.clone(), + }; + let msg = messages::CoordinatorMessage::from(msg); + let metadata = Metadata { + from: Service::Coordinator, + to: Service::Processor(new_set.set.network), + intent: msg.intent(), + }; + let msg = borsh::to_vec(&msg).unwrap(); + // TODO: Make this fallible + self.message_queue.queue(metadata, msg).await; + + // Commit the transaction for all of this + txn.commit(); + + // Now spawn the Tributary + // If we reboot after committing the txn, but before this is called, this will be called + // on boot + spawn_tributary( + self.db.clone(), + self.message_queue.clone(), + self.p2p.clone(), + &self.p2p_add_tributary, + new_set, + self.serai_key.clone(), + ) + .await; + + made_progress = true; + } + + Ok(made_progress) + } + } +} + #[tokio::main] async fn main() { // Override the panic handler with one which will panic if any tokio task panics @@ -298,14 +418,13 @@ async fn main() { p2p }; - // TODO: p2p_add_tributary_send, p2p_retire_tributary_send - // Spawn the Substrate scanners - // TODO: Canonical, NewSet, SignSlashReport + // TODO: SignSlashReport + let (substrate_task_def, substrate_task) = Task::new(); let (substrate_canonical_task_def, substrate_canonical_task) = Task::new(); tokio::spawn( CanonicalEventStream::new(db.clone(), serai.clone()) - .continually_run(substrate_canonical_task_def, todo!("TODO")), + .continually_run(substrate_canonical_task_def, vec![substrate_task.clone()]), ); let (substrate_ephemeral_task_def, substrate_ephemeral_task) = Task::new(); tokio::spawn( @@ -314,7 +433,7 @@ async fn main() { serai.clone(), PublicKey::from_raw((::generator() * serai_key.deref()).to_bytes()), ) - .continually_run(substrate_ephemeral_task_def, todo!("TODO")), + .continually_run(substrate_ephemeral_task_def, vec![substrate_task]), ); // Spawn the cosign handler @@ -342,10 +461,20 @@ async fn main() { .await; } - // TODO: Hndle processor messages + // Handle the events from the Substrate scanner + tokio::spawn( + (SubstrateTask { + serai_key: serai_key.clone(), + db: db.clone(), + message_queue: message_queue.clone(), + p2p: p2p.clone(), + p2p_add_tributary: p2p_add_tributary_send.clone(), + p2p_retire_tributary: p2p_retire_tributary_send.clone(), + }) + .continually_run(substrate_task_def, vec![]), + ); - // TODO: On NewSet, queue historical for deletionn, save to DB, send KeyGen, spawn tributary - // task, inform P2P network + // TODO: Handle processor messages todo!("TODO") } diff --git a/coordinator/substrate/src/lib.rs b/coordinator/substrate/src/lib.rs index f723332d..b3f00a5e 100644 --- a/coordinator/substrate/src/lib.rs +++ b/coordinator/substrate/src/lib.rs @@ -32,7 +32,7 @@ fn borsh_deserialize_validators( } /// The information for a new set. -#[derive(Debug, BorshSerialize, BorshDeserialize)] +#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] pub struct NewSetInformation { /// The set. 
pub set: ValidatorSet,

diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs
index 5b3d325f..ec072fe5 100644
--- a/processor/messages/src/lib.rs
+++ b/processor/messages/src/lib.rs
@@ -24,7 +24,7 @@ pub mod key_gen {
   pub enum CoordinatorMessage {
     /// Instructs the Processor to begin the key generation process.
     ///
-    /// This is sent by the Coordinator when it creates the Tributary (TODO).
+    /// This is sent by the Coordinator when it creates the Tributary.
     GenerateKey { session: Session, threshold: u16, evrf_public_keys: Vec<([u8; 32], Vec<u8>)> },
     /// Received participations for the specified key generation protocol.
     ///

From 3c664ff05fb1d071b6f882e99fb0d050f44ea963 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sat, 11 Jan 2025 04:14:21 -0500
Subject: [PATCH 287/368] Re-arrange coordinator/

coordinator/tributary was tributary-chain. This crate has been renamed
tributary-sdk and moved to coordinator/tributary-sdk.

coordinator/src/tributary was our instantiation of a Tributary: the
Transaction type and the scan task. This has been moved to
coordinator/tributary.

The main reason for this was coordinator/main.rs becoming untidy. There is
now a collection of clean, independent APIs present in the codebase.
coordinator/main.rs exists to compose them.

Sometimes, these compositions are a bit silly (reading from a channel just to
forward the message to a distinct channel). That's more than fine, as the
code is still readable, and the value from the cleanliness of the composed
APIs far exceeds the nits from having these odd compositions. This breaks
down a bit as we now define a global database and have some APIs interact
with multiple other APIs.

coordinator/src/tributary was a self-contained, clean API. The recently added
task present in coordinator/tributary/mod.rs, which bound it to the rest of
the Coordinator, wasn't. Now, coordinator/src is solely the API compositions,
and all self-contained APIs are their own crates.
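A short illustration may help here. The "silly composition" described above,
reading from a channel just to forward each message into a distinct channel,
looks roughly like the following minimal, self-contained sketch using tokio
channels. This is not code from this repository: the `forward` helper and the
channel names are hypothetical, and the coordinator itself performs this kind
of forwarding inside its own task framework (e.g. the
`ScanTributaryMessagesTask` introduced earlier in this series).

use tokio::sync::mpsc;

// Hypothetical sketch; not the coordinator's actual task framework.
// Drain `rx` and forward every message into `tx`: a deliberately thin
// composition. Neither endpoint needs to know the other's API; this function
// is the only glue between them.
async fn forward<T: Send + 'static>(
    mut rx: mpsc::UnboundedReceiver<T>,
    tx: mpsc::UnboundedSender<T>,
) {
    while let Some(msg) = rx.recv().await {
        // If the downstream receiver was dropped, there's nothing left to
        // compose, so stop forwarding
        if tx.send(msg).is_err() {
            break;
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx_a, rx_a) = mpsc::unbounded_channel::<u32>();
    let (tx_b, mut rx_b) = mpsc::unbounded_channel::<u32>();

    // Compose the two channels by spawning the forwarding glue
    tokio::spawn(forward(rx_a, tx_b));

    tx_a.send(42).unwrap();
    assert_eq!(rx_b.recv().await, Some(42));
}

The point of accepting such glue is that each side remains a clean,
independent API, which is exactly the property this re-arrangement is after.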
--- .github/workflows/msrv.yml | 5 +- .github/workflows/tests.yml | 3 +- Cargo.lock | 30 +- Cargo.toml | 5 +- coordinator/Cargo.toml | 5 +- coordinator/p2p/Cargo.toml | 2 +- coordinator/p2p/libp2p/Cargo.toml | 2 +- coordinator/p2p/libp2p/src/gossip.rs | 7 +- coordinator/p2p/libp2p/src/lib.rs | 2 +- coordinator/p2p/libp2p/src/ping.rs | 2 +- coordinator/p2p/src/heartbeat.rs | 7 +- coordinator/p2p/src/lib.rs | 6 +- coordinator/src/main.rs | 185 +--- coordinator/src/substrate.rs | 160 ++++ .../src/{tributary/mod.rs => tributary.rs} | 15 +- coordinator/src/tributary/scan.rs | 466 ---------- coordinator/src/tributary/transaction.rs | 340 -------- coordinator/tributary-sdk/Cargo.toml | 49 ++ coordinator/tributary-sdk/LICENSE | 15 + coordinator/tributary-sdk/README.md | 3 + .../{tributary => tributary-sdk}/src/block.rs | 0 .../src/blockchain.rs | 0 coordinator/tributary-sdk/src/lib.rs | 388 ++++++++ .../src/mempool.rs | 0 .../src/merkle.rs | 0 .../src/provided.rs | 0 .../src/tendermint/mod.rs | 0 .../src/tendermint/tx.rs | 0 .../src/tests/block.rs | 0 .../src/tests/blockchain.rs | 0 .../src/tests/mempool.rs | 0 .../src/tests/merkle.rs | 0 .../src/tests/mod.rs | 0 .../src/tests/p2p.rs | 0 .../src/tests/tendermint.rs | 0 .../src/tests/transaction/mod.rs | 0 .../src/tests/transaction/signed.rs | 0 .../src/tests/transaction/tendermint.rs | 0 coordinator/tributary-sdk/src/transaction.rs | 218 +++++ .../tendermint/Cargo.toml | 0 .../tendermint/LICENSE | 0 .../tendermint/README.md | 0 .../tendermint/src/block.rs | 0 .../tendermint/src/ext.rs | 0 .../tendermint/src/lib.rs | 0 .../tendermint/src/message_log.rs | 0 .../tendermint/src/round.rs | 0 .../tendermint/src/time.rs | 0 .../tendermint/tests/ext.rs | 0 coordinator/tributary/Cargo.toml | 39 +- coordinator/tributary/LICENSE | 2 +- coordinator/tributary/README.md | 5 +- .../{src/tributary => tributary/src}/db.rs | 19 +- coordinator/tributary/src/lib.rs | 825 ++++++++++-------- coordinator/tributary/src/transaction.rs | 481 ++++++---- deny.toml | 3 +- 56 files changed, 1719 insertions(+), 1570 deletions(-) create mode 100644 coordinator/src/substrate.rs rename coordinator/src/{tributary/mod.rs => tributary.rs} (94%) delete mode 100644 coordinator/src/tributary/scan.rs delete mode 100644 coordinator/src/tributary/transaction.rs create mode 100644 coordinator/tributary-sdk/Cargo.toml create mode 100644 coordinator/tributary-sdk/LICENSE create mode 100644 coordinator/tributary-sdk/README.md rename coordinator/{tributary => tributary-sdk}/src/block.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/blockchain.rs (100%) create mode 100644 coordinator/tributary-sdk/src/lib.rs rename coordinator/{tributary => tributary-sdk}/src/mempool.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/merkle.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/provided.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/tendermint/mod.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/tendermint/tx.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/tests/block.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/tests/blockchain.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/tests/mempool.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/tests/merkle.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/tests/mod.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/tests/p2p.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/tests/tendermint.rs (100%) 
rename coordinator/{tributary => tributary-sdk}/src/tests/transaction/mod.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/tests/transaction/signed.rs (100%) rename coordinator/{tributary => tributary-sdk}/src/tests/transaction/tendermint.rs (100%) create mode 100644 coordinator/tributary-sdk/src/transaction.rs rename coordinator/{tributary => tributary-sdk}/tendermint/Cargo.toml (100%) rename coordinator/{tributary => tributary-sdk}/tendermint/LICENSE (100%) rename coordinator/{tributary => tributary-sdk}/tendermint/README.md (100%) rename coordinator/{tributary => tributary-sdk}/tendermint/src/block.rs (100%) rename coordinator/{tributary => tributary-sdk}/tendermint/src/ext.rs (100%) rename coordinator/{tributary => tributary-sdk}/tendermint/src/lib.rs (100%) rename coordinator/{tributary => tributary-sdk}/tendermint/src/message_log.rs (100%) rename coordinator/{tributary => tributary-sdk}/tendermint/src/round.rs (100%) rename coordinator/{tributary => tributary-sdk}/tendermint/src/time.rs (100%) rename coordinator/{tributary => tributary-sdk}/tendermint/tests/ext.rs (100%) rename coordinator/{src/tributary => tributary/src}/db.rs (97%) diff --git a/.github/workflows/msrv.yml b/.github/workflows/msrv.yml index e1636482..acf0eb32 100644 --- a/.github/workflows/msrv.yml +++ b/.github/workflows/msrv.yml @@ -173,10 +173,11 @@ jobs: - name: Run cargo msrv on coordinator run: | - cargo msrv verify --manifest-path coordinator/tributary/tendermint/Cargo.toml - cargo msrv verify --manifest-path coordinator/tributary/Cargo.toml + cargo msrv verify --manifest-path coordinator/tributary-sdk/tendermint/Cargo.toml + cargo msrv verify --manifest-path coordinator/tributary-sdk/Cargo.toml cargo msrv verify --manifest-path coordinator/cosign/Cargo.toml cargo msrv verify --manifest-path coordinator/substrate/Cargo.toml + cargo msrv verify --manifest-path coordinator/tributary/Cargo.toml cargo msrv verify --manifest-path coordinator/p2p/Cargo.toml cargo msrv verify --manifest-path coordinator/p2p/libp2p/Cargo.toml cargo msrv verify --manifest-path coordinator/Cargo.toml diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 0c311b99..af93154e 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -60,9 +60,10 @@ jobs: -p serai-ethereum-processor \ -p serai-monero-processor \ -p tendermint-machine \ - -p tributary-chain \ + -p tributary-sdk \ -p serai-cosign \ -p serai-coordinator-substrate \ + -p serai-coordinator-tributary \ -p serai-coordinator-p2p \ -p serai-coordinator-libp2p-p2p \ -p serai-coordinator \ diff --git a/Cargo.lock b/Cargo.lock index ede4518f..f93ff6c1 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8331,6 +8331,7 @@ dependencies = [ "serai-coordinator-libp2p-p2p", "serai-coordinator-p2p", "serai-coordinator-substrate", + "serai-coordinator-tributary", "serai-cosign", "serai-db", "serai-env", @@ -8338,7 +8339,7 @@ dependencies = [ "serai-processor-messages", "serai-task", "tokio", - "tributary-chain", + "tributary-sdk", "zalloc", "zeroize", ] @@ -8361,7 +8362,7 @@ dependencies = [ "serai-cosign", "serai-task", "tokio", - "tributary-chain", + "tributary-sdk", "void", "zeroize", ] @@ -8378,7 +8379,7 @@ dependencies = [ "serai-db", "serai-task", "tokio", - "tributary-chain", + "tributary-sdk", ] [[package]] @@ -8422,6 +8423,27 @@ dependencies = [ "zeroize", ] +[[package]] +name = "serai-coordinator-tributary" +version = "0.1.0" +dependencies = [ + "blake2", + "borsh", + "ciphersuite", + "log", + "parity-scale-codec", + "rand_core", + 
"schnorr-signatures", + "serai-client", + "serai-coordinator-substrate", + "serai-cosign", + "serai-db", + "serai-processor-messages", + "serai-task", + "tributary-sdk", + "zeroize", +] + [[package]] name = "serai-cosign" version = "0.1.0" @@ -10975,7 +10997,7 @@ dependencies = [ ] [[package]] -name = "tributary-chain" +name = "tributary-sdk" version = "0.1.0" dependencies = [ "blake2", diff --git a/Cargo.toml b/Cargo.toml index 39507b16..f11d5644 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -96,10 +96,11 @@ members = [ "processor/ethereum", "processor/monero", - "coordinator/tributary/tendermint", - "coordinator/tributary", + "coordinator/tributary-sdk/tendermint", + "coordinator/tributary-sdk", "coordinator/cosign", "coordinator/substrate", + "coordinator/tributary", "coordinator/p2p", "coordinator/p2p/libp2p", "coordinator", diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index 38adbf15..2eec60c8 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -39,7 +39,7 @@ serai-task = { path = "../common/task", version = "0.1" } messages = { package = "serai-processor-messages", path = "../processor/messages" } message-queue = { package = "serai-message-queue", path = "../message-queue" } -tributary = { package = "tributary-chain", path = "./tributary" } +tributary-sdk = { path = "./tributary-sdk" } serai-client = { path = "../substrate/client", default-features = false, features = ["serai", "borsh"] } @@ -53,10 +53,11 @@ tokio = { version = "1", default-features = false, features = ["time", "sync", " serai-cosign = { path = "./cosign" } serai-coordinator-substrate = { path = "./substrate" } +serai-coordinator-tributary = { path = "./tributary" } serai-coordinator-p2p = { path = "./p2p" } serai-coordinator-libp2p-p2p = { path = "./p2p/libp2p" } [features] -longer-reattempts = [] # TODO +longer-reattempts = ["serai-coordinator-tributary/longer-reattempts"] parity-db = ["serai-db/parity-db"] rocksdb = ["serai-db/rocksdb"] diff --git a/coordinator/p2p/Cargo.toml b/coordinator/p2p/Cargo.toml index 7b7c055c..0e55e8e6 100644 --- a/coordinator/p2p/Cargo.toml +++ b/coordinator/p2p/Cargo.toml @@ -24,7 +24,7 @@ serai-db = { path = "../../common/db", version = "0.1" } serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] } serai-cosign = { path = "../cosign" } -tributary = { package = "tributary-chain", path = "../tributary" } +tributary-sdk = { path = "../tributary-sdk" } futures-lite = { version = "2", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false, features = ["sync", "macros"] } diff --git a/coordinator/p2p/libp2p/Cargo.toml b/coordinator/p2p/libp2p/Cargo.toml index 8916d961..7a393588 100644 --- a/coordinator/p2p/libp2p/Cargo.toml +++ b/coordinator/p2p/libp2p/Cargo.toml @@ -31,7 +31,7 @@ borsh = { version = "1", default-features = false, features = ["std", "derive", serai-client = { path = "../../../substrate/client", default-features = false, features = ["serai", "borsh"] } serai-cosign = { path = "../../cosign" } -tributary = { package = "tributary-chain", path = "../../tributary" } +tributary-sdk = { path = "../../tributary-sdk" } void = { version = "1", default-features = false } futures-util = { version = "0.3", default-features = false, features = ["std"] } diff --git a/coordinator/p2p/libp2p/src/gossip.rs b/coordinator/p2p/libp2p/src/gossip.rs index f48c1c4e..f4ec666b 100644 --- a/coordinator/p2p/libp2p/src/gossip.rs +++ b/coordinator/p2p/libp2p/src/gossip.rs @@ -13,7 
+13,7 @@ pub use libp2p::gossipsub::Event; use serai_cosign::SignedCosign; // Block size limit + 16 KB of space for signatures/metadata -pub(crate) const MAX_LIBP2P_GOSSIP_MESSAGE_SIZE: usize = tributary::BLOCK_SIZE_LIMIT + 16384; +pub(crate) const MAX_LIBP2P_GOSSIP_MESSAGE_SIZE: usize = tributary_sdk::BLOCK_SIZE_LIMIT + 16384; const LIBP2P_PROTOCOL: &str = "/serai/coordinator/gossip/1.0.0"; const BASE_TOPIC: &str = "/"; @@ -42,9 +42,10 @@ pub(crate) type Behavior = Behaviour Behavior { // The latency used by the Tendermint protocol, used here as the gossip epoch duration // libp2p-rs defaults to 1 second, whereas ours will be ~2 - let heartbeat_interval = tributary::tendermint::LATENCY_TIME; + let heartbeat_interval = tributary_sdk::tendermint::LATENCY_TIME; // The amount of heartbeats which will occur within a single Tributary block - let heartbeats_per_block = tributary::tendermint::TARGET_BLOCK_TIME.div_ceil(heartbeat_interval); + let heartbeats_per_block = + tributary_sdk::tendermint::TARGET_BLOCK_TIME.div_ceil(heartbeat_interval); // libp2p-rs defaults to 5, whereas ours will be ~8 let heartbeats_to_keep = 2 * heartbeats_per_block; // libp2p-rs defaults to 3 whereas ours will be ~4 diff --git a/coordinator/p2p/libp2p/src/lib.rs b/coordinator/p2p/libp2p/src/lib.rs index d3f09e61..d92eae42 100644 --- a/coordinator/p2p/libp2p/src/lib.rs +++ b/coordinator/p2p/libp2p/src/lib.rs @@ -259,7 +259,7 @@ impl Libp2p { } } -impl tributary::P2p for Libp2p { +impl tributary_sdk::P2p for Libp2p { fn broadcast(&self, tributary: [u8; 32], message: Vec) -> impl Send + Future { async move { self diff --git a/coordinator/p2p/libp2p/src/ping.rs b/coordinator/p2p/libp2p/src/ping.rs index d579af05..2b9afa41 100644 --- a/coordinator/p2p/libp2p/src/ping.rs +++ b/coordinator/p2p/libp2p/src/ping.rs @@ -1,6 +1,6 @@ use core::time::Duration; -use tributary::tendermint::LATENCY_TIME; +use tributary_sdk::tendermint::LATENCY_TIME; use libp2p::ping::{self, Config, Behaviour}; pub use ping::Event; diff --git a/coordinator/p2p/src/heartbeat.rs b/coordinator/p2p/src/heartbeat.rs index 76d160ea..8a2f3220 100644 --- a/coordinator/p2p/src/heartbeat.rs +++ b/coordinator/p2p/src/heartbeat.rs @@ -5,7 +5,7 @@ use serai_client::validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, Validator use futures_lite::FutureExt; -use tributary::{ReadWrite, TransactionTrait, Block, Tributary, TributaryReader}; +use tributary_sdk::{ReadWrite, TransactionTrait, Block, Tributary, TributaryReader}; use serai_db::*; use serai_task::ContinuallyRan; @@ -13,7 +13,8 @@ use serai_task::ContinuallyRan; use crate::{Heartbeat, Peer, P2p}; // Amount of blocks in a minute -const BLOCKS_PER_MINUTE: usize = (60 / (tributary::tendermint::TARGET_BLOCK_TIME / 1000)) as usize; +const BLOCKS_PER_MINUTE: usize = + (60 / (tributary_sdk::tendermint::TARGET_BLOCK_TIME / 1000)) as usize; /// The minimum amount of blocks to include/included within a batch, assuming there's blocks to /// include in the batch. @@ -29,7 +30,7 @@ pub const MIN_BLOCKS_PER_BATCH: usize = BLOCKS_PER_MINUTE + 1; /// commit is `8 + (validators * 32) + (32 + (validators * 32))` (for the time, list of validators, /// and aggregate signature). Accordingly, this should be a safe over-estimate. 
pub const BATCH_SIZE_LIMIT: usize = MIN_BLOCKS_PER_BATCH * - (tributary::BLOCK_SIZE_LIMIT + 32 + ((MAX_KEY_SHARES_PER_SET as usize) * 128)); + (tributary_sdk::BLOCK_SIZE_LIMIT + 32 + ((MAX_KEY_SHARES_PER_SET as usize) * 128)); /// Sends a heartbeat to other validators on regular intervals informing them of our Tributary's /// tip. diff --git a/coordinator/p2p/src/lib.rs b/coordinator/p2p/src/lib.rs index 71eb8f2c..9bf245ca 100644 --- a/coordinator/p2p/src/lib.rs +++ b/coordinator/p2p/src/lib.rs @@ -10,7 +10,7 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet}; use serai_db::Db; -use tributary::{ReadWrite, TransactionTrait, Tributary, TributaryReader}; +use tributary_sdk::{ReadWrite, TransactionTrait, Tributary, TributaryReader}; use serai_cosign::{SignedCosign, Cosigning}; use tokio::sync::{mpsc, oneshot}; @@ -49,7 +49,9 @@ pub trait Peer<'a>: Send { } /// The representation of the P2P network. -pub trait P2p: Send + Sync + Clone + tributary::P2p + serai_cosign::RequestNotableCosigns { +pub trait P2p: + Send + Sync + Clone + tributary_sdk::P2p + serai_cosign::RequestNotableCosigns +{ /// The representation of a peer. type Peer<'a>: Peer<'a>; diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 71f73d65..0e2db23c 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -1,5 +1,5 @@ -use core::{marker::PhantomData, ops::Deref, future::Future, time::Duration}; -use std::{sync::Arc, collections::HashMap, time::Instant}; +use core::{ops::Deref, time::Duration}; +use std::{sync::Arc, time::Instant}; use zeroize::{Zeroize, Zeroizing}; use rand_core::{RngCore, OsRng}; @@ -13,23 +13,25 @@ use ciphersuite::{ use tokio::sync::mpsc; use scale::Encode; -use serai_client::{ - primitives::{PublicKey, SeraiAddress}, - validator_sets::primitives::{Session, ValidatorSet}, - Serai, -}; -use message_queue::{Service, Metadata, client::MessageQueue}; +use serai_client::{primitives::PublicKey, validator_sets::primitives::ValidatorSet, Serai}; +use message_queue::{Service, client::MessageQueue}; + +use tributary_sdk::Tributary; use serai_task::{Task, TaskHandle, ContinuallyRan}; use serai_cosign::{SignedCosign, Cosigning}; use serai_coordinator_substrate::{NewSetInformation, CanonicalEventStream, EphemeralEventStream}; +use serai_coordinator_tributary::{Transaction, ScanTributaryTask}; mod db; use db::*; mod tributary; -use tributary::{Transaction, ScanTributaryTask, ScanTributaryMessagesTask}; +use tributary::ScanTributaryMessagesTask; + +mod substrate; +use substrate::SubstrateTask; mod p2p { pub use serai_coordinator_p2p::*; @@ -44,8 +46,6 @@ mod p2p { static ALLOCATOR: zalloc::ZeroizingAlloc = zalloc::ZeroizingAlloc(std::alloc::System); -type Tributary
<P> = ::tributary::Tributary<Db, Transaction, P>;
-
 async fn serai() -> Arc<Serai> {
   const SERAI_CONNECTION_DELAY: Duration = Duration::from_secs(10);
   const MAX_SERAI_CONNECTION_DELAY: Duration = Duration::from_secs(300);
@@ -111,7 +111,7 @@ async fn spawn_tributary(
   db: Db,
   message_queue: Arc<MessageQueue>,
   p2p: P,
-  p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary
<P>
)>, + p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary)>, set: NewSetInformation, serai_key: Zeroizing<::F>, ) { @@ -131,18 +131,11 @@ async fn spawn_tributary( let start_time = set.declaration_time + TRIBUTARY_START_TIME_DELAY; let mut tributary_validators = Vec::with_capacity(set.validators.len()); - let mut validators = Vec::with_capacity(set.validators.len()); - let mut total_weight = 0; - let mut validator_weights = HashMap::with_capacity(set.validators.len()); for (validator, weight) in set.validators.iter().copied() { let validator_key = ::read_G(&mut validator.0.as_slice()) .expect("Serai validator had an invalid public key"); - let validator = SeraiAddress::from(validator); let weight = u64::from(weight); tributary_validators.push((validator_key, weight)); - validators.push(validator); - total_weight += weight; - validator_weights.insert(validator, weight); } let tributary_db = tributary_db(set.set); @@ -165,161 +158,15 @@ async fn spawn_tributary( let (scan_tributary_task_def, scan_tributary_task) = Task::new(); tokio::spawn( - (ScanTributaryTask { - cosign_db: db.clone(), - tributary_db, - set: set.set, - validators, - total_weight, - validator_weights, - tributary: reader, - _p2p: PhantomData::
<P>
, - }) - // This is the only handle for this ScanTributaryMessagesTask, so when this task is dropped, it - // will be too - .continually_run(scan_tributary_task_def, vec![scan_tributary_messages_task]), + ScanTributaryTask::<_, _, P>::new(db.clone(), tributary_db, &set, reader) + // This is the only handle for this ScanTributaryMessagesTask, so when this task is dropped, + // it will be too + .continually_run(scan_tributary_task_def, vec![scan_tributary_messages_task]), ); tokio::spawn(tributary::run(db, set, tributary, scan_tributary_task)); } -struct SubstrateTask { - serai_key: Zeroizing<::F>, - db: Db, - message_queue: Arc, - p2p: P, - p2p_add_tributary: mpsc::UnboundedSender<(ValidatorSet, Tributary
<P>
)>,
-  p2p_retire_tributary: mpsc::UnboundedSender<ValidatorSet>,
-}
-
-impl<P: P2p> ContinuallyRan for SubstrateTask<P>
{ - fn run_iteration(&mut self) -> impl Send + Future> { - async move { - let mut made_progress = false; - - // Handle the Canonical events - for network in serai_client::primitives::NETWORKS { - loop { - let mut txn = self.db.txn(); - let Some(msg) = serai_coordinator_substrate::Canonical::try_recv(&mut txn, network) - else { - break; - }; - - match msg { - // TODO: Stop trying to confirm the DKG - messages::substrate::CoordinatorMessage::SetKeys { .. } => todo!("TODO"), - messages::substrate::CoordinatorMessage::SlashesReported { session } => { - let prior_retired = RetiredTributary::get(&txn, network); - let next_to_be_retired = - prior_retired.map(|session| Session(session.0 + 1)).unwrap_or(Session(0)); - assert_eq!(session, next_to_be_retired); - RetiredTributary::set(&mut txn, network, &session); - self - .p2p_retire_tributary - .send(ValidatorSet { network, session }) - .expect("p2p retire_tributary channel dropped?"); - } - messages::substrate::CoordinatorMessage::Block { .. } => {} - } - - let msg = messages::CoordinatorMessage::from(msg); - let metadata = Metadata { - from: Service::Coordinator, - to: Service::Processor(network), - intent: msg.intent(), - }; - let msg = borsh::to_vec(&msg).unwrap(); - // TODO: Make this fallible - self.message_queue.queue(metadata, msg).await; - txn.commit(); - made_progress = true; - } - } - - // Handle the NewSet events - loop { - let mut txn = self.db.txn(); - let Some(new_set) = serai_coordinator_substrate::NewSet::try_recv(&mut txn) else { break }; - - if let Some(historic_session) = new_set.set.session.0.checked_sub(2) { - // We should have retired this session if we're here - if RetiredTributary::get(&txn, new_set.set.network).map(|session| session.0) < - Some(historic_session) - { - /* - If we haven't, it's because we're processing the NewSet event before the retiry - event from the Canonical event stream. This happens if the Canonical event, and - then the NewSet event, is fired while we're already iterating over NewSet events. - - We break, dropping the txn, restoring this NewSet to the database, so we'll only - handle it once a future iteration of this loop handles the retiry event. - */ - break; - } - - /* - Queue this historical Tributary for deletion. - - We explicitly don't queue this upon Tributary retire, instead here, to give time to - investigate retired Tributaries if questions are raised post-retiry. This gives a - week (the duration of the following session) after the Tributary has been retired to - make a backup of the data directory for any investigations. 
- */ - TributaryCleanup::send( - &mut txn, - &ValidatorSet { network: new_set.set.network, session: Session(historic_session) }, - ); - } - - // Save this Tributary as active to the database - { - let mut active_tributaries = - ActiveTributaries::get(&txn).unwrap_or(Vec::with_capacity(1)); - active_tributaries.push(new_set.clone()); - ActiveTributaries::set(&mut txn, &active_tributaries); - } - - // Send GenerateKey to the processor - let msg = messages::key_gen::CoordinatorMessage::GenerateKey { - session: new_set.set.session, - threshold: new_set.threshold, - evrf_public_keys: new_set.evrf_public_keys.clone(), - }; - let msg = messages::CoordinatorMessage::from(msg); - let metadata = Metadata { - from: Service::Coordinator, - to: Service::Processor(new_set.set.network), - intent: msg.intent(), - }; - let msg = borsh::to_vec(&msg).unwrap(); - // TODO: Make this fallible - self.message_queue.queue(metadata, msg).await; - - // Commit the transaction for all of this - txn.commit(); - - // Now spawn the Tributary - // If we reboot after committing the txn, but before this is called, this will be called - // on boot - spawn_tributary( - self.db.clone(), - self.message_queue.clone(), - self.p2p.clone(), - &self.p2p_add_tributary, - new_set, - self.serai_key.clone(), - ) - .await; - - made_progress = true; - } - - Ok(made_progress) - } - } -} - #[tokio::main] async fn main() { // Override the panic handler with one which will panic if any tokio task panics diff --git a/coordinator/src/substrate.rs b/coordinator/src/substrate.rs new file mode 100644 index 00000000..7ea5c257 --- /dev/null +++ b/coordinator/src/substrate.rs @@ -0,0 +1,160 @@ +use core::future::Future; +use std::sync::Arc; + +use zeroize::Zeroizing; + +use ciphersuite::{Ciphersuite, Ristretto}; + +use tokio::sync::mpsc; + +use serai_db::{DbTxn, Db as DbTrait}; + +use serai_client::validator_sets::primitives::{Session, ValidatorSet}; +use message_queue::{Service, Metadata, client::MessageQueue}; + +use tributary_sdk::Tributary; + +use serai_task::ContinuallyRan; + +use serai_coordinator_tributary::Transaction; +use serai_coordinator_p2p::P2p; + +use crate::Db; + +pub(crate) struct SubstrateTask { + pub(crate) serai_key: Zeroizing<::F>, + pub(crate) db: Db, + pub(crate) message_queue: Arc, + pub(crate) p2p: P, + pub(crate) p2p_add_tributary: + mpsc::UnboundedSender<(ValidatorSet, Tributary)>, + pub(crate) p2p_retire_tributary: mpsc::UnboundedSender, +} + +impl ContinuallyRan for SubstrateTask

+impl<P: P2p> ContinuallyRan for SubstrateTask<P> {
+  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
+    async move {
+      let mut made_progress = false;
+
+      // Handle the Canonical events
+      for network in serai_client::primitives::NETWORKS {
+        loop {
+          let mut txn = self.db.txn();
+          let Some(msg) = serai_coordinator_substrate::Canonical::try_recv(&mut txn, network)
+          else {
+            break;
+          };
+
+          match msg {
+            // TODO: Stop trying to confirm the DKG
+            messages::substrate::CoordinatorMessage::SetKeys { .. } => todo!("TODO"),
+            messages::substrate::CoordinatorMessage::SlashesReported { session } => {
+              let prior_retired = crate::db::RetiredTributary::get(&txn, network);
+              let next_to_be_retired =
+                prior_retired.map(|session| Session(session.0 + 1)).unwrap_or(Session(0));
+              assert_eq!(session, next_to_be_retired);
+              crate::db::RetiredTributary::set(&mut txn, network, &session);
+              self
+                .p2p_retire_tributary
+                .send(ValidatorSet { network, session })
+                .expect("p2p retire_tributary channel dropped?");
+            }
+            messages::substrate::CoordinatorMessage::Block { .. } => {}
+          }
+
+          let msg = messages::CoordinatorMessage::from(msg);
+          let metadata = Metadata {
+            from: Service::Coordinator,
+            to: Service::Processor(network),
+            intent: msg.intent(),
+          };
+          let msg = borsh::to_vec(&msg).unwrap();
+          // TODO: Make this fallible
+          self.message_queue.queue(metadata, msg).await;
+          txn.commit();
+          made_progress = true;
+        }
+      }
+
+      // Handle the NewSet events
+      loop {
+        let mut txn = self.db.txn();
+        let Some(new_set) = serai_coordinator_substrate::NewSet::try_recv(&mut txn) else { break };
+
+        if let Some(historic_session) = new_set.set.session.0.checked_sub(2) {
+          // We should have retired this session if we're here
+          if crate::db::RetiredTributary::get(&txn, new_set.set.network).map(|session| session.0) <
+            Some(historic_session)
+          {
+            /*
+              If we haven't, it's because we're processing the NewSet event before the retirement
+              event from the Canonical event stream. This happens if the Canonical event, and
+              then the NewSet event, is fired while we're already iterating over NewSet events.
+
+              We break, dropping the txn, restoring this NewSet to the database, so we'll only
+              handle it once a future iteration of this loop handles the retirement event.
+            */
+            break;
+          }
+
+          /*
+            Queue this historical Tributary for deletion.
+
+            We explicitly don't queue this upon Tributary retire, instead here, to give time to
+            investigate retired Tributaries if questions are raised post-retirement. This gives a
+            week (the duration of the following session) after the Tributary has been retired to
+            make a backup of the data directory for any investigations.
+          */
+          crate::db::TributaryCleanup::send(
+            &mut txn,
+            &ValidatorSet { network: new_set.set.network, session: Session(historic_session) },
+          );
+        }
+
+        // Save this Tributary as active to the database
+        {
+          let mut active_tributaries =
+            crate::db::ActiveTributaries::get(&txn).unwrap_or(Vec::with_capacity(1));
+          active_tributaries.push(new_set.clone());
+          crate::db::ActiveTributaries::set(&mut txn, &active_tributaries);
+        }
+
+        // Send GenerateKey to the processor
+        let msg = messages::key_gen::CoordinatorMessage::GenerateKey {
+          session: new_set.set.session,
+          threshold: new_set.threshold,
+          evrf_public_keys: new_set.evrf_public_keys.clone(),
+        };
+        let msg = messages::CoordinatorMessage::from(msg);
+        let metadata = Metadata {
+          from: Service::Coordinator,
+          to: Service::Processor(new_set.set.network),
+          intent: msg.intent(),
+        };
+        let msg = borsh::to_vec(&msg).unwrap();
+        // TODO: Make this fallible
+        self.message_queue.queue(metadata, msg).await;
+
+        // Commit the transaction for all of this
+        txn.commit();
+
+        // Now spawn the Tributary
+        // If we reboot after committing the txn, but before this is called, this will be called
+        // on boot
+        crate::spawn_tributary(
+          self.db.clone(),
+          self.message_queue.clone(),
+          self.p2p.clone(),
+          &self.p2p_add_tributary,
+          new_set,
+          self.serai_key.clone(),
+        )
+        .await;
+
+        made_progress = true;
+      }
+
+      Ok(made_progress)
+    }
+  }
+}
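Editor's note: the session arithmetic above can be summarized as follows (hypothetical helper, mirroring the checks in `run_iteration`): when session N's `NewSet` arrives, session N - 2 is the one whose Tributary is queued for deletion, and it must already appear in `RetiredTributary`.

```rust
// Editorial sketch: which session, if any, may be cleaned up for this NewSet.
fn cleanup_candidate(new_session: Session, retired: Option<Session>) -> Option<Session> {
  let historic = new_session.0.checked_sub(2)?;
  // Mirrors `retired.map(|session| session.0) < Some(historic)` causing a break
  if retired.map(|session| session.0) < Some(historic) {
    return None;
  }
  Some(Session(historic))
}
```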
diff --git a/coordinator/src/tributary/mod.rs b/coordinator/src/tributary.rs
similarity index 94%
rename from coordinator/src/tributary/mod.rs
rename to coordinator/src/tributary.rs
index 4ca3cbbe..4fb193b3 100644
--- a/coordinator/src/tributary/mod.rs
+++ b/coordinator/src/tributary.rs
@@ -5,7 +5,7 @@ use serai_db::{DbTxn, Db};
 
 use serai_client::validator_sets::primitives::ValidatorSet;
 
-use ::tributary::{ProvidedError, Tributary};
+use tributary_sdk::{ProvidedError, Tributary};
 
 use serai_task::{TaskHandle, ContinuallyRan};
 
@@ -13,16 +13,9 @@ use message_queue::{Service, Metadata, client::MessageQueue};
 use serai_cosign::Cosigning;
 use serai_coordinator_substrate::NewSetInformation;
+use serai_coordinator_tributary::{Transaction, ProcessorMessages};
 use serai_coordinator_p2p::P2p;
 
-mod transaction;
-pub use transaction::Transaction;
-
-mod db;
-
-mod scan;
-pub(crate) use scan::ScanTributaryTask;
-
 pub(crate) struct ScanTributaryMessagesTask<TD: Db> {
   pub(crate) tributary_db: TD,
   pub(crate) set: ValidatorSet,
@@ -35,7 +28,7 @@ impl<TD: Db> ContinuallyRan for ScanTributaryMessagesTask<TD> {
       let mut made_progress = false;
       loop {
         let mut txn = self.tributary_db.txn();
-        let Some(msg) = db::TributaryDb::try_recv_message(&mut txn, self.set) else { break };
+        let Some(msg) = ProcessorMessages::try_recv(&mut txn, self.set) else { break };
         let metadata = Metadata {
           from: Service::Coordinator,
           to: Service::Processor(self.set.network),
@@ -152,7 +145,7 @@ pub(crate) async fn run(
   // Have the tributary scanner run as soon as there's a new block
   // This is wrapped in a timeout so we don't go too long without running the above code
   match tokio::time::timeout(
-    Duration::from_millis(::tributary::tendermint::TARGET_BLOCK_TIME.into()),
+    Duration::from_millis(tributary_sdk::tendermint::TARGET_BLOCK_TIME.into()),
     tributary.next_block_notification().await,
   )
   .await
diff --git a/coordinator/src/tributary/scan.rs b/coordinator/src/tributary/scan.rs
deleted file mode 100644
index ac7fd43b..00000000
--- a/coordinator/src/tributary/scan.rs
+++ /dev/null
@@ -1,466 +0,0 @@
-use core::{marker::PhantomData, future::Future};
-use std::collections::HashMap;
-
-use
ciphersuite::group::GroupEncoding; - -use serai_client::{ - primitives::SeraiAddress, - validator_sets::primitives::{ValidatorSet, Slash}, -}; - -use tributary::{ - Signed as TributarySigned, TransactionKind, TransactionTrait, - Transaction as TributaryTransaction, Block, TributaryReader, - tendermint::{ - tx::{TendermintTx, Evidence, decode_signed_message}, - TendermintNetwork, - }, -}; - -use serai_db::*; -use serai_task::ContinuallyRan; - -use messages::sign::VariantSignId; - -use serai_cosign::Cosigning; - -use crate::{ - p2p::P2p, - tributary::{ - db::*, - transaction::{SigningProtocolRound, Signed, Transaction}, - }, -}; - -struct ScanBlock<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> { - _p2p: PhantomData

, - cosign_db: &'a CD, - tributary_txn: &'a mut TDT, - set: ValidatorSet, - validators: &'a [SeraiAddress], - total_weight: u64, - validator_weights: &'a HashMap, - tributary: &'a TributaryReader, -} -impl<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, CD, TD, TDT, P> { - fn potentially_start_cosign(&mut self) { - // Don't start a new cosigning instance if we're actively running one - if TributaryDb::actively_cosigning(self.tributary_txn, self.set).is_some() { - return; - } - - // Fetch the latest intended-to-be-cosigned block - let Some(latest_substrate_block_to_cosign) = - TributaryDb::latest_substrate_block_to_cosign(self.tributary_txn, self.set) - else { - return; - }; - - // If it was already cosigned, return - if TributaryDb::cosigned(self.tributary_txn, self.set, latest_substrate_block_to_cosign) { - return; - } - - let Some(substrate_block_number) = - Cosigning::::finalized_block_number(self.cosign_db, latest_substrate_block_to_cosign) - else { - // This is a valid panic as we shouldn't be scanning this block if we didn't provide all - // Provided transactions within it, and the block to cosign is a Provided transaction - panic!("cosigning a block our cosigner didn't index") - }; - - // Mark us as actively cosigning - TributaryDb::start_cosigning( - self.tributary_txn, - self.set, - latest_substrate_block_to_cosign, - substrate_block_number, - ); - // Send the message for the processor to start signing - TributaryDb::send_message( - self.tributary_txn, - self.set, - messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { - session: self.set.session, - block_number: substrate_block_number, - block: latest_substrate_block_to_cosign, - }, - ); - } - fn handle_application_tx(&mut self, block_number: u64, tx: Transaction) { - let signer = |signed: Signed| SeraiAddress(signed.signer.to_bytes()); - - if let TransactionKind::Signed(_, TributarySigned { signer, .. 
}) = tx.kind() { - // Don't handle transactions from those fatally slashed - // TODO: The fact they can publish these TXs makes this a notable spam vector - if TributaryDb::is_fatally_slashed( - self.tributary_txn, - self.set, - SeraiAddress(signer.to_bytes()), - ) { - return; - } - } - - match tx { - // Accumulate this vote and fatally slash the participant if past the threshold - Transaction::RemoveParticipant { participant, signed } => { - let signer = signer(signed); - - // Check the participant voted to be removed actually exists - if !self.validators.iter().any(|validator| *validator == participant) { - TributaryDb::fatal_slash( - self.tributary_txn, - self.set, - signer, - "voted to remove non-existent participant", - ); - return; - } - - match TributaryDb::accumulate( - self.tributary_txn, - self.set, - self.validators, - self.total_weight, - block_number, - Topic::RemoveParticipant { participant }, - signer, - self.validator_weights[&signer], - &(), - ) { - DataSet::None => {} - DataSet::Participating(_) => { - TributaryDb::fatal_slash(self.tributary_txn, self.set, participant, "voted to remove"); - } - }; - } - - // Send the participation to the processor - Transaction::DkgParticipation { participation, signed } => { - TributaryDb::send_message( - self.tributary_txn, - self.set, - messages::key_gen::CoordinatorMessage::Participation { - session: self.set.session, - participant: todo!("TODO"), - participation, - }, - ); - } - Transaction::DkgConfirmationPreprocess { attempt, preprocess, signed } => { - // Accumulate the preprocesses into our own FROST attempt manager - todo!("TODO") - } - Transaction::DkgConfirmationShare { attempt, share, signed } => { - // Accumulate the shares into our own FROST attempt manager - todo!("TODO") - } - - Transaction::Cosign { substrate_block_hash } => { - // Update the latest intended-to-be-cosigned Substrate block - TributaryDb::set_latest_substrate_block_to_cosign( - self.tributary_txn, - self.set, - substrate_block_hash, - ); - // Start a new cosign if we aren't already working on one - self.potentially_start_cosign(); - } - Transaction::Cosigned { substrate_block_hash } => { - /* - We provide one Cosigned per Cosign transaction, but they have independent orders. This - means we may receive Cosigned before Cosign. In order to ensure we only start work on - not-yet-Cosigned cosigns, we flag all cosigned blocks as cosigned. Then, when we choose - the next block to work on, we won't if it's already been cosigned. 
- */ - TributaryDb::mark_cosigned(self.tributary_txn, self.set, substrate_block_hash); - - // If we aren't actively cosigning this block, return - // This occurs when we have Cosign TXs A, B, C, we received Cosigned for A and start on C, - // and then receive Cosigned for B - if TributaryDb::actively_cosigning(self.tributary_txn, self.set) != - Some(substrate_block_hash) - { - return; - } - - // Since this is the block we were cosigning, mark us as having finished cosigning - TributaryDb::finish_cosigning(self.tributary_txn, self.set); - - // Start working on the next cosign - self.potentially_start_cosign(); - } - Transaction::SubstrateBlock { hash } => { - // Whitelist all of the IDs this Substrate block causes to be signed - todo!("TODO") - } - Transaction::Batch { hash } => { - // Whitelist the signing of this batch, publishing our own preprocess - todo!("TODO") - } - - Transaction::SlashReport { slash_points, signed } => { - let signer = signer(signed); - - if slash_points.len() != self.validators.len() { - TributaryDb::fatal_slash( - self.tributary_txn, - self.set, - signer, - "slash report was for a distinct amount of signers", - ); - return; - } - - // Accumulate, and if past the threshold, calculate *the* slash report and start signing it - match TributaryDb::accumulate( - self.tributary_txn, - self.set, - self.validators, - self.total_weight, - block_number, - Topic::SlashReport, - signer, - self.validator_weights[&signer], - &slash_points, - ) { - DataSet::None => {} - DataSet::Participating(data_set) => { - // Find the median reported slashes for this validator - /* - TODO: This lets 34% perform a fatal slash. That shouldn't be allowed. We need - to accept slash reports for a period past the threshold, and only fatally slash if we - have a supermajority agree the slash should be fatal. If there isn't a supermajority, - but the median believe the slash should be fatal, we need to fallback to a large - constant. - - Also, TODO, each slash point should probably be considered as - `MAX_KEY_SHARES_PER_SET * BLOCK_TIME` seconds of downtime. As this time crosses - various thresholds (1 day, 3 days, etc), a multiplier should be attached. - */ - let mut median_slash_report = Vec::with_capacity(self.validators.len()); - for i in 0 .. 
self.validators.len() { - let mut this_validator = - data_set.values().map(|report| report[i]).collect::>(); - this_validator.sort_unstable(); - // Choose the median, where if there are two median values, the lower one is chosen - let median_index = if (this_validator.len() % 2) == 1 { - this_validator.len() / 2 - } else { - (this_validator.len() / 2) - 1 - }; - median_slash_report.push(this_validator[median_index]); - } - - // We only publish slashes for the `f` worst performers to: - // 1) Effect amnesty if there were network disruptions which affected everyone - // 2) Ensure the signing threshold doesn't have a disincentive to do their job - - // Find the worst performer within the signing threshold's slash points - let f = (self.validators.len() - 1) / 3; - let worst_validator_in_supermajority_slash_points = { - let mut sorted_slash_points = median_slash_report.clone(); - sorted_slash_points.sort_unstable(); - // This won't be a valid index if `f == 0`, which means we don't have any validators - // to slash - let index_of_first_validator_to_slash = self.validators.len() - f; - let index_of_worst_validator_in_supermajority = index_of_first_validator_to_slash - 1; - sorted_slash_points[index_of_worst_validator_in_supermajority] - }; - - // Perform the amortization - for slash_points in &mut median_slash_report { - *slash_points = - slash_points.saturating_sub(worst_validator_in_supermajority_slash_points) - } - let amortized_slash_report = median_slash_report; - - // Create the resulting slash report - let mut slash_report = vec![]; - for (validator, points) in self.validators.iter().copied().zip(amortized_slash_report) { - if points != 0 { - slash_report.push(Slash { key: validator.into(), points }); - } - } - assert!(slash_report.len() <= f); - - // Recognize the topic for signing the slash report - TributaryDb::recognize_topic( - self.tributary_txn, - self.set, - Topic::Sign { - id: VariantSignId::SlashReport, - attempt: 0, - round: SigningProtocolRound::Preprocess, - }, - ); - // Send the message for the processor to start signing - TributaryDb::send_message( - self.tributary_txn, - self.set, - messages::coordinator::CoordinatorMessage::SignSlashReport { - session: self.set.session, - report: slash_report, - }, - ); - } - }; - } - - Transaction::Sign { id, attempt, round, data, signed } => { - let topic = Topic::Sign { id, attempt, round }; - let signer = signer(signed); - - if u64::try_from(data.len()).unwrap() != self.validator_weights[&signer] { - TributaryDb::fatal_slash( - self.tributary_txn, - self.set, - signer, - "signer signed with a distinct amount of key shares than they had key shares", - ); - return; - } - - match TributaryDb::accumulate( - self.tributary_txn, - self.set, - self.validators, - self.total_weight, - block_number, - topic, - signer, - self.validator_weights[&signer], - &data, - ) { - DataSet::None => {} - DataSet::Participating(data_set) => { - let id = topic.sign_id(self.set).expect("Topic::Sign didn't have SignId"); - let flatten_data_set = |data_set| todo!("TODO"); - let data_set = flatten_data_set(data_set); - TributaryDb::send_message( - self.tributary_txn, - self.set, - match round { - SigningProtocolRound::Preprocess => { - messages::sign::CoordinatorMessage::Preprocesses { id, preprocesses: data_set } - } - SigningProtocolRound::Share => { - messages::sign::CoordinatorMessage::Shares { id, shares: data_set } - } - }, - ) - } - }; - } - } - } - - fn handle_block(mut self, block_number: u64, block: Block) { - 
TributaryDb::start_of_block(self.tributary_txn, self.set, block_number); - - for tx in block.transactions { - match tx { - TributaryTransaction::Tendermint(TendermintTx::SlashEvidence(ev)) => { - // Since the evidence is on the chain, it will have already been validated - // We can just punish the signer - let data = match ev { - Evidence::ConflictingMessages(first, second) => (first, Some(second)), - Evidence::InvalidPrecommit(first) | Evidence::InvalidValidRound(first) => (first, None), - }; - let msgs = ( - decode_signed_message::>(&data.0).unwrap(), - if data.1.is_some() { - Some( - decode_signed_message::>(&data.1.unwrap()) - .unwrap(), - ) - } else { - None - }, - ); - - // Since anything with evidence is fundamentally faulty behavior, not just temporal - // errors, mark the node as fatally slashed - TributaryDb::fatal_slash( - self.tributary_txn, - self.set, - SeraiAddress(msgs.0.msg.sender), - &format!("invalid tendermint messages: {msgs:?}"), - ); - } - TributaryTransaction::Application(tx) => { - self.handle_application_tx(block_number, tx); - } - } - } - } -} - -pub(crate) struct ScanTributaryTask { - pub(crate) cosign_db: CD, - pub(crate) tributary_db: TD, - pub(crate) set: ValidatorSet, - pub(crate) validators: Vec, - pub(crate) total_weight: u64, - pub(crate) validator_weights: HashMap, - pub(crate) tributary: TributaryReader, - pub(crate) _p2p: PhantomData

, -} -impl ContinuallyRan for ScanTributaryTask { - fn run_iteration(&mut self) -> impl Send + Future> { - async move { - let (mut last_block_number, mut last_block_hash) = - TributaryDb::last_handled_tributary_block(&self.tributary_db, self.set) - .unwrap_or((0, self.tributary.genesis())); - - let mut made_progress = false; - while let Some(next) = self.tributary.block_after(&last_block_hash) { - let block = self.tributary.block(&next).unwrap(); - let block_number = last_block_number + 1; - let block_hash = block.hash(); - - // Make sure we have all of the provided transactions for this block - for tx in &block.transactions { - let TransactionKind::Provided(order) = tx.kind() else { - continue; - }; - - // make sure we have all the provided txs in this block locally - if !self.tributary.locally_provided_txs_in_block(&block_hash, order) { - return Err(format!( - "didn't have the provided Transactions on-chain for set (ephemeral error): {:?}", - self.set - )); - } - } - - let mut tributary_txn = self.tributary_db.txn(); - (ScanBlock { - _p2p: PhantomData::

, - cosign_db: &self.cosign_db, - tributary_txn: &mut tributary_txn, - set: self.set, - validators: &self.validators, - total_weight: self.total_weight, - validator_weights: &self.validator_weights, - tributary: &self.tributary, - }) - .handle_block(block_number, block); - TributaryDb::set_last_handled_tributary_block( - &mut tributary_txn, - self.set, - block_number, - block_hash, - ); - last_block_number = block_number; - last_block_hash = block_hash; - tributary_txn.commit(); - - made_progress = true; - } - - Ok(made_progress) - } - } -} diff --git a/coordinator/src/tributary/transaction.rs b/coordinator/src/tributary/transaction.rs deleted file mode 100644 index 34528cb9..00000000 --- a/coordinator/src/tributary/transaction.rs +++ /dev/null @@ -1,340 +0,0 @@ -use core::{ops::Deref, fmt::Debug}; -use std::io; - -use zeroize::Zeroizing; -use rand_core::{RngCore, CryptoRng}; - -use blake2::{digest::typenum::U32, Digest, Blake2b}; -use ciphersuite::{ - group::{ff::Field, GroupEncoding}, - Ciphersuite, Ristretto, -}; -use schnorr::SchnorrSignature; - -use scale::Encode; -use borsh::{BorshSerialize, BorshDeserialize}; - -use serai_client::{primitives::SeraiAddress, validator_sets::primitives::MAX_KEY_SHARES_PER_SET}; - -use messages::sign::VariantSignId; - -use tributary::{ - ReadWrite, - transaction::{ - Signed as TributarySigned, TransactionError, TransactionKind, Transaction as TransactionTrait, - }, -}; - -/// The round this data is for, within a signing protocol. -#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)] -pub enum SigningProtocolRound { - /// A preprocess. - Preprocess, - /// A signature share. - Share, -} - -impl SigningProtocolRound { - fn nonce(&self) -> u32 { - match self { - SigningProtocolRound::Preprocess => 0, - SigningProtocolRound::Share => 1, - } - } -} - -/// `tributary::Signed` but without the nonce. -/// -/// All of our nonces are deterministic to the type of transaction and fields within. -#[derive(Clone, Copy, PartialEq, Eq, Debug)] -pub struct Signed { - /// The signer. - pub signer: ::G, - /// The signature. - pub signature: SchnorrSignature, -} - -impl BorshSerialize for Signed { - fn serialize(&self, writer: &mut W) -> Result<(), io::Error> { - writer.write_all(self.signer.to_bytes().as_ref())?; - self.signature.write(writer) - } -} -impl BorshDeserialize for Signed { - fn deserialize_reader(reader: &mut R) -> Result { - let signer = Ristretto::read_G(reader)?; - let signature = SchnorrSignature::read(reader)?; - Ok(Self { signer, signature }) - } -} - -impl Signed { - /// Provide a nonce to convert a `Signed` into a `tributary::Signed`. 
- fn nonce(&self, nonce: u32) -> TributarySigned { - TributarySigned { signer: self.signer, nonce, signature: self.signature } - } -} - -/// The Tributary transaction definition used by Serai -#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] -pub enum Transaction { - /// A vote to remove a participant for invalid behavior - RemoveParticipant { - /// The participant to remove - participant: SeraiAddress, - /// The transaction's signer and signature - signed: Signed, - }, - - /// A participation in the DKG - DkgParticipation { - participation: Vec, - /// The transaction's signer and signature - signed: Signed, - }, - /// The preprocess to confirm the DKG results on-chain - DkgConfirmationPreprocess { - /// The attempt number of this signing protocol - attempt: u32, - // The preprocess - preprocess: [u8; 64], - /// The transaction's signer and signature - signed: Signed, - }, - /// The signature share to confirm the DKG results on-chain - DkgConfirmationShare { - /// The attempt number of this signing protocol - attempt: u32, - // The signature share - share: [u8; 32], - /// The transaction's signer and signature - signed: Signed, - }, - - /// Intend to co-sign a finalized Substrate block - /// - /// When the time comes to start a new co-signing protocol, the most recent Substrate block will - /// be the one selected to be cosigned. - Cosign { - /// The hash of the Substrate block to sign - substrate_block_hash: [u8; 32], - }, - - /// The cosign for a Substrate block - /// - /// After producing this cosign, we need to start work on the latest intended-to-be cosigned - /// block. That requires agreement on when this cosign was produced, which we solve by embedding - /// this cosign on chain. - /// - /// We ideally don't have this transaction at all. The coordinator, without access to any of the - /// key shares, could observe the FROST signing session and determine a successful completion. - /// Unfortunately, that functionality is not present in modular-frost, so we do need to support - /// *some* asynchronous flow (where the processor or P2P network informs us of the successful - /// completion). - /// - /// If we use a `Provided` transaction, that requires everyone observe this cosign. - /// - /// If we use an `Unsigned` transaction, we can't verify the cosign signature inside - /// `Transaction::verify` unless we embedded the full `SignedCosign` on-chain. The issue is since - /// a Tributary is stateless with regards to the on-chain logic, including `Transaction::verify`, - /// we can't verify the signature against the group's public key unless we also include that (but - /// then we open a DoS where arbitrary group keys are specified to cause inclusion of arbitrary - /// blobs on chain). - /// - /// If we use a `Signed` transaction, we mitigate the DoS risk by having someone to fatally - /// slash. We have horrible performance though as for 100 validators, all 100 will publish this - /// transaction. - /// - /// We could use a signed `Unsigned` transaction, where it includes a signer and signature but - /// isn't technically a Signed transaction. This lets us de-duplicate the transaction premised on - /// its contents. - /// - /// The optimal choice is likely to use a `Provided` transaction. We don't actually need to - /// observe the produced cosign (which is ephemeral). 
As long as it's agreed the cosign in - /// question no longer needs to produced, which would mean the cosigning protocol at-large - /// cosigning the block in question, it'd be safe to provide this and move on to the next cosign. - Cosigned { substrate_block_hash: [u8; 32] }, - - /// Acknowledge a Substrate block - /// - /// This is provided after the block has been cosigned. - /// - /// With the acknowledgement of a Substrate block, we can whitelist all the `VariantSignId`s - /// resulting from its handling. - SubstrateBlock { - /// The hash of the Substrate block - hash: [u8; 32], - }, - - /// Acknowledge a Batch - /// - /// Once everyone has acknowledged the Batch, we can begin signing it. - Batch { - /// The hash of the Batch's serialization. - /// - /// Generally, we refer to a Batch by its ID/the hash of its instructions. Here, we want to - /// ensure consensus on the Batch, and achieving consensus on its hash is the most effective - /// way to do that. - hash: [u8; 32], - }, - - /// Data from a signing protocol. - Sign { - /// The ID of the object being signed - id: VariantSignId, - /// The attempt number of this signing protocol - attempt: u32, - /// The round this data is for, within the signing protocol - round: SigningProtocolRound, - /// The data itself - /// - /// There will be `n` blobs of data where `n` is the amount of key shares the validator sending - /// this transaction has. - data: Vec>, - /// The transaction's signer and signature - signed: Signed, - }, - - /// The local view of slashes observed by the transaction's sender - SlashReport { - /// The slash points accrued by each validator - slash_points: Vec, - /// The transaction's signer and signature - signed: Signed, - }, -} - -impl ReadWrite for Transaction { - fn read(reader: &mut R) -> io::Result { - borsh::from_reader(reader) - } - - fn write(&self, writer: &mut W) -> io::Result<()> { - borsh::to_writer(writer, self) - } -} - -impl TransactionTrait for Transaction { - fn kind(&self) -> TransactionKind { - match self { - Transaction::RemoveParticipant { participant, signed } => { - TransactionKind::Signed((b"RemoveParticipant", participant).encode(), signed.nonce(0)) - } - - Transaction::DkgParticipation { signed, .. } => { - TransactionKind::Signed(b"DkgParticipation".encode(), signed.nonce(0)) - } - Transaction::DkgConfirmationPreprocess { attempt, signed, .. } => { - TransactionKind::Signed((b"DkgConfirmation", attempt).encode(), signed.nonce(0)) - } - Transaction::DkgConfirmationShare { attempt, signed, .. } => { - TransactionKind::Signed((b"DkgConfirmation", attempt).encode(), signed.nonce(1)) - } - - Transaction::Cosign { .. } => TransactionKind::Provided("Cosign"), - Transaction::Cosigned { .. } => TransactionKind::Provided("Cosigned"), - // TODO: Provide this - Transaction::SubstrateBlock { .. } => TransactionKind::Provided("SubstrateBlock"), - // TODO: Provide this - Transaction::Batch { .. } => TransactionKind::Provided("Batch"), - - Transaction::Sign { id, attempt, round, signed, .. } => { - TransactionKind::Signed((b"Sign", id, attempt).encode(), signed.nonce(round.nonce())) - } - - Transaction::SlashReport { signed, .. 
} => { - TransactionKind::Signed(b"SlashReport".encode(), signed.nonce(0)) - } - } - } - - fn hash(&self) -> [u8; 32] { - let mut tx = ReadWrite::serialize(self); - if let TransactionKind::Signed(_, signed) = self.kind() { - // Make sure the part we're cutting off is the signature - assert_eq!(tx.drain((tx.len() - 64) ..).collect::>(), signed.signature.serialize()); - } - Blake2b::::digest(&tx).into() - } - - // This is a stateless verification which we use to enforce some size limits. - fn verify(&self) -> Result<(), TransactionError> { - #[allow(clippy::match_same_arms)] - match self { - // Fixed-length TX - Transaction::RemoveParticipant { .. } => {} - - // TODO: MAX_DKG_PARTICIPATION_LEN - Transaction::DkgParticipation { .. } => {} - // These are fixed-length TXs - Transaction::DkgConfirmationPreprocess { .. } | Transaction::DkgConfirmationShare { .. } => {} - - // Provided TXs - Transaction::Cosign { .. } | - Transaction::Cosigned { .. } | - Transaction::SubstrateBlock { .. } | - Transaction::Batch { .. } => {} - - Transaction::Sign { data, .. } => { - if data.len() > usize::try_from(MAX_KEY_SHARES_PER_SET).unwrap() { - Err(TransactionError::InvalidContent)? - } - // TODO: MAX_SIGN_LEN - } - - Transaction::SlashReport { slash_points, .. } => { - if slash_points.len() > usize::try_from(MAX_KEY_SHARES_PER_SET).unwrap() { - Err(TransactionError::InvalidContent)? - } - } - }; - Ok(()) - } -} - -impl Transaction { - // Sign a transaction - // - // Panics if signing a transaction type which isn't `TransactionKind::Signed` - pub fn sign( - &mut self, - rng: &mut R, - genesis: [u8; 32], - key: &Zeroizing<::F>, - ) { - fn signed(tx: &mut Transaction) -> &mut Signed { - #[allow(clippy::match_same_arms)] // This doesn't make semantic sense here - match tx { - Transaction::RemoveParticipant { ref mut signed, .. } | - Transaction::DkgParticipation { ref mut signed, .. } | - Transaction::DkgConfirmationPreprocess { ref mut signed, .. } => signed, - Transaction::DkgConfirmationShare { ref mut signed, .. } => signed, - - Transaction::Cosign { .. } => panic!("signing CosignSubstrateBlock"), - Transaction::Cosigned { .. } => panic!("signing Cosigned"), - Transaction::SubstrateBlock { .. } => panic!("signing SubstrateBlock"), - Transaction::Batch { .. } => panic!("signing Batch"), - - Transaction::Sign { ref mut signed, .. } => signed, - - Transaction::SlashReport { ref mut signed, .. 
} => signed, - } - } - - // Decide the nonce to sign with - let sig_nonce = Zeroizing::new(::F::random(rng)); - - { - // Set the signer and the nonce - let signed = signed(self); - signed.signer = Ristretto::generator() * key.deref(); - signed.signature.R = ::generator() * sig_nonce.deref(); - } - - // Get the signature hash (which now includes `R || A` making it valid as the challenge) - let sig_hash = self.sig_hash(genesis); - - // Sign the signature - signed(self).signature = SchnorrSignature::::sign(key, sig_nonce, sig_hash); - } -} diff --git a/coordinator/tributary-sdk/Cargo.toml b/coordinator/tributary-sdk/Cargo.toml new file mode 100644 index 00000000..be72ff0c --- /dev/null +++ b/coordinator/tributary-sdk/Cargo.toml @@ -0,0 +1,49 @@ +[package] +name = "tributary-sdk" +version = "0.1.0" +description = "A micro-blockchain to provide consensus and ordering to P2P communication" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tributary-sdk" +authors = ["Luke Parker "] +edition = "2021" +rust-version = "1.81" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[lints] +workspace = true + +[dependencies] +thiserror = { version = "2", default-features = false, features = ["std"] } + +subtle = { version = "^2", default-features = false, features = ["std"] } +zeroize = { version = "^1.5", default-features = false, features = ["std"] } + +rand = { version = "0.8", default-features = false, features = ["std"] } +rand_chacha = { version = "0.3", default-features = false, features = ["std"] } + +blake2 = { version = "0.10", default-features = false, features = ["std"] } +transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } + +ciphersuite = { package = "ciphersuite", path = "../../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] } +schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] } + +hex = { version = "0.4", default-features = false, features = ["std"] } +log = { version = "0.4", default-features = false, features = ["std"] } + +serai-db = { path = "../../common/db" } + +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } +futures-util = { version = "0.3", default-features = false, features = ["std", "sink", "channel"] } +futures-channel = { version = "0.3", default-features = false, features = ["std", "sink"] } +tendermint = { package = "tendermint-machine", path = "./tendermint" } + +tokio = { version = "1", default-features = false, features = ["sync", "time", "rt"] } + +[dev-dependencies] +tokio = { version = "1", features = ["macros"] } + +[features] +tests = [] diff --git a/coordinator/tributary-sdk/LICENSE b/coordinator/tributary-sdk/LICENSE new file mode 100644 index 00000000..f684d027 --- /dev/null +++ b/coordinator/tributary-sdk/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2023 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. 
+
+You should have received a copy of the GNU Affero General Public License
+along with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/coordinator/tributary-sdk/README.md b/coordinator/tributary-sdk/README.md
new file mode 100644
index 00000000..6fce976e
--- /dev/null
+++ b/coordinator/tributary-sdk/README.md
@@ -0,0 +1,3 @@
+# Tributary
+
+A verifiable, ordered broadcast layer implemented as a BFT micro-blockchain.
diff --git a/coordinator/tributary/src/block.rs b/coordinator/tributary-sdk/src/block.rs
similarity index 100%
rename from coordinator/tributary/src/block.rs
rename to coordinator/tributary-sdk/src/block.rs
diff --git a/coordinator/tributary/src/blockchain.rs b/coordinator/tributary-sdk/src/blockchain.rs
similarity index 100%
rename from coordinator/tributary/src/blockchain.rs
rename to coordinator/tributary-sdk/src/blockchain.rs
diff --git a/coordinator/tributary-sdk/src/lib.rs b/coordinator/tributary-sdk/src/lib.rs
new file mode 100644
index 00000000..2e4a6115
--- /dev/null
+++ b/coordinator/tributary-sdk/src/lib.rs
@@ -0,0 +1,388 @@
+use core::{marker::PhantomData, fmt::Debug, future::Future};
+use std::{sync::Arc, io};
+
+use zeroize::Zeroizing;
+
+use ciphersuite::{Ciphersuite, Ristretto};
+
+use scale::Decode;
+use futures_channel::mpsc::UnboundedReceiver;
+use futures_util::{StreamExt, SinkExt};
+use ::tendermint::{
+  ext::{BlockNumber, Commit, Block as BlockTrait, Network},
+  SignedMessageFor, SyncedBlock, SyncedBlockSender, SyncedBlockResultReceiver, MessageSender,
+  TendermintMachine, TendermintHandle,
+};
+
+pub use ::tendermint::Evidence;
+
+use serai_db::Db;
+
+use tokio::sync::RwLock;
+
+mod merkle;
+pub(crate) use merkle::*;
+
+pub mod transaction;
+pub use transaction::{TransactionError, Signed, TransactionKind, Transaction as TransactionTrait};
+
+use crate::tendermint::tx::TendermintTx;
+
+mod provided;
+pub(crate) use provided::*;
+pub use provided::ProvidedError;
+
+mod block;
+pub use block::*;
+
+mod blockchain;
+pub(crate) use blockchain::*;
+
+mod mempool;
+pub(crate) use mempool::*;
+
+pub mod tendermint;
+pub(crate) use crate::tendermint::*;
+
+#[cfg(any(test, feature = "tests"))]
+pub mod tests;
+
+/// Size limit for an individual transaction.
+// This needs to be big enough to participate in a 101-of-150 eVRF DKG with each element taking
+// `MAX_KEY_LEN`. This also needs to be big enough to participate in signing 520 Bitcoin inputs
+// with 49 key shares, and signing 120 Monero inputs with 49 key shares.
+// TODO: Add a test for these properties
+pub const TRANSACTION_SIZE_LIMIT: usize = 2_000_000;
+/// Amount of transactions a single account may have in the mempool.
+pub const ACCOUNT_MEMPOOL_LIMIT: u32 = 50;
+/// Block size limit.
+// This targets a growth limit of roughly 30 GB a day, under load, in order to prevent a malicious
+// participant from flooding disks and causing out of space errors in other processes.
+pub const BLOCK_SIZE_LIMIT: usize = 2_001_000;
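Editor's note: a quick arithmetic check of these limits (editorial, not in the patch). At roughly one full 2_001_000-byte block every six seconds, 14_400 blocks a day is about 28.8 GB, consistent with the stated ~30 GB/day target, and a block retains 1_000 bytes of headroom beyond a maximal transaction.

```rust
// Editorial sanity check: a block must fit one maximally-sized transaction
// plus header/commit overhead (1_000 bytes of headroom here).
const _: () = assert!(BLOCK_SIZE_LIMIT >= TRANSACTION_SIZE_LIMIT + 1_000);
```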
+
+pub(crate) const TENDERMINT_MESSAGE: u8 = 0;
+pub(crate) const TRANSACTION_MESSAGE: u8 = 1;
+
+#[allow(clippy::large_enum_variant)]
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub enum Transaction<T: TransactionTrait> {
+  Tendermint(TendermintTx),
+  Application(T),
+}
+
+impl<T: TransactionTrait> ReadWrite for Transaction<T> {
+  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
+    let mut kind = [0];
+    reader.read_exact(&mut kind)?;
+    match kind[0] {
+      0 => {
+        let tx = TendermintTx::read(reader)?;
+        Ok(Transaction::Tendermint(tx))
+      }
+      1 => {
+        let tx = T::read(reader)?;
+        Ok(Transaction::Application(tx))
+      }
+      _ => Err(io::Error::other("invalid transaction type")),
+    }
+  }
+  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
+    match self {
+      Transaction::Tendermint(tx) => {
+        writer.write_all(&[0])?;
+        tx.write(writer)
+      }
+      Transaction::Application(tx) => {
+        writer.write_all(&[1])?;
+        tx.write(writer)
+      }
+    }
+  }
+}
+
+impl<T: TransactionTrait> Transaction<T> {
+  pub fn hash(&self) -> [u8; 32] {
+    match self {
+      Transaction::Tendermint(tx) => tx.hash(),
+      Transaction::Application(tx) => tx.hash(),
+    }
+  }
+
+  pub fn kind(&self) -> TransactionKind {
+    match self {
+      Transaction::Tendermint(tx) => tx.kind(),
+      Transaction::Application(tx) => tx.kind(),
+    }
+  }
+}
+
+/// An item which can be read and written.
+pub trait ReadWrite: Sized {
+  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
+  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()>;
+
+  fn serialize(&self) -> Vec<u8> {
+    // BlockHeader is 64 bytes and likely the smallest item in this system
+    let mut buf = Vec::with_capacity(64);
+    self.write(&mut buf).unwrap();
+    buf
+  }
+}
+
+pub trait P2p: 'static + Send + Sync + Clone {
+  /// Broadcast a message to all other members of the Tributary with the specified genesis.
+  ///
+  /// The Tributary will re-broadcast consensus messages on a fixed interval to ensure they aren't
+  /// prematurely dropped from the P2P layer. The P2P layer SHOULD perform content-based
+  /// deduplication to ensure a sane amount of load.
+  fn broadcast(&self, genesis: [u8; 32], msg: Vec<u8>) -> impl Send + Future<Output = ()>;
+}
+
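Editor's note: for illustration, the minimal shape of a `P2p` implementor under the signature as reconstructed above (a no-op stub such as a test might use); a real implementation should content-deduplicate, per the trait documentation.

```rust
// Editorial sketch, not part of the patch.
#[derive(Clone)]
struct NoopP2p;
impl P2p for NoopP2p {
  fn broadcast(&self, _genesis: [u8; 32], _msg: Vec<u8>) -> impl Send + Future<Output = ()> {
    async {}
  }
}
```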

+impl<P: P2p> P2p for Arc<P> {
+  fn broadcast(&self, genesis: [u8; 32], msg: Vec<u8>) -> impl Send + Future<Output = ()> {
+    P::broadcast(self, genesis, msg)
+  }
+}
+
+#[derive(Clone)]
+pub struct Tributary<D: Db, T: TransactionTrait, P: P2p> {
+  db: D,
+
+  genesis: [u8; 32],
+  network: TendermintNetwork<D, T, P>,
+
+  synced_block: Arc<RwLock<SyncedBlockSender<TendermintNetwork<D, T, P>>>>,
+  synced_block_result: Arc<RwLock<SyncedBlockResultReceiver>>,
+  messages: Arc<RwLock<MessageSender<TendermintNetwork<D, T, P>>>>,
+}
+
+impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
+  pub async fn new(
+    db: D,
+    genesis: [u8; 32],
+    start_time: u64,
+    key: Zeroizing<<Ristretto as Ciphersuite>::F>,
+    validators: Vec<(<Ristretto as Ciphersuite>::G, u64)>,
+    p2p: P,
+  ) -> Option<Self> {
+    log::info!("new Tributary with genesis {}", hex::encode(genesis));
+
+    let validators_vec = validators.iter().map(|validator| validator.0).collect::<Vec<_>>();
+
+    let signer = Arc::new(Signer::new(genesis, key));
+    let validators = Arc::new(Validators::new(genesis, validators)?);
+
+    let mut blockchain = Blockchain::new(db.clone(), genesis, &validators_vec);
+    let block_number = BlockNumber(blockchain.block_number());
+
+    let start_time = if let Some(commit) = blockchain.commit(&blockchain.tip()) {
+      Commit::<Validators>::decode(&mut commit.as_ref()).unwrap().end_time
+    } else {
+      start_time
+    };
+    let proposal = TendermintBlock(
+      blockchain.build_block::<TendermintNetwork<D, T, P>>(&validators).serialize(),
+    );
+    let blockchain = Arc::new(RwLock::new(blockchain));
+
+    let network = TendermintNetwork { genesis, signer, validators, blockchain, p2p };
+
+    let TendermintHandle { synced_block, synced_block_result, messages, machine } =
+      TendermintMachine::new(
+        db.clone(),
+        network.clone(),
+        genesis,
+        block_number,
+        start_time,
+        proposal,
+      )
+      .await;
+    tokio::spawn(machine.run());
+
+    Some(Self {
+      db,
+      genesis,
+      network,
+      synced_block: Arc::new(RwLock::new(synced_block)),
+      synced_block_result: Arc::new(RwLock::new(synced_block_result)),
+      messages: Arc::new(RwLock::new(messages)),
+    })
+  }
+
+  pub fn block_time() -> u32 {
+    TendermintNetwork::<D, T, P>::block_time()
+  }
+
+  pub fn genesis(&self) -> [u8; 32] {
+    self.genesis
+  }
+
+  pub async fn block_number(&self) -> u64 {
+    self.network.blockchain.read().await.block_number()
+  }
+  pub async fn tip(&self) -> [u8; 32] {
+    self.network.blockchain.read().await.tip()
+  }
+
+  pub fn reader(&self) -> TributaryReader<D, T> {
+    TributaryReader(self.db.clone(), self.genesis, PhantomData)
+  }
+
+  pub async fn provide_transaction(&self, tx: T) -> Result<(), ProvidedError> {
+    self.network.blockchain.write().await.provide_transaction(tx)
+  }
+
+  pub async fn next_nonce(
+    &self,
+    signer: &<Ristretto as Ciphersuite>::G,
+    order: &[u8],
+  ) -> Option<u32> {
+    self.network.blockchain.read().await.next_nonce(signer, order)
+  }
+
+  // Returns Ok(true) if new, Ok(false) if an already present unsigned, or the error.
+  // Safe to be &self since the only meaningful usage of self is self.network.blockchain which
+  // successfully acquires its own write lock
+  pub async fn add_transaction(&self, tx: T) -> Result<bool, TransactionError> {
+    let tx = Transaction::Application(tx);
+    let mut to_broadcast = vec![TRANSACTION_MESSAGE];
+    tx.write(&mut to_broadcast).unwrap();
+    let res = self.network.blockchain.write().await.add_transaction::<TendermintNetwork<D, T, P>>(
+      true,
+      tx,
+      &self.network.signature_scheme(),
+    );
+    if res == Ok(true) {
+      self.network.p2p.broadcast(self.genesis, to_broadcast).await;
+    }
+    res
+  }
+
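Editor's note: a hypothetical usage sketch tying `next_nonce` to `add_transaction`; `build_signed_tx` is an assumed stand-in for however the application signs its transaction with the obtained nonce.

```rust
// Editorial sketch, not part of the patch.
async fn publish<D: Db, T: TransactionTrait, P: P2p>(
  tributary: &Tributary<D, T, P>,
  signer: &<Ristretto as Ciphersuite>::G,
  order: &[u8],
  build_signed_tx: impl FnOnce(u32) -> T,
) -> Result<bool, TransactionError> {
  // `None` means the signer isn't a participant of this Tributary
  let nonce = tributary.next_nonce(signer, order).await.ok_or(TransactionError::InvalidSigner)?;
  tributary.add_transaction(build_signed_tx(nonce)).await
}
```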
+  async fn sync_block_internal(
+    &self,
+    block: Block<T>,
+    commit: Vec<u8>,
+    result: &mut UnboundedReceiver<bool>,
+  ) -> bool {
+    let (tip, block_number) = {
+      let blockchain = self.network.blockchain.read().await;
+      (blockchain.tip(), blockchain.block_number())
+    };
+
+    if block.header.parent != tip {
+      log::debug!("told to sync a block whose parent wasn't our tip");
+      return false;
+    }
+
+    let block = TendermintBlock(block.serialize());
+    let mut commit_ref = commit.as_ref();
+    let Ok(commit) = Commit::<Arc<Validators>>::decode(&mut commit_ref) else {
+      log::error!("sent an invalidly serialized commit");
+      return false;
+    };
+    // Storage DoS vector. We *could* truncate to solely the relevant portion, trying to save this,
+    // yet then we'd have to test the truncation was performed correctly.
+    if !commit_ref.is_empty() {
+      log::error!("sent a commit with additional data after it");
+      return false;
+    }
+    if !self.network.verify_commit(block.id(), &commit) {
+      log::error!("sent an invalid commit");
+      return false;
+    }
+
+    let number = BlockNumber(block_number + 1);
+    self.synced_block.write().await.send(SyncedBlock { number, block, commit }).await.unwrap();
+    result.next().await.unwrap()
+  }
+
+  // Sync a block.
+  // TODO: Since we have a static validator set, we should only need the tail commit?
+  pub async fn sync_block(&self, block: Block<T>, commit: Vec<u8>) -> bool {
+    let mut result = self.synced_block_result.write().await;
+    self.sync_block_internal(block, commit, &mut result).await
+  }
+
+  // Return true if the message should be rebroadcasted.
+  pub async fn handle_message(&self, msg: &[u8]) -> bool {
+    match msg.first() {
+      Some(&TRANSACTION_MESSAGE) => {
+        let Ok(tx) = Transaction::read::<&[u8]>(&mut &msg[1 ..]) else {
+          log::error!("received invalid transaction message");
+          return false;
+        };
+
+        // TODO: Sync mempools with fellow peers
+        // Can we just rebroadcast transactions not included for at least two blocks?
+        let res =
+          self.network.blockchain.write().await.add_transaction::<TendermintNetwork<D, T, P>>(
+            false,
+            tx,
+            &self.network.signature_scheme(),
+          );
+        log::debug!("received transaction message. valid new transaction: {res:?}");
+        res == Ok(true)
+      }
+
+      Some(&TENDERMINT_MESSAGE) => {
+        let Ok(msg) =
+          SignedMessageFor::<TendermintNetwork<D, T, P>>::decode::<&[u8]>(&mut &msg[1 ..])
+        else {
+          log::error!("received invalid tendermint message");
+          return false;
+        };
+
+        self.messages.write().await.send(msg).await.unwrap();
+        false
+      }
+
+      _ => false,
+    }
+  }
+
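Editor's note: a sketch of the wire framing `handle_message` consumes, mirroring `add_transaction`'s broadcast path above: one tag byte (`TENDERMINT_MESSAGE` = 0, `TRANSACTION_MESSAGE` = 1) followed by the payload.

```rust
// Editorial sketch, not part of the patch.
fn frame_transaction<T: TransactionTrait>(tx: T) -> Vec<u8> {
  let mut msg = vec![TRANSACTION_MESSAGE];
  // The `Transaction` enum then prepends its own variant byte (1 = Application)
  Transaction::Application(tx).write(&mut msg).unwrap();
  msg
}
```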
+  /// Get a Future which will resolve once the next block has been added.
+  pub async fn next_block_notification(
+    &self,
+  ) -> impl Send + Sync + core::future::Future<Output = Result<[u8; 32], tokio::sync::oneshot::error::RecvError>> {
+    let (tx, rx) = tokio::sync::oneshot::channel();
+    self.network.blockchain.write().await.next_block_notifications.push_back(tx);
+    rx
+  }
+}
+
+#[derive(Clone)]
+pub struct TributaryReader<D: Db, T: TransactionTrait>(D, [u8; 32], PhantomData<T>);
+impl<D: Db, T: TransactionTrait> TributaryReader<D, T> {
+  pub fn genesis(&self) -> [u8; 32] {
+    self.1
+  }
+
+  // Since these values are static once set, they can be safely read from the database without lock
+  // acquisition
+  pub fn block(&self, hash: &[u8; 32]) -> Option<Block<T>> {
+    Blockchain::<D, T>::block_from_db(&self.0, self.1, hash)
+  }
+  pub fn commit(&self, hash: &[u8; 32]) -> Option<Vec<u8>> {
+    Blockchain::<D, T>::commit_from_db(&self.0, self.1, hash)
+  }
+  pub fn parsed_commit(&self, hash: &[u8; 32]) -> Option<Commit<Validators>> {
+    self.commit(hash).map(|commit| Commit::<Validators>::decode(&mut commit.as_ref()).unwrap())
+  }
+  pub fn block_after(&self, hash: &[u8; 32]) -> Option<[u8; 32]> {
+    Blockchain::<D, T>::block_after(&self.0, self.1, hash)
+  }
+  pub fn time_of_block(&self, hash: &[u8; 32]) -> Option<u64> {
+    self
+      .commit(hash)
+      .map(|commit| Commit::<Validators>::decode(&mut commit.as_ref()).unwrap().end_time)
+  }
+
+  pub fn locally_provided_txs_in_block(&self, hash: &[u8; 32], order: &str) -> bool {
+    Blockchain::<D, T>::locally_provided_txs_in_block(&self.0, &self.1, hash, order)
+  }
+
+  // This isn't static, yet can be read with only minor discrepancy risks
+  pub fn tip(&self) -> [u8; 32] {
+    Blockchain::<D, T>::tip_from_db(&self.0, self.1)
+  }
+}
diff --git a/coordinator/tributary/src/mempool.rs b/coordinator/tributary-sdk/src/mempool.rs
similarity index 100%
rename from coordinator/tributary/src/mempool.rs
rename to coordinator/tributary-sdk/src/mempool.rs
diff --git a/coordinator/tributary/src/merkle.rs b/coordinator/tributary-sdk/src/merkle.rs
similarity index 100%
rename from coordinator/tributary/src/merkle.rs
rename to coordinator/tributary-sdk/src/merkle.rs
diff --git a/coordinator/tributary/src/provided.rs b/coordinator/tributary-sdk/src/provided.rs
similarity index 100%
rename from coordinator/tributary/src/provided.rs
rename to coordinator/tributary-sdk/src/provided.rs
diff --git a/coordinator/tributary/src/tendermint/mod.rs b/coordinator/tributary-sdk/src/tendermint/mod.rs
similarity index 100%
rename from coordinator/tributary/src/tendermint/mod.rs
rename to coordinator/tributary-sdk/src/tendermint/mod.rs
diff --git a/coordinator/tributary/src/tendermint/tx.rs b/coordinator/tributary-sdk/src/tendermint/tx.rs
similarity index 100%
rename from coordinator/tributary/src/tendermint/tx.rs
rename to coordinator/tributary-sdk/src/tendermint/tx.rs
diff --git a/coordinator/tributary/src/tests/block.rs b/coordinator/tributary-sdk/src/tests/block.rs
similarity index 100%
rename from coordinator/tributary/src/tests/block.rs
rename to coordinator/tributary-sdk/src/tests/block.rs
diff --git a/coordinator/tributary/src/tests/blockchain.rs b/coordinator/tributary-sdk/src/tests/blockchain.rs
similarity index 100%
rename from coordinator/tributary/src/tests/blockchain.rs
rename to coordinator/tributary-sdk/src/tests/blockchain.rs
diff --git a/coordinator/tributary/src/tests/mempool.rs b/coordinator/tributary-sdk/src/tests/mempool.rs
similarity index 100%
rename from coordinator/tributary/src/tests/mempool.rs
rename to coordinator/tributary-sdk/src/tests/mempool.rs
diff --git a/coordinator/tributary/src/tests/merkle.rs b/coordinator/tributary-sdk/src/tests/merkle.rs
similarity index 100%
rename from coordinator/tributary/src/tests/merkle.rs
rename to
coordinator/tributary-sdk/src/tests/merkle.rs diff --git a/coordinator/tributary/src/tests/mod.rs b/coordinator/tributary-sdk/src/tests/mod.rs similarity index 100% rename from coordinator/tributary/src/tests/mod.rs rename to coordinator/tributary-sdk/src/tests/mod.rs diff --git a/coordinator/tributary/src/tests/p2p.rs b/coordinator/tributary-sdk/src/tests/p2p.rs similarity index 100% rename from coordinator/tributary/src/tests/p2p.rs rename to coordinator/tributary-sdk/src/tests/p2p.rs diff --git a/coordinator/tributary/src/tests/tendermint.rs b/coordinator/tributary-sdk/src/tests/tendermint.rs similarity index 100% rename from coordinator/tributary/src/tests/tendermint.rs rename to coordinator/tributary-sdk/src/tests/tendermint.rs diff --git a/coordinator/tributary/src/tests/transaction/mod.rs b/coordinator/tributary-sdk/src/tests/transaction/mod.rs similarity index 100% rename from coordinator/tributary/src/tests/transaction/mod.rs rename to coordinator/tributary-sdk/src/tests/transaction/mod.rs diff --git a/coordinator/tributary/src/tests/transaction/signed.rs b/coordinator/tributary-sdk/src/tests/transaction/signed.rs similarity index 100% rename from coordinator/tributary/src/tests/transaction/signed.rs rename to coordinator/tributary-sdk/src/tests/transaction/signed.rs diff --git a/coordinator/tributary/src/tests/transaction/tendermint.rs b/coordinator/tributary-sdk/src/tests/transaction/tendermint.rs similarity index 100% rename from coordinator/tributary/src/tests/transaction/tendermint.rs rename to coordinator/tributary-sdk/src/tests/transaction/tendermint.rs diff --git a/coordinator/tributary-sdk/src/transaction.rs b/coordinator/tributary-sdk/src/transaction.rs new file mode 100644 index 00000000..d7ff4092 --- /dev/null +++ b/coordinator/tributary-sdk/src/transaction.rs @@ -0,0 +1,218 @@ +use core::fmt::Debug; +use std::io; + +use zeroize::Zeroize; +use thiserror::Error; + +use blake2::{Digest, Blake2b512}; + +use ciphersuite::{ + group::{Group, GroupEncoding}, + Ciphersuite, Ristretto, +}; +use schnorr::SchnorrSignature; + +use crate::{TRANSACTION_SIZE_LIMIT, ReadWrite}; + +#[derive(Clone, PartialEq, Eq, Debug, Error)] +pub enum TransactionError { + /// Transaction exceeded the size limit. + #[error("transaction is too large")] + TooLargeTransaction, + /// Transaction's signer isn't a participant. + #[error("invalid signer")] + InvalidSigner, + /// Transaction's nonce isn't the prior nonce plus one. + #[error("invalid nonce")] + InvalidNonce, + /// Transaction's signature is invalid. + #[error("invalid signature")] + InvalidSignature, + /// Transaction's content is invalid. + #[error("transaction content is invalid")] + InvalidContent, + /// Transaction's signer has too many transactions in the mempool. + #[error("signer has too many transactions in the mempool")] + TooManyInMempool, + /// Provided Transaction added to mempool. + #[error("provided transaction added to mempool")] + ProvidedAddedToMempool, +} + +/// Data for a signed transaction. 
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub struct Signed {
+  pub signer: <Ristretto as Ciphersuite>::G,
+  pub nonce: u32,
+  pub signature: SchnorrSignature<Ristretto>,
+}
+
+impl ReadWrite for Signed {
+  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
+    let signer = Ristretto::read_G(reader)?;
+
+    let mut nonce = [0; 4];
+    reader.read_exact(&mut nonce)?;
+    let nonce = u32::from_le_bytes(nonce);
+    if nonce >= (u32::MAX - 1) {
+      Err(io::Error::other("nonce exceeded limit"))?;
+    }
+
+    let mut signature = SchnorrSignature::<Ristretto>::read(reader)?;
+    if signature.R.is_identity().into() {
+      // Anyone malicious could remove this and try to find zero signatures
+      // We should never produce zero signatures though meaning this should never come up
+      // If it does somehow come up, this is a decent courtesy
+      signature.zeroize();
+      Err(io::Error::other("signature nonce was identity"))?;
+    }
+
+    Ok(Signed { signer, nonce, signature })
+  }
+
+  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
+    // This is either an invalid signature or a private key leak
+    if self.signature.R.is_identity().into() {
+      Err(io::Error::other("signature nonce was identity"))?;
+    }
+    writer.write_all(&self.signer.to_bytes())?;
+    writer.write_all(&self.nonce.to_le_bytes())?;
+    self.signature.write(writer)
+  }
+}
+
+impl Signed {
+  pub fn read_without_nonce<R: io::Read>(reader: &mut R, nonce: u32) -> io::Result<Self> {
+    let signer = Ristretto::read_G(reader)?;
+
+    let mut signature = SchnorrSignature::<Ristretto>::read(reader)?;
+    if signature.R.is_identity().into() {
+      // Anyone malicious could remove this and try to find zero signatures
+      // We should never produce zero signatures though meaning this should never come up
+      // If it does somehow come up, this is a decent courtesy
+      signature.zeroize();
+      Err(io::Error::other("signature nonce was identity"))?;
+    }
+
+    Ok(Signed { signer, nonce, signature })
+  }
+
+  pub fn write_without_nonce<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
+    // This is either an invalid signature or a private key leak
+    if self.signature.R.is_identity().into() {
+      Err(io::Error::other("signature nonce was identity"))?;
+    }
+    writer.write_all(&self.signer.to_bytes())?;
+    self.signature.write(writer)
+  }
+}
+
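Editor's note: a sketch of round-tripping a `Signed` whose nonce travels out-of-band, as the coordinator's deterministic-nonce transactions do with the API above.

```rust
// Editorial sketch, not part of the patch.
fn roundtrip(signed: &Signed, nonce: u32) -> std::io::Result<Signed> {
  let mut buf = Vec::new();
  signed.write_without_nonce(&mut buf)?;
  // The nonce is supplied by the caller, not read from the serialization
  Signed::read_without_nonce(&mut buf.as_slice(), nonce)
}
```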
+#[allow(clippy::large_enum_variant)]
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub enum TransactionKind {
+  /// This transaction should be provided by every validator, in an exact order.
+  ///
+  /// The contained static string names the orderer to use. This allows two distinct provided
+  /// transaction kinds, without a synchronized order, to be ordered within their own kind without
+  /// requiring ordering with each other.
+  ///
+  /// The only malleability is in when this transaction appears on chain. The block producer will
+  /// include it when they have it. Block verification will fail for validators without it.
+  ///
+  /// If a supermajority of validators produce a commit for a block with a provided transaction
+  /// which isn't locally held, the block will be added to the local chain. When the transaction is
+  /// locally provided, it will be compared for correctness to the on-chain version
+  ///
+  /// In order to ensure TXs aren't accidentally provided multiple times, all provided transactions
+  /// must have a unique hash which is also unique to all Unsigned transactions.
+  Provided(&'static str),
+
+  /// An unsigned transaction, only able to be included by the block producer.
+  ///
+  /// Once an Unsigned transaction is included on-chain, it may not be included again. In order to
+  /// have multiple Unsigned transactions with the same values included on-chain, some distinct
+  /// nonce must be included in order to cause a distinct hash.
+  ///
+  /// The hash must also be unique with all Provided transactions.
+  Unsigned,
+
+  /// A signed transaction.
+  Signed(Vec<u8>, Signed),
+}
+
+// TODO: Should this be renamed TransactionTrait now that a literal Transaction exists?
+// Or should the literal Transaction be renamed to Event?
+pub trait Transaction: 'static + Send + Sync + Clone + Eq + Debug + ReadWrite {
+  /// Return what type of transaction this is.
+  fn kind(&self) -> TransactionKind;
+
+  /// Return the hash of this transaction.
+  ///
+  /// The hash must NOT commit to the signature.
+  fn hash(&self) -> [u8; 32];
+
+  /// Perform transaction-specific verification.
+  fn verify(&self) -> Result<(), TransactionError>;
+
+  /// Obtain the challenge for this transaction's signature.
+  ///
+  /// Do not override this unless you know what you're doing.
+  ///
+  /// Panics if called on non-signed transactions.
+  fn sig_hash(&self, genesis: [u8; 32]) -> <Ristretto as Ciphersuite>::F {
+    match self.kind() {
+      TransactionKind::Signed(order, Signed { signature, .. }) => {
+        <Ristretto as Ciphersuite>::F::from_bytes_mod_order_wide(
+          &Blake2b512::digest(
+            [
+              b"Tributary Signed Transaction",
+              genesis.as_ref(),
+              &self.hash(),
+              order.as_ref(),
+              signature.R.to_bytes().as_ref(),
+            ]
+            .concat(),
+          )
+          .into(),
+        )
+      }
+      _ => panic!("sig_hash called on non-signed transaction"),
+    }
+  }
+}
+
+pub trait GAIN: FnMut(&<Ristretto as Ciphersuite>::G, &[u8]) -> Option<u32> {}
+impl<F: FnMut(&<Ristretto as Ciphersuite>::G, &[u8]) -> Option<u32>> GAIN for F {}
+
+pub(crate) fn verify_transaction<F: GAIN, T: Transaction>(
+  tx: &T,
+  genesis: [u8; 32],
+  get_and_increment_nonce: &mut F,
+) -> Result<(), TransactionError> {
+  if tx.serialize().len() > TRANSACTION_SIZE_LIMIT {
+    Err(TransactionError::TooLargeTransaction)?;
+  }
+
+  tx.verify()?;
+
+  match tx.kind() {
+    TransactionKind::Provided(_) | TransactionKind::Unsigned => {}
+    TransactionKind::Signed(order, Signed { signer, nonce, signature }) => {
+      if let Some(next_nonce) = get_and_increment_nonce(&signer, &order) {
+        if nonce != next_nonce {
+          Err(TransactionError::InvalidNonce)?;
+        }
+      } else {
+        // Not a participant
+        Err(TransactionError::InvalidSigner)?;
+      }
+
+      // TODO: Use a batch verification here
+      if !signature.verify(signer, tx.sig_hash(genesis)) {
+        Err(TransactionError::InvalidSignature)?;
+      }
+    }
+  }
+
+  Ok(())
+}
diff --git a/coordinator/tributary/tendermint/Cargo.toml b/coordinator/tributary-sdk/tendermint/Cargo.toml
similarity index 100%
rename from coordinator/tributary/tendermint/Cargo.toml
rename to coordinator/tributary-sdk/tendermint/Cargo.toml
diff --git a/coordinator/tributary/tendermint/LICENSE b/coordinator/tributary-sdk/tendermint/LICENSE
similarity index 100%
rename from coordinator/tributary/tendermint/LICENSE
rename to coordinator/tributary-sdk/tendermint/LICENSE
diff --git a/coordinator/tributary/tendermint/README.md b/coordinator/tributary-sdk/tendermint/README.md
similarity index 100%
rename from coordinator/tributary/tendermint/README.md
rename to coordinator/tributary-sdk/tendermint/README.md
diff --git a/coordinator/tributary/tendermint/src/block.rs b/coordinator/tributary-sdk/tendermint/src/block.rs
similarity index 100%
rename from coordinator/tributary/tendermint/src/block.rs
rename to coordinator/tributary-sdk/tendermint/src/block.rs
diff --git a/coordinator/tributary/tendermint/src/ext.rs b/coordinator/tributary-sdk/tendermint/src/ext.rs
similarity index 100%
rename from coordinator/tributary/tendermint/src/ext.rs
rename to
coordinator/tributary-sdk/tendermint/src/ext.rs diff --git a/coordinator/tributary/tendermint/src/lib.rs b/coordinator/tributary-sdk/tendermint/src/lib.rs similarity index 100% rename from coordinator/tributary/tendermint/src/lib.rs rename to coordinator/tributary-sdk/tendermint/src/lib.rs diff --git a/coordinator/tributary/tendermint/src/message_log.rs b/coordinator/tributary-sdk/tendermint/src/message_log.rs similarity index 100% rename from coordinator/tributary/tendermint/src/message_log.rs rename to coordinator/tributary-sdk/tendermint/src/message_log.rs diff --git a/coordinator/tributary/tendermint/src/round.rs b/coordinator/tributary-sdk/tendermint/src/round.rs similarity index 100% rename from coordinator/tributary/tendermint/src/round.rs rename to coordinator/tributary-sdk/tendermint/src/round.rs diff --git a/coordinator/tributary/tendermint/src/time.rs b/coordinator/tributary-sdk/tendermint/src/time.rs similarity index 100% rename from coordinator/tributary/tendermint/src/time.rs rename to coordinator/tributary-sdk/tendermint/src/time.rs diff --git a/coordinator/tributary/tendermint/tests/ext.rs b/coordinator/tributary-sdk/tendermint/tests/ext.rs similarity index 100% rename from coordinator/tributary/tendermint/tests/ext.rs rename to coordinator/tributary-sdk/tendermint/tests/ext.rs diff --git a/coordinator/tributary/Cargo.toml b/coordinator/tributary/Cargo.toml index d88c3b33..3e374bc0 100644 --- a/coordinator/tributary/Cargo.toml +++ b/coordinator/tributary/Cargo.toml @@ -1,11 +1,13 @@ [package] -name = "tributary-chain" +name = "serai-coordinator-tributary" version = "0.1.0" -description = "A micro-blockchain to provide consensus and ordering to P2P communication" +description = "The Tributary used by the Serai Coordinator" license = "AGPL-3.0-only" repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tributary" authors = ["Luke Parker "] +keywords = [] edition = "2021" +publish = false rust-version = "1.81" [package.metadata.docs.rs] @@ -16,34 +18,29 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -thiserror = { version = "2", default-features = false, features = ["std"] } - -subtle = { version = "^2", default-features = false, features = ["std"] } zeroize = { version = "^1.5", default-features = false, features = ["std"] } - -rand = { version = "0.8", default-features = false, features = ["std"] } -rand_chacha = { version = "0.3", default-features = false, features = ["std"] } +rand_core = { version = "0.6", default-features = false, features = ["std"] } blake2 = { version = "0.10", default-features = false, features = ["std"] } -transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } - -ciphersuite = { package = "ciphersuite", path = "../../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] } +ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] } schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] } -hex = { version = "0.4", default-features = false, features = ["std"] } -log = { version = "0.4", default-features = false, features = ["std"] } +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +serai-client = { path = "../../substrate/client", 
default-features = false, features = ["serai", "borsh"] } serai-db = { path = "../../common/db" } +serai-task = { path = "../../common/task", version = "0.1" } -scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } -futures-util = { version = "0.3", default-features = false, features = ["std", "sink", "channel"] } -futures-channel = { version = "0.3", default-features = false, features = ["std", "sink"] } -tendermint = { package = "tendermint-machine", path = "./tendermint" } +tributary-sdk = { path = "../tributary-sdk" } -tokio = { version = "1", default-features = false, features = ["sync", "time", "rt"] } +serai-cosign = { path = "../cosign" } +serai-coordinator-substrate = { path = "../substrate" } -[dev-dependencies] -tokio = { version = "1", features = ["macros"] } +messages = { package = "serai-processor-messages", path = "../../processor/messages" } + +log = { version = "0.4", default-features = false, features = ["std"] } [features] -tests = [] +longer-reattempts = [] diff --git a/coordinator/tributary/LICENSE b/coordinator/tributary/LICENSE index f684d027..621233a9 100644 --- a/coordinator/tributary/LICENSE +++ b/coordinator/tributary/LICENSE @@ -1,6 +1,6 @@ AGPL-3.0-only license -Copyright (c) 2023 Luke Parker +Copyright (c) 2023-2025 Luke Parker This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License Version 3 as diff --git a/coordinator/tributary/README.md b/coordinator/tributary/README.md index 6fce976e..384b8f97 100644 --- a/coordinator/tributary/README.md +++ b/coordinator/tributary/README.md @@ -1,3 +1,4 @@ -# Tributary +# Serai Coordinator Tributary -A verifiable, ordered broadcast layer implemented as a BFT micro-blockchain. +The Tributary used by the Serai Coordinator. This includes the `Transaction` +definition and the code to handle blocks added on-chain. 
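For orientation, the `Transaction` trait shown earlier in this patch is the contract application transactions (such as the coordinator's own `Transaction` enum below) must satisfy. A minimal sketch of an implementor, assuming only the module layout shown in this diff (`tributary_sdk::ReadWrite` and `tributary_sdk::transaction::{TransactionError, TransactionKind, Transaction}`); the `Ping` type, its one-byte payload, and its validity rule are hypothetical:

use std::io;

use blake2::{Digest, Blake2b512};

use tributary_sdk::{
  ReadWrite,
  transaction::{TransactionError, TransactionKind, Transaction},
};

/// A hypothetical application transaction carrying one byte of content.
#[derive(Clone, PartialEq, Eq, Debug)]
struct Ping(u8);

impl ReadWrite for Ping {
  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let mut byte = [0; 1];
    reader.read_exact(&mut byte)?;
    Ok(Ping(byte[0]))
  }
  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    writer.write_all(&[self.0])
  }
}

impl Transaction for Ping {
  fn kind(&self) -> TransactionKind {
    // Unsigned, so only the block producer may include it
    TransactionKind::Unsigned
  }

  fn hash(&self) -> [u8; 32] {
    // Per the trait, the hash must NOT commit to any signature (trivially true
    // here, as Unsigned transactions don't carry one)
    let mut hash = [0; 32];
    hash.copy_from_slice(&Blake2b512::digest([self.0])[.. 32]);
    hash
  }

  fn verify(&self) -> Result<(), TransactionError> {
    // Transaction-specific verification; the rule itself is arbitrary here
    if self.0 == u8::MAX {
      Err(TransactionError::InvalidContent)?;
    }
    Ok(())
  }
}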
diff --git a/coordinator/src/tributary/db.rs b/coordinator/tributary/src/db.rs similarity index 97% rename from coordinator/src/tributary/db.rs rename to coordinator/tributary/src/db.rs index 99fbe69a..87567846 100644 --- a/coordinator/src/tributary/db.rs +++ b/coordinator/tributary/src/db.rs @@ -9,7 +9,7 @@ use messages::sign::{VariantSignId, SignId}; use serai_db::*; -use crate::tributary::transaction::SigningProtocolRound; +use crate::transaction::SigningProtocolRound; /// A topic within the database which the group participates in #[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)] @@ -167,6 +167,9 @@ impl Topic { } } +pub(crate) trait Borshy: BorshSerialize + BorshDeserialize {} +impl Borshy for T {} + /// The resulting data set from an accumulation pub(crate) enum DataSet { /// Accumulating this did not produce a data set to act on @@ -176,9 +179,6 @@ pub(crate) enum DataSet { Participating(HashMap), } -trait Borshy: BorshSerialize + BorshDeserialize {} -impl Borshy for T {} - create_db!( CoordinatorTributary { // The last handled tributary block's (number, hash) @@ -389,12 +389,12 @@ impl TributaryDb { // 5 minutes #[cfg(not(feature = "longer-reattempts"))] const BASE_REATTEMPT_DELAY: u32 = - (5u32 * 60 * 1000).div_ceil(tributary::tendermint::TARGET_BLOCK_TIME); + (5u32 * 60 * 1000).div_ceil(tributary_sdk::tendermint::TARGET_BLOCK_TIME); // 10 minutes, intended for latent environments like the GitHub CI #[cfg(feature = "longer-reattempts")] const BASE_REATTEMPT_DELAY: u32 = - (10u32 * 60 * 1000).div_ceil(tributary::tendermint::TARGET_BLOCK_TIME); + (10u32 * 60 * 1000).div_ceil(tributary_sdk::tendermint::TARGET_BLOCK_TIME); // Linearly scale the time for the protocol with the attempt number let blocks_till_reattempt = u64::from(attempt * BASE_REATTEMPT_DELAY); @@ -446,11 +446,4 @@ impl TributaryDb { ) { ProcessorMessages::send(txn, set, &message.into()); } - - pub(crate) fn try_recv_message( - txn: &mut impl DbTxn, - set: ValidatorSet, - ) -> Option { - ProcessorMessages::try_recv(txn, set) - } } diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 2e4a6115..9b059820 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -1,388 +1,513 @@ -use core::{marker::PhantomData, fmt::Debug, future::Future}; -use std::{sync::Arc, io}; +#![cfg_attr(docsrs, feature(doc_auto_cfg))] +#![doc = include_str!("../README.md")] +#![deny(missing_docs)] -use zeroize::Zeroizing; +use core::{marker::PhantomData, future::Future}; +use std::collections::HashMap; -use ciphersuite::{Ciphersuite, Ristretto}; +use ciphersuite::group::GroupEncoding; -use scale::Decode; -use futures_channel::mpsc::UnboundedReceiver; -use futures_util::{StreamExt, SinkExt}; -use ::tendermint::{ - ext::{BlockNumber, Commit, Block as BlockTrait, Network}, - SignedMessageFor, SyncedBlock, SyncedBlockSender, SyncedBlockResultReceiver, MessageSender, - TendermintMachine, TendermintHandle, +use serai_client::{ + primitives::SeraiAddress, + validator_sets::primitives::{ValidatorSet, Slash}, }; -pub use ::tendermint::Evidence; +use serai_db::*; +use serai_task::ContinuallyRan; -use serai_db::Db; +use tributary_sdk::{ + tendermint::{ + tx::{TendermintTx, Evidence, decode_signed_message}, + TendermintNetwork, + }, + Signed as TributarySigned, TransactionKind, TransactionTrait, + Transaction as TributaryTransaction, Block, TributaryReader, P2p, +}; -use tokio::sync::RwLock; +use serai_cosign::Cosigning; +use 
serai_coordinator_substrate::NewSetInformation; -mod merkle; -pub(crate) use merkle::*; +use messages::sign::VariantSignId; -pub mod transaction; -pub use transaction::{TransactionError, Signed, TransactionKind, Transaction as TransactionTrait}; +mod transaction; +pub(crate) use transaction::{SigningProtocolRound, Signed}; +pub use transaction::Transaction; -use crate::tendermint::tx::TendermintTx; +mod db; +use db::*; -mod provided; -pub(crate) use provided::*; -pub use provided::ProvidedError; - -mod block; -pub use block::*; - -mod blockchain; -pub(crate) use blockchain::*; - -mod mempool; -pub(crate) use mempool::*; - -pub mod tendermint; -pub(crate) use crate::tendermint::*; - -#[cfg(any(test, feature = "tests"))] -pub mod tests; - -/// Size limit for an individual transaction. -// This needs to be big enough to participate in a 101-of-150 eVRF DKG with each element taking -// `MAX_KEY_LEN`. This also needs to be big enough to pariticpate in signing 520 Bitcoin inputs -// with 49 key shares, and signing 120 Monero inputs with 49 key shares. -// TODO: Add a test for these properties -pub const TRANSACTION_SIZE_LIMIT: usize = 2_000_000; -/// Amount of transactions a single account may have in the mempool. -pub const ACCOUNT_MEMPOOL_LIMIT: u32 = 50; -/// Block size limit. -// This targets a growth limit of roughly 30 GB a day, under load, in order to prevent a malicious -// participant from flooding disks and causing out of space errors in order processes. -pub const BLOCK_SIZE_LIMIT: usize = 2_001_000; - -pub(crate) const TENDERMINT_MESSAGE: u8 = 0; -pub(crate) const TRANSACTION_MESSAGE: u8 = 1; - -#[allow(clippy::large_enum_variant)] -#[derive(Clone, PartialEq, Eq, Debug)] -pub enum Transaction { - Tendermint(TendermintTx), - Application(T), +/// Messages to send to the Processors. +pub struct ProcessorMessages; +impl ProcessorMessages { + /// Try to receive a message to send to a Processor. + pub fn try_recv(txn: &mut impl DbTxn, set: ValidatorSet) -> Option { + db::ProcessorMessages::try_recv(txn, set) + } } -impl ReadWrite for Transaction { - fn read(reader: &mut R) -> io::Result { - let mut kind = [0]; - reader.read_exact(&mut kind)?; - match kind[0] { - 0 => { - let tx = TendermintTx::read(reader)?; - Ok(Transaction::Tendermint(tx)) - } - 1 => { - let tx = T::read(reader)?; - Ok(Transaction::Application(tx)) - } - _ => Err(io::Error::other("invalid transaction type")), +struct ScanBlock<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> { + _td: PhantomData, + _p2p: PhantomData
<P>
, + cosign_db: &'a CD, + tributary_txn: &'a mut TDT, + set: ValidatorSet, + validators: &'a [SeraiAddress], + total_weight: u64, + validator_weights: &'a HashMap, +} +impl<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, CD, TD, TDT, P> { + fn potentially_start_cosign(&mut self) { + // Don't start a new cosigning instance if we're actively running one + if TributaryDb::actively_cosigning(self.tributary_txn, self.set).is_some() { + return; } - } - fn write(&self, writer: &mut W) -> io::Result<()> { - match self { - Transaction::Tendermint(tx) => { - writer.write_all(&[0])?; - tx.write(writer) - } - Transaction::Application(tx) => { - writer.write_all(&[1])?; - tx.write(writer) - } - } - } -} -impl Transaction { - pub fn hash(&self) -> [u8; 32] { - match self { - Transaction::Tendermint(tx) => tx.hash(), - Transaction::Application(tx) => tx.hash(), - } - } - - pub fn kind(&self) -> TransactionKind { - match self { - Transaction::Tendermint(tx) => tx.kind(), - Transaction::Application(tx) => tx.kind(), - } - } -} - -/// An item which can be read and written. -pub trait ReadWrite: Sized { - fn read(reader: &mut R) -> io::Result; - fn write(&self, writer: &mut W) -> io::Result<()>; - - fn serialize(&self) -> Vec { - // BlockHeader is 64 bytes and likely the smallest item in this system - let mut buf = Vec::with_capacity(64); - self.write(&mut buf).unwrap(); - buf - } -} - -pub trait P2p: 'static + Send + Sync + Clone { - /// Broadcast a message to all other members of the Tributary with the specified genesis. - /// - /// The Tributary will re-broadcast consensus messages on a fixed interval to ensure they aren't - /// prematurely dropped from the P2P layer. THe P2P layer SHOULD perform content-based - /// deduplication to ensure a sane amount of load. - fn broadcast(&self, genesis: [u8; 32], msg: Vec) -> impl Send + Future; -} - -impl P2p for Arc
<P>
{ - fn broadcast(&self, genesis: [u8; 32], msg: Vec) -> impl Send + Future { - P::broadcast(self, genesis, msg) - } -} - -#[derive(Clone)] -pub struct Tributary { - db: D, - - genesis: [u8; 32], - network: TendermintNetwork, - - synced_block: Arc>>>, - synced_block_result: Arc>, - messages: Arc>>>, -} - -impl Tributary { - pub async fn new( - db: D, - genesis: [u8; 32], - start_time: u64, - key: Zeroizing<::F>, - validators: Vec<(::G, u64)>, - p2p: P, - ) -> Option { - log::info!("new Tributary with genesis {}", hex::encode(genesis)); - - let validators_vec = validators.iter().map(|validator| validator.0).collect::>(); - - let signer = Arc::new(Signer::new(genesis, key)); - let validators = Arc::new(Validators::new(genesis, validators)?); - - let mut blockchain = Blockchain::new(db.clone(), genesis, &validators_vec); - let block_number = BlockNumber(blockchain.block_number()); - - let start_time = if let Some(commit) = blockchain.commit(&blockchain.tip()) { - Commit::::decode(&mut commit.as_ref()).unwrap().end_time - } else { - start_time + // Fetch the latest intended-to-be-cosigned block + let Some(latest_substrate_block_to_cosign) = + TributaryDb::latest_substrate_block_to_cosign(self.tributary_txn, self.set) + else { + return; }; - let proposal = TendermintBlock( - blockchain.build_block::>(&validators).serialize(), + + // If it was already cosigned, return + if TributaryDb::cosigned(self.tributary_txn, self.set, latest_substrate_block_to_cosign) { + return; + } + + let Some(substrate_block_number) = + Cosigning::::finalized_block_number(self.cosign_db, latest_substrate_block_to_cosign) + else { + // This is a valid panic as we shouldn't be scanning this block if we didn't provide all + // Provided transactions within it, and the block to cosign is a Provided transaction + panic!("cosigning a block our cosigner didn't index") + }; + + // Mark us as actively cosigning + TributaryDb::start_cosigning( + self.tributary_txn, + self.set, + latest_substrate_block_to_cosign, + substrate_block_number, ); - let blockchain = Arc::new(RwLock::new(blockchain)); - - let network = TendermintNetwork { genesis, signer, validators, blockchain, p2p }; - - let TendermintHandle { synced_block, synced_block_result, messages, machine } = - TendermintMachine::new( - db.clone(), - network.clone(), - genesis, - block_number, - start_time, - proposal, - ) - .await; - tokio::spawn(machine.run()); - - Some(Self { - db, - genesis, - network, - synced_block: Arc::new(RwLock::new(synced_block)), - synced_block_result: Arc::new(RwLock::new(synced_block_result)), - messages: Arc::new(RwLock::new(messages)), - }) - } - - pub fn block_time() -> u32 { - TendermintNetwork::::block_time() - } - - pub fn genesis(&self) -> [u8; 32] { - self.genesis - } - - pub async fn block_number(&self) -> u64 { - self.network.blockchain.read().await.block_number() - } - pub async fn tip(&self) -> [u8; 32] { - self.network.blockchain.read().await.tip() - } - - pub fn reader(&self) -> TributaryReader { - TributaryReader(self.db.clone(), self.genesis, PhantomData) - } - - pub async fn provide_transaction(&self, tx: T) -> Result<(), ProvidedError> { - self.network.blockchain.write().await.provide_transaction(tx) - } - - pub async fn next_nonce( - &self, - signer: &::G, - order: &[u8], - ) -> Option { - self.network.blockchain.read().await.next_nonce(signer, order) - } - - // Returns Ok(true) if new, Ok(false) if an already present unsigned, or the error. 
- // Safe to be &self since the only meaningful usage of self is self.network.blockchain which - // successfully acquires its own write lock - pub async fn add_transaction(&self, tx: T) -> Result { - let tx = Transaction::Application(tx); - let mut to_broadcast = vec![TRANSACTION_MESSAGE]; - tx.write(&mut to_broadcast).unwrap(); - let res = self.network.blockchain.write().await.add_transaction::>( - true, - tx, - &self.network.signature_scheme(), + // Send the message for the processor to start signing + TributaryDb::send_message( + self.tributary_txn, + self.set, + messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { + session: self.set.session, + block_number: substrate_block_number, + block: latest_substrate_block_to_cosign, + }, ); - if res == Ok(true) { - self.network.p2p.broadcast(self.genesis, to_broadcast).await; - } - res } + fn handle_application_tx(&mut self, block_number: u64, tx: Transaction) { + let signer = |signed: Signed| SeraiAddress(signed.signer().to_bytes()); - async fn sync_block_internal( - &self, - block: Block, - commit: Vec, - result: &mut UnboundedReceiver, - ) -> bool { - let (tip, block_number) = { - let blockchain = self.network.blockchain.read().await; - (blockchain.tip(), blockchain.block_number()) - }; - - if block.header.parent != tip { - log::debug!("told to sync a block whose parent wasn't our tip"); - return false; + if let TransactionKind::Signed(_, TributarySigned { signer, .. }) = tx.kind() { + // Don't handle transactions from those fatally slashed + // TODO: The fact they can publish these TXs makes this a notable spam vector + if TributaryDb::is_fatally_slashed( + self.tributary_txn, + self.set, + SeraiAddress(signer.to_bytes()), + ) { + return; + } } - let block = TendermintBlock(block.serialize()); - let mut commit_ref = commit.as_ref(); - let Ok(commit) = Commit::>::decode(&mut commit_ref) else { - log::error!("sent an invalidly serialized commit"); - return false; - }; - // Storage DoS vector. We *could* truncate to solely the relevant portion, trying to save this, - // yet then we'd have to test the truncation was performed correctly. - if !commit_ref.is_empty() { - log::error!("sent an commit with additional data after it"); - return false; - } - if !self.network.verify_commit(block.id(), &commit) { - log::error!("sent an invalid commit"); - return false; - } + match tx { + // Accumulate this vote and fatally slash the participant if past the threshold + Transaction::RemoveParticipant { participant, signed } => { + let signer = signer(signed); - let number = BlockNumber(block_number + 1); - self.synced_block.write().await.send(SyncedBlock { number, block, commit }).await.unwrap(); - result.next().await.unwrap() - } - - // Sync a block. - // TODO: Since we have a static validator set, we should only need the tail commit? - pub async fn sync_block(&self, block: Block, commit: Vec) -> bool { - let mut result = self.synced_block_result.write().await; - self.sync_block_internal(block, commit, &mut result).await - } - - // Return true if the message should be rebroadcasted. - pub async fn handle_message(&self, msg: &[u8]) -> bool { - match msg.first() { - Some(&TRANSACTION_MESSAGE) => { - let Ok(tx) = Transaction::read::<&[u8]>(&mut &msg[1 ..]) else { - log::error!("received invalid transaction message"); - return false; - }; - - // TODO: Sync mempools with fellow peers - // Can we just rebroadcast transactions not included for at least two blocks? 
- let res = - self.network.blockchain.write().await.add_transaction::>( - false, - tx, - &self.network.signature_scheme(), + // Check the participant voted to be removed actually exists + if !self.validators.iter().any(|validator| *validator == participant) { + TributaryDb::fatal_slash( + self.tributary_txn, + self.set, + signer, + "voted to remove non-existent participant", ); - log::debug!("received transaction message. valid new transaction: {res:?}"); - res == Ok(true) - } + return; + } - Some(&TENDERMINT_MESSAGE) => { - let Ok(msg) = - SignedMessageFor::>::decode::<&[u8]>(&mut &msg[1 ..]) - else { - log::error!("received invalid tendermint message"); - return false; + match TributaryDb::accumulate( + self.tributary_txn, + self.set, + self.validators, + self.total_weight, + block_number, + Topic::RemoveParticipant { participant }, + signer, + self.validator_weights[&signer], + &(), + ) { + DataSet::None => {} + DataSet::Participating(_) => { + TributaryDb::fatal_slash(self.tributary_txn, self.set, participant, "voted to remove"); + } }; - - self.messages.write().await.send(msg).await.unwrap(); - false } - _ => false, + // Send the participation to the processor + Transaction::DkgParticipation { participation, signed } => { + TributaryDb::send_message( + self.tributary_txn, + self.set, + messages::key_gen::CoordinatorMessage::Participation { + session: self.set.session, + participant: todo!("TODO"), + participation, + }, + ); + } + Transaction::DkgConfirmationPreprocess { attempt, preprocess, signed } => { + // Accumulate the preprocesses into our own FROST attempt manager + todo!("TODO") + } + Transaction::DkgConfirmationShare { attempt, share, signed } => { + // Accumulate the shares into our own FROST attempt manager + todo!("TODO") + } + + Transaction::Cosign { substrate_block_hash } => { + // Update the latest intended-to-be-cosigned Substrate block + TributaryDb::set_latest_substrate_block_to_cosign( + self.tributary_txn, + self.set, + substrate_block_hash, + ); + // Start a new cosign if we aren't already working on one + self.potentially_start_cosign(); + } + Transaction::Cosigned { substrate_block_hash } => { + /* + We provide one Cosigned per Cosign transaction, but they have independent orders. This + means we may receive Cosigned before Cosign. In order to ensure we only start work on + not-yet-Cosigned cosigns, we flag all cosigned blocks as cosigned. Then, when we choose + the next block to work on, we won't if it's already been cosigned. 
+ */ + TributaryDb::mark_cosigned(self.tributary_txn, self.set, substrate_block_hash); + + // If we aren't actively cosigning this block, return + // This occurs when we have Cosign TXs A, B, C, we received Cosigned for A and start on C, + // and then receive Cosigned for B + if TributaryDb::actively_cosigning(self.tributary_txn, self.set) != + Some(substrate_block_hash) + { + return; + } + + // Since this is the block we were cosigning, mark us as having finished cosigning + TributaryDb::finish_cosigning(self.tributary_txn, self.set); + + // Start working on the next cosign + self.potentially_start_cosign(); + } + Transaction::SubstrateBlock { hash } => { + // Whitelist all of the IDs this Substrate block causes to be signed + todo!("TODO") + } + Transaction::Batch { hash } => { + // Whitelist the signing of this batch, publishing our own preprocess + todo!("TODO") + } + + Transaction::SlashReport { slash_points, signed } => { + let signer = signer(signed); + + if slash_points.len() != self.validators.len() { + TributaryDb::fatal_slash( + self.tributary_txn, + self.set, + signer, + "slash report was for a distinct amount of signers", + ); + return; + } + + // Accumulate, and if past the threshold, calculate *the* slash report and start signing it + match TributaryDb::accumulate( + self.tributary_txn, + self.set, + self.validators, + self.total_weight, + block_number, + Topic::SlashReport, + signer, + self.validator_weights[&signer], + &slash_points, + ) { + DataSet::None => {} + DataSet::Participating(data_set) => { + // Find the median reported slashes for this validator + /* + TODO: This lets 34% perform a fatal slash. That shouldn't be allowed. We need + to accept slash reports for a period past the threshold, and only fatally slash if we + have a supermajority agree the slash should be fatal. If there isn't a supermajority, + but the median believe the slash should be fatal, we need to fallback to a large + constant. + + Also, TODO, each slash point should probably be considered as + `MAX_KEY_SHARES_PER_SET * BLOCK_TIME` seconds of downtime. As this time crosses + various thresholds (1 day, 3 days, etc), a multiplier should be attached. + */ + let mut median_slash_report = Vec::with_capacity(self.validators.len()); + for i in 0 .. 
self.validators.len() { + let mut this_validator = + data_set.values().map(|report| report[i]).collect::>(); + this_validator.sort_unstable(); + // Choose the median, where if there are two median values, the lower one is chosen + let median_index = if (this_validator.len() % 2) == 1 { + this_validator.len() / 2 + } else { + (this_validator.len() / 2) - 1 + }; + median_slash_report.push(this_validator[median_index]); + } + + // We only publish slashes for the `f` worst performers to: + // 1) Effect amnesty if there were network disruptions which affected everyone + // 2) Ensure the signing threshold doesn't have a disincentive to do their job + + // Find the worst performer within the signing threshold's slash points + let f = (self.validators.len() - 1) / 3; + let worst_validator_in_supermajority_slash_points = { + let mut sorted_slash_points = median_slash_report.clone(); + sorted_slash_points.sort_unstable(); + // This won't be a valid index if `f == 0`, which means we don't have any validators + // to slash + let index_of_first_validator_to_slash = self.validators.len() - f; + let index_of_worst_validator_in_supermajority = index_of_first_validator_to_slash - 1; + sorted_slash_points[index_of_worst_validator_in_supermajority] + }; + + // Perform the amortization + for slash_points in &mut median_slash_report { + *slash_points = + slash_points.saturating_sub(worst_validator_in_supermajority_slash_points) + } + let amortized_slash_report = median_slash_report; + + // Create the resulting slash report + let mut slash_report = vec![]; + for (validator, points) in self.validators.iter().copied().zip(amortized_slash_report) { + if points != 0 { + slash_report.push(Slash { key: validator.into(), points }); + } + } + assert!(slash_report.len() <= f); + + // Recognize the topic for signing the slash report + TributaryDb::recognize_topic( + self.tributary_txn, + self.set, + Topic::Sign { + id: VariantSignId::SlashReport, + attempt: 0, + round: SigningProtocolRound::Preprocess, + }, + ); + // Send the message for the processor to start signing + TributaryDb::send_message( + self.tributary_txn, + self.set, + messages::coordinator::CoordinatorMessage::SignSlashReport { + session: self.set.session, + report: slash_report, + }, + ); + } + }; + } + + Transaction::Sign { id, attempt, round, data, signed } => { + let topic = Topic::Sign { id, attempt, round }; + let signer = signer(signed); + + if u64::try_from(data.len()).unwrap() != self.validator_weights[&signer] { + TributaryDb::fatal_slash( + self.tributary_txn, + self.set, + signer, + "signer signed with a distinct amount of key shares than they had key shares", + ); + return; + } + + match TributaryDb::accumulate( + self.tributary_txn, + self.set, + self.validators, + self.total_weight, + block_number, + topic, + signer, + self.validator_weights[&signer], + &data, + ) { + DataSet::None => {} + DataSet::Participating(data_set) => { + let id = topic.sign_id(self.set).expect("Topic::Sign didn't have SignId"); + let flatten_data_set = |data_set| todo!("TODO"); + let data_set = flatten_data_set(data_set); + TributaryDb::send_message( + self.tributary_txn, + self.set, + match round { + SigningProtocolRound::Preprocess => { + messages::sign::CoordinatorMessage::Preprocesses { id, preprocesses: data_set } + } + SigningProtocolRound::Share => { + messages::sign::CoordinatorMessage::Shares { id, shares: data_set } + } + }, + ) + } + }; + } } } - /// Get a Future which will resolve once the next block has been added. 
- pub async fn next_block_notification( - &self, - ) -> impl Send + Sync + core::future::Future> { - let (tx, rx) = tokio::sync::oneshot::channel(); - self.network.blockchain.write().await.next_block_notifications.push_back(tx); - rx + fn handle_block(mut self, block_number: u64, block: Block) { + TributaryDb::start_of_block(self.tributary_txn, self.set, block_number); + + for tx in block.transactions { + match tx { + TributaryTransaction::Tendermint(TendermintTx::SlashEvidence(ev)) => { + // Since the evidence is on the chain, it will have already been validated + // We can just punish the signer + let data = match ev { + Evidence::ConflictingMessages(first, second) => (first, Some(second)), + Evidence::InvalidPrecommit(first) | Evidence::InvalidValidRound(first) => (first, None), + }; + let msgs = ( + decode_signed_message::>(&data.0).unwrap(), + if data.1.is_some() { + Some( + decode_signed_message::>(&data.1.unwrap()) + .unwrap(), + ) + } else { + None + }, + ); + + // Since anything with evidence is fundamentally faulty behavior, not just temporal + // errors, mark the node as fatally slashed + TributaryDb::fatal_slash( + self.tributary_txn, + self.set, + SeraiAddress(msgs.0.msg.sender), + &format!("invalid tendermint messages: {msgs:?}"), + ); + } + TributaryTransaction::Application(tx) => { + self.handle_application_tx(block_number, tx); + } + } + } } } -#[derive(Clone)] -pub struct TributaryReader(D, [u8; 32], PhantomData); -impl TributaryReader { - pub fn genesis(&self) -> [u8; 32] { - self.1 - } +/// The task to scan the Tributary, populating `ProcessorMessages`. +pub struct ScanTributaryTask { + cosign_db: CD, + tributary_db: TD, + set: ValidatorSet, + validators: Vec, + total_weight: u64, + validator_weights: HashMap, + tributary: TributaryReader, + _p2p: PhantomData
<P>
, +} - // Since these values are static once set, they can be safely read from the database without lock - // acquisition - pub fn block(&self, hash: &[u8; 32]) -> Option> { - Blockchain::::block_from_db(&self.0, self.1, hash) - } - pub fn commit(&self, hash: &[u8; 32]) -> Option> { - Blockchain::::commit_from_db(&self.0, self.1, hash) - } - pub fn parsed_commit(&self, hash: &[u8; 32]) -> Option> { - self.commit(hash).map(|commit| Commit::::decode(&mut commit.as_ref()).unwrap()) - } - pub fn block_after(&self, hash: &[u8; 32]) -> Option<[u8; 32]> { - Blockchain::::block_after(&self.0, self.1, hash) - } - pub fn time_of_block(&self, hash: &[u8; 32]) -> Option { - self - .commit(hash) - .map(|commit| Commit::::decode(&mut commit.as_ref()).unwrap().end_time) - } +impl ScanTributaryTask { + /// Create a new instance of this task. + pub fn new( + cosign_db: CD, + tributary_db: TD, + new_set: &NewSetInformation, + tributary: TributaryReader, + ) -> Self { + let mut validators = Vec::with_capacity(new_set.validators.len()); + let mut total_weight = 0; + let mut validator_weights = HashMap::with_capacity(new_set.validators.len()); + for (validator, weight) in new_set.validators.iter().copied() { + let validator = SeraiAddress::from(validator); + let weight = u64::from(weight); + validators.push(validator); + total_weight += weight; + validator_weights.insert(validator, weight); + } - pub fn locally_provided_txs_in_block(&self, hash: &[u8; 32], order: &str) -> bool { - Blockchain::::locally_provided_txs_in_block(&self.0, &self.1, hash, order) - } - - // This isn't static, yet can be read with only minor discrepancy risks - pub fn tip(&self) -> [u8; 32] { - Blockchain::::tip_from_db(&self.0, self.1) + ScanTributaryTask { + cosign_db, + tributary_db, + set: new_set.set, + validators, + total_weight, + validator_weights, + tributary, + _p2p: PhantomData, + } + } +} + +impl ContinuallyRan for ScanTributaryTask { + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let (mut last_block_number, mut last_block_hash) = + TributaryDb::last_handled_tributary_block(&self.tributary_db, self.set) + .unwrap_or((0, self.tributary.genesis())); + + let mut made_progress = false; + while let Some(next) = self.tributary.block_after(&last_block_hash) { + let block = self.tributary.block(&next).unwrap(); + let block_number = last_block_number + 1; + let block_hash = block.hash(); + + // Make sure we have all of the provided transactions for this block + for tx in &block.transactions { + let TransactionKind::Provided(order) = tx.kind() else { + continue; + }; + + // make sure we have all the provided txs in this block locally + if !self.tributary.locally_provided_txs_in_block(&block_hash, order) { + return Err(format!( + "didn't have the provided Transactions on-chain for set (ephemeral error): {:?}", + self.set + )); + } + } + + let mut tributary_txn = self.tributary_db.txn(); + (ScanBlock { + _td: PhantomData::, + _p2p: PhantomData::
<P>
, + cosign_db: &self.cosign_db, + tributary_txn: &mut tributary_txn, + set: self.set, + validators: &self.validators, + total_weight: self.total_weight, + validator_weights: &self.validator_weights, + }) + .handle_block(block_number, block); + TributaryDb::set_last_handled_tributary_block( + &mut tributary_txn, + self.set, + block_number, + block_hash, + ); + last_block_number = block_number; + last_block_hash = block_hash; + tributary_txn.commit(); + + made_progress = true; + } + + Ok(made_progress) + } } } diff --git a/coordinator/tributary/src/transaction.rs b/coordinator/tributary/src/transaction.rs index d7ff4092..f9fd016d 100644 --- a/coordinator/tributary/src/transaction.rs +++ b/coordinator/tributary/src/transaction.rs @@ -1,218 +1,353 @@ -use core::fmt::Debug; +use core::{ops::Deref, fmt::Debug}; use std::io; -use zeroize::Zeroize; -use thiserror::Error; - -use blake2::{Digest, Blake2b512}; +use zeroize::Zeroizing; +use rand_core::{RngCore, CryptoRng}; +use blake2::{digest::typenum::U32, Digest, Blake2b}; use ciphersuite::{ - group::{Group, GroupEncoding}, + group::{ff::Field, GroupEncoding}, Ciphersuite, Ristretto, }; use schnorr::SchnorrSignature; -use crate::{TRANSACTION_SIZE_LIMIT, ReadWrite}; +use scale::Encode; +use borsh::{BorshSerialize, BorshDeserialize}; -#[derive(Clone, PartialEq, Eq, Debug, Error)] -pub enum TransactionError { - /// Transaction exceeded the size limit. - #[error("transaction is too large")] - TooLargeTransaction, - /// Transaction's signer isn't a participant. - #[error("invalid signer")] - InvalidSigner, - /// Transaction's nonce isn't the prior nonce plus one. - #[error("invalid nonce")] - InvalidNonce, - /// Transaction's signature is invalid. - #[error("invalid signature")] - InvalidSignature, - /// Transaction's content is invalid. - #[error("transaction content is invalid")] - InvalidContent, - /// Transaction's signer has too many transactions in the mempool. - #[error("signer has too many transactions in the mempool")] - TooManyInMempool, - /// Provided Transaction added to mempool. - #[error("provided transaction added to mempool")] - ProvidedAddedToMempool, +use serai_client::{primitives::SeraiAddress, validator_sets::primitives::MAX_KEY_SHARES_PER_SET}; + +use messages::sign::VariantSignId; + +use tributary_sdk::{ + ReadWrite, + transaction::{ + Signed as TributarySigned, TransactionError, TransactionKind, Transaction as TransactionTrait, + }, +}; + +/// The round this data is for, within a signing protocol. +#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)] +pub enum SigningProtocolRound { + /// A preprocess. + Preprocess, + /// A signature share. + Share, } -/// Data for a signed transaction. 
-#[derive(Clone, PartialEq, Eq, Debug)] -pub struct Signed { - pub signer: ::G, - pub nonce: u32, - pub signature: SchnorrSignature, -} - -impl ReadWrite for Signed { - fn read(reader: &mut R) -> io::Result { - let signer = Ristretto::read_G(reader)?; - - let mut nonce = [0; 4]; - reader.read_exact(&mut nonce)?; - let nonce = u32::from_le_bytes(nonce); - if nonce >= (u32::MAX - 1) { - Err(io::Error::other("nonce exceeded limit"))?; +impl SigningProtocolRound { + fn nonce(&self) -> u32 { + match self { + SigningProtocolRound::Preprocess => 0, + SigningProtocolRound::Share => 1, } - - let mut signature = SchnorrSignature::::read(reader)?; - if signature.R.is_identity().into() { - // Anyone malicious could remove this and try to find zero signatures - // We should never produce zero signatures though meaning this should never come up - // If it does somehow come up, this is a decent courtesy - signature.zeroize(); - Err(io::Error::other("signature nonce was identity"))?; - } - - Ok(Signed { signer, nonce, signature }) } +} - fn write(&self, writer: &mut W) -> io::Result<()> { - // This is either an invalid signature or a private key leak - if self.signature.R.is_identity().into() { - Err(io::Error::other("signature nonce was identity"))?; - } - writer.write_all(&self.signer.to_bytes())?; - writer.write_all(&self.nonce.to_le_bytes())?; +/// `tributary::Signed` but without the nonce. +/// +/// All of our nonces are deterministic to the type of transaction and fields within. +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub struct Signed { + /// The signer. + signer: ::G, + /// The signature. + signature: SchnorrSignature, +} + +impl BorshSerialize for Signed { + fn serialize(&self, writer: &mut W) -> Result<(), io::Error> { + writer.write_all(self.signer.to_bytes().as_ref())?; self.signature.write(writer) } } +impl BorshDeserialize for Signed { + fn deserialize_reader(reader: &mut R) -> Result { + let signer = Ristretto::read_G(reader)?; + let signature = SchnorrSignature::read(reader)?; + Ok(Self { signer, signature }) + } +} impl Signed { - pub fn read_without_nonce(reader: &mut R, nonce: u32) -> io::Result { - let signer = Ristretto::read_G(reader)?; - - let mut signature = SchnorrSignature::::read(reader)?; - if signature.R.is_identity().into() { - // Anyone malicious could remove this and try to find zero signatures - // We should never produce zero signatures though meaning this should never come up - // If it does somehow come up, this is a decent courtesy - signature.zeroize(); - Err(io::Error::other("signature nonce was identity"))?; - } - - Ok(Signed { signer, nonce, signature }) + /// Fetch the signer. + pub(crate) fn signer(&self) -> ::G { + self.signer } - pub fn write_without_nonce(&self, writer: &mut W) -> io::Result<()> { - // This is either an invalid signature or a private key leak - if self.signature.R.is_identity().into() { - Err(io::Error::other("signature nonce was identity"))?; - } - writer.write_all(&self.signer.to_bytes())?; - self.signature.write(writer) + /// Provide a nonce to convert a `Signed` into a `tributary::Signed`. + fn to_tributary_signed(self, nonce: u32) -> TributarySigned { + TributarySigned { signer: self.signer, nonce, signature: self.signature } } } -#[allow(clippy::large_enum_variant)] -#[derive(Clone, PartialEq, Eq, Debug)] -pub enum TransactionKind { - /// This transaction should be provided by every validator, in an exact order. - /// - /// The contained static string names the orderer to use. 
This allows two distinct provided - /// transaction kinds, without a synchronized order, to be ordered within their own kind without - /// requiring ordering with each other. - /// - /// The only malleability is in when this transaction appears on chain. The block producer will - /// include it when they have it. Block verification will fail for validators without it. - /// - /// If a supermajority of validators produce a commit for a block with a provided transaction - /// which isn't locally held, the block will be added to the local chain. When the transaction is - /// locally provided, it will be compared for correctness to the on-chain version - /// - /// In order to ensure TXs aren't accidentally provided multiple times, all provided transactions - /// must have a unique hash which is also unique to all Unsigned transactions. - Provided(&'static str), +/// The Tributary transaction definition used by Serai +#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] +pub enum Transaction { + /// A vote to remove a participant for invalid behavior + RemoveParticipant { + /// The participant to remove + participant: SeraiAddress, + /// The transaction's signer and signature + signed: Signed, + }, - /// An unsigned transaction, only able to be included by the block producer. - /// - /// Once an Unsigned transaction is included on-chain, it may not be included again. In order to - /// have multiple Unsigned transactions with the same values included on-chain, some distinct - /// nonce must be included in order to cause a distinct hash. - /// - /// The hash must also be unique with all Provided transactions. - Unsigned, + /// A participation in the DKG + DkgParticipation { + /// The serialized participation + participation: Vec, + /// The transaction's signer and signature + signed: Signed, + }, + /// The preprocess to confirm the DKG results on-chain + DkgConfirmationPreprocess { + /// The attempt number of this signing protocol + attempt: u32, + /// The preprocess + preprocess: [u8; 64], + /// The transaction's signer and signature + signed: Signed, + }, + /// The signature share to confirm the DKG results on-chain + DkgConfirmationShare { + /// The attempt number of this signing protocol + attempt: u32, + /// The signature share + share: [u8; 32], + /// The transaction's signer and signature + signed: Signed, + }, - /// A signed transaction. - Signed(Vec, Signed), + /// Intend to cosign a finalized Substrate block + /// + /// When the time comes to start a new cosigning protocol, the most recent Substrate block will + /// be the one selected to be cosigned. + Cosign { + /// The hash of the Substrate block to cosign + substrate_block_hash: [u8; 32], + }, + + /// Note an intended-to-be-cosigned Substrate block as cosigned + /// + /// After producing this cosign, we need to start work on the latest intended-to-be cosigned + /// block. That requires agreement on when this cosign was produced, which we solve by noting + /// this cosign on-chain. + /// + /// We ideally don't have this transaction at all. The coordinator, without access to any of the + /// key shares, could observe the FROST signing session and determine a successful completion. + /// Unfortunately, that functionality is not present in modular-frost, so we do need to support + /// *some* asynchronous flow (where the processor or P2P network informs us of the successful + /// completion). + /// + /// If we use a `Provided` transaction, that requires everyone observe this cosign. 
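+ /// (Per the `Provided` contract, block verification fails for any validator who doesn't
+ /// locally hold a Provided transaction, so every validator would have to obtain the cosign
+ /// itself before advancing.)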
+ /// + /// If we use an `Unsigned` transaction, we can't verify the cosign signature inside + /// `Transaction::verify` unless we embedded the full `SignedCosign` on-chain. The issue is since + /// a Tributary is stateless with regards to the on-chain logic, including `Transaction::verify`, + /// we can't verify the signature against the group's public key unless we also include that (but + /// then we open a DoS where arbitrary group keys are specified to cause inclusion of arbitrary + /// blobs on chain). + /// + /// If we use a `Signed` transaction, we mitigate the DoS risk by having someone to fatally + /// slash. We have horrible performance though as for 100 validators, all 100 will publish this + /// transaction. + /// + /// We could use a signed `Unsigned` transaction, where it includes a signer and signature but + /// isn't technically a Signed transaction. This lets us de-duplicate the transaction premised on + /// its contents. + /// + /// The optimal choice is likely to use a `Provided` transaction. We don't actually need to + /// observe the produced cosign (which is ephemeral). As long as it's agreed the cosign in + /// question no longer needs to produced, which would mean the cosigning protocol at-large + /// cosigning the block in question, it'd be safe to provide this and move on to the next cosign. + Cosigned { + /// The hash of the Substrate block which was cosigned + substrate_block_hash: [u8; 32], + }, + + /// Acknowledge a Substrate block + /// + /// This is provided after the block has been cosigned. + /// + /// With the acknowledgement of a Substrate block, we can whitelist all the `VariantSignId`s + /// resulting from its handling. + SubstrateBlock { + /// The hash of the Substrate block + hash: [u8; 32], + }, + + /// Acknowledge a Batch + /// + /// Once everyone has acknowledged the Batch, we can begin signing it. + Batch { + /// The hash of the Batch's serialization. + /// + /// Generally, we refer to a Batch by its ID/the hash of its instructions. Here, we want to + /// ensure consensus on the Batch, and achieving consensus on its hash is the most effective + /// way to do that. + hash: [u8; 32], + }, + + /// Data from a signing protocol. + Sign { + /// The ID of the object being signed + id: VariantSignId, + /// The attempt number of this signing protocol + attempt: u32, + /// The round this data is for, within the signing protocol + round: SigningProtocolRound, + /// The data itself + /// + /// There will be `n` blobs of data where `n` is the amount of key shares the validator sending + /// this transaction has. + data: Vec>, + /// The transaction's signer and signature + signed: Signed, + }, + + /// The local view of slashes observed by the transaction's sender + SlashReport { + /// The slash points accrued by each validator + slash_points: Vec, + /// The transaction's signer and signature + signed: Signed, + }, } -// TODO: Should this be renamed TransactionTrait now that a literal Transaction exists? -// Or should the literal Transaction be renamed to Event? -pub trait Transaction: 'static + Send + Sync + Clone + Eq + Debug + ReadWrite { - /// Return what type of transaction this is. - fn kind(&self) -> TransactionKind; +impl ReadWrite for Transaction { + fn read(reader: &mut R) -> io::Result { + borsh::from_reader(reader) + } - /// Return the hash of this transaction. - /// - /// The hash must NOT commit to the signature. 
- fn hash(&self) -> [u8; 32]; + fn write(&self, writer: &mut W) -> io::Result<()> { + borsh::to_writer(writer, self) + } +} - /// Perform transaction-specific verification. - fn verify(&self) -> Result<(), TransactionError>; +impl TransactionTrait for Transaction { + fn kind(&self) -> TransactionKind { + match self { + Transaction::RemoveParticipant { participant, signed } => TransactionKind::Signed( + (b"RemoveParticipant", participant).encode(), + signed.to_tributary_signed(0), + ), - /// Obtain the challenge for this transaction's signature. - /// - /// Do not override this unless you know what you're doing. - /// - /// Panics if called on non-signed transactions. - fn sig_hash(&self, genesis: [u8; 32]) -> ::F { - match self.kind() { - TransactionKind::Signed(order, Signed { signature, .. }) => { - ::F::from_bytes_mod_order_wide( - &Blake2b512::digest( - [ - b"Tributary Signed Transaction", - genesis.as_ref(), - &self.hash(), - order.as_ref(), - signature.R.to_bytes().as_ref(), - ] - .concat(), - ) - .into(), - ) + Transaction::DkgParticipation { signed, .. } => { + TransactionKind::Signed(b"DkgParticipation".encode(), signed.to_tributary_signed(0)) + } + Transaction::DkgConfirmationPreprocess { attempt, signed, .. } => TransactionKind::Signed( + (b"DkgConfirmation", attempt).encode(), + signed.to_tributary_signed(0), + ), + Transaction::DkgConfirmationShare { attempt, signed, .. } => TransactionKind::Signed( + (b"DkgConfirmation", attempt).encode(), + signed.to_tributary_signed(1), + ), + + Transaction::Cosign { .. } => TransactionKind::Provided("Cosign"), + Transaction::Cosigned { .. } => TransactionKind::Provided("Cosigned"), + // TODO: Provide this + Transaction::SubstrateBlock { .. } => TransactionKind::Provided("SubstrateBlock"), + // TODO: Provide this + Transaction::Batch { .. } => TransactionKind::Provided("Batch"), + + Transaction::Sign { id, attempt, round, signed, .. } => TransactionKind::Signed( + (b"Sign", id, attempt).encode(), + signed.to_tributary_signed(round.nonce()), + ), + + Transaction::SlashReport { signed, .. } => { + TransactionKind::Signed(b"SlashReport".encode(), signed.to_tributary_signed(0)) } - _ => panic!("sig_hash called on non-signed transaction"), } } -} -pub trait GAIN: FnMut(&::G, &[u8]) -> Option {} -impl::G, &[u8]) -> Option> GAIN for F {} - -pub(crate) fn verify_transaction( - tx: &T, - genesis: [u8; 32], - get_and_increment_nonce: &mut F, -) -> Result<(), TransactionError> { - if tx.serialize().len() > TRANSACTION_SIZE_LIMIT { - Err(TransactionError::TooLargeTransaction)?; + fn hash(&self) -> [u8; 32] { + let mut tx = ReadWrite::serialize(self); + if let TransactionKind::Signed(_, signed) = self.kind() { + // Make sure the part we're cutting off is the signature + assert_eq!(tx.drain((tx.len() - 64) ..).collect::>(), signed.signature.serialize()); + } + Blake2b::::digest(&tx).into() } - tx.verify()?; + // This is a stateless verification which we use to enforce some size limits. + fn verify(&self) -> Result<(), TransactionError> { + #[allow(clippy::match_same_arms)] + match self { + // Fixed-length TX + Transaction::RemoveParticipant { .. } => {} - match tx.kind() { - TransactionKind::Provided(_) | TransactionKind::Unsigned => {} - TransactionKind::Signed(order, Signed { signer, nonce, signature }) => { - if let Some(next_nonce) = get_and_increment_nonce(&signer, &order) { - if nonce != next_nonce { - Err(TransactionError::InvalidNonce)?; + // TODO: MAX_DKG_PARTICIPATION_LEN + Transaction::DkgParticipation { .. 
} => {} + // These are fixed-length TXs + Transaction::DkgConfirmationPreprocess { .. } | Transaction::DkgConfirmationShare { .. } => {} + + // Provided TXs + Transaction::Cosign { .. } | + Transaction::Cosigned { .. } | + Transaction::SubstrateBlock { .. } | + Transaction::Batch { .. } => {} + + Transaction::Sign { data, .. } => { + if data.len() > usize::try_from(MAX_KEY_SHARES_PER_SET).unwrap() { + Err(TransactionError::InvalidContent)? } - } else { - // Not a participant - Err(TransactionError::InvalidSigner)?; + // TODO: MAX_SIGN_LEN } - // TODO: Use a batch verification here - if !signature.verify(signer, tx.sig_hash(genesis)) { - Err(TransactionError::InvalidSignature)?; + Transaction::SlashReport { slash_points, .. } => { + if slash_points.len() > usize::try_from(MAX_KEY_SHARES_PER_SET).unwrap() { + Err(TransactionError::InvalidContent)? + } + } + }; + Ok(()) + } +} + +impl Transaction { + /// Sign a transaction. + /// + /// Panics if signing a transaction whose type isn't `TransactionKind::Signed`. + pub fn sign( + &mut self, + rng: &mut R, + genesis: [u8; 32], + key: &Zeroizing<::F>, + ) { + fn signed(tx: &mut Transaction) -> &mut Signed { + #[allow(clippy::match_same_arms)] // This doesn't make semantic sense here + match tx { + Transaction::RemoveParticipant { ref mut signed, .. } | + Transaction::DkgParticipation { ref mut signed, .. } | + Transaction::DkgConfirmationPreprocess { ref mut signed, .. } => signed, + Transaction::DkgConfirmationShare { ref mut signed, .. } => signed, + + Transaction::Cosign { .. } => panic!("signing CosignSubstrateBlock"), + Transaction::Cosigned { .. } => panic!("signing Cosigned"), + Transaction::SubstrateBlock { .. } => panic!("signing SubstrateBlock"), + Transaction::Batch { .. } => panic!("signing Batch"), + + Transaction::Sign { ref mut signed, .. } => signed, + + Transaction::SlashReport { ref mut signed, .. } => signed, } } - } - Ok(()) + // Decide the nonce to sign with + let sig_nonce = Zeroizing::new(::F::random(rng)); + + { + // Set the signer and the nonce + let signed = signed(self); + signed.signer = Ristretto::generator() * key.deref(); + signed.signature.R = ::generator() * sig_nonce.deref(); + } + + // Get the signature hash (which now includes `R || A` making it valid as the challenge) + let sig_hash = self.sig_hash(genesis); + + // Sign the signature + signed(self).signature = SchnorrSignature::::sign(key, sig_nonce, sig_hash); + } } diff --git a/deny.toml b/deny.toml index f530b6a2..23ddd386 100644 --- a/deny.toml +++ b/deny.toml @@ -72,9 +72,10 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-ethereum-processor" }, { allow = ["AGPL-3.0"], name = "serai-monero-processor" }, - { allow = ["AGPL-3.0"], name = "tributary-chain" }, + { allow = ["AGPL-3.0"], name = "tributary-sdk" }, { allow = ["AGPL-3.0"], name = "serai-cosign" }, { allow = ["AGPL-3.0"], name = "serai-coordinator-substrate" }, + { allow = ["AGPL-3.0"], name = "serai-coordinator-tributary" }, { allow = ["AGPL-3.0"], name = "serai-coordinator-p2p" }, { allow = ["AGPL-3.0"], name = "serai-coordinator-libp2p-p2p" }, { allow = ["AGPL-3.0"], name = "serai-coordinator" }, From 77d60660d2225d5c7795cde896383d47e4f0dc7e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 11 Jan 2025 05:12:56 -0500 Subject: [PATCH 288/368] Move spawn_cosign from main.rs into tributary.rs Also refines the tasks within tributary.rs a good bit. 
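The tasks in question all follow the `serai-task` pattern visible in the diff below: a
struct implements `ContinuallyRan`, and `Task::new()` plus `continually_run` wire it into a
spawned loop. A minimal sketch of that shape, where `ExampleTask` and its body are
hypothetical and only the plumbing mirrors this patch:

use core::future::Future;

use serai_task::{Task, ContinuallyRan};

// A hypothetical task, shaped like the tasks within tributary.rs
struct ExampleTask {
  queued: Vec<u64>,
}

impl ContinuallyRan for ExampleTask {
  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, String>> {
    async move {
      // Drain whatever work is currently queued, reporting whether progress was
      // made; errors are Strings, some of which the tasks below flag as ephemeral
      let made_progress = !self.queued.is_empty();
      self.queued.clear();
      Ok(made_progress)
    }
  }
}

async fn spawn_example() {
  // `Task::new()` yields the task definition and a handle to it
  let (example_task_def, example_task_handle) = Task::new();
  // Handles passed in the Vec are held by the running task, so (per the comment
  // in the diff below) a task whose only handle is held here is dropped with it
  tokio::spawn((ExampleTask { queued: vec![1, 2, 3] }).continually_run(example_task_def, vec![]));
  let _ = example_task_handle;
}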
--- coordinator/src/main.rs | 76 +------- coordinator/src/substrate.rs | 2 +- coordinator/src/tributary.rs | 326 +++++++++++++++++++++++------------ 3 files changed, 225 insertions(+), 179 deletions(-) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 0e2db23c..9f2e9e7a 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -4,7 +4,6 @@ use std::{sync::Arc, time::Instant}; use zeroize::{Zeroize, Zeroizing}; use rand_core::{RngCore, OsRng}; -use blake2::{digest::typenum::U32, Digest, Blake2s}; use ciphersuite::{ group::{ff::PrimeField, GroupEncoding}, Ciphersuite, Ristretto, @@ -12,23 +11,19 @@ use ciphersuite::{ use tokio::sync::mpsc; -use scale::Encode; -use serai_client::{primitives::PublicKey, validator_sets::primitives::ValidatorSet, Serai}; +use serai_client::{primitives::PublicKey, Serai}; use message_queue::{Service, client::MessageQueue}; -use tributary_sdk::Tributary; - use serai_task::{Task, TaskHandle, ContinuallyRan}; use serai_cosign::{SignedCosign, Cosigning}; -use serai_coordinator_substrate::{NewSetInformation, CanonicalEventStream, EphemeralEventStream}; -use serai_coordinator_tributary::{Transaction, ScanTributaryTask}; +use serai_coordinator_substrate::{CanonicalEventStream, EphemeralEventStream}; +use serai_coordinator_tributary::Transaction; mod db; use db::*; mod tributary; -use tributary::ScanTributaryMessagesTask; mod substrate; use substrate::SubstrateTask; @@ -104,69 +99,6 @@ fn spawn_cosigning( }); } -/// Spawn an existing Tributary. -/// -/// This will spawn the Tributary, the Tributary scanning task, and inform the P2P network. -async fn spawn_tributary( - db: Db, - message_queue: Arc, - p2p: P, - p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary)>, - set: NewSetInformation, - serai_key: Zeroizing<::F>, -) { - // Don't spawn retired Tributaries - if RetiredTributary::get(&db, set.set.network).map(|session| session.0) >= Some(set.set.session.0) - { - return; - } - - let genesis = <[u8; 32]>::from(Blake2s::::digest((set.serai_block, set.set).encode())); - - // Since the Serai block will be finalized, then cosigned, before we handle this, this time will - // be a couple of minutes stale. While the Tributary will still function with a start time in the - // past, the Tributary will immediately incur round timeouts. We reduce these by adding a - // constant delay of a couple of minutes. 
- const TRIBUTARY_START_TIME_DELAY: u64 = 120; - let start_time = set.declaration_time + TRIBUTARY_START_TIME_DELAY; - - let mut tributary_validators = Vec::with_capacity(set.validators.len()); - for (validator, weight) in set.validators.iter().copied() { - let validator_key = ::read_G(&mut validator.0.as_slice()) - .expect("Serai validator had an invalid public key"); - let weight = u64::from(weight); - tributary_validators.push((validator_key, weight)); - } - - let tributary_db = tributary_db(set.set); - let tributary = - Tributary::new(tributary_db.clone(), genesis, start_time, serai_key, tributary_validators, p2p) - .await - .unwrap(); - let reader = tributary.reader(); - - p2p_add_tributary - .send((set.set, tributary.clone())) - .expect("p2p's add_tributary channel was closed?"); - - // Spawn the task to send all messages from the Tributary scanner to the message-queue - let (scan_tributary_messages_task_def, scan_tributary_messages_task) = Task::new(); - tokio::spawn( - (ScanTributaryMessagesTask { tributary_db: tributary_db.clone(), set: set.set, message_queue }) - .continually_run(scan_tributary_messages_task_def, vec![]), - ); - - let (scan_tributary_task_def, scan_tributary_task) = Task::new(); - tokio::spawn( - ScanTributaryTask::<_, _, P>::new(db.clone(), tributary_db, &set, reader) - // This is the only handle for this ScanTributaryMessagesTask, so when this task is dropped, - // it will be too - .continually_run(scan_tributary_task_def, vec![scan_tributary_messages_task]), - ); - - tokio::spawn(tributary::run(db, set, tributary, scan_tributary_task)); -} - #[tokio::main] async fn main() { // Override the panic handler with one which will panic if any tokio task panics @@ -297,7 +229,7 @@ async fn main() { // Spawn all Tributaries on-disk for tributary in existing_tributaries_at_boot { - spawn_tributary( + crate::tributary::spawn_tributary( db.clone(), message_queue.clone(), p2p.clone(), diff --git a/coordinator/src/substrate.rs b/coordinator/src/substrate.rs index 7ea5c257..8b5d2b41 100644 --- a/coordinator/src/substrate.rs +++ b/coordinator/src/substrate.rs @@ -141,7 +141,7 @@ impl ContinuallyRan for SubstrateTask

{ // Now spawn the Tributary // If we reboot after committing the txn, but before this is called, this will be called // on boot - crate::spawn_tributary( + crate::tributary::spawn_tributary( self.db.clone(), self.message_queue.clone(), self.p2p.clone(), diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index 4fb193b3..56fe8a37 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -1,28 +1,138 @@ use core::{future::Future, time::Duration}; use std::sync::Arc; -use serai_db::{DbTxn, Db}; +use zeroize::Zeroizing; +use blake2::{digest::typenum::U32, Digest, Blake2s}; +use ciphersuite::{Ciphersuite, Ristretto}; +use tokio::sync::mpsc; + +use serai_db::{DbTxn, Db as DbTrait}; + +use scale::Encode; use serai_client::validator_sets::primitives::ValidatorSet; use tributary_sdk::{ProvidedError, Tributary}; -use serai_task::{TaskHandle, ContinuallyRan}; +use serai_task::{Task, TaskHandle, ContinuallyRan}; use message_queue::{Service, Metadata, client::MessageQueue}; use serai_cosign::Cosigning; use serai_coordinator_substrate::NewSetInformation; -use serai_coordinator_tributary::{Transaction, ProcessorMessages}; +use serai_coordinator_tributary::{Transaction, ProcessorMessages, ScanTributaryTask}; use serai_coordinator_p2p::P2p; -pub(crate) struct ScanTributaryMessagesTask { +use crate::Db; + +/// Provides Cosign/Cosigned Transactions onto the Tributary. +pub(crate) struct ProvideCosignCosignedTransactionsTask { + pub(crate) db: CD, + pub(crate) set: NewSetInformation, + pub(crate) tributary: Tributary, +} +impl ContinuallyRan + for ProvideCosignCosignedTransactionsTask +{ + fn run_iteration(&mut self) -> impl Send + Future> { + /// Provide a Provided Transaction to the Tributary. + /// + /// This is not a well-designed function. This is specific to the context in which it's called, + /// within this file. It should only be considered an internal helper for this domain alone.
+ async fn provide_transaction( + set: ValidatorSet, + tributary: &Tributary, + tx: Transaction, + ) { + match tributary.provide_transaction(tx.clone()).await { + // The Tributary uses its own DB, so we may provide this multiple times if we reboot before + // committing the txn which provoked this + Ok(()) | Err(ProvidedError::AlreadyProvided) => {} + Err(ProvidedError::NotProvided) => { + panic!("providing a Transaction which wasn't a Provided transaction: {tx:?}"); + } + Err(ProvidedError::InvalidProvided(e)) => { + panic!("providing an invalid Provided transaction, tx: {tx:?}, error: {e:?}") + } + Err(ProvidedError::LocalMismatchesOnChain) => loop { + // The Tributary's scan task won't advance if we don't have the Provided transactions + // present on-chain, and this enters an infinite loop to block the calling task from + // advancing + log::error!( + "Tributary {:?} was supposed to provide {:?} but peers disagree, halting Tributary", + set, + tx, + ); + // Print this every five minutes as this does need to be handled + tokio::time::sleep(Duration::from_secs(5 * 60)).await; + }, + } + } + + async move { + let mut made_progress = false; + + // Check if we produced any cosigns we were supposed to + let mut pending_notable_cosign = false; + loop { + let mut txn = self.db.txn(); + + // Fetch the next cosign this tributary should handle + let Some(cosign) = crate::PendingCosigns::try_recv(&mut txn, self.set.set) else { break }; + pending_notable_cosign = cosign.notable; + + // If we (Serai) haven't cosigned this block, break as this is still pending + let Ok(latest) = Cosigning::::latest_cosigned_block_number(&txn) else { break }; + if latest < cosign.block_number { + break; + } + + // Because we've cosigned it, provide the TX for that + provide_transaction( + self.set.set, + &self.tributary, + Transaction::Cosigned { substrate_block_hash: cosign.block_hash }, + ) + .await; + // Clear pending_notable_cosign since this cosign isn't pending + pending_notable_cosign = false; + + // Commit the txn to clear this from PendingCosigns + txn.commit(); + made_progress = true; + } + + // If we don't have any notable cosigns pending, provide the next set of cosign intents + if !pending_notable_cosign { + let mut txn = self.db.txn(); + // intended_cosigns will only yield up to and including the next notable cosign + for cosign in Cosigning::::intended_cosigns(&mut txn, self.set.set) { + // Flag this cosign as pending + crate::PendingCosigns::send(&mut txn, self.set.set, &cosign); + // Provide the transaction to queue it for work + provide_transaction( + self.set.set, + &self.tributary, + Transaction::Cosign { substrate_block_hash: cosign.block_hash }, + ) + .await; + } + txn.commit(); + made_progress = true; + } + + Ok(made_progress) + } + } +} + +/// Takes the messages from ScanTributaryTask and publishes them to the message-queue. +pub(crate) struct TributaryProcessorMessagesTask { pub(crate) tributary_db: TD, pub(crate) set: ValidatorSet, pub(crate) message_queue: Arc, } - -impl ContinuallyRan for ScanTributaryMessagesTask { +impl ContinuallyRan for TributaryProcessorMessagesTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; @@ -45,118 +155,122 @@ impl ContinuallyRan for ScanTributaryMessagesTask { } } -async fn provide_transaction( +/// Run the scan task whenever the Tributary adds a new block. 
+async fn scan_on_new_block( + db: CD, set: ValidatorSet, - tributary: &Tributary, - tx: Transaction, -) { - match tributary.provide_transaction(tx.clone()).await { - // The Tributary uses its own DB, so we may provide this multiple times if we reboot before - // committing the txn which provoked this - Ok(()) | Err(ProvidedError::AlreadyProvided) => {} - Err(ProvidedError::NotProvided) => { - panic!("providing a Transaction which wasn't a Provided transaction: {tx:?}"); - } - Err(ProvidedError::InvalidProvided(e)) => { - panic!("providing an invalid Provided transaction, tx: {tx:?}, error: {e:?}") - } - Err(ProvidedError::LocalMismatchesOnChain) => loop { - // The Tributary's scan task won't advance if we don't have the Provided transactions - // present on-chain, and this enters an infinite loop to block the calling task from - // advancing - log::error!( - "Tributary {:?} was supposed to provide {:?} but peers disagree, halting Tributary", - set, - tx, - ); - // Print this every five minutes as this does need to be handled - tokio::time::sleep(Duration::from_secs(5 * 60)).await; - }, - } -} - -/// Run a Tributary. -/// -/// The Tributary handle existing causes the Tributary's consensus engine to be run. We distinctly -/// have `ScanTributaryTask` to scan the produced blocks. This function provides Provided -/// transactions onto the Tributary and invokes ScanTributaryTask whenver a new Tributary block is -/// produced (instead of only on the standard interval). -pub(crate) async fn run( - mut db: CD, - set: NewSetInformation, tributary: Tributary, scan_tributary_task: TaskHandle, + tasks_to_keep_alive: Vec, ) { loop { // Break once this Tributary is retired - if crate::RetiredTributary::get(&db, set.set.network).map(|session| session.0) >= - Some(set.set.session.0) + if crate::RetiredTributary::get(&db, set.network).map(|session| session.0) >= + Some(set.session.0) { + drop(tasks_to_keep_alive); break; } - // Check if we produced any cosigns we were supposed to - let mut pending_notable_cosign = false; - loop { - let mut txn = db.txn(); - - // Fetch the next cosign this tributary should handle - let Some(cosign) = crate::PendingCosigns::try_recv(&mut txn, set.set) else { break }; - pending_notable_cosign = cosign.notable; - - // If we (Serai) haven't cosigned this block, break as this is still pending - let Ok(latest) = Cosigning::::latest_cosigned_block_number(&txn) else { break }; - if latest < cosign.block_number { - break; - } - - // Because we've cosigned it, provide the TX for that - provide_transaction( - set.set, - &tributary, - Transaction::Cosigned { substrate_block_hash: cosign.block_hash }, - ) - .await; - // Clear pending_notable_cosign since this cosign isn't pending - pending_notable_cosign = false; - - // Commit the txn to clear this from PendingCosigns - txn.commit(); - } - - // If we don't have any notable cosigns pending, provide the next set of cosign intents - if pending_notable_cosign { - let mut txn = db.txn(); - // intended_cosigns will only yield up to and including the next notable cosign - for cosign in Cosigning::::intended_cosigns(&mut txn, set.set) { - // Flag this cosign as pending - crate::PendingCosigns::send(&mut txn, set.set, &cosign); - // Provide the transaction to queue it for work - provide_transaction( - set.set, - &tributary, - Transaction::Cosign { substrate_block_hash: cosign.block_hash }, - ) - .await; - } - txn.commit(); - } - // Have the tributary scanner run as soon as there's a new block - // This is wrapped in a timeout so we don't go too 
long without running the above code - match tokio::time::timeout( - Duration::from_millis(tributary_sdk::tendermint::TARGET_BLOCK_TIME.into()), - tributary.next_block_notification().await, - ) - .await - { - // Future resolved within the timeout, notification - Ok(Ok(())) => scan_tributary_task.run_now(), - // Future resolved within the timeout, notification failed due to sender being dropped + match tributary.next_block_notification().await.await { + Ok(()) => scan_tributary_task.run_now(), // unreachable since this owns the tributary object and doesn't drop it - Ok(Err(_)) => panic!("tributary was dropped causing notification to error"), - // Future didn't resolve within the timeout - Err(_) => {} + Err(_) => panic!("tributary was dropped causing notification to error"), } } } + +/// Spawn a Tributary. +/// +/// This will spawn the Tributary, the Tributary scan task, forward the messages from the scan task +/// to the message queue, provide Cosign/Cosigned transactions, and inform the P2P network. +pub(crate) async fn spawn_tributary( + db: Db, + message_queue: Arc, + p2p: P, + p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary)>, + set: NewSetInformation, + serai_key: Zeroizing<::F>, +) { + // Don't spawn retired Tributaries + if crate::db::RetiredTributary::get(&db, set.set.network).map(|session| session.0) >= + Some(set.set.session.0) + { + return; + } + + let genesis = <[u8; 32]>::from(Blake2s::::digest((set.serai_block, set.set).encode())); + + // Since the Serai block will be finalized, then cosigned, before we handle this, this time will + // be a couple of minutes stale. While the Tributary will still function with a start time in the + // past, the Tributary will immediately incur round timeouts. We reduce these by adding a + // constant delay of a couple of minutes. 
+ const TRIBUTARY_START_TIME_DELAY: u64 = 120; + let start_time = set.declaration_time + TRIBUTARY_START_TIME_DELAY; + + let mut tributary_validators = Vec::with_capacity(set.validators.len()); + for (validator, weight) in set.validators.iter().copied() { + let validator_key = ::read_G(&mut validator.0.as_slice()) + .expect("Serai validator had an invalid public key"); + let weight = u64::from(weight); + tributary_validators.push((validator_key, weight)); + } + + // Spawn the Tributary + let tributary_db = crate::db::tributary_db(set.set); + let tributary = + Tributary::new(tributary_db.clone(), genesis, start_time, serai_key, tributary_validators, p2p) + .await + .unwrap(); + let reader = tributary.reader(); + + // Inform the P2P network + p2p_add_tributary + .send((set.set, tributary.clone())) + .expect("p2p's add_tributary channel was closed?"); + + // Spawn the task to provide Cosign/Cosigned transactions onto the Tributary + let (provide_cosign_cosigned_transactions_task_def, provide_cosign_cosigned_transactions_task) = + Task::new(); + tokio::spawn( + (ProvideCosignCosignedTransactionsTask { + db: db.clone(), + set: set.clone(), + tributary: tributary.clone(), + }) + .continually_run(provide_cosign_cosigned_transactions_task_def, vec![]), + ); + + // Spawn the task to send all messages from the Tributary scanner to the message-queue + let (scan_tributary_messages_task_def, scan_tributary_messages_task) = Task::new(); + tokio::spawn( + (TributaryProcessorMessagesTask { + tributary_db: tributary_db.clone(), + set: set.set, + message_queue, + }) + .continually_run(scan_tributary_messages_task_def, vec![]), + ); + + // Spawn the scan task + let (scan_tributary_task_def, scan_tributary_task) = Task::new(); + tokio::spawn( + ScanTributaryTask::<_, _, P>::new(db.clone(), tributary_db, &set, reader) + // This is the only handle for this TributaryProcessorMessagesTask, so when this task is + // dropped, it will be too + .continually_run(scan_tributary_task_def, vec![scan_tributary_messages_task]), + ); + + // Whenever a new block occurs, immediately run the scan task + // This function also preserves the ProvideCosignCosignedTransactionsTask handle until the + // Tributary is retired, ensuring it isn't dropped prematurely and that the task doesn't run ad + // infinitum + tokio::spawn(scan_on_new_block( + db, + set.set, + tributary, + scan_tributary_task, + vec![provide_cosign_cosigned_transactions_task], + )); +} From e731b546ab978d15bd7b777fc73a07e916a8f5e6 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 11 Jan 2025 05:13:43 -0500 Subject: [PATCH 289/368] Update documentation --- coordinator/README.md | 32 ++++++++++++++++++---------- coordinator/tributary-sdk/Cargo.toml | 10 ++++----- 2 files changed, 26 insertions(+), 16 deletions(-) diff --git a/coordinator/README.md b/coordinator/README.md index 27552a5a..88d2ef63 100644 --- a/coordinator/README.md +++ b/coordinator/README.md @@ -1,19 +1,29 @@ # Coordinator -- [`tendermint`](/tributary/tendermint) is an implementation of the Tendermint BFT algorithm. +- [`tendermint`](/tributary/tendermint) is an implementation of the Tendermint + BFT algorithm. -- [`tributary`](./tributary) is a micro-blockchain framework. Instead of a producing a blockchain - daemon like the Polkadot SDK or Cosmos SDK intend to, `tributary` is solely intended to be an - embedded asynchronous task within an application. +- [`tributary-sdk`](./tributary-sdk) is a micro-blockchain framework.
Instead + of producing a blockchain daemon like the Polkadot SDK or Cosmos SDK intend + to, `tributary` is solely intended to be an embedded asynchronous task within + an application. - The Serai coordinator spawns a tributary for each validator set it's coordinating. This allows - the participating validators to communicate in a byzantine-fault-tolerant manner (relying on - Tendermint for consensus). + The Serai coordinator spawns a tributary for each validator set it's + coordinating. This allows the participating validators to communicate in a + byzantine-fault-tolerant manner (relying on Tendermint for consensus). -- [`cosign`](./cosign) contains a library to decide which Substrate blocks should be cosigned and - to evaluate cosigns. +- [`cosign`](./cosign) contains a library to decide which Substrate blocks + should be cosigned and to evaluate cosigns. -- [`substrate`](./substrate) contains a library to index the Substrate blockchain and handle its - events. +- [`substrate`](./substrate) contains a library to index the Substrate + blockchain and handle its events. + +- [`tributary`](./tributary) is our instantiation of the Tributary SDK for the + Serai processor. It includes the `Transaction` definition and deferred + execution logic. + +- [`p2p`](./p2p) is our abstract P2P API to service the Coordinator. + +- [`libp2p`](./p2p/libp2p) is our libp2p-backed implementation of the P2P API. - [`src`](./src) contains the source code for the Coordinator binary itself. diff --git a/coordinator/tributary-sdk/Cargo.toml b/coordinator/tributary-sdk/Cargo.toml index be72ff0c..2e92c03d 100644 --- a/coordinator/tributary-sdk/Cargo.toml +++ b/coordinator/tributary-sdk/Cargo.toml @@ -25,20 +25,20 @@ rand = { version = "0.8", default-features = false, features = ["std"] } rand_chacha = { version = "0.3", default-features = false, features = ["std"] } blake2 = { version = "0.10", default-features = false, features = ["std"] } -transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["std", "recommended"] } +transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", default-features = false, features = ["std", "recommended"] } -ciphersuite = { package = "ciphersuite", path = "../../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] } -schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] } +ciphersuite = { package = "ciphersuite", path = "../../crypto/ciphersuite", version = "0.4", default-features = false, features = ["std", "ristretto"] } +schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", version = "0.5", default-features = false, features = ["std"] } hex = { version = "0.4", default-features = false, features = ["std"] } log = { version = "0.4", default-features = false, features = ["std"] } -serai-db = { path = "../../common/db" } +serai-db = { path = "../../common/db", version = "0.1" } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } futures-util = { version = "0.3", default-features = false, features = ["std", "sink", "channel"] } futures-channel = { version = "0.3", default-features = false, features = ["std", "sink"] } -tendermint = { package = "tendermint-machine", path = "./tendermint" } +tendermint = { package = "tendermint-machine", path = "./tendermint", version = "0.2" } tokio = { version = "1",
default-features = false, features = ["sync", "time", "rt"] } From 74106b025fecb13f90dd389f3175ae5228ee6466 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 11 Jan 2025 06:51:55 -0500 Subject: [PATCH 290/368] Publish SlashReport onto the Tributary --- coordinator/src/main.rs | 5 +- coordinator/src/tributary.rs | 103 +++++++++++++++++++---- coordinator/substrate/src/ephemeral.rs | 2 +- coordinator/substrate/src/lib.rs | 12 +-- coordinator/tributary/src/db.rs | 8 +- coordinator/tributary/src/lib.rs | 10 +++ coordinator/tributary/src/transaction.rs | 14 ++- 7 files changed, 124 insertions(+), 30 deletions(-) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 9f2e9e7a..0dae9b40 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -17,7 +17,7 @@ use message_queue::{Service, client::MessageQueue}; use serai_task::{Task, TaskHandle, ContinuallyRan}; use serai_cosign::{SignedCosign, Cosigning}; -use serai_coordinator_substrate::{CanonicalEventStream, EphemeralEventStream}; +use serai_coordinator_substrate::{CanonicalEventStream, EphemeralEventStream, SignSlashReport}; use serai_coordinator_tributary::Transaction; mod db; @@ -148,6 +148,8 @@ async fn main() { prune_tributary_db(to_cleanup); // Drain the cosign intents created for this set while !Cosigning::::intended_cosigns(&mut txn, to_cleanup).is_empty() {} + // Remove the SignSlashReport notification + SignSlashReport::try_recv(&mut txn, to_cleanup); } // Remove retired Tributaries from ActiveTributaries @@ -198,7 +200,6 @@ async fn main() { }; // Spawn the Substrate scanners - // TODO: SignSlashReport let (substrate_task_def, substrate_task) = Task::new(); let (substrate_canonical_task_def, substrate_canonical_task) = Task::new(); tokio::spawn( diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index 56fe8a37..55fae37c 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -2,6 +2,7 @@ use core::{future::Future, time::Duration}; use std::sync::Arc; use zeroize::Zeroizing; +use rand_core::OsRng; use blake2::{digest::typenum::U32, Digest, Blake2s}; use ciphersuite::{Ciphersuite, Ristretto}; @@ -12,14 +13,14 @@ use serai_db::{DbTxn, Db as DbTrait}; use scale::Encode; use serai_client::validator_sets::primitives::ValidatorSet; -use tributary_sdk::{ProvidedError, Tributary}; +use tributary_sdk::{TransactionError, ProvidedError, Tributary}; use serai_task::{Task, TaskHandle, ContinuallyRan}; use message_queue::{Service, Metadata, client::MessageQueue}; use serai_cosign::Cosigning; -use serai_coordinator_substrate::NewSetInformation; +use serai_coordinator_substrate::{NewSetInformation, SignSlashReport}; use serai_coordinator_tributary::{Transaction, ProcessorMessages, ScanTributaryTask}; use serai_coordinator_p2p::P2p; @@ -27,9 +28,9 @@ use crate::Db; /// Provides Cosign/Cosigned Transactions onto the Tributary. pub(crate) struct ProvideCosignCosignedTransactionsTask { - pub(crate) db: CD, - pub(crate) set: NewSetInformation, - pub(crate) tributary: Tributary, + db: CD, + set: NewSetInformation, + tributary: Tributary, } impl ContinuallyRan for ProvideCosignCosignedTransactionsTask @@ -128,9 +129,9 @@ impl ContinuallyRan /// Takes the messages from ScanTributaryTask and publishes them to the message-queue. 
pub(crate) struct TributaryProcessorMessagesTask { - pub(crate) tributary_db: TD, - pub(crate) set: ValidatorSet, - pub(crate) message_queue: Arc, + tributary_db: TD, + set: ValidatorSet, + message_queue: Arc, } impl ContinuallyRan for TributaryProcessorMessagesTask { fn run_iteration(&mut self) -> impl Send + Future> { @@ -155,6 +156,51 @@ impl ContinuallyRan for TributaryProcessorMessagesTask { } } +/// Checks for the notification to sign a slash report and does so if present. +pub(crate) struct SignSlashReportTask { + db: CD, + tributary_db: TD, + tributary: Tributary, + set: NewSetInformation, + key: Zeroizing<::F>, +} +impl ContinuallyRan for SignSlashReportTask { + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut txn = self.db.txn(); + let Some(()) = SignSlashReport::try_recv(&mut txn, self.set.set) else { return Ok(false) }; + + // Fetch the slash report for this Tributary + let mut tx = + serai_coordinator_tributary::slash_report_transaction(&self.tributary_db, &self.set); + tx.sign(&mut OsRng, self.tributary.genesis(), &self.key); + + let res = self.tributary.add_transaction(tx.clone()).await; + match &res { + // Fresh publication, already published + Ok(true | false) => {} + Err( + TransactionError::TooLargeTransaction | + TransactionError::InvalidSigner | + TransactionError::InvalidNonce | + TransactionError::InvalidSignature | + TransactionError::InvalidContent, + ) => { + panic!("created an invalid SlashReport transaction, tx: {tx:?}, err: {res:?}"); + } + // We've published too many transactions recently + // Drop this txn to try to publish it again later on a future iteration + Err(TransactionError::TooManyInMempool) => return Ok(false), + // This isn't a Provided transaction so this should never be hit + Err(TransactionError::ProvidedAddedToMempool) => unreachable!(), + } + + txn.commit(); + Ok(true) + } + } +} + /// Run the scan task whenever the Tributary adds a new block. async fn scan_on_new_block( db: CD, @@ -183,8 +229,14 @@ async fn scan_on_new_block( /// Spawn a Tributary. /// -/// This will spawn the Tributary, the Tributary scan task, forward the messages from the scan task -/// to the message queue, provide Cosign/Cosigned transactions, and inform the P2P network. 
+/// This will: +/// - Spawn the Tributary +/// - Inform the P2P network of the Tributary +/// - Spawn the ScanTributaryTask +/// - Spawn the ProvideCosignCosignedTransactionsTask +/// - Spawn the TributaryProcessorMessagesTask +/// - Spawn the SignSlashReportTask +/// - Iterate the scan task whenever a new block occurs (not just on the standard interval) pub(crate) async fn spawn_tributary( db: Db, message_queue: Arc, p2p: P, @@ -219,10 +271,16 @@ pub(crate) async fn spawn_tributary( // Spawn the Tributary let tributary_db = crate::db::tributary_db(set.set); - let tributary = - Tributary::new(tributary_db.clone(), genesis, start_time, serai_key, tributary_validators, p2p) - .await - .unwrap(); + let tributary = Tributary::new( + tributary_db.clone(), + genesis, + start_time, + serai_key.clone(), + tributary_validators, + p2p, + ) + .await + .unwrap(); let reader = tributary.reader(); // Inform the P2P network @@ -256,12 +314,25 @@ pub(crate) async fn spawn_tributary( // Spawn the scan task let (scan_tributary_task_def, scan_tributary_task) = Task::new(); tokio::spawn( - ScanTributaryTask::<_, _, P>::new(db.clone(), tributary_db, &set, reader) + ScanTributaryTask::<_, _, P>::new(db.clone(), tributary_db.clone(), &set, reader) // This is the only handle for this TributaryProcessorMessagesTask, so when this task is // dropped, it will be too .continually_run(scan_tributary_task_def, vec![scan_tributary_messages_task]), ); + // Spawn the sign slash report task + let (sign_slash_report_task_def, sign_slash_report_task) = Task::new(); + tokio::spawn( + (SignSlashReportTask { + db: db.clone(), + tributary_db, + tributary: tributary.clone(), + set: set.clone(), + key: serai_key, + }) + .continually_run(sign_slash_report_task_def, vec![]), + ); + // Whenever a new block occurs, immediately run the scan task // This function also preserves the ProvideCosignCosignedTransactionsTask handle until the // Tributary is retired, ensuring it isn't dropped prematurely and that the task doesn't run ad // infinitum @@ -271,6 +342,6 @@ pub(crate) async fn spawn_tributary( set.set, tributary, scan_tributary_task, - vec![provide_cosign_cosigned_transactions_task], + vec![provide_cosign_cosigned_transactions_task, sign_slash_report_task], )); } diff --git a/coordinator/substrate/src/ephemeral.rs b/coordinator/substrate/src/ephemeral.rs index d889d59f..54df6b3c 100644 --- a/coordinator/substrate/src/ephemeral.rs +++ b/coordinator/substrate/src/ephemeral.rs @@ -234,7 +234,7 @@ impl ContinuallyRan for EphemeralEventStream { else { panic!("AcceptedHandover event wasn't an AcceptedHandover event: {accepted_handover:?}"); }; - crate::SignSlashReport::send(&mut txn, set); + crate::SignSlashReport::send(&mut txn, *set); } txn.commit(); diff --git a/coordinator/substrate/src/lib.rs b/coordinator/substrate/src/lib.rs index b3f00a5e..228cbed9 100644 --- a/coordinator/substrate/src/lib.rs +++ b/coordinator/substrate/src/lib.rs @@ -66,8 +66,8 @@ mod _public_db { // Relevant new set, from an ephemeral event stream NewSet: () -> NewSetInformation, - // Relevant sign slash report, from an ephemeral event stream - SignSlashReport: () -> ValidatorSet, + // Potentially relevant sign slash report, from an ephemeral event stream + SignSlashReport: (set: ValidatorSet) -> (), } ); } @@ -109,12 +109,12 @@ impl NewSet { /// notifications for all relevant validator sets will be included.
pub struct SignSlashReport; impl SignSlashReport { - pub(crate) fn send(txn: &mut impl DbTxn, set: &ValidatorSet) { - _public_db::SignSlashReport::send(txn, set); + pub(crate) fn send(txn: &mut impl DbTxn, set: ValidatorSet) { + _public_db::SignSlashReport::send(txn, set, &()); } /// Try to receive a notification to sign a slash report, returning `None` if there is none to /// receive. - pub fn try_recv(txn: &mut impl DbTxn) -> Option { - _public_db::SignSlashReport::try_recv(txn) + pub fn try_recv(txn: &mut impl DbTxn, set: ValidatorSet) -> Option<()> { + _public_db::SignSlashReport::try_recv(txn, set) } } diff --git a/coordinator/tributary/src/db.rs b/coordinator/tributary/src/db.rs index 87567846..c48393af 100644 --- a/coordinator/tributary/src/db.rs +++ b/coordinator/tributary/src/db.rs @@ -184,8 +184,8 @@ create_db!( // The last handled tributary block's (number, hash) LastHandledTributaryBlock: (set: ValidatorSet) -> (u64, [u8; 32]), - // The slash points a validator has accrued, with u64::MAX representing a fatal slash. - SlashPoints: (set: ValidatorSet, validator: SeraiAddress) -> u64, + // The slash points a validator has accrued, with u32::MAX representing a fatal slash. + SlashPoints: (set: ValidatorSet, validator: SeraiAddress) -> u32, // The latest Substrate block to cosign. LatestSubstrateBlockToCosign: (set: ValidatorSet) -> [u8; 32], @@ -316,7 +316,7 @@ impl TributaryDb { reason: &str, ) { log::warn!("{validator} fatally slashed: {reason}"); - SlashPoints::set(txn, set, validator, &u64::MAX); + SlashPoints::set(txn, set, validator, &u32::MAX); } pub(crate) fn is_fatally_slashed( @@ -324,7 +324,7 @@ impl TributaryDb { set: ValidatorSet, validator: SeraiAddress, ) -> bool { - SlashPoints::get(getter, set, validator).unwrap_or(0) == u64::MAX + SlashPoints::get(getter, set, validator).unwrap_or(0) == u32::MAX } #[allow(clippy::too_many_arguments)] diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 9b059820..686af18d 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -511,3 +511,13 @@ impl ContinuallyRan for ScanTributaryTask { } } } + +/// Create the Transaction::SlashReport to publish per the local view. 
+pub fn slash_report_transaction(getter: &impl Get, set: &NewSetInformation) -> Transaction { + let mut slash_points = Vec::with_capacity(set.validators.len()); + for (validator, _weight) in set.validators.iter().copied() { + let validator = SeraiAddress::from(validator); + slash_points.push(SlashPoints::get(getter, set.set, validator).unwrap_or(0)); + } + Transaction::SlashReport { slash_points, signed: Signed::default() } +} diff --git a/coordinator/tributary/src/transaction.rs b/coordinator/tributary/src/transaction.rs index f9fd016d..befad461 100644 --- a/coordinator/tributary/src/transaction.rs +++ b/coordinator/tributary/src/transaction.rs @@ -6,7 +6,7 @@ use rand_core::{RngCore, CryptoRng}; use blake2::{digest::typenum::U32, Digest, Blake2b}; use ciphersuite::{ - group::{ff::Field, GroupEncoding}, + group::{ff::Field, Group, GroupEncoding}, Ciphersuite, Ristretto, }; use schnorr::SchnorrSignature; @@ -80,6 +80,18 @@ impl Signed { } } +impl Default for Signed { + fn default() -> Self { + Self { + signer: ::G::identity(), + signature: SchnorrSignature { + R: ::G::identity(), + s: ::F::ZERO, + }, + } + } +} + /// The Tributary transaction definition used by Serai #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum Transaction { From f501d46d445b205c29ced7a8c5a084dc6faf978a Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 11 Jan 2025 06:54:43 -0500 Subject: [PATCH 291/368] Correct disabling of Nagle's algorithm --- coordinator/p2p/libp2p/src/lib.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/coordinator/p2p/libp2p/src/lib.rs b/coordinator/p2p/libp2p/src/lib.rs index d92eae42..4a998289 100644 --- a/coordinator/p2p/libp2p/src/lib.rs +++ b/coordinator/p2p/libp2p/src/lib.rs @@ -188,7 +188,7 @@ impl Libp2p { let mut swarm = SwarmBuilder::with_existing_identity(identity::Keypair::generate_ed25519()) .with_tokio() - .with_tcp(TcpConfig::default().nodelay(false), new_only_validators, new_yamux) + .with_tcp(TcpConfig::default().nodelay(true), new_only_validators, new_yamux) .unwrap() .with_behaviour(|_| Behavior { allow_list: allow_block_list::Behaviour::default(), From d854807eddb942e3a0c6a17c773fcd321d583b5f Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 11 Jan 2025 21:57:58 -0500 Subject: [PATCH 292/368] Make message_queue::client::Client::send fallible Allows tasks to report the errors themselves and handle retry in our standardized way. --- coordinator/src/substrate.rs | 6 ++-- coordinator/src/tributary.rs | 3 +- message-queue/src/client.rs | 50 ++++++++++++++++++-------------- processor/bin/src/coordinator.rs | 5 ++-- 4 files changed, 35 insertions(+), 29 deletions(-) diff --git a/coordinator/src/substrate.rs b/coordinator/src/substrate.rs index 8b5d2b41..224b6278 100644 --- a/coordinator/src/substrate.rs +++ b/coordinator/src/substrate.rs @@ -69,8 +69,7 @@ impl ContinuallyRan for SubstrateTask

{ intent: msg.intent(), }; let msg = borsh::to_vec(&msg).unwrap(); - // TODO: Make this fallible - self.message_queue.queue(metadata, msg).await; + self.message_queue.queue(metadata, msg).await?; txn.commit(); made_progress = true; } @@ -132,8 +131,7 @@ impl ContinuallyRan for SubstrateTask

{ intent: msg.intent(), }; let msg = borsh::to_vec(&msg).unwrap(); - // TODO: Make this fallible - self.message_queue.queue(metadata, msg).await; + self.message_queue.queue(metadata, msg).await?; // Commit the transaction for all of this txn.commit(); diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index 55fae37c..76a034d5 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -146,8 +146,7 @@ impl ContinuallyRan for TributaryProcessorMessagesTask { intent: msg.intent(), }; let msg = borsh::to_vec(&msg).unwrap(); - // TODO: Make this fallible - self.message_queue.queue(metadata, msg).await; + self.message_queue.queue(metadata, msg).await?; txn.commit(); made_progress = true; } diff --git a/message-queue/src/client.rs b/message-queue/src/client.rs index 3aaf5a24..b503c232 100644 --- a/message-queue/src/client.rs +++ b/message-queue/src/client.rs @@ -64,22 +64,20 @@ impl MessageQueue { Self::new(service, url, priv_key) } - #[must_use] - async fn send(socket: &mut TcpStream, msg: MessageQueueRequest) -> bool { + async fn send(socket: &mut TcpStream, msg: MessageQueueRequest) -> Result<(), String> { let msg = borsh::to_vec(&msg).unwrap(); - let Ok(()) = socket.write_all(&u32::try_from(msg.len()).unwrap().to_le_bytes()).await else { - log::warn!("couldn't send the message len"); - return false; + match socket.write_all(&u32::try_from(msg.len()).unwrap().to_le_bytes()).await { + Ok(()) => {} + Err(e) => Err(format!("couldn't send the message len: {e:?}"))?, }; - let Ok(()) = socket.write_all(&msg).await else { - log::warn!("couldn't write the message"); - return false; - }; - true + match socket.write_all(&msg).await { + Ok(()) => {} + Err(e) => Err(format!("couldn't write the message: {e:?}"))?, + } + Ok(()) } - pub async fn queue(&self, metadata: Metadata, msg: Vec) { - // TODO: Should this use OsRng? Deterministic or deterministic + random may be better. 
+ pub async fn queue(&self, metadata: Metadata, msg: Vec) -> Result<(), String> { let nonce = Zeroizing::new(::F::random(&mut OsRng)); let nonce_pub = Ristretto::generator() * nonce.deref(); let sig = SchnorrSignature::::sign( @@ -97,6 +95,21 @@ impl MessageQueue { .serialize(); let msg = MessageQueueRequest::Queue { meta: metadata, msg, sig }; + + let mut socket = match TcpStream::connect(&self.url).await { + Ok(socket) => socket, + Err(e) => Err(format!("failed to connect to the message-queue service: {e:?}"))?, + }; + Self::send(&mut socket, msg.clone()).await?; + match socket.read_u8().await { + Ok(1) => {} + Ok(b) => Err(format!("message-queue didn't return 1 for its ack, received: {b}"))?, + Err(e) => Err(format!("failed to read the response from the message-queue service: {e:?}"))?, + } + Ok(()) + } + + pub async fn queue_with_retry(&self, metadata: Metadata, msg: Vec) { let mut first = true; loop { // Sleep, so we don't hammer re-attempts @@ -105,14 +118,9 @@ impl MessageQueue { } first = false; - let Ok(mut socket) = TcpStream::connect(&self.url).await else { continue }; - if !Self::send(&mut socket, msg.clone()).await { - continue; + if self.queue(metadata.clone(), msg.clone()).await.is_ok() { + break; } - if socket.read_u8().await.ok() != Some(1) { - continue; - } - break; } } @@ -136,7 +144,7 @@ impl MessageQueue { log::trace!("opened socket for next"); loop { - if !Self::send(&mut socket, msg.clone()).await { + if Self::send(&mut socket, msg.clone()).await.is_err() { continue 'outer; } let status = match socket.read_u8().await { @@ -224,7 +232,7 @@ impl MessageQueue { first = false; let Ok(mut socket) = TcpStream::connect(&self.url).await else { continue }; - if !Self::send(&mut socket, msg.clone()).await { + if Self::send(&mut socket, msg.clone()).await.is_err() { continue; } if socket.read_u8().await.ok() != Some(1) { diff --git a/processor/bin/src/coordinator.rs b/processor/bin/src/coordinator.rs index 255525a2..ffafd466 100644 --- a/processor/bin/src/coordinator.rs +++ b/processor/bin/src/coordinator.rs @@ -103,6 +103,7 @@ impl Coordinator { }); // Spawn a task to send messages to the message-queue + // TODO: Define a proper task for this and remove use of queue_with_retry tokio::spawn({ let mut db = db.clone(); async move { @@ -115,12 +116,12 @@ impl Coordinator { to: Service::Coordinator, intent: borsh::from_slice::(&msg).unwrap().intent(), }; - message_queue.queue(metadata, msg).await; + message_queue.queue_with_retry(metadata, msg).await; txn.commit(); } None => { let _ = - tokio::time::timeout(core::time::Duration::from_secs(60), sent_message_recv.recv()) + tokio::time::timeout(core::time::Duration::from_secs(6), sent_message_recv.recv()) .await; } } From df9a9adaa872604dbb6a3329696eb002a51a158c Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 12 Jan 2025 03:48:43 -0500 Subject: [PATCH 293/368] Remove direct dependencies of void, async-trait --- Cargo.lock | 3 --- coordinator/p2p/libp2p/Cargo.toml | 1 - coordinator/p2p/libp2p/src/swarm.rs | 4 ++-- tests/coordinator/Cargo.toml | 1 - tests/full-stack/Cargo.toml | 2 -- 5 files changed, 2 insertions(+), 9 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index f93ff6c1..980d9405 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8363,7 +8363,6 @@ dependencies = [ "serai-task", "tokio", "tributary-sdk", - "void", "zeroize", ] @@ -8402,7 +8401,6 @@ dependencies = [ name = "serai-coordinator-tests" version = "0.1.0" dependencies = [ - "async-trait", "blake2", "borsh", "ciphersuite", @@ -8605,7 +8603,6 @@ dependencies = [
name = "serai-full-stack-tests" version = "0.1.0" dependencies = [ - "async-trait", "bitcoin-serai", "curve25519-dalek", "dockertest", diff --git a/coordinator/p2p/libp2p/Cargo.toml b/coordinator/p2p/libp2p/Cargo.toml index 7a393588..948df9a4 100644 --- a/coordinator/p2p/libp2p/Cargo.toml +++ b/coordinator/p2p/libp2p/Cargo.toml @@ -33,7 +33,6 @@ serai-client = { path = "../../../substrate/client", default-features = false, f serai-cosign = { path = "../../cosign" } tributary-sdk = { path = "../../tributary-sdk" } -void = { version = "1", default-features = false } futures-util = { version = "0.3", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false, features = ["sync"] } libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] } diff --git a/coordinator/p2p/libp2p/src/swarm.rs b/coordinator/p2p/libp2p/src/swarm.rs index a9b13bf0..a8c9556c 100644 --- a/coordinator/p2p/libp2p/src/swarm.rs +++ b/coordinator/p2p/libp2p/src/swarm.rs @@ -225,8 +225,8 @@ impl SwarmTask { SwarmEvent::Behaviour( BehaviorEvent::AllowList(event) | BehaviorEvent::ConnectionLimits(event) ) => { - // Ensure these are unreachable cases, not actual events - let _: void::Void = event; + // This *is* an exhaustive match as these events are empty enums + match event {} } SwarmEvent::Behaviour( BehaviorEvent::Ping(ping::Event { peer: _, connection, result, }) diff --git a/tests/coordinator/Cargo.toml b/tests/coordinator/Cargo.toml index ca7a10d6..6038da38 100644 --- a/tests/coordinator/Cargo.toml +++ b/tests/coordinator/Cargo.toml @@ -19,7 +19,6 @@ workspace = true [dependencies] hex = "0.4" -async-trait = "0.1" zeroize = { version = "1", default-features = false } rand_core = { version = "0.6", default-features = false } diff --git a/tests/full-stack/Cargo.toml b/tests/full-stack/Cargo.toml index a9dbdc63..5bafb346 100644 --- a/tests/full-stack/Cargo.toml +++ b/tests/full-stack/Cargo.toml @@ -19,8 +19,6 @@ workspace = true [dependencies] hex = "0.4" -async-trait = "0.1" - zeroize = { version = "1", default-features = false } rand_core = { version = "0.6", default-features = false } From 158140c3a7647a5d88f9dc0653a394640e6b645f Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 12 Jan 2025 05:49:17 -0500 Subject: [PATCH 294/368] Add a proper error for intake_cosign --- coordinator/cosign/src/lib.rs | 83 +++++++++++++++++++++-------------- 1 file changed, 51 insertions(+), 32 deletions(-) diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index c4428a39..dae8647b 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -128,7 +128,6 @@ create_db! { // An index of Substrate blocks SubstrateBlockHash: (block_number: u64) -> [u8; 32], - SubstrateBlockNumber: (block_hash: [u8; 32]) -> u64, // A mapping from a global session's ID to its relevant information. GlobalSessions: (global_session: [u8; 32]) -> GlobalSession, // The last block to be cosigned by a global session. @@ -229,6 +228,43 @@ pub trait RequestNotableCosigns: 'static + Send { #[derive(Debug)] pub struct Faulted; +/// An error incurred while intaking a cosign. 
+#[derive(Debug)] +pub enum IntakeCosignError { + /// Cosign is for a not-yet-indexed block + NotYetIndexedBlock, + /// A later cosign for this cosigner has already been handled + StaleCosign, + /// The cosign's global session isn't recognized + UnrecognizedGlobalSession, + /// The cosign is for a block before its global session starts + BeforeGlobalSessionStart, + /// The cosign is for a block after its global session ends + AfterGlobalSessionEnd, + /// The cosign's signing network wasn't a participant in this global session + NonParticipatingNetwork, + /// The cosign had an invalid signature + InvalidSignature, + /// The cosign is for a global session which has yet to have its declaration block cosigned + FutureGlobalSession, +} + +impl IntakeCosignError { + /// If this error is temporal to the local view + pub fn temporal(&self) -> bool { + match self { + IntakeCosignError::NotYetIndexedBlock | + IntakeCosignError::StaleCosign | + IntakeCosignError::UnrecognizedGlobalSession | + IntakeCosignError::FutureGlobalSession => true, + IntakeCosignError::BeforeGlobalSessionStart | + IntakeCosignError::AfterGlobalSessionEnd | + IntakeCosignError::NonParticipatingNetwork | + IntakeCosignError::InvalidSignature => false, + } + } +} + /// The interface to manage cosigning with. pub struct Cosigning { db: D, @@ -282,13 +318,6 @@ impl Cosigning { )) } - /// Fetch a finalized block's number by its hash. - /// - /// This block is not guaranteed to be cosigned. - pub fn finalized_block_number(getter: &impl Get, block_hash: [u8; 32]) -> Option { - SubstrateBlockNumber::get(getter, block_hash) - } - /// Fetch the notable cosigns for a global session in order to respond to requests. /// /// If this global session hasn't produced any notable cosigns, this will return the latest @@ -335,25 +364,15 @@ impl Cosigning { } /// Intake a cosign. - /// - /// - Returns Err(_) if there was an error trying to validate the cosign. - /// - Returns Ok(true) if the cosign was successfully handled or could not be handled at this - /// time. - /// - Returns Ok(false) if the cosign was invalid. - // - // We collapse a cosign which shouldn't be handled yet into a valid cosign (`Ok(true)`) as we - // assume we'll either explicitly request it if we need it or we'll naturally see it (or a later, - // more relevant, cosign) again. // // Takes `&mut self` as this should only be called once at any given moment. - // TODO: Don't overload bool here - pub fn intake_cosign(&mut self, signed_cosign: &SignedCosign) -> Result { + pub fn intake_cosign(&mut self, signed_cosign: &SignedCosign) -> Result<(), IntakeCosignError> { let cosign = &signed_cosign.cosign; let network = cosign.cosigner; // Check our indexed blockchain includes a block with this block number let Some(our_block_hash) = SubstrateBlockHash::get(&self.db, cosign.block_number) else { - return Ok(true); + Err(IntakeCosignError::NotYetIndexedBlock)? }; let faulty = cosign.block_hash != our_block_hash; @@ -363,20 +382,19 @@ impl Cosigning { NetworksLatestCosignedBlock::get(&self.db, cosign.global_session, network) { if existing.cosign.block_number >= cosign.block_number { - return Ok(true); + Err(IntakeCosignError::StaleCosign)?; } } } let Some(global_session) = GlobalSessions::get(&self.db, cosign.global_session) else { - // Unrecognized global session - return Ok(true); + Err(IntakeCosignError::UnrecognizedGlobalSession)? 
}; // Check the cosigned block number is in range to the global session if cosign.block_number < global_session.start_block_number { // Cosign is for a block predating the global session - return Ok(false); + Err(IntakeCosignError::BeforeGlobalSessionStart)?; } if !faulty { // This prevents a malicious validator set, on the same chain, from producing a cosign after @@ -384,7 +402,7 @@ impl Cosigning { if let Some(last_block) = GlobalSessionsLastBlock::get(&self.db, cosign.global_session) { if cosign.block_number > last_block { // Cosign is for a block after the last block this global session should have signed - return Ok(false); + Err(IntakeCosignError::AfterGlobalSessionEnd)?; } } } @@ -393,13 +411,13 @@ impl Cosigning { { let key = Public::from({ let Some(key) = global_session.keys.get(&network) else { - return Ok(false); + Err(IntakeCosignError::NonParticipatingNetwork)? }; *key }); if !signed_cosign.verify_signature(key) { - return Ok(false); + Err(IntakeCosignError::InvalidSignature)?; } } @@ -415,7 +433,7 @@ impl Cosigning { // block declaring it was cosigned if (global_session.start_block_number - 1) > latest_cosigned_block_number { drop(txn); - return Ok(true); + return Err(IntakeCosignError::FutureGlobalSession); } // This is safe as it's in-range and newer, as prior checked since it isn't faulty @@ -429,9 +447,10 @@ impl Cosigning { let mut weight_cosigned = 0; for fault in &faults { - let Some(stake) = global_session.stakes.get(&fault.cosign.cosigner) else { - Err("cosigner with recognized key didn't have a stake entry saved".to_string())? - }; + let stake = global_session + .stakes + .get(&fault.cosign.cosigner) + .expect("cosigner with recognized key didn't have a stake entry saved"); weight_cosigned += stake; } @@ -443,7 +462,7 @@ impl Cosigning { } txn.commit(); - Ok(true) + Ok(()) } /// Receive intended cosigns to produce for this ValidatorSet. From e7de5125a27d36931eba32bab16f7eb50c1a7c43 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 12 Jan 2025 05:52:33 -0500 Subject: [PATCH 295/368] Have processor-messages use CosignIntent/SignedCosign, not the historic cosign format Has yet to update the processor accordingly. 
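For orientation, here is a minimal sketch, not part of these patches, of how a caller (such as the coordinator's P2P layer) might branch on the `IntakeCosignError` introduced above; the `handle_peer_cosign` wrapper and the logging choices are illustrative assumptions, with only `intake_cosign` and `temporal` taken from the patch:

```rust
use serai_db::Db;
use serai_cosign::{SignedCosign, Cosigning};

// Hypothetical consumer of Cosigning::intake_cosign; everything here beyond
// intake_cosign and IntakeCosignError::temporal is an assumption
fn handle_peer_cosign<D: Db>(cosigning: &mut Cosigning<D>, cosign: &SignedCosign) {
  match cosigning.intake_cosign(cosign) {
    // Successfully handled
    Ok(()) => {}
    // Temporal errors (a not-yet-indexed block, a stale cosign, an
    // unrecognized or future global session) may resolve as our local view
    // advances, so this cosign can simply be seen or requested again later
    Err(e) if e.temporal() => log::debug!("cosign not yet handleable: {e:?}"),
    // Non-temporal errors (out-of-range block, non-participating network,
    // invalid signature) mean the cosign itself is invalid and can be dropped
    Err(e) => log::warn!("invalid cosign: {e:?}"),
  }
}
```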
--- Cargo.lock | 1 + coordinator/cosign/src/intend.rs | 1 - coordinator/src/db.rs | 14 +++++- coordinator/src/tributary.rs | 33 +++++++++++--- coordinator/tributary/src/db.rs | 4 ++ coordinator/tributary/src/lib.rs | 54 ++++++++++++++--------- processor/messages/Cargo.toml | 2 + processor/messages/src/lib.rs | 76 ++++++++++++++++---------------- 8 files changed, 115 insertions(+), 70 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 980d9405..2c649c36 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8998,6 +8998,7 @@ dependencies = [ "hex", "parity-scale-codec", "serai-coins-primitives", + "serai-cosign", "serai-in-instructions-primitives", "serai-primitives", "serai-validator-sets-primitives", diff --git a/coordinator/cosign/src/intend.rs b/coordinator/cosign/src/intend.rs index 9fa229c5..ebe3513c 100644 --- a/coordinator/cosign/src/intend.rs +++ b/coordinator/cosign/src/intend.rs @@ -88,7 +88,6 @@ impl ContinuallyRan for CosignIntendTask { } let block_hash = block.hash(); SubstrateBlockHash::set(&mut txn, block_number, &block_hash); - SubstrateBlockNumber::set(&mut txn, block_hash, &block_number); let global_session_for_this_block = LatestGlobalSessionIntended::get(&txn); diff --git a/coordinator/src/db.rs b/coordinator/src/db.rs index 4e932306..012d0257 100644 --- a/coordinator/src/db.rs +++ b/coordinator/src/db.rs @@ -8,7 +8,7 @@ use serai_client::{ validator_sets::primitives::{Session, ValidatorSet}, }; -use serai_cosign::CosignIntent; +use serai_cosign::SignedCosign; use serai_coordinator_substrate::NewSetInformation; @@ -66,14 +66,24 @@ pub(crate) fn prune_tributary_db(set: ValidatorSet) { create_db! { Coordinator { + // The currently active Tributaries ActiveTributaries: () -> Vec, + // The latest Tributary to have been retired for a network + // Since Tributaries are retired sequentially, this is informative to if any Tributary has been + // retired RetiredTributary: (network: NetworkId) -> Session, + // The last handled message from a Processor + LastProcessorMessage: (network: NetworkId) -> u64, + // Cosigns we produced and tried to intake yet incurred an error while doing so + ErroneousCosigns: () -> Vec, } } db_channel! { Coordinator { + // Tributaries to clean up upon reboot TributaryCleanup: () -> ValidatorSet, - PendingCosigns: (set: ValidatorSet) -> CosignIntent, + // Cosigns we produced + SignedCosigns: () -> SignedCosign, } } diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index 76a034d5..f09c14cd 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -8,7 +8,7 @@ use ciphersuite::{Ciphersuite, Ristretto}; use tokio::sync::mpsc; -use serai_db::{DbTxn, Db as DbTrait}; +use serai_db::{Get, DbTxn, Db as DbTrait, create_db, db_channel}; use scale::Encode; use serai_client::validator_sets::primitives::ValidatorSet; @@ -19,16 +19,23 @@ use serai_task::{Task, TaskHandle, ContinuallyRan}; use message_queue::{Service, Metadata, client::MessageQueue}; -use serai_cosign::Cosigning; +use serai_cosign::{Faulted, CosignIntent, Cosigning}; use serai_coordinator_substrate::{NewSetInformation, SignSlashReport}; -use serai_coordinator_tributary::{Transaction, ProcessorMessages, ScanTributaryTask}; +use serai_coordinator_tributary::{Transaction, ProcessorMessages, CosignIntents, ScanTributaryTask}; use serai_coordinator_p2p::P2p; use crate::Db; +db_channel! { + Coordinator { + PendingCosigns: (set: ValidatorSet) -> CosignIntent, + } +} + /// Provides Cosign/Cosigned Transactions onto the Tributary. 
pub(crate) struct ProvideCosignCosignedTransactionsTask { db: CD, + tributary_db: TD, set: NewSetInformation, tributary: Tributary, } @@ -79,16 +86,27 @@ impl ContinuallyRan let mut txn = self.db.txn(); // Fetch the next cosign this tributary should handle - let Some(cosign) = crate::PendingCosigns::try_recv(&mut txn, self.set.set) else { break }; + let Some(cosign) = PendingCosigns::try_recv(&mut txn, self.set.set) else { break }; pending_notable_cosign = cosign.notable; // If we (Serai) haven't cosigned this block, break as this is still pending - let Ok(latest) = Cosigning::::latest_cosigned_block_number(&txn) else { break }; + let latest = match Cosigning::::latest_cosigned_block_number(&txn) { + Ok(latest) => latest, + Err(Faulted) => { + log::error!("cosigning faulted"); + Err("cosigning faulted")? + } + }; if latest < cosign.block_number { break; } // Because we've cosigned it, provide the TX for that + { + let mut txn = self.tributary_db.txn(); + CosignIntents::provide(&mut txn, self.set.set, &cosign); + txn.commit(); + } provide_transaction( self.set.set, &self.tributary, @@ -109,7 +127,7 @@ impl ContinuallyRan // intended_cosigns will only yield up to and including the next notable cosign for cosign in Cosigning::::intended_cosigns(&mut txn, self.set.set) { // Flag this cosign as pending - crate::PendingCosigns::send(&mut txn, self.set.set, &cosign); + PendingCosigns::send(&mut txn, self.set.set, &cosign); // Provide the transaction to queue it for work provide_transaction( self.set.set, @@ -293,6 +311,7 @@ pub(crate) async fn spawn_tributary( tokio::spawn( (ProvideCosignCosignedTransactionsTask { db: db.clone(), + tributary_db: tributary_db.clone(), set: set.clone(), tributary: tributary.clone(), }) @@ -313,7 +332,7 @@ pub(crate) async fn spawn_tributary( // Spawn the scan task let (scan_tributary_task_def, scan_tributary_task) = Task::new(); tokio::spawn( - ScanTributaryTask::<_, _, P>::new(db.clone(), tributary_db.clone(), &set, reader) + ScanTributaryTask::<_, P>::new(tributary_db.clone(), &set, reader) // This is the only handle for this TributaryProcessorMessagesTask, so when this task is // dropped, it will be too .continually_run(scan_tributary_task_def, vec![scan_tributary_messages_task]), diff --git a/coordinator/tributary/src/db.rs b/coordinator/tributary/src/db.rs index c48393af..9d426d96 100644 --- a/coordinator/tributary/src/db.rs +++ b/coordinator/tributary/src/db.rs @@ -9,6 +9,8 @@ use messages::sign::{VariantSignId, SignId}; use serai_db::*; +use serai_cosign::CosignIntent; + use crate::transaction::SigningProtocolRound; /// A topic within the database which the group participates in @@ -187,6 +189,8 @@ create_db!( // The slash points a validator has accrued, with u32::MAX representing a fatal slash. SlashPoints: (set: ValidatorSet, validator: SeraiAddress) -> u32, + // The cosign intent for a Substrate block + CosignIntents: (set: ValidatorSet, substrate_block_hash: [u8; 32]) -> CosignIntent, // The latest Substrate block to cosign. LatestSubstrateBlockToCosign: (set: ValidatorSet) -> [u8; 32], // The hash of the block we're actively cosigning. 
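Before the library changes below, a hedged sketch of the ordering invariant they introduce: `tributary_db`, `set`, `tributary`, and `intent` are assumed bindings from the surrounding coordinator task, while `CosignIntents::provide`, `provide_transaction`, and `Transaction::Cosign` are from this patch set.

```rust
// The intent must be written to the Tributary's own DB before the associated
// Provided transaction is handed to the Tributary: the scanner later `take`s
// it when the cosign begins and panics if it's absent
let mut tributary_txn = tributary_db.txn();
CosignIntents::provide(&mut tributary_txn, set, &intent);
tributary_txn.commit();
// Only now is the Provided transaction queued
provide_transaction(set, &tributary, Transaction::Cosign { substrate_block_hash: intent.block_hash })
  .await;
```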
diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 686af18d..91a77a62 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -24,7 +24,7 @@ use tributary_sdk::{ Transaction as TributaryTransaction, Block, TributaryReader, P2p, }; -use serai_cosign::Cosigning; +use serai_cosign::CosignIntent; use serai_coordinator_substrate::NewSetInformation; use messages::sign::VariantSignId; @@ -45,17 +45,34 @@ impl ProcessorMessages { } } -struct ScanBlock<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> { +/// The cosign intents. +pub struct CosignIntents; +impl CosignIntents { + /// Provide a CosignIntent for this Tributary. + /// + /// This must be done before the associated `Transaction::Cosign` is provided. + pub fn provide(txn: &mut impl DbTxn, set: ValidatorSet, intent: &CosignIntent) { + db::CosignIntents::set(txn, set, intent.block_hash, intent); + } + fn take( + txn: &mut impl DbTxn, + set: ValidatorSet, + substrate_block_hash: [u8; 32], + ) -> Option { + db::CosignIntents::take(txn, set, substrate_block_hash) + } +} + +struct ScanBlock<'a, TD: Db, TDT: DbTxn, P: P2p> { _td: PhantomData, _p2p: PhantomData
<P>
, - cosign_db: &'a CD, tributary_txn: &'a mut TDT, set: ValidatorSet, validators: &'a [SeraiAddress], total_weight: u64, validator_weights: &'a HashMap, } -impl<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, CD, TD, TDT, P> { +impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { fn potentially_start_cosign(&mut self) { // Don't start a new cosigning instance if we're actively running one if TributaryDb::actively_cosigning(self.tributary_txn, self.set).is_some() { @@ -74,20 +91,20 @@ impl<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, CD, TD, TDT, P> { return; } - let Some(substrate_block_number) = - Cosigning::::finalized_block_number(self.cosign_db, latest_substrate_block_to_cosign) - else { - // This is a valid panic as we shouldn't be scanning this block if we didn't provide all - // Provided transactions within it, and the block to cosign is a Provided transaction - panic!("cosigning a block our cosigner didn't index") - }; + let intent = + CosignIntents::take(self.tributary_txn, self.set, latest_substrate_block_to_cosign) + .expect("Transaction::Cosign locally provided but CosignIntents wasn't populated"); + assert_eq!( + intent.block_hash, latest_substrate_block_to_cosign, + "provided CosignIntent wasn't saved by its block hash" + ); // Mark us as actively cosigning TributaryDb::start_cosigning( self.tributary_txn, self.set, latest_substrate_block_to_cosign, - substrate_block_number, + intent.block_number, ); // Send the message for the processor to start signing TributaryDb::send_message( @@ -95,8 +112,7 @@ impl<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, CD, TD, TDT, P> { self.set, messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { session: self.set.session, - block_number: substrate_block_number, - block: latest_substrate_block_to_cosign, + intent, }, ); } @@ -411,8 +427,7 @@ impl<'a, CD: Db, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, CD, TD, TDT, P> { } /// The task to scan the Tributary, populating `ProcessorMessages`. -pub struct ScanTributaryTask { - cosign_db: CD, +pub struct ScanTributaryTask { tributary_db: TD, set: ValidatorSet, validators: Vec, @@ -422,10 +437,9 @@ pub struct ScanTributaryTask { _p2p: PhantomData
<P>
, } -impl ScanTributaryTask { +impl ScanTributaryTask { /// Create a new instance of this task. pub fn new( - cosign_db: CD, tributary_db: TD, new_set: &NewSetInformation, tributary: TributaryReader, @@ -442,7 +456,6 @@ impl ScanTributaryTask { } ScanTributaryTask { - cosign_db, tributary_db, set: new_set.set, validators, @@ -454,7 +467,7 @@ impl ScanTributaryTask { } } -impl ContinuallyRan for ScanTributaryTask { +impl ContinuallyRan for ScanTributaryTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { let (mut last_block_number, mut last_block_hash) = @@ -486,7 +499,6 @@ impl ContinuallyRan for ScanTributaryTask { (ScanBlock { _td: PhantomData::, _p2p: PhantomData::
<P>
, - cosign_db: &self.cosign_db, tributary_txn: &mut tributary_txn, set: self.set, validators: &self.validators, diff --git a/processor/messages/Cargo.toml b/processor/messages/Cargo.toml index 03dc0441..b1387301 100644 --- a/processor/messages/Cargo.toml +++ b/processor/messages/Cargo.toml @@ -29,3 +29,5 @@ serai-primitives = { path = "../../substrate/primitives", default-features = fal in-instructions-primitives = { package = "serai-in-instructions-primitives", path = "../../substrate/in-instructions/primitives", default-features = false, features = ["std", "borsh"] } coins-primitives = { package = "serai-coins-primitives", path = "../../substrate/coins/primitives", default-features = false, features = ["std", "borsh"] } validator-sets-primitives = { package = "serai-validator-sets-primitives", path = "../../substrate/validator-sets/primitives", default-features = false, features = ["std", "borsh"] } + +serai-cosign = { path = "../../coordinator/cosign", default-features = false } diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index ec072fe5..5cda454b 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -11,6 +11,8 @@ use validator_sets_primitives::{Session, KeyPair, Slash}; use coins_primitives::OutInstructionWithBalance; use in_instructions_primitives::SignedBatch; +use serai_cosign::{CosignIntent, SignedCosign}; + #[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub struct SubstrateContext { pub serai_time: u64, @@ -50,7 +52,8 @@ pub mod key_gen { } } - #[derive(Clone, PartialEq, Eq, BorshSerialize, BorshDeserialize)] + // This set of messages is sent entirely and solely by serai-processor-key-gen. + #[derive(Clone, BorshSerialize, BorshDeserialize)] pub enum ProcessorMessage { // Participated in the specified key generation protocol. Participation { session: Session, participation: Vec }, @@ -141,7 +144,8 @@ pub mod sign { } } - #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] + // This set of messages is sent entirely and solely by serai-processor-frost-attempt-manager. + #[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] pub enum ProcessorMessage { // Participant sent an invalid message during the sign protocol. InvalidParticipant { session: Session, participant: Participant }, @@ -155,39 +159,25 @@ pub mod sign { pub mod coordinator { use super::*; - // TODO: Remove this for the one defined in serai-cosign - pub fn cosign_block_msg(block_number: u64, block: [u8; 32]) -> Vec { - const DST: &[u8] = b"Cosign"; - let mut res = vec![u8::try_from(DST.len()).unwrap()]; - res.extend(DST); - res.extend(block_number.to_le_bytes()); - res.extend(block); - res - } - #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum CoordinatorMessage { /// Cosign the specified Substrate block. /// /// This is sent by the Coordinator's Tributary scanner. - CosignSubstrateBlock { session: Session, block_number: u64, block: [u8; 32] }, + CosignSubstrateBlock { session: Session, intent: CosignIntent }, /// Sign the slash report for this session. /// /// This is sent by the Coordinator's Tributary scanner. 
SignSlashReport { session: Session, report: Vec }, } - #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] - pub struct PlanMeta { - pub session: Session, - pub id: [u8; 32], - } - - #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] + // This set of messages is sent entirely and solely by serai-processor-bin's implementation of + // the signers::Coordinator trait. + // TODO: Move message creation into serai-processor-signers + #[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] pub enum ProcessorMessage { - CosignedBlock { block_number: u64, block: [u8; 32], signature: Vec }, + CosignedBlock { cosign: SignedCosign }, SignedBatch { batch: SignedBatch }, - SubstrateBlockAck { block: u64, plans: Vec }, SignedSlashReport { session: Session, signature: Vec }, } } @@ -231,17 +221,16 @@ pub mod substrate { }, } - #[derive(Clone, PartialEq, Eq, Debug)] - pub enum ProcessorMessage {} - impl BorshSerialize for ProcessorMessage { - fn serialize(&self, _writer: &mut W) -> borsh::io::Result<()> { - unimplemented!() - } + #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] + pub struct PlanMeta { + pub session: Session, + pub transaction: [u8; 32], } - impl BorshDeserialize for ProcessorMessage { - fn deserialize_reader(_reader: &mut R) -> borsh::io::Result { - unimplemented!() - } + + #[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] + pub enum ProcessorMessage { + // TODO: Have the processor send this + SubstrateBlockAck { block: u64, plans: Vec }, } } @@ -268,7 +257,7 @@ impl_from!(sign, CoordinatorMessage, Sign); impl_from!(coordinator, CoordinatorMessage, Coordinator); impl_from!(substrate, CoordinatorMessage, Substrate); -#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] +#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] pub enum ProcessorMessage { KeyGen(key_gen::ProcessorMessage), Sign(sign::ProcessorMessage), @@ -331,8 +320,8 @@ impl CoordinatorMessage { CoordinatorMessage::Coordinator(msg) => { let (sub, id) = match msg { // We only cosign a block once, and Reattempt is a separate message - coordinator::CoordinatorMessage::CosignSubstrateBlock { block_number, .. } => { - (0, block_number.encode()) + coordinator::CoordinatorMessage::CosignSubstrateBlock { intent, .. } => { + (0, intent.block_number.encode()) } // We only sign one slash report, and Reattempt is a separate message coordinator::CoordinatorMessage::SignSlashReport { session, .. } => (1, session.encode()), @@ -404,17 +393,26 @@ impl ProcessorMessage { } ProcessorMessage::Coordinator(msg) => { let (sub, id) = match msg { - coordinator::ProcessorMessage::CosignedBlock { block, .. } => (0, block.encode()), + coordinator::ProcessorMessage::CosignedBlock { cosign } => { + (0, cosign.cosign.block_hash.encode()) + } coordinator::ProcessorMessage::SignedBatch { batch, .. } => (1, batch.batch.id.encode()), - coordinator::ProcessorMessage::SubstrateBlockAck { block, .. } => (2, block.encode()), - coordinator::ProcessorMessage::SignedSlashReport { session, .. } => (3, session.encode()), + coordinator::ProcessorMessage::SignedSlashReport { session, .. } => (2, session.encode()), }; let mut res = vec![PROCESSOR_UID, TYPE_COORDINATOR_UID, sub]; res.extend(&id); res } - ProcessorMessage::Substrate(_) => panic!("requesting intent for empty message type"), + ProcessorMessage::Substrate(msg) => { + let (sub, id) = match msg { + substrate::ProcessorMessage::SubstrateBlockAck { block, .. 
} => (0, block.encode()), + }; + + let mut res = vec![PROCESSOR_UID, TYPE_SUBSTRATE_UID, sub]; + res.extend(&id); + res + } } } } From e35aa04afbd524771e8174dc3b10333906f07f16 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 12 Jan 2025 05:53:43 -0500 Subject: [PATCH 296/368] Start handling messages from the processor Does route ProcessorMessage::CosignedBlock. Rest are stubbed with TODO. --- coordinator/src/main.rs | 165 +++++++++++++++++++++++++++++++++++----- 1 file changed, 148 insertions(+), 17 deletions(-) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 0dae9b40..f549378d 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -9,14 +9,19 @@ use ciphersuite::{ Ciphersuite, Ristretto, }; +use borsh::BorshDeserialize; + use tokio::sync::mpsc; -use serai_client::{primitives::PublicKey, Serai}; +use serai_client::{ + primitives::{NetworkId, PublicKey}, + Serai, +}; use message_queue::{Service, client::MessageQueue}; use serai_task::{Task, TaskHandle, ContinuallyRan}; -use serai_cosign::{SignedCosign, Cosigning}; +use serai_cosign::{Faulted, SignedCosign, Cosigning}; use serai_coordinator_substrate::{CanonicalEventStream, EphemeralEventStream, SignSlashReport}; use serai_coordinator_tributary::Transaction; @@ -63,18 +68,60 @@ async fn serai() -> Arc { } } -fn spawn_cosigning( - db: impl serai_db::Db, +fn spawn_cosigning( + mut db: D, serai: Arc, p2p: impl p2p::P2p, tasks_to_run_upon_cosigning: Vec, mut p2p_cosigns: mpsc::UnboundedReceiver, - mut signed_cosigns: mpsc::UnboundedReceiver, ) { - let mut cosigning = Cosigning::spawn(db, serai, p2p.clone(), tasks_to_run_upon_cosigning); + let mut cosigning = Cosigning::spawn(db.clone(), serai, p2p.clone(), tasks_to_run_upon_cosigning); tokio::spawn(async move { + const COSIGN_LOOP_INTERVAL: Duration = Duration::from_secs(5); + let last_cosign_rebroadcast = Instant::now(); loop { + // Intake our own cosigns + match Cosigning::::latest_cosigned_block_number(&db) { + Ok(latest_cosigned_block_number) => { + let mut txn = db.txn(); + // The cosigns we prior tried to intake yet failed to + let mut cosigns = ErroneousCosigns::get(&txn).unwrap_or(vec![]); + // The cosigns we have yet to intake + while let Some(cosign) = SignedCosigns::try_recv(&mut txn) { + cosigns.push(cosign); + } + + let mut erroneous = vec![]; + for cosign in cosigns { + // If this cosign is stale, move on + if cosign.cosign.block_number <= latest_cosigned_block_number { + continue; + } + + match cosigning.intake_cosign(&cosign) { + // Publish this cosign + Ok(()) => p2p.publish_cosign(cosign).await, + Err(e) => { + assert!(e.temporal(), "signed an invalid cosign: {e:?}"); + // Since this had a temporal error, queue it to try again later + erroneous.push(cosign); + } + }; + } + + // Save the cosigns with temporal errors to the database + ErroneousCosigns::set(&mut txn, &erroneous); + + txn.commit(); + } + Err(Faulted) => { + // We don't panic here as the following code rebroadcasts our cosigns which is + // necessary to inform other coordinators of the faulty cosigns + log::error!("cosigning faulted"); + } + } + let time_till_cosign_rebroadcast = (last_cosign_rebroadcast + serai_cosign::BROADCAST_FREQUENCY) .saturating_duration_since(Instant::now()); @@ -86,19 +133,98 @@ fn spawn_cosigning( } cosign = p2p_cosigns.recv() => { let cosign = cosign.expect("p2p cosigns channel was dropped?"); - let _: Result<_, _> = cosigning.intake_cosign(&cosign); - } - cosign = signed_cosigns.recv() => { - let cosign = cosign.expect("signed cosigns channel was 
dropped?"); - // TODO: Handle this error - let _: Result<_, _> = cosigning.intake_cosign(&cosign); - p2p.publish_cosign(cosign).await; + if cosigning.intake_cosign(&cosign).is_ok() { + p2p.publish_cosign(cosign).await; + } } + // Make sure this loop runs at least this often + () = tokio::time::sleep(COSIGN_LOOP_INTERVAL) => {} } } }); } +async fn handle_processor_messages( + mut db: impl serai_db::Db, + message_queue: Arc, + network: NetworkId, +) { + loop { + let (msg_id, msg) = { + let msg = message_queue.next(Service::Processor(network)).await; + // Check this message's sender is as expected + assert_eq!(msg.from, Service::Processor(network)); + + // Check this message's ID is as expected + let last = LastProcessorMessage::get(&db, network); + let next = last.map(|id| id + 1).unwrap_or(0); + // This should either be the last message's ID, if we committed but didn't send our ACK, or + // the expected next message's ID + assert!((Some(msg.id) == last) || (msg.id == next)); + + // TODO: Check msg.sig + + // If this is the message we already handled, and just failed to ACK, ACK it now and move on + if Some(msg.id) == last { + message_queue.ack(Service::Processor(network), msg.id).await; + continue; + } + + (msg.id, messages::ProcessorMessage::deserialize(&mut msg.msg.as_slice()).unwrap()) + }; + + let mut txn = db.txn(); + + match msg { + messages::ProcessorMessage::KeyGen(msg) => match msg { + messages::key_gen::ProcessorMessage::Participation { session, participation } => { + todo!("TODO Transaction::DkgParticipation") + } + messages::key_gen::ProcessorMessage::GeneratedKeyPair { + session, + substrate_key, + network_key, + } => todo!("TODO Transaction::DkgConfirmationPreprocess"), + messages::key_gen::ProcessorMessage::Blame { session, participant } => { + todo!("TODO Transaction::RemoveParticipant") + } + }, + messages::ProcessorMessage::Sign(msg) => match msg { + messages::sign::ProcessorMessage::InvalidParticipant { session, participant } => { + todo!("TODO Transaction::RemoveParticipant") + } + messages::sign::ProcessorMessage::Preprocesses { id, preprocesses } => { + todo!("TODO Transaction::Batch + Transaction::Sign") + } + messages::sign::ProcessorMessage::Shares { id, shares } => todo!("TODO Transaction::Sign"), + }, + messages::ProcessorMessage::Coordinator(msg) => match msg { + messages::coordinator::ProcessorMessage::CosignedBlock { cosign } => { + SignedCosigns::send(&mut txn, &cosign); + } + messages::coordinator::ProcessorMessage::SignedBatch { batch } => { + todo!("TODO Save to DB, have task read from DB and publish to Serai") + } + messages::coordinator::ProcessorMessage::SignedSlashReport { session, signature } => { + todo!("TODO Save to DB, have task read from DB and publish to Serai") + } + }, + messages::ProcessorMessage::Substrate(msg) => match msg { + messages::substrate::ProcessorMessage::SubstrateBlockAck { block, plans } => { + todo!("TODO Transaction::SubstrateBlock") + } + }, + } + + // Mark this as the last handled message + LastProcessorMessage::set(&mut txn, network, &msg_id); + // Commit the txn + txn.commit(); + // Now that we won't handle this message again, acknowledge it so we won't see it again + message_queue.ack(Service::Processor(network), msg_id).await; + } +} + #[tokio::main] async fn main() { // Override the panic handler with one which will panic if any tokio task panics @@ -217,7 +343,6 @@ async fn main() { ); // Spawn the cosign handler - let (signed_cosigns_send, signed_cosigns_recv) = mpsc::unbounded_channel(); spawn_cosigning( db.clone(), 
serai.clone(), @@ -225,7 +350,6 @@ async fn main() { // Run the Substrate scanners once we cosign new blocks vec![substrate_canonical_task, substrate_ephemeral_task], p2p_cosigns_recv, - signed_cosigns_recv, ); // Spawn all Tributaries on-disk @@ -254,7 +378,14 @@ async fn main() { .continually_run(substrate_task_def, vec![]), ); - // TODO: Handle processor messages + // Handle all of the Processors' messages + for network in serai_client::primitives::NETWORKS { + if network == NetworkId::Serai { + continue; + } + tokio::spawn(handle_processor_messages(db.clone(), message_queue.clone(), network)); + } - todo!("TODO") + // Run the spawned tasks ad-infinitum + core::future::pending().await } From 0ce9aad9b2a34f936e52ee815beeae404c61b325 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 12 Jan 2025 07:32:45 -0500 Subject: [PATCH 297/368] Add flow to add transactions onto Tributaries --- coordinator/src/db.rs | 30 +++++- coordinator/src/main.rs | 51 ++++++++-- coordinator/src/tributary.rs | 160 +++++++++++++++++++++++-------- coordinator/tributary/src/db.rs | 3 + coordinator/tributary/src/lib.rs | 54 ++++++++++- processor/messages/src/lib.rs | 4 +- 6 files changed, 246 insertions(+), 56 deletions(-) diff --git a/coordinator/src/db.rs b/coordinator/src/db.rs index 012d0257..8bb2fd24 100644 --- a/coordinator/src/db.rs +++ b/coordinator/src/db.rs @@ -9,8 +9,8 @@ use serai_client::{ }; use serai_cosign::SignedCosign; - use serai_coordinator_substrate::NewSetInformation; +use serai_coordinator_tributary::Transaction; #[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] pub(crate) type Db = serai_db::ParityDb; @@ -81,9 +81,33 @@ create_db! { db_channel! { Coordinator { - // Tributaries to clean up upon reboot - TributaryCleanup: () -> ValidatorSet, // Cosigns we produced SignedCosigns: () -> SignedCosign, + // Tributaries to clean up upon reboot + TributaryCleanup: () -> ValidatorSet, + } +} + +mod _internal_db { + use super::*; + + db_channel! 
{ + Coordinator { + // Tributary transactions to publish + TributaryTransactions: (set: ValidatorSet) -> Transaction, + } + } +} + +pub(crate) struct TributaryTransactions; +impl TributaryTransactions { + pub(crate) fn send(txn: &mut impl DbTxn, set: ValidatorSet, tx: &Transaction) { + // If this set has yet to be retired, send this transaction + if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) { + _internal_db::TributaryTransactions::send(txn, set, tx); + } + } + pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ValidatorSet) -> Option { + _internal_db::TributaryTransactions::try_recv(txn, set) } } diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index f549378d..f693650a 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -1,5 +1,5 @@ use core::{ops::Deref, time::Duration}; -use std::{sync::Arc, time::Instant}; +use std::{sync::Arc, collections::HashMap, time::Instant}; use zeroize::{Zeroize, Zeroizing}; use rand_core::{RngCore, OsRng}; @@ -15,6 +15,7 @@ use tokio::sync::mpsc; use serai_client::{ primitives::{NetworkId, PublicKey}, + validator_sets::primitives::ValidatorSet, Serai, }; use message_queue::{Service, client::MessageQueue}; @@ -23,7 +24,7 @@ use serai_task::{Task, TaskHandle, ContinuallyRan}; use serai_cosign::{Faulted, SignedCosign, Cosigning}; use serai_coordinator_substrate::{CanonicalEventStream, EphemeralEventStream, SignSlashReport}; -use serai_coordinator_tributary::Transaction; +use serai_coordinator_tributary::{Signed, Transaction, SubstrateBlockPlans}; mod db; use db::*; @@ -178,7 +179,12 @@ async fn handle_processor_messages( match msg { messages::ProcessorMessage::KeyGen(msg) => match msg { messages::key_gen::ProcessorMessage::Participation { session, participation } => { - todo!("TODO Transaction::DkgParticipation") + let set = ValidatorSet { network, session }; + TributaryTransactions::send( + &mut txn, + set, + &Transaction::DkgParticipation { participation, signed: Signed::default() }, + ); } messages::key_gen::ProcessorMessage::GeneratedKeyPair { session, @@ -186,12 +192,28 @@ async fn handle_processor_messages( network_key, } => todo!("TODO Transaction::DkgConfirmationPreprocess"), messages::key_gen::ProcessorMessage::Blame { session, participant } => { - todo!("TODO Transaction::RemoveParticipant") + let set = ValidatorSet { network, session }; + TributaryTransactions::send( + &mut txn, + set, + &Transaction::RemoveParticipant { + participant: todo!("TODO"), + signed: Signed::default(), + }, + ); } }, messages::ProcessorMessage::Sign(msg) => match msg { messages::sign::ProcessorMessage::InvalidParticipant { session, participant } => { - todo!("TODO Transaction::RemoveParticipant") + let set = ValidatorSet { network, session }; + TributaryTransactions::send( + &mut txn, + set, + &Transaction::RemoveParticipant { + participant: todo!("TODO"), + signed: Signed::default(), + }, + ); } messages::sign::ProcessorMessage::Preprocesses { id, preprocesses } => { todo!("TODO Transaction::Batch + Transaction::Sign") @@ -211,7 +233,22 @@ async fn handle_processor_messages( }, messages::ProcessorMessage::Substrate(msg) => match msg { messages::substrate::ProcessorMessage::SubstrateBlockAck { block, plans } => { - todo!("TODO Transaction::SubstrateBlock") + let mut by_session = HashMap::new(); + for plan in plans { + by_session + .entry(plan.session) + .or_insert_with(|| Vec::with_capacity(1)) + .push(plan.transaction_plan_id); + } + for (session, plans) in by_session { + let set = ValidatorSet { 
network, session }; + SubstrateBlockPlans::set(&mut txn, set, block, &plans); + TributaryTransactions::send( + &mut txn, + set, + &Transaction::SubstrateBlock { hash: block }, + ); + } } }, } @@ -274,6 +311,8 @@ async fn main() { prune_tributary_db(to_cleanup); // Drain the cosign intents created for this set while !Cosigning::::intended_cosigns(&mut txn, to_cleanup).is_empty() {} + // Drain the transactions to publish for this set + while TributaryTransactions::try_recv(&mut txn, to_cleanup).is_some() {} // Remove the SignSlashReport notification SignSlashReport::try_recv(&mut txn, to_cleanup); } diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index f09c14cd..a96cf225 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -13,7 +13,7 @@ use serai_db::{Get, DbTxn, Db as DbTrait, create_db, db_channel}; use scale::Encode; use serai_client::validator_sets::primitives::ValidatorSet; -use tributary_sdk::{TransactionError, ProvidedError, Tributary}; +use tributary_sdk::{TransactionKind, TransactionError, ProvidedError, TransactionTrait, Tributary}; use serai_task::{Task, TaskHandle, ContinuallyRan}; @@ -24,7 +24,7 @@ use serai_coordinator_substrate::{NewSetInformation, SignSlashReport}; use serai_coordinator_tributary::{Transaction, ProcessorMessages, CosignIntents, ScanTributaryTask}; use serai_coordinator_p2p::P2p; -use crate::Db; +use crate::{Db, TributaryTransactions}; db_channel! { Coordinator { @@ -32,6 +32,40 @@ db_channel! { } } +/// Provide a Provided Transaction to the Tributary. +/// +/// This is not a well-designed function. This is specific to the context in which its called, +/// within this file. It should only be considered an internal helper for this domain alone. +async fn provide_transaction( + set: ValidatorSet, + tributary: &Tributary, + tx: Transaction, +) { + match tributary.provide_transaction(tx.clone()).await { + // The Tributary uses its own DB, so we may provide this multiple times if we reboot before + // committing the txn which provoked this + Ok(()) | Err(ProvidedError::AlreadyProvided) => {} + Err(ProvidedError::NotProvided) => { + panic!("providing a Transaction which wasn't a Provided transaction: {tx:?}"); + } + Err(ProvidedError::InvalidProvided(e)) => { + panic!("providing an invalid Provided transaction, tx: {tx:?}, error: {e:?}") + } + // The Tributary's scan task won't advance if we don't have the Provided transactions + // present on-chain, and this enters an infinite loop to block the calling task from + // advancing + Err(ProvidedError::LocalMismatchesOnChain) => loop { + log::error!( + "Tributary {:?} was supposed to provide {:?} but peers disagree, halting Tributary", + set, + tx, + ); + // Print this every five minutes as this does need to be handled + tokio::time::sleep(Duration::from_secs(5 * 60)).await; + }, + } +} + /// Provides Cosign/Cosigned Transactions onto the Tributary. pub(crate) struct ProvideCosignCosignedTransactionsTask { db: CD, @@ -43,40 +77,6 @@ impl ContinuallyRan for ProvideCosignCosignedTransactionsTask { fn run_iteration(&mut self) -> impl Send + Future> { - /// Provide a Provided Transaction to the Tributary. - /// - /// This is not a well-designed function. This is specific to the context in which its called, - /// within this file. It should only be considered an internal helper for this domain alone. 
- async fn provide_transaction( - set: ValidatorSet, - tributary: &Tributary, - tx: Transaction, - ) { - match tributary.provide_transaction(tx.clone()).await { - // The Tributary uses its own DB, so we may provide this multiple times if we reboot before - // committing the txn which provoked this - Ok(()) | Err(ProvidedError::AlreadyProvided) => {} - Err(ProvidedError::NotProvided) => { - panic!("providing a Transaction which wasn't a Provided transaction: {tx:?}"); - } - Err(ProvidedError::InvalidProvided(e)) => { - panic!("providing an invalid Provided transaction, tx: {tx:?}, error: {e:?}") - } - Err(ProvidedError::LocalMismatchesOnChain) => loop { - // The Tributary's scan task won't advance if we don't have the Provided transactions - // present on-chain, and this enters an infinite loop to block the calling task from - // advancing - log::error!( - "Tributary {:?} was supposed to provide {:?} but peers disagree, halting Tributary", - set, - tx, - ); - // Print this every five minutes as this does need to be handled - tokio::time::sleep(Duration::from_secs(5 * 60)).await; - }, - } - } - async move { let mut made_progress = false; @@ -145,6 +145,66 @@ impl ContinuallyRan } } +/// Adds all of the transactions sent via `TributaryTransactions`. +pub(crate) struct AddTributaryTransactionsTask { + db: CD, + tributary_db: TD, + tributary: Tributary, + set: ValidatorSet, + key: Zeroizing<::F>, +} +impl ContinuallyRan for AddTributaryTransactionsTask { + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut made_progress = false; + loop { + let mut txn = self.db.txn(); + let Some(mut tx) = TributaryTransactions::try_recv(&mut txn, self.set) else { break }; + + let kind = tx.kind(); + match kind { + TransactionKind::Provided(_) => provide_transaction(self.set, &self.tributary, tx).await, + TransactionKind::Unsigned | TransactionKind::Signed(_, _) => { + // If this is a signed transaction, sign it + if matches!(kind, TransactionKind::Signed(_, _)) { + tx.sign(&mut OsRng, self.tributary.genesis(), &self.key); + } + + // Actually add the transaction + // TODO: If this is a preprocess, make sure the topic has been recognized + let res = self.tributary.add_transaction(tx.clone()).await; + match &res { + // Fresh publication, already published + Ok(true | false) => {} + Err( + TransactionError::TooLargeTransaction | + TransactionError::InvalidSigner | + TransactionError::InvalidNonce | + TransactionError::InvalidSignature | + TransactionError::InvalidContent, + ) => { + panic!("created an invalid transaction, tx: {tx:?}, err: {res:?}"); + } + // We've published too many transactions recently + // Drop this txn to try to publish it again later on a future iteration + Err(TransactionError::TooManyInMempool) => { + drop(txn); + break; + } + // This isn't a Provided transaction so this should never be hit + Err(TransactionError::ProvidedAddedToMempool) => unreachable!(), + } + } + } + + made_progress = true; + txn.commit(); + } + Ok(made_progress) + } + } +} + /// Takes the messages from ScanTributaryTask and publishes them to the message-queue. 
pub(crate) struct TributaryProcessorMessagesTask { tributary_db: TD, @@ -207,7 +267,10 @@ impl ContinuallyRan for SignSlashReportTask return Ok(false), + Err(TransactionError::TooManyInMempool) => { + drop(txn); + return Ok(false); + } // This isn't a Provided transaction so this should never be hit Err(TransactionError::ProvidedAddedToMempool) => unreachable!(), } @@ -343,14 +406,27 @@ pub(crate) async fn spawn_tributary( tokio::spawn( (SignSlashReportTask { db: db.clone(), - tributary_db, + tributary_db: tributary_db.clone(), tributary: tributary.clone(), set: set.clone(), - key: serai_key, + key: serai_key.clone(), }) .continually_run(sign_slash_report_task_def, vec![]), ); + // Spawn the add transactions task + let (add_tributary_transactions_task_def, add_tributary_transactions_task) = Task::new(); + tokio::spawn( + (AddTributaryTransactionsTask { + db: db.clone(), + tributary_db, + tributary: tributary.clone(), + set: set.set, + key: serai_key, + }) + .continually_run(add_tributary_transactions_task_def, vec![]), + ); + // Whenever a new block occurs, immediately run the scan task // This function also preserves the ProvideCosignCosignedTransactionsTask handle until the // Tributary is retired, ensuring it isn't dropped prematurely and that the task don't run ad @@ -360,6 +436,10 @@ pub(crate) async fn spawn_tributary( set.set, tributary, scan_tributary_task, - vec![provide_cosign_cosigned_transactions_task, sign_slash_report_task], + vec![ + provide_cosign_cosigned_transactions_task, + sign_slash_report_task, + add_tributary_transactions_task, + ], )); } diff --git a/coordinator/tributary/src/db.rs b/coordinator/tributary/src/db.rs index 9d426d96..08fac488 100644 --- a/coordinator/tributary/src/db.rs +++ b/coordinator/tributary/src/db.rs @@ -198,6 +198,9 @@ create_db!( // If this block has already been cosigned. Cosigned: (set: ValidatorSet, substrate_block_hash: [u8; 32]) -> (), + // The plans to whitelist upon a `Transaction::SubstrateBlock` being included on-chain. + SubstrateBlockPlans: (set: ValidatorSet, substrate_block_hash: [u8; 32]) -> Vec<[u8; 32]>, + // The weight accumulated for a topic. AccumulatedWeight: (set: ValidatorSet, topic: Topic) -> u64, // The entries accumulated for a topic, by validator. diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 91a77a62..e897afe5 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -30,8 +30,7 @@ use serai_coordinator_substrate::NewSetInformation; use messages::sign::VariantSignId; mod transaction; -pub(crate) use transaction::{SigningProtocolRound, Signed}; -pub use transaction::Transaction; +pub use transaction::{SigningProtocolRound, Signed, Transaction}; mod db; use db::*; @@ -63,6 +62,30 @@ impl CosignIntents { } } +/// The plans to whitelist upon a `Transaction::SubstrateBlock` being included on-chain. +pub struct SubstrateBlockPlans; +impl SubstrateBlockPlans { + /// Set the plans to whitelist upon the associated `Transaction::SubstrateBlock` being included + /// on-chain. + /// + /// This must be done before the associated `Transaction::Cosign` is provided. 
+ pub fn set( + txn: &mut impl DbTxn, + set: ValidatorSet, + substrate_block_hash: [u8; 32], + plans: &Vec<[u8; 32]>, + ) { + db::SubstrateBlockPlans::set(txn, set, substrate_block_hash, &plans); + } + fn take( + txn: &mut impl DbTxn, + set: ValidatorSet, + substrate_block_hash: [u8; 32], + ) -> Option> { + db::SubstrateBlockPlans::take(txn, set, substrate_block_hash) + } +} + struct ScanBlock<'a, TD: Db, TDT: DbTxn, P: P2p> { _td: PhantomData, _p2p: PhantomData
<P>
, @@ -222,11 +245,32 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { } Transaction::SubstrateBlock { hash } => { // Whitelist all of the IDs this Substrate block causes to be signed - todo!("TODO") + let plans = SubstrateBlockPlans::take(self.tributary_txn, self.set, hash).expect( + "Transaction::SubstrateBlock locally provided but SubstrateBlockPlans wasn't populated", + ); + for plan in plans { + TributaryDb::recognize_topic( + self.tributary_txn, + self.set, + Topic::Sign { + id: VariantSignId::Transaction(plan), + attempt: 0, + round: SigningProtocolRound::Preprocess, + }, + ); + } } Transaction::Batch { hash } => { - // Whitelist the signing of this batch, publishing our own preprocess - todo!("TODO") + // Whitelist the signing of this batch + TributaryDb::recognize_topic( + self.tributary_txn, + self.set, + Topic::Sign { + id: VariantSignId::Batch(hash), + attempt: 0, + round: SigningProtocolRound::Preprocess, + }, + ); } Transaction::SlashReport { slash_points, signed } => { diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index 5cda454b..acf01775 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -224,13 +224,13 @@ pub mod substrate { #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub struct PlanMeta { pub session: Session, - pub transaction: [u8; 32], + pub transaction_plan_id: [u8; 32], } #[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] pub enum ProcessorMessage { // TODO: Have the processor send this - SubstrateBlockAck { block: u64, plans: Vec }, + SubstrateBlockAck { block: [u8; 32], plans: Vec }, } } From 3cc2abfedc9b32da54e9f5c5aadcba2da33c7b78 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 12 Jan 2025 17:47:48 -0500 Subject: [PATCH 298/368] Add a task to publish slash reports --- coordinator/src/main.rs | 2 + coordinator/src/serai.rs | 93 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 95 insertions(+) create mode 100644 coordinator/src/serai.rs diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index f693650a..b6bb275d 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -39,6 +39,8 @@ mod p2p { pub use serai_coordinator_libp2p_p2p::Libp2p; } +mod serai; + // Use a zeroizing allocator for this entire application // While secrets should already be zeroized, the presence of secret keys in a networked application // (at increased risk of OOB reads) justifies the performance hit in case any secrets weren't diff --git a/coordinator/src/serai.rs b/coordinator/src/serai.rs new file mode 100644 index 00000000..b0093002 --- /dev/null +++ b/coordinator/src/serai.rs @@ -0,0 +1,93 @@ +use core::future::Future; +use std::sync::Arc; + +use serai_db::{Get, DbTxn, Db as DbTrait, create_db}; + +use scale::Decode; +use serai_client::{primitives::NetworkId, validator_sets::primitives::Session, Serai}; + +use serai_task::ContinuallyRan; + +create_db! { + CoordinatorSerai { + SlashReports: (network: NetworkId) -> (Session, Vec), + } +} + +/// Publish `SlashReport`s from `SlashReports` onto Serai. 
+pub struct PublishSlashReportTask { + db: CD, + serai: Arc, +} +impl ContinuallyRan for PublishSlashReportTask { + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut made_progress = false; + for network in serai_client::primitives::NETWORKS { + if network == NetworkId::Serai { + continue; + }; + + let mut txn = self.db.txn(); + let Some((session, slash_report)) = SlashReports::take(&mut txn, network) else { + // No slash report to publish + continue; + }; + let slash_report = serai_client::Transaction::decode(&mut slash_report.as_slice()).unwrap(); + + let serai = + self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; + let serai = serai.validator_sets(); + let session_after_slash_report = Session(session.0 + 1); + let current_session = serai.session(network).await.map_err(|e| format!("{e:?}"))?; + let current_session = current_session.map(|session| session.0); + // Only attempt to publish the slash report for session #n while session #n+1 is still + // active + let session_after_slash_report_retired = + current_session > Some(session_after_slash_report.0); + if session_after_slash_report_retired { + // Commit the txn to drain this SlashReport from the database and not try it again later + txn.commit(); + continue; + } + + if Some(session_after_slash_report.0) != current_session { + // We already checked the current session wasn't greater, and they're not equal + assert!(current_session < Some(session_after_slash_report.0)); + // This would mean the Serai node is resyncing and is behind where it prior was + Err("have a SlashReport for a session Serai has yet to retire".to_string())?; + } + + // If this session which should publish a slash report already has, move on + let key_pending_slash_report = + serai.key_pending_slash_report(network).await.map_err(|e| format!("{e:?}"))?; + if key_pending_slash_report.is_none() { + txn.commit(); + continue; + }; + + /* + let tx = serai_client::SeraiValidatorSets::report_slashes( + network, + slash_report, + signature.clone(), + ); + */ + + match self.serai.publish(&slash_report).await { + Ok(()) => { + txn.commit(); + made_progress = true; + } + // This could be specific to this TX (such as an already in mempool error) and it may be + // worthwhile to continue iteration with the other pending slash reports. We assume this + // error ephemeral and that the latency incurred for this ephemeral error to resolve is + // miniscule compared to the window available to publish the slash report. That makes + // this a non-issue. + Err(e) => Err(format!("couldn't publish slash report transaction: {e:?}"))?, + } + } + Ok(made_progress) + } + } +} From b5a6b0693e04783283db070a5dcac88ea98fd0b4 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 12 Jan 2025 18:29:08 -0500 Subject: [PATCH 299/368] Add a proper error type to ContinuallyRan This isn't necessary. Because we just log the error, we never match off of it, we don't need any structure beyond String (or now Debug, which still gives us a way to print the error). This is for the ergonomics of not having to constantly write `.map_err(|e| format!("{e:?}"))`. 
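As an illustrative sketch of the ergonomics this buys (`ExampleTask` and its `serai` field are hypothetical, used solely for illustration; the trait shape matches this diff), a task whose only failure mode is a Serai RPC error can now yield `SeraiError` directly instead of stringifying it:

use core::future::Future;
use std::sync::Arc;

use serai_client::{Serai, SeraiError};
use serai_task::ContinuallyRan;

// A hypothetical task, used solely for illustration
struct ExampleTask {
  serai: Arc<Serai>,
}

impl ContinuallyRan for ExampleTask {
  // Any type implementing Debug suffices, as the task runner only ever logs the error
  type Error = SeraiError;

  fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> {
    async move {
      // Previously, this call would have needed `.map_err(|e| format!("{e:?}"))?`
      let _serai = self.serai.as_of_latest_finalized_block().await?;
      Ok(false)
    }
  }
}
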
--- common/task/src/lib.rs | 27 +++++++++++++++++++++--- coordinator/cosign/src/delay.rs | 6 ++++-- coordinator/cosign/src/evaluator.rs | 4 +++- coordinator/cosign/src/intend.rs | 4 +++- coordinator/p2p/libp2p/src/dial.rs | 9 ++++---- coordinator/p2p/libp2p/src/validators.rs | 21 +++++++++--------- coordinator/p2p/src/heartbeat.rs | 4 +++- coordinator/src/serai.rs | 4 +++- coordinator/src/substrate.rs | 3 ++- coordinator/src/tributary.rs | 18 +++++++++++----- coordinator/substrate/src/canonical.rs | 4 +++- coordinator/substrate/src/ephemeral.rs | 4 +++- coordinator/tributary/src/lib.rs | 4 +++- processor/bin/src/coordinator.rs | 1 + processor/bitcoin/src/txindex.rs | 4 +++- processor/scanner/src/batch/mod.rs | 9 ++++++-- processor/scanner/src/eventuality/mod.rs | 4 +++- processor/scanner/src/index/mod.rs | 4 +++- processor/scanner/src/report/mod.rs | 6 ++++-- processor/scanner/src/scan/mod.rs | 4 +++- processor/scanner/src/substrate/mod.rs | 6 ++++-- processor/signers/src/batch/mod.rs | 6 ++++-- processor/signers/src/coordinator/mod.rs | 4 +++- processor/signers/src/cosign/mod.rs | 6 ++++-- processor/signers/src/slash_report.rs | 6 ++++-- processor/signers/src/transaction/mod.rs | 10 ++++----- 26 files changed, 126 insertions(+), 56 deletions(-) diff --git a/common/task/src/lib.rs b/common/task/src/lib.rs index 2a061c10..64cf9416 100644 --- a/common/task/src/lib.rs +++ b/common/task/src/lib.rs @@ -2,7 +2,11 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use core::{future::Future, time::Duration}; +use core::{ + fmt::{self, Debug}, + future::Future, + time::Duration, +}; use tokio::sync::mpsc; @@ -60,6 +64,15 @@ impl TaskHandle { } } +/// An enum which can't be constructed, representing that the task does not error. +pub enum DoesNotError {} +impl Debug for DoesNotError { + fn fmt(&self, _: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> { + // This type can't be constructed so we'll never have a `&self` to call this fn with + unreachable!() + } +} + /// A task to be continually ran. pub trait ContinuallyRan: Sized + Send { /// The amount of seconds before this task should be polled again. @@ -69,11 +82,14 @@ pub trait ContinuallyRan: Sized + Send { /// Upon error, the amount of time waited will be linearly increased until this limit. const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 120; + /// The error potentially yielded upon running an iteration of this task. + type Error: Debug; + /// Run an iteration of the task. /// /// If this returns `true`, all dependents of the task will immediately have a new iteration ran /// (without waiting for whatever timer they were already on). - fn run_iteration(&mut self) -> impl Send + Future>; + fn run_iteration(&mut self) -> impl Send + Future>; /// Continually run the task. fn continually_run( @@ -115,12 +131,17 @@ pub trait ContinuallyRan: Sized + Send { } } Err(e) => { - log::warn!("{}", e); + log::warn!("{e:?}"); increase_sleep_before_next_task(&mut current_sleep_before_next_task); } } // Don't run the task again for another few seconds UNLESS told to run now + /* + We could replace tokio::mpsc with async_channel, tokio::time::sleep with + patchable_async_sleep::sleep, and tokio::select with futures_lite::future::or + It isn't worth the effort when patchable_async_sleep::sleep will still resolve to tokio + */ tokio::select! 
{ () = tokio::time::sleep(Duration::from_secs(current_sleep_before_next_task)) => {}, msg = task.run_now.recv() => { diff --git a/coordinator/cosign/src/delay.rs b/coordinator/cosign/src/delay.rs index 5593eaf7..3439135b 100644 --- a/coordinator/cosign/src/delay.rs +++ b/coordinator/cosign/src/delay.rs @@ -2,7 +2,7 @@ use core::future::Future; use std::time::{Duration, SystemTime}; use serai_db::*; -use serai_task::ContinuallyRan; +use serai_task::{DoesNotError, ContinuallyRan}; use crate::evaluator::CosignedBlocks; @@ -25,7 +25,9 @@ pub(crate) struct CosignDelayTask { } impl ContinuallyRan for CosignDelayTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = DoesNotError; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; loop { diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs index db286a4f..4216d5a7 100644 --- a/coordinator/cosign/src/evaluator.rs +++ b/coordinator/cosign/src/evaluator.rs @@ -80,7 +80,9 @@ pub(crate) struct CosignEvaluatorTask { } impl ContinuallyRan for CosignEvaluatorTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut known_cosign = None; let mut made_progress = false; diff --git a/coordinator/cosign/src/intend.rs b/coordinator/cosign/src/intend.rs index ebe3513c..c42c2d12 100644 --- a/coordinator/cosign/src/intend.rs +++ b/coordinator/cosign/src/intend.rs @@ -61,7 +61,9 @@ pub(crate) struct CosignIntendTask { } impl ContinuallyRan for CosignIntendTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let start_block_number = ScanCosignFrom::get(&self.db).unwrap_or(1); let latest_block_number = diff --git a/coordinator/p2p/libp2p/src/dial.rs b/coordinator/p2p/libp2p/src/dial.rs index 1530e34b..b001446b 100644 --- a/coordinator/p2p/libp2p/src/dial.rs +++ b/coordinator/p2p/libp2p/src/dial.rs @@ -5,7 +5,7 @@ use rand_core::{RngCore, OsRng}; use tokio::sync::mpsc; -use serai_client::Serai; +use serai_client::{SeraiError, Serai}; use libp2p::{ core::multiaddr::{Protocol, Multiaddr}, @@ -50,7 +50,9 @@ impl ContinuallyRan for DialTask { const DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60; const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 10 * 60; - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = SeraiError; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { self.validators.update().await?; @@ -83,8 +85,7 @@ impl ContinuallyRan for DialTask { .unwrap_or(0) .saturating_sub(1)) { - let mut potential_peers = - self.serai.p2p_validators(network).await.map_err(|e| format!("{e:?}"))?; + let mut potential_peers = self.serai.p2p_validators(network).await?; for _ in 0 .. 
(TARGET_PEERS_PER_NETWORK - peer_count) { if potential_peers.is_empty() { break; diff --git a/coordinator/p2p/libp2p/src/validators.rs b/coordinator/p2p/libp2p/src/validators.rs index 951a5e99..0395ff3a 100644 --- a/coordinator/p2p/libp2p/src/validators.rs +++ b/coordinator/p2p/libp2p/src/validators.rs @@ -4,7 +4,7 @@ use std::{ collections::{HashSet, HashMap}, }; -use serai_client::{primitives::NetworkId, validator_sets::primitives::Session, Serai}; +use serai_client::{primitives::NetworkId, validator_sets::primitives::Session, SeraiError, Serai}; use serai_task::{Task, ContinuallyRan}; @@ -50,9 +50,8 @@ impl Validators { async fn session_changes( serai: impl Borrow, sessions: impl Borrow>, - ) -> Result)>, String> { - let temporal_serai = - serai.borrow().as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; + ) -> Result)>, SeraiError> { + let temporal_serai = serai.borrow().as_of_latest_finalized_block().await?; let temporal_serai = temporal_serai.validator_sets(); let mut session_changes = vec![]; @@ -69,7 +68,7 @@ impl Validators { let session = match temporal_serai.session(network).await { Ok(Some(session)) => session, Ok(None) => return Ok(None), - Err(e) => return Err(format!("{e:?}")), + Err(e) => return Err(e), }; if sessions.get(&network) == Some(&session) { @@ -81,7 +80,7 @@ impl Validators { session, validators.into_iter().map(peer_id_from_public).collect(), ))), - Err(e) => Err(format!("{e:?}")), + Err(e) => Err(e), } } }); @@ -147,7 +146,7 @@ impl Validators { } /// Update the view of the validators. - pub(crate) async fn update(&mut self) -> Result<(), String> { + pub(crate) async fn update(&mut self) -> Result<(), SeraiError> { let session_changes = Self::session_changes(&*self.serai, &self.sessions).await?; self.incorporate_session_changes(session_changes); Ok(()) @@ -200,13 +199,13 @@ impl ContinuallyRan for UpdateValidatorsTask { const DELAY_BETWEEN_ITERATIONS: u64 = 60; const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 5 * 60; - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = SeraiError; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let session_changes = { let validators = self.validators.read().await; - Validators::session_changes(validators.serai.clone(), validators.sessions.clone()) - .await - .map_err(|e| format!("{e:?}"))? + Validators::session_changes(validators.serai.clone(), validators.sessions.clone()).await? 
}; self.validators.write().await.incorporate_session_changes(session_changes); Ok(true) diff --git a/coordinator/p2p/src/heartbeat.rs b/coordinator/p2p/src/heartbeat.rs index 8a2f3220..f13a0e5c 100644 --- a/coordinator/p2p/src/heartbeat.rs +++ b/coordinator/p2p/src/heartbeat.rs @@ -45,7 +45,9 @@ pub(crate) struct HeartbeatTask { } impl ContinuallyRan for HeartbeatTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { // If our blockchain hasn't had a block in the past minute, trigger the heartbeat protocol const TIME_TO_TRIGGER_SYNCING: Duration = Duration::from_secs(60); diff --git a/coordinator/src/serai.rs b/coordinator/src/serai.rs index b0093002..20599b3d 100644 --- a/coordinator/src/serai.rs +++ b/coordinator/src/serai.rs @@ -20,7 +20,9 @@ pub struct PublishSlashReportTask { serai: Arc, } impl ContinuallyRan for PublishSlashReportTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; for network in serai_client::primitives::NETWORKS { diff --git a/coordinator/src/substrate.rs b/coordinator/src/substrate.rs index 224b6278..7601b2cc 100644 --- a/coordinator/src/substrate.rs +++ b/coordinator/src/substrate.rs @@ -32,7 +32,8 @@ pub(crate) struct SubstrateTask { } impl ContinuallyRan for SubstrateTask

{ - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; // TODO + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index a96cf225..a445be16 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -15,7 +15,7 @@ use serai_client::validator_sets::primitives::ValidatorSet; use tributary_sdk::{TransactionKind, TransactionError, ProvidedError, TransactionTrait, Tributary}; -use serai_task::{Task, TaskHandle, ContinuallyRan}; +use serai_task::{Task, TaskHandle, DoesNotError, ContinuallyRan}; use message_queue::{Service, Metadata, client::MessageQueue}; @@ -76,7 +76,9 @@ pub(crate) struct ProvideCosignCosignedTransactionsTask ContinuallyRan for ProvideCosignCosignedTransactionsTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; @@ -154,7 +156,9 @@ pub(crate) struct AddTributaryTransactionsTask key: Zeroizing<::F>, } impl ContinuallyRan for AddTributaryTransactionsTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = DoesNotError; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; loop { @@ -212,7 +216,9 @@ pub(crate) struct TributaryProcessorMessagesTask { message_queue: Arc, } impl ContinuallyRan for TributaryProcessorMessagesTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; // TODO + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; loop { @@ -242,7 +248,9 @@ pub(crate) struct SignSlashReportTask { key: Zeroizing<::F>, } impl ContinuallyRan for SignSlashReportTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = DoesNotError; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut txn = self.db.txn(); let Some(()) = SignSlashReport::try_recv(&mut txn, self.set.set) else { return Ok(false) }; diff --git a/coordinator/substrate/src/canonical.rs b/coordinator/substrate/src/canonical.rs index e1bbe6c2..34165774 100644 --- a/coordinator/substrate/src/canonical.rs +++ b/coordinator/substrate/src/canonical.rs @@ -34,7 +34,9 @@ impl CanonicalEventStream { } impl ContinuallyRan for CanonicalEventStream { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let next_block = NextBlock::get(&self.db).unwrap_or(0); let latest_finalized_block = diff --git a/coordinator/substrate/src/ephemeral.rs b/coordinator/substrate/src/ephemeral.rs index 54df6b3c..eacfed9d 100644 --- a/coordinator/substrate/src/ephemeral.rs +++ b/coordinator/substrate/src/ephemeral.rs @@ -39,7 +39,9 @@ impl EphemeralEventStream { } impl ContinuallyRan for EphemeralEventStream { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let next_block = NextBlock::get(&self.db).unwrap_or(0); let latest_finalized_block = diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index e897afe5..83300a0d 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -512,7 +512,9 @@ impl ScanTributaryTask { } impl ContinuallyRan for ScanTributaryTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error 
= String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let (mut last_block_number, mut last_block_hash) = TributaryDb::last_handled_tributary_block(&self.tributary_db, self.set) diff --git a/processor/bin/src/coordinator.rs b/processor/bin/src/coordinator.rs index ffafd466..591826bd 100644 --- a/processor/bin/src/coordinator.rs +++ b/processor/bin/src/coordinator.rs @@ -95,6 +95,7 @@ impl Coordinator { message_queue.ack(Service::Coordinator, msg.id).await; // Fire that there's a new message + // This assumes the success path, not the just-rebooted-path received_message_send .send(()) .expect("failed to tell the Coordinator there's a new message"); diff --git a/processor/bitcoin/src/txindex.rs b/processor/bitcoin/src/txindex.rs index 6a55a4c4..2ba40ca8 100644 --- a/processor/bitcoin/src/txindex.rs +++ b/processor/bitcoin/src/txindex.rs @@ -39,7 +39,9 @@ pub(crate) fn script_pubkey_for_on_chain_output( pub(crate) struct TxIndexTask(pub(crate) Rpc); impl ContinuallyRan for TxIndexTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let latest_block_number = self .0 diff --git a/processor/scanner/src/batch/mod.rs b/processor/scanner/src/batch/mod.rs index 158306ca..736c3ac4 100644 --- a/processor/scanner/src/batch/mod.rs +++ b/processor/scanner/src/batch/mod.rs @@ -7,7 +7,10 @@ use serai_db::{DbTxn, Db}; use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; -use primitives::{EncodableG, task::ContinuallyRan}; +use primitives::{ + EncodableG, + task::{DoesNotError, ContinuallyRan}, +}; use crate::{ db::{Returnable, ScannerGlobalDb, InInstructionData, ScanToBatchDb, BatchData, BatchToReportDb}, index, @@ -60,7 +63,9 @@ impl BatchTask { } impl ContinuallyRan for BatchTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = DoesNotError; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let highest_batchable = { // Fetch the next to scan block diff --git a/processor/scanner/src/eventuality/mod.rs b/processor/scanner/src/eventuality/mod.rs index bb3e4b7e..8a416903 100644 --- a/processor/scanner/src/eventuality/mod.rs +++ b/processor/scanner/src/eventuality/mod.rs @@ -190,7 +190,9 @@ impl> EventualityTask { } impl> ContinuallyRan for EventualityTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { // Fetch the highest acknowledged block let Some(highest_acknowledged) = ScannerGlobalDb::::highest_acknowledged_block(&self.db) diff --git a/processor/scanner/src/index/mod.rs b/processor/scanner/src/index/mod.rs index 03abc8a8..50032bae 100644 --- a/processor/scanner/src/index/mod.rs +++ b/processor/scanner/src/index/mod.rs @@ -58,7 +58,9 @@ impl IndexTask { } impl ContinuallyRan for IndexTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { // Fetch the latest finalized block let our_latest_finalized = IndexDb::latest_finalized_block(&self.db) diff --git a/processor/scanner/src/report/mod.rs b/processor/scanner/src/report/mod.rs index 2a7ab6a1..9055fcd0 100644 --- a/processor/scanner/src/report/mod.rs +++ b/processor/scanner/src/report/mod.rs @@ -4,7 +4,7 @@ use serai_db::{DbTxn, Db}; use serai_validator_sets_primitives::Session; -use primitives::task::ContinuallyRan; +use primitives::task::{DoesNotError, ContinuallyRan}; use 
crate::{ db::{BatchData, BatchToReportDb, BatchesToSign}, substrate, ScannerFeed, @@ -27,7 +27,9 @@ impl ReportTask { } impl ContinuallyRan for ReportTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = DoesNotError; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; loop { diff --git a/processor/scanner/src/scan/mod.rs b/processor/scanner/src/scan/mod.rs index 25127ace..24426c62 100644 --- a/processor/scanner/src/scan/mod.rs +++ b/processor/scanner/src/scan/mod.rs @@ -98,7 +98,9 @@ impl ScanTask { } impl ContinuallyRan for ScanTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { // Fetch the safe to scan block let latest_scannable = diff --git a/processor/scanner/src/substrate/mod.rs b/processor/scanner/src/substrate/mod.rs index 6b22a259..4963f66b 100644 --- a/processor/scanner/src/substrate/mod.rs +++ b/processor/scanner/src/substrate/mod.rs @@ -5,7 +5,7 @@ use serai_db::{Get, DbTxn, Db}; use serai_coins_primitives::{OutInstruction, OutInstructionWithBalance}; use messages::substrate::ExecutedBatch; -use primitives::task::ContinuallyRan; +use primitives::task::{DoesNotError, ContinuallyRan}; use crate::{ db::{ScannerGlobalDb, SubstrateToEventualityDb, AcknowledgedBatches}, index, batch, ScannerFeed, KeyFor, @@ -50,7 +50,9 @@ impl SubstrateTask { } impl ContinuallyRan for SubstrateTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = DoesNotError; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; loop { diff --git a/processor/signers/src/batch/mod.rs b/processor/signers/src/batch/mod.rs index c791f4e0..2c4fd1f5 100644 --- a/processor/signers/src/batch/mod.rs +++ b/processor/signers/src/batch/mod.rs @@ -14,7 +14,7 @@ use serai_db::{Get, DbTxn, Db}; use messages::sign::VariantSignId; -use primitives::task::ContinuallyRan; +use primitives::task::{DoesNotError, ContinuallyRan}; use scanner::{BatchesToSign, AcknowledgedBatches}; use frost_attempt_manager::*; @@ -79,7 +79,9 @@ impl BatchSignerTask { } impl ContinuallyRan for BatchSignerTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = DoesNotError; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut iterated = false; diff --git a/processor/signers/src/coordinator/mod.rs b/processor/signers/src/coordinator/mod.rs index 319f098c..003c14cd 100644 --- a/processor/signers/src/coordinator/mod.rs +++ b/processor/signers/src/coordinator/mod.rs @@ -22,7 +22,9 @@ impl CoordinatorTask { } impl ContinuallyRan for CoordinatorTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut iterated = false; diff --git a/processor/signers/src/cosign/mod.rs b/processor/signers/src/cosign/mod.rs index 2de18e86..dc5de6cd 100644 --- a/processor/signers/src/cosign/mod.rs +++ b/processor/signers/src/cosign/mod.rs @@ -11,7 +11,7 @@ use serai_db::{DbTxn, Db}; use messages::{sign::VariantSignId, coordinator::cosign_block_msg}; -use primitives::task::ContinuallyRan; +use primitives::task::{DoesNotError, ContinuallyRan}; use frost_attempt_manager::*; @@ -51,7 +51,9 @@ impl CosignerTask { } impl ContinuallyRan for CosignerTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = DoesNotError; + + fn run_iteration(&mut self) -> impl Send + Future> { async 
move { let mut iterated = false; diff --git a/processor/signers/src/slash_report.rs b/processor/signers/src/slash_report.rs index 577ec90b..a5d155ef 100644 --- a/processor/signers/src/slash_report.rs +++ b/processor/signers/src/slash_report.rs @@ -13,7 +13,7 @@ use serai_db::{DbTxn, Db}; use messages::sign::VariantSignId; -use primitives::task::ContinuallyRan; +use primitives::task::{DoesNotError, ContinuallyRan}; use scanner::ScannerFeed; use frost_attempt_manager::*; @@ -52,7 +52,9 @@ impl SlashReportSignerTask { } impl ContinuallyRan for SlashReportSignerTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = DoesNotError; + + fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut iterated = false; diff --git a/processor/signers/src/transaction/mod.rs b/processor/signers/src/transaction/mod.rs index efb20217..b62e7303 100644 --- a/processor/signers/src/transaction/mod.rs +++ b/processor/signers/src/transaction/mod.rs @@ -92,7 +92,9 @@ impl> impl>> ContinuallyRan for TransactionSignerTask { - fn run_iteration(&mut self) -> impl Send + Future> { + type Error = P::EphemeralError; + + fn run_iteration(&mut self) -> impl Send + Future> { async { let mut iterated = false; @@ -222,11 +224,7 @@ impl> let tx = TransactionFor::::read(&mut tx_buf).unwrap(); assert!(tx_buf.is_empty()); - self - .publisher - .publish(tx) - .await - .map_err(|e| format!("couldn't re-broadcast transactions: {e:?}"))?; + self.publisher.publish(tx).await?; } self.last_publication = Instant::now(); From 5e0e91c85dc360aa43e069c6c1773b1651fb57a9 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 14 Jan 2025 01:58:26 -0500 Subject: [PATCH 300/368] Add tasks to publish data onto Serai --- Cargo.lock | 1 + coordinator/Cargo.toml | 2 +- coordinator/src/main.rs | 6 +- coordinator/substrate/Cargo.toml | 4 +- coordinator/substrate/README.md | 10 +- coordinator/substrate/src/lib.rs | 122 +++++++++++++++++- coordinator/substrate/src/publish_batch.rs | 66 ++++++++++ .../src/publish_slash_report.rs} | 38 +++--- coordinator/substrate/src/set_keys.rs | 88 +++++++++++++ coordinator/tributary/src/lib.rs | 2 +- substrate/client/src/serai/validator_sets.rs | 2 + 11 files changed, 303 insertions(+), 38 deletions(-) create mode 100644 coordinator/substrate/src/publish_batch.rs rename coordinator/{src/serai.rs => substrate/src/publish_slash_report.rs} (77%) create mode 100644 coordinator/substrate/src/set_keys.rs diff --git a/Cargo.lock b/Cargo.lock index 2c649c36..ea9b74df 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8385,6 +8385,7 @@ dependencies = [ name = "serai-coordinator-substrate" version = "0.1.0" dependencies = [ + "bitvec", "borsh", "futures", "log", diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index 2eec60c8..ce3ceda1 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -30,7 +30,7 @@ schnorr = { package = "schnorr-signatures", path = "../crypto/schnorr", default- frost = { package = "modular-frost", path = "../crypto/frost" } frost-schnorrkel = { path = "../crypto/schnorrkel" } -scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive", "bit-vec"] } zalloc = { path = "../common/zalloc" } serai-db = { path = "../common/db" } diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index b6bb275d..e1f6708a 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -39,8 +39,6 
+39,6 @@ mod p2p { pub use serai_coordinator_libp2p_p2p::Libp2p; } -mod serai; - // Use a zeroizing allocator for this entire application // While secrets should already be zeroized, the presence of secret keys in a networked application // (at increased risk of OOB reads) justifies the performance hit in case any secrets weren't @@ -227,10 +225,10 @@ async fn handle_processor_messages( SignedCosigns::send(&mut txn, &cosign); } messages::coordinator::ProcessorMessage::SignedBatch { batch } => { - todo!("TODO Save to DB, have task read from DB and publish to Serai") + todo!("TODO PublishBatchTask") } messages::coordinator::ProcessorMessage::SignedSlashReport { session, signature } => { - todo!("TODO Save to DB, have task read from DB and publish to Serai") + todo!("TODO PublishSlashReportTask") } }, messages::ProcessorMessage::Substrate(msg) => match msg { diff --git a/coordinator/substrate/Cargo.toml b/coordinator/substrate/Cargo.toml index 21577d62..f4eeaa59 100644 --- a/coordinator/substrate/Cargo.toml +++ b/coordinator/substrate/Cargo.toml @@ -18,7 +18,9 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } +bitvec = { version = "1", default-features = false, features = ["std"] } + +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive", "bit-vec"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } serai-client = { path = "../../substrate/client", version = "0.1", default-features = false, features = ["serai", "borsh"] } diff --git a/coordinator/substrate/README.md b/coordinator/substrate/README.md index 83d217aa..1bce3218 100644 --- a/coordinator/substrate/README.md +++ b/coordinator/substrate/README.md @@ -1,6 +1,6 @@ -# Serai Coordinate Substrate Scanner +# Serai Coordinator Substrate -This is the scanner of the Serai blockchain for the purposes of Serai's coordinator. +This crate manages the Serai coordinator's interactions with Serai's Substrate blockchain. Two event streams are defined: @@ -12,3 +12,9 @@ Two event streams are defined: The canonical event stream is available without provision of a validator's public key. The ephemeral event stream requires provision of a validator's public key. Both are ordered within themselves, yet there are no ordering guarantees across the two. + +Additionally, a collection of tasks is defined to publish data onto Serai: + +- `SetKeysTask`, which sets the keys generated via DKGs onto Serai. +- `PublishBatchTask`, which publishes `Batch`s onto Serai. +- `PublishSlashReportTask`, which publishes `SlashReport`s onto Serai.
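
For orientation, a minimal sketch of how a binary might wire these three tasks up. The task constructors, `ContinuallyRan::continually_run`, and `serai_client::primitives::NETWORKS` are taken from the diffs in this patch; `Task::new()` returning a `(Task, TaskHandle)` pair, and `serai_db::Db` being `Clone`, are assumptions of this sketch rather than confirmed API:

  use std::sync::Arc;

  use serai_task::{Task, TaskHandle, ContinuallyRan};
  use serai_coordinator_substrate::{SetKeysTask, PublishBatchTask, PublishSlashReportTask};

  fn spawn_substrate_publish_tasks(
    db: impl serai_db::Db,
    serai: Arc<serai_client::Serai>,
  ) -> Vec<TaskHandle> {
    // Tasks stop once all of their handles are dropped, so the caller must keep these alive
    let mut handles = vec![];

    // Set the keys generated via DKGs onto Serai
    let (task, handle) = Task::new();
    tokio::spawn(SetKeysTask::new(db.clone(), serai.clone()).continually_run(task, vec![]));
    handles.push(handle);

    // Publish the signed slash reports onto Serai
    let (task, handle) = Task::new();
    tokio::spawn(
      PublishSlashReportTask::new(db.clone(), serai.clone()).continually_run(task, vec![]),
    );
    handles.push(handle);

    // Publish the signed `Batch`s onto Serai, one task per external network
    for network in serai_client::primitives::NETWORKS {
      // `new` returns None for `NetworkId::Serai`, which has no batches to publish
      let Some(batch_task) = PublishBatchTask::new(db.clone(), serai.clone(), network) else {
        continue;
      };
      let (task, handle) = Task::new();
      tokio::spawn(batch_task.continually_run(task, vec![]));
      handles.push(handle);
    }

    handles
  }
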
diff --git a/coordinator/substrate/src/lib.rs b/coordinator/substrate/src/lib.rs index 228cbed9..dc8056a7 100644 --- a/coordinator/substrate/src/lib.rs +++ b/coordinator/substrate/src/lib.rs @@ -6,8 +6,10 @@ use scale::{Encode, Decode}; use borsh::{io, BorshSerialize, BorshDeserialize}; use serai_client::{ - primitives::{PublicKey, NetworkId}, - validator_sets::primitives::ValidatorSet, + primitives::{NetworkId, PublicKey, Signature, SeraiAddress}, + validator_sets::primitives::{Session, ValidatorSet, KeyPair}, + in_instructions::primitives::SignedBatch, + Transaction, }; use serai_db::*; @@ -17,6 +19,13 @@ pub use canonical::CanonicalEventStream; mod ephemeral; pub use ephemeral::EphemeralEventStream; +mod set_keys; +pub use set_keys::SetKeysTask; +mod publish_batch; +pub use publish_batch::PublishBatchTask; +mod publish_slash_report; +pub use publish_slash_report::PublishSlashReportTask; + fn borsh_serialize_validators( validators: &Vec<(PublicKey, u16)>, writer: &mut W, @@ -53,11 +62,7 @@ pub struct NewSetInformation { } mod _public_db { - use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet}; - - use serai_db::*; - - use crate::NewSetInformation; + use super::*; db_channel!( CoordinatorSubstrate { @@ -68,6 +73,18 @@ mod _public_db { NewSet: () -> NewSetInformation, // Potentially relevant sign slash report, from an ephemeral event stream SignSlashReport: (set: ValidatorSet) -> (), + + // Signed batches to publish onto the Serai network + SignedBatches: (network: NetworkId) -> SignedBatch, + } + ); + + create_db!( + CoordinatorSubstrate { + // Keys to set on the Serai network + Keys: (network: NetworkId) -> (Session, Vec), + // Slash reports to publish onto the Serai network + SlashReports: (network: NetworkId) -> (Session, Vec), } ); } @@ -118,3 +135,94 @@ impl SignSlashReport { _public_db::SignSlashReport::try_recv(txn, set) } } + +/// The keys to set on Serai. +pub struct Keys; +impl Keys { + /// Set the keys to report for a validator set. + /// + /// This only saves the most recent keys as only a single session is eligible to have its keys + /// reported at once. + pub fn set( + txn: &mut impl DbTxn, + set: ValidatorSet, + key_pair: KeyPair, + signature_participants: bitvec::vec::BitVec, + signature: Signature, + ) { + // If we have a more recent pair of keys, don't write this historic one + if let Some((existing_session, _)) = _public_db::Keys::get(txn, set.network) { + if existing_session.0 >= set.session.0 { + return; + } + } + + let tx = serai_client::validator_sets::SeraiValidatorSets::set_keys( + set.network, + key_pair, + signature_participants, + signature, + ); + _public_db::Keys::set(txn, set.network, &(set.session, tx.encode())); + } + pub(crate) fn take(txn: &mut impl DbTxn, network: NetworkId) -> Option<(Session, Transaction)> { + let (session, tx) = _public_db::Keys::take(txn, network)?; + Some((session, <_>::decode(&mut tx.as_slice()).unwrap())) + } +} + +/// The signed batches to publish onto Serai. +pub struct SignedBatches; +impl SignedBatches { + /// Send a `SignedBatch` to publish onto Serai. + /// + /// These will be published sequentially. Out-of-order sending risks hanging the task. + pub fn send(txn: &mut impl DbTxn, batch: &SignedBatch) { + _public_db::SignedBatches::send(txn, batch.batch.network, batch); + } + pub(crate) fn try_recv(txn: &mut impl DbTxn, network: NetworkId) -> Option { + _public_db::SignedBatches::try_recv(txn, network) + } +} + +/// The slash report was invalid. 
+#[derive(Debug)] +pub struct InvalidSlashReport; + +/// The slash reports to publish onto Serai. +pub struct SlashReports; +impl SlashReports { + /// Set the slashes to report for a validator set. + /// + /// This only saves the most recent slashes as only a single session is eligible to have its + /// slashes reported at once. + /// + /// Returns Err if the slashes are invalid. Returns Ok if the slashes weren't detected as + /// invalid. Slashes may be considered invalid by the Serai blockchain later even if not detected + /// as invalid here. + pub fn set( + txn: &mut impl DbTxn, + set: ValidatorSet, + slashes: Vec<(SeraiAddress, u32)>, + signature: Signature, + ) -> Result<(), InvalidSlashReport> { + // If we have a more recent slash report, don't write this historic one + if let Some((existing_session, _)) = _public_db::SlashReports::get(txn, set.network) { + if existing_session.0 >= set.session.0 { + return Ok(()); + } + } + + let tx = serai_client::validator_sets::SeraiValidatorSets::report_slashes( + set.network, + slashes.try_into().map_err(|_| InvalidSlashReport)?, + signature, + ); + _public_db::SlashReports::set(txn, set.network, &(set.session, tx.encode())); + Ok(()) + } + pub(crate) fn take(txn: &mut impl DbTxn, network: NetworkId) -> Option<(Session, Transaction)> { + let (session, tx) = _public_db::SlashReports::take(txn, network)?; + Some((session, <_>::decode(&mut tx.as_slice()).unwrap())) + } +} diff --git a/coordinator/substrate/src/publish_batch.rs b/coordinator/substrate/src/publish_batch.rs new file mode 100644 index 00000000..6d186266 --- /dev/null +++ b/coordinator/substrate/src/publish_batch.rs @@ -0,0 +1,66 @@ +use core::future::Future; +use std::sync::Arc; + +use serai_db::{DbTxn, Db}; + +use serai_client::{primitives::NetworkId, SeraiError, Serai}; + +use serai_task::ContinuallyRan; + +use crate::SignedBatches; + +/// Publish `SignedBatch`s from `SignedBatches` onto Serai. +pub struct PublishBatchTask { + db: D, + serai: Arc, + network: NetworkId, +} + +impl PublishBatchTask { + /// Create a task to publish `SignedBatch`s onto Serai. + /// + /// Returns None if `network == NetworkId::Serai`. + // TODO: ExternalNetworkId + pub fn new(db: D, serai: Arc, network: NetworkId) -> Option { + if network == NetworkId::Serai { + None? + }; + Some(Self { db, serai, network }) + } +} + +impl ContinuallyRan for PublishBatchTask { + type Error = SeraiError; + + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut made_progress = false; + + loop { + let mut txn = self.db.txn(); + let Some(batch) = SignedBatches::try_recv(&mut txn, self.network) else { + // No batch to publish at this time + break; + }; + + // Publish this Batch if it hasn't already been published + let serai = self.serai.as_of_latest_finalized_block().await?; + let last_batch = serai.in_instructions().last_batch_for_network(self.network).await?; + if last_batch < Some(batch.batch.id) { + // This stream of Batches *should* be sequential within the larger context of the Serai + // coordinator. In this library, we use a more relaxed definition and don't assert + // sequence. This does risk hanging the task, if Batch #n+1 is sent before Batch #n, but + // that is a documented fault of the `SignedBatches` API. 
+ self + .serai + .publish(&serai_client::in_instructions::SeraiInInstructions::execute_batch(batch)) + .await?; + } + + txn.commit(); + made_progress = true; + } + Ok(made_progress) + } + } +} diff --git a/coordinator/src/serai.rs b/coordinator/substrate/src/publish_slash_report.rs similarity index 77% rename from coordinator/src/serai.rs rename to coordinator/substrate/src/publish_slash_report.rs index 20599b3d..9c20fcdd 100644 --- a/coordinator/src/serai.rs +++ b/coordinator/substrate/src/publish_slash_report.rs @@ -1,25 +1,28 @@ use core::future::Future; use std::sync::Arc; -use serai_db::{Get, DbTxn, Db as DbTrait, create_db}; +use serai_db::{DbTxn, Db}; -use scale::Decode; use serai_client::{primitives::NetworkId, validator_sets::primitives::Session, Serai}; use serai_task::ContinuallyRan; -create_db! { - CoordinatorSerai { - SlashReports: (network: NetworkId) -> (Session, Vec), +use crate::SlashReports; + +/// Publish slash reports from `SlashReports` onto Serai. +pub struct PublishSlashReportTask { + db: D, + serai: Arc, +} + +impl PublishSlashReportTask { + /// Create a task to publish slash reports onto Serai. + pub fn new(db: D, serai: Arc) -> Self { + Self { db, serai } } } -/// Publish `SlashReport`s from `SlashReports` onto Serai. -pub struct PublishSlashReportTask { - db: CD, - serai: Arc, -} -impl ContinuallyRan for PublishSlashReportTask { +impl ContinuallyRan for PublishSlashReportTask { type Error = String; fn run_iteration(&mut self) -> impl Send + Future> { @@ -35,7 +38,6 @@ impl ContinuallyRan for PublishSlashReportTask { // No slash report to publish continue; }; - let slash_report = serai_client::Transaction::decode(&mut slash_report.as_slice()).unwrap(); let serai = self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; @@ -48,7 +50,7 @@ impl ContinuallyRan for PublishSlashReportTask { let session_after_slash_report_retired = current_session > Some(session_after_slash_report.0); if session_after_slash_report_retired { - // Commit the txn to drain this SlashReport from the database and not try it again later + // Commit the txn to drain this slash report from the database and not try it again later txn.commit(); continue; } @@ -57,7 +59,7 @@ impl ContinuallyRan for PublishSlashReportTask { // We already checked the current session wasn't greater, and they're not equal assert!(current_session < Some(session_after_slash_report.0)); // This would mean the Serai node is resyncing and is behind where it prior was - Err("have a SlashReport for a session Serai has yet to retire".to_string())?; + Err("have a slash report for a session Serai has yet to retire".to_string())?; } // If this session which should publish a slash report already has, move on @@ -68,14 +70,6 @@ impl ContinuallyRan for PublishSlashReportTask { continue; }; - /* - let tx = serai_client::SeraiValidatorSets::report_slashes( - network, - slash_report, - signature.clone(), - ); - */ - match self.serai.publish(&slash_report).await { Ok(()) => { txn.commit(); diff --git a/coordinator/substrate/src/set_keys.rs b/coordinator/substrate/src/set_keys.rs new file mode 100644 index 00000000..129bb703 --- /dev/null +++ b/coordinator/substrate/src/set_keys.rs @@ -0,0 +1,88 @@ +use core::future::Future; +use std::sync::Arc; + +use serai_db::{DbTxn, Db}; + +use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet, Serai}; + +use serai_task::ContinuallyRan; + +use crate::Keys; + +/// Set keys from `Keys` on Serai. 
+pub struct SetKeysTask<D: Db> { + db: D, + serai: Arc<Serai>, +} + +impl<D: Db> SetKeysTask<D> { + /// Create a task to set keys on Serai. + pub fn new(db: D, serai: Arc<Serai>) -> Self { + Self { db, serai } + } +} + +impl<D: Db> ContinuallyRan for SetKeysTask<D> { + type Error = String; + + fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>> { + async move { + let mut made_progress = false; + for network in serai_client::primitives::NETWORKS { + if network == NetworkId::Serai { + continue; + }; + + let mut txn = self.db.txn(); + let Some((session, keys)) = Keys::take(&mut txn, network) else { + // No keys to set + continue; + }; + + let serai = + self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; + let serai = serai.validator_sets(); + let current_session = serai.session(network).await.map_err(|e| format!("{e:?}"))?; + let current_session = current_session.map(|session| session.0); + // Only attempt to set these keys if this isn't a retired session + if Some(session.0) < current_session { + // Commit the txn to take these keys from the database and not try it again later + txn.commit(); + continue; + } + + if Some(session.0) != current_session { + // We already checked the current session wasn't greater, and they're not equal + assert!(current_session < Some(session.0)); + // This would mean the Serai node is resyncing and is behind where it prior was + Err("have keys for a session Serai has yet to start".to_string())?; + } + + // If this session has already had its keys set, move on + if serai + .keys(ValidatorSet { network, session }) + .await + .map_err(|e| format!("{e:?}"))? + .is_some() + { + txn.commit(); + continue; + }; + + match self.serai.publish(&keys).await { + Ok(()) => { + txn.commit(); + made_progress = true; + } + // This could be specific to this TX (such as an already in mempool error) and it may be + // worthwhile to continue iteration with the other networks' pending keys. We assume this + // error is ephemeral and that the latency incurred for this ephemeral error to resolve is + // minuscule compared to the window reasonable to set the keys. That makes this a + // non-issue.
+ Err(e) => Err(format!("couldn't publish set keys transaction: {e:?}"))?, + } + } + Ok(made_progress) + } + } +} diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 83300a0d..80724c76 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -206,7 +206,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { } Transaction::DkgConfirmationShare { attempt, share, signed } => { // Accumulate the shares into our own FROST attempt manager - todo!("TODO") + todo!("TODO: SetKeysTask") } Transaction::Cosign { substrate_block_hash } => { diff --git a/substrate/client/src/serai/validator_sets.rs b/substrate/client/src/serai/validator_sets.rs index 89990406..8eb50b70 100644 --- a/substrate/client/src/serai/validator_sets.rs +++ b/substrate/client/src/serai/validator_sets.rs @@ -238,6 +238,8 @@ impl<'a> SeraiValidatorSets<'a> { pub fn report_slashes( network: NetworkId, + // TODO: This bounds a maximum length but takes more space than just publishing all the u32s + // (50 * (32 + 4)) > (150 * 4) slashes: sp_runtime::BoundedVec< (SeraiAddress, u32), sp_core::ConstU32<{ primitives::MAX_KEY_SHARES_PER_SET / 3 }>, From 291ebf5e2442c318f8630a5dd5540b921271e1e8 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 14 Jan 2025 02:22:28 -0500 Subject: [PATCH 301/368] Have serai-task warnings print with the name of the task --- common/task/src/lib.rs | 10 +++++++--- common/task/src/type_name.rs | 31 +++++++++++++++++++++++++++++++ 2 files changed, 38 insertions(+), 3 deletions(-) create mode 100644 common/task/src/type_name.rs diff --git a/common/task/src/lib.rs b/common/task/src/lib.rs index 64cf9416..83eac9bf 100644 --- a/common/task/src/lib.rs +++ b/common/task/src/lib.rs @@ -10,6 +10,8 @@ use core::{ use tokio::sync::mpsc; +mod type_name; + /// A handle for a task. /// /// The task will only stop running once all handles for it are dropped. @@ -49,8 +51,6 @@ impl Task { impl TaskHandle { /// Tell the task to run now (and not whenever its next iteration on a timer is). - /// - /// Panics if the task has been dropped. pub fn run_now(&self) { #[allow(clippy::match_same_arms)] match self.run_now.try_send(()) { @@ -58,6 +58,7 @@ impl TaskHandle { // NOP on full, as this task will already be ran as soon as possible Err(mpsc::error::TrySendError::Full(())) => {} Err(mpsc::error::TrySendError::Closed(())) => { + // The task should only be closed if all handles are dropped, and this one hasn't been panic!("task was unexpectedly closed when calling run_now") } } @@ -131,7 +132,10 @@ pub trait ContinuallyRan: Sized + Send { } } Err(e) => { - log::warn!("{e:?}"); + // Get the type name + let type_name = type_name::strip_type_name(core::any::type_name::<Self>()); + // Print the error as a warning, prefixed by the task's type + log::warn!("{type_name}: {e:?}"); increase_sleep_before_next_task(&mut current_sleep_before_next_task); } } diff --git a/common/task/src/type_name.rs b/common/task/src/type_name.rs new file mode 100644 index 00000000..c6ba1658 --- /dev/null +++ b/common/task/src/type_name.rs @@ -0,0 +1,31 @@ +/// Strip the modules from a type name.
+// This may be of the form `a::b::C`, in which case we only want `C` +pub(crate) fn strip_type_name(full_type_name: &'static str) -> String { + // It also may be `a::b::C<d::e::F>`, in which case, we only attempt to strip `a::b` + let mut by_generics = full_type_name.split('<'); + + // Strip to just `C` + let full_outer_object_name = by_generics.next().unwrap(); + let mut outer_object_name_parts = full_outer_object_name.split("::"); + let mut last_part_in_outer_object_name = outer_object_name_parts.next().unwrap(); + for part in outer_object_name_parts { + last_part_in_outer_object_name = part; + } + + // Push back on the generic terms + let mut type_name = last_part_in_outer_object_name.to_string(); + for generic in by_generics { + type_name.push('<'); + type_name.push_str(generic); + } + type_name +} + +#[test] +fn test_strip_type_name() { + assert_eq!(strip_type_name("core::option::Option"), "Option"); + assert_eq!( + strip_type_name("core::option::Option<u8>"), + "Option<u8>" + ); +} From a7fef2ba7a55dd0f41c5f27ed767a58b26a70dd8 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 14 Jan 2025 07:51:39 -0500 Subject: [PATCH 302/368] Redesign Slash/SlashReport types with a function to calculate the penalty --- coordinator/substrate/src/ephemeral.rs | 3 +- coordinator/tributary/src/lib.rs | 7 +- coordinator/tributary/src/transaction.rs | 4 +- substrate/abi/src/validator_sets.rs | 2 +- substrate/client/src/serai/validator_sets.rs | 2 +- substrate/emissions/pallet/src/lib.rs | 4 +- substrate/primitives/src/constants.rs | 1 + substrate/runtime/src/lib.rs | 2 +- substrate/validator-sets/pallet/src/lib.rs | 18 +- .../validator-sets/primitives/src/lib.rs | 58 ++-- .../primitives/src/slash_points.rs | 258 ++++++++++++++++++ 11 files changed, 307 insertions(+), 52 deletions(-) create mode 100644 substrate/validator-sets/primitives/src/slash_points.rs diff --git a/coordinator/substrate/src/ephemeral.rs b/coordinator/substrate/src/ephemeral.rs index eacfed9d..3ea8de98 100644 --- a/coordinator/substrate/src/ephemeral.rs +++ b/coordinator/substrate/src/ephemeral.rs @@ -159,8 +159,9 @@ impl ContinuallyRan for EphemeralEventStream { Err("validator's weight exceeded u16::MAX".to_string())?
}; + // Do the summation in u32 so we don't risk a u16 overflow let total_weight = validators.iter().map(|(_, weight)| u32::from(*weight)).sum::(); - if total_weight > MAX_KEY_SHARES_PER_SET { + if total_weight > u32::from(MAX_KEY_SHARES_PER_SET) { Err(format!( "{set:?} has {total_weight} key shares when the max is {MAX_KEY_SHARES_PER_SET}" ))?; diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 80724c76..f0aa8029 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -352,8 +352,11 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { // Create the resulting slash report let mut slash_report = vec![]; for (validator, points) in self.validators.iter().copied().zip(amortized_slash_report) { - if points != 0 { - slash_report.push(Slash { key: validator.into(), points }); + // TODO: Natively store this as a `Slash` + if points == u32::MAX { + slash_report.push(Slash::Fatal); + } else { + slash_report.push(Slash::Points(points)); } } assert!(slash_report.len() <= f); diff --git a/coordinator/tributary/src/transaction.rs b/coordinator/tributary/src/transaction.rs index befad461..2cc4600c 100644 --- a/coordinator/tributary/src/transaction.rs +++ b/coordinator/tributary/src/transaction.rs @@ -301,14 +301,14 @@ impl TransactionTrait for Transaction { Transaction::Batch { .. } => {} Transaction::Sign { data, .. } => { - if data.len() > usize::try_from(MAX_KEY_SHARES_PER_SET).unwrap() { + if data.len() > usize::from(MAX_KEY_SHARES_PER_SET) { Err(TransactionError::InvalidContent)? } // TODO: MAX_SIGN_LEN } Transaction::SlashReport { slash_points, .. } => { - if slash_points.len() > usize::try_from(MAX_KEY_SHARES_PER_SET).unwrap() { + if slash_points.len() > usize::from(MAX_KEY_SHARES_PER_SET) { Err(TransactionError::InvalidContent)? 
} } diff --git a/substrate/abi/src/validator_sets.rs b/substrate/abi/src/validator_sets.rs index 7a7bdc00..ec8a5714 100644 --- a/substrate/abi/src/validator_sets.rs +++ b/substrate/abi/src/validator_sets.rs @@ -21,7 +21,7 @@ pub enum Call { }, report_slashes { network: NetworkId, - slashes: BoundedVec<(SeraiAddress, u32), ConstU32<{ MAX_KEY_SHARES_PER_SET / 3 }>>, + slashes: BoundedVec<(SeraiAddress, u32), ConstU32<{ MAX_KEY_SHARES_PER_SET_U32 / 3 }>>, signature: Signature, }, allocate { diff --git a/substrate/client/src/serai/validator_sets.rs b/substrate/client/src/serai/validator_sets.rs index 8eb50b70..c92e4f89 100644 --- a/substrate/client/src/serai/validator_sets.rs +++ b/substrate/client/src/serai/validator_sets.rs @@ -242,7 +242,7 @@ impl<'a> SeraiValidatorSets<'a> { // (50 * (32 + 4)) > (150 * 4) slashes: sp_runtime::BoundedVec< (SeraiAddress, u32), - sp_core::ConstU32<{ primitives::MAX_KEY_SHARES_PER_SET / 3 }>, + sp_core::ConstU32<{ primitives::MAX_KEY_SHARES_PER_SET_U32 / 3 }>, >, signature: Signature, ) -> Transaction { diff --git a/substrate/emissions/pallet/src/lib.rs b/substrate/emissions/pallet/src/lib.rs index 400f8921..54c39c46 100644 --- a/substrate/emissions/pallet/src/lib.rs +++ b/substrate/emissions/pallet/src/lib.rs @@ -23,7 +23,7 @@ pub mod pallet { use economic_security_pallet::{Config as EconomicSecurityConfig, Pallet as EconomicSecurity}; use serai_primitives::*; - use validator_sets_primitives::{MAX_KEY_SHARES_PER_SET, Session}; + use validator_sets_primitives::{MAX_KEY_SHARES_PER_SET_U32, Session}; pub use emissions_primitives as primitives; use primitives::*; @@ -74,7 +74,7 @@ pub mod pallet { _, Identity, NetworkId, - BoundedVec<(PublicKey, u64), ConstU32<{ MAX_KEY_SHARES_PER_SET }>>, + BoundedVec<(PublicKey, u64), ConstU32<{ MAX_KEY_SHARES_PER_SET_U32 }>>, OptionQuery, >; diff --git a/substrate/primitives/src/constants.rs b/substrate/primitives/src/constants.rs index b3db7317..a3d4b6f9 100644 --- a/substrate/primitives/src/constants.rs +++ b/substrate/primitives/src/constants.rs @@ -3,6 +3,7 @@ use crate::BlockNumber; // 1 MB pub const BLOCK_SIZE: u32 = 1024 * 1024; // 6 seconds +// TODO: Use Duration pub const TARGET_BLOCK_TIME: u64 = 6; /// Measured in blocks. diff --git a/substrate/runtime/src/lib.rs b/substrate/runtime/src/lib.rs index e55270cb..3bb56a74 100644 --- a/substrate/runtime/src/lib.rs +++ b/substrate/runtime/src/lib.rs @@ -282,7 +282,7 @@ impl pallet_authorship::Config for Runtime { } // Maximum number of authorities per session. -pub type MaxAuthorities = ConstU32<{ validator_sets::primitives::MAX_KEY_SHARES_PER_SET }>; +pub type MaxAuthorities = ConstU32<{ validator_sets::primitives::MAX_KEY_SHARES_PER_SET_U32 }>; /// Longevity of an offence report. pub type ReportLongevity = ::EpochDuration; diff --git a/substrate/validator-sets/pallet/src/lib.rs b/substrate/validator-sets/pallet/src/lib.rs index 655c6722..2ba1b45f 100644 --- a/substrate/validator-sets/pallet/src/lib.rs +++ b/substrate/validator-sets/pallet/src/lib.rs @@ -141,7 +141,7 @@ pub mod pallet { _, Identity, NetworkId, - BoundedVec<(Public, u64), ConstU32<{ MAX_KEY_SHARES_PER_SET }>>, + BoundedVec<(Public, u64), ConstU32<{ MAX_KEY_SHARES_PER_SET_U32 }>>, OptionQuery, >; /// The validators selected to be in-set, regardless of if removed. 
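
The `u16`/`u32` constant split this patch introduces is what lets call sites like the `usize::from(MAX_KEY_SHARES_PER_SET)` uses in coordinator/tributary/src/transaction.rs above drop their `unwrap`s, while pallet and runtime code needing `ConstU32` bounds keeps a `u32` via `MAX_KEY_SHARES_PER_SET_U32`. A minimal illustration (constants as defined in this patch; the function is hypothetical):

  const MAX_KEY_SHARES_PER_SET: u16 = 150;
  const MAX_KEY_SHARES_PER_SET_U32: u32 = MAX_KEY_SHARES_PER_SET as u32;

  fn within_bounds(len: usize) -> bool {
    // Infallible: `usize: From<u16>` holds as Rust guarantees usize is at least 16 bits,
    // whereas a u32 constant required `usize::try_from(...).unwrap()`
    len <= usize::from(MAX_KEY_SHARES_PER_SET)
  }
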
@@ -402,7 +402,7 @@ pub mod pallet { // Clear the current InSet assert_eq!( - InSet::::clear_prefix(network, MAX_KEY_SHARES_PER_SET, None).maybe_cursor, + InSet::::clear_prefix(network, MAX_KEY_SHARES_PER_SET_U32, None).maybe_cursor, None ); @@ -412,11 +412,11 @@ pub mod pallet { { let mut iter = SortedAllocationsIter::::new(network); let mut key_shares = 0; - while key_shares < u64::from(MAX_KEY_SHARES_PER_SET) { + while key_shares < u64::from(MAX_KEY_SHARES_PER_SET_U32) { let Some((key, amount)) = iter.next() else { break }; let these_key_shares = - (amount.0 / allocation_per_key_share).min(u64::from(MAX_KEY_SHARES_PER_SET)); + (amount.0 / allocation_per_key_share).min(u64::from(MAX_KEY_SHARES_PER_SET_U32)); participants.push((key, these_key_shares)); key_shares += these_key_shares; @@ -535,7 +535,7 @@ pub mod pallet { top = Some(key_shares); } - if key_shares > u64::from(MAX_KEY_SHARES_PER_SET) { + if key_shares > u64::from(MAX_KEY_SHARES_PER_SET_U32) { break; } } @@ -547,7 +547,7 @@ pub mod pallet { // post_amortization_key_shares_for_top_validator yields what the top validator's key shares // would be after such a reduction, letting us evaluate this correctly let top = post_amortization_key_shares_for_top_validator(validators_len, top, key_shares); - (top * 3) < key_shares.min(MAX_KEY_SHARES_PER_SET.into()) + (top * 3) < key_shares.min(MAX_KEY_SHARES_PER_SET_U32.into()) } fn increase_allocation( @@ -586,7 +586,7 @@ pub mod pallet { // The above is_bft calls are only used to check a BFT net doesn't become non-BFT // Check here if this call would prevent a non-BFT net from *ever* becoming BFT - if (new_allocation / allocation_per_key_share) >= (MAX_KEY_SHARES_PER_SET / 3).into() { + if (new_allocation / allocation_per_key_share) >= (MAX_KEY_SHARES_PER_SET_U32 / 3).into() { Err(Error::::AllocationWouldPreventFaultTolerance)?; } @@ -1010,7 +1010,7 @@ pub mod pallet { pub fn report_slashes( origin: OriginFor, network: NetworkId, - slashes: BoundedVec<(Public, u32), ConstU32<{ MAX_KEY_SHARES_PER_SET / 3 }>>, + slashes: BoundedVec<(Public, u32), ConstU32<{ MAX_KEY_SHARES_PER_SET_U32 / 3 }>>, signature: Signature, ) -> DispatchResult { ensure_none(origin)?; @@ -1209,7 +1209,7 @@ pub mod pallet { ValidTransaction::with_tag_prefix("ValidatorSets") .and_provides((1, set)) - .longevity(MAX_KEY_SHARES_PER_SET.into()) + .longevity(MAX_KEY_SHARES_PER_SET_U32.into()) .propagate(true) .build() } diff --git a/substrate/validator-sets/primitives/src/lib.rs b/substrate/validator-sets/primitives/src/lib.rs index fe78fbca..8822aa00 100644 --- a/substrate/validator-sets/primitives/src/lib.rs +++ b/substrate/validator-sets/primitives/src/lib.rs @@ -1,5 +1,7 @@ #![cfg_attr(not(feature = "std"), no_std)] +use core::time::Duration; + #[cfg(feature = "std")] use zeroize::Zeroize; @@ -13,20 +15,30 @@ use borsh::{BorshSerialize, BorshDeserialize}; #[cfg(feature = "serde")] use serde::{Serialize, Deserialize}; -use sp_core::{ConstU32, sr25519::Public, bounded::BoundedVec}; +use sp_core::{ConstU32, bounded::BoundedVec, sr25519::Public}; #[cfg(not(feature = "std"))] use sp_std::vec::Vec; use serai_primitives::NetworkId; -/// The maximum amount of key shares per set. -pub const MAX_KEY_SHARES_PER_SET: u32 = 150; +mod slash_points; +pub use slash_points::*; + +/// The expected duration for a session. +// 1 week +pub const SESSION_LENGTH: Duration = Duration::from_secs(7 * 24 * 60 * 60); + +/// The maximum length for a key. // Support keys up to 96 bytes (BLS12-381 G2). 
pub const MAX_KEY_LEN: u32 = 96; +/// The maximum amount of key shares per set. +pub const MAX_KEY_SHARES_PER_SET: u16 = 150; +/// The maximum amount of key shares per set, as a `u32`. +pub const MAX_KEY_SHARES_PER_SET_U32: u32 = MAX_KEY_SHARES_PER_SET as u32; + /// The type used to identify a specific session of validators. #[derive( - Clone, Copy, PartialEq, Eq, Hash, Default, Debug, Encode, Decode, TypeInfo, MaxEncodedLen, + Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Debug, Encode, Decode, TypeInfo, MaxEncodedLen, )] #[cfg_attr(feature = "std", derive(Zeroize))] #[cfg_attr(feature = "borsh", derive(BorshSerialize, BorshDeserialize))] @@ -34,7 +46,9 @@ pub const MAX_KEY_LEN: u32 = 96; pub struct Session(pub u32); /// The type used to identify a specific validator set during a specific session. -#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode, TypeInfo, MaxEncodedLen)] +#[derive( + Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Debug, Encode, Decode, TypeInfo, MaxEncodedLen, +)] #[cfg_attr(feature = "std", derive(Zeroize))] #[cfg_attr(feature = "borsh", derive(BorshSerialize, BorshDeserialize))] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] @@ -43,13 +57,13 @@ pub struct ValidatorSet { pub network: NetworkId, } -type MaxKeyLen = ConstU32<MAX_KEY_LEN>; /// The type representing a Key from an external network. -pub type ExternalKey = BoundedVec<u8, MaxKeyLen>; +pub type ExternalKey = BoundedVec<u8, ConstU32<MAX_KEY_LEN>>; /// The key pair for a validator set. /// -/// This is their Ristretto key, used for signing Batches, and their key on the external network. +/// This is their Ristretto key, used for publishing data onto Serai, and their key on the external +/// network. #[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, TypeInfo, MaxEncodedLen)] #[cfg_attr(feature = "borsh", derive(BorshSerialize, BorshDeserialize))] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] @@ -81,12 +95,12 @@ impl Zeroize for KeyPair { /// The MuSig context for a validator set. pub fn musig_context(set: ValidatorSet) -> Vec<u8> { - [b"ValidatorSets-musig_key".as_ref(), &set.encode()].concat() + (b"ValidatorSets-musig_key".as_ref(), set).encode() } /// The MuSig public key for a validator set. /// -/// This function panics on invalid input. +/// This function panics on invalid input, per the definition of `dkg::musig::musig_key`. pub fn musig_key(set: ValidatorSet, set_keys: &[Public]) -> Public { let mut keys = Vec::new(); for key in set_keys { @@ -98,33 +112,11 @@ pub fn musig_key(set: ValidatorSet, set_keys: &[Public]) -> Public { Public(dkg::musig::musig_key::<Ristretto>(&musig_context(set), &keys).unwrap().to_bytes()) } -/// The message for the set_keys signature. +/// The message for the `set_keys` signature.
pub fn set_keys_message(set: &ValidatorSet, key_pair: &KeyPair) -> Vec<u8> { (b"ValidatorSets-set_keys", set, key_pair).encode() } -#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, Decode, TypeInfo, MaxEncodedLen)] -#[cfg_attr(feature = "borsh", derive(BorshSerialize, BorshDeserialize))] -#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] -pub struct Slash { - #[cfg_attr( - feature = "borsh", - borsh( - serialize_with = "serai_primitives::borsh_serialize_public", - deserialize_with = "serai_primitives::borsh_deserialize_public" - ) - )] - pub key: Public, - pub points: u32, -} -#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, TypeInfo, MaxEncodedLen)] -#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] -pub struct SlashReport(pub BoundedVec<Slash, ConstU32<{ MAX_KEY_SHARES_PER_SET / 3 }>>); - -pub fn report_slashes_message(set: &ValidatorSet, slashes: &SlashReport) -> Vec<u8> { - (b"ValidatorSets-report_slashes", set, slashes).encode() -} - /// For a set of validators whose key shares may exceed the maximum, reduce until they equal the /// maximum. /// diff --git a/substrate/validator-sets/primitives/src/slash_points.rs b/substrate/validator-sets/primitives/src/slash_points.rs new file mode 100644 index 00000000..d6fd0d68 --- /dev/null +++ b/substrate/validator-sets/primitives/src/slash_points.rs @@ -0,0 +1,258 @@ +use core::{num::NonZero, time::Duration}; + +#[cfg(feature = "std")] +use zeroize::Zeroize; + +use scale::{Encode, Decode, MaxEncodedLen}; +use scale_info::TypeInfo; + +#[cfg(feature = "borsh")] +use borsh::{BorshSerialize, BorshDeserialize}; +#[cfg(feature = "serde")] +use serde::{Serialize, Deserialize}; + +use sp_core::{ConstU32, bounded::BoundedVec}; +#[cfg(not(feature = "std"))] +use sp_std::vec::Vec; + +use serai_primitives::{TARGET_BLOCK_TIME, Amount}; + +use crate::{SESSION_LENGTH, MAX_KEY_SHARES_PER_SET_U32}; + +/// Each slash point is equivalent to the downtime implied by missing a block proposal. +// Takes a NonZero so that the result is never 0. +fn downtime_per_slash_point(validators: NonZero<u16>) -> Duration { + Duration::from_secs(TARGET_BLOCK_TIME) * u32::from(u16::from(validators)) +} + +/// A slash for a validator. +#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, Decode, MaxEncodedLen, TypeInfo)] +#[cfg_attr(feature = "std", derive(Zeroize))] +#[cfg_attr(feature = "borsh", derive(BorshSerialize, BorshDeserialize))] +#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] +pub enum Slash { + /// The slash points accumulated by this validator. + /// + /// Each point is considered as `downtime_per_slash_point(validators)` downtime, where + /// `validators` is the amount of validators present in the set. + Points(u32), + /// A fatal slash due to fundamentally faulty behavior. + /// + /// This should only be used for misbehavior with explicit evidence of impropriety. This should + /// not be used for liveness failures. The validator will be penalized all allocated stake. + Fatal, +} + +impl Slash { + /// Calculate the penalty which should be applied to the validator. + /// + /// Does not panic, even due to overflows, if `allocated_stake + session_rewards <= u64::MAX`.
+ pub fn penalty( + self, + validators: NonZero<u16>, + allocated_stake: Amount, + session_rewards: Amount, + ) -> Amount { + match self { + Self::Points(slash_points) => { + let mut slash_points = u64::from(slash_points); + // Do the logic with the stake in u128 to prevent overflow from multiplying u64s + let allocated_stake = u128::from(allocated_stake.0); + let session_rewards = u128::from(session_rewards.0); + + // A Serai validator is allowed to be offline for an average of one day every two weeks + // with no additional penalty. They'll solely not earn rewards for the time they were + // offline. + const GRACE_WINDOW: Duration = Duration::from_secs(2 * 7 * 24 * 60 * 60); + const GRACE: Duration = Duration::from_secs(24 * 60 * 60); + + // GRACE / GRACE_WINDOW is the fraction of the time a validator is allowed to be offline + // This means we want SESSION_LENGTH * (GRACE / GRACE_WINDOW), but with the parentheses + // moved so we don't incur the floordiv and hit 0 + const PENALTY_FREE_DOWNTIME: Duration = Duration::from_secs( + (SESSION_LENGTH.as_secs() * GRACE.as_secs()) / GRACE_WINDOW.as_secs(), + ); + + let downtime_per_slash_point = downtime_per_slash_point(validators); + let penalty_free_slash_points = + PENALTY_FREE_DOWNTIME.as_secs() / downtime_per_slash_point.as_secs(); + + /* + In practice, the following means: + + - Hours 0-12 are penalized as if they're hours 0-12. + - Hours 12-24 are penalized as if they're hours 12-36. + - Hours 24-36 are penalized as if they're hours 36-96. + - Hours 36-48 are penalized as if they're hours 96-168. + - Hours 48-168 are penalized for 0-2% of stake. + - 168-336 hours of slashes, for a session only lasting 168 hours, is penalized for 2-10% + of stake. + + This means a validator has to be offline for more than two days to start having + their stake slashed.
+ */ + + const MULTIPLIERS: [u64; 4] = [1, 2, 5, 6]; + let reward_slash = { + // In intervals of the penalty-free slash points, weight the slash points accrued + // The multiplier for the first interval is 1 as it's penalty-free + let mut weighted_slash_points_for_reward_slash = 0; + let mut total_possible_slash_points_for_rewards_slash = 0; + for mult in MULTIPLIERS { + let slash_points_in_interval = slash_points.min(penalty_free_slash_points); + weighted_slash_points_for_reward_slash += slash_points_in_interval * mult; + total_possible_slash_points_for_rewards_slash += penalty_free_slash_points * mult; + slash_points -= slash_points_in_interval; + } + // If there are no penalty-free slash points, and the validator was slashed, slash the + // entire reward + (u128::from(weighted_slash_points_for_reward_slash) * session_rewards) + .checked_div(u128::from(total_possible_slash_points_for_rewards_slash)) + .unwrap_or({ + if weighted_slash_points_for_reward_slash == 0 { + 0 + } else { + session_rewards + } + }) + }; + + let slash_points_for_entire_session = + SESSION_LENGTH.as_secs() / downtime_per_slash_point.as_secs(); + + let offline_slash = { + // The amount of stake to slash for being offline + const MAX_STAKE_SLASH_PERCENTAGE_OFFLINE: u64 = 2; + + let stake_to_slash_for_being_offline = + (allocated_stake * u128::from(MAX_STAKE_SLASH_PERCENTAGE_OFFLINE)) / 100; + + // We already removed the slash points for `intervals * penalty_free_slash_points` + let slash_points_for_reward_slash = + penalty_free_slash_points * u64::try_from(MULTIPLIERS.len()).unwrap(); + let slash_points_for_offline_stake_slash = + slash_points_for_entire_session.saturating_sub(slash_points_for_reward_slash); + + let slash_points_in_interval = slash_points.min(slash_points_for_offline_stake_slash); + slash_points -= slash_points_in_interval; + // If there are no slash points for the entire session, don't slash stake + // That's an extreme edge case which shouldn't start penalizing validators + (u128::from(slash_points_in_interval) * stake_to_slash_for_being_offline) + .checked_div(u128::from(slash_points_for_offline_stake_slash)) + .unwrap_or(0) + }; + + let disruptive_slash = { + /* + A validator may have more slash points than `slash_points_for_stake_slash` if they + didn't just accrue slashes for missing block proposals, yet also accrued slashes for + being disruptive. In that case, we still want to bound their slash points so they can't + somehow be slashed for 100% of their stake (which should only happen on a fatal slash). 
+ */ + const MAX_STAKE_SLASH_PERCENTAGE_DISRUPTIVE: u64 = 8; + + let stake_to_slash_for_being_disruptive = + (allocated_stake * u128::from(MAX_STAKE_SLASH_PERCENTAGE_DISRUPTIVE)) / 100; + // Follows the offline slash for `unwrap_or` policy + (u128::from(slash_points.min(slash_points_for_entire_session)) * + stake_to_slash_for_being_disruptive) + .checked_div(u128::from(slash_points_for_entire_session)) + .unwrap_or(0) + }; + + // The penalty is all slashes, but never more than the validator's balance + // (handles any rounding errors which may or may not exist) + let penalty_u128 = + (reward_slash + offline_slash + disruptive_slash).min(allocated_stake + session_rewards); + // saturating_into + Amount(u64::try_from(penalty_u128).unwrap_or(u64::MAX)) + } + // On fatal slash, their entire stake is removed + Self::Fatal => Amount(allocated_stake.0 + session_rewards.0), + } + } +} + +#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, TypeInfo, MaxEncodedLen)] +#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] +pub struct SlashReport(pub BoundedVec<Slash, ConstU32<{ MAX_KEY_SHARES_PER_SET_U32 / 3 }>>); + +// This is assumed to be bound to the ValidatorSet via the key it's signed with +pub fn report_slashes_message(slashes: &SlashReport) -> Vec<u8> { + (b"ValidatorSets-report_slashes", slashes).encode() +} + +#[test] +fn test_penalty() { + for validators in [1, 50, 100, crate::MAX_KEY_SHARES_PER_SET] { + let validators = NonZero::new(validators).unwrap(); + // 12 hours of slash points should only decrease the rewards proportionately + let twelve_hours_of_slash_points = + u32::try_from((12 * 60 * 60) / downtime_per_slash_point(validators).as_secs()).unwrap(); + assert_eq!( + Slash::Points(twelve_hours_of_slash_points).penalty( + validators, + Amount(u64::MAX), + Amount(168) + ), + Amount(12) + ); + // 24 hours of slash points should be counted as 36 hours + assert_eq!( + Slash::Points(2 * twelve_hours_of_slash_points).penalty( + validators, + Amount(u64::MAX), + Amount(168) + ), + Amount(36) + ); + // 36 hours of slash points should be counted as 96 hours + assert_eq!( + Slash::Points(3 * twelve_hours_of_slash_points).penalty( + validators, + Amount(u64::MAX), + Amount(168) + ), + Amount(96) + ); + // 48 hours of slash points should be counted as 168 hours + assert_eq!( + Slash::Points(4 * twelve_hours_of_slash_points).penalty( + validators, + Amount(u64::MAX), + Amount(168) + ), + Amount(168) + ); + + // A full week of slash points should slash 2% + let week_of_slash_points = 14 * twelve_hours_of_slash_points; + assert_eq!( + Slash::Points(week_of_slash_points).penalty(validators, Amount(1000), Amount(168)), + Amount(20 + 168) + ); + + // Two weeks of slash points should slash 10% + assert_eq!( + Slash::Points(2 * week_of_slash_points).penalty(validators, Amount(1000), Amount(168)), + Amount(100 + 168) + ); + + // Anything greater should still only slash 10% + assert_eq!( + Slash::Points(u32::MAX).penalty(validators, Amount(1000), Amount(168)), + Amount(100 + 168) + ); + } +} + +#[test] +fn no_overflow() { + Slash::Points(u32::MAX).penalty( + NonZero::new(u16::MAX).unwrap(), + Amount(u64::MAX), + Amount(u64::MAX), + ); + + Slash::Points(u32::MAX).penalty(NonZero::new(1).unwrap(), Amount(u64::MAX), Amount(u64::MAX)); +} From 6c145a5ec3d28e4c71b5d5ad287cbb14367736d8 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Tue, 14 Jan 2025 11:44:13 -0500 Subject: [PATCH 303/368] Disable offline, disruptive slashes Reasoning commented in codebase --- .../primitives/src/slash_points.rs | 40 +++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git
a/substrate/validator-sets/primitives/src/slash_points.rs b/substrate/validator-sets/primitives/src/slash_points.rs index d6fd0d68..20bb4e72 100644 --- a/substrate/validator-sets/primitives/src/slash_points.rs +++ b/substrate/validator-sets/primitives/src/slash_points.rs @@ -84,12 +84,17 @@ impl Slash { - Hours 12-24 are penalized as if they're hours 12-36. - Hours 24-36 are penalized as if they're hours 36-96. - Hours 36-48 are penalized as if they're hours 96-168. + + /* Commented, see below explanation of why. - Hours 48-168 are penalized for 0-2% of stake. - 168-336 hours of slashes, for a session only lasting 168 hours, is penalized for 2-10% of stake. This means a validator has to be offline for more than two days to start having their stake slashed. + */ + + This means a validator offline for two days will not earn any rewards for that session. */ const MULTIPLIERS: [u64; 4] = [1, 2, 5, 6]; @@ -117,6 +122,7 @@ impl Slash { }) }; + /* let slash_points_for_entire_session = SESSION_LENGTH.as_secs() / downtime_per_slash_point.as_secs(); @@ -159,6 +165,32 @@ impl Slash { .checked_div(u128::from(slash_points_for_entire_session)) .unwrap_or(0) }; + */ + + /* + We do not slash for being offline/disruptive at this time. Doing so allows an adversary + who DoSes nodes to not just take them offline, yet also take away their stake. This isn't + preferable to the increased incentive to properly maintain a node when the rewards should + already be sufficient for that purpose. + + Validators also shouldn't be able to be so disruptive due to being limited upon + disruption *while it's ongoing*. Slashes as a post-response, while an arguably worthwhile + economic penalty, can never be a response in the moment (as necessary to actually handle + the disruption). + + If stake slashing were to be re-enabled, the percentage of stake which is eligible for + slashing should scale with how close we are to losing liveness. This would mean if + less than 10% of validators are offline, no stake is slashed. If 10% are, 2% is eligible. + If 20% are, 5% is eligible. If 30% are, 10% is eligible. + + (or similar) + + This would mean that a DoS is insufficient to cause a validator to lose their stake. + Instead, a coordinated DoS against multiple Serai validators would be needed, + strengthening our assumptions. + */ + let offline_slash = 0; + let disruptive_slash = 0; // The penalty is all slashes, but never more than the validator's balance // (handles any rounding errors which may or may not exist) @@ -225,6 +257,7 @@ fn test_penalty() { Amount(168) ); + /* // A full week of slash points should slash 2% let week_of_slash_points = 14 * twelve_hours_of_slash_points; assert_eq!( @@ -243,6 +276,13 @@ fn test_penalty() { Slash::Points(u32::MAX).penalty(validators, Amount(1000), Amount(168)), Amount(100 + 168) ); + */ + + // Anything greater should still only slash the rewards + assert_eq!( + Slash::Points(u32::MAX).penalty(validators, Amount(u64::MAX), Amount(168)), + Amount(168) + ); } } From cb410cc4e08044216c7216606c04ce4b57264e1a Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 02:46:31 -0500 Subject: [PATCH 304/368] Correct how we handle rounding errors within the penalty fn We explicitly no longer slash stakes but we still set the maximum slash to the allocated stake + the rewards. Now, the reward slash is bound to the rewards and the stake slash is bound to the stake. This prevents an improperly rounded reward slash from effecting a stake slash.
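
For concreteness, the arithmetic the penalty schedule implies, worked with the constants from slash_points.rs (SESSION_LENGTH = 168 hours, GRACE = 24 hours, GRACE_WINDOW = 336 hours, so PENALTY_FREE_DOWNTIME = 168 * 24 / 336 = 12 hours) and the MULTIPLIERS of [1, 2, 5, 6] over successive 12-hour intervals of downtime:

  12h offline -> 12*1                      =  12 weighted hours -> 12/168 of rewards forfeited
  24h offline -> 12*1 + 12*2               =  36 weighted hours -> 36/168
  36h offline -> 12*1 + 12*2 + 12*5        =  96 weighted hours -> 96/168
  48h offline -> 12*1 + 12*2 + 12*5 + 12*6 = 168 weighted hours -> all rewards forfeited

The denominator, 12 * (1 + 2 + 5 + 6) = 168 weighted hours, equals the full session, matching `test_penalty`'s expected penalties of 12, 36, 96, and 168 against session rewards of 168. The fix below then clamps the reward slash to `session_rewards` and the (currently zeroed) stake slashes to `allocated_stake` separately, so a rounding error in one term can no longer spill over into the other.
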
--- substrate/validator-sets/primitives/src/slash_points.rs | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/substrate/validator-sets/primitives/src/slash_points.rs b/substrate/validator-sets/primitives/src/slash_points.rs index 20bb4e72..c045a4ef 100644 --- a/substrate/validator-sets/primitives/src/slash_points.rs +++ b/substrate/validator-sets/primitives/src/slash_points.rs @@ -121,6 +121,8 @@ impl Slash { } }) }; + // Ensure the slash never exceeds the amount slashable (due to rounding errors) + let reward_slash = reward_slash.min(session_rewards); /* let slash_points_for_entire_session = @@ -192,10 +194,9 @@ impl Slash { let offline_slash = 0; let disruptive_slash = 0; - // The penalty is all slashes, but never more than the validator's balance - // (handles any rounding errors which may or may not exist) - let penalty_u128 = - (reward_slash + offline_slash + disruptive_slash).min(allocated_stake + session_rewards); + let stake_slash = (offline_slash + disruptive_slash).min(allocated_stake); + + let penalty_u128 = reward_slash + stake_slash; // saturating_into Amount(u64::try_from(penalty_u128).unwrap_or(u64::MAX)) } From 0de3fda921f2cd4e641c3d0b917e7aa1a321095f Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 05:59:56 -0500 Subject: [PATCH 305/368] Further space out requests for cosigns from the network --- coordinator/cosign/src/evaluator.rs | 37 +++++++++++++++++++---------- coordinator/cosign/src/lib.rs | 10 +++++--- 2 files changed, 32 insertions(+), 15 deletions(-) diff --git a/coordinator/cosign/src/evaluator.rs b/coordinator/cosign/src/evaluator.rs index 4216d5a7..905e60fc 100644 --- a/coordinator/cosign/src/evaluator.rs +++ b/coordinator/cosign/src/evaluator.rs @@ -1,5 +1,5 @@ use core::future::Future; -use std::time::{Duration, SystemTime}; +use std::time::{Duration, Instant, SystemTime}; use serai_db::*; use serai_task::ContinuallyRan; @@ -77,12 +77,22 @@ pub(crate) fn currently_evaluated_global_session(getter: &impl Get) -> Option<[u pub(crate) struct CosignEvaluatorTask { pub(crate) db: D, pub(crate) request: R, + pub(crate) last_request_for_cosigns: Instant, } impl ContinuallyRan for CosignEvaluatorTask { type Error = String; fn run_iteration(&mut self) -> impl Send + Future> { + let should_request_cosigns = |last_request_for_cosigns: &mut Instant| { + const REQUEST_COSIGNS_SPACING: Duration = Duration::from_secs(60); + if Instant::now() < (*last_request_for_cosigns + REQUEST_COSIGNS_SPACING) { + return false; + } + *last_request_for_cosigns = Instant::now(); + true + }; + async move { let mut known_cosign = None; let mut made_progress = false; @@ -118,12 +128,13 @@ impl ContinuallyRan for CosignEvaluatorTask ContinuallyRan for CosignEvaluatorTask Cosigning { .continually_run(intend_task, vec![evaluator_task_handle]), ); tokio::spawn( - (evaluator::CosignEvaluatorTask { db: db.clone(), request }) - .continually_run(evaluator_task, vec![delay_task_handle]), + (evaluator::CosignEvaluatorTask { + db: db.clone(), + request, + last_request_for_cosigns: Instant::now(), + }) + .continually_run(evaluator_task, vec![delay_task_handle]), ); tokio::spawn( (delay::CosignDelayTask { db: db.clone() }) From 7ce5bdad4489112a8d56e9eb74c9540442107498 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 07:01:24 -0500 Subject: [PATCH 306/368] Don't add transactions for topics which have yet to be recognized --- coordinator/src/tributary.rs | 102 ++++++++++++++++++----- coordinator/tributary/src/db.rs | 52 +++++++++--- 
coordinator/tributary/src/lib.rs | 38 +++++++-- coordinator/tributary/src/transaction.rs | 34 +++++++- 4 files changed, 181 insertions(+), 45 deletions(-) diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index a445be16..2ebfd223 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -21,11 +21,19 @@ use message_queue::{Service, Metadata, client::MessageQueue}; use serai_cosign::{Faulted, CosignIntent, Cosigning}; use serai_coordinator_substrate::{NewSetInformation, SignSlashReport}; -use serai_coordinator_tributary::{Transaction, ProcessorMessages, CosignIntents, ScanTributaryTask}; +use serai_coordinator_tributary::{ + Topic, Transaction, ProcessorMessages, CosignIntents, RecognizedTopics, ScanTributaryTask, +}; use serai_coordinator_p2p::P2p; use crate::{Db, TributaryTransactions}; +create_db! { + Coordinator { + PublishOnRecognition: (set: ValidatorSet, topic: Topic) -> Transaction, + } +} + db_channel! { Coordinator { PendingCosigns: (set: ValidatorSet) -> CosignIntent, @@ -147,6 +155,37 @@ impl ContinuallyRan } } +#[must_use] +async fn add_signed_unsigned_transaction( + tributary: &Tributary, + tx: &Transaction, +) -> bool { + let res = tributary.add_transaction(tx.clone()).await; + match &res { + // Fresh publication, already published + Ok(true | false) => {} + // InvalidNonce may be out-of-order TXs, not invalid ones, but we only create nonce #n+1 after + // on-chain inclusion of the TX with nonce #n, so it is invalid within our context + Err( + TransactionError::TooLargeTransaction | + TransactionError::InvalidSigner | + TransactionError::InvalidNonce | + TransactionError::InvalidSignature | + TransactionError::InvalidContent, + ) => { + panic!("created an invalid transaction, tx: {tx:?}, err: {res:?}"); + } + // We've published too many transactions recently + Err(TransactionError::TooManyInMempool) => { + return false; + } + // This isn't a Provided transaction so this should never be hit + Err(TransactionError::ProvidedAddedToMempool) => unreachable!(), + } + + true +} + /// Adds all of the transactions sent via `TributaryTransactions`. 
pub(crate) struct AddTributaryTransactionsTask { db: CD, @@ -161,6 +200,8 @@ impl ContinuallyRan for AddTributaryTransactio fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; + + // Provide/add all transactions sent our way loop { let mut txn = self.db.txn(); let Some(mut tx) = TributaryTransactions::try_recv(&mut txn, self.set) else { break }; @@ -174,29 +215,27 @@ impl ContinuallyRan for AddTributaryTransactio tx.sign(&mut OsRng, self.tributary.genesis(), &self.key); } - // Actually add the transaction - // TODO: If this is a preprocess, make sure the topic has been recognized - let res = self.tributary.add_transaction(tx.clone()).await; - match &res { - // Fresh publication, already published - Ok(true | false) => {} - Err( - TransactionError::TooLargeTransaction | - TransactionError::InvalidSigner | - TransactionError::InvalidNonce | - TransactionError::InvalidSignature | - TransactionError::InvalidContent, - ) => { - panic!("created an invalid transaction, tx: {tx:?}, err: {res:?}"); - } - // We've published too many transactions recently - // Drop this txn to try to publish it again later on a future iteration - Err(TransactionError::TooManyInMempool) => { - drop(txn); + // If this is a transaction with signing data, check the topic is recognized before + // publishing + let topic = tx.topic(); + let still_requires_recognition = if let Some(topic) = topic { + (topic.requires_recognition() && + (!RecognizedTopics::recognized(&self.tributary_db, self.set, topic))) + .then_some(topic) + } else { + None + }; + if let Some(topic) = still_requires_recognition { + // Queue the transaction until the topic is recognized + // We use the Tributary DB for this so it's cleaned up when the Tributary DB is + let mut txn = self.tributary_db.txn(); + PublishOnRecognition::set(&mut txn, self.set, topic, &tx); + txn.commit(); + } else { + // Actually add the transaction + if !add_signed_unsigned_transaction(&self.tributary, &tx).await { break; } - // This isn't a Provided transaction so this should never be hit - Err(TransactionError::ProvidedAddedToMempool) => unreachable!(), } } } @@ -204,6 +243,25 @@ impl ContinuallyRan for AddTributaryTransactio made_progress = true; txn.commit(); } + + // Provide/add all transactions due to newly recognized topics + loop { + let mut txn = self.tributary_db.txn(); + let Some(topic) = + RecognizedTopics::try_recv_topic_requiring_recognition(&mut txn, self.set) + else { + break; + }; + if let Some(tx) = PublishOnRecognition::take(&mut txn, self.set, topic) { + if !add_signed_unsigned_transaction(&self.tributary, &tx).await { + break; + } + } + + made_progress = true; + txn.commit(); + } + Ok(made_progress) } } diff --git a/coordinator/tributary/src/db.rs b/coordinator/tributary/src/db.rs index 08fac488..aefe45d3 100644 --- a/coordinator/tributary/src/db.rs +++ b/coordinator/tributary/src/db.rs @@ -15,20 +15,35 @@ use crate::transaction::SigningProtocolRound; /// A topic within the database which the group participates in #[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)] -pub(crate) enum Topic { +pub enum Topic { /// Vote to remove a participant - RemoveParticipant { participant: SeraiAddress }, + RemoveParticipant { + /// The participant to remove + participant: SeraiAddress, + }, // DkgParticipation isn't represented here as participations are immediately sent to the // processor, not accumulated within this database /// Participation in the signing protocol to confirm the DKG results
on Substrate - DkgConfirmation { attempt: u32, round: SigningProtocolRound }, + DkgConfirmation { + /// The attempt number this is for + attempt: u32, + /// The round of the signing protocol + round: SigningProtocolRound, + }, /// The local view of the SlashReport, to be aggregated into the final SlashReport SlashReport, /// Participation in a signing protocol - Sign { id: VariantSignId, attempt: u32, round: SigningProtocolRound }, + Sign { + /// The ID of the signing protocol + id: VariantSignId, + /// The attempt number this is for + attempt: u32, + /// The round of the signing protocol + round: SigningProtocolRound, + }, } enum Participating { @@ -138,16 +153,17 @@ impl Topic { } } - fn requires_whitelisting(&self) -> bool { + /// If this topic requires recognition before entries are permitted for it. + pub fn requires_recognition(&self) -> bool { #[allow(clippy::match_same_arms)] match self { - // We don't require whitelisting to remove a participant + // We don't require recognition to remove a participant Topic::RemoveParticipant { .. } => false, - // We don't require whitelisting for the first attempt, solely the re-attempts + // We don't require recognition for the first attempt, solely the re-attempts Topic::DkgConfirmation { attempt, .. } => *attempt != 0, - // We don't require whitelisting for the slash report + // We don't require recognition for the slash report Topic::SlashReport { .. } => false, - // We do require whitelisting for every sign protocol + // We do require recognition for every sign protocol Topic::Sign { .. } => true, } } @@ -198,7 +214,7 @@ create_db!( // If this block has already been cosigned. Cosigned: (set: ValidatorSet, substrate_block_hash: [u8; 32]) -> (), - // The plans to whitelist upon a `Transaction::SubstrateBlock` being included on-chain. + // The plans to recognize upon a `Transaction::SubstrateBlock` being included on-chain. SubstrateBlockPlans: (set: ValidatorSet, substrate_block_hash: [u8; 32]) -> Vec<[u8; 32]>, // The weight accumulated for a topic. 
@@ -214,6 +230,7 @@ create_db!( db_channel!( CoordinatorTributary { ProcessorMessages: (set: ValidatorSet) -> messages::CoordinatorMessage, + RecognizedTopics: (set: ValidatorSet) -> Topic, } ); @@ -262,7 +279,7 @@ impl TributaryDb { ); ActivelyCosigning::set(txn, set, &substrate_block_hash); - TributaryDb::recognize_topic( + Self::recognize_topic( txn, set, Topic::Sign { @@ -292,6 +309,10 @@ impl TributaryDb { pub(crate) fn recognize_topic(txn: &mut impl DbTxn, set: ValidatorSet, topic: Topic) { AccumulatedWeight::set(txn, set, topic, &0); + RecognizedTopics::send(txn, set, &topic); + } + pub(crate) fn recognized(getter: &impl Get, set: ValidatorSet, topic: Topic) -> bool { + AccumulatedWeight::get(getter, set, topic).is_some() } pub(crate) fn start_of_block(txn: &mut impl DbTxn, set: ValidatorSet, block_number: u64) { @@ -350,8 +371,13 @@ impl TributaryDb { // nonces on transactions (deterministically to the topic) let accumulated_weight = AccumulatedWeight::get(txn, set, topic); - if topic.requires_whitelisting() && accumulated_weight.is_none() { - Self::fatal_slash(txn, set, validator, "participated in unrecognized topic"); + if topic.requires_recognition() && accumulated_weight.is_none() { + Self::fatal_slash( + txn, + set, + validator, + "participated in unrecognized topic which requires recognition", + ); return DataSet::None; } let mut accumulated_weight = accumulated_weight.unwrap_or(0); diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index f0aa8029..bd6119dd 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -34,6 +34,7 @@ pub use transaction::{SigningProtocolRound, Signed, Transaction}; mod db; use db::*; +pub use db::Topic; /// Messages to send to the Processors. pub struct ProcessorMessages; @@ -62,10 +63,28 @@ impl CosignIntents { } } -/// The plans to whitelist upon a `Transaction::SubstrateBlock` being included on-chain. +/// An interface to the topics recognized on this Tributary. +pub struct RecognizedTopics; +impl RecognizedTopics { + /// If this topic has been recognized by this Tributary. + /// + /// This will either be by explicit recognition or participation. + pub fn recognized(getter: &impl Get, set: ValidatorSet, topic: Topic) -> bool { + TributaryDb::recognized(getter, set, topic) + } + /// The next topic requiring recognition which has been recognized by this Tributary. + pub fn try_recv_topic_requiring_recognition( + txn: &mut impl DbTxn, + set: ValidatorSet, + ) -> Option { + db::RecognizedTopics::try_recv(txn, set) + } +} + +/// The plans to recognize upon a `Transaction::SubstrateBlock` being included on-chain. pub struct SubstrateBlockPlans; impl SubstrateBlockPlans { - /// Set the plans to whitelist upon the associated `Transaction::SubstrateBlock` being included + /// Set the plans to recognize upon the associated `Transaction::SubstrateBlock` being included /// on-chain. /// /// This must be done before the associated `Transaction::Cosign` is provided. 
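The accumulation path above, where `AccumulatedWeight` tracks a topic and participation in an unrecognized topic is a fatal slash, pairs with the 2/3rds-plus-one `required_participation` threshold these topics share. The following is a compact model under simplifying assumptions: string validators stand in for `SeraiAddress`, there is no database, and only the once-per-validator and threshold-crossing rules of the real `accumulate` are kept.

use std::collections::HashMap;

// The threshold shared by these topics: strictly more than 2/3rds of the total weight
fn required_participation(n: u16) -> u16 {
    ((2 * n) / 3) + 1
}

enum DataSet<D> {
    None,
    Participating(HashMap<&'static str, D>),
}

struct Accumulator<D> {
    total_weight: u16,
    accumulated_weight: u16,
    entries: HashMap<&'static str, D>,
}

impl<D> Accumulator<D> {
    fn new(total_weight: u16) -> Self {
        Self { total_weight, accumulated_weight: 0, entries: HashMap::new() }
    }

    // Accumulate one validator's entry, yielding the full data set exactly once,
    // upon crossing the threshold
    fn accumulate(&mut self, validator: &'static str, weight: u16, data: D) -> DataSet<D> {
        // Each validator may only be accumulated once per topic
        if self.entries.contains_key(validator) {
            return DataSet::None;
        }
        let threshold = required_participation(self.total_weight);
        let prior = self.accumulated_weight;
        self.accumulated_weight += weight;
        self.entries.insert(validator, data);
        if (prior < threshold) && (self.accumulated_weight >= threshold) {
            return DataSet::Participating(std::mem::take(&mut self.entries));
        }
        DataSet::None
    }
}

fn main() {
    // Five key shares total, so participation requires ((2 * 5) / 3) + 1 = 4
    let mut acc = Accumulator::new(5);
    assert!(matches!(acc.accumulate("alice", 2, "data"), DataSet::None));
    assert!(matches!(acc.accumulate("alice", 2, "data"), DataSet::None)); // deduplicated
    assert!(matches!(acc.accumulate("bob", 2, "data"), DataSet::Participating(_)));
}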
@@ -75,7 +94,7 @@ impl SubstrateBlockPlans { substrate_block_hash: [u8; 32], plans: &Vec<[u8; 32]>, ) { - db::SubstrateBlockPlans::set(txn, set, substrate_block_hash, &plans); + db::SubstrateBlockPlans::set(txn, set, substrate_block_hash, plans); } fn take( txn: &mut impl DbTxn, @@ -154,6 +173,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { } } + let topic = tx.topic(); match tx { // Accumulate this vote and fatally slash the participant if past the threshold Transaction::RemoveParticipant { participant, signed } => { @@ -176,7 +196,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { self.validators, self.total_weight, block_number, - Topic::RemoveParticipant { participant }, + topic.unwrap(), signer, self.validator_weights[&signer], &(), @@ -244,7 +264,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { self.potentially_start_cosign(); } Transaction::SubstrateBlock { hash } => { - // Whitelist all of the IDs this Substrate block causes to be signed + // Recognize all of the IDs this Substrate block causes to be signed let plans = SubstrateBlockPlans::take(self.tributary_txn, self.set, hash).expect( "Transaction::SubstrateBlock locally provided but SubstrateBlockPlans wasn't populated", ); @@ -261,7 +281,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { } } Transaction::Batch { hash } => { - // Whitelist the signing of this batch + // Recognize the signing of this batch TributaryDb::recognize_topic( self.tributary_txn, self.set, @@ -293,7 +313,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { self.validators, self.total_weight, block_number, - Topic::SlashReport, + topic.unwrap(), signer, self.validator_weights[&signer], &slash_points, @@ -351,7 +371,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { // Create the resulting slash report let mut slash_report = vec![]; - for (validator, points) in self.validators.iter().copied().zip(amortized_slash_report) { + for (_, points) in self.validators.iter().copied().zip(amortized_slash_report) { // TODO: Natively store this as a `Slash` if points == u32::MAX { slash_report.push(Slash::Fatal); @@ -385,7 +405,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { } Transaction::Sign { id, attempt, round, data, signed } => { - let topic = Topic::Sign { id, attempt, round }; + let topic = topic.unwrap(); let signer = signer(signed); if u64::try_from(data.len()).unwrap() != self.validator_weights[&signer] { diff --git a/coordinator/tributary/src/transaction.rs b/coordinator/tributary/src/transaction.rs index 2cc4600c..f72d2620 100644 --- a/coordinator/tributary/src/transaction.rs +++ b/coordinator/tributary/src/transaction.rs @@ -25,6 +25,8 @@ use tributary_sdk::{ }, }; +use crate::db::Topic; + /// The round this data is for, within a signing protocol. #[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)] pub enum SigningProtocolRound { @@ -180,7 +182,7 @@ pub enum Transaction { /// /// This is provided after the block has been cosigned. /// - /// With the acknowledgement of a Substrate block, we can whitelist all the `VariantSignId`s + /// With the acknowledgement of a Substrate block, we can recognize all the `VariantSignId`s /// resulting from its handling. SubstrateBlock { /// The hash of the Substrate block @@ -318,6 +320,36 @@ impl TransactionTrait for Transaction { } impl Transaction { + /// The topic in the database for this transaction. 
+ pub fn topic(&self) -> Option { + #[allow(clippy::match_same_arms)] // This doesn't make semantic sense here + match self { + Transaction::RemoveParticipant { participant, .. } => { + Some(Topic::RemoveParticipant { participant: *participant }) + } + + Transaction::DkgParticipation { .. } => None, + Transaction::DkgConfirmationPreprocess { attempt, .. } => { + Some(Topic::DkgConfirmation { attempt: *attempt, round: SigningProtocolRound::Preprocess }) + } + Transaction::DkgConfirmationShare { attempt, .. } => { + Some(Topic::DkgConfirmation { attempt: *attempt, round: SigningProtocolRound::Share }) + } + + // Provided TXs + Transaction::Cosign { .. } | + Transaction::Cosigned { .. } | + Transaction::SubstrateBlock { .. } | + Transaction::Batch { .. } => None, + + Transaction::Sign { id, attempt, round, .. } => { + Some(Topic::Sign { id: *id, attempt: *attempt, round: *round }) + } + + Transaction::SlashReport { .. } => Some(Topic::SlashReport), + } + } + /// Sign a transaction. /// /// Panics if signing a transaction whose type isn't `TransactionKind::Signed`. From 3357181fe2771cbbf67450f3c1f6fb2deeb14b78 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 10:47:47 -0500 Subject: [PATCH 307/368] Handle sign::ProcessorMessage::[Preprocesses, Shares] --- coordinator/src/main.rs | 38 ++++++++++++++++++++++-- coordinator/tributary/src/transaction.rs | 2 -- 2 files changed, 35 insertions(+), 5 deletions(-) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index e1f6708a..f189ffad 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -24,7 +24,7 @@ use serai_task::{Task, TaskHandle, ContinuallyRan}; use serai_cosign::{Faulted, SignedCosign, Cosigning}; use serai_coordinator_substrate::{CanonicalEventStream, EphemeralEventStream, SignSlashReport}; -use serai_coordinator_tributary::{Signed, Transaction, SubstrateBlockPlans}; +use serai_coordinator_tributary::{SigningProtocolRound, Signed, Transaction, SubstrateBlockPlans}; mod db; use db::*; @@ -216,9 +216,41 @@ async fn handle_processor_messages( ); } messages::sign::ProcessorMessage::Preprocesses { id, preprocesses } => { - todo!("TODO Transaction::Batch + Transaction::Sign") + let set = ValidatorSet { network, session: id.session }; + if id.attempt == 0 { + // Batches are declared by their intent to be signed + // TODO: Document this in processor <-> coordinator rebuild issue + if let messages::sign::VariantSignId::Batch(hash) = id.id { + TributaryTransactions::send(&mut txn, set, &Transaction::Batch { hash }); + } + } + + TributaryTransactions::send( + &mut txn, + set, + &Transaction::Sign { + id: id.id, + attempt: id.attempt, + round: SigningProtocolRound::Preprocess, + data: preprocesses, + signed: Signed::default(), + }, + ); + } + messages::sign::ProcessorMessage::Shares { id, shares } => { + let set = ValidatorSet { network, session: id.session }; + TributaryTransactions::send( + &mut txn, + set, + &Transaction::Sign { + id: id.id, + attempt: id.attempt, + round: SigningProtocolRound::Share, + data: shares, + signed: Signed::default(), + }, + ); } - messages::sign::ProcessorMessage::Shares { id, shares } => todo!("TODO Transaction::Sign"), }, messages::ProcessorMessage::Coordinator(msg) => match msg { messages::coordinator::ProcessorMessage::CosignedBlock { cosign } => { diff --git a/coordinator/tributary/src/transaction.rs b/coordinator/tributary/src/transaction.rs index f72d2620..b302f8d7 100644 --- a/coordinator/tributary/src/transaction.rs +++ b/coordinator/tributary/src/transaction.rs @@ 
-259,9 +259,7 @@ impl TransactionTrait for Transaction { Transaction::Cosign { .. } => TransactionKind::Provided("Cosign"), Transaction::Cosigned { .. } => TransactionKind::Provided("Cosigned"), - // TODO: Provide this Transaction::SubstrateBlock { .. } => TransactionKind::Provided("SubstrateBlock"), - // TODO: Provide this Transaction::Batch { .. } => TransactionKind::Provided("Batch"), Transaction::Sign { id, attempt, round, signed, .. } => TransactionKind::Signed( From 92a4cceeebac52e3d572dfe018243b853cdfd4c6 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 11:21:55 -0500 Subject: [PATCH 308/368] Spawn PublishBatchTask Also removes the expectation Batches published via it are sent in an ordered fashion. That won't be true if the signing protocols complete out-of-order (as possible when we are signing them in parallel). --- coordinator/src/main.rs | 25 +++++++-- coordinator/substrate/src/lib.rs | 2 - coordinator/substrate/src/publish_batch.rs | 59 +++++++++++++++------- 3 files changed, 62 insertions(+), 24 deletions(-) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index f189ffad..5895f74c 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -23,7 +23,9 @@ use message_queue::{Service, client::MessageQueue}; use serai_task::{Task, TaskHandle, ContinuallyRan}; use serai_cosign::{Faulted, SignedCosign, Cosigning}; -use serai_coordinator_substrate::{CanonicalEventStream, EphemeralEventStream, SignSlashReport}; +use serai_coordinator_substrate::{ + CanonicalEventStream, EphemeralEventStream, SignSlashReport, SignedBatches, PublishBatchTask, +}; use serai_coordinator_tributary::{SigningProtocolRound, Signed, Transaction, SubstrateBlockPlans}; mod db; @@ -145,11 +147,24 @@ fn spawn_cosigning( }); } -async fn handle_processor_messages( +async fn handle_network( mut db: impl serai_db::Db, message_queue: Arc, + serai: Arc, network: NetworkId, ) { + // Spawn the task to publish batches for this network + { + let (publish_batch_task_def, publish_batch_task) = Task::new(); + tokio::spawn( + PublishBatchTask::new(db.clone(), serai.clone(), network) + .unwrap() + .continually_run(publish_batch_task_def, vec![]), + ); + core::mem::forget(publish_batch_task); + } + + // Handle Processor messages loop { let (msg_id, msg) = { let msg = message_queue.next(Service::Processor(network)).await; @@ -257,7 +272,7 @@ async fn handle_processor_messages( SignedCosigns::send(&mut txn, &cosign); } messages::coordinator::ProcessorMessage::SignedBatch { batch } => { - todo!("TODO PublishBatchTask") + SignedBatches::send(&mut txn, &batch); } messages::coordinator::ProcessorMessage::SignedSlashReport { session, signature } => { todo!("TODO PublishSlashReportTask") @@ -449,12 +464,12 @@ async fn main() { .continually_run(substrate_task_def, vec![]), ); - // Handle all of the Processors' messages + // Handle each of the networks for network in serai_client::primitives::NETWORKS { if network == NetworkId::Serai { continue; } - tokio::spawn(handle_processor_messages(db.clone(), message_queue.clone(), network)); + tokio::spawn(handle_network(db.clone(), message_queue.clone(), serai.clone(), network)); } // Run the spawned tasks ad-infinitum diff --git a/coordinator/substrate/src/lib.rs b/coordinator/substrate/src/lib.rs index dc8056a7..a03c05dd 100644 --- a/coordinator/substrate/src/lib.rs +++ b/coordinator/substrate/src/lib.rs @@ -175,8 +175,6 @@ impl Keys { pub struct SignedBatches; impl SignedBatches { /// Send a `SignedBatch` to publish onto Serai. 
- /// - /// These will be published sequentially. Out-of-order sending risks hanging the task. pub fn send(txn: &mut impl DbTxn, batch: &SignedBatch) { _public_db::SignedBatches::send(txn, batch.batch.network, batch); } diff --git a/coordinator/substrate/src/publish_batch.rs b/coordinator/substrate/src/publish_batch.rs index 6d186266..e9038d87 100644 --- a/coordinator/substrate/src/publish_batch.rs +++ b/coordinator/substrate/src/publish_batch.rs @@ -1,14 +1,21 @@ use core::future::Future; use std::sync::Arc; -use serai_db::{DbTxn, Db}; - -use serai_client::{primitives::NetworkId, SeraiError, Serai}; +#[rustfmt::skip] +use serai_client::{primitives::NetworkId, in_instructions::primitives::SignedBatch, SeraiError, Serai}; +use serai_db::{Get, DbTxn, Db, create_db}; use serai_task::ContinuallyRan; use crate::SignedBatches; +create_db!( + CoordinatorSubstrate { + LastPublishedBatch: (network: NetworkId) -> u32, + BatchesToPublish: (network: NetworkId, batch: u32) -> SignedBatch, + } +); + /// Publish `SignedBatch`s from `SignedBatches` onto Serai. pub struct PublishBatchTask { db: D, @@ -34,32 +41,50 @@ impl ContinuallyRan for PublishBatchTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { - let mut made_progress = false; - + // Read from SignedBatches, which is sequential, into our own mapping loop { let mut txn = self.db.txn(); let Some(batch) = SignedBatches::try_recv(&mut txn, self.network) else { - // No batch to publish at this time break; }; - // Publish this Batch if it hasn't already been published + // If this is a Batch not yet published, save it into our unordered mapping + if LastPublishedBatch::get(&txn, self.network) < Some(batch.batch.id) { + BatchesToPublish::set(&mut txn, self.network, batch.batch.id, &batch); + } + + txn.commit(); + } + + // Synchronize our last published batch with the Serai network's + let next_to_publish = { let serai = self.serai.as_of_latest_finalized_block().await?; let last_batch = serai.in_instructions().last_batch_for_network(self.network).await?; - if last_batch < Some(batch.batch.id) { - // This stream of Batches *should* be sequential within the larger context of the Serai - // coordinator. In this library, we use a more relaxed definition and don't assert - // sequence. This does risk hanging the task, if Batch #n+1 is sent before Batch #n, but - // that is a documented fault of the `SignedBatches` API. 
+ + let mut txn = self.db.txn(); + let mut our_last_batch = LastPublishedBatch::get(&txn, self.network); + while our_last_batch < last_batch { + let next_batch = our_last_batch.map(|batch| batch + 1).unwrap_or(0); + // Clean up the Batch to publish since it's already been published + BatchesToPublish::take(&mut txn, self.network, next_batch); + our_last_batch = Some(next_batch); + } + if let Some(last_batch) = our_last_batch { + LastPublishedBatch::set(&mut txn, self.network, &last_batch); + } + last_batch.map(|batch| batch + 1).unwrap_or(0) + }; + + let made_progress = + if let Some(batch) = BatchesToPublish::get(&self.db, self.network, next_to_publish) { self .serai .publish(&serai_client::in_instructions::SeraiInInstructions::execute_batch(batch)) .await?; - } - - txn.commit(); - made_progress = true; - } + true + } else { + false + }; Ok(made_progress) } } From 7312fa8d3ccd8bc812085159afbc2eaa04021666 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 12:08:28 -0500 Subject: [PATCH 309/368] Spawn PublishSlashReportTask Updates it so that it'll try for every network instead of returning after any network fails. Uses the SlashReport type throughout the codebase. --- coordinator/src/main.rs | 27 ++++- coordinator/substrate/src/lib.rs | 21 +--- .../substrate/src/publish_slash_report.rs | 112 ++++++++++-------- coordinator/tributary/src/lib.rs | 4 +- processor/messages/src/lib.rs | 10 +- substrate/abi/src/validator_sets.rs | 2 +- substrate/client/src/serai/validator_sets.rs | 11 +- substrate/runtime/src/abi.rs | 20 +--- substrate/validator-sets/pallet/src/lib.rs | 2 +- .../primitives/src/slash_points.rs | 24 ++++ 10 files changed, 132 insertions(+), 101 deletions(-) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 5895f74c..0048ebd1 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -14,7 +14,7 @@ use borsh::BorshDeserialize; use tokio::sync::mpsc; use serai_client::{ - primitives::{NetworkId, PublicKey}, + primitives::{NetworkId, PublicKey, Signature}, validator_sets::primitives::ValidatorSet, Serai, }; @@ -25,6 +25,7 @@ use serai_task::{Task, TaskHandle, ContinuallyRan}; use serai_cosign::{Faulted, SignedCosign, Cosigning}; use serai_coordinator_substrate::{ CanonicalEventStream, EphemeralEventStream, SignSlashReport, SignedBatches, PublishBatchTask, + SlashReports, PublishSlashReportTask, }; use serai_coordinator_tributary::{SigningProtocolRound, Signed, Transaction, SubstrateBlockPlans}; @@ -161,6 +162,7 @@ async fn handle_network( .unwrap() .continually_run(publish_batch_task_def, vec![]), ); + // Forget its handle so it always runs in the background core::mem::forget(publish_batch_task); } @@ -274,8 +276,17 @@ async fn handle_network( messages::coordinator::ProcessorMessage::SignedBatch { batch } => { SignedBatches::send(&mut txn, &batch); } - messages::coordinator::ProcessorMessage::SignedSlashReport { session, signature } => { - todo!("TODO PublishSlashReportTask") + messages::coordinator::ProcessorMessage::SignedSlashReport { + session, + slash_report, + signature, + } => { + SlashReports::set( + &mut txn, + ValidatorSet { network, session }, + slash_report, + Signature(signature), + ); } }, messages::ProcessorMessage::Substrate(msg) => match msg { @@ -472,6 +483,16 @@ async fn main() { tokio::spawn(handle_network(db.clone(), message_queue.clone(), serai.clone(), network)); } + // Spawn the task to publish slash reports + { + let (publish_slash_report_task_def, publish_slash_report_task) = Task::new(); + tokio::spawn( + 
PublishSlashReportTask::new(db, serai).continually_run(publish_slash_report_task_def, vec![]), + ); + // Always have this run in the background + core::mem::forget(publish_slash_report_task); + } + // Run the spawned tasks ad-infinitum core::future::pending().await } diff --git a/coordinator/substrate/src/lib.rs b/coordinator/substrate/src/lib.rs index a03c05dd..c8e437f4 100644 --- a/coordinator/substrate/src/lib.rs +++ b/coordinator/substrate/src/lib.rs @@ -6,8 +6,8 @@ use scale::{Encode, Decode}; use borsh::{io, BorshSerialize, BorshDeserialize}; use serai_client::{ - primitives::{NetworkId, PublicKey, Signature, SeraiAddress}, - validator_sets::primitives::{Session, ValidatorSet, KeyPair}, + primitives::{NetworkId, PublicKey, Signature}, + validator_sets::primitives::{Session, ValidatorSet, KeyPair, SlashReport}, in_instructions::primitives::SignedBatch, Transaction, }; @@ -183,10 +183,6 @@ impl SignedBatches { } } -/// The slash report was invalid. -#[derive(Debug)] -pub struct InvalidSlashReport; - /// The slash reports to publish onto Serai. pub struct SlashReports; impl SlashReports { @@ -194,30 +190,25 @@ impl SlashReports { /// /// This only saves the most recent slashes as only a single session is eligible to have its /// slashes reported at once. - /// - /// Returns Err if the slashes are invalid. Returns Ok if the slashes weren't detected as - /// invalid. Slashes may be considered invalid by the Serai blockchain later even if not detected - /// as invalid here. pub fn set( txn: &mut impl DbTxn, set: ValidatorSet, - slashes: Vec<(SeraiAddress, u32)>, + slash_report: SlashReport, signature: Signature, - ) -> Result<(), InvalidSlashReport> { + ) { // If we have a more recent slash report, don't write this historic one if let Some((existing_session, _)) = _public_db::SlashReports::get(txn, set.network) { if existing_session.0 >= set.session.0 { - return Ok(()); + return; } } let tx = serai_client::validator_sets::SeraiValidatorSets::report_slashes( set.network, - slashes.try_into().map_err(|_| InvalidSlashReport)?, + slash_report, signature, ); _public_db::SlashReports::set(txn, set.network, &(set.session, tx.encode())); - Ok(()) } pub(crate) fn take(txn: &mut impl DbTxn, network: NetworkId) -> Option<(Session, Transaction)> { let (session, tx) = _public_db::SlashReports::take(txn, network)?; diff --git a/coordinator/substrate/src/publish_slash_report.rs b/coordinator/substrate/src/publish_slash_report.rs index 9c20fcdd..a26d4bd6 100644 --- a/coordinator/substrate/src/publish_slash_report.rs +++ b/coordinator/substrate/src/publish_slash_report.rs @@ -22,66 +22,80 @@ impl PublishSlashReportTask { } } +impl PublishSlashReportTask { + // Returns if a slash report was successfully published + async fn publish(&mut self, network: NetworkId) -> Result { + let mut txn = self.db.txn(); + let Some((session, slash_report)) = SlashReports::take(&mut txn, network) else { + // No slash report to publish + return Ok(false); + }; + + let serai = self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; + let serai = serai.validator_sets(); + let session_after_slash_report = Session(session.0 + 1); + let current_session = serai.session(network).await.map_err(|e| format!("{e:?}"))?; + let current_session = current_session.map(|session| session.0); + // Only attempt to publish the slash report for session #n while session #n+1 is still + // active + let session_after_slash_report_retired = current_session > Some(session_after_slash_report.0); + if 
session_after_slash_report_retired { + // Commit the txn to drain this slash report from the database and not try it again later + txn.commit(); + return Ok(false); + } + + if Some(session_after_slash_report.0) != current_session { + // We already checked the current session wasn't greater, and they're not equal + assert!(current_session < Some(session_after_slash_report.0)); + // This would mean the Serai node is resyncing and is behind where it prior was + Err("have a slash report for a session Serai has yet to retire".to_string())?; + } + + // If this session which should publish a slash report already has, move on + let key_pending_slash_report = + serai.key_pending_slash_report(network).await.map_err(|e| format!("{e:?}"))?; + if key_pending_slash_report.is_none() { + txn.commit(); + return Ok(false); + }; + + match self.serai.publish(&slash_report).await { + Ok(()) => { + txn.commit(); + Ok(true) + } + // This could be specific to this TX (such as an already in mempool error) and it may be + // worthwhile to continue iteration with the other pending slash reports. We assume this + // error ephemeral and that the latency incurred for this ephemeral error to resolve is + // miniscule compared to the window available to publish the slash report. That makes + // this a non-issue. + Err(e) => Err(format!("couldn't publish slash report transaction: {e:?}")), + } + } +} + impl ContinuallyRan for PublishSlashReportTask { type Error = String; fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; + let mut error = None; for network in serai_client::primitives::NETWORKS { if network == NetworkId::Serai { continue; }; - let mut txn = self.db.txn(); - let Some((session, slash_report)) = SlashReports::take(&mut txn, network) else { - // No slash report to publish - continue; - }; - - let serai = - self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; - let serai = serai.validator_sets(); - let session_after_slash_report = Session(session.0 + 1); - let current_session = serai.session(network).await.map_err(|e| format!("{e:?}"))?; - let current_session = current_session.map(|session| session.0); - // Only attempt to publish the slash report for session #n while session #n+1 is still - // active - let session_after_slash_report_retired = - current_session > Some(session_after_slash_report.0); - if session_after_slash_report_retired { - // Commit the txn to drain this slash report from the database and not try it again later - txn.commit(); - continue; - } - - if Some(session_after_slash_report.0) != current_session { - // We already checked the current session wasn't greater, and they're not equal - assert!(current_session < Some(session_after_slash_report.0)); - // This would mean the Serai node is resyncing and is behind where it prior was - Err("have a slash report for a session Serai has yet to retire".to_string())?; - } - - // If this session which should publish a slash report already has, move on - let key_pending_slash_report = - serai.key_pending_slash_report(network).await.map_err(|e| format!("{e:?}"))?; - if key_pending_slash_report.is_none() { - txn.commit(); - continue; - }; - - match self.serai.publish(&slash_report).await { - Ok(()) => { - txn.commit(); - made_progress = true; - } - // This could be specific to this TX (such as an already in mempool error) and it may be - // worthwhile to continue iteration with the other pending slash reports. 
We assume this - // error ephemeral and that the latency incurred for this ephemeral error to resolve is - // miniscule compared to the window available to publish the slash report. That makes - // this a non-issue. - Err(e) => Err(format!("couldn't publish slash report transaction: {e:?}"))?, - } + let network_res = self.publish(network).await; + // We made progress if any network successfully published their slash report + made_progress |= network_res == Ok(true); + // We want to yield the first error *after* attempting for every network + error = error.or(network_res.err()); + } + // Yield the error + if let Some(error) = error { + Err(error)? } Ok(made_progress) } diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index bd6119dd..6b8616aa 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -371,7 +371,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { // Create the resulting slash report let mut slash_report = vec![]; - for (_, points) in self.validators.iter().copied().zip(amortized_slash_report) { + for points in amortized_slash_report { // TODO: Natively store this as a `Slash` if points == u32::MAX { slash_report.push(Slash::Fatal); @@ -397,7 +397,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { self.set, messages::coordinator::CoordinatorMessage::SignSlashReport { session: self.set.session, - report: slash_report, + slash_report: slash_report.try_into().unwrap(), }, ); } diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index acf01775..b8f496ab 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -7,7 +7,7 @@ use borsh::{BorshSerialize, BorshDeserialize}; use dkg::Participant; use serai_primitives::BlockHash; -use validator_sets_primitives::{Session, KeyPair, Slash}; +use validator_sets_primitives::{Session, KeyPair, SlashReport}; use coins_primitives::OutInstructionWithBalance; use in_instructions_primitives::SignedBatch; @@ -100,7 +100,9 @@ pub mod sign { Self::Cosign(cosign) => { f.debug_struct("VariantSignId::Cosign").field("0", &cosign).finish() } - Self::Batch(batch) => f.debug_struct("VariantSignId::Batch").field("0", &batch).finish(), + Self::Batch(batch) => { + f.debug_struct("VariantSignId::Batch").field("0", &hex::encode(batch)).finish() + } Self::SlashReport => f.debug_struct("VariantSignId::SlashReport").finish(), Self::Transaction(tx) => { f.debug_struct("VariantSignId::Transaction").field("0", &hex::encode(tx)).finish() @@ -168,7 +170,7 @@ pub mod coordinator { /// Sign the slash report for this session. /// /// This is sent by the Coordinator's Tributary scanner. 
- SignSlashReport { session: Session, report: Vec }, + SignSlashReport { session: Session, slash_report: SlashReport }, } // This set of messages is sent entirely and solely by serai-processor-bin's implementation of @@ -178,7 +180,7 @@ pub mod coordinator { pub enum ProcessorMessage { CosignedBlock { cosign: SignedCosign }, SignedBatch { batch: SignedBatch }, - SignedSlashReport { session: Session, signature: Vec }, + SignedSlashReport { session: Session, slash_report: SlashReport, signature: [u8; 64] }, } } diff --git a/substrate/abi/src/validator_sets.rs b/substrate/abi/src/validator_sets.rs index ec8a5714..85317c0d 100644 --- a/substrate/abi/src/validator_sets.rs +++ b/substrate/abi/src/validator_sets.rs @@ -21,7 +21,7 @@ pub enum Call { }, report_slashes { network: NetworkId, - slashes: BoundedVec<(SeraiAddress, u32), ConstU32<{ MAX_KEY_SHARES_PER_SET_U32 / 3 }>>, + slashes: SlashReport, signature: Signature, }, allocate { diff --git a/substrate/client/src/serai/validator_sets.rs b/substrate/client/src/serai/validator_sets.rs index c92e4f89..882f7af6 100644 --- a/substrate/client/src/serai/validator_sets.rs +++ b/substrate/client/src/serai/validator_sets.rs @@ -5,10 +5,10 @@ use sp_runtime::BoundedVec; use serai_abi::primitives::Amount; pub use serai_abi::validator_sets::primitives; -use primitives::{MAX_KEY_LEN, Session, ValidatorSet, KeyPair}; +use primitives::{MAX_KEY_LEN, Session, ValidatorSet, KeyPair, SlashReport}; use crate::{ - primitives::{EmbeddedEllipticCurve, NetworkId, SeraiAddress}, + primitives::{EmbeddedEllipticCurve, NetworkId}, Transaction, Serai, TemporalSerai, SeraiError, }; @@ -238,12 +238,7 @@ impl<'a> SeraiValidatorSets<'a> { pub fn report_slashes( network: NetworkId, - // TODO: This bounds a maximum length but takes more space than just publishing all the u32s - // (50 * (32 + 4)) > (150 * 4) - slashes: sp_runtime::BoundedVec< - (SeraiAddress, u32), - sp_core::ConstU32<{ primitives::MAX_KEY_SHARES_PER_SET_U32 / 3 }>, - >, + slashes: SlashReport, signature: Signature, ) -> Transaction { Serai::unsigned(serai_abi::Call::ValidatorSets( diff --git a/substrate/runtime/src/abi.rs b/substrate/runtime/src/abi.rs index 107389c1..81c8b202 100644 --- a/substrate/runtime/src/abi.rs +++ b/substrate/runtime/src/abi.rs @@ -111,13 +111,7 @@ impl From for RuntimeCall { serai_abi::validator_sets::Call::report_slashes { network, slashes, signature } => { RuntimeCall::ValidatorSets(validator_sets::Call::report_slashes { network, - slashes: <_>::try_from( - slashes - .into_iter() - .map(|(addr, slash)| (PublicKey::from(addr), slash)) - .collect::>(), - ) - .unwrap(), + slashes, signature, }) } @@ -301,17 +295,7 @@ impl TryInto for RuntimeCall { } } validator_sets::Call::report_slashes { network, slashes, signature } => { - serai_abi::validator_sets::Call::report_slashes { - network, - slashes: <_>::try_from( - slashes - .into_iter() - .map(|(addr, slash)| (SeraiAddress::from(addr), slash)) - .collect::>(), - ) - .unwrap(), - signature, - } + serai_abi::validator_sets::Call::report_slashes { network, slashes, signature } } validator_sets::Call::allocate { network, amount } => { serai_abi::validator_sets::Call::allocate { network, amount } diff --git a/substrate/validator-sets/pallet/src/lib.rs b/substrate/validator-sets/pallet/src/lib.rs index 2ba1b45f..4fbddda4 100644 --- a/substrate/validator-sets/pallet/src/lib.rs +++ b/substrate/validator-sets/pallet/src/lib.rs @@ -1010,7 +1010,7 @@ pub mod pallet { pub fn report_slashes( origin: OriginFor, network: NetworkId, - slashes: 
BoundedVec<(Public, u32), ConstU32<{ MAX_KEY_SHARES_PER_SET_U32 / 3 }>>, + slashes: SlashReport, signature: Signature, ) -> DispatchResult { ensure_none(origin)?; diff --git a/substrate/validator-sets/primitives/src/slash_points.rs b/substrate/validator-sets/primitives/src/slash_points.rs index c045a4ef..d420157e 100644 --- a/substrate/validator-sets/primitives/src/slash_points.rs +++ b/substrate/validator-sets/primitives/src/slash_points.rs @@ -210,6 +210,30 @@ impl Slash { #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] pub struct SlashReport(pub BoundedVec>); +#[cfg(feature = "borsh")] +impl BorshSerialize for SlashReport { + fn serialize(&self, writer: &mut W) -> borsh::io::Result<()> { + BorshSerialize::serialize(self.0.as_slice(), writer) + } +} +#[cfg(feature = "borsh")] +impl BorshDeserialize for SlashReport { + fn deserialize_reader(reader: &mut R) -> borsh::io::Result { + let slashes = Vec::::deserialize_reader(reader)?; + slashes + .try_into() + .map(Self) + .map_err(|_| borsh::io::Error::other("length of slash report exceeds max validators")) + } +} + +impl TryFrom> for SlashReport { + type Error = &'static str; + fn try_from(slashes: Vec) -> Result { + slashes.try_into().map(Self).map_err(|_| "length of slash report exceeds max validators") + } +} + // This is assumed binding to the ValidatorSet via the key signed with pub fn report_slashes_message(slashes: &SlashReport) -> Vec { (b"ValidatorSets-report_slashes", slashes).encode() From bea4f92b7a4bd2243d522d8d756d5bcfaef69e48 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 12:10:11 -0500 Subject: [PATCH 310/368] Fix parity-db builds for the Coordinator --- coordinator/src/db.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/coordinator/src/db.rs b/coordinator/src/db.rs index 8bb2fd24..b17e569b 100644 --- a/coordinator/src/db.rs +++ b/coordinator/src/db.rs @@ -13,7 +13,7 @@ use serai_coordinator_substrate::NewSetInformation; use serai_coordinator_tributary::Transaction; #[cfg(all(feature = "parity-db", not(feature = "rocksdb")))] -pub(crate) type Db = serai_db::ParityDb; +pub(crate) type Db = std::sync::Arc; #[cfg(feature = "rocksdb")] pub(crate) type Db = serai_db::RocksDB; From 167826aa88ad1e88655672442bbadcdb8ef22008 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 12:51:35 -0500 Subject: [PATCH 311/368] Implement SeraiAddress <-> Participant mapping and add RemoveParticipant transactions --- Cargo.lock | 2 + coordinator/Cargo.toml | 1 + coordinator/src/db.rs | 17 ++++++++ coordinator/src/main.rs | 26 +++---------- coordinator/src/tributary.rs | 52 +++++++++++++++++-------- coordinator/substrate/Cargo.toml | 3 ++ coordinator/substrate/src/ephemeral.rs | 47 +++++++++++++--------- coordinator/substrate/src/lib.rs | 54 ++++++++++++++++---------- coordinator/tributary/src/lib.rs | 2 - 9 files changed, 125 insertions(+), 79 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index ea9b74df..a6b1c37e 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8318,6 +8318,7 @@ dependencies = [ "blake2", "borsh", "ciphersuite", + "dkg", "env_logger", "frost-schnorrkel", "hex", @@ -8387,6 +8388,7 @@ version = "0.1.0" dependencies = [ "bitvec", "borsh", + "dkg", "futures", "log", "parity-scale-codec", diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index ce3ceda1..bd3c2e3d 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -26,6 +26,7 @@ blake2 = { version = "0.10", default-features = false, features = ["std"] } schnorrkel = { version = "0.11", 
default-features = false, features = ["std"] } ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std"] } +dkg = { path = "../crypto/dkg", default-features = false, features = ["std"] } schnorr = { package = "schnorr-signatures", path = "../crypto/schnorr", default-features = false, features = ["std"] } frost = { package = "modular-frost", path = "../crypto/frost" } frost-schnorrkel = { path = "../crypto/schnorrkel" } diff --git a/coordinator/src/db.rs b/coordinator/src/db.rs index b17e569b..6f336bcc 100644 --- a/coordinator/src/db.rs +++ b/coordinator/src/db.rs @@ -3,6 +3,8 @@ use std::{path::Path, fs}; pub(crate) use serai_db::{Get, DbTxn, Db as DbTrait}; use serai_db::{create_db, db_channel}; +use dkg::Participant; + use serai_client::{ primitives::NetworkId, validator_sets::primitives::{Session, ValidatorSet}, @@ -95,6 +97,8 @@ mod _internal_db { Coordinator { // Tributary transactions to publish TributaryTransactions: (set: ValidatorSet) -> Transaction, + // Participants to remove + RemoveParticipant: (set: ValidatorSet) -> Participant, } } } @@ -111,3 +115,16 @@ impl TributaryTransactions { _internal_db::TributaryTransactions::try_recv(txn, set) } } + +pub(crate) struct RemoveParticipant; +impl RemoveParticipant { + pub(crate) fn send(txn: &mut impl DbTxn, set: ValidatorSet, participant: Participant) { + // If this set has yet to be retired, send this transaction + if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) { + _internal_db::RemoveParticipant::send(txn, set, &participant); + } + } + pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ValidatorSet) -> Option { + _internal_db::RemoveParticipant::try_recv(txn, set) + } +} diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 0048ebd1..c739d390 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -14,7 +14,7 @@ use borsh::BorshDeserialize; use tokio::sync::mpsc; use serai_client::{ - primitives::{NetworkId, PublicKey, Signature}, + primitives::{NetworkId, SeraiAddress, Signature}, validator_sets::primitives::ValidatorSet, Serai, }; @@ -209,28 +209,12 @@ async fn handle_network( network_key, } => todo!("TODO Transaction::DkgConfirmationPreprocess"), messages::key_gen::ProcessorMessage::Blame { session, participant } => { - let set = ValidatorSet { network, session }; - TributaryTransactions::send( - &mut txn, - set, - &Transaction::RemoveParticipant { - participant: todo!("TODO"), - signed: Signed::default(), - }, - ); + RemoveParticipant::send(&mut txn, ValidatorSet { network, session }, participant); } }, messages::ProcessorMessage::Sign(msg) => match msg { messages::sign::ProcessorMessage::InvalidParticipant { session, participant } => { - let set = ValidatorSet { network, session }; - TributaryTransactions::send( - &mut txn, - set, - &Transaction::RemoveParticipant { - participant: todo!("TODO"), - signed: Signed::default(), - }, - ); + RemoveParticipant::send(&mut txn, ValidatorSet { network, session }, participant); } messages::sign::ProcessorMessage::Preprocesses { id, preprocesses } => { let set = ValidatorSet { network, session: id.session }; @@ -371,6 +355,8 @@ async fn main() { while !Cosigning::::intended_cosigns(&mut txn, to_cleanup).is_empty() {} // Drain the transactions to publish for this set while TributaryTransactions::try_recv(&mut txn, to_cleanup).is_some() {} + // Drain the participants to remove for this set + while RemoveParticipant::try_recv(&mut txn, to_cleanup).is_some() {} // Remove the 
SignSlashReport notification SignSlashReport::try_recv(&mut txn, to_cleanup); } @@ -434,7 +420,7 @@ async fn main() { EphemeralEventStream::new( db.clone(), serai.clone(), - PublicKey::from_raw((::generator() * serai_key.deref()).to_bytes()), + SeraiAddress((::generator() * serai_key.deref()).to_bytes()), ) .continually_run(substrate_ephemeral_task_def, vec![substrate_task]), ); diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index 2ebfd223..6f3020cb 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -26,7 +26,7 @@ use serai_coordinator_tributary::{ }; use serai_coordinator_p2p::P2p; -use crate::{Db, TributaryTransactions}; +use crate::{Db, TributaryTransactions, RemoveParticipant}; create_db! { Coordinator { @@ -158,8 +158,14 @@ impl ContinuallyRan #[must_use] async fn add_signed_unsigned_transaction( tributary: &Tributary, - tx: &Transaction, + key: &Zeroizing<::F>, + mut tx: Transaction, ) -> bool { + // If this is a signed transaction, sign it + if matches!(tx.kind(), TransactionKind::Signed(_, _)) { + tx.sign(&mut OsRng, tributary.genesis(), key); + } + let res = tributary.add_transaction(tx.clone()).await; match &res { // Fresh publication, already published @@ -191,7 +197,7 @@ pub(crate) struct AddTributaryTransactionsTask db: CD, tributary_db: TD, tributary: Tributary, - set: ValidatorSet, + set: NewSetInformation, key: Zeroizing<::F>, } impl ContinuallyRan for AddTributaryTransactionsTask { @@ -204,23 +210,20 @@ impl ContinuallyRan for AddTributaryTransactio // Provide/add all transactions sent our way loop { let mut txn = self.db.txn(); - let Some(mut tx) = TributaryTransactions::try_recv(&mut txn, self.set) else { break }; + let Some(tx) = TributaryTransactions::try_recv(&mut txn, self.set.set) else { break }; let kind = tx.kind(); match kind { - TransactionKind::Provided(_) => provide_transaction(self.set, &self.tributary, tx).await, + TransactionKind::Provided(_) => { + provide_transaction(self.set.set, &self.tributary, tx).await + } TransactionKind::Unsigned | TransactionKind::Signed(_, _) => { - // If this is a signed transaction, sign it - if matches!(kind, TransactionKind::Signed(_, _)) { - tx.sign(&mut OsRng, self.tributary.genesis(), &self.key); - } - // If this is a transaction with signing data, check the topic is recognized before // publishing let topic = tx.topic(); let still_requires_recognition = if let Some(topic) = topic { (topic.requires_recognition() && - (!RecognizedTopics::recognized(&self.tributary_db, self.set, topic))) + (!RecognizedTopics::recognized(&self.tributary_db, self.set.set, topic))) .then_some(topic) } else { None @@ -229,11 +232,11 @@ impl ContinuallyRan for AddTributaryTransactio // Queue the transaction until the topic is recognized // We use the Tributary DB for this so it's cleaned up when the Tributary DB is let mut txn = self.tributary_db.txn(); - PublishOnRecognition::set(&mut txn, self.set, topic, &tx); + PublishOnRecognition::set(&mut txn, self.set.set, topic, &tx); txn.commit(); } else { // Actually add the transaction - if !add_signed_unsigned_transaction(&self.tributary, &tx).await { + if !add_signed_unsigned_transaction(&self.tributary, &self.key, tx).await { break; } } @@ -248,12 +251,12 @@ impl ContinuallyRan for AddTributaryTransactio loop { let mut txn = self.tributary_db.txn(); let Some(topic) = - RecognizedTopics::try_recv_topic_requiring_recognition(&mut txn, self.set) + RecognizedTopics::try_recv_topic_requiring_recognition(&mut txn, self.set.set) else { break; }; - if 
let Some(tx) = PublishOnRecognition::take(&mut txn, self.set, topic) { - if !add_signed_unsigned_transaction(&self.tributary, &tx).await { + if let Some(tx) = PublishOnRecognition::take(&mut txn, self.set.set, topic) { + if !add_signed_unsigned_transaction(&self.tributary, &self.key, tx).await { break; } } @@ -262,6 +265,21 @@ impl ContinuallyRan for AddTributaryTransactio txn.commit(); } + // Publish any participant removals + loop { + let mut txn = self.db.txn(); + let Some(participant) = RemoveParticipant::try_recv(&mut txn, self.set.set) else { break }; + let tx = Transaction::RemoveParticipant { + participant: self.set.participant_indexes_reverse_lookup[&participant], + signed: Default::default(), + }; + if !add_signed_unsigned_transaction(&self.tributary, &self.key, tx).await { + break; + } + made_progress = true; + txn.commit(); + } + Ok(made_progress) } } @@ -487,7 +505,7 @@ pub(crate) async fn spawn_tributary( db: db.clone(), tributary_db, tributary: tributary.clone(), - set: set.set, + set: set.clone(), key: serai_key, }) .continually_run(add_tributary_transactions_task_def, vec![]), diff --git a/coordinator/substrate/Cargo.toml b/coordinator/substrate/Cargo.toml index f4eeaa59..c733cc31 100644 --- a/coordinator/substrate/Cargo.toml +++ b/coordinator/substrate/Cargo.toml @@ -22,6 +22,9 @@ bitvec = { version = "1", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive", "bit-vec"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + +dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] } + serai-client = { path = "../../substrate/client", version = "0.1", default-features = false, features = ["serai", "borsh"] } log = { version = "0.4", default-features = false, features = ["std"] } diff --git a/coordinator/substrate/src/ephemeral.rs b/coordinator/substrate/src/ephemeral.rs index 3ea8de98..18c11d00 100644 --- a/coordinator/substrate/src/ephemeral.rs +++ b/coordinator/substrate/src/ephemeral.rs @@ -4,7 +4,7 @@ use std::sync::Arc; use futures::stream::{StreamExt, FuturesOrdered}; use serai_client::{ - primitives::{PublicKey, NetworkId, EmbeddedEllipticCurve}, + primitives::{NetworkId, SeraiAddress, EmbeddedEllipticCurve}, validator_sets::primitives::MAX_KEY_SHARES_PER_SET, Serai, }; @@ -26,14 +26,14 @@ create_db!( pub struct EphemeralEventStream { db: D, serai: Arc, - validator: PublicKey, + validator: SeraiAddress, } impl EphemeralEventStream { /// Create a new ephemeral event stream. /// /// Only one of these may exist over the provided database. - pub fn new(db: D, serai: Arc, validator: PublicKey) -> Self { + pub fn new(db: D, serai: Arc, validator: SeraiAddress) -> Self { Self { db, serai, validator } } } @@ -145,6 +145,10 @@ impl ContinuallyRan for EphemeralEventStream { "block #{block_number} declared a new set but didn't have the participants" ))? 
}; + let validators = validators + .into_iter() + .map(|(validator, weight)| (SeraiAddress::from(validator), weight)) + .collect::>(); let in_set = validators.iter().any(|(validator, _)| *validator == self.validator); if in_set { if u16::try_from(validators.len()).is_err() { @@ -177,14 +181,16 @@ impl ContinuallyRan for EphemeralEventStream { embedded_elliptic_curve_keys.push_back(async move { tokio::try_join!( // One future to fetch the substrate embedded key - serai - .embedded_elliptic_curve_key(validator, EmbeddedEllipticCurve::Embedwards25519), + serai.embedded_elliptic_curve_key( + validator.into(), + EmbeddedEllipticCurve::Embedwards25519 + ), // One future to fetch the external embedded key, if there is a distinct curve async { // `embedded_elliptic_curves` is documented to have the second entry be the // network-specific curve (if it exists and is distinct from Embedwards25519) if let Some(curve) = set.network.embedded_elliptic_curves().get(1) { - serai.embedded_elliptic_curve_key(validator, *curve).await.map(Some) + serai.embedded_elliptic_curve_key(validator.into(), *curve).await.map(Some) } else { Ok(None) } @@ -215,19 +221,22 @@ impl ContinuallyRan for EphemeralEventStream { } } - crate::NewSet::send( - &mut txn, - &NewSetInformation { - set: *set, - serai_block: block.block_hash, - declaration_time: block.time, - // TODO: Why do we have this as an explicit field here? - // Shouldn't thiis be inlined into the Processor's key gen code, where it's used? - threshold: ((total_weight * 2) / 3) + 1, - validators, - evrf_public_keys, - }, - ); + let mut new_set = NewSetInformation { + set: *set, + serai_block: block.block_hash, + declaration_time: block.time, + // TODO: Why do we have this as an explicit field here? + // Shouldn't this be inlined into the Processor's key gen code, where it's used? + threshold: ((total_weight * 2) / 3) + 1, + validators, + evrf_public_keys, + participant_indexes: Default::default(), + participant_indexes_reverse_lookup: Default::default(), + }; + // These aren't serialized, and we immediately serialize and drop this, so this isn't + // necessary. It's just good practice not have this be dirty + new_set.init_participant_indexes(); + crate::NewSet::send(&mut txn, &new_set); } } diff --git a/coordinator/substrate/src/lib.rs b/coordinator/substrate/src/lib.rs index c8e437f4..f313eb36 100644 --- a/coordinator/substrate/src/lib.rs +++ b/coordinator/substrate/src/lib.rs @@ -2,11 +2,15 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] +use std::collections::HashMap; + use scale::{Encode, Decode}; -use borsh::{io, BorshSerialize, BorshDeserialize}; +use borsh::{BorshSerialize, BorshDeserialize}; + +use dkg::Participant; use serai_client::{ - primitives::{NetworkId, PublicKey, Signature}, + primitives::{NetworkId, SeraiAddress, Signature}, validator_sets::primitives::{Session, ValidatorSet, KeyPair, SlashReport}, in_instructions::primitives::SignedBatch, Transaction, @@ -26,22 +30,9 @@ pub use publish_batch::PublishBatchTask; mod publish_slash_report; pub use publish_slash_report::PublishSlashReportTask; -fn borsh_serialize_validators( - validators: &Vec<(PublicKey, u16)>, - writer: &mut W, -) -> Result<(), io::Error> { - // This doesn't use `encode_to` as `encode_to` panics if the writer returns an error - writer.write_all(&validators.encode()) -} - -fn borsh_deserialize_validators( - reader: &mut R, -) -> Result, io::Error> { - Decode::decode(&mut scale::IoReader(reader)).map_err(io::Error::other) -} - /// The information for a new set. 
#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] +#[borsh(init = init_participant_indexes)] pub struct NewSetInformation { /// The set. pub set: ValidatorSet, @@ -52,13 +43,34 @@ pub struct NewSetInformation { /// The threshold to use. pub threshold: u16, /// The validators, with the amount of key shares they have. - #[borsh( - serialize_with = "borsh_serialize_validators", - deserialize_with = "borsh_deserialize_validators" - )] - pub validators: Vec<(PublicKey, u16)>, + pub validators: Vec<(SeraiAddress, u16)>, /// The eVRF public keys. pub evrf_public_keys: Vec<([u8; 32], Vec)>, + /// The participant indexes, indexed by their validator. + #[borsh(skip)] + pub participant_indexes: HashMap>, + /// The validators, indexed by their participant indexes. + #[borsh(skip)] + pub participant_indexes_reverse_lookup: HashMap, +} + +impl NewSetInformation { + fn init_participant_indexes(&mut self) { + let mut next_i = 1; + self.participant_indexes = HashMap::with_capacity(self.validators.len()); + self.participant_indexes_reverse_lookup = HashMap::with_capacity(self.validators.len()); + for (validator, weight) in &self.validators { + let mut these_is = Vec::with_capacity((*weight).into()); + for _ in 0 .. *weight { + let this_i = Participant::new(next_i).unwrap(); + next_i += 1; + + these_is.push(this_i); + self.participant_indexes_reverse_lookup.insert(this_i, *validator); + } + self.participant_indexes.insert(*validator, these_is); + } + } } mod _public_db { diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 6b8616aa..f37928c3 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -515,7 +515,6 @@ impl ScanTributaryTask { let mut total_weight = 0; let mut validator_weights = HashMap::with_capacity(new_set.validators.len()); for (validator, weight) in new_set.validators.iter().copied() { - let validator = SeraiAddress::from(validator); let weight = u64::from(weight); validators.push(validator); total_weight += weight; @@ -597,7 +596,6 @@ impl ContinuallyRan for ScanTributaryTask { pub fn slash_report_transaction(getter: &impl Get, set: &NewSetInformation) -> Transaction { let mut slash_points = Vec::with_capacity(set.validators.len()); for (validator, _weight) in set.validators.iter().copied() { - let validator = SeraiAddress::from(validator); slash_points.push(SlashPoints::get(getter, set.set, validator).unwrap_or(0)); } Transaction::SlashReport { slash_points, signed: Signed::default() } From f36bbcba25f6abdbed5f80f02a2b63f9f0600c10 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 14:24:51 -0500 Subject: [PATCH 312/368] Flatten the map of preprocesses/shares, send Participant index with DkgParticipation --- coordinator/src/tributary.rs | 2 +- coordinator/substrate/src/lib.rs | 3 + coordinator/tributary/src/db.rs | 8 +- coordinator/tributary/src/lib.rs | 118 +++++++++++++++------------- processor/key-gen/src/generators.rs | 4 +- 5 files changed, 74 insertions(+), 61 deletions(-) diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index 6f3020cb..7162bbe1 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -479,7 +479,7 @@ pub(crate) async fn spawn_tributary( // Spawn the scan task let (scan_tributary_task_def, scan_tributary_task) = Task::new(); tokio::spawn( - ScanTributaryTask::<_, P>::new(tributary_db.clone(), &set, reader) + ScanTributaryTask::<_, P>::new(tributary_db.clone(), set.clone(), reader) // This is the only handle for this 
TributaryProcessorMessagesTask, so when this task is // dropped, it will be too .continually_run(scan_tributary_task_def, vec![scan_tributary_messages_task]), diff --git a/coordinator/substrate/src/lib.rs b/coordinator/substrate/src/lib.rs index f313eb36..68566ff4 100644 --- a/coordinator/substrate/src/lib.rs +++ b/coordinator/substrate/src/lib.rs @@ -45,6 +45,9 @@ pub struct NewSetInformation { /// The validators, with the amount of key shares they have. pub validators: Vec<(SeraiAddress, u16)>, /// The eVRF public keys. + /// + /// This will have the necessary copies of the keys proper for each validator's weight, + /// accordingly syncing up with `participant_indexes`. pub evrf_public_keys: Vec<([u8; 32], Vec)>, /// The participant indexes, indexed by their validator. #[borsh(skip)] diff --git a/coordinator/tributary/src/db.rs b/coordinator/tributary/src/db.rs index aefe45d3..5812063e 100644 --- a/coordinator/tributary/src/db.rs +++ b/coordinator/tributary/src/db.rs @@ -168,7 +168,7 @@ impl Topic { } } - fn required_participation(&self, n: u64) -> u64 { + fn required_participation(&self, n: u16) -> u16 { let _ = self; // All of our topics require 2/3rds participation ((2 * n) / 3) + 1 @@ -218,7 +218,7 @@ create_db!( SubstrateBlockPlans: (set: ValidatorSet, substrate_block_hash: [u8; 32]) -> Vec<[u8; 32]>, // The weight accumulated for a topic. - AccumulatedWeight: (set: ValidatorSet, topic: Topic) -> u64, + AccumulatedWeight: (set: ValidatorSet, topic: Topic) -> u16, // The entries accumulated for a topic, by validator. Accumulated: (set: ValidatorSet, topic: Topic, validator: SeraiAddress) -> D, @@ -360,11 +360,11 @@ impl TributaryDb { txn: &mut impl DbTxn, set: ValidatorSet, validators: &[SeraiAddress], - total_weight: u64, + total_weight: u16, block_number: u64, topic: Topic, validator: SeraiAddress, - validator_weight: u64, + validator_weight: u16, data: &D, ) -> DataSet { // This function will only be called once for a (validator, topic) tuple due to how we handle diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index f37928c3..d8390511 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -109,32 +109,32 @@ struct ScanBlock<'a, TD: Db, TDT: DbTxn, P: P2p> { _td: PhantomData, _p2p: PhantomData
<P>
, tributary_txn: &'a mut TDT, - set: ValidatorSet, + set: &'a NewSetInformation, validators: &'a [SeraiAddress], - total_weight: u64, - validator_weights: &'a HashMap, + total_weight: u16, + validator_weights: &'a HashMap, } impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { fn potentially_start_cosign(&mut self) { // Don't start a new cosigning instance if we're actively running one - if TributaryDb::actively_cosigning(self.tributary_txn, self.set).is_some() { + if TributaryDb::actively_cosigning(self.tributary_txn, self.set.set).is_some() { return; } // Fetch the latest intended-to-be-cosigned block let Some(latest_substrate_block_to_cosign) = - TributaryDb::latest_substrate_block_to_cosign(self.tributary_txn, self.set) + TributaryDb::latest_substrate_block_to_cosign(self.tributary_txn, self.set.set) else { return; }; // If it was already cosigned, return - if TributaryDb::cosigned(self.tributary_txn, self.set, latest_substrate_block_to_cosign) { + if TributaryDb::cosigned(self.tributary_txn, self.set.set, latest_substrate_block_to_cosign) { return; } let intent = - CosignIntents::take(self.tributary_txn, self.set, latest_substrate_block_to_cosign) + CosignIntents::take(self.tributary_txn, self.set.set, latest_substrate_block_to_cosign) .expect("Transaction::Cosign locally provided but CosignIntents wasn't populated"); assert_eq!( intent.block_hash, latest_substrate_block_to_cosign, @@ -144,16 +144,16 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { // Mark us as actively cosigning TributaryDb::start_cosigning( self.tributary_txn, - self.set, + self.set.set, latest_substrate_block_to_cosign, intent.block_number, ); // Send the message for the processor to start signing TributaryDb::send_message( self.tributary_txn, - self.set, + self.set.set, messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { - session: self.set.session, + session: self.set.set.session, intent, }, ); @@ -166,7 +166,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { // TODO: The fact they can publish these TXs makes this a notable spam vector if TributaryDb::is_fatally_slashed( self.tributary_txn, - self.set, + self.set.set, SeraiAddress(signer.to_bytes()), ) { return; @@ -183,7 +183,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { if !self.validators.iter().any(|validator| *validator == participant) { TributaryDb::fatal_slash( self.tributary_txn, - self.set, + self.set.set, signer, "voted to remove non-existent participant", ); @@ -192,7 +192,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { match TributaryDb::accumulate( self.tributary_txn, - self.set, + self.set.set, self.validators, self.total_weight, block_number, @@ -203,7 +203,12 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { ) { DataSet::None => {} DataSet::Participating(_) => { - TributaryDb::fatal_slash(self.tributary_txn, self.set, participant, "voted to remove"); + TributaryDb::fatal_slash( + self.tributary_txn, + self.set.set, + participant, + "voted to remove", + ); } }; } @@ -212,10 +217,10 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { Transaction::DkgParticipation { participation, signed } => { TributaryDb::send_message( self.tributary_txn, - self.set, + self.set.set, messages::key_gen::CoordinatorMessage::Participation { - session: self.set.session, - participant: todo!("TODO"), + session: self.set.set.session, + participant: self.set.participant_indexes[&signer(signed)][0], participation, }, ); @@ -233,7 
+238,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { // Update the latest intended-to-be-cosigned Substrate block TributaryDb::set_latest_substrate_block_to_cosign( self.tributary_txn, - self.set, + self.set.set, substrate_block_hash, ); // Start a new cosign if we aren't already working on one @@ -246,32 +251,32 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { not-yet-Cosigned cosigns, we flag all cosigned blocks as cosigned. Then, when we choose the next block to work on, we won't if it's already been cosigned. */ - TributaryDb::mark_cosigned(self.tributary_txn, self.set, substrate_block_hash); + TributaryDb::mark_cosigned(self.tributary_txn, self.set.set, substrate_block_hash); // If we aren't actively cosigning this block, return // This occurs when we have Cosign TXs A, B, C, we received Cosigned for A and start on C, // and then receive Cosigned for B - if TributaryDb::actively_cosigning(self.tributary_txn, self.set) != + if TributaryDb::actively_cosigning(self.tributary_txn, self.set.set) != Some(substrate_block_hash) { return; } // Since this is the block we were cosigning, mark us as having finished cosigning - TributaryDb::finish_cosigning(self.tributary_txn, self.set); + TributaryDb::finish_cosigning(self.tributary_txn, self.set.set); // Start working on the next cosign self.potentially_start_cosign(); } Transaction::SubstrateBlock { hash } => { // Recognize all of the IDs this Substrate block causes to be signed - let plans = SubstrateBlockPlans::take(self.tributary_txn, self.set, hash).expect( + let plans = SubstrateBlockPlans::take(self.tributary_txn, self.set.set, hash).expect( "Transaction::SubstrateBlock locally provided but SubstrateBlockPlans wasn't populated", ); for plan in plans { TributaryDb::recognize_topic( self.tributary_txn, - self.set, + self.set.set, Topic::Sign { id: VariantSignId::Transaction(plan), attempt: 0, @@ -284,7 +289,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { // Recognize the signing of this batch TributaryDb::recognize_topic( self.tributary_txn, - self.set, + self.set.set, Topic::Sign { id: VariantSignId::Batch(hash), attempt: 0, @@ -299,7 +304,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { if slash_points.len() != self.validators.len() { TributaryDb::fatal_slash( self.tributary_txn, - self.set, + self.set.set, signer, "slash report was for a distinct amount of signers", ); @@ -309,7 +314,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { // Accumulate, and if past the threshold, calculate *the* slash report and start signing it match TributaryDb::accumulate( self.tributary_txn, - self.set, + self.set.set, self.validators, self.total_weight, block_number, @@ -327,10 +332,6 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { have a supermajority agree the slash should be fatal. If there isn't a supermajority, but the median believe the slash should be fatal, we need to fallback to a large constant. - - Also, TODO, each slash point should probably be considered as - `MAX_KEY_SHARES_PER_SET * BLOCK_TIME` seconds of downtime. As this time crosses - various thresholds (1 day, 3 days, etc), a multiplier should be attached. */ let mut median_slash_report = Vec::with_capacity(self.validators.len()); for i in 0 .. 
self.validators.len() { @@ -384,7 +385,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { // Recognize the topic for signing the slash report TributaryDb::recognize_topic( self.tributary_txn, - self.set, + self.set.set, Topic::Sign { id: VariantSignId::SlashReport, attempt: 0, @@ -394,9 +395,9 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { // Send the message for the processor to start signing TributaryDb::send_message( self.tributary_txn, - self.set, + self.set.set, messages::coordinator::CoordinatorMessage::SignSlashReport { - session: self.set.session, + session: self.set.set.session, slash_report: slash_report.try_into().unwrap(), }, ); @@ -408,10 +409,10 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { let topic = topic.unwrap(); let signer = signer(signed); - if u64::try_from(data.len()).unwrap() != self.validator_weights[&signer] { + if data.len() != usize::from(self.validator_weights[&signer]) { TributaryDb::fatal_slash( self.tributary_txn, - self.set, + self.set.set, signer, "signer signed with a distinct amount of key shares than they had key shares", ); @@ -420,7 +421,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { match TributaryDb::accumulate( self.tributary_txn, - self.set, + self.set.set, self.validators, self.total_weight, block_number, @@ -431,12 +432,22 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { ) { DataSet::None => {} DataSet::Participating(data_set) => { - let id = topic.sign_id(self.set).expect("Topic::Sign didn't have SignId"); - let flatten_data_set = |data_set| todo!("TODO"); + let id = topic.sign_id(self.set.set).expect("Topic::Sign didn't have SignId"); + let flatten_data_set = |data_set: HashMap<_, Vec<_>>| { + let mut entries = HashMap::with_capacity(usize::from(self.total_weight)); + for (validator, shares) in data_set { + let indexes = &self.set.participant_indexes[&validator]; + assert_eq!(indexes.len(), shares.len()); + for (index, share) in indexes.iter().zip(shares) { + entries.insert(*index, share); + } + } + entries + }; let data_set = flatten_data_set(data_set); TributaryDb::send_message( self.tributary_txn, - self.set, + self.set.set, match round { SigningProtocolRound::Preprocess => { messages::sign::CoordinatorMessage::Preprocesses { id, preprocesses: data_set } @@ -453,7 +464,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { } fn handle_block(mut self, block_number: u64, block: Block) { - TributaryDb::start_of_block(self.tributary_txn, self.set, block_number); + TributaryDb::start_of_block(self.tributary_txn, self.set.set, block_number); for tx in block.transactions { match tx { @@ -480,7 +491,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { // errors, mark the node as fatally slashed TributaryDb::fatal_slash( self.tributary_txn, - self.set, + self.set.set, SeraiAddress(msgs.0.msg.sender), &format!("invalid tendermint messages: {msgs:?}"), ); @@ -496,10 +507,10 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { /// The task to scan the Tributary, populating `ProcessorMessages`. pub struct ScanTributaryTask { tributary_db: TD, - set: ValidatorSet, + set: NewSetInformation, validators: Vec, - total_weight: u64, - validator_weights: HashMap, + total_weight: u16, + validator_weights: HashMap, tributary: TributaryReader, _p2p: PhantomData
<P>
, } @@ -508,14 +519,13 @@ impl ScanTributaryTask { /// Create a new instance of this task. pub fn new( tributary_db: TD, - new_set: &NewSetInformation, + set: NewSetInformation, tributary: TributaryReader, ) -> Self { - let mut validators = Vec::with_capacity(new_set.validators.len()); + let mut validators = Vec::with_capacity(set.validators.len()); let mut total_weight = 0; - let mut validator_weights = HashMap::with_capacity(new_set.validators.len()); - for (validator, weight) in new_set.validators.iter().copied() { - let weight = u64::from(weight); + let mut validator_weights = HashMap::with_capacity(set.validators.len()); + for (validator, weight) in set.validators.iter().copied() { validators.push(validator); total_weight += weight; validator_weights.insert(validator, weight); @@ -523,7 +533,7 @@ impl ScanTributaryTask { ScanTributaryTask { tributary_db, - set: new_set.set, + set, validators, total_weight, validator_weights, @@ -539,7 +549,7 @@ impl ContinuallyRan for ScanTributaryTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { let (mut last_block_number, mut last_block_hash) = - TributaryDb::last_handled_tributary_block(&self.tributary_db, self.set) + TributaryDb::last_handled_tributary_block(&self.tributary_db, self.set.set) .unwrap_or((0, self.tributary.genesis())); let mut made_progress = false; @@ -558,7 +568,7 @@ impl ContinuallyRan for ScanTributaryTask { if !self.tributary.locally_provided_txs_in_block(&block_hash, order) { return Err(format!( "didn't have the provided Transactions on-chain for set (ephemeral error): {:?}", - self.set + self.set.set )); } } @@ -568,7 +578,7 @@ impl ContinuallyRan for ScanTributaryTask { _td: PhantomData::, _p2p: PhantomData::
<P>
, tributary_txn: &mut tributary_txn, - set: self.set, + set: &self.set, validators: &self.validators, total_weight: self.total_weight, validator_weights: &self.validator_weights, @@ -576,7 +586,7 @@ impl ContinuallyRan for ScanTributaryTask { .handle_block(block_number, block); TributaryDb::set_last_handled_tributary_block( &mut tributary_txn, - self.set, + self.set.set, block_number, block_hash, ); diff --git a/processor/key-gen/src/generators.rs b/processor/key-gen/src/generators.rs index 3570ca6e..cff9c2f1 100644 --- a/processor/key-gen/src/generators.rs +++ b/processor/key-gen/src/generators.rs @@ -29,8 +29,8 @@ pub(crate) fn generators() -> &'static EvrfGenerators { .or_insert_with(|| { // If we haven't prior needed generators for this Ciphersuite, generate new ones Box::leak(Box::new(EvrfGenerators::::new( - ((MAX_KEY_SHARES_PER_SET * 2 / 3) + 1).try_into().unwrap(), - MAX_KEY_SHARES_PER_SET.try_into().unwrap(), + (MAX_KEY_SHARES_PER_SET * 2 / 3) + 1, + MAX_KEY_SHARES_PER_SET, ))) }) .downcast_ref() From 8b52b921f3f0d2233093ba7b77d2539fcfce8161 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 15:15:38 -0500 Subject: [PATCH 313/368] Have the Tributary scanner yield DKG confirmation signing protocol data --- Cargo.lock | 1 + coordinator/tributary/Cargo.toml | 9 +-- coordinator/tributary/src/db.rs | 4 ++ coordinator/tributary/src/lib.rs | 115 +++++++++++++++++++++++++++++-- 4 files changed, 118 insertions(+), 11 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index a6b1c37e..3fcda89f 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8431,6 +8431,7 @@ dependencies = [ "blake2", "borsh", "ciphersuite", + "dkg", "log", "parity-scale-codec", "rand_core", diff --git a/coordinator/tributary/Cargo.toml b/coordinator/tributary/Cargo.toml index 3e374bc0..431dae3c 100644 --- a/coordinator/tributary/Cargo.toml +++ b/coordinator/tributary/Cargo.toml @@ -21,13 +21,14 @@ workspace = true zeroize = { version = "^1.5", default-features = false, features = ["std"] } rand_core = { version = "0.6", default-features = false, features = ["std"] } -blake2 = { version = "0.10", default-features = false, features = ["std"] } -ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] } -schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] } - scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } +blake2 = { version = "0.10", default-features = false, features = ["std"] } +ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] } +dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] } +schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] } + serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] } serai-db = { path = "../../common/db" } diff --git a/coordinator/tributary/src/db.rs b/coordinator/tributary/src/db.rs index 5812063e..1a475734 100644 --- a/coordinator/tributary/src/db.rs +++ b/coordinator/tributary/src/db.rs @@ -229,7 +229,11 @@ create_db!( db_channel!( CoordinatorTributary { + // Messages to send to the processor ProcessorMessages: (set: ValidatorSet) -> messages::CoordinatorMessage, + // Messages for the DKG confirmation + 
DkgConfirmationMessages: (set: ValidatorSet) -> messages::sign::CoordinatorMessage, + // Topics which have been explicitly recognized RecognizedTopics: (set: ValidatorSet) -> Topic, } ); diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index d8390511..00bd5f51 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -5,7 +5,10 @@ use core::{marker::PhantomData, future::Future}; use std::collections::HashMap; +use scale::Encode; + use ciphersuite::group::GroupEncoding; +use dkg::Participant; use serai_client::{ primitives::SeraiAddress, @@ -27,7 +30,7 @@ use tributary_sdk::{ use serai_cosign::CosignIntent; use serai_coordinator_substrate::NewSetInformation; -use messages::sign::VariantSignId; +use messages::sign::{VariantSignId, SignId}; mod transaction; pub use transaction::{SigningProtocolRound, Signed, Transaction}; @@ -45,6 +48,24 @@ impl ProcessorMessages { } } +/// Messages for the DKG confirmation. +pub struct DkgConfirmationMessages; +impl DkgConfirmationMessages { + /// Receive a message for the DKG confirmation. + /// + /// These messages use the ProcessorMessage API as that's what existing flows are designed + /// around, enabling their reuse. The ProcessorMessage includes a VariantSignId which isn't + /// applicable to the DKG confirmation (as there's no such variant of the VariantSignId). The + /// actual ID is undefined other than it will be consistent with the signing protocol and unique + /// across validator sets, with no guarantees of uniqueness across contexts. + pub fn try_recv( + txn: &mut impl DbTxn, + set: ValidatorSet, + ) -> Option<messages::sign::CoordinatorMessage> { + db::DkgConfirmationMessages::try_recv(txn, set) + } +} + /// The cosign intents. pub struct CosignIntents; impl CosignIntents { @@ -158,6 +179,62 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { }, ); } + + fn accumulate_dkg_confirmation<D: AsRef<[u8]> + Borshy>( + &mut self, + block_number: u64, + topic: Topic, + attempt: u32, + data: &D, + signer: SeraiAddress, + ) -> Option<(SignId, HashMap<Participant, Vec<u8>>)> { + match TributaryDb::accumulate::( + self.tributary_txn, + self.set.set, + self.validators, + self.total_weight, + block_number, + topic, + signer, + self.validator_weights[&signer], + data, + ) { + DataSet::None => None, + DataSet::Participating(data_set) => { + // Consistent ID for the DKG confirmation, unique across sets + let id = { + let mut id = [0; 32]; + let encoded_set = self.set.set.encode(); + id[.. 
encoded_set.len()].copy_from_slice(&encoded_set); + VariantSignId::Batch(id) + }; + let id = SignId { session: self.set.set.session, id, attempt }; + + // This will be used in a MuSig protocol, so the Participant indexes are the validator's + // position in the list regardless of their weight + let flatten_data_set = |data_set: HashMap<_, D>| { + let mut entries = HashMap::with_capacity(usize::from(self.total_weight)); + for (validator, participation) in data_set { + let (index, (_validator, _weight)) = &self + .set + .validators + .iter() + .enumerate() + .find(|(_i, (validator_i, _weight))| validator == *validator_i) + .unwrap(); + entries.insert( + Participant::new(u16::try_from(*index).unwrap()).unwrap(), + participation.as_ref().to_vec(), + ); + } + entries + }; + let data_set = flatten_data_set(data_set); + Some((id, data_set)) + } + } + } + fn handle_application_tx(&mut self, block_number: u64, tx: Transaction) { let signer = |signed: Signed| SeraiAddress(signed.signer().to_bytes()); @@ -226,12 +303,36 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { ); } Transaction::DkgConfirmationPreprocess { attempt, preprocess, signed } => { - // Accumulate the preprocesses into our own FROST attempt manager - todo!("TODO") + let topic = topic.unwrap(); + let signer = signer(signed); + + let Some((id, data_set)) = + self.accumulate_dkg_confirmation(block_number, topic, attempt, &preprocess, signer) + else { + return; + }; + + db::DkgConfirmationMessages::send( + self.tributary_txn, + self.set.set, + &messages::sign::CoordinatorMessage::Preprocesses { id, preprocesses: data_set }, + ); } Transaction::DkgConfirmationShare { attempt, share, signed } => { - // Accumulate the shares into our own FROST attempt manager - todo!("TODO: SetKeysTask") + let topic = topic.unwrap(); + let signer = signer(signed); + + let Some((id, data_set)) = + self.accumulate_dkg_confirmation(block_number, topic, attempt, &share, signer) + else { + return; + }; + + db::DkgConfirmationMessages::send( + self.tributary_txn, + self.set.set, + &messages::sign::CoordinatorMessage::Shares { id, shares: data_set }, + ); } Transaction::Cosign { substrate_block_hash } => { @@ -405,7 +506,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { }; } - Transaction::Sign { id, attempt, round, data, signed } => { + Transaction::Sign { id: _, attempt: _, round, data, signed } => { let topic = topic.unwrap(); let signer = signer(signed); @@ -458,7 +559,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { }, ) } - }; + } } } } From 505f1b20a4cf29417e6c462f219635d9e291bf89 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 17:49:00 -0500 Subject: [PATCH 314/368] Correct re-attempts for the DKG Confirmation protocol Also spawns the SetKeys task. 
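The one-indexing this patch corrects is easy to get wrong: `dkg::Participant` is one-indexed (`Participant::new` returns `None` for zero), while a validator's position in the set's validator list is zero-indexed. A minimal sketch of the mapping, where `musig_participant` is a hypothetical helper and a plain byte array stands in for the repository's `SeraiAddress` type:

    use dkg::Participant;

    // Map a validator to its MuSig Participant index: its zero-indexed position
    // in the validator list, shifted up by one (Participant::new rejects zero).
    // Hypothetical helper for illustration, not the repository's code.
    fn musig_participant(validators: &[[u8; 32]], validator: &[u8; 32]) -> Option<Participant> {
      let zero_indexed = validators.iter().position(|candidate| *candidate == validator)?;
      Participant::new(u16::try_from(zero_indexed + 1).ok()?)
    }
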
--- Cargo.lock | 2 -- coordinator/Cargo.toml | 9 +++----- coordinator/src/main.rs | 17 ++++++++++---- coordinator/tributary/src/db.rs | 39 +++++++++++++++++++++++++++++--- coordinator/tributary/src/lib.rs | 25 ++++++++------------ 5 files changed, 61 insertions(+), 31 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 3fcda89f..72675804 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -8323,10 +8323,8 @@ dependencies = [ "frost-schnorrkel", "hex", "log", - "modular-frost", "parity-scale-codec", "rand_core", - "schnorr-signatures", "schnorrkel", "serai-client", "serai-coordinator-libp2p-p2p", diff --git a/coordinator/Cargo.toml b/coordinator/Cargo.toml index bd3c2e3d..4296423f 100644 --- a/coordinator/Cargo.toml +++ b/coordinator/Cargo.toml @@ -25,13 +25,13 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] } blake2 = { version = "0.10", default-features = false, features = ["std"] } schnorrkel = { version = "0.11", default-features = false, features = ["std"] } -ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std"] } +ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] } dkg = { path = "../crypto/dkg", default-features = false, features = ["std"] } -schnorr = { package = "schnorr-signatures", path = "../crypto/schnorr", default-features = false, features = ["std"] } -frost = { package = "modular-frost", path = "../crypto/frost" } frost-schnorrkel = { path = "../crypto/schnorrkel" } +hex = { version = "0.4", default-features = false, features = ["std"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive", "bit-vec"] } +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } zalloc = { path = "../common/zalloc" } serai-db = { path = "../common/db" } @@ -44,9 +44,6 @@ tributary-sdk = { path = "./tributary-sdk" } serai-client = { path = "../substrate/client", default-features = false, features = ["serai", "borsh"] } -hex = { version = "0.4", default-features = false, features = ["std"] } -borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } - log = { version = "0.4", default-features = false, features = ["std"] } env_logger = { version = "0.10", default-features = false, features = ["humantime"] } diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index c739d390..22043392 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -24,8 +24,8 @@ use serai_task::{Task, TaskHandle, ContinuallyRan}; use serai_cosign::{Faulted, SignedCosign, Cosigning}; use serai_coordinator_substrate::{ - CanonicalEventStream, EphemeralEventStream, SignSlashReport, SignedBatches, PublishBatchTask, - SlashReports, PublishSlashReportTask, + CanonicalEventStream, EphemeralEventStream, SignSlashReport, SetKeysTask, SignedBatches, + PublishBatchTask, SlashReports, PublishSlashReportTask, }; use serai_coordinator_tributary::{SigningProtocolRound, Signed, Transaction, SubstrateBlockPlans}; @@ -207,7 +207,7 @@ async fn handle_network( session, substrate_key, network_key, - } => todo!("TODO Transaction::DkgConfirmationPreprocess"), + } => todo!("TODO DkgConfirmationMessages, Transaction::DkgConfirmationPreprocess"), messages::key_gen::ProcessorMessage::Blame { session, participant } => { RemoveParticipant::send(&mut txn, ValidatorSet { network, session }, participant); } @@ -220,7 +220,6 @@ async fn handle_network( let set = 
ValidatorSet { network, session: id.session }; if id.attempt == 0 { // Batches are declared by their intent to be signed - // TODO: Document this in processor <-> coordinator rebuild issue if let messages::sign::VariantSignId::Batch(hash) = id.id { TributaryTransactions::send(&mut txn, set, &Transaction::Batch { hash }); } @@ -469,6 +468,16 @@ async fn main() { tokio::spawn(handle_network(db.clone(), message_queue.clone(), serai.clone(), network)); } + // Spawn the task to set keys + { + let (set_keys_task_def, set_keys_task) = Task::new(); + tokio::spawn( + SetKeysTask::new(db.clone(), serai.clone()).continually_run(set_keys_task_def, vec![]), + ); + // Forget its handle so it always runs in the background + core::mem::forget(set_keys_task); + } + // Spawn the task to publish slash reports { let (publish_slash_report_task_def, publish_slash_report_task) = Task::new(); diff --git a/coordinator/tributary/src/db.rs b/coordinator/tributary/src/db.rs index 1a475734..ef4199b8 100644 --- a/coordinator/tributary/src/db.rs +++ b/coordinator/tributary/src/db.rs @@ -94,9 +94,9 @@ impl Topic { } } - // The SignId for this topic - // - // Returns None if Topic isn't Topic::Sign + /// The SignId for this topic + /// + /// Returns None if Topic isn't Topic::Sign pub(crate) fn sign_id(self, set: ValidatorSet) -> Option { #[allow(clippy::match_same_arms)] match self { @@ -107,6 +107,33 @@ impl Topic { } } + /// The SignId for this DKG Confirmation. + /// + /// This is undefined except for being consistent to the DKG Confirmation signing protocol and + /// unique across sets. + /// + /// Returns None if Topic isn't Topic::DkgConfirmation. + pub(crate) fn dkg_confirmation_sign_id( + self, + set: ValidatorSet, + ) -> Option { + #[allow(clippy::match_same_arms)] + match self { + Topic::RemoveParticipant { .. } => None, + Topic::DkgConfirmation { attempt, round: _ } => Some({ + let id = { + let mut id = [0; 32]; + let encoded_set = set.encode(); + id[.. encoded_set.len()].copy_from_slice(&encoded_set); + VariantSignId::Batch(id) + }; + SignId { session: set.session, id, attempt } + }), + Topic::SlashReport { .. } => None, + Topic::Sign { .. 
} => None, + } + } + /// The topic which precedes this topic as a prerequisite /// /// The preceding topic must define this topic as succeeding @@ -337,6 +364,12 @@ impl TributaryDb { Self::recognize_topic(txn, set, topic); if let Some(id) = topic.sign_id(set) { Self::send_message(txn, set, messages::sign::CoordinatorMessage::Reattempt { id }); + } else if let Some(id) = topic.dkg_confirmation_sign_id(set) { + DkgConfirmationMessages::send( + txn, + set, + &messages::sign::CoordinatorMessage::Reattempt { id }, + ); + } } } diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 00bd5f51..27baf45d 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -5,8 +5,6 @@ use core::{marker::PhantomData, future::Future}; use std::collections::HashMap; -use scale::Encode; - use ciphersuite::group::GroupEncoding; use dkg::Participant; @@ -184,7 +182,6 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { &mut self, block_number: u64, topic: Topic, - attempt: u32, data: &D, signer: SeraiAddress, ) -> Option<(SignId, HashMap<Participant, Vec<u8>>)> { @@ -201,14 +198,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { ) { DataSet::None => None, DataSet::Participating(data_set) => { - // Consistent ID for the DKG confirmation, unique across sets - let id = { - let mut id = [0; 32]; - let encoded_set = self.set.set.encode(); - id[.. encoded_set.len()].copy_from_slice(&encoded_set); - VariantSignId::Batch(id) - }; - let id = SignId { session: self.set.set.session, id, attempt }; + let id = topic.dkg_confirmation_sign_id(self.set.set).unwrap(); // This will be used in a MuSig protocol, so the Participant indexes are the validator's // position in the list regardless of their weight @@ -222,8 +212,11 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { .enumerate() .find(|(_i, (validator_i, _weight))| validator == *validator_i) .unwrap(); + // The index is zero-indexed yet participants are one-indexed + let index = index + 1; + entries.insert( - Participant::new(u16::try_from(*index).unwrap()).unwrap(), + Participant::new(u16::try_from(index).unwrap()).unwrap(), participation.as_ref().to_vec(), ); } @@ -302,12 +295,12 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { }, ); } - Transaction::DkgConfirmationPreprocess { attempt, preprocess, signed } => { + Transaction::DkgConfirmationPreprocess { attempt: _, preprocess, signed } => { let topic = topic.unwrap(); let signer = signer(signed); let Some((id, data_set)) = - self.accumulate_dkg_confirmation(block_number, topic, attempt, &preprocess, signer) + self.accumulate_dkg_confirmation(block_number, topic, &preprocess, signer) else { return; }; @@ -318,12 +311,12 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { &messages::sign::CoordinatorMessage::Preprocesses { id, preprocesses: data_set }, ); } - Transaction::DkgConfirmationShare { attempt, share, signed } => { + Transaction::DkgConfirmationShare { attempt: _, share, signed } => { let topic = topic.unwrap(); let signer = signer(signed); let Some((id, data_set)) = - self.accumulate_dkg_confirmation(block_number, topic, attempt, &share, signer) + self.accumulate_dkg_confirmation(block_number, topic, &share, signer) else { return; }; From 19b87c7f5acab14bbc4ee094ef59ce520be31f98 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 20:29:57 -0500 Subject: [PATCH 315/368] Add the DKG confirmation flow Finishes the coordinator redo --- coordinator/src/db.rs | 31 +-
coordinator/src/dkg_confirmation.rs | 442 ++++++++++++++++++++++++++++ coordinator/src/main.rs | 39 ++- coordinator/src/substrate.rs | 7 +- coordinator/src/tributary.rs | 56 ++-- 5 files changed, 541 insertions(+), 34 deletions(-) create mode 100644 coordinator/src/dkg_confirmation.rs diff --git a/coordinator/src/db.rs b/coordinator/src/db.rs index 6f336bcc..97500b4e 100644 --- a/coordinator/src/db.rs +++ b/coordinator/src/db.rs @@ -7,7 +7,7 @@ use dkg::Participant; use serai_client::{ primitives::NetworkId, - validator_sets::primitives::{Session, ValidatorSet}, + validator_sets::primitives::{Session, ValidatorSet, KeyPair}, }; use serai_cosign::SignedCosign; @@ -78,6 +78,8 @@ create_db! { LastProcessorMessage: (network: NetworkId) -> u64, // Cosigns we produced and tried to intake yet incurred an error while doing so ErroneousCosigns: () -> Vec, + // The keys to confirm and set on the Serai network + KeysToConfirm: (set: ValidatorSet) -> KeyPair, } } @@ -95,24 +97,39 @@ mod _internal_db { db_channel! { Coordinator { - // Tributary transactions to publish - TributaryTransactions: (set: ValidatorSet) -> Transaction, + // Tributary transactions to publish from the Processor messages + TributaryTransactionsFromProcessorMessages: (set: ValidatorSet) -> Transaction, + // Tributary transactions to publish from the DKG confirmation task + TributaryTransactionsFromDkgConfirmation: (set: ValidatorSet) -> Transaction, // Participants to remove RemoveParticipant: (set: ValidatorSet) -> Participant, } } } -pub(crate) struct TributaryTransactions; -impl TributaryTransactions { +pub(crate) struct TributaryTransactionsFromProcessorMessages; +impl TributaryTransactionsFromProcessorMessages { pub(crate) fn send(txn: &mut impl DbTxn, set: ValidatorSet, tx: &Transaction) { // If this set has yet to be retired, send this transaction if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) { - _internal_db::TributaryTransactions::send(txn, set, tx); + _internal_db::TributaryTransactionsFromProcessorMessages::send(txn, set, tx); } } pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ValidatorSet) -> Option { - _internal_db::TributaryTransactions::try_recv(txn, set) + _internal_db::TributaryTransactionsFromProcessorMessages::try_recv(txn, set) + } +} + +pub(crate) struct TributaryTransactionsFromDkgConfirmation; +impl TributaryTransactionsFromDkgConfirmation { + pub(crate) fn send(txn: &mut impl DbTxn, set: ValidatorSet, tx: &Transaction) { + // If this set has yet to be retired, send this transaction + if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) { + _internal_db::TributaryTransactionsFromDkgConfirmation::send(txn, set, tx); + } + } + pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ValidatorSet) -> Option { + _internal_db::TributaryTransactionsFromDkgConfirmation::try_recv(txn, set) } } diff --git a/coordinator/src/dkg_confirmation.rs b/coordinator/src/dkg_confirmation.rs new file mode 100644 index 00000000..e09b0a4d --- /dev/null +++ b/coordinator/src/dkg_confirmation.rs @@ -0,0 +1,442 @@ +use core::{ops::Deref, future::Future}; +use std::{boxed::Box, sync::Arc, collections::HashMap}; + +use zeroize::Zeroizing; +use rand_core::OsRng; +use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; +use frost_schnorrkel::{ + frost::{ + dkg::{Participant, musig::musig}, + FrostError, + sign::*, + }, + Schnorrkel, +}; + +use serai_db::{DbTxn, Db as DbTrait}; + +use serai_client::{ + primitives::SeraiAddress, + 
validator_sets::primitives::{ValidatorSet, musig_context, set_keys_message}, + SeraiError, Serai, +}; + +use serai_task::ContinuallyRan; + +use serai_coordinator_substrate::{NewSetInformation, Keys}; +use serai_coordinator_tributary::{Transaction, DkgConfirmationMessages}; + +use crate::{KeysToConfirm, TributaryTransactionsFromDkgConfirmation}; + +fn schnorrkel() -> Schnorrkel { + Schnorrkel::new(b"substrate") // TODO: Pull the constant for this +} + +fn our_i( + set: &NewSetInformation, + key: &Zeroizing<::F>, + data: &HashMap>, +) -> Participant { + let public = SeraiAddress((Ristretto::generator() * key.deref()).to_bytes()); + + let mut our_i = None; + for participant in data.keys() { + let validator_index = usize::from(u16::from(*participant) - 1); + let (validator, _weight) = set.validators[validator_index]; + if validator == public { + our_i = Some(*participant); + } + } + our_i.unwrap() +} + +// Take a HashMap of participations with non-contiguous Participants and convert them to a +// contiguous sequence. +// +// The input data is expected to not include our own data, which also won't be in the output data. +// +// Returns the mapping from the contiguous Participants to the original Participants. +fn make_contiguous( + our_i: Participant, + mut data: HashMap>, + transform: impl Fn(Vec) -> std::io::Result, +) -> Result, Participant> { + assert!(!data.contains_key(&our_i)); + + let mut ordered_participants = data.keys().copied().collect::>(); + ordered_participants.sort_by_key(|participant| u16::from(*participant)); + + let mut our_i = Some(our_i); + let mut contiguous = HashMap::new(); + let mut i = 1; + for participant in ordered_participants { + // If this is the first participant after our own index, increment to account for our index + if let Some(our_i_value) = our_i { + if u16::from(participant) > u16::from(our_i_value) { + i += 1; + our_i = None; + } + } + + let contiguous_index = Participant::new(i).unwrap(); + let data = match transform(data.remove(&participant).unwrap()) { + Ok(data) => data, + Err(_) => Err(participant)?, + }; + contiguous.insert(contiguous_index, data); + i += 1; + } + Ok(contiguous) +} + +fn handle_frost_error(result: Result) -> Result { + match &result { + Ok(_) => Ok(result.unwrap()), + Err(FrostError::InvalidPreprocess(participant) | FrostError::InvalidShare(participant)) => { + Err(*participant) + } + // All of these should be unreachable + Err( + FrostError::InternalError(_) | + FrostError::InvalidParticipant(_, _) | + FrostError::InvalidSigningSet(_) | + FrostError::InvalidParticipantQuantity(_, _) | + FrostError::DuplicatedParticipant(_) | + FrostError::MissingParticipant(_), + ) => { + result.unwrap(); + unreachable!("continued execution after unwrapping Result::Err"); + } + } +} + +#[rustfmt::skip] +enum Signer { + Preprocess { attempt: u32, seed: CachedPreprocess, preprocess: [u8; 64] }, + Share { + attempt: u32, + musig_validators: Vec, + share: [u8; 32], + machine: Box>, + }, +} + +/// Performs the DKG Confirmation protocol. 
+pub(crate) struct ConfirmDkgTask { + db: CD, + + set: NewSetInformation, + tributary_db: TD, + + serai: Arc, + + key: Zeroizing<::F>, + signer: Option, +} + +impl ConfirmDkgTask { + pub(crate) fn new( + db: CD, + set: NewSetInformation, + tributary_db: TD, + serai: Arc, + key: Zeroizing<::F>, + ) -> Self { + Self { db, set, tributary_db, serai, key, signer: None } + } + + fn slash(db: &mut CD, set: ValidatorSet, validator: SeraiAddress) { + let mut txn = db.txn(); + TributaryTransactionsFromDkgConfirmation::send( + &mut txn, + set, + &Transaction::RemoveParticipant { participant: validator, signed: Default::default() }, + ); + txn.commit(); + } + + fn preprocess( + db: &mut CD, + set: ValidatorSet, + attempt: u32, + key: &Zeroizing<::F>, + signer: &mut Option, + ) { + // Perform the preprocess + let (machine, preprocess) = AlgorithmMachine::new( + schnorrkel(), + // We use a 1-of-1 Musig here as we don't know who will actually be in this Musig yet + musig(&musig_context(set), key, &[Ristretto::generator() * key.deref()]).unwrap().into(), + ) + .preprocess(&mut OsRng); + // We take the preprocess so we can use it in a distinct machine with the actual Musig + // parameters + let seed = machine.cache(); + + let mut preprocess_bytes = [0u8; 64]; + preprocess_bytes.copy_from_slice(&preprocess.serialize()); + let preprocess = preprocess_bytes; + + let mut txn = db.txn(); + // If this attempt has already been preprocessed for, the Tributary will de-duplicate it + // This may mean the Tributary preprocess is distinct from ours, but we check for that later + TributaryTransactionsFromDkgConfirmation::send( + &mut txn, + set, + &Transaction::DkgConfirmationPreprocess { attempt, preprocess, signed: Default::default() }, + ); + txn.commit(); + + *signer = Some(Signer::Preprocess { attempt, seed, preprocess }); + } +} + +impl ContinuallyRan for ConfirmDkgTask { + type Error = SeraiError; + + fn run_iteration(&mut self) -> impl Send + Future> { + async move { + let mut made_progress = false; + + // If we were sent a key to set, create the signer for it + if self.signer.is_none() && KeysToConfirm::get(&self.db, self.set.set).is_some() { + // Create and publish the initial preprocess + Self::preprocess(&mut self.db, self.set.set, 0, &self.key, &mut self.signer); + + made_progress = true; + } + + // If we have keys to confirm, handle all messages from the tributary + if let Some(key_pair) = KeysToConfirm::get(&self.db, self.set.set) { + // Handle all messages from the Tributary + loop { + let mut tributary_txn = self.tributary_db.txn(); + let Some(msg) = DkgConfirmationMessages::try_recv(&mut tributary_txn, self.set.set) + else { + break; + }; + + match msg { + messages::sign::CoordinatorMessage::Reattempt { + id: messages::sign::SignId { attempt, .. }, + } => { + // Create and publish the preprocess for the specified attempt + Self::preprocess(&mut self.db, self.set.set, attempt, &self.key, &mut self.signer); + } + messages::sign::CoordinatorMessage::Preprocesses { + id: messages::sign::SignId { attempt, .. 
}, + mut preprocesses, + } => { + // Confirm the preprocess we're expected to sign with is the one we locally have + // It may be different if we rebooted and made a second preprocess for this attempt + let Some(Signer::Preprocess { attempt: our_attempt, seed, preprocess }) = + self.signer.take() + else { + // If this message is not expected, commit the txn to drop it and move on + // At some point, we'll get a Reattempt and reset + tributary_txn.commit(); + break; + }; + + // Determine the MuSig key signed with + let musig_validators = { + let mut ordered_participants = preprocesses.keys().copied().collect::>(); + ordered_participants.sort_by_key(|participant| u16::from(*participant)); + + let mut res = vec![]; + for participant in ordered_participants { + let (validator, _weight) = + self.set.validators[usize::from(u16::from(participant) - 1)]; + res.push(validator); + } + res + }; + + let musig_public_keys = musig_validators + .iter() + .map(|key| { + Ristretto::read_G(&mut key.0.as_slice()) + .expect("Serai validator had invalid public key") + }) + .collect::>(); + + let keys = + musig(&musig_context(self.set.set), &self.key, &musig_public_keys).unwrap().into(); + + // Rebuild the machine + let (machine, preprocess_from_cache) = + AlgorithmSignMachine::from_cache(schnorrkel(), keys, seed); + assert_eq!(preprocess.as_slice(), preprocess_from_cache.serialize().as_slice()); + + // Ensure this is a consistent signing session + let our_i = our_i(&self.set, &self.key, &preprocesses); + let consistent = (attempt == our_attempt) && + (preprocesses.remove(&our_i).unwrap().as_slice() == preprocess.as_slice()); + if !consistent { + tributary_txn.commit(); + break; + } + + // Reformat the preprocesses into the expected format for Musig + let preprocesses = match make_contiguous(our_i, preprocesses, |preprocess| { + machine.read_preprocess(&mut preprocess.as_slice()) + }) { + Ok(preprocesses) => preprocesses, + // This yields the *original participant index* + Err(participant) => { + Self::slash( + &mut self.db, + self.set.set, + self.set.validators[usize::from(u16::from(participant) - 1)].0, + ); + tributary_txn.commit(); + break; + } + }; + + // Calculate our share + let (machine, share) = match handle_frost_error( + machine.sign(preprocesses, &set_keys_message(&self.set.set, &key_pair)), + ) { + Ok((machine, share)) => (machine, share), + // This yields the *musig participant index* + Err(participant) => { + Self::slash( + &mut self.db, + self.set.set, + musig_validators[usize::from(u16::from(participant) - 1)], + ); + tributary_txn.commit(); + break; + } + }; + + // Send our share + let share = <[u8; 32]>::try_from(share.serialize()).unwrap(); + let mut txn = self.db.txn(); + TributaryTransactionsFromDkgConfirmation::send( + &mut txn, + self.set.set, + &Transaction::DkgConfirmationShare { attempt, share, signed: Default::default() }, + ); + txn.commit(); + + self.signer = Some(Signer::Share { + attempt, + musig_validators, + share, + machine: Box::new(machine), + }); + } + messages::sign::CoordinatorMessage::Shares { + id: messages::sign::SignId { attempt, .. 
}, + mut shares, + } => { + let Some(Signer::Share { attempt: our_attempt, musig_validators, share, machine }) = + self.signer.take() + else { + tributary_txn.commit(); + break; + }; + + // Ensure this is a consistent signing session + let our_i = our_i(&self.set, &self.key, &shares); + let consistent = (attempt == our_attempt) && + (shares.remove(&our_i).unwrap().as_slice() == share.as_slice()); + if !consistent { + tributary_txn.commit(); + break; + } + + // Reformat the shares into the expected format for Musig + let shares = match make_contiguous(our_i, shares, |share| { + machine.read_share(&mut share.as_slice()) + }) { + Ok(shares) => shares, + // This yields the *original participant index* + Err(participant) => { + Self::slash( + &mut self.db, + self.set.set, + self.set.validators[usize::from(u16::from(participant) - 1)].0, + ); + tributary_txn.commit(); + break; + } + }; + + match handle_frost_error(machine.complete(shares)) { + Ok(signature) => { + // Create the bitvec of the participants + let mut signature_participants; + { + use bitvec::prelude::*; + signature_participants = bitvec![u8, Lsb0; 0; 0]; + let mut i = 0; + for (validator, _) in self.set.validators { + if Some(validator) == musig_validators.get(i) { + signature_participants.push(true); + i += 1; + } else { + signature_participants.push(false); + } + } + } + + // This is safe to call multiple times as it'll just change which *valid* + // signature to publish + let mut txn = self.db.txn(); + Keys::set( + &mut txn, + self.set.set, + key_pair.clone(), + signature_participants, + signature.into(), + ); + txn.commit(); + } + // This yields the *musig participant index* + Err(participant) => { + Self::slash( + &mut self.db, + self.set.set, + musig_validators[usize::from(u16::from(participant) - 1)], + ); + tributary_txn.commit(); + break; + } + } + } + } + + // Because we successfully handled this message, note we made progress + made_progress = true; + tributary_txn.commit(); + } + } + + // Check if the key has been set on Serai + if KeysToConfirm::get(&self.db, self.set.set).is_some() { + let serai = self.serai.as_of_latest_finalized_block().await?; + let serai = serai.validator_sets(); + let is_historic_set = serai.session(self.set.set.network).await?.map(|session| session.0) > + Some(self.set.set.session.0); + let key_set_on_serai = is_historic_set || serai.keys(self.set.set).await?.is_some(); + if key_set_on_serai { + // Take the keys to confirm so we never instantiate the signer again + let mut txn = self.db.txn(); + KeysToConfirm::take(&mut txn, self.set.set); + txn.commit(); + + // Drop our own signer + // The task won't die until the Tributary does, but now it'll never do anything again + self.signer = None; + + made_progress = true; + } + } + + Ok(made_progress) + } + } +} diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 22043392..9d7afb17 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -14,8 +14,8 @@ use borsh::BorshDeserialize; use tokio::sync::mpsc; use serai_client::{ - primitives::{NetworkId, SeraiAddress, Signature}, - validator_sets::primitives::ValidatorSet, + primitives::{NetworkId, PublicKey, SeraiAddress, Signature}, + validator_sets::primitives::{ValidatorSet, KeyPair}, Serai, }; use message_queue::{Service, client::MessageQueue}; @@ -33,6 +33,7 @@ mod db; use db::*; mod tributary; +mod dkg_confirmation; mod substrate; use substrate::SubstrateTask; @@ -197,7 +198,7 @@ async fn handle_network(
messages::key_gen::ProcessorMessage::Participation { session, participation } => { let set = ValidatorSet { network, session }; - TributaryTransactions::send( + TributaryTransactionsFromProcessorMessages::send( &mut txn, set, &Transaction::DkgParticipation { participation, signed: Signed::default() }, @@ -207,7 +208,18 @@ async fn handle_network( session, substrate_key, network_key, - } => todo!("TODO DkgConfirmationMessages, Transaction::DkgConfirmationPreprocess"), + } => { + KeysToConfirm::set( + &mut txn, + ValidatorSet { network, session }, + &KeyPair( + PublicKey::from_raw(substrate_key), + network_key + .try_into() + .expect("generated a network key which exceeds the maximum key length"), + ), + ); + } messages::key_gen::ProcessorMessage::Blame { session, participant } => { RemoveParticipant::send(&mut txn, ValidatorSet { network, session }, participant); } @@ -221,11 +233,15 @@ async fn handle_network( if id.attempt == 0 { // Batches are declared by their intent to be signed if let messages::sign::VariantSignId::Batch(hash) = id.id { - TributaryTransactions::send(&mut txn, set, &Transaction::Batch { hash }); + TributaryTransactionsFromProcessorMessages::send( + &mut txn, + set, + &Transaction::Batch { hash }, + ); } } - TributaryTransactions::send( + TributaryTransactionsFromProcessorMessages::send( &mut txn, set, &Transaction::Sign { @@ -239,7 +255,7 @@ async fn handle_network( } messages::sign::ProcessorMessage::Shares { id, shares } => { let set = ValidatorSet { network, session: id.session }; - TributaryTransactions::send( + TributaryTransactionsFromProcessorMessages::send( &mut txn, set, &Transaction::Sign { @@ -284,7 +300,7 @@ async fn handle_network( for (session, plans) in by_session { let set = ValidatorSet { network, session }; SubstrateBlockPlans::set(&mut txn, set, block, &plans); - TributaryTransactions::send( + TributaryTransactionsFromProcessorMessages::send( &mut txn, set, &Transaction::SubstrateBlock { hash: block }, @@ -350,10 +366,13 @@ async fn main() { // Cleanup all historic Tributaries while let Some(to_cleanup) = TributaryCleanup::try_recv(&mut txn) { prune_tributary_db(to_cleanup); + // Remove the keys to confirm for this network + KeysToConfirm::take(&mut txn, to_cleanup); // Drain the cosign intents created for this set while !Cosigning::::intended_cosigns(&mut txn, to_cleanup).is_empty() {} // Drain the transactions to publish for this set - while TributaryTransactions::try_recv(&mut txn, to_cleanup).is_some() {} + while TributaryTransactionsFromProcessorMessages::try_recv(&mut txn, to_cleanup).is_some() {} + while TributaryTransactionsFromDkgConfirmation::try_recv(&mut txn, to_cleanup).is_some() {} // Drain the participants to remove for this set while RemoveParticipant::try_recv(&mut txn, to_cleanup).is_some() {} // Remove the SignSlashReport notification @@ -442,6 +461,7 @@ async fn main() { p2p.clone(), &p2p_add_tributary_send, tributary, + serai.clone(), serai_key.clone(), ) .await; @@ -456,6 +476,7 @@ async fn main() { p2p: p2p.clone(), p2p_add_tributary: p2p_add_tributary_send.clone(), p2p_retire_tributary: p2p_retire_tributary_send.clone(), + serai: serai.clone(), }) .continually_run(substrate_task_def, vec![]), ); diff --git a/coordinator/src/substrate.rs b/coordinator/src/substrate.rs index 7601b2cc..518db079 100644 --- a/coordinator/src/substrate.rs +++ b/coordinator/src/substrate.rs @@ -9,7 +9,10 @@ use tokio::sync::mpsc; use serai_db::{DbTxn, Db as DbTrait}; -use serai_client::validator_sets::primitives::{Session, ValidatorSet}; +use 
serai_client::{ + validator_sets::primitives::{Session, ValidatorSet}, + Serai, +}; use message_queue::{Service, Metadata, client::MessageQueue}; use tributary_sdk::Tributary; @@ -29,6 +32,7 @@ pub(crate) struct SubstrateTask { pub(crate) p2p_add_tributary: mpsc::UnboundedSender<(ValidatorSet, Tributary)>, pub(crate) p2p_retire_tributary: mpsc::UnboundedSender, + pub(crate) serai: Arc, } impl ContinuallyRan for SubstrateTask
<TD, P>
{ @@ -146,6 +150,7 @@ impl ContinuallyRan for SubstrateTask
<TD, P>
{ self.p2p.clone(), &self.p2p_add_tributary, new_set, + self.serai.clone(), self.serai_key.clone(), ) .await; diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index 7162bbe1..fdb9c090 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -11,7 +11,7 @@ use tokio::sync::mpsc; use serai_db::{Get, DbTxn, Db as DbTrait, create_db, db_channel}; use scale::Encode; -use serai_client::validator_sets::primitives::ValidatorSet; +use serai_client::{validator_sets::primitives::ValidatorSet, Serai}; use tributary_sdk::{TransactionKind, TransactionError, ProvidedError, TransactionTrait, Tributary}; @@ -26,7 +26,10 @@ use serai_coordinator_tributary::{ }; use serai_coordinator_p2p::P2p; -use crate::{Db, TributaryTransactions, RemoveParticipant}; +use crate::{ + Db, TributaryTransactionsFromProcessorMessages, TributaryTransactionsFromDkgConfirmation, + RemoveParticipant, dkg_confirmation::ConfirmDkgTask, +}; create_db! { Coordinator { @@ -172,6 +175,7 @@ async fn add_signed_unsigned_transaction( Ok(true | false) => {} // InvalidNonce may be out-of-order TXs, not invalid ones, but we only create nonce #n+1 after // on-chain inclusion of the TX with nonce #n, so it is invalid within our context + // TODO: We need to handle publishing #n when #n already on-chain Err( TransactionError::TooLargeTransaction | TransactionError::InvalidSigner | @@ -192,7 +196,7 @@ async fn add_signed_unsigned_transaction( true } -/// Adds all of the transactions sent via `TributaryTransactions`. +/// Adds all of the transactions sent via `TributaryTransactionsFromProcessorMessages`. pub(crate) struct AddTributaryTransactionsTask { db: CD, tributary_db: TD, @@ -210,7 +214,19 @@ impl ContinuallyRan for AddTributaryTransactio // Provide/add all transactions sent our way loop { let mut txn = self.db.txn(); - let Some(tx) = TributaryTransactions::try_recv(&mut txn, self.set.set) else { break }; + // This gives priority to DkgConfirmation as that will only yield transactions at the start + // of the Tributary, ensuring this will be exhausted and yield to ProcessorMessages + let tx = match TributaryTransactionsFromDkgConfirmation::try_recv(&mut txn, self.set.set) { + Some(tx) => tx, + None => { + let Some(tx) = + TributaryTransactionsFromProcessorMessages::try_recv(&mut txn, self.set.set) + else { + break; + }; + tx + } + }; let kind = tx.kind(); match kind { @@ -399,6 +415,8 @@ async fn scan_on_new_block( /// - Spawn the ScanTributaryTask /// - Spawn the ProvideCosignCosignedTransactionsTask /// - Spawn the TributaryProcessorMessagesTask +/// - Spawn the AddTributaryTransactionsTask +/// - Spawn the ConfirmDkgTask /// - Spawn the SignSlashReportTask /// - Iterate the scan task whenever a new block occurs (not just on the standard interval) pub(crate) async fn spawn_tributary( @@ -407,6 +425,7 @@ pub(crate) async fn spawn_tributary( p2p: P, p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary)>, set: NewSetInformation, + serai: Arc, serai_key: Zeroizing<::F>, ) { // Don't spawn retired Tributaries @@ -485,30 +504,37 @@ pub(crate) async fn spawn_tributary( .continually_run(scan_tributary_task_def, vec![scan_tributary_messages_task]), ); - // Spawn the sign slash report task - let (sign_slash_report_task_def, sign_slash_report_task) = Task::new(); + // Spawn the add transactions task + let (add_tributary_transactions_task_def, add_tributary_transactions_task) = Task::new(); tokio::spawn( - (SignSlashReportTask { + (AddTributaryTransactionsTask { db: db.clone(), tributary_db: 
tributary_db.clone(), tributary: tributary.clone(), set: set.clone(), key: serai_key.clone(), }) - .continually_run(sign_slash_report_task_def, vec![]), + .continually_run(add_tributary_transactions_task_def, vec![]), ); - // Spawn the add transactions task - let (add_tributary_transactions_task_def, add_tributary_transactions_task) = Task::new(); + // Spawn the task to confirm the DKG result + let (confirm_dkg_task_def, confirm_dkg_task) = Task::new(); tokio::spawn( - (AddTributaryTransactionsTask { + ConfirmDkgTask::new(db.clone(), set.clone(), tributary_db.clone(), serai, serai_key.clone()) + .continually_run(confirm_dkg_task_def, vec![add_tributary_transactions_task]), + ); + + // Spawn the sign slash report task + let (sign_slash_report_task_def, sign_slash_report_task) = Task::new(); + tokio::spawn( + (SignSlashReportTask { db: db.clone(), tributary_db, tributary: tributary.clone(), set: set.clone(), key: serai_key, }) - .continually_run(add_tributary_transactions_task_def, vec![]), + .continually_run(sign_slash_report_task_def, vec![]), ); // Whenever a new block occurs, immediately run the scan task @@ -520,10 +546,6 @@ pub(crate) async fn spawn_tributary( set.set, tributary, scan_tributary_task, - vec![ - provide_cosign_cosigned_transactions_task, - sign_slash_report_task, - add_tributary_transactions_task, - ], + vec![provide_cosign_cosigned_transactions_task, confirm_dkg_task, sign_slash_report_task], )); } From 6b41f32371f117687231e9e628505758df98b1ed Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 20:48:36 -0500 Subject: [PATCH 316/368] Correct handling of InvalidNonce within the coordinator --- coordinator/src/dkg_confirmation.rs | 2 +- coordinator/src/tributary.rs | 144 ++++++++++++++++++---------- 2 files changed, 96 insertions(+), 50 deletions(-) diff --git a/coordinator/src/dkg_confirmation.rs b/coordinator/src/dkg_confirmation.rs index e09b0a4d..16857614 100644 --- a/coordinator/src/dkg_confirmation.rs +++ b/coordinator/src/dkg_confirmation.rs @@ -373,7 +373,7 @@ impl ContinuallyRan for ConfirmDkgTask { use bitvec::prelude::*; signature_participants = bitvec![u8, Lsb0; 0; 0]; let mut i = 0; - for (validator, _) in self.set.validators { + for (validator, _) in &self.set.validators { if Some(validator) == musig_validators.get(i) { signature_participants.push(true); i += 1; diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index fdb9c090..b3cffb68 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -173,18 +173,32 @@ async fn add_signed_unsigned_transaction( match &res { // Fresh publication, already published Ok(true | false) => {} - // InvalidNonce may be out-of-order TXs, not invalid ones, but we only create nonce #n+1 after - // on-chain inclusion of the TX with nonce #n, so it is invalid within our context - // TODO: We need to handle publishing #n when #n already on-chain Err( TransactionError::TooLargeTransaction | TransactionError::InvalidSigner | - TransactionError::InvalidNonce | TransactionError::InvalidSignature | TransactionError::InvalidContent, ) => { panic!("created an invalid transaction, tx: {tx:?}, err: {res:?}"); } + // InvalidNonce may be out-of-order TXs, not invalid ones, but we only create nonce #n+1 after + // on-chain inclusion of the TX with nonce #n, so it is invalid within our context unless the + // issue is this transaction was already included on-chain + Err(TransactionError::InvalidNonce) => { + let TransactionKind::Signed(order, signed) = tx.kind() else { + 
panic!("non-Signed transaction had InvalidNonce"); + }; + let next_nonce = tributary + .next_nonce(&signed.signer, &order) + .await + .expect("signer who is a present validator didn't have a nonce"); + assert!(next_nonce != signed.nonce); + // We're publishing an old transaction + if next_nonce > signed.nonce { + return true; + } + panic!("nonce in transaction wasn't contiguous with nonce on-chain"); + } // We've published too many transactions recently Err(TransactionError::TooManyInMempool) => { return false; @@ -196,6 +210,43 @@ async fn add_signed_unsigned_transaction( true } +async fn add_with_recognition_check( + set: ValidatorSet, + tributary_db: &mut TD, + tributary: &Tributary, + key: &Zeroizing<::F>, + tx: Transaction, +) -> bool { + let kind = tx.kind(); + match kind { + TransactionKind::Provided(_) => provide_transaction(set, tributary, tx).await, + TransactionKind::Unsigned | TransactionKind::Signed(_, _) => { + // If this is a transaction with signing data, check the topic is recognized before + // publishing + let topic = tx.topic(); + let still_requires_recognition = if let Some(topic) = topic { + (topic.requires_recognition() && (!RecognizedTopics::recognized(tributary_db, set, topic))) + .then_some(topic) + } else { + None + }; + if let Some(topic) = still_requires_recognition { + // Queue the transaction until the topic is recognized + // We use the Tributary DB for this so it's cleaned up when the Tributary DB is + let mut tributary_txn = tributary_db.txn(); + PublishOnRecognition::set(&mut tributary_txn, set, topic, &tx); + tributary_txn.commit(); + } else { + // Actually add the transaction + if !add_signed_unsigned_transaction(tributary, key, tx).await { + return false; + } + } + } + } + true +} + /// Adds all of the transactions sent via `TributaryTransactionsFromProcessorMessages`. 
pub(crate) struct AddTributaryTransactionsTask { db: CD, @@ -214,49 +265,44 @@ impl ContinuallyRan for AddTributaryTransactio // Provide/add all transactions sent our way loop { let mut txn = self.db.txn(); - // This gives priority to DkgConfirmation as that will only yield transactions at the start - // of the Tributary, ensuring this will be exhausted and yield to ProcessorMessages - let tx = match TributaryTransactionsFromDkgConfirmation::try_recv(&mut txn, self.set.set) { - Some(tx) => tx, - None => { - let Some(tx) = - TributaryTransactionsFromProcessorMessages::try_recv(&mut txn, self.set.set) - else { - break; - }; - tx - } + let Some(tx) = TributaryTransactionsFromDkgConfirmation::try_recv(&mut txn, self.set.set) + else { + break; }; - let kind = tx.kind(); - match kind { - TransactionKind::Provided(_) => { - provide_transaction(self.set.set, &self.tributary, tx).await - } - TransactionKind::Unsigned | TransactionKind::Signed(_, _) => { - // If this is a transaction with signing data, check the topic is recognized before - // publishing - let topic = tx.topic(); - let still_requires_recognition = if let Some(topic) = topic { - (topic.requires_recognition() && - (!RecognizedTopics::recognized(&self.tributary_db, self.set.set, topic))) - .then_some(topic) - } else { - None - }; - if let Some(topic) = still_requires_recognition { - // Queue the transaction until the topic is recognized - // We use the Tributary DB for this so it's cleaned up when the Tributary DB is - let mut txn = self.tributary_db.txn(); - PublishOnRecognition::set(&mut txn, self.set.set, topic, &tx); - txn.commit(); - } else { - // Actually add the transaction - if !add_signed_unsigned_transaction(&self.tributary, &self.key, tx).await { - break; - } - } - } + if !add_with_recognition_check( + self.set.set, + &mut self.tributary_db, + &self.tributary, + &self.key, + tx, + ) + .await + { + break; + } + + made_progress = true; + txn.commit(); + } + + loop { + let mut txn = self.db.txn(); + let Some(tx) = TributaryTransactionsFromProcessorMessages::try_recv(&mut txn, self.set.set) + else { + break; + }; + + if !add_with_recognition_check( + self.set.set, + &mut self.tributary_db, + &self.tributary, + &self.key, + tx, + ) + .await + { + break; } made_progress = true; @@ -265,20 +311,20 @@ impl ContinuallyRan for AddTributaryTransactio // Provide/add all transactions due to newly recognized topics loop { - let mut txn = self.tributary_db.txn(); + let mut tributary_txn = self.tributary_db.txn(); let Some(topic) = - RecognizedTopics::try_recv_topic_requiring_recognition(&mut txn, self.set.set) + RecognizedTopics::try_recv_topic_requiring_recognition(&mut tributary_txn, self.set.set) else { break; }; - if let Some(tx) = PublishOnRecognition::take(&mut txn, self.set.set, topic) { + if let Some(tx) = PublishOnRecognition::take(&mut tributary_txn, self.set.set, topic) { if !add_signed_unsigned_transaction(&self.tributary, &self.key, tx).await { break; } } made_progress = true; - txn.commit(); + tributary_txn.commit(); } // Publish any participant removals From be2098d2e1fe315037c0dd4d11dafc647fbb75e5 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 15 Jan 2025 21:00:50 -0500 Subject: [PATCH 317/368] Remove Serai from the ConfirmDkgTask --- coordinator/p2p/libp2p/src/validators.rs | 8 ++++ coordinator/src/db.rs | 2 + coordinator/src/dkg_confirmation.rs | 42 ++++++++----------- coordinator/src/main.rs | 3 +- coordinator/src/substrate.rs | 14 +++---- coordinator/src/tributary.rs | 5 +-- 
coordinator/substrate/src/publish_batch.rs | 2 + .../substrate/src/publish_slash_report.rs | 2 + coordinator/substrate/src/set_keys.rs | 2 + 9 files changed, 41 insertions(+), 39 deletions(-) diff --git a/coordinator/p2p/libp2p/src/validators.rs b/coordinator/p2p/libp2p/src/validators.rs index 0395ff3a..6b93cf4d 100644 --- a/coordinator/p2p/libp2p/src/validators.rs +++ b/coordinator/p2p/libp2p/src/validators.rs @@ -51,6 +51,14 @@ impl Validators { serai: impl Borrow, sessions: impl Borrow>, ) -> Result)>, SeraiError> { + /* + This uses the latest finalized block, not the latest cosigned block, which should be fine as + in the worst case, we'd connect to unexpected validators. They still shouldn't be able to + bypass the cosign protocol unless a historical global session was malicious, in which case + the cosign protocol already breaks. + + Besides, we can't connect to historical validators, only the current validators. + */ let temporal_serai = serai.borrow().as_of_latest_finalized_block().await?; let temporal_serai = temporal_serai.validator_sets(); diff --git a/coordinator/src/db.rs b/coordinator/src/db.rs index 97500b4e..631c6d4b 100644 --- a/coordinator/src/db.rs +++ b/coordinator/src/db.rs @@ -80,6 +80,8 @@ create_db! { ErroneousCosigns: () -> Vec, // The keys to confirm and set on the Serai network KeysToConfirm: (set: ValidatorSet) -> KeyPair, + // The key was set on the Serai network + KeySet: (set: ValidatorSet) -> (), } } diff --git a/coordinator/src/dkg_confirmation.rs b/coordinator/src/dkg_confirmation.rs index 16857614..b9af0ec7 100644 --- a/coordinator/src/dkg_confirmation.rs +++ b/coordinator/src/dkg_confirmation.rs @@ -1,5 +1,5 @@ use core::{ops::Deref, future::Future}; -use std::{boxed::Box, sync::Arc, collections::HashMap}; +use std::{boxed::Box, collections::HashMap}; use zeroize::Zeroizing; use rand_core::OsRng; @@ -18,15 +18,14 @@ use serai_db::{DbTxn, Db as DbTrait}; use serai_client::{ primitives::SeraiAddress, validator_sets::primitives::{ValidatorSet, musig_context, set_keys_message}, - SeraiError, Serai, }; -use serai_task::ContinuallyRan; +use serai_task::{DoesNotError, ContinuallyRan}; use serai_coordinator_substrate::{NewSetInformation, Keys}; use serai_coordinator_tributary::{Transaction, DkgConfirmationMessages}; -use crate::{KeysToConfirm, TributaryTransactionsFromDkgConfirmation}; +use crate::{KeysToConfirm, KeySet, TributaryTransactionsFromDkgConfirmation}; fn schnorrkel() -> Schnorrkel { Schnorrkel::new(b"substrate") // TODO: Pull the constant for this @@ -128,8 +127,6 @@ pub(crate) struct ConfirmDkgTask { set: NewSetInformation, tributary_db: TD, - serai: Arc, - key: Zeroizing<::F>, signer: Option, } @@ -139,10 +136,9 @@ impl ConfirmDkgTask { db: CD, set: NewSetInformation, tributary_db: TD, - serai: Arc, key: Zeroizing<::F>, ) -> Self { - Self { db, set, tributary_db, serai, key, signer: None } + Self { db, set, tributary_db, key, signer: None } } fn slash(db: &mut CD, set: ValidatorSet, validator: SeraiAddress) { @@ -192,7 +188,7 @@ impl ConfirmDkgTask { } impl ContinuallyRan for ConfirmDkgTask { - type Error = SeraiError; + type Error = DoesNotError; fn run_iteration(&mut self) -> impl Send + Future> { async move { @@ -416,24 +412,20 @@ impl ContinuallyRan for ConfirmDkgTask { } // Check if the key has been set on Serai - if KeysToConfirm::get(&self.db, self.set.set).is_some() { - let serai = self.serai.as_of_latest_finalized_block().await?; - let serai = serai.validator_sets(); - let is_historic_set = 
serai.session(self.set.set.network).await?.map(|session| session.0) > - Some(self.set.set.session.0); - let key_set_on_serai = is_historic_set || serai.keys(self.set.set).await?.is_some(); - if key_set_on_serai { - // Take the keys to confirm so we never instantiate the signer again - let mut txn = self.db.txn(); - KeysToConfirm::take(&mut txn, self.set.set); - txn.commit(); + if KeysToConfirm::get(&self.db, self.set.set).is_some() && + KeySet::get(&self.db, self.set.set).is_some() + { + // Take the keys to confirm so we never instantiate the signer again + let mut txn = self.db.txn(); + KeysToConfirm::take(&mut txn, self.set.set); + KeySet::take(&mut txn, self.set.set); + txn.commit(); - // Drop our own signer - // The task won't die until the Tributary does, but now it'll never do anything again - self.signer = None; + // Drop our own signer + // The task won't die until the Tributary does, but now it'll never do anything again + self.signer = None; - made_progress = true; - } + made_progress = true; } Ok(made_progress) diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 9d7afb17..4d48a317 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -368,6 +368,7 @@ async fn main() { prune_tributary_db(to_cleanup); // Remove the keys to confirm for this network KeysToConfirm::take(&mut txn, to_cleanup); + KeySet::take(&mut txn, to_cleanup); // Drain the cosign intents created for this set while !Cosigning::::intended_cosigns(&mut txn, to_cleanup).is_empty() {} // Drain the transactions to publish for this set @@ -461,7 +462,6 @@ async fn main() { p2p.clone(), &p2p_add_tributary_send, tributary, - serai.clone(), serai_key.clone(), ) .await; @@ -476,7 +476,6 @@ async fn main() { p2p: p2p.clone(), p2p_add_tributary: p2p_add_tributary_send.clone(), p2p_retire_tributary: p2p_retire_tributary_send.clone(), - serai: serai.clone(), }) .continually_run(substrate_task_def, vec![]), ); diff --git a/coordinator/src/substrate.rs b/coordinator/src/substrate.rs index 518db079..7a78e512 100644 --- a/coordinator/src/substrate.rs +++ b/coordinator/src/substrate.rs @@ -9,10 +9,7 @@ use tokio::sync::mpsc; use serai_db::{DbTxn, Db as DbTrait}; -use serai_client::{ - validator_sets::primitives::{Session, ValidatorSet}, - Serai, -}; +use serai_client::validator_sets::primitives::{Session, ValidatorSet}; use message_queue::{Service, Metadata, client::MessageQueue}; use tributary_sdk::Tributary; @@ -22,7 +19,7 @@ use serai_task::ContinuallyRan; use serai_coordinator_tributary::Transaction; use serai_coordinator_p2p::P2p; -use crate::Db; +use crate::{Db, KeySet}; pub(crate) struct SubstrateTask { pub(crate) serai_key: Zeroizing<::F>, @@ -32,7 +29,6 @@ pub(crate) struct SubstrateTask { pub(crate) p2p_add_tributary: mpsc::UnboundedSender<(ValidatorSet, Tributary)>, pub(crate) p2p_retire_tributary: mpsc::UnboundedSender, - pub(crate) serai: Arc, } impl ContinuallyRan for SubstrateTask
{ @@ -51,8 +47,9 @@ impl ContinuallyRan for SubstrateTask
{ }; match msg { - // TODO: Stop trying to confirm the DKG - messages::substrate::CoordinatorMessage::SetKeys { .. } => todo!("TODO"), + messages::substrate::CoordinatorMessage::SetKeys { session, .. } => { + KeySet::set(&mut txn, ValidatorSet { network, session }, &()); + } messages::substrate::CoordinatorMessage::SlashesReported { session } => { let prior_retired = crate::db::RetiredTributary::get(&txn, network); let next_to_be_retired = @@ -150,7 +147,6 @@ impl ContinuallyRan for SubstrateTask
{ self.p2p.clone(), &self.p2p_add_tributary, new_set, - self.serai.clone(), self.serai_key.clone(), ) .await; diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index b3cffb68..5f935f68 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -11,7 +11,7 @@ use tokio::sync::mpsc; use serai_db::{Get, DbTxn, Db as DbTrait, create_db, db_channel}; use scale::Encode; -use serai_client::{validator_sets::primitives::ValidatorSet, Serai}; +use serai_client::validator_sets::primitives::ValidatorSet; use tributary_sdk::{TransactionKind, TransactionError, ProvidedError, TransactionTrait, Tributary}; @@ -471,7 +471,6 @@ pub(crate) async fn spawn_tributary( p2p: P, p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary)>, set: NewSetInformation, - serai: Arc, serai_key: Zeroizing<::F>, ) { // Don't spawn retired Tributaries @@ -566,7 +565,7 @@ pub(crate) async fn spawn_tributary( // Spawn the task to confirm the DKG result let (confirm_dkg_task_def, confirm_dkg_task) = Task::new(); tokio::spawn( - ConfirmDkgTask::new(db.clone(), set.clone(), tributary_db.clone(), serai, serai_key.clone()) + ConfirmDkgTask::new(db.clone(), set.clone(), tributary_db.clone(), serai_key.clone()) .continually_run(confirm_dkg_task_def, vec![add_tributary_transactions_task]), ); diff --git a/coordinator/substrate/src/publish_batch.rs b/coordinator/substrate/src/publish_batch.rs index e9038d87..83aa0718 100644 --- a/coordinator/substrate/src/publish_batch.rs +++ b/coordinator/substrate/src/publish_batch.rs @@ -58,6 +58,8 @@ impl ContinuallyRan for PublishBatchTask { // Synchronize our last published batch with the Serai network's let next_to_publish = { + // This uses the latest finalized block, not the latest cosigned block, which should be + // fine as in the worst case, the only impact is no longer attempting TX publication let serai = self.serai.as_of_latest_finalized_block().await?; let last_batch = serai.in_instructions().last_batch_for_network(self.network).await?; diff --git a/coordinator/substrate/src/publish_slash_report.rs b/coordinator/substrate/src/publish_slash_report.rs index a26d4bd6..9be94f60 100644 --- a/coordinator/substrate/src/publish_slash_report.rs +++ b/coordinator/substrate/src/publish_slash_report.rs @@ -31,6 +31,8 @@ impl PublishSlashReportTask { return Ok(false); }; + // This uses the latest finalized block, not the latest cosigned block, which should be + // fine as in the worst case, the only impact is no longer attempting TX publication let serai = self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; let serai = serai.validator_sets(); let session_after_slash_report = Session(session.0 + 1); diff --git a/coordinator/substrate/src/set_keys.rs b/coordinator/substrate/src/set_keys.rs index 129bb703..a63e0923 100644 --- a/coordinator/substrate/src/set_keys.rs +++ b/coordinator/substrate/src/set_keys.rs @@ -39,6 +39,8 @@ impl ContinuallyRan for SetKeysTask { continue; }; + // This uses the latest finalized block, not the latest cosigned block, which should be + // fine as in the worst case, the only impact is no longer attempting TX publication let serai = self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; let serai = serai.validator_sets(); From 2226dd59cc31358e7e4104968fd91c0c07353847 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 17 Jan 2025 04:09:27 -0500 Subject: [PATCH 318/368] Comment all dependencies in substrate/node Causes the Cargo.lock to no longer include the substrate 
dependencies (including its copy of libp2p). --- Cargo.lock | 2388 +------------------------------------ substrate/node/Cargo.toml | 98 +- 2 files changed, 61 insertions(+), 2425 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 72675804..9f5ea20f 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -375,7 +375,7 @@ dependencies = [ "serde_json", "tokio", "tokio-stream", - "tower 0.5.2", + "tower", "tracing", "wasmtimer", ] @@ -444,7 +444,7 @@ dependencies = [ "alloy-transport", "serde_json", "simple-request", - "tower 0.5.2", + "tower", ] [[package]] @@ -519,7 +519,7 @@ dependencies = [ "serde_json", "thiserror 2.0.9", "tokio", - "tower 0.5.2", + "tower", "tracing", "url", "wasmtimer", @@ -575,55 +575,12 @@ dependencies = [ "winapi", ] -[[package]] -name = "anstream" -version = "0.6.18" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8acc5369981196006228e28809f761875c0327210a891e941f4c683b3a99529b" -dependencies = [ - "anstyle", - "anstyle-parse", - "anstyle-query", - "anstyle-wincon", - "colorchoice", - "is_terminal_polyfill", - "utf8parse", -] - [[package]] name = "anstyle" version = "1.0.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "55cc3b69f167a1ef2e161439aa98aed94e6028e5f9a59be9a6ffb47aef1651f9" -[[package]] -name = "anstyle-parse" -version = "0.2.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3b2d16507662817a6a20a9ea92df6652ee4f94f914589377d69f3b21bc5798a9" -dependencies = [ - "utf8parse", -] - -[[package]] -name = "anstyle-query" -version = "1.1.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "79947af37f4177cfead1110013d678905c37501914fba0efea834c3fe9a8d60c" -dependencies = [ - "windows-sys 0.59.0", -] - -[[package]] -name = "anstyle-wincon" -version = "3.0.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2109dbce0e72be3ec00bed26e6a7479ca384ad226efdd66db8fa2e3a38c83125" -dependencies = [ - "anstyle", - "windows-sys 0.59.0", -] - [[package]] name = "anyhow" version = "1.0.95" @@ -639,12 +596,6 @@ dependencies = [ "num-traits", ] -[[package]] -name = "arbitrary" -version = "1.4.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dde20b3d026af13f561bdd0f15edf01fc734f0dafcedbaf42bba506a9517f223" - [[package]] name = "ark-ff" version = "0.3.0" @@ -829,17 +780,6 @@ dependencies = [ "syn 1.0.109", ] -[[package]] -name = "async-channel" -version = "1.9.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "81953c529336010edd6d8e358f886d9581267795c61b19475b71314bffa46d35" -dependencies = [ - "concurrent-queue", - "event-listener 2.5.3", - "futures-core", -] - [[package]] name = "async-io" version = "2.4.0" @@ -865,7 +805,7 @@ version = "3.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ff6e472cdea888a4bd64f342f09b3f50e1886d32afe8df3d663c01140b811b18" dependencies = [ - "event-listener 5.3.1", + "event-listener", "event-listener-strategy", "pin-project-lite", ] @@ -1011,15 +951,6 @@ version = "0.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d965446196e3b7decd44aa7ee49e31d630118f90ef12f97900f262eb915c951d" -[[package]] -name = "beef" -version = "0.5.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3a8241f3ebb85c056b509d4327ad0358fbbba6ffb340bf388f26350aeda225b1" -dependencies = [ - "serde", -] - [[package]] name = "bincode" version = "1.3.3" @@ -1182,39 +1113,6 @@ 
dependencies = [ "constant_time_eq", ] -[[package]] -name = "blake2s_simd" -version = "1.0.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "94230421e395b9920d23df13ea5d77a20e1725331f90fbbf6df6040b33f756ae" -dependencies = [ - "arrayref", - "arrayvec", - "constant_time_eq", -] - -[[package]] -name = "blake3" -version = "1.5.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b8ee0c1824c4dea5b5f81736aff91bae041d2c07ee1192bec91054e10e3e601e" -dependencies = [ - "arrayref", - "arrayvec", - "cc", - "cfg-if", - "constant_time_eq", -] - -[[package]] -name = "block-buffer" -version = "0.9.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4152116fd6e9dadb291ae18fc1ec3575ed6d84c29642d97890f4b4a3417297e4" -dependencies = [ - "generic-array 0.14.7", -] - [[package]] name = "block-buffer" version = "0.10.4" @@ -1309,7 +1207,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2506947f73ad44e344215ccd6403ac2ae18cd8e046e581a441bf8d199f257f03" dependencies = [ "borsh-derive", - "cfg_aliases 0.2.1", + "cfg_aliases", ] [[package]] @@ -1346,16 +1244,6 @@ dependencies = [ "tinyvec", ] -[[package]] -name = "bstr" -version = "1.11.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "531a9155a481e2ee699d4f98f43c0ca4ff8ee1bfd55c31e9e98fb29d2b176fe0" -dependencies = [ - "memchr", - "serde", -] - [[package]] name = "build-helper" version = "0.1.1" @@ -1495,12 +1383,6 @@ version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" -[[package]] -name = "cfg_aliases" -version = "0.1.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fd16c4719339c4530435d38e511904438d07cce7950afa3718a84ac36c10e89e" - [[package]] name = "cfg_aliases" version = "0.2.1" @@ -1546,19 +1428,6 @@ dependencies = [ "windows-targets 0.52.6", ] -[[package]] -name = "cid" -version = "0.10.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fd94671561e36e4e7de75f753f577edafb0e7c05d6e4547229fdf7938fbcd2c3" -dependencies = [ - "core2", - "multibase", - "multihash 0.18.1", - "serde", - "unsigned-varint", -] - [[package]] name = "cipher" version = "0.4.4" @@ -1604,46 +1473,6 @@ dependencies = [ "libloading", ] -[[package]] -name = "clap" -version = "4.5.23" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3135e7ec2ef7b10c6ed8950f0f792ed96ee093fa088608f1c76e569722700c84" -dependencies = [ - "clap_builder", - "clap_derive", -] - -[[package]] -name = "clap_builder" -version = "4.5.23" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "30582fc632330df2bd26877bde0c1f4470d57c582bbc070376afcd04d8cb4838" -dependencies = [ - "anstream", - "anstyle", - "clap_lex", - "strsim", -] - -[[package]] -name = "clap_derive" -version = "4.5.18" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4ac6a0c7b1a9e9a5186361f67dfa1b88213572f427fb9ab038efb2bd8c582dab" -dependencies = [ - "heck 0.5.0", - "proc-macro2", - "quote", - "syn 2.0.94", -] - -[[package]] -name = "clap_lex" -version = "0.7.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f46ad14479a25103f283c0f10005961cf086d8dc42205bb44c46ac563475dca6" - [[package]] name = "codespan-reporting" version = "0.11.1" @@ -1654,12 +1483,6 @@ dependencies = [ "unicode-width", ] -[[package]] -name = 
"colorchoice" -version = "1.0.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5b63caa9aa9397e2d9480a9b13673856c78d8ac123288526c37d7839f2a86990" - [[package]] name = "concurrent-queue" version = "2.5.0" @@ -1767,60 +1590,6 @@ dependencies = [ "libc", ] -[[package]] -name = "cranelift-bforest" -version = "0.99.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "5a91a1ccf6fb772808742db2f51e2179f25b1ec559cbe39ea080c72ff61caf8f" -dependencies = [ - "cranelift-entity", -] - -[[package]] -name = "cranelift-codegen" -version = "0.99.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "169db1a457791bff4fd1fc585bb5cc515609647e0420a7d5c98d7700c59c2d00" -dependencies = [ - "bumpalo", - "cranelift-bforest", - "cranelift-codegen-meta", - "cranelift-codegen-shared", - "cranelift-control", - "cranelift-entity", - "cranelift-isle", - "gimli 0.27.3", - "hashbrown 0.13.2", - "log", - "regalloc2", - "smallvec", - "target-lexicon", -] - -[[package]] -name = "cranelift-codegen-meta" -version = "0.99.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3486b93751ef19e6d6eef66d2c0e83ed3d2ba01da1919ed2747f2f7bd8ba3419" -dependencies = [ - "cranelift-codegen-shared", -] - -[[package]] -name = "cranelift-codegen-shared" -version = "0.99.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "86a1205ab18e7cd25dc4eca5246e56b506ced3feb8d95a8d776195e48d2cd4ef" - -[[package]] -name = "cranelift-control" -version = "0.99.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1b108cae0f724ddfdec1871a0dc193a607e0c2d960f083cfefaae8ccf655eff2" -dependencies = [ - "arbitrary", -] - [[package]] name = "cranelift-entity" version = "0.99.2" @@ -1830,51 +1599,6 @@ dependencies = [ "serde", ] -[[package]] -name = "cranelift-frontend" -version = "0.99.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b7a94c4c5508b7407e125af9d5320694b7423322e59a4ac0d07919ae254347ca" -dependencies = [ - "cranelift-codegen", - "log", - "smallvec", - "target-lexicon", -] - -[[package]] -name = "cranelift-isle" -version = "0.99.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ef1f888d0845dcd6be4d625b91d9d8308f3d95bed5c5d4072ce38e1917faa505" - -[[package]] -name = "cranelift-native" -version = "0.99.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9ad5966da08f1e96a3ae63be49966a85c9b249fa465f8cf1b66469a82b1004a0" -dependencies = [ - "cranelift-codegen", - "libc", - "target-lexicon", -] - -[[package]] -name = "cranelift-wasm" -version = "0.99.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0d8635c88b424f1d232436f683a301143b36953cd98fc6f86f7bac862dfeb6f5" -dependencies = [ - "cranelift-codegen", - "cranelift-entity", - "cranelift-frontend", - "itertools 0.10.5", - "log", - "smallvec", - "wasmparser", - "wasmtime-types", -] - [[package]] name = "crc32fast" version = "1.4.2" @@ -1884,25 +1608,6 @@ dependencies = [ "cfg-if", ] -[[package]] -name = "crossbeam-deque" -version = "0.8.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9dd111b7b7f7d55b72c0a6ae361660ee5853c9af73f70c3c2ef6858b950e2e51" -dependencies = [ - "crossbeam-epoch", - "crossbeam-utils", -] - -[[package]] -name = "crossbeam-epoch" -version = "0.9.18" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"5b82ac4a3c2ca9c3460964f020e1402edd5753411d7737aa39c3714ad1b5420e" -dependencies = [ - "crossbeam-utils", -] - [[package]] name = "crossbeam-utils" version = "0.8.21" @@ -2164,12 +1869,6 @@ dependencies = [ "unicode-xid", ] -[[package]] -name = "difflib" -version = "0.4.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6184e33543162437515c2e2b48714794e37845ec9851711914eec9d308f6ebe8" - [[package]] name = "digest" version = "0.9.0" @@ -2185,7 +1884,7 @@ version = "0.10.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292" dependencies = [ - "block-buffer 0.10.4", + "block-buffer", "const-oid", "crypto-common", "subtle", @@ -2313,12 +2012,6 @@ dependencies = [ "tracing", ] -[[package]] -name = "downcast" -version = "0.11.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1435fa1053d8b2fbbe9be7e97eca7f33d37b28409959813daefc1446a14247f1" - [[package]] name = "dtoa" version = "1.0.9" @@ -2503,10 +2196,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4cd405aab171cb85d6735e5c8d9db038c17d3ca007a4d2c25f337935c3d90580" dependencies = [ "humantime", - "is-terminal", "log", - "regex", - "termcolor", ] [[package]] @@ -2551,12 +2241,6 @@ dependencies = [ "tokio", ] -[[package]] -name = "event-listener" -version = "2.5.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0206175f82b8d6bf6652ff7d71a1e27fd2e4efde587fd368662814d6ec1d9ce0" - [[package]] name = "event-listener" version = "5.3.1" @@ -2574,19 +2258,10 @@ version = "0.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3c3e4e0dd3673c1139bf041f3008816d9cf2946bbfac2945c09e523b8d7b05b2" dependencies = [ - "event-listener 5.3.1", + "event-listener", "pin-project-lite", ] -[[package]] -name = "exit-future" -version = "0.2.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e43f2f1833d64e33f15592464d6fdd70f349dda7b1a53088eb83cd94014008c5" -dependencies = [ - "futures", -] - [[package]] name = "expander" version = "2.0.0" @@ -2623,15 +2298,6 @@ dependencies = [ "bytes", ] -[[package]] -name = "fdlimit" -version = "0.2.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2c4c9e43643f5a3be4ca5b67d26b98031ff9db6806c3440ae32e02e3ceac3f1b" -dependencies = [ - "libc", -] - [[package]] name = "ff" version = "0.13.0" @@ -2663,16 +2329,6 @@ version = "0.2.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "28dea519a9695b9977216879a3ebfddf92f1c08c05d984f8996aecd6ecdc811d" -[[package]] -name = "file-per-thread-logger" -version = "0.2.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8a3cc21c33af89af0930c8cae4ade5e6fdc17b5d2c97b3d2e2edb67a1cf683f3" -dependencies = [ - "env_logger", - "log", -] - [[package]] name = "filetime" version = "0.2.25" @@ -2713,12 +2369,6 @@ dependencies = [ "static_assertions", ] -[[package]] -name = "fixedbitset" -version = "0.4.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80" - [[package]] name = "flexible-transcript" version = "0.3.2" @@ -2732,15 +2382,6 @@ dependencies = [ "zeroize", ] -[[package]] -name = "float-cmp" -version = "0.9.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "98de4bbd547a563b716d8dfa9aad1cb19bfab00f4fa09a6a4ed21dbcf44ce9c4" 
-dependencies = [ - "num-traits", -] - [[package]] name = "fnv" version = "1.0.7" @@ -2753,14 +2394,6 @@ version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a0d2fde1f7b3d48b8395d5f2de76c18a528bd6a9cdde438df747bfcba3e05d6f" -[[package]] -name = "fork-tree" -version = "3.0.0" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "parity-scale-codec", -] - [[package]] name = "form_urlencoded" version = "1.2.1" @@ -2770,12 +2403,6 @@ dependencies = [ "percent-encoding", ] -[[package]] -name = "fragile" -version = "2.0.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6c2141d6d6c8512188a7891b4b01590a45f6dac67afb4f255c4124dbb86d4eaa" - [[package]] name = "frame-benchmarking" version = "4.0.0-dev" @@ -2997,16 +2624,6 @@ dependencies = [ "futures-util", ] -[[package]] -name = "futures-bounded" -version = "0.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8b07bbbe7d7e78809544c6f718d875627addc73a7c3582447abc052cd3dc67e0" -dependencies = [ - "futures-timer", - "futures-util", -] - [[package]] name = "futures-channel" version = "0.3.31" @@ -3267,19 +2884,6 @@ version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a8d1add55171497b4705a648c6b583acafb01d58050a51727785f0b2c8e0a2b2" -[[package]] -name = "globset" -version = "0.4.15" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "15f1ce686646e7f1e19bf7d5533fe443a45dbfb990e00629110797578b42fb19" -dependencies = [ - "aho-corasick", - "bstr", - "log", - "regex-automata 0.4.9", - "regex-syntax 0.8.5", -] - [[package]] name = "group" version = "0.13.0" @@ -3510,12 +3114,6 @@ dependencies = [ "pin-project-lite", ] -[[package]] -name = "http-range-header" -version = "0.3.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "add0ab9360ddbd88cfeb3bd9574a1d85cfdfa14db10b3e21d3700dbc4328758f" - [[package]] name = "httparse" version = "1.9.5" @@ -3851,12 +3449,6 @@ dependencies = [ "num-traits", ] -[[package]] -name = "ip_network" -version = "0.4.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "aa2f047c0a98b2f299aa5d6d7088443570faae494e9ae1305e48be000c9e0eb1" - [[package]] name = "ipconfig" version = "0.3.2" @@ -3879,12 +3471,6 @@ checksum = "ddc24109865250148c2e0f3d25d4f0f479571723792d3802153c60922a4fb708" name = "is-terminal" version = "0.4.10" -[[package]] -name = "is_terminal_polyfill" -version = "1.70.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7943c866cc5cd64cbc25b2e01621d07fa8eb2a1a23160ee81ce38704e97b8ecf" - [[package]] name = "itertools" version = "0.10.5" @@ -3928,94 +3514,6 @@ dependencies = [ "wasm-bindgen", ] -[[package]] -name = "jsonrpsee" -version = "0.16.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "367a292944c07385839818bb71c8d76611138e2dedb0677d035b8da21d29c78b" -dependencies = [ - "jsonrpsee-core", - "jsonrpsee-proc-macros", - "jsonrpsee-server", - "jsonrpsee-types", - "tracing", -] - -[[package]] -name = "jsonrpsee-core" -version = "0.16.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2b5dde66c53d6dcdc8caea1874a45632ec0fcf5b437789f1e45766a1512ce803" -dependencies = [ - "anyhow", - "arrayvec", - "async-trait", - "beef", - "futures-channel", - "futures-util", - "globset", - "hyper 0.14.30", - "jsonrpsee-types", - "parking_lot 0.12.3", - "rand", 
- "rustc-hash 1.1.0", - "serde", - "serde_json", - "soketto 0.7.1", - "thiserror 1.0.69", - "tokio", - "tracing", -] - -[[package]] -name = "jsonrpsee-proc-macros" -version = "0.16.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "44e8ab85614a08792b9bff6c8feee23be78c98d0182d4c622c05256ab553892a" -dependencies = [ - "heck 0.4.1", - "proc-macro-crate 1.3.1", - "proc-macro2", - "quote", - "syn 1.0.109", -] - -[[package]] -name = "jsonrpsee-server" -version = "0.16.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cf4d945a6008c9b03db3354fb3c83ee02d2faa9f2e755ec1dfb69c3551b8f4ba" -dependencies = [ - "futures-channel", - "futures-util", - "http 0.2.12", - "hyper 0.14.30", - "jsonrpsee-core", - "jsonrpsee-types", - "serde", - "serde_json", - "soketto 0.7.1", - "tokio", - "tokio-stream", - "tokio-util", - "tower 0.4.13", - "tracing", -] - -[[package]] -name = "jsonrpsee-types" -version = "0.16.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "245ba8e5aa633dd1c1e4fae72bce06e71f42d34c14a2767c6b4d173b57bee5e5" -dependencies = [ - "anyhow", - "beef", - "serde", - "serde_json", - "thiserror 1.0.69", - "tracing", -] - [[package]] name = "k256" version = "0.13.4" @@ -4049,39 +3547,6 @@ dependencies = [ "sha3-asm", ] -[[package]] -name = "kvdb" -version = "0.13.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e7d770dcb02bf6835887c3a979b5107a04ff4bbde97a5f0928d27404a155add9" -dependencies = [ - "smallvec", -] - -[[package]] -name = "kvdb-memorydb" -version = "0.13.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bf7a85fe66f9ff9cd74e169fdd2c94c6e1e74c412c99a73b4df3200b5d3760b2" -dependencies = [ - "kvdb", - "parking_lot 0.12.3", -] - -[[package]] -name = "kvdb-rocksdb" -version = "0.19.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b644c70b92285f66bfc2032922a79000ea30af7bc2ab31902992a5dcb9b434f6" -dependencies = [ - "kvdb", - "num_cpus", - "parking_lot 0.12.3", - "regex", - "rocksdb 0.21.0", - "smallvec", -] - [[package]] name = "lazy_static" version = "1.5.0" @@ -4141,9 +3606,7 @@ dependencies = [ "libp2p-core", "libp2p-dns", "libp2p-gossipsub", - "libp2p-identify", "libp2p-identity", - "libp2p-kad", "libp2p-mdns", "libp2p-metrics", "libp2p-noise", @@ -4153,8 +3616,6 @@ dependencies = [ "libp2p-swarm", "libp2p-tcp", "libp2p-upnp", - "libp2p-wasm-ext", - "libp2p-websocket", "libp2p-yamux", "multiaddr", "pin-project", @@ -4200,7 +3661,7 @@ dependencies = [ "libp2p-identity", "log", "multiaddr", - "multihash 0.19.1", + "multihash", "multistream-select", "once_cell", "parking_lot 0.12.3", @@ -4262,29 +3723,6 @@ dependencies = [ "void", ] -[[package]] -name = "libp2p-identify" -version = "0.43.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "45a96638a0a176bec0a4bcaebc1afa8cf909b114477209d7456ade52c61cd9cd" -dependencies = [ - "asynchronous-codec", - "either", - "futures", - "futures-bounded", - "futures-timer", - "libp2p-core", - "libp2p-identity", - "libp2p-swarm", - "log", - "lru", - "quick-protobuf", - "quick-protobuf-codec", - "smallvec", - "thiserror 1.0.69", - "void", -] - [[package]] name = "libp2p-identity" version = "0.2.10" @@ -4294,7 +3732,7 @@ dependencies = [ "bs58", "ed25519-dalek", "hkdf", - "multihash 0.19.1", + "multihash", "quick-protobuf", "rand", "sha2", @@ -4303,35 +3741,6 @@ dependencies = [ "zeroize", ] -[[package]] -name = "libp2p-kad" -version = "0.44.6" 
-source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "16ea178dabba6dde6ffc260a8e0452ccdc8f79becf544946692fff9d412fc29d" -dependencies = [ - "arrayvec", - "asynchronous-codec", - "bytes", - "either", - "fnv", - "futures", - "futures-timer", - "instant", - "libp2p-core", - "libp2p-identity", - "libp2p-swarm", - "log", - "quick-protobuf", - "quick-protobuf-codec", - "rand", - "sha2", - "smallvec", - "thiserror 1.0.69", - "uint", - "unsigned-varint", - "void", -] - [[package]] name = "libp2p-mdns" version = "0.44.0" @@ -4362,9 +3771,7 @@ dependencies = [ "instant", "libp2p-core", "libp2p-gossipsub", - "libp2p-identify", "libp2p-identity", - "libp2p-kad", "libp2p-ping", "libp2p-swarm", "once_cell", @@ -4384,7 +3791,7 @@ dependencies = [ "libp2p-identity", "log", "multiaddr", - "multihash 0.19.1", + "multihash", "once_cell", "quick-protobuf", "rand", @@ -4544,41 +3951,6 @@ dependencies = [ "void", ] -[[package]] -name = "libp2p-wasm-ext" -version = "0.40.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1e5d8e3a9e07da0ef5b55a9f26c009c8fb3c725d492d8bb4b431715786eea79c" -dependencies = [ - "futures", - "js-sys", - "libp2p-core", - "send_wrapper", - "wasm-bindgen", - "wasm-bindgen-futures", -] - -[[package]] -name = "libp2p-websocket" -version = "0.42.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "004ee9c4a4631435169aee6aad2f62e3984dc031c43b6d29731e8e82a016c538" -dependencies = [ - "either", - "futures", - "futures-rustls", - "libp2p-core", - "libp2p-identity", - "log", - "parking_lot 0.12.3", - "pin-project-lite", - "rw-stream-sink", - "soketto 0.8.1", - "thiserror 1.0.69", - "url", - "webpki-roots", -] - [[package]] name = "libp2p-yamux" version = "0.44.1" @@ -4644,15 +4016,6 @@ version = "0.5.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0717cef1bc8b636c6e1c1bbdefc09e6322da8a9321966e8928ef80d20f7f770f" -[[package]] -name = "linked_hash_set" -version = "0.1.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bae85b5be22d9843c80e5fc80e9b64c8a3b1f98f867c709956eca3efff4e92e2" -dependencies = [ - "linked-hash-map", -] - [[package]] name = "linregress" version = "0.5.4" @@ -4942,33 +4305,6 @@ dependencies = [ "windows-sys 0.52.0", ] -[[package]] -name = "mockall" -version = "0.11.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4c84490118f2ee2d74570d114f3d0493cbf02790df303d2707606c3e14e07c96" -dependencies = [ - "cfg-if", - "downcast", - "fragile", - "lazy_static", - "mockall_derive", - "predicates", - "predicates-tree", -] - -[[package]] -name = "mockall_derive" -version = "0.11.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "22ce75669015c4f47b289fd4d4f56e894e4c96003ffdf3ac51313126f94c6cbb" -dependencies = [ - "cfg-if", - "proc-macro2", - "quote", - "syn 1.0.109", -] - [[package]] name = "modular-frost" version = "0.8.1" @@ -5230,7 +4566,7 @@ dependencies = [ "data-encoding", "libp2p-identity", "multibase", - "multihash 0.19.1", + "multihash", "percent-encoding", "serde", "static_assertions", @@ -5263,23 +4599,6 @@ dependencies = [ "zeroize", ] -[[package]] -name = "multihash" -version = "0.18.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cfd8a792c1694c6da4f68db0a9d707c72bd260994da179e6030a5dcee00bb815" -dependencies = [ - "blake2b_simd", - "blake2s_simd", - "blake3", - "core2", - "digest 0.10.7", - "multihash-derive 0.8.0", - "sha2", - "sha3", - 
"unsigned-varint", -] - [[package]] name = "multihash" version = "0.19.1" @@ -5290,70 +4609,6 @@ dependencies = [ "unsigned-varint", ] -[[package]] -name = "multihash-codetable" -version = "0.1.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f6d815ecb3c8238d00647f8630ede7060a642c9f704761cd6082cb4028af6935" -dependencies = [ - "blake2b_simd", - "blake2s_simd", - "blake3", - "core2", - "digest 0.10.7", - "multihash-derive 0.9.0", - "ripemd", - "sha1", - "sha2", - "sha3", - "strobe-rs", -] - -[[package]] -name = "multihash-derive" -version = "0.8.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fc076939022111618a5026d3be019fd8b366e76314538ff9a1b59ffbcbf98bcd" -dependencies = [ - "proc-macro-crate 1.3.1", - "proc-macro-error", - "proc-macro2", - "quote", - "syn 1.0.109", - "synstructure", -] - -[[package]] -name = "multihash-derive" -version = "0.9.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "890e72cb7396cb99ed98c1246a97b243cc16394470d94e0bc8b0c2c11d84290e" -dependencies = [ - "core2", - "multihash 0.19.1", - "multihash-derive-impl", -] - -[[package]] -name = "multihash-derive-impl" -version = "0.1.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d38685e08adb338659871ecfc6ee47ba9b22dcc8abcf6975d379cc49145c3040" -dependencies = [ - "proc-macro-crate 1.3.1", - "proc-macro-error", - "proc-macro2", - "quote", - "syn 1.0.109", - "synstructure", -] - -[[package]] -name = "multimap" -version = "0.8.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e5ce46fe64a9d73be07dcbe690a38ce1b293be448fd8ce1e6c1b8062c9f72c6a" - [[package]] name = "multistream-select" version = "0.13.0" @@ -5383,15 +4638,6 @@ dependencies = [ "typenum", ] -[[package]] -name = "names" -version = "0.14.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7bddcd3bf5144b6392de80e04c347cd7fab2508f6df16a85fc496ecd5cec39bc" -dependencies = [ - "rand", -] - [[package]] name = "netlink-packet-core" version = "0.7.0" @@ -5484,12 +4730,6 @@ dependencies = [ "minimal-lexical", ] -[[package]] -name = "normalize-line-endings" -version = "0.3.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "61807f77802ff30975e01f4f071c8ba10c022052f98b3294119f3e615d13e5be" - [[package]] name = "nu-ansi-term" version = "0.46.0" @@ -5800,22 +5040,6 @@ dependencies = [ "sp-std", ] -[[package]] -name = "pallet-transaction-payment-rpc" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "jsonrpsee", - "pallet-transaction-payment-rpc-runtime-api", - "parity-scale-codec", - "sp-api", - "sp-blockchain", - "sp-core", - "sp-rpc", - "sp-runtime", - "sp-weights", -] - [[package]] name = "pallet-transaction-payment-rpc-runtime-api" version = "4.0.0-dev" @@ -5925,12 +5149,6 @@ dependencies = [ "windows-targets 0.52.6", ] -[[package]] -name = "partial_sort" -version = "0.2.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7924d1d0ad836f665c9065e26d016c673ece3993f30d340068b16f282afc1156" - [[package]] name = "password-hash" version = "0.5.0" @@ -6017,16 +5235,6 @@ dependencies = [ "ucd-trie", ] -[[package]] -name = "petgraph" -version = "0.6.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b4c5cc86750666a3ed20bdaf5ca2a0344f9c67674cae0515bec2da16fbaa47db" -dependencies = [ - "fixedbitset", - "indexmap 
2.7.0", -] - [[package]] name = "pin-project" version = "1.1.7" @@ -6142,46 +5350,6 @@ dependencies = [ "zerocopy", ] -[[package]] -name = "predicates" -version = "2.1.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "59230a63c37f3e18569bdb90e4a89cbf5bf8b06fea0b84e65ea10cc4df47addd" -dependencies = [ - "difflib", - "float-cmp", - "itertools 0.10.5", - "normalize-line-endings", - "predicates-core", - "regex", -] - -[[package]] -name = "predicates-core" -version = "1.0.9" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "727e462b119fe9c93fd0eb1429a5f7647394014cf3c04ab2c0350eeb09095ffa" - -[[package]] -name = "predicates-tree" -version = "1.0.12" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "72dd2d6d381dfb73a193c7fca536518d7caee39fc8503f74e7dc0be0531b425c" -dependencies = [ - "predicates-core", - "termtree", -] - -[[package]] -name = "prettyplease" -version = "0.1.25" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6c8646e95016a7a6c4adea95bafa8a16baab64b583356217f2c85db4a39d9a86" -dependencies = [ - "proc-macro2", - "syn 1.0.109", -] - [[package]] name = "primeorder" version = "0.13.6" @@ -6223,30 +5391,6 @@ dependencies = [ "toml_edit 0.22.22", ] -[[package]] -name = "proc-macro-error" -version = "1.0.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "da25490ff9892aab3fcf7c36f08cfb902dd3e71ca0f9f9517bea02a73a5ce38c" -dependencies = [ - "proc-macro-error-attr", - "proc-macro2", - "quote", - "syn 1.0.109", - "version_check", -] - -[[package]] -name = "proc-macro-error-attr" -version = "1.0.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a1be40180e52ecc98ad80b184934baf3d0d29f979574e439af5a55274b35f869" -dependencies = [ - "proc-macro2", - "quote", - "version_check", -] - [[package]] name = "proc-macro-error-attr2" version = "2.0.0" @@ -6289,20 +5433,6 @@ dependencies = [ "unicode-ident", ] -[[package]] -name = "prometheus" -version = "0.13.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3d33c28a30771f7f96db69893f78b857f7450d7e0237e9c8fc6427a81bae7ed1" -dependencies = [ - "cfg-if", - "fnv", - "lazy_static", - "memchr", - "parking_lot 0.12.3", - "thiserror 1.0.69", -] - [[package]] name = "prometheus-client" version = "0.21.2" @@ -6346,60 +5476,6 @@ dependencies = [ "unarray", ] -[[package]] -name = "prost" -version = "0.11.9" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0b82eaa1d779e9a4bc1c3217db8ffbeabaae1dca241bf70183242128d48681cd" -dependencies = [ - "bytes", - "prost-derive", -] - -[[package]] -name = "prost-build" -version = "0.11.9" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "119533552c9a7ffacc21e099c24a0ac8bb19c2a2a3f363de84cd9b844feab270" -dependencies = [ - "bytes", - "heck 0.4.1", - "itertools 0.10.5", - "lazy_static", - "log", - "multimap", - "petgraph", - "prettyplease", - "prost", - "prost-types", - "regex", - "syn 1.0.109", - "tempfile", - "which", -] - -[[package]] -name = "prost-derive" -version = "0.11.9" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e5d2d8d10f3c6ded6da8b05b5fb3b8a5082514344d56c9f871412d29b4e075b4" -dependencies = [ - "anyhow", - "itertools 0.10.5", - "proc-macro2", - "quote", - "syn 1.0.109", -] - -[[package]] -name = "prost-types" -version = "0.11.9" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = 
"213622a1460818959ac1181aaeb2dc9c7f63df720db7d788b3e24eacd1983e13" -dependencies = [ - "prost", -] - [[package]] name = "psm" version = "0.1.24" @@ -6541,15 +5617,6 @@ dependencies = [ "rand", ] -[[package]] -name = "rand_pcg" -version = "0.3.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "59cad018caf63deb318e5a4586d99a24424a364f40f1e5778c29aca23f4fc73e" -dependencies = [ - "rand_core", -] - [[package]] name = "rand_xorshift" version = "0.3.0" @@ -6565,26 +5632,6 @@ version = "0.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "60a357793950651c4ed0f3f52338f53b2f809f32d83a07f72909fa13e4c6c1e3" -[[package]] -name = "rayon" -version = "1.10.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b418a60154510ca1a002a752ca9714984e21e4241e804d32555251faf8b78ffa" -dependencies = [ - "either", - "rayon-core", -] - -[[package]] -name = "rayon-core" -version = "1.12.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1465873a3dfdaa8ae7cb14b4383657caab0b3e8a0aa9ae8e04b044854c8dfce2" -dependencies = [ - "crossbeam-deque", - "crossbeam-utils", -] - [[package]] name = "rcgen" version = "0.10.0" @@ -6637,19 +5684,6 @@ dependencies = [ "syn 2.0.94", ] -[[package]] -name = "regalloc2" -version = "0.9.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ad156d539c879b7a24a363a2016d77961786e71f48f2e2fc8302a92abd2429a6" -dependencies = [ - "hashbrown 0.13.2", - "log", - "rustc-hash 1.1.0", - "slice-group-by", - "smallvec", -] - [[package]] name = "regex" version = "1.11.1" @@ -6744,15 +5778,6 @@ dependencies = [ "windows-sys 0.52.0", ] -[[package]] -name = "ripemd" -version = "0.1.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "bd124222d17ad93a644ed9d011a40f4fb64aa54275c08cc216524a9ea82fb09f" -dependencies = [ - "digest 0.10.7", -] - [[package]] name = "rlp" version = "0.5.2" @@ -6780,17 +5805,6 @@ dependencies = [ "librocksdb-sys", ] -[[package]] -name = "rpassword" -version = "7.3.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "80472be3c897911d0137b2d2b9055faf6eeac5b14e324073d83bc17b191d7e3f" -dependencies = [ - "libc", - "rtoolbox", - "windows-sys 0.48.0", -] - [[package]] name = "rtnetlink" version = "0.13.1" @@ -6809,16 +5823,6 @@ dependencies = [ "tokio", ] -[[package]] -name = "rtoolbox" -version = "0.0.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c247d24e63230cdb56463ae328478bd5eac8b8faa8c69461a77e8e323afac90e" -dependencies = [ - "libc", - "windows-sys 0.48.0", -] - [[package]] name = "ruint" version = "1.12.3" @@ -7031,931 +6035,6 @@ dependencies = [ "winapi-util", ] -[[package]] -name = "sc-allocator" -version = "4.1.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "log", - "sp-core", - "sp-wasm-interface", - "thiserror 1.0.69", -] - -[[package]] -name = "sc-authority-discovery" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "async-trait", - "futures", - "futures-timer", - "ip_network", - "libp2p", - "log", - "multihash-codetable", - "parity-scale-codec", - "prost", - "prost-build", - "rand", - "sc-client-api", - "sc-network", - "sp-api", - "sp-authority-discovery", - "sp-blockchain", - "sp-core", - "sp-keystore", - "sp-runtime", - "substrate-prometheus-endpoint", - "thiserror 
1.0.69", -] - -[[package]] -name = "sc-basic-authorship" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "futures", - "futures-timer", - "log", - "parity-scale-codec", - "sc-block-builder", - "sc-client-api", - "sc-proposer-metrics", - "sc-telemetry", - "sc-transaction-pool-api", - "sp-api", - "sp-blockchain", - "sp-consensus", - "sp-core", - "sp-inherents", - "sp-runtime", - "substrate-prometheus-endpoint", -] - -[[package]] -name = "sc-block-builder" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "parity-scale-codec", - "sc-client-api", - "sp-api", - "sp-block-builder", - "sp-blockchain", - "sp-core", - "sp-inherents", - "sp-runtime", -] - -[[package]] -name = "sc-chain-spec" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "memmap2", - "sc-chain-spec-derive", - "sc-client-api", - "sc-executor", - "sc-network", - "sc-telemetry", - "serde", - "serde_json", - "sp-blockchain", - "sp-core", - "sp-runtime", - "sp-state-machine", -] - -[[package]] -name = "sc-chain-spec-derive" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "proc-macro-crate 1.3.1", - "proc-macro2", - "quote", - "syn 2.0.94", -] - -[[package]] -name = "sc-cli" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "array-bytes", - "chrono", - "clap", - "fdlimit", - "futures", - "libp2p-identity", - "log", - "names", - "parity-scale-codec", - "rand", - "regex", - "rpassword", - "sc-client-api", - "sc-client-db", - "sc-keystore", - "sc-network", - "sc-service", - "sc-telemetry", - "sc-tracing", - "sc-utils", - "serde", - "serde_json", - "sp-blockchain", - "sp-core", - "sp-keyring", - "sp-keystore", - "sp-panic-handler", - "sp-runtime", - "sp-version", - "thiserror 1.0.69", - "tiny-bip39", - "tokio", -] - -[[package]] -name = "sc-client-api" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "fnv", - "futures", - "log", - "parity-scale-codec", - "parking_lot 0.12.3", - "sc-executor", - "sc-transaction-pool-api", - "sc-utils", - "sp-api", - "sp-blockchain", - "sp-consensus", - "sp-core", - "sp-database", - "sp-externalities", - "sp-runtime", - "sp-state-machine", - "sp-storage", - "substrate-prometheus-endpoint", -] - -[[package]] -name = "sc-client-db" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "hash-db", - "kvdb", - "kvdb-memorydb", - "kvdb-rocksdb", - "linked-hash-map", - "log", - "parity-db", - "parity-scale-codec", - "parking_lot 0.12.3", - "sc-client-api", - "sc-state-db", - "schnellru", - "sp-arithmetic", - "sp-blockchain", - "sp-core", - "sp-database", - "sp-runtime", - "sp-state-machine", - "sp-trie", -] - -[[package]] -name = "sc-consensus" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "async-trait", - "futures", - "futures-timer", - "libp2p-identity", - "log", - "mockall", - "parking_lot 0.12.3", - "sc-client-api", - "sc-utils", - "serde", - "sp-api", - "sp-blockchain", - "sp-consensus", - "sp-core", - 
"sp-runtime", - "sp-state-machine", - "substrate-prometheus-endpoint", - "thiserror 1.0.69", -] - -[[package]] -name = "sc-consensus-babe" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "async-trait", - "fork-tree", - "futures", - "log", - "num-bigint", - "num-rational", - "num-traits", - "parity-scale-codec", - "parking_lot 0.12.3", - "sc-client-api", - "sc-consensus", - "sc-consensus-epochs", - "sc-consensus-slots", - "sc-telemetry", - "sc-transaction-pool-api", - "scale-info", - "sp-api", - "sp-application-crypto", - "sp-block-builder", - "sp-blockchain", - "sp-consensus", - "sp-consensus-babe", - "sp-consensus-slots", - "sp-core", - "sp-inherents", - "sp-keystore", - "sp-runtime", - "substrate-prometheus-endpoint", - "thiserror 1.0.69", -] - -[[package]] -name = "sc-consensus-epochs" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "fork-tree", - "parity-scale-codec", - "sc-client-api", - "sc-consensus", - "sp-blockchain", - "sp-runtime", -] - -[[package]] -name = "sc-consensus-grandpa" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "ahash", - "array-bytes", - "async-trait", - "dyn-clone", - "finality-grandpa", - "fork-tree", - "futures", - "futures-timer", - "log", - "parity-scale-codec", - "parking_lot 0.12.3", - "rand", - "sc-block-builder", - "sc-chain-spec", - "sc-client-api", - "sc-consensus", - "sc-network", - "sc-network-common", - "sc-network-gossip", - "sc-telemetry", - "sc-transaction-pool-api", - "sc-utils", - "serde_json", - "sp-api", - "sp-application-crypto", - "sp-arithmetic", - "sp-blockchain", - "sp-consensus", - "sp-consensus-grandpa", - "sp-core", - "sp-keystore", - "sp-runtime", - "substrate-prometheus-endpoint", - "thiserror 1.0.69", -] - -[[package]] -name = "sc-consensus-slots" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "async-trait", - "futures", - "futures-timer", - "log", - "parity-scale-codec", - "sc-client-api", - "sc-consensus", - "sc-telemetry", - "sp-arithmetic", - "sp-blockchain", - "sp-consensus", - "sp-consensus-slots", - "sp-core", - "sp-inherents", - "sp-runtime", - "sp-state-machine", -] - -[[package]] -name = "sc-executor" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "parity-scale-codec", - "parking_lot 0.12.3", - "sc-executor-common", - "sc-executor-wasmtime", - "schnellru", - "sp-api", - "sp-core", - "sp-externalities", - "sp-io", - "sp-panic-handler", - "sp-runtime-interface", - "sp-trie", - "sp-version", - "sp-wasm-interface", - "tracing", -] - -[[package]] -name = "sc-executor-common" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "sc-allocator", - "sp-maybe-compressed-blob", - "sp-wasm-interface", - "thiserror 1.0.69", - "wasm-instrument", -] - -[[package]] -name = "sc-executor-wasmtime" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "anyhow", - "cfg-if", - "libc", - "log", - "rustix", - "sc-allocator", - "sc-executor-common", - "sp-runtime-interface", - "sp-wasm-interface", - "wasmtime", -] - 
-[[package]] -name = "sc-informant" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "anstyle", - "futures", - "futures-timer", - "log", - "sc-client-api", - "sc-network", - "sc-network-common", - "sp-blockchain", - "sp-runtime", -] - -[[package]] -name = "sc-keystore" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "array-bytes", - "parking_lot 0.12.3", - "serde_json", - "sp-application-crypto", - "sp-core", - "sp-keystore", - "thiserror 1.0.69", -] - -[[package]] -name = "sc-network" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "array-bytes", - "async-channel", - "async-trait", - "asynchronous-codec", - "bytes", - "either", - "fnv", - "futures", - "futures-timer", - "ip_network", - "libp2p", - "linked_hash_set", - "log", - "mockall", - "parity-scale-codec", - "parking_lot 0.12.3", - "partial_sort", - "pin-project", - "rand", - "sc-client-api", - "sc-network-common", - "sc-utils", - "serde", - "serde_json", - "smallvec", - "sp-arithmetic", - "sp-blockchain", - "sp-core", - "sp-runtime", - "substrate-prometheus-endpoint", - "thiserror 1.0.69", - "unsigned-varint", - "void", - "wasm-timer", - "zeroize", -] - -[[package]] -name = "sc-network-bitswap" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "async-channel", - "cid", - "futures", - "libp2p-identity", - "log", - "prost", - "prost-build", - "sc-client-api", - "sc-network", - "sp-blockchain", - "sp-runtime", - "thiserror 1.0.69", - "unsigned-varint", -] - -[[package]] -name = "sc-network-common" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "async-trait", - "bitflags 1.3.2", - "futures", - "libp2p-identity", - "parity-scale-codec", - "prost-build", - "sc-consensus", - "sp-consensus", - "sp-consensus-grandpa", - "sp-runtime", -] - -[[package]] -name = "sc-network-gossip" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "ahash", - "futures", - "futures-timer", - "libp2p-identity", - "log", - "multiaddr", - "sc-network", - "sc-network-common", - "schnellru", - "sp-runtime", - "substrate-prometheus-endpoint", - "tracing", -] - -[[package]] -name = "sc-network-light" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "array-bytes", - "async-channel", - "futures", - "libp2p-identity", - "log", - "parity-scale-codec", - "prost", - "prost-build", - "sc-client-api", - "sc-network", - "sp-blockchain", - "sp-core", - "sp-runtime", - "thiserror 1.0.69", -] - -[[package]] -name = "sc-network-sync" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "array-bytes", - "async-channel", - "async-trait", - "fork-tree", - "futures", - "futures-timer", - "libp2p", - "log", - "mockall", - "parity-scale-codec", - "prost", - "prost-build", - "sc-client-api", - "sc-consensus", - "sc-network", - "sc-network-common", - "sc-utils", - "schnellru", - "smallvec", - "sp-arithmetic", - "sp-blockchain", - "sp-consensus", - "sp-consensus-grandpa", - 
"sp-core", - "sp-runtime", - "substrate-prometheus-endpoint", - "thiserror 1.0.69", -] - -[[package]] -name = "sc-network-transactions" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "array-bytes", - "futures", - "libp2p", - "log", - "parity-scale-codec", - "sc-network", - "sc-network-common", - "sc-utils", - "sp-consensus", - "sp-runtime", - "substrate-prometheus-endpoint", -] - -[[package]] -name = "sc-offchain" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "bytes", - "fnv", - "futures", - "futures-timer", - "hyper 0.14.30", - "libp2p", - "log", - "num_cpus", - "once_cell", - "parity-scale-codec", - "parking_lot 0.12.3", - "rand", - "sc-client-api", - "sc-network", - "sc-transaction-pool-api", - "sc-utils", - "sp-api", - "sp-core", - "sp-externalities", - "sp-keystore", - "sp-offchain", - "sp-runtime", - "threadpool", - "tracing", -] - -[[package]] -name = "sc-proposer-metrics" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "log", - "substrate-prometheus-endpoint", -] - -[[package]] -name = "sc-rpc" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "futures", - "jsonrpsee", - "log", - "parity-scale-codec", - "parking_lot 0.12.3", - "sc-block-builder", - "sc-chain-spec", - "sc-client-api", - "sc-rpc-api", - "sc-tracing", - "sc-transaction-pool-api", - "sc-utils", - "serde_json", - "sp-api", - "sp-blockchain", - "sp-core", - "sp-keystore", - "sp-offchain", - "sp-rpc", - "sp-runtime", - "sp-session", - "sp-version", - "tokio", -] - -[[package]] -name = "sc-rpc-api" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "jsonrpsee", - "parity-scale-codec", - "sc-chain-spec", - "sc-transaction-pool-api", - "scale-info", - "serde", - "serde_json", - "sp-core", - "sp-rpc", - "sp-runtime", - "sp-version", - "thiserror 1.0.69", -] - -[[package]] -name = "sc-rpc-server" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "http 0.2.12", - "jsonrpsee", - "log", - "serde_json", - "substrate-prometheus-endpoint", - "tokio", - "tower 0.4.13", - "tower-http", -] - -[[package]] -name = "sc-rpc-spec-v2" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "array-bytes", - "futures", - "futures-util", - "hex", - "jsonrpsee", - "log", - "parity-scale-codec", - "parking_lot 0.12.3", - "sc-chain-spec", - "sc-client-api", - "sc-transaction-pool-api", - "serde", - "sp-api", - "sp-blockchain", - "sp-core", - "sp-runtime", - "sp-version", - "thiserror 1.0.69", - "tokio-stream", -] - -[[package]] -name = "sc-service" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "async-trait", - "directories", - "exit-future", - "futures", - "futures-timer", - "jsonrpsee", - "log", - "parity-scale-codec", - "parking_lot 0.12.3", - "pin-project", - "rand", - "sc-block-builder", - "sc-chain-spec", - "sc-client-api", - "sc-client-db", - "sc-consensus", - "sc-executor", - "sc-informant", - "sc-keystore", - "sc-network", - 
"sc-network-bitswap", - "sc-network-common", - "sc-network-light", - "sc-network-sync", - "sc-network-transactions", - "sc-rpc", - "sc-rpc-server", - "sc-rpc-spec-v2", - "sc-sysinfo", - "sc-telemetry", - "sc-tracing", - "sc-transaction-pool", - "sc-transaction-pool-api", - "sc-utils", - "serde", - "serde_json", - "sp-api", - "sp-blockchain", - "sp-consensus", - "sp-core", - "sp-externalities", - "sp-keystore", - "sp-runtime", - "sp-session", - "sp-state-machine", - "sp-storage", - "sp-transaction-pool", - "sp-trie", - "sp-version", - "static_init", - "substrate-prometheus-endpoint", - "tempfile", - "thiserror 1.0.69", - "tokio", - "tracing", - "tracing-futures", -] - -[[package]] -name = "sc-state-db" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "log", - "parity-scale-codec", - "parking_lot 0.12.3", - "sp-core", -] - -[[package]] -name = "sc-sysinfo" -version = "6.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "futures", - "libc", - "log", - "rand", - "rand_pcg", - "regex", - "sc-telemetry", - "serde", - "serde_json", - "sp-core", - "sp-io", - "sp-std", -] - -[[package]] -name = "sc-telemetry" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "chrono", - "futures", - "libp2p", - "log", - "parking_lot 0.12.3", - "pin-project", - "rand", - "sc-utils", - "serde", - "serde_json", - "thiserror 1.0.69", - "wasm-timer", -] - -[[package]] -name = "sc-tracing" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "anstyle", - "chrono", - "lazy_static", - "libc", - "log", - "parking_lot 0.12.3", - "regex", - "rustc-hash 1.1.0", - "sc-client-api", - "sc-tracing-proc-macro", - "serde", - "sp-api", - "sp-blockchain", - "sp-core", - "sp-rpc", - "sp-runtime", - "sp-tracing", - "thiserror 1.0.69", - "tracing", - "tracing-log", - "tracing-subscriber 0.2.25", -] - -[[package]] -name = "sc-tracing-proc-macro" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "proc-macro-crate 1.3.1", - "proc-macro2", - "quote", - "syn 2.0.94", -] - -[[package]] -name = "sc-transaction-pool" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "async-trait", - "futures", - "futures-timer", - "linked-hash-map", - "log", - "parity-scale-codec", - "parking_lot 0.12.3", - "sc-client-api", - "sc-transaction-pool-api", - "sc-utils", - "serde", - "sp-api", - "sp-blockchain", - "sp-core", - "sp-runtime", - "sp-tracing", - "sp-transaction-pool", - "substrate-prometheus-endpoint", - "thiserror 1.0.69", -] - -[[package]] -name = "sc-transaction-pool-api" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "async-trait", - "futures", - "log", - "parity-scale-codec", - "serde", - "sp-blockchain", - "sp-core", - "sp-runtime", - "thiserror 1.0.69", -] - -[[package]] -name = "sc-utils" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "async-channel", - "futures", - "futures-timer", - "lazy_static", - "log", - "parking_lot 0.12.3", - "prometheus", - 
"sp-arithmetic", -] - [[package]] name = "scale-info" version = "2.11.6" @@ -8193,12 +6272,6 @@ dependencies = [ "pest", ] -[[package]] -name = "send_wrapper" -version = "0.6.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "cd0b0ec5f1c1ca621c432a25813d8d60c88abe6d3e08a3eb9cf37d97a0fe3d73" - [[package]] name = "serai-abi" version = "0.1.0" @@ -8781,51 +6854,6 @@ dependencies = [ [[package]] name = "serai-node" version = "0.1.0" -dependencies = [ - "ciphersuite", - "clap", - "embedwards25519", - "frame-benchmarking", - "futures-util", - "hex", - "jsonrpsee", - "libp2p", - "log", - "pallet-transaction-payment-rpc", - "rand_core", - "sc-authority-discovery", - "sc-basic-authorship", - "sc-cli", - "sc-client-api", - "sc-consensus", - "sc-consensus-babe", - "sc-consensus-grandpa", - "sc-executor", - "sc-network", - "sc-network-common", - "sc-offchain", - "sc-rpc-api", - "sc-service", - "sc-telemetry", - "sc-transaction-pool", - "sc-transaction-pool-api", - "schnorrkel", - "secq256k1", - "serai-env", - "serai-runtime", - "sp-api", - "sp-block-builder", - "sp-blockchain", - "sp-consensus-babe", - "sp-core", - "sp-io", - "sp-keystore", - "sp-timestamp", - "substrate-build-script-utils", - "substrate-frame-rpc-system", - "tokio", - "zeroize", -] [[package]] name = "serai-orchestrator" @@ -9387,30 +7415,6 @@ dependencies = [ "time", ] -[[package]] -name = "sha-1" -version = "0.9.8" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "99cd6713db3cf16b6c84e06321e049a9b9f699826e16096d23bbcc44d15d51a6" -dependencies = [ - "block-buffer 0.9.0", - "cfg-if", - "cpufeatures", - "digest 0.9.0", - "opaque-debug", -] - -[[package]] -name = "sha1" -version = "0.10.6" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e3bf829a2d51ab4a5ddf1352d8470c140cadc8301b2ae1789db023f01cedd6ba" -dependencies = [ - "cfg-if", - "cpufeatures", - "digest 0.10.7", -] - [[package]] name = "sha2" version = "0.10.8" @@ -9518,12 +7522,6 @@ dependencies = [ "autocfg", ] -[[package]] -name = "slice-group-by" -version = "0.3.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "826167069c09b99d56f31e9ae5c99049e932a98c9dc2dac47645b08dbbf76ba7" - [[package]] name = "smallvec" version = "1.13.2" @@ -9576,37 +7574,6 @@ dependencies = [ "windows-sys 0.52.0", ] -[[package]] -name = "soketto" -version = "0.7.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "41d1c5305e39e09653383c2c7244f2f78b3bcae37cf50c64cb4789c9f5096ec2" -dependencies = [ - "base64 0.13.1", - "bytes", - "futures", - "http 0.2.12", - "httparse", - "log", - "rand", - "sha-1", -] - -[[package]] -name = "soketto" -version = "0.8.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2e859df029d160cb88608f5d7df7fb4753fd20fdfb4de5644f3d8b8440841721" -dependencies = [ - "base64 0.22.1", - "bytes", - "futures", - "httparse", - "log", - "rand", - "sha1", -] - [[package]] name = "sp-api" version = "4.0.0-dev" @@ -9692,38 +7659,6 @@ dependencies = [ "sp-std", ] -[[package]] -name = "sp-blockchain" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "futures", - "log", - "parity-scale-codec", - "parking_lot 0.12.3", - "schnellru", - "sp-api", - "sp-consensus", - "sp-database", - "sp-runtime", - "sp-state-machine", - "thiserror 1.0.69", -] - -[[package]] -name = "sp-consensus" -version = "0.10.0-dev" -source = 
"git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "async-trait", - "futures", - "log", - "sp-inherents", - "sp-runtime", - "sp-state-machine", - "thiserror 1.0.69", -] - [[package]] name = "sp-consensus-babe" version = "0.10.0-dev" @@ -9838,15 +7773,6 @@ dependencies = [ "syn 2.0.94", ] -[[package]] -name = "sp-database" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "kvdb", - "parking_lot 0.12.3", -] - [[package]] name = "sp-debug-derive" version = "8.0.0" @@ -9904,17 +7830,6 @@ dependencies = [ "tracing-core", ] -[[package]] -name = "sp-keyring" -version = "24.0.0" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "lazy_static", - "sp-core", - "sp-runtime", - "strum 0.25.0", -] - [[package]] name = "sp-keystore" version = "0.27.0" @@ -9967,16 +7882,6 @@ dependencies = [ "regex", ] -[[package]] -name = "sp-rpc" -version = "6.0.0" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "rustc-hash 1.1.0", - "serde", - "sp-core", -] - [[package]] name = "sp-runtime" version = "24.0.0" @@ -10265,34 +8170,6 @@ version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f" -[[package]] -name = "static_init" -version = "1.0.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8a2a1c578e98c1c16fc3b8ec1328f7659a500737d7a0c6d625e73e830ff9c1f6" -dependencies = [ - "bitflags 1.3.2", - "cfg_aliases 0.1.1", - "libc", - "parking_lot 0.11.2", - "parking_lot_core 0.8.6", - "static_init_macro", - "winapi", -] - -[[package]] -name = "static_init_macro" -version = "1.0.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1389c88ddd739ec6d3f8f83343764a0e944cd23cfbf126a9796a714b0b6edd6f" -dependencies = [ - "cfg_aliases 0.1.1", - "memchr", - "proc-macro2", - "quote", - "syn 1.0.109", -] - [[package]] name = "std-shims" version = "0.1.1" @@ -10301,25 +8178,6 @@ dependencies = [ "spin 0.9.8", ] -[[package]] -name = "strobe-rs" -version = "0.8.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "fabb238a1cccccfa4c4fb703670c0d157e1256c1ba695abf1b93bd2bb14bab2d" -dependencies = [ - "bitflags 1.3.2", - "byteorder", - "keccak", - "subtle", - "zeroize", -] - -[[package]] -name = "strsim" -version = "0.11.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f" - [[package]] name = "strum" version = "0.24.1" @@ -10395,42 +8253,6 @@ dependencies = [ "zeroize", ] -[[package]] -name = "substrate-build-script-utils" -version = "3.0.0" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" - -[[package]] -name = "substrate-frame-rpc-system" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "frame-system-rpc-runtime-api", - "futures", - "jsonrpsee", - "log", - "parity-scale-codec", - "sc-rpc-api", - "sc-transaction-pool-api", - "sp-api", - "sp-block-builder", - "sp-blockchain", - "sp-core", - "sp-runtime", -] - -[[package]] -name = "substrate-prometheus-endpoint" -version = "0.10.0-dev" -source = 
"git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" -dependencies = [ - "hyper 0.14.30", - "log", - "prometheus", - "thiserror 1.0.69", - "tokio", -] - [[package]] name = "substrate-wasm-builder" version = "5.0.0-dev" @@ -10444,7 +8266,7 @@ dependencies = [ "sp-maybe-compressed-blob", "strum 0.25.0", "tempfile", - "toml 0.7.8", + "toml", "walkdir", "wasm-opt", ] @@ -10578,12 +8400,6 @@ dependencies = [ "winapi-util", ] -[[package]] -name = "termtree" -version = "0.5.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8f50febec83f5ee1df3015341d8bd429f2d1cc62bcba7ea2076759d315084683" - [[package]] name = "thiserror" version = "1.0.69" @@ -10776,21 +8592,11 @@ checksum = "d7fcaa8d55a2bdd6b83ace262b016eca0d79ee02818c5c1bcdf0305114081078" dependencies = [ "bytes", "futures-core", - "futures-io", "futures-sink", "pin-project-lite", "tokio", ] -[[package]] -name = "toml" -version = "0.5.11" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f4f7f0dd8d50a853a531c426359045b1998f04219d88799810762cd4ad314234" -dependencies = [ - "serde", -] - [[package]] name = "toml" version = "0.7.8" @@ -10836,17 +8642,6 @@ dependencies = [ "winnow 0.6.21", ] -[[package]] -name = "tower" -version = "0.4.13" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b8fa9be0de6cf49e536ce1851f987bd21a43b771b09473c3549a6c853db37c1c" -dependencies = [ - "tower-layer", - "tower-service", - "tracing", -] - [[package]] name = "tower" version = "0.5.2" @@ -10861,24 +8656,6 @@ dependencies = [ "tower-service", ] -[[package]] -name = "tower-http" -version = "0.4.4" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "61c5bb1d698276a2443e5ecfabc1008bf15a36c12e6a7176e7bf089ea9131140" -dependencies = [ - "bitflags 2.6.0", - "bytes", - "futures-core", - "futures-util", - "http 0.2.12", - "http-body 0.4.6", - "http-range-header", - "pin-project-lite", - "tower-layer", - "tower-service", -] - [[package]] name = "tower-layer" version = "0.3.3" @@ -10897,7 +8674,6 @@ version = "0.1.40" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c3523ab5a71916ccf420eebdf5521fcef02141234bbc0b8a49f2fdc4544364ef" dependencies = [ - "log", "pin-project-lite", "tracing-attributes", "tracing-core", @@ -10924,16 +8700,6 @@ dependencies = [ "valuable", ] -[[package]] -name = "tracing-futures" -version = "0.2.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "97d095ae15e245a057c8e8451bab9b3ee1e1f68e9ba2b4fbc18d0ac5237835f2" -dependencies = [ - "pin-project", - "tracing", -] - [[package]] name = "tracing-log" version = "0.1.4" @@ -10965,7 +8731,6 @@ dependencies = [ "chrono", "lazy_static", "matchers 0.0.1", - "parking_lot 0.11.2", "regex", "serde", "serde_json", @@ -11224,8 +8989,6 @@ checksum = "6889a77d49f1f013504cec6bf97a2c730394adedaeb1deb5ea08949a50541105" dependencies = [ "asynchronous-codec", "bytes", - "futures-io", - "futures-util", ] [[package]] @@ -11257,12 +9020,6 @@ version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be" -[[package]] -name = "utf8parse" -version = "0.2.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821" - [[package]] name = "uuid" version = "1.11.0" @@ -11352,19 +9109,6 @@ dependencies = [ "wasm-bindgen-shared", ] -[[package]] -name 
= "wasm-bindgen-futures" -version = "0.4.49" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "38176d9b44ea84e9184eff0bc34cc167ed044f816accfe5922e54d84cf48eca2" -dependencies = [ - "cfg-if", - "js-sys", - "once_cell", - "wasm-bindgen", - "web-sys", -] - [[package]] name = "wasm-bindgen-macro" version = "0.2.99" @@ -11403,15 +9147,6 @@ dependencies = [ "leb128", ] -[[package]] -name = "wasm-instrument" -version = "0.4.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2a47ecb37b9734d1085eaa5ae1a81e60801fd8c28d4cabdd8aedb982021918bc" -dependencies = [ - "parity-wasm", -] - [[package]] name = "wasm-opt" version = "0.114.2" @@ -11452,21 +9187,6 @@ dependencies = [ "cxx-build", ] -[[package]] -name = "wasm-timer" -version = "0.2.5" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "be0ecb0db480561e9a7642b5d3e4187c128914e58aa84330b9493e3eb68c5e7f" -dependencies = [ - "futures", - "js-sys", - "parking_lot 0.11.2", - "pin-utils", - "wasm-bindgen", - "wasm-bindgen-futures", - "web-sys", -] - [[package]] name = "wasmparser" version = "0.110.0" @@ -11495,14 +9215,11 @@ dependencies = [ "once_cell", "paste", "psm", - "rayon", "serde", "serde_json", "target-lexicon", "wasm-encoder", "wasmparser", - "wasmtime-cache", - "wasmtime-cranelift", "wasmtime-environ", "wasmtime-jit", "wasmtime-runtime", @@ -11518,66 +9235,6 @@ dependencies = [ "cfg-if", ] -[[package]] -name = "wasmtime-cache" -version = "12.0.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "31561fbbaa86d3c042696940bc9601146bf4aaec39ae725c86b5f1358d8d7023" -dependencies = [ - "anyhow", - "base64 0.21.7", - "bincode", - "directories-next", - "file-per-thread-logger", - "log", - "rustix", - "serde", - "sha2", - "toml 0.5.11", - "windows-sys 0.48.0", - "zstd 0.11.2+zstd.1.5.2", -] - -[[package]] -name = "wasmtime-cranelift" -version = "12.0.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8ae8ed7a4845f22be6b1ad80f33f43fa03445b03a02f2d40dca695129769cd1a" -dependencies = [ - "anyhow", - "cranelift-codegen", - "cranelift-control", - "cranelift-entity", - "cranelift-frontend", - "cranelift-native", - "cranelift-wasm", - "gimli 0.27.3", - "log", - "object 0.31.1", - "target-lexicon", - "thiserror 1.0.69", - "wasmparser", - "wasmtime-cranelift-shared", - "wasmtime-environ", - "wasmtime-versioned-export-macros", -] - -[[package]] -name = "wasmtime-cranelift-shared" -version = "12.0.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "86b17099f9320a1c481634d88101258917d5065717cf22b04ed75b1a8ea062b4" -dependencies = [ - "anyhow", - "cranelift-codegen", - "cranelift-control", - "cranelift-native", - "gimli 0.27.3", - "object 0.31.1", - "target-lexicon", - "wasmtime-environ", -] - [[package]] name = "wasmtime-environ" version = "12.0.2" @@ -11616,7 +9273,6 @@ dependencies = [ "serde", "target-lexicon", "wasmtime-environ", - "wasmtime-jit-debug", "wasmtime-jit-icache-coherence", "wasmtime-runtime", "windows-sys 0.48.0", @@ -11628,9 +9284,7 @@ version = "12.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "aef27ea6c34ef888030d15560037fe7ef27a5609fbbba8e1e3e41dc4245f5bb2" dependencies = [ - "object 0.31.1", "once_cell", - "rustix", "wasmtime-versioned-export-macros", ] @@ -11719,24 +9373,6 @@ dependencies = [ "wasm-bindgen", ] -[[package]] -name = "webpki-roots" -version = "0.25.4" -source = "registry+https://github.com/rust-lang/crates.io-index" 
-checksum = "5f20c57d8d7db6d3b86154206ae5d8fba62dd39573114de97c2cb0578251f8e1" - -[[package]] -name = "which" -version = "4.4.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "87ba24419a2078cd2b0f2ede2691b6c66d8e47836da3b6db8265ebad47afbfc7" -dependencies = [ - "either", - "home", - "once_cell", - "rustix", -] - [[package]] name = "wide" version = "0.7.30" diff --git a/substrate/node/Cargo.toml b/substrate/node/Cargo.toml index 5e6cd3f1..cfd8ebbe 100644 --- a/substrate/node/Cargo.toml +++ b/substrate/node/Cargo.toml @@ -20,71 +20,71 @@ workspace = true name = "serai-node" [dependencies] -rand_core = "0.6" -zeroize = "1" -hex = "0.4" -log = "0.4" +#rand_core = "0.6" +#zeroize = "1" +#hex = "0.4" +#log = "0.4" -schnorrkel = "0.11" +#schnorrkel = "0.11" -ciphersuite = { path = "../../crypto/ciphersuite" } -embedwards25519 = { path = "../../crypto/evrf/embedwards25519" } -secq256k1 = { path = "../../crypto/evrf/secq256k1" } +#ciphersuite = { path = "../../crypto/ciphersuite" } +#embedwards25519 = { path = "../../crypto/evrf/embedwards25519" } +#secq256k1 = { path = "../../crypto/evrf/secq256k1" } -libp2p = "0.52" +#libp2p = "0.52" -sp-core = { git = "https://github.com/serai-dex/substrate" } -sp-keystore = { git = "https://github.com/serai-dex/substrate" } -sp-timestamp = { git = "https://github.com/serai-dex/substrate" } -sp-io = { git = "https://github.com/serai-dex/substrate" } -sp-blockchain = { git = "https://github.com/serai-dex/substrate" } -sp-api = { git = "https://github.com/serai-dex/substrate" } -sp-block-builder = { git = "https://github.com/serai-dex/substrate" } -sp-consensus-babe = { git = "https://github.com/serai-dex/substrate" } +#sp-core = { git = "https://github.com/serai-dex/substrate" } +#sp-keystore = { git = "https://github.com/serai-dex/substrate" } +#sp-timestamp = { git = "https://github.com/serai-dex/substrate" } +#sp-io = { git = "https://github.com/serai-dex/substrate" } +#sp-blockchain = { git = "https://github.com/serai-dex/substrate" } +#sp-api = { git = "https://github.com/serai-dex/substrate" } +#sp-block-builder = { git = "https://github.com/serai-dex/substrate" } +#sp-consensus-babe = { git = "https://github.com/serai-dex/substrate" } -frame-benchmarking = { git = "https://github.com/serai-dex/substrate" } +#frame-benchmarking = { git = "https://github.com/serai-dex/substrate" } -serai-runtime = { path = "../runtime", features = ["std"] } +#serai-runtime = { path = "../runtime", features = ["std"] } -clap = { version = "4", features = ["derive"] } +#clap = { version = "4", features = ["derive"] } -futures-util = "0.3" -tokio = { version = "1", features = ["sync", "rt-multi-thread"] } -jsonrpsee = { version = "0.16", features = ["server"] } +#futures-util = "0.3" +#tokio = { version = "1", features = ["sync", "rt-multi-thread"] } +#jsonrpsee = { version = "0.16", features = ["server"] } -sc-offchain = { git = "https://github.com/serai-dex/substrate" } -sc-transaction-pool = { git = "https://github.com/serai-dex/substrate" } -sc-transaction-pool-api = { git = "https://github.com/serai-dex/substrate" } -sc-basic-authorship = { git = "https://github.com/serai-dex/substrate" } -sc-executor = { git = "https://github.com/serai-dex/substrate" } -sc-service = { git = "https://github.com/serai-dex/substrate" } -sc-client-api = { git = "https://github.com/serai-dex/substrate" } -sc-network-common = { git = "https://github.com/serai-dex/substrate" } -sc-network = { git = "https://github.com/serai-dex/substrate" } +#sc-offchain = { git = 
"https://github.com/serai-dex/substrate" } +#sc-transaction-pool = { git = "https://github.com/serai-dex/substrate" } +#sc-transaction-pool-api = { git = "https://github.com/serai-dex/substrate" } +#sc-basic-authorship = { git = "https://github.com/serai-dex/substrate" } +#sc-executor = { git = "https://github.com/serai-dex/substrate" } +#sc-service = { git = "https://github.com/serai-dex/substrate" } +#sc-client-api = { git = "https://github.com/serai-dex/substrate" } +#sc-network-common = { git = "https://github.com/serai-dex/substrate" } +#sc-network = { git = "https://github.com/serai-dex/substrate" } -sc-consensus = { git = "https://github.com/serai-dex/substrate" } -sc-consensus-babe = { git = "https://github.com/serai-dex/substrate" } -sc-consensus-grandpa = { git = "https://github.com/serai-dex/substrate" } -sc-authority-discovery = { git = "https://github.com/serai-dex/substrate" } +#sc-consensus = { git = "https://github.com/serai-dex/substrate" } +#sc-consensus-babe = { git = "https://github.com/serai-dex/substrate" } +#sc-consensus-grandpa = { git = "https://github.com/serai-dex/substrate" } +#sc-authority-discovery = { git = "https://github.com/serai-dex/substrate" } -sc-telemetry = { git = "https://github.com/serai-dex/substrate" } -sc-cli = { git = "https://github.com/serai-dex/substrate" } +#sc-telemetry = { git = "https://github.com/serai-dex/substrate" } +#sc-cli = { git = "https://github.com/serai-dex/substrate" } -sc-rpc-api = { git = "https://github.com/serai-dex/substrate" } +#sc-rpc-api = { git = "https://github.com/serai-dex/substrate" } -substrate-frame-rpc-system = { git = "https://github.com/serai-dex/substrate" } -pallet-transaction-payment-rpc = { git = "https://github.com/serai-dex/substrate" } +#substrate-frame-rpc-system = { git = "https://github.com/serai-dex/substrate" } +#pallet-transaction-payment-rpc = { git = "https://github.com/serai-dex/substrate" } -serai-env = { path = "../../common/env" } +#serai-env = { path = "../../common/env" } [build-dependencies] -substrate-build-script-utils = { git = "https://github.com/serai-dex/substrate" } +#substrate-build-script-utils = { git = "https://github.com/serai-dex/substrate" } [features] -default = [] -fast-epoch = ["serai-runtime/fast-epoch"] -runtime-benchmarks = [ - "frame-benchmarking/runtime-benchmarks", +#default = [] +#fast-epoch = ["serai-runtime/fast-epoch"] +#runtime-benchmarks = [ +# "frame-benchmarking/runtime-benchmarks", - "serai-runtime/runtime-benchmarks", -] +# "serai-runtime/runtime-benchmarks", +#] From 2a19e9da9320fb3bdebb2f55c18c191417f56b4d Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 17 Jan 2025 04:50:15 -0500 Subject: [PATCH 319/368] Update to libp2p 0.54 This is the same libp2p Substrate uses as of https://github.com/paritytech/polkadot-sdk/pull/6248. 
--- Cargo.lock | 524 ++++++++++----------- coordinator/p2p/libp2p/Cargo.toml | 2 +- coordinator/p2p/libp2p/src/authenticate.rs | 23 +- coordinator/p2p/libp2p/src/lib.rs | 28 +- coordinator/p2p/libp2p/src/reqres.rs | 5 +- coordinator/p2p/libp2p/src/swarm.rs | 57 +-- 6 files changed, 292 insertions(+), 347 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 9f5ea20f..c8d00271 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -512,7 +512,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d17722a198f33bbd25337660787aea8b8f57814febb7c746bc30407bdfc39448" dependencies = [ "alloy-json-rpc", - "base64 0.22.1", + "base64", "futures-util", "futures-utils-wasm", "serde", @@ -743,9 +743,9 @@ dependencies = [ [[package]] name = "asn1-rs" -version = "0.5.2" +version = "0.6.2" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7f6fd5ddaf0351dff5b8da21b2fb4ff8e08ddd02857f0bf69c47639106c0fff0" +checksum = "5493c3bedbacf7fd7382c6346bbd66687d12bbaad3a89a2d2c303ee6cf20b048" dependencies = [ "asn1-rs-derive", "asn1-rs-impl", @@ -759,25 +759,25 @@ dependencies = [ [[package]] name = "asn1-rs-derive" -version = "0.4.0" +version = "0.5.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "726535892e8eae7e70657b4c8ea93d26b8553afb1ce617caee529ef96d7dee6c" +checksum = "965c2d33e53cb6b267e148a4cb0760bc01f4904c1cd4bb4002a085bb016d1490" dependencies = [ "proc-macro2", "quote", - "syn 1.0.109", + "syn 2.0.94", "synstructure", ] [[package]] name = "asn1-rs-impl" -version = "0.1.0" +version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2777730b2039ac0f95f093556e61b6d26cebed5393ca6f152717777cec3a42ed" +checksum = "7b18050c2cd6fe86c3a76584ef5e0baf286d038cda203eb6223df2cc413565f7" dependencies = [ "proc-macro2", "quote", - "syn 1.0.109", + "syn 2.0.94", ] [[package]] @@ -845,9 +845,9 @@ dependencies = [ [[package]] name = "asynchronous-codec" -version = "0.6.2" +version = "0.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4057f2c32adbb2fc158e22fb38433c8e9bbf76b75a4732c7c0cbaf695fb65568" +checksum = "a860072022177f903e59730004fb5dc13db9275b79bb2aef7ba8ce831956c233" dependencies = [ "bytes", "futures-sink", @@ -921,18 +921,6 @@ dependencies = [ "bitcoin_hashes", ] -[[package]] -name = "base64" -version = "0.13.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9e1b586273c5702936fe7b7d6896644d8be71e6314cfe09d3167c95f712589e8" - -[[package]] -name = "base64" -version = "0.21.7" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9d297deb1925b89f2ccc13d7635fa0714f12c87adce1c75356b39ca9b7178567" - [[package]] name = "base64" version = "0.22.1" @@ -1162,7 +1150,7 @@ version = "0.17.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d41711ad46fda47cd701f6908e59d1bd6b9a2b7464c0d0aeab95c6d37096ff8a" dependencies = [ - "base64 0.22.1", + "base64", "bollard-stubs", "bytes", "futures-core", @@ -1804,9 +1792,9 @@ dependencies = [ [[package]] name = "der-parser" -version = "8.2.0" +version = "9.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dbd676fbbab537128ef0278adb5576cf363cff6aa22a7b24effe97347cfab61e" +checksum = "5cd0a5c643689626bec213c4d8bd4d96acc8ffdb4ad4bb6bc16abf27d5f4b553" dependencies = [ "asn1-rs", "displaydoc", @@ -1997,7 +1985,7 @@ checksum = "b8648c989dfd460932144f0ce5c55ff35cf0de758f89ea20e3b3d0d3f5e1cce6" dependencies = [ "anyhow", 
"async-trait", - "base64 0.22.1", + "base64", "bollard", "bytes", "dyn-clone", @@ -2165,18 +2153,6 @@ dependencies = [ "zeroize", ] -[[package]] -name = "enum-as-inner" -version = "0.5.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c9720bba047d567ffc8a3cba48bf19126600e249ab7f128e9233e6376976a116" -dependencies = [ - "heck 0.4.1", - "proc-macro2", - "quote", - "syn 1.0.109", -] - [[package]] name = "enum-as-inner" version = "0.6.1" @@ -2624,6 +2600,16 @@ dependencies = [ "futures-util", ] +[[package]] +name = "futures-bounded" +version = "0.2.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "91f328e7fb845fc832912fb6a34f40cf6d1888c92f974d1893a54e97b5ff542e" +dependencies = [ + "futures-timer", + "futures-util", +] + [[package]] name = "futures-channel" version = "0.3.31" @@ -2684,12 +2670,13 @@ dependencies = [ [[package]] name = "futures-rustls" -version = "0.24.0" +version = "0.26.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "35bd3cf68c183738046838e300353e4716c674dc5e56890de4826801a6622a28" +checksum = "a8f2f12607f92c69b12ed746fabf9ca4f5c482cba46679c1a75b874ed7c26adb" dependencies = [ "futures-io", - "rustls 0.21.12", + "rustls", + "rustls-pki-types", ] [[package]] @@ -2837,8 +2824,10 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c4567c8db10ae91089c99af84c68c38da3ec2f087c3f82960bcdbf3656b6f4d7" dependencies = [ "cfg-if", + "js-sys", "libc", "wasi", + "wasm-bindgen", ] [[package]] @@ -3020,6 +3009,52 @@ version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3011d1213f159867b13cfd6ac92d2cd5f1345762c63be3554e84092d85a50bbd" +[[package]] +name = "hickory-proto" +version = "0.24.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "447afdcdb8afb9d0a852af6dc65d9b285ce720ed7a59e42a8bf2e931c67bc1b5" +dependencies = [ + "async-trait", + "cfg-if", + "data-encoding", + "enum-as-inner", + "futures-channel", + "futures-io", + "futures-util", + "idna", + "ipnet", + "once_cell", + "rand", + "socket2", + "thiserror 1.0.69", + "tinyvec", + "tokio", + "tracing", + "url", +] + +[[package]] +name = "hickory-resolver" +version = "0.24.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0a2e2aba9c389ce5267d31cf1e4dace82390ae276b0b364ea55630b1fa1b44b4" +dependencies = [ + "cfg-if", + "futures-util", + "hickory-proto", + "ipconfig", + "lru-cache", + "once_cell", + "parking_lot 0.12.3", + "rand", + "resolv-conf", + "smallvec", + "thiserror 1.0.69", + "tokio", + "tracing", +] + [[package]] name = "hkdf" version = "0.12.4" @@ -3149,7 +3184,7 @@ dependencies = [ "httpdate", "itoa", "pin-project-lite", - "socket2 0.5.8", + "socket2", "tokio", "tower-service", "tracing", @@ -3201,7 +3236,7 @@ dependencies = [ "http 1.2.0", "hyper 1.4.1", "hyper-util", - "rustls 0.23.20", + "rustls", "rustls-native-certs", "rustls-pki-types", "tokio", @@ -3222,7 +3257,7 @@ dependencies = [ "http-body 1.0.1", "hyper 1.4.1", "pin-project-lite", - "socket2 0.5.8", + "socket2", "tokio", "tower-service", "tracing", @@ -3266,27 +3301,6 @@ dependencies = [ "cc", ] -[[package]] -name = "idna" -version = "0.2.3" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "418a0a6fab821475f634efe3ccc45c013f742efe03d853e8d3355d5cb850ecf8" -dependencies = [ - "matches", - "unicode-bidi", - "unicode-normalization", -] - -[[package]] -name = "idna" -version = "0.4.0" -source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "7d20d6b07bfbc108882d88ed8e37d39636dcc260e15e30c45e6ba089610b917c" -dependencies = [ - "unicode-bidi", - "unicode-normalization", -] - [[package]] name = "idna" version = "1.0.3" @@ -3455,7 +3469,7 @@ version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b58db92f96b720de98181bbbe63c831e87005ab460c1bf306eb2622b4707997f" dependencies = [ - "socket2 0.5.8", + "socket2", "widestring", "windows-sys 0.48.0", "winreg", @@ -3591,16 +3605,15 @@ checksum = "8355be11b20d696c8f18f6cc018c4e372165b1fa8126cef092399c9951984ffa" [[package]] name = "libp2p" -version = "0.52.4" +version = "0.54.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e94495eb319a85b70a68b85e2389a95bb3555c71c49025b78c691a854a7e6464" +checksum = "bbbe80f9c7e00526cd6b838075b9c171919404a4732cb2fa8ece0a093223bfc4" dependencies = [ "bytes", "either", "futures", "futures-timer", "getrandom", - "instant", "libp2p-allow-block-list", "libp2p-connection-limits", "libp2p-core", @@ -3625,9 +3638,9 @@ dependencies = [ [[package]] name = "libp2p-allow-block-list" -version = "0.2.0" +version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "55b46558c5c0bf99d3e2a1a38fd54ff5476ca66dd1737b12466a1824dd219311" +checksum = "d1027ccf8d70320ed77e984f273bc8ce952f623762cb9bf2d126df73caef8041" dependencies = [ "libp2p-core", "libp2p-identity", @@ -3637,9 +3650,9 @@ dependencies = [ [[package]] name = "libp2p-connection-limits" -version = "0.2.1" +version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2f5107ad45cb20b2f6c3628c7b6014b996fcb13a88053f4569c872c6e30abf58" +checksum = "8d003540ee8baef0d254f7b6bfd79bac3ddf774662ca0abf69186d517ef82ad8" dependencies = [ "libp2p-core", "libp2p-identity", @@ -3649,17 +3662,15 @@ dependencies = [ [[package]] name = "libp2p-core" -version = "0.40.1" +version = "0.42.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "dd44289ab25e4c9230d9246c475a22241e301b23e8f4061d3bdef304a1a99713" +checksum = "a61f26c83ed111104cd820fe9bc3aaabbac5f1652a1d213ed6e900b7918a1298" dependencies = [ "either", "fnv", "futures", "futures-timer", - "instant", "libp2p-identity", - "log", "multiaddr", "multihash", "multistream-select", @@ -3671,34 +3682,36 @@ dependencies = [ "rw-stream-sink", "smallvec", "thiserror 1.0.69", - "unsigned-varint", + "tracing", + "unsigned-varint 0.8.0", "void", + "web-time", ] [[package]] name = "libp2p-dns" -version = "0.40.1" +version = "0.42.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e6a18db73084b4da2871438f6239fef35190b05023de7656e877c18a00541a3b" +checksum = "97f37f30d5c7275db282ecd86e54f29dd2176bd3ac656f06abf43bedb21eb8bd" dependencies = [ "async-trait", "futures", + "hickory-resolver", "libp2p-core", "libp2p-identity", - "log", "parking_lot 0.12.3", "smallvec", - "trust-dns-resolver", + "tracing", ] [[package]] name = "libp2p-gossipsub" -version = "0.45.2" +version = "0.47.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f1f9624e2a843b655f1c1b8262b8d5de6f309413fca4d66f01bb0662429f84dc" +checksum = "b4e830fdf24ac8c444c12415903174d506e1e077fbe3875c404a78c5935a8543" dependencies = [ "asynchronous-codec", - "base64 0.21.7", + "base64", "byteorder", "bytes", "either", @@ -3707,11 +3720,9 @@ dependencies = [ "futures-ticker", "getrandom", "hex_fmt", - "instant", "libp2p-core", "libp2p-identity", 
"libp2p-swarm", - "log", "prometheus-client", "quick-protobuf", "quick-protobuf-codec", @@ -3719,8 +3730,9 @@ dependencies = [ "regex", "sha2", "smallvec", - "unsigned-varint", + "tracing", "void", + "web-time", ] [[package]] @@ -3743,53 +3755,54 @@ dependencies = [ [[package]] name = "libp2p-mdns" -version = "0.44.0" +version = "0.46.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "42a2567c305232f5ef54185e9604579a894fd0674819402bb0ac0246da82f52a" +checksum = "14b8546b6644032565eb29046b42744aee1e9f261ed99671b2c93fb140dba417" dependencies = [ "data-encoding", "futures", + "hickory-proto", "if-watch", "libp2p-core", "libp2p-identity", "libp2p-swarm", - "log", "rand", "smallvec", - "socket2 0.5.8", + "socket2", "tokio", - "trust-dns-proto 0.22.0", + "tracing", "void", ] [[package]] name = "libp2p-metrics" -version = "0.13.1" +version = "0.15.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "239ba7d28f8d0b5d77760dc6619c05c7e88e74ec8fbbe97f856f20a56745e620" +checksum = "77ebafa94a717c8442d8db8d3ae5d1c6a15e30f2d347e0cd31d057ca72e42566" dependencies = [ - "instant", + "futures", "libp2p-core", "libp2p-gossipsub", "libp2p-identity", "libp2p-ping", "libp2p-swarm", - "once_cell", + "pin-project", "prometheus-client", + "web-time", ] [[package]] name = "libp2p-noise" -version = "0.43.2" +version = "0.45.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d2eeec39ad3ad0677551907dd304b2f13f17208ccebe333bef194076cd2e8921" +checksum = "36b137cb1ae86ee39f8e5d6245a296518912014eaa87427d24e6ff58cfc1b28c" dependencies = [ + "asynchronous-codec", "bytes", "curve25519-dalek", "futures", "libp2p-core", "libp2p-identity", - "log", "multiaddr", "multihash", "once_cell", @@ -3799,33 +3812,34 @@ dependencies = [ "snow", "static_assertions", "thiserror 1.0.69", + "tracing", "x25519-dalek", "zeroize", ] [[package]] name = "libp2p-ping" -version = "0.43.1" +version = "0.45.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "e702d75cd0827dfa15f8fd92d15b9932abe38d10d21f47c50438c71dd1b5dae3" +checksum = "005a34420359223b974ee344457095f027e51346e992d1e0dcd35173f4cdd422" dependencies = [ "either", "futures", "futures-timer", - "instant", "libp2p-core", "libp2p-identity", "libp2p-swarm", - "log", "rand", + "tracing", "void", + "web-time", ] [[package]] name = "libp2p-quic" -version = "0.9.3" +version = "0.11.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "130d451d83f21b81eb7b35b360bc7972aeafb15177784adc56528db082e6b927" +checksum = "46352ac5cd040c70e88e7ff8257a2ae2f891a4076abad2c439584a31c15fd24e" dependencies = [ "bytes", "futures", @@ -3834,66 +3848,68 @@ dependencies = [ "libp2p-core", "libp2p-identity", "libp2p-tls", - "log", "parking_lot 0.12.3", "quinn", "rand", - "ring 0.16.20", - "rustls 0.21.12", - "socket2 0.5.8", + "ring 0.17.8", + "rustls", + "socket2", "thiserror 1.0.69", "tokio", + "tracing", ] [[package]] name = "libp2p-request-response" -version = "0.25.3" +version = "0.27.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d8e3b4d67870478db72bac87bfc260ee6641d0734e0e3e275798f089c3fecfd4" +checksum = "1356c9e376a94a75ae830c42cdaea3d4fe1290ba409a22c809033d1b7dcab0a6" dependencies = [ "async-trait", "futures", - "instant", + "futures-bounded", + "futures-timer", "libp2p-core", "libp2p-identity", "libp2p-swarm", - "log", "rand", "smallvec", + "tracing", "void", + "web-time", ] [[package]] name = "libp2p-swarm" -version = "0.43.7" 
+version = "0.45.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "580189e0074af847df90e75ef54f3f30059aedda37ea5a1659e8b9fca05c0141" +checksum = "d7dd6741793d2c1fb2088f67f82cf07261f25272ebe3c0b0c311e0c6b50e851a" dependencies = [ "either", "fnv", "futures", "futures-timer", - "instant", "libp2p-core", "libp2p-identity", "libp2p-swarm-derive", - "log", + "lru", "multistream-select", "once_cell", "rand", "smallvec", "tokio", + "tracing", "void", + "web-time", ] [[package]] name = "libp2p-swarm-derive" -version = "0.33.0" +version = "0.35.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c4d5ec2a3df00c7836d7696c136274c9c59705bac69133253696a6c932cd1d74" +checksum = "206e0aa0ebe004d778d79fb0966aa0de996c19894e2c0605ba2f8524dd4443d8" dependencies = [ - "heck 0.4.1", - "proc-macro-warning", + "heck 0.5.0", "proc-macro2", "quote", "syn 2.0.94", @@ -3901,9 +3917,9 @@ dependencies = [ [[package]] name = "libp2p-tcp" -version = "0.40.1" +version = "0.42.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b558dd40d1bcd1aaaed9de898e9ec6a436019ecc2420dd0016e712fbb61c5508" +checksum = "ad964f312c59dcfcac840acd8c555de8403e295d39edf96f5240048b5fcaa314" dependencies = [ "futures", "futures-timer", @@ -3911,24 +3927,24 @@ dependencies = [ "libc", "libp2p-core", "libp2p-identity", - "log", - "socket2 0.5.8", + "socket2", "tokio", + "tracing", ] [[package]] name = "libp2p-tls" -version = "0.2.1" +version = "0.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8218d1d5482b122ccae396bbf38abdcb283ecc96fa54760e1dfd251f0546ac61" +checksum = "47b23dddc2b9c355f73c1e36eb0c3ae86f7dc964a3715f0731cfad352db4d847" dependencies = [ "futures", "futures-rustls", "libp2p-core", "libp2p-identity", "rcgen", - "ring 0.16.20", - "rustls 0.21.12", + "ring 0.17.8", + "rustls", "rustls-webpki 0.101.7", "thiserror 1.0.69", "x509-parser", @@ -3937,31 +3953,33 @@ dependencies = [ [[package]] name = "libp2p-upnp" -version = "0.1.1" +version = "0.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "82775a47b34f10f787ad3e2a22e2c1541e6ebef4fe9f28f3ac553921554c94c1" +checksum = "01bf2d1b772bd3abca049214a3304615e6a36fa6ffc742bdd1ba774486200b8f" dependencies = [ "futures", "futures-timer", "igd-next", "libp2p-core", "libp2p-swarm", - "log", "tokio", + "tracing", "void", ] [[package]] name = "libp2p-yamux" -version = "0.44.1" +version = "0.46.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8eedcb62824c4300efb9cfd4e2a6edaf3ca097b9e68b36dabe45a44469fd6a85" +checksum = "788b61c80789dba9760d8c669a5bedb642c8267555c803fabd8396e4ca5c5882" dependencies = [ + "either", "futures", "libp2p-core", - "log", "thiserror 1.0.69", - "yamux", + "tracing", + "yamux 0.12.1", + "yamux 0.13.4", ] [[package]] @@ -4570,7 +4588,7 @@ dependencies = [ "percent-encoding", "serde", "static_assertions", - "unsigned-varint", + "unsigned-varint 0.7.2", "url", ] @@ -4606,7 +4624,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "076d548d76a0e2a0d4ab471d0b1c36c577786dfc4471242035d97a12a735c492" dependencies = [ "core2", - "unsigned-varint", + "unsigned-varint 0.7.2", ] [[package]] @@ -4620,7 +4638,7 @@ dependencies = [ "log", "pin-project", "smallvec", - "unsigned-varint", + "unsigned-varint 0.7.2", ] [[package]] @@ -4869,9 +4887,9 @@ dependencies = [ [[package]] name = "oid-registry" -version = "0.6.1" +version = "0.7.1" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "9bedf36ffb6ba96c2eb7144ef6270557b52e54b20c0a8e1eb2ff99a6c6959bff" +checksum = "a8d8034d9489cdaf79228eb9f6a3b8d7bb32ba00d6645ebd48eef4077ceb5bd9" dependencies = [ "asn1-rs", ] @@ -5211,11 +5229,12 @@ dependencies = [ [[package]] name = "pem" -version = "1.1.1" +version = "3.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a8835c273a76a90455d7344889b0964598e3316e2a79ede8e36f16bdcf2228b8" +checksum = "8e459365e590736a54c3fa561947c84837534b8e9af6fc5bf781307e82658fae" dependencies = [ - "base64 0.13.1", + "base64", + "serde", ] [[package]] @@ -5435,9 +5454,9 @@ dependencies = [ [[package]] name = "prometheus-client" -version = "0.21.2" +version = "0.22.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3c99afa9a01501019ac3a14d71d9f94050346f55ca471ce90c799a15c58f61e2" +checksum = "504ee9ff529add891127c4827eb481bd69dc0ebc72e9a682e187db4caa60c3ca" dependencies = [ "dtoa", "itoa", @@ -5502,63 +5521,68 @@ dependencies = [ [[package]] name = "quick-protobuf-codec" -version = "0.2.0" +version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f8ededb1cd78531627244d51dd0c7139fbe736c7d57af0092a76f0ffb2f56e98" +checksum = "15a0580ab32b169745d7a39db2ba969226ca16738931be152a3209b409de2474" dependencies = [ "asynchronous-codec", "bytes", "quick-protobuf", "thiserror 1.0.69", - "unsigned-varint", + "unsigned-varint 0.8.0", ] [[package]] name = "quinn" -version = "0.10.2" +version = "0.11.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "8cc2c5017e4b43d5995dcea317bc46c1e09404c0a9664d2908f7f02dfe943d75" +checksum = "62e96808277ec6f97351a2380e6c25114bc9e67037775464979f3037c92d05ef" dependencies = [ "bytes", "futures-io", "pin-project-lite", "quinn-proto", "quinn-udp", - "rustc-hash 1.1.0", - "rustls 0.21.12", - "thiserror 1.0.69", + "rustc-hash 2.1.0", + "rustls", + "socket2", + "thiserror 2.0.9", "tokio", "tracing", ] [[package]] name = "quinn-proto" -version = "0.10.6" +version = "0.11.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "141bf7dfde2fbc246bfd3fe12f2455aa24b0fbd9af535d8c86c7bd1381ff2b1a" +checksum = "a2fe5ef3495d7d2e377ff17b1a8ce2ee2ec2a18cde8b6ad6619d65d0701c135d" dependencies = [ "bytes", + "getrandom", "rand", - "ring 0.16.20", - "rustc-hash 1.1.0", - "rustls 0.21.12", + "ring 0.17.8", + "rustc-hash 2.1.0", + "rustls", + "rustls-pki-types", "slab", - "thiserror 1.0.69", + "thiserror 2.0.9", "tinyvec", "tracing", + "web-time", ] [[package]] name = "quinn-udp" -version = "0.4.1" +version = "0.5.9" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "055b4e778e8feb9f93c4e439f71dc2156ef13360b432b799e179a8c4cdf0b1d7" +checksum = "1c40286217b4ba3a71d644d752e6a0b71f13f1b6a2c5311acfcbe0c2418ed904" dependencies = [ - "bytes", + "cfg_aliases", "libc", - "socket2 0.5.8", + "once_cell", + "socket2", "tracing", - "windows-sys 0.48.0", + "windows-sys 0.59.0", ] [[package]] @@ -5634,9 +5658,9 @@ checksum = "60a357793950651c4ed0f3f52338f53b2f809f32d83a07f72909fa13e4c6c1e3" [[package]] name = "rcgen" -version = "0.10.0" +version = "0.11.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "ffbe84efe2f38dea12e9bfc1f65377fdf03e53a18cb3b995faedf7934c7e785b" +checksum = "52c4f3084aa3bc7dfbba4eff4fab2a54db4324965d8872ab933565e6fbd83bc6" dependencies = [ "pem", "ring 0.16.20", @@ -5917,18 +5941,6 @@ dependencies = [ "windows-sys 0.59.0", 
] -[[package]] -name = "rustls" -version = "0.21.12" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3f56a14d1f48b391359b22f731fd4bd7e43c97f3c50eee276f3aa09c94784d3e" -dependencies = [ - "log", - "ring 0.17.8", - "rustls-webpki 0.101.7", - "sct", -] - [[package]] name = "rustls" version = "0.23.20" @@ -5960,6 +5972,9 @@ name = "rustls-pki-types" version = "1.10.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d2bf47e6ff922db3825eb750c4e2ff784c6ff8fb9e13046ef6a1d1c5401b0b37" +dependencies = [ + "web-time", +] [[package]] name = "rustls-webpki" @@ -6133,16 +6148,6 @@ version = "1.0.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a3cf7c11c38cb994f3d40e8a8cde3bbd1f72a435e4c49e85d6553d8312306152" -[[package]] -name = "sct" -version = "0.7.1" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "da046153aa2352493d6cb7da4b6e5c0c057d8a1d0a9aa8560baffdd945acd414" -dependencies = [ - "ring 0.17.8", - "untrusted 0.9.0", -] - [[package]] name = "sec1" version = "0.7.3" @@ -7404,7 +7409,7 @@ version = "3.12.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d6b6f7f2fcb69f747921f79f3926bd1e203fce4fef62c268dd3abfb6d86029aa" dependencies = [ - "base64 0.22.1", + "base64", "chrono", "hex", "indexmap 1.9.3", @@ -7554,16 +7559,6 @@ dependencies = [ "subtle", ] -[[package]] -name = "socket2" -version = "0.4.10" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "9f7916fc008ca5542385b89a3d3ce689953c143e9304a9bf8beec1de48994c0d" -dependencies = [ - "libc", - "winapi", -] - [[package]] name = "socket2" version = "0.5.8" @@ -8319,14 +8314,13 @@ checksum = "0bf256ce5efdfa370213c1dabab5935a12e49f2c58d15e9eac2870d3b4f27263" [[package]] name = "synstructure" -version = "0.12.6" +version = "0.13.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "f36bdaa60a83aca3921b5259d5400cbf5e90fc51931376a9bd4a0eb79aa7210f" +checksum = "c8af7666ab7b6390ab78131fb5b0fce11d6b7a6951602017c35fa82800708971" dependencies = [ "proc-macro2", "quote", - "syn 1.0.109", - "unicode-xid", + "syn 2.0.94", ] [[package]] @@ -8546,7 +8540,7 @@ dependencies = [ "parking_lot 0.12.3", "pin-project-lite", "signal-hook-registry", - "socket2 0.5.8", + "socket2", "tokio-macros", "windows-sys 0.52.0", ] @@ -8568,7 +8562,7 @@ version = "0.26.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5f6d0975eaace0cf0fcadee4e4aaa5da15b5c079146f2cffb67c113be122bf37" dependencies = [ - "rustls 0.23.20", + "rustls", "tokio", ] @@ -8806,78 +8800,6 @@ dependencies = [ "hash-db", ] -[[package]] -name = "trust-dns-proto" -version = "0.22.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4f7f83d1e4a0e4358ac54c5c3681e5d7da5efc5a7a632c90bb6d6669ddd9bc26" -dependencies = [ - "async-trait", - "cfg-if", - "data-encoding", - "enum-as-inner 0.5.1", - "futures-channel", - "futures-io", - "futures-util", - "idna 0.2.3", - "ipnet", - "lazy_static", - "rand", - "smallvec", - "socket2 0.4.10", - "thiserror 1.0.69", - "tinyvec", - "tokio", - "tracing", - "url", -] - -[[package]] -name = "trust-dns-proto" -version = "0.23.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "3119112651c157f4488931a01e586aa459736e9d6046d3bd9105ffb69352d374" -dependencies = [ - "async-trait", - "cfg-if", - "data-encoding", - "enum-as-inner 0.6.1", - "futures-channel", - "futures-io", - "futures-util", - "idna 0.4.0", - 
"ipnet", - "once_cell", - "rand", - "smallvec", - "thiserror 1.0.69", - "tinyvec", - "tokio", - "tracing", - "url", -] - -[[package]] -name = "trust-dns-resolver" -version = "0.23.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "10a3e6c3aff1718b3c73e395d1f35202ba2ffa847c6a62eea0db8fb4cfe30be6" -dependencies = [ - "cfg-if", - "futures-util", - "ipconfig", - "lru-cache", - "once_cell", - "parking_lot 0.12.3", - "rand", - "resolv-conf", - "smallvec", - "thiserror 1.0.69", - "tokio", - "tracing", - "trust-dns-proto 0.23.2", -] - [[package]] name = "try-lock" version = "0.2.5" @@ -8986,10 +8908,12 @@ name = "unsigned-varint" version = "0.7.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6889a77d49f1f013504cec6bf97a2c730394adedaeb1deb5ea08949a50541105" -dependencies = [ - "asynchronous-codec", - "bytes", -] + +[[package]] +name = "unsigned-varint" +version = "0.8.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "eb066959b24b5196ae73cb057f45598450d2c5f71460e98c49b738086eff9c06" [[package]] name = "untrusted" @@ -9010,7 +8934,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "32f8b686cadd1473f4bd0117a5d28d36b1ade384ea9b5069a1c40aefed7fda60" dependencies = [ "form_urlencoded", - "idna 1.0.3", + "idna", "percent-encoding", ] @@ -9373,6 +9297,16 @@ dependencies = [ "wasm-bindgen", ] +[[package]] +name = "web-time" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5a6580f308b1fad9207618087a65c04e7a10bc77e02c8e84e9b00dd4b12fa0bb" +dependencies = [ + "js-sys", + "wasm-bindgen", +] + [[package]] name = "wide" version = "0.7.30" @@ -9702,9 +9636,9 @@ dependencies = [ [[package]] name = "x509-parser" -version = "0.15.1" +version = "0.16.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "7069fba5b66b9193bd2c5d3d4ff12b839118f6bcbef5328efafafb5395cf63da" +checksum = "fcbc162f30700d6f3f82a24bf7cc62ffe7caea42c0b2cba8bf7f3ae50cf51f69" dependencies = [ "asn1-rs", "data-encoding", @@ -9747,6 +9681,22 @@ dependencies = [ "static_assertions", ] +[[package]] +name = "yamux" +version = "0.13.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "17610762a1207ee816c6fadc29220904753648aba0a9ed61c7b8336e80a559c4" +dependencies = [ + "futures", + "log", + "nohash-hasher", + "parking_lot 0.12.3", + "pin-project", + "rand", + "static_assertions", + "web-time", +] + [[package]] name = "yasna" version = "0.5.2" diff --git a/coordinator/p2p/libp2p/Cargo.toml b/coordinator/p2p/libp2p/Cargo.toml index 948df9a4..7707beb7 100644 --- a/coordinator/p2p/libp2p/Cargo.toml +++ b/coordinator/p2p/libp2p/Cargo.toml @@ -35,7 +35,7 @@ tributary-sdk = { path = "../../tributary-sdk" } futures-util = { version = "0.3", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false, features = ["sync"] } -libp2p = { version = "0.52", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] } +libp2p = { version = "0.54", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] } log = { version = "0.4", default-features = false, features = ["std"] } serai-task = { path = "../../../common/task", version = "0.1" } diff --git a/coordinator/p2p/libp2p/src/authenticate.rs b/coordinator/p2p/libp2p/src/authenticate.rs index fbdcf7c9..641d4481 100644 --- 
a/coordinator/p2p/libp2p/src/authenticate.rs
+++ b/coordinator/p2p/libp2p/src/authenticate.rs
@@ -11,8 +11,7 @@ use serai_client::primitives::PublicKey as Public;
 use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
 
 use libp2p::{
-  core::UpgradeInfo,
-  InboundUpgrade, OutboundUpgrade,
+  core::upgrade::{UpgradeInfo, InboundConnectionUpgrade, OutboundConnectionUpgrade},
   identity::{self, PeerId},
   noise,
 };
@@ -119,12 +118,18 @@ impl UpgradeInfo for OnlyValidators {
   }
 }
 
-impl<S: Send + Unpin + 'static + AsyncRead + AsyncWrite> InboundUpgrade<S> for OnlyValidators {
+impl<S: Send + Unpin + 'static + AsyncRead + AsyncWrite> InboundConnectionUpgrade<S>
+  for OnlyValidators
+{
   type Output = (PeerId, noise::Output<S>);
   type Error = io::Error;
   type Future = Pin<Box<dyn Send + Future<Output = Result<Self::Output, Self::Error>>>>;
 
-  fn upgrade_inbound(self, socket: S, info: Self::Info) -> Self::Future {
+  fn upgrade_inbound(
+    self,
+    socket: S,
+    info: <Self as UpgradeInfo>::Info,
+  ) -> <Self as InboundConnectionUpgrade<S>>::Future {
     Box::pin(async move {
       let (dialer_noise_peer_id, mut socket) = noise::Config::new(&self.noise_keypair)
         .unwrap()
@@ -147,12 +152,18 @@ impl<S: Send + Unpin + 'static + AsyncRead + AsyncWrite> InboundUpgrade<S> for O
   }
 }
 
-impl<S: Send + Unpin + 'static + AsyncRead + AsyncWrite> OutboundUpgrade<S> for OnlyValidators {
+impl<S: Send + Unpin + 'static + AsyncRead + AsyncWrite> OutboundConnectionUpgrade<S>
+  for OnlyValidators
+{
   type Output = (PeerId, noise::Output<S>);
   type Error = io::Error;
   type Future = Pin<Box<dyn Send + Future<Output = Result<Self::Output, Self::Error>>>>;
 
-  fn upgrade_outbound(self, socket: S, info: Self::Info) -> Self::Future {
+  fn upgrade_outbound(
+    self,
+    socket: S,
+    info: <Self as UpgradeInfo>::Info,
+  ) -> <Self as OutboundConnectionUpgrade<S>>::Future {
     Box::pin(async move {
       let (listener_noise_peer_id, mut socket) = noise::Config::new(&self.noise_keypair)
         .unwrap()
diff --git a/coordinator/p2p/libp2p/src/lib.rs b/coordinator/p2p/libp2p/src/lib.rs
index 4a998289..91f66a2d 100644
--- a/coordinator/p2p/libp2p/src/lib.rs
+++ b/coordinator/p2p/libp2p/src/lib.rs
@@ -50,7 +50,7 @@ mod ping;
 /// The request-response messages and behavior
 mod reqres;
-use reqres::{RequestId, Request, Response};
+use reqres::{InboundRequestId, Request, Response};
 
 /// The gossip messages and behavior
 mod gossip;
@@ -66,14 +66,6 @@ use dial::DialTask;
 
 const PORT: u16 = 30563; // 5132 ^ (('c' << 8) | 'o')
 
-// usize::max, manually implemented, as max isn't a const fn
-const MAX_LIBP2P_MESSAGE_SIZE: usize =
-  if gossip::MAX_LIBP2P_GOSSIP_MESSAGE_SIZE > reqres::MAX_LIBP2P_REQRES_MESSAGE_SIZE {
-    gossip::MAX_LIBP2P_GOSSIP_MESSAGE_SIZE
-  } else {
-    reqres::MAX_LIBP2P_REQRES_MESSAGE_SIZE
-  };
-
 fn peer_id_from_public(public: PublicKey) -> PeerId {
   // 0 represents the identity Multihash, that no hash was performed
   // It's an internal constant so we can't refer to the constant inside libp2p
@@ -143,9 +135,9 @@ struct Libp2pInner {
   signed_cosigns: Mutex<Vec<SignedCosign>>,
   signed_cosigns_send: mpsc::UnboundedSender<SignedCosign>,
 
-  heartbeat_requests: Mutex<mpsc::UnboundedReceiver<(RequestId, ValidatorSet, [u8; 32])>>,
-  notable_cosign_requests: Mutex<mpsc::UnboundedReceiver<(RequestId, [u8; 32])>>,
-  inbound_request_responses: mpsc::UnboundedSender<(RequestId, Response)>,
+  heartbeat_requests: Mutex<mpsc::UnboundedReceiver<(InboundRequestId, ValidatorSet, [u8; 32])>>,
+  notable_cosign_requests: Mutex<mpsc::UnboundedReceiver<(InboundRequestId, [u8; 32])>>,
+  inbound_request_responses: mpsc::UnboundedSender<(InboundRequestId, Response)>,
 }
 
 /// The libp2p-backed P2P implementation.
@@ -176,19 +168,9 @@ impl Libp2p {
       Ok(OnlyValidators { serai_key: serai_key.clone(), noise_keypair: noise_keypair.clone() })
     };
 
-    let new_yamux = || {
-      let mut config = yamux::Config::default();
-      // 1 MiB default + max message size
-      config.set_max_buffer_size((1024 * 1024) + MAX_LIBP2P_MESSAGE_SIZE);
-      // 256 KiB default + max message size
-      config
-        .set_receive_window_size(((256 * 1024) + MAX_LIBP2P_MESSAGE_SIZE).try_into().unwrap());
-      config
-    };
-
     let mut swarm = SwarmBuilder::with_existing_identity(identity::Keypair::generate_ed25519())
       .with_tokio()
-      .with_tcp(TcpConfig::default().nodelay(true), new_only_validators, new_yamux)
+      .with_tcp(TcpConfig::default().nodelay(true), new_only_validators, yamux::Config::default)
       .unwrap()
       .with_behaviour(|_| Behavior {
         allow_list: allow_block_list::Behaviour::default(),
diff --git a/coordinator/p2p/libp2p/src/reqres.rs b/coordinator/p2p/libp2p/src/reqres.rs
index 221cbdf3..aef16940 100644
--- a/coordinator/p2p/libp2p/src/reqres.rs
+++ b/coordinator/p2p/libp2p/src/reqres.rs
@@ -10,7 +10,7 @@ use futures_util::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
 use libp2p::request_response::{
   self, Codec as CodecTrait, Event as GenericEvent, Config, Behaviour, ProtocolSupport,
 };
-pub use request_response::{RequestId, Message};
+pub use request_response::{InboundRequestId, Message};
 
 use serai_cosign::SignedCosign;
 
@@ -129,7 +129,6 @@ pub(crate) type Event = GenericEvent<Request, Response>;
 pub(crate) type Behavior = Behaviour<Codec>;
 
 pub(crate) fn new_behavior() -> Behavior {
-  let mut config = Config::default();
-  config.set_request_timeout(Duration::from_secs(5));
+  let config = Config::default().with_request_timeout(Duration::from_secs(5));
   Behavior::new([(PROTOCOL, ProtocolSupport::Full)], config)
 }
diff --git a/coordinator/p2p/libp2p/src/swarm.rs b/coordinator/p2p/libp2p/src/swarm.rs
index a8c9556c..0d06c171 100644
--- a/coordinator/p2p/libp2p/src/swarm.rs
+++ b/coordinator/p2p/libp2p/src/swarm.rs
@@ -17,7 +17,7 @@ use serai_cosign::SignedCosign;
 use futures_util::StreamExt;
 use libp2p::{
   identity::PeerId,
-  request_response::{RequestId, ResponseChannel},
+  request_response::{InboundRequestId, OutboundRequestId, ResponseChannel},
   swarm::{dial_opts::DialOpts, SwarmEvent, Swarm},
 };
 
@@ -65,12 +65,12 @@ pub(crate) struct SwarmTask {
   tributary_gossip: mpsc::UnboundedSender<([u8; 32], Vec<u8>)>,
 
   outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender<Response>)>,
-  outbound_request_responses: HashMap<RequestId, oneshot::Sender<Response>>,
+  outbound_request_responses: HashMap<OutboundRequestId, oneshot::Sender<Response>>,
 
-  inbound_request_response_channels: HashMap<RequestId, ResponseChannel<Response>>,
-  heartbeat_requests: mpsc::UnboundedSender<(RequestId, ValidatorSet, [u8; 32])>,
-  notable_cosign_requests: mpsc::UnboundedSender<(RequestId, [u8; 32])>,
-  inbound_request_responses: mpsc::UnboundedReceiver<(RequestId, Response)>,
+  inbound_request_response_channels: HashMap<InboundRequestId, ResponseChannel<Response>>,
+  heartbeat_requests: mpsc::UnboundedSender<(InboundRequestId, ValidatorSet, [u8; 32])>,
+  notable_cosign_requests: mpsc::UnboundedSender<(InboundRequestId, [u8; 32])>,
+  inbound_request_responses: mpsc::UnboundedReceiver<(InboundRequestId, Response)>,
 }
 
 impl SwarmTask {
@@ -222,25 +222,21 @@ impl SwarmTask {
         }
       }
 
-      SwarmEvent::Behaviour(
-        BehaviorEvent::AllowList(event) | BehaviorEvent::ConnectionLimits(event)
-      ) => {
-        // This *is* an exhaustive match as these events are empty enums
-        match event {}
-      }
-      SwarmEvent::Behaviour(
-        BehaviorEvent::Ping(ping::Event { peer: _, connection, result, })
-      ) => {
-        if result.is_err() {
-          self.swarm.close_connection(connection);
+      SwarmEvent::Behaviour(event) => {
=> { + match event { + BehaviorEvent::AllowList(event) | BehaviorEvent::ConnectionLimits(event) => { + // This *is* an exhaustive match as these events are empty enums + match event {} + } + BehaviorEvent::Ping(ping::Event { peer: _, connection, result, }) => { + if result.is_err() { + self.swarm.close_connection(connection); + } + } + BehaviorEvent::Reqres(event) => self.handle_reqres(event), + BehaviorEvent::Gossip(event) => self.handle_gossip(event), + } } - SwarmEvent::Behaviour(BehaviorEvent::Reqres(event)) => { - self.handle_reqres(event) - } - SwarmEvent::Behaviour(BehaviorEvent::Gossip(event)) => { - self.handle_gossip(event) - } // We don't handle any of these SwarmEvent::IncomingConnection { .. } | @@ -250,7 +246,14 @@ impl SwarmTask { SwarmEvent::ExpiredListenAddr { .. } | SwarmEvent::ListenerClosed { .. } | SwarmEvent::ListenerError { .. } | - SwarmEvent::Dialing { .. } => {} + SwarmEvent::Dialing { .. } | + SwarmEvent::NewExternalAddrCandidate { .. } | + SwarmEvent::ExternalAddrConfirmed { .. } | + SwarmEvent::ExternalAddrExpired { .. } | + SwarmEvent::NewExternalAddrOfPeer { .. } => {} + + // Required, as SwarmEvent is non-exhaustive + _ => log::warn!("unhandled SwarmEvent: {event:?}"), } } @@ -321,9 +324,9 @@ impl SwarmTask { outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender)>, - heartbeat_requests: mpsc::UnboundedSender<(RequestId, ValidatorSet, [u8; 32])>, - notable_cosign_requests: mpsc::UnboundedSender<(RequestId, [u8; 32])>, - inbound_request_responses: mpsc::UnboundedReceiver<(RequestId, Response)>, + heartbeat_requests: mpsc::UnboundedSender<(InboundRequestId, ValidatorSet, [u8; 32])>, + notable_cosign_requests: mpsc::UnboundedSender<(InboundRequestId, [u8; 32])>, + inbound_request_responses: mpsc::UnboundedReceiver<(InboundRequestId, Response)>, ) { tokio::spawn( SwarmTask { From cb906242e7cd9ecac2b84dbd443c005b45189157 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 18 Jan 2025 12:31:11 -0500 Subject: [PATCH 320/368] 2025 nightly Supersedes #640.
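The bulk of the mechanical changes below swap the manual ceiling-division idiom for `div_ceil`, stable for the primitive integer types since Rust 1.73. A minimal sketch of the equivalence (the function and its names are illustrative, not code from this patch):

  // Both expressions compute ceil(target / chunk) for a non-zero `chunk`;
  // `div_ceil` additionally cannot overflow for `target` values near
  // `usize::MAX`, unlike the manual `target + (chunk - 1)` offset.
  fn needed_chunks(target: usize, chunk: usize) -> usize {
    let manual = (target + (chunk - 1)) / chunk;
    let idiomatic = target.div_ceil(chunk);
    assert_eq!(manual, idiomatic);
    idiomatic
  }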
--- .github/nightly-version | 2 +- common/request/src/response.rs | 2 +- coordinator/tributary/src/lib.rs | 4 ++-- crypto/dalek-ff-group/src/field.rs | 2 +- crypto/dleq/src/lib.rs | 4 ++-- .../src/arithmetic_circuit_proof.rs | 2 +- .../evrf/generalized-bulletproofs/src/lib.rs | 2 +- crypto/frost/src/sign.rs | 18 +++++++++++------- crypto/multiexp/src/lib.rs | 2 +- processor/scheduler/utxo/primitives/src/lib.rs | 1 + substrate/client/src/serai/coins.rs | 2 +- substrate/client/src/serai/dex.rs | 2 +- .../client/src/serai/genesis_liquidity.rs | 2 +- substrate/client/src/serai/in_instructions.rs | 2 +- substrate/client/src/serai/liquidity_tokens.rs | 2 +- substrate/client/src/serai/mod.rs | 16 ++++++++-------- substrate/client/src/serai/validator_sets.rs | 2 +- 17 files changed, 36 insertions(+), 31 deletions(-) diff --git a/.github/nightly-version b/.github/nightly-version index 9f98e758..09a243d7 100644 --- a/.github/nightly-version +++ b/.github/nightly-version @@ -1 +1 @@ -nightly-2024-07-01 +nightly-2025-01-01 diff --git a/common/request/src/response.rs b/common/request/src/response.rs index 78295d37..e4628f72 100644 --- a/common/request/src/response.rs +++ b/common/request/src/response.rs @@ -11,7 +11,7 @@ use crate::{Client, Error}; #[allow(dead_code)] #[derive(Debug)] pub struct Response<'a>(pub(crate) hyper::Response, pub(crate) &'a Client); -impl<'a> Response<'a> { +impl Response<'_> { pub fn status(&self) -> StatusCode { self.0.status() } diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 27baf45d..1e1235ad 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -133,7 +133,7 @@ struct ScanBlock<'a, TD: Db, TDT: DbTxn, P: P2p> { total_weight: u16, validator_weights: &'a HashMap, } -impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { +impl ScanBlock<'_, TD, TDT, P> { fn potentially_start_cosign(&mut self) { // Don't start a new cosigning instance if we're actively running one if TributaryDb::actively_cosigning(self.tributary_txn, self.set.set).is_some() { @@ -173,7 +173,7 @@ impl<'a, TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'a, TD, TDT, P> { self.set.set, messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { session: self.set.set.session, - intent, + cosign: intent.into_cosign(self.set.set.network), }, ); } diff --git a/crypto/dalek-ff-group/src/field.rs b/crypto/dalek-ff-group/src/field.rs index bc3078c8..e3813725 100644 --- a/crypto/dalek-ff-group/src/field.rs +++ b/crypto/dalek-ff-group/src/field.rs @@ -92,7 +92,7 @@ impl Neg for FieldElement { } } -impl<'a> Neg for &'a FieldElement { +impl Neg for &FieldElement { type Output = FieldElement; fn neg(self) -> Self::Output { (*self).neg() diff --git a/crypto/dleq/src/lib.rs b/crypto/dleq/src/lib.rs index a8958a2e..f6aed25a 100644 --- a/crypto/dleq/src/lib.rs +++ b/crypto/dleq/src/lib.rs @@ -37,11 +37,11 @@ pub(crate) fn challenge(transcript: &mut T) -> F { // Get a wide amount of bytes to safely reduce without bias // In most cases, <=1.5x bytes is enough. 
2x is still standard and there's some theoretical // groups which may technically require more than 1.5x bytes for this to work as intended - let target_bytes = ((usize::try_from(F::NUM_BITS).unwrap() + 7) / 8) * 2; + let target_bytes = usize::try_from(F::NUM_BITS).unwrap().div_ceil(8) * 2; let mut challenge_bytes = transcript.challenge(b"challenge"); let challenge_bytes_len = challenge_bytes.as_ref().len(); // If the challenge is 32 bytes, and we need 64, we need two challenges - let needed_challenges = (target_bytes + (challenge_bytes_len - 1)) / challenge_bytes_len; + let needed_challenges = target_bytes.div_ceil(challenge_bytes_len); // The following algorithm should be equivalent to a wide reduction of the challenges, // interpreted as concatenated, big-endian byte string diff --git a/crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs b/crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs index e0c6e464..c4983b76 100644 --- a/crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs +++ b/crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs @@ -33,7 +33,7 @@ pub struct ArithmeticCircuitStatement<'a, C: Ciphersuite> { V: PointVector, } -impl<'a, C: Ciphersuite> Zeroize for ArithmeticCircuitStatement<'a, C> { +impl Zeroize for ArithmeticCircuitStatement<'_, C> { fn zeroize(&mut self) { self.constraints.zeroize(); self.C.zeroize(); diff --git a/crypto/evrf/generalized-bulletproofs/src/lib.rs b/crypto/evrf/generalized-bulletproofs/src/lib.rs index dc88e68c..48c4cd56 100644 --- a/crypto/evrf/generalized-bulletproofs/src/lib.rs +++ b/crypto/evrf/generalized-bulletproofs/src/lib.rs @@ -247,7 +247,7 @@ impl Generators { } } -impl<'a, C: Ciphersuite> ProofGenerators<'a, C> { +impl ProofGenerators<'_, C> { pub(crate) fn len(&self) -> usize { self.g_bold.len() } diff --git a/crypto/frost/src/sign.rs b/crypto/frost/src/sign.rs index 5115244f..0351584a 100644 --- a/crypto/frost/src/sign.rs +++ b/crypto/frost/src/sign.rs @@ -203,14 +203,15 @@ pub trait SignMachine: Send + Sync + Sized { /// SignatureMachine this SignMachine turns into. type SignatureMachine: SignatureMachine; - /// Cache this preprocess for usage later. This cached preprocess MUST only be used once. Reuse - /// of it enables recovery of your private key share. Third-party recovery of a cached preprocess - /// also enables recovery of your private key share, so this MUST be treated with the same - /// security as your private key share. + /// Cache this preprocess for usage later. + /// + /// This cached preprocess MUST only be used once. Reuse of it enables recovery of your private + /// key share. Third-party recovery of a cached preprocess also enables recovery of your private + /// key share, so this MUST be treated with the same security as your private key share. fn cache(self) -> CachedPreprocess; /// Create a sign machine from a cached preprocess. - + /// /// After this, the preprocess must be deleted so it's never reused. Any reuse will presumably /// cause the signer to leak their secret share. fn from_cache( @@ -219,11 +220,14 @@ pub trait SignMachine: Send + Sync + Sized { cache: CachedPreprocess, ) -> (Self, Self::Preprocess); - /// Read a Preprocess message. Despite taking self, this does not save the preprocess. - /// It must be externally cached and passed into sign. + /// Read a Preprocess message. + /// + /// Despite taking self, this does not save the preprocess. It must be externally cached and + /// passed into sign. 
fn read_preprocess(&self, reader: &mut R) -> io::Result; /// Sign a message. + /// /// Takes in the participants' preprocess messages. Returns the signature share to be broadcast /// to all participants, over an authenticated channel. The parties who participate here will /// become the signing set for this session. diff --git a/crypto/multiexp/src/lib.rs b/crypto/multiexp/src/lib.rs index dfd8e033..604d0fd6 100644 --- a/crypto/multiexp/src/lib.rs +++ b/crypto/multiexp/src/lib.rs @@ -59,7 +59,7 @@ pub(crate) fn prep_bits>( for pair in pairs { let p = groupings.len(); let mut bits = pair.0.to_le_bits(); - groupings.push(vec![0; (bits.len() + (w_usize - 1)) / w_usize]); + groupings.push(vec![0; bits.len().div_ceil(w_usize)]); for (i, mut bit) in bits.iter_mut().enumerate() { let mut bit = u8_from_bool(&mut bit); diff --git a/processor/scheduler/utxo/primitives/src/lib.rs b/processor/scheduler/utxo/primitives/src/lib.rs index c01baf02..a793c906 100644 --- a/processor/scheduler/utxo/primitives/src/lib.rs +++ b/processor/scheduler/utxo/primitives/src/lib.rs @@ -102,6 +102,7 @@ pub trait TransactionPlanner: 'static + Send + Sync { /// /// Returns `None` if the fee exceeded the inputs, or `Some` otherwise. // TODO: Enum for Change of None, Some, Mandatory + #[allow(clippy::type_complexity)] fn plan_transaction_with_fee_amortization( &self, operating_costs: &mut u64, diff --git a/substrate/client/src/serai/coins.rs b/substrate/client/src/serai/coins.rs index c5bef95d..2da598fd 100644 --- a/substrate/client/src/serai/coins.rs +++ b/substrate/client/src/serai/coins.rs @@ -12,7 +12,7 @@ pub type CoinsEvent = serai_abi::coins::Event; #[derive(Clone, Copy)] pub struct SeraiCoins<'a>(pub(crate) &'a TemporalSerai<'a>); -impl<'a> SeraiCoins<'a> { +impl SeraiCoins<'_> { pub async fn mint_events(&self) -> Result, SeraiError> { self .0 diff --git a/substrate/client/src/serai/dex.rs b/substrate/client/src/serai/dex.rs index ea76e625..8a53ba78 100644 --- a/substrate/client/src/serai/dex.rs +++ b/substrate/client/src/serai/dex.rs @@ -9,7 +9,7 @@ const PALLET: &str = "Dex"; #[derive(Clone, Copy)] pub struct SeraiDex<'a>(pub(crate) &'a TemporalSerai<'a>); -impl<'a> SeraiDex<'a> { +impl SeraiDex<'_> { pub async fn events(&self) -> Result, SeraiError> { self .0 diff --git a/substrate/client/src/serai/genesis_liquidity.rs b/substrate/client/src/serai/genesis_liquidity.rs index 187844be..8b9c5538 100644 --- a/substrate/client/src/serai/genesis_liquidity.rs +++ b/substrate/client/src/serai/genesis_liquidity.rs @@ -15,7 +15,7 @@ const PALLET: &str = "GenesisLiquidity"; #[derive(Clone, Copy)] pub struct SeraiGenesisLiquidity<'a>(pub(crate) &'a TemporalSerai<'a>); -impl<'a> SeraiGenesisLiquidity<'a> { +impl SeraiGenesisLiquidity<'_> { pub async fn events(&self) -> Result, SeraiError> { self .0 diff --git a/substrate/client/src/serai/in_instructions.rs b/substrate/client/src/serai/in_instructions.rs index 29f9b1a2..675ff792 100644 --- a/substrate/client/src/serai/in_instructions.rs +++ b/substrate/client/src/serai/in_instructions.rs @@ -9,7 +9,7 @@ const PALLET: &str = "InInstructions"; #[derive(Clone, Copy)] pub struct SeraiInInstructions<'a>(pub(crate) &'a TemporalSerai<'a>); -impl<'a> SeraiInInstructions<'a> { +impl SeraiInInstructions<'_> { pub async fn last_batch_for_network( &self, network: NetworkId, diff --git a/substrate/client/src/serai/liquidity_tokens.rs b/substrate/client/src/serai/liquidity_tokens.rs index 3e9052b2..530b9257 100644 --- a/substrate/client/src/serai/liquidity_tokens.rs +++ 
b/substrate/client/src/serai/liquidity_tokens.rs @@ -8,7 +8,7 @@ const PALLET: &str = "LiquidityTokens"; #[derive(Clone, Copy)] pub struct SeraiLiquidityTokens<'a>(pub(crate) &'a TemporalSerai<'a>); -impl<'a> SeraiLiquidityTokens<'a> { +impl SeraiLiquidityTokens<'_> { pub async fn token_supply(&self, coin: Coin) -> Result { Ok(self.0.storage(PALLET, "Supply", coin).await?.unwrap_or(Amount(0))) } diff --git a/substrate/client/src/serai/mod.rs b/substrate/client/src/serai/mod.rs index f99e9a39..fda876b6 100644 --- a/substrate/client/src/serai/mod.rs +++ b/substrate/client/src/serai/mod.rs @@ -80,7 +80,7 @@ pub struct TemporalSerai<'a> { block: [u8; 32], events: RwLock>, } -impl<'a> Clone for TemporalSerai<'a> { +impl Clone for TemporalSerai<'_> { fn clone(&self) -> Self { Self { serai: self.serai, block: self.block, events: RwLock::new(None) } } @@ -319,7 +319,7 @@ impl Serai { } } -impl<'a> TemporalSerai<'a> { +impl TemporalSerai<'_> { async fn events( &self, filter_map: impl Fn(&Event) -> Option, @@ -389,27 +389,27 @@ impl<'a> TemporalSerai<'a> { }) } - pub fn coins(&'a self) -> SeraiCoins<'a> { + pub fn coins(&self) -> SeraiCoins<'_> { SeraiCoins(self) } - pub fn dex(&'a self) -> SeraiDex<'a> { + pub fn dex(&self) -> SeraiDex<'_> { SeraiDex(self) } - pub fn in_instructions(&'a self) -> SeraiInInstructions<'a> { + pub fn in_instructions(&self) -> SeraiInInstructions<'_> { SeraiInInstructions(self) } - pub fn validator_sets(&'a self) -> SeraiValidatorSets<'a> { + pub fn validator_sets(&self) -> SeraiValidatorSets<'_> { SeraiValidatorSets(self) } - pub fn genesis_liquidity(&'a self) -> SeraiGenesisLiquidity { + pub fn genesis_liquidity(&self) -> SeraiGenesisLiquidity { SeraiGenesisLiquidity(self) } - pub fn liquidity_tokens(&'a self) -> SeraiLiquidityTokens { + pub fn liquidity_tokens(&self) -> SeraiLiquidityTokens { SeraiLiquidityTokens(self) } } diff --git a/substrate/client/src/serai/validator_sets.rs b/substrate/client/src/serai/validator_sets.rs index 882f7af6..d7190651 100644 --- a/substrate/client/src/serai/validator_sets.rs +++ b/substrate/client/src/serai/validator_sets.rs @@ -18,7 +18,7 @@ pub type ValidatorSetsEvent = serai_abi::validator_sets::Event; #[derive(Clone, Copy)] pub struct SeraiValidatorSets<'a>(pub(crate) &'a TemporalSerai<'a>); -impl<'a> SeraiValidatorSets<'a> { +impl SeraiValidatorSets<'_> { pub async fn new_set_events(&self) -> Result, SeraiError> { self .0 From 8222ce78d8cbf1448b368347d70c16b815491938 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 18 Jan 2025 12:41:57 -0500 Subject: [PATCH 321/368] Correct accumulated errors in the processor --- Cargo.lock | 2 + coordinator/cosign/src/lib.rs | 20 ++++++++- coordinator/tributary/src/transaction.rs | 10 +++-- processor/bin/Cargo.toml | 1 + processor/bin/src/coordinator.rs | 20 ++++----- processor/bin/src/lib.rs | 12 ++--- processor/messages/src/lib.rs | 8 ++-- processor/signers/Cargo.toml | 1 + processor/signers/src/batch/mod.rs | 14 +++++- processor/signers/src/coordinator/mod.rs | 20 +++------ processor/signers/src/cosign/mod.rs | 44 +++++++++++++------ processor/signers/src/db.rs | 12 ++--- processor/signers/src/lib.rs | 21 ++++----- processor/signers/src/slash_report.rs | 25 +++++------ processor/signers/src/wrapped_schnorrkel.rs | 12 ++--- .../primitives/src/slash_points.rs | 9 ++-- 16 files changed, 133 insertions(+), 98 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index c8d00271..4d6bf218 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -6906,6 +6906,7 @@ dependencies = [ "log", "parity-scale-codec", 
"serai-client", + "serai-cosign", "serai-db", "serai-env", "serai-message-queue", @@ -7097,6 +7098,7 @@ dependencies = [ "modular-frost", "parity-scale-codec", "rand_core", + "serai-cosign", "serai-db", "serai-in-instructions-primitives", "serai-primitives", diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index 517dd7f3..3d476c3d 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -104,6 +104,24 @@ pub struct Cosign { pub cosigner: NetworkId, } +impl CosignIntent { + /// Convert this into a `Cosign`. + pub fn into_cosign(self, cosigner: NetworkId) -> Cosign { + let CosignIntent { global_session, block_number, block_hash, notable: _ } = self; + Cosign { global_session, block_number, block_hash, cosigner } + } +} + +impl Cosign { + /// The message to sign to sign this cosign. + /// + /// This must be signed with schnorrkel, the context set to `COSIGN_CONTEXT`. + pub fn signature_message(&self) -> Vec { + // We use a schnorrkel context to domain-separate this + self.encode() + } +} + /// A signed cosign. #[derive(Clone, Debug, BorshSerialize, BorshDeserialize)] pub struct SignedCosign { @@ -118,7 +136,7 @@ impl SignedCosign { let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false }; let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false }; - signer.verify_simple(COSIGN_CONTEXT, &self.cosign.encode(), &signature).is_ok() + signer.verify_simple(COSIGN_CONTEXT, &self.cosign.signature_message(), &signature).is_ok() } } diff --git a/coordinator/tributary/src/transaction.rs b/coordinator/tributary/src/transaction.rs index b302f8d7..d05bf3c2 100644 --- a/coordinator/tributary/src/transaction.rs +++ b/coordinator/tributary/src/transaction.rs @@ -365,10 +365,12 @@ impl Transaction { Transaction::DkgConfirmationPreprocess { ref mut signed, .. } => signed, Transaction::DkgConfirmationShare { ref mut signed, .. } => signed, - Transaction::Cosign { .. } => panic!("signing CosignSubstrateBlock"), - Transaction::Cosigned { .. } => panic!("signing Cosigned"), - Transaction::SubstrateBlock { .. } => panic!("signing SubstrateBlock"), - Transaction::Batch { .. } => panic!("signing Batch"), + Transaction::Cosign { .. } => panic!("signing Cosign transaction (provided)"), + Transaction::Cosigned { .. } => panic!("signing Cosigned transaction (provided)"), + Transaction::SubstrateBlock { .. } => { + panic!("signing SubstrateBlock transaction (provided)") + } + Transaction::Batch { .. } => panic!("signing Batch transaction (provided)"), Transaction::Sign { ref mut signed, .. 
} => signed, diff --git a/processor/bin/Cargo.toml b/processor/bin/Cargo.toml index 52ebaeb9..164036a0 100644 --- a/processor/bin/Cargo.toml +++ b/processor/bin/Cargo.toml @@ -28,6 +28,7 @@ ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, fea dkg = { path = "../../crypto/dkg", default-features = false, features = ["std", "evrf-ristretto"] } serai-client = { path = "../../substrate/client", default-features = false } +serai-cosign = { path = "../../coordinator/cosign" } log = { version = "0.4", default-features = false, features = ["std"] } env_logger = { version = "0.10", default-features = false, features = ["humantime"] } diff --git a/processor/bin/src/coordinator.rs b/processor/bin/src/coordinator.rs index 591826bd..62eb1097 100644 --- a/processor/bin/src/coordinator.rs +++ b/processor/bin/src/coordinator.rs @@ -3,12 +3,14 @@ use std::sync::{LazyLock, Arc, Mutex}; use tokio::sync::mpsc; -use scale::Encode; use serai_client::{ - primitives::Signature, validator_sets::primitives::Session, + primitives::Signature, + validator_sets::primitives::{Session, SlashReport}, in_instructions::primitives::SignedBatch, }; +use serai_cosign::SignedCosign; + use serai_db::{Get, DbTxn, Db, create_db, db_channel}; use scanner::ScannerFeed; @@ -181,17 +183,11 @@ impl signers::Coordinator for CoordinatorSend { fn publish_cosign( &mut self, - block_number: u64, - block: [u8; 32], - signature: Signature, + cosign: SignedCosign, ) -> impl Send + Future> { async move { self.send(&messages::ProcessorMessage::Coordinator( - messages::coordinator::ProcessorMessage::CosignedBlock { - block_number, - block, - signature: signature.encode(), - }, + messages::coordinator::ProcessorMessage::CosignedBlock { cosign }, )); Ok(()) } @@ -212,13 +208,15 @@ impl signers::Coordinator for CoordinatorSend { fn publish_slash_report_signature( &mut self, session: Session, + slash_report: SlashReport, signature: Signature, ) -> impl Send + Future> { async move { self.send(&messages::ProcessorMessage::Coordinator( messages::coordinator::ProcessorMessage::SignedSlashReport { session, - signature: signature.encode(), + slash_report, + signature: signature.0, }, )); Ok(()) diff --git a/processor/bin/src/lib.rs b/processor/bin/src/lib.rs index 119c4f40..5109dcbc 100644 --- a/processor/bin/src/lib.rs +++ b/processor/bin/src/lib.rs @@ -221,20 +221,16 @@ pub async fn main_loop< signers.queue_message(txn, &msg) } messages::CoordinatorMessage::Coordinator( - messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { - session, - block_number, - block, - }, + messages::coordinator::CoordinatorMessage::CosignSubstrateBlock { session, cosign }, ) => { let txn = txn.take().unwrap(); - signers.cosign_block(txn, session, block_number, block) + signers.cosign_block(txn, session, &cosign) } messages::CoordinatorMessage::Coordinator( - messages::coordinator::CoordinatorMessage::SignSlashReport { session, report }, + messages::coordinator::CoordinatorMessage::SignSlashReport { session, slash_report }, ) => { let txn = txn.take().unwrap(); - signers.sign_slash_report(txn, session, &report) + signers.sign_slash_report(txn, session, &slash_report) } messages::CoordinatorMessage::Substrate(msg) => match msg { diff --git a/processor/messages/src/lib.rs b/processor/messages/src/lib.rs index b8f496ab..7101fdc2 100644 --- a/processor/messages/src/lib.rs +++ b/processor/messages/src/lib.rs @@ -11,7 +11,7 @@ use validator_sets_primitives::{Session, KeyPair, SlashReport}; use coins_primitives::OutInstructionWithBalance; use 
in_instructions_primitives::SignedBatch; -use serai_cosign::{CosignIntent, SignedCosign}; +use serai_cosign::{Cosign, SignedCosign}; #[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub struct SubstrateContext { @@ -166,7 +166,7 @@ pub mod coordinator { /// Cosign the specified Substrate block. /// /// This is sent by the Coordinator's Tributary scanner. - CosignSubstrateBlock { session: Session, intent: CosignIntent }, + CosignSubstrateBlock { session: Session, cosign: Cosign }, /// Sign the slash report for this session. /// /// This is sent by the Coordinator's Tributary scanner. @@ -322,8 +322,8 @@ impl CoordinatorMessage { CoordinatorMessage::Coordinator(msg) => { let (sub, id) = match msg { // We only cosign a block once, and Reattempt is a separate message - coordinator::CoordinatorMessage::CosignSubstrateBlock { intent, .. } => { - (0, intent.block_number.encode()) + coordinator::CoordinatorMessage::CosignSubstrateBlock { cosign, .. } => { + (0, cosign.block_number.encode()) } // We only sign one slash report, and Reattempt is a separate message coordinator::CoordinatorMessage::SignSlashReport { session, .. } => (1, session.encode()), diff --git a/processor/signers/Cargo.toml b/processor/signers/Cargo.toml index ddd295d3..ecf588d4 100644 --- a/processor/signers/Cargo.toml +++ b/processor/signers/Cargo.toml @@ -40,6 +40,7 @@ serai-db = { path = "../../common/db" } log = { version = "0.4", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] } +serai-cosign = { path = "../../coordinator/cosign" } messages = { package = "serai-processor-messages", path = "../messages" } primitives = { package = "serai-processor-primitives", path = "../primitives" } scanner = { package = "serai-processor-scanner", path = "../scanner" } diff --git a/processor/signers/src/batch/mod.rs b/processor/signers/src/batch/mod.rs index 2c4fd1f5..f38666a8 100644 --- a/processor/signers/src/batch/mod.rs +++ b/processor/signers/src/batch/mod.rs @@ -69,7 +69,12 @@ impl BatchSignerTask { let mut machines = Vec::with_capacity(keys.len()); for keys in &keys { - machines.push(WrappedSchnorrkelMachine::new(keys.clone(), batch_message(&batch))); + // TODO: Fetch the context for this from a constant instead of re-defining it + machines.push(WrappedSchnorrkelMachine::new( + keys.clone(), + b"substrate", + batch_message(&batch), + )); } attempt_manager.register(VariantSignId::Batch(id), machines); } @@ -106,7 +111,12 @@ impl ContinuallyRan for BatchSignerTask { let mut machines = Vec::with_capacity(self.keys.len()); for keys in &self.keys { - machines.push(WrappedSchnorrkelMachine::new(keys.clone(), batch_message(&batch))); + // TODO: Also fetch the constant here + machines.push(WrappedSchnorrkelMachine::new( + keys.clone(), + b"substrate", + batch_message(&batch), + )); } for msg in self.attempt_manager.register(VariantSignId::Batch(batch_hash), machines) { BatchSignerToCoordinatorMessages::send(&mut txn, self.session, &msg); diff --git a/processor/signers/src/coordinator/mod.rs b/processor/signers/src/coordinator/mod.rs index 003c14cd..0fd10822 100644 --- a/processor/signers/src/coordinator/mod.rs +++ b/processor/signers/src/coordinator/mod.rs @@ -1,6 +1,7 @@ use core::future::Future; -use scale::Decode; +use serai_primitives::Signature; + use serai_db::{DbTxn, Db}; use primitives::task::ContinuallyRan; @@ -99,17 +100,11 @@ impl ContinuallyRan for CoordinatorTask { // Publish the cosigns from 
this session { let mut txn = self.db.txn(); - while let Some(((block_number, block_id), signature)) = - Cosign::try_recv(&mut txn, session) - { + while let Some(signed_cosign) = Cosign::try_recv(&mut txn, session) { iterated = true; self .coordinator - .publish_cosign( - block_number, - block_id, - <_>::decode(&mut signature.as_slice()).unwrap(), - ) + .publish_cosign(signed_cosign) .await .map_err(|e| format!("couldn't publish Cosign: {e:?}"))?; } @@ -119,15 +114,12 @@ impl ContinuallyRan for CoordinatorTask { // If this session signed its slash report, publish its signature { let mut txn = self.db.txn(); - if let Some(slash_report_signature) = SlashReportSignature::try_recv(&mut txn, session) { + if let Some((slash_report, signature)) = SignedSlashReport::try_recv(&mut txn, session) { iterated = true; self .coordinator - .publish_slash_report_signature( - session, - <_>::decode(&mut slash_report_signature.as_slice()).unwrap(), - ) + .publish_slash_report_signature(session, slash_report, Signature(signature)) .await .map_err(|e| { format!("couldn't send slash report signature to the coordinator: {e:?}") diff --git a/processor/signers/src/cosign/mod.rs b/processor/signers/src/cosign/mod.rs index dc5de6cd..ddf6c490 100644 --- a/processor/signers/src/cosign/mod.rs +++ b/processor/signers/src/cosign/mod.rs @@ -9,7 +9,8 @@ use serai_validator_sets_primitives::Session; use serai_db::{DbTxn, Db}; -use messages::{sign::VariantSignId, coordinator::cosign_block_msg}; +use serai_cosign::{COSIGN_CONTEXT, Cosign as CosignStruct, SignedCosign}; +use messages::sign::VariantSignId; use primitives::task::{DoesNotError, ContinuallyRan}; @@ -34,7 +35,7 @@ pub(crate) struct CosignerTask { session: Session, keys: Vec>, - current_cosign: Option<(u64, [u8; 32])>, + current_cosign: Option, attempt_manager: AttemptManager, } @@ -62,26 +63,34 @@ impl ContinuallyRan for CosignerTask { let mut txn = self.db.txn(); if let Some(cosign) = ToCosign::get(&txn, self.session) { // If this wasn't already signed for... 
- if LatestCosigned::get(&txn, self.session) < Some(cosign.0) { + if LatestCosigned::get(&txn, self.session) < Some(cosign.block_number) { // If this isn't the cosign we're currently working on, meaning it's fresh - if self.current_cosign != Some(cosign) { + if self.current_cosign.as_ref() != Some(&cosign) { // Retire the current cosign - if let Some(current_cosign) = self.current_cosign { - assert!(current_cosign.0 < cosign.0); - self.attempt_manager.retire(&mut txn, VariantSignId::Cosign(current_cosign.0)); + if let Some(current_cosign) = &self.current_cosign { + assert!(current_cosign.block_number < cosign.block_number); + self + .attempt_manager + .retire(&mut txn, VariantSignId::Cosign(current_cosign.block_number)); } // Set the cosign being worked on - self.current_cosign = Some(cosign); + self.current_cosign = Some(cosign.clone()); let mut machines = Vec::with_capacity(self.keys.len()); { - let message = cosign_block_msg(cosign.0, cosign.1); + let message = cosign.signature_message(); for keys in &self.keys { - machines.push(WrappedSchnorrkelMachine::new(keys.clone(), message.clone())); + machines.push(WrappedSchnorrkelMachine::new( + keys.clone(), + COSIGN_CONTEXT, + message.clone(), + )); } } - for msg in self.attempt_manager.register(VariantSignId::Cosign(cosign.0), machines) { + for msg in + self.attempt_manager.register(VariantSignId::Cosign(cosign.block_number), machines) + { CosignerToCoordinatorMessages::send(&mut txn, self.session, &msg); } @@ -109,12 +118,19 @@ impl ContinuallyRan for CosignerTask { let VariantSignId::Cosign(block_number) = id else { panic!("CosignerTask signed a non-Cosign") }; - assert_eq!(Some(block_number), self.current_cosign.map(|cosign| cosign.0)); + assert_eq!( + Some(block_number), + self.current_cosign.as_ref().map(|cosign| cosign.block_number) + ); let cosign = self.current_cosign.take().unwrap(); - LatestCosigned::set(&mut txn, self.session, &cosign.0); + LatestCosigned::set(&mut txn, self.session, &cosign.block_number); + let cosign = SignedCosign { + cosign, + signature: Signature::from(signature).encode().try_into().unwrap(), + }; // Send the cosign - Cosign::send(&mut txn, self.session, &(cosign, Signature::from(signature).encode())); + Cosign::send(&mut txn, self.session, &cosign); } } diff --git a/processor/signers/src/db.rs b/processor/signers/src/db.rs index 2c13ddba..23862236 100644 --- a/processor/signers/src/db.rs +++ b/processor/signers/src/db.rs @@ -1,7 +1,9 @@ -use serai_validator_sets_primitives::{Session, Slash}; +use serai_validator_sets_primitives::{Session, SlashReport as SlashReportStruct}; use serai_db::{Get, DbTxn, create_db, db_channel}; +use serai_cosign::{Cosign as CosignStruct, SignedCosign}; + use messages::sign::{ProcessorMessage, CoordinatorMessage}; create_db! { @@ -11,16 +13,16 @@ create_db! { LatestRetiredSession: () -> Session, ToCleanup: () -> Vec<(Session, Vec)>, - ToCosign: (session: Session) -> (u64, [u8; 32]), + ToCosign: (session: Session) -> CosignStruct, } } db_channel! { SignersGlobal { - Cosign: (session: Session) -> ((u64, [u8; 32]), Vec), + Cosign: (session: Session) -> SignedCosign, - SlashReport: (session: Session) -> Vec, - SlashReportSignature: (session: Session) -> Vec, + SlashReport: (session: Session) -> SlashReportStruct, + SignedSlashReport: (session: Session) -> (SlashReportStruct, [u8; 64]), /* TODO: Most of these are pointless? We drop all active signing sessions on reboot. 
It's diff --git a/processor/signers/src/lib.rs b/processor/signers/src/lib.rs index 116f7b9e..2f5a4a04 100644 --- a/processor/signers/src/lib.rs +++ b/processor/signers/src/lib.rs @@ -11,11 +11,13 @@ use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto}; use frost::dkg::{ThresholdCore, ThresholdKeys}; use serai_primitives::Signature; -use serai_validator_sets_primitives::{Session, Slash}; +use serai_validator_sets_primitives::{Session, SlashReport}; use serai_in_instructions_primitives::SignedBatch; use serai_db::{DbTxn, Db}; +use serai_cosign::{Cosign, SignedCosign}; + use messages::sign::{VariantSignId, ProcessorMessage, CoordinatorMessage}; use primitives::task::{Task, TaskHandle, ContinuallyRan}; @@ -59,9 +61,7 @@ pub trait Coordinator: 'static + Send + Sync { /// Publish a cosign. fn publish_cosign( &mut self, - block_number: u64, - block_id: [u8; 32], - signature: Signature, + signed_cosign: SignedCosign, ) -> impl Send + Future>; /// Publish a `SignedBatch`. @@ -74,6 +74,7 @@ pub trait Coordinator: 'static + Send + Sync { fn publish_slash_report_signature( &mut self, session: Session, + slash_report: SlashReport, signature: Signature, ) -> impl Send + Future>; } @@ -408,19 +409,13 @@ impl< /// Cosign a block. /// /// This is a cheap call and able to be done inline from a higher-level loop. - pub fn cosign_block( - &mut self, - mut txn: impl DbTxn, - session: Session, - block_number: u64, - block: [u8; 32], - ) { + pub fn cosign_block(&mut self, mut txn: impl DbTxn, session: Session, cosign: &Cosign) { // Don't cosign blocks with already retired keys if Some(session.0) <= db::LatestRetiredSession::get(&txn).map(|session| session.0) { return; } - db::ToCosign::set(&mut txn, session, &(block_number, block)); + db::ToCosign::set(&mut txn, session, cosign); txn.commit(); if let Some(tasks) = self.tasks.get(&session) { @@ -435,7 +430,7 @@ impl< &mut self, mut txn: impl DbTxn, session: Session, - slash_report: &Vec, + slash_report: &SlashReport, ) { // Don't sign slash reports with already retired keys if Some(session.0) <= db::LatestRetiredSession::get(&txn).map(|session| session.0) { diff --git a/processor/signers/src/slash_report.rs b/processor/signers/src/slash_report.rs index a5d155ef..14437a74 100644 --- a/processor/signers/src/slash_report.rs +++ b/processor/signers/src/slash_report.rs @@ -3,11 +3,8 @@ use core::{marker::PhantomData, future::Future}; use ciphersuite::Ristretto; use frost::dkg::ThresholdKeys; -use scale::Encode; use serai_primitives::Signature; -use serai_validator_sets_primitives::{ - Session, ValidatorSet, SlashReport as SlashReportStruct, report_slashes_message, -}; +use serai_validator_sets_primitives::Session; use serai_db::{DbTxn, Db}; @@ -20,7 +17,7 @@ use frost_attempt_manager::*; use crate::{ db::{ - SlashReport, SlashReportSignature, CoordinatorToSlashReportSignerMessages, + SlashReport, SignedSlashReport, CoordinatorToSlashReportSignerMessages, SlashReportSignerToCoordinatorMessages, }, WrappedSchnorrkelMachine, @@ -72,12 +69,14 @@ impl ContinuallyRan for SlashReportSignerTask { let mut machines = Vec::with_capacity(self.keys.len()); { - let message = report_slashes_message( - &ValidatorSet { network: S::NETWORK, session: self.session }, - &SlashReportStruct(slash_report.try_into().unwrap()), - ); + let message = slash_report.report_slashes_message(); for keys in &self.keys { - machines.push(WrappedSchnorrkelMachine::new(keys.clone(), message.clone())); + // TODO: Fetch this constant from somewhere instead of inlining it + 
machines.push(WrappedSchnorrkelMachine::new( + keys.clone(), + b"substrate", + message.clone(), + )); } } let mut txn = self.db.txn(); @@ -105,12 +104,12 @@ impl ContinuallyRan for SlashReportSignerTask { Response::Signature { id, signature } => { assert_eq!(id, VariantSignId::SlashReport); // Drain the channel - SlashReport::try_recv(&mut txn, self.session).unwrap(); + let slash_report = SlashReport::try_recv(&mut txn, self.session).unwrap(); // Send the signature - SlashReportSignature::send( + SignedSlashReport::send( &mut txn, self.session, - &Signature::from(signature).encode(), + &(slash_report, Signature::from(signature).0), ); } } diff --git a/processor/signers/src/wrapped_schnorrkel.rs b/processor/signers/src/wrapped_schnorrkel.rs index d81eaa70..a84b8d43 100644 --- a/processor/signers/src/wrapped_schnorrkel.rs +++ b/processor/signers/src/wrapped_schnorrkel.rs @@ -16,10 +16,10 @@ use frost_schnorrkel::Schnorrkel; // This wraps a Schnorrkel sign machine into one with a preset message. #[derive(Clone)] -pub(crate) struct WrappedSchnorrkelMachine(ThresholdKeys, Vec); +pub(crate) struct WrappedSchnorrkelMachine(ThresholdKeys, &'static [u8], Vec); impl WrappedSchnorrkelMachine { - pub(crate) fn new(keys: ThresholdKeys, msg: Vec) -> Self { - Self(keys, msg) + pub(crate) fn new(keys: ThresholdKeys, context: &'static [u8], msg: Vec) -> Self { + Self(keys, context, msg) } } @@ -39,10 +39,10 @@ impl PreprocessMachine for WrappedSchnorrkelMachine { rng: &mut R, ) -> (Self::SignMachine, Preprocess>::Addendum>) { - let WrappedSchnorrkelMachine(keys, batch) = self; + let WrappedSchnorrkelMachine(keys, context, msg) = self; let (machine, preprocess) = - AlgorithmMachine::new(Schnorrkel::new(b"substrate"), keys).preprocess(rng); - (WrappedSchnorrkelSignMachine(machine, batch), preprocess) + AlgorithmMachine::new(Schnorrkel::new(context), keys).preprocess(rng); + (WrappedSchnorrkelSignMachine(machine, msg), preprocess) } } diff --git a/substrate/validator-sets/primitives/src/slash_points.rs b/substrate/validator-sets/primitives/src/slash_points.rs index d420157e..0cc72b2f 100644 --- a/substrate/validator-sets/primitives/src/slash_points.rs +++ b/substrate/validator-sets/primitives/src/slash_points.rs @@ -234,9 +234,12 @@ impl TryFrom> for SlashReport { } } -// This is assumed binding to the ValidatorSet via the key signed with -pub fn report_slashes_message(slashes: &SlashReport) -> Vec { - (b"ValidatorSets-report_slashes", slashes).encode() +impl SlashReport { + /// The message to sign when publishing this SlashReport. 
+ // This is assumed binding to the ValidatorSet via the key signed with + pub fn report_slashes_message(&self) -> Vec { + (b"ValidatorSets-report_slashes", &self.0).encode() + } } #[test] From 0d906363a08ff3b3be8fe8c98ca91706e1264b45 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 18 Jan 2025 15:13:39 -0500 Subject: [PATCH 322/368] Simplify and test deterministically_sign --- processor/ethereum/TODO/old_processor.rs | 2 +- processor/ethereum/TODO/tests/mod.rs | 2 +- processor/ethereum/deployer/src/lib.rs | 7 +- processor/ethereum/primitives/src/lib.rs | 70 ++++++++++++++----- processor/ethereum/router/src/tests/mod.rs | 14 ++-- processor/ethereum/test-primitives/src/lib.rs | 2 +- 6 files changed, 66 insertions(+), 31 deletions(-) diff --git a/processor/ethereum/TODO/old_processor.rs b/processor/ethereum/TODO/old_processor.rs index a7e85a5c..f95b8225 100644 --- a/processor/ethereum/TODO/old_processor.rs +++ b/processor/ethereum/TODO/old_processor.rs @@ -26,7 +26,7 @@ TODO }; tx.gas_limit = 1_000_000u64.into(); tx.gas_price = 1_000_000_000u64.into(); - let tx = ethereum_serai::crypto::deterministically_sign(&tx); + let tx = ethereum_serai::crypto::deterministically_sign(tx); if self.provider.get_transaction_by_hash(*tx.hash()).await.unwrap().is_none() { self diff --git a/processor/ethereum/TODO/tests/mod.rs b/processor/ethereum/TODO/tests/mod.rs index a865868f..be9106d5 100644 --- a/processor/ethereum/TODO/tests/mod.rs +++ b/processor/ethereum/TODO/tests/mod.rs @@ -109,7 +109,7 @@ pub async fn deploy_contract( input: bin, }; - let deployment_tx = deterministically_sign(&deployment_tx); + let deployment_tx = deterministically_sign(deployment_tx); // Fund the deployer address fund_account( diff --git a/processor/ethereum/deployer/src/lib.rs b/processor/ethereum/deployer/src/lib.rs index 58b0262d..a4d6ed94 100644 --- a/processor/ethereum/deployer/src/lib.rs +++ b/processor/ethereum/deployer/src/lib.rs @@ -43,10 +43,13 @@ impl Deployer { let bytecode = Bytes::from_hex(BYTECODE).expect("compiled-in Deployer bytecode wasn't valid hex"); + // Legacy transactions are used to ensure the widest possible degree of support across EVMs let tx = TxLegacy { chain_id: None, nonce: 0, - // 100 gwei + // This uses a fixed gas price as necessary to achieve a deterministic address + // The gas price is fixed to 100 gwei, which should be incredibly generous, in order to make + // this getting stuck unlikely. While expensive, this only has to occur once gas_price: 100_000_000_000u128, // TODO: Use a more accurate gas limit gas_limit: 1_000_000u64, @@ -55,7 +58,7 @@ impl Deployer { input: bytecode, }; - ethereum_primitives::deterministically_sign(&tx) + ethereum_primitives::deterministically_sign(tx) } /// Obtain the deterministic address for this contract. diff --git a/processor/ethereum/primitives/src/lib.rs b/processor/ethereum/primitives/src/lib.rs index a6da3b4d..eb7bd615 100644 --- a/processor/ethereum/primitives/src/lib.rs +++ b/processor/ethereum/primitives/src/lib.rs @@ -15,34 +15,66 @@ pub fn keccak256(data: impl AsRef<[u8]>) -> [u8; 32] { /// Deterministically sign a transaction. /// -/// This signs a transaction via setting `r = 1, s = 1`, and incrementing `r` until a signer is -/// recoverable from the signature for this transaction. The purpose of this is to be able to send -/// a transaction from a known account which no one knows the private key for. +/// This signs a transaction via setting a signature of `r = 1, s = 1`. 
The purpose of this is to +/// be able to send a transaction from an account for which no one knows the private key, and +/// from which no other messages may be signed. /// /// This function panics if passed a transaction with a non-None chain ID. This is because the /// signer for this transaction is only singular across any/all EVM instances if it isn't binding /// to an instance. -pub fn deterministically_sign(tx: &TxLegacy) -> Signed { +pub fn deterministically_sign(tx: TxLegacy) -> Signed { assert!( tx.chain_id.is_none(), "chain ID was Some when deterministically signing a TX (causing a non-singular signer)" ); - let mut r = Scalar::ONE; + /* + ECDSA signatures are: + - x = private key + - k = rand() + - R = k * G + - r = R.x() + - s = (H(m) + (r * x)) * k.invert() + + Key recovery is performed via: + - a = s * R = (H(m) + (r * x)) * G + - b = a - (H(m) * G) = (r * x) * G + - X = b / r = x * G + - X = ((s * R) - (H(m) * G)) * r.invert() + + This requires `r` be non-zero and `R` be recoverable from `r` and the parity byte. For + `r = 1, s = 1`, this sets `X` to `R - (H(m) * G)`. Since there is an `R` recoverable for + `r = 1`, since `R` is a point with an unknown discrete logarithm w.r.t. the generator, and + since the resulting key is dependent on the message signed for, this will always work as + specified. + */ + + let r = Scalar::ONE; let s = Scalar::ONE; - loop { - // Create the signature - let r_bytes: [u8; 32] = r.to_repr().into(); - let s_bytes: [u8; 32] = s.to_repr().into(); - let signature = - PrimitiveSignature::from_scalars_and_parity(r_bytes.into(), s_bytes.into(), false); + let r_bytes: [u8; 32] = r.to_repr().into(); + let s_bytes: [u8; 32] = s.to_repr().into(); + let signature = + PrimitiveSignature::from_scalars_and_parity(r_bytes.into(), s_bytes.into(), false); - // Check if this is a valid signature - let tx = tx.clone().into_signed(signature); - if tx.recover_signer().is_ok() { - return tx; - } - - r += Scalar::ONE; - } + let res = tx.into_signed(signature); + debug_assert!(res.recover_signer().is_ok()); + res +} + +#[test] +fn test_deterministically_sign() { + let tx = TxLegacy { chain_id: None, ..Default::default() }; + let signed = deterministically_sign(tx.clone()); + + assert!(signed.recover_signer().is_ok()); + let one = alloy_core::primitives::U256::from(1u64); + assert_eq!(signed.signature().r(), one); + assert_eq!(signed.signature().s(), one); + + let mut other_tx = tx.clone(); + other_tx.nonce += 1; + // Signing a distinct message should yield a distinct signer + assert!( + signed.recover_signer().unwrap() != deterministically_sign(other_tx).recover_signer().unwrap() + ); } diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index e5f8f41e..601d2dfa 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -84,7 +84,7 @@ async fn setup_test( // Set a gas price (100 gwei) tx.gas_price = 100_000_000_000; // Sign it - let tx = ethereum_primitives::deterministically_sign(&tx); + let tx = ethereum_primitives::deterministically_sign(tx); // Publish it let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await; assert!(receipt.status()); @@ -123,7 +123,7 @@ async fn confirm_next_serai_key( let mut tx = router.confirm_next_serai_key(&sig); tx.gas_price = 100_000_000_000; - let tx = ethereum_primitives::deterministically_sign(&tx); + let tx = ethereum_primitives::deterministically_sign(tx); let receipt =
ethereum_test_primitives::publish_tx(provider, tx).await; assert!(receipt.status()); assert_eq!( @@ -164,7 +164,7 @@ async fn test_update_serai_key() { let mut tx = router.update_serai_key(&update_to, &sig); tx.gas_price = 100_000_000_000; - let tx = ethereum_primitives::deterministically_sign(&tx); + let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await; assert!(receipt.status()); assert_eq!(u128::from(Router::UPDATE_SERAI_KEY_GAS), ((receipt.gas_used + 1000) / 1000) * 1000); @@ -199,7 +199,7 @@ async fn test_eth_in_instruction() { .abi_encode() .into(), }; - let tx = ethereum_primitives::deterministically_sign(&tx); + let tx = ethereum_primitives::deterministically_sign(tx); let signer = tx.recover_signer().unwrap(); let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await; @@ -250,7 +250,7 @@ async fn publish_outs( let mut tx = router.execute(coin, fee, outs, &sig); tx.gas_price = 100_000_000_000; - let tx = ethereum_primitives::deterministically_sign(&tx); + let tx = ethereum_primitives::deterministically_sign(tx); ethereum_test_primitives::publish_tx(provider, tx).await } @@ -307,7 +307,7 @@ async fn escape_hatch( let mut tx = router.escape_hatch(escape_to, &sig); tx.gas_price = 100_000_000_000; - let tx = ethereum_primitives::deterministically_sign(&tx); + let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(provider, tx).await; assert!(receipt.status()); assert_eq!(u128::from(Router::ESCAPE_HATCH_GAS), ((receipt.gas_used + 1000) / 1000) * 1000); @@ -321,7 +321,7 @@ async fn escape( ) -> TransactionReceipt { let mut tx = router.escape(coin.address()); tx.gas_price = 100_000_000_000; - let tx = ethereum_primitives::deterministically_sign(&tx); + let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(provider, tx).await; assert!(receipt.status()); receipt diff --git a/processor/ethereum/test-primitives/src/lib.rs b/processor/ethereum/test-primitives/src/lib.rs index 9f43d0a2..47cc983e 100644 --- a/processor/ethereum/test-primitives/src/lib.rs +++ b/processor/ethereum/test-primitives/src/lib.rs @@ -76,7 +76,7 @@ pub async fn deploy_contract( input: bin.into(), }; - let deployment_tx = deterministically_sign(&deployment_tx); + let deployment_tx = deterministically_sign(deployment_tx); let receipt = publish_tx(provider, deployment_tx).await; assert!(receipt.status()); From f6b52b3fd3fec7eded321a5cc8d0fa0c6fb335f4 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 18 Jan 2025 15:22:58 -0500 Subject: [PATCH 323/368] Maximum line length of 80 in Deployer.sol --- .../ethereum/deployer/contracts/Deployer.sol | 56 ++++++++++--------- 1 file changed, 30 insertions(+), 26 deletions(-) diff --git a/processor/ethereum/deployer/contracts/Deployer.sol b/processor/ethereum/deployer/contracts/Deployer.sol index 8382cf21..862a27fd 100644 --- a/processor/ethereum/deployer/contracts/Deployer.sol +++ b/processor/ethereum/deployer/contracts/Deployer.sol @@ -4,29 +4,30 @@ pragma solidity ^0.8.26; /* The expected deployment process of Serai's Router is as follows: - 1) A transaction deploying Deployer is made. Then, a deterministic signature is - created such that an account with an unknown private key is the creator of - the contract. Anyone can fund this address, and once anyone does, the + 1) A transaction deploying Deployer is made. 
Then, a deterministic signature + is created such that an account with an unknown private key is the creator + of the contract. Anyone can fund this address, and once anyone does, the transaction deploying Deployer can be published by anyone. No other transaction may be made from that account. - 2) Anyone deploys the Router through the Deployer. This uses a sequential nonce - such that meet-in-the-middle attacks, with complexity 2**80, aren't feasible. - While such attacks would still be feasible if the Deployer's address was - controllable, the usage of a deterministic signature with a NUMS method - prevents that. + 2) Anyone deploys the Router through the Deployer. This uses a sequential + nonce such that meet-in-the-middle attacks, with complexity 2**80, aren't + feasible. While such attacks would still be feasible if the Deployer's + address was controllable, the usage of a deterministic signature with a + NUMS method prevents that. - This doesn't have any denial-of-service risks and will resolve once anyone steps - forward as deployer. This does fail to guarantee an identical address across - every chain, though it enables letting anyone efficiently ask the Deployer for - the address (with the Deployer having an identical address on every chain). + This doesn't have any denial-of-service risks and will resolve once anyone + steps forward as deployer. This does fail to guarantee an identical address + for the Router across every chain, though it enables anyone to efficiently + ask the Deployer for the address (with the Deployer having an identical + address on every chain). - Unfortunately, guaranteeing identical addresses aren't feasible. We'd need the - Deployer contract to use a consistent salt for the Router, yet the Router must - be deployed with a specific public key for Serai. Since Ethereum isn't able to - determine a valid public key (one the result of a Serai DKG) from a dishonest - public key, we have to allow multiple deployments with Serai being the one to - determine which to use. + Unfortunately, guaranteeing identical addresses for the Router isn't + feasible. We'd need the Deployer contract to use a consistent salt for the + Router, yet the Router must be deployed with a specific public key for Serai. + Since Ethereum isn't able to determine a valid public key (one the result of + a Serai DKG) from a dishonest public key (one arbitrary), we have to allow + multiple deployments with Serai being the one to determine which to use. The alternative would be to have a council publish the Serai key on-Ethereum, with Serai verifying the published result. This would introduce a DoS risk in @@ -68,15 +69,18 @@ contract Deployer { /* Check this wasn't prior deployed. - This is a post-check, not a pre-check (in violation of the CEI pattern). If we used a - pre-check, a deployed contract could re-enter the Deployer to deploy the same contract - multiple times due to the inner call updating state and then the outer call overwriting it. - The post-check causes the outer call to error once the inner call updates state. + This is a post-check, not a pre-check (in violation of the CEI pattern). + If we used a pre-check, a deployed contract could re-enter the Deployer + to deploy the same contract multiple times due to the inner call updating + state and then the outer call overwriting it. The post-check causes the + outer call to error once the inner call updates state. 
- This does mean contract deployment may fail if deployment causes arbitrary execution which - maliciously nests deployment of the being-deployed contract. Such an inner call won't fail, - yet the outer call would. The usage of a re-entrancy guard would call the inner call to fail - while the outer call succeeds. This is considered so edge-case it isn't worth handling. + This does mean contract deployment may fail if deployment causes + arbitrary execution which maliciously nests deployment of the + being-deployed contract. Such an inner call won't fail, yet the outer + call would. The usage of a re-entrancy guard would cause the inner call + to fail while the outer call succeeds. This is considered so edge-case it + isn't worth handling. */ if (deployments[initCodeHash] != address(0)) { revert PriorDeployed(); From 3c9c12d3203c46ad4748b96a0bb86ed8fac63e5b Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sat, 18 Jan 2025 23:58:38 -0500 Subject: [PATCH 324/368] Test the Deployer contract --- Cargo.lock | 4 + processor/ethereum/deployer/Cargo.toml | 8 ++ processor/ethereum/deployer/src/lib.rs | 52 ++++++++--- processor/ethereum/deployer/src/tests.rs | 107 +++++++++++++++++++++++ 4 files changed, 161 insertions(+), 10 deletions(-) create mode 100644 processor/ethereum/deployer/src/tests.rs diff --git a/Cargo.lock b/Cargo.lock index 4d6bf218..f28b9701 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -6926,14 +6926,18 @@ version = "0.1.0" dependencies = [ "alloy-consensus", "alloy-core", + "alloy-node-bindings", "alloy-provider", + "alloy-rpc-client", "alloy-rpc-types-eth", "alloy-simple-request-transport", "alloy-sol-macro", "alloy-sol-types", "alloy-transport", "build-solidity-contracts", + "serai-ethereum-test-primitives", "serai-processor-ethereum-primitives", + "tokio", ] [[package]] diff --git a/processor/ethereum/deployer/Cargo.toml b/processor/ethereum/deployer/Cargo.toml index 1b8e191e..3e0f7d5b 100644 --- a/processor/ethereum/deployer/Cargo.toml +++ b/processor/ethereum/deployer/Cargo.toml @@ -33,3 +33,11 @@ ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = [build-dependencies] build-solidity-contracts = { path = "../../../networks/ethereum/build-contracts", default-features = false } + +[dev-dependencies] +alloy-rpc-client = { version = "0.9", default-features = false } +alloy-node-bindings = { version = "0.9", default-features = false } + +tokio = { version = "1.0", default-features = false, features = ["rt-multi-thread", "macros"] } + +ethereum-test-primitives = { package = "serai-ethereum-test-primitives", path = "../test-primitives" } diff --git a/processor/ethereum/deployer/src/lib.rs b/processor/ethereum/deployer/src/lib.rs index a4d6ed94..f810d617 100644 --- a/processor/ethereum/deployer/src/lib.rs +++ b/processor/ethereum/deployer/src/lib.rs @@ -4,7 +4,7 @@ use std::sync::Arc; -use alloy_core::primitives::{hex::FromHex, Address, U256, Bytes, TxKind}; +use alloy_core::primitives::{hex, Address, U256, Bytes, TxKind}; use alloy_consensus::{Signed, TxLegacy}; use alloy_sol_types::SolCall; @@ -14,6 +14,9 @@ use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; +#[cfg(test)] +mod tests; + #[rustfmt::skip] #[expect(warnings)] #[expect(needless_pass_by_value)] @@ -24,6 +27,17 @@ mod abi { alloy_sol_macro::sol!("contracts/Deployer.sol"); } +const BYTECODE: &[u8] = { + const BYTECODE_HEX: &[u8] = + include_bytes!(concat!(env!("OUT_DIR"), 
"/serai-processor-ethereum-deployer/Deployer.bin")); + const BYTECODE: [u8; BYTECODE_HEX.len() / 2] = + match hex::const_decode_to_array::<{ BYTECODE_HEX.len() / 2 }>(BYTECODE_HEX) { + Ok(bytecode) => bytecode, + Err(_) => panic!("Deployer.bin did not contain valid hex"), + }; + &BYTECODE +}; + /// The Deployer contract for the Serai Router contract. /// /// This Deployer has a deterministic address, letting it be immediately identified on any instance @@ -38,21 +52,39 @@ impl Deployer { /// funded for this transaction to be submitted. This account has no known private key to anyone /// so ETH sent can be neither misappropriated nor returned. pub fn deployment_tx() -> Signed { - pub const BYTECODE: &[u8] = - include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-deployer/Deployer.bin")); - let bytecode = - Bytes::from_hex(BYTECODE).expect("compiled-in Deployer bytecode wasn't valid hex"); + let bytecode = Bytes::from(BYTECODE); // Legacy transactions are used to ensure the widest possible degree of support across EVMs let tx = TxLegacy { chain_id: None, nonce: 0, - // This uses a fixed gas price as necessary to achieve a deterministic address - // The gas price is fixed to 100 gwei, which should be incredibly generous, in order to make - // this getting stuck unlikely. While expensive, this only has to occur once + /* + This needs to use a fixed gas price to achieve a deterministic address. The gas price is + fixed to 100 gwei, which should be generous, in order to make this unlikely to get stuck. + While potentially expensive, this only has to occur per chain this is deployed on. + + If this is too low of a gas price, private mempools can be used, with other transactions in + the bundle raising the gas price to acceptable levels. While this strategy could be + entirely relied upon, allowing the gas price paid to reflect the network's actual gas + price, that wouldn't work for EVM networks without private mempools. + + That leaves this as failing only if it violates a protocol constant, or if the gas price is + too low on a network without private mempools to publish via. In that case, this code + should to be forked to accept an enum of which network the deployment is for (with the gas + price derivative of that, as common as possible across networks to minimize the amount of + addresses representing the Deployer). + */ gas_price: 100_000_000_000u128, - // TODO: Use a more accurate gas limit - gas_limit: 1_000_000u64, + /* + This is twice the cost of deployment as of Ethereum's Cancun upgrade. The wide margin is to + increase the likelihood of surviving changes to the cost of contract deployment (notably + the gas cost of calldata). While wasteful, this only has to be done once per chain and is + accepted accordingly. + + If this is ever unacceptable, the parameterization suggested in case the `gas_price` is + unacceptable should be implemented. 
+      */
+      gas_limit: 300_698,
       to: TxKind::Create,
       value: U256::ZERO,
       input: bytecode,
diff --git a/processor/ethereum/deployer/src/tests.rs b/processor/ethereum/deployer/src/tests.rs
new file mode 100644
index 00000000..ba1e75ae
--- /dev/null
+++ b/processor/ethereum/deployer/src/tests.rs
@@ -0,0 +1,107 @@
+use std::sync::Arc;
+
+use alloy_rpc_types_eth::{TransactionInput, TransactionRequest};
+use alloy_simple_request_transport::SimpleRequest;
+use alloy_rpc_client::ClientBuilder;
+use alloy_provider::{Provider, RootProvider};
+
+use alloy_node_bindings::Anvil;
+
+use crate::{
+  abi::Deployer::{PriorDeployed, DeploymentFailed, DeployerErrors},
+  Deployer,
+};
+
+#[tokio::test]
+async fn test_deployer() {
+  const CANCUN: &str = "cancun";
+  const LATEST: &str = "latest";
+
+  for network in [CANCUN, LATEST] {
+    let anvil = Anvil::new().arg("--hardfork").arg(network).spawn();
+
+    let provider = Arc::new(RootProvider::new(
+      ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true),
+    ));
+
+    // Deploy the Deployer
+    {
+      let deployment_tx = Deployer::deployment_tx();
+      let gas_programmed = deployment_tx.tx().gas_limit;
+      let receipt = ethereum_test_primitives::publish_tx(&provider, deployment_tx).await;
+      assert!(receipt.status());
+      assert_eq!(receipt.contract_address.unwrap(), Deployer::address());
+
+      if network == CANCUN {
+        // Check the gas programmed was twice the gas used
+        // We only check this for Cancun, as the constant was programmed per Cancun's gas pricing
+        assert_eq!(2 * receipt.gas_used, gas_programmed);
+      }
+    }
+
+    // Deploy the Deployer with the Deployer
+    let mut deploy_tx = Deployer::deploy_tx(crate::BYTECODE.to_vec());
+    deploy_tx.gas_price = 100_000_000_000u128;
+    deploy_tx.gas_limit = 1_000_000;
+    {
+      let deploy_tx = ethereum_primitives::deterministically_sign(deploy_tx.clone());
+      let receipt = ethereum_test_primitives::publish_tx(&provider, deploy_tx).await;
+      assert!(receipt.status());
+    }
+
+    // Verify we can now find the deployer
+    {
+      let deployer = Deployer::new(provider.clone()).await.unwrap().unwrap();
+      let deployed_deployer = deployer
+        .find_deployment(ethereum_primitives::keccak256(crate::BYTECODE))
+        .await
+        .unwrap()
+        .unwrap();
+      assert_eq!(
+        provider.get_code_at(deployed_deployer).await.unwrap(),
+        provider.get_code_at(Deployer::address()).await.unwrap(),
+      );
+      assert!(deployed_deployer != Deployer::address());
+    }
+
+    // Verify deploying the same init code multiple times fails
+    {
+      let mut deploy_tx = deploy_tx;
+      // Change the gas price to cause a distinct message, and with it, a distinct signer
+      deploy_tx.gas_price += 1;
+      let deploy_tx = ethereum_primitives::deterministically_sign(deploy_tx);
+      let receipt = ethereum_test_primitives::publish_tx(&provider, deploy_tx.clone()).await;
+      assert!(!receipt.status());
+
+      let call = TransactionRequest::default()
+        .to(Deployer::address())
+        .input(TransactionInput::new(deploy_tx.tx().input.clone()));
+      let call_err = provider.call(&call).await.unwrap_err();
+      assert!(matches!(
+        call_err.as_error_resp().unwrap().as_decoded_error::<DeployerErrors>(true).unwrap(),
+        DeployerErrors::PriorDeployed(PriorDeployed {}),
+      ));
+    }
+
+    // Verify deployment failures yield errors properly
+    {
+      // 0xfe is an invalid opcode which is guaranteed to remain invalid
+      let mut deploy_tx = Deployer::deploy_tx(vec![0xfe]);
+      deploy_tx.gas_price = 100_000_000_000u128;
+      deploy_tx.gas_limit = 1_000_000;
+
+      let deploy_tx = ethereum_primitives::deterministically_sign(deploy_tx);
+      let receipt =
ethereum_test_primitives::publish_tx(&provider, deploy_tx.clone()).await;
+      assert!(!receipt.status());
+
+      let call = TransactionRequest::default()
+        .to(Deployer::address())
+        .input(TransactionInput::new(deploy_tx.tx().input.clone()));
+      let call_err = provider.call(&call).await.unwrap_err();
+      assert!(matches!(
+        call_err.as_error_resp().unwrap().as_decoded_error::<DeployerErrors>(true).unwrap(),
+        DeployerErrors::DeploymentFailed(DeploymentFailed {}),
+      ));
+    }
+  }
+}

From 642ba00952ffc20468f69dc23752dc628468caff Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sun, 19 Jan 2025 00:03:56 -0500
Subject: [PATCH 325/368] Update Deployer README, 80-character line length

---
 processor/ethereum/deployer/README.md | 32 ++++++++++++++++-----------
 1 file changed, 19 insertions(+), 13 deletions(-)

diff --git a/processor/ethereum/deployer/README.md b/processor/ethereum/deployer/README.md
index 6b439650..f2ea6fae 100644
--- a/processor/ethereum/deployer/README.md
+++ b/processor/ethereum/deployer/README.md
@@ -4,20 +4,26 @@ The deployer for Serai's Ethereum contracts.

 ## Goals

-It should be possible to efficiently locate the Serai Router on an blockchain with the EVM, without
-relying on any centralized (or even federated) entities. While deploying and locating an instance of
-the Router would be trivial, by using a fixed signature for the deployment transaction, the Router
-must be constructed with the correct key for the Serai network (or set to have the correct key
-post-construction). Since this cannot be guaranteed to occur, the process must be retryable and the
-first successful invocation must be efficiently findable.
+It should be possible to efficiently locate the Serai Router on a blockchain
+with the EVM, without relying on any centralized (or even federated) entities.
+While deploying and locating an instance of the Router would be trivial, by
+using a fixed signature for the deployment transaction, the Router must be
+constructed with the correct key for the Serai network (or set to have the
+correct key post-construction). Since this cannot be guaranteed to occur, the
+process must be retryable and the first successful invocation must be
+efficiently findable.

 ## Methodology

-We define a contract, the Deployer, to deploy the router. This contract could use `CREATE2` with the
-key representing Serai as the salt, yet this would be open to collision attacks with just 2**80
-complexity. Instead, we use `CREATE` which would require 2**80 on-chain transactions (infeasible) to
-use as the basis of a collision.
+We define a contract, the Deployer, to deploy the Router. This contract could
+use `CREATE2` with the key representing Serai as the salt, yet this would be
+open to collision attacks with just 2\*\*80 complexity. Instead, we use
+`CREATE` which would require 2\*\*80 on-chain transactions (infeasible) to use
+as the basis of a collision.

-In order to efficiently find the contract for a key, the Deployer contract saves the addresses of
-deployed contracts (indexed by the initialization code hash). This allows using a single call to a
-contract with a known address to find the proper Router.
+In order to efficiently find the contract for a key, the Deployer contract
+saves the addresses of deployed contracts (indexed by the initialization code's
+hash). This allows using a single call to a contract with a known address to
+find the proper Router. Saving the address to the state enables finding the
+Router's address even if the connected-to node's logs have been pruned for
+historical blocks.
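
As a concrete illustration of the methodology described in this README, the following is a minimal sketch of locating the Router through the Deployer, mirroring the calls exercised in the deployer tests above (`Deployer::new`, `find_deployment`, and `ethereum_primitives::keccak256`). The `find_router` helper and its `rpc_url`/`router_init_code` parameters are hypothetical, error handling is collapsed into `Option`, and the deployer/primitives crates are assumed as dependencies; this is not part of the crate's API.

// A sketch, not crate API: one call to the fixed-address Deployer suffices to
// find the first successfully-deployed contract for some initialization code.
use std::sync::Arc;

use alloy_core::primitives::Address;
use alloy_simple_request_transport::SimpleRequest;
use alloy_rpc_client::ClientBuilder;
use alloy_provider::RootProvider;

use ethereum_deployer::Deployer;

// `rpc_url` and `router_init_code` are placeholders for illustration
async fn find_router(rpc_url: String, router_init_code: &[u8]) -> Option<Address> {
  let provider = Arc::new(RootProvider::new(
    ClientBuilder::default().transport(SimpleRequest::new(rpc_url), true),
  ));

  // The Deployer's address is deterministic, so no per-chain configuration is needed
  let deployer = Deployer::new(provider).await.ok()??;

  // Deployed addresses are indexed by the hash of their initialization code
  deployer.find_deployment(ethereum_primitives::keccak256(router_init_code)).await.ok()?
}

Because the deployment transaction is fixed and presigned, anyone may fund its sender and publish it on a given chain, after which the lookup above succeeds there.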
From 9d57c4eb4d894becba3e9a94ae0215949010ef84 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 19 Jan 2025 00:16:50 -0500 Subject: [PATCH 326/368] Downscope dependencies in serai-processor-ethereum-primitives, const-hex decode bytecode in ethereum-schnorr-contract --- Cargo.lock | 3 ++- networks/ethereum/schnorr/Cargo.toml | 2 ++ networks/ethereum/schnorr/README.md | 3 ++- networks/ethereum/schnorr/src/lib.rs | 12 ++++++++++-- networks/ethereum/schnorr/src/tests/premise.rs | 12 ++++-------- processor/ethereum/primitives/Cargo.toml | 2 +- processor/ethereum/primitives/src/lib.rs | 6 +++--- 7 files changed, 24 insertions(+), 16 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index f28b9701..b247b181 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -2209,6 +2209,7 @@ dependencies = [ "alloy-simple-request-transport", "alloy-sol-types", "build-solidity-contracts", + "const-hex", "group", "k256", "rand_core", @@ -6959,7 +6960,7 @@ name = "serai-processor-ethereum-primitives" version = "0.1.0" dependencies = [ "alloy-consensus", - "alloy-core", + "alloy-primitives", "group", "k256", ] diff --git a/networks/ethereum/schnorr/Cargo.toml b/networks/ethereum/schnorr/Cargo.toml index 42797bb7..5883a453 100644 --- a/networks/ethereum/schnorr/Cargo.toml +++ b/networks/ethereum/schnorr/Cargo.toml @@ -16,6 +16,8 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] +const-hex = { version = "1", default-features = false, features = ["std", "core-error"] } + subtle = { version = "2", default-features = false, features = ["std"] } sha3 = { version = "0.10", default-features = false, features = ["std"] } group = { version = "0.13", default-features = false, features = ["alloc"] } diff --git a/networks/ethereum/schnorr/README.md b/networks/ethereum/schnorr/README.md index 410cf520..896c92c1 100644 --- a/networks/ethereum/schnorr/README.md +++ b/networks/ethereum/schnorr/README.md @@ -2,4 +2,5 @@ An Ethereum contract to verify Schnorr signatures. -This crate will fail to build if `solc` is not installed and available. +This crate will fail to build if the expected version of `solc` is not +installed and available. diff --git a/networks/ethereum/schnorr/src/lib.rs b/networks/ethereum/schnorr/src/lib.rs index 3f67fbbf..ec6f6277 100644 --- a/networks/ethereum/schnorr/src/lib.rs +++ b/networks/ethereum/schnorr/src/lib.rs @@ -4,8 +4,16 @@ #![allow(non_snake_case)] /// The initialization bytecode of the Schnorr library. 
-pub const INIT_BYTECODE: &str = - include_str!(concat!(env!("OUT_DIR"), "/ethereum-schnorr-contract/Schnorr.bin")); +pub const BYTECODE: &[u8] = { + const BYTECODE_HEX: &[u8] = + include_bytes!(concat!(env!("OUT_DIR"), "/ethereum-schnorr-contract/Schnorr.bin")); + const BYTECODE: [u8; BYTECODE_HEX.len() / 2] = + match const_hex::const_decode_to_array::<{ BYTECODE_HEX.len() / 2 }>(BYTECODE_HEX) { + Ok(bytecode) => bytecode, + Err(_) => panic!("Schnorr.bin did not contain valid hex"), + }; + &BYTECODE +}; mod public_key; pub use public_key::PublicKey; diff --git a/networks/ethereum/schnorr/src/tests/premise.rs b/networks/ethereum/schnorr/src/tests/premise.rs index 28d9135d..dee78e44 100644 --- a/networks/ethereum/schnorr/src/tests/premise.rs +++ b/networks/ethereum/schnorr/src/tests/premise.rs @@ -18,14 +18,10 @@ use crate::{Signature, tests::test_key}; fn ecrecover(message: Scalar, odd_y: bool, r: Scalar, s: Scalar) -> Option<[u8; 20]> { let sig = ecdsa::Signature::from_scalars(r, s).ok()?; let message: [u8; 32] = message.to_repr().into(); - alloy_core::primitives::Signature::from_signature_and_parity( - sig, - alloy_core::primitives::Parity::Parity(odd_y), - ) - .ok()? - .recover_address_from_prehash(&alloy_core::primitives::B256::from(message)) - .ok() - .map(Into::into) + alloy_core::primitives::PrimitiveSignature::from_signature_and_parity(sig, odd_y) + .recover_address_from_prehash(&alloy_core::primitives::B256::from(message)) + .ok() + .map(Into::into) } // Test ecrecover behaves as expected diff --git a/processor/ethereum/primitives/Cargo.toml b/processor/ethereum/primitives/Cargo.toml index 180f5db7..05b23189 100644 --- a/processor/ethereum/primitives/Cargo.toml +++ b/processor/ethereum/primitives/Cargo.toml @@ -20,5 +20,5 @@ workspace = true group = { version = "0.13", default-features = false } k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic"] } -alloy-core = { version = "0.8", default-features = false } +alloy-primitives = { version = "0.8", default-features = false } alloy-consensus = { version = "0.9", default-features = false, features = ["k256"] } diff --git a/processor/ethereum/primitives/src/lib.rs b/processor/ethereum/primitives/src/lib.rs index eb7bd615..dadc5424 100644 --- a/processor/ethereum/primitives/src/lib.rs +++ b/processor/ethereum/primitives/src/lib.rs @@ -5,12 +5,12 @@ use group::ff::PrimeField; use k256::Scalar; -use alloy_core::primitives::PrimitiveSignature; +use alloy_primitives::PrimitiveSignature; use alloy_consensus::{SignableTransaction, Signed, TxLegacy}; /// The Keccak256 hash function. pub fn keccak256(data: impl AsRef<[u8]>) -> [u8; 32] { - alloy_core::primitives::keccak256(data.as_ref()).into() + alloy_primitives::keccak256(data.as_ref()).into() } /// Deterministically sign a transaction. 
@@ -67,7 +67,7 @@ fn test_deterministically_sign() { let signed = deterministically_sign(tx.clone()); assert!(signed.recover_signer().is_ok()); - let one = alloy_core::primitives::U256::from(1u64); + let one = alloy_primitives::U256::from(1u64); assert_eq!(signed.signature().r(), one); assert_eq!(signed.signature().s(), one); From 47560fa9a93612e1431b31ae37ceb5eaca3e94ea Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 19 Jan 2025 00:45:26 -0500 Subject: [PATCH 327/368] Test manually implemented serializations in the Router lib --- processor/ethereum/router/src/lib.rs | 2 +- processor/ethereum/router/src/tests/mod.rs | 15 ++-- .../ethereum/router/src/tests/read_write.rs | 85 +++++++++++++++++++ 3 files changed, 93 insertions(+), 9 deletions(-) create mode 100644 processor/ethereum/router/src/tests/read_write.rs diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index 1531a5b9..e13e9eba 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -207,7 +207,7 @@ impl From<&[(SeraiAddress, U256)]> for OutInstructions { /// An action which was executed by the Router. #[derive(Clone, PartialEq, Eq, Debug)] pub enum Executed { - /// Set a new key. + /// New key was set. SetKey { /// The nonce this was done with. nonce: u64, diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 601d2dfa..bb0da393 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -22,6 +22,8 @@ use ethereum_deployer::Deployer; use crate::{Coin, OutInstructions, Router}; +mod read_write; + #[test] fn execute_reentrancy_guard() { let hash = alloy_core::primitives::keccak256(b"ReentrancyGuard Router.execute"); @@ -88,7 +90,7 @@ async fn setup_test( // Publish it let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await; assert!(receipt.status()); - assert_eq!(u128::from(Router::DEPLOYMENT_GAS), ((receipt.gas_used + 1000) / 1000) * 1000); + assert_eq!(Router::DEPLOYMENT_GAS, ((receipt.gas_used + 1000) / 1000) * 1000); let router = Router::new(provider.clone(), &public_key).await.unwrap().unwrap(); @@ -126,10 +128,7 @@ async fn confirm_next_serai_key( let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(provider, tx).await; assert!(receipt.status()); - assert_eq!( - u128::from(Router::CONFIRM_NEXT_SERAI_KEY_GAS), - ((receipt.gas_used + 1000) / 1000) * 1000 - ); + assert_eq!(Router::CONFIRM_NEXT_SERAI_KEY_GAS, ((receipt.gas_used + 1000) / 1000) * 1000); receipt } @@ -167,7 +166,7 @@ async fn test_update_serai_key() { let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await; assert!(receipt.status()); - assert_eq!(u128::from(Router::UPDATE_SERAI_KEY_GAS), ((receipt.gas_used + 1000) / 1000) * 1000); + assert_eq!(Router::UPDATE_SERAI_KEY_GAS, ((receipt.gas_used + 1000) / 1000) * 1000); assert_eq!(router.key(receipt.block_hash.unwrap().into()).await.unwrap(), Some(key.1)); assert_eq!(router.next_key(receipt.block_hash.unwrap().into()).await.unwrap(), Some(update_to)); @@ -270,7 +269,7 @@ async fn test_eth_address_out_instruction() { let instructions = OutInstructions::from([].as_slice()); let receipt = publish_outs(&provider, &router, key, 2, Coin::Ether, fee, instructions).await; assert!(receipt.status()); - assert_eq!(u128::from(Router::EXECUTE_BASE_GAS), ((receipt.gas_used + 1000) / 1000) * 1000); + 
assert_eq!(Router::EXECUTE_BASE_GAS, ((receipt.gas_used + 1000) / 1000) * 1000); assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 3); } @@ -310,7 +309,7 @@ async fn escape_hatch( let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(provider, tx).await; assert!(receipt.status()); - assert_eq!(u128::from(Router::ESCAPE_HATCH_GAS), ((receipt.gas_used + 1000) / 1000) * 1000); + assert_eq!(Router::ESCAPE_HATCH_GAS, ((receipt.gas_used + 1000) / 1000) * 1000); receipt } diff --git a/processor/ethereum/router/src/tests/read_write.rs b/processor/ethereum/router/src/tests/read_write.rs new file mode 100644 index 00000000..3b6e6b73 --- /dev/null +++ b/processor/ethereum/router/src/tests/read_write.rs @@ -0,0 +1,85 @@ +use rand_core::{RngCore, OsRng}; + +use alloy_core::primitives::U256; + +use crate::{Coin, InInstruction, Executed}; + +fn coins() -> [Coin; 2] { + [Coin::Ether, { + let mut erc20 = [0; 20]; + OsRng.fill_bytes(&mut erc20); + Coin::Erc20(erc20.into()) + }] +} + +#[test] +fn test_coin_read_write() { + for coin in coins() { + let mut res = vec![]; + coin.write(&mut res).unwrap(); + assert_eq!(coin, Coin::read(&mut res.as_slice()).unwrap()); + } +} + +#[test] +fn test_in_instruction_read_write() { + for coin in coins() { + let instruction = InInstruction { + id: ( + { + let mut tx_id = [0; 32]; + OsRng.fill_bytes(&mut tx_id); + tx_id + }, + OsRng.next_u64(), + ), + from: { + let mut from = [0; 20]; + OsRng.fill_bytes(&mut from); + from + }, + coin, + amount: U256::from_le_bytes({ + let mut amount = [0; 32]; + OsRng.fill_bytes(&mut amount); + amount + }), + data: { + let len = usize::try_from(OsRng.next_u64() % 65536).unwrap(); + let mut data = vec![0; len]; + OsRng.fill_bytes(&mut data); + data + }, + }; + + let mut buf = vec![]; + instruction.write(&mut buf).unwrap(); + assert_eq!(InInstruction::read(&mut buf.as_slice()).unwrap(), instruction); + } +} + +#[test] +fn test_executed_read_write() { + for executed in [ + Executed::SetKey { + nonce: OsRng.next_u64(), + key: { + let mut key = [0; 32]; + OsRng.fill_bytes(&mut key); + key + }, + }, + Executed::Batch { + nonce: OsRng.next_u64(), + message_hash: { + let mut message_hash = [0; 32]; + OsRng.fill_bytes(&mut message_hash); + message_hash + }, + }, + ] { + let mut res = vec![]; + executed.write(&mut res).unwrap(); + assert_eq!(executed, Executed::read(&mut res.as_slice()).unwrap()); + } +} From 0b30ac175e310eae1f064c45b99e250aa75d1792 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 19 Jan 2025 02:27:35 -0500 Subject: [PATCH 328/368] Restore workspace-wide clippy Fixes accumulated errors in the Substrate code. Modifies the runtime build to work with a modern clippy. Removes e2e tests from the workspace. 
--- Cargo.lock | 3332 ++++++++++++++++- Cargo.toml | 6 +- coordinator/substrate/src/canonical.rs | 2 +- .../ringct/bulletproofs/src/original/mod.rs | 2 +- networks/monero/wallet/src/tests/extra.rs | 1 + processor/scanner/src/batch/mod.rs | 3 +- substrate/abi/src/in_instructions.rs | 2 +- substrate/client/tests/batch.rs | 14 +- substrate/client/tests/burn.rs | 17 +- .../client/tests/common/genesis_liquidity.rs | 17 +- .../client/tests/common/in_instructions.rs | 37 +- substrate/client/tests/dex.rs | 12 +- substrate/client/tests/emissions.rs | 19 +- substrate/client/tests/validator_sets.rs | 5 +- substrate/in-instructions/pallet/Cargo.toml | 2 + substrate/in-instructions/pallet/src/lib.rs | 28 +- .../in-instructions/primitives/src/lib.rs | 5 +- substrate/node/Cargo.toml | 98 +- substrate/runtime/build.rs | 9 +- substrate/runtime/src/abi.rs | 2 - substrate/validator-sets/pallet/src/lib.rs | 2 +- tests/message-queue/src/lib.rs | 9 +- 22 files changed, 3329 insertions(+), 295 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index b247b181..b8cdc63f 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -375,7 +375,7 @@ dependencies = [ "serde_json", "tokio", "tokio-stream", - "tower", + "tower 0.5.2", "tracing", "wasmtimer", ] @@ -444,7 +444,7 @@ dependencies = [ "alloy-transport", "serde_json", "simple-request", - "tower", + "tower 0.5.2", ] [[package]] @@ -512,14 +512,14 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d17722a198f33bbd25337660787aea8b8f57814febb7c746bc30407bdfc39448" dependencies = [ "alloy-json-rpc", - "base64", + "base64 0.22.1", "futures-util", "futures-utils-wasm", "serde", "serde_json", "thiserror 2.0.9", "tokio", - "tower", + "tower 0.5.2", "tracing", "url", "wasmtimer", @@ -575,12 +575,55 @@ dependencies = [ "winapi", ] +[[package]] +name = "anstream" +version = "0.6.18" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8acc5369981196006228e28809f761875c0327210a891e941f4c683b3a99529b" +dependencies = [ + "anstyle", + "anstyle-parse", + "anstyle-query", + "anstyle-wincon", + "colorchoice", + "is_terminal_polyfill", + "utf8parse", +] + [[package]] name = "anstyle" version = "1.0.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "55cc3b69f167a1ef2e161439aa98aed94e6028e5f9a59be9a6ffb47aef1651f9" +[[package]] +name = "anstyle-parse" +version = "0.2.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3b2d16507662817a6a20a9ea92df6652ee4f94f914589377d69f3b21bc5798a9" +dependencies = [ + "utf8parse", +] + +[[package]] +name = "anstyle-query" +version = "1.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "79947af37f4177cfead1110013d678905c37501914fba0efea834c3fe9a8d60c" +dependencies = [ + "windows-sys 0.59.0", +] + +[[package]] +name = "anstyle-wincon" +version = "3.0.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2109dbce0e72be3ec00bed26e6a7479ca384ad226efdd66db8fa2e3a38c83125" +dependencies = [ + "anstyle", + "windows-sys 0.59.0", +] + [[package]] name = "anyhow" version = "1.0.95" @@ -596,6 +639,12 @@ dependencies = [ "num-traits", ] +[[package]] +name = "arbitrary" +version = "1.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dde20b3d026af13f561bdd0f15edf01fc734f0dafcedbaf42bba506a9517f223" + [[package]] name = "ark-ff" version = "0.3.0" @@ -743,12 +792,12 @@ dependencies = [ [[package]] name = "asn1-rs" -version = "0.6.2" +version = "0.5.2" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "5493c3bedbacf7fd7382c6346bbd66687d12bbaad3a89a2d2c303ee6cf20b048" +checksum = "7f6fd5ddaf0351dff5b8da21b2fb4ff8e08ddd02857f0bf69c47639106c0fff0" dependencies = [ - "asn1-rs-derive", - "asn1-rs-impl", + "asn1-rs-derive 0.4.0", + "asn1-rs-impl 0.1.0", "displaydoc", "nom", "num-traits", @@ -757,6 +806,34 @@ dependencies = [ "time", ] +[[package]] +name = "asn1-rs" +version = "0.6.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5493c3bedbacf7fd7382c6346bbd66687d12bbaad3a89a2d2c303ee6cf20b048" +dependencies = [ + "asn1-rs-derive 0.5.1", + "asn1-rs-impl 0.2.0", + "displaydoc", + "nom", + "num-traits", + "rusticata-macros", + "thiserror 1.0.69", + "time", +] + +[[package]] +name = "asn1-rs-derive" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "726535892e8eae7e70657b4c8ea93d26b8553afb1ce617caee529ef96d7dee6c" +dependencies = [ + "proc-macro2", + "quote", + "syn 1.0.109", + "synstructure 0.12.6", +] + [[package]] name = "asn1-rs-derive" version = "0.5.1" @@ -766,7 +843,18 @@ dependencies = [ "proc-macro2", "quote", "syn 2.0.94", - "synstructure", + "synstructure 0.13.1", +] + +[[package]] +name = "asn1-rs-impl" +version = "0.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2777730b2039ac0f95f093556e61b6d26cebed5393ca6f152717777cec3a42ed" +dependencies = [ + "proc-macro2", + "quote", + "syn 1.0.109", ] [[package]] @@ -780,6 +868,17 @@ dependencies = [ "syn 2.0.94", ] +[[package]] +name = "async-channel" +version = "1.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "81953c529336010edd6d8e358f886d9581267795c61b19475b71314bffa46d35" +dependencies = [ + "concurrent-queue", + "event-listener 2.5.3", + "futures-core", +] + [[package]] name = "async-io" version = "2.4.0" @@ -805,7 +904,7 @@ version = "3.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ff6e472cdea888a4bd64f342f09b3f50e1886d32afe8df3d663c01140b811b18" dependencies = [ - "event-listener", + "event-listener 5.3.1", "event-listener-strategy", "pin-project-lite", ] @@ -843,6 +942,19 @@ dependencies = [ "syn 2.0.94", ] +[[package]] +name = "asynchronous-codec" +version = "0.6.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4057f2c32adbb2fc158e22fb38433c8e9bbf76b75a4732c7c0cbaf695fb65568" +dependencies = [ + "bytes", + "futures-sink", + "futures-util", + "memchr", + "pin-project-lite", +] + [[package]] name = "asynchronous-codec" version = "0.7.0" @@ -921,6 +1033,18 @@ dependencies = [ "bitcoin_hashes", ] +[[package]] +name = "base64" +version = "0.13.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9e1b586273c5702936fe7b7d6896644d8be71e6314cfe09d3167c95f712589e8" + +[[package]] +name = "base64" +version = "0.21.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9d297deb1925b89f2ccc13d7635fa0714f12c87adce1c75356b39ca9b7178567" + [[package]] name = "base64" version = "0.22.1" @@ -939,6 +1063,15 @@ version = "0.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d965446196e3b7decd44aa7ee49e31d630118f90ef12f97900f262eb915c951d" +[[package]] +name = "beef" +version = "0.5.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3a8241f3ebb85c056b509d4327ad0358fbbba6ffb340bf388f26350aeda225b1" +dependencies = [ + "serde", +] + [[package]] name = 
"bincode" version = "1.3.3" @@ -1101,6 +1234,39 @@ dependencies = [ "constant_time_eq", ] +[[package]] +name = "blake2s_simd" +version = "1.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "94230421e395b9920d23df13ea5d77a20e1725331f90fbbf6df6040b33f756ae" +dependencies = [ + "arrayref", + "arrayvec", + "constant_time_eq", +] + +[[package]] +name = "blake3" +version = "1.5.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b8ee0c1824c4dea5b5f81736aff91bae041d2c07ee1192bec91054e10e3e601e" +dependencies = [ + "arrayref", + "arrayvec", + "cc", + "cfg-if", + "constant_time_eq", +] + +[[package]] +name = "block-buffer" +version = "0.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4152116fd6e9dadb291ae18fc1ec3575ed6d84c29642d97890f4b4a3417297e4" +dependencies = [ + "generic-array 0.14.7", +] + [[package]] name = "block-buffer" version = "0.10.4" @@ -1150,7 +1316,7 @@ version = "0.17.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d41711ad46fda47cd701f6908e59d1bd6b9a2b7464c0d0aeab95c6d37096ff8a" dependencies = [ - "base64", + "base64 0.22.1", "bollard-stubs", "bytes", "futures-core", @@ -1195,7 +1361,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2506947f73ad44e344215ccd6403ac2ae18cd8e046e581a441bf8d199f257f03" dependencies = [ "borsh-derive", - "cfg_aliases", + "cfg_aliases 0.2.1", ] [[package]] @@ -1232,6 +1398,16 @@ dependencies = [ "tinyvec", ] +[[package]] +name = "bstr" +version = "1.11.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "531a9155a481e2ee699d4f98f43c0ca4ff8ee1bfd55c31e9e98fb29d2b176fe0" +dependencies = [ + "memchr", + "serde", +] + [[package]] name = "build-helper" version = "0.1.1" @@ -1371,6 +1547,12 @@ version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" +[[package]] +name = "cfg_aliases" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fd16c4719339c4530435d38e511904438d07cce7950afa3718a84ac36c10e89e" + [[package]] name = "cfg_aliases" version = "0.2.1" @@ -1416,6 +1598,19 @@ dependencies = [ "windows-targets 0.52.6", ] +[[package]] +name = "cid" +version = "0.10.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fd94671561e36e4e7de75f753f577edafb0e7c05d6e4547229fdf7938fbcd2c3" +dependencies = [ + "core2", + "multibase", + "multihash 0.18.1", + "serde", + "unsigned-varint 0.7.2", +] + [[package]] name = "cipher" version = "0.4.4" @@ -1461,6 +1656,46 @@ dependencies = [ "libloading", ] +[[package]] +name = "clap" +version = "4.5.23" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3135e7ec2ef7b10c6ed8950f0f792ed96ee093fa088608f1c76e569722700c84" +dependencies = [ + "clap_builder", + "clap_derive", +] + +[[package]] +name = "clap_builder" +version = "4.5.23" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "30582fc632330df2bd26877bde0c1f4470d57c582bbc070376afcd04d8cb4838" +dependencies = [ + "anstream", + "anstyle", + "clap_lex", + "strsim", +] + +[[package]] +name = "clap_derive" +version = "4.5.18" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4ac6a0c7b1a9e9a5186361f67dfa1b88213572f427fb9ab038efb2bd8c582dab" +dependencies = [ + "heck 0.5.0", + "proc-macro2", + "quote", + "syn 2.0.94", +] + +[[package]] 
+name = "clap_lex" +version = "0.7.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f46ad14479a25103f283c0f10005961cf086d8dc42205bb44c46ac563475dca6" + [[package]] name = "codespan-reporting" version = "0.11.1" @@ -1471,6 +1706,12 @@ dependencies = [ "unicode-width", ] +[[package]] +name = "colorchoice" +version = "1.0.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5b63caa9aa9397e2d9480a9b13673856c78d8ac123288526c37d7839f2a86990" + [[package]] name = "concurrent-queue" version = "2.5.0" @@ -1578,6 +1819,60 @@ dependencies = [ "libc", ] +[[package]] +name = "cranelift-bforest" +version = "0.99.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5a91a1ccf6fb772808742db2f51e2179f25b1ec559cbe39ea080c72ff61caf8f" +dependencies = [ + "cranelift-entity", +] + +[[package]] +name = "cranelift-codegen" +version = "0.99.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "169db1a457791bff4fd1fc585bb5cc515609647e0420a7d5c98d7700c59c2d00" +dependencies = [ + "bumpalo", + "cranelift-bforest", + "cranelift-codegen-meta", + "cranelift-codegen-shared", + "cranelift-control", + "cranelift-entity", + "cranelift-isle", + "gimli 0.27.3", + "hashbrown 0.13.2", + "log", + "regalloc2", + "smallvec", + "target-lexicon", +] + +[[package]] +name = "cranelift-codegen-meta" +version = "0.99.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3486b93751ef19e6d6eef66d2c0e83ed3d2ba01da1919ed2747f2f7bd8ba3419" +dependencies = [ + "cranelift-codegen-shared", +] + +[[package]] +name = "cranelift-codegen-shared" +version = "0.99.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "86a1205ab18e7cd25dc4eca5246e56b506ced3feb8d95a8d776195e48d2cd4ef" + +[[package]] +name = "cranelift-control" +version = "0.99.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1b108cae0f724ddfdec1871a0dc193a607e0c2d960f083cfefaae8ccf655eff2" +dependencies = [ + "arbitrary", +] + [[package]] name = "cranelift-entity" version = "0.99.2" @@ -1587,6 +1882,51 @@ dependencies = [ "serde", ] +[[package]] +name = "cranelift-frontend" +version = "0.99.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b7a94c4c5508b7407e125af9d5320694b7423322e59a4ac0d07919ae254347ca" +dependencies = [ + "cranelift-codegen", + "log", + "smallvec", + "target-lexicon", +] + +[[package]] +name = "cranelift-isle" +version = "0.99.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ef1f888d0845dcd6be4d625b91d9d8308f3d95bed5c5d4072ce38e1917faa505" + +[[package]] +name = "cranelift-native" +version = "0.99.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9ad5966da08f1e96a3ae63be49966a85c9b249fa465f8cf1b66469a82b1004a0" +dependencies = [ + "cranelift-codegen", + "libc", + "target-lexicon", +] + +[[package]] +name = "cranelift-wasm" +version = "0.99.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0d8635c88b424f1d232436f683a301143b36953cd98fc6f86f7bac862dfeb6f5" +dependencies = [ + "cranelift-codegen", + "cranelift-entity", + "cranelift-frontend", + "itertools 0.10.5", + "log", + "smallvec", + "wasmparser", + "wasmtime-types", +] + [[package]] name = "crc32fast" version = "1.4.2" @@ -1596,6 +1936,25 @@ dependencies = [ "cfg-if", ] +[[package]] +name = "crossbeam-deque" +version = "0.8.6" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "9dd111b7b7f7d55b72c0a6ae361660ee5853c9af73f70c3c2ef6858b950e2e51" +dependencies = [ + "crossbeam-epoch", + "crossbeam-utils", +] + +[[package]] +name = "crossbeam-epoch" +version = "0.9.18" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5b82ac4a3c2ca9c3460964f020e1402edd5753411d7737aa39c3714ad1b5420e" +dependencies = [ + "crossbeam-utils", +] + [[package]] name = "crossbeam-utils" version = "0.8.21" @@ -1790,13 +2149,27 @@ dependencies = [ "zeroize", ] +[[package]] +name = "der-parser" +version = "8.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dbd676fbbab537128ef0278adb5576cf363cff6aa22a7b24effe97347cfab61e" +dependencies = [ + "asn1-rs 0.5.2", + "displaydoc", + "nom", + "num-bigint", + "num-traits", + "rusticata-macros", +] + [[package]] name = "der-parser" version = "9.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5cd0a5c643689626bec213c4d8bd4d96acc8ffdb4ad4bb6bc16abf27d5f4b553" dependencies = [ - "asn1-rs", + "asn1-rs 0.6.2", "displaydoc", "nom", "num-bigint", @@ -1857,6 +2230,12 @@ dependencies = [ "unicode-xid", ] +[[package]] +name = "difflib" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6184e33543162437515c2e2b48714794e37845ec9851711914eec9d308f6ebe8" + [[package]] name = "digest" version = "0.9.0" @@ -1872,7 +2251,7 @@ version = "0.10.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292" dependencies = [ - "block-buffer", + "block-buffer 0.10.4", "const-oid", "crypto-common", "subtle", @@ -1985,7 +2364,7 @@ checksum = "b8648c989dfd460932144f0ce5c55ff35cf0de758f89ea20e3b3d0d3f5e1cce6" dependencies = [ "anyhow", "async-trait", - "base64", + "base64 0.22.1", "bollard", "bytes", "dyn-clone", @@ -2000,6 +2379,12 @@ dependencies = [ "tracing", ] +[[package]] +name = "downcast" +version = "0.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1435fa1053d8b2fbbe9be7e97eca7f33d37b28409959813daefc1446a14247f1" + [[package]] name = "dtoa" version = "1.0.9" @@ -2153,6 +2538,18 @@ dependencies = [ "zeroize", ] +[[package]] +name = "enum-as-inner" +version = "0.5.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c9720bba047d567ffc8a3cba48bf19126600e249ab7f128e9233e6376976a116" +dependencies = [ + "heck 0.4.1", + "proc-macro2", + "quote", + "syn 1.0.109", +] + [[package]] name = "enum-as-inner" version = "0.6.1" @@ -2172,7 +2569,10 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4cd405aab171cb85d6735e5c8d9db038c17d3ca007a4d2c25f337935c3d90580" dependencies = [ "humantime", + "is-terminal", "log", + "regex", + "termcolor", ] [[package]] @@ -2218,6 +2618,12 @@ dependencies = [ "tokio", ] +[[package]] +name = "event-listener" +version = "2.5.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0206175f82b8d6bf6652ff7d71a1e27fd2e4efde587fd368662814d6ec1d9ce0" + [[package]] name = "event-listener" version = "5.3.1" @@ -2235,10 +2641,19 @@ version = "0.5.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3c3e4e0dd3673c1139bf041f3008816d9cf2946bbfac2945c09e523b8d7b05b2" dependencies = [ - "event-listener", + "event-listener 5.3.1", "pin-project-lite", ] +[[package]] +name = "exit-future" +version = "0.2.0" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "e43f2f1833d64e33f15592464d6fdd70f349dda7b1a53088eb83cd94014008c5" +dependencies = [ + "futures", +] + [[package]] name = "expander" version = "2.0.0" @@ -2275,6 +2690,15 @@ dependencies = [ "bytes", ] +[[package]] +name = "fdlimit" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2c4c9e43643f5a3be4ca5b67d26b98031ff9db6806c3440ae32e02e3ceac3f1b" +dependencies = [ + "libc", +] + [[package]] name = "ff" version = "0.13.0" @@ -2306,6 +2730,16 @@ version = "0.2.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "28dea519a9695b9977216879a3ebfddf92f1c08c05d984f8996aecd6ecdc811d" +[[package]] +name = "file-per-thread-logger" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8a3cc21c33af89af0930c8cae4ade5e6fdc17b5d2c97b3d2e2edb67a1cf683f3" +dependencies = [ + "env_logger", + "log", +] + [[package]] name = "filetime" version = "0.2.25" @@ -2346,6 +2780,12 @@ dependencies = [ "static_assertions", ] +[[package]] +name = "fixedbitset" +version = "0.4.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80" + [[package]] name = "flexible-transcript" version = "0.3.2" @@ -2359,6 +2799,15 @@ dependencies = [ "zeroize", ] +[[package]] +name = "float-cmp" +version = "0.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "98de4bbd547a563b716d8dfa9aad1cb19bfab00f4fa09a6a4ed21dbcf44ce9c4" +dependencies = [ + "num-traits", +] + [[package]] name = "fnv" version = "1.0.7" @@ -2371,6 +2820,14 @@ version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a0d2fde1f7b3d48b8395d5f2de76c18a528bd6a9cdde438df747bfcba3e05d6f" +[[package]] +name = "fork-tree" +version = "3.0.0" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "parity-scale-codec", +] + [[package]] name = "form_urlencoded" version = "1.2.1" @@ -2380,6 +2837,12 @@ dependencies = [ "percent-encoding", ] +[[package]] +name = "fragile" +version = "2.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6c2141d6d6c8512188a7891b4b01590a45f6dac67afb4f255c4124dbb86d4eaa" + [[package]] name = "frame-benchmarking" version = "4.0.0-dev" @@ -2601,6 +3064,16 @@ dependencies = [ "futures-util", ] +[[package]] +name = "futures-bounded" +version = "0.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8b07bbbe7d7e78809544c6f718d875627addc73a7c3582447abc052cd3dc67e0" +dependencies = [ + "futures-timer", + "futures-util", +] + [[package]] name = "futures-bounded" version = "0.2.4" @@ -2669,6 +3142,16 @@ dependencies = [ "syn 2.0.94", ] +[[package]] +name = "futures-rustls" +version = "0.24.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "35bd3cf68c183738046838e300353e4716c674dc5e56890de4826801a6622a28" +dependencies = [ + "futures-io", + "rustls 0.21.12", +] + [[package]] name = "futures-rustls" version = "0.26.0" @@ -2676,7 +3159,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a8f2f12607f92c69b12ed746fabf9ca4f5c482cba46679c1a75b874ed7c26adb" dependencies = [ "futures-io", - "rustls", + "rustls 0.23.20", "rustls-pki-types", ] @@ -2874,6 +3357,19 @@ version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" 
checksum = "a8d1add55171497b4705a648c6b583acafb01d58050a51727785f0b2c8e0a2b2" +[[package]] +name = "globset" +version = "0.4.15" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "15f1ce686646e7f1e19bf7d5533fe443a45dbfb990e00629110797578b42fb19" +dependencies = [ + "aho-corasick", + "bstr", + "log", + "regex-automata 0.4.9", + "regex-syntax 0.8.5", +] + [[package]] name = "group" version = "0.13.0" @@ -3019,15 +3515,15 @@ dependencies = [ "async-trait", "cfg-if", "data-encoding", - "enum-as-inner", + "enum-as-inner 0.6.1", "futures-channel", "futures-io", "futures-util", - "idna", + "idna 1.0.3", "ipnet", "once_cell", "rand", - "socket2", + "socket2 0.5.8", "thiserror 1.0.69", "tinyvec", "tokio", @@ -3150,6 +3646,12 @@ dependencies = [ "pin-project-lite", ] +[[package]] +name = "http-range-header" +version = "0.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "add0ab9360ddbd88cfeb3bd9574a1d85cfdfa14db10b3e21d3700dbc4328758f" + [[package]] name = "httparse" version = "1.9.5" @@ -3185,7 +3687,7 @@ dependencies = [ "httpdate", "itoa", "pin-project-lite", - "socket2", + "socket2 0.5.8", "tokio", "tower-service", "tracing", @@ -3237,7 +3739,7 @@ dependencies = [ "http 1.2.0", "hyper 1.4.1", "hyper-util", - "rustls", + "rustls 0.23.20", "rustls-native-certs", "rustls-pki-types", "tokio", @@ -3258,7 +3760,7 @@ dependencies = [ "http-body 1.0.1", "hyper 1.4.1", "pin-project-lite", - "socket2", + "socket2 0.5.8", "tokio", "tower-service", "tracing", @@ -3302,6 +3804,27 @@ dependencies = [ "cc", ] +[[package]] +name = "idna" +version = "0.2.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "418a0a6fab821475f634efe3ccc45c013f742efe03d853e8d3355d5cb850ecf8" +dependencies = [ + "matches", + "unicode-bidi", + "unicode-normalization", +] + +[[package]] +name = "idna" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7d20d6b07bfbc108882d88ed8e37d39636dcc260e15e30c45e6ba089610b917c" +dependencies = [ + "unicode-bidi", + "unicode-normalization", +] + [[package]] name = "idna" version = "1.0.3" @@ -3464,13 +3987,19 @@ dependencies = [ "num-traits", ] +[[package]] +name = "ip_network" +version = "0.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "aa2f047c0a98b2f299aa5d6d7088443570faae494e9ae1305e48be000c9e0eb1" + [[package]] name = "ipconfig" version = "0.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b58db92f96b720de98181bbbe63c831e87005ab460c1bf306eb2622b4707997f" dependencies = [ - "socket2", + "socket2 0.5.8", "widestring", "windows-sys 0.48.0", "winreg", @@ -3486,6 +4015,12 @@ checksum = "ddc24109865250148c2e0f3d25d4f0f479571723792d3802153c60922a4fb708" name = "is-terminal" version = "0.4.10" +[[package]] +name = "is_terminal_polyfill" +version = "1.70.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7943c866cc5cd64cbc25b2e01621d07fa8eb2a1a23160ee81ce38704e97b8ecf" + [[package]] name = "itertools" version = "0.10.5" @@ -3529,6 +4064,94 @@ dependencies = [ "wasm-bindgen", ] +[[package]] +name = "jsonrpsee" +version = "0.16.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "367a292944c07385839818bb71c8d76611138e2dedb0677d035b8da21d29c78b" +dependencies = [ + "jsonrpsee-core", + "jsonrpsee-proc-macros", + "jsonrpsee-server", + "jsonrpsee-types", + "tracing", +] + +[[package]] +name = "jsonrpsee-core" +version = "0.16.3" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "2b5dde66c53d6dcdc8caea1874a45632ec0fcf5b437789f1e45766a1512ce803" +dependencies = [ + "anyhow", + "arrayvec", + "async-trait", + "beef", + "futures-channel", + "futures-util", + "globset", + "hyper 0.14.30", + "jsonrpsee-types", + "parking_lot 0.12.3", + "rand", + "rustc-hash 1.1.0", + "serde", + "serde_json", + "soketto 0.7.1", + "thiserror 1.0.69", + "tokio", + "tracing", +] + +[[package]] +name = "jsonrpsee-proc-macros" +version = "0.16.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "44e8ab85614a08792b9bff6c8feee23be78c98d0182d4c622c05256ab553892a" +dependencies = [ + "heck 0.4.1", + "proc-macro-crate 1.3.1", + "proc-macro2", + "quote", + "syn 1.0.109", +] + +[[package]] +name = "jsonrpsee-server" +version = "0.16.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cf4d945a6008c9b03db3354fb3c83ee02d2faa9f2e755ec1dfb69c3551b8f4ba" +dependencies = [ + "futures-channel", + "futures-util", + "http 0.2.12", + "hyper 0.14.30", + "jsonrpsee-core", + "jsonrpsee-types", + "serde", + "serde_json", + "soketto 0.7.1", + "tokio", + "tokio-stream", + "tokio-util", + "tower 0.4.13", + "tracing", +] + +[[package]] +name = "jsonrpsee-types" +version = "0.16.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "245ba8e5aa633dd1c1e4fae72bce06e71f42d34c14a2767c6b4d173b57bee5e5" +dependencies = [ + "anyhow", + "beef", + "serde", + "serde_json", + "thiserror 1.0.69", + "tracing", +] + [[package]] name = "k256" version = "0.13.4" @@ -3540,7 +4163,6 @@ dependencies = [ "elliptic-curve", "once_cell", "sha2", - "signature", ] [[package]] @@ -3562,6 +4184,39 @@ dependencies = [ "sha3-asm", ] +[[package]] +name = "kvdb" +version = "0.13.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e7d770dcb02bf6835887c3a979b5107a04ff4bbde97a5f0928d27404a155add9" +dependencies = [ + "smallvec", +] + +[[package]] +name = "kvdb-memorydb" +version = "0.13.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bf7a85fe66f9ff9cd74e169fdd2c94c6e1e74c412c99a73b4df3200b5d3760b2" +dependencies = [ + "kvdb", + "parking_lot 0.12.3", +] + +[[package]] +name = "kvdb-rocksdb" +version = "0.19.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b644c70b92285f66bfc2032922a79000ea30af7bc2ab31902992a5dcb9b434f6" +dependencies = [ + "kvdb", + "num_cpus", + "parking_lot 0.12.3", + "regex", + "rocksdb 0.21.0", + "smallvec", +] + [[package]] name = "lazy_static" version = "1.5.0" @@ -3604,6 +4259,43 @@ version = "0.2.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8355be11b20d696c8f18f6cc018c4e372165b1fa8126cef092399c9951984ffa" +[[package]] +name = "libp2p" +version = "0.52.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e94495eb319a85b70a68b85e2389a95bb3555c71c49025b78c691a854a7e6464" +dependencies = [ + "bytes", + "either", + "futures", + "futures-timer", + "getrandom", + "instant", + "libp2p-allow-block-list 0.2.0", + "libp2p-connection-limits 0.2.1", + "libp2p-core 0.40.1", + "libp2p-dns 0.40.1", + "libp2p-identify", + "libp2p-identity", + "libp2p-kad", + "libp2p-mdns 0.44.0", + "libp2p-metrics 0.13.1", + "libp2p-noise 0.43.2", + "libp2p-ping 0.43.1", + "libp2p-quic 0.9.3", + "libp2p-request-response 0.25.3", + "libp2p-swarm 0.43.7", + "libp2p-tcp 0.40.1", + "libp2p-upnp 0.1.1", + "libp2p-wasm-ext", + "libp2p-websocket", + 
"libp2p-yamux 0.44.1", + "multiaddr", + "pin-project", + "rw-stream-sink", + "thiserror 1.0.69", +] + [[package]] name = "libp2p" version = "0.54.1" @@ -3615,37 +4307,61 @@ dependencies = [ "futures", "futures-timer", "getrandom", - "libp2p-allow-block-list", - "libp2p-connection-limits", - "libp2p-core", - "libp2p-dns", + "libp2p-allow-block-list 0.4.0", + "libp2p-connection-limits 0.4.0", + "libp2p-core 0.42.0", + "libp2p-dns 0.42.0", "libp2p-gossipsub", "libp2p-identity", - "libp2p-mdns", - "libp2p-metrics", - "libp2p-noise", - "libp2p-ping", - "libp2p-quic", - "libp2p-request-response", - "libp2p-swarm", - "libp2p-tcp", - "libp2p-upnp", - "libp2p-yamux", + "libp2p-mdns 0.46.0", + "libp2p-metrics 0.15.0", + "libp2p-noise 0.45.0", + "libp2p-ping 0.45.0", + "libp2p-quic 0.11.1", + "libp2p-request-response 0.27.0", + "libp2p-swarm 0.45.1", + "libp2p-tcp 0.42.0", + "libp2p-upnp 0.3.0", + "libp2p-yamux 0.46.0", "multiaddr", "pin-project", "rw-stream-sink", "thiserror 1.0.69", ] +[[package]] +name = "libp2p-allow-block-list" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "55b46558c5c0bf99d3e2a1a38fd54ff5476ca66dd1737b12466a1824dd219311" +dependencies = [ + "libp2p-core 0.40.1", + "libp2p-identity", + "libp2p-swarm 0.43.7", + "void", +] + [[package]] name = "libp2p-allow-block-list" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d1027ccf8d70320ed77e984f273bc8ce952f623762cb9bf2d126df73caef8041" dependencies = [ - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-identity", - "libp2p-swarm", + "libp2p-swarm 0.45.1", + "void", +] + +[[package]] +name = "libp2p-connection-limits" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2f5107ad45cb20b2f6c3628c7b6014b996fcb13a88053f4569c872c6e30abf58" +dependencies = [ + "libp2p-core 0.40.1", + "libp2p-identity", + "libp2p-swarm 0.43.7", "void", ] @@ -3655,9 +4371,37 @@ version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8d003540ee8baef0d254f7b6bfd79bac3ddf774662ca0abf69186d517ef82ad8" dependencies = [ - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-identity", - "libp2p-swarm", + "libp2p-swarm 0.45.1", + "void", +] + +[[package]] +name = "libp2p-core" +version = "0.40.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dd44289ab25e4c9230d9246c475a22241e301b23e8f4061d3bdef304a1a99713" +dependencies = [ + "either", + "fnv", + "futures", + "futures-timer", + "instant", + "libp2p-identity", + "log", + "multiaddr", + "multihash 0.19.1", + "multistream-select", + "once_cell", + "parking_lot 0.12.3", + "pin-project", + "quick-protobuf", + "rand", + "rw-stream-sink", + "smallvec", + "thiserror 1.0.69", + "unsigned-varint 0.7.2", "void", ] @@ -3673,7 +4417,7 @@ dependencies = [ "futures-timer", "libp2p-identity", "multiaddr", - "multihash", + "multihash 0.19.1", "multistream-select", "once_cell", "parking_lot 0.12.3", @@ -3689,6 +4433,22 @@ dependencies = [ "web-time", ] +[[package]] +name = "libp2p-dns" +version = "0.40.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e6a18db73084b4da2871438f6239fef35190b05023de7656e877c18a00541a3b" +dependencies = [ + "async-trait", + "futures", + "libp2p-core 0.40.1", + "libp2p-identity", + "log", + "parking_lot 0.12.3", + "smallvec", + "trust-dns-resolver", +] + [[package]] name = "libp2p-dns" version = "0.42.0" @@ -3698,7 +4458,7 @@ dependencies = [ "async-trait", "futures", 
"hickory-resolver", - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-identity", "parking_lot 0.12.3", "smallvec", @@ -3711,8 +4471,8 @@ version = "0.47.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b4e830fdf24ac8c444c12415903174d506e1e077fbe3875c404a78c5935a8543" dependencies = [ - "asynchronous-codec", - "base64", + "asynchronous-codec 0.7.0", + "base64 0.22.1", "byteorder", "bytes", "either", @@ -3721,12 +4481,12 @@ dependencies = [ "futures-ticker", "getrandom", "hex_fmt", - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-identity", - "libp2p-swarm", - "prometheus-client", + "libp2p-swarm 0.45.1", + "prometheus-client 0.22.3", "quick-protobuf", - "quick-protobuf-codec", + "quick-protobuf-codec 0.3.1", "rand", "regex", "sha2", @@ -3736,6 +4496,29 @@ dependencies = [ "web-time", ] +[[package]] +name = "libp2p-identify" +version = "0.43.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "45a96638a0a176bec0a4bcaebc1afa8cf909b114477209d7456ade52c61cd9cd" +dependencies = [ + "asynchronous-codec 0.6.2", + "either", + "futures", + "futures-bounded 0.1.0", + "futures-timer", + "libp2p-core 0.40.1", + "libp2p-identity", + "libp2p-swarm 0.43.7", + "log", + "lru", + "quick-protobuf", + "quick-protobuf-codec 0.2.0", + "smallvec", + "thiserror 1.0.69", + "void", +] + [[package]] name = "libp2p-identity" version = "0.2.10" @@ -3745,7 +4528,7 @@ dependencies = [ "bs58", "ed25519-dalek", "hkdf", - "multihash", + "multihash 0.19.1", "quick-protobuf", "rand", "sha2", @@ -3754,6 +4537,56 @@ dependencies = [ "zeroize", ] +[[package]] +name = "libp2p-kad" +version = "0.44.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "16ea178dabba6dde6ffc260a8e0452ccdc8f79becf544946692fff9d412fc29d" +dependencies = [ + "arrayvec", + "asynchronous-codec 0.6.2", + "bytes", + "either", + "fnv", + "futures", + "futures-timer", + "instant", + "libp2p-core 0.40.1", + "libp2p-identity", + "libp2p-swarm 0.43.7", + "log", + "quick-protobuf", + "quick-protobuf-codec 0.2.0", + "rand", + "sha2", + "smallvec", + "thiserror 1.0.69", + "uint", + "unsigned-varint 0.7.2", + "void", +] + +[[package]] +name = "libp2p-mdns" +version = "0.44.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "42a2567c305232f5ef54185e9604579a894fd0674819402bb0ac0246da82f52a" +dependencies = [ + "data-encoding", + "futures", + "if-watch", + "libp2p-core 0.40.1", + "libp2p-identity", + "libp2p-swarm 0.43.7", + "log", + "rand", + "smallvec", + "socket2 0.5.8", + "tokio", + "trust-dns-proto 0.22.0", + "void", +] + [[package]] name = "libp2p-mdns" version = "0.46.0" @@ -3764,17 +4597,34 @@ dependencies = [ "futures", "hickory-proto", "if-watch", - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-identity", - "libp2p-swarm", + "libp2p-swarm 0.45.1", "rand", "smallvec", - "socket2", + "socket2 0.5.8", "tokio", "tracing", "void", ] +[[package]] +name = "libp2p-metrics" +version = "0.13.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "239ba7d28f8d0b5d77760dc6619c05c7e88e74ec8fbbe97f856f20a56745e620" +dependencies = [ + "instant", + "libp2p-core 0.40.1", + "libp2p-identify", + "libp2p-identity", + "libp2p-kad", + "libp2p-ping 0.43.1", + "libp2p-swarm 0.43.7", + "once_cell", + "prometheus-client 0.21.2", +] + [[package]] name = "libp2p-metrics" version = "0.15.0" @@ -3782,30 +4632,55 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"77ebafa94a717c8442d8db8d3ae5d1c6a15e30f2d347e0cd31d057ca72e42566" dependencies = [ "futures", - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-gossipsub", "libp2p-identity", - "libp2p-ping", - "libp2p-swarm", + "libp2p-ping 0.45.0", + "libp2p-swarm 0.45.1", "pin-project", - "prometheus-client", + "prometheus-client 0.22.3", "web-time", ] +[[package]] +name = "libp2p-noise" +version = "0.43.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d2eeec39ad3ad0677551907dd304b2f13f17208ccebe333bef194076cd2e8921" +dependencies = [ + "bytes", + "curve25519-dalek", + "futures", + "libp2p-core 0.40.1", + "libp2p-identity", + "log", + "multiaddr", + "multihash 0.19.1", + "once_cell", + "quick-protobuf", + "rand", + "sha2", + "snow", + "static_assertions", + "thiserror 1.0.69", + "x25519-dalek", + "zeroize", +] + [[package]] name = "libp2p-noise" version = "0.45.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "36b137cb1ae86ee39f8e5d6245a296518912014eaa87427d24e6ff58cfc1b28c" dependencies = [ - "asynchronous-codec", + "asynchronous-codec 0.7.0", "bytes", "curve25519-dalek", "futures", - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-identity", "multiaddr", - "multihash", + "multihash 0.19.1", "once_cell", "quick-protobuf", "rand", @@ -3818,6 +4693,24 @@ dependencies = [ "zeroize", ] +[[package]] +name = "libp2p-ping" +version = "0.43.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e702d75cd0827dfa15f8fd92d15b9932abe38d10d21f47c50438c71dd1b5dae3" +dependencies = [ + "either", + "futures", + "futures-timer", + "instant", + "libp2p-core 0.40.1", + "libp2p-identity", + "libp2p-swarm 0.43.7", + "log", + "rand", + "void", +] + [[package]] name = "libp2p-ping" version = "0.45.0" @@ -3827,15 +4720,39 @@ dependencies = [ "either", "futures", "futures-timer", - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-identity", - "libp2p-swarm", + "libp2p-swarm 0.45.1", "rand", "tracing", "void", "web-time", ] +[[package]] +name = "libp2p-quic" +version = "0.9.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "130d451d83f21b81eb7b35b360bc7972aeafb15177784adc56528db082e6b927" +dependencies = [ + "bytes", + "futures", + "futures-timer", + "if-watch", + "libp2p-core 0.40.1", + "libp2p-identity", + "libp2p-tls 0.2.1", + "log", + "parking_lot 0.12.3", + "quinn 0.10.2", + "rand", + "ring 0.16.20", + "rustls 0.21.12", + "socket2 0.5.8", + "thiserror 1.0.69", + "tokio", +] + [[package]] name = "libp2p-quic" version = "0.11.1" @@ -3846,20 +4763,38 @@ dependencies = [ "futures", "futures-timer", "if-watch", - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-identity", - "libp2p-tls", + "libp2p-tls 0.5.0", "parking_lot 0.12.3", - "quinn", + "quinn 0.11.6", "rand", "ring 0.17.8", - "rustls", - "socket2", + "rustls 0.23.20", + "socket2 0.5.8", "thiserror 1.0.69", "tokio", "tracing", ] +[[package]] +name = "libp2p-request-response" +version = "0.25.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d8e3b4d67870478db72bac87bfc260ee6641d0734e0e3e275798f089c3fecfd4" +dependencies = [ + "async-trait", + "futures", + "instant", + "libp2p-core 0.40.1", + "libp2p-identity", + "libp2p-swarm 0.43.7", + "log", + "rand", + "smallvec", + "void", +] + [[package]] name = "libp2p-request-response" version = "0.27.0" @@ -3868,11 +4803,11 @@ checksum = "1356c9e376a94a75ae830c42cdaea3d4fe1290ba409a22c809033d1b7dcab0a6" dependencies = [ "async-trait", "futures", - "futures-bounded", + "futures-bounded 
0.2.4", "futures-timer", - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-identity", - "libp2p-swarm", + "libp2p-swarm 0.45.1", "rand", "smallvec", "tracing", @@ -3880,6 +4815,29 @@ dependencies = [ "web-time", ] +[[package]] +name = "libp2p-swarm" +version = "0.43.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "580189e0074af847df90e75ef54f3f30059aedda37ea5a1659e8b9fca05c0141" +dependencies = [ + "either", + "fnv", + "futures", + "futures-timer", + "instant", + "libp2p-core 0.40.1", + "libp2p-identity", + "libp2p-swarm-derive 0.33.0", + "log", + "multistream-select", + "once_cell", + "rand", + "smallvec", + "tokio", + "void", +] + [[package]] name = "libp2p-swarm" version = "0.45.1" @@ -3890,9 +4848,9 @@ dependencies = [ "fnv", "futures", "futures-timer", - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-identity", - "libp2p-swarm-derive", + "libp2p-swarm-derive 0.35.0", "lru", "multistream-select", "once_cell", @@ -3904,6 +4862,19 @@ dependencies = [ "web-time", ] +[[package]] +name = "libp2p-swarm-derive" +version = "0.33.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c4d5ec2a3df00c7836d7696c136274c9c59705bac69133253696a6c932cd1d74" +dependencies = [ + "heck 0.4.1", + "proc-macro-warning", + "proc-macro2", + "quote", + "syn 2.0.94", +] + [[package]] name = "libp2p-swarm-derive" version = "0.35.0" @@ -3916,6 +4887,23 @@ dependencies = [ "syn 2.0.94", ] +[[package]] +name = "libp2p-tcp" +version = "0.40.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b558dd40d1bcd1aaaed9de898e9ec6a436019ecc2420dd0016e712fbb61c5508" +dependencies = [ + "futures", + "futures-timer", + "if-watch", + "libc", + "libp2p-core 0.40.1", + "libp2p-identity", + "log", + "socket2 0.5.8", + "tokio", +] + [[package]] name = "libp2p-tcp" version = "0.42.0" @@ -3926,13 +4914,32 @@ dependencies = [ "futures-timer", "if-watch", "libc", - "libp2p-core", + "libp2p-core 0.42.0", "libp2p-identity", - "socket2", + "socket2 0.5.8", "tokio", "tracing", ] +[[package]] +name = "libp2p-tls" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8218d1d5482b122ccae396bbf38abdcb283ecc96fa54760e1dfd251f0546ac61" +dependencies = [ + "futures", + "futures-rustls 0.24.0", + "libp2p-core 0.40.1", + "libp2p-identity", + "rcgen 0.10.0", + "ring 0.16.20", + "rustls 0.21.12", + "rustls-webpki 0.101.7", + "thiserror 1.0.69", + "x509-parser 0.15.1", + "yasna", +] + [[package]] name = "libp2p-tls" version = "0.5.0" @@ -3940,18 +4947,34 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "47b23dddc2b9c355f73c1e36eb0c3ae86f7dc964a3715f0731cfad352db4d847" dependencies = [ "futures", - "futures-rustls", - "libp2p-core", + "futures-rustls 0.26.0", + "libp2p-core 0.42.0", "libp2p-identity", - "rcgen", + "rcgen 0.11.3", "ring 0.17.8", - "rustls", + "rustls 0.23.20", "rustls-webpki 0.101.7", "thiserror 1.0.69", - "x509-parser", + "x509-parser 0.16.0", "yasna", ] +[[package]] +name = "libp2p-upnp" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "82775a47b34f10f787ad3e2a22e2c1541e6ebef4fe9f28f3ac553921554c94c1" +dependencies = [ + "futures", + "futures-timer", + "igd-next", + "libp2p-core 0.40.1", + "libp2p-swarm 0.43.7", + "log", + "tokio", + "void", +] + [[package]] name = "libp2p-upnp" version = "0.3.0" @@ -3961,13 +4984,61 @@ dependencies = [ "futures", "futures-timer", "igd-next", - "libp2p-core", - "libp2p-swarm", + "libp2p-core 
0.42.0", + "libp2p-swarm 0.45.1", "tokio", "tracing", "void", ] +[[package]] +name = "libp2p-wasm-ext" +version = "0.40.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1e5d8e3a9e07da0ef5b55a9f26c009c8fb3c725d492d8bb4b431715786eea79c" +dependencies = [ + "futures", + "js-sys", + "libp2p-core 0.40.1", + "send_wrapper", + "wasm-bindgen", + "wasm-bindgen-futures", +] + +[[package]] +name = "libp2p-websocket" +version = "0.42.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "004ee9c4a4631435169aee6aad2f62e3984dc031c43b6d29731e8e82a016c538" +dependencies = [ + "either", + "futures", + "futures-rustls 0.24.0", + "libp2p-core 0.40.1", + "libp2p-identity", + "log", + "parking_lot 0.12.3", + "pin-project-lite", + "rw-stream-sink", + "soketto 0.8.1", + "thiserror 1.0.69", + "url", + "webpki-roots", +] + +[[package]] +name = "libp2p-yamux" +version = "0.44.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8eedcb62824c4300efb9cfd4e2a6edaf3ca097b9e68b36dabe45a44469fd6a85" +dependencies = [ + "futures", + "libp2p-core 0.40.1", + "log", + "thiserror 1.0.69", + "yamux 0.12.1", +] + [[package]] name = "libp2p-yamux" version = "0.46.0" @@ -3976,7 +5047,7 @@ checksum = "788b61c80789dba9760d8c669a5bedb642c8267555c803fabd8396e4ca5c5882" dependencies = [ "either", "futures", - "libp2p-core", + "libp2p-core 0.42.0", "thiserror 1.0.69", "tracing", "yamux 0.12.1", @@ -4035,6 +5106,15 @@ version = "0.5.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0717cef1bc8b636c6e1c1bbdefc09e6322da8a9321966e8928ef80d20f7f770f" +[[package]] +name = "linked_hash_set" +version = "0.1.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bae85b5be22d9843c80e5fc80e9b64c8a3b1f98f867c709956eca3efff4e92e2" +dependencies = [ + "linked-hash-map", +] + [[package]] name = "linregress" version = "0.5.4" @@ -4324,6 +5404,33 @@ dependencies = [ "windows-sys 0.52.0", ] +[[package]] +name = "mockall" +version = "0.11.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4c84490118f2ee2d74570d114f3d0493cbf02790df303d2707606c3e14e07c96" +dependencies = [ + "cfg-if", + "downcast", + "fragile", + "lazy_static", + "mockall_derive", + "predicates", + "predicates-tree", +] + +[[package]] +name = "mockall_derive" +version = "0.11.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "22ce75669015c4f47b289fd4d4f56e894e4c96003ffdf3ac51313126f94c6cbb" +dependencies = [ + "cfg-if", + "proc-macro2", + "quote", + "syn 1.0.109", +] + [[package]] name = "modular-frost" version = "0.8.1" @@ -4585,7 +5692,7 @@ dependencies = [ "data-encoding", "libp2p-identity", "multibase", - "multihash", + "multihash 0.19.1", "percent-encoding", "serde", "static_assertions", @@ -4618,6 +5725,23 @@ dependencies = [ "zeroize", ] +[[package]] +name = "multihash" +version = "0.18.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cfd8a792c1694c6da4f68db0a9d707c72bd260994da179e6030a5dcee00bb815" +dependencies = [ + "blake2b_simd", + "blake2s_simd", + "blake3", + "core2", + "digest 0.10.7", + "multihash-derive 0.8.0", + "sha2", + "sha3", + "unsigned-varint 0.7.2", +] + [[package]] name = "multihash" version = "0.19.1" @@ -4628,6 +5752,70 @@ dependencies = [ "unsigned-varint 0.7.2", ] +[[package]] +name = "multihash-codetable" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"f6d815ecb3c8238d00647f8630ede7060a642c9f704761cd6082cb4028af6935" +dependencies = [ + "blake2b_simd", + "blake2s_simd", + "blake3", + "core2", + "digest 0.10.7", + "multihash-derive 0.9.0", + "ripemd", + "sha1", + "sha2", + "sha3", + "strobe-rs", +] + +[[package]] +name = "multihash-derive" +version = "0.8.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fc076939022111618a5026d3be019fd8b366e76314538ff9a1b59ffbcbf98bcd" +dependencies = [ + "proc-macro-crate 1.3.1", + "proc-macro-error", + "proc-macro2", + "quote", + "syn 1.0.109", + "synstructure 0.12.6", +] + +[[package]] +name = "multihash-derive" +version = "0.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "890e72cb7396cb99ed98c1246a97b243cc16394470d94e0bc8b0c2c11d84290e" +dependencies = [ + "core2", + "multihash 0.19.1", + "multihash-derive-impl", +] + +[[package]] +name = "multihash-derive-impl" +version = "0.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d38685e08adb338659871ecfc6ee47ba9b22dcc8abcf6975d379cc49145c3040" +dependencies = [ + "proc-macro-crate 1.3.1", + "proc-macro-error", + "proc-macro2", + "quote", + "syn 1.0.109", + "synstructure 0.12.6", +] + +[[package]] +name = "multimap" +version = "0.8.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e5ce46fe64a9d73be07dcbe690a38ce1b293be448fd8ce1e6c1b8062c9f72c6a" + [[package]] name = "multistream-select" version = "0.13.0" @@ -4657,6 +5845,15 @@ dependencies = [ "typenum", ] +[[package]] +name = "names" +version = "0.14.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7bddcd3bf5144b6392de80e04c347cd7fab2508f6df16a85fc496ecd5cec39bc" +dependencies = [ + "rand", +] + [[package]] name = "netlink-packet-core" version = "0.7.0" @@ -4749,6 +5946,12 @@ dependencies = [ "minimal-lexical", ] +[[package]] +name = "normalize-line-endings" +version = "0.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "61807f77802ff30975e01f4f071c8ba10c022052f98b3294119f3e615d13e5be" + [[package]] name = "nu-ansi-term" version = "0.46.0" @@ -4886,13 +6089,22 @@ dependencies = [ "memchr", ] +[[package]] +name = "oid-registry" +version = "0.6.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9bedf36ffb6ba96c2eb7144ef6270557b52e54b20c0a8e1eb2ff99a6c6959bff" +dependencies = [ + "asn1-rs 0.5.2", +] + [[package]] name = "oid-registry" version = "0.7.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a8d8034d9489cdaf79228eb9f6a3b8d7bb32ba00d6645ebd48eef4077ceb5bd9" dependencies = [ - "asn1-rs", + "asn1-rs 0.6.2", ] [[package]] @@ -5059,6 +6271,22 @@ dependencies = [ "sp-std", ] +[[package]] +name = "pallet-transaction-payment-rpc" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "jsonrpsee", + "pallet-transaction-payment-rpc-runtime-api", + "parity-scale-codec", + "sp-api", + "sp-blockchain", + "sp-core", + "sp-rpc", + "sp-runtime", + "sp-weights", +] + [[package]] name = "pallet-transaction-payment-rpc-runtime-api" version = "4.0.0-dev" @@ -5168,6 +6396,12 @@ dependencies = [ "windows-targets 0.52.6", ] +[[package]] +name = "partial_sort" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7924d1d0ad836f665c9065e26d016c673ece3993f30d340068b16f282afc1156" + [[package]] name = "password-hash" version = "0.5.0" @@ 
-5228,13 +6462,22 @@ dependencies = [ "sha2", ] +[[package]] +name = "pem" +version = "1.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a8835c273a76a90455d7344889b0964598e3316e2a79ede8e36f16bdcf2228b8" +dependencies = [ + "base64 0.13.1", +] + [[package]] name = "pem" version = "3.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e459365e590736a54c3fa561947c84837534b8e9af6fc5bf781307e82658fae" dependencies = [ - "base64", + "base64 0.22.1", "serde", ] @@ -5255,6 +6498,16 @@ dependencies = [ "ucd-trie", ] +[[package]] +name = "petgraph" +version = "0.6.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b4c5cc86750666a3ed20bdaf5ca2a0344f9c67674cae0515bec2da16fbaa47db" +dependencies = [ + "fixedbitset", + "indexmap 2.7.0", +] + [[package]] name = "pin-project" version = "1.1.7" @@ -5370,6 +6623,46 @@ dependencies = [ "zerocopy", ] +[[package]] +name = "predicates" +version = "2.1.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "59230a63c37f3e18569bdb90e4a89cbf5bf8b06fea0b84e65ea10cc4df47addd" +dependencies = [ + "difflib", + "float-cmp", + "itertools 0.10.5", + "normalize-line-endings", + "predicates-core", + "regex", +] + +[[package]] +name = "predicates-core" +version = "1.0.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "727e462b119fe9c93fd0eb1429a5f7647394014cf3c04ab2c0350eeb09095ffa" + +[[package]] +name = "predicates-tree" +version = "1.0.12" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "72dd2d6d381dfb73a193c7fca536518d7caee39fc8503f74e7dc0be0531b425c" +dependencies = [ + "predicates-core", + "termtree", +] + +[[package]] +name = "prettyplease" +version = "0.1.25" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6c8646e95016a7a6c4adea95bafa8a16baab64b583356217f2c85db4a39d9a86" +dependencies = [ + "proc-macro2", + "syn 1.0.109", +] + [[package]] name = "primeorder" version = "0.13.6" @@ -5411,6 +6704,30 @@ dependencies = [ "toml_edit 0.22.22", ] +[[package]] +name = "proc-macro-error" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "da25490ff9892aab3fcf7c36f08cfb902dd3e71ca0f9f9517bea02a73a5ce38c" +dependencies = [ + "proc-macro-error-attr", + "proc-macro2", + "quote", + "syn 1.0.109", + "version_check", +] + +[[package]] +name = "proc-macro-error-attr" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a1be40180e52ecc98ad80b184934baf3d0d29f979574e439af5a55274b35f869" +dependencies = [ + "proc-macro2", + "quote", + "version_check", +] + [[package]] name = "proc-macro-error-attr2" version = "2.0.0" @@ -5453,6 +6770,32 @@ dependencies = [ "unicode-ident", ] +[[package]] +name = "prometheus" +version = "0.13.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3d33c28a30771f7f96db69893f78b857f7450d7e0237e9c8fc6427a81bae7ed1" +dependencies = [ + "cfg-if", + "fnv", + "lazy_static", + "memchr", + "parking_lot 0.12.3", + "thiserror 1.0.69", +] + +[[package]] +name = "prometheus-client" +version = "0.21.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3c99afa9a01501019ac3a14d71d9f94050346f55ca471ce90c799a15c58f61e2" +dependencies = [ + "dtoa", + "itoa", + "parking_lot 0.12.3", + "prometheus-client-derive-encode", +] + [[package]] name = "prometheus-client" version = "0.22.3" @@ -5496,6 +6839,60 @@ dependencies = [ 
"unarray", ] +[[package]] +name = "prost" +version = "0.11.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0b82eaa1d779e9a4bc1c3217db8ffbeabaae1dca241bf70183242128d48681cd" +dependencies = [ + "bytes", + "prost-derive", +] + +[[package]] +name = "prost-build" +version = "0.11.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "119533552c9a7ffacc21e099c24a0ac8bb19c2a2a3f363de84cd9b844feab270" +dependencies = [ + "bytes", + "heck 0.4.1", + "itertools 0.10.5", + "lazy_static", + "log", + "multimap", + "petgraph", + "prettyplease", + "prost", + "prost-types", + "regex", + "syn 1.0.109", + "tempfile", + "which", +] + +[[package]] +name = "prost-derive" +version = "0.11.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e5d2d8d10f3c6ded6da8b05b5fb3b8a5082514344d56c9f871412d29b4e075b4" +dependencies = [ + "anyhow", + "itertools 0.10.5", + "proc-macro2", + "quote", + "syn 1.0.109", +] + +[[package]] +name = "prost-types" +version = "0.11.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "213622a1460818959ac1181aaeb2dc9c7f63df720db7d788b3e24eacd1983e13" +dependencies = [ + "prost", +] + [[package]] name = "psm" version = "0.1.24" @@ -5520,19 +6917,50 @@ dependencies = [ "byteorder", ] +[[package]] +name = "quick-protobuf-codec" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f8ededb1cd78531627244d51dd0c7139fbe736c7d57af0092a76f0ffb2f56e98" +dependencies = [ + "asynchronous-codec 0.6.2", + "bytes", + "quick-protobuf", + "thiserror 1.0.69", + "unsigned-varint 0.7.2", +] + [[package]] name = "quick-protobuf-codec" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "15a0580ab32b169745d7a39db2ba969226ca16738931be152a3209b409de2474" dependencies = [ - "asynchronous-codec", + "asynchronous-codec 0.7.0", "bytes", "quick-protobuf", "thiserror 1.0.69", "unsigned-varint 0.8.0", ] +[[package]] +name = "quinn" +version = "0.10.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8cc2c5017e4b43d5995dcea317bc46c1e09404c0a9664d2908f7f02dfe943d75" +dependencies = [ + "bytes", + "futures-io", + "pin-project-lite", + "quinn-proto 0.10.6", + "quinn-udp 0.4.1", + "rustc-hash 1.1.0", + "rustls 0.21.12", + "thiserror 1.0.69", + "tokio", + "tracing", +] + [[package]] name = "quinn" version = "0.11.6" @@ -5542,16 +6970,33 @@ dependencies = [ "bytes", "futures-io", "pin-project-lite", - "quinn-proto", - "quinn-udp", + "quinn-proto 0.11.9", + "quinn-udp 0.5.9", "rustc-hash 2.1.0", - "rustls", - "socket2", + "rustls 0.23.20", + "socket2 0.5.8", "thiserror 2.0.9", "tokio", "tracing", ] +[[package]] +name = "quinn-proto" +version = "0.10.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "141bf7dfde2fbc246bfd3fe12f2455aa24b0fbd9af535d8c86c7bd1381ff2b1a" +dependencies = [ + "bytes", + "rand", + "ring 0.16.20", + "rustc-hash 1.1.0", + "rustls 0.21.12", + "slab", + "thiserror 1.0.69", + "tinyvec", + "tracing", +] + [[package]] name = "quinn-proto" version = "0.11.9" @@ -5563,7 +7008,7 @@ dependencies = [ "rand", "ring 0.17.8", "rustc-hash 2.1.0", - "rustls", + "rustls 0.23.20", "rustls-pki-types", "slab", "thiserror 2.0.9", @@ -5572,16 +7017,29 @@ dependencies = [ "web-time", ] +[[package]] +name = "quinn-udp" +version = "0.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"055b4e778e8feb9f93c4e439f71dc2156ef13360b432b799e179a8c4cdf0b1d7" +dependencies = [ + "bytes", + "libc", + "socket2 0.5.8", + "tracing", + "windows-sys 0.48.0", +] + [[package]] name = "quinn-udp" version = "0.5.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1c40286217b4ba3a71d644d752e6a0b71f13f1b6a2c5311acfcbe0c2418ed904" dependencies = [ - "cfg_aliases", + "cfg_aliases 0.2.1", "libc", "once_cell", - "socket2", + "socket2 0.5.8", "tracing", "windows-sys 0.59.0", ] @@ -5642,6 +7100,15 @@ dependencies = [ "rand", ] +[[package]] +name = "rand_pcg" +version = "0.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "59cad018caf63deb318e5a4586d99a24424a364f40f1e5778c29aca23f4fc73e" +dependencies = [ + "rand_core", +] + [[package]] name = "rand_xorshift" version = "0.3.0" @@ -5657,13 +7124,45 @@ version = "0.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "60a357793950651c4ed0f3f52338f53b2f809f32d83a07f72909fa13e4c6c1e3" +[[package]] +name = "rayon" +version = "1.10.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b418a60154510ca1a002a752ca9714984e21e4241e804d32555251faf8b78ffa" +dependencies = [ + "either", + "rayon-core", +] + +[[package]] +name = "rayon-core" +version = "1.12.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1465873a3dfdaa8ae7cb14b4383657caab0b3e8a0aa9ae8e04b044854c8dfce2" +dependencies = [ + "crossbeam-deque", + "crossbeam-utils", +] + +[[package]] +name = "rcgen" +version = "0.10.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ffbe84efe2f38dea12e9bfc1f65377fdf03e53a18cb3b995faedf7934c7e785b" +dependencies = [ + "pem 1.1.1", + "ring 0.16.20", + "time", + "yasna", +] + [[package]] name = "rcgen" version = "0.11.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "52c4f3084aa3bc7dfbba4eff4fab2a54db4324965d8872ab933565e6fbd83bc6" dependencies = [ - "pem", + "pem 3.0.4", "ring 0.16.20", "time", "yasna", @@ -5709,6 +7208,19 @@ dependencies = [ "syn 2.0.94", ] +[[package]] +name = "regalloc2" +version = "0.9.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ad156d539c879b7a24a363a2016d77961786e71f48f2e2fc8302a92abd2429a6" +dependencies = [ + "hashbrown 0.13.2", + "log", + "rustc-hash 1.1.0", + "slice-group-by", + "smallvec", +] + [[package]] name = "regex" version = "1.11.1" @@ -5803,6 +7315,15 @@ dependencies = [ "windows-sys 0.52.0", ] +[[package]] +name = "ripemd" +version = "0.1.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bd124222d17ad93a644ed9d011a40f4fb64aa54275c08cc216524a9ea82fb09f" +dependencies = [ + "digest 0.10.7", +] + [[package]] name = "rlp" version = "0.5.2" @@ -5830,6 +7351,17 @@ dependencies = [ "librocksdb-sys", ] +[[package]] +name = "rpassword" +version = "7.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "80472be3c897911d0137b2d2b9055faf6eeac5b14e324073d83bc17b191d7e3f" +dependencies = [ + "libc", + "rtoolbox", + "windows-sys 0.48.0", +] + [[package]] name = "rtnetlink" version = "0.13.1" @@ -5848,6 +7380,16 @@ dependencies = [ "tokio", ] +[[package]] +name = "rtoolbox" +version = "0.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c247d24e63230cdb56463ae328478bd5eac8b8faa8c69461a77e8e323afac90e" +dependencies = [ + "libc", + "windows-sys 0.48.0", +] + [[package]] name = "ruint" version = "1.12.3" @@ 
-5942,6 +7484,18 @@ dependencies = [ "windows-sys 0.59.0", ] +[[package]] +name = "rustls" +version = "0.21.12" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3f56a14d1f48b391359b22f731fd4bd7e43c97f3c50eee276f3aa09c94784d3e" +dependencies = [ + "log", + "ring 0.17.8", + "rustls-webpki 0.101.7", + "sct", +] + [[package]] name = "rustls" version = "0.23.20" @@ -6051,6 +7605,931 @@ dependencies = [ "winapi-util", ] +[[package]] +name = "sc-allocator" +version = "4.1.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "log", + "sp-core", + "sp-wasm-interface", + "thiserror 1.0.69", +] + +[[package]] +name = "sc-authority-discovery" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "async-trait", + "futures", + "futures-timer", + "ip_network", + "libp2p 0.52.4", + "log", + "multihash-codetable", + "parity-scale-codec", + "prost", + "prost-build", + "rand", + "sc-client-api", + "sc-network", + "sp-api", + "sp-authority-discovery", + "sp-blockchain", + "sp-core", + "sp-keystore", + "sp-runtime", + "substrate-prometheus-endpoint", + "thiserror 1.0.69", +] + +[[package]] +name = "sc-basic-authorship" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "futures", + "futures-timer", + "log", + "parity-scale-codec", + "sc-block-builder", + "sc-client-api", + "sc-proposer-metrics", + "sc-telemetry", + "sc-transaction-pool-api", + "sp-api", + "sp-blockchain", + "sp-consensus", + "sp-core", + "sp-inherents", + "sp-runtime", + "substrate-prometheus-endpoint", +] + +[[package]] +name = "sc-block-builder" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "parity-scale-codec", + "sc-client-api", + "sp-api", + "sp-block-builder", + "sp-blockchain", + "sp-core", + "sp-inherents", + "sp-runtime", +] + +[[package]] +name = "sc-chain-spec" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "memmap2", + "sc-chain-spec-derive", + "sc-client-api", + "sc-executor", + "sc-network", + "sc-telemetry", + "serde", + "serde_json", + "sp-blockchain", + "sp-core", + "sp-runtime", + "sp-state-machine", +] + +[[package]] +name = "sc-chain-spec-derive" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "proc-macro-crate 1.3.1", + "proc-macro2", + "quote", + "syn 2.0.94", +] + +[[package]] +name = "sc-cli" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "array-bytes", + "chrono", + "clap", + "fdlimit", + "futures", + "libp2p-identity", + "log", + "names", + "parity-scale-codec", + "rand", + "regex", + "rpassword", + "sc-client-api", + "sc-client-db", + "sc-keystore", + "sc-network", + "sc-service", + "sc-telemetry", + "sc-tracing", + "sc-utils", + "serde", + "serde_json", + "sp-blockchain", + "sp-core", + "sp-keyring", + "sp-keystore", + "sp-panic-handler", + "sp-runtime", + "sp-version", + "thiserror 1.0.69", + "tiny-bip39", + "tokio", +] + +[[package]] +name = "sc-client-api" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" 
+dependencies = [ + "fnv", + "futures", + "log", + "parity-scale-codec", + "parking_lot 0.12.3", + "sc-executor", + "sc-transaction-pool-api", + "sc-utils", + "sp-api", + "sp-blockchain", + "sp-consensus", + "sp-core", + "sp-database", + "sp-externalities", + "sp-runtime", + "sp-state-machine", + "sp-storage", + "substrate-prometheus-endpoint", +] + +[[package]] +name = "sc-client-db" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "hash-db", + "kvdb", + "kvdb-memorydb", + "kvdb-rocksdb", + "linked-hash-map", + "log", + "parity-db", + "parity-scale-codec", + "parking_lot 0.12.3", + "sc-client-api", + "sc-state-db", + "schnellru", + "sp-arithmetic", + "sp-blockchain", + "sp-core", + "sp-database", + "sp-runtime", + "sp-state-machine", + "sp-trie", +] + +[[package]] +name = "sc-consensus" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "async-trait", + "futures", + "futures-timer", + "libp2p-identity", + "log", + "mockall", + "parking_lot 0.12.3", + "sc-client-api", + "sc-utils", + "serde", + "sp-api", + "sp-blockchain", + "sp-consensus", + "sp-core", + "sp-runtime", + "sp-state-machine", + "substrate-prometheus-endpoint", + "thiserror 1.0.69", +] + +[[package]] +name = "sc-consensus-babe" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "async-trait", + "fork-tree", + "futures", + "log", + "num-bigint", + "num-rational", + "num-traits", + "parity-scale-codec", + "parking_lot 0.12.3", + "sc-client-api", + "sc-consensus", + "sc-consensus-epochs", + "sc-consensus-slots", + "sc-telemetry", + "sc-transaction-pool-api", + "scale-info", + "sp-api", + "sp-application-crypto", + "sp-block-builder", + "sp-blockchain", + "sp-consensus", + "sp-consensus-babe", + "sp-consensus-slots", + "sp-core", + "sp-inherents", + "sp-keystore", + "sp-runtime", + "substrate-prometheus-endpoint", + "thiserror 1.0.69", +] + +[[package]] +name = "sc-consensus-epochs" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "fork-tree", + "parity-scale-codec", + "sc-client-api", + "sc-consensus", + "sp-blockchain", + "sp-runtime", +] + +[[package]] +name = "sc-consensus-grandpa" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "ahash", + "array-bytes", + "async-trait", + "dyn-clone", + "finality-grandpa", + "fork-tree", + "futures", + "futures-timer", + "log", + "parity-scale-codec", + "parking_lot 0.12.3", + "rand", + "sc-block-builder", + "sc-chain-spec", + "sc-client-api", + "sc-consensus", + "sc-network", + "sc-network-common", + "sc-network-gossip", + "sc-telemetry", + "sc-transaction-pool-api", + "sc-utils", + "serde_json", + "sp-api", + "sp-application-crypto", + "sp-arithmetic", + "sp-blockchain", + "sp-consensus", + "sp-consensus-grandpa", + "sp-core", + "sp-keystore", + "sp-runtime", + "substrate-prometheus-endpoint", + "thiserror 1.0.69", +] + +[[package]] +name = "sc-consensus-slots" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "async-trait", + "futures", + "futures-timer", + "log", + "parity-scale-codec", + "sc-client-api", + "sc-consensus", + "sc-telemetry", + 
"sp-arithmetic", + "sp-blockchain", + "sp-consensus", + "sp-consensus-slots", + "sp-core", + "sp-inherents", + "sp-runtime", + "sp-state-machine", +] + +[[package]] +name = "sc-executor" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "parity-scale-codec", + "parking_lot 0.12.3", + "sc-executor-common", + "sc-executor-wasmtime", + "schnellru", + "sp-api", + "sp-core", + "sp-externalities", + "sp-io", + "sp-panic-handler", + "sp-runtime-interface", + "sp-trie", + "sp-version", + "sp-wasm-interface", + "tracing", +] + +[[package]] +name = "sc-executor-common" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "sc-allocator", + "sp-maybe-compressed-blob", + "sp-wasm-interface", + "thiserror 1.0.69", + "wasm-instrument", +] + +[[package]] +name = "sc-executor-wasmtime" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "anyhow", + "cfg-if", + "libc", + "log", + "rustix", + "sc-allocator", + "sc-executor-common", + "sp-runtime-interface", + "sp-wasm-interface", + "wasmtime", +] + +[[package]] +name = "sc-informant" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "anstyle", + "futures", + "futures-timer", + "log", + "sc-client-api", + "sc-network", + "sc-network-common", + "sp-blockchain", + "sp-runtime", +] + +[[package]] +name = "sc-keystore" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "array-bytes", + "parking_lot 0.12.3", + "serde_json", + "sp-application-crypto", + "sp-core", + "sp-keystore", + "thiserror 1.0.69", +] + +[[package]] +name = "sc-network" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "array-bytes", + "async-channel", + "async-trait", + "asynchronous-codec 0.6.2", + "bytes", + "either", + "fnv", + "futures", + "futures-timer", + "ip_network", + "libp2p 0.52.4", + "linked_hash_set", + "log", + "mockall", + "parity-scale-codec", + "parking_lot 0.12.3", + "partial_sort", + "pin-project", + "rand", + "sc-client-api", + "sc-network-common", + "sc-utils", + "serde", + "serde_json", + "smallvec", + "sp-arithmetic", + "sp-blockchain", + "sp-core", + "sp-runtime", + "substrate-prometheus-endpoint", + "thiserror 1.0.69", + "unsigned-varint 0.7.2", + "void", + "wasm-timer", + "zeroize", +] + +[[package]] +name = "sc-network-bitswap" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "async-channel", + "cid", + "futures", + "libp2p-identity", + "log", + "prost", + "prost-build", + "sc-client-api", + "sc-network", + "sp-blockchain", + "sp-runtime", + "thiserror 1.0.69", + "unsigned-varint 0.7.2", +] + +[[package]] +name = "sc-network-common" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "async-trait", + "bitflags 1.3.2", + "futures", + "libp2p-identity", + "parity-scale-codec", + "prost-build", + "sc-consensus", + "sp-consensus", + "sp-consensus-grandpa", + "sp-runtime", +] + +[[package]] +name = "sc-network-gossip" +version = "0.10.0-dev" +source = 
"git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "ahash", + "futures", + "futures-timer", + "libp2p-identity", + "log", + "multiaddr", + "sc-network", + "sc-network-common", + "schnellru", + "sp-runtime", + "substrate-prometheus-endpoint", + "tracing", +] + +[[package]] +name = "sc-network-light" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "array-bytes", + "async-channel", + "futures", + "libp2p-identity", + "log", + "parity-scale-codec", + "prost", + "prost-build", + "sc-client-api", + "sc-network", + "sp-blockchain", + "sp-core", + "sp-runtime", + "thiserror 1.0.69", +] + +[[package]] +name = "sc-network-sync" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "array-bytes", + "async-channel", + "async-trait", + "fork-tree", + "futures", + "futures-timer", + "libp2p 0.52.4", + "log", + "mockall", + "parity-scale-codec", + "prost", + "prost-build", + "sc-client-api", + "sc-consensus", + "sc-network", + "sc-network-common", + "sc-utils", + "schnellru", + "smallvec", + "sp-arithmetic", + "sp-blockchain", + "sp-consensus", + "sp-consensus-grandpa", + "sp-core", + "sp-runtime", + "substrate-prometheus-endpoint", + "thiserror 1.0.69", +] + +[[package]] +name = "sc-network-transactions" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "array-bytes", + "futures", + "libp2p 0.52.4", + "log", + "parity-scale-codec", + "sc-network", + "sc-network-common", + "sc-utils", + "sp-consensus", + "sp-runtime", + "substrate-prometheus-endpoint", +] + +[[package]] +name = "sc-offchain" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "bytes", + "fnv", + "futures", + "futures-timer", + "hyper 0.14.30", + "libp2p 0.52.4", + "log", + "num_cpus", + "once_cell", + "parity-scale-codec", + "parking_lot 0.12.3", + "rand", + "sc-client-api", + "sc-network", + "sc-transaction-pool-api", + "sc-utils", + "sp-api", + "sp-core", + "sp-externalities", + "sp-keystore", + "sp-offchain", + "sp-runtime", + "threadpool", + "tracing", +] + +[[package]] +name = "sc-proposer-metrics" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "log", + "substrate-prometheus-endpoint", +] + +[[package]] +name = "sc-rpc" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "futures", + "jsonrpsee", + "log", + "parity-scale-codec", + "parking_lot 0.12.3", + "sc-block-builder", + "sc-chain-spec", + "sc-client-api", + "sc-rpc-api", + "sc-tracing", + "sc-transaction-pool-api", + "sc-utils", + "serde_json", + "sp-api", + "sp-blockchain", + "sp-core", + "sp-keystore", + "sp-offchain", + "sp-rpc", + "sp-runtime", + "sp-session", + "sp-version", + "tokio", +] + +[[package]] +name = "sc-rpc-api" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "jsonrpsee", + "parity-scale-codec", + "sc-chain-spec", + "sc-transaction-pool-api", + "scale-info", + "serde", + "serde_json", + "sp-core", + "sp-rpc", + "sp-runtime", + "sp-version", + "thiserror 1.0.69", +] + +[[package]] +name = 
"sc-rpc-server" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "http 0.2.12", + "jsonrpsee", + "log", + "serde_json", + "substrate-prometheus-endpoint", + "tokio", + "tower 0.4.13", + "tower-http", +] + +[[package]] +name = "sc-rpc-spec-v2" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "array-bytes", + "futures", + "futures-util", + "hex", + "jsonrpsee", + "log", + "parity-scale-codec", + "parking_lot 0.12.3", + "sc-chain-spec", + "sc-client-api", + "sc-transaction-pool-api", + "serde", + "sp-api", + "sp-blockchain", + "sp-core", + "sp-runtime", + "sp-version", + "thiserror 1.0.69", + "tokio-stream", +] + +[[package]] +name = "sc-service" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "async-trait", + "directories", + "exit-future", + "futures", + "futures-timer", + "jsonrpsee", + "log", + "parity-scale-codec", + "parking_lot 0.12.3", + "pin-project", + "rand", + "sc-block-builder", + "sc-chain-spec", + "sc-client-api", + "sc-client-db", + "sc-consensus", + "sc-executor", + "sc-informant", + "sc-keystore", + "sc-network", + "sc-network-bitswap", + "sc-network-common", + "sc-network-light", + "sc-network-sync", + "sc-network-transactions", + "sc-rpc", + "sc-rpc-server", + "sc-rpc-spec-v2", + "sc-sysinfo", + "sc-telemetry", + "sc-tracing", + "sc-transaction-pool", + "sc-transaction-pool-api", + "sc-utils", + "serde", + "serde_json", + "sp-api", + "sp-blockchain", + "sp-consensus", + "sp-core", + "sp-externalities", + "sp-keystore", + "sp-runtime", + "sp-session", + "sp-state-machine", + "sp-storage", + "sp-transaction-pool", + "sp-trie", + "sp-version", + "static_init", + "substrate-prometheus-endpoint", + "tempfile", + "thiserror 1.0.69", + "tokio", + "tracing", + "tracing-futures", +] + +[[package]] +name = "sc-state-db" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "log", + "parity-scale-codec", + "parking_lot 0.12.3", + "sp-core", +] + +[[package]] +name = "sc-sysinfo" +version = "6.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "futures", + "libc", + "log", + "rand", + "rand_pcg", + "regex", + "sc-telemetry", + "serde", + "serde_json", + "sp-core", + "sp-io", + "sp-std", +] + +[[package]] +name = "sc-telemetry" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "chrono", + "futures", + "libp2p 0.52.4", + "log", + "parking_lot 0.12.3", + "pin-project", + "rand", + "sc-utils", + "serde", + "serde_json", + "thiserror 1.0.69", + "wasm-timer", +] + +[[package]] +name = "sc-tracing" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "anstyle", + "chrono", + "lazy_static", + "libc", + "log", + "parking_lot 0.12.3", + "regex", + "rustc-hash 1.1.0", + "sc-client-api", + "sc-tracing-proc-macro", + "serde", + "sp-api", + "sp-blockchain", + "sp-core", + "sp-rpc", + "sp-runtime", + "sp-tracing", + "thiserror 1.0.69", + "tracing", + "tracing-log", + "tracing-subscriber 0.2.25", +] + +[[package]] +name = "sc-tracing-proc-macro" +version = "4.0.0-dev" +source = 
"git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "proc-macro-crate 1.3.1", + "proc-macro2", + "quote", + "syn 2.0.94", +] + +[[package]] +name = "sc-transaction-pool" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "async-trait", + "futures", + "futures-timer", + "linked-hash-map", + "log", + "parity-scale-codec", + "parking_lot 0.12.3", + "sc-client-api", + "sc-transaction-pool-api", + "sc-utils", + "serde", + "sp-api", + "sp-blockchain", + "sp-core", + "sp-runtime", + "sp-tracing", + "sp-transaction-pool", + "substrate-prometheus-endpoint", + "thiserror 1.0.69", +] + +[[package]] +name = "sc-transaction-pool-api" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "async-trait", + "futures", + "log", + "parity-scale-codec", + "serde", + "sp-blockchain", + "sp-core", + "sp-runtime", + "thiserror 1.0.69", +] + +[[package]] +name = "sc-utils" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "async-channel", + "futures", + "futures-timer", + "lazy_static", + "log", + "parking_lot 0.12.3", + "prometheus", + "sp-arithmetic", +] + [[package]] name = "scale-info" version = "2.11.6" @@ -6149,6 +8628,16 @@ version = "1.0.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a3cf7c11c38cb994f3d40e8a8cde3bbd1f72a435e4c49e85d6553d8312306152" +[[package]] +name = "sct" +version = "0.7.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "da046153aa2352493d6cb7da4b6e5c0c057d8a1d0a9aa8560baffdd945acd414" +dependencies = [ + "ring 0.17.8", + "untrusted 0.9.0", +] + [[package]] name = "sec1" version = "0.7.3" @@ -6278,6 +8767,12 @@ dependencies = [ "pest", ] +[[package]] +name = "send_wrapper" +version = "0.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cd0b0ec5f1c1ca621c432a25813d8d60c88abe6d3e08a3eb9cf37d97a0fe3d73" + [[package]] name = "serai-abi" version = "0.1.0" @@ -6431,7 +8926,7 @@ dependencies = [ "borsh", "futures-util", "hex", - "libp2p", + "libp2p 0.54.1", "log", "rand_core", "schnorrkel", @@ -6477,30 +8972,6 @@ dependencies = [ "tokio", ] -[[package]] -name = "serai-coordinator-tests" -version = "0.1.0" -dependencies = [ - "blake2", - "borsh", - "ciphersuite", - "dkg", - "dockertest", - "embedwards25519", - "hex", - "parity-scale-codec", - "rand_core", - "schnorrkel", - "secq256k1", - "serai-client", - "serai-docker-tests", - "serai-message-queue", - "serai-message-queue-tests", - "serai-processor-messages", - "tokio", - "zeroize", -] - [[package]] name = "serai-coordinator-tributary" version = "0.1.0" @@ -6680,29 +9151,6 @@ dependencies = [ "serai-processor-ethereum-primitives", ] -[[package]] -name = "serai-full-stack-tests" -version = "0.1.0" -dependencies = [ - "bitcoin-serai", - "curve25519-dalek", - "dockertest", - "hex", - "monero-simple-request-rpc", - "monero-wallet", - "parity-scale-codec", - "rand_core", - "serai-client", - "serai-coordinator-tests", - "serai-docker-tests", - "serai-message-queue-tests", - "serai-processor-tests", - "serde", - "serde_json", - "tokio", - "zeroize", -] - [[package]] name = "serai-genesis-liquidity-pallet" version = "0.1.0" @@ -6741,6 +9189,7 @@ dependencies = [ name = "serai-in-instructions-pallet" version = "0.1.0" dependencies = [ + 
"bitvec", "frame-support", "frame-system", "parity-scale-codec", @@ -6860,6 +9309,51 @@ dependencies = [ [[package]] name = "serai-node" version = "0.1.0" +dependencies = [ + "ciphersuite", + "clap", + "embedwards25519", + "frame-benchmarking", + "futures-util", + "hex", + "jsonrpsee", + "libp2p 0.52.4", + "log", + "pallet-transaction-payment-rpc", + "rand_core", + "sc-authority-discovery", + "sc-basic-authorship", + "sc-cli", + "sc-client-api", + "sc-consensus", + "sc-consensus-babe", + "sc-consensus-grandpa", + "sc-executor", + "sc-network", + "sc-network-common", + "sc-offchain", + "sc-rpc-api", + "sc-service", + "sc-telemetry", + "sc-transaction-pool", + "sc-transaction-pool-api", + "schnorrkel", + "secq256k1", + "serai-env", + "serai-runtime", + "sp-api", + "sp-block-builder", + "sp-blockchain", + "sp-consensus-babe", + "sp-core", + "sp-io", + "sp-keystore", + "sp-timestamp", + "substrate-build-script-utils", + "substrate-frame-rpc-system", + "tokio", + "zeroize", +] [[package]] name = "serai-orchestrator" @@ -7130,33 +9624,6 @@ dependencies = [ "serai-processor-scheduler-primitives", ] -[[package]] -name = "serai-processor-tests" -version = "0.1.0" -dependencies = [ - "bitcoin-serai", - "borsh", - "ciphersuite", - "curve25519-dalek", - "dkg", - "dockertest", - "hex", - "k256", - "monero-simple-request-rpc", - "monero-wallet", - "parity-scale-codec", - "rand_core", - "serai-client", - "serai-db", - "serai-docker-tests", - "serai-message-queue", - "serai-message-queue-tests", - "serai-processor-messages", - "serde_json", - "tokio", - "zeroize", -] - [[package]] name = "serai-processor-transaction-chaining-scheduler" version = "0.1.0" @@ -7416,7 +9883,7 @@ version = "3.12.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d6b6f7f2fcb69f747921f79f3926bd1e203fce4fef62c268dd3abfb6d86029aa" dependencies = [ - "base64", + "base64 0.22.1", "chrono", "hex", "indexmap 1.9.3", @@ -7427,6 +9894,30 @@ dependencies = [ "time", ] +[[package]] +name = "sha-1" +version = "0.9.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "99cd6713db3cf16b6c84e06321e049a9b9f699826e16096d23bbcc44d15d51a6" +dependencies = [ + "block-buffer 0.9.0", + "cfg-if", + "cpufeatures", + "digest 0.9.0", + "opaque-debug", +] + +[[package]] +name = "sha1" +version = "0.10.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e3bf829a2d51ab4a5ddf1352d8470c140cadc8301b2ae1789db023f01cedd6ba" +dependencies = [ + "cfg-if", + "cpufeatures", + "digest 0.10.7", +] + [[package]] name = "sha2" version = "0.10.8" @@ -7534,6 +10025,12 @@ dependencies = [ "autocfg", ] +[[package]] +name = "slice-group-by" +version = "0.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "826167069c09b99d56f31e9ae5c99049e932a98c9dc2dac47645b08dbbf76ba7" + [[package]] name = "smallvec" version = "1.13.2" @@ -7566,6 +10063,16 @@ dependencies = [ "subtle", ] +[[package]] +name = "socket2" +version = "0.4.10" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9f7916fc008ca5542385b89a3d3ce689953c143e9304a9bf8beec1de48994c0d" +dependencies = [ + "libc", + "winapi", +] + [[package]] name = "socket2" version = "0.5.8" @@ -7576,6 +10083,37 @@ dependencies = [ "windows-sys 0.52.0", ] +[[package]] +name = "soketto" +version = "0.7.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "41d1c5305e39e09653383c2c7244f2f78b3bcae37cf50c64cb4789c9f5096ec2" +dependencies = [ + "base64 0.13.1", + "bytes", + 
"futures", + "http 0.2.12", + "httparse", + "log", + "rand", + "sha-1", +] + +[[package]] +name = "soketto" +version = "0.8.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2e859df029d160cb88608f5d7df7fb4753fd20fdfb4de5644f3d8b8440841721" +dependencies = [ + "base64 0.22.1", + "bytes", + "futures", + "httparse", + "log", + "rand", + "sha1", +] + [[package]] name = "sp-api" version = "4.0.0-dev" @@ -7661,6 +10199,38 @@ dependencies = [ "sp-std", ] +[[package]] +name = "sp-blockchain" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "futures", + "log", + "parity-scale-codec", + "parking_lot 0.12.3", + "schnellru", + "sp-api", + "sp-consensus", + "sp-database", + "sp-runtime", + "sp-state-machine", + "thiserror 1.0.69", +] + +[[package]] +name = "sp-consensus" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "async-trait", + "futures", + "log", + "sp-inherents", + "sp-runtime", + "sp-state-machine", + "thiserror 1.0.69", +] + [[package]] name = "sp-consensus-babe" version = "0.10.0-dev" @@ -7775,6 +10345,15 @@ dependencies = [ "syn 2.0.94", ] +[[package]] +name = "sp-database" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "kvdb", + "parking_lot 0.12.3", +] + [[package]] name = "sp-debug-derive" version = "8.0.0" @@ -7832,6 +10411,17 @@ dependencies = [ "tracing-core", ] +[[package]] +name = "sp-keyring" +version = "24.0.0" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "lazy_static", + "sp-core", + "sp-runtime", + "strum 0.25.0", +] + [[package]] name = "sp-keystore" version = "0.27.0" @@ -7884,6 +10474,16 @@ dependencies = [ "regex", ] +[[package]] +name = "sp-rpc" +version = "6.0.0" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "rustc-hash 1.1.0", + "serde", + "sp-core", +] + [[package]] name = "sp-runtime" version = "24.0.0" @@ -8172,6 +10772,34 @@ version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f" +[[package]] +name = "static_init" +version = "1.0.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8a2a1c578e98c1c16fc3b8ec1328f7659a500737d7a0c6d625e73e830ff9c1f6" +dependencies = [ + "bitflags 1.3.2", + "cfg_aliases 0.1.1", + "libc", + "parking_lot 0.11.2", + "parking_lot_core 0.8.6", + "static_init_macro", + "winapi", +] + +[[package]] +name = "static_init_macro" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1389c88ddd739ec6d3f8f83343764a0e944cd23cfbf126a9796a714b0b6edd6f" +dependencies = [ + "cfg_aliases 0.1.1", + "memchr", + "proc-macro2", + "quote", + "syn 1.0.109", +] + [[package]] name = "std-shims" version = "0.1.1" @@ -8180,6 +10808,25 @@ dependencies = [ "spin 0.9.8", ] +[[package]] +name = "strobe-rs" +version = "0.8.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fabb238a1cccccfa4c4fb703670c0d157e1256c1ba695abf1b93bd2bb14bab2d" +dependencies = [ + "bitflags 1.3.2", + "byteorder", + "keccak", + "subtle", + "zeroize", +] + +[[package]] +name = "strsim" +version = "0.11.1" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f" + [[package]] name = "strum" version = "0.24.1" @@ -8255,6 +10902,42 @@ dependencies = [ "zeroize", ] +[[package]] +name = "substrate-build-script-utils" +version = "3.0.0" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" + +[[package]] +name = "substrate-frame-rpc-system" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "frame-system-rpc-runtime-api", + "futures", + "jsonrpsee", + "log", + "parity-scale-codec", + "sc-rpc-api", + "sc-transaction-pool-api", + "sp-api", + "sp-block-builder", + "sp-blockchain", + "sp-core", + "sp-runtime", +] + +[[package]] +name = "substrate-prometheus-endpoint" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#6e3f07bf5c98a6a3ec15f2b1a46148aa8c7d737a" +dependencies = [ + "hyper 0.14.30", + "log", + "prometheus", + "thiserror 1.0.69", + "tokio", +] + [[package]] name = "substrate-wasm-builder" version = "5.0.0-dev" @@ -8268,7 +10951,7 @@ dependencies = [ "sp-maybe-compressed-blob", "strum 0.25.0", "tempfile", - "toml", + "toml 0.7.8", "walkdir", "wasm-opt", ] @@ -8319,6 +11002,18 @@ version = "1.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0bf256ce5efdfa370213c1dabab5935a12e49f2c58d15e9eac2870d3b4f27263" +[[package]] +name = "synstructure" +version = "0.12.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f36bdaa60a83aca3921b5259d5400cbf5e90fc51931376a9bd4a0eb79aa7210f" +dependencies = [ + "proc-macro2", + "quote", + "syn 1.0.109", + "unicode-xid", +] + [[package]] name = "synstructure" version = "0.13.1" @@ -8401,6 +11096,12 @@ dependencies = [ "winapi-util", ] +[[package]] +name = "termtree" +version = "0.5.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8f50febec83f5ee1df3015341d8bd429f2d1cc62bcba7ea2076759d315084683" + [[package]] name = "thiserror" version = "1.0.69" @@ -8547,7 +11248,7 @@ dependencies = [ "parking_lot 0.12.3", "pin-project-lite", "signal-hook-registry", - "socket2", + "socket2 0.5.8", "tokio-macros", "windows-sys 0.52.0", ] @@ -8569,7 +11270,7 @@ version = "0.26.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5f6d0975eaace0cf0fcadee4e4aaa5da15b5c079146f2cffb67c113be122bf37" dependencies = [ - "rustls", + "rustls 0.23.20", "tokio", ] @@ -8593,11 +11294,21 @@ checksum = "d7fcaa8d55a2bdd6b83ace262b016eca0d79ee02818c5c1bcdf0305114081078" dependencies = [ "bytes", "futures-core", + "futures-io", "futures-sink", "pin-project-lite", "tokio", ] +[[package]] +name = "toml" +version = "0.5.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f4f7f0dd8d50a853a531c426359045b1998f04219d88799810762cd4ad314234" +dependencies = [ + "serde", +] + [[package]] name = "toml" version = "0.7.8" @@ -8643,6 +11354,17 @@ dependencies = [ "winnow 0.6.21", ] +[[package]] +name = "tower" +version = "0.4.13" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b8fa9be0de6cf49e536ce1851f987bd21a43b771b09473c3549a6c853db37c1c" +dependencies = [ + "tower-layer", + "tower-service", + "tracing", +] + [[package]] name = "tower" version = "0.5.2" @@ -8657,6 +11379,24 @@ dependencies = [ "tower-service", ] +[[package]] +name = "tower-http" +version = "0.4.4" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "61c5bb1d698276a2443e5ecfabc1008bf15a36c12e6a7176e7bf089ea9131140" +dependencies = [ + "bitflags 2.6.0", + "bytes", + "futures-core", + "futures-util", + "http 0.2.12", + "http-body 0.4.6", + "http-range-header", + "pin-project-lite", + "tower-layer", + "tower-service", +] + [[package]] name = "tower-layer" version = "0.3.3" @@ -8675,6 +11415,7 @@ version = "0.1.40" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c3523ab5a71916ccf420eebdf5521fcef02141234bbc0b8a49f2fdc4544364ef" dependencies = [ + "log", "pin-project-lite", "tracing-attributes", "tracing-core", @@ -8701,6 +11442,16 @@ dependencies = [ "valuable", ] +[[package]] +name = "tracing-futures" +version = "0.2.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "97d095ae15e245a057c8e8451bab9b3ee1e1f68e9ba2b4fbc18d0ac5237835f2" +dependencies = [ + "pin-project", + "tracing", +] + [[package]] name = "tracing-log" version = "0.1.4" @@ -8732,6 +11483,7 @@ dependencies = [ "chrono", "lazy_static", "matchers 0.0.1", + "parking_lot 0.11.2", "regex", "serde", "serde_json", @@ -8807,6 +11559,78 @@ dependencies = [ "hash-db", ] +[[package]] +name = "trust-dns-proto" +version = "0.22.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4f7f83d1e4a0e4358ac54c5c3681e5d7da5efc5a7a632c90bb6d6669ddd9bc26" +dependencies = [ + "async-trait", + "cfg-if", + "data-encoding", + "enum-as-inner 0.5.1", + "futures-channel", + "futures-io", + "futures-util", + "idna 0.2.3", + "ipnet", + "lazy_static", + "rand", + "smallvec", + "socket2 0.4.10", + "thiserror 1.0.69", + "tinyvec", + "tokio", + "tracing", + "url", +] + +[[package]] +name = "trust-dns-proto" +version = "0.23.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3119112651c157f4488931a01e586aa459736e9d6046d3bd9105ffb69352d374" +dependencies = [ + "async-trait", + "cfg-if", + "data-encoding", + "enum-as-inner 0.6.1", + "futures-channel", + "futures-io", + "futures-util", + "idna 0.4.0", + "ipnet", + "once_cell", + "rand", + "smallvec", + "thiserror 1.0.69", + "tinyvec", + "tokio", + "tracing", + "url", +] + +[[package]] +name = "trust-dns-resolver" +version = "0.23.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "10a3e6c3aff1718b3c73e395d1f35202ba2ffa847c6a62eea0db8fb4cfe30be6" +dependencies = [ + "cfg-if", + "futures-util", + "ipconfig", + "lru-cache", + "once_cell", + "parking_lot 0.12.3", + "rand", + "resolv-conf", + "smallvec", + "thiserror 1.0.69", + "tokio", + "tracing", + "trust-dns-proto 0.23.2", +] + [[package]] name = "try-lock" version = "0.2.5" @@ -8915,6 +11739,12 @@ name = "unsigned-varint" version = "0.7.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6889a77d49f1f013504cec6bf97a2c730394adedaeb1deb5ea08949a50541105" +dependencies = [ + "asynchronous-codec 0.6.2", + "bytes", + "futures-io", + "futures-util", +] [[package]] name = "unsigned-varint" @@ -8941,7 +11771,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "32f8b686cadd1473f4bd0117a5d28d36b1ade384ea9b5069a1c40aefed7fda60" dependencies = [ "form_urlencoded", - "idna", + "idna 1.0.3", "percent-encoding", ] @@ -8951,6 +11781,12 @@ version = "1.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be" +[[package]] +name = "utf8parse" +version = "0.2.2" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821" + [[package]] name = "uuid" version = "1.11.0" @@ -9040,6 +11876,19 @@ dependencies = [ "wasm-bindgen-shared", ] +[[package]] +name = "wasm-bindgen-futures" +version = "0.4.49" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "38176d9b44ea84e9184eff0bc34cc167ed044f816accfe5922e54d84cf48eca2" +dependencies = [ + "cfg-if", + "js-sys", + "once_cell", + "wasm-bindgen", + "web-sys", +] + [[package]] name = "wasm-bindgen-macro" version = "0.2.99" @@ -9078,6 +11927,15 @@ dependencies = [ "leb128", ] +[[package]] +name = "wasm-instrument" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2a47ecb37b9734d1085eaa5ae1a81e60801fd8c28d4cabdd8aedb982021918bc" +dependencies = [ + "parity-wasm", +] + [[package]] name = "wasm-opt" version = "0.114.2" @@ -9118,6 +11976,21 @@ dependencies = [ "cxx-build", ] +[[package]] +name = "wasm-timer" +version = "0.2.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "be0ecb0db480561e9a7642b5d3e4187c128914e58aa84330b9493e3eb68c5e7f" +dependencies = [ + "futures", + "js-sys", + "parking_lot 0.11.2", + "pin-utils", + "wasm-bindgen", + "wasm-bindgen-futures", + "web-sys", +] + [[package]] name = "wasmparser" version = "0.110.0" @@ -9146,11 +12019,14 @@ dependencies = [ "once_cell", "paste", "psm", + "rayon", "serde", "serde_json", "target-lexicon", "wasm-encoder", "wasmparser", + "wasmtime-cache", + "wasmtime-cranelift", "wasmtime-environ", "wasmtime-jit", "wasmtime-runtime", @@ -9166,6 +12042,66 @@ dependencies = [ "cfg-if", ] +[[package]] +name = "wasmtime-cache" +version = "12.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "31561fbbaa86d3c042696940bc9601146bf4aaec39ae725c86b5f1358d8d7023" +dependencies = [ + "anyhow", + "base64 0.21.7", + "bincode", + "directories-next", + "file-per-thread-logger", + "log", + "rustix", + "serde", + "sha2", + "toml 0.5.11", + "windows-sys 0.48.0", + "zstd 0.11.2+zstd.1.5.2", +] + +[[package]] +name = "wasmtime-cranelift" +version = "12.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8ae8ed7a4845f22be6b1ad80f33f43fa03445b03a02f2d40dca695129769cd1a" +dependencies = [ + "anyhow", + "cranelift-codegen", + "cranelift-control", + "cranelift-entity", + "cranelift-frontend", + "cranelift-native", + "cranelift-wasm", + "gimli 0.27.3", + "log", + "object 0.31.1", + "target-lexicon", + "thiserror 1.0.69", + "wasmparser", + "wasmtime-cranelift-shared", + "wasmtime-environ", + "wasmtime-versioned-export-macros", +] + +[[package]] +name = "wasmtime-cranelift-shared" +version = "12.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "86b17099f9320a1c481634d88101258917d5065717cf22b04ed75b1a8ea062b4" +dependencies = [ + "anyhow", + "cranelift-codegen", + "cranelift-control", + "cranelift-native", + "gimli 0.27.3", + "object 0.31.1", + "target-lexicon", + "wasmtime-environ", +] + [[package]] name = "wasmtime-environ" version = "12.0.2" @@ -9204,6 +12140,7 @@ dependencies = [ "serde", "target-lexicon", "wasmtime-environ", + "wasmtime-jit-debug", "wasmtime-jit-icache-coherence", "wasmtime-runtime", "windows-sys 0.48.0", @@ -9215,7 +12152,9 @@ version = "12.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "aef27ea6c34ef888030d15560037fe7ef27a5609fbbba8e1e3e41dc4245f5bb2" dependencies = [ 
+ "object 0.31.1", "once_cell", + "rustix", "wasmtime-versioned-export-macros", ] @@ -9314,6 +12253,24 @@ dependencies = [ "wasm-bindgen", ] +[[package]] +name = "webpki-roots" +version = "0.25.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5f20c57d8d7db6d3b86154206ae5d8fba62dd39573114de97c2cb0578251f8e1" + +[[package]] +name = "which" +version = "4.4.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "87ba24419a2078cd2b0f2ede2691b6c66d8e47836da3b6db8265ebad47afbfc7" +dependencies = [ + "either", + "home", + "once_cell", + "rustix", +] + [[package]] name = "wide" version = "0.7.30" @@ -9641,18 +12598,35 @@ dependencies = [ "zeroize", ] +[[package]] +name = "x509-parser" +version = "0.15.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7069fba5b66b9193bd2c5d3d4ff12b839118f6bcbef5328efafafb5395cf63da" +dependencies = [ + "asn1-rs 0.5.2", + "data-encoding", + "der-parser 8.2.0", + "lazy_static", + "nom", + "oid-registry 0.6.1", + "rusticata-macros", + "thiserror 1.0.69", + "time", +] + [[package]] name = "x509-parser" version = "0.16.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fcbc162f30700d6f3f82a24bf7cc62ffe7caea42c0b2cba8bf7f3ae50cf51f69" dependencies = [ - "asn1-rs", + "asn1-rs 0.6.2", "data-encoding", - "der-parser", + "der-parser 9.0.0", "lazy_static", "nom", - "oid-registry", + "oid-registry 0.7.1", "rusticata-macros", "thiserror 1.0.69", "time", diff --git a/Cargo.toml b/Cargo.toml index f11d5644..bfc5ff2d 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -144,9 +144,9 @@ members = [ "tests/docker", "tests/message-queue", - "tests/processor", - "tests/coordinator", - "tests/full-stack", + # TODO "tests/processor", + # TODO "tests/coordinator", + # TODO "tests/full-stack", "tests/reproducible-runtime", ] diff --git a/coordinator/substrate/src/canonical.rs b/coordinator/substrate/src/canonical.rs index 34165774..bc6db5ca 100644 --- a/coordinator/substrate/src/canonical.rs +++ b/coordinator/substrate/src/canonical.rs @@ -180,7 +180,7 @@ impl ContinuallyRan for CanonicalEventStream { batch = Some(ExecutedBatch { id: *id, publisher: *publishing_session, - external_network_block_hash: *external_network_block_hash, + external_network_block_hash: external_network_block_hash.0, in_instructions_hash: *in_instructions_hash, in_instruction_results: in_instruction_results .iter() diff --git a/networks/monero/ringct/bulletproofs/src/original/mod.rs b/networks/monero/ringct/bulletproofs/src/original/mod.rs index f001bc9b..e1b494d2 100644 --- a/networks/monero/ringct/bulletproofs/src/original/mod.rs +++ b/networks/monero/ringct/bulletproofs/src/original/mod.rs @@ -56,7 +56,7 @@ impl AggregateRangeWitness { } } -impl<'a> AggregateRangeStatement<'a> { +impl AggregateRangeStatement<'_> { fn initial_transcript(&self) -> (Scalar, Vec) { let V = self.commitments.iter().map(|c| c * INV_EIGHT()).collect::>(); (keccak256_to_scalar(V.iter().flat_map(|V| V.compress().to_bytes()).collect::>()), V) diff --git a/networks/monero/wallet/src/tests/extra.rs b/networks/monero/wallet/src/tests/extra.rs index 2f331e61..1d21490a 100644 --- a/networks/monero/wallet/src/tests/extra.rs +++ b/networks/monero/wallet/src/tests/extra.rs @@ -9,6 +9,7 @@ use crate::{ // https://github.com/monero-project/monero/blob/ac02af92867590ca80b2779a7bbeafa99ff94dcb/ // tests/unit_tests/test_tx_utils.cpp // which is licensed +#[allow(clippy::empty_line_after_outer_attr)] // rustfmt is for the comment, not for the 
const #[rustfmt::skip] /* Copyright (c) 2014-2022, The Monero Project diff --git a/processor/scanner/src/batch/mod.rs b/processor/scanner/src/batch/mod.rs index 736c3ac4..b583b6ed 100644 --- a/processor/scanner/src/batch/mod.rs +++ b/processor/scanner/src/batch/mod.rs @@ -5,6 +5,7 @@ use blake2::{digest::typenum::U32, Digest, Blake2b}; use scale::Encode; use serai_db::{DbTxn, Db}; +use serai_primitives::BlockHash; use serai_in_instructions_primitives::{MAX_BATCH_SIZE, Batch}; use primitives::{ @@ -106,7 +107,7 @@ impl ContinuallyRan for BatchTask { // If this block is notable, create the Batch(s) for it if notable { let network = S::NETWORK; - let external_network_block_hash = index::block_id(&txn, block_number); + let external_network_block_hash = BlockHash(index::block_id(&txn, block_number)); let mut batch_id = BatchDb::::acquire_batch_id(&mut txn); // start with empty batch diff --git a/substrate/abi/src/in_instructions.rs b/substrate/abi/src/in_instructions.rs index ddf8c657..e9afb74a 100644 --- a/substrate/abi/src/in_instructions.rs +++ b/substrate/abi/src/in_instructions.rs @@ -20,7 +20,7 @@ pub enum Event { network: NetworkId, publishing_session: Session, id: u32, - external_network_block_hash: [u8; 32], + external_network_block_hash: BlockHash, in_instructions_hash: [u8; 32], in_instruction_results: bitvec::vec::BitVec, }, diff --git a/substrate/client/tests/batch.rs b/substrate/client/tests/batch.rs index c19a4422..9893100b 100644 --- a/substrate/client/tests/batch.rs +++ b/substrate/client/tests/batch.rs @@ -8,12 +8,13 @@ use blake2::{ use scale::Encode; use serai_client::{ - primitives::{Amount, NetworkId, Coin, Balance, BlockHash, SeraiAddress}, + primitives::{BlockHash, NetworkId, Coin, Amount, Balance, SeraiAddress}, + coins::CoinsEvent, + validator_sets::primitives::Session, in_instructions::{ primitives::{InInstruction, InInstructionWithBalance, Batch}, InInstructionsEvent, }, - coins::CoinsEvent, Serai, }; @@ -32,9 +33,13 @@ serai_test!( let amount = Amount(OsRng.next_u64().saturating_add(1)); let balance = Balance { coin, amount }; + let mut external_network_block_hash = BlockHash([0; 32]); + OsRng.fill_bytes(&mut external_network_block_hash.0); + let batch = Batch { network, id, + external_network_block_hash, instructions: vec![InInstructionWithBalance { instruction: InInstruction::Transfer(address), balance, @@ -51,8 +56,11 @@ serai_test!( batches, vec![InInstructionsEvent::Batch { network, + publishing_session: Session(0), id, - instructions_hash: Blake2b::::digest(batch.instructions.encode()).into(), + external_network_block_hash, + in_instructions_hash: Blake2b::::digest(batch.instructions.encode()).into(), + in_instruction_results: bitvec::bitvec![u8, bitvec::order::Lsb0; 1; 1], }] ); } diff --git a/substrate/client/tests/burn.rs b/substrate/client/tests/burn.rs index b8b849d3..abb30f54 100644 --- a/substrate/client/tests/burn.rs +++ b/substrate/client/tests/burn.rs @@ -7,19 +7,22 @@ use blake2::{ use scale::Encode; -use serai_abi::coins::primitives::OutInstructionWithBalance; use sp_core::Pair; use serai_client::{ primitives::{ - Amount, NetworkId, Coin, Balance, BlockHash, SeraiAddress, ExternalAddress, + BlockHash, NetworkId, Coin, Amount, Balance, SeraiAddress, ExternalAddress, insecure_pair_from_name, }, + coins::{ + primitives::{OutInstruction, OutInstructionWithBalance}, + CoinsEvent, + }, + validator_sets::primitives::Session, in_instructions::{ InInstructionsEvent, primitives::{InInstruction, InInstructionWithBalance, Batch}, }, - 
coins::{primitives::OutInstruction, CoinsEvent}, Serai, SeraiCoins, }; @@ -45,7 +48,7 @@ serai_test!( let batch = Batch { network, id, - block: block_hash, + external_network_block_hash: block_hash, instructions: vec![InInstructionWithBalance { instruction: InInstruction::Transfer(address), balance, @@ -61,9 +64,11 @@ serai_test!( batches, vec![InInstructionsEvent::Batch { network, + publishing_session: Session(0), id, - block: block_hash, - instructions_hash: Blake2b::::digest(batch.instructions.encode()).into(), + external_network_block_hash: block_hash, + in_instructions_hash: Blake2b::::digest(batch.instructions.encode()).into(), + in_instruction_results: bitvec::bitvec![u8, bitvec::order::Lsb0; 1; 1], }] ); diff --git a/substrate/client/tests/common/genesis_liquidity.rs b/substrate/client/tests/common/genesis_liquidity.rs index c8c613f5..a01d58e1 100644 --- a/substrate/client/tests/common/genesis_liquidity.rs +++ b/substrate/client/tests/common/genesis_liquidity.rs @@ -10,12 +10,12 @@ use schnorrkel::Schnorrkel; use sp_core::{sr25519::Signature, Pair as PairTrait}; use serai_abi::{ - genesis_liquidity::primitives::{oraclize_values_message, Values}, - validator_sets::primitives::{musig_context, Session, ValidatorSet}, - in_instructions::primitives::{InInstruction, InInstructionWithBalance, Batch}, primitives::{ - Amount, NetworkId, Coin, Balance, BlockHash, SeraiAddress, insecure_pair_from_name, + BlockHash, NetworkId, Coin, Amount, Balance, SeraiAddress, insecure_pair_from_name, }, + validator_sets::primitives::{musig_context, Session, ValidatorSet}, + genesis_liquidity::primitives::{oraclize_values_message, Values}, + in_instructions::primitives::{InInstruction, InInstructionWithBalance, Batch}, }; use serai_client::{Serai, SeraiGenesisLiquidity}; @@ -53,7 +53,7 @@ pub async fn set_up_genesis( }) .collect::>(); - // set up bloch hash + // set up block hash let mut block = BlockHash([0; 32]); OsRng.fill_bytes(&mut block.0); @@ -65,7 +65,12 @@ pub async fn set_up_genesis( }) .or_insert(0); - let batch = Batch { network: coin.network(), id: batch_ids[&coin.network()], instructions }; + let batch = Batch { + network: coin.network(), + external_network_block_hash: block, + id: batch_ids[&coin.network()], + instructions, + }; provide_batch(serai, batch).await; } diff --git a/substrate/client/tests/common/in_instructions.rs b/substrate/client/tests/common/in_instructions.rs index 5f29f2ba..d2a0a930 100644 --- a/substrate/client/tests/common/in_instructions.rs +++ b/substrate/client/tests/common/in_instructions.rs @@ -9,7 +9,7 @@ use scale::Encode; use sp_core::Pair; use serai_client::{ - primitives::{insecure_pair_from_name, BlockHash, NetworkId, Balance, SeraiAddress}, + primitives::{BlockHash, NetworkId, Balance, SeraiAddress, insecure_pair_from_name}, validator_sets::primitives::{ValidatorSet, KeyPair}, in_instructions::{ primitives::{Batch, SignedBatch, batch_message, InInstruction, InInstructionWithBalance}, @@ -45,16 +45,29 @@ pub async fn provide_batch(serai: &Serai, batch: Batch) -> [u8; 32] { ) .await; - let batches = serai.as_of(block).in_instructions().batch_events().await.unwrap(); - // TODO: impl From for BatchEvent? 
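The assertions rewritten across these test files (such as the one replaced just below) check the `Batch` event's `in_instructions_hash` against a 32-byte Blake2b digest of the SCALE-encoded instructions. A minimal sketch of that digest, assuming the `blake2` and `scale` imports these tests already use plus `serai_client`'s `InInstructionWithBalance`:

    use blake2::{digest::typenum::U32, Digest, Blake2b};
    use scale::Encode;
    use serai_client::in_instructions::primitives::InInstructionWithBalance;

    // Blake2b parameterized to a 32-byte output, over the SCALE encoding of the
    // instructions; this is the `in_instructions_hash` the pallet emits.
    fn in_instructions_hash(instructions: &[InInstructionWithBalance]) -> [u8; 32] {
      Blake2b::<U32>::digest(instructions.encode()).into()
    }
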
- assert_eq!( - batches, - vec![InInstructionsEvent::Batch { - network: batch.network, - id: batch.id, - instructions_hash: Blake2b::::digest(batch.instructions.encode()).into(), - }], - ); + { + let mut batches = serai.as_of(block).in_instructions().batch_events().await.unwrap(); + assert_eq!(batches.len(), 1); + let InInstructionsEvent::Batch { + network, + publishing_session, + id, + external_network_block_hash, + in_instructions_hash, + in_instruction_results: _, + } = batches.swap_remove(0) + else { + panic!("Batch event wasn't Batch event") + }; + assert_eq!(network, batch.network); + assert_eq!(publishing_session, session); + assert_eq!(id, batch.id); + assert_eq!(external_network_block_hash, batch.external_network_block_hash); + assert_eq!( + in_instructions_hash, + <[u8; 32]>::from(Blake2b::::digest(batch.instructions.encode())) + ); + } // TODO: Check the tokens events @@ -75,7 +88,7 @@ pub async fn mint_coin( let batch = Batch { network, id: batch_id, - block: block_hash, + external_network_block_hash: block_hash, instructions: vec![InInstructionWithBalance { instruction: InInstruction::Transfer(address), balance, diff --git a/substrate/client/tests/dex.rs b/substrate/client/tests/dex.rs index d02d5260..7d193806 100644 --- a/substrate/client/tests/dex.rs +++ b/substrate/client/tests/dex.rs @@ -6,8 +6,8 @@ use serai_abi::in_instructions::primitives::DexCall; use serai_client::{ primitives::{ - Amount, NetworkId, Coin, Balance, BlockHash, insecure_pair_from_name, ExternalAddress, - SeraiAddress, + BlockHash, NetworkId, Coin, Amount, Balance, SeraiAddress, ExternalAddress, + insecure_pair_from_name, }, in_instructions::primitives::{ InInstruction, InInstructionWithBalance, Batch, IN_INSTRUCTION_EXECUTOR, OutAddress, @@ -229,7 +229,7 @@ serai_test!( let batch = Batch { network: NetworkId::Bitcoin, id: batch_id, - block: block_hash, + external_network_block_hash: block_hash, instructions: vec![InInstructionWithBalance { instruction: InInstruction::Dex(DexCall::SwapAndAddLiquidity(pair.public().into())), balance: Balance { coin: Coin::Bitcoin, amount: Amount(20_000_000_000_000) }, @@ -313,7 +313,7 @@ serai_test!( let batch = Batch { network: NetworkId::Monero, id: coin1_batch_id, - block: block_hash, + external_network_block_hash: block_hash, instructions: vec![InInstructionWithBalance { instruction: InInstruction::Dex(DexCall::Swap(out_balance, out_address)), balance: Balance { coin: coin1, amount: Amount(200_000_000_000_000) }, @@ -353,7 +353,7 @@ serai_test!( let batch = Batch { network: NetworkId::Ethereum, id: coin2_batch_id, - block: block_hash, + external_network_block_hash: block_hash, instructions: vec![InInstructionWithBalance { instruction: InInstruction::Dex(DexCall::Swap(out_balance, out_address.clone())), balance: Balance { coin: coin2, amount: Amount(200_000_000_000) }, @@ -391,7 +391,7 @@ serai_test!( let batch = Batch { network: NetworkId::Monero, id: coin1_batch_id, - block: block_hash, + external_network_block_hash: block_hash, instructions: vec![InInstructionWithBalance { instruction: InInstruction::Dex(DexCall::Swap(out_balance, out_address.clone())), balance: Balance { coin: coin1, amount: Amount(100_000_000_000_000) }, diff --git a/substrate/client/tests/emissions.rs b/substrate/client/tests/emissions.rs index f510d486..c4ec26df 100644 --- a/substrate/client/tests/emissions.rs +++ b/substrate/client/tests/emissions.rs @@ -4,13 +4,13 @@ use rand_core::{RngCore, OsRng}; use serai_client::TemporalSerai; use serai_abi::{ - 
emissions::primitives::{INITIAL_REWARD_PER_BLOCK, SECURE_BY}, - in_instructions::primitives::Batch, primitives::{ - BlockHash, Coin, COINS, FAST_EPOCH_DURATION, FAST_EPOCH_INITIAL_PERIOD, NETWORKS, - TARGET_BLOCK_TIME, + NETWORKS, COINS, TARGET_BLOCK_TIME, FAST_EPOCH_DURATION, FAST_EPOCH_INITIAL_PERIOD, BlockHash, + Coin, }, validator_sets::primitives::Session, + emissions::primitives::{INITIAL_REWARD_PER_BLOCK, SECURE_BY}, + in_instructions::primitives::Batch, }; use serai_client::{ @@ -42,7 +42,16 @@ async fn send_batches(serai: &Serai, ids: &mut HashMap) { let mut block = BlockHash([0; 32]); OsRng.fill_bytes(&mut block.0); - provide_batch(serai, Batch { network, id: ids[&network], block, instructions: vec![] }).await; + provide_batch( + serai, + Batch { + network, + id: ids[&network], + external_network_block_hash: block, + instructions: vec![], + }, + ) + .await; } } } diff --git a/substrate/client/tests/validator_sets.rs b/substrate/client/tests/validator_sets.rs index a2ccf22b..a4d9d130 100644 --- a/substrate/client/tests/validator_sets.rs +++ b/substrate/client/tests/validator_sets.rs @@ -7,7 +7,7 @@ use sp_core::{ use serai_client::{ primitives::{ - FAST_EPOCH_DURATION, TARGET_BLOCK_TIME, NETWORKS, EmbeddedEllipticCurve, NetworkId, BlockHash, + FAST_EPOCH_DURATION, TARGET_BLOCK_TIME, NETWORKS, BlockHash, NetworkId, EmbeddedEllipticCurve, insecure_pair_from_name, }, validator_sets::{ @@ -311,7 +311,8 @@ async fn validator_set_rotation() { // provide a batch to complete the handover and retire the previous set let mut block_hash = BlockHash([0; 32]); OsRng.fill_bytes(&mut block_hash.0); - let batch = Batch { network, id: 0, block: block_hash, instructions: vec![] }; + let batch = + Batch { network, id: 0, external_network_block_hash: block_hash, instructions: vec![] }; publish_tx( &serai, &SeraiInInstructions::execute_batch(SignedBatch { diff --git a/substrate/in-instructions/pallet/Cargo.toml b/substrate/in-instructions/pallet/Cargo.toml index 27ecb7bd..54cacf8a 100644 --- a/substrate/in-instructions/pallet/Cargo.toml +++ b/substrate/in-instructions/pallet/Cargo.toml @@ -19,6 +19,8 @@ ignored = ["scale", "scale-info"] workspace = true [dependencies] +bitvec = { version = "1", default-features = false, features = ["alloc"] } + scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive", "max-encoded-len"] } scale-info = { version = "2", default-features = false, features = ["derive"] } diff --git a/substrate/in-instructions/pallet/src/lib.rs b/substrate/in-instructions/pallet/src/lib.rs index 2666f5b2..0c01dc47 100644 --- a/substrate/in-instructions/pallet/src/lib.rs +++ b/substrate/in-instructions/pallet/src/lib.rs @@ -63,10 +63,10 @@ pub mod pallet { Batch { network: NetworkId, publishing_session: Session, - external_network_block_hash: [u8; 32], id: u32, + external_network_block_hash: BlockHash, in_instructions_hash: [u8; 32], - in_instruction_results: BitVec, + in_instruction_results: bitvec::vec::BitVec, }, Halt { network: NetworkId, @@ -101,9 +101,10 @@ pub mod pallet { // Use a dedicated transaction layer when executing this InInstruction // This lets it individually error without causing any storage modifications #[frame_support::transactional] - fn execute(instruction: InInstructionWithBalance) -> Result<(), DispatchError> { - match instruction.instruction { + fn execute(instruction: &InInstructionWithBalance) -> Result<(), DispatchError> { + match &instruction.instruction { InInstruction::Transfer(address) => { + let address = 
*address; Coins::::mint(address.into(), instruction.balance)?; } InInstruction::Dex(call) => { @@ -113,6 +114,7 @@ pub mod pallet { match call { DexCall::SwapAndAddLiquidity(address) => { let origin = RawOrigin::Signed(IN_INSTRUCTION_EXECUTOR.into()); + let address = *address; let coin = instruction.balance.coin; // mint the given coin on the account @@ -207,7 +209,9 @@ pub mod pallet { let coin_balance = Coins::::balance(IN_INSTRUCTION_EXECUTOR.into(), out_balance.coin); let instruction = OutInstructionWithBalance { - instruction: OutInstruction { address: out_address.as_external().unwrap() }, + instruction: OutInstruction { + address: out_address.clone().as_external().unwrap(), + }, balance: Balance { coin: out_balance.coin, amount: coin_balance }, }; Coins::::burn_with_instruction(origin.into(), instruction)?; @@ -216,12 +220,14 @@ pub mod pallet { } } InInstruction::GenesisLiquidity(address) => { + let address = *address; Coins::::mint(GENESIS_LIQUIDITY_ACCOUNT.into(), instruction.balance)?; GenesisLiq::::add_coin_liquidity(address.into(), instruction.balance)?; } InInstruction::SwapToStakedSRI(address, network) => { + let address = *address; Coins::::mint(POL_ACCOUNT.into(), instruction.balance)?; - Emissions::::swap_to_staked_sri(address.into(), network, instruction.balance)?; + Emissions::::swap_to_staked_sri(address.into(), *network, instruction.balance)?; } } Ok(()) @@ -259,7 +265,7 @@ pub mod pallet { impl Pallet { #[pallet::call_index(0)] #[pallet::weight((0, DispatchClass::Operational))] // TODO - pub fn execute_batch(origin: OriginFor, batch: SignedBatch) -> DispatchResult { + pub fn execute_batch(origin: OriginFor, _batch: SignedBatch) -> DispatchResult { ensure_none(origin)?; // The entire Batch execution is handled in pre_dispatch @@ -309,7 +315,7 @@ pub mod pallet { Err(InvalidTransaction::BadProof)?; } - let batch = batch.batch; + let batch = &batch.batch; if Halted::::contains_key(network) { Err(InvalidTransaction::Custom(1))?; @@ -343,8 +349,8 @@ pub mod pallet { LastBatch::::insert(batch.network, batch.id); let in_instructions_hash = blake2_256(&batch.instructions.encode()); - let mut in_instruction_results = BitVec::new(); - for (i, instruction) in batch.instructions.into_iter().enumerate() { + let mut in_instruction_results = bitvec::vec::BitVec::new(); + for instruction in &batch.instructions { // Verify this coin is for this network if instruction.balance.coin.network() != batch.network { Err(InvalidTransaction::Custom(2))?; @@ -363,7 +369,7 @@ pub mod pallet { }); ValidTransaction::with_tag_prefix("in-instructions") - .and_provides((batch.batch.network, batch.batch.id)) + .and_provides((batch.network, batch.id)) // Set a 10 block longevity, though this should be included in the next block .longevity(10) .propagate(true) diff --git a/substrate/in-instructions/primitives/src/lib.rs b/substrate/in-instructions/primitives/src/lib.rs index a4d97db8..a72f8fd2 100644 --- a/substrate/in-instructions/primitives/src/lib.rs +++ b/substrate/in-instructions/primitives/src/lib.rs @@ -19,7 +19,8 @@ use sp_application_crypto::sr25519::Signature; use sp_std::vec::Vec; use sp_runtime::RuntimeDebug; -use serai_primitives::{Balance, NetworkId, SeraiAddress, ExternalAddress, system_address}; +#[rustfmt::skip] +use serai_primitives::{BlockHash, NetworkId, Balance, SeraiAddress, ExternalAddress, system_address}; mod shorthand; pub use shorthand::*; @@ -106,7 +107,7 @@ pub struct InInstructionWithBalance { pub struct Batch { pub network: NetworkId, pub id: u32, - pub 
external_network_block_hash: [u8; 32], + pub external_network_block_hash: BlockHash, pub instructions: Vec, } diff --git a/substrate/node/Cargo.toml b/substrate/node/Cargo.toml index cfd8ebbe..5e6cd3f1 100644 --- a/substrate/node/Cargo.toml +++ b/substrate/node/Cargo.toml @@ -20,71 +20,71 @@ workspace = true name = "serai-node" [dependencies] -#rand_core = "0.6" -#zeroize = "1" -#hex = "0.4" -#log = "0.4" +rand_core = "0.6" +zeroize = "1" +hex = "0.4" +log = "0.4" -#schnorrkel = "0.11" +schnorrkel = "0.11" -#ciphersuite = { path = "../../crypto/ciphersuite" } -#embedwards25519 = { path = "../../crypto/evrf/embedwards25519" } -#secq256k1 = { path = "../../crypto/evrf/secq256k1" } +ciphersuite = { path = "../../crypto/ciphersuite" } +embedwards25519 = { path = "../../crypto/evrf/embedwards25519" } +secq256k1 = { path = "../../crypto/evrf/secq256k1" } -#libp2p = "0.52" +libp2p = "0.52" -#sp-core = { git = "https://github.com/serai-dex/substrate" } -#sp-keystore = { git = "https://github.com/serai-dex/substrate" } -#sp-timestamp = { git = "https://github.com/serai-dex/substrate" } -#sp-io = { git = "https://github.com/serai-dex/substrate" } -#sp-blockchain = { git = "https://github.com/serai-dex/substrate" } -#sp-api = { git = "https://github.com/serai-dex/substrate" } -#sp-block-builder = { git = "https://github.com/serai-dex/substrate" } -#sp-consensus-babe = { git = "https://github.com/serai-dex/substrate" } +sp-core = { git = "https://github.com/serai-dex/substrate" } +sp-keystore = { git = "https://github.com/serai-dex/substrate" } +sp-timestamp = { git = "https://github.com/serai-dex/substrate" } +sp-io = { git = "https://github.com/serai-dex/substrate" } +sp-blockchain = { git = "https://github.com/serai-dex/substrate" } +sp-api = { git = "https://github.com/serai-dex/substrate" } +sp-block-builder = { git = "https://github.com/serai-dex/substrate" } +sp-consensus-babe = { git = "https://github.com/serai-dex/substrate" } -#frame-benchmarking = { git = "https://github.com/serai-dex/substrate" } +frame-benchmarking = { git = "https://github.com/serai-dex/substrate" } -#serai-runtime = { path = "../runtime", features = ["std"] } +serai-runtime = { path = "../runtime", features = ["std"] } -#clap = { version = "4", features = ["derive"] } +clap = { version = "4", features = ["derive"] } -#futures-util = "0.3" -#tokio = { version = "1", features = ["sync", "rt-multi-thread"] } -#jsonrpsee = { version = "0.16", features = ["server"] } +futures-util = "0.3" +tokio = { version = "1", features = ["sync", "rt-multi-thread"] } +jsonrpsee = { version = "0.16", features = ["server"] } -#sc-offchain = { git = "https://github.com/serai-dex/substrate" } -#sc-transaction-pool = { git = "https://github.com/serai-dex/substrate" } -#sc-transaction-pool-api = { git = "https://github.com/serai-dex/substrate" } -#sc-basic-authorship = { git = "https://github.com/serai-dex/substrate" } -#sc-executor = { git = "https://github.com/serai-dex/substrate" } -#sc-service = { git = "https://github.com/serai-dex/substrate" } -#sc-client-api = { git = "https://github.com/serai-dex/substrate" } -#sc-network-common = { git = "https://github.com/serai-dex/substrate" } -#sc-network = { git = "https://github.com/serai-dex/substrate" } +sc-offchain = { git = "https://github.com/serai-dex/substrate" } +sc-transaction-pool = { git = "https://github.com/serai-dex/substrate" } +sc-transaction-pool-api = { git = "https://github.com/serai-dex/substrate" } +sc-basic-authorship = { git = "https://github.com/serai-dex/substrate" } 
+sc-executor = { git = "https://github.com/serai-dex/substrate" } +sc-service = { git = "https://github.com/serai-dex/substrate" } +sc-client-api = { git = "https://github.com/serai-dex/substrate" } +sc-network-common = { git = "https://github.com/serai-dex/substrate" } +sc-network = { git = "https://github.com/serai-dex/substrate" } -#sc-consensus = { git = "https://github.com/serai-dex/substrate" } -#sc-consensus-babe = { git = "https://github.com/serai-dex/substrate" } -#sc-consensus-grandpa = { git = "https://github.com/serai-dex/substrate" } -#sc-authority-discovery = { git = "https://github.com/serai-dex/substrate" } +sc-consensus = { git = "https://github.com/serai-dex/substrate" } +sc-consensus-babe = { git = "https://github.com/serai-dex/substrate" } +sc-consensus-grandpa = { git = "https://github.com/serai-dex/substrate" } +sc-authority-discovery = { git = "https://github.com/serai-dex/substrate" } -#sc-telemetry = { git = "https://github.com/serai-dex/substrate" } -#sc-cli = { git = "https://github.com/serai-dex/substrate" } +sc-telemetry = { git = "https://github.com/serai-dex/substrate" } +sc-cli = { git = "https://github.com/serai-dex/substrate" } -#sc-rpc-api = { git = "https://github.com/serai-dex/substrate" } +sc-rpc-api = { git = "https://github.com/serai-dex/substrate" } -#substrate-frame-rpc-system = { git = "https://github.com/serai-dex/substrate" } -#pallet-transaction-payment-rpc = { git = "https://github.com/serai-dex/substrate" } +substrate-frame-rpc-system = { git = "https://github.com/serai-dex/substrate" } +pallet-transaction-payment-rpc = { git = "https://github.com/serai-dex/substrate" } -#serai-env = { path = "../../common/env" } +serai-env = { path = "../../common/env" } [build-dependencies] -#substrate-build-script-utils = { git = "https://github.com/serai-dex/substrate" } +substrate-build-script-utils = { git = "https://github.com/serai-dex/substrate" } [features] -#default = [] -#fast-epoch = ["serai-runtime/fast-epoch"] -#runtime-benchmarks = [ -# "frame-benchmarking/runtime-benchmarks", +default = [] +fast-epoch = ["serai-runtime/fast-epoch"] +runtime-benchmarks = [ + "frame-benchmarking/runtime-benchmarks", -# "serai-runtime/runtime-benchmarks", -#] + "serai-runtime/runtime-benchmarks", +] diff --git a/substrate/runtime/build.rs b/substrate/runtime/build.rs index eba52b3e..d19c8315 100644 --- a/substrate/runtime/build.rs +++ b/substrate/runtime/build.rs @@ -1,5 +1,12 @@ use substrate_wasm_builder::WasmBuilder; fn main() { - WasmBuilder::new().with_current_project().export_heap_base().import_memory().build() + WasmBuilder::new() + .with_current_project() + // https://substrate.stackexchange.com/questions/12124 + // TODO: Remove once we've moved to polkadot-sdk + .disable_runtime_version_section_check() + .export_heap_base() + .import_memory() + .build() } diff --git a/substrate/runtime/src/abi.rs b/substrate/runtime/src/abi.rs index 81c8b202..99b79265 100644 --- a/substrate/runtime/src/abi.rs +++ b/substrate/runtime/src/abi.rs @@ -5,8 +5,6 @@ use scale::{Encode, Decode}; use serai_abi::Call; use crate::{ - Vec, - primitives::{PublicKey, SeraiAddress}, timestamp, coins, dex, genesis_liquidity, validator_sets::{self, MembershipProof}, in_instructions, signals, babe, grandpa, RuntimeCall, diff --git a/substrate/validator-sets/pallet/src/lib.rs b/substrate/validator-sets/pallet/src/lib.rs index 4fbddda4..18a668fb 100644 --- a/substrate/validator-sets/pallet/src/lib.rs +++ b/substrate/validator-sets/pallet/src/lib.rs @@ -1203,7 +1203,7 @@ pub mod pallet { // 
There must have been a previous session is PendingSlashReport is populated let set = ValidatorSet { network, session: Session(Self::session(network).unwrap().0 - 1) }; - if !key.verify(&report_slashes_message(&set, slashes), signature) { + if !key.verify(&slashes.report_slashes_message(), signature) { Err(InvalidTransaction::BadProof)?; } diff --git a/tests/message-queue/src/lib.rs b/tests/message-queue/src/lib.rs index e2bfd3a7..fc03b3c7 100644 --- a/tests/message-queue/src/lib.rs +++ b/tests/message-queue/src/lib.rs @@ -90,7 +90,8 @@ fn basic_functionality() { }, b"Hello, World!".to_vec(), ) - .await; + .await + .unwrap(); // Queue this twice, which message-queue should de-duplicate for _ in 0 .. 2 { @@ -103,7 +104,8 @@ fn basic_functionality() { }, b"Hello, World, again!".to_vec(), ) - .await; + .await + .unwrap(); } // Successfully get it @@ -146,7 +148,8 @@ fn basic_functionality() { }, b"Hello, World!".to_vec(), ) - .await; + .await + .unwrap(); let monero = MessageQueue::new( Service::Processor(NetworkId::Monero), From f690bf831f6e4ebb2d320b633bb3e5333e2c2f81 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 19 Jan 2025 02:36:34 -0500 Subject: [PATCH 329/368] Remove old code still marked TODO --- processor/TODO/main.rs | 23 --- processor/ethereum/TODO/tests/mod.rs | 86 ----------- processor/ethereum/TODO/tests/router.rs | 185 ------------------------ 3 files changed, 294 deletions(-) delete mode 100644 processor/ethereum/TODO/tests/router.rs diff --git a/processor/TODO/main.rs b/processor/TODO/main.rs index 1458a7fc..1585ea61 100644 --- a/processor/TODO/main.rs +++ b/processor/TODO/main.rs @@ -1,26 +1,3 @@ -use messages::{ - coordinator::{ - SubstrateSignableId, PlanMeta, CoordinatorMessage as CoordinatorCoordinatorMessage, - }, - CoordinatorMessage, -}; - -use serai_env as env; - -use message_queue::{Service, client::MessageQueue}; - -mod db; -pub use db::*; - -mod coordinator; -pub use coordinator::*; - -mod multisigs; -use multisigs::{MultisigEvent, MultisigManager}; - -#[cfg(test)] -mod tests; - async fn handle_coordinator_msg( txn: &mut D::Transaction<'_>, network: &N, diff --git a/processor/ethereum/TODO/tests/mod.rs b/processor/ethereum/TODO/tests/mod.rs index be9106d5..2e3e22b1 100644 --- a/processor/ethereum/TODO/tests/mod.rs +++ b/processor/ethereum/TODO/tests/mod.rs @@ -43,89 +43,3 @@ pub fn key_gen() -> (HashMap>, PublicKey) (keys, public_key) } - -// TODO: Use a proper error here -pub async fn send( - provider: &RootProvider, - wallet: &k256::ecdsa::SigningKey, - mut tx: TxLegacy, -) -> Option { - let verifying_key = *wallet.verifying_key().as_affine(); - let address = Address::from(address(&verifying_key.into())); - - // https://github.com/alloy-rs/alloy/issues/539 - // let chain_id = provider.get_chain_id().await.unwrap(); - // tx.chain_id = Some(chain_id); - tx.chain_id = None; - tx.nonce = provider.get_transaction_count(address).await.unwrap(); - // 100 gwei - tx.gas_price = 100_000_000_000u128; - - let sig = wallet.sign_prehash_recoverable(tx.signature_hash().as_ref()).unwrap(); - assert_eq!(address, tx.clone().into_signed(sig.into()).recover_signer().unwrap()); - assert!( - provider.get_balance(address).await.unwrap() > - ((U256::from(tx.gas_price) * U256::from(tx.gas_limit)) + tx.value) - ); - - let mut bytes = vec![]; - tx.encode_with_signature_fields(&Signature::from(sig), &mut bytes); - let pending_tx = provider.send_raw_transaction(&bytes).await.ok()?; - pending_tx.get_receipt().await.ok() -} - -pub async fn fund_account( - provider: &RootProvider, - 
wallet: &k256::ecdsa::SigningKey, - to_fund: Address, - value: U256, -) -> Option<()> { - let funding_tx = - TxLegacy { to: TxKind::Call(to_fund), gas_limit: 21_000, value, ..Default::default() }; - assert!(send(provider, wallet, funding_tx).await.unwrap().status()); - - Some(()) -} - -// TODO: Use a proper error here -pub async fn deploy_contract( - client: Arc<RootProvider<SimpleRequest>>, - wallet: &k256::ecdsa::SigningKey, - name: &str, -) -> Option<Address>
{ - let hex_bin_buf = std::fs::read_to_string(format!("./artifacts/{name}.bin")).unwrap(); - let hex_bin = - if let Some(stripped) = hex_bin_buf.strip_prefix("0x") { stripped } else { &hex_bin_buf }; - let bin = Bytes::from_hex(hex_bin).unwrap(); - - let deployment_tx = TxLegacy { - chain_id: None, - nonce: 0, - // 100 gwei - gas_price: 100_000_000_000u128, - gas_limit: 1_000_000, - to: TxKind::Create, - value: U256::ZERO, - input: bin, - }; - - let deployment_tx = deterministically_sign(deployment_tx); - - // Fund the deployer address - fund_account( - &client, - wallet, - deployment_tx.recover_signer().unwrap(), - U256::from(deployment_tx.tx().gas_limit) * U256::from(deployment_tx.tx().gas_price), - ) - .await?; - - let (deployment_tx, sig, _) = deployment_tx.into_parts(); - let mut bytes = vec![]; - deployment_tx.encode_with_signature_fields(&sig, &mut bytes); - let pending_tx = client.send_raw_transaction(&bytes).await.ok()?; - let receipt = pending_tx.get_receipt().await.ok()?; - assert!(receipt.status()); - - Some(receipt.contract_address.unwrap()) -} diff --git a/processor/ethereum/TODO/tests/router.rs b/processor/ethereum/TODO/tests/router.rs deleted file mode 100644 index 63e5f1d5..00000000 --- a/processor/ethereum/TODO/tests/router.rs +++ /dev/null @@ -1,185 +0,0 @@ -// TODO - -use std::{convert::TryFrom, sync::Arc, collections::HashMap}; - -use rand_core::OsRng; - -use group::Group; -use k256::ProjectivePoint; -use frost::{ - curve::Secp256k1, - Participant, ThresholdKeys, - algorithm::IetfSchnorr, - tests::{algorithm_machines, sign}, -}; - -use alloy_core::primitives::{Address, U256}; - -use alloy_simple_request_transport::SimpleRequest; -use alloy_rpc_types_eth::BlockTransactionsKind; -use alloy_rpc_client::ClientBuilder; -use alloy_provider::{Provider, RootProvider}; - -use alloy_node_bindings::{Anvil, AnvilInstance}; - -use crate::{ - crypto::*, - deployer::Deployer, - router::{Router, abi as router}, - tests::{key_gen, send, fund_account}, -}; - -async fn setup_test() -> ( - AnvilInstance, - Arc>, - u64, - Router, - HashMap>, - PublicKey, -) { - let anvil = Anvil::new().spawn(); - - let provider = RootProvider::new( - ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true), - ); - let chain_id = provider.get_chain_id().await.unwrap(); - let wallet = anvil.keys()[0].clone().into(); - let client = Arc::new(provider); - - // Make sure the Deployer constructor returns None, as it doesn't exist yet - assert!(Deployer::new(client.clone()).await.unwrap().is_none()); - - // Deploy the Deployer - let tx = Deployer::deployment_tx(); - fund_account( - &client, - &wallet, - tx.recover_signer().unwrap(), - U256::from(tx.tx().gas_limit) * U256::from(tx.tx().gas_price), - ) - .await - .unwrap(); - - let (tx, sig, _) = tx.into_parts(); - let mut bytes = vec![]; - tx.encode_with_signature_fields(&sig, &mut bytes); - - let pending_tx = client.send_raw_transaction(&bytes).await.unwrap(); - let receipt = pending_tx.get_receipt().await.unwrap(); - assert!(receipt.status()); - let deployer = - Deployer::new(client.clone()).await.expect("network error").expect("deployer wasn't deployed"); - - let (keys, public_key) = key_gen(); - - // Verify the Router constructor returns None, as it doesn't exist yet - assert!(deployer.find_router(client.clone(), &public_key).await.unwrap().is_none()); - - // Deploy the router - let receipt = send(&client, &anvil.keys()[0].clone().into(), deployer.deploy_router(&public_key)) - .await - .unwrap(); - assert!(receipt.status()); - let contract 
= deployer.find_router(client.clone(), &public_key).await.unwrap().unwrap(); - - (anvil, client, chain_id, contract, keys, public_key) -} - -async fn latest_block_hash(client: &RootProvider) -> [u8; 32] { - client - .get_block(client.get_block_number().await.unwrap().into(), BlockTransactionsKind::Hashes) - .await - .unwrap() - .unwrap() - .header - .hash - .0 -} - -#[tokio::test] -async fn test_deploy_contract() { - let (_anvil, client, _, router, _, public_key) = setup_test().await; - - let block_hash = latest_block_hash(&client).await; - assert_eq!(router.serai_key(block_hash).await.unwrap(), public_key); - assert_eq!(router.nonce(block_hash).await.unwrap(), U256::try_from(1u64).unwrap()); - // TODO: Check it emitted SeraiKeyUpdated(public_key) at its genesis -} - -pub fn hash_and_sign( - keys: &HashMap>, - public_key: &PublicKey, - message: &[u8], -) -> Signature { - let algo = IetfSchnorr::::ietf(); - let sig = - sign(&mut OsRng, &algo, keys.clone(), algorithm_machines(&mut OsRng, &algo, keys), message); - - Signature::new(public_key, message, sig).unwrap() -} - -#[tokio::test] -async fn test_router_update_serai_key() { - let (anvil, client, chain_id, contract, keys, public_key) = setup_test().await; - - let next_key = loop { - let point = ProjectivePoint::random(&mut OsRng); - let Some(next_key) = PublicKey::new(point) else { continue }; - break next_key; - }; - - let message = Router::update_serai_key_message( - U256::try_from(chain_id).unwrap(), - U256::try_from(1u64).unwrap(), - &next_key, - ); - let sig = hash_and_sign(&keys, &public_key, &message); - - let first_block_hash = latest_block_hash(&client).await; - assert_eq!(contract.serai_key(first_block_hash).await.unwrap(), public_key); - - let receipt = - send(&client, &anvil.keys()[0].clone().into(), contract.update_serai_key(&next_key, &sig)) - .await - .unwrap(); - assert!(receipt.status()); - - let second_block_hash = latest_block_hash(&client).await; - assert_eq!(contract.serai_key(second_block_hash).await.unwrap(), next_key); - // Check this does still offer the historical state - assert_eq!(contract.serai_key(first_block_hash).await.unwrap(), public_key); - // TODO: Check logs - - println!("gas used: {:?}", receipt.gas_used); - // println!("logs: {:?}", receipt.logs); -} - -#[tokio::test] -async fn test_router_execute() { - let (anvil, client, chain_id, contract, keys, public_key) = setup_test().await; - - let to = Address::from([0; 20]); - let value = U256::ZERO; - let tx = router::OutInstruction { to, value, calls: vec![] }; - let txs = vec![tx]; - - let first_block_hash = latest_block_hash(&client).await; - let nonce = contract.nonce(first_block_hash).await.unwrap(); - assert_eq!(nonce, U256::try_from(1u64).unwrap()); - - let message = Router::execute_message(U256::try_from(chain_id).unwrap(), nonce, txs.clone()); - let sig = hash_and_sign(&keys, &public_key, &message); - - let receipt = - send(&client, &anvil.keys()[0].clone().into(), contract.execute(&txs, &sig)).await.unwrap(); - assert!(receipt.status()); - - let second_block_hash = latest_block_hash(&client).await; - assert_eq!(contract.nonce(second_block_hash).await.unwrap(), U256::try_from(2u64).unwrap()); - // Check this does still offer the historical state - assert_eq!(contract.nonce(first_block_hash).await.unwrap(), U256::try_from(1u64).unwrap()); - // TODO: Check logs - - println!("gas used: {:?}", receipt.gas_used); - // println!("logs: {:?}", receipt.logs); -} From c8f3a32fdf8a161f40eba7d8feb697e33be8952f Mon Sep 17 00:00:00 2001 From: Luke Parker Date: 
Tue, 21 Jan 2025 03:49:29 -0500 Subject: [PATCH 330/368] Replace custom read/write impls in router with borsh --- Cargo.lock | 2 + processor/ethereum/primitives/Cargo.toml | 2 + processor/ethereum/primitives/src/borsh.rs | 24 +++ processor/ethereum/primitives/src/lib.rs | 15 ++ processor/ethereum/router/Cargo.toml | 2 + processor/ethereum/router/src/lib.rs | 156 ++++-------------- processor/ethereum/router/src/tests/mod.rs | 8 +- .../ethereum/router/src/tests/read_write.rs | 85 ---------- processor/ethereum/src/primitives/output.rs | 4 +- 9 files changed, 81 insertions(+), 217 deletions(-) create mode 100644 processor/ethereum/primitives/src/borsh.rs delete mode 100644 processor/ethereum/router/src/tests/read_write.rs diff --git a/Cargo.lock b/Cargo.lock index b8cdc63f..0e566e76 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -9455,6 +9455,7 @@ version = "0.1.0" dependencies = [ "alloy-consensus", "alloy-primitives", + "borsh", "group", "k256", ] @@ -9475,6 +9476,7 @@ dependencies = [ "alloy-sol-macro-input", "alloy-sol-types", "alloy-transport", + "borsh", "build-solidity-contracts", "ethereum-schnorr-contract", "group", diff --git a/processor/ethereum/primitives/Cargo.toml b/processor/ethereum/primitives/Cargo.toml index 05b23189..89869cb8 100644 --- a/processor/ethereum/primitives/Cargo.toml +++ b/processor/ethereum/primitives/Cargo.toml @@ -17,6 +17,8 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + group = { version = "0.13", default-features = false } k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic"] } diff --git a/processor/ethereum/primitives/src/borsh.rs b/processor/ethereum/primitives/src/borsh.rs new file mode 100644 index 00000000..d7f30dbf --- /dev/null +++ b/processor/ethereum/primitives/src/borsh.rs @@ -0,0 +1,24 @@ +use ::borsh::{io, BorshSerialize, BorshDeserialize}; + +use alloy_primitives::{U256, Address}; + +/// Serialize a U256 with a borsh-compatible API. +pub fn serialize_u256(value: &U256, writer: &mut impl io::Write) -> io::Result<()> { + let value: [u8; 32] = value.to_be_bytes(); + value.serialize(writer) +} + +/// Deserialize a U256 with a borsh-compatible API. +pub fn deserialize_u256(reader: &mut impl io::Read) -> io::Result<U256> { + <[u8; 32]>::deserialize_reader(reader).map(|value| U256::from_be_bytes(value)) +} + +/// Serialize an address with a borsh-compatible API. +pub fn serialize_address(address: &Address, writer: &mut impl io::Write) -> io::Result<()> { + <[u8; 20]>::from(address.0).serialize(writer) +} + +/// Deserialize an address with a borsh-compatible API. +pub fn deserialize_address(reader: &mut impl io::Read) -> io::Result<Address>
{ + <[u8; 20]>::deserialize_reader(reader).map(|address| Address(address.into())) +} diff --git a/processor/ethereum/primitives/src/lib.rs b/processor/ethereum/primitives/src/lib.rs index dadc5424..44d08e5a 100644 --- a/processor/ethereum/primitives/src/lib.rs +++ b/processor/ethereum/primitives/src/lib.rs @@ -2,12 +2,27 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] +use ::borsh::{BorshSerialize, BorshDeserialize}; + use group::ff::PrimeField; use k256::Scalar; use alloy_primitives::PrimitiveSignature; use alloy_consensus::{SignableTransaction, Signed, TxLegacy}; +mod borsh; +pub use borsh::*; + +/// An index of a log within a block. +#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] +#[borsh(crate = "::borsh")] +pub struct LogIndex { + /// The hash of the block which produced this log. + pub block_hash: [u8; 32], + /// The index of this log within the execution of the block. + pub index_within_block: u64, +} + /// The Keccak256 hash function. pub fn keccak256(data: impl AsRef<[u8]>) -> [u8; 32] { alloy_primitives::keccak256(data.as_ref()).into() diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index 46b1f203..4b737a00 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -17,6 +17,8 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] +borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } + group = { version = "0.13", default-features = false } alloy-core = { version = "0.8", default-features = false } diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index e13e9eba..d8cac48a 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -2,7 +2,9 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use std::{sync::Arc, io, collections::HashSet}; +use std::{sync::Arc, collections::HashSet}; + +use borsh::{BorshSerialize, BorshDeserialize}; use group::ff::PrimeField; @@ -16,6 +18,7 @@ use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; +use ethereum_primitives::LogIndex; use ethereum_schnorr::{PublicKey, Signature}; use ethereum_deployer::Deployer; use erc20::{Transfer, Erc20}; @@ -65,12 +68,18 @@ impl From<&Signature> for abi::Signature { } /// A coin on Ethereum. -#[derive(Clone, Copy, PartialEq, Eq, Debug)] +#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum Coin { /// Ether, the native coin of Ethereum. Ether, /// An ERC20 token. - Erc20(Address), + Erc20( + #[borsh( + serialize_with = "ethereum_primitives::serialize_address", + deserialize_with = "ethereum_primitives::deserialize_address" + )] + Address, + ), } impl Coin { @@ -80,100 +89,31 @@ impl Coin { Coin::Erc20(address) => *address, } } - - /// Read a `Coin`. - pub fn read(reader: &mut R) -> io::Result { - let mut kind = [0xff]; - reader.read_exact(&mut kind)?; - Ok(match kind[0] { - 0 => Coin::Ether, - 1 => { - let mut address = [0; 20]; - reader.read_exact(&mut address)?; - Coin::Erc20(address.into()) - } - _ => Err(io::Error::other("unrecognized Coin type"))?, - }) - } - - /// Write the `Coin`. 
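The borsh derives this patch adds (`Coin` above, `InInstruction` below) replace the hand-written `read`/`write` pairs being deleted throughout this file. A minimal sketch of the round-trip they now provide, with hypothetical values, assuming this patch's `InInstruction`, `Coin`, and `LogIndex` types are in scope:

    use borsh::BorshDeserialize;
    use alloy_primitives::{U256, Address};

    // Serialize via the derived impl (which routes U256/Address through the
    // helpers in ethereum-primitives), then confirm deserializing round-trips.
    let instruction = InInstruction {
      id: LogIndex { block_hash: [0; 32], index_within_block: 0 },
      from: Address::ZERO,
      coin: Coin::Ether,
      amount: U256::from(1u64),
      data: vec![],
    };
    let bytes = borsh::to_vec(&instruction).unwrap();
    assert_eq!(InInstruction::try_from_slice(&bytes).unwrap(), instruction);
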
- pub fn write(&self, writer: &mut W) -> io::Result<()> { - match self { - Coin::Ether => writer.write_all(&[0]), - Coin::Erc20(token) => { - writer.write_all(&[1])?; - writer.write_all(token.as_ref()) - } - } - } } /// An InInstruction from the Router. -#[derive(Clone, PartialEq, Eq, Debug)] +#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub struct InInstruction { /// The ID for this `InInstruction`. - pub id: ([u8; 32], u64), + pub id: LogIndex, /// The address which transferred these coins to Serai. - pub from: [u8; 20], + #[borsh( + serialize_with = "ethereum_primitives::serialize_address", + deserialize_with = "ethereum_primitives::deserialize_address" + )] + pub from: Address, /// The coin transferred. pub coin: Coin, /// The amount transferred. + #[borsh( + serialize_with = "ethereum_primitives::serialize_u256", + deserialize_with = "ethereum_primitives::deserialize_u256" + )] pub amount: U256, /// The data associated with the transfer. pub data: Vec, } -impl InInstruction { - /// Read an `InInstruction`. - pub fn read(reader: &mut R) -> io::Result { - let id = { - let mut id_hash = [0; 32]; - reader.read_exact(&mut id_hash)?; - let mut id_pos = [0; 8]; - reader.read_exact(&mut id_pos)?; - let id_pos = u64::from_le_bytes(id_pos); - (id_hash, id_pos) - }; - - let mut from = [0; 20]; - reader.read_exact(&mut from)?; - - let coin = Coin::read(reader)?; - let mut amount = [0; 32]; - reader.read_exact(&mut amount)?; - let amount = U256::from_le_slice(&amount); - - let mut data_len = [0; 4]; - reader.read_exact(&mut data_len)?; - let data_len = usize::try_from(u32::from_le_bytes(data_len)) - .map_err(|_| io::Error::other("InInstruction data exceeded 2**32 in length"))?; - let mut data = vec![0; data_len]; - reader.read_exact(&mut data)?; - - Ok(InInstruction { id, from, coin, amount, data }) - } - - /// Write the `InInstruction`. - pub fn write(&self, writer: &mut W) -> io::Result<()> { - writer.write_all(&self.id.0)?; - writer.write_all(&self.id.1.to_le_bytes())?; - - writer.write_all(&self.from)?; - - self.coin.write(writer)?; - writer.write_all(&self.amount.as_le_bytes())?; - - writer.write_all( - &u32::try_from(self.data.len()) - .map_err(|_| { - io::Error::other("InInstruction being written had data exceeding 2**32 in length") - })? - .to_le_bytes(), - )?; - writer.write_all(&self.data) - } -} - /// A list of `OutInstruction`s. #[derive(Clone)] pub struct OutInstructions(Vec); @@ -205,7 +145,7 @@ impl From<&[(SeraiAddress, U256)]> for OutInstructions { } /// An action which was executed by the Router. -#[derive(Clone, PartialEq, Eq, Debug)] +#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum Executed { /// New key was set. SetKey { @@ -230,44 +170,6 @@ impl Executed { Executed::SetKey { nonce, .. } | Executed::Batch { nonce, .. } => *nonce, } } - - /// Write the Executed. - pub fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { - match self { - Self::SetKey { nonce, key } => { - writer.write_all(&[0])?; - writer.write_all(&nonce.to_le_bytes())?; - writer.write_all(key) - } - Self::Batch { nonce, message_hash } => { - writer.write_all(&[1])?; - writer.write_all(&nonce.to_le_bytes())?; - writer.write_all(message_hash) - } - } - } - - /// Read an Executed. 
- pub fn read(reader: &mut impl io::Read) -> io::Result { - let mut kind = [0xff]; - reader.read_exact(&mut kind)?; - if kind[0] >= 2 { - Err(io::Error::other("unrecognized type of Executed"))?; - } - - let mut nonce = [0; 8]; - reader.read_exact(&mut nonce)?; - let nonce = u64::from_le_bytes(nonce); - - let mut payload = [0; 32]; - reader.read_exact(&mut payload)?; - - Ok(match kind[0] { - 0 => Self::SetKey { nonce, key: payload }, - 1 => Self::Batch { nonce, message_hash: payload }, - _ => unreachable!(), - }) - } } /// A view of the Router for Serai. @@ -452,17 +354,17 @@ impl Router { ))?; } - let id = ( - log + let id = LogIndex { + block_hash: log .block_hash .ok_or_else(|| { TransportErrorKind::Custom("log didn't have its block hash set".to_string().into()) })? .into(), - log.log_index.ok_or_else(|| { + index_within_block: log.log_index.ok_or_else(|| { TransportErrorKind::Custom("log didn't have its index set".to_string().into()) })?, - ); + }; let tx_hash = log.transaction_hash.ok_or_else(|| { TransportErrorKind::Custom("log didn't have its transaction hash set".to_string().into()) @@ -551,7 +453,7 @@ impl Router { in_instructions.push(InInstruction { id, - from: *log.from.0, + from: log.from, coin, amount: log.amount, data: log.instruction.as_ref().to_vec(), diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index bb0da393..41363daf 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -17,13 +17,12 @@ use alloy_provider::RootProvider; use alloy_node_bindings::{Anvil, AnvilInstance}; +use ethereum_primitives::LogIndex; use ethereum_schnorr::{PublicKey, Signature}; use ethereum_deployer::Deployer; use crate::{Coin, OutInstructions, Router}; -mod read_write; - #[test] fn execute_reentrancy_guard() { let hash = alloy_core::primitives::keccak256(b"ReentrancyGuard Router.execute"); @@ -217,7 +216,10 @@ async fn test_eth_in_instruction() { assert_eq!(parsed_in_instructions.len(), 1); assert_eq!( parsed_in_instructions[0].id, - (<[u8; 32]>::from(receipt.block_hash.unwrap()), receipt.inner.logs()[0].log_index.unwrap()) + LogIndex { + block_hash: *receipt.block_hash.unwrap(), + index_within_block: receipt.inner.logs()[0].log_index.unwrap(), + }, ); assert_eq!(parsed_in_instructions[0].from, signer); assert_eq!(parsed_in_instructions[0].coin, Coin::Ether); diff --git a/processor/ethereum/router/src/tests/read_write.rs b/processor/ethereum/router/src/tests/read_write.rs deleted file mode 100644 index 3b6e6b73..00000000 --- a/processor/ethereum/router/src/tests/read_write.rs +++ /dev/null @@ -1,85 +0,0 @@ -use rand_core::{RngCore, OsRng}; - -use alloy_core::primitives::U256; - -use crate::{Coin, InInstruction, Executed}; - -fn coins() -> [Coin; 2] { - [Coin::Ether, { - let mut erc20 = [0; 20]; - OsRng.fill_bytes(&mut erc20); - Coin::Erc20(erc20.into()) - }] -} - -#[test] -fn test_coin_read_write() { - for coin in coins() { - let mut res = vec![]; - coin.write(&mut res).unwrap(); - assert_eq!(coin, Coin::read(&mut res.as_slice()).unwrap()); - } -} - -#[test] -fn test_in_instruction_read_write() { - for coin in coins() { - let instruction = InInstruction { - id: ( - { - let mut tx_id = [0; 32]; - OsRng.fill_bytes(&mut tx_id); - tx_id - }, - OsRng.next_u64(), - ), - from: { - let mut from = [0; 20]; - OsRng.fill_bytes(&mut from); - from - }, - coin, - amount: U256::from_le_bytes({ - let mut amount = [0; 32]; - OsRng.fill_bytes(&mut amount); - amount - }), - data: { - let len = 
usize::try_from(OsRng.next_u64() % 65536).unwrap(); - let mut data = vec![0; len]; - OsRng.fill_bytes(&mut data); - data - }, - }; - - let mut buf = vec![]; - instruction.write(&mut buf).unwrap(); - assert_eq!(InInstruction::read(&mut buf.as_slice()).unwrap(), instruction); - } -} - -#[test] -fn test_executed_read_write() { - for executed in [ - Executed::SetKey { - nonce: OsRng.next_u64(), - key: { - let mut key = [0; 32]; - OsRng.fill_bytes(&mut key); - key - }, - }, - Executed::Batch { - nonce: OsRng.next_u64(), - message_hash: { - let mut message_hash = [0; 32]; - OsRng.fill_bytes(&mut message_hash); - message_hash - }, - }, - ] { - let mut res = vec![]; - executed.write(&mut res).unwrap(); - assert_eq!(executed, Executed::read(&mut res.as_slice()).unwrap()); - } -} diff --git a/processor/ethereum/src/primitives/output.rs b/processor/ethereum/src/primitives/output.rs index 2215c29d..f7aaa1f8 100644 --- a/processor/ethereum/src/primitives/output.rs +++ b/processor/ethereum/src/primitives/output.rs @@ -145,7 +145,7 @@ impl ReceivedOutput<<Secp256k1 as Ciphersuite>::G, Address> for Output { Output::Output { key, instruction } => { writer.write_all(&[0])?; writer.write_all(key.to_bytes().as_ref())?; - instruction.write(writer) + instruction.serialize(writer) } Output::Eventuality { key, nonce } => { writer.write_all(&[1])?; @@ -164,7 +164,7 @@ impl ReceivedOutput<<Secp256k1 as Ciphersuite>::G, Address> for Output { Ok(match kind[0] { 0 => { let key = Secp256k1::read_G(reader)?; - let instruction = EthereumInInstruction::read(reader)?; + let instruction = EthereumInInstruction::deserialize_reader(reader)?; Self::Output { key, instruction } } 1 => { From 373e794d2c603999d72ea634aba67f059a5f70a0 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 22 Jan 2025 22:45:51 -0500 Subject: [PATCH 331/368] Check the escaped-to address has code set Document choice not to use a confirmation flow there as well. --- .../ethereum/router/contracts/IRouter.sol | 9 +++-- .../ethereum/router/contracts/Router.sol | 37 ++++++++++++++----- 2 files changed, 33 insertions(+), 13 deletions(-) diff --git a/processor/ethereum/router/contracts/IRouter.sol b/processor/ethereum/router/contracts/IRouter.sol index cf83bc51..c5e9a2fa 100644 --- a/processor/ethereum/router/contracts/IRouter.sol +++ b/processor/ethereum/router/contracts/IRouter.sol @@ -34,11 +34,11 @@ interface IRouterWithoutCollisions { * An `OutInstruction` is considered as having succeeded if the call transferring ETH doesn't * fail, the ERC20 transfer doesn't fail, and any executed code doesn't revert. */ - event Executed(uint256 indexed nonce, bytes32 indexed messageHash, bytes results); + event Batch(uint256 indexed nonce, bytes32 indexed messageHash, bytes results); /// @notice Emitted when `escapeHatch` is invoked /// @param escapeTo The address to escape to - event EscapeHatch(address indexed escapeTo); + event EscapeHatch(uint256 indexed nonce, address indexed escapeTo); /// @notice Emitted when coins escape through the escape hatch /// @param coin The coin which escaped @@ -122,7 +122,10 @@ interface IRouter is IRouterWithoutCollisions { } /// @title The type of destination - /// @dev A destination is either an address or a blob of code to deploy and call + /** + * @dev A destination is either an ABI-encoded address or an ABI-encoded `CodeDestination` + * containing code to deploy (invoking its constructor).
+ */ enum DestinationType { Address, Code diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 12d4fa9c..526e1b9c 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -25,13 +25,12 @@ import "IRouter.sol"; /// @author Luke Parker /// @notice Intakes coins for the Serai network and handles relaying batches of transfers out contract Router is IRouterWithoutCollisions { + /// @dev The code hash for a non-empty account without code + bytes32 constant ACCOUNT_WITHOUT_CODE_CODEHASH = keccak256(""); + /// @dev The address in transient storage used for the reentrancy guard - bytes32 constant EXECUTE_REENTRANCY_GUARD_SLOT = bytes32( - /* - keccak256("ReentrancyGuard Router.execute") - 1 - */ - 0xcf124a063de1614fedbd6b47187f98bf8873a1ae83da5c179a5881162f5b2401 - ); + bytes32 constant EXECUTE_REENTRANCY_GUARD_SLOT = + bytes32(uint256(keccak256("ReentrancyGuard Router.execute")) - 1); /** * @dev The next nonce used to determine the address of contracts deployed with CREATE. This is @@ -509,11 +508,11 @@ contract Router is IRouterWithoutCollisions { } /* - Emit execution with the status of all included events. + Emit batch execution with the status of all included events. This is an effect after interactions yet we have a reentrancy guard making this safe. */ - emit Executed(nonceUsed, message, results); + emit Batch(nonceUsed, message, results); // Transfer the fee to the relayer transferOut(msg.sender, coin, fee); @@ -529,13 +528,31 @@ contract Router is IRouterWithoutCollisions { // @param escapeTo The address to escape to function escapeHatchDCDD91CC() external { // Verify the signature - (, bytes memory args,) = verifySignature(_seraiKey); + (uint256 nonceUsed, bytes memory args,) = verifySignature(_seraiKey); (,, address escapeTo) = abi.decode(args, (bytes32, bytes32, address)); if (escapeTo == address(0)) { revert InvalidEscapeAddress(); } + + /* + We could define the escape hatch as having its own confirmation flow, as new keys do, but new + contracts don't face all of the cryptographic concerns faced by new keys. New contracts also + would presumably be moved to after strict review, making the chance of specifying the wrong + contract incredibly unlikely. + + The only check performed accordingly (with no confirmation flow) is that the new contract is + in fact a contract. This is done to confirm the contract was successfully deployed on this + blockchain. + */ + { + bytes32 codehash = escapeTo.codehash; + if ((codehash == bytes32(0)) || (codehash == ACCOUNT_WITHOUT_CODE_CODEHASH)) { + revert InvalidEscapeAddress(); + } + } + /* We want to define the escape hatch so coins here now, and latently received, can be forwarded. If the last Serai key set could update the escape hatch, they could siphon off latently @@ -546,7 +563,7 @@ contract Router is IRouterWithoutCollisions { } _escapedTo = escapeTo; - emit EscapeHatch(escapeTo); + emit EscapeHatch(nonceUsed, escapeTo); } /// @notice Escape coins after the escape hatch has been invoked From 6508957cbc152fe29d46b525ad1015c90b0737e1 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 23 Jan 2025 00:03:54 -0500 Subject: [PATCH 332/368] Make a proper nonReentrant modifier A transaction couldn't call execute twice within a single TX prior. Now, it can. Also adds a bit more context to the escape hatch events/errors. 
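Both this commit and the prior one derive the guard slot as a keccak256 hash minus one, in the style of EIP-1967, so the final slot has no known keccak preimage. A sketch of computing it host-side, assuming `alloy_core`'s primitives as used by the router's Rust tests (compare `execute_reentrancy_guard` above):

    use alloy_core::primitives::{keccak256, B256, U256};

    // keccak256 of the domain tag, minus one, matching Solidity's
    // `bytes32(uint256(keccak256("ReentrancyGuard Router")) - 1)`.
    fn guard_slot(tag: &[u8]) -> B256 {
      B256::from(U256::from_be_bytes(keccak256(tag).0) - U256::from(1u64))
    }
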
--- .../ethereum/router/contracts/IRouter.sol | 11 ++- .../ethereum/router/contracts/Router.sol | 78 ++++++++++++------- 2 files changed, 56 insertions(+), 33 deletions(-) diff --git a/processor/ethereum/router/contracts/IRouter.sol b/processor/ethereum/router/contracts/IRouter.sol index c5e9a2fa..57994f8d 100644 --- a/processor/ethereum/router/contracts/IRouter.sol +++ b/processor/ethereum/router/contracts/IRouter.sol @@ -42,7 +42,8 @@ interface IRouterWithoutCollisions { /// @notice Emitted when coins escape through the escape hatch /// @param coin The coin which escaped - event Escaped(address indexed coin); + /// @param amount The amount which escaped + event Escaped(address indexed coin, uint256 amount); /// @notice The key for Serai was invalid /// @dev This is incomplete and not always guaranteed to be thrown upon an invalid key @@ -57,13 +58,17 @@ interface IRouterWithoutCollisions { /// @notice The call to an ERC20's `transferFrom` failed error TransferFromFailed(); - /// @notice `execute` was re-entered - error ReenteredExecute(); + /// @notice A non-reentrant function was re-entered + error Reentered(); /// @notice An invalid address to escape to was specified. error InvalidEscapeAddress(); + /// @notice The escape address wasn't a contract. + error EscapeAddressWasNotAContract(); /// @notice Escaping when escape hatch wasn't invoked. error EscapeHatchNotInvoked(); + /// @notice Escaping failed to transfer out. + error EscapeFailed(); /// @notice Transfer coins into Serai with an instruction /// @param coin The coin to transfer in (address(0) if Ether) diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 526e1b9c..ade72a8c 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -29,8 +29,8 @@ contract Router is IRouterWithoutCollisions { bytes32 constant ACCOUNT_WITHOUT_CODE_CODEHASH = keccak256(""); /// @dev The address in transient storage used for the reentrancy guard - bytes32 constant EXECUTE_REENTRANCY_GUARD_SLOT = - bytes32(uint256(keccak256("ReentrancyGuard Router.execute")) - 1); + bytes32 constant REENTRANCY_GUARD_SLOT = + bytes32(uint256(keccak256("ReentrancyGuard Router")) - 1); /** * @dev The next nonce used to determine the address of contracts deployed with CREATE. This is @@ -64,6 +64,28 @@ contract Router is IRouterWithoutCollisions { /// @dev The address escaped to address private _escapedTo; + /// @dev Acquire the re-entrancy lock for the duration of this function call + modifier nonReentrant() { + bytes32 reentrancyGuardSlot = REENTRANCY_GUARD_SLOT; + bytes32 priorEntered; + // slither-disable-next-line assembly + assembly { + priorEntered := tload(reentrancyGuardSlot) + tstore(reentrancyGuardSlot, 1) + } + if (priorEntered != bytes32(0)) { + revert Reentered(); + } + + _; + + // Clear the re-entrancy guard to allow multiple calls to non-re-entrant functions within + // a single transaction + assembly { + tstore(reentrancyGuardSlot, 0) + } + } + /// @dev Set the next Serai key.
This does not read from/write to `_nextNonce` /// @param nonceUpdatedWith The nonce used to set the next key /// @param nextSeraiKeyVar The key to set as next @@ -139,15 +161,16 @@ contract Router is IRouterWithoutCollisions { bytes32 signatureS; + uint256 chainID = block.chainid; // slither-disable-next-line assembly assembly { // Read the signature (placed after the function signature) signatureC := mload(add(message, 36)) signatureS := mload(add(message, 68)) - // Overwrite the signature challenge with the nonce - mstore(add(message, 36), nonceUsed) - // Overwrite the signature response with 0 - mstore(add(message, 68), 0) + // Overwrite the signature challenge with the chain ID + mstore(add(message, 36), chainID) + // Overwrite the signature response with the nonce + mstore(add(message, 68), nonceUsed) // Calculate the message hash messageHash := keccak256(add(message, 32), messageLen) @@ -404,6 +427,12 @@ contract Router is IRouterWithoutCollisions { * fee. * * The hex bytes are to cause a function selector collision with `IRouter.execute`. + * + * Re-entrancy is prevented because we emit a bitmask of which `OutInstruction`s succeeded. Doing + * that requires executing the `OutInstruction`s, which may re-enter here. While our application + * of CEI with `verifySignature` prevents replays, re-entrancy would allow out-of-order + * completion for the execution of batches (despite their in-order start of execution) which + * isn't a headache worth dealing with. */ // @param signature The signature by the current key for Serai's Ethereum validators // @param coin The coin all of these `OutInstruction`s are for // @param fee The fee to pay (in coin) to the caller for their relaying of this batch // @param outs The `OutInstruction`s to act on // Each individual call is explicitly metered to ensure there isn't a DoS here // slither-disable-next-line calls-loop,reentrancy-events - function execute4DE42904() external { - /* - Prevent re-entrancy. - - We emit a bitmask of which `OutInstruction`s succeeded. Doing that requires executing the - `OutInstruction`s, which may re-enter here. While our application of CEI with verifySignature - prevents replays, re-entrancy would allow out-of-order execution of batches (despite their - in-order start of execution) which isn't a headache worth dealing with. - */ - bytes32 executeReentrancyGuardSlot = EXECUTE_REENTRANCY_GUARD_SLOT; - bytes32 priorEntered; - // slither-disable-next-line assembly - assembly { - priorEntered := tload(executeReentrancyGuardSlot) - tstore(executeReentrancyGuardSlot, 1) - } - if (priorEntered != bytes32(0)) { - revert ReenteredExecute(); - } - + function execute4DE42904() external nonReentrant { (uint256 nonceUsed, bytes memory args, bytes32 message) = verifySignature(_seraiKey); (,, address coin, uint256 fee, IRouter.OutInstruction[] memory outs) = abi.decode(args, (bytes32, bytes32, address, uint256, IRouter.OutInstruction[])); @@ -545,11 +555,15 @@ contract Router is IRouterWithoutCollisions { The only check performed accordingly (with no confirmation flow) is that the new contract is in fact a contract. This is done to confirm the contract was successfully deployed on this blockchain. + + This check is also comprehensive to the zero-address case, but this function doesn't have to + be perfectly optimized and it's better to explicitly handle that due to it being its own + invariant.
*/ { bytes32 codehash = escapeTo.codehash; if ((codehash == bytes32(0)) || (codehash == ACCOUNT_WITHOUT_CODE_CODEHASH)) { - revert InvalidEscapeAddress(); + revert EscapeAddressWasNotAContract(); } } @@ -573,8 +587,6 @@ contract Router is IRouterWithoutCollisions { revert EscapeHatchNotInvoked(); } - emit Escaped(coin); - // Fetch the amount to escape uint256 amount = address(this).balance; if (coin != address(0)) { @@ -582,7 +594,13 @@ contract Router is IRouterWithoutCollisions { } // Perform the transfer - transferOut(_escapedTo, coin, amount); + // While this can be re-entered to try escaping our balance twice, the outer call will fail + if (!transferOut(_escapedTo, coin, amount)) { + revert EscapeFailed(); + } + + // Since we successfully escaped this amount, emit the event for it + emit Escaped(coin, amount); } /// @notice Fetch the next nonce to use by an action published to this contract From 669b8b776befebf92d5a277a5d58c34404bbcd47 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 23 Jan 2025 01:59:24 -0500 Subject: [PATCH 333/368] Work on testing the Router Completes the `Executed` enum in the router. Adds an `Escape` struct. Both are needed for testing purposes. Documents the gas constants in intent and reasoning. Adds modernized tests around key rotation and the escape hatch. Also updates the rest of the codebase which had accumulated errors.
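Since `Executed` now covers every nonce-consuming action, a scanner can replay a block range and check the Router consumed nonces strictly in order. A hedged sketch against this crate's API (`Router::executed`, introduced below; the crate path is assumed):

    use alloy_transport::{TransportErrorKind, RpcError};
    use serai_processor_ethereum_router::Router;

    // Walk [from_block, to_block] and assert the nonce sequence, returning the next nonce
    async fn assert_sequential_nonces(
      router: &Router,
      from_block: u64,
      to_block: u64,
      mut next_nonce: u64,
    ) -> Result<u64, RpcError<TransportErrorKind>> {
      for action in router.executed(from_block, to_block).await? {
        // `nonce()` is uniform across NextSeraiKeySet, SeraiKeyUpdated, Batch, and EscapeHatch
        assert_eq!(action.nonce(), next_nonce);
        next_nonce += 1;
      }
      Ok(next_nonce)
    }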
--- Cargo.lock | 1 + processor/ethereum/erc20/Cargo.toml | 3 + processor/ethereum/erc20/src/lib.rs | 22 +- processor/ethereum/router/src/lib.rs | 384 ++++++++---- .../ethereum/router/src/tests/constants.rs | 21 + processor/ethereum/router/src/tests/mod.rs | 588 ++++++++++++------ processor/ethereum/src/main.rs | 22 +- processor/ethereum/src/primitives/block.rs | 1 + processor/ethereum/src/primitives/output.rs | 8 +- .../ethereum/src/primitives/transaction.rs | 44 +- processor/ethereum/src/publisher.rs | 4 +- processor/ethereum/src/rpc.rs | 10 +- processor/ethereum/src/scheduler.rs | 12 +- 13 files changed, 765 insertions(+), 355 deletions(-) create mode 100644 processor/ethereum/router/src/tests/constants.rs diff --git a/Cargo.lock b/Cargo.lock index 0e566e76..1f6b372a 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -9446,6 +9446,7 @@ dependencies = [ "alloy-sol-macro", "alloy-sol-types", "alloy-transport", + "serai-processor-ethereum-primitives", "tokio", ] diff --git a/processor/ethereum/erc20/Cargo.toml b/processor/ethereum/erc20/Cargo.toml index befa0f29..21be88c5 100644 --- a/processor/ethereum/erc20/Cargo.toml +++ b/processor/ethereum/erc20/Cargo.toml @@ -27,4 +27,7 @@ alloy-transport = { version = "0.9", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } alloy-provider = { version = "0.9", default-features = false } +ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false } + +# TODO futures-util = { version = "0.3", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false, features = ["rt"] } diff --git a/processor/ethereum/erc20/src/lib.rs b/processor/ethereum/erc20/src/lib.rs index 20f44b53..e72e357b 100644 --- a/processor/ethereum/erc20/src/lib.rs +++ b/processor/ethereum/erc20/src/lib.rs @@ -13,6 +13,8 @@ use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; +use ethereum_primitives::LogIndex; + use tokio::task::JoinSet; #[rustfmt::skip] @@ -31,9 +33,11 @@ pub use abi::IERC20::Transfer; #[derive(Clone, Debug)] pub struct TopLevelTransfer { /// The ID of the event for this transfer. - pub id: ([u8; 32], u64), + pub id: LogIndex, + /// The hash of the transaction which caused this transfer. + pub transaction_hash: [u8; 32], /// The address which made the transfer. - pub from: [u8; 20], + pub from: Address, /// The amount transferred. pub amount: U256, /// The data appended after the call itself. @@ -52,12 +56,12 @@ impl Erc20 { /// Match a transaction for its top-level transfer to the specified address (if one exists). pub async fn match_top_level_transfer( provider: impl AsRef<RootProvider<SimpleRequest>>, - transaction_id: B256, + transaction_hash: B256, to: Address, ) -> Result<Option<TopLevelTransfer>, RpcError<TransportErrorKind>> { // Fetch the transaction let transaction = - provider.as_ref().get_transaction_by_hash(transaction_id).await?.ok_or_else(|| { + provider.as_ref().get_transaction_by_hash(transaction_hash).await?.ok_or_else(|| { TransportErrorKind::Custom( "node didn't have the transaction which emitted a log it had".to_string().into(), ) })?; @@ -81,7 +85,7 @@ impl Erc20 { // Fetch the transaction's logs let receipt = - provider.as_ref().get_transaction_receipt(transaction_id).await?.ok_or_else(|| { + provider.as_ref().get_transaction_receipt(transaction_hash).await?.ok_or_else(|| { TransportErrorKind::Custom( "node didn't have receipt for a transaction we were matching for a top-level transfer" .to_string() .into(), ) })?; @@ -102,6 +106,9 @@ impl Erc20 { continue; } + let block_hash = log.block_hash.ok_or_else(|| { + TransportErrorKind::Custom("log didn't have its block hash set".to_string().into()) + })?; let log_index = log.log_index.ok_or_else(|| { TransportErrorKind::Custom("log didn't have its index set".to_string().into()) })?; @@ -125,8 +132,9 @@ impl Erc20 { let data = transaction.inner.input().as_ref()[encoded.len() ..].to_vec(); return Ok(Some(TopLevelTransfer { - id: (*transaction_id, log_index), - from: *log.from.0, + id: LogIndex { block_hash: *block_hash, index_within_block: log_index }, + transaction_hash: *transaction_hash, + from: log.from, amount: log.value, data, }));
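`TopLevelTransfer.id` is now a `LogIndex` (block hash plus index within the block), with the transaction hash carried separately. One way a consumer can flatten that into a fixed-width identifier, mirroring the 40-byte output ID the processor derives later in this patch (a sketch; only the `LogIndex` fields are taken from this codebase):

    use serai_processor_ethereum_primitives::LogIndex;

    // 32 bytes of block hash followed by the LE-encoded index within that block
    fn flat_id(id: &LogIndex) -> [u8; 40] {
      let mut flat = [0; 40];
      flat[.. 32].copy_from_slice(&id.block_hash);
      flat[32 ..].copy_from_slice(&id.index_within_block.to_le_bytes());
      flat
    }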
diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index d8cac48a..fd88a222 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -8,12 +8,15 @@ use borsh::{BorshSerialize, BorshDeserialize}; use group::ff::PrimeField; -use alloy_core::primitives::{hex::FromHex, Address, U256, Bytes, TxKind}; +use alloy_core::primitives::{ + hex::{self, FromHex}, + Address, U256, Bytes, TxKind, +}; use alloy_sol_types::{SolValue, SolConstructor, SolCall, SolEvent}; use alloy_consensus::TxLegacy; -use alloy_rpc_types_eth::{TransactionRequest, TransactionInput, BlockId, Filter}; +use alloy_rpc_types_eth::{BlockId, Log, Filter, TransactionInput, TransactionRequest}; use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; @@ -51,8 +54,9 @@ mod abi { pub use super::_router_abi::Router::constructorCall; } use abi::{ - SeraiKeyUpdated as SeraiKeyUpdatedEvent, InInstruction as InInstructionEvent, - Executed as ExecutedEvent, + NextSeraiKeySet as NextSeraiKeySetEvent, SeraiKeyUpdated as SeraiKeyUpdatedEvent, + InInstruction as InInstructionEvent, Batch as BatchEvent, EscapeHatch as EscapeHatchEvent, + Escaped as EscapedEvent, }; #[cfg(test)] @@ -81,12 +85,20 @@ pub enum Coin { Address, ), } - -impl Coin { - fn address(&self) -> Address { - match self { - Coin::Ether => [0; 20].into(), - Coin::Erc20(address) => *address, +impl From<Coin> for Address { + fn from(coin: Coin) -> Address { + match coin { + Coin::Ether => Address::ZERO, + Coin::Erc20(address) => address, + } + } +} +impl From<Address> for Coin { + fn from(address: Address) -> Coin { + if address == Address::ZERO { + Coin::Ether + } else { + Coin::Erc20(address) } } } @@ -96,6 +108,8 @@ pub struct InInstruction { /// The ID for this `InInstruction`. pub id: LogIndex, + /// The hash of the transaction which caused this. + pub transaction_hash: [u8; 32], /// The address which transferred these coins to Serai. #[borsh( serialize_with = "ethereum_primitives::serialize_address", deserialize_with = "ethereum_primitives::deserialize_address" )] @@ -126,6 +140,8 @@ impl From<&[(SeraiAddress, U256)]> for OutInstructions { #[allow(non_snake_case)] let (destinationType, destination) = match address { SeraiAddress::Address(address) => { + // Per the documentation, `DestinationType::Address`'s value is an ABI-encoded + // address (abi::DestinationType::Address, (Address::from(address)).abi_encode()) } SeraiAddress::Contract(contract) => ( @@ -147,41 +163,90 @@ impl From<&[(SeraiAddress, U256)]> for OutInstructions { /// An action which was executed by the Router. #[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] pub enum Executed { - /// New key was set. - SetKey { + /// Next key was set. + NextSeraiKeySet { /// The nonce this was done with. nonce: u64, /// The key set. key: [u8; 32], }, - /// Executed Batch. + /// The next key was updated to. + SeraiKeyUpdated { + /// The nonce this was done with. + nonce: u64, + /// The key set. + key: [u8; 32], + }, + /// Executed batch of `OutInstruction`s. Batch { /// The nonce this was done with. nonce: u64, /// The hash of the signed message for the Batch executed. message_hash: [u8; 32], }, + /// The escape hatch was set. + EscapeHatch { + /// The nonce this was done with. + nonce: u64, + /// The address set to escape to. + #[borsh( + serialize_with = "ethereum_primitives::serialize_address", + deserialize_with = "ethereum_primitives::deserialize_address" + )] + escape_to: Address, + }, } impl Executed { /// The nonce consumed by this executed event. + /// + /// This is a `u64` despite the contract defining the nonce as a `u256`. Since the nonce is + /// incremental, the u64 will never be exhausted. pub fn nonce(&self) -> u64 { match self { - Executed::SetKey { nonce, .. } | Executed::Batch { nonce, .. } => *nonce, + Executed::NextSeraiKeySet { nonce, .. } | + Executed::SeraiKeyUpdated { nonce, .. } | + Executed::Batch { nonce, .. } | + Executed::EscapeHatch { nonce, .. } => *nonce, } } } +/// An Escape from the Router. +#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] +pub struct Escape { + /// The coin escaped. + pub coin: Coin, + /// The amount escaped. + #[borsh( + serialize_with = "ethereum_primitives::serialize_u256", + deserialize_with = "ethereum_primitives::deserialize_u256" + )] + pub amount: U256, +} /// A view of the Router for Serai. #[derive(Clone, Debug)] -pub struct Router(Arc<RootProvider<SimpleRequest>>, Address); +pub struct Router { + provider: Arc<RootProvider<SimpleRequest>>, + address: Address, +} impl Router { - const DEPLOYMENT_GAS: u64 = 1_000_000; - const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 58_000; - const UPDATE_SERAI_KEY_GAS: u64 = 61_000; + /* + The gas limits to use for transactions. + + These are expected to be constant as a distributed group signs the transactions invoking these + calls. Having the gas be constant prevents needing to run a protocol to determine what gas to + use. + + These gas limits may break if/when gas opcodes undergo repricing. In that case, this library is + expected to be modified so these constants become parameters.
The caller would then be expected to pass + the correct set of prices for the network they're operating on. + */ + const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 57_736; + const UPDATE_SERAI_KEY_GAS: u64 = 60_045; const EXECUTE_BASE_GAS: u64 = 48_000; - const ESCAPE_HATCH_GAS: u64 = 58_000; - const ESCAPE_GAS: u64 = 200_000; + const ESCAPE_HATCH_GAS: u64 = 61_238; fn code() -> Vec<u8> { const BYTECODE: &[u8] = @@ -198,11 +263,10 @@ impl Router { /// Obtain the transaction to deploy this contract. /// - /// This transaction assumes the `Deployer` has already been deployed. + /// This transaction assumes the `Deployer` has already been deployed. The gas limit and gas + /// price are not set and are left to the caller. pub fn deployment_tx(initial_serai_key: &PublicKey) -> TxLegacy { - let mut tx = Deployer::deploy_tx(Self::init_code(initial_serai_key)); - tx.gas_limit = Self::DEPLOYMENT_GAS * 120 / 100; - tx + Deployer::deploy_tx(Self::init_code(initial_serai_key)) } /// Create a new view of the Router. @@ -216,25 +280,25 @@ impl Router { let Some(deployer) = Deployer::new(provider.clone()).await? else { return Ok(None); }; - let Some(deployment) = deployer + let Some(address) = deployer .find_deployment(ethereum_primitives::keccak256(Self::init_code(initial_serai_key))) .await? else { return Ok(None); }; - Ok(Some(Self(provider, deployment))) + Ok(Some(Self { provider, address })) } /// The address of the router. pub fn address(&self) -> Address { - self.1 + self.address } /// Get the message to be signed in order to confirm the next key for Serai. - pub fn confirm_next_serai_key_message(nonce: u64) -> Vec<u8> { + pub fn confirm_next_serai_key_message(chain_id: U256, nonce: u64) -> Vec<u8> { abi::confirmNextSeraiKeyCall::new((abi::Signature { - c: U256::try_from(nonce).unwrap().into(), - s: U256::ZERO.into(), + c: chain_id.into(), + s: U256::try_from(nonce).unwrap().into(), },)) .abi_encode() } @@ -242,7 +306,7 @@ impl Router { /// Construct a transaction to confirm the next key representing Serai. pub fn confirm_next_serai_key(&self, sig: &Signature) -> TxLegacy { TxLegacy { - to: TxKind::Call(self.1), + to: TxKind::Call(self.address), input: abi::confirmNextSeraiKeyCall::new((abi::Signature::from(sig),)).abi_encode().into(), gas_limit: Self::CONFIRM_NEXT_SERAI_KEY_GAS * 120 / 100, ..Default::default() } } /// Get the message to be signed in order to update the key for Serai. - pub fn update_serai_key_message(nonce: u64, key: &PublicKey) -> Vec<u8> { + pub fn update_serai_key_message(chain_id: U256, nonce: u64, key: &PublicKey) -> Vec<u8> { abi::updateSeraiKeyCall::new(( - abi::Signature { c: U256::try_from(nonce).unwrap().into(), s: U256::ZERO.into() }, + abi::Signature { c: chain_id.into(), s: U256::try_from(nonce).unwrap().into() }, key.eth_repr().into(), )) .abi_encode() } @@ -261,7 +325,7 @@ impl Router { /// Construct a transaction to update the key representing Serai. pub fn update_serai_key(&self, public_key: &PublicKey, sig: &Signature) -> TxLegacy { TxLegacy { - to: TxKind::Call(self.1), + to: TxKind::Call(self.address), input: abi::updateSeraiKeyCall::new(( abi::Signature::from(sig), public_key.eth_repr().into(), )) .abi_encode() .into(), gas_limit: Self::UPDATE_SERAI_KEY_GAS * 120 / 100, ..Default::default() } } /// Get the message to be signed in order to execute a series of `OutInstruction`s.
- pub fn execute_message(nonce: u64, coin: Coin, fee: U256, outs: OutInstructions) -> Vec<u8> { + pub fn execute_message( + chain_id: U256, + nonce: u64, + coin: Coin, + fee: U256, + outs: OutInstructions, + ) -> Vec<u8> { abi::executeCall::new(( - abi::Signature { c: U256::try_from(nonce).unwrap().into(), s: U256::ZERO.into() }, - coin.address(), + abi::Signature { c: chain_id.into(), s: U256::try_from(nonce).unwrap().into() }, + Address::from(coin), fee, outs.0, )) @@ -289,8 +359,8 @@ impl Router { // TODO let gas_limit = Self::EXECUTE_BASE_GAS + outs.0.iter().map(|_| 200_000 + 10_000).sum::<u64>(); TxLegacy { - to: TxKind::Call(self.1), - input: abi::executeCall::new((abi::Signature::from(sig), coin.address(), fee, outs.0)) + to: TxKind::Call(self.address), + input: abi::executeCall::new((abi::Signature::from(sig), Address::from(coin), fee, outs.0)) .abi_encode() .into(), gas_limit: gas_limit * 120 / 100, @@ -299,9 +369,9 @@ } /// Get the message to be signed in order to trigger the escape hatch. - pub fn escape_hatch_message(nonce: u64, escape_to: Address) -> Vec<u8> { + pub fn escape_hatch_message(chain_id: U256, nonce: u64, escape_to: Address) -> Vec<u8> { abi::escapeHatchCall::new(( - abi::Signature { c: U256::try_from(nonce).unwrap().into(), s: U256::ZERO.into() }, + abi::Signature { c: chain_id.into(), s: U256::try_from(nonce).unwrap().into() }, escape_to, )) .abi_encode() @@ -310,7 +380,7 @@ /// Construct a transaction to trigger the escape hatch. pub fn escape_hatch(&self, escape_to: Address, sig: &Signature) -> TxLegacy { TxLegacy { - to: TxKind::Call(self.1), + to: TxKind::Call(self.address), input: abi::escapeHatchCall::new((abi::Signature::from(sig), escape_to)).abi_encode().into(), gas_limit: Self::ESCAPE_HATCH_GAS * 120 / 100, ..Default::default() } } /// Construct a transaction to escape coins via the escape hatch. - pub fn escape(&self, coin: Address) -> TxLegacy { + pub fn escape(&self, coin: Coin) -> TxLegacy { TxLegacy { - to: TxKind::Call(self.1), - input: abi::escapeCall::new((coin,)).abi_encode().into(), - gas_limit: Self::ESCAPE_GAS, + to: TxKind::Call(self.address), + input: abi::escapeCall::new((Address::from(coin),)).abi_encode().into(), ..Default::default() } } @@ -334,9 +403,10 @@ impl Router { allowed_tokens: &HashSet<Address>
, ) -> Result<Vec<InInstruction>, RpcError<TransportErrorKind>> { // The InInstruction events for this block - let filter = Filter::new().from_block(block).to_block(block).address(self.1); + let filter = Filter::new().from_block(block).to_block(block).address(self.address); let filter = filter.event_signature(InInstructionEvent::SIGNATURE_HASH); - let logs = self.0.get_logs(&filter).await?; + let mut logs = self.provider.get_logs(&filter).await?; + logs.sort_by_key(|log| (log.block_number, log.log_index)); /* We check that for all InInstructions for ERC20s emitted, a corresponding transfer occurred. In order to prevent a transfer from being used to justify multiple distinct InInstructions, we insert the transfer's log index into this HashSet. */ let mut transfer_check = HashSet::new(); let mut in_instructions = vec![]; for log in logs { // Double check the address which emitted this log - if log.address() != self.1 { + if log.address() != self.address { Err(TransportErrorKind::Custom( "node returned a log from a different address than requested".to_string().into(), ))?; } @@ -366,7 +436,7 @@ impl Router { })?, }; - let tx_hash = log.transaction_hash.ok_or_else(|| { + let transaction_hash = log.transaction_hash.ok_or_else(|| { TransportErrorKind::Custom("log didn't have its transaction hash set".to_string().into()) })?; @@ -380,21 +450,19 @@ impl Router { .inner .data; - let coin = if log.coin.0 == [0; 20] { - Coin::Ether - } else { - let token = log.coin; - + let coin = Coin::from(log.coin); + if let Coin::Erc20(token) = coin { if !allowed_tokens.contains(&token) { continue; } // Get all logs for this TX - let receipt = self.0.get_transaction_receipt(tx_hash).await?.ok_or_else(|| { - TransportErrorKind::Custom( - "node didn't have the receipt for a transaction it had".to_string().into(), - ) - })?; + let receipt = + self.provider.get_transaction_receipt(transaction_hash).await?.ok_or_else(|| { + TransportErrorKind::Custom( + "node didn't have the receipt for a transaction it had".to_string().into(), + ) + })?; let tx_logs = receipt.inner.logs(); /* The timing of top-level transfer detection is vital to prevent a double-count of transfers. Accordingly, when looking for the matching transfer, disregard the top-level transfer (if one exists). */ - if let Some(matched) = Erc20::match_top_level_transfer(&self.0, tx_hash, self.1).await? { + if let Some(matched) = + Erc20::match_top_level_transfer(&self.provider, transaction_hash, self.address).await? + { // Mark this log index as used so it isn't used again - transfer_check.insert(matched.id.1); + transfer_check.insert(matched.id.index_within_block); } // Find a matching transfer log @@ -432,7 +502,7 @@ impl Router { } let Ok(transfer) = Transfer::decode_log(&tx_log.inner.clone(), true) else { continue }; // Check if this is a transfer to us for the expected amount - if (transfer.to == self.1) && (transfer.value == log.amount) { + if (transfer.to == self.address) && (transfer.value == log.amount) { transfer_check.insert(log_index); found_transfer = true; break; } } if !found_transfer { Err(TransportErrorKind::Custom( "ERC20 InInstruction with no matching transfer log".to_string().into(), ))?; } - - Coin::Erc20(token) }; in_instructions.push(InInstruction { id, + transaction_hash: *transaction_hash, from: log.from, coin, amount: log.amount, data: log.instruction.as_ref().to_vec(), }); } Ok(in_instructions) } /// Fetch the executed actions from this block. - pub async fn executed(&self, block: u64) -> Result<Vec<Executed>, RpcError<TransportErrorKind>> { + pub async fn executed( + &self, + from_block: u64, + to_block: u64, + ) -> Result<Vec<Executed>, RpcError<TransportErrorKind>> { + fn decode<E: SolEvent>(log: &Log) -> Result<E, RpcError<TransportErrorKind>> { + Ok( + log + .log_decode::<E>() + .map_err(|e| { + TransportErrorKind::Custom( + format!("filtered to event yet couldn't decode log: {e:?}").into(), + ) + })?
+ .inner + .data, + ) + } + + let filter = Filter::new().from_block(from_block).to_block(to_block).address(self.address); + let mut logs = self.provider.get_logs(&filter).await?; + logs.sort_by_key(|log| (log.block_number, log.log_index)); + let mut res = vec![]; + for log in logs { + // Double check the address which emitted this log + if log.address() != self.address { + Err(TransportErrorKind::Custom( + "node returned a log from a different address than requested".to_string().into(), + ))?; + } - { - let filter = Filter::new().from_block(block).to_block(block).address(self.1); - let filter = filter.event_signature(SeraiKeyUpdatedEvent::SIGNATURE_HASH); - let logs = self.0.get_logs(&filter).await?; - - for log in logs { - // Double check the address which emitted this log - if log.address() != self.1 { - Err(TransportErrorKind::Custom( - "node returned a log from a different address than requested".to_string().into(), - ))?; + match log.topics().first() { + Some(&NextSeraiKeySetEvent::SIGNATURE_HASH) => { + let event = decode::<NextSeraiKeySetEvent>(&log)?; + res.push(Executed::NextSeraiKeySet { + nonce: event.nonce.try_into().map_err(|e| { + TransportErrorKind::Custom(format!("failed to convert nonce to u64: {e:?}").into()) + })?, + key: event.key.into(), + }); } - - let log = log - .log_decode::<SeraiKeyUpdatedEvent>() - .map_err(|e| { - TransportErrorKind::Custom( - format!("filtered to SeraiKeyUpdatedEvent yet couldn't decode log: {e:?}").into(), - ) - })? - .inner - .data; - - res.push(Executed::SetKey { - nonce: log.nonce.try_into().map_err(|e| { - TransportErrorKind::Custom(format!("failed to convert nonce to u64: {e:?}").into()) - })?, - key: log.key.into(), - }); + Some(&SeraiKeyUpdatedEvent::SIGNATURE_HASH) => { + let event = decode::<SeraiKeyUpdatedEvent>(&log)?; + res.push(Executed::SeraiKeyUpdated { + nonce: event.nonce.try_into().map_err(|e| { + TransportErrorKind::Custom(format!("failed to convert nonce to u64: {e:?}").into()) + })?, + key: event.key.into(), + }); + } + Some(&BatchEvent::SIGNATURE_HASH) => { + let event = decode::<BatchEvent>(&log)?; + res.push(Executed::Batch { + nonce: event.nonce.try_into().map_err(|e| { + TransportErrorKind::Custom(format!("failed to convert nonce to u64: {e:?}").into()) + })?, + message_hash: event.messageHash.into(), + }); + } + Some(&EscapeHatchEvent::SIGNATURE_HASH) => { + let event = decode::<EscapeHatchEvent>(&log)?; + res.push(Executed::EscapeHatch { + nonce: event.nonce.try_into().map_err(|e| { + TransportErrorKind::Custom(format!("failed to convert nonce to u64: {e:?}").into()) + })?, + escape_to: event.escapeTo, + }); + } + Some(&InInstructionEvent::SIGNATURE_HASH | &EscapedEvent::SIGNATURE_HASH) => {} + unrecognized => Err(TransportErrorKind::Custom( + format!("unrecognized event yielded by the Router: {:?}", unrecognized.map(hex::encode)) + .into(), + ))?, } } - { - let filter = Filter::new().from_block(block).to_block(block).address(self.1); - let filter = filter.event_signature(ExecutedEvent::SIGNATURE_HASH); - let logs = self.0.get_logs(&filter).await?; + Ok(res) + } - for log in logs { - // Double check the address which emitted this log - if log.address() != self.1 { - Err(TransportErrorKind::Custom( - "node returned a log from a different address than requested".to_string().into(), - ))?; - } + /// Fetch the `Escape`s from the smart contract through the escape hatch.
+ pub async fn escapes( + &self, + from_block: u64, + to_block: u64, + ) -> Result<Vec<Escape>, RpcError<TransportErrorKind>> { + let filter = Filter::new().from_block(from_block).to_block(to_block).address(self.address); + let mut logs = + self.provider.get_logs(&filter.event_signature(EscapedEvent::SIGNATURE_HASH)).await?; + logs.sort_by_key(|log| (log.block_number, log.log_index)); - let log = log - .log_decode::<ExecutedEvent>() - .map_err(|e| { - TransportErrorKind::Custom( - format!("filtered to ExecutedEvent yet couldn't decode log: {e:?}").into(), - ) - })? - .inner - .data; - - res.push(Executed::Batch { - nonce: log.nonce.try_into().map_err(|e| { - TransportErrorKind::Custom(format!("failed to convert nonce to u64: {e:?}").into()) - })?, - message_hash: log.messageHash.into(), - }); + let mut res = vec![]; + for log in logs { + // Double check the address which emitted this log + if log.address() != self.address { + Err(TransportErrorKind::Custom( + "node returned a log from a different address than requested".to_string().into(), + ))?; + } + // Double check the topic + if log.topics().first() != Some(&EscapedEvent::SIGNATURE_HASH) { + Err(TransportErrorKind::Custom( + "node returned a log for a different topic than filtered to".to_string().into(), + ))?; } - } - res.sort_by_key(Executed::nonce); + let log = log + .log_decode::<EscapedEvent>() + .map_err(|e| { + TransportErrorKind::Custom( + format!("filtered to event yet couldn't decode log: {e:?}").into(), + ) + })? + .inner + .data; + res.push(Escape { coin: Coin::from(log.coin), amount: log.amount }); + } Ok(res) } @@ -541,8 +659,9 @@ impl Router { block: BlockId, call: Vec<u8>, ) -> Result<Option<PublicKey>, RpcError<TransportErrorKind>> { - let call = TransactionRequest::default().to(self.1).input(TransactionInput::new(call.into())); - let bytes = self.0.call(&call).block(block).await?; + let call = + TransactionRequest::default().to(self.address).input(TransactionInput::new(call.into())); + let bytes = self.provider.call(&call).block(block).await?; // This is fine as both key calls share a return type let res = abi::nextSeraiKeyCall::abi_decode_returns(&bytes, true) .map_err(|e| TransportErrorKind::Custom(format!("failed to decode key: {e:?}").into()))?; @@ -575,9 +694,9 @@ impl Router { /// Fetch the nonce of the next action to execute pub async fn next_nonce(&self, block: BlockId) -> Result<u64, RpcError<TransportErrorKind>> { let call = TransactionRequest::default() - .to(self.1) + .to(self.address) .input(TransactionInput::new(abi::nextNonceCall::new(()).abi_encode().into())); - let bytes = self.0.call(&call).block(block).await?; + let bytes = self.provider.call(&call).block(block).await?; let res = abi::nextNonceCall::abi_decode_returns(&bytes, true) .map_err(|e| TransportErrorKind::Custom(format!("failed to decode nonce: {e:?}").into()))?; Ok(u64::try_from(res._0).map_err(|_| { @@ -586,14 +705,17 @@ } /// Fetch the address the escape hatch was set to - pub async fn escaped_to(&self, block: BlockId) -> Result<Address, RpcError<TransportErrorKind>> { + pub async fn escaped_to( + &self, + block: BlockId, + ) -> Result<Option<Address>, RpcError<TransportErrorKind>> { let call = TransactionRequest::default() - .to(self.1) + .to(self.address) .input(TransactionInput::new(abi::escapedToCall::new(()).abi_encode().into())); - let bytes = self.0.call(&call).block(block).await?; + let bytes = self.provider.call(&call).block(block).await?; let res = abi::escapedToCall::abi_decode_returns(&bytes, true).map_err(|e| { TransportErrorKind::Custom(format!("failed to decode the address escaped to: {e:?}").into()) })?; - Ok(res._0) + Ok(if res._0 == Address([0; 20].into()) { None } else { Some(res._0) }) } }
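Every message constructor above now binds the chain ID into the first signature word and the nonce into the second, so a signature can replay on neither another chain nor another nonce. A sketch of producing a confirmation signature the way the tests below do, with a single raw keypair standing in for the real threshold signer (the crate paths are assumed from this repository's layout):

    use rand_core::OsRng;
    use group::ff::Field;
    use k256::{Scalar, ProjectivePoint};
    use alloy_core::primitives::U256;
    use ethereum_schnorr::{PublicKey, Signature};
    use serai_processor_ethereum_router::Router;

    // Schnorr sign the confirmation of the next Serai key: s = r + (c * x)
    fn confirm_key_sig(key: (Scalar, PublicKey), chain_id: U256, nonce: u64) -> Signature {
      let msg = Router::confirm_next_serai_key_message(chain_id, nonce);
      let r = Scalar::random(&mut OsRng);
      let c = Signature::challenge(ProjectivePoint::GENERATOR * r, &key.1, &msg);
      Signature::new(c, r + (c * key.0)).unwrap()
    }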
diff --git a/processor/ethereum/router/src/tests/constants.rs b/processor/ethereum/router/src/tests/constants.rs new file mode 100644 index 00000000..db24971f --- /dev/null +++ b/processor/ethereum/router/src/tests/constants.rs @@ -0,0 +1,21 @@ +use alloy_sol_types::SolCall; + +#[test] +fn selector_collisions() { + assert_eq!( + crate::_irouter_abi::IRouter::confirmNextSeraiKeyCall::SELECTOR, + crate::_router_abi::Router::confirmNextSeraiKey34AC53ACCall::SELECTOR + ); + assert_eq!( + crate::_irouter_abi::IRouter::updateSeraiKeyCall::SELECTOR, + crate::_router_abi::Router::updateSeraiKey5A8542A2Call::SELECTOR + ); + assert_eq!( + crate::_irouter_abi::IRouter::executeCall::SELECTOR, + crate::_router_abi::Router::execute4DE42904Call::SELECTOR + ); + assert_eq!( + crate::_irouter_abi::IRouter::escapeHatchCall::SELECTOR, + crate::_router_abi::Router::escapeHatchDCDD91CCCall::SELECTOR + ); +}
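These assertions hold because a function's selector is just the first four bytes of the keccak256 hash of its canonical signature, and the hex suffixes on the private entry points were mined so the hashes collide. A sketch of checking one by hand (the signature strings are placeholders, not read from the ABI):

    use alloy_core::primitives::keccak256;

    // First four bytes of keccak256 over the canonical signature
    fn selector(signature: &str) -> [u8; 4] {
      keccak256(signature.as_bytes())[.. 4].try_into().unwrap()
    }

    // e.g. assert_eq!(selector("execute(...)"), selector("execute4DE42904()")), with the
    // elided argument tuple written out in full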
diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 41363daf..d3cb4427 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -10,10 +10,11 @@ use alloy_sol_types::SolCall; use alloy_consensus::TxLegacy; -use alloy_rpc_types_eth::{BlockNumberOrTag, TransactionReceipt}; +#[rustfmt::skip] +use alloy_rpc_types_eth::{BlockNumberOrTag, TransactionInput, TransactionRequest, TransactionReceipt}; use alloy_simple_request_transport::SimpleRequest; use alloy_rpc_client::ClientBuilder; -use alloy_provider::RootProvider; +use alloy_provider::{Provider, RootProvider}; use alloy_node_bindings::{Anvil, AnvilInstance}; @@ -21,39 +22,14 @@ use ethereum_primitives::LogIndex; use ethereum_schnorr::{PublicKey, Signature}; use ethereum_deployer::Deployer; -use crate::{Coin, OutInstructions, Router}; +use crate::{ + _irouter_abi::IRouterWithoutCollisions::{ + self as IRouter, IRouterWithoutCollisionsErrors as IRouterErrors, + }, + Coin, OutInstructions, Router, Executed, Escape, +}; -#[test] -fn execute_reentrancy_guard() { - let hash = alloy_core::primitives::keccak256(b"ReentrancyGuard Router.execute"); - assert_eq!( - alloy_core::primitives::hex::encode( - (U256::from_be_slice(hash.as_ref()) - U256::from(1u8)).to_be_bytes::<32>() - ), - // Constant from the Router contract - "cf124a063de1614fedbd6b47187f98bf8873a1ae83da5c179a5881162f5b2401", - ); -} - -#[test] -fn selector_collisions() { - assert_eq!( - crate::_irouter_abi::IRouter::confirmNextSeraiKeyCall::SELECTOR, - crate::_router_abi::Router::confirmNextSeraiKey34AC53ACCall::SELECTOR - ); - assert_eq!( - crate::_irouter_abi::IRouter::updateSeraiKeyCall::SELECTOR, - crate::_router_abi::Router::updateSeraiKey5A8542A2Call::SELECTOR - ); - assert_eq!( - crate::_irouter_abi::IRouter::executeCall::SELECTOR, - crate::_router_abi::Router::execute4DE42904Call::SELECTOR - ); - assert_eq!( - crate::_irouter_abi::IRouter::escapeHatchCall::SELECTOR, - crate::_router_abi::Router::escapeHatchDCDD91CCCall::SELECTOR - ); -} +mod constants; pub(crate) fn test_key() -> (Scalar, PublicKey) { loop { let key = Scalar::random(&mut OsRng); if let Some(public_key) = PublicKey::new(ProjectivePoint::GENERATOR * key) { return (key, public_key); } } } -async fn setup_test( -) -> (AnvilInstance, Arc<RootProvider<SimpleRequest>>, Router, (Scalar, PublicKey)) { - let anvil = Anvil::new().spawn(); - - let provider = Arc::new(RootProvider::new( - ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true), - )); +fn sign(key: (Scalar, PublicKey), msg: &[u8]) -> Signature { + let nonce = Scalar::random(&mut OsRng); + let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &key.1, msg); + let s = nonce + (c * key.0); + Signature::new(c, s).unwrap() +} - let (private_key, public_key) = test_key(); - assert!(Router::new(provider.clone(), &public_key).await.unwrap().is_none()); +/// Calculate the gas used by a transaction if none of its calldata's bytes were zero +struct CalldataAgnosticGas; +impl CalldataAgnosticGas { + fn calculate(tx: &TxLegacy, mut gas_used: u64) -> u64 { + const ZERO_BYTE_GAS_COST: u64 = 4; + const NON_ZERO_BYTE_GAS_COST: u64 = 16; + for b in &tx.input { + if *b == 0 { + gas_used += NON_ZERO_BYTE_GAS_COST - ZERO_BYTE_GAS_COST; + } + } + gas_used + } +} +struct RouterState { + next_key: Option<(Scalar, PublicKey)>, + key: Option<(Scalar, PublicKey)>, + next_nonce: u64, + escaped_to: Option<Address>
, +} +struct Test { + #[allow(unused)] + anvil: AnvilInstance, + provider: Arc<RootProvider<SimpleRequest>>, + chain_id: U256, + router: Router, + state: RouterState, +} impl Test { async fn verify_state(&self) { assert_eq!( self.router.next_key(BlockNumberOrTag::Latest.into()).await.unwrap(), self.state.next_key.map(|key| key.1) ); assert_eq!( self.router.key(BlockNumberOrTag::Latest.into()).await.unwrap(), self.state.key.map(|key| key.1) ); assert_eq!( self.router.next_nonce(BlockNumberOrTag::Latest.into()).await.unwrap(), self.state.next_nonce ); assert_eq!( self.router.escaped_to(BlockNumberOrTag::Latest.into()).await.unwrap(), self.state.escaped_to, ); } async fn new() -> Self { // The following is explicitly only evaluated against the cancun network upgrade at this time let anvil = Anvil::new().arg("--hardfork").arg("cancun").spawn(); let provider = Arc::new(RootProvider::new( ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true), )); let chain_id = U256::from(provider.get_chain_id().await.unwrap()); let (private_key, public_key) = test_key(); assert!(Router::new(provider.clone(), &public_key).await.unwrap().is_none()); // Deploy the Deployer let receipt = ethereum_test_primitives::publish_tx(&provider, Deployer::deployment_tx()).await; assert!(receipt.status()); let mut tx = Router::deployment_tx(&public_key); tx.gas_limit = 1_100_000; tx.gas_price = 100_000_000_000; let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await; assert!(receipt.status()); let router = Router::new(provider.clone(), &public_key).await.unwrap().unwrap(); let state = RouterState { next_key: Some((private_key, public_key)), key: None, // Nonce 0 should've been consumed by setting the next key to the key initialized with next_nonce: 1, escaped_to: None, }; // Confirm nonce 0 was used as such { let block = receipt.block_number.unwrap(); let executed = router.executed(block, block).await.unwrap(); assert_eq!(executed.len(), 1); assert_eq!(executed[0], Executed::NextSeraiKeySet { nonce: 0, key: public_key.eth_repr() }); } let res = Test { anvil, provider, chain_id, router, state }; res.verify_state().await; res } async fn call_and_decode_err(&self, tx: TxLegacy) -> IRouterErrors { let call = TransactionRequest::default() .to(self.router.address()) .input(TransactionInput::new(tx.input)); let call_err = self.provider.call(&call).await.unwrap_err(); call_err.as_error_resp().unwrap().as_decoded_error::<IRouterErrors>(true).unwrap() } fn confirm_next_serai_key_tx(&self) -> TxLegacy { let msg = Router::confirm_next_serai_key_message(self.chain_id, self.state.next_nonce); let sig = sign(self.state.next_key.unwrap(),
&msg); + + self.router.confirm_next_serai_key(&sig) + } + + async fn confirm_next_serai_key(&mut self) { + let mut tx = self.confirm_next_serai_key_tx(); + tx.gas_price = 100_000_000_000; + let tx = ethereum_primitives::deterministically_sign(tx); + let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; + assert!(receipt.status()); + if self.state.key.is_none() { + assert_eq!( + CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used), + Router::CONFIRM_NEXT_SERAI_KEY_GAS, + ); + } else { + assert!( + CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used) < + Router::CONFIRM_NEXT_SERAI_KEY_GAS + ); + } + + { + let block = receipt.block_number.unwrap(); + let executed = self.router.executed(block, block).await.unwrap(); + assert_eq!(executed.len(), 1); + assert_eq!( + executed[0], + Executed::SeraiKeyUpdated { + nonce: self.state.next_nonce, + key: self.state.next_key.unwrap().1.eth_repr() + } + ); + } + + self.state.next_nonce += 1; + self.state.key = self.state.next_key; + self.state.next_key = None; + self.verify_state().await; + } + + fn update_serai_key_tx(&self) -> ((Scalar, PublicKey), TxLegacy) { + let next_key = test_key(); + + let msg = Router::update_serai_key_message(self.chain_id, self.state.next_nonce, &next_key.1); + let sig = sign(self.state.key.unwrap(), &msg); + + (next_key, self.router.update_serai_key(&next_key.1, &sig)) + } + + async fn update_serai_key(&mut self) { + let (next_key, mut tx) = self.update_serai_key_tx(); + tx.gas_price = 100_000_000_000; + let tx = ethereum_primitives::deterministically_sign(tx); + let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; + assert!(receipt.status()); + assert_eq!( + CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used), + Router::UPDATE_SERAI_KEY_GAS, + ); + + { + let block = receipt.block_number.unwrap(); + let executed = self.router.executed(block, block).await.unwrap(); + assert_eq!(executed.len(), 1); + assert_eq!( + executed[0], + Executed::NextSeraiKeySet { nonce: self.state.next_nonce, key: next_key.1.eth_repr() } + ); + } + + self.state.next_nonce += 1; + self.state.next_key = Some(next_key); + self.verify_state().await; + } + + fn escape_hatch_tx(&self, escape_to: Address) -> TxLegacy { + let msg = Router::escape_hatch_message(self.chain_id, self.state.next_nonce, escape_to); + let sig = sign(self.state.key.unwrap(), &msg); + self.router.escape_hatch(escape_to, &sig) + } + + async fn escape_hatch(&mut self) { + let mut escape_to = [0; 20]; + OsRng.fill_bytes(&mut escape_to); + let escape_to = Address(escape_to.into()); + + // Set the code of the address to escape to so it isn't flagged as a non-contract + let () = self.provider.raw_request("anvil_setCode".into(), (escape_to, [0])).await.unwrap(); + + let mut tx = self.escape_hatch_tx(escape_to); + tx.gas_price = 100_000_000_000; + let tx = ethereum_primitives::deterministically_sign(tx); + let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; + assert!(receipt.status()); + assert_eq!(CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used), Router::ESCAPE_HATCH_GAS); + + { + let block = receipt.block_number.unwrap(); + let executed = self.router.executed(block, block).await.unwrap(); + assert_eq!(executed.len(), 1); + assert_eq!(executed[0], Executed::EscapeHatch { nonce: self.state.next_nonce, escape_to }); + } + + self.state.next_nonce += 1; + self.state.escaped_to = Some(escape_to); + self.verify_state().await; + } + + fn escape_tx(&self, coin: Coin) -> 
TxLegacy { + let mut tx = self.router.escape(coin); + tx.gas_limit = 100_000; + tx.gas_price = 100_000_000_000; + tx + } } #[tokio::test] async fn test_constructor() { - let (_anvil, _provider, router, key) = setup_test().await; - assert_eq!(router.next_key(BlockNumberOrTag::Latest.into()).await.unwrap(), Some(key.1)); - assert_eq!(router.key(BlockNumberOrTag::Latest.into()).await.unwrap(), None); - assert_eq!(router.next_nonce(BlockNumberOrTag::Latest.into()).await.unwrap(), 1); - assert_eq!( - router.escaped_to(BlockNumberOrTag::Latest.into()).await.unwrap(), - Address::from([0; 20]) - ); -} - -async fn confirm_next_serai_key( - provider: &Arc<RootProvider<SimpleRequest>>, - router: &Router, - nonce: u64, - key: (Scalar, PublicKey), -) -> TransactionReceipt { - let msg = Router::confirm_next_serai_key_message(nonce); - - let nonce = Scalar::random(&mut OsRng); - let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &key.1, &msg); - let s = nonce + (c * key.0); - - let sig = Signature::new(c, s).unwrap(); - - let mut tx = router.confirm_next_serai_key(&sig); - tx.gas_price = 100_000_000_000; - let tx = ethereum_primitives::deterministically_sign(tx); - let receipt = ethereum_test_primitives::publish_tx(provider, tx).await; - assert!(receipt.status()); - assert_eq!(Router::CONFIRM_NEXT_SERAI_KEY_GAS, ((receipt.gas_used + 1000) / 1000) * 1000); - receipt + // `Test::new` internalizes all checks on initial state + Test::new().await; } #[tokio::test] async fn test_confirm_next_serai_key() { - let (_anvil, provider, router, key) = setup_test().await; - - assert_eq!(router.next_key(BlockNumberOrTag::Latest.into()).await.unwrap(), Some(key.1)); - assert_eq!(router.key(BlockNumberOrTag::Latest.into()).await.unwrap(), None); - assert_eq!(router.next_nonce(BlockNumberOrTag::Latest.into()).await.unwrap(), 1); - - let receipt = confirm_next_serai_key(&provider, &router, 1, key).await; - - assert_eq!(router.next_key(receipt.block_hash.unwrap().into()).await.unwrap(), None); - assert_eq!(router.key(receipt.block_hash.unwrap().into()).await.unwrap(), Some(key.1)); - assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 2); + let mut test = Test::new().await; + // TODO: Check all calls fail at this time, including inInstruction + test.confirm_next_serai_key().await; } #[tokio::test] async fn test_update_serai_key() { - let (_anvil, provider, router, key) = setup_test().await; - confirm_next_serai_key(&provider, &router, 1, key).await; + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + test.update_serai_key().await; - let update_to = test_key().1; - let msg = Router::update_serai_key_message(2, &update_to); + // Once we update to a new key, we should, of course, be able to continue to rotate keys + test.confirm_next_serai_key().await; +} - let nonce = Scalar::random(&mut OsRng); - let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &key.1, &msg); - let s = nonce + (c * key.0); +#[tokio::test] +async fn test_eth_in_instruction() { + todo!("TODO") +} - let sig = Signature::new(c, s).unwrap(); +#[tokio::test] +async fn test_erc20_in_instruction() { + todo!("TODO") +} - let mut tx = router.update_serai_key(&update_to, &sig); - tx.gas_price = 100_000_000_000; - let tx = ethereum_primitives::deterministically_sign(tx); - let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await; - assert!(receipt.status()); - assert_eq!(Router::UPDATE_SERAI_KEY_GAS, ((receipt.gas_used + 1000) / 1000) * 1000); +#[tokio::test] +async fn - assert_eq!(router.key(receipt.block_hash.unwrap().into()).await.unwrap(), Some(key.1)); - assert_eq!(router.next_key(receipt.block_hash.unwrap().into()).await.unwrap(), Some(update_to)); - assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 3);
test_eth_address_out_instruction() { + todo!("TODO") +} - assert_eq!(router.key(receipt.block_hash.unwrap().into()).await.unwrap(), Some(key.1)); - assert_eq!(router.next_key(receipt.block_hash.unwrap().into()).await.unwrap(), Some(update_to)); - assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 3); +#[tokio::test] +async fn test_erc20_address_out_instruction() { + todo!("TODO") +} + +#[tokio::test] +async fn test_eth_code_out_instruction() { + todo!("TODO") +} + +#[tokio::test] +async fn test_erc20_code_out_instruction() { + todo!("TODO") +} + +#[tokio::test] +async fn test_escape_hatch() { + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + // Queue another key so the below test cases can run + test.update_serai_key().await; + + { + // The zero address should be invalid to escape to + assert!(matches!( + test.call_and_decode_err(test.escape_hatch_tx([0; 20].into())).await, + IRouterErrors::InvalidEscapeAddress(IRouter::InvalidEscapeAddress {}) + )); + // Empty addresses should be invalid to escape to + assert!(matches!( + test.call_and_decode_err(test.escape_hatch_tx([1; 20].into())).await, + IRouterErrors::EscapeAddressWasNotAContract(IRouter::EscapeAddressWasNotAContract {}) + )); + // Non-empty addresses without code should be invalid to escape to + let tx = ethereum_primitives::deterministically_sign(TxLegacy { + to: Address([1; 20].into()).into(), + gas_limit: 21_000, + gas_price: 100_000_000_000u128, + value: U256::from(1), + ..Default::default() + }); + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; + assert!(receipt.status()); + assert!(matches!( + test.call_and_decode_err(test.escape_hatch_tx([1; 20].into())).await, + IRouterErrors::EscapeAddressWasNotAContract(IRouter::EscapeAddressWasNotAContract {}) + )); + + // Escaping at this point in time should fail + assert!(matches!( + test.call_and_decode_err(test.router.escape(Coin::Ether)).await, + IRouterErrors::EscapeHatchNotInvoked(IRouter::EscapeHatchNotInvoked {}) + )); + } + + // Invoke the escape hatch + test.escape_hatch().await; + + // Now that the escape hatch has been invoked, all of the following calls should fail + { + assert!(matches!( + test.call_and_decode_err(test.update_serai_key_tx().1).await, + IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) + )); + assert!(matches!( + test.call_and_decode_err(test.confirm_next_serai_key_tx()).await, + IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) + )); + // TODO inInstruction + // TODO execute + // We reject further attempts to update the escape hatch to prevent the last key from being + // able to switch from the honest escape hatch to siphoning via a malicious escape hatch (such + // as after the validators represented unstake) + assert!(matches!( + test.call_and_decode_err(test.escape_hatch_tx(test.state.escaped_to.unwrap())).await, + IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) + )); + } + + // Check the escape fn itself + + // ETH + { + let () = test + .provider + .raw_request("anvil_setBalance".into(), (test.router.address(), 1)) + .await + .unwrap(); + let tx = ethereum_primitives::deterministically_sign(test.escape_tx(Coin::Ether)); + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; + assert!(receipt.status()); + + let block = receipt.block_number.unwrap(); + assert_eq!( + test.router.escapes(block, block).await.unwrap(), + vec![Escape { coin: Coin::Ether, amount: U256::from(1) }], 
+ ); + + assert!(test.provider.get_balance(test.router.address()).await.unwrap() == U256::from(0)); + assert!( + test.provider.get_balance(test.state.escaped_to.unwrap()).await.unwrap() == U256::from(1) + ); + } + + // TODO ERC20 escape +} + +/* event InInstruction( address indexed from, address indexed coin, uint256 amount, bytes instruction ); event Batch(uint256 indexed nonce, bytes32 indexed messageHash, bytes results); error InvalidSeraiKey(); error InvalidSignature(); error AmountMismatchesMsgValue(); error TransferFromFailed(); error Reentered(); error EscapeFailed(); function executeArbitraryCode(bytes memory code) external payable; struct Signature { bytes32 c; bytes32 s; } enum DestinationType { Address, Code } struct CodeDestination { uint32 gasLimit; bytes code; } struct OutInstruction { DestinationType destinationType; bytes destination; uint256 amount; } function execute( Signature calldata signature, address coin, uint256 fee, OutInstruction[] calldata outs ) external; } #[tokio::test] @@ -189,7 +472,7 @@ async fn test_eth_in_instruction() { gas_limit: 1_000_000, to: TxKind::Call(router.address()), value: amount, - input: crate::abi::inInstructionCall::new(( + input: crate::_irouter_abi::inInstructionCall::new(( [0; 20].into(), amount, in_instruction.clone().into(), @@ -227,11 +510,6 @@ async fn test_eth_in_instruction() { assert_eq!(parsed_in_instructions[0].data, in_instruction); } -#[tokio::test] -async fn test_erc20_in_instruction() { - todo!("TODO") -} - async fn publish_outs( provider: &RootProvider<SimpleRequest>, router: &Router, @@ -275,68 +553,4 @@ async fn test_eth_address_out_instruction() { assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 3); } - -#[tokio::test] -async fn test_erc20_address_out_instruction() { - todo!("TODO") -} - -#[tokio::test] -async fn test_eth_code_out_instruction() { - todo!("TODO") -} - -#[tokio::test] -async fn test_erc20_code_out_instruction() { - todo!("TODO") -} - -async fn escape_hatch( - provider: &Arc<RootProvider<SimpleRequest>>, - router: &Router, - nonce: u64, - key: (Scalar, PublicKey), - escape_to: Address, -) -> TransactionReceipt { - let msg = Router::escape_hatch_message(nonce, escape_to); - - let nonce = Scalar::random(&mut OsRng); - let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &key.1, &msg); - let s = nonce + (c * key.0); - - let sig = Signature::new(c, s).unwrap(); - - let mut tx = router.escape_hatch(escape_to, &sig); - tx.gas_price = 100_000_000_000; - let tx = ethereum_primitives::deterministically_sign(tx); - let receipt = ethereum_test_primitives::publish_tx(provider, tx).await; - assert!(receipt.status()); - assert_eq!(Router::ESCAPE_HATCH_GAS, ((receipt.gas_used + 1000) / 1000) * 1000); - receipt -} - -async fn escape( - provider: &Arc<RootProvider<SimpleRequest>>, - router: &Router, - coin: Coin, -) -> TransactionReceipt { - let mut tx = router.escape(coin.address()); - tx.gas_price = 100_000_000_000; - let tx = ethereum_primitives::deterministically_sign(tx); - let receipt = ethereum_test_primitives::publish_tx(provider, tx).await; - assert!(receipt.status()); - receipt -} - -#[tokio::test] -async fn test_escape_hatch() { - let (_anvil, provider, router, key) = setup_test().await; - confirm_next_serai_key(&provider, &router, 1, key).await; - let escape_to: Address = { - let mut escape_to = [0; 20]; - OsRng.fill_bytes(&mut escape_to); - escape_to.into() - }; - escape_hatch(&provider, &router, 2, key, escape_to).await; - escape(&provider, &router, Coin::Ether).await; -} +*/
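The escape hatch tests above drive Anvil with raw RPC cheatcodes: `anvil_setCode` makes an address read as a contract and `anvil_setBalance` seeds the Router with coins to escape. A sketch of that pattern in isolation (the helper is hypothetical; the method names are Anvil's):

    use alloy_core::primitives::{Address, U256};
    use alloy_transport::{TransportErrorKind, RpcError};
    use alloy_simple_request_transport::SimpleRequest;
    use alloy_provider::{Provider, RootProvider};

    // Give `who` a balance and a single byte of code so contract checks pass
    async fn prime_account(
      provider: &RootProvider<SimpleRequest>,
      who: Address,
      balance: U256,
    ) -> Result<(), RpcError<TransportErrorKind>> {
      let () = provider.raw_request("anvil_setBalance".into(), (who, balance)).await?;
      let () = provider.raw_request("anvil_setCode".into(), (who, [0])).await?;
      Ok(())
    }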
diff --git a/processor/ethereum/src/main.rs b/processor/ethereum/src/main.rs index acb5bd0d..1a7ff773 100644 --- a/processor/ethereum/src/main.rs +++ b/processor/ethereum/src/main.rs @@ -6,11 +6,13 @@ static ALLOCATOR: zalloc::ZeroizingAlloc<std::alloc::System> = zalloc::ZeroizingAlloc(std::alloc::System); +use core::time::Duration; use std::sync::Arc; +use alloy_core::primitives::U256; use alloy_simple_request_transport::SimpleRequest; use alloy_rpc_client::ClientBuilder; -use alloy_provider::RootProvider; +use alloy_provider::{Provider, RootProvider}; use serai_client::validator_sets::primitives::Session; @@ -62,10 +64,26 @@ async fn main() { ClientBuilder::default().transport(SimpleRequest::new(bin::url()), true), )); + let chain_id = { + let mut delay = Duration::from_secs(5); + loop { + match provider.get_chain_id().await { + Ok(chain_id) => break chain_id, + Err(e) => { + log::error!("failed to fetch the chain ID on boot: {e:?}"); + tokio::time::sleep(delay).await; + delay = (delay + Duration::from_secs(5)).max(Duration::from_secs(120)); + } + } + } + }; + bin::main_loop::( db.clone(), Rpc { db: db.clone(), provider: provider.clone() }, - Scheduler::::new(SmartContract), + Scheduler::::new(SmartContract { + chain_id: U256::from_le_slice(&chain_id.to_le_bytes()), + }), TransactionPublisher::new(db, provider, { let relayer_hostname = env::var("ETHEREUM_RELAYER_HOSTNAME") .expect("ethereum relayer hostname wasn't specified") diff --git a/processor/ethereum/src/primitives/block.rs b/processor/ethereum/src/primitives/block.rs index 780837fa..5804114f 100644 --- a/processor/ethereum/src/primitives/block.rs +++ b/processor/ethereum/src/primitives/block.rs @@ -99,6 +99,7 @@ impl primitives::Block for FullEpoch { let Some(expected) = eventualities.active_eventualities.remove(executed.nonce().to_le_bytes().as_slice()) else { + // TODO: Why is this a continue, not an assert? continue; }; assert_eq!( diff --git a/processor/ethereum/src/primitives/output.rs b/processor/ethereum/src/primitives/output.rs index f7aaa1f8..99ffc880 100644 --- a/processor/ethereum/src/primitives/output.rs +++ b/processor/ethereum/src/primitives/output.rs @@ -81,8 +81,8 @@ impl ReceivedOutput<<Secp256k1 as Ciphersuite>::G, Address> for Output { match self { Output::Output { key: _, instruction } => { let mut id = [0; 40]; - id[.. 32].copy_from_slice(&instruction.id.0); - id[32 ..].copy_from_slice(&instruction.id.1.to_le_bytes()); + id[.. 32].copy_from_slice(&instruction.id.block_hash); + id[32 ..].copy_from_slice(&instruction.id.index_within_block.to_le_bytes()); OutputId(id) } // Yet upon Eventuality completions, we report a Change output to ensure synchrony per the @@ -97,7 +97,7 @@ impl ReceivedOutput<<Secp256k1 as Ciphersuite>::G, Address> for Output { fn transaction_id(&self) -> Self::TransactionId { match self { - Output::Output { key: _, instruction } => instruction.id.0, + Output::Output { key: _, instruction } => instruction.transaction_hash, Output::Eventuality { key: _, nonce } => { let mut id = [0; 32]; id[.. 8].copy_from_slice(&nonce.to_le_bytes()); @@ -114,7 +114,7 @@ fn presumed_origin(&self) -> Option<Address>
{ match self { - Output::Output { key: _, instruction } => Some(Address::from(instruction.from)), + Output::Output { key: _, instruction } => Some(Address::Address(*instruction.from.0)), Output::Eventuality { .. } => None, } } diff --git a/processor/ethereum/src/primitives/transaction.rs b/processor/ethereum/src/primitives/transaction.rs index 98de30c8..dc430f29 100644 --- a/processor/ethereum/src/primitives/transaction.rs +++ b/processor/ethereum/src/primitives/transaction.rs @@ -17,8 +17,8 @@ use crate::{output::OutputId, machine::ClonableTransctionMachine}; #[derive(Clone, PartialEq, Debug)] pub(crate) enum Action { - SetKey { nonce: u64, key: PublicKey }, - Batch { nonce: u64, coin: Coin, fee: U256, outs: Vec<(Address, U256)> }, + SetKey { chain_id: U256, nonce: u64, key: PublicKey }, + Batch { chain_id: U256, nonce: u64, coin: Coin, fee: U256, outs: Vec<(Address, U256)> }, } #[derive(Clone, PartialEq, Eq, Debug)] @@ -33,17 +33,25 @@ impl Action { pub(crate) fn message(&self) -> Vec { match self { - Action::SetKey { nonce, key } => Router::update_serai_key_message(*nonce, key), - Action::Batch { nonce, coin, fee, outs } => { - Router::execute_message(*nonce, *coin, *fee, OutInstructions::from(outs.as_ref())) + Action::SetKey { chain_id, nonce, key } => { + Router::update_serai_key_message(*chain_id, *nonce, key) } + Action::Batch { chain_id, nonce, coin, fee, outs } => Router::execute_message( + *chain_id, + *nonce, + *coin, + *fee, + OutInstructions::from(outs.as_ref()), + ), } } pub(crate) fn eventuality(&self) -> Eventuality { Eventuality(match self { - Self::SetKey { nonce, key } => Executed::SetKey { nonce: *nonce, key: key.eth_repr() }, - Self::Batch { nonce, .. } => { + Self::SetKey { chain_id: _, nonce, key } => { + Executed::NextSeraiKeySet { nonce: *nonce, key: key.eth_repr() } + } + Self::Batch { chain_id: _, nonce, .. 
} => { Executed::Batch { nonce: *nonce, message_hash: keccak256(self.message()) } } }) @@ -77,6 +85,10 @@ impl SignableTransaction for Action { Err(io::Error::other("unrecognized Action type"))?; } + let mut chain_id = [0; 32]; + reader.read_exact(&mut chain_id)?; + let chain_id = U256::from_be_bytes(chain_id); + let mut nonce = [0; 8]; reader.read_exact(&mut nonce)?; let nonce = u64::from_le_bytes(nonce); @@ -88,10 +100,10 @@ impl SignableTransaction for Action { let key = PublicKey::from_eth_repr(key).ok_or_else(|| io::Error::other("invalid key in Action"))?; - Action::SetKey { nonce, key } + Action::SetKey { chain_id, nonce, key } } 1 => { - let coin = Coin::read(reader)?; + let coin = borsh::from_reader(reader)?; let mut fee = [0; 32]; reader.read_exact(&mut fee)?; @@ -111,22 +123,24 @@ impl SignableTransaction for Action { outs.push((address, amount)); } - Action::Batch { nonce, coin, fee, outs } + Action::Batch { chain_id, nonce, coin, fee, outs } } _ => unreachable!(), }) } fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { match self { - Self::SetKey { nonce, key } => { + Self::SetKey { chain_id, nonce, key } => { writer.write_all(&[0])?; + writer.write_all(&chain_id.to_be_bytes::<32>())?; writer.write_all(&nonce.to_le_bytes())?; writer.write_all(&key.eth_repr()) } - Self::Batch { nonce, coin, fee, outs } => { + Self::Batch { chain_id, nonce, coin, fee, outs } => { writer.write_all(&[1])?; + writer.write_all(&chain_id.to_be_bytes::<32>())?; writer.write_all(&nonce.to_le_bytes())?; - coin.write(writer)?; + borsh::BorshSerialize::serialize(coin, writer)?; writer.write_all(&fee.as_le_bytes())?; writer.write_all(&u32::try_from(outs.len()).unwrap().to_le_bytes())?; for (address, amount) in outs { @@ -167,9 +181,9 @@ impl primitives::Eventuality for Eventuality { } fn read(reader: &mut impl io::Read) -> io::Result { - Executed::read(reader).map(Self) + Ok(Self(borsh::from_reader(reader)?)) } fn write(&self, writer: &mut impl io::Write) -> io::Result<()> { - self.0.write(writer) + borsh::BorshSerialize::serialize(&self.0, writer) } } diff --git a/processor/ethereum/src/publisher.rs b/processor/ethereum/src/publisher.rs index a4edd65f..3d18a6ef 100644 --- a/processor/ethereum/src/publisher.rs +++ b/processor/ethereum/src/publisher.rs @@ -88,8 +88,8 @@ impl signers::TransactionPublisher for TransactionPublisher< let nonce = tx.0.nonce(); // Convert from an Action (an internal representation of a signable event) to a TxLegacy let tx = match tx.0 { - Action::SetKey { nonce: _, key } => router.update_serai_key(&key, &tx.1), - Action::Batch { nonce: _, coin, fee, outs } => { + Action::SetKey { chain_id: _, nonce: _, key } => router.update_serai_key(&key, &tx.1), + Action::Batch { chain_id: _, nonce: _, coin, fee, outs } => { router.execute(coin, fee, OutInstructions::from(outs.as_ref()), &tx.1) } }; diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs index 610eb491..128db1e4 100644 --- a/processor/ethereum/src/rpc.rs +++ b/processor/ethereum/src/rpc.rs @@ -165,12 +165,14 @@ impl ScannerFeed for Rpc { let mut instructions = router.in_instructions(block.number, &HashSet::from(TOKENS)).await?; for token in TOKENS { - for TopLevelTransfer { id, from, amount, data } in Erc20::new(provider.clone(), **token) - .top_level_transfers(block.number, router.address()) - .await? + for TopLevelTransfer { id, transaction_hash, from, amount, data } in + Erc20::new(provider.clone(), **token) + .top_level_transfers(block.number, router.address()) + .await? 
{
       instructions.push(EthereumInInstruction {
         id,
+        transaction_hash,
         from,
         coin: EthereumCoin::Erc20(token),
         amount,
@@ -179,7 +181,7 @@ impl<D: Db> ScannerFeed for Rpc<D> {
       }
     }
 
-    let executed = router.executed(block.number).await?;
+    let executed = router.executed(block.number, block.number).await?;
 
     Ok((instructions, executed))
   }
diff --git a/processor/ethereum/src/scheduler.rs b/processor/ethereum/src/scheduler.rs
index e7752897..e8a437c1 100644
--- a/processor/ethereum/src/scheduler.rs
+++ b/processor/ethereum/src/scheduler.rs
@@ -36,7 +36,9 @@ fn balance_to_ethereum_amount(balance: Balance) -> U256 {
 }
 
 #[derive(Clone)]
-pub(crate) struct SmartContract;
+pub(crate) struct SmartContract {
+  pub(crate) chain_id: U256,
+}
 
 impl<D: Db> smart_contract_scheduler::SmartContract<Rpc<D>> for SmartContract {
   type SignableTransaction = Action;
@@ -46,8 +48,11 @@ impl<D: Db> smart_contract_scheduler::SmartContract<Rpc<D>> for SmartContract {
     _retiring_key: KeyFor<Rpc<D>>,
     new_key: KeyFor<Rpc<D>>,
   ) -> (Self::SignableTransaction, EventualityFor<Rpc<D>>) {
-    let action =
-      Action::SetKey { nonce, key: PublicKey::new(new_key).expect("rotating to an invald key") };
+    let action = Action::SetKey {
+      chain_id: self.chain_id,
+      nonce,
+      key: PublicKey::new(new_key).expect("rotating to an invalid key"),
+    };
 
     (action.clone(), action.eventuality())
   }
@@ -133,6 +138,7 @@ impl<D: Db> smart_contract_scheduler::SmartContract<Rpc<D>> for SmartContract {
     }
 
     res.push(Action::Batch {
+      chain_id: self.chain_id,
       nonce,
       coin: coin_to_ethereum_coin(coin),
       fee: U256::try_from(total_gas).unwrap() * fee_per_gas,

From 7e53eff6425719007b556955c8e09ec84149c1ed Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Thu, 23 Jan 2025 06:10:18 -0500
Subject: [PATCH 334/368] Fix the async flow with the Router

It had sequential async calls with O(n) depth, plus a variety of redundant
calls, and each item incurred a further constant of roughly four or five
sequential calls.

Now, the total sequence depth is just 3-4.
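For context, the shape of the fix: anywhere the code awaited one RPC call per
item, the calls are now collected into a `FuturesUnordered` (from the
`futures-util` dependency added below) and driven concurrently. A minimal
sketch of the pattern, where `fetch` stands in for a hypothetical RPC call and
is not code from this repository:

  use futures_util::stream::{StreamExt, FuturesUnordered};

  async fn fetch(x: u64) -> Result<u64, ()> {
    Ok(x + 1)
  }

  // Before: one round trip per item, so the sequence depth is O(n)
  async fn sequential(xs: Vec<u64>) -> Result<Vec<u64>, ()> {
    let mut out = Vec::with_capacity(xs.len());
    for x in xs {
      out.push(fetch(x).await?);
    }
    Ok(out)
  }

  // After: all requests are in flight at once, so the sequence depth is O(1)
  async fn concurrent(xs: Vec<u64>) -> Result<Vec<u64>, ()> {
    let mut futures = xs.into_iter().map(fetch).collect::<FuturesUnordered<_>>();
    let mut out = Vec::new();
    // Results are yielded in completion order, not submission order
    while let Some(res) = futures.next().await {
      out.push(res?);
    }
    Ok(out)
  }

Completion order is arbitrary, which is why the APIs reworked below are
explicitly named and documented as unordered.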
---
 Cargo.lock                                 |   3 +-
 processor/ethereum/erc20/Cargo.toml        |   3 +-
 processor/ethereum/erc20/src/lib.rs        | 253 +++++++++++----------
 processor/ethereum/router/Cargo.toml       |   2 +
 processor/ethereum/router/src/lib.rs       | 250 +++++++++++++-------
 processor/ethereum/src/primitives/block.rs |   1 +
 processor/ethereum/src/rpc.rs              |   8 +-
 7 files changed, 314 insertions(+), 206 deletions(-)

diff --git a/Cargo.lock b/Cargo.lock
index 1f6b372a..b1613420 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -9446,8 +9446,8 @@ dependencies = [
  "alloy-sol-macro",
  "alloy-sol-types",
  "alloy-transport",
+ "futures-util",
  "serai-processor-ethereum-primitives",
- "tokio",
 ]
 
 [[package]]
@@ -9480,6 +9480,7 @@ dependencies = [
  "borsh",
  "build-solidity-contracts",
  "ethereum-schnorr-contract",
+ "futures-util",
  "group",
  "k256",
  "rand_core",
diff --git a/processor/ethereum/erc20/Cargo.toml b/processor/ethereum/erc20/Cargo.toml
index 21be88c5..078192a4 100644
--- a/processor/ethereum/erc20/Cargo.toml
+++ b/processor/ethereum/erc20/Cargo.toml
@@ -29,5 +29,4 @@ alloy-provider = { version = "0.9", default-features = false }
 
 ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false }
 
-# TODO futures-util = { version = "0.3", default-features = false, features = ["std"] }
-tokio = { version = "1", default-features = false, features = ["rt"] }
+futures-util = { version = "0.3", default-features = false, features = ["std"] }
diff --git a/processor/ethereum/erc20/src/lib.rs b/processor/ethereum/erc20/src/lib.rs
index e72e357b..df0a3922 100644
--- a/processor/ethereum/erc20/src/lib.rs
+++ b/processor/ethereum/erc20/src/lib.rs
@@ -2,20 +2,21 @@
 #![doc = include_str!("../README.md")]
 #![deny(missing_docs)]
 
-use std::{sync::Arc, collections::HashSet};
+use core::borrow::Borrow;
+use std::{sync::Arc, collections::HashMap};
 
-use alloy_core::primitives::{Address, B256, U256};
+use alloy_core::primitives::{Address, U256};
 
 use alloy_sol_types::{SolInterface, SolEvent};
 
-use alloy_rpc_types_eth::{Filter, TransactionTrait};
+use alloy_rpc_types_eth::{Log, Filter, TransactionTrait};
 use alloy_transport::{TransportErrorKind, RpcError};
 use alloy_simple_request_transport::SimpleRequest;
 use alloy_provider::{Provider, RootProvider};
 
 use ethereum_primitives::LogIndex;
 
-use tokio::task::JoinSet;
+use futures_util::stream::{StreamExt, FuturesUnordered};
 
 #[rustfmt::skip]
 #[expect(warnings)]
@@ -30,6 +31,9 @@ use abi::IERC20::{IERC20Calls, transferCall, transferFromCall};
 pub use abi::IERC20::Transfer;
 
 /// A top-level ERC20 transfer
+///
+/// This does not include the `token` or `to` fields; those are assumed contextual to the
+/// creation of this struct.
 #[derive(Clone, Debug)]
 pub struct TopLevelTransfer {
   /// The ID of the event for this transfer.
@@ -46,160 +50,175 @@ pub struct TopLevelTransfer {
 
 /// A view for an ERC20 contract.
 #[derive(Clone, Debug)]
-pub struct Erc20(Arc<RootProvider<SimpleRequest>>, Address);
+pub struct Erc20 {
+  provider: Arc<RootProvider<SimpleRequest>>,
+  address: Address,
+}
 impl Erc20 {
   /// Construct a new view of the specified ERC20 contract.
-  pub fn new(provider: Arc<RootProvider<SimpleRequest>>, address: [u8; 20]) -> Self {
-    Self(provider, Address::from(&address))
+  pub fn new(provider: Arc<RootProvider<SimpleRequest>>, address: Address) -> Self {
+    Self { provider, address }
   }
 
-  /// Match a transaction for its top-level transfer to the specified address (if one exists).
-  pub async fn match_top_level_transfer(
+  /// The filter for transfer logs of the specified ERC20, to the specified recipient.
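+  ///
+  /// (The returned filter matches on the ERC20's address, the `Transfer` event signature, and
+  /// the `to` address as the event's second indexed parameter, per the body below.)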
+  pub fn transfer_filter(from_block: u64, to_block: u64, erc20: Address, to: Address) -> Filter {
+    let filter = Filter::new().from_block(from_block).to_block(to_block);
+    filter.address(erc20).event_signature(Transfer::SIGNATURE_HASH).topic2(to.into_word())
+  }
+
+  /// Yield the top-level transfer for the specified transaction (if one exists).
+  ///
+  /// The passed-in logs MUST be the logs for this transaction. The logs MUST be filtered to the
+  /// `Transfer` events of the intended token(s), transferring to the intended `to` address. These
+  /// properties are completely unchecked and assumed to be the case.
+  ///
+  /// This does NOT yield THE top-level transfer. If multiple `Transfer` events have identical
+  /// structure to the top-level transfer call, the earliest `Transfer` event present in the logs
+  /// is considered the top-level transfer.
+  // Yielding THE top-level transfer would require tracing the transaction execution and isn't
+  // worth the effort.
+  pub async fn top_level_transfer(
     provider: impl AsRef<RootProvider<SimpleRequest>>,
-    transaction_hash: B256,
-    to: Address,
+    transaction_hash: [u8; 32],
+    mut transfer_logs: Vec<impl Borrow<Log>>,
   ) -> Result<Option<TopLevelTransfer>, RpcError<TransportErrorKind>> {
     // Fetch the transaction
     let transaction =
-      provider.as_ref().get_transaction_by_hash(transaction_hash).await?.ok_or_else(|| {
-        TransportErrorKind::Custom(
-          "node didn't have the transaction which emitted a log it had".to_string().into(),
-        )
-      })?;
+      provider.as_ref().get_transaction_by_hash(transaction_hash.into()).await?.ok_or_else(
+        || {
+          TransportErrorKind::Custom(
+            "node didn't have the transaction which emitted a log it had".to_string().into(),
+          )
+        },
+      )?;
 
     // If this is a top-level call...
     // Don't validate the encoding as this can't be re-encoded to an identical bytestring due
     // to the `InInstruction` appended after the call itself
-    if let Ok(call) = IERC20Calls::abi_decode(transaction.inner.input(), false) {
-      // Extract the top-level call's from/to/value
-      let (from, call_to, value) = match call {
-        IERC20Calls::transfer(transferCall { to, value }) => (transaction.from, to, value),
-        IERC20Calls::transferFrom(transferFromCall { from, to, value }) => (from, to, value),
-        // Treat any other function selectors as unrecognized
-        _ => return Ok(None),
-      };
-      // If this isn't a transfer to the expected address, return None
-      if call_to != to {
-        return Ok(None);
+    let Ok(call) = IERC20Calls::abi_decode(transaction.inner.input(), false) else {
+      return Ok(None);
+    };
+
+    // Extract the top-level call's from/to/value
+    let (from, to, value) = match call {
+      IERC20Calls::transfer(transferCall { to, value }) => (transaction.from, to, value),
+      IERC20Calls::transferFrom(transferFromCall { from, to, value }) => (from, to, value),
+      // Treat any other function selectors as unrecognized
+      _ => return Ok(None),
    };
+
+    // Sort the logs so the earliest logs are first
+    transfer_logs.sort_by_key(|log| log.borrow().log_index);
+    // Find the log for this top-level transfer
+    for log in transfer_logs {
+      // Check the log is for the called contract
+      // This handles the edge case where we're checking if transfers of token X were top-level
+      // and a transfer of token Y (with equivalent structure) was top-level
+      if Some(log.borrow().address()) != transaction.inner.to() {
+        continue;
       }
 
-      // Fetch the transaction's logs
-      let receipt =
-        provider.as_ref().get_transaction_receipt(transaction_hash).await?.ok_or_else(|| {
-          TransportErrorKind::Custom(
-            "node didn't have receipt for a transaction we were matching for a top-level transfer"
.to_string() - .into(), - ) - })?; + // Since the caller is responsible for filtering these to `Transfer` events, we can assume + // this is a non-compliant ERC20 or an error with the logs fetched. We assume ERC20 + // compliance here, making this an RPC error + let log = log.borrow().log_decode::().map_err(|_| { + TransportErrorKind::Custom("log didn't include a valid transfer event".to_string().into()) + })?; - // Find the log for this transfer - for log in receipt.inner.logs() { - // If this log was emitted by a different contract, continue - if Some(log.address()) != transaction.inner.to() { - continue; - } + let block_hash = log.block_hash.ok_or_else(|| { + TransportErrorKind::Custom("log didn't have its block hash set".to_string().into()) + })?; + let log_index = log.log_index.ok_or_else(|| { + TransportErrorKind::Custom("log didn't have its index set".to_string().into()) + })?; + let log = log.inner.data; - // Check if this is actually a transfer log - // https://github.com/alloy-rs/core/issues/589 - if log.topics().first() != Some(&Transfer::SIGNATURE_HASH) { - continue; - } - - let block_hash = log.block_hash.ok_or_else(|| { - TransportErrorKind::Custom("log didn't have its block hash set".to_string().into()) - })?; - let log_index = log.log_index.ok_or_else(|| { - TransportErrorKind::Custom("log didn't have its index set".to_string().into()) - })?; - let log = log - .log_decode::() - .map_err(|e| { - TransportErrorKind::Custom(format!("failed to decode Transfer log: {e:?}").into()) - })? - .inner - .data; - - // Ensure the top-level transfer is equivalent to the transfer this log represents. Since - // we can't find the exact top-level transfer without tracing the call, we just rule the - // first equivalent transfer as THE top-level transfer - if !((log.from == from) && (log.to == to) && (log.value == value)) { - continue; - } - - // Read the data appended after - let encoded = call.abi_encode(); - let data = transaction.inner.input().as_ref()[encoded.len() ..].to_vec(); - - return Ok(Some(TopLevelTransfer { - id: LogIndex { block_hash: *block_hash, index_within_block: log_index }, - transaction_hash: *transaction_hash, - from: log.from, - amount: log.value, - data, - })); + // Ensure the top-level transfer is equivalent to the transfer this log represents + if !((log.from == from) && (log.to == to) && (log.value == value)) { + continue; } + + // Read the data appended after + let encoded = call.abi_encode(); + let data = transaction.inner.input().as_ref()[encoded.len() ..].to_vec(); + + return Ok(Some(TopLevelTransfer { + id: LogIndex { block_hash: *block_hash, index_within_block: log_index }, + transaction_hash, + from: log.from, + amount: log.value, + data, + })); } Ok(None) } - /// Fetch all top-level transfers to the specified address. + /// Fetch all top-level transfers to the specified address for this token. /// /// The result of this function is unordered. 
- pub async fn top_level_transfers( + pub async fn top_level_transfers_unordered( &self, - block: u64, + from_block: u64, + to_block: u64, to: Address, ) -> Result, RpcError> { - // Get all transfers within this block - let filter = Filter::new().from_block(block).to_block(block).address(self.1); - let filter = filter.event_signature(Transfer::SIGNATURE_HASH); - let mut to_topic = [0; 32]; - to_topic[12 ..].copy_from_slice(to.as_ref()); - let filter = filter.topic2(B256::from(to_topic)); - let logs = self.0.get_logs(&filter).await?; + // Get all transfers within these blocks + let logs = self + .provider + .get_logs(&Self::transfer_filter(from_block, to_block, self.address, to)) + .await?; - // These logs are for all transactions which performed any transfer - // We now check each transaction for having a top-level transfer to the specified address - let tx_ids = logs - .into_iter() - .map(|log| { - // Double check the address which emitted this log - if log.address() != self.1 { - Err(TransportErrorKind::Custom( - "node returned logs for a different address than requested".to_string().into(), - ))?; - } + // The logs, indexed by their transactions + let mut transaction_logs = HashMap::new(); + // Index the logs by their transactions + for log in logs { + // Double check the address which emitted this log + if log.address() != self.address { + Err(TransportErrorKind::Custom( + "node returned logs for a different address than requested".to_string().into(), + ))?; + } + // Double check the event signature for this log + if log.topics().first() != Some(&Transfer::SIGNATURE_HASH) { + Err(TransportErrorKind::Custom( + "node returned a log for a different topic than filtered to".to_string().into(), + ))?; + } + // Double check the `to` topic + if log.topics().get(2) != Some(&to.into_word()) { + Err(TransportErrorKind::Custom( + "node returned a transfer for a different `to` than filtered to".to_string().into(), + ))?; + } - log.transaction_hash.ok_or_else(|| { + let tx_id = log + .transaction_hash + .ok_or_else(|| { TransportErrorKind::Custom("log didn't specify its transaction hash".to_string().into()) - }) - }) - .collect::, _>>()?; + })? + .0; - let mut join_set = JoinSet::new(); - for tx_id in tx_ids { - join_set.spawn(Self::match_top_level_transfer(self.0.clone(), tx_id, to)); + transaction_logs.entry(tx_id).or_insert_with(|| Vec::with_capacity(1)).push(log); + } + + // Use `FuturesUnordered` so these RPC calls run in parallel + let mut futures = FuturesUnordered::new(); + for (tx_id, transfer_logs) in transaction_logs { + futures.push(Self::top_level_transfer(&self.provider, tx_id, transfer_logs)); } let mut top_level_transfers = vec![]; - while let Some(top_level_transfer) = join_set.join_next().await { - // This is an error if a task panics or aborts - // Panicking on a task panic is desired behavior, and we haven't aborted any tasks - match top_level_transfer.unwrap() { + while let Some(top_level_transfer) = futures.next().await { + match top_level_transfer { // Top-level transfer Ok(Some(top_level_transfer)) => top_level_transfers.push(top_level_transfer), // Not a top-level transfer Ok(None) => continue, // Failed to get this transaction's information so abort - Err(e) => { - join_set.abort_all(); - Err(e)? 
- } + Err(e) => Err(e)?, } } - Ok(top_level_transfers) } } diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index 4b737a00..4078ba0e 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -41,6 +41,8 @@ erc20 = { package = "serai-processor-ethereum-erc20", path = "../erc20", default serai-client = { path = "../../../substrate/client", default-features = false, features = ["ethereum"] } +futures-util = { version = "0.3", default-features = false, features = ["std"] } + [build-dependencies] build-solidity-contracts = { path = "../../../networks/ethereum/build-contracts", default-features = false } diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index fd88a222..9e15c9f9 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -2,7 +2,10 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use std::{sync::Arc, collections::HashSet}; +use std::{ + sync::Arc, + collections::{HashSet, HashMap}, +}; use borsh::{BorshSerialize, BorshDeserialize}; @@ -21,12 +24,14 @@ use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; +use serai_client::networks::ethereum::Address as SeraiAddress; + use ethereum_primitives::LogIndex; use ethereum_schnorr::{PublicKey, Signature}; use ethereum_deployer::Deployer; use erc20::{Transfer, Erc20}; -use serai_client::networks::ethereum::Address as SeraiAddress; +use futures_util::stream::{StreamExt, FuturesUnordered}; #[rustfmt::skip] #[expect(warnings)] @@ -397,25 +402,33 @@ impl Router { } /// Fetch the `InInstruction`s emitted by the Router from this block. - pub async fn in_instructions( + /// + /// This is not guaranteed to return them in any order. + pub async fn in_instructions_unordered( &self, - block: u64, + from_block: u64, + to_block: u64, allowed_tokens: &HashSet
, ) -> Result, RpcError> { // The InInstruction events for this block - let filter = Filter::new().from_block(block).to_block(block).address(self.address); - let filter = filter.event_signature(InInstructionEvent::SIGNATURE_HASH); - let mut logs = self.provider.get_logs(&filter).await?; - logs.sort_by_key(|log| (log.block_number, log.log_index)); + let logs = { + let filter = Filter::new().from_block(from_block).to_block(to_block).address(self.address); + let filter = filter.event_signature(InInstructionEvent::SIGNATURE_HASH); + self.provider.get_logs(&filter).await? + }; + let mut in_instructions = Vec::with_capacity(logs.len()); /* We check that for all InInstructions for ERC20s emitted, a corresponding transfer occurred. - In order to prevent a transfer from being used to justify multiple distinct InInstructions, - we insert the transfer's log index into this HashSet. - */ - let mut transfer_check = HashSet::new(); + On this initial loop, we just queue the ERC20 InInstructions for later verification. - let mut in_instructions = vec![]; + We don't do this for ETH as it'd require tracing the transaction, which is non-trivial. It + also isn't necessary as all of this is solely defense in depth. + */ + let mut erc20s = HashSet::new(); + let mut erc20_transfer_logs = FuturesUnordered::new(); + let mut erc20_transactions = HashSet::new(); + let mut erc20_in_instructions = vec![]; for log in logs { // Double check the address which emitted this log if log.address() != self.address { @@ -423,6 +436,10 @@ impl Router { "node returned a log from a different address than requested".to_string().into(), ))?; } + // Double check this is a InInstruction log + if log.topics().first() != Some(&InInstructionEvent::SIGNATURE_HASH) { + continue; + } let id = LogIndex { block_hash: log @@ -439,6 +456,7 @@ impl Router { let transaction_hash = log.transaction_hash.ok_or_else(|| { TransportErrorKind::Custom("log didn't have its transaction hash set".to_string().into()) })?; + let transaction_hash = *transaction_hash; let log = log .log_decode::() @@ -451,82 +469,148 @@ impl Router { .data; let coin = Coin::from(log.coin); - if let Coin::Erc20(token) = coin { - if !allowed_tokens.contains(&token) { - continue; - } - // Get all logs for this TX - let receipt = - self.provider.get_transaction_receipt(transaction_hash).await?.ok_or_else(|| { - TransportErrorKind::Custom( - "node didn't have the receipt for a transaction it had".to_string().into(), - ) - })?; - let tx_logs = receipt.inner.logs(); - - /* - The transfer which causes an InInstruction event won't be a top-level transfer. - Accordingly, when looking for the matching transfer, disregard the top-level transfer (if - one exists). - */ - if let Some(matched) = - Erc20::match_top_level_transfer(&self.provider, transaction_hash, self.address).await? 
- { - // Mark this log index as used so it isn't used again - transfer_check.insert(matched.id.index_within_block); - } - - // Find a matching transfer log - let mut found_transfer = false; - for tx_log in tx_logs { - let log_index = tx_log.log_index.ok_or_else(|| { - TransportErrorKind::Custom( - "log in transaction receipt didn't have its log index set".to_string().into(), - ) - })?; - - // Ensure we didn't already use this transfer to check a distinct InInstruction event - if transfer_check.contains(&log_index) { - continue; - } - - // Check if this log is from the token we expected to be transferred - if tx_log.address() != token { - continue; - } - // Check if this is a transfer log - // https://github.com/alloy-rs/core/issues/589 - if tx_log.topics().first() != Some(&Transfer::SIGNATURE_HASH) { - continue; - } - let Ok(transfer) = Transfer::decode_log(&tx_log.inner.clone(), true) else { continue }; - // Check if this is a transfer to us for the expected amount - if (transfer.to == self.address) && (transfer.value == log.amount) { - transfer_check.insert(log_index); - found_transfer = true; - break; - } - } - if !found_transfer { - // This shouldn't be a simple error - // This is an exploit, a non-conforming ERC20, or a malicious connection - // This should halt the process. While this is sufficient, it's sub-optimal - // TODO - Err(TransportErrorKind::Custom( - "ERC20 InInstruction with no matching transfer log".to_string().into(), - ))?; - } - }; - - in_instructions.push(InInstruction { + let in_instruction = InInstruction { id, - transaction_hash: *transaction_hash, + transaction_hash, from: log.from, coin, amount: log.amount, data: log.instruction.as_ref().to_vec(), - }); + }; + + match coin { + Coin::Ether => in_instructions.push(in_instruction), + Coin::Erc20(token) => { + if !allowed_tokens.contains(&token) { + continue; + } + + // Fetch the ERC20 transfer events necessary to verify this InInstruction has a matching + // transfer + if !erc20s.contains(&token) { + erc20s.insert(token); + erc20_transfer_logs.push(async move { + let filter = Erc20::transfer_filter(from_block, to_block, token, self.address); + self.provider.get_logs(&filter).await.map(|logs| (token, logs)) + }); + } + erc20_transactions.insert(transaction_hash); + erc20_in_instructions.push((transaction_hash, in_instruction)) + } + } + } + + // Collect the ERC20 transfer logs + let erc20_transfer_logs = { + let mut collected = HashMap::with_capacity(erc20s.len()); + while let Some(token_and_logs) = erc20_transfer_logs.next().await { + let (token, logs) = token_and_logs?; + collected.insert(token, logs); + } + collected + }; + + /* + For each transaction, it may have a top-level ERC20 transfer. That top-level transfer won't + be the transfer caused by the call to `inInstruction`, so we shouldn't consider it + justification for this `InInstruction` event. + + Fetch all top-level transfers here so we can ignore them. 
+ */ + let mut erc20_top_level_transfers = FuturesUnordered::new(); + let mut transaction_transfer_logs = HashMap::new(); + for transaction in erc20_transactions { + // Filter to the logs for this specific transaction + let logs = erc20_transfer_logs + .values() + .flat_map(|logs_per_token| logs_per_token.iter()) + .filter_map(|log| { + let log_transaction_hash = log.transaction_hash.ok_or_else(|| { + TransportErrorKind::Custom( + "log didn't have its transaction hash set".to_string().into(), + ) + }); + match log_transaction_hash { + Ok(log_transaction_hash) => { + if log_transaction_hash == transaction { + Some(Ok(log)) + } else { + None + } + } + Err(e) => Some(Err(e)), + } + }) + .collect::, _>>()?; + + // Find the top-level transfer + erc20_top_level_transfers.push(Erc20::top_level_transfer( + &self.provider, + transaction, + logs.clone(), + )); + // Keep the transaction-indexed logs for the actual justifying + transaction_transfer_logs.insert(transaction, logs); + } + + /* + In order to prevent a single transfer from being used to justify multiple distinct + InInstructions, we insert the transfer's log index into this HashSet. + */ + let mut already_used_to_justify = HashSet::new(); + + // Collect the top-level transfers + while let Some(erc20_top_level_transfer) = erc20_top_level_transfers.next().await { + let erc20_top_level_transfer = erc20_top_level_transfer?; + // If this transaction had a top-level transfer... + if let Some(erc20_top_level_transfer) = erc20_top_level_transfer { + // Mark this log index as used so it isn't used again + already_used_to_justify.insert(erc20_top_level_transfer.id.index_within_block); + } + } + + // Now, for each ERC20 InInstruction, find a justifying transfer log + for (transaction_hash, in_instruction) in erc20_in_instructions { + let mut justified = false; + for log in &transaction_transfer_logs[&transaction_hash] { + let log_index = log.log_index.ok_or_else(|| { + TransportErrorKind::Custom( + "log in transaction receipt didn't have its log index set".to_string().into(), + ) + })?; + + // Ensure we didn't already use this transfer to check a distinct InInstruction event + if already_used_to_justify.contains(&log_index) { + continue; + } + + // Check if this log is from the token we expected to be transferred + if log.address() != Address::from(in_instruction.coin) { + continue; + } + // Check if this is a transfer log + if log.topics().first() != Some(&Transfer::SIGNATURE_HASH) { + continue; + } + let Ok(transfer) = Transfer::decode_log(&log.inner.clone(), true) else { continue }; + // Check if this aligns with the InInstruction + if (transfer.from == in_instruction.from) && + (transfer.to == self.address) && + (transfer.value == in_instruction.amount) + { + already_used_to_justify.insert(log_index); + justified = true; + break; + } + } + if !justified { + // This is an exploit, a non-conforming ERC20, or an invalid connection + Err(TransportErrorKind::Custom( + "ERC20 InInstruction with no matching transfer log".to_string().into(), + ))?; + } + in_instructions.push(in_instruction); } Ok(in_instructions) diff --git a/processor/ethereum/src/primitives/block.rs b/processor/ethereum/src/primitives/block.rs index 5804114f..b01493f0 100644 --- a/processor/ethereum/src/primitives/block.rs +++ b/processor/ethereum/src/primitives/block.rs @@ -32,6 +32,7 @@ impl primitives::BlockHeader for Epoch { #[derive(Clone, PartialEq, Eq, Debug)] pub(crate) struct FullEpoch { pub(crate) epoch: Epoch, + /// The unordered list of `InInstruction`s within this epoch 
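+  /// (as yielded by `Router::in_instructions_unordered`, which doesn't guarantee an order)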
pub(crate) instructions: Vec, pub(crate) executed: Vec, } diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs index 128db1e4..480c9440 100644 --- a/processor/ethereum/src/rpc.rs +++ b/processor/ethereum/src/rpc.rs @@ -162,12 +162,14 @@ impl ScannerFeed for Rpc { router: Router, block: Header, ) -> Result<(Vec, Vec), RpcError> { - let mut instructions = router.in_instructions(block.number, &HashSet::from(TOKENS)).await?; + let mut instructions = router + .in_instructions_unordered(block.number, block.number, &HashSet::from(TOKENS)) + .await?; for token in TOKENS { for TopLevelTransfer { id, transaction_hash, from, amount, data } in - Erc20::new(provider.clone(), **token) - .top_level_transfers(block.number, router.address()) + Erc20::new(provider.clone(), token) + .top_level_transfers_unordered(block.number, block.number, router.address()) .await? { instructions.push(EthereumInInstruction { From e922264ebf0b9e8cc31a6f7d62f4f7a4269cebcf Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 23 Jan 2025 08:22:41 -0500 Subject: [PATCH 335/368] Add selector collisions to the IERC20 lib --- processor/ethereum/erc20/contracts/IERC20.sol | 14 +++++++++++++ processor/ethereum/erc20/src/lib.rs | 21 +++++++++++++++++-- processor/ethereum/erc20/src/tests.rs | 13 ++++++++++++ .../ethereum/router/contracts/Router.sol | 3 +-- 4 files changed, 47 insertions(+), 4 deletions(-) create mode 100644 processor/ethereum/erc20/src/tests.rs diff --git a/processor/ethereum/erc20/contracts/IERC20.sol b/processor/ethereum/erc20/contracts/IERC20.sol index c2de5ca0..6298592a 100644 --- a/processor/ethereum/erc20/contracts/IERC20.sol +++ b/processor/ethereum/erc20/contracts/IERC20.sol @@ -18,3 +18,17 @@ interface IERC20 { function approve(address spender, uint256 value) external returns (bool); function allowance(address owner, address spender) external view returns (uint256); } + +interface SeraiIERC20 { + function transferWithInInstruction01BB244A8A( + address to, + uint256 value, + bytes calldata inInstruction + ) external returns (bool); + function transferFromWithInInstruction00081948E0( + address from, + address to, + uint256 value, + bytes calldata inInstruction + ) external returns (bool); +} diff --git a/processor/ethereum/erc20/src/lib.rs b/processor/ethereum/erc20/src/lib.rs index df0a3922..953bab88 100644 --- a/processor/ethereum/erc20/src/lib.rs +++ b/processor/ethereum/erc20/src/lib.rs @@ -28,8 +28,15 @@ mod abi { alloy_sol_macro::sol!("contracts/IERC20.sol"); } use abi::IERC20::{IERC20Calls, transferCall, transferFromCall}; +use abi::SeraiIERC20::{ + SeraiIERC20Calls, transferWithInInstruction01BB244A8ACall as transferWithInInstructionCall, + transferFromWithInInstruction00081948E0Call as transferFromWithInInstructionCall, +}; pub use abi::IERC20::Transfer; +#[cfg(test)] +mod tests; + /// A top-level ERC20 transfer /// /// This does not include `token`, `to` fields. Those are assumed contextual to the creation of @@ -139,8 +146,18 @@ impl Erc20 { } // Read the data appended after - let encoded = call.abi_encode(); - let data = transaction.inner.input().as_ref()[encoded.len() ..].to_vec(); + let data = if let Ok(call) = SeraiIERC20Calls::abi_decode(transaction.inner.input(), true) { + match call { + SeraiIERC20Calls::transferWithInInstruction01BB244A8A( + transferWithInInstructionCall { inInstruction, .. }, + ) | + SeraiIERC20Calls::transferFromWithInInstruction00081948E0( + transferFromWithInInstructionCall { inInstruction, .. 
}, + ) => Vec::from(inInstruction), + } + } else { + vec![] + }; return Ok(Some(TopLevelTransfer { id: LogIndex { block_hash: *block_hash, index_within_block: log_index }, diff --git a/processor/ethereum/erc20/src/tests.rs b/processor/ethereum/erc20/src/tests.rs new file mode 100644 index 00000000..2218e19b --- /dev/null +++ b/processor/ethereum/erc20/src/tests.rs @@ -0,0 +1,13 @@ +use alloy_sol_types::SolCall; + +#[test] +fn selector_collisions() { + assert_eq!( + crate::abi::IERC20::transferCall::SELECTOR, + crate::abi::SeraiIERC20::transferWithInInstruction01BB244A8ACall::SELECTOR + ); + assert_eq!( + crate::abi::IERC20::transferFromCall::SELECTOR, + crate::abi::SeraiIERC20::transferFromWithInInstruction00081948E0Call::SELECTOR + ); +} diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index ade72a8c..e0bc77bb 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -29,8 +29,7 @@ contract Router is IRouterWithoutCollisions { bytes32 constant ACCOUNT_WITHOUT_CODE_CODEHASH = keccak256(""); /// @dev The address in transient storage used for the reentrancy guard - bytes32 constant REENTRANCY_GUARD_SLOT = - bytes32(uint256(keccak256("ReentrancyGuard Router")) - 1); + bytes32 constant REENTRANCY_GUARD_SLOT = bytes32(uint256(keccak256("ReentrancyGuard Router")) - 1); /** * @dev The next nonce used to determine the address of contracts deployed with CREATE. This is From a63a86ba79bd61bbd6acedfe31044bc7042bbcca Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 23 Jan 2025 09:30:54 -0500 Subject: [PATCH 336/368] Test Ether InInstructions --- Cargo.lock | 1 + processor/ethereum/erc20/src/lib.rs | 10 +++- processor/ethereum/router/Cargo.toml | 1 + processor/ethereum/router/src/lib.rs | 46 +++++++++++++- processor/ethereum/router/src/tests/mod.rs | 70 +++++++++++++++++++++- 5 files changed, 121 insertions(+), 7 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index b1613420..a9aadf55 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -9483,6 +9483,7 @@ dependencies = [ "futures-util", "group", "k256", + "parity-scale-codec", "rand_core", "serai-client", "serai-ethereum-test-primitives", diff --git a/processor/ethereum/erc20/src/lib.rs b/processor/ethereum/erc20/src/lib.rs index 953bab88..a3ed386c 100644 --- a/processor/ethereum/erc20/src/lib.rs +++ b/processor/ethereum/erc20/src/lib.rs @@ -21,6 +21,7 @@ use futures_util::stream::{StreamExt, FuturesUnordered}; #[rustfmt::skip] #[expect(warnings)] #[expect(needless_pass_by_value)] +#[expect(missing_docs)] #[expect(clippy::all)] #[expect(clippy::ignored_unit_patterns)] #[expect(clippy::redundant_closure_for_method_calls)] @@ -28,11 +29,12 @@ mod abi { alloy_sol_macro::sol!("contracts/IERC20.sol"); } use abi::IERC20::{IERC20Calls, transferCall, transferFromCall}; -use abi::SeraiIERC20::{ - SeraiIERC20Calls, transferWithInInstruction01BB244A8ACall as transferWithInInstructionCall, +use abi::SeraiIERC20::SeraiIERC20Calls; +pub use abi::IERC20::Transfer; +pub use abi::SeraiIERC20::{ + transferWithInInstruction01BB244A8ACall as transferWithInInstructionCall, transferFromWithInInstruction00081948E0Call as transferFromWithInInstructionCall, }; -pub use abi::IERC20::Transfer; #[cfg(test)] mod tests; @@ -156,6 +158,8 @@ impl Erc20 { ) => Vec::from(inInstruction), } } else { + // We don't error here so this transfer is propagated up the stack, even without the + // InInstruction. 
In practice, Serai should acknowledge this and return it to the sender vec![] }; diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index 4078ba0e..1da4fd02 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -39,6 +39,7 @@ ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = ethereum-deployer = { package = "serai-processor-ethereum-deployer", path = "../deployer", default-features = false } erc20 = { package = "serai-processor-ethereum-erc20", path = "../erc20", default-features = false } +scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] } serai-client = { path = "../../../substrate/client", default-features = false, features = ["ethereum"] } futures-util = { version = "0.3", default-features = false, features = ["std"] } diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index 9e15c9f9..a7e0165c 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -24,7 +24,10 @@ use alloy_transport::{TransportErrorKind, RpcError}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; -use serai_client::networks::ethereum::Address as SeraiAddress; +use scale::Encode; +use serai_client::{ + in_instructions::primitives::Shorthand, networks::ethereum::Address as SeraiAddress, +}; use ethereum_primitives::LogIndex; use ethereum_schnorr::{PublicKey, Signature}; @@ -309,6 +312,8 @@ impl Router { } /// Construct a transaction to confirm the next key representing Serai. + /// + /// The gas price is not set and is left to the caller. pub fn confirm_next_serai_key(&self, sig: &Signature) -> TxLegacy { TxLegacy { to: TxKind::Call(self.address), @@ -328,6 +333,8 @@ impl Router { } /// Construct a transaction to update the key representing Serai. + /// + /// The gas price is not set and is left to the caller. pub fn update_serai_key(&self, public_key: &PublicKey, sig: &Signature) -> TxLegacy { TxLegacy { to: TxKind::Call(self.address), @@ -342,6 +349,37 @@ impl Router { } } + /// Construct a transaction to send coins with an InInstruction to Serai. + /// + /// If coin is an ERC20, this will not create a transaction calling the Router but will create a + /// top-level transfer of the ERC20 to the Router. This avoids needing to call `approve` before + /// publishing the transaction calling the Router. + /// + /// The gas limit and gas price are not set and are left to the caller. + pub fn in_instruction(&self, coin: Coin, amount: U256, in_instruction: &Shorthand) -> TxLegacy { + match coin { + Coin::Ether => TxLegacy { + to: self.address.into(), + input: abi::inInstructionCall::new((coin.into(), amount, in_instruction.encode().into())) + .abi_encode() + .into(), + value: amount, + ..Default::default() + }, + Coin::Erc20(erc20) => TxLegacy { + to: erc20.into(), + input: erc20::transferWithInInstructionCall::new(( + self.address, + amount, + in_instruction.encode().into(), + )) + .abi_encode() + .into(), + ..Default::default() + }, + } + } + /// Get the message to be signed in order to execute a series of `OutInstruction`s. pub fn execute_message( chain_id: U256, @@ -360,6 +398,8 @@ impl Router { } /// Construct a transaction to execute a batch of `OutInstruction`s. + /// + /// The gas limit and gas price are not set and are left to the caller. 
pub fn execute(&self, coin: Coin, fee: U256, outs: OutInstructions, sig: &Signature) -> TxLegacy { // TODO let gas_limit = Self::EXECUTE_BASE_GAS + outs.0.iter().map(|_| 200_000 + 10_000).sum::(); @@ -383,6 +423,8 @@ impl Router { } /// Construct a transaction to trigger the escape hatch. + /// + /// The gas price is not set and is left to the caller. pub fn escape_hatch(&self, escape_to: Address, sig: &Signature) -> TxLegacy { TxLegacy { to: TxKind::Call(self.address), @@ -393,6 +435,8 @@ impl Router { } /// Construct a transaction to escape coins via the escape hatch. + /// + /// The gas limit and gas price are not set and are left to the caller. pub fn escape(&self, coin: Coin) -> TxLegacy { TxLegacy { to: TxKind::Call(self.address), diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index d3cb4427..6426bcaf 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -18,6 +18,14 @@ use alloy_provider::{Provider, RootProvider}; use alloy_node_bindings::{Anvil, AnvilInstance}; +use scale::Encode; +use serai_client::{ + primitives::SeraiAddress, + in_instructions::primitives::{ + InInstruction as SeraiInInstruction, RefundableInInstruction, Shorthand, + }, +}; + use ethereum_primitives::LogIndex; use ethereum_schnorr::{PublicKey, Signature}; use ethereum_deployer::Deployer; @@ -26,7 +34,7 @@ use crate::{ _irouter_abi::IRouterWithoutCollisions::{ self as IRouter, IRouterWithoutCollisionsErrors as IRouterErrors, }, - Coin, OutInstructions, Router, Executed, Escape, + Coin, InInstruction, OutInstructions, Router, Executed, Escape, }; mod constants; @@ -165,6 +173,8 @@ impl Test { let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; assert!(receipt.status()); + // Only check the gas is equal when writing to a previously unallocated storage slot, as this + // is the highest possible gas cost and what the constant is derived from if self.state.key.is_none() { assert_eq!( CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used), @@ -231,6 +241,21 @@ impl Test { self.verify_state().await; } + fn eth_in_instruction_tx(&self) -> (Coin, U256, Shorthand, TxLegacy) { + let coin = Coin::Ether; + let amount = U256::from(1); + let shorthand = Shorthand::Raw(RefundableInInstruction { + origin: None, + instruction: SeraiInInstruction::Transfer(SeraiAddress([0xff; 32])), + }); + + let mut tx = self.router.in_instruction(coin, amount, &shorthand); + tx.gas_limit = 1_000_000; + tx.gas_price = 100_000_000_000; + + (coin, amount, shorthand, tx) + } + fn escape_hatch_tx(&self, escape_to: Address) -> TxLegacy { let msg = Router::escape_hatch_message(self.chain_id, self.state.next_nonce, escape_to); let sig = sign(self.state.key.unwrap(), &msg); @@ -297,7 +322,43 @@ async fn test_update_serai_key() { #[tokio::test] async fn test_eth_in_instruction() { - todo!("TODO") + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + let (coin, amount, shorthand, tx) = test.eth_in_instruction_tx(); + + // This should fail if the value mismatches the amount + { + let mut tx = tx.clone(); + tx.value = U256::ZERO; + assert!(matches!( + test.call_and_decode_err(tx).await, + IRouterErrors::AmountMismatchesMsgValue(IRouter::AmountMismatchesMsgValue {}) + )); + } + + let tx = ethereum_primitives::deterministically_sign(tx); + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; + 
assert!(receipt.status()); + + let block = receipt.block_number.unwrap(); + let in_instructions = + test.router.in_instructions_unordered(block, block, &HashSet::new()).await.unwrap(); + assert_eq!(in_instructions.len(), 1); + assert_eq!( + in_instructions[0], + InInstruction { + id: LogIndex { + block_hash: *receipt.block_hash.unwrap(), + index_within_block: receipt.inner.logs()[0].log_index.unwrap(), + }, + transaction_hash: **tx.hash(), + from: tx.recover_signer().unwrap(), + coin, + amount, + data: shorthand.encode(), + } + ); } #[tokio::test] @@ -379,7 +440,10 @@ async fn test_escape_hatch() { test.call_and_decode_err(test.confirm_next_serai_key_tx()).await, IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) )); - // TODO inInstruction + assert!(matches!( + test.call_and_decode_err(test.eth_in_instruction_tx().3).await, + IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) + )); // TODO execute // We reject further attempts to update the escape hatch to prevent the last key from being // able to switch from the honest escape hatch to siphoning via a malicious escape hatch (such From 3d44766eff93df6c0383733132eeec409aacffb3 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 24 Jan 2025 03:23:58 -0500 Subject: [PATCH 337/368] Add ERC20 InInstruction test --- processor/ethereum/deployer/src/lib.rs | 2 +- processor/ethereum/router/build.rs | 5 +- .../contracts/tests/ERC20.sol | 22 +++-- processor/ethereum/router/src/lib.rs | 20 +++-- processor/ethereum/router/src/tests/erc20.rs | 83 +++++++++++++++++++ processor/ethereum/router/src/tests/mod.rs | 83 +++++++++++++++++-- 6 files changed, 194 insertions(+), 21 deletions(-) rename processor/ethereum/{TODO => router}/contracts/tests/ERC20.sol (73%) create mode 100644 processor/ethereum/router/src/tests/erc20.rs diff --git a/processor/ethereum/deployer/src/lib.rs b/processor/ethereum/deployer/src/lib.rs index f810d617..64ca0d2b 100644 --- a/processor/ethereum/deployer/src/lib.rs +++ b/processor/ethereum/deployer/src/lib.rs @@ -52,7 +52,7 @@ impl Deployer { /// funded for this transaction to be submitted. This account has no known private key to anyone /// so ETH sent can be neither misappropriated nor returned. 
pub fn deployment_tx() -> Signed { - let bytecode = Bytes::from(BYTECODE); + let bytecode = Bytes::from_static(BYTECODE); // Legacy transactions are used to ensure the widest possible degree of support across EVMs let tx = TxLegacy { diff --git a/processor/ethereum/router/build.rs b/processor/ethereum/router/build.rs index 26a2bee6..8c0fbe67 100644 --- a/processor/ethereum/router/build.rs +++ b/processor/ethereum/router/build.rs @@ -41,6 +41,9 @@ fn main() { "contracts/IRouter.sol", "contracts/Router.sol", ], - &(artifacts_path + "/router.rs"), + &(artifacts_path.clone() + "/router.rs"), ); + + // Build the test contracts + build_solidity_contracts::build(&[], "contracts/tests", &(artifacts_path + "/tests")).unwrap(); } diff --git a/processor/ethereum/TODO/contracts/tests/ERC20.sol b/processor/ethereum/router/contracts/tests/ERC20.sol similarity index 73% rename from processor/ethereum/TODO/contracts/tests/ERC20.sol rename to processor/ethereum/router/contracts/tests/ERC20.sol index 9ce4bad7..f10ac0cd 100644 --- a/processor/ethereum/TODO/contracts/tests/ERC20.sol +++ b/processor/ethereum/router/contracts/tests/ERC20.sol @@ -17,17 +17,11 @@ contract TestERC20 { return 18; } - function totalSupply() public pure returns (uint256) { - return 1_000_000 * 10e18; - } + uint256 public totalSupply; mapping(address => uint256) balances; mapping(address => mapping(address => uint256)) allowances; - constructor() { - balances[msg.sender] = totalSupply(); - } - function balanceOf(address owner) public view returns (uint256) { return balances[owner]; } @@ -35,6 +29,7 @@ contract TestERC20 { function transfer(address to, uint256 value) public returns (bool) { balances[msg.sender] -= value; balances[to] += value; + emit Transfer(msg.sender, to, value); return true; } @@ -42,15 +37,28 @@ contract TestERC20 { allowances[from][msg.sender] -= value; balances[from] -= value; balances[to] += value; + emit Transfer(from, to, value); return true; } function approve(address spender, uint256 value) public returns (bool) { allowances[msg.sender][spender] = value; + emit Approval(msg.sender, spender, value); return true; } function allowance(address owner, address spender) public view returns (uint256) { return allowances[owner][spender]; } + + function mint(address owner, uint256 value) external { + balances[owner] += value; + totalSupply += value; + emit Transfer(address(0), owner, value); + } + + function magicApprove(address owner, address spender, uint256 value) external { + allowances[owner][spender] = value; + emit Approval(owner, spender, value); + } } diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index a7e0165c..394f2df0 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -11,10 +11,7 @@ use borsh::{BorshSerialize, BorshDeserialize}; use group::ff::PrimeField; -use alloy_core::primitives::{ - hex::{self, FromHex}, - Address, U256, Bytes, TxKind, -}; +use alloy_core::primitives::{hex, Address, U256, TxKind}; use alloy_sol_types::{SolValue, SolConstructor, SolCall, SolEvent}; use alloy_consensus::TxLegacy; @@ -257,9 +254,18 @@ impl Router { const ESCAPE_HATCH_GAS: u64 = 61_238; fn code() -> Vec { - const BYTECODE: &[u8] = - include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-router/Router.bin")); - Bytes::from_hex(BYTECODE).expect("compiled-in Router bytecode wasn't valid hex").to_vec() + const BYTECODE: &[u8] = { + const BYTECODE_HEX: &[u8] = + include_bytes!(concat!(env!("OUT_DIR"), 
"/serai-processor-ethereum-router/Router.bin")); + const BYTECODE: [u8; BYTECODE_HEX.len() / 2] = + match hex::const_decode_to_array::<{ BYTECODE_HEX.len() / 2 }>(BYTECODE_HEX) { + Ok(bytecode) => bytecode, + Err(_) => panic!("Router.bin did not contain valid hex"), + }; + &BYTECODE + }; + + BYTECODE.to_vec() } fn init_code(key: &PublicKey) -> Vec { diff --git a/processor/ethereum/router/src/tests/erc20.rs b/processor/ethereum/router/src/tests/erc20.rs new file mode 100644 index 00000000..f342a87c --- /dev/null +++ b/processor/ethereum/router/src/tests/erc20.rs @@ -0,0 +1,83 @@ +use alloy_core::primitives::{hex, Address, U256, Bytes, TxKind, PrimitiveSignature}; +use alloy_sol_types::SolCall; + +use alloy_consensus::{TxLegacy, SignableTransaction, Signed}; + +use alloy_provider::Provider; + +use ethereum_primitives::keccak256; + +use crate::tests::Test; + +#[rustfmt::skip] +#[expect(warnings)] +#[expect(needless_pass_by_value)] +#[expect(clippy::all)] +#[expect(clippy::ignored_unit_patterns)] +#[expect(clippy::redundant_closure_for_method_calls)] +mod abi { + alloy_sol_macro::sol!("contracts/tests/ERC20.sol"); +} + +pub struct Erc20(Address); +impl Erc20 { + pub(crate) async fn deploy(test: &Test) -> Self { + const BYTECODE: &[u8] = { + const BYTECODE_HEX: &[u8] = + include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-router/TestERC20.bin")); + const BYTECODE: [u8; BYTECODE_HEX.len() / 2] = + match hex::const_decode_to_array::<{ BYTECODE_HEX.len() / 2 }>(BYTECODE_HEX) { + Ok(bytecode) => bytecode, + Err(_) => panic!("TestERC20.bin did not contain valid hex"), + }; + &BYTECODE + }; + + let tx = TxLegacy { + chain_id: None, + nonce: 0, + gas_price: 100_000_000_000u128, + gas_limit: 1_000_000, + to: TxKind::Create, + value: U256::ZERO, + input: Bytes::from_static(BYTECODE), + }; + let tx = ethereum_primitives::deterministically_sign(tx); + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx).await; + Self(receipt.contract_address.unwrap()) + } + + pub(crate) fn address(&self) -> Address { + self.0 + } + + pub(crate) async fn approve(&self, test: &Test, owner: Address, spender: Address, amount: U256) { + let tx = TxLegacy { + chain_id: None, + nonce: 0, + gas_price: 100_000_000_000u128, + gas_limit: 1_000_000, + to: self.0.into(), + value: U256::ZERO, + input: abi::TestERC20::magicApproveCall::new((owner, spender, amount)).abi_encode().into(), + }; + let tx = ethereum_primitives::deterministically_sign(tx); + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx).await; + assert!(receipt.status()); + } + + pub(crate) async fn mint(&self, test: &Test, account: Address, amount: U256) { + let tx = TxLegacy { + chain_id: None, + nonce: 0, + gas_price: 100_000_000_000u128, + gas_limit: 1_000_000, + to: self.0.into(), + value: U256::ZERO, + input: abi::TestERC20::mintCall::new((account, amount)).abi_encode().into(), + }; + let tx = ethereum_primitives::deterministically_sign(tx); + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx).await; + assert!(receipt.status()); + } +} diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 6426bcaf..4d63f89c 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -38,6 +38,8 @@ use crate::{ }; mod constants; +mod erc20; +use erc20::Erc20; pub(crate) fn test_key() -> (Scalar, PublicKey) { loop { @@ -241,13 +243,17 @@ impl Test { self.verify_state().await; } + fn in_instruction() -> Shorthand { 
+ Shorthand::Raw(RefundableInInstruction { + origin: None, + instruction: SeraiInInstruction::Transfer(SeraiAddress([0xff; 32])), + }) + } + fn eth_in_instruction_tx(&self) -> (Coin, U256, Shorthand, TxLegacy) { let coin = Coin::Ether; let amount = U256::from(1); - let shorthand = Shorthand::Raw(RefundableInInstruction { - origin: None, - instruction: SeraiInInstruction::Transfer(SeraiAddress([0xff; 32])), - }); + let shorthand = Self::in_instruction(); let mut tx = self.router.in_instruction(coin, amount, &shorthand); tx.gas_limit = 1_000_000; @@ -363,7 +369,74 @@ async fn test_eth_in_instruction() { #[tokio::test] async fn test_erc20_in_instruction() { - todo!("TODO") + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + let erc20 = Erc20::deploy(&test).await; + + let coin = Coin::Erc20(erc20.address()); + let amount = U256::from(1); + let shorthand = Test::in_instruction(); + + // The provided `in_instruction` function will use a top-level transfer for ERC20 InInstructions, + // so we have to manually write this call + let tx = TxLegacy { + chain_id: None, + nonce: 0, + gas_price: 100_000_000_000u128, + gas_limit: 1_000_000, + to: test.router.address().into(), + value: U256::ZERO, + input: crate::abi::inInstructionCall::new((coin.into(), amount, shorthand.encode().into())) + .abi_encode() + .into(), + }; + + // If no `approve` was granted, this should fail + assert!(matches!( + test.call_and_decode_err(tx.clone()).await, + IRouterErrors::TransferFromFailed(IRouter::TransferFromFailed {}) + )); + + let tx = ethereum_primitives::deterministically_sign(tx); + { + let signer = tx.recover_signer().unwrap(); + erc20.mint(&test, signer, amount).await; + erc20.approve(&test, signer, test.router.address(), amount).await; + } + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; + assert!(receipt.status()); + + let block = receipt.block_number.unwrap(); + + // If we don't whitelist this token, we shouldn't be yielded an InInstruction + { + let in_instructions = + test.router.in_instructions_unordered(block, block, &HashSet::new()).await.unwrap(); + assert!(in_instructions.is_empty()); + } + + let in_instructions = test + .router + .in_instructions_unordered(block, block, &HashSet::from([coin.into()])) + .await + .unwrap(); + assert_eq!(in_instructions.len(), 1); + assert_eq!( + in_instructions[0], + InInstruction { + id: LogIndex { + block_hash: *receipt.block_hash.unwrap(), + // First is the Transfer log, then the InInstruction log + index_within_block: receipt.inner.logs()[1].log_index.unwrap(), + }, + transaction_hash: **tx.hash(), + from: tx.recover_signer().unwrap(), + coin, + amount, + data: shorthand.encode(), + } + ); } #[tokio::test] From 201b67503135eb17a2252df831d45627370d8fa3 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 24 Jan 2025 03:45:04 -0500 Subject: [PATCH 338/368] Test ERC20 InInstructions --- processor/ethereum/router/src/tests/erc20.rs | 10 ++- processor/ethereum/router/src/tests/mod.rs | 84 +++++--------------- 2 files changed, 29 insertions(+), 65 deletions(-) diff --git a/processor/ethereum/router/src/tests/erc20.rs b/processor/ethereum/router/src/tests/erc20.rs index f342a87c..3dfff600 100644 --- a/processor/ethereum/router/src/tests/erc20.rs +++ b/processor/ethereum/router/src/tests/erc20.rs @@ -1,8 +1,9 @@ use alloy_core::primitives::{hex, Address, U256, Bytes, TxKind, PrimitiveSignature}; -use alloy_sol_types::SolCall; +use alloy_sol_types::{SolValue, SolCall}; use alloy_consensus::{TxLegacy, 
SignableTransaction, Signed}; +use alloy_rpc_types_eth::{TransactionInput, TransactionRequest}; use alloy_provider::Provider; use ethereum_primitives::keccak256; @@ -80,4 +81,11 @@ impl Erc20 { let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx).await; assert!(receipt.status()); } + + pub(crate) async fn balance_of(&self, test: &Test, account: Address) -> U256 { + let call = TransactionRequest::default().to(self.0).input(TransactionInput::new( + abi::TestERC20::balanceOfCall::new((account,)).abi_encode().into(), + )); + U256::abi_decode(&test.provider.call(&call).await.unwrap(), true).unwrap() + } } diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 4d63f89c..6e359987 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -546,24 +546,35 @@ async fn test_escape_hatch() { vec![Escape { coin: Coin::Ether, amount: U256::from(1) }], ); - assert!(test.provider.get_balance(test.router.address()).await.unwrap() == U256::from(0)); - assert!( - test.provider.get_balance(test.state.escaped_to.unwrap()).await.unwrap() == U256::from(1) + assert_eq!(test.provider.get_balance(test.router.address()).await.unwrap(), U256::from(0)); + assert_eq!( + test.provider.get_balance(test.state.escaped_to.unwrap()).await.unwrap(), + U256::from(1) ); } - // TODO ERC20 escape + // ERC20 + { + let erc20 = Erc20::deploy(&test).await; + let coin = Coin::Erc20(erc20.address()); + let amount = U256::from(1); + erc20.mint(&test, test.router.address(), amount).await; + + let tx = ethereum_primitives::deterministically_sign(test.escape_tx(coin)); + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; + assert!(receipt.status()); + + let block = receipt.block_number.unwrap(); + assert_eq!(test.router.escapes(block, block).await.unwrap(), vec![Escape { coin, amount }],); + assert_eq!(erc20.balance_of(&test, test.router.address()).await, U256::from(0)); + assert_eq!(erc20.balance_of(&test, test.state.escaped_to.unwrap()).await, amount); + } } /* - event InInstruction( - address indexed from, address indexed coin, uint256 amount, bytes instruction - ); event Batch(uint256 indexed nonce, bytes32 indexed messageHash, bytes results); error InvalidSeraiKey(); error InvalidSignature(); - error AmountMismatchesMsgValue(); - error TransferFromFailed(); error Reentered(); error EscapeFailed(); function executeArbitraryCode(bytes memory code) external payable; @@ -592,61 +603,6 @@ async fn test_escape_hatch() { ) external; } -#[tokio::test] -async fn test_eth_in_instruction() { - let (_anvil, provider, router, key) = setup_test().await; - confirm_next_serai_key(&provider, &router, 1, key).await; - - let amount = U256::try_from(OsRng.next_u64()).unwrap(); - let mut in_instruction = vec![0; usize::try_from(OsRng.next_u64() % 256).unwrap()]; - OsRng.fill_bytes(&mut in_instruction); - - let tx = TxLegacy { - chain_id: None, - nonce: 0, - // 100 gwei - gas_price: 100_000_000_000, - gas_limit: 1_000_000, - to: TxKind::Call(router.address()), - value: amount, - input: crate::_irouter_abi::inInstructionCall::new(( - [0; 20].into(), - amount, - in_instruction.clone().into(), - )) - .abi_encode() - .into(), - }; - let tx = ethereum_primitives::deterministically_sign(tx); - let signer = tx.recover_signer().unwrap(); - - let receipt = ethereum_test_primitives::publish_tx(&provider, tx).await; - assert!(receipt.status()); - - assert_eq!(receipt.inner.logs().len(), 1); - let parsed_log = - 
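`balance_of` above is a read-only `eth_call`, not a published transaction. The same shape generalizes to any ERC20 view function; a sketch for `totalSupply`, assuming the test token's ABI exposes it (it isn't shown in this diff, so that member is an assumption):

async fn total_supply(test: &Test, token: Address) -> U256 {
  // `totalSupply()` is assumed to exist on the test token, per the standard ERC20 interface
  let call = TransactionRequest::default().to(token).input(TransactionInput::new(
    abi::TestERC20::totalSupplyCall::new(()).abi_encode().into(),
  ));
  // `true` asks the decoder to validate the returndata is a well-formed ABI encoding
  U256::abi_decode(&test.provider.call(&call).await.unwrap(), true).unwrap()
}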
receipt.inner.logs()[0].log_decode::().unwrap().inner.data; - assert_eq!(parsed_log.from, signer); - assert_eq!(parsed_log.coin, Address::from([0; 20])); - assert_eq!(parsed_log.amount, amount); - assert_eq!(parsed_log.instruction.as_ref(), &in_instruction); - - let parsed_in_instructions = - router.in_instructions(receipt.block_number.unwrap(), &HashSet::new()).await.unwrap(); - assert_eq!(parsed_in_instructions.len(), 1); - assert_eq!( - parsed_in_instructions[0].id, - LogIndex { - block_hash: *receipt.block_hash.unwrap(), - index_within_block: receipt.inner.logs()[0].log_index.unwrap(), - }, - ); - assert_eq!(parsed_in_instructions[0].from, signer); - assert_eq!(parsed_in_instructions[0].coin, Coin::Ether); - assert_eq!(parsed_in_instructions[0].amount, amount); - assert_eq!(parsed_in_instructions[0].data, in_instruction); -} - async fn publish_outs( provider: &RootProvider, router: &Router, From f948881ebadb593fdfd07be3272ed53a7fe02679 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 24 Jan 2025 05:34:49 -0500 Subject: [PATCH 339/368] Simplify async code in in_instructions_unordered Outsources fetching the ERC20 events to top_level_transfers_unordered. --- processor/ethereum/erc20/src/lib.rs | 191 +++++++------- processor/ethereum/primitives/src/lib.rs | 2 +- processor/ethereum/router/src/lib.rs | 251 ++++++++----------- processor/ethereum/router/src/tests/erc20.rs | 6 +- processor/ethereum/router/src/tests/mod.rs | 132 +++++----- processor/ethereum/src/rpc.rs | 26 +- 6 files changed, 284 insertions(+), 324 deletions(-) diff --git a/processor/ethereum/erc20/src/lib.rs b/processor/ethereum/erc20/src/lib.rs index a3ed386c..4d0cb0ff 100644 --- a/processor/ethereum/erc20/src/lib.rs +++ b/processor/ethereum/erc20/src/lib.rs @@ -2,8 +2,7 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] -use core::borrow::Borrow; -use std::{sync::Arc, collections::HashMap}; +use std::collections::HashMap; use alloy_core::primitives::{Address, U256}; @@ -57,20 +56,27 @@ pub struct TopLevelTransfer { pub data: Vec, } +/// The result of `Erc20::top_level_transfers_unordered`. +pub struct TopLevelTransfers { + /// Every `Transfer` log of the contextual ERC20 to the contextual account, indexed by + /// their transaction. + /// + /// The ERC20/account is labelled contextual as it isn't directly named here. Instead, they're + /// assumed contextual to how this was created. + pub logs: HashMap<[u8; 32], Vec>, + /// All of the top-level transfers of the contextual ERC20 to the contextual account. + /// + /// The ERC20/account is labelled contextual as it isn't directly named here. Instead, they're + /// assumed contextual to how this was created. + pub transfers: Vec, +} + /// A view for an ERC20 contract. #[derive(Clone, Debug)] -pub struct Erc20 { - provider: Arc>, - address: Address, -} +pub struct Erc20; impl Erc20 { - /// Construct a new view of the specified ERC20 contract. - pub fn new(provider: Arc>, address: Address) -> Self { - Self { provider, address } - } - /// The filter for transfer logs of the specified ERC20, to the specified recipient. 
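For context on `transfer_filter` (defined next): `Transfer(address indexed from, address indexed to, uint256 value)` places the event signature in topic0, `from` in topic1, and `to` in topic2, so pinning topic2 selects only inbound transfers. A sketch of the equivalent filter, assuming the `Transfer` binding this crate already has in scope:

fn inbound_transfers(from_block: u64, to_block: u64, erc20: Address, to: Address) -> Filter {
  Filter::new()
    .from_block(from_block)
    .to_block(to_block)
    // Only logs emitted by this ERC20
    .address(erc20)
    // topic0: the `Transfer` event signature
    .event_signature(Transfer::SIGNATURE_HASH)
    // topic2: the indexed `to` argument, widened to a 32-byte word
    .topic2(to.into_word())
}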
- pub fn transfer_filter(from_block: u64, to_block: u64, erc20: Address, to: Address) -> Filter { + fn transfer_filter(from_block: u64, to_block: u64, erc20: Address, to: Address) -> Filter { let filter = Filter::new().from_block(from_block).to_block(to_block); filter.address(erc20).event_signature(Transfer::SIGNATURE_HASH).topic2(to.into_word()) } @@ -78,32 +84,35 @@ impl Erc20 { /// Yield the top-level transfer for the specified transaction (if one exists). /// /// The passed-in logs MUST be the logs for this transaction. The logs MUST be filtered to the - /// `Transfer` events of the intended token(s) and the intended `to` transferred to. These + /// `Transfer` events of the intended token and the intended `to` transferred to. These /// properties are completely unchecked and assumed to be the case. /// /// This does NOT yield THE top-level transfer. If multiple `Transfer` events have identical - /// structure to the top-level transfer call, the earliest `Transfer` event present in the logs - /// is considered the top-level transfer. + /// structure to the top-level transfer call, the first `Transfer` event present in the logs is + /// considered the top-level transfer. // Yielding THE top-level transfer would require tracing the transaction execution and isn't // worth the effort. - pub async fn top_level_transfer( - provider: impl AsRef>, + async fn top_level_transfer( + provider: &RootProvider, + erc20: Address, transaction_hash: [u8; 32], - mut transfer_logs: Vec>, + transfer_logs: &[Log], ) -> Result, RpcError> { // Fetch the transaction let transaction = - provider.as_ref().get_transaction_by_hash(transaction_hash.into()).await?.ok_or_else( - || { - TransportErrorKind::Custom( - "node didn't have the transaction which emitted a log it had".to_string().into(), - ) - }, - )?; + provider.get_transaction_by_hash(transaction_hash.into()).await?.ok_or_else(|| { + TransportErrorKind::Custom( + "node didn't have the transaction which emitted a log it had".to_string().into(), + ) + })?; + + // If this transaction didn't call this ERC20 at a top-level, return + if transaction.inner.to() != Some(erc20) { + return Ok(None); + } - // If this is a top-level call... // Don't validate the encoding as this can't be re-encoded to an identical bytestring due - // to the `InInstruction` appended after the call itself + // to the additional data appended after the call itself let Ok(call) = IERC20Calls::abi_decode(transaction.inner.input(), false) else { return Ok(None); }; @@ -116,21 +125,12 @@ impl Erc20 { _ => return Ok(None), }; - // Sort the logs to ensure the the earliest logs are first - transfer_logs.sort_by_key(|log| log.borrow().log_index); // Find the log for this top-level transfer for log in transfer_logs { - // Check the log is for the called contract - // This handles the edge case where we're checking if transfers of token X were top-level and - // a transfer of token Y (with equivalent structure) was top-level - if Some(log.borrow().address()) != transaction.inner.to() { - continue; - } - // Since the caller is responsible for filtering these to `Transfer` events, we can assume // this is a non-compliant ERC20 or an error with the logs fetched. 
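The early return above is the core of the top-level check: a transfer only counts as top-level if the transaction's `to` is the ERC20 itself, excluding internal calls from other contracts which emit indistinguishable logs. A standalone sketch of just that check (the provider's transport parameter is assumed to match the rest of this crate):

async fn is_top_level_call_to(
  provider: &RootProvider<SimpleRequest>,
  erc20: Address,
  transaction_hash: [u8; 32],
) -> Result<bool, RpcError<TransportErrorKind>> {
  let transaction =
    provider.get_transaction_by_hash(transaction_hash.into()).await?.ok_or_else(|| {
      TransportErrorKind::Custom("node didn't have the transaction".to_string().into())
    })?;
  // A `None` `to` is a contract deployment, which also isn't a call to the ERC20
  Ok(transaction.inner.to() == Some(erc20))
}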
We assume ERC20 // compliance here, making this an RPC error - let log = log.borrow().log_decode::().map_err(|_| { + let log = log.log_decode::().map_err(|_| { TransportErrorKind::Custom("log didn't include a valid transfer event".to_string().into()) })?; @@ -158,8 +158,8 @@ impl Erc20 { ) => Vec::from(inInstruction), } } else { - // We don't error here so this transfer is propagated up the stack, even without the - // InInstruction. In practice, Serai should acknowledge this and return it to the sender + // If there was no additional data appended, use an empty Vec (which has no data) + // This has a slight information loss in that it's None -> Some(vec![]), but it's fine vec![] }; @@ -177,69 +177,76 @@ impl Erc20 { /// Fetch all top-level transfers to the specified address for this token. /// - /// The result of this function is unordered. + /// The `transfers` in the result are unordered. The `logs` are sorted by index. pub async fn top_level_transfers_unordered( - &self, + provider: &RootProvider, from_block: u64, to_block: u64, + erc20: Address, to: Address, - ) -> Result, RpcError> { - // Get all transfers within these blocks - let logs = self - .provider - .get_logs(&Self::transfer_filter(from_block, to_block, self.address, to)) - .await?; + ) -> Result> { + let mut logs = { + // Get all transfers within these blocks + let logs = provider.get_logs(&Self::transfer_filter(from_block, to_block, erc20, to)).await?; - // The logs, indexed by their transactions - let mut transaction_logs = HashMap::new(); - // Index the logs by their transactions - for log in logs { - // Double check the address which emitted this log - if log.address() != self.address { - Err(TransportErrorKind::Custom( - "node returned logs for a different address than requested".to_string().into(), - ))?; - } - // Double check the event signature for this log - if log.topics().first() != Some(&Transfer::SIGNATURE_HASH) { - Err(TransportErrorKind::Custom( - "node returned a log for a different topic than filtered to".to_string().into(), - ))?; - } - // Double check the `to` topic - if log.topics().get(2) != Some(&to.into_word()) { - Err(TransportErrorKind::Custom( - "node returned a transfer for a different `to` than filtered to".to_string().into(), - ))?; + // The logs, indexed by their transactions + let mut transaction_logs = HashMap::new(); + // Index the logs by their transactions + for log in logs { + // Double check the address which emitted this log + if log.address() != erc20 { + Err(TransportErrorKind::Custom( + "node returned logs for a different address than requested".to_string().into(), + ))?; + } + // Double check the event signature for this log + if log.topics().first() != Some(&Transfer::SIGNATURE_HASH) { + Err(TransportErrorKind::Custom( + "node returned a log for a different topic than filtered to".to_string().into(), + ))?; + } + // Double check the `to` topic + if log.topics().get(2) != Some(&to.into_word()) { + Err(TransportErrorKind::Custom( + "node returned a transfer for a different `to` than filtered to".to_string().into(), + ))?; + } + + let tx_id = log + .transaction_hash + .ok_or_else(|| { + TransportErrorKind::Custom("log didn't specify its transaction hash".to_string().into()) + })? + .0; + + transaction_logs.entry(tx_id).or_insert_with(|| Vec::with_capacity(1)).push(log); } - let tx_id = log - .transaction_hash - .ok_or_else(|| { - TransportErrorKind::Custom("log didn't specify its transaction hash".to_string().into()) - })? 
- .0; + transaction_logs + }; - transaction_logs.entry(tx_id).or_insert_with(|| Vec::with_capacity(1)).push(log); - } + let mut transfers = vec![]; + { + // Use `FuturesUnordered` so these RPC calls run in parallel + let mut futures = FuturesUnordered::new(); + for (tx_id, transfer_logs) in &mut logs { + // Sort the logs to ensure the the earliest logs are first + transfer_logs.sort_by_key(|log| log.log_index); + futures.push(Self::top_level_transfer(provider, erc20, *tx_id, transfer_logs)); + } - // Use `FuturesUnordered` so these RPC calls run in parallel - let mut futures = FuturesUnordered::new(); - for (tx_id, transfer_logs) in transaction_logs { - futures.push(Self::top_level_transfer(&self.provider, tx_id, transfer_logs)); - } - - let mut top_level_transfers = vec![]; - while let Some(top_level_transfer) = futures.next().await { - match top_level_transfer { - // Top-level transfer - Ok(Some(top_level_transfer)) => top_level_transfers.push(top_level_transfer), - // Not a top-level transfer - Ok(None) => continue, - // Failed to get this transaction's information so abort - Err(e) => Err(e)?, + while let Some(transfer) = futures.next().await { + match transfer { + // Top-level transfer + Ok(Some(transfer)) => transfers.push(transfer), + // Not a top-level transfer + Ok(None) => continue, + // Failed to get this transaction's information so abort + Err(e) => Err(e)?, + } } } - Ok(top_level_transfers) + + Ok(TopLevelTransfers { logs, transfers }) } } diff --git a/processor/ethereum/primitives/src/lib.rs b/processor/ethereum/primitives/src/lib.rs index 44d08e5a..2727ea22 100644 --- a/processor/ethereum/primitives/src/lib.rs +++ b/processor/ethereum/primitives/src/lib.rs @@ -14,7 +14,7 @@ mod borsh; pub use borsh::*; /// An index of a log within a block. -#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)] +#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)] #[borsh(crate = "::borsh")] pub struct LogIndex { /// The hash of the block which produced this log. diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index 394f2df0..de9531f7 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -29,7 +29,7 @@ use serai_client::{ use ethereum_primitives::LogIndex; use ethereum_schnorr::{PublicKey, Signature}; use ethereum_deployer::Deployer; -use erc20::{Transfer, Erc20}; +use erc20::{Transfer, TopLevelTransfer, TopLevelTransfers, Erc20}; use futures_util::stream::{StreamExt, FuturesUnordered}; @@ -451,35 +451,66 @@ impl Router { } } - /// Fetch the `InInstruction`s emitted by the Router from this block. + /// Fetch the `InInstruction`s for the Router for the specified inclusive range of blocks. + /// + /// This includes all `InInstruction` events from the Router and all top-level transfers to the + /// Router. /// /// This is not guaranteed to return them in any order. pub async fn in_instructions_unordered( &self, from_block: u64, to_block: u64, - allowed_tokens: &HashSet
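The collection loop above is a common shape: fan per-transaction RPC lookups out over `FuturesUnordered`, then drain them in completion order, which is why the result is documented as unordered. A generic sketch of that shape:

use futures_util::stream::{FuturesUnordered, StreamExt};

async fn drain_unordered<T, E>(
  jobs: impl Iterator<Item = impl core::future::Future<Output = Result<Option<T>, E>>>,
) -> Result<Vec<T>, E> {
  // All queued futures make progress concurrently once polled
  let mut futures = jobs.collect::<FuturesUnordered<_>>();
  let mut values = vec![];
  while let Some(result) = futures.next().await {
    match result {
      Ok(Some(value)) => values.push(value),
      // `None` means the item was filtered out, not that an error occurred
      Ok(None) => continue,
      // Abort on the first error
      Err(e) => Err(e)?,
    }
  }
  Ok(values)
}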
<Address>,
+    allowed_erc20s: &HashSet<Address>
, ) -> Result, RpcError> { // The InInstruction events for this block - let logs = { + let in_instruction_logs = { let filter = Filter::new().from_block(from_block).to_block(to_block).address(self.address); let filter = filter.event_signature(InInstructionEvent::SIGNATURE_HASH); self.provider.get_logs(&filter).await? }; - let mut in_instructions = Vec::with_capacity(logs.len()); - /* - We check that for all InInstructions for ERC20s emitted, a corresponding transfer occurred. - On this initial loop, we just queue the ERC20 InInstructions for later verification. + // Define the Vec for the result now that we have the logs as a size hint + let mut in_instructions = Vec::with_capacity(in_instruction_logs.len()); - We don't do this for ETH as it'd require tracing the transaction, which is non-trivial. It - also isn't necessary as all of this is solely defense in depth. - */ - let mut erc20s = HashSet::new(); - let mut erc20_transfer_logs = FuturesUnordered::new(); - let mut erc20_transactions = HashSet::new(); - let mut erc20_in_instructions = vec![]; - for log in logs { + // Handle the top-level transfers for this block + let mut justifying_erc20_transfer_logs = HashSet::new(); + let erc20_transfer_logs = { + let mut transfers = FuturesUnordered::new(); + for erc20 in allowed_erc20s { + transfers.push(async move { + ( + erc20, + Erc20::top_level_transfers_unordered( + &self.provider, + from_block, + to_block, + *erc20, + self.address, + ) + .await, + ) + }); + } + + let mut logs = HashMap::with_capacity(allowed_erc20s.len()); + while let Some((token, transfers)) = transfers.next().await { + let TopLevelTransfers { logs: token_logs, transfers } = transfers?; + logs.insert(token, token_logs); + // Map the top-level transfer to an InInstruction + for transfer in transfers { + let TopLevelTransfer { id, transaction_hash, from, amount, data } = transfer; + justifying_erc20_transfer_logs.insert(transfer.id); + let in_instruction = + InInstruction { id, transaction_hash, from, coin: Coin::Erc20(*token), amount, data }; + in_instructions.push(in_instruction); + } + } + logs + }; + + // Now handle the InInstruction events + for log in in_instruction_logs { // Double check the address which emitted this log if log.address() != self.address { Err(TransportErrorKind::Custom( @@ -491,18 +522,22 @@ impl Router { continue; } - let id = LogIndex { - block_hash: log - .block_hash - .ok_or_else(|| { - TransportErrorKind::Custom("log didn't have its block hash set".to_string().into()) - })? - .into(), - index_within_block: log.log_index.ok_or_else(|| { - TransportErrorKind::Custom("log didn't have its index set".to_string().into()) - })?, + let log_index = |log: &Log| -> Result { + Ok(LogIndex { + block_hash: log + .block_hash + .ok_or_else(|| { + TransportErrorKind::Custom("log didn't have its block hash set".to_string().into()) + })? 
+ .into(), + index_within_block: log.log_index.ok_or_else(|| { + TransportErrorKind::Custom("log didn't have its index set".to_string().into()) + })?, + }) }; + let id = log_index(&log)?; + let transaction_hash = log.transaction_hash.ok_or_else(|| { TransportErrorKind::Custom("log didn't have its transaction hash set".to_string().into()) })?; @@ -530,135 +565,57 @@ impl Router { }; match coin { - Coin::Ether => in_instructions.push(in_instruction), + Coin::Ether => {} Coin::Erc20(token) => { - if !allowed_tokens.contains(&token) { + // Check this is an allowed token + if !allowed_erc20s.contains(&token) { continue; } - // Fetch the ERC20 transfer events necessary to verify this InInstruction has a matching - // transfer - if !erc20s.contains(&token) { - erc20s.insert(token); - erc20_transfer_logs.push(async move { - let filter = Erc20::transfer_filter(from_block, to_block, token, self.address); - self.provider.get_logs(&filter).await.map(|logs| (token, logs)) - }); - } - erc20_transactions.insert(transaction_hash); - erc20_in_instructions.push((transaction_hash, in_instruction)) - } - } - } + /* + We check that for all InInstructions for ERC20s emitted, a corresponding transfer + occurred. - // Collect the ERC20 transfer logs - let erc20_transfer_logs = { - let mut collected = HashMap::with_capacity(erc20s.len()); - while let Some(token_and_logs) = erc20_transfer_logs.next().await { - let (token, logs) = token_and_logs?; - collected.insert(token, logs); - } - collected - }; + We don't do this for ETH as it'd require tracing the transaction, which is non-trivial. + It also isn't necessary as all of this is solely defense in depth. + */ + let mut justified = false; + // These logs are returned from `top_level_transfers_unordered` and we don't require any + // ordering of them + for log in erc20_transfer_logs[&token].get(&transaction_hash).unwrap_or(&vec![]) { + let log_index = log_index(log)?; - /* - For each transaction, it may have a top-level ERC20 transfer. That top-level transfer won't - be the transfer caused by the call to `inInstruction`, so we shouldn't consider it - justification for this `InInstruction` event. - - Fetch all top-level transfers here so we can ignore them. 
- */ - let mut erc20_top_level_transfers = FuturesUnordered::new(); - let mut transaction_transfer_logs = HashMap::new(); - for transaction in erc20_transactions { - // Filter to the logs for this specific transaction - let logs = erc20_transfer_logs - .values() - .flat_map(|logs_per_token| logs_per_token.iter()) - .filter_map(|log| { - let log_transaction_hash = log.transaction_hash.ok_or_else(|| { - TransportErrorKind::Custom( - "log didn't have its transaction hash set".to_string().into(), - ) - }); - match log_transaction_hash { - Ok(log_transaction_hash) => { - if log_transaction_hash == transaction { - Some(Ok(log)) - } else { - None - } + // Ensure we didn't already use this transfer to justify a distinct InInstruction + if justifying_erc20_transfer_logs.contains(&log_index) { + continue; + } + + // Check if this log is from the token we expected to be transferred + if log.address() != Address::from(in_instruction.coin) { + continue; + } + // Check if this is a transfer log + if log.topics().first() != Some(&Transfer::SIGNATURE_HASH) { + continue; + } + let Ok(transfer) = Transfer::decode_log(&log.inner.clone(), true) else { continue }; + // Check if this aligns with the InInstruction + if (transfer.from == in_instruction.from) && + (transfer.to == self.address) && + (transfer.value == in_instruction.amount) + { + justifying_erc20_transfer_logs.insert(log_index); + justified = true; + break; } - Err(e) => Some(Err(e)), } - }) - .collect::, _>>()?; - - // Find the top-level transfer - erc20_top_level_transfers.push(Erc20::top_level_transfer( - &self.provider, - transaction, - logs.clone(), - )); - // Keep the transaction-indexed logs for the actual justifying - transaction_transfer_logs.insert(transaction, logs); - } - - /* - In order to prevent a single transfer from being used to justify multiple distinct - InInstructions, we insert the transfer's log index into this HashSet. - */ - let mut already_used_to_justify = HashSet::new(); - - // Collect the top-level transfers - while let Some(erc20_top_level_transfer) = erc20_top_level_transfers.next().await { - let erc20_top_level_transfer = erc20_top_level_transfer?; - // If this transaction had a top-level transfer... 
- if let Some(erc20_top_level_transfer) = erc20_top_level_transfer { - // Mark this log index as used so it isn't used again - already_used_to_justify.insert(erc20_top_level_transfer.id.index_within_block); - } - } - - // Now, for each ERC20 InInstruction, find a justifying transfer log - for (transaction_hash, in_instruction) in erc20_in_instructions { - let mut justified = false; - for log in &transaction_transfer_logs[&transaction_hash] { - let log_index = log.log_index.ok_or_else(|| { - TransportErrorKind::Custom( - "log in transaction receipt didn't have its log index set".to_string().into(), - ) - })?; - - // Ensure we didn't already use this transfer to check a distinct InInstruction event - if already_used_to_justify.contains(&log_index) { - continue; + if !justified { + // This is an exploit, a non-conforming ERC20, or an invalid connection + Err(TransportErrorKind::Custom( + "ERC20 InInstruction with no matching transfer log".to_string().into(), + ))?; + } } - - // Check if this log is from the token we expected to be transferred - if log.address() != Address::from(in_instruction.coin) { - continue; - } - // Check if this is a transfer log - if log.topics().first() != Some(&Transfer::SIGNATURE_HASH) { - continue; - } - let Ok(transfer) = Transfer::decode_log(&log.inner.clone(), true) else { continue }; - // Check if this aligns with the InInstruction - if (transfer.from == in_instruction.from) && - (transfer.to == self.address) && - (transfer.value == in_instruction.amount) - { - already_used_to_justify.insert(log_index); - justified = true; - break; - } - } - if !justified { - // This is an exploit, a non-conforming ERC20, or an invalid connection - Err(TransportErrorKind::Custom( - "ERC20 InInstruction with no matching transfer log".to_string().into(), - ))?; } in_instructions.push(in_instruction); } @@ -666,7 +623,7 @@ impl Router { Ok(in_instructions) } - /// Fetch the executed actions from this block. + /// Fetch the executed actions for the specified range of blocks. 
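The matching rule implemented above is one-to-one: each ERC20 `InInstruction` event must be backed by its own `Transfer` log with an agreeing (from, to, value), and a log consumed for one event can't justify another. A simplified sketch of the rule, with logs reduced to `(index, from, value)` tuples for clarity (the real code matches on full log contents):

use std::collections::HashSet;

fn all_justified(events: &[([u8; 20], u128)], transfers: &[(u64, [u8; 20], u128)]) -> bool {
  // Mirrors `justifying_erc20_transfer_logs`: a transfer may justify at most one event
  let mut used = HashSet::new();
  for (from, amount) in events {
    let mut justified = false;
    for (index, transfer_from, value) in transfers {
      // Ensure this transfer wasn't already used to justify a distinct event
      if used.contains(index) {
        continue;
      }
      if (transfer_from == from) && (value == amount) {
        used.insert(*index);
        justified = true;
        break;
      }
    }
    if !justified {
      return false;
    }
  }
  true
}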
pub async fn executed( &self, from_block: u64, diff --git a/processor/ethereum/router/src/tests/erc20.rs b/processor/ethereum/router/src/tests/erc20.rs index 3dfff600..e107fbb1 100644 --- a/processor/ethereum/router/src/tests/erc20.rs +++ b/processor/ethereum/router/src/tests/erc20.rs @@ -1,13 +1,11 @@ -use alloy_core::primitives::{hex, Address, U256, Bytes, TxKind, PrimitiveSignature}; +use alloy_core::primitives::{hex, Address, U256, Bytes, TxKind}; use alloy_sol_types::{SolValue, SolCall}; -use alloy_consensus::{TxLegacy, SignableTransaction, Signed}; +use alloy_consensus::TxLegacy; use alloy_rpc_types_eth::{TransactionInput, TransactionRequest}; use alloy_provider::Provider; -use ethereum_primitives::keccak256; - use crate::tests::Test; #[rustfmt::skip] diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 6e359987..9e7909f0 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -5,13 +5,12 @@ use rand_core::{RngCore, OsRng}; use group::ff::Field; use k256::{Scalar, ProjectivePoint}; -use alloy_core::primitives::{Address, U256, TxKind}; -use alloy_sol_types::SolCall; +use alloy_core::primitives::{Address, U256}; +use alloy_sol_types::{SolCall, SolEvent}; -use alloy_consensus::TxLegacy; +use alloy_consensus::{TxLegacy, Signed}; -#[rustfmt::skip] -use alloy_rpc_types_eth::{BlockNumberOrTag, TransactionInput, TransactionRequest, TransactionReceipt}; +use alloy_rpc_types_eth::{BlockNumberOrTag, TransactionInput, TransactionRequest}; use alloy_simple_request_transport::SimpleRequest; use alloy_rpc_client::ClientBuilder; use alloy_provider::{Provider, RootProvider}; @@ -262,6 +261,56 @@ impl Test { (coin, amount, shorthand, tx) } + async fn publish_in_instruction_tx( + &self, + tx: Signed, + coin: Coin, + amount: U256, + shorthand: &Shorthand, + ) { + let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; + assert!(receipt.status()); + + let block = receipt.block_number.unwrap(); + + if matches!(coin, Coin::Erc20(_)) { + // If we don't whitelist this token, we shouldn't be yielded an InInstruction + let in_instructions = + self.router.in_instructions_unordered(block, block, &HashSet::new()).await.unwrap(); + assert!(in_instructions.is_empty()); + } + + let in_instructions = self + .router + .in_instructions_unordered( + block, + block, + &if let Coin::Erc20(token) = coin { HashSet::from([token]) } else { HashSet::new() }, + ) + .await + .unwrap(); + assert_eq!(in_instructions.len(), 1); + + let in_instruction_log_index = receipt.inner.logs().iter().find_map(|log| { + (log.topics().first() == Some(&crate::InInstructionEvent::SIGNATURE_HASH)) + .then(|| log.log_index.unwrap()) + }); + // If this isn't an InInstruction event, it'll be a top-level transfer event + let log_index = in_instruction_log_index.unwrap_or(0); + + assert_eq!( + in_instructions[0], + InInstruction { + id: LogIndex { block_hash: *receipt.block_hash.unwrap(), index_within_block: log_index }, + transaction_hash: **tx.hash(), + from: tx.recover_signer().unwrap(), + coin, + amount, + data: shorthand.encode(), + } + ); + } + fn escape_hatch_tx(&self, escape_to: Address) -> TxLegacy { let msg = Router::escape_hatch_message(self.chain_id, self.state.next_nonce, escape_to); let sig = sign(self.state.key.unwrap(), &msg); @@ -344,31 +393,11 @@ async fn test_eth_in_instruction() { } let tx = ethereum_primitives::deterministically_sign(tx); - let receipt = 
ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; - assert!(receipt.status()); - - let block = receipt.block_number.unwrap(); - let in_instructions = - test.router.in_instructions_unordered(block, block, &HashSet::new()).await.unwrap(); - assert_eq!(in_instructions.len(), 1); - assert_eq!( - in_instructions[0], - InInstruction { - id: LogIndex { - block_hash: *receipt.block_hash.unwrap(), - index_within_block: receipt.inner.logs()[0].log_index.unwrap(), - }, - transaction_hash: **tx.hash(), - from: tx.recover_signer().unwrap(), - coin, - amount, - data: shorthand.encode(), - } - ); + test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await; } #[tokio::test] -async fn test_erc20_in_instruction() { +async fn test_erc20_router_in_instruction() { let mut test = Test::new().await; test.confirm_next_serai_key().await; @@ -404,39 +433,28 @@ async fn test_erc20_in_instruction() { erc20.mint(&test, signer, amount).await; erc20.approve(&test, signer, test.router.address(), amount).await; } - let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; - assert!(receipt.status()); - let block = receipt.block_number.unwrap(); + test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await; +} - // If we don't whitelist this token, we shouldn't be yielded an InInstruction - { - let in_instructions = - test.router.in_instructions_unordered(block, block, &HashSet::new()).await.unwrap(); - assert!(in_instructions.is_empty()); - } +#[tokio::test] +async fn test_erc20_top_level_transfer_in_instruction() { + let mut test = Test::new().await; + test.confirm_next_serai_key().await; - let in_instructions = test - .router - .in_instructions_unordered(block, block, &HashSet::from([coin.into()])) - .await - .unwrap(); - assert_eq!(in_instructions.len(), 1); - assert_eq!( - in_instructions[0], - InInstruction { - id: LogIndex { - block_hash: *receipt.block_hash.unwrap(), - // First is the Transfer log, then the InInstruction log - index_within_block: receipt.inner.logs()[1].log_index.unwrap(), - }, - transaction_hash: **tx.hash(), - from: tx.recover_signer().unwrap(), - coin, - amount, - data: shorthand.encode(), - } - ); + let erc20 = Erc20::deploy(&test).await; + + let coin = Coin::Erc20(erc20.address()); + let amount = U256::from(1); + let shorthand = Test::in_instruction(); + + let mut tx = test.router.in_instruction(coin, amount, &shorthand); + tx.gas_price = 100_000_000_000u128; + tx.gas_limit = 1_000_000; + + let tx = ethereum_primitives::deterministically_sign(tx); + erc20.mint(&test, tx.recover_signer().unwrap(), amount).await; + test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await; } #[tokio::test] diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs index 480c9440..9305fd91 100644 --- a/processor/ethereum/src/rpc.rs +++ b/processor/ethereum/src/rpc.rs @@ -16,9 +16,7 @@ use serai_db::Db; use scanner::ScannerFeed; use ethereum_schnorr::PublicKey; -use ethereum_erc20::{TopLevelTransfer, Erc20}; -#[rustfmt::skip] -use ethereum_router::{Coin as EthereumCoin, InInstruction as EthereumInInstruction, Executed, Router}; +use ethereum_router::{InInstruction as EthereumInInstruction, Executed, Router}; use crate::{ TOKENS, ETHER_DUST, DAI_DUST, InitialSeraiKey, @@ -158,31 +156,13 @@ impl ScannerFeed for Rpc { }; async fn sync_block( - provider: Arc>, router: Router, block: Header, ) -> Result<(Vec, Vec), RpcError> { - let mut instructions = router + let instructions = router .in_instructions_unordered(block.number, 
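A note on the top-level transfer test above: for ERC20s, `in_instruction` builds a plain `transfer` call with the Shorthand-encoded instruction appended after the ABI-encoded arguments, which Solidity's decoder tolerates as trailing bytes. A sketch of such calldata, assuming a `transferCall` binding like the `IERC20` one this workspace decodes:

fn transfer_with_instruction(to: Address, amount: U256, instruction: &[u8]) -> Vec<u8> {
  // The standard 4-byte selector plus the ABI-encoded (to, amount)
  let mut calldata = abi::IERC20::transferCall::new((to, amount)).abi_encode();
  // Trailing bytes don't affect the transfer's execution; the processor later recovers
  // them as the InInstruction's data
  calldata.extend_from_slice(instruction);
  calldata
}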
block.number, &HashSet::from(TOKENS)) .await?; - for token in TOKENS { - for TopLevelTransfer { id, transaction_hash, from, amount, data } in - Erc20::new(provider.clone(), token) - .top_level_transfers_unordered(block.number, block.number, router.address()) - .await? - { - instructions.push(EthereumInInstruction { - id, - transaction_hash, - from, - coin: EthereumCoin::Erc20(token), - amount, - data, - }); - } - } - let executed = router.executed(block.number, block.number).await?; Ok((instructions, executed)) @@ -214,7 +194,7 @@ impl ScannerFeed for Rpc { to_check = *to_check_block.parent_hash; // Spawn a task to sync this block - join_set.spawn(sync_block(self.provider.clone(), router.clone(), to_check_block)); + join_set.spawn(sync_block(router.clone(), to_check_block)); } let mut instructions = vec![]; From 164fe9a14f8631e8b6e6d1e0b5fbba0f67e631f8 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 24 Jan 2025 06:41:24 -0500 Subject: [PATCH 340/368] Test Router's InvalidSeraiKey error --- processor/ethereum/router/src/tests/mod.rs | 61 +++++++++++++++++++--- 1 file changed, 55 insertions(+), 6 deletions(-) diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 9e7909f0..f66a70f8 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -222,10 +222,16 @@ impl Test { let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; assert!(receipt.status()); - assert_eq!( - CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used), - Router::UPDATE_SERAI_KEY_GAS, - ); + if self.state.next_key.is_none() { + assert_eq!( + CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used), + Router::UPDATE_SERAI_KEY_GAS, + ); + } else { + assert!( + CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used) < Router::UPDATE_SERAI_KEY_GAS + ); + } { let block = receipt.block_number.unwrap(); @@ -371,10 +377,54 @@ async fn test_update_serai_key() { test.confirm_next_serai_key().await; test.update_serai_key().await; + // We should be able to update while an update is pending as well (in case the new key never + // confirms) + test.update_serai_key().await; + + // But we shouldn't be able to update the key to None + { + let msg = crate::abi::updateSeraiKeyCall::new(( + crate::abi::Signature { + c: test.chain_id.into(), + s: U256::try_from(test.state.next_nonce).unwrap().into(), + }, + [0; 32].into(), + )) + .abi_encode(); + let sig = sign(test.state.key.unwrap(), &msg); + + assert!(matches!( + test + .call_and_decode_err(TxLegacy { + input: crate::abi::updateSeraiKeyCall::new(( + crate::abi::Signature::from(&sig), + [0; 32].into(), + )) + .abi_encode() + .into(), + ..Default::default() + }) + .await, + IRouterErrors::InvalidSeraiKey(IRouter::InvalidSeraiKey {}) + )); + } + // Once we update to a new key, we should, of course, be able to continue to rotate keys test.confirm_next_serai_key().await; } +#[tokio::test] +async fn test_no_in_instruction_before_key() { + let test = Test::new().await; + + // We shouldn't be able to publish `InInstruction`s before publishing a key + let (_coin, _amount, _shorthand, tx) = test.eth_in_instruction_tx(); + assert!(matches!( + test.call_and_decode_err(tx).await, + IRouterErrors::InvalidSeraiKey(IRouter::InvalidSeraiKey {}) + )); +} + #[tokio::test] async fn test_eth_in_instruction() { let mut test = Test::new().await; @@ -589,9 +639,8 @@ async fn test_escape_hatch() { } } -/* +/* TODO 
event Batch(uint256 indexed nonce, bytes32 indexed messageHash, bytes results); - error InvalidSeraiKey(); error InvalidSignature(); error Reentered(); error EscapeFailed(); From cefc542744cb8cd0e671b76729b2721db76a30d8 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 24 Jan 2025 06:58:54 -0500 Subject: [PATCH 341/368] Test SeraiKeyWasNone --- .../ethereum/router/contracts/IRouter.sol | 2 + .../ethereum/router/contracts/Router.sol | 4 +- processor/ethereum/router/src/tests/mod.rs | 40 ++++++++++++++++++- 3 files changed, 42 insertions(+), 4 deletions(-) diff --git a/processor/ethereum/router/contracts/IRouter.sol b/processor/ethereum/router/contracts/IRouter.sol index 57994f8d..c22bef3c 100644 --- a/processor/ethereum/router/contracts/IRouter.sol +++ b/processor/ethereum/router/contracts/IRouter.sol @@ -45,6 +45,8 @@ interface IRouterWithoutCollisions { /// @param amount The amount which escaped event Escaped(address indexed coin, uint256 amount); + /// @notice The Serai key verifying the signature wasn't set + error SeraiKeyWasNone(); /// @notice The key for Serai was invalid /// @dev This is incomplete and not always guaranteed to be thrown upon an invalid key error InvalidSeraiKey(); diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index e0bc77bb..c3f6befa 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -137,7 +137,7 @@ contract Router is IRouterWithoutCollisions { The Schnorr contract should already reject this public key yet it's best to be explicit. */ if (key == bytes32(0)) { - revert InvalidSignature(); + revert SeraiKeyWasNone(); } message = msg.data; @@ -266,7 +266,7 @@ contract Router is IRouterWithoutCollisions { function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable { // Check there is an active key if (_seraiKey == bytes32(0)) { - revert InvalidSeraiKey(); + revert SeraiKeyWasNone(); } // Don't allow further InInstructions once the escape hatch has been invoked diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index f66a70f8..da79e956 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -367,10 +367,46 @@ async fn test_constructor() { #[tokio::test] async fn test_confirm_next_serai_key() { let mut test = Test::new().await; - // TODO: Check all calls fail at this time, including inInstruction test.confirm_next_serai_key().await; } +#[tokio::test] +async fn test_no_serai_key() { + // Before we confirm a key, any operations requiring a signature shouldn't work + { + let mut test = Test::new().await; + + // Corrupt the test's state so we can obtain signed TXs + test.state.key = Some(test_key()); + + assert!(matches!( + test.call_and_decode_err(test.update_serai_key_tx().1).await, + IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) + )); + /* TODO + assert!(matches!( + test.call_and_decode_err(test.execute_tx()).await, + IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) + )); + */ + assert!(matches!( + test.call_and_decode_err(test.escape_hatch_tx(Address::ZERO)).await, + IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) + )); + } + + // And if there's no key to confirm, any operations requiring a signature shouldn't work + { + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + test.state.next_key = Some(test_key()); + assert!(matches!( + 
test.call_and_decode_err(test.confirm_next_serai_key_tx()).await, + IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) + )); + } +} + #[tokio::test] async fn test_update_serai_key() { let mut test = Test::new().await; @@ -421,7 +457,7 @@ async fn test_no_in_instruction_before_key() { let (_coin, _amount, _shorthand, tx) = test.eth_in_instruction_tx(); assert!(matches!( test.call_and_decode_err(tx).await, - IRouterErrors::InvalidSeraiKey(IRouter::InvalidSeraiKey {}) + IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) )); } From 977dcad86db0afda5dc48e6aeb36b159725bcd13 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 24 Jan 2025 07:22:43 -0500 Subject: [PATCH 342/368] Test the Router rejects invalid signatures --- processor/ethereum/router/src/tests/mod.rs | 42 +++++++++++++++++++++- 1 file changed, 41 insertions(+), 1 deletion(-) diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index da79e956..4ff36a83 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -407,6 +407,47 @@ async fn test_no_serai_key() { } } +#[tokio::test] +async fn test_invalid_signature() { + let mut test = Test::new().await; + + { + let mut tx = test.confirm_next_serai_key_tx(); + // Cut it down to the function signature + tx.input = tx.input.as_ref()[.. 4].to_vec().into(); + assert!(matches!( + test.call_and_decode_err(tx).await, + IRouterErrors::InvalidSignature(IRouter::InvalidSignature {}) + )); + } + + { + let mut tx = test.confirm_next_serai_key_tx(); + // Mutate the signature + let mut input = Vec::::from(tx.input); + *input.last_mut().unwrap() = input.last().unwrap().wrapping_add(1); + tx.input = input.into(); + assert!(matches!( + test.call_and_decode_err(tx).await, + IRouterErrors::InvalidSignature(IRouter::InvalidSignature {}) + )); + } + + test.confirm_next_serai_key().await; + + { + let mut tx = test.update_serai_key_tx().1; + // Mutate the message + let mut input = Vec::::from(tx.input); + *input.last_mut().unwrap() = input.last().unwrap().wrapping_add(1); + tx.input = input.into(); + assert!(matches!( + test.call_and_decode_err(tx).await, + IRouterErrors::InvalidSignature(IRouter::InvalidSignature {}) + )); + } +} + #[tokio::test] async fn test_update_serai_key() { let mut test = Test::new().await; @@ -677,7 +718,6 @@ async fn test_escape_hatch() { /* TODO event Batch(uint256 indexed nonce, bytes32 indexed messageHash, bytes results); - error InvalidSignature(); error Reentered(); error EscapeFailed(); function executeArbitraryCode(bytes memory code) external payable; From 604a4b2442cecf27eb264fb0cd8af130108b7cf6 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 24 Jan 2025 07:33:36 -0500 Subject: [PATCH 343/368] Add execute_tx to fill in missing test cases reliant on it --- processor/ethereum/router/src/tests/mod.rs | 32 ++++++++++++++++------ 1 file changed, 24 insertions(+), 8 deletions(-) diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 4ff36a83..a94e4cbb 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -19,6 +19,7 @@ use alloy_node_bindings::{Anvil, AnvilInstance}; use scale::Encode; use serai_client::{ + networks::ethereum::Address as SeraiEthereumAddress, primitives::SeraiAddress, in_instructions::primitives::{ InInstruction as SeraiInInstruction, RefundableInInstruction, Shorthand, @@ -317,6 +318,24 @@ impl Test { ); } + fn execute_tx( + &self, + coin: 
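The mutation trick in these tests is worth spelling out: incrementing the final byte of the calldata corrupts the last ABI-encoded word, which is either the signature's `s` or the signed message depending on the call's argument order, and either way must yield `InvalidSignature`. A sketch of the helper these tests inline:

fn corrupt_last_byte(mut tx: TxLegacy) -> TxLegacy {
  let mut input = Vec::<u8>::from(tx.input);
  // `wrapping_add` guarantees the byte changes without any chance of an overflow panic
  *input.last_mut().unwrap() = input.last().unwrap().wrapping_add(1);
  tx.input = input.into();
  tx
}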
Coin, + fee: U256, + out_instructions: &[(SeraiEthereumAddress, U256)], + ) -> TxLegacy { + let out_instructions = OutInstructions::from(out_instructions); + let msg = Router::execute_message( + self.chain_id, + self.state.next_nonce, + coin, + fee, + out_instructions.clone(), + ); + let sig = sign(self.state.key.unwrap(), &msg); + self.router.execute(coin, fee, out_instructions, &sig) + } + fn escape_hatch_tx(&self, escape_to: Address) -> TxLegacy { let msg = Router::escape_hatch_message(self.chain_id, self.state.next_nonce, escape_to); let sig = sign(self.state.key.unwrap(), &msg); @@ -383,12 +402,10 @@ async fn test_no_serai_key() { test.call_and_decode_err(test.update_serai_key_tx().1).await, IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) )); - /* TODO assert!(matches!( - test.call_and_decode_err(test.execute_tx()).await, + test.call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), &[])).await, IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) )); - */ assert!(matches!( test.call_and_decode_err(test.escape_hatch_tx(Address::ZERO)).await, IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) @@ -662,7 +679,10 @@ async fn test_escape_hatch() { test.call_and_decode_err(test.eth_in_instruction_tx().3).await, IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) )); - // TODO execute + assert!(matches!( + test.call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), &[])).await, + IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) + )); // We reject further attempts to update the escape hatch to prevent the last key from being // able to switch from the honest escape hatch to siphoning via a malicious escape hatch (such // as after the validators represented unstake) @@ -721,10 +741,6 @@ async fn test_escape_hatch() { error Reentered(); error EscapeFailed(); function executeArbitraryCode(bytes memory code) external payable; - struct Signature { - bytes32 c; - bytes32 s; - } enum DestinationType { Address, Code From 29bb5e21ab64a46543ee58abba8c6fd814baee26 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 24 Jan 2025 07:44:47 -0500 Subject: [PATCH 344/368] Take advantage of RangeInclusive for specifying filters' blocks --- processor/ethereum/erc20/src/lib.rs | 10 +++--- processor/ethereum/router/src/lib.rs | 38 ++++++++++------------ processor/ethereum/router/src/tests/mod.rs | 17 +++++----- processor/ethereum/src/rpc.rs | 4 +-- 4 files changed, 32 insertions(+), 37 deletions(-) diff --git a/processor/ethereum/erc20/src/lib.rs b/processor/ethereum/erc20/src/lib.rs index 4d0cb0ff..20e086aa 100644 --- a/processor/ethereum/erc20/src/lib.rs +++ b/processor/ethereum/erc20/src/lib.rs @@ -2,6 +2,7 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] +use core::ops::RangeInclusive; use std::collections::HashMap; use alloy_core::primitives::{Address, U256}; @@ -76,8 +77,8 @@ pub struct TopLevelTransfers { pub struct Erc20; impl Erc20 { /// The filter for transfer logs of the specified ERC20, to the specified recipient. - fn transfer_filter(from_block: u64, to_block: u64, erc20: Address, to: Address) -> Filter { - let filter = Filter::new().from_block(from_block).to_block(to_block); + fn transfer_filter(blocks: RangeInclusive, erc20: Address, to: Address) -> Filter { + let filter = Filter::new().select(blocks); filter.address(erc20).event_signature(Transfer::SIGNATURE_HASH).topic2(to.into_word()) } @@ -180,14 +181,13 @@ impl Erc20 { /// The `transfers` in the result are unordered. The `logs` are sorted by index. 
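`execute_tx` mirrors the other signed-call helpers: the message commits to the chain ID, the next nonce, the coin, the fee, and the `OutInstruction`s, and (per the contract) the fee is paid to the relaying `msg.sender` in the batch's coin. A sketch of an empty batch, assuming the module's `sign` test utility:

fn empty_execute_tx(test: &Test, coin: Coin, fee: U256) -> TxLegacy {
  let outs = OutInstructions::from(&[] as &[(SeraiEthereumAddress, U256)]);
  let msg = Router::execute_message(test.chain_id, test.state.next_nonce, coin, fee, outs.clone());
  // Signed under the currently confirmed Serai key
  let sig = sign(test.state.key.unwrap(), &msg);
  test.router.execute(coin, fee, outs, &sig)
}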
pub async fn top_level_transfers_unordered( provider: &RootProvider, - from_block: u64, - to_block: u64, + blocks: RangeInclusive, erc20: Address, to: Address, ) -> Result> { let mut logs = { // Get all transfers within these blocks - let logs = provider.get_logs(&Self::transfer_filter(from_block, to_block, erc20, to)).await?; + let logs = provider.get_logs(&Self::transfer_filter(blocks, erc20, to)).await?; // The logs, indexed by their transactions let mut transaction_logs = HashMap::new(); diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index de9531f7..42495b13 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -2,6 +2,7 @@ #![doc = include_str!("../README.md")] #![deny(missing_docs)] +use core::ops::RangeInclusive; use std::{ sync::Arc, collections::{HashSet, HashMap}, @@ -459,13 +460,13 @@ impl Router { /// This is not guaranteed to return them in any order. pub async fn in_instructions_unordered( &self, - from_block: u64, - to_block: u64, + blocks: RangeInclusive, allowed_erc20s: &HashSet
, ) -> Result, RpcError> { // The InInstruction events for this block let in_instruction_logs = { - let filter = Filter::new().from_block(from_block).to_block(to_block).address(self.address); + // https://github.com/rust-lang/rust/issues/27186 + let filter = Filter::new().select(blocks.clone()).address(self.address); let filter = filter.event_signature(InInstructionEvent::SIGNATURE_HASH); self.provider.get_logs(&filter).await? }; @@ -478,18 +479,15 @@ impl Router { let erc20_transfer_logs = { let mut transfers = FuturesUnordered::new(); for erc20 in allowed_erc20s { - transfers.push(async move { - ( - erc20, - Erc20::top_level_transfers_unordered( - &self.provider, - from_block, - to_block, - *erc20, - self.address, - ) - .await, - ) + transfers.push({ + // https://github.com/rust-lang/rust/issues/27186 + let blocks: RangeInclusive = blocks.clone(); + async move { + let transfers = + Erc20::top_level_transfers_unordered(&self.provider, blocks, *erc20, self.address) + .await; + (erc20, transfers) + } }); } @@ -626,8 +624,7 @@ impl Router { /// Fetch the executed actions for the specified range of blocks. pub async fn executed( &self, - from_block: u64, - to_block: u64, + blocks: RangeInclusive, ) -> Result, RpcError> { fn decode(log: &Log) -> Result> { Ok( @@ -643,7 +640,7 @@ impl Router { ) } - let filter = Filter::new().from_block(from_block).to_block(to_block).address(self.address); + let filter = Filter::new().select(blocks).address(self.address); let mut logs = self.provider.get_logs(&filter).await?; logs.sort_by_key(|log| (log.block_number, log.log_index)); @@ -707,10 +704,9 @@ impl Router { /// Fetch the `Escape`s from the smart contract through the escape hatch. pub async fn escapes( &self, - from_block: u64, - to_block: u64, + blocks: RangeInclusive, ) -> Result, RpcError> { - let filter = Filter::new().from_block(from_block).to_block(to_block).address(self.address); + let filter = Filter::new().select(blocks).address(self.address); let mut logs = self.provider.get_logs(&filter.event_signature(EscapedEvent::SIGNATURE_HASH)).await?; logs.sort_by_key(|log| (log.block_number, log.log_index)); diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index a94e4cbb..5b9748b3 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -144,7 +144,7 @@ impl Test { // Confirm nonce 0 was used as such { let block = receipt.block_number.unwrap(); - let executed = router.executed(block, block).await.unwrap(); + let executed = router.executed(block ..= block).await.unwrap(); assert_eq!(executed.len(), 1); assert_eq!(executed[0], Executed::NextSeraiKeySet { nonce: 0, key: public_key.eth_repr() }); } @@ -191,7 +191,7 @@ impl Test { { let block = receipt.block_number.unwrap(); - let executed = self.router.executed(block, block).await.unwrap(); + let executed = self.router.executed(block ..= block).await.unwrap(); assert_eq!(executed.len(), 1); assert_eq!( executed[0], @@ -236,7 +236,7 @@ impl Test { { let block = receipt.block_number.unwrap(); - let executed = self.router.executed(block, block).await.unwrap(); + let executed = self.router.executed(block ..= block).await.unwrap(); assert_eq!(executed.len(), 1); assert_eq!( executed[0], @@ -283,15 +283,14 @@ impl Test { if matches!(coin, Coin::Erc20(_)) { // If we don't whitelist this token, we shouldn't be yielded an InInstruction let in_instructions = - self.router.in_instructions_unordered(block, block, &HashSet::new()).await.unwrap(); + 
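The `RangeInclusive` refactor above pushes the single-block case into the type: callers write `block ..= block` instead of passing the same number twice. A sketch of the resulting filter construction (`select` accepts the inclusive range directly, as the hunks above rely on):

use core::ops::RangeInclusive;

fn logs_filter(blocks: RangeInclusive<u64>, address: Address) -> Filter {
  // A single block is simply expressed as `block ..= block`
  Filter::new().select(blocks).address(address)
}

The `.clone()`s annotated with the rust-lang/rust#27186 link exist because `RangeInclusive<u64>` isn't `Copy`, so each use of the range in a loop body needs its own copy.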
self.router.in_instructions_unordered(block ..= block, &HashSet::new()).await.unwrap(); assert!(in_instructions.is_empty()); } let in_instructions = self .router .in_instructions_unordered( - block, - block, + block ..= block, &if let Coin::Erc20(token) = coin { HashSet::from([token]) } else { HashSet::new() }, ) .await @@ -359,7 +358,7 @@ impl Test { { let block = receipt.block_number.unwrap(); - let executed = self.router.executed(block, block).await.unwrap(); + let executed = self.router.executed(block ..= block).await.unwrap(); assert_eq!(executed.len(), 1); assert_eq!(executed[0], Executed::EscapeHatch { nonce: self.state.next_nonce, escape_to }); } @@ -707,7 +706,7 @@ async fn test_escape_hatch() { let block = receipt.block_number.unwrap(); assert_eq!( - test.router.escapes(block, block).await.unwrap(), + test.router.escapes(block ..= block).await.unwrap(), vec![Escape { coin: Coin::Ether, amount: U256::from(1) }], ); @@ -730,7 +729,7 @@ async fn test_escape_hatch() { assert!(receipt.status()); let block = receipt.block_number.unwrap(); - assert_eq!(test.router.escapes(block, block).await.unwrap(), vec![Escape { coin, amount }],); + assert_eq!(test.router.escapes(block ..= block).await.unwrap(), vec![Escape { coin, amount }],); assert_eq!(erc20.balance_of(&test, test.router.address()).await, U256::from(0)); assert_eq!(erc20.balance_of(&test, test.state.escaped_to.unwrap()).await, amount); } diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs index 9305fd91..b5b50cfa 100644 --- a/processor/ethereum/src/rpc.rs +++ b/processor/ethereum/src/rpc.rs @@ -160,10 +160,10 @@ impl ScannerFeed for Rpc { block: Header, ) -> Result<(Vec, Vec), RpcError> { let instructions = router - .in_instructions_unordered(block.number, block.number, &HashSet::from(TOKENS)) + .in_instructions_unordered(block.number ..= block.number, &HashSet::from(TOKENS)) .await?; - let executed = router.executed(block.number, block.number).await?; + let executed = router.executed(block.number ..= block.number).await?; Ok((instructions, executed)) } From ed599c8ab580170bc7a2dfcde12cfc1c743029aa Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 24 Jan 2025 17:03:48 -0500 Subject: [PATCH 345/368] Have the Batch event encode the amount of results Necessary to distinguish a bitvec with 1 results from a bitvec with 7 results. --- processor/ethereum/router/contracts/IRouter.sol | 15 +++++++++------ processor/ethereum/router/contracts/Router.sol | 9 ++++----- 2 files changed, 13 insertions(+), 11 deletions(-) diff --git a/processor/ethereum/router/contracts/IRouter.sol b/processor/ethereum/router/contracts/IRouter.sol index c22bef3c..772a04f7 100644 --- a/processor/ethereum/router/contracts/IRouter.sol +++ b/processor/ethereum/router/contracts/IRouter.sol @@ -27,14 +27,17 @@ interface IRouterWithoutCollisions { /// @notice Emitted when a batch of `OutInstruction`s occurs /// @param nonce The nonce consumed to execute this batch of transactions /// @param messageHash The hash of the message signed for the executed batch + /// @param resultsLength The length of the results bitvec (represented as bytes) /** - * @param results The result of each `OutInstruction` executed. This is a bitmask with true - * representing success and false representing failure. The high bit (1 << 7) in the first byte - * is used for the first `OutInstruction`, before the next bit, and so on, before the next byte. 
- * An `OutInstruction` is considered as having succeeded if the call transferring ETH doesn't - * fail, the ERC20 transfer doesn't fail, and any executed code doesn't revert. + * @param results The result of each `OutInstruction` executed. This is a bitvec with true + * representing success and false representing failure. The low bit in the first byte is used + * for the first `OutInstruction`, before the next bit, and so on, before the next byte. An + * `OutInstruction` is considered as having succeeded if the call transferring ETH doesn't fail, + * the ERC20 transfer doesn't fail, and any executed code doesn't revert. */ - event Batch(uint256 indexed nonce, bytes32 indexed messageHash, bytes results); + event Batch( + uint256 indexed nonce, bytes32 indexed messageHash, uint256 resultsLength, bytes results + ); /// @notice Emitted when `escapeHatch` is invoked /// @param escapeTo The address to escape to diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index c3f6befa..b6ffc3c1 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -376,14 +376,13 @@ contract Router is IRouterWithoutCollisions { */ function transferOut(address to, address coin, uint256 amount) private returns (bool success) { if (coin == address(0)) { - // Enough gas to service the transfer and a minimal amount of logic - uint256 _gas = 5_000; // This uses assembly to prevent return bombs // slither-disable-next-line assembly assembly { success := call( - _gas, + // explicit gas + 0, to, amount, // calldata @@ -512,7 +511,7 @@ contract Router is IRouterWithoutCollisions { } if (success) { - results[i / 8] |= bytes1(uint8(1 << (7 - (i % 8)))); + results[i / 8] |= bytes1(uint8(1 << (i % 8))); } } @@ -521,7 +520,7 @@ contract Router is IRouterWithoutCollisions { This is an effect after interactions yet we have a reentrancy guard making this safe. 
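Decoding the results bitvec follows directly from the layout above: bit `i % 8` of byte `i / 8`, low bit first, with the results length (emitted as `outs.length`) distinguishing, say, one result from seven within a single byte. A minimal decoder sketch:

fn decode_batch_results(results_length: usize, results: &[u8]) -> Vec<bool> {
  // One byte holds eight results, so the bitvec's size is the length rounded up to full bytes
  assert_eq!(results.len(), results_length.div_ceil(8));
  // Mirrors `results[i / 8] |= bytes1(uint8(1 << (i % 8)))` on the Solidity side
  (0 .. results_length).map(|i| ((results[i / 8] >> (i % 8)) & 1) == 1).collect()
}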
*/ - emit Batch(nonceUsed, message, results); + emit Batch(nonceUsed, message, outs.length, results); // Transfer the fee to the relayer transferOut(msg.sender, coin, fee); From 3892fa30b7cfba2744c927f08e6fee6d1913f765 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 24 Jan 2025 17:13:36 -0500 Subject: [PATCH 346/368] Test an empty execute --- processor/ethereum/router/src/lib.rs | 193 ++++++++++++++---- processor/ethereum/router/src/tests/erc20.rs | 6 +- processor/ethereum/router/src/tests/mod.rs | 68 +++++- processor/ethereum/src/primitives/block.rs | 11 +- .../ethereum/src/primitives/transaction.rs | 2 +- substrate/client/src/networks/ethereum.rs | 4 +- 6 files changed, 235 insertions(+), 49 deletions(-) diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index 42495b13..c17526c3 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -68,6 +68,14 @@ use abi::{ #[cfg(test)] mod tests; +// As per Dencun, used for estimating gas for determining relayer fees +const NON_ZERO_BYTE_GAS_COST: u64 = 16; +const MEMORY_EXPANSION_COST: u64 = 3; // Does not model the quadratic cost +const COLD_COST: u64 = 2_600; +const WARM_COST: u64 = 100; +const POSITIVE_VALUE_COST: u64 = 9_000; +const EMPTY_ACCOUNT_COST: u64 = 25_000; + impl From<&Signature> for abi::Signature { fn from(signature: &Signature) -> Self { Self { @@ -134,35 +142,33 @@ pub struct InInstruction { pub data: Vec, } +impl From<&(SeraiAddress, U256)> for abi::OutInstruction { + fn from((address, amount): &(SeraiAddress, U256)) -> Self { + #[allow(non_snake_case)] + let (destinationType, destination) = match address { + SeraiAddress::Address(address) => { + // Per the documentation, `DestinationType::Address`'s value is an ABI-encoded address + (abi::DestinationType::Address, (Address::from(address)).abi_encode()) + } + SeraiAddress::Contract(contract) => ( + abi::DestinationType::Code, + (abi::CodeDestination { + gasLimit: contract.gas_limit(), + code: contract.code().to_vec().into(), + }) + .abi_encode(), + ), + }; + abi::OutInstruction { destinationType, destination: destination.into(), amount: *amount } + } +} + /// A list of `OutInstruction`s. #[derive(Clone)] pub struct OutInstructions(Vec); impl From<&[(SeraiAddress, U256)]> for OutInstructions { fn from(outs: &[(SeraiAddress, U256)]) -> Self { - Self( - outs - .iter() - .map(|(address, amount)| { - #[allow(non_snake_case)] - let (destinationType, destination) = match address { - SeraiAddress::Address(address) => { - // Per the documentation, `DestinationType::Address`'s value is an ABI-encoded - // address - (abi::DestinationType::Address, (Address::from(address)).abi_encode()) - } - SeraiAddress::Contract(contract) => ( - abi::DestinationType::Code, - (abi::CodeDestination { - gasLimit: contract.gas_limit(), - code: contract.code().to_vec().into(), - }) - .abi_encode(), - ), - }; - abi::OutInstruction { destinationType, destination: destination.into(), amount: *amount } - }) - .collect(), - ) + Self(outs.iter().map(Into::into).collect()) } } @@ -189,6 +195,8 @@ pub enum Executed { nonce: u64, /// The hash of the signed message for the Batch executed. message_hash: [u8; 32], + /// The results of the `OutInstruction`s executed. + results: Vec, }, /// The escape hatch was set. EscapeHatch { @@ -238,12 +246,15 @@ pub struct Router { address: Address, } impl Router { + // Gas allocated for ERC20 calls + const GAS_FOR_ERC20_CALL: u64 = 100_000; + /* The gas limits to use for transactions. 
- These are expected to be constant as a distributed group signs the transactions invoking these - calls. Having the gas be constant prevents needing to run a protocol to determine what gas to - use. + These are expected to be constant as a distributed group may sign the transactions invoking + these calls. Having the gas be constant prevents needing to run a protocol to determine what + gas to use. These gas limits may break if/when gas opcodes undergo repricing. In that case, this library is expected to be modified with these made parameters. The caller would then be expected to pass @@ -251,9 +262,18 @@ impl Router { */ const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 57_736; const UPDATE_SERAI_KEY_GAS: u64 = 60_045; - const EXECUTE_BASE_GAS: u64 = 48_000; + const EXECUTE_BASE_GAS: u64 = 51_131; const ESCAPE_HATCH_GAS: u64 = 61_238; + /* + The percentage to actually use as the gas limit, in case any opcodes are repriced or errors + occurred. + + Per prior commentary, this is just intended to be best-effort. If this is unnecessary, the gas + will be unspent. If this becomes necessary, it avoids needing an update. + */ + const GAS_REPRICING_BUFFER: u64 = 120; + fn code() -> Vec { const BYTECODE: &[u8] = { const BYTECODE_HEX: &[u8] = @@ -325,7 +345,7 @@ impl Router { TxLegacy { to: TxKind::Call(self.address), input: abi::confirmNextSeraiKeyCall::new((abi::Signature::from(sig),)).abi_encode().into(), - gas_limit: Self::CONFIRM_NEXT_SERAI_KEY_GAS * 120 / 100, + gas_limit: Self::CONFIRM_NEXT_SERAI_KEY_GAS * Self::GAS_REPRICING_BUFFER / 100, ..Default::default() } } @@ -351,7 +371,7 @@ impl Router { )) .abi_encode() .into(), - gas_limit: Self::UPDATE_SERAI_KEY_GAS * 120 / 100, + gas_limit: Self::UPDATE_SERAI_KEY_GAS * Self::GAS_REPRICING_BUFFER / 100, ..Default::default() } } @@ -404,18 +424,103 @@ impl Router { .abi_encode() } + /// The estimated gas cost for this OutInstruction. + /// + /// This is not guaranteed to be correct or even sufficient. It is a hint and a hint alone used + /// for determining relayer fees. 
+  fn execute_out_instruction_gas_estimate_internal(
+    coin: Coin,
+    instruction: &abi::OutInstruction,
+  ) -> u64 {
+    // The assigned cost for performing an additional iteration of the loop
+    const ITERATION_COST: u64 = 5_000;
+    // The additional cost for a `DestinationType.Code`, as an additional buffer for its complexity
+    const CODE_COST: u64 = 10_000;
+
+    let size = u64::try_from(instruction.abi_encoded_size()).unwrap();
+    let calldata_memory_cost =
+      (NON_ZERO_BYTE_GAS_COST * size) + (MEMORY_EXPANSION_COST * size.div_ceil(32));
+
+    ITERATION_COST +
+      (match coin {
+        Coin::Ether => match instruction.destinationType {
+          // We assume we're transferring a positive value to a cold, empty account
+          abi::DestinationType::Address => {
+            calldata_memory_cost + COLD_COST + POSITIVE_VALUE_COST + EMPTY_ACCOUNT_COST
+          }
+          abi::DestinationType::Code => {
+            // OutInstructions can't be encoded/decoded and doesn't have pub internals, enabling it
+            // to be correct by construction
+            let code = abi::CodeDestination::abi_decode(&instruction.destination, true).unwrap();
+            // This performs a call to self with the value, incurring the positive-value cost before
+            // CREATE's
+            calldata_memory_cost +
+              CODE_COST +
+              (WARM_COST + POSITIVE_VALUE_COST + u64::from(code.gasLimit))
+          }
+          abi::DestinationType::__Invalid => unreachable!(),
+        },
+        Coin::Erc20(_) => {
+          // The ERC20 is warmed by the fee payment to the relayer
+          let erc20_call_gas = WARM_COST + Self::GAS_FOR_ERC20_CALL;
+          match instruction.destinationType {
+            abi::DestinationType::Address => calldata_memory_cost + erc20_call_gas,
+            abi::DestinationType::Code => {
+              let code = abi::CodeDestination::abi_decode(&instruction.destination, true).unwrap();
+              calldata_memory_cost +
+                CODE_COST +
+                erc20_call_gas +
+                // Call to self to deploy the contract
+                (WARM_COST + u64::from(code.gasLimit))
+            }
+            abi::DestinationType::__Invalid => unreachable!(),
+          }
+        }
+      })
+  }
+
+  /// The estimated gas cost for this OutInstruction.
+  ///
+  /// This is not guaranteed to be correct or even sufficient. It is a hint and a hint alone used
+  /// for determining relayer fees.
+  pub fn execute_out_instruction_gas_estimate(coin: Coin, address: SeraiAddress) -> u64 {
+    Self::execute_out_instruction_gas_estimate_internal(
+      coin,
+      &abi::OutInstruction::from(&(address, U256::ZERO)),
+    )
+  }
+
+  /// The estimated gas cost for this batch.
+  ///
+  /// This is not guaranteed to be correct or even sufficient. It is a hint and a hint alone used
+  /// for determining relayer fees.
+  pub fn execute_gas_estimate(coin: Coin, outs: &OutInstructions) -> u64 {
+    Self::EXECUTE_BASE_GAS +
+      (match coin {
+        // This is warm as it's the message sender who is called with the fee payment
+        Coin::Ether => WARM_COST + POSITIVE_VALUE_COST,
+        // This is cold as we say the fee payment is the one warming the ERC20
+        Coin::Erc20(_) => COLD_COST + Self::GAS_FOR_ERC20_CALL,
+      }) +
+      outs
+        .0
+        .iter()
+        .map(|out| Self::execute_out_instruction_gas_estimate_internal(coin, out))
+        .sum::()
+  }
+
   /// Construct a transaction to execute a batch of `OutInstruction`s.
   ///
-  /// The gas limit and gas price are not set and are left to the caller.
+  /// The gas limit is set to an estimate which may or may not be sufficient. The caller is
+  /// expected to set a correct gas limit. The gas price is not set and is left to the caller.
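For a sense of scale, the estimate above is straight arithmetic over the Dencun constants defined at the top of this patch. A rough worked example, assuming (purely for illustration) an ETH `OutInstruction` to a plain address whose ABI encoding is 128 bytes:

  let size = 128u64; // assumed encoding size, for illustration only
  let calldata_memory_cost = (16 * size) + (3 * size.div_ceil(32)); // 2_060
  // ITERATION_COST + calldata/memory + COLD_COST + POSITIVE_VALUE_COST + EMPTY_ACCOUNT_COST
  let estimate = 5_000 + calldata_memory_cost + 2_600 + 9_000 + 25_000; // 43_660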
pub fn execute(&self, coin: Coin, fee: U256, outs: OutInstructions, sig: &Signature) -> TxLegacy { - // TODO - let gas_limit = Self::EXECUTE_BASE_GAS + outs.0.iter().map(|_| 200_000 + 10_000).sum::(); + let gas = Self::execute_gas_estimate(coin, &outs); TxLegacy { to: TxKind::Call(self.address), input: abi::executeCall::new((abi::Signature::from(sig), Address::from(coin), fee, outs.0)) .abi_encode() .into(), - gas_limit: gas_limit * 120 / 100, + gas_limit: gas * Self::GAS_REPRICING_BUFFER / 100, ..Default::default() } } @@ -436,7 +541,7 @@ impl Router { TxLegacy { to: TxKind::Call(self.address), input: abi::escapeHatchCall::new((abi::Signature::from(sig), escape_to)).abi_encode().into(), - gas_limit: Self::ESCAPE_HATCH_GAS * 120 / 100, + gas_limit: Self::ESCAPE_HATCH_GAS * Self::GAS_REPRICING_BUFFER / 100, ..Default::default() } } @@ -679,6 +784,24 @@ impl Router { TransportErrorKind::Custom(format!("failed to convert nonce to u64: {e:?}").into()) })?, message_hash: event.messageHash.into(), + results: { + let results_len = usize::try_from(event.resultsLength).map_err(|e| { + TransportErrorKind::Custom( + format!("failed to convert resultsLength to usize: {e:?}").into(), + ) + })?; + if results_len.div_ceil(8) != event.results.len() { + Err(TransportErrorKind::Custom( + "resultsLength didn't align with results length".to_string().into(), + ))?; + } + let mut results = Vec::with_capacity(results_len); + for b in 0 .. results_len { + let byte = event.results[b / 8]; + results.push(((byte >> (b % 8)) & 1) == 1); + } + results + }, }); } Some(&EscapeHatchEvent::SIGNATURE_HASH) => { diff --git a/processor/ethereum/router/src/tests/erc20.rs b/processor/ethereum/router/src/tests/erc20.rs index e107fbb1..7f07f935 100644 --- a/processor/ethereum/router/src/tests/erc20.rs +++ b/processor/ethereum/router/src/tests/erc20.rs @@ -22,8 +22,10 @@ pub struct Erc20(Address); impl Erc20 { pub(crate) async fn deploy(test: &Test) -> Self { const BYTECODE: &[u8] = { - const BYTECODE_HEX: &[u8] = - include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-router/TestERC20.bin")); + const BYTECODE_HEX: &[u8] = include_bytes!(concat!( + env!("OUT_DIR"), + "/serai-processor-ethereum-router/tests/TestERC20.bin" + )); const BYTECODE: [u8; BYTECODE_HEX.len() / 2] = match hex::const_decode_to_array::<{ BYTECODE_HEX.len() / 2 }>(BYTECODE_HEX) { Ok(bytecode) => bytecode, diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 5b9748b3..8d8930ab 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -322,7 +322,7 @@ impl Test { coin: Coin, fee: U256, out_instructions: &[(SeraiEthereumAddress, U256)], - ) -> TxLegacy { + ) -> ([u8; 32], TxLegacy) { let out_instructions = OutInstructions::from(out_instructions); let msg = Router::execute_message( self.chain_id, @@ -331,8 +331,47 @@ impl Test { fee, out_instructions.clone(), ); + let msg_hash = ethereum_primitives::keccak256(&msg); let sig = sign(self.state.key.unwrap(), &msg); - self.router.execute(coin, fee, out_instructions, &sig) + + let mut tx = self.router.execute(coin, fee, out_instructions, &sig); + // Restore the original estimate as the gas limit to ensure it's sufficient, at least in our + // test cases + tx.gas_limit = (tx.gas_limit * 100) / Router::GAS_REPRICING_BUFFER; + + (msg_hash, tx) + } + + async fn execute( + &mut self, + coin: Coin, + fee: U256, + out_instructions: &[(SeraiEthereumAddress, U256)], + results: Vec, + ) -> u64 { + let 
(message_hash, mut tx) = self.execute_tx(coin, fee, out_instructions);
+    tx.gas_price = 100_000_000_000;
+    let tx = ethereum_primitives::deterministically_sign(tx);
+    let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await;
+    assert!(receipt.status());
+    // We don't check the gas for `execute` as it's infeasible. Due to our use of account
+    // abstraction, it isn't critical if we ever under-estimate, solely an unprofitable relay
+
+    {
+      let block = receipt.block_number.unwrap();
+      let executed = self.router.executed(block ..= block).await.unwrap();
+      assert_eq!(executed.len(), 1);
+      assert_eq!(
+        executed[0],
+        Executed::Batch { nonce: self.state.next_nonce, message_hash, results }
+      );
+    }
+
+    self.state.next_nonce += 1;
+    self.verify_state().await;
+
+    // We do return the gas used in case a caller can benefit from it
+    CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used)
+  }

  fn escape_hatch_tx(&self, escape_to: Address) -> TxLegacy {
@@ -402,7 +441,7 @@ async fn test_no_serai_key() {
    IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {})
  ));
  assert!(matches!(
-    test.call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), &[])).await,
+    test.call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), &[]).1).await,
    IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {})
  ));
  assert!(matches!(
@@ -555,7 +594,7 @@ async fn test_erc20_router_in_instruction() {
  let tx = TxLegacy {
    chain_id: None,
    nonce: 0,
-    gas_price: 100_000_000_000u128,
+    gas_price: 100_000_000_000,
    gas_limit: 1_000_000,
    to: test.router.address().into(),
    value: U256::ZERO,
@@ -592,7 +631,7 @@ async fn test_erc20_top_level_transfer_in_instruction() {
  let shorthand = Test::in_instruction();
  let mut tx = test.router.in_instruction(coin, amount, &shorthand);
-  tx.gas_price = 100_000_000_000u128;
+  tx.gas_price = 100_000_000_000;
  tx.gas_limit = 1_000_000;
  let tx = ethereum_primitives::deterministically_sign(tx);
@@ -600,6 +639,21 @@ async fn test_erc20_top_level_transfer_in_instruction() {
  test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await;
}

+#[tokio::test]
+async fn test_empty_execute() {
+  let mut test = Test::new().await;
+  test.confirm_next_serai_key().await;
+  let () =
+    test.provider.raw_request("anvil_setBalance".into(), (test.router.address(), 1)).await.unwrap();
+  let gas_used = test.execute(Coin::Ether, U256::from(1), &[], vec![]).await;
+
+  // For the empty ETH case, we do compare this cost to the base cost
+  const CALL_GAS_STIPEND: u64 = 2_300;
+  // We don't use the call gas stipend here
+  const UNUSED_GAS: u64 = CALL_GAS_STIPEND;
+  assert_eq!(gas_used + UNUSED_GAS, Router::EXECUTE_BASE_GAS);
+}
+
#[tokio::test]
async fn test_eth_address_out_instruction() {
  todo!("TODO")
@@ -643,7 +697,7 @@ async fn test_escape_hatch() {
  let tx = ethereum_primitives::deterministically_sign(TxLegacy {
    to: Address([1; 20].into()).into(),
    gas_limit: 21_000,
-    gas_price: 100_000_000_000u128,
+    gas_price: 100_000_000_000,
    value: U256::from(1),
    ..Default::default()
  });
@@ -679,7 +733,7 @@ async fn test_escape_hatch() {
    IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {})
  ));
  assert!(matches!(
-    test.call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), &[])).await,
+    test.call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), &[]).1).await,
    IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {})
  ));
  // We reject further attempts to update the escape hatch to prevent the last key from being
diff --git 
a/processor/ethereum/src/primitives/block.rs b/processor/ethereum/src/primitives/block.rs index b01493f0..9d4a8a2d 100644 --- a/processor/ethereum/src/primitives/block.rs +++ b/processor/ethereum/src/primitives/block.rs @@ -97,12 +97,19 @@ impl primitives::Block for FullEpoch { > { let mut res = HashMap::new(); for executed in &self.executed { - let Some(expected) = + let Some(mut expected) = eventualities.active_eventualities.remove(executed.nonce().to_le_bytes().as_slice()) else { // TODO: Why is this a continue, not an assert? continue; }; + // If this is a Batch Eventuality, we didn't know how the OutInstructions would resolve at + // time of creation. Copy the results from the actual transaction into the expectation + if let (Executed::Batch { results, .. }, Executed::Batch { results: expected_results, .. }) = + (executed, &mut expected.0) + { + *expected_results = results.clone(); + } assert_eq!( executed, &expected.0, @@ -119,7 +126,7 @@ impl primitives::Block for FullEpoch { Accordingly, we have free reign as to what to set the transaction ID to. We set the ID to the nonce as it's the most helpful value and unique barring someone - finding the premise for this as a hash. + finding the preimage for this as a hash. */ let mut tx_id = [0; 32]; tx_id[.. 8].copy_from_slice(executed.nonce().to_le_bytes().as_slice()); diff --git a/processor/ethereum/src/primitives/transaction.rs b/processor/ethereum/src/primitives/transaction.rs index dc430f29..a698fdb4 100644 --- a/processor/ethereum/src/primitives/transaction.rs +++ b/processor/ethereum/src/primitives/transaction.rs @@ -52,7 +52,7 @@ impl Action { Executed::NextSeraiKeySet { nonce: *nonce, key: key.eth_repr() } } Self::Batch { chain_id: _, nonce, .. } => { - Executed::Batch { nonce: *nonce, message_hash: keccak256(self.message()) } + Executed::Batch { nonce: *nonce, message_hash: keccak256(self.message()), results: vec![] } } }) } diff --git a/substrate/client/src/networks/ethereum.rs b/substrate/client/src/networks/ethereum.rs index 47b58af5..7e94dfb8 100644 --- a/substrate/client/src/networks/ethereum.rs +++ b/substrate/client/src/networks/ethereum.rs @@ -14,8 +14,8 @@ pub const ADDRESS_GAS_LIMIT: u32 = 950_000; pub struct ContractDeployment { /// The gas limit to use for this contract's execution. /// - /// THis MUST be less than the Serai gas limit. The cost of it will be deducted from the amount - /// transferred. + /// This MUST be less than the Serai gas limit. The cost of it, and the associated costs with + /// making this transaction, will be deducted from the amount transferred. gas_limit: u32, /// The initialization code of the contract to deploy. 
/// From 27c1dc46463e1a6d86ef28dcd679766c8c9fcb2e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Fri, 24 Jan 2025 18:46:17 -0500 Subject: [PATCH 347/368] Test ETH address/code OutInstructions --- processor/ethereum/router/src/lib.rs | 104 ++++++++---------- processor/ethereum/router/src/tests/mod.rs | 116 ++++++++++++++++++--- 2 files changed, 148 insertions(+), 72 deletions(-) diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs index c17526c3..053aebce 100644 --- a/processor/ethereum/router/src/lib.rs +++ b/processor/ethereum/router/src/lib.rs @@ -68,14 +68,6 @@ use abi::{ #[cfg(test)] mod tests; -// As per Dencun, used for estimating gas for determining relayer fees -const NON_ZERO_BYTE_GAS_COST: u64 = 16; -const MEMORY_EXPANSION_COST: u64 = 3; // Does not model the quadratic cost -const COLD_COST: u64 = 2_600; -const WARM_COST: u64 = 100; -const POSITIVE_VALUE_COST: u64 = 9_000; -const EMPTY_ACCOUNT_COST: u64 = 25_000; - impl From<&Signature> for abi::Signature { fn from(signature: &Signature) -> Self { Self { @@ -247,6 +239,7 @@ pub struct Router { } impl Router { // Gas allocated for ERC20 calls + #[cfg(test)] const GAS_FOR_ERC20_CALL: u64 = 100_000; /* @@ -262,7 +255,12 @@ impl Router { */ const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 57_736; const UPDATE_SERAI_KEY_GAS: u64 = 60_045; - const EXECUTE_BASE_GAS: u64 = 51_131; + const EXECUTE_ETH_BASE_GAS: u64 = 51_131; + const EXECUTE_ERC20_BASE_GAS: u64 = 149_831; + const EXECUTE_ETH_ADDRESS_OUT_INSTRUCTION_GAS: u64 = 41_453; + const EXECUTE_ETH_CODE_OUT_INSTRUCTION_GAS: u64 = 51_723; + const EXECUTE_ERC20_ADDRESS_OUT_INSTRUCTION_GAS: u64 = 0; // TODO + const EXECUTE_ERC20_CODE_OUT_INSTRUCTION_GAS: u64 = 0; // TODO const ESCAPE_HATCH_GAS: u64 = 61_238; /* @@ -432,51 +430,39 @@ impl Router { coin: Coin, instruction: &abi::OutInstruction, ) -> u64 { - // The assigned cost for performing an additional iteration of the loop - const ITERATION_COST: u64 = 5_000; - // The additional cost for a `DestinationType.Code`, as an additional buffer for its complexity - const CODE_COST: u64 = 10_000; + // As per Dencun, used for estimating gas for determining relayer fees + const NON_ZERO_BYTE_GAS_COST: u64 = 16; + const MEMORY_EXPANSION_COST: u64 = 3; // Does not model the quadratic cost let size = u64::try_from(instruction.abi_encoded_size()).unwrap(); let calldata_memory_cost = - (NON_ZERO_BYTE_GAS_COST * size) + (MEMORY_EXPANSION_COST * size.div_ceil(32)); + (size * NON_ZERO_BYTE_GAS_COST) + (size.div_ceil(32) * MEMORY_EXPANSION_COST); - ITERATION_COST + - (match coin { - Coin::Ether => match instruction.destinationType { - // We assume we're tranferring a positive value to a cold, empty account - abi::DestinationType::Address => { - calldata_memory_cost + COLD_COST + POSITIVE_VALUE_COST + EMPTY_ACCOUNT_COST - } - abi::DestinationType::Code => { - // OutInstructions can't be encoded/decoded and doesn't have pub internals, enabling it - // to be correct by construction - let code = abi::CodeDestination::abi_decode(&instruction.destination, true).unwrap(); - // This performs a call to self with the value, incurring the positive-value cost before - // CREATE's + match coin { + Coin::Ether => match instruction.destinationType { + // The calldata and memory cost is already part of this + abi::DestinationType::Address => Self::EXECUTE_ETH_ADDRESS_OUT_INSTRUCTION_GAS, + abi::DestinationType::Code => { + // OutInstructions can't be encoded/decoded and doesn't have pub internals, enabling it + // to be correct by 
construction + let code = abi::CodeDestination::abi_decode(&instruction.destination, true).unwrap(); + Self::EXECUTE_ETH_CODE_OUT_INSTRUCTION_GAS + calldata_memory_cost + - CODE_COST + - (WARM_COST + POSITIVE_VALUE_COST + u64::from(code.gasLimit)) - } - abi::DestinationType::__Invalid => unreachable!(), - }, - Coin::Erc20(_) => { - // The ERC20 is warmed by the fee payment to the relayer - let erc20_call_gas = WARM_COST + Self::GAS_FOR_ERC20_CALL; - match instruction.destinationType { - abi::DestinationType::Address => calldata_memory_cost + erc20_call_gas, - abi::DestinationType::Code => { - let code = abi::CodeDestination::abi_decode(&instruction.destination, true).unwrap(); - calldata_memory_cost + - CODE_COST + - erc20_call_gas + - // Call to self to deploy the contract - (WARM_COST + u64::from(code.gasLimit)) - } - abi::DestinationType::__Invalid => unreachable!(), - } + u64::from(code.gasLimit) } - }) + abi::DestinationType::__Invalid => unreachable!(), + }, + Coin::Erc20(_) => match instruction.destinationType { + abi::DestinationType::Address => Self::EXECUTE_ERC20_ADDRESS_OUT_INSTRUCTION_GAS, + abi::DestinationType::Code => { + let code = abi::CodeDestination::abi_decode(&instruction.destination, true).unwrap(); + Self::EXECUTE_ERC20_CODE_OUT_INSTRUCTION_GAS + + calldata_memory_cost + + u64::from(code.gasLimit) + } + abi::DestinationType::__Invalid => unreachable!(), + }, + } } /// The estimated gas cost for this OutInstruction. @@ -495,18 +481,16 @@ impl Router { /// This is not guaranteed to be correct or even sufficient. It is a hint and a hint alone used /// for determining relayer fees. pub fn execute_gas_estimate(coin: Coin, outs: &OutInstructions) -> u64 { - Self::EXECUTE_BASE_GAS + - (match coin { - // This is warm as it's the message sender who is called with the fee payment - Coin::Ether => WARM_COST + POSITIVE_VALUE_COST, - // This is cold as we say the fee payment is the one warming the ERC20 - Coin::Erc20(_) => COLD_COST + Self::GAS_FOR_ERC20_CALL, - }) + - outs - .0 - .iter() - .map(|out| Self::execute_out_instruction_gas_estimate_internal(coin, out)) - .sum::() + (match coin { + // This is warm as it's the message sender who is called with the fee payment + Coin::Ether => Self::EXECUTE_ETH_BASE_GAS, + // This is cold as we say the fee payment is the one warming the ERC20 + Coin::Erc20(_) => Self::EXECUTE_ERC20_BASE_GAS, + }) + outs + .0 + .iter() + .map(|out| Self::execute_out_instruction_gas_estimate_internal(coin, out)) + .sum::() } /// Construct a transaction to execute a batch of `OutInstruction`s. 
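To make the refactor concrete: the per-coin constants are now benchmarked totals, so a batch estimate reduces to a sum. A sketch against the constants above (an illustration, not code from the patch), for a single ETH transfer to a plain address:

  // Base cost plus the per-`OutInstruction` constant (calldata/memory already folded in)
  let estimate = 51_131 + 41_453; // EXECUTE_ETH_BASE_GAS + EXECUTE_ETH_ADDRESS_OUT_INSTRUCTION_GAS
  assert_eq!(estimate, 92_584); // before the 120% repricing buffer is applied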
diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs
index 8d8930ab..77e8b50d 100644
--- a/processor/ethereum/router/src/tests/mod.rs
+++ b/processor/ethereum/router/src/tests/mod.rs
@@ -19,7 +19,7 @@
 use alloy_node_bindings::{Anvil, AnvilInstance};

 use scale::Encode;
 use serai_client::{
-  networks::ethereum::Address as SeraiEthereumAddress,
+  networks::ethereum::{ContractDeployment, Address as SeraiEthereumAddress},
   primitives::SeraiAddress,
   in_instructions::primitives::{
     InInstruction as SeraiInInstruction, RefundableInInstruction, Shorthand,
@@ -41,6 +41,8 @@ mod constants;
 mod erc20;
 use erc20::Erc20;

+const CALL_GAS_STIPEND: u64 = 2_300;
+
 pub(crate) fn test_key() -> (Scalar, PublicKey) {
   loop {
     let key = Scalar::random(&mut OsRng);
@@ -348,7 +350,7 @@ impl Test {
     fee: U256,
     out_instructions: &[(SeraiEthereumAddress, U256)],
     results: Vec,
-  ) -> u64 {
+  ) -> (Signed, u64, u64) {
     let (message_hash, mut tx) = self.execute_tx(coin, fee, out_instructions);
     tx.gas_price = 100_000_000_000;
     let tx = ethereum_primitives::deterministically_sign(tx);
@@ -371,7 +373,7 @@ impl Test {
     self.verify_state().await;

     // We do return the gas used in case a caller can benefit from it
-    CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used)
+    (tx.clone(), receipt.gas_used, CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used))
   }

   fn escape_hatch_tx(&self, escape_to: Address) -> TxLegacy {
@@ -645,28 +647,118 @@ async fn test_empty_execute() {
   test.confirm_next_serai_key().await;
   let () =
     test.provider.raw_request("anvil_setBalance".into(), (test.router.address(), 1)).await.unwrap();
-  let gas_used = test.execute(Coin::Ether, U256::from(1), &[], vec![]).await;
-
-  // For the empty ETH case, we do compare this cost to the base cost
-  const CALL_GAS_STIPEND: u64 = 2_300;
-  // We don't use the call gas stipend here
-  const UNUSED_GAS: u64 = CALL_GAS_STIPEND;
-  assert_eq!(gas_used + UNUSED_GAS, Router::EXECUTE_BASE_GAS);
+  {
+    let (tx, raw_gas_used, gas_used) = test.execute(Coin::Ether, U256::from(1), &[], vec![]).await;
+    // We don't use the call gas stipend here
+    const UNUSED_GAS: u64 = CALL_GAS_STIPEND;
+    assert_eq!(gas_used + UNUSED_GAS, Router::EXECUTE_ETH_BASE_GAS);
+
+    assert_eq!(test.provider.get_balance(test.router.address()).await.unwrap(), U256::from(0));
+    let minted_to_sender = u128::from(tx.tx().gas_limit) * tx.tx().gas_price;
+    let spent_by_sender = u128::from(raw_gas_used) * tx.tx().gas_price;
+    assert_eq!(
+      test.provider.get_balance(tx.recover_signer().unwrap()).await.unwrap() -
+        U256::from(minted_to_sender - spent_by_sender),
+      U256::from(1)
+    );
+  }
+
+  {
+    // This uses a token of Address(0) as it'll be interpreted as a non-standard ERC20 which uses 0
+    // gas, letting us safely evaluate the EXECUTE_ERC20_BASE_GAS constant
+    let (_tx, _raw_gas_used, gas_used) =
+      test.execute(Coin::Erc20(Address::ZERO), U256::from(1), &[], vec![]).await;
+    // Add an extra 1000 gas for decoding the return value which would exist were this a compliant
+    // ERC20
+    const UNUSED_GAS: u64 = Router::GAS_FOR_ERC20_CALL + 1000;
+    assert_eq!(gas_used + UNUSED_GAS, Router::EXECUTE_ERC20_BASE_GAS);
+  }
 }

 #[tokio::test]
 async fn test_eth_address_out_instruction() {
-  todo!("TODO")
+  let mut test = Test::new().await;
+  test.confirm_next_serai_key().await;
+  let () =
+    test.provider.raw_request("anvil_setBalance".into(), (test.router.address(), 3)).await.unwrap();
+
+  let mut rand_address = [0xff; 20];
+  OsRng.fill_bytes(&mut rand_address);
+  let (tx, raw_gas_used, gas_used) = 
test + .execute( + Coin::Ether, + U256::from(1), + &[(SeraiEthereumAddress::Address(rand_address), U256::from(2))], + vec![true], + ) + .await; + // We don't use the call gas stipend here + const UNUSED_GAS: u64 = CALL_GAS_STIPEND; + // This doesn't model the quadratic memory costs + let gas_for_eth_address_out_instruction = gas_used + UNUSED_GAS - Router::EXECUTE_ETH_BASE_GAS; + // 2000 gas as a surplus for the quadratic memory cost and any inaccuracies + assert_eq!( + gas_for_eth_address_out_instruction + 2000, + Router::EXECUTE_ETH_ADDRESS_OUT_INSTRUCTION_GAS + ); + + assert_eq!(test.provider.get_balance(test.router.address()).await.unwrap(), U256::from(0)); + let minted_to_sender = u128::from(tx.tx().gas_limit) * tx.tx().gas_price; + let spent_by_sender = u128::from(raw_gas_used) * tx.tx().gas_price; + assert_eq!( + test.provider.get_balance(tx.recover_signer().unwrap()).await.unwrap() - + U256::from(minted_to_sender - spent_by_sender), + U256::from(1) + ); + assert_eq!(test.provider.get_balance(rand_address.into()).await.unwrap(), U256::from(2)); } #[tokio::test] async fn test_erc20_address_out_instruction() { todo!("TODO") + /* + assert_eq!(erc20.balance_of(&test, test.router.address()).await, U256::from(0)); + assert_eq!(erc20.balance_of(&test, test.state.escaped_to.unwrap()).await, amount); + */ } #[tokio::test] async fn test_eth_code_out_instruction() { - todo!("TODO") + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + let () = + test.provider.raw_request("anvil_setBalance".into(), (test.router.address(), 3)).await.unwrap(); + + let mut rand_address = [0xff; 20]; + OsRng.fill_bytes(&mut rand_address); + let (tx, raw_gas_used, gas_used) = test + .execute( + Coin::Ether, + U256::from(1), + &[( + SeraiEthereumAddress::Contract(ContractDeployment::new(100_000, vec![]).unwrap()), + U256::from(2), + )], + vec![true], + ) + .await; + // This doesn't model the quadratic memory costs + let gas_for_eth_code_out_instruction = gas_used - Router::EXECUTE_ETH_BASE_GAS; + // 2000 gas as a surplus for the quadratic memory cost and any inaccuracies + assert_eq!(gas_for_eth_code_out_instruction + 2000, Router::EXECUTE_ETH_CODE_OUT_INSTRUCTION_GAS); + + assert_eq!(test.provider.get_balance(test.router.address()).await.unwrap(), U256::from(0)); + let minted_to_sender = u128::from(tx.tx().gas_limit) * tx.tx().gas_price; + let spent_by_sender = u128::from(raw_gas_used) * tx.tx().gas_price; + assert_eq!( + test.provider.get_balance(tx.recover_signer().unwrap()).await.unwrap() - + U256::from(minted_to_sender - spent_by_sender), + U256::from(1) + ); + assert_eq!( + test.provider.get_balance(test.router.address().create(1)).await.unwrap(), + U256::from(2) + ); } #[tokio::test] @@ -854,7 +946,7 @@ async fn test_eth_address_out_instruction() { let instructions = OutInstructions::from([].as_slice()); let receipt = publish_outs(&provider, &router, key, 2, Coin::Ether, fee, instructions).await; assert!(receipt.status()); - assert_eq!(Router::EXECUTE_BASE_GAS, ((receipt.gas_used + 1000) / 1000) * 1000); + assert_eq!(Router::EXECUTE_ETH_BASE_GAS, ((receipt.gas_used + 1000) / 1000) * 1000); assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 3); } From 5164a710a2c1469ff1337128fa0d59ad5f77bc2e Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Sun, 26 Jan 2025 22:42:50 -0500 Subject: [PATCH 348/368] Redo gas estimation via revm Adds a minimal amount of packages. Does add decent complexity. 
Avoids having constants which aren't exact, due to things like the quadratic memory cost, and the issues with such estimates accordingly. --- Cargo.lock | 138 +++++++ processor/ethereum/router/Cargo.toml | 4 + .../ethereum/router/contracts/Router.sol | 6 +- processor/ethereum/router/src/gas.rs | 340 ++++++++++++++++++ processor/ethereum/router/src/lib.rs | 148 +------- processor/ethereum/router/src/tests/mod.rs | 273 +++++++++----- 6 files changed, 682 insertions(+), 227 deletions(-) create mode 100644 processor/ethereum/router/src/gas.rs diff --git a/Cargo.lock b/Cargo.lock index a9aadf55..72cc661e 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -318,6 +318,7 @@ dependencies = [ "alloy-primitives", "alloy-rpc-client", "alloy-rpc-types-eth", + "alloy-rpc-types-trace", "alloy-transport", "async-stream", "async-trait", @@ -411,6 +412,20 @@ dependencies = [ "thiserror 2.0.9", ] +[[package]] +name = "alloy-rpc-types-trace" +version = "0.9.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cd38207e056cc7d1372367fbb4560ddf9107cbd20731743f641246bf0dede149" +dependencies = [ + "alloy-primitives", + "alloy-rpc-types-eth", + "alloy-serde", + "serde", + "serde_json", + "thiserror 2.0.9", +] + [[package]] name = "alloy-serde" version = "0.9.2" @@ -979,6 +994,16 @@ dependencies = [ "url", ] +[[package]] +name = "aurora-engine-modexp" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0aef7712851e524f35fbbb74fa6599c5cd8692056a1c36f9ca0d2001b670e7e5" +dependencies = [ + "hex", + "num", +] + [[package]] name = "auto_impl" version = "1.2.0" @@ -2562,6 +2587,17 @@ dependencies = [ "syn 2.0.94", ] +[[package]] +name = "enumn" +version = "0.1.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2f9ed6b3789237c8a0c1c505af1c7eb2c560df6186f01b098c3a1064ea532f38" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.94", +] + [[package]] name = "env_logger" version = "0.10.2" @@ -5962,6 +5998,20 @@ dependencies = [ "winapi", ] +[[package]] +name = "num" +version = "0.4.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "35bd024e8b2ff75562e5f34e7f4905839deb4b22955ef5e73d2fea1b9813cb23" +dependencies = [ + "num-bigint", + "num-complex", + "num-integer", + "num-iter", + "num-rational", + "num-traits", +] + [[package]] name = "num-bigint" version = "0.4.6" @@ -6006,6 +6056,17 @@ dependencies = [ "num-traits", ] +[[package]] +name = "num-iter" +version = "0.1.45" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1429034a0490724d0075ebb2bc9e875d6503c3cf69e235a8941aa757d83ef5bf" +dependencies = [ + "autocfg", + "num-integer", + "num-traits", +] + [[package]] name = "num-rational" version = "0.4.2" @@ -7275,6 +7336,68 @@ dependencies = [ "quick-error", ] +[[package]] +name = "revm" +version = "19.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0a5a57589c308880c0f89ebf68d92aeef0d51e1ed88867474f895f6fd0f25c64" +dependencies = [ + "auto_impl", + "cfg-if", + "dyn-clone", + "revm-interpreter", + "revm-precompile", + "serde", + "serde_json", +] + +[[package]] +name = "revm-interpreter" +version = "15.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c0f632e761f171fb2f6ace8d1552a5793e0350578d4acec3e79ade1489f4c2a6" +dependencies = [ + "revm-primitives", + "serde", +] + +[[package]] +name = "revm-precompile" +version = "16.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" 
+checksum = "6542fb37650dfdbf4b9186769e49c4a8bc1901a3280b2ebf32f915b6c8850f36" +dependencies = [ + "aurora-engine-modexp", + "c-kzg", + "cfg-if", + "k256", + "once_cell", + "revm-primitives", + "ripemd", + "secp256k1", + "sha2", + "substrate-bn", +] + +[[package]] +name = "revm-primitives" +version = "15.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "48faea1ecf2c9f80d9b043bbde0db9da616431faed84c4cfa3dd7393005598e6" +dependencies = [ + "alloy-eip2930", + "alloy-eip7702", + "alloy-primitives", + "auto_impl", + "bitflags 2.6.0", + "bitvec", + "cfg-if", + "dyn-clone", + "enumn", + "hex", + "serde", +] + [[package]] name = "rfc6979" version = "0.4.0" @@ -9485,6 +9608,7 @@ dependencies = [ "k256", "parity-scale-codec", "rand_core", + "revm", "serai-client", "serai-ethereum-test-primitives", "serai-processor-ethereum-deployer", @@ -9844,6 +9968,7 @@ version = "1.0.134" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d00f4175c42ee48b15416f6193a959ba3a0d67fc699a0db9ad12df9f83991c7d" dependencies = [ + "indexmap 2.7.0", "itoa", "memchr", "ryu", @@ -10907,6 +11032,19 @@ dependencies = [ "zeroize", ] +[[package]] +name = "substrate-bn" +version = "0.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "72b5bbfa79abbae15dd642ea8176a21a635ff3c00059961d1ea27ad04e5b441c" +dependencies = [ + "byteorder", + "crunchy", + "lazy_static", + "rand", + "rustc-hex", +] + [[package]] name = "substrate-build-script-utils" version = "3.0.0" diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index 1da4fd02..07c88fe6 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -20,6 +20,7 @@ workspace = true borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } group = { version = "0.13", default-features = false } +k256 = { version = "0.13", default-features = false, features = ["std", "arithmetic"] } alloy-core = { version = "0.8", default-features = false } @@ -33,6 +34,8 @@ alloy-transport = { version = "0.9", default-features = false } alloy-simple-request-transport = { path = "../../../networks/ethereum/alloy-simple-request-transport", default-features = false } alloy-provider = { version = "0.9", default-features = false } +revm = { version = "19", default-features = false, features = ["std"] } + ethereum-schnorr = { package = "ethereum-schnorr-contract", path = "../../../networks/ethereum/schnorr", default-features = false } ethereum-primitives = { package = "serai-processor-ethereum-primitives", path = "../primitives", default-features = false } @@ -58,6 +61,7 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] } k256 = { version = "0.13", default-features = false, features = ["std"] } +alloy-provider = { version = "0.9", default-features = false, features = ["trace-api"] } alloy-rpc-client = { version = "0.9", default-features = false } alloy-node-bindings = { version = "0.9", default-features = false } diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index b6ffc3c1..3bd5c73f 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -13,11 +13,15 @@ import "IRouter.sol"; individual transactions is critical. We don't check the return values as we don't care if the calls succeeded. We solely care we made - them. 
If someone configures an external contract in a way which borks, we epxlicitly define that + them. If someone configures an external contract in a way which borks, we explicitly define that as their fault and out-of-scope to this contract. If an actual invariant within Serai exists, an escape hatch exists to move to a new contract. Any improperly handled actions can be re-signed and re-executed at that point in time. + + The `execute` function pays a relayer, as expected for use in the account-abstraction model. Other + functions also expect relayers, yet do not explicitly pay fees. Those calls are expected to be + justified via the backpressure of transactions with fees. */ // slither-disable-start low-level-calls,unchecked-lowlevel diff --git a/processor/ethereum/router/src/gas.rs b/processor/ethereum/router/src/gas.rs new file mode 100644 index 00000000..22aab3f4 --- /dev/null +++ b/processor/ethereum/router/src/gas.rs @@ -0,0 +1,340 @@ +use k256::{Scalar, ProjectivePoint}; + +use alloy_core::primitives::{Address, U160, U256}; +use alloy_sol_types::SolCall; + +use revm::{ + primitives::*, + interpreter::{gas::*, opcode::InstructionTables, *}, + db::{emptydb::EmptyDB, in_memory_db::InMemoryDB}, + Handler, Context, EvmBuilder, Evm, +}; + +use ethereum_schnorr::{PublicKey, Signature}; + +use crate::*; + +// The chain ID used for gas estimation +const CHAIN_ID: U256 = U256::from_be_slice(&[1]); + +/// The object used for estimating gas. +/// +/// Due to `execute` heavily branching, we locally simulate calls with revm. +pub(crate) type GasEstimator = Evm<'static, (), InMemoryDB>; + +impl Router { + const NONCE_STORAGE_SLOT: U256 = U256::from_be_slice(&[1]); + const SERAI_KEY_STORAGE_SLOT: U256 = U256::from_be_slice(&[3]); + + // Gas allocated for ERC20 calls + #[cfg(test)] + pub(crate) const GAS_FOR_ERC20_CALL: u64 = 100_000; + + /* + The gas limits to use for non-Execute transactions. + + These don't branch on the success path, allowing constants to be used out-right. These + constants target the Cancun network upgrade and are validated by the tests. + + While whoever publishes these transactions may be able to query a gas estimate, it may not be + reasonable to. If the signing context is a distributed group, as Serai frequently employs, a + non-deterministic gas (such as estimates from the local nodes) would require a consensus + protocol to determine which to use. + + These gas limits may break if/when gas opcodes undergo repricing. In that case, this library is + expected to be modified with these made parameters. The caller would then be expected to pass + the correct set of prices for the network they're operating on. + */ + /// The gas used by `confirmSeraiKey`. + pub const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 57_736; + /// The gas used by `updateSeraiKey`. + pub const UPDATE_SERAI_KEY_GAS: u64 = 60_045; + /// The gas used by `escapeHatch`. + pub const ESCAPE_HATCH_GAS: u64 = 61_094; + + /// The key to use when performing gas estimations. + /// + /// There has to be a key to verify the signatures of the messages signed. + fn gas_estimation_key() -> (Scalar, PublicKey) { + (Scalar::ONE, PublicKey::new(ProjectivePoint::GENERATOR).unwrap()) + } + + pub(crate) fn gas_estimator(&self, erc20: Option
) -> GasEstimator { + // The DB to use + let db = { + const BYTECODE: &[u8] = { + const BYTECODE_HEX: &[u8] = include_bytes!(concat!( + env!("OUT_DIR"), + "/serai-processor-ethereum-router/Router.bin-runtime" + )); + const BYTECODE: [u8; BYTECODE_HEX.len() / 2] = + match hex::const_decode_to_array::<{ BYTECODE_HEX.len() / 2 }>(BYTECODE_HEX) { + Ok(bytecode) => bytecode, + Err(_) => panic!("Router.bin-runtime did not contain valid hex"), + }; + &BYTECODE + }; + let bytecode = Bytecode::new_legacy(Bytes::from_static(BYTECODE)); + + let mut db = InMemoryDB::new(EmptyDB::new()); + // Insert the Router into the state + db.insert_account_info( + self.address, + AccountInfo { + balance: U256::from(0), + // Per EIP-161 + nonce: 1, + code_hash: bytecode.hash_slow(), + code: Some(bytecode), + }, + ); + + // Insert a non-zero nonce, as the zero nonce will update to the initial key and never be + // used for any gas estimations of `execute`, the only function estimated + db.insert_account_storage(self.address, Self::NONCE_STORAGE_SLOT, U256::from(1)).unwrap(); + + // Insert the public key to verify with + db.insert_account_storage( + self.address, + Self::SERAI_KEY_STORAGE_SLOT, + U256::from_be_bytes(Self::gas_estimation_key().1.eth_repr()), + ) + .unwrap(); + + db + }; + + // Create a custom handler so we can assume every CALL is the worst-case + let handler = { + let mut instructions = InstructionTables::<'_, _>::new_plain::(); + instructions.update_boxed(revm::interpreter::opcode::CALL, { + move |call_op, interpreter, host: &mut Context<_, _>| { + let (address_called, value, return_addr, return_len) = { + let stack = &mut interpreter.stack; + + let address = stack.peek(1).unwrap(); + let value = stack.peek(2).unwrap(); + let return_addr = stack.peek(5).unwrap(); + let return_len = stack.peek(6).unwrap(); + + ( + address, + value, + usize::try_from(return_addr).unwrap(), + usize::try_from(return_len).unwrap(), + ) + }; + let address_called = + Address::from(U160::from_be_slice(&address_called.to_be_bytes::<32>()[12 ..])); + + // Have the original call op incur its costs as programmed + call_op(interpreter, host); + + /* + Unfortunately, the call opcode executed only sets itself up, it doesn't handle the + entire inner call for us. We manually do so here by shimming the intended result. The + other option, on this path chosen, would be to shim the call-frame execution ourselves + and only then manipulate the result. + + Ideally, we wouldn't override CALL, yet STOP/RETURN (the tail of the CALL) to avoid all + of this. Those overrides weren't being successfully hit in initial experiments, and + while this solution does appear overly complicated, it's sufficiently tested to justify + itself. + + revm does cost the entire gas limit during the call setup. After the call completes, + it refunds whatever was unused. Since we manually complete the call here ourselves, + but don't implement that refund logic as we want the worst-case scenario, we do + successfully implement complete costing of the gas limit. + */ + + // Perform the call value transfer, which also marks the recipient as warm + assert!(host + .evm + .inner + .journaled_state + .transfer( + &interpreter.contract.target_address, + &address_called, + value, + &mut host.evm.inner.db + ) + .unwrap() + .is_none()); + + // Clear the call-to-be + debug_assert!(matches!(interpreter.next_action, InterpreterAction::Call { .. 
}));
+          interpreter.next_action = InterpreterAction::None;
+          interpreter.instruction_result = InstructionResult::Continue;
+
+          // Clear the existing return data
+          interpreter.return_data_buffer.clear();
+
+          // If calling an ERC20, trigger the return data's worst-case by returning `true`
+          // (as expected by compliant ERC20s)
+          if Some(address_called) == erc20 {
+            interpreter.return_data_buffer = true.abi_encode().into();
+          }
+          // Also copy the return data into memory
+          let return_len = return_len.min(interpreter.return_data_buffer.len());
+          let needed_memory_size = return_addr + return_len;
+          if interpreter.shared_memory.len() < needed_memory_size {
+            assert!(interpreter.resize_memory(needed_memory_size));
+          }
+          interpreter
+            .shared_memory
+            .slice_mut(return_addr, return_len)
+            .copy_from_slice(&interpreter.return_data_buffer[.. return_len]);
+
+          // Finally, push the result of the call onto the stack
+          interpreter.stack.push(U256::from(1)).unwrap();
+        }
+      });
+      let mut handler = Handler::mainnet::();
+      handler.set_instruction_table(instructions);
+
+      handler
+    };
+
+    EvmBuilder::default()
+      .with_db(db)
+      .with_handler(handler)
+      .modify_cfg_env(|cfg| {
+        cfg.chain_id = CHAIN_ID.try_into().unwrap();
+      })
+      .modify_tx_env(|tx| {
+        tx.gas_limit = u64::MAX;
+        tx.transact_to = self.address.into();
+      })
+      .build()
+  }
+
+  /// The worst-case gas cost for a legacy transaction which executes this batch.
+  pub fn execute_gas(&self, coin: Coin, fee_per_gas: U256, outs: &OutInstructions) -> u64 {
+    // Unfortunately, we can't cache this in self, despite the following code being written such
+    // that a common EVM instance could be used, as revm's types aren't Send/Sync and we expect the
+    // Router to be Send/Sync
+    let mut gas_estimator = self.gas_estimator(match coin {
+      Coin::Ether => None,
+      Coin::Erc20(erc20) => Some(erc20),
+    });
+
+    let fee = match coin {
+      Coin::Ether => {
+        // Use a fee of 1 so the fee payment is recognized as positive-value
+        let fee = U256::from(1);
+
+        // Set a balance of the amount sent out to ensure we don't error on that premise
+        {
+          let db = gas_estimator.db_mut();
+          let account = db.load_account(self.address).unwrap();
+          account.info.balance = fee + outs.0.iter().map(|out| out.amount).sum::();
+        }
+
+        fee
+      }
+      Coin::Erc20(_) => U256::from(0),
+    };
+
+    // Sign a dummy signature
+    let (private_key, public_key) = Self::gas_estimation_key();
+    let c = Signature::challenge(
+      // Use a nonce of 1
+      ProjectivePoint::GENERATOR,
+      &public_key,
+      &Self::execute_message(CHAIN_ID, 1, coin, fee, outs.clone()),
+    );
+    let s = Scalar::ONE + (c * private_key);
+    let sig = Signature::new(c, s).unwrap();
+
+    // Write the current transaction
+    /*
+      revm has poor documentation on whether the EVM instance can be dirtied, which would be the
+      concern if we shared a mutable reference to a singular instance across invocations, but our
+      consistent use of nonce #1 shows storage read/writes aren't being persisted. They're solely
+      returned upon execution in a `state` field we ignore.
+    */
+    {
+      let tx = gas_estimator.tx_mut();
+      tx.caller = Address::from({
+        /*
+          We assume the transaction sender is not the destination of any `OutInstruction`, making
+          all transfers to destinations cold. A malicious adversary could create an
+          `OutInstruction` whose destination is the caller stubbed here, however, to make us
+          under-estimate.
+
+          We prevent this by defining the caller as the hash of the `OutInstruction`s, forcing a
+          hash collision to cause an `OutInstruction` destination to be warm when it wasn't warmed
+          by either being the Router, being the ERC20, or by being the destination of a distinct
+          `OutInstruction`. All of those cases will affect the gas used in reality accordingly.
+        */
+        let hash = ethereum_primitives::keccak256(outs.0.abi_encode());
+        <[u8; 20]>::try_from(&hash[12 ..]).unwrap()
+      });
+      tx.data = abi::executeCall::new((
+        abi::Signature::from(&sig),
+        Address::from(coin),
+        fee,
+        outs.0.clone(),
+      ))
+      .abi_encode()
+      .into();
+    }
+
+    // Execute the transaction
+    let mut gas = match gas_estimator.transact().unwrap().result {
+      ExecutionResult::Success { gas_used, gas_refunded, .. } => {
+        assert_eq!(gas_refunded, 0);
+        gas_used
+      }
+      res => panic!("estimated execute transaction failed: {res:?}"),
+    };
+
+    // The transaction uses gas based on the amount of non-zero bytes in the calldata, which
+    // varies with the fee, which varies with the gas used. This iterates until parity
+    let initial_gas = |fee, sig| {
+      let gas = calculate_initial_tx_gas(
+        SpecId::CANCUN,
+        &abi::executeCall::new((sig, Address::from(coin), fee, outs.0.clone())).abi_encode(),
+        false,
+        &[],
+        0,
+      );
+      assert_eq!(gas.floor_gas, 0);
+      gas.initial_gas
+    };
+    let mut current_initial_gas = initial_gas(fee, abi::Signature::from(&sig));
+    loop {
+      let fee = fee_per_gas * U256::from(gas);
+      let new_initial_gas =
+        initial_gas(fee, abi::Signature { c: [0xff; 32].into(), s: [0xff; 32].into() });
+      if current_initial_gas >= new_initial_gas {
+        return gas;
+      }
+
+      gas += new_initial_gas - current_initial_gas;
+      current_initial_gas = new_initial_gas;
+    }
+  }
+
+  /// The estimated fee for this `OutInstruction`.
+  ///
+  /// This does not model the quadratic costs incurred when in a batch, nor other misc costs such
+  /// as the potential to cause one less zero byte in the fee's encoding. This is intended to
+  /// produce a per-`OutInstruction` fee to deduct from each `OutInstruction`, before all
+  /// `OutInstruction`s incur an amortized fee of what remains for the batch itself.
+  pub fn execute_out_instruction_gas_estimate(
+    &mut self,
+    coin: Coin,
+    instruction: abi::OutInstruction,
+  ) -> u64 {
+    #[allow(clippy::map_entry)] // clippy doesn't realize the multiple mutable borrows
+    if !self.empty_execute_gas.contains_key(&coin) {
+      // This can't be de-duplicated across ERC20s due to the zero bytes in the address
+      let gas = self.execute_gas(coin, U256::from(0), &OutInstructions(vec![]));
+      self.empty_execute_gas.insert(coin, gas);
+    }
+
+    let gas = self.execute_gas(coin, U256::from(0), &OutInstructions(vec![instruction]));
+    gas - self.empty_execute_gas[&coin]
+  }
+}
diff --git a/processor/ethereum/router/src/lib.rs b/processor/ethereum/router/src/lib.rs
index 053aebce..f052763e 100644
--- a/processor/ethereum/router/src/lib.rs
+++ b/processor/ethereum/router/src/lib.rs
@@ -65,6 +65,8 @@ use abi::{
   Escaped as EscapedEvent,
 };

+mod gas;
+
 #[cfg(test)]
 mod tests;

@@ -78,7 +80,7 @@ impl From<&Signature> for abi::Signature {
 }

 /// A coin on Ethereum.
-#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
 pub enum Coin {
   /// Ether, the native coin of Ethereum.
Ether, @@ -236,62 +238,25 @@ pub struct Escape { pub struct Router { provider: Arc>, address: Address, + empty_execute_gas: HashMap, } impl Router { - // Gas allocated for ERC20 calls - #[cfg(test)] - const GAS_FOR_ERC20_CALL: u64 = 100_000; - - /* - The gas limits to use for transactions. - - These are expected to be constant as a distributed group may sign the transactions invoking - these calls. Having the gas be constant prevents needing to run a protocol to determine what - gas to use. - - These gas limits may break if/when gas opcodes undergo repricing. In that case, this library is - expected to be modified with these made parameters. The caller would then be expected to pass - the correct set of prices for the network they're operating on. - */ - const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 57_736; - const UPDATE_SERAI_KEY_GAS: u64 = 60_045; - const EXECUTE_ETH_BASE_GAS: u64 = 51_131; - const EXECUTE_ERC20_BASE_GAS: u64 = 149_831; - const EXECUTE_ETH_ADDRESS_OUT_INSTRUCTION_GAS: u64 = 41_453; - const EXECUTE_ETH_CODE_OUT_INSTRUCTION_GAS: u64 = 51_723; - const EXECUTE_ERC20_ADDRESS_OUT_INSTRUCTION_GAS: u64 = 0; // TODO - const EXECUTE_ERC20_CODE_OUT_INSTRUCTION_GAS: u64 = 0; // TODO - const ESCAPE_HATCH_GAS: u64 = 61_238; - - /* - The percentage to actually use as the gas limit, in case any opcodes are repriced or errors - occurred. - - Per prior commentary, this is just intended to be best-effort. If this is unnecessary, the gas - will be unspent. If this becomes necessary, it avoids needing an update. - */ - const GAS_REPRICING_BUFFER: u64 = 120; - - fn code() -> Vec { - const BYTECODE: &[u8] = { - const BYTECODE_HEX: &[u8] = + fn init_code(key: &PublicKey) -> Vec { + const INITCODE: &[u8] = { + const INITCODE_HEX: &[u8] = include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-router/Router.bin")); - const BYTECODE: [u8; BYTECODE_HEX.len() / 2] = - match hex::const_decode_to_array::<{ BYTECODE_HEX.len() / 2 }>(BYTECODE_HEX) { + const INITCODE: [u8; INITCODE_HEX.len() / 2] = + match hex::const_decode_to_array::<{ INITCODE_HEX.len() / 2 }>(INITCODE_HEX) { Ok(bytecode) => bytecode, Err(_) => panic!("Router.bin did not contain valid hex"), }; - &BYTECODE + &INITCODE }; - BYTECODE.to_vec() - } - - fn init_code(key: &PublicKey) -> Vec { - let mut bytecode = Self::code(); // Append the constructor arguments - bytecode.extend((abi::constructorCall { initialSeraiKey: key.eth_repr().into() }).abi_encode()); - bytecode + let mut initcode = INITCODE.to_vec(); + initcode.extend((abi::constructorCall { initialSeraiKey: key.eth_repr().into() }).abi_encode()); + initcode } /// Obtain the transaction to deploy this contract. @@ -319,7 +284,7 @@ impl Router { else { return Ok(None); }; - Ok(Some(Self { provider, address })) + Ok(Some(Self { provider, address, empty_execute_gas: HashMap::new() })) } /// The address of the router. @@ -338,12 +303,11 @@ impl Router { /// Construct a transaction to confirm the next key representing Serai. /// - /// The gas price is not set and is left to the caller. + /// The gas limit and gas price are not set and are left to the caller. pub fn confirm_next_serai_key(&self, sig: &Signature) -> TxLegacy { TxLegacy { to: TxKind::Call(self.address), input: abi::confirmNextSeraiKeyCall::new((abi::Signature::from(sig),)).abi_encode().into(), - gas_limit: Self::CONFIRM_NEXT_SERAI_KEY_GAS * Self::GAS_REPRICING_BUFFER / 100, ..Default::default() } } @@ -359,7 +323,7 @@ impl Router { /// Construct a transaction to update the key representing Serai. 
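With the estimate constants gone, a caller (e.g. a relayer) is now expected to combine `execute_gas` with its chosen fee rate before signing. A sketch of that flow, where `fee_per_gas`, `coin`, `outs`, and `sig` are assumed placeholder values (not code from the patch):

  // Worst-case gas via the revm simulation, then price the transaction with it
  let fee_per_gas = U256::from(100_000_000_000u128); // assumed 100 gwei
  let gas = router.execute_gas(coin, fee_per_gas, &outs);
  let mut tx = router.execute(coin, fee_per_gas * U256::from(gas), outs, &sig);
  tx.gas_limit = gas;
  tx.gas_price = 100_000_000_000;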
/// - /// The gas price is not set and is left to the caller. + /// The gas limit and gas price are not set and are left to the caller. pub fn update_serai_key(&self, public_key: &PublicKey, sig: &Signature) -> TxLegacy { TxLegacy { to: TxKind::Call(self.address), @@ -369,7 +333,6 @@ impl Router { )) .abi_encode() .into(), - gas_limit: Self::UPDATE_SERAI_KEY_GAS * Self::GAS_REPRICING_BUFFER / 100, ..Default::default() } } @@ -422,89 +385,15 @@ impl Router { .abi_encode() } - /// The estimated gas cost for this OutInstruction. - /// - /// This is not guaranteed to be correct or even sufficient. It is a hint and a hint alone used - /// for determining relayer fees. - fn execute_out_instruction_gas_estimate_internal( - coin: Coin, - instruction: &abi::OutInstruction, - ) -> u64 { - // As per Dencun, used for estimating gas for determining relayer fees - const NON_ZERO_BYTE_GAS_COST: u64 = 16; - const MEMORY_EXPANSION_COST: u64 = 3; // Does not model the quadratic cost - - let size = u64::try_from(instruction.abi_encoded_size()).unwrap(); - let calldata_memory_cost = - (size * NON_ZERO_BYTE_GAS_COST) + (size.div_ceil(32) * MEMORY_EXPANSION_COST); - - match coin { - Coin::Ether => match instruction.destinationType { - // The calldata and memory cost is already part of this - abi::DestinationType::Address => Self::EXECUTE_ETH_ADDRESS_OUT_INSTRUCTION_GAS, - abi::DestinationType::Code => { - // OutInstructions can't be encoded/decoded and doesn't have pub internals, enabling it - // to be correct by construction - let code = abi::CodeDestination::abi_decode(&instruction.destination, true).unwrap(); - Self::EXECUTE_ETH_CODE_OUT_INSTRUCTION_GAS + - calldata_memory_cost + - u64::from(code.gasLimit) - } - abi::DestinationType::__Invalid => unreachable!(), - }, - Coin::Erc20(_) => match instruction.destinationType { - abi::DestinationType::Address => Self::EXECUTE_ERC20_ADDRESS_OUT_INSTRUCTION_GAS, - abi::DestinationType::Code => { - let code = abi::CodeDestination::abi_decode(&instruction.destination, true).unwrap(); - Self::EXECUTE_ERC20_CODE_OUT_INSTRUCTION_GAS + - calldata_memory_cost + - u64::from(code.gasLimit) - } - abi::DestinationType::__Invalid => unreachable!(), - }, - } - } - - /// The estimated gas cost for this OutInstruction. - /// - /// This is not guaranteed to be correct or even sufficient. It is a hint and a hint alone used - /// for determining relayer fees. - pub fn execute_out_instruction_gas_estimate(coin: Coin, address: SeraiAddress) -> u64 { - Self::execute_out_instruction_gas_estimate_internal( - coin, - &abi::OutInstruction::from(&(address, U256::ZERO)), - ) - } - - /// The estimated gas cost for this batch. - /// - /// This is not guaranteed to be correct or even sufficient. It is a hint and a hint alone used - /// for determining relayer fees. - pub fn execute_gas_estimate(coin: Coin, outs: &OutInstructions) -> u64 { - (match coin { - // This is warm as it's the message sender who is called with the fee payment - Coin::Ether => Self::EXECUTE_ETH_BASE_GAS, - // This is cold as we say the fee payment is the one warming the ERC20 - Coin::Erc20(_) => Self::EXECUTE_ERC20_BASE_GAS, - }) + outs - .0 - .iter() - .map(|out| Self::execute_out_instruction_gas_estimate_internal(coin, out)) - .sum::() - } - /// Construct a transaction to execute a batch of `OutInstruction`s. /// - /// The gas limit is set to an estimate which may or may not be sufficient. The caller is - /// expected to set a correct gas limit. The gas price is not set and is left to the caller. 
+ /// The gas limit and gas price are not set and are left to the caller. pub fn execute(&self, coin: Coin, fee: U256, outs: OutInstructions, sig: &Signature) -> TxLegacy { - let gas = Self::execute_gas_estimate(coin, &outs); TxLegacy { to: TxKind::Call(self.address), input: abi::executeCall::new((abi::Signature::from(sig), Address::from(coin), fee, outs.0)) .abi_encode() .into(), - gas_limit: gas * Self::GAS_REPRICING_BUFFER / 100, ..Default::default() } } @@ -520,12 +409,11 @@ impl Router { /// Construct a transaction to trigger the escape hatch. /// - /// The gas price is not set and is left to the caller. + /// The gas limit and gas price are not set and are left to the caller. pub fn escape_hatch(&self, escape_to: Address, sig: &Signature) -> TxLegacy { TxLegacy { to: TxKind::Call(self.address), input: abi::escapeHatchCall::new((abi::Signature::from(sig), escape_to)).abi_encode().into(), - gas_limit: Self::ESCAPE_HATCH_GAS * Self::GAS_REPRICING_BUFFER / 100, ..Default::default() } } diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 77e8b50d..093af39f 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -6,14 +6,14 @@ use group::ff::Field; use k256::{Scalar, ProjectivePoint}; use alloy_core::primitives::{Address, U256}; -use alloy_sol_types::{SolCall, SolEvent}; +use alloy_sol_types::{SolValue, SolCall, SolEvent}; use alloy_consensus::{TxLegacy, Signed}; use alloy_rpc_types_eth::{BlockNumberOrTag, TransactionInput, TransactionRequest}; use alloy_simple_request_transport::SimpleRequest; use alloy_rpc_client::ClientBuilder; -use alloy_provider::{Provider, RootProvider}; +use alloy_provider::{Provider, RootProvider, ext::TraceApi}; use alloy_node_bindings::{Anvil, AnvilInstance}; @@ -41,8 +41,6 @@ mod constants; mod erc20; use erc20::Erc20; -const CALL_GAS_STIPEND: u64 = 2_300; - pub(crate) fn test_key() -> (Scalar, PublicKey) { loop { let key = Scalar::random(&mut OsRng); @@ -63,15 +61,24 @@ fn sign(key: (Scalar, PublicKey), msg: &[u8]) -> Signature { /// Calculate the gas used by a transaction if none of its calldata's bytes were zero struct CalldataAgnosticGas; impl CalldataAgnosticGas { - fn calculate(tx: &TxLegacy, mut gas_used: u64) -> u64 { - const ZERO_BYTE_GAS_COST: u64 = 4; - const NON_ZERO_BYTE_GAS_COST: u64 = 16; - for b in &tx.input { - if *b == 0 { - gas_used += NON_ZERO_BYTE_GAS_COST - ZERO_BYTE_GAS_COST; + #[must_use] + fn calculate(input: &[u8], mut constant_zero_bytes: usize, gas_used: u64) -> u64 { + use revm::{primitives::SpecId, interpreter::gas::calculate_initial_tx_gas}; + + let mut without_variable_zero_bytes = Vec::with_capacity(input.len()); + for byte in input { + if (constant_zero_bytes > 0) && (*byte == 0) { + constant_zero_bytes -= 1; + without_variable_zero_bytes.push(0); + } else { + // If this is a variably zero byte, or a non-zero byte, push a non-zero byte + without_variable_zero_bytes.push(0xff); } } - gas_used + gas_used + + (calculate_initial_tx_gas(SpecId::CANCUN, &without_variable_zero_bytes, false, &[], 0) + .initial_gas - + calculate_initial_tx_gas(SpecId::CANCUN, input, false, &[], 0).initial_gas) } } @@ -173,6 +180,7 @@ impl Test { async fn confirm_next_serai_key(&mut self) { let mut tx = self.confirm_next_serai_key_tx(); + tx.gas_limit = Router::CONFIRM_NEXT_SERAI_KEY_GAS + 5_000; tx.gas_price = 100_000_000_000; let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(&self.provider, 
tx.clone()).await; @@ -181,12 +189,12 @@ impl Test { // is the highest possible gas cost and what the constant is derived from if self.state.key.is_none() { assert_eq!( - CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used), + CalldataAgnosticGas::calculate(tx.tx().input.as_ref(), 0, receipt.gas_used), Router::CONFIRM_NEXT_SERAI_KEY_GAS, ); } else { assert!( - CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used) < + CalldataAgnosticGas::calculate(tx.tx().input.as_ref(), 0, receipt.gas_used) < Router::CONFIRM_NEXT_SERAI_KEY_GAS ); } @@ -221,18 +229,20 @@ impl Test { async fn update_serai_key(&mut self) { let (next_key, mut tx) = self.update_serai_key_tx(); + tx.gas_limit = Router::UPDATE_SERAI_KEY_GAS + 5_000; tx.gas_price = 100_000_000_000; let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; assert!(receipt.status()); if self.state.next_key.is_none() { assert_eq!( - CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used), + CalldataAgnosticGas::calculate(tx.tx().input.as_ref(), 0, receipt.gas_used), Router::UPDATE_SERAI_KEY_GAS, ); } else { assert!( - CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used) < Router::UPDATE_SERAI_KEY_GAS + CalldataAgnosticGas::calculate(tx.tx().input.as_ref(), 0, receipt.gas_used) < + Router::UPDATE_SERAI_KEY_GAS ); } @@ -323,9 +333,8 @@ impl Test { &self, coin: Coin, fee: U256, - out_instructions: &[(SeraiEthereumAddress, U256)], + out_instructions: OutInstructions, ) -> ([u8; 32], TxLegacy) { - let out_instructions = OutInstructions::from(out_instructions); let msg = Router::execute_message( self.chain_id, self.state.next_nonce, @@ -334,13 +343,17 @@ impl Test { out_instructions.clone(), ); let msg_hash = ethereum_primitives::keccak256(&msg); - let sig = sign(self.state.key.unwrap(), &msg); - - let mut tx = self.router.execute(coin, fee, out_instructions, &sig); - // Restore the original estimate as the gas limit to ensure it's sufficient, at least in our - // test cases - tx.gas_limit = (tx.gas_limit * 100) / Router::GAS_REPRICING_BUFFER; + let sig = loop { + let sig = sign(self.state.key.unwrap(), &msg); + // Standardize the zero bytes in the signature for calldata gas reasons + let has_zero_byte = sig.to_bytes().iter().filter(|b| **b == 0).count() != 0; + if has_zero_byte { + continue; + } + break sig; + }; + let tx = self.router.execute(coin, fee, out_instructions, &sig); (msg_hash, tx) } @@ -348,16 +361,18 @@ impl Test { &mut self, coin: Coin, fee: U256, - out_instructions: &[(SeraiEthereumAddress, U256)], + out_instructions: OutInstructions, results: Vec, - ) -> (Signed, u64, u64) { + ) -> (Signed, u64) { let (message_hash, mut tx) = self.execute_tx(coin, fee, out_instructions); + tx.gas_limit = 1_000_000; tx.gas_price = 100_000_000_000; let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; assert!(receipt.status()); - // We don't check the gas for `execute` as it's infeasible. 
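The rewritten `CalldataAgnosticGas::calculate` above defers to `revm`'s intrinsic-gas schedule rather than hand-rolled byte costs. For reference, a minimal sketch of the rule it relies on, as priced from Istanbul (EIP-2028) through Cancun: 21,000 base transaction gas, 4 gas per zero byte of calldata, 16 gas per non-zero byte (access lists, unused here, are priced separately):

  fn intrinsic_calldata_gas(input: &[u8]) -> u64 {
    const BASE_TX_GAS: u64 = 21_000;
    const ZERO_BYTE_GAS: u64 = 4;
    const NON_ZERO_BYTE_GAS: u64 = 16;
    input.iter().fold(BASE_TX_GAS, |gas, byte| {
      gas + if *byte == 0 { ZERO_BYTE_GAS } else { NON_ZERO_BYTE_GAS }
    })
  }

Masking every variably-zero byte to 0xff, as the helper does, prices the worst case, so the measured constants hold however many zero bytes a particular signature happens to contain.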
Due to our use of account - // abstraction, it isn't a critical if we do ever under-estimate, solely an unprofitable relay + + // We don't check the gas for `execute` here, instead at the call-sites where we have + // beneficial context { let block = receipt.block_number.unwrap(); @@ -372,14 +387,15 @@ impl Test { self.state.next_nonce += 1; self.verify_state().await; - // We do return the gas used in case a caller can benefit from it - (tx.clone(), receipt.gas_used, CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used)) + (tx.clone(), receipt.gas_used) } fn escape_hatch_tx(&self, escape_to: Address) -> TxLegacy { let msg = Router::escape_hatch_message(self.chain_id, self.state.next_nonce, escape_to); let sig = sign(self.state.key.unwrap(), &msg); - self.router.escape_hatch(escape_to, &sig) + let mut tx = self.router.escape_hatch(escape_to, &sig); + tx.gas_limit = Router::ESCAPE_HATCH_GAS + 5_000; + tx } async fn escape_hatch(&mut self) { @@ -395,7 +411,11 @@ impl Test { let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; assert!(receipt.status()); - assert_eq!(CalldataAgnosticGas::calculate(tx.tx(), receipt.gas_used), Router::ESCAPE_HATCH_GAS); + // This encodes an address which has 12 bytes of padding + assert_eq!( + CalldataAgnosticGas::calculate(tx.tx().input.as_ref(), 12, receipt.gas_used), + Router::ESCAPE_HATCH_GAS + ); { let block = receipt.block_number.unwrap(); @@ -443,7 +463,9 @@ async fn test_no_serai_key() { IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) )); assert!(matches!( - test.call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), &[]).1).await, + test + .call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), [].as_slice().into()).1) + .await, IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) )); assert!(matches!( @@ -645,72 +667,107 @@ async fn test_erc20_top_level_transfer_in_instruction() { async fn test_empty_execute() { let mut test = Test::new().await; test.confirm_next_serai_key().await; - let () = - test.provider.raw_request("anvil_setBalance".into(), (test.router.address(), 1)).await.unwrap(); { - let (tx, raw_gas_used, gas_used) = test.execute(Coin::Ether, U256::from(1), &[], vec![]).await; - // We don't use the call gas stipend here - const UNUSED_GAS: u64 = CALL_GAS_STIPEND; - assert_eq!(gas_used + UNUSED_GAS, Router::EXECUTE_ETH_BASE_GAS); + let () = test + .provider + .raw_request("anvil_setBalance".into(), (test.router.address(), 100_000)) + .await + .unwrap(); - assert_eq!(test.provider.get_balance(test.router.address()).await.unwrap(), U256::from(0)); + let gas = test.router.execute_gas(Coin::Ether, U256::from(1), &[].as_slice().into()); + let fee = U256::from(gas); + let (tx, gas_used) = test.execute(Coin::Ether, fee, [].as_slice().into(), vec![]).await; + // We don't use the call gas stipend here + const UNUSED_GAS: u64 = revm::interpreter::gas::CALL_STIPEND; + assert_eq!(gas_used + UNUSED_GAS, gas); + + assert_eq!( + test.provider.get_balance(test.router.address()).await.unwrap(), + U256::from(100_000 - gas) + ); let minted_to_sender = u128::from(tx.tx().gas_limit) * tx.tx().gas_price; - let spent_by_sender = u128::from(raw_gas_used) * tx.tx().gas_price; + let spent_by_sender = u128::from(gas_used) * tx.tx().gas_price; assert_eq!( test.provider.get_balance(tx.recover_signer().unwrap()).await.unwrap() - U256::from(minted_to_sender - spent_by_sender), - U256::from(1) + U256::from(gas) ); } { - // This uses a token of 
Address(0) as it'll be interpreted as a non-standard ERC20 which uses 0 - // gas, letting us safely evaluate the EXECUTE_ERC20_BASE_GAS constant - let (_tx, _raw_gas_used, gas_used) = - test.execute(Coin::Erc20(Address::ZERO), U256::from(1), &[], vec![]).await; - // Add an extra 1000 gas for decoding the return value which would exist if a compliant ERC20 - const UNUSED_GAS: u64 = Router::GAS_FOR_ERC20_CALL + 1000; - assert_eq!(gas_used + UNUSED_GAS, Router::EXECUTE_ERC20_BASE_GAS); + let token = Address::from([0xff; 20]); + { + #[rustfmt::skip] + let code = vec![ + 0x60, // push 1 byte | 3 gas + 0x01, // the value 1 + 0x5f, // push 0 | 2 gas + 0x52, // mstore to offset 0 the value 1 | 3 gas + 0x60, // push 1 byte | 3 gas + 0x20, // the value 32 + 0x5f, // push 0 | 2 gas + 0xf3, // return from offset 0 1 word | 0 gas + // 13 gas for the execution plus a single word of memory for 16 gas total + ]; + // Deploy our 'token' + let () = test.provider.raw_request("anvil_setCode".into(), (token, code)).await.unwrap(); + let call = + TransactionRequest::default().to(token).input(TransactionInput::new(vec![].into())); + // Check it returns the expected result + assert_eq!( + test.provider.call(&call).await.unwrap().as_ref(), + U256::from(1).abi_encode().as_slice() + ); + // Check it has the expected gas cost + assert_eq!(test.provider.estimate_gas(&call).await.unwrap(), 21_000 + 16); + } + + let gas = test.router.execute_gas(Coin::Erc20(token), U256::from(0), &[].as_slice().into()); + let fee = U256::from(0); + let (_tx, gas_used) = test.execute(Coin::Erc20(token), fee, [].as_slice().into(), vec![]).await; + const UNUSED_GAS: u64 = Router::GAS_FOR_ERC20_CALL - 16; + assert_eq!(gas_used + UNUSED_GAS, gas); } } +// TODO: Test order, length of results +// TODO: Test reentrancy + #[tokio::test] async fn test_eth_address_out_instruction() { let mut test = Test::new().await; test.confirm_next_serai_key().await; - let () = - test.provider.raw_request("anvil_setBalance".into(), (test.router.address(), 3)).await.unwrap(); + let () = test + .provider + .raw_request("anvil_setBalance".into(), (test.router.address(), 100_000)) + .await + .unwrap(); let mut rand_address = [0xff; 20]; OsRng.fill_bytes(&mut rand_address); - let (tx, raw_gas_used, gas_used) = test - .execute( - Coin::Ether, - U256::from(1), - &[(SeraiEthereumAddress::Address(rand_address), U256::from(2))], - vec![true], - ) - .await; - // We don't use the call gas stipend here - const UNUSED_GAS: u64 = CALL_GAS_STIPEND; - // This doesn't model the quadratic memory costs - let gas_for_eth_address_out_instruction = gas_used + UNUSED_GAS - Router::EXECUTE_ETH_BASE_GAS; - // 2000 gas as a surplus for the quadratic memory cost and any inaccuracies - assert_eq!( - gas_for_eth_address_out_instruction + 2000, - Router::EXECUTE_ETH_ADDRESS_OUT_INSTRUCTION_GAS - ); + let amount_out = U256::from(2); + let out_instructions = + OutInstructions::from([(SeraiEthereumAddress::Address(rand_address), amount_out)].as_slice()); - assert_eq!(test.provider.get_balance(test.router.address()).await.unwrap(), U256::from(0)); + let gas = test.router.execute_gas(Coin::Ether, U256::from(1), &out_instructions); + let fee = U256::from(gas); + let (tx, gas_used) = test.execute(Coin::Ether, fee, out_instructions, vec![true]).await; + const UNUSED_GAS: u64 = 2 * revm::interpreter::gas::CALL_STIPEND; + assert_eq!(gas_used + UNUSED_GAS, gas); + + assert_eq!( + test.provider.get_balance(test.router.address()).await.unwrap(), + U256::from(100_000) - amount_out - fee + ); let 
minted_to_sender = u128::from(tx.tx().gas_limit) * tx.tx().gas_price; - let spent_by_sender = u128::from(raw_gas_used) * tx.tx().gas_price; + let spent_by_sender = u128::from(gas_used) * tx.tx().gas_price; assert_eq!( test.provider.get_balance(tx.recover_signer().unwrap()).await.unwrap() - U256::from(minted_to_sender - spent_by_sender), - U256::from(1) + U256::from(fee) ); - assert_eq!(test.provider.get_balance(rand_address.into()).await.unwrap(), U256::from(2)); + assert_eq!(test.provider.get_balance(rand_address.into()).await.unwrap(), amount_out); } #[tokio::test] @@ -726,39 +783,61 @@ async fn test_erc20_address_out_instruction() { async fn test_eth_code_out_instruction() { let mut test = Test::new().await; test.confirm_next_serai_key().await; - let () = - test.provider.raw_request("anvil_setBalance".into(), (test.router.address(), 3)).await.unwrap(); + let () = test + .provider + .raw_request("anvil_setBalance".into(), (test.router.address(), 1_000_000)) + .await + .unwrap(); let mut rand_address = [0xff; 20]; OsRng.fill_bytes(&mut rand_address); - let (tx, raw_gas_used, gas_used) = test - .execute( - Coin::Ether, - U256::from(1), - &[( - SeraiEthereumAddress::Contract(ContractDeployment::new(100_000, vec![]).unwrap()), - U256::from(2), - )], - vec![true], - ) - .await; - // This doesn't model the quadratic memory costs - let gas_for_eth_code_out_instruction = gas_used - Router::EXECUTE_ETH_BASE_GAS; - // 2000 gas as a surplus for the quadratic memory cost and any inaccuracies - assert_eq!(gas_for_eth_code_out_instruction + 2000, Router::EXECUTE_ETH_CODE_OUT_INSTRUCTION_GAS); + let amount_out = U256::from(2); + let out_instructions = OutInstructions::from( + [( + SeraiEthereumAddress::Contract(ContractDeployment::new(50_000, vec![]).unwrap()), + amount_out, + )] + .as_slice(), + ); - assert_eq!(test.provider.get_balance(test.router.address()).await.unwrap(), U256::from(0)); + let gas = test.router.execute_gas(Coin::Ether, U256::from(1), &out_instructions); + let fee = U256::from(gas); + let (tx, gas_used) = test.execute(Coin::Ether, fee, out_instructions, vec![true]).await; + + // We use call-traces here to determine how much gas was allowed but unused due to the complexity + // of modeling the call to the Router itself and the following CREATE + let mut unused_gas = 0; + { + let traces = test.provider.trace_transaction(*tx.hash()).await.unwrap(); + // Skip the call to the Router and the ecrecover + let mut traces = traces.iter().skip(2); + while let Some(trace) = traces.next() { + let trace = &trace.trace; + // We're tracing the Router's immediate actions, and it doesn't immediately call CREATE + // It only makes a call to itself which calls CREATE + let gas_provided = trace.action.as_call().as_ref().unwrap().gas; + let gas_spent = trace.result.as_ref().unwrap().gas_used(); + unused_gas += gas_provided - gas_spent; + for _ in 0 .. 
trace.subtraces { + // Skip the subtraces for this call (such as CREATE) + traces.next().unwrap(); + } + } + } + assert_eq!(gas_used + unused_gas, gas); + + assert_eq!( + test.provider.get_balance(test.router.address()).await.unwrap(), + U256::from(1_000_000) - amount_out - fee + ); let minted_to_sender = u128::from(tx.tx().gas_limit) * tx.tx().gas_price; - let spent_by_sender = u128::from(raw_gas_used) * tx.tx().gas_price; + let spent_by_sender = u128::from(gas_used) * tx.tx().gas_price; assert_eq!( test.provider.get_balance(tx.recover_signer().unwrap()).await.unwrap() - U256::from(minted_to_sender - spent_by_sender), - U256::from(1) - ); - assert_eq!( - test.provider.get_balance(test.router.address().create(1)).await.unwrap(), - U256::from(2) + U256::from(fee) ); + assert_eq!(test.provider.get_balance(test.router.address().create(1)).await.unwrap(), amount_out); } #[tokio::test] @@ -825,7 +904,9 @@ async fn test_escape_hatch() { IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) )); assert!(matches!( - test.call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), &[]).1).await, + test + .call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), [].as_slice().into()).1) + .await, IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) )); // We reject further attempts to update the escape hatch to prevent the last key from being From e742a6b0ec8b27938e0208daa06a273743b697c4 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 02:08:01 -0500 Subject: [PATCH 349/368] Test ERC20 OutInstructions --- Cargo.lock | 11 +++ processor/ethereum/router/Cargo.toml | 2 +- processor/ethereum/router/src/gas.rs | 10 ++- processor/ethereum/router/src/tests/mod.rs | 87 ++++++++++++++++------ 4 files changed, 83 insertions(+), 27 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 72cc661e..014a6d40 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -317,6 +317,7 @@ dependencies = [ "alloy-network-primitives", "alloy-primitives", "alloy-rpc-client", + "alloy-rpc-types-debug", "alloy-rpc-types-eth", "alloy-rpc-types-trace", "alloy-transport", @@ -392,6 +393,16 @@ dependencies = [ "alloy-serde", ] +[[package]] +name = "alloy-rpc-types-debug" +version = "0.9.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "358d6a8d7340b9eb1a7589a6c1fb00df2c9b26e90737fa5ed0108724dd8dac2c" +dependencies = [ + "alloy-primitives", + "serde", +] + [[package]] name = "alloy-rpc-types-eth" version = "0.9.2" diff --git a/processor/ethereum/router/Cargo.toml b/processor/ethereum/router/Cargo.toml index 07c88fe6..5cdd0b3a 100644 --- a/processor/ethereum/router/Cargo.toml +++ b/processor/ethereum/router/Cargo.toml @@ -61,7 +61,7 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] } k256 = { version = "0.13", default-features = false, features = ["std"] } -alloy-provider = { version = "0.9", default-features = false, features = ["trace-api"] } +alloy-provider = { version = "0.9", default-features = false, features = ["debug-api", "trace-api"] } alloy-rpc-client = { version = "0.9", default-features = false } alloy-node-bindings = { version = "0.9", default-features = false } diff --git a/processor/ethereum/router/src/gas.rs b/processor/ethereum/router/src/gas.rs index 22aab3f4..769a2010 100644 --- a/processor/ethereum/router/src/gas.rs +++ b/processor/ethereum/router/src/gas.rs @@ -169,8 +169,14 @@ impl Router { // Clear the existing return data interpreter.return_data_buffer.clear(); - // If calling an ERC20, trigger the return 
data's worst-case by returning `true` - // (as expected by compliant ERC20s) + /* + If calling an ERC20, trigger the return data's worst-case by returning `true` + (as expected by compliant ERC20s). Else return none, as we expect none or won't bother + copying/decoding the return data. + + This doesn't affect calls to ecrecover as those use STATICCALL and this overrides CALL + alone. + */ if Some(address_called) == erc20 { interpreter.return_data_buffer = true.abi_encode().into(); } diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 093af39f..c0e43685 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -13,7 +13,10 @@ use alloy_consensus::{TxLegacy, Signed}; use alloy_rpc_types_eth::{BlockNumberOrTag, TransactionInput, TransactionRequest}; use alloy_simple_request_transport::SimpleRequest; use alloy_rpc_client::ClientBuilder; -use alloy_provider::{Provider, RootProvider, ext::TraceApi}; +use alloy_provider::{ + Provider, RootProvider, + ext::{DebugApi, TraceApi}, +}; use alloy_node_bindings::{Anvil, AnvilInstance}; @@ -120,7 +123,7 @@ impl Test { async fn new() -> Self { // The following is explicitly only evaluated against the cancun network upgrade at this time - let anvil = Anvil::new().arg("--hardfork").arg("cancun").spawn(); + let anvil = Anvil::new().arg("--hardfork").arg("cancun").arg("--tracing").spawn(); let provider = Arc::new(RootProvider::new( ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true), @@ -435,6 +438,38 @@ impl Test { tx.gas_price = 100_000_000_000; tx } + + async fn gas_unused_by_calls(&self, tx: &Signed) -> u64 { + let mut unused_gas = 0; + + // Handle the difference between the gas limits and gas used values + let traces = self.provider.trace_transaction(*tx.hash()).await.unwrap(); + // Skip the initial call to the Router and the call to ecrecover + let mut traces = traces.iter().skip(2); + while let Some(trace) = traces.next() { + let trace = &trace.trace; + // We're tracing the Router's immediate actions, and it doesn't immediately call CREATE + // It only makes a call to itself which calls CREATE + let gas_provided = trace.action.as_call().as_ref().unwrap().gas; + let gas_spent = trace.result.as_ref().unwrap().gas_used(); + unused_gas += gas_provided - gas_spent; + for _ in 0 .. 
trace.subtraces { + // Skip the subtraces for this call (such as CREATE) + traces.next().unwrap(); + } + } + + // Also handle any refunds + { + let trace = + self.provider.debug_trace_transaction(*tx.hash(), Default::default()).await.unwrap(); + let refund = + trace.try_into_default_frame().unwrap().struct_logs.last().unwrap().refund_counter; + unused_gas += refund.unwrap_or(0) + } + + unused_gas + } } #[tokio::test] @@ -772,11 +807,32 @@ async fn test_eth_address_out_instruction() { #[tokio::test] async fn test_erc20_address_out_instruction() { - todo!("TODO") - /* + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + let erc20 = Erc20::deploy(&test).await; + let coin = Coin::Erc20(erc20.address()); + + let mut rand_address = [0xff; 20]; + OsRng.fill_bytes(&mut rand_address); + let amount_out = U256::from(2); + let out_instructions = + OutInstructions::from([(SeraiEthereumAddress::Address(rand_address), amount_out)].as_slice()); + + let gas = test.router.execute_gas(coin, U256::from(1), &out_instructions); + let fee = U256::from(gas); + + // Mint to the Router the necessary amount of the ERC20 + erc20.mint(&test, test.router.address(), amount_out + fee).await; + + let (tx, gas_used) = test.execute(coin, fee, out_instructions, vec![true]).await; + // Uses traces due to the complexity of modeling Erc20::transfer + let unused_gas = test.gas_unused_by_calls(&tx).await; + assert_eq!(gas_used + unused_gas, gas); + assert_eq!(erc20.balance_of(&test, test.router.address()).await, U256::from(0)); - assert_eq!(erc20.balance_of(&test, test.state.escaped_to.unwrap()).await, amount); - */ + assert_eq!(erc20.balance_of(&test, tx.recover_signer().unwrap()).await, U256::from(fee)); + assert_eq!(erc20.balance_of(&test, rand_address.into()).await, amount_out); } #[tokio::test] @@ -806,24 +862,7 @@ async fn test_eth_code_out_instruction() { // We use call-traces here to determine how much gas was allowed but unused due to the complexity // of modeling the call to the Router itself and the following CREATE - let mut unused_gas = 0; - { - let traces = test.provider.trace_transaction(*tx.hash()).await.unwrap(); - // Skip the call to the Router and the ecrecover - let mut traces = traces.iter().skip(2); - while let Some(trace) = traces.next() { - let trace = &trace.trace; - // We're tracing the Router's immediate actions, and it doesn't immediately call CREATE - // It only makes a call to itself which calls CREATE - let gas_provided = trace.action.as_call().as_ref().unwrap().gas; - let gas_spent = trace.result.as_ref().unwrap().gas_used(); - unused_gas += gas_provided - gas_spent; - for _ in 0 .. 
trace.subtraces { - // Skip the subtraces for this call (such as CREATE) - traces.next().unwrap(); - } - } - } + let unused_gas = test.gas_unused_by_calls(&tx).await; assert_eq!(gas_used + unused_gas, gas); assert_eq!( From 75c6427d7c02a1ada5412a4e8b59c7f013038072 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 04:23:50 -0500 Subject: [PATCH 350/368] CREATE uses RLP, not ABI-encoding --- .../ethereum/router/contracts/Router.sol | 93 +++++++++++++---- processor/ethereum/router/src/gas.rs | 10 +- processor/ethereum/router/src/tests/mod.rs | 99 +++++++------------ 3 files changed, 115 insertions(+), 87 deletions(-) diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 3bd5c73f..79d01226 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -35,17 +35,6 @@ contract Router is IRouterWithoutCollisions { /// @dev The address in transient storage used for the reentrancy guard bytes32 constant REENTRANCY_GUARD_SLOT = bytes32(uint256(keccak256("ReentrancyGuard Router")) - 1); - /** - * @dev The next nonce used to determine the address of contracts deployed with CREATE. This is - * used to predict the addresses of deployed contracts ahead of time. - */ - /* - We don't expose a getter for this as it shouldn't be expected to have any specific value at a - given moment in time. If someone wants to know the address of their deployed contract, they can - have it emit an event and verify the emitting contract is the expected one. - */ - uint256 private _smartContractNonce; - /** * @dev The nonce to verify the next signature with, incremented upon an action to prevent * replays/out-of-order execution @@ -64,6 +53,17 @@ contract Router is IRouterWithoutCollisions { */ bytes32 private _seraiKey; + /** + * @dev The next nonce used to determine the address of contracts deployed with CREATE. This is + * used to predict the addresses of deployed contracts ahead of time. + */ + /* + We don't expose a getter for this as it shouldn't be expected to have any specific value at a + given moment in time. If someone wants to know the address of their deployed contract, they can + have it emit an event and verify the emitting contract is the expected one. + */ + uint64 private _smartContractNonce; + /// @dev The address escaped to address private _escapedTo; @@ -84,6 +84,7 @@ contract Router is IRouterWithoutCollisions { // Clear the re-entrancy guard to allow multiple transactions to non-re-entrant functions within // a transaction + // slither-disable-next-line assembly assembly { tstore(reentrancyGuardSlot, 0) } @@ -163,8 +164,8 @@ contract Router is IRouterWithoutCollisions { bytes32 signatureC; bytes32 signatureS; - // slither-disable-next-line assembly uint256 chainID = block.chainid; + // slither-disable-next-line assembly assembly { // Read the signature (placed after the function signature) signatureC := mload(add(message, 36)) @@ -402,6 +403,64 @@ contract Router is IRouterWithoutCollisions { } } + /// @notice The header for an address, when encoded with RLP for the purposes of CREATE + /// @dev 0x80 + 20, shifted left 30 bytes + uint256 constant ADDRESS_HEADER = (0x80 + 20) << (30 * 8); + + /// @notice Calculate the next address which will be deployed to by CREATE + /** + * @dev This manually implements the RLP encoding to save gas over the usage of CREATE2. 
While the + * the keccak256 call itself is surprisingly cheap, the memory cost (quadratic and already + * detrimental to other `OutInstruction`s within the same batch) is sufficiently concerning to + * justify this. + */ + function createAddress(uint256 nonce) private view returns (address) { + unchecked { + /* + The hashed RLP-encoding is: + - Header (1 byte) + - Address header (1 bytes) + - Address (20 bytes) + - Nonce (1 ..= 9 bytes) + Since the maximum length is less than 32 bytes, we calculate this on the stack. + */ + // Shift the address from bytes 12 .. 32 to 2 .. 22 + uint256 rlpEncoding = uint256(uint160(address(this))) << 80; + uint256 rlpEncodingLen; + if (nonce <= 0x7f) { + // 22 + 1 + rlpEncodingLen = 23; + // Shift from byte 31 to byte 22 + rlpEncoding |= (nonce << 72); + } else { + uint256 bitsNeeded = 8; + while (nonce >= (1 << bitsNeeded)) { + bitsNeeded += 8; + } + uint256 bytesNeeded = bitsNeeded / 8; + rlpEncodingLen = 22 + bytesNeeded; + // Shift from byte 31 to byte 22 + rlpEncoding |= 0x80 + (bytesNeeded << 72); + // Shift past the unnecessary bytes + rlpEncoding |= nonce << (72 - bitsNeeded); + } + rlpEncoding |= ADDRESS_HEADER; + // The header, which does not include itself in its length, shifted into the first byte + rlpEncoding |= (0xc0 + (rlpEncodingLen - 1)) << 248; + + // Store this to the scratch space + bytes memory rlp; + // slither-disable-next-line assembly + assembly { + mstore(0, rlpEncodingLen) + mstore(32, rlpEncoding) + rlp := 0 + } + + return address(uint160(uint256(keccak256(rlp)))); + } + } + /// @notice Execute some arbitrary code within a secure sandbox /** * @dev This performs sandboxing by deploying this code with `CREATE`. This is an external @@ -473,12 +532,12 @@ contract Router is IRouterWithoutCollisions { /* If it's an ERC20, we calculate the address of the will-be contract and transfer to it before deployment. This avoids needing to deploy the contract, then call transfer, then - call the contract again - */ - address nextAddress = address( - uint160(uint256(keccak256(abi.encodePacked(address(this), _smartContractNonce)))) - ); + call the contract again. + We use CREATE, not CREATE2, despite the difficulty in calculating the address + in-contract, for cost-savings reasons explained within `createAddress`'s documentation. + */ + address nextAddress = createAddress(_smartContractNonce); success = erc20TransferOut(nextAddress, coin, outs[i].amount); } diff --git a/processor/ethereum/router/src/gas.rs b/processor/ethereum/router/src/gas.rs index 769a2010..c3deb022 100644 --- a/processor/ethereum/router/src/gas.rs +++ b/processor/ethereum/router/src/gas.rs @@ -23,8 +23,8 @@ const CHAIN_ID: U256 = U256::from_be_slice(&[1]); pub(crate) type GasEstimator = Evm<'static, (), InMemoryDB>; impl Router { - const NONCE_STORAGE_SLOT: U256 = U256::from_be_slice(&[1]); - const SERAI_KEY_STORAGE_SLOT: U256 = U256::from_be_slice(&[3]); + const NONCE_STORAGE_SLOT: U256 = U256::from_be_slice(&[0]); + const SERAI_KEY_STORAGE_SLOT: U256 = U256::from_be_slice(&[2]); // Gas allocated for ERC20 calls #[cfg(test)] @@ -46,11 +46,11 @@ impl Router { the correct set of prices for the network they're operating on. */ /// The gas used by `confirmSeraiKey`. - pub const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 57_736; + pub const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 57_764; /// The gas used by `updateSeraiKey`. - pub const UPDATE_SERAI_KEY_GAS: u64 = 60_045; + pub const UPDATE_SERAI_KEY_GAS: u64 = 60_073; /// The gas used by `escapeHatch`. 
- pub const ESCAPE_HATCH_GAS: u64 = 61_094; + pub const ESCAPE_HATCH_GAS: u64 = 44_037; /// The key to use when performing gas estimations. /// diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index c0e43685..61572e6e 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -465,6 +465,8 @@ impl Test { self.provider.debug_trace_transaction(*tx.hash(), Default::default()).await.unwrap(); let refund = trace.try_into_default_frame().unwrap().struct_logs.last().unwrap().refund_counter; + // This isn't capped to 1/5th of the TX's gas usage yet that's fine as none of our tests are + // so refund intensive unused_gas += refund.unwrap_or(0) } @@ -881,7 +883,37 @@ async fn test_eth_code_out_instruction() { #[tokio::test] async fn test_erc20_code_out_instruction() { - todo!("TODO") + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + let erc20 = Erc20::deploy(&test).await; + let coin = Coin::Erc20(erc20.address()); + + let mut rand_address = [0xff; 20]; + OsRng.fill_bytes(&mut rand_address); + let amount_out = U256::from(2); + let out_instructions = OutInstructions::from( + [( + SeraiEthereumAddress::Contract(ContractDeployment::new(50_000, vec![]).unwrap()), + amount_out, + )] + .as_slice(), + ); + + let gas = test.router.execute_gas(coin, U256::from(1), &out_instructions); + let fee = U256::from(gas); + + // Mint to the Router the necessary amount of the ERC20 + erc20.mint(&test, test.router.address(), amount_out + fee).await; + + let (tx, gas_used) = test.execute(coin, fee, out_instructions, vec![true]).await; + + let unused_gas = test.gas_unused_by_calls(&tx).await; + assert_eq!(gas_used + unused_gas, gas); + + assert_eq!(erc20.balance_of(&test, test.router.address()).await, U256::from(0)); + assert_eq!(erc20.balance_of(&test, tx.recover_signer().unwrap()).await, U256::from(fee)); + assert_eq!(erc20.balance_of(&test, test.router.address().create(1)).await, amount_out); } #[tokio::test] @@ -1006,68 +1038,5 @@ async fn test_escape_hatch() { error Reentered(); error EscapeFailed(); function executeArbitraryCode(bytes memory code) external payable; - enum DestinationType { - Address, - Code - } - struct CodeDestination { - uint32 gasLimit; - bytes code; - } - struct OutInstruction { - DestinationType destinationType; - bytes destination; - uint256 amount; - } - function execute( - Signature calldata signature, - address coin, - uint256 fee, - OutInstruction[] calldata outs - ) external; -} - -async fn publish_outs( - provider: &RootProvider, - router: &Router, - key: (Scalar, PublicKey), - nonce: u64, - coin: Coin, - fee: U256, - outs: OutInstructions, -) -> TransactionReceipt { - let msg = Router::execute_message(nonce, coin, fee, outs.clone()); - - let nonce = Scalar::random(&mut OsRng); - let c = Signature::challenge(ProjectivePoint::GENERATOR * nonce, &key.1, &msg); - let s = nonce + (c * key.0); - - let sig = Signature::new(c, s).unwrap(); - - let mut tx = router.execute(coin, fee, outs, &sig); - tx.gas_price = 100_000_000_000; - let tx = ethereum_primitives::deterministically_sign(tx); - ethereum_test_primitives::publish_tx(provider, tx).await -} - -#[tokio::test] -async fn test_eth_address_out_instruction() { - let (_anvil, provider, router, key) = setup_test().await; - confirm_next_serai_key(&provider, &router, 1, key).await; - - let mut amount = U256::try_from(OsRng.next_u64()).unwrap(); - let mut fee = U256::try_from(OsRng.next_u64()).unwrap(); - if fee > amount 
{ - core::mem::swap(&mut amount, &mut fee); - } - assert!(amount >= fee); - ethereum_test_primitives::fund_account(&provider, router.address(), amount).await; - - let instructions = OutInstructions::from([].as_slice()); - let receipt = publish_outs(&provider, &router, key, 2, Coin::Ether, fee, instructions).await; - assert!(receipt.status()); - assert_eq!(Router::EXECUTE_ETH_BASE_GAS, ((receipt.gas_used + 1000) / 1000) * 1000); - - assert_eq!(router.next_nonce(receipt.block_hash.unwrap().into()).await.unwrap(), 3); -} + function createAddress(uint256 nonce) private view returns (address); */ From a9625364df987c2909baf220db7a3852dcd3abb8 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 05:37:56 -0500 Subject: [PATCH 351/368] Test createAddress Benchmarks gas usage Note the estimator needs to be updated as this is now variable-gas with respect to the state. --- processor/ethereum/router/build.rs | 7 +- .../ethereum/router/contracts/Router.sol | 7 +- .../router/contracts/tests/CreateAddress.sol | 13 +++ .../router/src/tests/create_address.rs | 97 +++++++++++++++++++ processor/ethereum/router/src/tests/mod.rs | 3 + 5 files changed, 123 insertions(+), 4 deletions(-) create mode 100644 processor/ethereum/router/contracts/tests/CreateAddress.sol create mode 100644 processor/ethereum/router/src/tests/create_address.rs diff --git a/processor/ethereum/router/build.rs b/processor/ethereum/router/build.rs index 8c0fbe67..bf2cc92a 100644 --- a/processor/ethereum/router/build.rs +++ b/processor/ethereum/router/build.rs @@ -45,5 +45,10 @@ fn main() { ); // Build the test contracts - build_solidity_contracts::build(&[], "contracts/tests", &(artifacts_path + "/tests")).unwrap(); + build_solidity_contracts::build( + &["../../../networks/ethereum/schnorr/contracts", "../erc20/contracts", "contracts"], + "contracts/tests", + &(artifacts_path + "/tests"), + ) + .unwrap(); } diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 79d01226..81de35ce 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -414,7 +414,7 @@ contract Router is IRouterWithoutCollisions { * detrimental to other `OutInstruction`s within the same batch) is sufficiently concerning to * justify this.
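For reference while reading both the Solidity above and the new test, a minimal sketch in Rust of the rule `createAddress` implements: a CREATE address is the low 20 bytes of keccak256(rlp([sender, nonce])). This assumes `keccak256` returns `[u8; 32]`, as it appears to elsewhere in these patches, and (like the contract) leaves a zero nonce out of scope:

  fn create_address(sender: [u8; 20], nonce: u64) -> [u8; 20] {
    assert!(nonce != 0, "zero nonces are out of scope, per EIP-161");
    // RLP-encode the nonce: values <= 0x7f encode as themselves, larger values
    // get a 0x80 + length header followed by their big-endian bytes, with
    // leading zeroes stripped
    let nonce_rlp = if nonce <= 0x7f {
      vec![u8::try_from(nonce).unwrap()]
    } else {
      let bytes = nonce.to_be_bytes();
      let first = bytes.iter().position(|byte| *byte != 0).unwrap();
      let mut res = vec![0x80 + u8::try_from(bytes.len() - first).unwrap()];
      res.extend(&bytes[first ..]);
      res
    };
    // The list header (its payload is the 21-byte address string plus the
    // nonce), the address string header, the address, then the nonce
    let mut rlp = vec![0xc0 + 21 + u8::try_from(nonce_rlp.len()).unwrap(), 0x80 + 20];
    rlp.extend(sender);
    rlp.extend(&nonce_rlp);
    <[u8; 20]>::try_from(&ethereum_primitives::keccak256(&rlp)[12 ..]).unwrap()
  }

This is also exactly what the test checks against via alloy's `address.create(nonce)`.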
*/ - function createAddress(uint256 nonce) private view returns (address) { + function createAddress(uint256 nonce) internal view returns (address) { unchecked { /* The hashed RLP-encoding is: @@ -438,9 +438,10 @@ contract Router is IRouterWithoutCollisions { bitsNeeded += 8; } uint256 bytesNeeded = bitsNeeded / 8; - rlpEncodingLen = 22 + bytesNeeded; + // 22 + 1 + the amount of bytes needed + rlpEncodingLen = 23 + bytesNeeded; // Shift from byte 31 to byte 22 - rlpEncoding |= 0x80 + (bytesNeeded << 72); + rlpEncoding |= (0x80 + bytesNeeded) << 72; // Shift past the unnecessary bytes rlpEncoding |= nonce << (72 - bitsNeeded); } diff --git a/processor/ethereum/router/contracts/tests/CreateAddress.sol b/processor/ethereum/router/contracts/tests/CreateAddress.sol new file mode 100644 index 00000000..6be58fe2 --- /dev/null +++ b/processor/ethereum/router/contracts/tests/CreateAddress.sol @@ -0,0 +1,13 @@ +// SPDX-License-Identifier: AGPL-3.0-only +pragma solidity ^0.8.26; + +import "Router.sol"; + +// Wrap the Router with a contract which exposes the address +contract CreateAddress is Router { + constructor() Router(bytes32(uint256(1))) {} + + function createAddressForSelf(uint256 nonce) external returns (address) { + return Router.createAddress(nonce); + } +} diff --git a/processor/ethereum/router/src/tests/create_address.rs b/processor/ethereum/router/src/tests/create_address.rs new file mode 100644 index 00000000..a431e5e1 --- /dev/null +++ b/processor/ethereum/router/src/tests/create_address.rs @@ -0,0 +1,97 @@ +use alloy_core::primitives::{hex, U256, Bytes, TxKind}; +use alloy_sol_types::SolCall; + +use alloy_consensus::TxLegacy; + +use alloy_rpc_types_eth::{TransactionInput, TransactionRequest}; +use alloy_provider::Provider; + +use revm::{primitives::SpecId, interpreter::gas::calculate_initial_tx_gas}; + +use crate::tests::Test; + +#[rustfmt::skip] +#[expect(warnings)] +#[expect(needless_pass_by_value)] +#[expect(clippy::all)] +#[expect(clippy::ignored_unit_patterns)] +#[expect(clippy::redundant_closure_for_method_calls)] +mod abi { + alloy_sol_macro::sol!("contracts/tests/CreateAddress.sol"); +} + +#[tokio::test] +async fn test_create_address() { + let test = Test::new().await; + + let address = { + const BYTECODE: &[u8] = { + const BYTECODE_HEX: &[u8] = include_bytes!(concat!( + env!("OUT_DIR"), + "/serai-processor-ethereum-router/tests/CreateAddress.bin" + )); + const BYTECODE: [u8; BYTECODE_HEX.len() / 2] = + match hex::const_decode_to_array::<{ BYTECODE_HEX.len() / 2 }>(BYTECODE_HEX) { + Ok(bytecode) => bytecode, + Err(_) => panic!("CreateAddress.bin did not contain valid hex"), + }; + &BYTECODE + }; + + let tx = TxLegacy { + chain_id: None, + nonce: 0, + gas_price: 100_000_000_000u128, + gas_limit: 1_100_000, + to: TxKind::Create, + value: U256::ZERO, + input: Bytes::from_static(BYTECODE), + }; + let tx = ethereum_primitives::deterministically_sign(tx); + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx).await; + receipt.contract_address.unwrap() + }; + + // Check `createAddress` correctly encodes the nonce for every single meaningful bit pattern + // The only meaningful patterns are < 0x80, == 0x80, and then each length greater > 0x80 + // The following covers all three + let mut nonce = 1u64; + while nonce.checked_add(nonce).is_some() { + assert_eq!( + &test + .provider + .call( + &TransactionRequest::default().to(address).input(TransactionInput::new( + (abi::CreateAddress::createAddressForSelfCall { nonce: U256::from(nonce) }) + .abi_encode() + .into() + )) 
+ ) + .await + .unwrap() + .as_ref()[12 ..], + address.create(nonce).as_slice(), + ); + nonce <<= 1; + } + + let input = + (abi::CreateAddress::createAddressForSelfCall { nonce: U256::from(u64::MAX) }).abi_encode(); + let gas = test + .provider + .estimate_gas( + &TransactionRequest::default().to(address).input(TransactionInput::new(input.clone().into())), + ) + .await + .unwrap() - + calculate_initial_tx_gas(SpecId::CANCUN, &input, false, &[], 0).initial_gas; + + let keccak256_gas_estimate = |len: u64| 30 + (6 * len.div_ceil(32)); + let mut bytecode_len = 0; + while (keccak256_gas_estimate(bytecode_len) + keccak256_gas_estimate(85)) < gas { + bytecode_len += 32; + } + println!( + "Worst-case createAddress gas: {gas}, CREATE2 break-even is bytecode of length {bytecode_len}", + ); +} diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 61572e6e..5937df3b 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -41,6 +41,9 @@ use crate::{ }; mod constants; + +mod create_address; + mod erc20; use erc20::Erc20; From ea00ba9ff8058bd82f91e67568c371702c7ca198 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 07:22:40 -0500 Subject: [PATCH 352/368] Clarified usage of CREATE CREATE was originally intended for gas savings. While one sketch did move to CREATE2, the security concerns around address collisions (requiring all init codes not be malleable to achieve security) continue to justify this. To resolve the gas estimation concerns raised in the prior commit, the createAddress function has been made constant-gas. --- processor/ethereum/router/build.rs | 18 ++- .../ethereum/router/contracts/Router.sol | 145 ++++++++++-------- .../router/contracts/tests/CreateAddress.sol | 2 +- processor/ethereum/router/src/gas.rs | 20 ++- .../router/src/tests/create_address.rs | 50 +++--- 5 files changed, 131 insertions(+), 104 deletions(-) diff --git a/processor/ethereum/router/build.rs b/processor/ethereum/router/build.rs index bf2cc92a..dec965d3 100644 --- a/processor/ethereum/router/build.rs +++ b/processor/ethereum/router/build.rs @@ -32,6 +32,17 @@ fn main() { &artifacts_path, ) .unwrap(); + // These are detected multiple times and distinguished, hence their renaming to canonical forms + fs::rename( + artifacts_path.clone() + "/Router_sol_Router.bin", + artifacts_path.clone() + "/Router.bin", + ) + .unwrap(); + fs::rename( + artifacts_path.clone() + "/Router_sol_Router.bin-runtime", + artifacts_path.clone() + "/Router.bin-runtime", + ) + .unwrap(); // This cannot be handled with the sol! macro. 
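On the break-even printed above: the comparison is against computing a CREATE2-style address in-contract, which takes two KECCAK256 invocations, one over the init code and one over the 85-byte `0xff ++ deployer ++ salt ++ keccak256(init_code)` preimage. A minimal sketch of that pricing, using the opcode's 30-gas static cost plus 6 gas per 32-byte word (memory expansion excluded):

  // Static plus per-word cost of the KECCAK256 opcode
  fn keccak256_gas(len: u64) -> u64 {
    30 + (6 * len.div_ceil(32))
  }

  // Hashing cost of deriving a CREATE2-style address in-contract
  fn create2_style_address_gas(init_code_len: u64) -> u64 {
    keccak256_gas(init_code_len) + keccak256_gas(85)
  }

Once the init code is long enough that this exceeds the measured `createAddress` cost, CREATE2-style derivation would break even, which is what the loop above searches for.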
The Router requires an import // https://github.com/alloy-rs/core/issues/602 @@ -44,11 +55,16 @@ fn main() { &(artifacts_path.clone() + "/router.rs"), ); + let test_artifacts_path = artifacts_path + "/tests"; + if !fs::exists(&test_artifacts_path).unwrap() { + fs::create_dir(&test_artifacts_path).unwrap(); + } + // Build the test contracts build_solidity_contracts::build( &["../../../networks/ethereum/schnorr/contracts", "../erc20/contracts", "contracts"], "contracts/tests", - &(artifacts_path + "/tests"), + &test_artifacts_path, ) .unwrap(); } diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 81de35ce..670f79e9 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -35,34 +35,34 @@ contract Router is IRouterWithoutCollisions { /// @dev The address in transient storage used for the reentrancy guard bytes32 constant REENTRANCY_GUARD_SLOT = bytes32(uint256(keccak256("ReentrancyGuard Router")) - 1); - /** - * @dev The nonce to verify the next signature with, incremented upon an action to prevent - * replays/out-of-order execution - */ - uint256 private _nextNonce; - - /** - * @dev The next public key for Serai's Ethereum validator set, in the form the Schnorr library - * expects - */ - bytes32 private _nextSeraiKey; - - /** - * @dev The current public key for Serai's Ethereum validator set, in the form the Schnorr library - * expects - */ - bytes32 private _seraiKey; - /** * @dev The next nonce used to determine the address of contracts deployed with CREATE. This is - * used to predict the addresses of deployed contracts ahead of time. + * used to predict the addresses of deployed contracts ahead of time. */ /* We don't expose a getter for this as it shouldn't be expected to have any specific value at a given moment in time. If someone wants to know the address of their deployed contract, they can have it emit an event and verify the emitting contract is the expected one. */ - uint64 private _smartContractNonce; + uint256 private _smartContractNonce; + + /** + * @dev The nonce to verify the next signature with, incremented upon an action to prevent + * replays/out-of-order execution + */ + uint256 private _nextNonce; + + /** + * @dev The next public key for Serai's Ethereum validator set, in the form the Schnorr library + * expects + */ + bytes32 private _nextSeraiKey; + + /** + * @dev The current public key for Serai's Ethereum validator set, in the form the Schnorr library + * expects + */ + bytes32 private _seraiKey; /// @dev The address escaped to address private _escapedTo; @@ -122,10 +122,9 @@ contract Router is IRouterWithoutCollisions { } /** - * @dev - * Verify a signature of the calldata, placed immediately after the function selector. The - * calldata should be signed with the nonce taking the place of the signature's commitment to - * its nonce, and the signature solution zeroed. + * @dev Verify a signature of the calldata, placed immediately after the function selector. The + * calldata should be signed with the nonce taking the place of the signature's commitment to + * its nonce, and the signature solution zeroed. */ function verifySignature(bytes32 key) private @@ -228,9 +227,9 @@ contract Router is IRouterWithoutCollisions { /// @notice Start updating the key representing Serai's Ethereum validators /** * @dev This does not validate the passed-in key as much as possible. 
This is accepted as the key - * won't actually be rotated to until it provides a signature confirming the update however - * (proving signatures can be made by the key in question and verified via our Schnorr - * contract). + * won't actually be rotated to until it provides a signature confirming the update however + * (proving signatures can be made by the key in question and verified via our Schnorr + * contract). * * The hex bytes are to cause a collision with `IRouter.updateSeraiKey`. */ @@ -264,7 +263,7 @@ contract Router is IRouterWithoutCollisions { /// @param amount The amount to transfer in (msg.value if Ether) /** * @param instruction The Shorthand-encoded InInstruction for Serai to associate with this - * transfer in + * transfer in */ // Re-entrancy doesn't bork this function // slither-disable-next-line reentrancy-events @@ -327,7 +326,7 @@ contract Router is IRouterWithoutCollisions { /// @param amount The amount of the coin to transfer /** * @return success If the coins were successfully transferred out. This is defined as if the - * call succeeded and returned true or nothing. + * call succeeded and returned true or nothing. */ // execute has this annotation yet this still flags (even when it doesn't have its own loop) // slither-disable-next-line calls-loop @@ -377,7 +376,7 @@ contract Router is IRouterWithoutCollisions { /// @param amount The amount of the coin to transfer /** * @return success If the coins were successfully transferred out. For Ethereum, this is if the - * call succeeded. For the ERC20, it's if the call succeeded and returned true or nothing. + * call succeeded. For the ERC20, it's if the call succeeded and returned true or nothing. */ function transferOut(address to, address coin, uint256 amount) private returns (bool success) { if (coin == address(0)) { @@ -409,45 +408,57 @@ contract Router is IRouterWithoutCollisions { /// @notice Calculate the next address which will be deployed to by CREATE /** - * @dev This manually implements the RLP encoding to save gas over the usage of CREATE2. While the - * the keccak256 call itself is surprisingly cheap, the memory cost (quadratic and already - * detrimental to other `OutInstruction`s within the same batch) is sufficiently concerning to - * justify this. + * @dev While CREATE2 is preferable inside smart contracts, CREATE2 is fundamentally vulnerable to + * collisions. Our usage of CREATE forces an incremental nonce infeasible to brute force. While + * addresses are still variable to the Router address, the Router address itself is the product + * of an incremental nonce (the Deployer's). The Deployer's address is constant (generated via + * NUMS methods), finally ensuring the security of this. + * + * This is written to be constant-gas, allowing state-independent gas prediction. + * + * This has undefined behavior when `nonce` is zero (EIP-161 makes this irrelevant). */ function createAddress(uint256 nonce) internal view returns (address) { unchecked { - /* - The hashed RLP-encoding is: - - Header (1 byte) - - Address header (1 bytes) - - Address (20 bytes) - - Nonce (1 ..= 9 bytes) - Since the maximum length is less than 32 bytes, we calculate this on the stack. - */ - // Shift the address from bytes 12 .. 32 to 2 .. 
22 - uint256 rlpEncoding = uint256(uint160(address(this))) << 80; - uint256 rlpEncodingLen; - if (nonce <= 0x7f) { - // 22 + 1 - rlpEncodingLen = 23; - // Shift from byte 31 to byte 22 - rlpEncoding |= (nonce << 72); - } else { - uint256 bitsNeeded = 8; - while (nonce >= (1 << bitsNeeded)) { - bitsNeeded += 8; + // The amount of bytes needed to represent the nonce + uint256 bitsNeeded = 0; + for (uint256 bits = 0; bits <= 64; bits += 8) { + bool valueFits = nonce < (uint256(1) << bits); + bool notPriorSet = bitsNeeded == 0; + // If the value fits, and the bits weren't prior set, we should set the bits now + uint256 shouldSet; + // slither-disable-next-line assembly + assembly { + shouldSet := and(valueFits, notPriorSet) } - uint256 bytesNeeded = bitsNeeded / 8; - // 22 + 1 + the amount of bytes needed - rlpEncodingLen = 23 + bytesNeeded; - // Shift from byte 31 to byte 22 - rlpEncoding |= (0x80 + bytesNeeded) << 72; - // Shift past the unnecessary bytes - rlpEncoding |= nonce << (72 - bitsNeeded); + // Carry the existing bitsNeeded value, set bits if should set + bitsNeeded = bitsNeeded + (shouldSet * bits); } - rlpEncoding |= ADDRESS_HEADER; + uint256 bytesNeeded = bitsNeeded / 8; + + // if the nonce is an RLP string or not + bool nonceIsNotStringBool = nonce <= 0x7f; + uint256 nonceIsNotString; + // slither-disable-next-line assembly + assembly { + nonceIsNotString := nonceIsNotStringBool + } + uint256 nonceIsString = nonceIsNotString ^ 1; + + // Define the RLP length + uint256 rlpEncodingLen = 23 + (nonceIsString * bytesNeeded); + + uint256 rlpEncoding = // The header, which does not include itself in its length, shifted into the first byte - rlpEncoding |= (0xc0 + (rlpEncodingLen - 1)) << 248; + ((0xc0 + (rlpEncodingLen - 1)) << 248) + // The address header, which is constant + | ADDRESS_HEADER + // Shift the address from bytes 12 .. 32 to 2 .. 22 + | (uint256(uint160(address(this))) << 80) + // Shift the nonce (one byte) or the nonce's header from byte 31 to byte 22 + | (((nonceIsNotString * nonce) + (nonceIsString * (0x80 + bytesNeeded))) << 72) + // Shift past the unnecessary bytes + | (nonce * nonceIsString) << (72 - bitsNeeded); // Store this to the scratch space bytes memory rlp; @@ -465,8 +476,8 @@ contract Router is IRouterWithoutCollisions { /// @notice Execute some arbitrary code within a secure sandbox /** * @dev This performs sandboxing by deploying this code with `CREATE`. This is an external - * function as we can't meter `CREATE`/internal functions. We work around this by calling this - * function with `CALL` (which we can meter). This does forward `msg.value` to the newly + * function as we can't meter `CREATE`/internal functions. We work around this by calling this + * function with `CALL` (which we can meter). This does forward `msg.value` to the newly * deployed contract. */ /// @param code The code to execute @@ -495,6 +506,8 @@ contract Router is IRouterWithoutCollisions { * of CEI with `verifySignature` prevents replays, re-entrancy would allow out-of-order * completion for the execution of batches (despite their in-order start of execution) which * isn't a headache worth dealing with. + * + * Re-entrancy is also explicitly required due to how `_smartContractNonce` is handled. */ // @param signature The signature by the current key for Serai's Ethereum validators // @param coin The coin all of these `OutInstruction`s are for @@ -536,7 +549,7 @@ contract Router is IRouterWithoutCollisions { call the contract again. 
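To make the collision argument concrete: a minimal sketch of CREATE2's address rule, under which the address commits to a caller-chosen salt and to the init code hash, so malleability in either lets a party grind toward a collision. A CREATE address instead commits solely to the deployer and its strictly incrementing nonce (`keccak256` is again assumed to return `[u8; 32]`):

  fn create2_address(deployer: [u8; 20], salt: [u8; 32], init_code: &[u8]) -> [u8; 20] {
    let init_code_hash = ethereum_primitives::keccak256(init_code);
    // The address is keccak256(0xff ++ deployer ++ salt ++ init_code_hash)[12 ..]
    let mut preimage = Vec::with_capacity(85);
    preimage.push(0xff);
    preimage.extend(deployer);
    preimage.extend(salt);
    preimage.extend(init_code_hash);
    <[u8; 20]>::try_from(&ethereum_primitives::keccak256(&preimage)[12 ..]).unwrap()
  }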
We use CREATE, not CREATE2, despite the difficulty in calculating the address - in-contract, for cost-savings reasons explained within `createAddress`'s documentation. + in-contract, for reasons explained within `createAddress`'s documentation. */ address nextAddress = createAddress(_smartContractNonce); success = erc20TransferOut(nextAddress, coin, outs[i].amount); diff --git a/processor/ethereum/router/contracts/tests/CreateAddress.sol b/processor/ethereum/router/contracts/tests/CreateAddress.sol index 6be58fe2..2d092449 100644 --- a/processor/ethereum/router/contracts/tests/CreateAddress.sol +++ b/processor/ethereum/router/contracts/tests/CreateAddress.sol @@ -5,7 +5,7 @@ import "Router.sol"; // Wrap the Router with a contract which exposes the address contract CreateAddress is Router { - constructor() Router(bytes32(uint256(1))) {} + constructor() Router(bytes32(uint256(1))) { } function createAddressForSelf(uint256 nonce) external returns (address) { return Router.createAddress(nonce); diff --git a/processor/ethereum/router/src/gas.rs b/processor/ethereum/router/src/gas.rs index c3deb022..28dd799a 100644 --- a/processor/ethereum/router/src/gas.rs +++ b/processor/ethereum/router/src/gas.rs @@ -23,8 +23,9 @@ const CHAIN_ID: U256 = U256::from_be_slice(&[1]); pub(crate) type GasEstimator = Evm<'static, (), InMemoryDB>; impl Router { - const NONCE_STORAGE_SLOT: U256 = U256::from_be_slice(&[0]); - const SERAI_KEY_STORAGE_SLOT: U256 = U256::from_be_slice(&[2]); + const SMART_CONTRACT_NONCE_STORAGE_SLOT: U256 = U256::from_be_slice(&[0]); + const NONCE_STORAGE_SLOT: U256 = U256::from_be_slice(&[1]); + const SERAI_KEY_STORAGE_SLOT: U256 = U256::from_be_slice(&[3]); // Gas allocated for ERC20 calls #[cfg(test)] @@ -46,11 +47,11 @@ impl Router { the correct set of prices for the network they're operating on. */ /// The gas used by `confirmSeraiKey`. - pub const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 57_764; + pub const CONFIRM_NEXT_SERAI_KEY_GAS: u64 = 57_736; /// The gas used by `updateSeraiKey`. - pub const UPDATE_SERAI_KEY_GAS: u64 = 60_073; + pub const UPDATE_SERAI_KEY_GAS: u64 = 60_045; /// The gas used by `escapeHatch`. - pub const ESCAPE_HATCH_GAS: u64 = 44_037; + pub const ESCAPE_HATCH_GAS: u64 = 61_094; /// The key to use when performing gas estimations. /// @@ -89,6 +90,15 @@ impl Router { }, ); + // Insert the value for _smartContractNonce set in the constructor + // All operations w.r.t. 
it execute in constant-time, making the actual value irrelevant + db.insert_account_storage( + self.address, + Self::SMART_CONTRACT_NONCE_STORAGE_SLOT, + U256::from(1), + ) + .unwrap(); + // Insert a non-zero nonce, as the zero nonce will update to the initial key and never be // used for any gas estimations of `execute`, the only function estimated db.insert_account_storage(self.address, Self::NONCE_STORAGE_SLOT, U256::from(1)).unwrap(); diff --git a/processor/ethereum/router/src/tests/create_address.rs b/processor/ethereum/router/src/tests/create_address.rs index a431e5e1..339c44b2 100644 --- a/processor/ethereum/router/src/tests/create_address.rs +++ b/processor/ethereum/router/src/tests/create_address.rs @@ -56,42 +56,30 @@ async fn test_create_address() { // The only meaningful patterns are < 0x80, == 0x80, and then each length greater > 0x80 // The following covers all three let mut nonce = 1u64; + let mut gas = None; while nonce.checked_add(nonce).is_some() { + let input = + (abi::CreateAddress::createAddressForSelfCall { nonce: U256::from(nonce) }).abi_encode(); + + // Make sure the function works as expected + let call = + TransactionRequest::default().to(address).input(TransactionInput::new(input.clone().into())); assert_eq!( - &test - .provider - .call( - &TransactionRequest::default().to(address).input(TransactionInput::new( - (abi::CreateAddress::createAddressForSelfCall { nonce: U256::from(nonce) }) - .abi_encode() - .into() - )) - ) - .await - .unwrap() - .as_ref()[12 ..], + &test.provider.call(&call).await.unwrap().as_ref()[12 ..], address.create(nonce).as_slice(), ); + + // Check the function is constant-gas + let gas_used = test.provider.estimate_gas(&call).await.unwrap(); + let initial_gas = calculate_initial_tx_gas(SpecId::CANCUN, &input, false, &[], 0).initial_gas; + let this_call = gas_used - initial_gas; + if gas.is_none() { + gas = Some(this_call); + } + assert_eq!(gas, Some(this_call)); + nonce <<= 1; } - let input = - (abi::CreateAddress::createAddressForSelfCall { nonce: U256::from(u64::MAX) }).abi_encode(); - let gas = test - .provider - .estimate_gas( - &TransactionRequest::default().to(address).input(TransactionInput::new(input.clone().into())), - ) - .await - .unwrap() - - calculate_initial_tx_gas(SpecId::CANCUN, &input, false, &[], 0).initial_gas; - - let keccak256_gas_estimate = |len: u64| 30 + (6 * len.div_ceil(32)); - let mut bytecode_len = 0; - while (keccak256_gas_estimate(bytecode_len) + keccak256_gas_estimate(85)) < gas { - bytecode_len += 32; - } - println!( - "Worst-case createAddress gas: {gas}, CREATE2 break-even is bytecode of length {bytecode_len}", - ); + println!("createAddress gas: {}", gas.unwrap()); } From 0957460f276ce129f05f0230ed4222bddf7693c5 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 07:36:23 -0500 Subject: [PATCH 353/368] Add supporting security commentary to Router.sol --- processor/ethereum/router/contracts/Router.sol | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 670f79e9..214eed52 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -22,6 +22,15 @@ import "IRouter.sol"; The `execute` function pays a relayer, as expected for use in the account-abstraction model. Other functions also expect relayers, yet do not explicitly pay fees. Those calls are expected to be justified via the backpressure of transactions with fees.
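The constant-gas test above asserts `createAddressForSelf` matches `address.create(nonce)` at every RLP header pattern. For reference, the RLP rule being reproduced is small enough to state directly; a Rust sketch of the payload construction (illustrative only; this helper is not part of the patch, and the CREATE address is the last 20 bytes of keccak256 over this payload, with the hash left out to keep the sketch dependency-free):

fn rlp_create_payload(deployer: [u8; 20], nonce: u64) -> Vec<u8> {
  // 0x80 + 20 is the header for the 20-byte deployer address
  let mut payload = vec![0x80 + 20];
  payload.extend_from_slice(&deployer);
  if nonce == 0 {
    // The zero nonce encodes as an empty string (irrelevant in practice per EIP-161)
    payload.push(0x80);
  } else if nonce <= 0x7f {
    // A single byte < 0x80 encodes as itself
    payload.push(u8::try_from(nonce).unwrap());
  } else {
    // Otherwise, a string of the nonce's minimal big-endian bytes
    let bytes = nonce.to_be_bytes();
    let first = bytes.iter().position(|b| *b != 0).unwrap();
    payload.push(0x80 + u8::try_from(bytes.len() - first).unwrap());
    payload.extend_from_slice(&bytes[first ..]);
  }
  // Prefix the list header, which doesn't include itself in its length
  let mut rlp = vec![0xc0 + u8::try_from(payload.len()).unwrap()];
  rlp.extend(payload);
  rlp
}

For a single-byte nonce this is 23 bytes total, and otherwise 23 plus the nonce's byte length, matching the in-contract `rlpEncodingLen`.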
+ + We do transfer ERC20s to contracts before their successful deployment. The usage of CREATE should + prevent deployment failures premised on address collisions, leaving failures to be failures with + the user-provided code/gas limit. Those failures are deemed to be the user's fault. Alternative + designs not only have increased overhead yet their own concerns around complexity (the Router + calling itself via msg.sender), justifying this as acceptable. + + Historically, the call-stack-depth limit would've made this design untenable. Due to EIP-150, even + with 1 billion gas transactions, the call-stack-depth limit remains unreachable. */ // slither-disable-start low-level-calls,unchecked-lowlevel From f8c3acae7b026a23d92753cfcbef83b40728bc05 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 07:48:37 -0500 Subject: [PATCH 354/368] Check the Router-deployed contracts' code --- processor/ethereum/router/src/tests/mod.rs | 86 +++++++++++----------- 1 file changed, 44 insertions(+), 42 deletions(-) diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 5937df3b..fbdad8cb 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -703,53 +703,56 @@ async fn test_erc20_top_level_transfer_in_instruction() { test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await; } +// Code which returns true +#[rustfmt::skip] +fn return_true_code() -> Vec { + vec![ + 0x60, // push 1 byte | 3 gas + 0x01, // the value 1 + 0x5f, // push 0 | 2 gas + 0x52, // mstore to offset 0 the value 1 | 3 gas + 0x60, // push 1 byte | 3 gas + 0x20, // the value 32 + 0x5f, // push 0 | 2 gas + 0xf3, // return from offset 0 1 word | 0 gas + // 13 gas for the execution plus a single word of memory for 16 gas total + ] +} + #[tokio::test] async fn test_empty_execute() { let mut test = Test::new().await; test.confirm_next_serai_key().await; { + let gas = test.router.execute_gas(Coin::Ether, U256::from(1), &[].as_slice().into()); + let fee = U256::from(gas); + let () = test .provider - .raw_request("anvil_setBalance".into(), (test.router.address(), 100_000)) + .raw_request("anvil_setBalance".into(), (test.router.address(), fee)) .await .unwrap(); - let gas = test.router.execute_gas(Coin::Ether, U256::from(1), &[].as_slice().into()); - let fee = U256::from(gas); let (tx, gas_used) = test.execute(Coin::Ether, fee, [].as_slice().into(), vec![]).await; // We don't use the call gas stipend here const UNUSED_GAS: u64 = revm::interpreter::gas::CALL_STIPEND; assert_eq!(gas_used + UNUSED_GAS, gas); - assert_eq!( - test.provider.get_balance(test.router.address()).await.unwrap(), - U256::from(100_000 - gas) - ); + assert_eq!(test.provider.get_balance(test.router.address()).await.unwrap(), U256::from(0)); let minted_to_sender = u128::from(tx.tx().gas_limit) * tx.tx().gas_price; let spent_by_sender = u128::from(gas_used) * tx.tx().gas_price; assert_eq!( test.provider.get_balance(tx.recover_signer().unwrap()).await.unwrap() - U256::from(minted_to_sender - spent_by_sender), - U256::from(gas) + U256::from(fee) ); } { let token = Address::from([0xff; 20]); { - #[rustfmt::skip] - let code = vec![ - 0x60, // push 1 byte | 3 gas - 0x01, // the value 1 - 0x5f, // push 0 | 2 gas - 0x52, // mstore to offset 0 the value 1 | 3 gas - 0x60, // push 1 byte | 3 gas - 0x20, // the value 32 - 0x5f, // push 0 | 2 gas - 0xf3, // return from offset 0 1 word | 0 gas - // 13 gas for the execution plus a single word of memory for 16 gas total - 
]; + let code = return_true_code(); // Deploy our 'token' let () = test.provider.raw_request("anvil_setCode".into(), (token, code)).await.unwrap(); let call = @@ -759,7 +762,7 @@ async fn test_empty_execute() { test.provider.call(&call).await.unwrap().as_ref(), U256::from(1).abi_encode().as_slice() ); - // Check it has the expected gas cost + // Check it has the expected gas cost (16 is documented in `return_true_code`) assert_eq!(test.provider.estimate_gas(&call).await.unwrap(), 21_000 + 16); } @@ -778,11 +781,6 @@ async fn test_empty_execute() { async fn test_eth_address_out_instruction() { let mut test = Test::new().await; test.confirm_next_serai_key().await; - let () = test - .provider - .raw_request("anvil_setBalance".into(), (test.router.address(), 100_000)) - .await - .unwrap(); let mut rand_address = [0xff; 20]; OsRng.fill_bytes(&mut rand_address); @@ -792,14 +790,18 @@ async fn test_eth_address_out_instruction() { let gas = test.router.execute_gas(Coin::Ether, U256::from(1), &out_instructions); let fee = U256::from(gas); + + let () = test + .provider + .raw_request("anvil_setBalance".into(), (test.router.address(), amount_out + fee)) + .await + .unwrap(); + let (tx, gas_used) = test.execute(Coin::Ether, fee, out_instructions, vec![true]).await; const UNUSED_GAS: u64 = 2 * revm::interpreter::gas::CALL_STIPEND; assert_eq!(gas_used + UNUSED_GAS, gas); - assert_eq!( - test.provider.get_balance(test.router.address()).await.unwrap(), - U256::from(100_000) - amount_out - fee - ); + assert_eq!(test.provider.get_balance(test.router.address()).await.unwrap(), U256::from(0)); let minted_to_sender = u128::from(tx.tx().gas_limit) * tx.tx().gas_price; let spent_by_sender = u128::from(gas_used) * tx.tx().gas_price; assert_eq!( @@ -850,12 +852,11 @@ async fn test_eth_code_out_instruction() { .await .unwrap(); - let mut rand_address = [0xff; 20]; - OsRng.fill_bytes(&mut rand_address); + let code = return_true_code(); let amount_out = U256::from(2); let out_instructions = OutInstructions::from( [( - SeraiEthereumAddress::Contract(ContractDeployment::new(50_000, vec![]).unwrap()), + SeraiEthereumAddress::Contract(ContractDeployment::new(50_000, code.clone()).unwrap()), amount_out, )] .as_slice(), @@ -881,7 +882,10 @@ async fn test_eth_code_out_instruction() { U256::from(minted_to_sender - spent_by_sender), U256::from(fee) ); - assert_eq!(test.provider.get_balance(test.router.address().create(1)).await.unwrap(), amount_out); + let deployed = test.router.address().create(1); + assert_eq!(test.provider.get_balance(deployed).await.unwrap(), amount_out); + // The init code we use returns true, which will become the deployed contract's code + assert_eq!(test.provider.get_code_at(deployed).await.unwrap().to_vec(), true.abi_encode()); } #[tokio::test] @@ -892,15 +896,11 @@ async fn test_erc20_code_out_instruction() { let erc20 = Erc20::deploy(&test).await; let coin = Coin::Erc20(erc20.address()); - let mut rand_address = [0xff; 20]; - OsRng.fill_bytes(&mut rand_address); + let code = return_true_code(); let amount_out = U256::from(2); let out_instructions = OutInstructions::from( - [( - SeraiEthereumAddress::Contract(ContractDeployment::new(50_000, vec![]).unwrap()), - amount_out, - )] - .as_slice(), + [(SeraiEthereumAddress::Contract(ContractDeployment::new(50_000, code).unwrap()), amount_out)] + .as_slice(), ); let gas = test.router.execute_gas(coin, U256::from(1), &out_instructions); @@ -916,7 +916,9 @@ async fn test_erc20_code_out_instruction() { assert_eq!(erc20.balance_of(&test, 
test.router.address()).await, U256::from(0)); assert_eq!(erc20.balance_of(&test, tx.recover_signer().unwrap()).await, U256::from(fee)); - assert_eq!(erc20.balance_of(&test, test.router.address().create(1)).await, amount_out); + let deployed = test.router.address().create(1); + assert_eq!(erc20.balance_of(&test, deployed).await, amount_out); + assert_eq!(test.provider.get_code_at(deployed).await.unwrap().to_vec(), true.abi_encode()); } #[tokio::test] From 7e01589fbad9eb852548eead413285d62fc588dc Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 11:37:17 -0500 Subject: [PATCH 355/368] Erc20::approve for DestinationType::Contract This allows the CREATE code to bork without the Serai router losing access to the coins in question. It does incur overhead on the deployed contract, which now not only has to query its balance but also has to call transferFrom, yet it's a safer pattern and not a UX detriment. This also improves documentation. --- processor/ethereum/router/build.rs | 18 +- .../ethereum/router/contracts/Router.sol | 175 ++++++++++-------- processor/ethereum/router/src/tests/erc20.rs | 7 + processor/ethereum/router/src/tests/mod.rs | 4 +- 4 files changed, 115 insertions(+), 89 deletions(-) diff --git a/processor/ethereum/router/build.rs b/processor/ethereum/router/build.rs index dec965d3..f80a0b77 100644 --- a/processor/ethereum/router/build.rs +++ b/processor/ethereum/router/build.rs @@ -33,16 +33,14 @@ fn main() { ) .unwrap(); // These are detected multiple times and distinguished, hence their renaming to canonical forms - fs::rename( - artifacts_path.clone() + "/Router_sol_Router.bin", - artifacts_path.clone() + "/Router.bin", - ) - .unwrap(); - fs::rename( - artifacts_path.clone() + "/Router_sol_Router.bin-runtime", - artifacts_path.clone() + "/Router.bin-runtime", - ) - .unwrap(); + let router_bin = artifacts_path.clone() + "/Router.bin"; + let _ = fs::remove_file(&router_bin); // Remove the file if it already exists, if we can + fs::rename(artifacts_path.clone() + "/Router_sol_Router.bin", &router_bin).unwrap(); + + let router_bin_runtime = artifacts_path.clone() + "/Router.bin-runtime"; + let _ = fs::remove_file(&router_bin_runtime); + fs::rename(artifacts_path.clone() + "/Router_sol_Router.bin-runtime", router_bin_runtime) + .unwrap(); // This cannot be handled with the sol! macro. The Router requires an import // https://github.com/alloy-rs/core/issues/602 diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 214eed52..25ece4b8 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -8,9 +8,9 @@ import "Schnorr.sol"; import "IRouter.sol"; /* - The Router directly performs low-level calls in order to fine-tune the gas settings. Since this - contract is meant to relay an entire batch of transactions, the ability to exactly meter - individual transactions is critical. + The Router directly performs low-level calls in order to have direct control over gas. Since this + contract is meant to relay an entire batch of outs in a single transaction, the ability to exactly + meter individual outs is critical. We don't check the return values as we don't care if the calls succeeded. We solely care we made them. If someone configures an external contract in a way which borks, we explicitly define that @@ -19,18 +19,12 @@ import "IRouter.sol"; If an actual invariant within Serai exists, an escape hatch exists to move to a new contract.
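The `abi.encodeWithSelector` calls this commit consolidates all share a fixed calldata shape. A Rust sketch of that layout (illustrative only; this helper is not from the patch, though the selectors noted are the standard ERC20 ones):

// `transfer(address,uint256)` is 0xa9059cbb, `approve(address,uint256)` is 0x095ea7b3,
// and `transferFrom(address,address,uint256)`, used by a deployed contract to pull its
// approved coins, is 0x23b872dd
fn erc20_calldata(selector: [u8; 4], to: [u8; 20], amount: [u8; 32]) -> Vec<u8> {
  let mut calldata = Vec::with_capacity(4 + 32 + 32);
  calldata.extend_from_slice(&selector);
  // Addresses are left-padded with 12 zero bytes into a 32-byte word
  calldata.extend_from_slice(&[0; 12]);
  calldata.extend_from_slice(&to);
  // The amount is already a 32-byte big-endian word
  calldata.extend_from_slice(&amount);
  calldata
}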
Any improperly handled actions can be re-signed and re-executed at that point in time. + Historically, the call-stack-depth limit would've made this design untenable. Due to EIP-150, even + with 1 billion gas transactions, the call-stack-depth limit remains unreachable. + The `execute` function pays a relayer, as expected for use in the account-abstraction model. Other functions also expect relayers, yet do not explicitly pay fees. Those calls are expected to be justified via the backpressure of transactions with fees. - - We do transfer ERC20s to contracts before their successful deployment. The usage of CREATE should - prevent deployment failures premised on address collisions, leaving failures to be failures with - the user-provided code/gas limit. Those failures are deemed to be the user's fault. Alternative - designs not only have increased overhead yet their own concerns around complexity (the Router - calling itself via msg.sender), justifying this as acceptable. - - Historically, the call-stack-depth limit would've made this design untenable. Due to EIP-150, even - with 1 billion gas transactions, the call-stack-depth limit remains unreachable. */ // slither-disable-start low-level-calls,unchecked-lowlevel @@ -44,6 +38,28 @@ contract Router is IRouterWithoutCollisions { /// @dev The address in transient storage used for the reentrancy guard bytes32 constant REENTRANCY_GUARD_SLOT = bytes32(uint256(keccak256("ReentrancyGuard Router")) - 1); + /** + * @dev The amount of gas to use when interacting with ERC20s + * + * The ERC20s integrated are presumed to have a constant gas cost, meaning this fixed gas cost + * can only be insufficient if: + * + * A) An integrated ERC20 uses more gas than this limit (presumed not to be the case) + * B) An integrated ERC20 is upgraded (integrated ERC20s are presumed to not be upgradeable) + * C) The ERC20 call has a variable gas cost and the user set a hook on receive which caused + * this (in which case, we accept such interactions failing) + * D) The user was blacklisted and any transfers to them cause out of gas errors (in which + * case, we again accept dropping this) + * E) Other extreme edge cases, for which such tokens are assumed to not be integrated + * F) Ethereum opcodes are repriced in a sufficiently breaking fashion + * + * This should be in such excess of the gas requirements of integrated tokens we'll survive + * repricing, so long as the repricing doesn't revolutionize EVM gas costs as we know it. In such + * a case, Serai would have to migrate to a new smart contract using `escapeHatch`. That also + * covers all other potential exceptional cases. + */ + uint256 constant ERC20_GAS = 100_000; + /** * @dev The next nonce used to determine the address of contracts deployed with CREATE. This is * used to predict the addresses of deployed contracts ahead of time. @@ -135,6 +151,7 @@ contract Router is IRouterWithoutCollisions { * calldata should be signed with the nonce taking the place of the signature's commitment to * its nonce, and the signature solution zeroed. 
*/ + /// @param key The key to verify the signature with function verifySignature(bytes32 key) private returns (uint256 nonceUsed, bytes memory message, bytes32 messageHash) @@ -274,7 +291,7 @@ contract Router is IRouterWithoutCollisions { * @param instruction The Shorthand-encoded InInstruction for Serai to associate with this * transfer in */ - // Re-entrancy doesn't bork this function + // This function doesn't require nonReentrant as re-entrancy isn't an issue with this function // slither-disable-next-line reentrancy-events function inInstruction(address coin, uint256 amount, bytes memory instruction) external payable { // Check there is an active key @@ -329,65 +346,21 @@ contract Router is IRouterWithoutCollisions { emit InInstruction(msg.sender, coin, amount, instruction); } - /// @dev Perform an ERC20 transfer out - /// @param to The address to transfer the coins to - /// @param coin The coin to transfer - /// @param amount The amount of the coin to transfer - /** - * @return success If the coins were successfully transferred out. This is defined as if the - * call succeeded and returned true or nothing. - */ - // execute has this annotation yet this still flags (even when it doesn't have its own loop) - // slither-disable-next-line calls-loop - function erc20TransferOut(address to, address coin, uint256 amount) - private - returns (bool success) - { - /* - The ERC20s integrated are presumed to have a constant gas cost, meaning this can only be - insufficient: - - A) An integrated ERC20 uses more gas than this limit (presumed not to be the case) - B) An integrated ERC20 is upgraded (integrated ERC20s are presumed to not be upgradeable) - C) This has a variable gas cost and the user set a hook on receive which caused this (in - which case, we accept dropping this) - D) The user was blacklisted (in which case, we again accept dropping this) - E) Other extreme edge cases, for which such tokens are assumed to not be integrated - F) Ethereum opcodes are repriced in a sufficiently breaking fashion - - This should be in such excess of the gas requirements of integrated tokens we'll survive - repricing, so long as the repricing doesn't revolutionize EVM gas costs as we know it. In such - a case, Serai would have to migrate to a new smart contract using `escapeHatch`. That also - covers all other potential exceptional cases. - */ - uint256 _gas = 100_000; - - /* - `coin` is either signed (from `execute`) or called from `escape` (which can safely be - arbitrarily called). We accordingly don't need to be worried about return bombs here. - */ - // slither-disable-next-line return-bomb - (bool erc20Success, bytes memory res) = - address(coin).call{ gas: _gas }(abi.encodeWithSelector(IERC20.transfer.selector, to, amount)); - - /* - Require there was nothing returned, which is done by some non-standard tokens, or that the - ERC20 contract did in fact return true. - */ - // slither-disable-next-line incorrect-equality - bool nonStandardResOrTrue = (res.length == 0) || ((res.length == 32) && abi.decode(res, (bool))); - success = erc20Success && nonStandardResOrTrue; - } - - /// @dev Perform an ETH/ERC20 transfer out + /// @dev Perform an Ether/ERC20 transfer out /// @param to The address to transfer the coins to /// @param coin The coin to transfer (address(0) if Ether) /// @param amount The amount of the coin to transfer + /// @param contractDestination If we're transferring to a contract we just deployed /** * @return success If the coins were successfully transferred out. 
For Ethereum, this is if the * call succeeded. For the ERC20, it's if the call succeeded and returned true or nothing. */ - function transferOut(address to, address coin, uint256 amount) private returns (bool success) { + // execute has this annotation yet this still flags (even when it doesn't have its own loop) + // slither-disable-next-line calls-loop + function transferOut(address to, address coin, uint256 amount, bool contractDestination) + private + returns (bool success) + { if (coin == address(0)) { // This uses assembly to prevent return bombs // slither-disable-next-line assembly @@ -407,7 +380,44 @@ contract Router is IRouterWithoutCollisions { ) } } else { - success = erc20TransferOut(to, coin, amount); + bytes4 selector; + if (contractDestination) { + /* + If this is an out of DestinationType::Contract, we only grant an approval. We don't + perform a transfer. This allows the contract, or our expectation of the contract as far as + our obligation to it, to be borked and for Serai to potentially recover accordingly. + + Unfortunately, this isn't a feasible flow for Ether unless we set Ether approvals within + our contract (for entities to collect later) which is of sufficient complexity to not be + worth the effort. We also don't have the `CREATE` complexity when transferring Ether to + contracts we deploy. + */ + selector = IERC20.approve.selector; + } else { + /* + For non-contracts, we don't impose the burden of the transferFrom flow and instead + directly transfer. + */ + selector = IERC20.transfer.selector; + } + + /* + `coin` is either signed (from `execute`) or called from `escape` (which can safely be + arbitrarily called). We accordingly don't need to be worried about return bombs here. + */ + // slither-disable-next-line return-bomb + (bool erc20Success, bytes memory res) = address(coin).call{ gas: ERC20_GAS }( + abi.encodeWithSelector(selector, to, amount) + ); + + /* + Require there was nothing returned, which is done by some non-standard tokens, or that the + ERC20 contract did in fact return true. + */ + // slither-disable-next-line incorrect-equality + bool nonStandardResOrTrue = + (res.length == 0) || ((res.length == 32) && abi.decode(res, (bool))); + success = erc20Success && nonStandardResOrTrue; } } @@ -427,6 +437,7 @@ contract Router is IRouterWithoutCollisions { * * This has undefined behavior when `nonce` is zero (EIP-161 makes this irrelevant).
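The tolerant result check `nonStandardResOrTrue` above also reads cleanly off-chain. A minimal Rust sketch (illustrative only; not code from this patch):

fn erc20_call_succeeded(call_succeeded: bool, ret: &[u8]) -> bool {
  // A 32-byte word decoding to the boolean true
  let returned_true =
    (ret.len() == 32) && ret[.. 31].iter().all(|b| *b == 0) && (ret[31] == 1);
  // The call must succeed and return either nothing (as some non-standard tokens do) or true
  call_succeeded && (ret.is_empty() || returned_true)
}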
*/ + /// @param nonce The nonce to use for CREATE function createAddress(uint256 nonce) internal view returns (address) { unchecked { // The amount of bytes needed to represent the nonce @@ -441,7 +452,7 @@ contract Router is IRouterWithoutCollisions { shouldSet := and(valueFits, notPriorSet) } // Carry the existing bitsNeeded value, set bits if should set - bitsNeeded = bitsNeeded + (shouldSet * bits); + bitsNeeded += (shouldSet * bits); } uint256 bytesNeeded = bitsNeeded / 8; @@ -452,9 +463,11 @@ contract Router is IRouterWithoutCollisions { assembly { nonceIsNotString := nonceIsNotStringBool } + // slither-disable-next-line incorrect-exp This is meant to be a xor uint256 nonceIsString = nonceIsNotString ^ 1; // Define the RLP length + // slither-disable-next-line divide-before-multiply uint256 rlpEncodingLen = 23 + (nonceIsString * bytesNeeded); uint256 rlpEncoding = @@ -539,17 +552,17 @@ contract Router is IRouterWithoutCollisions { // If the destination is an address, we perform a direct transfer if (outs[i].destinationType == IRouter.DestinationType.Address) { /* - This may cause a revert if the destination isn't actually a valid address. Serai is + This may cause a revert if the destination isn't actually a valid address. Serai is trusted to not pass a malformed destination, yet if it ever did, it could simply re-sign a corrected batch using this nonce. */ address destination = abi.decode(outs[i].destination, (address)); - success = transferOut(destination, coin, outs[i].amount); + success = transferOut(destination, coin, outs[i].amount, false); } else { // Prepare the transfer uint256 ethValue = 0; if (coin == address(0)) { - // If it's ETH, we transfer the amount with the call + // If it's Ether, we transfer the amount with the call ethValue = outs[i].amount; } else { /* @@ -559,9 +572,11 @@ contract Router is IRouterWithoutCollisions { We use CREATE, not CREATE2, despite the difficulty in calculating the address in-contract, for reasons explained within `createAddress`'s documentation. + + If this is ever borked, the fact we only set an approval allows recovery. */ address nextAddress = createAddress(_smartContractNonce); - success = erc20TransferOut(nextAddress, coin, outs[i].amount); + success = transferOut(nextAddress, coin, outs[i].amount, true); } /* @@ -571,10 +586,10 @@ contract Router is IRouterWithoutCollisions { entire isn't put into a halted state. Since the recipient is a fresh account, this presumably isn't the recipient being - blacklisted (the most likely invariant upon the integration of a popular, standard ERC20). - That means there likely is some invariant with this integration to be resolved later. - Since reaching this invariant state requires an invariant, and for the reasons above, this - is accepted. + blacklisted (the most likely invariant upon the integration of a popular, + otherwise-standard ERC20). That means there likely is some invariant with this integration + to be resolved later. Given our ability to sign new batches with the necessary + corrections, this is accepted. 
*/ if (success) { (IRouter.CodeDestination memory destination) = @@ -609,7 +624,7 @@ contract Router is IRouterWithoutCollisions { emit Batch(nonceUsed, message, outs.length, results); // Transfer the fee to the relayer - transferOut(msg.sender, coin, fee); + transferOut(msg.sender, coin, fee, false); } /// @notice Escapes to a new smart contract @@ -666,6 +681,7 @@ contract Router is IRouterWithoutCollisions { /// @notice Escape coins after the escape hatch has been invoked /// @param coin The coin to escape + // slither-disable-next-line reentrancy-events Out-of-order events aren't an issue here function escape(address coin) external { if (_escapedTo == address(0)) { revert EscapeHatchNotInvoked(); @@ -679,7 +695,12 @@ contract Router is IRouterWithoutCollisions { // Perform the transfer // While this can be re-entered to try escaping our balance twice, the outer call will fail - if (!transferOut(_escapedTo, coin, amount)) { + /* + We don't flag the escape hatch as a contract destination, despite being a contract, as the + escape hatch's invocation is permanent. If the coins do not go through the escape hatch, they + will never go anywhere (ignoring any unspent approvals voided by this action). + */ + if (!transferOut(_escapedTo, coin, amount, false)) { revert EscapeFailed(); } diff --git a/processor/ethereum/router/src/tests/erc20.rs b/processor/ethereum/router/src/tests/erc20.rs index 7f07f935..02dc957e 100644 --- a/processor/ethereum/router/src/tests/erc20.rs +++ b/processor/ethereum/router/src/tests/erc20.rs @@ -88,4 +88,11 @@ impl Erc20 { )); U256::abi_decode(&test.provider.call(&call).await.unwrap(), true).unwrap() } + + pub(crate) async fn router_approval(&self, test: &Test, account: Address) -> U256 { + let call = TransactionRequest::default().to(self.0).input(TransactionInput::new( + abi::TestERC20::allowanceCall::new((test.router.address(), account)).abi_encode().into(), + )); + U256::abi_decode(&test.provider.call(&call).await.unwrap(), true).unwrap() + } } diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index fbdad8cb..f879f181 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -914,10 +914,10 @@ async fn test_erc20_code_out_instruction() { let unused_gas = test.gas_unused_by_calls(&tx).await; assert_eq!(gas_used + unused_gas, gas); - assert_eq!(erc20.balance_of(&test, test.router.address()).await, U256::from(0)); + assert_eq!(erc20.balance_of(&test, test.router.address()).await, U256::from(amount_out)); assert_eq!(erc20.balance_of(&test, tx.recover_signer().unwrap()).await, U256::from(fee)); let deployed = test.router.address().create(1); - assert_eq!(erc20.balance_of(&test, deployed).await, amount_out); + assert_eq!(erc20.router_approval(&test, deployed).await, amount_out); assert_eq!(test.provider.get_code_at(deployed).await.unwrap().to_vec(), true.abi_encode()); } From 17cc10b3f7bf8c853c8d9866d27ac57e6c3f94ef Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 13:01:52 -0500 Subject: [PATCH 356/368] Test Execute result decoding, reentrancy --- .../router/contracts/tests/CreateAddress.sol | 2 +- .../router/contracts/tests/Reentrancy.sol | 17 ++++ processor/ethereum/router/src/tests/mod.rs | 82 ++++++++++++++++--- 3 files changed, 87 insertions(+), 14 deletions(-) create mode 100644 processor/ethereum/router/contracts/tests/Reentrancy.sol diff --git a/processor/ethereum/router/contracts/tests/CreateAddress.sol 
b/processor/ethereum/router/contracts/tests/CreateAddress.sol index 2d092449..6aa57629 100644 --- a/processor/ethereum/router/contracts/tests/CreateAddress.sol +++ b/processor/ethereum/router/contracts/tests/CreateAddress.sol @@ -3,7 +3,7 @@ pragma solidity ^0.8.26; import "Router.sol"; -// Wrap the Router with a contract which exposes the address +// Wrap the Router with a contract which exposes the createAddress function contract CreateAddress is Router { constructor() Router(bytes32(uint256(1))) { } diff --git a/processor/ethereum/router/contracts/tests/Reentrancy.sol b/processor/ethereum/router/contracts/tests/Reentrancy.sol new file mode 100644 index 00000000..979fd74d --- /dev/null +++ b/processor/ethereum/router/contracts/tests/Reentrancy.sol @@ -0,0 +1,17 @@ +// SPDX-License-Identifier: AGPL-3.0-only +pragma solidity ^0.8.26; + +import "Router.sol"; + +// This imports the Router and declares a matching Reentered error for visibility over its selector +contract Reentrancy { + error Reentered(); + + constructor() { + (bool success, bytes memory res) = + msg.sender.call(abi.encodeWithSelector(Router.execute4DE42904.selector, "")); + require(!success); + // We can't compare `bytes memory` so we hash them and compare the hashes + require(keccak256(res) == keccak256(abi.encode(Reentered.selector))); + } +} diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index f879f181..4c21aeaf 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -456,9 +456,12 @@ impl Test { let gas_provided = trace.action.as_call().as_ref().unwrap().gas; let gas_spent = trace.result.as_ref().unwrap().gas_used(); unused_gas += gas_provided - gas_spent; - for _ in 0 .. trace.subtraces { - // Skip the subtraces for this call (such as CREATE) - traces.next().unwrap(); + + let mut subtraces = trace.subtraces; + while subtraces != 0 { + // Skip the subtraces (and their subtraces) for this call (such as CREATE) + subtraces += traces.next().unwrap().trace.subtraces; + subtraces -= 1; } } @@ -774,9 +777,6 @@ async fn test_empty_execute() { } } -// TODO: Test order, length of results -// TODO: Test reentrancy - #[tokio::test] async fn test_eth_address_out_instruction() { let mut test = Test::new().await; @@ -921,6 +921,31 @@ async fn test_erc20_code_out_instruction() { assert_eq!(test.provider.get_code_at(deployed).await.unwrap().to_vec(), true.abi_encode()); } +#[tokio::test] +async fn test_result_decoding() { + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + // Create three OutInstructions, where the last one errors + let out_instructions = OutInstructions::from( + [ + (SeraiEthereumAddress::Address([0; 20]), U256::from(0)), + (SeraiEthereumAddress::Address([0; 20]), U256::from(0)), + (SeraiEthereumAddress::Contract(ContractDeployment::new(0, vec![]).unwrap()), U256::from(0)), + ] + .as_slice(), + ); + + let gas = test.router.execute_gas(Coin::Ether, U256::from(0), &out_instructions); + + // We should decode these in the correct order (not `false, true, true`) + let (_tx, gas_used) = + test.execute(Coin::Ether, U256::from(0), out_instructions, vec![true, true, false]).await; + // We don't check strict equality as we don't know how much gas was used by the reverted call + // (even with the trace), solely that it used less than or equal to the limit + assert!(gas_used <= gas); +} + #[tokio::test] async fn test_escape_hatch() { let mut test = Test::new().await; @@ -1038,10 +1063,41 @@ async fn test_escape_hatch() { } } -/* TODO -
event Batch(uint256 indexed nonce, bytes32 indexed messageHash, bytes results); - error Reentered(); - error EscapeFailed(); - function executeArbitraryCode(bytes memory code) external payable; - function createAddress(uint256 nonce) private view returns (address); -*/ +#[tokio::test] +async fn test_reentrancy() { + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + const BYTECODE: &[u8] = { + const BYTECODE_HEX: &[u8] = include_bytes!(concat!( + env!("OUT_DIR"), + "/serai-processor-ethereum-router/tests/Reentrancy.bin" + )); + const BYTECODE: [u8; BYTECODE_HEX.len() / 2] = + match alloy_core::primitives::hex::const_decode_to_array::<{ BYTECODE_HEX.len() / 2 }>( + BYTECODE_HEX, + ) { + Ok(bytecode) => bytecode, + Err(_) => panic!("Reentrancy.bin did not contain valid hex"), + }; + &BYTECODE + }; + + let out_instructions = OutInstructions::from( + [( + // The Reentrancy contract, in its constructor, will re-enter and verify the proper error is + // returned + SeraiEthereumAddress::Contract(ContractDeployment::new(50_000, BYTECODE.to_vec()).unwrap()), + U256::from(0), + )] + .as_slice(), + ); + + let gas = test.router.execute_gas(Coin::Ether, U256::from(0), &out_instructions); + let (_tx, gas_used) = + test.execute(Coin::Ether, U256::from(0), out_instructions, vec![true]).await; + // Even though this doesn't have failed `OutInstruction`s, our gas accounting is incomplete upon
+ // any failed internal calls. That's fine, as the gas yielded is still the worst-case (which this
+ // isn't a counter-example to) and is validated to be the worst-case, yet it is peculiar + assert!(gas_used <= gas); +} From 0484113254b12578ff1d464a39247367f7c14630 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 13:07:35 -0500 Subject: [PATCH 357/368] Fix the ability for a malicious adversary to snipe ERC20s out via re-entrancy from the ERC20 contract --- .../ethereum/router/contracts/IRouter.sol | 2 ++ .../ethereum/router/contracts/Router.sol | 24 ++++++++++++++++--- processor/ethereum/router/src/tests/mod.rs | 20 ++++++++++++++++ 3 files changed, 43 insertions(+), 3 deletions(-) diff --git a/processor/ethereum/router/contracts/IRouter.sol b/processor/ethereum/router/contracts/IRouter.sol index 772a04f7..1cf61f8e 100644 --- a/processor/ethereum/router/contracts/IRouter.sol +++ b/processor/ethereum/router/contracts/IRouter.sol @@ -63,6 +63,8 @@ interface IRouterWithoutCollisions { /// @notice The call to an ERC20's `transferFrom` failed error TransferFromFailed(); + /// @notice The code to execute wasn't provided by the Router itself + error CodeNotBySelf(); /// @notice A non-reentrant function was re-entered error Reentered(); diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index 25ece4b8..dd0ad6e5 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -406,9 +406,8 @@ contract Router is IRouterWithoutCollisions { arbitrarily called). We accordingly don't need to be worried about return bombs here.
*/ // slither-disable-next-line return-bomb - (bool erc20Success, bytes memory res) = address(coin).call{ gas: ERC20_GAS }( - abi.encodeWithSelector(selector, to, amount) - ); + (bool erc20Success, bytes memory res) = + address(coin).call{ gas: ERC20_GAS }(abi.encodeWithSelector(selector, to, amount)); /* Require there was nothing returned, which is done by some non-standard tokens, or that the @@ -504,6 +503,25 @@ contract Router is IRouterWithoutCollisions { */ /// @param code The code to execute function executeArbitraryCode(bytes memory code) external payable { + /* + execute assumes that from the time it reads `_smartContractNonce` until the time it calls this + function, no mutations to it will occur. If any mutations could occur, it'd lead to a fault + where tokens could be sniped by: + + 1) An out occurring, transferring tokens to an about-to-be-deployed smart contract + 2) The token contract re-entering the Router to deploy a new smart contract which claims the + tokens + 3) The Router then deploying the intended smart contract (ignoring whatever result it may + have) + + This does assume a malicious token, or a token with callbacks which can be set by a malicious + adversary, yet the way to ensure it's a non-issue is to not allow other entities to mutate + `_smartContractNonce`. + */ + if (msg.sender != address(this)) { + revert CodeNotBySelf(); + } + // Because we're creating a contract, increment our nonce _smartContractNonce += 1; diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 4c21aeaf..2f6397cc 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -706,6 +706,26 @@ async fn test_erc20_top_level_transfer_in_instruction() { test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await; } +#[tokio::test] +async fn test_execute_arbitrary_code() { + let test = Test::new().await; + + assert!(matches!( + test + .call_and_decode_err(TxLegacy { + chain_id: None, + nonce: 0, + gas_price: 100_000_000_000, + gas_limit: 1_000_000, + to: test.router.address().into(), + value: U256::ZERO, + input: crate::abi::executeArbitraryCodeCall::new((vec![].into(),)).abi_encode().into(), + }) + .await, + IRouterErrors::CodeNotBySelf(IRouter::CodeNotBySelf {}) + )); +} + // Code which returns true #[rustfmt::skip] fn return_true_code() -> Vec { From 835b5bb06f280ae1a9bc8900eecc4e48d52b7659 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 13:59:11 -0500 Subject: [PATCH 358/368] Split tests across a few files, fuzz generate OutInstructions Tests successful gas estimation even with more complex behaviors. 
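The fuzzed gas assertions below lean on `gas_unused_by_calls`, whose subtrace walk was corrected two commits prior. That walk reduces to the following Rust sketch (illustrative only; it assumes, as the trace format does, that each frame reports solely its direct children):

struct Trace {
  subtraces: usize,
}

fn skip_subtree(traces: &mut impl Iterator<Item = Trace>, root_subtraces: usize) {
  let mut outstanding = root_subtraces;
  while outstanding != 0 {
    // Each consumed child may have children of its own which must also be skipped
    outstanding += traces.next().unwrap().subtraces;
    outstanding -= 1;
  }
}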
--- .../ethereum/router/src/tests/escape_hatch.rs | 172 +++++++ .../router/src/tests/in_instruction.rs | 182 ++++++++ processor/ethereum/router/src/tests/mod.rs | 421 ++++-------------- 3 files changed, 435 insertions(+), 340 deletions(-) create mode 100644 processor/ethereum/router/src/tests/escape_hatch.rs create mode 100644 processor/ethereum/router/src/tests/in_instruction.rs diff --git a/processor/ethereum/router/src/tests/escape_hatch.rs b/processor/ethereum/router/src/tests/escape_hatch.rs new file mode 100644 index 00000000..28be1a64 --- /dev/null +++ b/processor/ethereum/router/src/tests/escape_hatch.rs @@ -0,0 +1,172 @@ +use alloy_core::primitives::{Address, U256}; + +use alloy_consensus::TxLegacy; + +use alloy_provider::Provider; + +use crate::tests::*; + +impl Test { + pub(crate) fn escape_hatch_tx(&self, escape_to: Address) -> TxLegacy { + let msg = Router::escape_hatch_message(self.chain_id, self.state.next_nonce, escape_to); + let sig = sign(self.state.key.unwrap(), &msg); + let mut tx = self.router.escape_hatch(escape_to, &sig); + tx.gas_limit = Router::ESCAPE_HATCH_GAS + 5_000; + tx + } + + pub(crate) async fn escape_hatch(&mut self) { + let mut escape_to = [0; 20]; + OsRng.fill_bytes(&mut escape_to); + let escape_to = Address(escape_to.into()); + + // Set the code of the address to escape to so it isn't flagged as a non-contract + let () = self.provider.raw_request("anvil_setCode".into(), (escape_to, [0])).await.unwrap(); + + let mut tx = self.escape_hatch_tx(escape_to); + tx.gas_price = 100_000_000_000; + let tx = ethereum_primitives::deterministically_sign(tx); + let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; + assert!(receipt.status()); + // This encodes an address which has 12 bytes of padding + assert_eq!( + CalldataAgnosticGas::calculate(tx.tx().input.as_ref(), 12, receipt.gas_used), + Router::ESCAPE_HATCH_GAS + ); + + { + let block = receipt.block_number.unwrap(); + let executed = self.router.executed(block ..= block).await.unwrap(); + assert_eq!(executed.len(), 1); + assert_eq!(executed[0], Executed::EscapeHatch { nonce: self.state.next_nonce, escape_to }); + } + + self.state.next_nonce += 1; + self.state.escaped_to = Some(escape_to); + self.verify_state().await; + } + + pub(crate) fn escape_tx(&self, coin: Coin) -> TxLegacy { + let mut tx = self.router.escape(coin); + tx.gas_limit = 100_000; + tx.gas_price = 100_000_000_000; + tx + } +} + +#[tokio::test] +async fn test_escape_hatch() { + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + // Queue another key so the below test cases can run + test.update_serai_key().await; + + { + // The zero address should be invalid to escape to + assert!(matches!( + test.call_and_decode_err(test.escape_hatch_tx([0; 20].into())).await, + IRouterErrors::InvalidEscapeAddress(IRouter::InvalidEscapeAddress {}) + )); + // Empty addresses should be invalid to escape to + assert!(matches!( + test.call_and_decode_err(test.escape_hatch_tx([1; 20].into())).await, + IRouterErrors::EscapeAddressWasNotAContract(IRouter::EscapeAddressWasNotAContract {}) + )); + // Non-empty addresses without code should be invalid to escape to + let tx = ethereum_primitives::deterministically_sign(TxLegacy { + to: Address([1; 20].into()).into(), + gas_limit: 21_000, + gas_price: 100_000_000_000, + value: U256::from(1), + ..Default::default() + }); + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; + assert!(receipt.status()); + assert!(matches!( + 
test.call_and_decode_err(test.escape_hatch_tx([1; 20].into())).await, + IRouterErrors::EscapeAddressWasNotAContract(IRouter::EscapeAddressWasNotAContract {}) + )); + + // Escaping at this point in time should fail + assert!(matches!( + test.call_and_decode_err(test.router.escape(Coin::Ether)).await, + IRouterErrors::EscapeHatchNotInvoked(IRouter::EscapeHatchNotInvoked {}) + )); + } + + // Invoke the escape hatch + test.escape_hatch().await; + + // Now that the escape hatch has been invoked, all of the following calls should fail + { + assert!(matches!( + test.call_and_decode_err(test.update_serai_key_tx().1).await, + IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) + )); + assert!(matches!( + test.call_and_decode_err(test.confirm_next_serai_key_tx()).await, + IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) + )); + assert!(matches!( + test.call_and_decode_err(test.eth_in_instruction_tx().3).await, + IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) + )); + assert!(matches!( + test + .call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), [].as_slice().into()).1) + .await, + IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) + )); + // We reject further attempts to update the escape hatch to prevent the last key from being + // able to switch from the honest escape hatch to siphoning via a malicious escape hatch (such + // as after the validators represented unstake) + assert!(matches!( + test.call_and_decode_err(test.escape_hatch_tx(test.state.escaped_to.unwrap())).await, + IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) + )); + } + + // Check the escape fn itself + + // ETH + { + let () = test + .provider + .raw_request("anvil_setBalance".into(), (test.router.address(), 1)) + .await + .unwrap(); + let tx = ethereum_primitives::deterministically_sign(test.escape_tx(Coin::Ether)); + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; + assert!(receipt.status()); + + let block = receipt.block_number.unwrap(); + assert_eq!( + test.router.escapes(block ..= block).await.unwrap(), + vec![Escape { coin: Coin::Ether, amount: U256::from(1) }], + ); + + assert_eq!(test.provider.get_balance(test.router.address()).await.unwrap(), U256::from(0)); + assert_eq!( + test.provider.get_balance(test.state.escaped_to.unwrap()).await.unwrap(), + U256::from(1) + ); + } + + // ERC20 + { + let erc20 = Erc20::deploy(&test).await; + let coin = Coin::Erc20(erc20.address()); + let amount = U256::from(1); + erc20.mint(&test, test.router.address(), amount).await; + + let tx = ethereum_primitives::deterministically_sign(test.escape_tx(coin)); + let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; + assert!(receipt.status()); + + let block = receipt.block_number.unwrap(); + assert_eq!(test.router.escapes(block ..= block).await.unwrap(), vec![Escape { coin, amount }],); + assert_eq!(erc20.balance_of(&test, test.router.address()).await, U256::from(0)); + assert_eq!(erc20.balance_of(&test, test.state.escaped_to.unwrap()).await, amount); + } +} diff --git a/processor/ethereum/router/src/tests/in_instruction.rs b/processor/ethereum/router/src/tests/in_instruction.rs new file mode 100644 index 00000000..20ddfd02 --- /dev/null +++ b/processor/ethereum/router/src/tests/in_instruction.rs @@ -0,0 +1,182 @@ +use std::collections::HashSet; + +use alloy_core::primitives::U256; +use alloy_sol_types::SolCall; + +use alloy_consensus::{TxLegacy, Signed}; + +use scale::Encode; +use 
serai_client::{ + primitives::SeraiAddress, + in_instructions::primitives::{ + InInstruction as SeraiInInstruction, RefundableInInstruction, Shorthand, + }, +}; + +use ethereum_primitives::LogIndex; + +use crate::{InInstruction, tests::*}; + +impl Test { + pub(crate) fn in_instruction() -> Shorthand { + Shorthand::Raw(RefundableInInstruction { + origin: None, + instruction: SeraiInInstruction::Transfer(SeraiAddress([0xff; 32])), + }) + } + + pub(crate) fn eth_in_instruction_tx(&self) -> (Coin, U256, Shorthand, TxLegacy) { + let coin = Coin::Ether; + let amount = U256::from(1); + let shorthand = Self::in_instruction(); + + let mut tx = self.router.in_instruction(coin, amount, &shorthand); + tx.gas_limit = 1_000_000; + tx.gas_price = 100_000_000_000; + + (coin, amount, shorthand, tx) + } + + pub(crate) async fn publish_in_instruction_tx( + &self, + tx: Signed, + coin: Coin, + amount: U256, + shorthand: &Shorthand, + ) { + let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; + assert!(receipt.status()); + + let block = receipt.block_number.unwrap(); + + if matches!(coin, Coin::Erc20(_)) { + // If we don't whitelist this token, we shouldn't be yielded an InInstruction + let in_instructions = + self.router.in_instructions_unordered(block ..= block, &HashSet::new()).await.unwrap(); + assert!(in_instructions.is_empty()); + } + + let in_instructions = self + .router + .in_instructions_unordered( + block ..= block, + &if let Coin::Erc20(token) = coin { HashSet::from([token]) } else { HashSet::new() }, + ) + .await + .unwrap(); + assert_eq!(in_instructions.len(), 1); + + let in_instruction_log_index = receipt.inner.logs().iter().find_map(|log| { + (log.topics().first() == Some(&crate::InInstructionEvent::SIGNATURE_HASH)) + .then(|| log.log_index.unwrap()) + }); + // If this isn't an InInstruction event, it'll be a top-level transfer event + let log_index = in_instruction_log_index.unwrap_or(0); + + assert_eq!( + in_instructions[0], + InInstruction { + id: LogIndex { block_hash: *receipt.block_hash.unwrap(), index_within_block: log_index }, + transaction_hash: **tx.hash(), + from: tx.recover_signer().unwrap(), + coin, + amount, + data: shorthand.encode(), + } + ); + } +} + +#[tokio::test] +async fn test_no_in_instruction_before_key() { + let test = Test::new().await; + + // We shouldn't be able to publish `InInstruction`s before publishing a key + let (_coin, _amount, _shorthand, tx) = test.eth_in_instruction_tx(); + assert!(matches!( + test.call_and_decode_err(tx).await, + IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) + )); +} + +#[tokio::test] +async fn test_eth_in_instruction() { + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + let (coin, amount, shorthand, tx) = test.eth_in_instruction_tx(); + + // This should fail if the value mismatches the amount + { + let mut tx = tx.clone(); + tx.value = U256::ZERO; + assert!(matches!( + test.call_and_decode_err(tx).await, + IRouterErrors::AmountMismatchesMsgValue(IRouter::AmountMismatchesMsgValue {}) + )); + } + + let tx = ethereum_primitives::deterministically_sign(tx); + test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await; +} + +#[tokio::test] +async fn test_erc20_router_in_instruction() { + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + let erc20 = Erc20::deploy(&test).await; + + let coin = Coin::Erc20(erc20.address()); + let amount = U256::from(1); + let shorthand = Test::in_instruction(); + + // The provided `in_instruction` 
function will use a top-level transfer for ERC20 InInstructions, + // so we have to manually write this call + let tx = TxLegacy { + chain_id: None, + nonce: 0, + gas_price: 100_000_000_000, + gas_limit: 1_000_000, + to: test.router.address().into(), + value: U256::ZERO, + input: crate::abi::inInstructionCall::new((coin.into(), amount, shorthand.encode().into())) + .abi_encode() + .into(), + }; + + // If no `approve` was granted, this should fail + assert!(matches!( + test.call_and_decode_err(tx.clone()).await, + IRouterErrors::TransferFromFailed(IRouter::TransferFromFailed {}) + )); + + let tx = ethereum_primitives::deterministically_sign(tx); + { + let signer = tx.recover_signer().unwrap(); + erc20.mint(&test, signer, amount).await; + erc20.approve(&test, signer, test.router.address(), amount).await; + } + + test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await; +} + +#[tokio::test] +async fn test_erc20_top_level_transfer_in_instruction() { + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + let erc20 = Erc20::deploy(&test).await; + + let coin = Coin::Erc20(erc20.address()); + let amount = U256::from(1); + let shorthand = Test::in_instruction(); + + let mut tx = test.router.in_instruction(coin, amount, &shorthand); + tx.gas_price = 100_000_000_000; + tx.gas_limit = 1_000_000; + + let tx = ethereum_primitives::deterministically_sign(tx); + erc20.mint(&test, tx.recover_signer().unwrap(), amount).await; + test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await; +} diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index 2f6397cc..bc086c62 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -1,4 +1,4 @@ -use std::{sync::Arc, collections::HashSet}; +use std::sync::Arc; use rand_core::{RngCore, OsRng}; @@ -20,16 +20,8 @@ use alloy_provider::{ use alloy_node_bindings::{Anvil, AnvilInstance}; -use scale::Encode; -use serai_client::{ - networks::ethereum::{ContractDeployment, Address as SeraiEthereumAddress}, - primitives::SeraiAddress, - in_instructions::primitives::{ - InInstruction as SeraiInInstruction, RefundableInInstruction, Shorthand, - }, -}; +use serai_client::networks::ethereum::{ContractDeployment, Address as SeraiEthereumAddress}; -use ethereum_primitives::LogIndex; use ethereum_schnorr::{PublicKey, Signature}; use ethereum_deployer::Deployer; @@ -37,16 +29,18 @@ use crate::{ _irouter_abi::IRouterWithoutCollisions::{ self as IRouter, IRouterWithoutCollisionsErrors as IRouterErrors, }, - Coin, InInstruction, OutInstructions, Router, Executed, Escape, + Coin, OutInstructions, Router, Executed, Escape, }; mod constants; -mod create_address; - mod erc20; use erc20::Erc20; +mod create_address; +mod in_instruction; +mod escape_hatch; + pub(crate) fn test_key() -> (Scalar, PublicKey) { loop { let key = Scalar::random(&mut OsRng); @@ -126,7 +120,13 @@ impl Test { async fn new() -> Self { // The following is explicitly only evaluated against the cancun network upgrade at this time - let anvil = Anvil::new().arg("--hardfork").arg("cancun").arg("--tracing").spawn(); + let anvil = Anvil::new() + .arg("--hardfork") + .arg("cancun") + .arg("--tracing") + .arg("--no-request-size-limit") + .arg("--disable-block-gas-limit") + .spawn(); let provider = Arc::new(RootProvider::new( ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true), @@ -267,74 +267,6 @@ impl Test { self.verify_state().await; } - fn in_instruction() -> 
Shorthand { - Shorthand::Raw(RefundableInInstruction { - origin: None, - instruction: SeraiInInstruction::Transfer(SeraiAddress([0xff; 32])), - }) - } - - fn eth_in_instruction_tx(&self) -> (Coin, U256, Shorthand, TxLegacy) { - let coin = Coin::Ether; - let amount = U256::from(1); - let shorthand = Self::in_instruction(); - - let mut tx = self.router.in_instruction(coin, amount, &shorthand); - tx.gas_limit = 1_000_000; - tx.gas_price = 100_000_000_000; - - (coin, amount, shorthand, tx) - } - - async fn publish_in_instruction_tx( - &self, - tx: Signed, - coin: Coin, - amount: U256, - shorthand: &Shorthand, - ) { - let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; - assert!(receipt.status()); - - let block = receipt.block_number.unwrap(); - - if matches!(coin, Coin::Erc20(_)) { - // If we don't whitelist this token, we shouldn't be yielded an InInstruction - let in_instructions = - self.router.in_instructions_unordered(block ..= block, &HashSet::new()).await.unwrap(); - assert!(in_instructions.is_empty()); - } - - let in_instructions = self - .router - .in_instructions_unordered( - block ..= block, - &if let Coin::Erc20(token) = coin { HashSet::from([token]) } else { HashSet::new() }, - ) - .await - .unwrap(); - assert_eq!(in_instructions.len(), 1); - - let in_instruction_log_index = receipt.inner.logs().iter().find_map(|log| { - (log.topics().first() == Some(&crate::InInstructionEvent::SIGNATURE_HASH)) - .then(|| log.log_index.unwrap()) - }); - // If this isn't an InInstruction event, it'll be a top-level transfer event - let log_index = in_instruction_log_index.unwrap_or(0); - - assert_eq!( - in_instructions[0], - InInstruction { - id: LogIndex { block_hash: *receipt.block_hash.unwrap(), index_within_block: log_index }, - transaction_hash: **tx.hash(), - from: tx.recover_signer().unwrap(), - coin, - amount, - data: shorthand.encode(), - } - ); - } - fn execute_tx( &self, coin: Coin, @@ -371,7 +303,7 @@ impl Test { results: Vec, ) -> (Signed, u64) { let (message_hash, mut tx) = self.execute_tx(coin, fee, out_instructions); - tx.gas_limit = 1_000_000; + tx.gas_limit = 100_000_000; tx.gas_price = 100_000_000_000; let tx = ethereum_primitives::deterministically_sign(tx); let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; @@ -396,52 +328,6 @@ impl Test { (tx.clone(), receipt.gas_used) } - fn escape_hatch_tx(&self, escape_to: Address) -> TxLegacy { - let msg = Router::escape_hatch_message(self.chain_id, self.state.next_nonce, escape_to); - let sig = sign(self.state.key.unwrap(), &msg); - let mut tx = self.router.escape_hatch(escape_to, &sig); - tx.gas_limit = Router::ESCAPE_HATCH_GAS + 5_000; - tx - } - - async fn escape_hatch(&mut self) { - let mut escape_to = [0; 20]; - OsRng.fill_bytes(&mut escape_to); - let escape_to = Address(escape_to.into()); - - // Set the code of the address to escape to so it isn't flagged as a non-contract - let () = self.provider.raw_request("anvil_setCode".into(), (escape_to, [0])).await.unwrap(); - - let mut tx = self.escape_hatch_tx(escape_to); - tx.gas_price = 100_000_000_000; - let tx = ethereum_primitives::deterministically_sign(tx); - let receipt = ethereum_test_primitives::publish_tx(&self.provider, tx.clone()).await; - assert!(receipt.status()); - // This encodes an address which has 12 bytes of padding - assert_eq!( - CalldataAgnosticGas::calculate(tx.tx().input.as_ref(), 12, receipt.gas_used), - Router::ESCAPE_HATCH_GAS - ); - - { - let block = receipt.block_number.unwrap(); - let 
executed = self.router.executed(block ..= block).await.unwrap(); - assert_eq!(executed.len(), 1); - assert_eq!(executed[0], Executed::EscapeHatch { nonce: self.state.next_nonce, escape_to }); - } - - self.state.next_nonce += 1; - self.state.escaped_to = Some(escape_to); - self.verify_state().await; - } - - fn escape_tx(&self, coin: Coin) -> TxLegacy { - let mut tx = self.router.escape(coin); - tx.gas_limit = 100_000; - tx.gas_price = 100_000_000_000; - tx - } - async fn gas_unused_by_calls(&self, tx: &Signed) -> u64 { let mut unused_gas = 0; @@ -612,100 +498,6 @@ async fn test_update_serai_key() { test.confirm_next_serai_key().await; } -#[tokio::test] -async fn test_no_in_instruction_before_key() { - let test = Test::new().await; - - // We shouldn't be able to publish `InInstruction`s before publishing a key - let (_coin, _amount, _shorthand, tx) = test.eth_in_instruction_tx(); - assert!(matches!( - test.call_and_decode_err(tx).await, - IRouterErrors::SeraiKeyWasNone(IRouter::SeraiKeyWasNone {}) - )); -} - -#[tokio::test] -async fn test_eth_in_instruction() { - let mut test = Test::new().await; - test.confirm_next_serai_key().await; - - let (coin, amount, shorthand, tx) = test.eth_in_instruction_tx(); - - // This should fail if the value mismatches the amount - { - let mut tx = tx.clone(); - tx.value = U256::ZERO; - assert!(matches!( - test.call_and_decode_err(tx).await, - IRouterErrors::AmountMismatchesMsgValue(IRouter::AmountMismatchesMsgValue {}) - )); - } - - let tx = ethereum_primitives::deterministically_sign(tx); - test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await; -} - -#[tokio::test] -async fn test_erc20_router_in_instruction() { - let mut test = Test::new().await; - test.confirm_next_serai_key().await; - - let erc20 = Erc20::deploy(&test).await; - - let coin = Coin::Erc20(erc20.address()); - let amount = U256::from(1); - let shorthand = Test::in_instruction(); - - // The provided `in_instruction` function will use a top-level transfer for ERC20 InInstructions, - // so we have to manually write this call - let tx = TxLegacy { - chain_id: None, - nonce: 0, - gas_price: 100_000_000_000, - gas_limit: 1_000_000, - to: test.router.address().into(), - value: U256::ZERO, - input: crate::abi::inInstructionCall::new((coin.into(), amount, shorthand.encode().into())) - .abi_encode() - .into(), - }; - - // If no `approve` was granted, this should fail - assert!(matches!( - test.call_and_decode_err(tx.clone()).await, - IRouterErrors::TransferFromFailed(IRouter::TransferFromFailed {}) - )); - - let tx = ethereum_primitives::deterministically_sign(tx); - { - let signer = tx.recover_signer().unwrap(); - erc20.mint(&test, signer, amount).await; - erc20.approve(&test, signer, test.router.address(), amount).await; - } - - test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await; -} - -#[tokio::test] -async fn test_erc20_top_level_transfer_in_instruction() { - let mut test = Test::new().await; - test.confirm_next_serai_key().await; - - let erc20 = Erc20::deploy(&test).await; - - let coin = Coin::Erc20(erc20.address()); - let amount = U256::from(1); - let shorthand = Test::in_instruction(); - - let mut tx = test.router.in_instruction(coin, amount, &shorthand); - tx.gas_price = 100_000_000_000; - tx.gas_limit = 1_000_000; - - let tx = ethereum_primitives::deterministically_sign(tx); - erc20.mint(&test, tx.recover_signer().unwrap(), amount).await; - test.publish_in_instruction_tx(tx, coin, amount, &shorthand).await; -} - #[tokio::test] async fn test_execute_arbitrary_code() { 
let test = Test::new().await; @@ -966,123 +758,6 @@ async fn test_result_decoding() { assert!(gas_used <= gas); } -#[tokio::test] -async fn test_escape_hatch() { - let mut test = Test::new().await; - test.confirm_next_serai_key().await; - - // Queue another key so the below test cases can run - test.update_serai_key().await; - - { - // The zero address should be invalid to escape to - assert!(matches!( - test.call_and_decode_err(test.escape_hatch_tx([0; 20].into())).await, - IRouterErrors::InvalidEscapeAddress(IRouter::InvalidEscapeAddress {}) - )); - // Empty addresses should be invalid to escape to - assert!(matches!( - test.call_and_decode_err(test.escape_hatch_tx([1; 20].into())).await, - IRouterErrors::EscapeAddressWasNotAContract(IRouter::EscapeAddressWasNotAContract {}) - )); - // Non-empty addresses without code should be invalid to escape to - let tx = ethereum_primitives::deterministically_sign(TxLegacy { - to: Address([1; 20].into()).into(), - gas_limit: 21_000, - gas_price: 100_000_000_000, - value: U256::from(1), - ..Default::default() - }); - let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; - assert!(receipt.status()); - assert!(matches!( - test.call_and_decode_err(test.escape_hatch_tx([1; 20].into())).await, - IRouterErrors::EscapeAddressWasNotAContract(IRouter::EscapeAddressWasNotAContract {}) - )); - - // Escaping at this point in time should fail - assert!(matches!( - test.call_and_decode_err(test.router.escape(Coin::Ether)).await, - IRouterErrors::EscapeHatchNotInvoked(IRouter::EscapeHatchNotInvoked {}) - )); - } - - // Invoke the escape hatch - test.escape_hatch().await; - - // Now that the escape hatch has been invoked, all of the following calls should fail - { - assert!(matches!( - test.call_and_decode_err(test.update_serai_key_tx().1).await, - IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) - )); - assert!(matches!( - test.call_and_decode_err(test.confirm_next_serai_key_tx()).await, - IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) - )); - assert!(matches!( - test.call_and_decode_err(test.eth_in_instruction_tx().3).await, - IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) - )); - assert!(matches!( - test - .call_and_decode_err(test.execute_tx(Coin::Ether, U256::from(0), [].as_slice().into()).1) - .await, - IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) - )); - // We reject further attempts to update the escape hatch to prevent the last key from being - // able to switch from the honest escape hatch to siphoning via a malicious escape hatch (such - // as after the validators represented unstake) - assert!(matches!( - test.call_and_decode_err(test.escape_hatch_tx(test.state.escaped_to.unwrap())).await, - IRouterErrors::EscapeHatchInvoked(IRouter::EscapeHatchInvoked {}) - )); - } - - // Check the escape fn itself - - // ETH - { - let () = test - .provider - .raw_request("anvil_setBalance".into(), (test.router.address(), 1)) - .await - .unwrap(); - let tx = ethereum_primitives::deterministically_sign(test.escape_tx(Coin::Ether)); - let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; - assert!(receipt.status()); - - let block = receipt.block_number.unwrap(); - assert_eq!( - test.router.escapes(block ..= block).await.unwrap(), - vec![Escape { coin: Coin::Ether, amount: U256::from(1) }], - ); - - assert_eq!(test.provider.get_balance(test.router.address()).await.unwrap(), U256::from(0)); - assert_eq!( - 
test.provider.get_balance(test.state.escaped_to.unwrap()).await.unwrap(), - U256::from(1) - ); - } - - // ERC20 - { - let erc20 = Erc20::deploy(&test).await; - let coin = Coin::Erc20(erc20.address()); - let amount = U256::from(1); - erc20.mint(&test, test.router.address(), amount).await; - - let tx = ethereum_primitives::deterministically_sign(test.escape_tx(coin)); - let receipt = ethereum_test_primitives::publish_tx(&test.provider, tx.clone()).await; - assert!(receipt.status()); - - let block = receipt.block_number.unwrap(); - assert_eq!(test.router.escapes(block ..= block).await.unwrap(), vec![Escape { coin, amount }],); - assert_eq!(erc20.balance_of(&test, test.router.address()).await, U256::from(0)); - assert_eq!(erc20.balance_of(&test, test.state.escaped_to.unwrap()).await, amount); - } -} - #[tokio::test] async fn test_reentrancy() { let mut test = Test::new().await; @@ -1121,3 +796,69 @@ async fn test_reentrancy() { // (which this isn't a counter-example to) and is validated to be the worst-case, but is peculiar assert!(gas_used <= gas); } + +#[tokio::test] +async fn fuzz_test_out_instructions_gas() { + for _ in 0 .. 10 { + let mut test = Test::new().await; + test.confirm_next_serai_key().await; + + // Generate a random OutInstructions + let mut out_instructions = vec![]; + let mut prior_addresses = vec![]; + for _ in 0 .. (OsRng.next_u64() % 50) { + let amount_out = U256::from(OsRng.next_u64() % 2); + if (OsRng.next_u64() % 2) == 1 { + let mut code = return_true_code(); + + // Extend this with random data to make it somewhat random, despite the constant returned + // code (though the estimator will never run the initcode and realize that) + let ext = vec![0; usize::try_from(OsRng.next_u64() % 400).unwrap()]; + code.extend(&ext); + + out_instructions.push(( + SeraiEthereumAddress::Contract(ContractDeployment::new(100_000, ext).unwrap()), + amount_out, + )); + } else { + // Occasionally reuse addresses (cold/warm slots) + let address = if (!prior_addresses.is_empty()) && ((OsRng.next_u64() % 2) == 1) { + prior_addresses[usize::try_from( + OsRng.next_u64() % u64::try_from(prior_addresses.len()).unwrap(), + ) + .unwrap()] + } else { + let mut rand_address = [0; 20]; + OsRng.fill_bytes(&mut rand_address); + prior_addresses.push(rand_address); + rand_address + }; + out_instructions.push((SeraiEthereumAddress::Address(address), amount_out)); + } + } + let out_instructions = OutInstructions::from(out_instructions.as_slice()); + + // Randomly decide the coin + let coin = if (OsRng.next_u64() % 2) == 1 { + let () = test + .provider + .raw_request("anvil_setBalance".into(), (test.router.address(), 1_000_000_000)) + .await + .unwrap(); + Coin::Ether + } else { + let erc20 = Erc20::deploy(&test).await; + erc20.mint(&test, test.router.address(), U256::from(1_000_000_000)).await; + Coin::Erc20(erc20.address()) + }; + + let fee_per_gas = U256::from(OsRng.next_u64() % 10); + let gas = test.router.execute_gas(coin, fee_per_gas, &out_instructions); + let fee = U256::from(gas) * fee_per_gas; + // All of these should have succeeded + let (tx, gas_used) = + test.execute(coin, fee, out_instructions.clone(), vec![true; out_instructions.0.len()]).await; + let unused_gas = test.gas_unused_by_calls(&tx).await; + assert_eq!(gas_used + unused_gas, gas); + } +} From f004c8726f94633413d913f29d5a80327822b021 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 15:38:44 -0500 Subject: [PATCH 359/368] Remove unused library bytecode from ethereum-schnorr-contract --- Cargo.lock | 1 - 
networks/ethereum/schnorr/Cargo.toml | 2 -- networks/ethereum/schnorr/src/lib.rs | 12 ------------ 3 files changed, 15 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 014a6d40..42401961 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -2656,7 +2656,6 @@ dependencies = [ "alloy-simple-request-transport", "alloy-sol-types", "build-solidity-contracts", - "const-hex", "group", "k256", "rand_core", diff --git a/networks/ethereum/schnorr/Cargo.toml b/networks/ethereum/schnorr/Cargo.toml index 5883a453..42797bb7 100644 --- a/networks/ethereum/schnorr/Cargo.toml +++ b/networks/ethereum/schnorr/Cargo.toml @@ -16,8 +16,6 @@ rustdoc-args = ["--cfg", "docsrs"] workspace = true [dependencies] -const-hex = { version = "1", default-features = false, features = ["std", "core-error"] } - subtle = { version = "2", default-features = false, features = ["std"] } sha3 = { version = "0.10", default-features = false, features = ["std"] } group = { version = "0.13", default-features = false, features = ["alloc"] } diff --git a/networks/ethereum/schnorr/src/lib.rs b/networks/ethereum/schnorr/src/lib.rs index ec6f6277..4e2d6883 100644 --- a/networks/ethereum/schnorr/src/lib.rs +++ b/networks/ethereum/schnorr/src/lib.rs @@ -3,18 +3,6 @@ #![deny(missing_docs)] #![allow(non_snake_case)] -/// The initialization bytecode of the Schnorr library. -pub const BYTECODE: &[u8] = { - const BYTECODE_HEX: &[u8] = - include_bytes!(concat!(env!("OUT_DIR"), "/ethereum-schnorr-contract/Schnorr.bin")); - const BYTECODE: [u8; BYTECODE_HEX.len() / 2] = - match const_hex::const_decode_to_array::<{ BYTECODE_HEX.len() / 2 }>(BYTECODE_HEX) { - Ok(bytecode) => bytecode, - Err(_) => panic!("Schnorr.bin did not contain valid hex"), - }; - &BYTECODE -}; - mod public_key; pub use public_key::PublicKey; mod signature; From fa0dadc9bd081e6cef66d4e77612f2d52a5a9363 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 15:39:06 -0500 Subject: [PATCH 360/368] Rename Deployer bytecode to initcode --- processor/ethereum/deployer/src/lib.rs | 16 ++++++++-------- processor/ethereum/deployer/src/tests.rs | 4 ++-- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/processor/ethereum/deployer/src/lib.rs b/processor/ethereum/deployer/src/lib.rs index 64ca0d2b..10140303 100644 --- a/processor/ethereum/deployer/src/lib.rs +++ b/processor/ethereum/deployer/src/lib.rs @@ -27,15 +27,15 @@ mod abi { alloy_sol_macro::sol!("contracts/Deployer.sol"); } -const BYTECODE: &[u8] = { - const BYTECODE_HEX: &[u8] = +const INITCODE: &[u8] = { + const INITCODE_HEX: &[u8] = include_bytes!(concat!(env!("OUT_DIR"), "/serai-processor-ethereum-deployer/Deployer.bin")); - const BYTECODE: [u8; BYTECODE_HEX.len() / 2] = - match hex::const_decode_to_array::<{ BYTECODE_HEX.len() / 2 }>(BYTECODE_HEX) { - Ok(bytecode) => bytecode, + const INITCODE: [u8; INITCODE_HEX.len() / 2] = + match hex::const_decode_to_array::<{ INITCODE_HEX.len() / 2 }>(INITCODE_HEX) { + Ok(initcode) => initcode, Err(_) => panic!("Deployer.bin did not contain valid hex"), }; - &BYTECODE + &INITCODE }; /// The Deployer contract for the Serai Router contract. @@ -52,7 +52,7 @@ impl Deployer { /// funded for this transaction to be submitted. This account has no known private key to anyone /// so ETH sent can be neither misappropriated nor returned. 
pub fn deployment_tx() -> Signed { - let bytecode = Bytes::from_static(BYTECODE); + let initcode = Bytes::from_static(INITCODE); // Legacy transactions are used to ensure the widest possible degree of support across EVMs let tx = TxLegacy { @@ -87,7 +87,7 @@ impl Deployer { gas_limit: 300_698, to: TxKind::Create, value: U256::ZERO, - input: bytecode, + input: initcode, }; ethereum_primitives::deterministically_sign(tx) diff --git a/processor/ethereum/deployer/src/tests.rs b/processor/ethereum/deployer/src/tests.rs index ba1e75ae..6e4570ff 100644 --- a/processor/ethereum/deployer/src/tests.rs +++ b/processor/ethereum/deployer/src/tests.rs @@ -40,7 +40,7 @@ async fn test_deployer() { } // Deploy the deployer with the deployer - let mut deploy_tx = Deployer::deploy_tx(crate::BYTECODE.to_vec()); + let mut deploy_tx = Deployer::deploy_tx(crate::INITCODE.to_vec()); deploy_tx.gas_price = 100_000_000_000u128; deploy_tx.gas_limit = 1_000_000; { @@ -53,7 +53,7 @@ async fn test_deployer() { { let deployer = Deployer::new(provider.clone()).await.unwrap().unwrap(); let deployed_deployer = deployer - .find_deployment(ethereum_primitives::keccak256(crate::BYTECODE)) + .find_deployment(ethereum_primitives::keccak256(crate::INITCODE)) .await .unwrap() .unwrap(); From 19422de231592690ac324edb57e32ba1517d1db4 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Mon, 27 Jan 2025 15:39:55 -0500 Subject: [PATCH 361/368] Ensure a non-zero fee in the Router OutInstruction gas fuzz test --- processor/ethereum/erc20/src/tests.rs | 2 ++ processor/ethereum/router/contracts/Router.sol | 2 ++ processor/ethereum/router/src/gas.rs | 2 ++ processor/ethereum/router/src/tests/mod.rs | 9 +++++++-- 4 files changed, 13 insertions(+), 2 deletions(-) diff --git a/processor/ethereum/erc20/src/tests.rs b/processor/ethereum/erc20/src/tests.rs index 2218e19b..037c7862 100644 --- a/processor/ethereum/erc20/src/tests.rs +++ b/processor/ethereum/erc20/src/tests.rs @@ -11,3 +11,5 @@ fn selector_collisions() { crate::abi::SeraiIERC20::transferFromWithInInstruction00081948E0Call::SELECTOR ); } + +// This is primarily tested via serai-processor-ethereum-router diff --git a/processor/ethereum/router/contracts/Router.sol b/processor/ethereum/router/contracts/Router.sol index dd0ad6e5..03eaac0e 100644 --- a/processor/ethereum/router/contracts/Router.sol +++ b/processor/ethereum/router/contracts/Router.sol @@ -441,6 +441,8 @@ contract Router is IRouterWithoutCollisions { unchecked { // The amount of bytes needed to represent the nonce uint256 bitsNeeded = 0; + // This only iterates up to 64-bits as this will never exceed 2**64 as a matter of + // practicality for (uint256 bits = 0; bits <= 64; bits += 8) { bool valueFits = nonce < (uint256(1) << bits); bool notPriorSet = bitsNeeded == 0; diff --git a/processor/ethereum/router/src/gas.rs b/processor/ethereum/router/src/gas.rs index 28dd799a..266ad586 100644 --- a/processor/ethereum/router/src/gas.rs +++ b/processor/ethereum/router/src/gas.rs @@ -225,6 +225,8 @@ impl Router { } /// The worst-case gas cost for a legacy transaction which executes this batch. + /// + /// This assumes the fee will be non-zero. 
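+  ///
+  /// (A zero fee may cause the gas actually used to diverge from this worst-case estimate.)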
pub fn execute_gas(&self, coin: Coin, fee_per_gas: U256, outs: &OutInstructions) -> u64 { // Unfortunately, we can't cache this in self, despite the following code being written such // that a common EVM instance could be used, as revm's types aren't Send/Sync and we expect the diff --git a/processor/ethereum/router/src/tests/mod.rs b/processor/ethereum/router/src/tests/mod.rs index bc086c62..403e871e 100644 --- a/processor/ethereum/router/src/tests/mod.rs +++ b/processor/ethereum/router/src/tests/mod.rs @@ -836,6 +836,7 @@ async fn fuzz_test_out_instructions_gas() { out_instructions.push((SeraiEthereumAddress::Address(address), amount_out)); } } + let out_instructions_original = out_instructions.clone(); let out_instructions = OutInstructions::from(out_instructions.as_slice()); // Randomly decide the coin @@ -852,13 +853,17 @@ async fn fuzz_test_out_instructions_gas() { Coin::Erc20(erc20.address()) }; - let fee_per_gas = U256::from(OsRng.next_u64() % 10); + let fee_per_gas = U256::from(1) + U256::from(OsRng.next_u64() % 10); let gas = test.router.execute_gas(coin, fee_per_gas, &out_instructions); let fee = U256::from(gas) * fee_per_gas; // All of these should have succeeded let (tx, gas_used) = test.execute(coin, fee, out_instructions.clone(), vec![true; out_instructions.0.len()]).await; let unused_gas = test.gas_unused_by_calls(&tx).await; - assert_eq!(gas_used + unused_gas, gas); + assert_eq!( + gas_used + unused_gas, + gas, + "{coin:?} {fee_per_gas:?} {out_instructions_original:?}" + ); } } From 2bc880e372f2e7ce81114cc16f2e8c655dc5d593 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 29 Jan 2025 22:29:40 -0500 Subject: [PATCH 362/368] Downstream the eVRF libraries from FCMP++ Also adds no-std support to secq256k1 and embedwards25519. --- Cargo.lock | 12 +++ crypto/ciphersuite/src/dalek.rs | 6 ++ crypto/ciphersuite/src/ed448.rs | 6 ++ crypto/ciphersuite/src/kp256.rs | 18 +++- crypto/ciphersuite/src/lib.rs | 9 ++ crypto/dkg/Cargo.toml | 2 +- crypto/dkg/src/evrf/mod.rs | 7 +- crypto/dkg/src/evrf/proof.rs | 48 ++++------ crypto/dkg/src/tests/evrf/proof.rs | 12 ++- crypto/dkg/src/tests/promote.rs | 4 + crypto/evrf/circuit-abstraction/Cargo.toml | 14 ++- crypto/evrf/circuit-abstraction/src/lib.rs | 47 +++++---- crypto/evrf/divisors/Cargo.toml | 26 ++--- crypto/evrf/divisors/src/lib.rs | 73 +++++++------- crypto/evrf/divisors/src/poly.rs | 5 +- crypto/evrf/ec-gadgets/Cargo.toml | 14 ++- crypto/evrf/ec-gadgets/src/dlog.rs | 31 +++--- crypto/evrf/ec-gadgets/src/lib.rs | 1 + crypto/evrf/embedwards25519/Cargo.toml | 21 ++-- crypto/evrf/embedwards25519/src/lib.rs | 23 +++++ .../evrf/generalized-bulletproofs/Cargo.toml | 20 ++-- .../src/arithmetic_circuit_proof.rs | 96 ++++++++----------- .../src/inner_product.rs | 14 ++- .../evrf/generalized-bulletproofs/src/lib.rs | 46 +++++---- .../generalized-bulletproofs/src/lincomb.rs | 42 +------- .../src/point_vector.rs | 1 + .../src/scalar_vector.rs | 1 + .../src/tests/arithmetic_circuit_proof.rs | 44 ++------- .../src/tests/inner_product.rs | 6 +- .../src/transcript.rs | 87 ++++++++++------- crypto/evrf/secq256k1/Cargo.toml | 21 ++-- crypto/evrf/secq256k1/src/lib.rs | 23 +++++ crypto/ff-group-tests/Cargo.toml | 2 +- tests/no-std/Cargo.toml | 7 ++ tests/no-std/src/lib.rs | 7 ++ 35 files changed, 456 insertions(+), 340 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 42401961..68a2657f 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -2470,6 +2470,7 @@ dependencies = [ "hex", "pasta_curves", "rand_core", + "std-shims", "subtle", "zeroize", ] @@ 
-2570,6 +2571,7 @@ dependencies = [ "hex-literal", "rand_core", "rustversion", + "std-shims", "subtle", "zeroize", ] @@ -3293,6 +3295,7 @@ dependencies = [ "flexible-transcript", "multiexp", "rand_core", + "std-shims", "zeroize", ] @@ -3302,6 +3305,7 @@ version = "0.1.0" dependencies = [ "ciphersuite", "generalized-bulletproofs", + "std-shims", "zeroize", ] @@ -3312,6 +3316,7 @@ dependencies = [ "ciphersuite", "generalized-bulletproofs-circuit-abstraction", "generic-array 1.1.1", + "std-shims", ] [[package]] @@ -8822,6 +8827,7 @@ dependencies = [ "k256", "rand_core", "rustversion", + "std-shims", "subtle", "zeroize", ] @@ -9432,11 +9438,17 @@ dependencies = [ "dalek-ff-group", "dkg", "dleq", + "ec-divisors", + "embedwards25519", "flexible-transcript", + "generalized-bulletproofs", + "generalized-bulletproofs-circuit-abstraction", + "generalized-bulletproofs-ec-gadgets", "minimal-ed448", "monero-wallet-util", "multiexp", "schnorr-signatures", + "secq256k1", ] [[package]] diff --git a/crypto/ciphersuite/src/dalek.rs b/crypto/ciphersuite/src/dalek.rs index bd9c70c1..a04195b2 100644 --- a/crypto/ciphersuite/src/dalek.rs +++ b/crypto/ciphersuite/src/dalek.rs @@ -28,6 +28,12 @@ macro_rules! dalek_curve { $Point::generator() } + fn reduce_512(mut scalar: [u8; 64]) -> Self::F { + let res = Scalar::from_bytes_mod_order_wide(&scalar); + scalar.zeroize(); + res + } + fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F { Scalar::from_hash(Sha512::new_with_prefix(&[dst, data].concat())) } diff --git a/crypto/ciphersuite/src/ed448.rs b/crypto/ciphersuite/src/ed448.rs index 8a927251..0b19ffa5 100644 --- a/crypto/ciphersuite/src/ed448.rs +++ b/crypto/ciphersuite/src/ed448.rs @@ -66,6 +66,12 @@ impl Ciphersuite for Ed448 { Point::generator() } + fn reduce_512(mut scalar: [u8; 64]) -> Self::F { + let res = Self::hash_to_F(b"Ciphersuite-reduce_512", &scalar); + scalar.zeroize(); + res + } + fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F { Scalar::wide_reduce(Self::H::digest([dst, data].concat()).as_ref().try_into().unwrap()) } diff --git a/crypto/ciphersuite/src/kp256.rs b/crypto/ciphersuite/src/kp256.rs index 37fdb2e4..a1f64ae4 100644 --- a/crypto/ciphersuite/src/kp256.rs +++ b/crypto/ciphersuite/src/kp256.rs @@ -6,7 +6,7 @@ use group::ff::PrimeField; use elliptic_curve::{ generic_array::GenericArray, - bigint::{NonZero, CheckedAdd, Encoding, U384}, + bigint::{NonZero, CheckedAdd, Encoding, U384, U512}, hash2curve::{Expander, ExpandMsg, ExpandMsgXmd}, }; @@ -31,6 +31,22 @@ macro_rules! kp_curve { $lib::ProjectivePoint::GENERATOR } + fn reduce_512(scalar: [u8; 64]) -> Self::F { + let mut modulus = [0; 64]; + modulus[32 ..].copy_from_slice(&(Self::F::ZERO - Self::F::ONE).to_bytes()); + let modulus = U512::from_be_slice(&modulus).checked_add(&U512::ONE).unwrap(); + + let mut wide = + U512::from_be_bytes(scalar).rem(&NonZero::new(modulus).unwrap()).to_be_bytes(); + + let mut array = *GenericArray::from_slice(&wide[32 ..]); + let res = $lib::Scalar::from_repr(array).unwrap(); + + wide.zeroize(); + array.zeroize(); + res + } + fn hash_to_F(dst: &[u8], msg: &[u8]) -> Self::F { // While one of these two libraries does support directly hashing to the Scalar field, the // other doesn't. 
While that's probably an oversight, this is a universally working method
diff --git a/crypto/ciphersuite/src/lib.rs b/crypto/ciphersuite/src/lib.rs
index e5ea6645..6519a413 100644
--- a/crypto/ciphersuite/src/lib.rs
+++ b/crypto/ciphersuite/src/lib.rs
@@ -62,6 +62,12 @@ pub trait Ciphersuite:
   // While group does provide this in its API, privacy coins may want to use a custom basepoint
   fn generator() -> Self::G;
 
+  /// Reduce 512 bits into a uniform scalar.
+  ///
+  /// If 512 bits is insufficient to perform a reduction into a uniform scalar, the ciphersuite
+  /// will perform a hash to sample the necessary bits.
+  fn reduce_512(scalar: [u8; 64]) -> Self::F;
+
   /// Hash the provided domain-separation tag and message to a scalar. Ciphersuites MAY naively
   /// prefix the tag to the message, enabling transposition between the two. Accordingly, this
   /// function should NOT be used in any scheme where one tag is a valid substring of another
@@ -99,6 +105,9 @@ pub trait Ciphersuite:
   }
 
   /// Read a canonical point from something implementing std::io::Read.
+  ///
+  /// The provided implementation is safe so long as `GroupEncoding::to_bytes` always returns a
+  /// canonical serialization.
   #[cfg(any(feature = "alloc", feature = "std"))]
   #[allow(non_snake_case)]
   fn read_G(reader: &mut R) -> io::Result {
diff --git a/crypto/dkg/Cargo.toml b/crypto/dkg/Cargo.toml
index 1ff3efb8..a15a1d47 100644
--- a/crypto/dkg/Cargo.toml
+++ b/crypto/dkg/Cargo.toml
@@ -54,7 +54,7 @@ rand = { version = "0.8", default-features = false, features = ["std"] }
 ciphersuite = { path = "../ciphersuite", default-features = false, features = ["ristretto"] }
 generalized-bulletproofs = { path = "../evrf/generalized-bulletproofs", features = ["tests"] }
 ec-divisors = { path = "../evrf/divisors", features = ["pasta"] }
-pasta_curves = "0.5"
+pasta_curves = { git = "https://github.com/kayabaNerve/pasta_curves", rev = "a46b5be95cacbff54d06aad8d3bbcba42e05d616" }
 
 [features]
 std = [
diff --git a/crypto/dkg/src/evrf/mod.rs b/crypto/dkg/src/evrf/mod.rs
index 343c6141..d31f33f7 100644
--- a/crypto/dkg/src/evrf/mod.rs
+++ b/crypto/dkg/src/evrf/mod.rs
@@ -85,7 +85,7 @@ use ciphersuite::{
 };
 use multiexp::multiexp_vartime;
 
-use generalized_bulletproofs::arithmetic_circuit_proof::*;
+use generalized_bulletproofs::{Generators, arithmetic_circuit_proof::*};
 use ec_divisors::DivisorCurve;
 
 use crate::{Participant, ThresholdParams, Interpolation, ThresholdCore, ThresholdKeys};
@@ -277,6 +277,7 @@ impl EvrfDkg {
     if evrf_public_keys.iter().any(|key| bool::from(key.is_identity())) {
       Err(EvrfError::PublicKeyWasIdentity)?;
     };
+    // This also checks the private key is not 0
     let evrf_public_key = ::generator() * evrf_private_key.deref();
     if !evrf_public_keys.iter().any(|key| *key == evrf_public_key) {
       Err(EvrfError::NotAParticipant)?;
@@ -359,7 +360,7 @@ impl EvrfDkg {
 
     let transcript = Self::initial_transcript(context, evrf_public_keys, t);
 
-    let mut evrf_verifier = generators.0.batch_verifier();
+    let mut evrf_verifier = Generators::batch_verifier();
     for (i, participation) in participations {
       let evrf_public_key = evrf_public_keys[usize::from(u16::from(*i)) - 1];
 
@@ -395,7 +396,7 @@ impl EvrfDkg {
       if faulty.contains(i) {
         continue;
       }
-      let mut evrf_verifier = generators.0.batch_verifier();
+      let mut evrf_verifier = Generators::batch_verifier();
       Evrf::::verify(
         rng,
         &generators.0,
diff --git a/crypto/dkg/src/evrf/proof.rs b/crypto/dkg/src/evrf/proof.rs
index 8eb3ab00..9c16fec6 100644
--- a/crypto/dkg/src/evrf/proof.rs
+++ b/crypto/dkg/src/evrf/proof.rs
@@ -129,15 
+129,11 @@ impl Evrf {
   /// Read a Variable from a theoretical vector commitment tape
   fn read_one_from_tape(generators_to_use: usize, start: &mut usize) -> Variable {
     // Each commitment has twice as many variables as generators in use
-    let commitment = *start / (2 * generators_to_use);
+    let commitment = *start / generators_to_use;
     // The index will be less than the amount of generators in use, as half are left and half are
     // right
     let index = *start % generators_to_use;
-    let res = if (*start / generators_to_use) % 2 == 0 {
-      Variable::CG { commitment, index }
-    } else {
-      Variable::CH { commitment, index }
-    };
+    let res = Variable::CG { commitment, index };
     *start += 1;
     res
   }
@@ -202,8 +198,8 @@ impl Evrf {
       padded_pow_of_2 <<= 1;
     }
     // This may be as small as 16, which would create an excessive amount of vector commitments
-    // We set a floor of 1024 rows for bandwidth reasons
-    padded_pow_of_2.max(1024)
+    // We set a floor of 2048 rows for bandwidth reasons
+    padded_pow_of_2.max(2048)
   };
   (expected_muls, generators_to_use)
 }
@@ -213,7 +209,7 @@ impl Evrf {
     evrf_public_key: (C::F, C::F),
     coefficients: usize,
     ecdh_commitments: &[[(C::F, C::F); 2]],
-    generator_tables: &[GeneratorTable],
+    generator_tables: &[&GeneratorTable],
     circuit: &mut Circuit,
     transcript: &mut impl Transcript,
   ) {
@@ -376,8 +372,10 @@ impl Evrf {
     let evrf_public_key;
     let mut actual_coefficients = Vec::with_capacity(coefficients);
     {
+      // This is checked at a higher level
       let dlog =
-        ScalarDecomposition::<::F>::new(**evrf_private_key);
+        ScalarDecomposition::<::F>::new(**evrf_private_key)
+          .expect("eVRF private key was zero");
       let points = Self::transcript_to_points(transcript, coefficients);
       // Start by pushing the discrete logarithm onto the tape
@@ -431,7 +429,8 @@ impl Evrf {
       }
     }
     let dlog =
-      ScalarDecomposition::<::F>::new(ecdh_private_key);
+      ScalarDecomposition::<::F>::new(ecdh_private_key)
+        .expect("ECDH private key was zero");
     let ecdh_commitment = ::generator() * ecdh_private_key;
     ecdh_commitments.push(ecdh_commitment);
     ecdh_commitments_xy.last_mut().unwrap()[j] =
@@ -471,15 +470,10 @@ impl Evrf {
       Self::muls_and_generators_to_use(coefficients, ecdh_public_keys.len());
 
     let mut vector_commitments =
-      Vec::with_capacity(vector_commitment_tape.len().div_ceil(2 * generators_to_use));
-    for chunk in vector_commitment_tape.chunks(2 * generators_to_use) {
+      Vec::with_capacity(vector_commitment_tape.len().div_ceil(generators_to_use));
+    for chunk in vector_commitment_tape.chunks(generators_to_use) {
      let g_values = chunk[.. 
generators_to_use.min(chunk.len())].to_vec().into(); - let h_values = chunk[generators_to_use.min(chunk.len()) ..].to_vec().into(); - vector_commitments.push(PedersenVectorCommitment { - g_values, - h_values, - mask: C::F::random(&mut *rng), - }); + vector_commitments.push(PedersenVectorCommitment { g_values, mask: C::F::random(&mut *rng) }); } vector_commitment_tape.zeroize(); @@ -499,7 +493,7 @@ impl Evrf { .iter() .map(|commitment| { commitment - .commit(generators.g_bold_slice(), generators.h_bold_slice(), generators.h()) + .commit(generators.g_bold_slice(), generators.h()) .ok_or(AcError::NotEnoughGenerators) }) .collect::>()?, @@ -518,7 +512,7 @@ impl Evrf { evrf_public_key, coefficients, &ecdh_commitments_xy, - &generator_tables, + &generator_tables.iter().collect::>(), &mut circuit, &mut transcript, ); @@ -543,7 +537,7 @@ impl Evrf { let mut agg_weights = Vec::with_capacity(commitments.len()); agg_weights.push(C::F::ONE); while agg_weights.len() < commitments.len() { - agg_weights.push(transcript.challenge::()); + agg_weights.push(transcript.challenge::()); } let mut x = commitments .iter() @@ -554,7 +548,7 @@ impl Evrf { // Do a Schnorr PoK for the randomness of the aggregated Pedersen commitment let mut r = C::F::random(&mut *rng); transcript.push_point(generators.h() * r); - let c = transcript.challenge::(); + let c = transcript.challenge::(); transcript.push_scalar(r + (c * x)); r.zeroize(); x.zeroize(); @@ -615,7 +609,7 @@ impl Evrf { let coeffs_vc_variables = dlog_len + ((1 + (2 * coefficients)) * dlog_proof_len); let ecdhs_vc_variables = ((2 * ecdh_public_keys.len()) * dlog_len) + ((2 * 2 * ecdh_public_keys.len()) * dlog_proof_len); - let vcs = (coeffs_vc_variables + ecdhs_vc_variables).div_ceil(2 * generators_to_use); + let vcs = (coeffs_vc_variables + ecdhs_vc_variables).div_ceil(generators_to_use); let all_commitments = transcript.read_commitments(vcs, coefficients + ecdh_public_keys.len()).map_err(|_| ())?; @@ -642,7 +636,7 @@ impl Evrf { ::G::to_xy(evrf_public_key).ok_or(())?, coefficients, &ecdh_keys_xy, - &generator_tables, + &generator_tables.iter().collect::>(), &mut circuit, &mut transcript, ); @@ -665,7 +659,7 @@ impl Evrf { let mut agg_weights = Vec::with_capacity(commitments.len()); agg_weights.push(C::F::ONE); while agg_weights.len() < commitments.len() { - agg_weights.push(transcript.challenge::()); + agg_weights.push(transcript.challenge::()); } let sum_points = @@ -677,7 +671,7 @@ impl Evrf { #[allow(non_snake_case)] let R = transcript.read_point::().map_err(|_| ())?; - let c = transcript.challenge::(); + let c = transcript.challenge::(); let s = transcript.read_scalar::().map_err(|_| ())?; // Doesn't batch verify this as we can't access the internals of the GBP batch verifier diff --git a/crypto/dkg/src/tests/evrf/proof.rs b/crypto/dkg/src/tests/evrf/proof.rs index 5750c6c4..cc2fb7f7 100644 --- a/crypto/dkg/src/tests/evrf/proof.rs +++ b/crypto/dkg/src/tests/evrf/proof.rs @@ -15,7 +15,7 @@ use ciphersuite::{ }; use pasta_curves::{Ep, Eq, Fp, Fq}; -use generalized_bulletproofs::tests::generators; +use generalized_bulletproofs::{Generators, tests::generators}; use generalized_bulletproofs_ec_gadgets::DiscreteLogParameters; use crate::evrf::proof::*; @@ -35,6 +35,9 @@ impl Ciphersuite for Pallas { // This is solely test code so it's fine Self::F::from_uniform_bytes(&Self::H::digest([dst, msg].concat()).into()) } + fn reduce_512(scalar: [u8; 64]) -> Self::F { + Self::F::from_uniform_bytes(&scalar) + } } #[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)] @@ 
-52,6 +55,9 @@ impl Ciphersuite for Vesta { // This is solely test code so it's fine Self::F::from_uniform_bytes(&Self::H::digest([dst, msg].concat()).into()) } + fn reduce_512(scalar: [u8; 64]) -> Self::F { + Self::F::from_uniform_bytes(&scalar) + } } pub struct VestaParams; @@ -68,7 +74,7 @@ impl EvrfCurve for Pallas { } fn evrf_proof_test() { - let generators = generators(1024); + let generators = generators(2048); let vesta_private_key = Zeroizing::new(::F::random(&mut OsRng)); let ecdh_public_keys = [ ::G::random(&mut OsRng), @@ -81,7 +87,7 @@ fn evrf_proof_test() { println!("Proving time: {:?}", time.elapsed()); let time = Instant::now(); - let mut verifier = generators.batch_verifier(); + let mut verifier = Generators::batch_verifier(); Evrf::::verify( &mut OsRng, &generators, diff --git a/crypto/dkg/src/tests/promote.rs b/crypto/dkg/src/tests/promote.rs index 99c00433..242f085b 100644 --- a/crypto/dkg/src/tests/promote.rs +++ b/crypto/dkg/src/tests/promote.rs @@ -28,6 +28,10 @@ impl Ciphersuite for AltGenerator { C::G::generator() * ::hash_to_F(b"DKG Promotion Test", b"generator") } + fn reduce_512(scalar: [u8; 64]) -> Self::F { + ::reduce_512(scalar) + } + fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F { ::hash_to_F(dst, data) } diff --git a/crypto/evrf/circuit-abstraction/Cargo.toml b/crypto/evrf/circuit-abstraction/Cargo.toml index ec2767fe..64d4758c 100644 --- a/crypto/evrf/circuit-abstraction/Cargo.toml +++ b/crypto/evrf/circuit-abstraction/Cargo.toml @@ -3,19 +3,25 @@ name = "generalized-bulletproofs-circuit-abstraction" version = "0.1.0" description = "An abstraction for arithmetic circuits over Generalized Bulletproofs" license = "MIT" -repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/circuit-abstraction" +repository = "https://github.com/serai-dex/serai/tree/develop/crypto/fcmps/circuit-abstraction" authors = ["Luke Parker "] keywords = ["bulletproofs", "circuit"] edition = "2021" -rust-version = "1.80" +rust-version = "1.69" [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs"] [dependencies] +std-shims = { path = "../../../common/std-shims", version = "^0.1.1", default-features = false } + zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } -ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] } +ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false } -generalized-bulletproofs = { path = "../generalized-bulletproofs" } +generalized-bulletproofs = { path = "../generalized-bulletproofs", default-features = false } + +[features] +std = ["std-shims/std", "zeroize/std", "ciphersuite/std", "generalized-bulletproofs/std"] +default = ["std"] diff --git a/crypto/evrf/circuit-abstraction/src/lib.rs b/crypto/evrf/circuit-abstraction/src/lib.rs index 9971480d..8a4b826b 100644 --- a/crypto/evrf/circuit-abstraction/src/lib.rs +++ b/crypto/evrf/circuit-abstraction/src/lib.rs @@ -1,14 +1,14 @@ #![cfg_attr(docsrs, feature(doc_auto_cfg))] #![doc = include_str!("../README.md")] +#![cfg_attr(not(feature = "std"), no_std)] #![deny(missing_docs)] #![allow(non_snake_case)] +use std_shims::{vec, vec::Vec}; + use zeroize::{Zeroize, ZeroizeOnDrop}; -use ciphersuite::{ - group::ff::{Field, PrimeField}, - Ciphersuite, -}; +use ciphersuite::{group::ff::Field, Ciphersuite}; use generalized_bulletproofs::{ ScalarVector, PedersenCommitment, PedersenVectorCommitment, ProofGenerators, @@ -26,16 +26,28 @@ pub trait Transcript { /// 
/// It is the caller's responsibility to have properly transcripted all variables prior to /// sampling this challenge. - fn challenge(&mut self) -> F; + fn challenge(&mut self) -> C::F; + + /// Sample a challenge as a byte array. + /// + /// It is the caller's responsibility to have properly transcripted all variables prior to + /// sampling this challenge. + fn challenge_bytes(&mut self) -> [u8; 64]; } impl Transcript for ProverTranscript { - fn challenge(&mut self) -> F { - self.challenge() + fn challenge(&mut self) -> C::F { + self.challenge::() + } + fn challenge_bytes(&mut self) -> [u8; 64] { + self.challenge_bytes() } } impl Transcript for VerifierTranscript<'_> { - fn challenge(&mut self) -> F { - self.challenge() + fn challenge(&mut self) -> C::F { + self.challenge::() + } + fn challenge_bytes(&mut self) -> [u8; 64] { + self.challenge_bytes() } } @@ -64,7 +76,6 @@ impl Circuit { } /// Create an instance to prove satisfaction of a circuit with. - // TODO: Take the transcript here #[allow(clippy::type_complexity)] pub fn prove( vector_commitments: Vec>, @@ -78,14 +89,13 @@ impl Circuit { } /// Create an instance to verify a proof with. - // TODO: Take the transcript here pub fn verify() -> Self { Self { muls: 0, constraints: vec![], prover: None } } /// Evaluate a linear combination. /// - /// Yields WL aL + WR aR + WO aO + WCG CG + WCH CH + WV V + c. + /// Yields WL aL + WR aR + WO aO + WCG CG + WV V + c. /// /// May panic if the linear combination references non-existent terms. /// @@ -107,11 +117,6 @@ impl Circuit { res += C.g_values[*j] * weight; } } - for (WCH, C) in lincomb.WCH().iter().zip(&prover.C) { - for (j, weight) in WCH { - res += C.h_values[*j] * weight; - } - } for (index, weight) in lincomb.WV() { res += prover.V[*index].value * weight; } @@ -176,13 +181,13 @@ impl Circuit { // We can't deconstruct the witness as it implements Drop (per ZeroizeOnDrop) // Accordingly, we take the values within it and move forward with those let mut aL = vec![]; - std::mem::swap(&mut prover.aL, &mut aL); + core::mem::swap(&mut prover.aL, &mut aL); let mut aR = vec![]; - std::mem::swap(&mut prover.aR, &mut aR); + core::mem::swap(&mut prover.aR, &mut aR); let mut C = vec![]; - std::mem::swap(&mut prover.C, &mut C); + core::mem::swap(&mut prover.C, &mut C); let mut V = vec![]; - std::mem::swap(&mut prover.V, &mut V); + core::mem::swap(&mut prover.V, &mut V); ArithmeticCircuitWitness::new(ScalarVector::from(aL), ScalarVector::from(aR), C, V) }) .transpose()?; diff --git a/crypto/evrf/divisors/Cargo.toml b/crypto/evrf/divisors/Cargo.toml index a5e0663c..af6da971 100644 --- a/crypto/evrf/divisors/Cargo.toml +++ b/crypto/evrf/divisors/Cargo.toml @@ -3,35 +3,39 @@ name = "ec-divisors" version = "0.1.0" description = "A library for calculating elliptic curve divisors" license = "MIT" -repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/divisors" +repository = "https://github.com/serai-dex/serai/tree/develop/crypto/divisors" authors = ["Luke Parker "] keywords = ["ciphersuite", "ff", "group"] edition = "2021" -rust-version = "1.71" +rust-version = "1.69" [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs"] [dependencies] -rand_core = { version = "0.6", default-features = false } -zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] } +std-shims = { path = "../../../common/std-shims", version = "^0.1.1", default-features = false } -subtle = { version = "2", default-features = false, features = ["std"] } 
-ff = { version = "0.13", default-features = false, features = ["std", "bits"] }
+rand_core = { version = "0.6", default-features = false }
+zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
+
+subtle = { version = "2", default-features = false }
+ff = { version = "0.13", default-features = false, features = ["bits"] }
 group = { version = "0.13", default-features = false }
 
-hex = { version = "0.4", optional = true }
-dalek-ff-group = { path = "../../dalek-ff-group", features = ["std"], optional = true }
-pasta_curves = { version = "0.5", default-features = false, features = ["bits", "alloc"], optional = true }
+hex = { version = "0.4", default-features = false, optional = true }
+dalek-ff-group = { path = "../../dalek-ff-group", default-features = false, optional = true }
+pasta_curves = { version = "0.5", git = "https://github.com/kayabaNerve/pasta_curves.git", rev = "a46b5be95cacbff54d06aad8d3bbcba42e05d616", default-features = false, features = ["bits", "alloc"], optional = true }
 
 [dev-dependencies]
 rand_core = { version = "0.6", features = ["getrandom"] }
 hex = "0.4"
 
 dalek-ff-group = { path = "../../dalek-ff-group", features = ["std"] }
-pasta_curves = { version = "0.5", default-features = false, features = ["bits", "alloc"] }
+pasta_curves = { version = "0.5", git = "https://github.com/kayabaNerve/pasta_curves.git", rev = "a46b5be95cacbff54d06aad8d3bbcba42e05d616", default-features = false, features = ["bits", "alloc"] }
 
 [features]
-ed25519 = ["hex", "dalek-ff-group"]
+std = ["std-shims/std", "zeroize/std", "subtle/std", "ff/std", "dalek-ff-group?/std"]
+ed25519 = ["hex/alloc", "dalek-ff-group"]
 pasta = ["pasta_curves"]
+default = ["std"]
diff --git a/crypto/evrf/divisors/src/lib.rs b/crypto/evrf/divisors/src/lib.rs
index f4bcc927..8f6325da 100644
--- a/crypto/evrf/divisors/src/lib.rs
+++ b/crypto/evrf/divisors/src/lib.rs
@@ -1,8 +1,11 @@
 #![cfg_attr(docsrs, feature(doc_auto_cfg))]
 #![doc = include_str!("../README.md")]
+#![cfg_attr(not(feature = "std"), no_std)]
 #![deny(missing_docs)]
 #![allow(non_snake_case)]
 
+use std_shims::{vec, vec::Vec};
+
 use subtle::{Choice, ConstantTimeEq, ConstantTimeGreater, ConditionallySelectable};
 use zeroize::{Zeroize, ZeroizeOnDrop};
 
@@ -18,7 +21,7 @@ pub use poly::Poly;
 mod tests;
 
 /// A curve usable with this library.
-pub trait DivisorCurve: Group + ConstantTimeEq + ConditionallySelectable {
+pub trait DivisorCurve: Group + ConstantTimeEq + ConditionallySelectable + Zeroize {
   /// An element of the field this curve is defined over.
   type FieldElement: Zeroize + PrimeField + ConditionallySelectable;
 
@@ -54,6 +57,8 @@ pub trait DivisorCurve: Group + ConstantTimeEq + ConditionallySelectable {
   /// Convert a point to its x and y coordinates.
   ///
   /// Returns None if passed the point at infinity.
+  ///
+  /// This function may run in variable time if the point is the identity.
   fn to_xy(point: Self) -> Option<(Self::FieldElement, Self::FieldElement)>;
 }
 
@@ -271,8 +276,16 @@ pub struct ScalarDecomposition {
 }
 
 impl ScalarDecomposition {
-  /// Decompose a scalar.
-  pub fn new(scalar: F) -> Self {
+  /// Decompose a non-zero scalar.
+  ///
+  /// Returns `None` if the scalar is zero.
+  ///
+  /// This function is constant time if the scalar is non-zero.
+  pub fn new(scalar: F) -> Option {
+    if bool::from(scalar.is_zero()) {
+      None?;
+    }
+
     /*
       We need the sum of the coefficients to equal F::NUM_BITS. The scalar's bits will be less
      than F::NUM_BITS. Accordingly, we need to increment the sum of the coefficients without
Accordingly, we need to increment the sum of the coefficients without @@ -400,7 +413,12 @@ impl ScalarDecomposition { } debug_assert!(bool::from(decomposition.iter().sum::().ct_eq(&num_bits))); - ScalarDecomposition { scalar, decomposition } + Some(ScalarDecomposition { scalar, decomposition }) + } + + /// The scalar. + pub fn scalar(&self) -> &F { + &self.scalar } /// The decomposition of the scalar. @@ -414,7 +432,7 @@ impl ScalarDecomposition { /// /// This function executes in constant time with regards to the scalar. /// - /// This function MAY panic if this scalar is zero. + /// This function MAY panic if the generator is the point at infinity. pub fn scalar_mul_divisor>( &self, mut generator: C, @@ -430,37 +448,19 @@ impl ScalarDecomposition { divisor_points[0] = -generator * self.scalar; // Write the decomposition - let mut write_to: u32 = 1; + let mut write_above: u64 = 0; for coefficient in &self.decomposition { - let mut coefficient = *coefficient; - // Iterate over the maximum amount of iters for this value to be constant time regardless of - // any branch prediction algorithms - for _ in 0 .. ::NUM_BITS { - // Write the generator to the slot we're supposed to - /* - Without this loop, we'd increment this dependent on the distribution within the - decomposition. If the distribution is bottom-heavy, we won't access the tail of - `divisor_points` for a while, risking it being ejected out of the cache (causing a cache - miss which may not occur with a top-heavy distribution which quickly moves to the tail). - - This is O(log2(NUM_BITS) ** 3) though, as this the third loop, which is horrific. - */ - for i in 1 ..= ::NUM_BITS { - divisor_points[i as usize] = - <_>::conditional_select(&divisor_points[i as usize], &generator, i.ct_eq(&write_to)); - } - // If the coefficient isn't zero, increment write_to (so we don't overwrite this generator - // when it should be there) - let coefficient_not_zero = !coefficient.ct_eq(&0); - write_to = <_>::conditional_select(&write_to, &(write_to + 1), coefficient_not_zero); - // Subtract one from the coefficient, if it's not zero and won't underflow - coefficient = - <_>::conditional_select(&coefficient, &coefficient.wrapping_sub(1), coefficient_not_zero); + // Write the generator to every slot except the slots we have already written to. + for i in 1 ..= (::NUM_BITS as u64) { + divisor_points[i as usize].conditional_assign(&generator, i.ct_gt(&write_above)); } + + // Increase the next write start by the coefficient. 
+ write_above += coefficient; generator = generator.double(); } - // Create a divisor out of all points except the last point which is solely scratch + // Create a divisor out of the points let res = new_divisor(&divisor_points).unwrap(); divisor_points.zeroize(); res @@ -511,6 +511,7 @@ mod pasta { #[cfg(any(test, feature = "ed25519"))] mod ed25519 { + use subtle::{Choice, ConditionallySelectable}; use group::{ ff::{Field, PrimeField}, Group, GroupEncoding, @@ -558,9 +559,13 @@ mod ed25519 { ((D * edwards_y_sq) + Self::FieldElement::ONE).invert().unwrap()) .sqrt() .unwrap(); - if u8::from(bool::from(edwards_x.is_odd())) != x_is_odd { - edwards_x = -edwards_x; - } + + // Negate the x coordinate if the sign doesn't match + edwards_x = <_>::conditional_select( + &edwards_x, + &-edwards_x, + edwards_x.is_odd() ^ Choice::from(x_is_odd), + ); // Calculate the x and y coordinates for Wei25519 let edwards_y_plus_one = Self::FieldElement::ONE + edwards_y; diff --git a/crypto/evrf/divisors/src/poly.rs b/crypto/evrf/divisors/src/poly.rs index 8d99aef2..4ade0f79 100644 --- a/crypto/evrf/divisors/src/poly.rs +++ b/crypto/evrf/divisors/src/poly.rs @@ -1,4 +1,5 @@ use core::ops::{Add, Neg, Sub, Mul, Rem}; +use std_shims::{vec, vec::Vec}; use subtle::{Choice, ConstantTimeEq, ConstantTimeGreater, ConditionallySelectable}; use zeroize::{Zeroize, ZeroizeOnDrop}; @@ -257,7 +258,7 @@ impl + Zeroize + PrimeField> Poly { self.zero_coefficient = F::ZERO; // Move the x coefficients - std::mem::swap(&mut self.yx_coefficients[power_of_y - 1], &mut self.x_coefficients); + core::mem::swap(&mut self.yx_coefficients[power_of_y - 1], &mut self.x_coefficients); self.x_coefficients = vec![]; self @@ -564,7 +565,7 @@ impl + Zeroize + PrimeField> Poly { quotient = conditional_select_poly( quotient, // If the dividing coefficient was for y**0 x**0, we return the poly scaled by its inverse - self.clone() * denominator_dividing_coefficient_inv, + self * denominator_dividing_coefficient_inv, denominator_dividing_coefficient.ct_eq(&CoefficientIndex { y_pow: 0, x_pow: 0 }), ); remainder = conditional_select_poly( diff --git a/crypto/evrf/ec-gadgets/Cargo.toml b/crypto/evrf/ec-gadgets/Cargo.toml index dad62a93..f29cc4c4 100644 --- a/crypto/evrf/ec-gadgets/Cargo.toml +++ b/crypto/evrf/ec-gadgets/Cargo.toml @@ -3,19 +3,25 @@ name = "generalized-bulletproofs-ec-gadgets" version = "0.1.0" description = "Gadgets for working with an embedded Elliptic Curve in a Generalized Bulletproofs circuit" license = "MIT" -repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/ec-gadgets" +repository = "https://github.com/serai-dex/serai/tree/develop/crypto/fcmps/ec-gadgets" authors = ["Luke Parker "] keywords = ["bulletproofs", "circuit", "divisors"] edition = "2021" -rust-version = "1.80" +rust-version = "1.69" [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs"] [dependencies] +std-shims = { path = "../../../common/std-shims", version = "^0.1.1", default-features = false } + generic-array = { version = "1", default-features = false, features = ["alloc"] } -ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] } +ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false } -generalized-bulletproofs-circuit-abstraction = { path = "../circuit-abstraction" } +generalized-bulletproofs-circuit-abstraction = { path = "../circuit-abstraction", default-features = false } + +[features] +std = ["std-shims/std", "ciphersuite/std", 
"generalized-bulletproofs-circuit-abstraction/std"] +default = ["std"] diff --git a/crypto/evrf/ec-gadgets/src/dlog.rs b/crypto/evrf/ec-gadgets/src/dlog.rs index ef4b8c83..d124e07f 100644 --- a/crypto/evrf/ec-gadgets/src/dlog.rs +++ b/crypto/evrf/ec-gadgets/src/dlog.rs @@ -1,4 +1,5 @@ use core::fmt; +use std_shims::{vec, vec::Vec}; use ciphersuite::{ group::ff::{Field, PrimeField, BatchInverter}, @@ -10,11 +11,6 @@ use generalized_bulletproofs_circuit_abstraction::*; use crate::*; /// Parameters for a discrete logarithm proof. -/// -/// This isn't required to be implemented by the Field/Group/Ciphersuite, solely a struct, to -/// enable parameterization of discrete log proofs to the bitlength of the discrete logarithm. -/// While that may be F::NUM_BITS, a discrete log proof a for a full scalar, it could also be 64, -/// a discrete log proof for a u64 (such as if opening a Pedersen commitment in-circuit). pub trait DiscreteLogParameters { /// The amount of bits used to represent a scalar. type ScalarBits: ArrayLength; @@ -30,8 +26,8 @@ pub trait DiscreteLogParameters { /// The amount of y x**i coefficients in a divisor. /// - /// This is the amount of points in a divisor (the amount of bits in a scalar, plus one) plus - /// one, divided by two, minus two. + /// This is the amount of points in a divisor (the amount of bits in a scalar, plus one) divided + /// by two, minus two. type YxCoefficients: ArrayLength; } @@ -106,8 +102,6 @@ pub struct Divisor { /// exceeding trivial complexity. pub y: Variable, /// The coefficients for the `y**1 x**i` terms of the polynomial. - // This subtraction enforces the divisor to have at least 4 points which is acceptable. - // TODO: Double check these constants pub yx: GenericArray, /// The coefficients for the `x**i` terms of the polynomial, skipping x**1. /// @@ -324,7 +318,7 @@ pub trait EcDlogGadgets { &self, transcript: &mut T, curve: &CurveSpec, - generators: &[GeneratorTable], + generators: &[&GeneratorTable], ) -> (DiscreteLogChallenge, Vec>); /// Prove this point has the specified discrete logarithm over the specified generator. 
@@ -355,12 +349,14 @@ impl EcDlogGadgets for Circuit {
     &self,
     transcript: &mut T,
     curve: &CurveSpec,
-    generators: &[GeneratorTable],
+    generators: &[&GeneratorTable],
   ) -> (DiscreteLogChallenge, Vec>) {
     // Get the challenge points
-    // TODO: Implement a proper hash to curve
+    let sign_of_points = transcript.challenge_bytes();
+    let sign_of_point_0 = (sign_of_points[0] & 1) == 1;
+    let sign_of_point_1 = ((sign_of_points[0] >> 1) & 1) == 1;
     let (c0_x, c0_y) = loop {
-      let c0_x: C::F = transcript.challenge();
+      let c0_x = transcript.challenge::();
       let Some(c0_y) =
         Option::::from(((c0_x.square() * c0_x) + (curve.a * c0_x) + curve.b).sqrt())
       else {
@@ -368,17 +364,16 @@ impl EcDlogGadgets for Circuit {
         continue;
       };
       // Takes the even y coordinate as to not be dependent on whatever root the above sqrt
       // happens to return
-      // TODO: Randomly select which to take
-      break (c0_x, if bool::from(c0_y.is_odd()) { -c0_y } else { c0_y });
+      break (c0_x, if bool::from(c0_y.is_odd()) != sign_of_point_0 { -c0_y } else { c0_y });
     };
     let (c1_x, c1_y) = loop {
-      let c1_x: C::F = transcript.challenge();
+      let c1_x = transcript.challenge::();
       let Some(c1_y) =
         Option::::from(((c1_x.square() * c1_x) + (curve.a * c1_x) + curve.b).sqrt())
       else {
         continue;
       };
-      break (c1_x, if bool::from(c1_y.is_odd()) { -c1_y } else { c1_y });
+      break (c1_x, if bool::from(c1_y.is_odd()) != sign_of_point_1 { -c1_y } else { c1_y });
     };
 
     // mmadd-1998-cmo
@@ -483,7 +478,7 @@ impl EcDlogGadgets for Circuit {
       let arg_iter = arg_iter.chain(dlog.iter());
       for variable in arg_iter {
         debug_assert!(
-          matches!(variable, Variable::CG { .. } | Variable::CH { .. } | Variable::V(_)),
+          matches!(variable, Variable::CG { .. } | Variable::V(_)),
           "discrete log proofs requires all arguments belong to commitments",
         );
       }
diff --git a/crypto/evrf/ec-gadgets/src/lib.rs b/crypto/evrf/ec-gadgets/src/lib.rs
index 463eedd6..e9fb57fb 100644
--- a/crypto/evrf/ec-gadgets/src/lib.rs
+++ b/crypto/evrf/ec-gadgets/src/lib.rs
@@ -1,5 +1,6 @@
 #![cfg_attr(docsrs, feature(doc_auto_cfg))]
 #![doc = include_str!("../README.md")]
+#![cfg_attr(not(feature = "std"), no_std)]
 #![deny(missing_docs)]
 #![allow(non_snake_case)]
 
diff --git a/crypto/evrf/embedwards25519/Cargo.toml b/crypto/evrf/embedwards25519/Cargo.toml
index bca1f3c2..e45d06c0 100644
--- a/crypto/evrf/embedwards25519/Cargo.toml
+++ b/crypto/evrf/embedwards25519/Cargo.toml
@@ -17,20 +17,22 @@ rustdoc-args = ["--cfg", "docsrs"]
 rustversion = "1"
 hex-literal = { version = "0.4", default-features = false }
 
-rand_core = { version = "0.6", default-features = false, features = ["std"] }
+std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false, optional = true }
 
-zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] }
-subtle = { version = "^2.4", default-features = false, features = ["std"] }
+rand_core = { version = "0.6", default-features = false }
+
+zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
+subtle = { version = "^2.4", default-features = false }
 
 generic-array = { version = "1", default-features = false }
 
 crypto-bigint = { version = "0.5", default-features = false, features = ["zeroize"] }
 dalek-ff-group = { path = "../../dalek-ff-group", version = "0.4", default-features = false }
 
-blake2 = { version = "0.10", default-features = false, features = ["std"] }
-ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] }
-ec-divisors = { path = "../divisors" }
-generalized-bulletproofs-ec-gadgets = { path = "../ec-gadgets" } +blake2 = { version = "0.10", default-features = false } +ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false } +ec-divisors = { path = "../divisors", default-features = false } +generalized-bulletproofs-ec-gadgets = { path = "../ec-gadgets", default-features = false } [dev-dependencies] hex = "0.4" @@ -38,3 +40,8 @@ hex = "0.4" rand_core = { version = "0.6", features = ["std"] } ff-group-tests = { path = "../../ff-group-tests" } + +[features] +alloc = ["std-shims", "zeroize/alloc", "ciphersuite/alloc"] +std = ["std-shims/std", "rand_core/std", "zeroize/std", "subtle/std", "blake2/std", "ciphersuite/std", "ec-divisors/std", "generalized-bulletproofs-ec-gadgets/std"] +default = ["std"] diff --git a/crypto/evrf/embedwards25519/src/lib.rs b/crypto/evrf/embedwards25519/src/lib.rs index 858f4ada..c3ee6e1d 100644 --- a/crypto/evrf/embedwards25519/src/lib.rs +++ b/crypto/evrf/embedwards25519/src/lib.rs @@ -1,5 +1,9 @@ #![cfg_attr(docsrs, feature(doc_auto_cfg))] #![doc = include_str!("../README.md")] +#![cfg_attr(not(feature = "std"), no_std)] + +#[cfg(any(feature = "alloc", feature = "std"))] +use std_shims::io::{self, Read}; use generic_array::typenum::{Sum, Diff, Quot, U, U1, U2}; use ciphersuite::group::{ff::PrimeField, Group}; @@ -33,10 +37,29 @@ impl ciphersuite::Ciphersuite for Embedwards25519 { Point::generator() } + fn reduce_512(scalar: [u8; 64]) -> Self::F { + Scalar::wide_reduce(scalar) + } + fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F { use blake2::Digest; Scalar::wide_reduce(Self::H::digest([dst, data].concat()).as_slice().try_into().unwrap()) } + + // We override the provided impl, which compares against the reserialization, because + // we already require canonicity + #[cfg(any(feature = "alloc", feature = "std"))] + #[allow(non_snake_case)] + fn read_G(reader: &mut R) -> io::Result { + use ciphersuite::group::GroupEncoding; + + let mut encoding = ::Repr::default(); + reader.read_exact(encoding.as_mut())?; + + let point = Option::::from(Self::G::from_bytes(&encoding)) + .ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid point"))?; + Ok(point) + } } impl generalized_bulletproofs_ec_gadgets::DiscreteLogParameters for Embedwards25519 { diff --git a/crypto/evrf/generalized-bulletproofs/Cargo.toml b/crypto/evrf/generalized-bulletproofs/Cargo.toml index 7744f5ed..1b7ad7b0 100644 --- a/crypto/evrf/generalized-bulletproofs/Cargo.toml +++ b/crypto/evrf/generalized-bulletproofs/Cargo.toml @@ -3,25 +3,27 @@ name = "generalized-bulletproofs" version = "0.1.0" description = "Generalized Bulletproofs" license = "MIT" -repository = "https://github.com/serai-dex/serai/tree/develop/crypto/evrf/generalized-bulletproofs" +repository = "https://github.com/serai-dex/serai/tree/develop/crypto/generalized-bulletproofs" authors = ["Luke Parker "] keywords = ["ciphersuite", "ff", "group"] edition = "2021" -rust-version = "1.80" +rust-version = "1.69" [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs"] [dependencies] -rand_core = { version = "0.6", default-features = false, features = ["std"] } +std-shims = { path = "../../../common/std-shims", version = "^0.1.1", default-features = false } -zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] } +rand_core = { version = "0.6", default-features = false } -blake2 = { version = "0.10", default-features = false, features = ["std"] } +zeroize = { version = "^1.5", default-features 
= false, features = ["zeroize_derive"] } -multiexp = { path = "../../multiexp", version = "0.4", default-features = false, features = ["std", "batch"] } -ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] } +blake2 = { version = "0.10", default-features = false } + +multiexp = { path = "../../multiexp", version = "0.4", default-features = false, features = ["batch"] } +ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false } [dev-dependencies] rand_core = { version = "0.6", features = ["getrandom"] } @@ -31,4 +33,6 @@ transcript = { package = "flexible-transcript", path = "../../transcript", featu ciphersuite = { path = "../../ciphersuite", features = ["ristretto"] } [features] -tests = [] +std = ["std-shims/std", "rand_core/std", "zeroize/std", "blake2/std", "multiexp/std", "ciphersuite/std"] +tests = ["std"] +default = ["std"] diff --git a/crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs b/crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs index c4983b76..32a071ce 100644 --- a/crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs +++ b/crypto/evrf/generalized-bulletproofs/src/arithmetic_circuit_proof.rs @@ -1,3 +1,5 @@ +use std_shims::{vec, vec::Vec}; + use rand_core::{RngCore, CryptoRng}; use zeroize::{Zeroize, ZeroizeOnDrop}; @@ -20,10 +22,10 @@ pub use crate::lincomb::{Variable, LinComb}; /// `aL * aR = aO, WL * aL + WR * aR + WO * aO = WV * V + c`. /// /// Generalized Bulletproofs modifies this to -/// `aL * aR = aO, WL * aL + WR * aR + WO * aO + WCG * C_G + WCH * C_H = WV * V + c`. +/// `aL * aR = aO, WL * aL + WR * aR + WO * aO + WCG * C_G = WV * V + c`. /// /// We implement the latter, yet represented (for simplicity) as -/// `aL * aR = aO, WL * aL + WR * aR + WO * aO + WCG * C_G + WCH * C_H + WV * V + c = 0`. +/// `aL * aR = aO, WL * aL + WR * aR + WO * aO + WCG * C_G + WV * V + c = 0`. 
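+///
+/// (Here, `C_G` is the vector of values committed to by each Pedersen vector commitment.)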
#[derive(Clone, Debug)] pub struct ArithmeticCircuitStatement<'a, C: Ciphersuite> { generators: ProofGenerators<'a, C>, @@ -202,16 +204,10 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { if c.g_values.len() > n { Err(AcError::NotEnoughGenerators)?; } - if c.h_values.len() > n { - Err(AcError::NotEnoughGenerators)?; - } // The Pedersen vector commitments internally have n terms while c.g_values.len() < n { c.g_values.0.push(C::F::ZERO); } - while c.h_values.len() < n { - c.h_values.0.push(C::F::ZERO); - } } // Check the witness's consistency with the statement @@ -227,12 +223,7 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { } } for (commitment, opening) in self.C.0.iter().zip(witness.c.iter()) { - if Some(*commitment) != - opening.commit( - self.generators.g_bold_slice(), - self.generators.h_bold_slice(), - self.generators.h(), - ) + if Some(*commitment) != opening.commit(self.generators.g_bold_slice(), self.generators.h()) { Err(AcError::InconsistentWitness)?; } @@ -250,11 +241,6 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { weights.iter().map(|(j, weight)| *weight * c.g_values[*j]) }), ) - .chain( - constraint.WCH.iter().zip(&witness.c).flat_map(|(weights, c)| { - weights.iter().map(|(j, weight)| *weight * c.h_values[*j]) - }), - ) .chain(constraint.WV.iter().map(|(i, weight)| *weight * witness.v[*i].value)) .chain(core::iter::once(constraint.c)) .sum::(); @@ -306,8 +292,8 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { transcript.push_point(AI); transcript.push_point(AO); transcript.push_point(S); - let y = transcript.challenge(); - let z = transcript.challenge(); + let y = transcript.challenge::(); + let z = transcript.challenge::(); let YzChallenges { y_inv, z } = self.yz_challenges(y, z); let y = ScalarVector::powers(y, n); @@ -318,7 +304,7 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { // polynomial). // ni = n' - let ni = 2 * (c + 1); + let ni = 2 + (2 * (c / 2)); // These indexes are from the Generalized Bulletproofs paper #[rustfmt::skip] let ilr = ni / 2; // 1 if c = 0 @@ -379,32 +365,25 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { // r decreasing from n' (skipping jlr) let mut cg_weights = Vec::with_capacity(witness.c.len()); - let mut ch_weights = Vec::with_capacity(witness.c.len()); for i in 0 .. 
witness.c.len() { let mut cg = ScalarVector::new(n); - let mut ch = ScalarVector::new(n); for (constraint, z) in self.constraints.iter().zip(&z.0) { if let Some(WCG) = constraint.WCG.get(i) { accumulate_vector(&mut cg, WCG, *z); } - if let Some(WCH) = constraint.WCH.get(i) { - accumulate_vector(&mut ch, WCH, *z); - } } cg_weights.push(cg); - ch_weights.push(ch); } - for (i, (c, (cg_weights, ch_weights))) in - witness.c.iter().zip(cg_weights.into_iter().zip(ch_weights)).enumerate() - { - let i = i + 1; + for (mut i, (c, cg_weights)) in witness.c.iter().zip(cg_weights).enumerate() { + if i >= ilr { + i += 1; + } + // Because i has skipped ilr, j will skip jlr let j = ni - i; l[i] = c.g_values.clone(); - l[j] = ch_weights * &y_inv; r[j] = cg_weights; - r[i] = (c.h_values.clone() * &y) + &r[i]; } // Multiply them to obtain t @@ -437,7 +416,7 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { transcript.push_point(multiexp(&[(*t, self.generators.g()), (*tau, self.generators.h())])); } - let x: ScalarVector = ScalarVector::powers(transcript.challenge(), t.len()); + let x: ScalarVector = ScalarVector::powers(transcript.challenge::(), t.len()); let poly_eval = |poly: &[ScalarVector], x: &ScalarVector<_>| -> ScalarVector<_> { let mut res = ScalarVector::::new(poly[0].0.len()); @@ -477,8 +456,11 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { let mut u = (alpha * x[ilr]) + (beta * x[io]) + (rho * x[is]); // Incorporate the commitment masks multiplied by the associated power of x - for (i, commitment) in witness.c.iter().enumerate() { - let i = i + 1; + for (mut i, commitment) in witness.c.iter().enumerate() { + // If this index is ni / 2, skip it + if i >= (ni / 2) { + i += 1; + } u += x[i] * commitment.mask; } u @@ -498,7 +480,7 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { transcript.push_scalar(tau_x); transcript.push_scalar(u); transcript.push_scalar(t_caret); - let ip_x = transcript.challenge(); + let ip_x = transcript.challenge::(); P_terms.push((ip_x * t_caret, self.generators.g())); IpStatement::new( self.generators, @@ -513,16 +495,27 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { } /// Verify a proof for this statement. + /// + /// This solely queues the statement for batch verification. The resulting BatchVerifier MUST + /// still be verified. + /// + /// If this proof returns an error, the BatchVerifier MUST be assumed corrupted and discarded. 
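Since `verify` (whose implementation follows) solely queues terms, a sketch of the full two-phase flow; `Generators::verify` consuming the `BatchVerifier` is assumed from the crate's existing API, and `statement` is assumed to have been built from commitments already read off this transcript:

```rust
use rand_core::OsRng;
use ciphersuite::Ristretto;
use generalized_bulletproofs::{
  Generators,
  transcript::VerifierTranscript,
  arithmetic_circuit_proof::{ArithmeticCircuitStatement, AcError},
};

fn queue_and_check(
  generators: &Generators<Ristretto>,
  statement: ArithmeticCircuitStatement<'_, Ristretto>,
  transcript: &mut VerifierTranscript<'_>,
) -> Result<(), AcError> {
  // Phase 1: solely queues this statement's multiexp terms
  let mut verifier = Generators::batch_verifier();
  statement.verify(&mut OsRng, &mut verifier, transcript)?;
  // Phase 2: evaluate the queued multiexp; skipping this verifies nothing
  // (assumed API: Generators::verify(&self, BatchVerifier) -> bool)
  assert!(generators.verify(verifier));
  Ok(())
}
```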
pub fn verify( self, rng: &mut R, verifier: &mut BatchVerifier, transcript: &mut VerifierTranscript, ) -> Result<(), AcError> { + if verifier.g_bold.len() < self.generators.len() { + verifier.g_bold.resize(self.generators.len(), C::F::ZERO); + verifier.h_bold.resize(self.generators.len(), C::F::ZERO); + verifier.h_sum.resize(self.generators.len(), C::F::ZERO); + } + let n = self.n(); let c = self.c(); - let ni = 2 * (c + 1); + let ni = 2 + (2 * (c / 2)); let ilr = ni / 2; let io = ni; @@ -535,8 +528,8 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { let AI = transcript.read_point::().map_err(|_| AcError::IncompleteProof)?; let AO = transcript.read_point::().map_err(|_| AcError::IncompleteProof)?; let S = transcript.read_point::().map_err(|_| AcError::IncompleteProof)?; - let y = transcript.challenge(); - let z = transcript.challenge(); + let y = transcript.challenge::(); + let z = transcript.challenge::(); let YzChallenges { y_inv, z } = self.yz_challenges(y, z); let mut l_weights = ScalarVector::new(n); @@ -559,7 +552,7 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { for _ in 0 .. (t_poly_len - ni - 1) { T_after_ni.push(transcript.read_point::().map_err(|_| AcError::IncompleteProof)?); } - let x: ScalarVector = ScalarVector::powers(transcript.challenge(), t_poly_len); + let x: ScalarVector = ScalarVector::powers(transcript.challenge::(), t_poly_len); let tau_x = transcript.read_scalar::().map_err(|_| AcError::IncompleteProof)?; let u = transcript.read_scalar::().map_err(|_| AcError::IncompleteProof)?; @@ -624,34 +617,25 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { h_bold_scalars = h_bold_scalars + &(o_weights * verifier_weight); let mut cg_weights = Vec::with_capacity(self.C.len()); - let mut ch_weights = Vec::with_capacity(self.C.len()); for i in 0 .. 
self.C.len() { let mut cg = ScalarVector::new(n); - let mut ch = ScalarVector::new(n); for (constraint, z) in self.constraints.iter().zip(&z.0) { if let Some(WCG) = constraint.WCG.get(i) { accumulate_vector(&mut cg, WCG, *z); } - if let Some(WCH) = constraint.WCH.get(i) { - accumulate_vector(&mut ch, WCH, *z); - } } cg_weights.push(cg); - ch_weights.push(ch); } // Push the terms for C, which increment from 0, and the terms for WC, which decrement from // n' - for (i, (C, (WCG, WCH))) in - self.C.0.into_iter().zip(cg_weights.into_iter().zip(ch_weights)).enumerate() - { - let i = i + 1; + for (mut i, (C, WCG)) in self.C.0.into_iter().zip(cg_weights).enumerate() { + if i >= (ni / 2) { + i += 1; + } let j = ni - i; verifier.additional.push((x[i], C)); h_bold_scalars = h_bold_scalars + &(WCG * x[j]); - for (i, scalar) in (WCH * &y_inv * x[j]).0.into_iter().enumerate() { - verifier.g_bold[i] += scalar; - } } // All terms for h_bold here have actually been for h_bold', h_bold * y_inv @@ -666,7 +650,7 @@ impl<'a, C: Ciphersuite> ArithmeticCircuitStatement<'a, C> { // Prove for lines 88, 92 with an Inner-Product statement // This inlines Protocol 1, as our IpStatement implements Protocol 2 - let ip_x = transcript.challenge(); + let ip_x = transcript.challenge::(); // P is amended with this additional term verifier.g += verifier_weight * ip_x * t_caret; IpStatement::new(self.generators, y_inv, ip_x, P::Verifier { verifier_weight }) diff --git a/crypto/evrf/generalized-bulletproofs/src/inner_product.rs b/crypto/evrf/generalized-bulletproofs/src/inner_product.rs index ae3ec876..e7127e00 100644 --- a/crypto/evrf/generalized-bulletproofs/src/inner_product.rs +++ b/crypto/evrf/generalized-bulletproofs/src/inner_product.rs @@ -1,3 +1,5 @@ +use std_shims::{vec, vec::Vec}; + use multiexp::multiexp_vartime; use ciphersuite::{group::ff::Field, Ciphersuite}; @@ -186,7 +188,7 @@ impl<'a, C: Ciphersuite> IpStatement<'a, C> { // Now that we've calculate L, R, transcript them to receive x (26-27) transcript.push_point(L); transcript.push_point(R); - let x: C::F = transcript.challenge(); + let x: C::F = transcript.challenge::(); let x_inv = x.invert().unwrap(); // The prover and verifier now calculate the following (28-31) @@ -269,11 +271,19 @@ impl<'a, C: Ciphersuite> IpStatement<'a, C> { /// This will return Err if there is an error. This will return Ok if the proof was successfully /// queued for batch verification. The caller is required to verify the batch in order to ensure /// the proof is actually correct. + /// + /// If this proof returns an error, the BatchVerifier MUST be assumed corrupted and discarded. pub(crate) fn verify( self, verifier: &mut BatchVerifier, transcript: &mut VerifierTranscript, ) -> Result<(), IpError> { + if verifier.g_bold.len() < self.generators.len() { + verifier.g_bold.resize(self.generators.len(), C::F::ZERO); + verifier.h_bold.resize(self.generators.len(), C::F::ZERO); + verifier.h_sum.resize(self.generators.len(), C::F::ZERO); + } + let IpStatement { generators, h_bold_weights, u, P } = self; // Calculate the discrete log w.r.t. 2 for the amount of generators present @@ -296,7 +306,7 @@ impl<'a, C: Ciphersuite> IpStatement<'a, C> { for _ in 0 .. 
lr_len { L.push(transcript.read_point::<C>().map_err(|_| IpError::IncompleteProof)?); R.push(transcript.read_point::<C>().map_err(|_| IpError::IncompleteProof)?); - xs.push(transcript.challenge()); + xs.push(transcript.challenge::<C>()); } // We calculate their inverse in batch diff --git a/crypto/evrf/generalized-bulletproofs/src/lib.rs b/crypto/evrf/generalized-bulletproofs/src/lib.rs index 48c4cd56..d02c9e9c 100644 --- a/crypto/evrf/generalized-bulletproofs/src/lib.rs +++ b/crypto/evrf/generalized-bulletproofs/src/lib.rs @@ -1,10 +1,11 @@ #![cfg_attr(docsrs, feature(doc_auto_cfg))] #![doc = include_str!("../README.md")] +#![cfg_attr(not(feature = "std"), no_std)] #![deny(missing_docs)] #![allow(non_snake_case)] use core::fmt; -use std::collections::HashSet; +use std_shims::{vec, vec::Vec, collections::HashSet}; use zeroize::Zeroize; @@ -70,14 +71,26 @@ pub struct Generators<C: Ciphersuite> { #[must_use] #[derive(Clone)] pub struct BatchVerifier<C: Ciphersuite> { - g: C::F, - h: C::F, + /// The summed scalar for the G generator. + pub g: C::F, + /// The summed scalar for the H generator. + pub h: C::F, - g_bold: Vec<C::F>, - h_bold: Vec<C::F>, - h_sum: Vec<C::F>, + /// The summed scalars for the G_bold generators. + pub g_bold: Vec<C::F>, + /// The summed scalars for the H_bold generators. + pub h_bold: Vec<C::F>, + /// The summed scalars for the sums of all H generators prior to the index. + /// + /// This is not populated with the full set of summed H generators. This is only populated with + /// the powers of 2. Accordingly, an index i specifies a scalar for the sum of all H generators + /// from H**2**0 ..= H**2**i. + pub h_sum: Vec<C::F>, - additional: Vec<(C::F, C::G)>, + /// Additional (non-fixed) points to include in the multiexp. + /// + /// This is used for proof-specific elements. + pub additional: Vec<(C::F, C::G)>, } impl<C: Ciphersuite> fmt::Debug for Generators<C> { @@ -171,15 +184,15 @@ impl<C: Ciphersuite> Generators<C> { Ok(Generators { g, h, g_bold, h_bold, h_sum }) } - /// Create a BatchVerifier for proofs which use these generators. - pub fn batch_verifier(&self) -> BatchVerifier<C> { + /// Create a BatchVerifier for proofs which use a consistent set of generators. + pub fn batch_verifier() -> BatchVerifier<C> { BatchVerifier { g: C::F::ZERO, h: C::F::ZERO, - g_bold: vec![C::F::ZERO; self.g_bold.len()], - h_bold: vec![C::F::ZERO; self.h_bold.len()], - h_sum: vec![C::F::ZERO; self.h_sum.len()], + g_bold: vec![], + h_bold: vec![], + h_sum: vec![], additional: Vec::with_capacity(128), } @@ -298,8 +311,6 @@ impl<C: Ciphersuite> PedersenCommitment<C> { pub struct PedersenVectorCommitment<C: Ciphersuite> { /// The values committed to across the `g` (bold) generators. pub g_values: ScalarVector<C::F>, - /// The values committed to across the `h` (bold) generators. - pub h_values: ScalarVector<C::F>, /// The mask blinding the values committed to. pub mask: C::F, } @@ -309,8 +320,8 @@ impl<C: Ciphersuite> PedersenVectorCommitment<C> { /// /// This function returns None if the amount of generators is less than the amount of values /// within the relevant vector. 
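With `h_values` gone, a Pedersen vector commitment reduces to the `g` (bold) terms plus the mask; a sketch of the equality the updated `commit` (in the hunk that follows) now satisfies, with Ristretto assumed for concreteness:

```rust
use ciphersuite::{Ciphersuite, Ristretto};
use generalized_bulletproofs::PedersenVectorCommitment;

type G = <Ristretto as Ciphersuite>::G;

// commit(g_bold, h) == Some(sum(g_values[i] * g_bold[i]) + mask * h),
// provided g_bold.len() >= g_values.len() (else it returns None)
fn expected(commitment: &PedersenVectorCommitment<Ristretto>, g_bold: &[G], h: G) -> G {
  let mut res = h * commitment.mask;
  for (value, generator) in commitment.g_values.0.iter().zip(g_bold) {
    res += *generator * value;
  }
  res
}
```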
- pub fn commit(&self, g_bold: &[C::G], h_bold: &[C::G], h: C::G) -> Option { - if (g_bold.len() < self.g_values.len()) || (h_bold.len() < self.h_values.len()) { + pub fn commit(&self, g_bold: &[C::G], h: C::G) -> Option { + if g_bold.len() < self.g_values.len() { None?; }; @@ -318,9 +329,6 @@ impl PedersenVectorCommitment { for pair in self.g_values.0.iter().cloned().zip(g_bold.iter().cloned()) { terms.push(pair); } - for pair in self.h_values.0.iter().cloned().zip(h_bold.iter().cloned()) { - terms.push(pair); - } let res = multiexp(&terms); terms.zeroize(); Some(res) diff --git a/crypto/evrf/generalized-bulletproofs/src/lincomb.rs b/crypto/evrf/generalized-bulletproofs/src/lincomb.rs index 291b3b0b..e08a6d48 100644 --- a/crypto/evrf/generalized-bulletproofs/src/lincomb.rs +++ b/crypto/evrf/generalized-bulletproofs/src/lincomb.rs @@ -1,4 +1,5 @@ use core::ops::{Add, Sub, Mul}; +use std_shims::{vec, vec::Vec}; use zeroize::Zeroize; @@ -23,13 +24,6 @@ pub enum Variable { /// The index of the variable. index: usize, }, - /// A variable within a Pedersen vector commitment, committed to with a generator from `h` (bold). - CH { - /// The commitment being indexed. - commitment: usize, - /// The index of the variable. - index: usize, - }, /// A variable within a Pedersen commitment. V(usize), } @@ -41,7 +35,7 @@ impl Zeroize for Variable { /// A linear combination. /// -/// Specifically, `WL aL + WR aR + WO aO + WCG C_G + WCH C_H + WV V + c`. +/// Specifically, `WL aL + WR aR + WO aO + WCG C_G + WV V + c`. #[derive(Clone, PartialEq, Eq, Debug, Zeroize)] #[must_use] pub struct LinComb { @@ -55,7 +49,6 @@ pub struct LinComb { pub(crate) WO: Vec<(usize, F)>, // Sparse representation once within a commitment pub(crate) WCG: Vec>, - pub(crate) WCH: Vec>, // Sparse representation of WV pub(crate) WV: Vec<(usize, F)>, pub(crate) c: F, @@ -81,15 +74,9 @@ impl Add<&LinComb> for LinComb { while self.WCG.len() < constraint.WCG.len() { self.WCG.push(vec![]); } - while self.WCH.len() < constraint.WCH.len() { - self.WCH.push(vec![]); - } for (sWC, cWC) in self.WCG.iter_mut().zip(&constraint.WCG) { sWC.extend(cWC); } - for (sWC, cWC) in self.WCH.iter_mut().zip(&constraint.WCH) { - sWC.extend(cWC); - } self.WV.extend(&constraint.WV); self.c += constraint.c; self @@ -110,15 +97,9 @@ impl Sub<&LinComb> for LinComb { while self.WCG.len() < constraint.WCG.len() { self.WCG.push(vec![]); } - while self.WCH.len() < constraint.WCH.len() { - self.WCH.push(vec![]); - } for (sWC, cWC) in self.WCG.iter_mut().zip(&constraint.WCG) { sWC.extend(cWC.iter().map(|(i, weight)| (*i, -*weight))); } - for (sWC, cWC) in self.WCH.iter_mut().zip(&constraint.WCH) { - sWC.extend(cWC.iter().map(|(i, weight)| (*i, -*weight))); - } self.WV.extend(constraint.WV.iter().map(|(i, weight)| (*i, -*weight))); self.c -= constraint.c; self @@ -143,11 +124,6 @@ impl Mul for LinComb { *weight *= scalar; } } - for WC in self.WCH.iter_mut() { - for (_, weight) in WC { - *weight *= scalar; - } - } for (_, weight) in self.WV.iter_mut() { *weight *= scalar; } @@ -167,7 +143,6 @@ impl LinComb { WR: vec![], WO: vec![], WCG: vec![], - WCH: vec![], WV: vec![], c: F::ZERO, } @@ -196,14 +171,6 @@ impl LinComb { } self.WCG[i].push((j, scalar)) } - Variable::CH { commitment: i, index: j } => { - self.highest_c_index = self.highest_c_index.max(Some(i)); - self.highest_a_index = self.highest_a_index.max(Some(j)); - while self.WCH.len() <= i { - self.WCH.push(vec![]); - } - self.WCH[i].push((j, scalar)) - } Variable::V(i) => { self.highest_v_index = 
self.highest_v_index.max(Some(i)); self.WV.push((i, scalar)); @@ -238,11 +205,6 @@ impl LinComb { &self.WCG } - /// View the current weights for CH. - pub fn WCH(&self) -> &[Vec<(usize, F)>] { - &self.WCH - } - /// View the current weights for V. pub fn WV(&self) -> &[(usize, F)] { &self.WV diff --git a/crypto/evrf/generalized-bulletproofs/src/point_vector.rs b/crypto/evrf/generalized-bulletproofs/src/point_vector.rs index 82fad519..16bf6989 100644 --- a/crypto/evrf/generalized-bulletproofs/src/point_vector.rs +++ b/crypto/evrf/generalized-bulletproofs/src/point_vector.rs @@ -1,4 +1,5 @@ use core::ops::{Index, IndexMut}; +use std_shims::vec::Vec; use zeroize::Zeroize; diff --git a/crypto/evrf/generalized-bulletproofs/src/scalar_vector.rs b/crypto/evrf/generalized-bulletproofs/src/scalar_vector.rs index a9cf4365..18c4f619 100644 --- a/crypto/evrf/generalized-bulletproofs/src/scalar_vector.rs +++ b/crypto/evrf/generalized-bulletproofs/src/scalar_vector.rs @@ -1,4 +1,5 @@ use core::ops::{Index, IndexMut, Add, Sub, Mul}; +use std_shims::{vec, vec::Vec}; use zeroize::Zeroize; diff --git a/crypto/evrf/generalized-bulletproofs/src/tests/arithmetic_circuit_proof.rs b/crypto/evrf/generalized-bulletproofs/src/tests/arithmetic_circuit_proof.rs index 588a6ae6..388c3aca 100644 --- a/crypto/evrf/generalized-bulletproofs/src/tests/arithmetic_circuit_proof.rs +++ b/crypto/evrf/generalized-bulletproofs/src/tests/arithmetic_circuit_proof.rs @@ -3,7 +3,7 @@ use rand_core::{RngCore, OsRng}; use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto}; use crate::{ - ScalarVector, PedersenCommitment, PedersenVectorCommitment, + ScalarVector, PedersenCommitment, PedersenVectorCommitment, Generators, transcript::*, arithmetic_circuit_proof::{ Variable, LinComb, ArithmeticCircuitStatement, ArithmeticCircuitWitness, @@ -43,7 +43,7 @@ fn test_zero_arithmetic_circuit() { statement.clone().prove(&mut OsRng, &mut transcript, witness).unwrap(); transcript.complete() }; - let mut verifier = generators.batch_verifier(); + let mut verifier = Generators::batch_verifier(); let mut transcript = VerifierTranscript::new([0; 32], &proof); let verifier_commmitments = transcript.read_commitments(0, 1); @@ -59,14 +59,8 @@ fn test_vector_commitment_arithmetic_circuit() { let v1 = ::F::random(&mut OsRng); let v2 = ::F::random(&mut OsRng); - let v3 = ::F::random(&mut OsRng); - let v4 = ::F::random(&mut OsRng); let gamma = ::F::random(&mut OsRng); - let commitment = (reduced.g_bold(0) * v1) + - (reduced.g_bold(1) * v2) + - (reduced.h_bold(0) * v3) + - (reduced.h_bold(1) * v4) + - (generators.h() * gamma); + let commitment = (reduced.g_bold(0) * v1) + (reduced.g_bold(1) * v2) + (generators.h() * gamma); let V = vec![]; let C = vec![commitment]; @@ -83,20 +77,14 @@ fn test_vector_commitment_arithmetic_circuit() { vec![LinComb::empty() .term(::F::ONE, Variable::CG { commitment: 0, index: 0 }) .term(::F::from(2u64), Variable::CG { commitment: 0, index: 1 }) - .term(::F::from(3u64), Variable::CH { commitment: 0, index: 0 }) - .term(::F::from(4u64), Variable::CH { commitment: 0, index: 1 }) - .constant(-(v1 + (v2 + v2) + (v3 + v3 + v3) + (v4 + v4 + v4 + v4)))], + .constant(-(v1 + (v2 + v2)))], commitments.clone(), ) .unwrap(); let witness = ArithmeticCircuitWitness::::new( aL, aR, - vec![PedersenVectorCommitment { - g_values: ScalarVector(vec![v1, v2]), - h_values: ScalarVector(vec![v3, v4]), - mask: gamma, - }], + vec![PedersenVectorCommitment { g_values: ScalarVector(vec![v1, v2]), mask: gamma }], vec![], ) .unwrap(); @@ -105,7 +93,7 @@ fn 
test_vector_commitment_arithmetic_circuit() { statement.clone().prove(&mut OsRng, &mut transcript, witness).unwrap(); transcript.complete() }; - let mut verifier = generators.batch_verifier(); + let mut verifier = Generators::batch_verifier(); let mut transcript = VerifierTranscript::new([0; 32], &proof); let verifier_commmitments = transcript.read_commitments(1, 0); @@ -139,13 +127,8 @@ fn fuzz_test_arithmetic_circuit() { while g_values.0.len() < ((OsRng.next_u64() % 8) + 1).try_into().unwrap() { g_values.0.push(::F::random(&mut OsRng)); } - let mut h_values = ScalarVector(vec![]); - while h_values.0.len() < ((OsRng.next_u64() % 8) + 1).try_into().unwrap() { - h_values.0.push(::F::random(&mut OsRng)); - } C.push(PedersenVectorCommitment { g_values, - h_values, mask: ::F::random(&mut OsRng), }); } @@ -193,13 +176,6 @@ fn fuzz_test_arithmetic_circuit() { constraint = constraint.term(weight, Variable::CG { commitment, index }); eval += weight * C.g_values[index]; } - - for _ in 0 .. (OsRng.next_u64() % 4) { - let index = usize::try_from(OsRng.next_u64()).unwrap() % C.h_values.len(); - let weight = ::F::random(&mut OsRng); - constraint = constraint.term(weight, Variable::CH { commitment, index }); - eval += weight * C.h_values[index]; - } } if !V.is_empty() { @@ -218,11 +194,7 @@ fn fuzz_test_arithmetic_circuit() { let mut transcript = Transcript::new([0; 32]); let commitments = transcript.write_commitments( - C.iter() - .map(|C| { - C.commit(generators.g_bold_slice(), generators.h_bold_slice(), generators.h()).unwrap() - }) - .collect(), + C.iter().map(|C| C.commit(generators.g_bold_slice(), generators.h()).unwrap()).collect(), V.iter().map(|V| V.commit(generators.g(), generators.h())).collect(), ); @@ -239,7 +211,7 @@ fn fuzz_test_arithmetic_circuit() { statement.clone().prove(&mut OsRng, &mut transcript, witness).unwrap(); transcript.complete() }; - let mut verifier = generators.batch_verifier(); + let mut verifier = Generators::batch_verifier(); let mut transcript = VerifierTranscript::new([0; 32], &proof); let verifier_commmitments = transcript.read_commitments(C.len(), V.len()); diff --git a/crypto/evrf/generalized-bulletproofs/src/tests/inner_product.rs b/crypto/evrf/generalized-bulletproofs/src/tests/inner_product.rs index 49b5fc32..63bc9f92 100644 --- a/crypto/evrf/generalized-bulletproofs/src/tests/inner_product.rs +++ b/crypto/evrf/generalized-bulletproofs/src/tests/inner_product.rs @@ -8,7 +8,7 @@ use ciphersuite::{ }; use crate::{ - ScalarVector, PointVector, + ScalarVector, PointVector, Generators, transcript::*, inner_product::{P, IpStatement, IpWitness}, tests::generators, @@ -41,7 +41,7 @@ fn test_zero_inner_product() { transcript.complete() }; - let mut verifier = generators.batch_verifier(); + let mut verifier = Generators::batch_verifier(); IpStatement::::new( reduced, ScalarVector(vec![::F::ONE; 1]), @@ -58,7 +58,7 @@ fn test_zero_inner_product() { fn test_inner_product() { // P = sum(g_bold * a, h_bold * b) let generators = generators::(32); - let mut verifier = generators.batch_verifier(); + let mut verifier = Generators::batch_verifier(); for i in [1, 2, 4, 8, 16, 32] { let generators = generators.reduce(i).unwrap(); let g = generators.g(); diff --git a/crypto/evrf/generalized-bulletproofs/src/transcript.rs b/crypto/evrf/generalized-bulletproofs/src/transcript.rs index 75ef35c4..70fe9f8d 100644 --- a/crypto/evrf/generalized-bulletproofs/src/transcript.rs +++ b/crypto/evrf/generalized-bulletproofs/src/transcript.rs @@ -1,9 +1,12 @@ -use std::io; +use 
std_shims::{vec::Vec, io}; use blake2::{Digest, Blake2b512}; use ciphersuite::{ - group::{ff::PrimeField, GroupEncoding}, + group::{ + ff::{Field, PrimeField}, + GroupEncoding, + }, Ciphersuite, }; @@ -13,27 +16,11 @@ const SCALAR: u8 = 0; const POINT: u8 = 1; const CHALLENGE: u8 = 2; -fn challenge(digest: &mut Blake2b512) -> F { - // Panic if this is such a wide field, we won't successfully perform a reduction into an unbiased - // scalar - debug_assert!((F::NUM_BITS + 128) < 512); - +fn challenge(digest: &mut Blake2b512) -> C::F { digest.update([CHALLENGE]); - let chl = digest.clone().finalize(); + let chl = digest.clone().finalize().into(); - let mut res = F::ZERO; - for (i, mut byte) in chl.iter().cloned().enumerate() { - for j in 0 .. 8 { - let lsb = byte & 1; - let mut bit = F::from(u64::from(lsb)); - for _ in 0 .. ((i * 8) + j) { - bit = bit.double(); - } - res += bit; - - byte >>= 1; - } - } + let res = C::reduce_512(chl); // Negligible probability if bool::from(res.is_zero()) { @@ -83,6 +70,8 @@ impl Transcript { } /// Push a scalar onto the transcript. + /// + /// The order and layout of this must be constant to the context. pub fn push_scalar(&mut self, scalar: impl PrimeField) { self.digest.update([SCALAR]); let bytes = scalar.to_repr(); @@ -91,6 +80,8 @@ impl Transcript { } /// Push a point onto the transcript. + /// + /// The order and layout of this must be constant to the context. pub fn push_point(&mut self, point: impl GroupEncoding) { self.digest.update([POINT]); let bytes = point.to_bytes(); @@ -104,9 +95,11 @@ impl Transcript { C: Vec, V: Vec, ) -> Commitments { + self.digest.update(u32::try_from(C.len()).unwrap().to_le_bytes()); for C in &C { self.push_point(*C); } + self.digest.update(u32::try_from(V.len()).unwrap().to_le_bytes()); for V in &V { self.push_point(*V); } @@ -114,8 +107,14 @@ impl Transcript { } /// Sample a challenge. - pub fn challenge(&mut self) -> F { - challenge(&mut self.digest) + pub fn challenge(&mut self) -> C::F { + challenge::(&mut self.digest) + } + + /// Sample a challenge as a byte array. + pub fn challenge_bytes(&mut self) -> [u8; 64] { + self.digest.update([CHALLENGE]); + self.digest.clone().finalize().into() } /// Complete a transcript, yielding the fully serialized proof. @@ -139,20 +138,36 @@ impl<'a> VerifierTranscript<'a> { } /// Read a scalar from the transcript. + /// + /// The order and layout of this must be constant to the context. pub fn read_scalar(&mut self) -> io::Result { - let scalar = C::read_F(&mut self.transcript)?; + // Read the scalar onto the transcript using the serialization present in the transcript self.digest.update([SCALAR]); - let bytes = scalar.to_repr(); - self.digest.update(bytes); + let scalar_len = ::Repr::default().as_ref().len(); + if self.transcript.len() < scalar_len { + Err(io::Error::new(io::ErrorKind::Other, "not enough bytes to read_scalar"))?; + } + self.digest.update(&self.transcript[.. scalar_len]); + + // Read the actual scalar, where `read_F` ensures its canonically serialized + let scalar = C::read_F(&mut self.transcript)?; Ok(scalar) } /// Read a point from the transcript. + /// + /// The order and layout of this must be constant to the context. 
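Because the prover's and verifier's transcripts hash the same serialized bytes in the same order (with challenges now reduced from 512 bits of Blake2b via `C::reduce_512`), both sides derive identical challenges. A sketch, assuming the `transcript` module is public as used by this crate's tests (the `read_point` implementation continues below):

```rust
use ciphersuite::{group::Group, Ciphersuite, Ristretto};
use generalized_bulletproofs::transcript::{Transcript, VerifierTranscript};

fn transcripts_agree() {
  let point = <Ristretto as Ciphersuite>::G::generator();

  let mut prover = Transcript::new([0; 32]);
  prover.push_point(point);
  let prover_challenge = prover.challenge::<Ristretto>();
  let proof = prover.complete();

  // The verifier hashes the raw bytes it reads, so it stays in sync
  let mut verifier = VerifierTranscript::new([0; 32], &proof);
  assert_eq!(verifier.read_point::<Ristretto>().unwrap(), point);
  assert_eq!(verifier.challenge::<Ristretto>(), prover_challenge);
}
```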
pub fn read_point<C: Ciphersuite>(&mut self) -> io::Result<C::G> { - let point = C::read_G(&mut self.transcript)?; + // Read the point onto the transcript using the serialization present in the transcript self.digest.update([POINT]); - let bytes = point.to_bytes(); - self.digest.update(bytes); + let point_len = <C::G as GroupEncoding>::Repr::default().as_ref().len(); + if self.transcript.len() < point_len { + Err(io::Error::new(io::ErrorKind::Other, "not enough bytes to read_point"))?; + } + self.digest.update(&self.transcript[.. point_len]); + + // Read the actual point, where `read_G` ensures it's canonically serialized + let point = C::read_G(&mut self.transcript)?; Ok(point) } @@ -165,10 +180,12 @@ impl<'a> VerifierTranscript<'a> { C: usize, V: usize, ) -> io::Result<Commitments<C>> { + self.digest.update(u32::try_from(C).unwrap().to_le_bytes()); let mut C_vec = Vec::with_capacity(C); for _ in 0 .. C { C_vec.push(self.read_point::<C>()?); } + self.digest.update(u32::try_from(V).unwrap().to_le_bytes()); let mut V_vec = Vec::with_capacity(V); for _ in 0 .. V { V_vec.push(self.read_point::<C>()?); } @@ -177,11 +194,17 @@ impl<'a> VerifierTranscript<'a> { } /// Sample a challenge. - pub fn challenge<F: PrimeField>(&mut self) -> F { - challenge(&mut self.digest) + pub fn challenge<C: Ciphersuite>(&mut self) -> C::F { + challenge::<C>(&mut self.digest) } - /// Complete the transcript, returning the advanced slice. + /// Sample a challenge as a byte array. + pub fn challenge_bytes(&mut self) -> [u8; 64] { + self.digest.update([CHALLENGE]); + self.digest.clone().finalize().into() + } + + /// Complete the transcript, yielding what remains. pub fn complete(self) -> &'a [u8] { self.transcript } diff --git a/crypto/evrf/secq256k1/Cargo.toml b/crypto/evrf/secq256k1/Cargo.toml index 9d7d6ef4..4b5ec5ac 100644 --- a/crypto/evrf/secq256k1/Cargo.toml +++ b/crypto/evrf/secq256k1/Cargo.toml @@ -17,20 +17,22 @@ rustdoc-args = ["--cfg", "docsrs"] rustversion = "1" hex-literal = { version = "0.4", default-features = false } -rand_core = { version = "0.6", default-features = false, features = ["std"] } +std-shims = { version = "0.1", path = "../../../common/std-shims", default-features = false, optional = true } -zeroize = { version = "^1.5", default-features = false, features = ["std", "zeroize_derive"] } -subtle = { version = "^2.4", default-features = false, features = ["std"] } +rand_core = { version = "0.6", default-features = false } + +zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] } +subtle = { version = "^2.4", default-features = false } generic-array = { version = "0.14", default-features = false } crypto-bigint = { version = "0.5", default-features = false, features = ["zeroize"] } k256 = { version = "0.13", default-features = false, features = ["arithmetic"] } -blake2 = { version = "0.10", default-features = false, features = ["std"] } -ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false, features = ["std"] } -ec-divisors = { path = "../divisors" } -generalized-bulletproofs-ec-gadgets = { path = "../ec-gadgets" } +blake2 = { version = "0.10", default-features = false } +ciphersuite = { path = "../../ciphersuite", version = "0.4", default-features = false } +ec-divisors = { path = "../divisors", default-features = false } +generalized-bulletproofs-ec-gadgets = { path = "../ec-gadgets", default-features = false } [dev-dependencies] hex = "0.4" @@ -38,3 +40,8 @@ hex = "0.4" rand_core = { version = "0.6", features = ["std"] } ff-group-tests = { path = "../../ff-group-tests" } + +[features] +alloc = ["std-shims", 
"zeroize/alloc", "ciphersuite/alloc"] +std = ["std-shims/std", "rand_core/std", "zeroize/std", "subtle/std", "blake2/std", "ciphersuite/std", "ec-divisors/std", "generalized-bulletproofs-ec-gadgets/std"] +default = ["std"] diff --git a/crypto/evrf/secq256k1/src/lib.rs b/crypto/evrf/secq256k1/src/lib.rs index b59078af..40656db8 100644 --- a/crypto/evrf/secq256k1/src/lib.rs +++ b/crypto/evrf/secq256k1/src/lib.rs @@ -1,5 +1,9 @@ #![cfg_attr(docsrs, feature(doc_auto_cfg))] #![doc = include_str!("../README.md")] +#![cfg_attr(not(feature = "std"), no_std)] + +#[cfg(any(feature = "alloc", feature = "std"))] +use std_shims::io::{self, Read}; use generic_array::typenum::{Sum, Diff, Quot, U, U1, U2}; use ciphersuite::group::{ff::PrimeField, Group}; @@ -33,10 +37,29 @@ impl ciphersuite::Ciphersuite for Secq256k1 { Point::generator() } + fn reduce_512(scalar: [u8; 64]) -> Self::F { + Scalar::wide_reduce(scalar) + } + fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F { use blake2::Digest; Scalar::wide_reduce(Self::H::digest([dst, data].concat()).as_slice().try_into().unwrap()) } + + // We override the provided impl, which compares against the reserialization, because + // we already require canonicity + #[cfg(any(feature = "alloc", feature = "std"))] + #[allow(non_snake_case)] + fn read_G(reader: &mut R) -> io::Result { + use ciphersuite::group::GroupEncoding; + + let mut encoding = ::Repr::default(); + reader.read_exact(encoding.as_mut())?; + + let point = Option::::from(Self::G::from_bytes(&encoding)) + .ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid point"))?; + Ok(point) + } } impl generalized_bulletproofs_ec_gadgets::DiscreteLogParameters for Secq256k1 { diff --git a/crypto/ff-group-tests/Cargo.toml b/crypto/ff-group-tests/Cargo.toml index aa328fa1..6b25a2b9 100644 --- a/crypto/ff-group-tests/Cargo.toml +++ b/crypto/ff-group-tests/Cargo.toml @@ -30,4 +30,4 @@ p256 = { version = "^0.13.1", default-features = false, features = ["std", "arit bls12_381 = "0.8" -pasta_curves = "0.5" +pasta_curves = { git = "https://github.com/kayabaNerve/pasta_curves", rev = "a46b5be95cacbff54d06aad8d3bbcba42e05d616" } diff --git a/tests/no-std/Cargo.toml b/tests/no-std/Cargo.toml index 5023a80e..dabedd78 100644 --- a/tests/no-std/Cargo.toml +++ b/tests/no-std/Cargo.toml @@ -29,6 +29,13 @@ multiexp = { path = "../../crypto/multiexp", default-features = false, features dleq = { path = "../../crypto/dleq", default-features = false } schnorr-signatures = { path = "../../crypto/schnorr", default-features = false } +secq256k1 = { path = "../../crypto/evrf/secq256k1", default-features = false } +embedwards25519 = { path = "../../crypto/evrf/embedwards25519", default-features = false } +generalized-bulletproofs = { path = "../../crypto/evrf/generalized-bulletproofs", default-features = false } +generalized-bulletproofs-circuit-abstraction = { path = "../../crypto/evrf/circuit-abstraction", default-features = false } +ec-divisors = { path = "../../crypto/evrf/divisors", default-features = false } +generalized-bulletproofs-ec-gadgets = { path = "../../crypto/evrf/ec-gadgets", default-features = false } + dkg = { path = "../../crypto/dkg", default-features = false } # modular-frost = { path = "../../crypto/frost", default-features = false } # frost-schnorrkel = { path = "../../crypto/schnorrkel", default-features = false } diff --git a/tests/no-std/src/lib.rs b/tests/no-std/src/lib.rs index fa9da268..73efcd40 100644 --- a/tests/no-std/src/lib.rs +++ b/tests/no-std/src/lib.rs @@ -12,6 +12,13 @@ pub use multiexp; pub 
use dleq; pub use schnorr_signatures; +pub use secq256k1; +pub use embedwards25519; +pub use generalized_bulletproofs; +pub use generalized_bulletproofs_circuit_abstraction; +pub use ec_divisors; +pub use generalized_bulletproofs_ec_gadgets; + pub use dkg; /* pub use modular_frost; From 315d4fb356d7328e2f8b7692635f87cdbfb56cae Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Wed, 29 Jan 2025 23:01:45 -0500 Subject: [PATCH 363/368] Correct decoding identity for embedwards25519/secq256k1 --- crypto/evrf/embedwards25519/README.md | 2 +- crypto/evrf/embedwards25519/src/point.rs | 5 ++++- crypto/evrf/secq256k1/src/point.rs | 5 ++++- crypto/ff-group-tests/src/group.rs | 8 +++++--- 4 files changed, 14 insertions(+), 6 deletions(-) diff --git a/crypto/evrf/embedwards25519/README.md b/crypto/evrf/embedwards25519/README.md index 5f7f5e47..e282c063 100644 --- a/crypto/evrf/embedwards25519/README.md +++ b/crypto/evrf/embedwards25519/README.md @@ -7,7 +7,7 @@ This curve was found via for finding curves (specifically, curve cycles), modified to search for curves whose field is the Ed25519 scalar field (not the Ed25519 field). -``` +```ignore p = 0x1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed q = 0x0fffffffffffffffffffffffffffffffe53f4debb78ff96877063f0306eef96b D = -420435 diff --git a/crypto/evrf/embedwards25519/src/point.rs b/crypto/evrf/embedwards25519/src/point.rs index 9d24e88a..e5a0ba51 100644 --- a/crypto/evrf/embedwards25519/src/point.rs +++ b/crypto/evrf/embedwards25519/src/point.rs @@ -198,6 +198,7 @@ impl Group for Point { Point { x: FieldElement::ZERO, y: FieldElement::ONE, z: FieldElement::ZERO } } fn generator() -> Self { + // Point with the lowest valid x-coordinate Point { x: FieldElement::from_repr(hex_literal::hex!( "0100000000000000000000000000000000000000000000000000000000000000" @@ -335,8 +336,10 @@ impl GroupEncoding for Point { // If this the identity, set y to 1 let y = CtOption::conditional_select(&y, &CtOption::new(FieldElement::ONE, 1.into()), is_identity); + // If this the identity, set y to 1 and z to 0 (instead of 1) + let z = <_>::conditional_select(&FieldElement::ONE, &FieldElement::ZERO, is_identity); // Create the point if we have a y solution - let point = y.map(|y| Point { x, y, z: FieldElement::ONE }); + let point = y.map(|y| Point { x, y, z }); let not_negative_zero = !(is_identity & sign); // Only return the point if it isn't -0 diff --git a/crypto/evrf/secq256k1/src/point.rs b/crypto/evrf/secq256k1/src/point.rs index 384b68c9..b7e51037 100644 --- a/crypto/evrf/secq256k1/src/point.rs +++ b/crypto/evrf/secq256k1/src/point.rs @@ -192,6 +192,7 @@ impl Group for Point { Point { x: FieldElement::ZERO, y: FieldElement::ONE, z: FieldElement::ZERO } } fn generator() -> Self { + // Point with the lowest valid x-coordinate Point { x: FieldElement::from_repr( hex_literal::hex!("0000000000000000000000000000000000000000000000000000000000000001") @@ -334,8 +335,10 @@ impl GroupEncoding for Point { // If this the identity, set y to 1 let y = CtOption::conditional_select(&y, &CtOption::new(FieldElement::ONE, 1.into()), is_identity); + // If this the identity, set y to 1 and z to 0 (instead of 1) + let z = <_>::conditional_select(&FieldElement::ONE, &FieldElement::ZERO, is_identity); // Create the point if we have a y solution - let point = y.map(|y| Point { x, y, z: FieldElement::ONE }); + let point = y.map(|y| Point { x, y, z }); let not_negative_zero = !(is_identity & sign); // Only return the point if it isn't -0 and the sign byte wasn't malleated diff --git 
a/crypto/ff-group-tests/src/group.rs b/crypto/ff-group-tests/src/group.rs index 0f0aab4e..f2b69acc 100644 --- a/crypto/ff-group-tests/src/group.rs +++ b/crypto/ff-group-tests/src/group.rs @@ -154,18 +154,20 @@ pub fn test_group(rng: &mut R) { /// Test encoding and decoding of group elements. pub fn test_encoding() { - let test = |point: G, msg| { + let test = |point: G, msg| -> G { let bytes = point.to_bytes(); let mut repr = G::Repr::default(); repr.as_mut().copy_from_slice(bytes.as_ref()); - assert_eq!(point, G::from_bytes(&repr).unwrap(), "{msg} couldn't be encoded and decoded"); + let decoded = G::from_bytes(&repr).unwrap(); + assert_eq!(point, decoded, "{msg} couldn't be encoded and decoded"); assert_eq!( point, G::from_bytes_unchecked(&repr).unwrap(), "{msg} couldn't be encoded and decoded", ); + decoded }; - test(G::identity(), "identity"); + assert!(bool::from(test(G::identity(), "identity").is_identity())); test(G::generator(), "generator"); test(G::generator() + G::generator(), "(generator * 2)"); } From 3655dc723f91d7123fff0147beedcc001e7e73de Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 30 Jan 2025 00:13:55 -0500 Subject: [PATCH 364/368] Use clearer identity check in equality --- crypto/evrf/embedwards25519/src/point.rs | 3 ++- crypto/evrf/secq256k1/src/point.rs | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/crypto/evrf/embedwards25519/src/point.rs b/crypto/evrf/embedwards25519/src/point.rs index e5a0ba51..19f95c6a 100644 --- a/crypto/evrf/embedwards25519/src/point.rs +++ b/crypto/evrf/embedwards25519/src/point.rs @@ -46,7 +46,8 @@ impl ConstantTimeEq for Point { let y1 = self.y * other.z; let y2 = other.y * self.z; - (self.x.is_zero() & other.x.is_zero()) | (x1.ct_eq(&x2) & y1.ct_eq(&y2)) + // Both identity or equivalent over their denominators + (self.z.is_zero() & other.z.is_zero()) | (x1.ct_eq(&x2) & y1.ct_eq(&y2)) } } diff --git a/crypto/evrf/secq256k1/src/point.rs b/crypto/evrf/secq256k1/src/point.rs index b7e51037..9b590cdf 100644 --- a/crypto/evrf/secq256k1/src/point.rs +++ b/crypto/evrf/secq256k1/src/point.rs @@ -40,7 +40,8 @@ impl ConstantTimeEq for Point { let y1 = self.y * other.z; let y2 = other.y * self.z; - (self.x.is_zero() & other.x.is_zero()) | (x1.ct_eq(&x2) & y1.ct_eq(&y2)) + // Identity or equivalent + (self.z.is_zero() & other.z.is_zero()) | (x1.ct_eq(&x2) & y1.ct_eq(&y2)) } } From a275023cfc92415d4f6a04b7710d919eaf41bc21 Mon Sep 17 00:00:00 2001 From: Luke Parker Date: Thu, 30 Jan 2025 03:14:24 -0500 Subject: [PATCH 365/368] Finish merging in the develop branch --- Cargo.lock | 60 +-------------- Cargo.toml | 3 - coordinator/cosign/src/intend.rs | 6 +- coordinator/cosign/src/lib.rs | 61 +++++++-------- coordinator/p2p/libp2p/src/lib.rs | 11 +-- coordinator/p2p/libp2p/src/swarm.rs | 6 +- coordinator/p2p/libp2p/src/validators.rs | 29 ++++---- coordinator/p2p/src/heartbeat.rs | 4 +- coordinator/p2p/src/lib.rs | 14 ++-- coordinator/src/db.rs | 45 ++++++----- coordinator/src/dkg_confirmation.rs | 15 ++-- coordinator/src/main.rs | 28 +++---- coordinator/src/substrate.rs | 17 +++-- coordinator/src/tributary.rs | 16 ++-- coordinator/substrate/src/canonical.rs | 7 +- coordinator/substrate/src/ephemeral.rs | 20 +++-- coordinator/substrate/src/lib.rs | 40 +++++----- coordinator/substrate/src/publish_batch.rs | 18 ++--- .../substrate/src/publish_slash_report.rs | 12 +-- coordinator/substrate/src/set_keys.rs | 12 +-- coordinator/tributary/src/db.rs | 74 +++++++++++-------- coordinator/tributary/src/lib.rs | 21 +++--- 
patches/tiny-bip39/Cargo.toml | 24 ------ patches/tiny-bip39/src/lib.rs | 1 - processor/bitcoin/src/main.rs | 2 +- processor/bitcoin/src/primitives/output.rs | 6 +- processor/bitcoin/src/rpc.rs | 12 +-- processor/bitcoin/src/scheduler.rs | 4 +- processor/ethereum/src/primitives/output.rs | 20 ++--- processor/ethereum/src/rpc.rs | 16 ++-- processor/ethereum/src/scheduler.rs | 20 ++--- processor/monero/src/primitives/output.rs | 6 +- processor/monero/src/rpc.rs | 12 +-- processor/monero/src/scheduler.rs | 4 +- processor/primitives/src/output.rs | 4 +- processor/primitives/src/payment.rs | 10 +-- processor/scanner/src/batch/db.rs | 6 +- processor/scanner/src/lib.rs | 8 +- .../scheduler/utxo/primitives/src/tree.rs | 11 ++- processor/scheduler/utxo/standard/src/db.rs | 30 ++++---- processor/scheduler/utxo/standard/src/lib.rs | 48 ++++++------ .../utxo/transaction-chaining/src/db.rs | 24 +++--- .../utxo/transaction-chaining/src/lib.rs | 44 ++++++----- substrate/client/src/serai/in_instructions.rs | 6 +- substrate/client/src/serai/mod.rs | 4 +- substrate/client/src/serai/validator_sets.rs | 8 +- substrate/client/tests/batch.rs | 2 +- substrate/client/tests/burn.rs | 2 +- .../client/tests/common/genesis_liquidity.rs | 7 +- .../client/tests/common/in_instructions.rs | 2 +- .../client/tests/common/validator_sets.rs | 4 +- substrate/client/tests/dex.rs | 2 +- substrate/client/tests/dht.rs | 2 +- substrate/client/tests/emissions.rs | 25 +++---- substrate/client/tests/validator_sets.rs | 14 ++-- substrate/coins/pallet/src/tests.rs | 2 +- substrate/in-instructions/pallet/src/lib.rs | 13 ++-- .../in-instructions/primitives/src/lib.rs | 2 +- substrate/node/src/chain_spec.rs | 5 +- substrate/primitives/src/networks.rs | 15 +++- substrate/validator-sets/pallet/src/lib.rs | 12 +-- .../validator-sets/primitives/src/lib.rs | 2 +- 62 files changed, 452 insertions(+), 508 deletions(-) delete mode 100644 patches/tiny-bip39/Cargo.toml delete mode 100644 patches/tiny-bip39/src/lib.rs diff --git a/Cargo.lock b/Cargo.lock index 68a2657f..87d49f61 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -5630,19 +5630,6 @@ dependencies = [ "zeroize", ] -[[package]] -name = "monero-seed" -version = "0.1.0" -dependencies = [ - "curve25519-dalek", - "hex", - "monero-primitives", - "rand_core", - "std-shims", - "thiserror 2.0.9", - "zeroize", -] - [[package]] name = "monero-serai" version = "0.1.4-alpha" @@ -5717,21 +5704,6 @@ dependencies = [ "zeroize", ] -[[package]] -name = "monero-wallet-util" -version = "0.1.0" -dependencies = [ - "curve25519-dalek", - "hex", - "monero-seed", - "monero-wallet", - "polyseed", - "rand_core", - "std-shims", - "thiserror 2.0.9", - "zeroize", -] - [[package]] name = "multiaddr" version = "0.18.1" @@ -6478,17 +6450,6 @@ version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7924d1d0ad836f665c9065e26d016c673ece3993f30d340068b16f282afc1156" -[[package]] -name = "password-hash" -version = "0.5.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "346f04948ba92c43e8469c1ee6736c7563d71012b17d40745260fe106aac2166" -dependencies = [ - "base64ct", - "rand_core", - "subtle", -] - [[package]] name = "pasta_curves" version = "0.5.1" @@ -6533,9 +6494,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f8ed6a7761f76e3b9f92dfb0a60a6a6477c61024b775147ff0973a02653abaf2" dependencies = [ "digest 0.10.7", - "hmac", - "password-hash", - "sha2", ] [[package]] @@ -6658,20 +6616,6 @@ dependencies = [ "universal-hash", ] 
-[[package]] -name = "polyseed" -version = "0.1.0" -dependencies = [ - "hex", - "pbkdf2 0.12.2", - "rand_core", - "sha3", - "std-shims", - "subtle", - "thiserror 2.0.9", - "zeroize", -] - [[package]] name = "polyval" version = "0.6.2" @@ -9006,6 +8950,7 @@ dependencies = [ "serai-coins-primitives", "serai-primitives", "sp-core", + "sp-io", "sp-runtime", "sp-std", ] @@ -9445,7 +9390,7 @@ dependencies = [ "generalized-bulletproofs-circuit-abstraction", "generalized-bulletproofs-ec-gadgets", "minimal-ed448", - "monero-wallet-util", + "monero-wallet", "multiexp", "schnorr-signatures", "secq256k1", @@ -9531,6 +9476,7 @@ dependencies = [ "sp-core", "sp-io", "sp-runtime", + "sp-std", "zeroize", ] diff --git a/Cargo.toml b/Cargo.toml index c50c7c10..7ac71666 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -191,9 +191,6 @@ parking_lot = { path = "patches/parking_lot" } zstd = { path = "patches/zstd" } # Needed for WAL compression rocksdb = { path = "patches/rocksdb" } -# 1.0.1 was yanked due to a breaking change (an extra field) -# 2.0 has fewer dependencies and still works within our tree -tiny-bip39 = { path = "patches/tiny-bip39" } # is-terminal now has an std-based solution with an equivalent API is-terminal = { path = "patches/is-terminal" } diff --git a/coordinator/cosign/src/intend.rs b/coordinator/cosign/src/intend.rs index c42c2d12..08643aad 100644 --- a/coordinator/cosign/src/intend.rs +++ b/coordinator/cosign/src/intend.rs @@ -3,7 +3,7 @@ use std::{sync::Arc, collections::HashMap}; use serai_client::{ primitives::{SeraiAddress, Amount}, - validator_sets::primitives::ValidatorSet, + validator_sets::primitives::ExternalValidatorSet, Serai, }; @@ -28,7 +28,7 @@ db_channel! { CosignIntendChannels { GlobalSessionsChannel: () -> ([u8; 32], GlobalSession), BlockEvents: () -> BlockEventData, - IntendedCosigns: (set: ValidatorSet) -> CosignIntent, + IntendedCosigns: (set: ExternalValidatorSet) -> CosignIntent, } } @@ -110,7 +110,7 @@ impl ContinuallyRan for CosignIntendTask { keys.insert(set.network, SeraiAddress::from(*key)); let stake = serai .validator_sets() - .total_allocated_stake(set.network) + .total_allocated_stake(set.network.into()) .await .map_err(|e| format!("{e:?}"))? .unwrap_or(Amount(0)) diff --git a/coordinator/cosign/src/lib.rs b/coordinator/cosign/src/lib.rs index 3d476c3d..e98127b4 100644 --- a/coordinator/cosign/src/lib.rs +++ b/coordinator/cosign/src/lib.rs @@ -11,8 +11,8 @@ use scale::{Encode, Decode}; use borsh::{BorshSerialize, BorshDeserialize}; use serai_client::{ - primitives::{NetworkId, SeraiAddress}, - validator_sets::primitives::{Session, ValidatorSet, KeyPair}, + primitives::{ExternalNetworkId, SeraiAddress}, + validator_sets::primitives::{Session, ExternalValidatorSet, KeyPair}, Public, Block, Serai, TemporalSerai, }; @@ -52,13 +52,13 @@ pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign"; #[derive(Debug, BorshSerialize, BorshDeserialize)] pub(crate) struct GlobalSession { pub(crate) start_block_number: u64, - pub(crate) sets: Vec, - pub(crate) keys: HashMap, - pub(crate) stakes: HashMap, + pub(crate) sets: Vec, + pub(crate) keys: HashMap, + pub(crate) stakes: HashMap, pub(crate) total_stake: u64, } impl GlobalSession { - fn id(mut cosigners: Vec) -> [u8; 32] { + fn id(mut cosigners: Vec) -> [u8; 32] { cosigners.sort_by_key(|a| borsh::to_vec(a).unwrap()); Blake2s256::digest(borsh::to_vec(&cosigners).unwrap()).into() } @@ -101,12 +101,12 @@ pub struct Cosign { /// The hash of the block to cosign. pub block_hash: [u8; 32], /// The actual cosigner. 
- pub cosigner: NetworkId, + pub cosigner: ExternalNetworkId, } impl CosignIntent { /// Convert this into a `Cosign`. - pub fn into_cosign(self, cosigner: NetworkId) -> Cosign { + pub fn into_cosign(self, cosigner: ExternalNetworkId) -> Cosign { let CosignIntent { global_session, block_number, block_hash, notable: _ } = self; Cosign { global_session, block_number, block_hash, cosigner } } @@ -166,7 +166,10 @@ create_db! { // one notable block. All validator sets will explicitly produce a cosign for their notable // block, causing the latest cosigned block for a global session to either be the global // session's notable cosigns or the network's latest cosigns. - NetworksLatestCosignedBlock: (global_session: [u8; 32], network: NetworkId) -> SignedCosign, + NetworksLatestCosignedBlock: ( + global_session: [u8; 32], + network: ExternalNetworkId + ) -> SignedCosign, // Cosigns received for blocks not locally recognized as finalized. Faults: (global_session: [u8; 32]) -> Vec, // The global session which faulted. @@ -177,15 +180,10 @@ create_db! { /// Fetch the keys used for cosigning by a specific network. async fn keys_for_network( serai: &TemporalSerai<'_>, - network: NetworkId, + network: ExternalNetworkId, ) -> Result, String> { - // The Serai network never cosigns so it has no keys for cosigning - if network == NetworkId::Serai { - return Ok(None); - } - let Some(latest_session) = - serai.validator_sets().session(network).await.map_err(|e| format!("{e:?}"))? + serai.validator_sets().session(network.into()).await.map_err(|e| format!("{e:?}"))? else { // If this network hasn't had a session declared, move on return Ok(None); @@ -194,7 +192,7 @@ async fn keys_for_network( // Get the keys for the latest session if let Some(keys) = serai .validator_sets() - .keys(ValidatorSet { network, session: latest_session }) + .keys(ExternalValidatorSet { network, session: latest_session }) .await .map_err(|e| format!("{e:?}"))? { @@ -205,7 +203,7 @@ async fn keys_for_network( if let Some(prior_session) = latest_session.0.checked_sub(1).map(Session) { if let Some(keys) = serai .validator_sets() - .keys(ValidatorSet { network, session: prior_session }) + .keys(ExternalValidatorSet { network, session: prior_session }) .await .map_err(|e| format!("{e:?}"))? { @@ -216,16 +214,19 @@ async fn keys_for_network( Ok(None) } -/// Fetch the `ValidatorSet`s, and their associated keys, used for cosigning as of this block. -async fn cosigning_sets(serai: &TemporalSerai<'_>) -> Result, String> { - let mut sets = Vec::with_capacity(serai_client::primitives::NETWORKS.len()); - for network in serai_client::primitives::NETWORKS { +/// Fetch the `ExternalValidatorSet`s, and their associated keys, used for cosigning as of this +/// block. +async fn cosigning_sets( + serai: &TemporalSerai<'_>, +) -> Result, String> { + let mut sets = Vec::with_capacity(serai_client::primitives::EXTERNAL_NETWORKS.len()); + for network in serai_client::primitives::EXTERNAL_NETWORKS { let Some((session, keys)) = keys_for_network(serai, network).await? else { // If this network doesn't have usable keys, move on continue; }; - sets.push((ValidatorSet { network, session }, keys.0)); + sets.push((ExternalValidatorSet { network, session }, keys.0)); } Ok(sets) } @@ -345,8 +346,8 @@ impl Cosigning { /// If this global session hasn't produced any notable cosigns, this will return the latest /// cosigns for this session. 
pub fn notable_cosigns(getter: &impl Get, global_session: [u8; 32]) -> Vec { - let mut cosigns = Vec::with_capacity(serai_client::primitives::NETWORKS.len()); - for network in serai_client::primitives::NETWORKS { + let mut cosigns = Vec::with_capacity(serai_client::primitives::EXTERNAL_NETWORKS.len()); + for network in serai_client::primitives::EXTERNAL_NETWORKS { if let Some(cosign) = NetworksLatestCosignedBlock::get(getter, global_session, network) { cosigns.push(cosign); } @@ -363,7 +364,7 @@ impl Cosigning { let mut cosigns = Faults::get(&self.db, faulted).expect("faulted with no faults"); // Also include all of our recognized-as-honest cosigns in an attempt to induce fault // identification in those who see the faulty cosigns as honest - for network in serai_client::primitives::NETWORKS { + for network in serai_client::primitives::EXTERNAL_NETWORKS { if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, faulted, network) { if cosign.cosign.global_session == faulted { cosigns.push(cosign); @@ -375,8 +376,8 @@ impl Cosigning { let Some(global_session) = evaluator::currently_evaluated_global_session(&self.db) else { return vec![]; }; - let mut cosigns = Vec::with_capacity(serai_client::primitives::NETWORKS.len()); - for network in serai_client::primitives::NETWORKS { + let mut cosigns = Vec::with_capacity(serai_client::primitives::EXTERNAL_NETWORKS.len()); + for network in serai_client::primitives::EXTERNAL_NETWORKS { if let Some(cosign) = NetworksLatestCosignedBlock::get(&self.db, global_session, network) { cosigns.push(cosign); } @@ -487,12 +488,12 @@ impl Cosigning { Ok(()) } - /// Receive intended cosigns to produce for this ValidatorSet. + /// Receive intended cosigns to produce for this ExternalValidatorSet. /// /// All cosigns intended, up to and including the next notable cosign, are returned. /// /// This will drain the internal channel and not re-yield these intentions again. - pub fn intended_cosigns(txn: &mut impl DbTxn, set: ValidatorSet) -> Vec { + pub fn intended_cosigns(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Vec { let mut res: Vec = vec![]; // While we have yet to find a notable cosign... while !res.last().map(|cosign| cosign.notable).unwrap_or(false) { diff --git a/coordinator/p2p/libp2p/src/lib.rs b/coordinator/p2p/libp2p/src/lib.rs index 91f66a2d..8d60b32b 100644 --- a/coordinator/p2p/libp2p/src/lib.rs +++ b/coordinator/p2p/libp2p/src/lib.rs @@ -14,8 +14,8 @@ use zeroize::Zeroizing; use schnorrkel::Keypair; use serai_client::{ - primitives::{NetworkId, PublicKey}, - validator_sets::primitives::ValidatorSet, + primitives::{ExternalNetworkId, PublicKey}, + validator_sets::primitives::ExternalValidatorSet, Serai, }; @@ -104,7 +104,7 @@ impl serai_coordinator_p2p::Peer<'_> for Peer<'_> { #[derive(Clone)] struct Peers { - peers: Arc>>>, + peers: Arc>>>, } // Consider adding identify/kad/autonat/rendevous/(relay + dcutr). 
While we currently use the Serai @@ -135,7 +135,8 @@ struct Libp2pInner { signed_cosigns: Mutex>, signed_cosigns_send: mpsc::UnboundedSender, - heartbeat_requests: Mutex>, + heartbeat_requests: + Mutex>, notable_cosign_requests: Mutex>, inbound_request_responses: mpsc::UnboundedSender<(InboundRequestId, Response)>, } @@ -312,7 +313,7 @@ impl serai_cosign::RequestNotableCosigns for Libp2p { impl serai_coordinator_p2p::P2p for Libp2p { type Peer<'a> = Peer<'a>; - fn peers(&self, network: NetworkId) -> impl Send + Future>> { + fn peers(&self, network: ExternalNetworkId) -> impl Send + Future>> { async move { let Some(peer_ids) = self.0.peers.peers.read().await.get(&network).cloned() else { return vec![]; diff --git a/coordinator/p2p/libp2p/src/swarm.rs b/coordinator/p2p/libp2p/src/swarm.rs index 0d06c171..94a7cb03 100644 --- a/coordinator/p2p/libp2p/src/swarm.rs +++ b/coordinator/p2p/libp2p/src/swarm.rs @@ -6,7 +6,7 @@ use std::{ use borsh::BorshDeserialize; -use serai_client::validator_sets::primitives::ValidatorSet; +use serai_client::validator_sets::primitives::ExternalValidatorSet; use tokio::sync::{mpsc, oneshot, RwLock}; @@ -68,7 +68,7 @@ pub(crate) struct SwarmTask { outbound_request_responses: HashMap>, inbound_request_response_channels: HashMap>, - heartbeat_requests: mpsc::UnboundedSender<(InboundRequestId, ValidatorSet, [u8; 32])>, + heartbeat_requests: mpsc::UnboundedSender<(InboundRequestId, ExternalValidatorSet, [u8; 32])>, notable_cosign_requests: mpsc::UnboundedSender<(InboundRequestId, [u8; 32])>, inbound_request_responses: mpsc::UnboundedReceiver<(InboundRequestId, Response)>, } @@ -324,7 +324,7 @@ impl SwarmTask { outbound_requests: mpsc::UnboundedReceiver<(PeerId, Request, oneshot::Sender)>, - heartbeat_requests: mpsc::UnboundedSender<(InboundRequestId, ValidatorSet, [u8; 32])>, + heartbeat_requests: mpsc::UnboundedSender<(InboundRequestId, ExternalValidatorSet, [u8; 32])>, notable_cosign_requests: mpsc::UnboundedSender<(InboundRequestId, [u8; 32])>, inbound_request_responses: mpsc::UnboundedReceiver<(InboundRequestId, Response)>, ) { diff --git a/coordinator/p2p/libp2p/src/validators.rs b/coordinator/p2p/libp2p/src/validators.rs index 6b93cf4d..25fabacd 100644 --- a/coordinator/p2p/libp2p/src/validators.rs +++ b/coordinator/p2p/libp2p/src/validators.rs @@ -4,7 +4,9 @@ use std::{ collections::{HashSet, HashMap}, }; -use serai_client::{primitives::NetworkId, validator_sets::primitives::Session, SeraiError, Serai}; +use serai_client::{ + primitives::ExternalNetworkId, validator_sets::primitives::Session, SeraiError, Serai, +}; use serai_task::{Task, ContinuallyRan}; @@ -24,11 +26,11 @@ pub(crate) struct Validators { serai: Arc, // A cache for which session we're populated with the validators of - sessions: HashMap, + sessions: HashMap, // The validators by network - by_network: HashMap>, + by_network: HashMap>, // The validators and their networks - validators: HashMap>, + validators: HashMap>, // The channel to send the changes down changes: mpsc::UnboundedSender, @@ -49,8 +51,8 @@ impl Validators { async fn session_changes( serai: impl Borrow, - sessions: impl Borrow>, - ) -> Result)>, SeraiError> { + sessions: impl Borrow>, + ) -> Result)>, SeraiError> { /* This uses the latest finalized block, not the latest cosigned block, which should be fine as in the worst case, we'd connect to unexpected validators. 
They still shouldn't be able to @@ -67,13 +69,10 @@ impl Validators { // FuturesUnordered can be bad practice as it'll cause timeouts if infrequently polled, but // we poll it till it yields all futures with the most minimal processing possible let mut futures = FuturesUnordered::new(); - for network in serai_client::primitives::NETWORKS { - if network == NetworkId::Serai { - continue; - } + for network in serai_client::primitives::EXTERNAL_NETWORKS { let sessions = sessions.borrow(); futures.push(async move { - let session = match temporal_serai.session(network).await { + let session = match temporal_serai.session(network.into()).await { Ok(Some(session)) => session, Ok(None) => return Ok(None), Err(e) => return Err(e), @@ -82,7 +81,7 @@ impl Validators { if sessions.get(&network) == Some(&session) { Ok(None) } else { - match temporal_serai.active_network_validators(network).await { + match temporal_serai.active_network_validators(network.into()).await { Ok(validators) => Ok(Some(( network, session, @@ -105,7 +104,7 @@ impl Validators { fn incorporate_session_changes( &mut self, - session_changes: Vec<(NetworkId, Session, HashSet)>, + session_changes: Vec<(ExternalNetworkId, Session, HashSet)>, ) { let mut removed = HashSet::new(); let mut added = HashSet::new(); @@ -160,11 +159,11 @@ impl Validators { Ok(()) } - pub(crate) fn by_network(&self) -> &HashMap> { + pub(crate) fn by_network(&self) -> &HashMap> { &self.by_network } - pub(crate) fn networks(&self, peer_id: &PeerId) -> Option<&HashSet> { + pub(crate) fn networks(&self, peer_id: &PeerId) -> Option<&HashSet> { self.validators.get(peer_id) } } diff --git a/coordinator/p2p/src/heartbeat.rs b/coordinator/p2p/src/heartbeat.rs index f13a0e5c..7691abbd 100644 --- a/coordinator/p2p/src/heartbeat.rs +++ b/coordinator/p2p/src/heartbeat.rs @@ -1,7 +1,7 @@ use core::future::Future; use std::time::{Duration, SystemTime}; -use serai_client::validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ValidatorSet}; +use serai_client::validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ExternalValidatorSet}; use futures_lite::FutureExt; @@ -38,7 +38,7 @@ pub const BATCH_SIZE_LIMIT: usize = MIN_BLOCKS_PER_BATCH * /// If the other validator has more blocks then we do, they're expected to inform us. This forms /// the sync protocol for our Tributaries. pub(crate) struct HeartbeatTask { - pub(crate) set: ValidatorSet, + pub(crate) set: ExternalValidatorSet, pub(crate) tributary: Tributary, pub(crate) reader: TributaryReader, pub(crate) p2p: P, diff --git a/coordinator/p2p/src/lib.rs b/coordinator/p2p/src/lib.rs index 9bf245ca..68536b9d 100644 --- a/coordinator/p2p/src/lib.rs +++ b/coordinator/p2p/src/lib.rs @@ -7,7 +7,7 @@ use std::collections::HashMap; use borsh::{BorshSerialize, BorshDeserialize}; -use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet}; +use serai_client::{primitives::ExternalNetworkId, validator_sets::primitives::ExternalValidatorSet}; use serai_db::Db; use tributary_sdk::{ReadWrite, TransactionTrait, Tributary, TributaryReader}; @@ -25,7 +25,7 @@ use crate::heartbeat::HeartbeatTask; #[derive(Clone, Copy, BorshSerialize, BorshDeserialize, Debug)] pub struct Heartbeat { /// The Tributary this is the heartbeat of. - pub set: ValidatorSet, + pub set: ExternalValidatorSet, /// The hash of the latest block added to the Tributary. pub latest_block_hash: [u8; 32], } @@ -56,7 +56,7 @@ pub trait P2p: type Peer<'a>: Peer<'a>; /// Fetch the peers for this network. 
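/// A minimal usage sketch (hypothetical caller; assumes a `p2p: impl P2p` handle is in scope):
/// `let bitcoin_peers = p2p.peers(ExternalNetworkId::Bitcoin).await;`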
- fn peers(&self, network: NetworkId) -> impl Send + Future>>; + fn peers(&self, network: ExternalNetworkId) -> impl Send + Future>>; /// Broadcast a cosign. fn publish_cosign(&self, cosign: SignedCosign) -> impl Send + Future; @@ -131,13 +131,13 @@ fn handle_heartbeat( pub async fn run( db: impl Db, p2p: P, - mut add_tributary: mpsc::UnboundedReceiver<(ValidatorSet, Tributary)>, - mut retire_tributary: mpsc::UnboundedReceiver, + mut add_tributary: mpsc::UnboundedReceiver<(ExternalValidatorSet, Tributary)>, + mut retire_tributary: mpsc::UnboundedReceiver, send_cosigns: mpsc::UnboundedSender, ) { - let mut readers = HashMap::>::new(); + let mut readers = HashMap::>::new(); let mut tributaries = HashMap::<[u8; 32], mpsc::UnboundedSender>>::new(); - let mut heartbeat_tasks = HashMap::::new(); + let mut heartbeat_tasks = HashMap::::new(); loop { tokio::select! { diff --git a/coordinator/src/db.rs b/coordinator/src/db.rs index 631c6d4b..108e0f32 100644 --- a/coordinator/src/db.rs +++ b/coordinator/src/db.rs @@ -6,8 +6,8 @@ use serai_db::{create_db, db_channel}; use dkg::Participant; use serai_client::{ - primitives::NetworkId, - validator_sets::primitives::{Session, ValidatorSet, KeyPair}, + primitives::ExternalNetworkId, + validator_sets::primitives::{Session, ExternalValidatorSet, KeyPair}, }; use serai_cosign::SignedCosign; @@ -43,22 +43,21 @@ pub(crate) fn coordinator_db() -> Db { db(&format!("{root_path}/coordinator/db")) } -fn tributary_db_folder(set: ValidatorSet) -> String { +fn tributary_db_folder(set: ExternalValidatorSet) -> String { let root_path = serai_env::var("DB_PATH").expect("path to DB wasn't specified"); let network = match set.network { - NetworkId::Serai => panic!("creating Tributary for the Serai network"), - NetworkId::Bitcoin => "Bitcoin", - NetworkId::Ethereum => "Ethereum", - NetworkId::Monero => "Monero", + ExternalNetworkId::Bitcoin => "Bitcoin", + ExternalNetworkId::Ethereum => "Ethereum", + ExternalNetworkId::Monero => "Monero", }; format!("{root_path}/tributary-{network}-{}", set.session.0) } -pub(crate) fn tributary_db(set: ValidatorSet) -> Db { +pub(crate) fn tributary_db(set: ExternalValidatorSet) -> Db { db(&format!("{}/db", tributary_db_folder(set))) } -pub(crate) fn prune_tributary_db(set: ValidatorSet) { +pub(crate) fn prune_tributary_db(set: ExternalValidatorSet) { log::info!("pruning data directory for tributary {set:?}"); let db = tributary_db_folder(set); if fs::exists(&db).expect("couldn't check if tributary DB exists") { @@ -73,15 +72,15 @@ create_db! { // The latest Tributary to have been retired for a network // Since Tributaries are retired sequentially, this is informative to if any Tributary has been // retired - RetiredTributary: (network: NetworkId) -> Session, + RetiredTributary: (network: ExternalNetworkId) -> Session, // The last handled message from a Processor - LastProcessorMessage: (network: NetworkId) -> u64, + LastProcessorMessage: (network: ExternalNetworkId) -> u64, // Cosigns we produced and tried to intake yet incurred an error while doing so ErroneousCosigns: () -> Vec, // The keys to confirm and set on the Serai network - KeysToConfirm: (set: ValidatorSet) -> KeyPair, + KeysToConfirm: (set: ExternalValidatorSet) -> KeyPair, // The key was set on the Serai network - KeySet: (set: ValidatorSet) -> (), + KeySet: (set: ExternalValidatorSet) -> (), } } @@ -90,7 +89,7 @@ db_channel! 
{ // Cosigns we produced SignedCosigns: () -> SignedCosign, // Tributaries to clean up upon reboot - TributaryCleanup: () -> ValidatorSet, + TributaryCleanup: () -> ExternalValidatorSet, } } @@ -100,50 +99,50 @@ mod _internal_db { db_channel! { Coordinator { // Tributary transactions to publish from the Processor messages - TributaryTransactionsFromProcessorMessages: (set: ValidatorSet) -> Transaction, + TributaryTransactionsFromProcessorMessages: (set: ExternalValidatorSet) -> Transaction, // Tributary transactions to publish from the DKG confirmation task - TributaryTransactionsFromDkgConfirmation: (set: ValidatorSet) -> Transaction, + TributaryTransactionsFromDkgConfirmation: (set: ExternalValidatorSet) -> Transaction, // Participants to remove - RemoveParticipant: (set: ValidatorSet) -> Participant, + RemoveParticipant: (set: ExternalValidatorSet) -> Participant, } } } pub(crate) struct TributaryTransactionsFromProcessorMessages; impl TributaryTransactionsFromProcessorMessages { - pub(crate) fn send(txn: &mut impl DbTxn, set: ValidatorSet, tx: &Transaction) { + pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, tx: &Transaction) { // If this set has yet to be retired, send this transaction if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) { _internal_db::TributaryTransactionsFromProcessorMessages::send(txn, set, tx); } } - pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ValidatorSet) -> Option { + pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option { _internal_db::TributaryTransactionsFromProcessorMessages::try_recv(txn, set) } } pub(crate) struct TributaryTransactionsFromDkgConfirmation; impl TributaryTransactionsFromDkgConfirmation { - pub(crate) fn send(txn: &mut impl DbTxn, set: ValidatorSet, tx: &Transaction) { + pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, tx: &Transaction) { // If this set has yet to be retired, send this transaction if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) { _internal_db::TributaryTransactionsFromDkgConfirmation::send(txn, set, tx); } } - pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ValidatorSet) -> Option { + pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option { _internal_db::TributaryTransactionsFromDkgConfirmation::try_recv(txn, set) } } pub(crate) struct RemoveParticipant; impl RemoveParticipant { - pub(crate) fn send(txn: &mut impl DbTxn, set: ValidatorSet, participant: Participant) { + pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, participant: Participant) { // If this set has yet to be retired, send this transaction if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) { _internal_db::RemoveParticipant::send(txn, set, &participant); } } - pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ValidatorSet) -> Option { + pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option { _internal_db::RemoveParticipant::try_recv(txn, set) } } diff --git a/coordinator/src/dkg_confirmation.rs b/coordinator/src/dkg_confirmation.rs index b9af0ec7..a28fb40f 100644 --- a/coordinator/src/dkg_confirmation.rs +++ b/coordinator/src/dkg_confirmation.rs @@ -17,7 +17,7 @@ use serai_db::{DbTxn, Db as DbTrait}; use serai_client::{ primitives::SeraiAddress, - validator_sets::primitives::{ValidatorSet, musig_context, set_keys_message}, + validator_sets::primitives::{ExternalValidatorSet, musig_context, 
set_keys_message}, }; use serai_task::{DoesNotError, ContinuallyRan}; @@ -141,7 +141,7 @@ impl ConfirmDkgTask { Self { db, set, tributary_db, key, signer: None } } - fn slash(db: &mut CD, set: ValidatorSet, validator: SeraiAddress) { + fn slash(db: &mut CD, set: ExternalValidatorSet, validator: SeraiAddress) { let mut txn = db.txn(); TributaryTransactionsFromDkgConfirmation::send( &mut txn, @@ -153,7 +153,7 @@ impl ConfirmDkgTask { fn preprocess( db: &mut CD, - set: ValidatorSet, + set: ExternalValidatorSet, attempt: u32, key: &Zeroizing<::F>, signer: &mut Option, @@ -162,7 +162,9 @@ impl ConfirmDkgTask { let (machine, preprocess) = AlgorithmMachine::new( schnorrkel(), // We use a 1-of-1 Musig here as we don't know who will actually be in this Musig yet - musig(&musig_context(set), key, &[Ristretto::generator() * key.deref()]).unwrap().into(), + musig(&musig_context(set.into()), key, &[Ristretto::generator() * key.deref()]) + .unwrap() + .into(), ) .preprocess(&mut OsRng); // We take the preprocess so we can use it in a distinct machine with the actual Musig @@ -256,8 +258,9 @@ impl ContinuallyRan for ConfirmDkgTask { }) .collect::>(); - let keys = - musig(&musig_context(self.set.set), &self.key, &musig_public_keys).unwrap().into(); + let keys = musig(&musig_context(self.set.set.into()), &self.key, &musig_public_keys) + .unwrap() + .into(); // Rebuild the machine let (machine, preprocess_from_cache) = diff --git a/coordinator/src/main.rs b/coordinator/src/main.rs index 4d48a317..d63b79a2 100644 --- a/coordinator/src/main.rs +++ b/coordinator/src/main.rs @@ -14,8 +14,8 @@ use borsh::BorshDeserialize; use tokio::sync::mpsc; use serai_client::{ - primitives::{NetworkId, PublicKey, SeraiAddress, Signature}, - validator_sets::primitives::{ValidatorSet, KeyPair}, + primitives::{ExternalNetworkId, PublicKey, SeraiAddress, Signature}, + validator_sets::primitives::{ExternalValidatorSet, KeyPair}, Serai, }; use message_queue::{Service, client::MessageQueue}; @@ -153,14 +153,13 @@ async fn handle_network( mut db: impl serai_db::Db, message_queue: Arc, serai: Arc, - network: NetworkId, + network: ExternalNetworkId, ) { // Spawn the task to publish batches for this network { let (publish_batch_task_def, publish_batch_task) = Task::new(); tokio::spawn( PublishBatchTask::new(db.clone(), serai.clone(), network) - .unwrap() .continually_run(publish_batch_task_def, vec![]), ); // Forget its handle so it always runs in the background @@ -197,7 +196,7 @@ async fn handle_network( match msg { messages::ProcessorMessage::KeyGen(msg) => match msg { messages::key_gen::ProcessorMessage::Participation { session, participation } => { - let set = ValidatorSet { network, session }; + let set = ExternalValidatorSet { network, session }; TributaryTransactionsFromProcessorMessages::send( &mut txn, set, @@ -211,7 +210,7 @@ async fn handle_network( } => { KeysToConfirm::set( &mut txn, - ValidatorSet { network, session }, + ExternalValidatorSet { network, session }, &KeyPair( PublicKey::from_raw(substrate_key), network_key @@ -221,15 +220,15 @@ async fn handle_network( ); } messages::key_gen::ProcessorMessage::Blame { session, participant } => { - RemoveParticipant::send(&mut txn, ValidatorSet { network, session }, participant); + RemoveParticipant::send(&mut txn, ExternalValidatorSet { network, session }, participant); } }, messages::ProcessorMessage::Sign(msg) => match msg { messages::sign::ProcessorMessage::InvalidParticipant { session, participant } => { - RemoveParticipant::send(&mut txn, ValidatorSet { network, 
session }, participant); + RemoveParticipant::send(&mut txn, ExternalValidatorSet { network, session }, participant); } messages::sign::ProcessorMessage::Preprocesses { id, preprocesses } => { - let set = ValidatorSet { network, session: id.session }; + let set = ExternalValidatorSet { network, session: id.session }; if id.attempt == 0 { // Batches are declared by their intent to be signed if let messages::sign::VariantSignId::Batch(hash) = id.id { @@ -254,7 +253,7 @@ async fn handle_network( ); } messages::sign::ProcessorMessage::Shares { id, shares } => { - let set = ValidatorSet { network, session: id.session }; + let set = ExternalValidatorSet { network, session: id.session }; TributaryTransactionsFromProcessorMessages::send( &mut txn, set, @@ -282,7 +281,7 @@ async fn handle_network( } => { SlashReports::set( &mut txn, - ValidatorSet { network, session }, + ExternalValidatorSet { network, session }, slash_report, Signature(signature), ); @@ -298,7 +297,7 @@ async fn handle_network( .push(plan.transaction_plan_id); } for (session, plans) in by_session { - let set = ValidatorSet { network, session }; + let set = ExternalValidatorSet { network, session }; SubstrateBlockPlans::set(&mut txn, set, block, &plans); TributaryTransactionsFromProcessorMessages::send( &mut txn, @@ -481,10 +480,7 @@ async fn main() { ); // Handle each of the networks - for network in serai_client::primitives::NETWORKS { - if network == NetworkId::Serai { - continue; - } + for network in serai_client::primitives::EXTERNAL_NETWORKS { tokio::spawn(handle_network(db.clone(), message_queue.clone(), serai.clone(), network)); } diff --git a/coordinator/src/substrate.rs b/coordinator/src/substrate.rs index 7a78e512..4a70ee6b 100644 --- a/coordinator/src/substrate.rs +++ b/coordinator/src/substrate.rs @@ -9,7 +9,7 @@ use tokio::sync::mpsc; use serai_db::{DbTxn, Db as DbTrait}; -use serai_client::validator_sets::primitives::{Session, ValidatorSet}; +use serai_client::validator_sets::primitives::{Session, ExternalValidatorSet}; use message_queue::{Service, Metadata, client::MessageQueue}; use tributary_sdk::Tributary; @@ -27,8 +27,8 @@ pub(crate) struct SubstrateTask { pub(crate) message_queue: Arc, pub(crate) p2p: P, pub(crate) p2p_add_tributary: - mpsc::UnboundedSender<(ValidatorSet, Tributary)>, - pub(crate) p2p_retire_tributary: mpsc::UnboundedSender, + mpsc::UnboundedSender<(ExternalValidatorSet, Tributary)>, + pub(crate) p2p_retire_tributary: mpsc::UnboundedSender, } impl ContinuallyRan for SubstrateTask
<P>
{ @@ -38,7 +38,7 @@ impl ContinuallyRan for SubstrateTask
<P>
{ let mut made_progress = false; // Handle the Canonical events - for network in serai_client::primitives::NETWORKS { + for network in serai_client::primitives::EXTERNAL_NETWORKS { loop { let mut txn = self.db.txn(); let Some(msg) = serai_coordinator_substrate::Canonical::try_recv(&mut txn, network) @@ -48,7 +48,7 @@ impl ContinuallyRan for SubstrateTask
<P>
{ match msg { messages::substrate::CoordinatorMessage::SetKeys { session, .. } => { - KeySet::set(&mut txn, ValidatorSet { network, session }, &()); + KeySet::set(&mut txn, ExternalValidatorSet { network, session }, &()); } messages::substrate::CoordinatorMessage::SlashesReported { session } => { let prior_retired = crate::db::RetiredTributary::get(&txn, network); @@ -58,7 +58,7 @@ impl ContinuallyRan for SubstrateTask
<P>
{ crate::db::RetiredTributary::set(&mut txn, network, &session); self .p2p_retire_tributary - .send(ValidatorSet { network, session }) + .send(ExternalValidatorSet { network, session }) .expect("p2p retire_tributary channel dropped?"); } messages::substrate::CoordinatorMessage::Block { .. } => {} @@ -108,7 +108,10 @@ impl ContinuallyRan for SubstrateTask
<P>
{ */ crate::db::TributaryCleanup::send( &mut txn, - &ValidatorSet { network: new_set.set.network, session: Session(historic_session) }, + &ExternalValidatorSet { + network: new_set.set.network, + session: Session(historic_session), + }, ); } diff --git a/coordinator/src/tributary.rs b/coordinator/src/tributary.rs index 5f935f68..7f45797d 100644 --- a/coordinator/src/tributary.rs +++ b/coordinator/src/tributary.rs @@ -11,7 +11,7 @@ use tokio::sync::mpsc; use serai_db::{Get, DbTxn, Db as DbTrait, create_db, db_channel}; use scale::Encode; -use serai_client::validator_sets::primitives::ValidatorSet; +use serai_client::validator_sets::primitives::ExternalValidatorSet; use tributary_sdk::{TransactionKind, TransactionError, ProvidedError, TransactionTrait, Tributary}; @@ -33,13 +33,13 @@ use crate::{ create_db! { Coordinator { - PublishOnRecognition: (set: ValidatorSet, topic: Topic) -> Transaction, + PublishOnRecognition: (set: ExternalValidatorSet, topic: Topic) -> Transaction, } } db_channel! { Coordinator { - PendingCosigns: (set: ValidatorSet) -> CosignIntent, + PendingCosigns: (set: ExternalValidatorSet) -> CosignIntent, } } @@ -48,7 +48,7 @@ db_channel! { /// This is not a well-designed function. This is specific to the context in which its called, /// within this file. It should only be considered an internal helper for this domain alone. async fn provide_transaction( - set: ValidatorSet, + set: ExternalValidatorSet, tributary: &Tributary, tx: Transaction, ) { @@ -211,7 +211,7 @@ async fn add_signed_unsigned_transaction( } async fn add_with_recognition_check( - set: ValidatorSet, + set: ExternalValidatorSet, tributary_db: &mut TD, tributary: &Tributary, key: &Zeroizing<::F>, @@ -350,7 +350,7 @@ impl ContinuallyRan for AddTributaryTransactio /// Takes the messages from ScanTributaryTask and publishes them to the message-queue. 
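/// (Sketch of the assumed flow: ScanTributaryTask writes to the per-set `ProcessorMessages`
/// DB channel declared in coordinator/tributary/src/db.rs; this task reads it back with
/// `ProcessorMessages::try_recv(&mut txn, set)` and forwards each message to the queue.)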
pub(crate) struct TributaryProcessorMessagesTask { tributary_db: TD, - set: ValidatorSet, + set: ExternalValidatorSet, message_queue: Arc, } impl ContinuallyRan for TributaryProcessorMessagesTask { @@ -430,7 +430,7 @@ impl ContinuallyRan for SignSlashReportTask( db: CD, - set: ValidatorSet, + set: ExternalValidatorSet, tributary: Tributary, scan_tributary_task: TaskHandle, tasks_to_keep_alive: Vec, @@ -469,7 +469,7 @@ pub(crate) async fn spawn_tributary( db: Db, message_queue: Arc, p2p: P, - p2p_add_tributary: &mpsc::UnboundedSender<(ValidatorSet, Tributary)>, + p2p_add_tributary: &mpsc::UnboundedSender<(ExternalValidatorSet, Tributary)>, set: NewSetInformation, serai_key: Zeroizing<::F>, ) { diff --git a/coordinator/substrate/src/canonical.rs b/coordinator/substrate/src/canonical.rs index bc6db5ca..81a8ccce 100644 --- a/coordinator/substrate/src/canonical.rs +++ b/coordinator/substrate/src/canonical.rs @@ -3,7 +3,7 @@ use std::sync::Arc; use futures::stream::{StreamExt, FuturesOrdered}; -use serai_client::Serai; +use serai_client::{validator_sets::primitives::ExternalValidatorSet, Serai}; use messages::substrate::{InInstructionResult, ExecutedBatch, CoordinatorMessage}; @@ -152,6 +152,7 @@ impl ContinuallyRan for CanonicalEventStream { else { panic!("SetRetired event wasn't a SetRetired event: {set_retired:?}"); }; + let Ok(set) = ExternalValidatorSet::try_from(*set) else { continue }; crate::Canonical::send( &mut txn, set.network, @@ -159,7 +160,7 @@ impl ContinuallyRan for CanonicalEventStream { ); } - for network in serai_client::primitives::NETWORKS { + for network in serai_client::primitives::EXTERNAL_NETWORKS { let mut batch = None; for this_batch in &block.batch_events { let serai_client::in_instructions::InInstructionsEvent::Batch { @@ -201,7 +202,7 @@ impl ContinuallyRan for CanonicalEventStream { let serai_client::coins::CoinsEvent::BurnWithInstruction { from: _, instruction } = &burn else { - panic!("Burn event wasn't a Burn.in event: {burn:?}"); + panic!("BurnWithInstruction event wasn't a BurnWithInstruction event: {burn:?}"); }; if instruction.balance.coin.network() == network { burns.push(instruction.clone()); diff --git a/coordinator/substrate/src/ephemeral.rs b/coordinator/substrate/src/ephemeral.rs index 18c11d00..cb6e14cd 100644 --- a/coordinator/substrate/src/ephemeral.rs +++ b/coordinator/substrate/src/ephemeral.rs @@ -4,8 +4,8 @@ use std::sync::Arc; use futures::stream::{StreamExt, FuturesOrdered}; use serai_client::{ - primitives::{NetworkId, SeraiAddress, EmbeddedEllipticCurve}, - validator_sets::primitives::MAX_KEY_SHARES_PER_SET, + primitives::{SeraiAddress, EmbeddedEllipticCurve}, + validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ExternalValidatorSet}, Serai, }; @@ -130,16 +130,13 @@ impl ContinuallyRan for EphemeralEventStream { let serai_client::validator_sets::ValidatorSetsEvent::NewSet { set } = &new_set else { panic!("NewSet event wasn't a NewSet event: {new_set:?}"); }; - // We only coordinate over external networks - if set.network == NetworkId::Serai { - continue; - } + let Ok(set) = ExternalValidatorSet::try_from(*set) else { continue }; let serai = self.serai.as_of(block.block_hash); let serai = serai.validator_sets(); let Some(validators) = - serai.participants(set.network).await.map_err(|e| format!("{e:?}"))? + serai.participants(set.network.into()).await.map_err(|e| format!("{e:?}"))? 
else { Err(format!( "block #{block_number} declared a new set but didn't have the participants" @@ -222,11 +219,11 @@ impl ContinuallyRan for EphemeralEventStream { } let mut new_set = NewSetInformation { - set: *set, + set, serai_block: block.block_hash, declaration_time: block.time, - // TODO: Why do we have this as an explicit field here? - // Shouldn't this be inlined into the Processor's key gen code, where it's used? + // TODO: This should be inlined into the Processor's key gen code + // It's legacy from when we removed participants from the key gen threshold: ((total_weight * 2) / 3) + 1, validators, evrf_public_keys, @@ -246,7 +243,8 @@ impl ContinuallyRan for EphemeralEventStream { else { panic!("AcceptedHandover event wasn't a AcceptedHandover event: {accepted_handover:?}"); }; - crate::SignSlashReport::send(&mut txn, *set); + let Ok(set) = ExternalValidatorSet::try_from(*set) else { continue }; + crate::SignSlashReport::send(&mut txn, set); } txn.commit(); diff --git a/coordinator/substrate/src/lib.rs b/coordinator/substrate/src/lib.rs index 68566ff4..902234dc 100644 --- a/coordinator/substrate/src/lib.rs +++ b/coordinator/substrate/src/lib.rs @@ -10,8 +10,8 @@ use borsh::{BorshSerialize, BorshDeserialize}; use dkg::Participant; use serai_client::{ - primitives::{NetworkId, SeraiAddress, Signature}, - validator_sets::primitives::{Session, ValidatorSet, KeyPair, SlashReport}, + primitives::{ExternalNetworkId, SeraiAddress, Signature}, + validator_sets::primitives::{Session, ExternalValidatorSet, KeyPair, SlashReport}, in_instructions::primitives::SignedBatch, Transaction, }; @@ -35,7 +35,7 @@ pub use publish_slash_report::PublishSlashReportTask; #[borsh(init = init_participant_indexes)] pub struct NewSetInformation { /// The set. - pub set: ValidatorSet, + pub set: ExternalValidatorSet, /// The Serai block which declared it. pub serai_block: [u8; 32], /// The time of the block which declared it, in seconds. @@ -82,24 +82,24 @@ mod _public_db { db_channel!( CoordinatorSubstrate { // Canonical messages to send to the processor - Canonical: (network: NetworkId) -> messages::substrate::CoordinatorMessage, + Canonical: (network: ExternalNetworkId) -> messages::substrate::CoordinatorMessage, // Relevant new set, from an ephemeral event stream NewSet: () -> NewSetInformation, // Potentially relevant sign slash report, from an ephemeral event stream - SignSlashReport: (set: ValidatorSet) -> (), + SignSlashReport: (set: ExternalValidatorSet) -> (), // Signed batches to publish onto the Serai network - SignedBatches: (network: NetworkId) -> SignedBatch, + SignedBatches: (network: ExternalNetworkId) -> SignedBatch, } ); create_db!( CoordinatorSubstrate { // Keys to set on the Serai network - Keys: (network: NetworkId) -> (Session, Vec), + Keys: (network: ExternalNetworkId) -> (Session, Vec), // Slash reports to publish onto the Serai network - SlashReports: (network: NetworkId) -> (Session, Vec), + SlashReports: (network: ExternalNetworkId) -> (Session, Vec), } ); } @@ -109,7 +109,7 @@ pub struct Canonical; impl Canonical { pub(crate) fn send( txn: &mut impl DbTxn, - network: NetworkId, + network: ExternalNetworkId, msg: &messages::substrate::CoordinatorMessage, ) { _public_db::Canonical::send(txn, network, msg); @@ -117,7 +117,7 @@ impl Canonical { /// Try to receive a canonical event, returning `None` if there is none to receive. 
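/// Usage sketch, mirroring the drain loop in the coordinator's SubstrateTask:
/// `while let Some(msg) = Canonical::try_recv(&mut txn, network) { /* handle, then commit */ }`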
pub fn try_recv( txn: &mut impl DbTxn, - network: NetworkId, + network: ExternalNetworkId, ) -> Option { _public_db::Canonical::try_recv(txn, network) } @@ -141,12 +141,12 @@ impl NewSet { /// notifications for all relevant validator sets will be included. pub struct SignSlashReport; impl SignSlashReport { - pub(crate) fn send(txn: &mut impl DbTxn, set: ValidatorSet) { + pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet) { _public_db::SignSlashReport::send(txn, set, &()); } /// Try to receive a notification to sign a slash report, returning `None` if there is none to /// receive. - pub fn try_recv(txn: &mut impl DbTxn, set: ValidatorSet) -> Option<()> { + pub fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<()> { _public_db::SignSlashReport::try_recv(txn, set) } } @@ -160,7 +160,7 @@ impl Keys { /// reported at once. pub fn set( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, key_pair: KeyPair, signature_participants: bitvec::vec::BitVec, signature: Signature, @@ -180,7 +180,10 @@ impl Keys { ); _public_db::Keys::set(txn, set.network, &(set.session, tx.encode())); } - pub(crate) fn take(txn: &mut impl DbTxn, network: NetworkId) -> Option<(Session, Transaction)> { + pub(crate) fn take( + txn: &mut impl DbTxn, + network: ExternalNetworkId, + ) -> Option<(Session, Transaction)> { let (session, tx) = _public_db::Keys::take(txn, network)?; Some((session, <_>::decode(&mut tx.as_slice()).unwrap())) } @@ -193,7 +196,7 @@ impl SignedBatches { pub fn send(txn: &mut impl DbTxn, batch: &SignedBatch) { _public_db::SignedBatches::send(txn, batch.batch.network, batch); } - pub(crate) fn try_recv(txn: &mut impl DbTxn, network: NetworkId) -> Option { + pub(crate) fn try_recv(txn: &mut impl DbTxn, network: ExternalNetworkId) -> Option { _public_db::SignedBatches::try_recv(txn, network) } } @@ -207,7 +210,7 @@ impl SlashReports { /// slashes reported at once. 
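/// Call sketch, as in the coordinator's processor-message handler:
/// `SlashReports::set(&mut txn, ExternalValidatorSet { network, session }, slash_report, Signature(signature));`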
pub fn set( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, slash_report: SlashReport, signature: Signature, ) { @@ -225,7 +228,10 @@ impl SlashReports { ); _public_db::SlashReports::set(txn, set.network, &(set.session, tx.encode())); } - pub(crate) fn take(txn: &mut impl DbTxn, network: NetworkId) -> Option<(Session, Transaction)> { + pub(crate) fn take( + txn: &mut impl DbTxn, + network: ExternalNetworkId, + ) -> Option<(Session, Transaction)> { let (session, tx) = _public_db::SlashReports::take(txn, network)?; Some((session, <_>::decode(&mut tx.as_slice()).unwrap())) } diff --git a/coordinator/substrate/src/publish_batch.rs b/coordinator/substrate/src/publish_batch.rs index 83aa0718..ff4b46de 100644 --- a/coordinator/substrate/src/publish_batch.rs +++ b/coordinator/substrate/src/publish_batch.rs @@ -2,7 +2,7 @@ use core::future::Future; use std::sync::Arc; #[rustfmt::skip] -use serai_client::{primitives::NetworkId, in_instructions::primitives::SignedBatch, SeraiError, Serai}; +use serai_client::{primitives::ExternalNetworkId, in_instructions::primitives::SignedBatch, SeraiError, Serai}; use serai_db::{Get, DbTxn, Db, create_db}; use serai_task::ContinuallyRan; @@ -11,8 +11,8 @@ use crate::SignedBatches; create_db!( CoordinatorSubstrate { - LastPublishedBatch: (network: NetworkId) -> u32, - BatchesToPublish: (network: NetworkId, batch: u32) -> SignedBatch, + LastPublishedBatch: (network: ExternalNetworkId) -> u32, + BatchesToPublish: (network: ExternalNetworkId, batch: u32) -> SignedBatch, } ); @@ -20,19 +20,13 @@ create_db!( pub struct PublishBatchTask { db: D, serai: Arc, - network: NetworkId, + network: ExternalNetworkId, } impl PublishBatchTask { /// Create a task to publish `SignedBatch`s onto Serai. - /// - /// Returns None if `network == NetworkId::Serai`. - // TODO: ExternalNetworkId - pub fn new(db: D, serai: Arc, network: NetworkId) -> Option { - if network == NetworkId::Serai { - None? 
- }; - Some(Self { db, serai, network }) + pub fn new(db: D, serai: Arc, network: ExternalNetworkId) -> Self { + Self { db, serai, network } } } diff --git a/coordinator/substrate/src/publish_slash_report.rs b/coordinator/substrate/src/publish_slash_report.rs index 9be94f60..7b90d53d 100644 --- a/coordinator/substrate/src/publish_slash_report.rs +++ b/coordinator/substrate/src/publish_slash_report.rs @@ -3,7 +3,7 @@ use std::sync::Arc; use serai_db::{DbTxn, Db}; -use serai_client::{primitives::NetworkId, validator_sets::primitives::Session, Serai}; +use serai_client::{primitives::ExternalNetworkId, validator_sets::primitives::Session, Serai}; use serai_task::ContinuallyRan; @@ -24,7 +24,7 @@ impl PublishSlashReportTask { impl PublishSlashReportTask { // Returns if a slash report was successfully published - async fn publish(&mut self, network: NetworkId) -> Result { + async fn publish(&mut self, network: ExternalNetworkId) -> Result { let mut txn = self.db.txn(); let Some((session, slash_report)) = SlashReports::take(&mut txn, network) else { // No slash report to publish @@ -36,7 +36,7 @@ impl PublishSlashReportTask { let serai = self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; let serai = serai.validator_sets(); let session_after_slash_report = Session(session.0 + 1); - let current_session = serai.session(network).await.map_err(|e| format!("{e:?}"))?; + let current_session = serai.session(network.into()).await.map_err(|e| format!("{e:?}"))?; let current_session = current_session.map(|session| session.0); // Only attempt to publish the slash report for session #n while session #n+1 is still // active @@ -84,11 +84,7 @@ impl ContinuallyRan for PublishSlashReportTask { async move { let mut made_progress = false; let mut error = None; - for network in serai_client::primitives::NETWORKS { - if network == NetworkId::Serai { - continue; - }; - + for network in serai_client::primitives::EXTERNAL_NETWORKS { let network_res = self.publish(network).await; // We made progress if any network successfully published their slash report made_progress |= network_res == Ok(true); diff --git a/coordinator/substrate/src/set_keys.rs b/coordinator/substrate/src/set_keys.rs index a63e0923..b8bf2ad1 100644 --- a/coordinator/substrate/src/set_keys.rs +++ b/coordinator/substrate/src/set_keys.rs @@ -3,7 +3,7 @@ use std::sync::Arc; use serai_db::{DbTxn, Db}; -use serai_client::{primitives::NetworkId, validator_sets::primitives::ValidatorSet, Serai}; +use serai_client::{validator_sets::primitives::ExternalValidatorSet, Serai}; use serai_task::ContinuallyRan; @@ -28,11 +28,7 @@ impl ContinuallyRan for SetKeysTask { fn run_iteration(&mut self) -> impl Send + Future> { async move { let mut made_progress = false; - for network in serai_client::primitives::NETWORKS { - if network == NetworkId::Serai { - continue; - }; - + for network in serai_client::primitives::EXTERNAL_NETWORKS { let mut txn = self.db.txn(); let Some((session, keys)) = Keys::take(&mut txn, network) else { // No keys to set @@ -44,7 +40,7 @@ impl ContinuallyRan for SetKeysTask { let serai = self.serai.as_of_latest_finalized_block().await.map_err(|e| format!("{e:?}"))?; let serai = serai.validator_sets(); - let current_session = serai.session(network).await.map_err(|e| format!("{e:?}"))?; + let current_session = serai.session(network.into()).await.map_err(|e| format!("{e:?}"))?; let current_session = current_session.map(|session| session.0); // Only attempt to set these keys if this isn't a retired session if Some(session.0) 
< current_session { @@ -62,7 +58,7 @@ impl ContinuallyRan for SetKeysTask { // If this session already has had its keys set, move on if serai - .keys(ValidatorSet { network, session }) + .keys(ExternalValidatorSet { network, session }) .await .map_err(|e| format!("{e:?}"))? .is_some() diff --git a/coordinator/tributary/src/db.rs b/coordinator/tributary/src/db.rs index ef4199b8..7d5857eb 100644 --- a/coordinator/tributary/src/db.rs +++ b/coordinator/tributary/src/db.rs @@ -3,7 +3,7 @@ use std::collections::HashMap; use scale::Encode; use borsh::{BorshSerialize, BorshDeserialize}; -use serai_client::{primitives::SeraiAddress, validator_sets::primitives::ValidatorSet}; +use serai_client::{primitives::SeraiAddress, validator_sets::primitives::ExternalValidatorSet}; use messages::sign::{VariantSignId, SignId}; @@ -97,7 +97,7 @@ impl Topic { /// The SignId for this topic /// /// Returns None if Topic isn't Topic::Sign - pub(crate) fn sign_id(self, set: ValidatorSet) -> Option { + pub(crate) fn sign_id(self, set: ExternalValidatorSet) -> Option { #[allow(clippy::match_same_arms)] match self { Topic::RemoveParticipant { .. } => None, @@ -115,7 +115,7 @@ impl Topic { /// Returns None if Topic isn't Topic::DkgConfirmation. pub(crate) fn dkg_confirmation_sign_id( self, - set: ValidatorSet, + set: ExternalValidatorSet, ) -> Option { #[allow(clippy::match_same_arms)] match self { @@ -227,41 +227,48 @@ pub(crate) enum DataSet { create_db!( CoordinatorTributary { // The last handled tributary block's (number, hash) - LastHandledTributaryBlock: (set: ValidatorSet) -> (u64, [u8; 32]), + LastHandledTributaryBlock: (set: ExternalValidatorSet) -> (u64, [u8; 32]), // The slash points a validator has accrued, with u32::MAX representing a fatal slash. - SlashPoints: (set: ValidatorSet, validator: SeraiAddress) -> u32, + SlashPoints: (set: ExternalValidatorSet, validator: SeraiAddress) -> u32, // The cosign intent for a Substrate block - CosignIntents: (set: ValidatorSet, substrate_block_hash: [u8; 32]) -> CosignIntent, + CosignIntents: (set: ExternalValidatorSet, substrate_block_hash: [u8; 32]) -> CosignIntent, // The latest Substrate block to cosign. - LatestSubstrateBlockToCosign: (set: ValidatorSet) -> [u8; 32], + LatestSubstrateBlockToCosign: (set: ExternalValidatorSet) -> [u8; 32], // The hash of the block we're actively cosigning. - ActivelyCosigning: (set: ValidatorSet) -> [u8; 32], + ActivelyCosigning: (set: ExternalValidatorSet) -> [u8; 32], // If this block has already been cosigned. - Cosigned: (set: ValidatorSet, substrate_block_hash: [u8; 32]) -> (), + Cosigned: (set: ExternalValidatorSet, substrate_block_hash: [u8; 32]) -> (), // The plans to recognize upon a `Transaction::SubstrateBlock` being included on-chain. - SubstrateBlockPlans: (set: ValidatorSet, substrate_block_hash: [u8; 32]) -> Vec<[u8; 32]>, + SubstrateBlockPlans: ( + set: ExternalValidatorSet, + substrate_block_hash: [u8; 32] + ) -> Vec<[u8; 32]>, // The weight accumulated for a topic. - AccumulatedWeight: (set: ValidatorSet, topic: Topic) -> u16, + AccumulatedWeight: (set: ExternalValidatorSet, topic: Topic) -> u16, // The entries accumulated for a topic, by validator. - Accumulated: (set: ValidatorSet, topic: Topic, validator: SeraiAddress) -> D, + Accumulated: ( + set: ExternalValidatorSet, + topic: Topic, + validator: SeraiAddress + ) -> D, // Topics to be recognized as of a certain block number due to the reattempt protocol. 
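// (Entries are taken back in `start_of_block` below: once the chain reaches `block_number`,
// each queued topic is re-recognized so a fresh attempt can be made.)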
- Reattempt: (set: ValidatorSet, block_number: u64) -> Vec, + Reattempt: (set: ExternalValidatorSet, block_number: u64) -> Vec, } ); db_channel!( CoordinatorTributary { // Messages to send to the processor - ProcessorMessages: (set: ValidatorSet) -> messages::CoordinatorMessage, + ProcessorMessages: (set: ExternalValidatorSet) -> messages::CoordinatorMessage, // Messages for the DKG confirmation - DkgConfirmationMessages: (set: ValidatorSet) -> messages::sign::CoordinatorMessage, + DkgConfirmationMessages: (set: ExternalValidatorSet) -> messages::sign::CoordinatorMessage, // Topics which have been explicitly recognized - RecognizedTopics: (set: ValidatorSet) -> Topic, + RecognizedTopics: (set: ExternalValidatorSet) -> Topic, } ); @@ -269,13 +276,13 @@ pub(crate) struct TributaryDb; impl TributaryDb { pub(crate) fn last_handled_tributary_block( getter: &impl Get, - set: ValidatorSet, + set: ExternalValidatorSet, ) -> Option<(u64, [u8; 32])> { LastHandledTributaryBlock::get(getter, set) } pub(crate) fn set_last_handled_tributary_block( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, block_number: u64, block_hash: [u8; 32], ) { @@ -284,23 +291,26 @@ impl TributaryDb { pub(crate) fn latest_substrate_block_to_cosign( getter: &impl Get, - set: ValidatorSet, + set: ExternalValidatorSet, ) -> Option<[u8; 32]> { LatestSubstrateBlockToCosign::get(getter, set) } pub(crate) fn set_latest_substrate_block_to_cosign( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, substrate_block_hash: [u8; 32], ) { LatestSubstrateBlockToCosign::set(txn, set, &substrate_block_hash); } - pub(crate) fn actively_cosigning(txn: &mut impl DbTxn, set: ValidatorSet) -> Option<[u8; 32]> { + pub(crate) fn actively_cosigning( + txn: &mut impl DbTxn, + set: ExternalValidatorSet, + ) -> Option<[u8; 32]> { ActivelyCosigning::get(txn, set) } pub(crate) fn start_cosigning( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, substrate_block_hash: [u8; 32], substrate_block_number: u64, ) { @@ -320,33 +330,33 @@ impl TributaryDb { }, ); } - pub(crate) fn finish_cosigning(txn: &mut impl DbTxn, set: ValidatorSet) { + pub(crate) fn finish_cosigning(txn: &mut impl DbTxn, set: ExternalValidatorSet) { assert!(ActivelyCosigning::take(txn, set).is_some(), "finished cosigning but not cosigning"); } pub(crate) fn mark_cosigned( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, substrate_block_hash: [u8; 32], ) { Cosigned::set(txn, set, substrate_block_hash, &()); } pub(crate) fn cosigned( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, substrate_block_hash: [u8; 32], ) -> bool { Cosigned::get(txn, set, substrate_block_hash).is_some() } - pub(crate) fn recognize_topic(txn: &mut impl DbTxn, set: ValidatorSet, topic: Topic) { + pub(crate) fn recognize_topic(txn: &mut impl DbTxn, set: ExternalValidatorSet, topic: Topic) { AccumulatedWeight::set(txn, set, topic, &0); RecognizedTopics::send(txn, set, &topic); } - pub(crate) fn recognized(getter: &impl Get, set: ValidatorSet, topic: Topic) -> bool { + pub(crate) fn recognized(getter: &impl Get, set: ExternalValidatorSet, topic: Topic) -> bool { AccumulatedWeight::get(getter, set, topic).is_some() } - pub(crate) fn start_of_block(txn: &mut impl DbTxn, set: ValidatorSet, block_number: u64) { + pub(crate) fn start_of_block(txn: &mut impl DbTxn, set: ExternalValidatorSet, block_number: u64) { for topic in Reattempt::take(txn, set, block_number).unwrap_or(vec![]) { /* TODO: Slash all people who 
preprocessed but didn't share, and add a delay to their @@ -376,7 +386,7 @@ impl TributaryDb { pub(crate) fn fatal_slash( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, validator: SeraiAddress, reason: &str, ) { @@ -386,7 +396,7 @@ impl TributaryDb { pub(crate) fn is_fatally_slashed( getter: &impl Get, - set: ValidatorSet, + set: ExternalValidatorSet, validator: SeraiAddress, ) -> bool { SlashPoints::get(getter, set, validator).unwrap_or(0) == u32::MAX @@ -395,7 +405,7 @@ impl TributaryDb { #[allow(clippy::too_many_arguments)] pub(crate) fn accumulate( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, validators: &[SeraiAddress], total_weight: u16, block_number: u64, @@ -511,7 +521,7 @@ impl TributaryDb { pub(crate) fn send_message( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, message: impl Into, ) { ProcessorMessages::send(txn, set, &message.into()); diff --git a/coordinator/tributary/src/lib.rs b/coordinator/tributary/src/lib.rs index 1e1235ad..1c82d5b9 100644 --- a/coordinator/tributary/src/lib.rs +++ b/coordinator/tributary/src/lib.rs @@ -10,7 +10,7 @@ use dkg::Participant; use serai_client::{ primitives::SeraiAddress, - validator_sets::primitives::{ValidatorSet, Slash}, + validator_sets::primitives::{ExternalValidatorSet, Slash}, }; use serai_db::*; @@ -41,7 +41,10 @@ pub use db::Topic; pub struct ProcessorMessages; impl ProcessorMessages { /// Try to receive a message to send to a Processor. - pub fn try_recv(txn: &mut impl DbTxn, set: ValidatorSet) -> Option { + pub fn try_recv( + txn: &mut impl DbTxn, + set: ExternalValidatorSet, + ) -> Option { db::ProcessorMessages::try_recv(txn, set) } } @@ -58,7 +61,7 @@ impl DkgConfirmationMessages { /// across validator sets, with no guarantees of uniqueness across contexts. pub fn try_recv( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, ) -> Option { db::DkgConfirmationMessages::try_recv(txn, set) } @@ -70,12 +73,12 @@ impl CosignIntents { /// Provide a CosignIntent for this Tributary. /// /// This must be done before the associated `Transaction::Cosign` is provided. - pub fn provide(txn: &mut impl DbTxn, set: ValidatorSet, intent: &CosignIntent) { + pub fn provide(txn: &mut impl DbTxn, set: ExternalValidatorSet, intent: &CosignIntent) { db::CosignIntents::set(txn, set, intent.block_hash, intent); } fn take( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, substrate_block_hash: [u8; 32], ) -> Option { db::CosignIntents::take(txn, set, substrate_block_hash) @@ -88,13 +91,13 @@ impl RecognizedTopics { /// If this topic has been recognized by this Tributary. /// /// This will either be by explicit recognition or participation. - pub fn recognized(getter: &impl Get, set: ValidatorSet, topic: Topic) -> bool { + pub fn recognized(getter: &impl Get, set: ExternalValidatorSet, topic: Topic) -> bool { TributaryDb::recognized(getter, set, topic) } /// The next topic requiring recognition which has been recognized by this Tributary. pub fn try_recv_topic_requiring_recognition( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, ) -> Option { db::RecognizedTopics::try_recv(txn, set) } @@ -109,7 +112,7 @@ impl SubstrateBlockPlans { /// This must be done before the associated `Transaction::Cosign` is provided. 
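/// Call sketch, as in the coordinator's handling of reported Substrate block plans:
/// `SubstrateBlockPlans::set(&mut txn, set, block, &plans);`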
pub fn set( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, substrate_block_hash: [u8; 32], plans: &Vec<[u8; 32]>, ) { @@ -117,7 +120,7 @@ impl SubstrateBlockPlans { } fn take( txn: &mut impl DbTxn, - set: ValidatorSet, + set: ExternalValidatorSet, substrate_block_hash: [u8; 32], ) -> Option> { db::SubstrateBlockPlans::take(txn, set, substrate_block_hash) diff --git a/patches/tiny-bip39/Cargo.toml b/patches/tiny-bip39/Cargo.toml deleted file mode 100644 index ff9b8a61..00000000 --- a/patches/tiny-bip39/Cargo.toml +++ /dev/null @@ -1,24 +0,0 @@ -[package] -name = "tiny-bip39" -version = "1.0.2" -description = "tiny-bip39 which patches to the latest update" -license = "MIT" -repository = "https://github.com/serai-dex/serai/tree/develop/patches/tiny-bip39" -authors = ["Luke Parker "] -keywords = [] -edition = "2021" -rust-version = "1.70" - -[package.metadata.docs.rs] -all-features = true -rustdoc-args = ["--cfg", "docsrs"] - -[package.metadata.cargo-machete] -ignored = ["tiny-bip39"] - -[lib] -name = "bip39" -path = "src/lib.rs" - -[dependencies] -tiny-bip39 = "2" diff --git a/patches/tiny-bip39/src/lib.rs b/patches/tiny-bip39/src/lib.rs deleted file mode 100644 index 3890f5ae..00000000 --- a/patches/tiny-bip39/src/lib.rs +++ /dev/null @@ -1 +0,0 @@ -pub use bip39::*; diff --git a/processor/bitcoin/src/main.rs b/processor/bitcoin/src/main.rs index 5feb3e25..302f670b 100644 --- a/processor/bitcoin/src/main.rs +++ b/processor/bitcoin/src/main.rs @@ -90,7 +90,7 @@ use bitcoin_serai::bitcoin::{ }; use serai_client::{ - primitives::{MAX_DATA_LEN, Coin, NetworkId, Amount, Balance}, + primitives::{MAX_DATA_LEN, ExternalNetworkId, ExternalCoin, Amount, Balance}, networks::bitcoin::Address, }; */ diff --git a/processor/bitcoin/src/primitives/output.rs b/processor/bitcoin/src/primitives/output.rs index f1a1dc7a..44f422c2 100644 --- a/processor/bitcoin/src/primitives/output.rs +++ b/processor/bitcoin/src/primitives/output.rs @@ -14,7 +14,7 @@ use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::Get; use serai_client::{ - primitives::{Coin, Amount, Balance, ExternalAddress}, + primitives::{ExternalCoin, Amount, ExternalBalance, ExternalAddress}, networks::bitcoin::Address, }; @@ -127,8 +127,8 @@ impl ReceivedOutput<::G, Address> for Output { self.presumed_origin.clone() } - fn balance(&self) -> Balance { - Balance { coin: Coin::Bitcoin, amount: Amount(self.output.value()) } + fn balance(&self) -> ExternalBalance { + ExternalBalance { coin: ExternalCoin::Bitcoin, amount: Amount(self.output.value()) } } fn data(&self) -> &[u8] { diff --git a/processor/bitcoin/src/rpc.rs b/processor/bitcoin/src/rpc.rs index acd3be85..4289c714 100644 --- a/processor/bitcoin/src/rpc.rs +++ b/processor/bitcoin/src/rpc.rs @@ -2,7 +2,7 @@ use core::future::Future; use bitcoin_serai::rpc::{RpcError, Rpc as BRpc}; -use serai_client::primitives::{NetworkId, Coin, Amount}; +use serai_client::primitives::{ExternalNetworkId, ExternalCoin, Amount}; use serai_db::Db; use scanner::ScannerFeed; @@ -21,7 +21,7 @@ pub(crate) struct Rpc { } impl ScannerFeed for Rpc { - const NETWORK: NetworkId = NetworkId::Bitcoin; + const NETWORK: ExternalNetworkId = ExternalNetworkId::Bitcoin; // 6 confirmations is widely accepted as secure and shouldn't occur const CONFIRMATIONS: u64 = 6; // The window length should be roughly an hour @@ -118,8 +118,8 @@ impl ScannerFeed for Rpc { } } - fn dust(coin: Coin) -> Amount { - assert_eq!(coin, Coin::Bitcoin); + fn dust(coin: ExternalCoin) -> Amount { + assert_eq!(coin, 
ExternalCoin::Bitcoin); /* A Taproot input is: @@ -158,11 +158,11 @@ impl ScannerFeed for Rpc { fn cost_to_aggregate( &self, - coin: Coin, + coin: ExternalCoin, _reference_block: &Self::Block, ) -> impl Send + Future> { async move { - assert_eq!(coin, Coin::Bitcoin); + assert_eq!(coin, ExternalCoin::Bitcoin); // TODO Ok(Amount(0)) } diff --git a/processor/bitcoin/src/scheduler.rs b/processor/bitcoin/src/scheduler.rs index 08dc508c..00f4a072 100644 --- a/processor/bitcoin/src/scheduler.rs +++ b/processor/bitcoin/src/scheduler.rs @@ -8,7 +8,7 @@ use bitcoin_serai::{ }; use serai_client::{ - primitives::{Coin, Amount}, + primitives::{ExternalCoin, Amount}, networks::bitcoin::Address, }; @@ -59,7 +59,7 @@ fn signable_transaction( .map(|payment| { (ScriptBuf::from(payment.address().clone()), { let balance = payment.balance(); - assert_eq!(balance.coin, Coin::Bitcoin); + assert_eq!(balance.coin, ExternalCoin::Bitcoin); balance.amount.0 }) }) diff --git a/processor/ethereum/src/primitives/output.rs b/processor/ethereum/src/primitives/output.rs index 99ffc880..797b528d 100644 --- a/processor/ethereum/src/primitives/output.rs +++ b/processor/ethereum/src/primitives/output.rs @@ -8,7 +8,7 @@ use scale::{Encode, Decode}; use borsh::{BorshSerialize, BorshDeserialize}; use serai_client::{ - primitives::{NetworkId, Coin, Amount, Balance}, + primitives::{ExternalNetworkId, ExternalCoin, Amount, ExternalBalance}, networks::ethereum::Address, }; @@ -17,20 +17,20 @@ use ethereum_router::{Coin as EthereumCoin, InInstruction as EthereumInInstructi use crate::{DAI, ETHER_DUST}; -fn coin_to_serai_coin(coin: &EthereumCoin) -> Option { +fn coin_to_serai_coin(coin: &EthereumCoin) -> Option { match coin { - EthereumCoin::Ether => Some(Coin::Ether), + EthereumCoin::Ether => Some(ExternalCoin::Ether), EthereumCoin::Erc20(token) => { if *token == DAI { - return Some(Coin::Dai); + return Some(ExternalCoin::Dai); } None } } } -fn amount_to_serai_amount(coin: Coin, amount: U256) -> Amount { - assert_eq!(coin.network(), NetworkId::Ethereum); +fn amount_to_serai_amount(coin: ExternalCoin, amount: U256) -> Amount { + assert_eq!(coin.network(), ExternalNetworkId::Ethereum); assert_eq!(coin.decimals(), 8); // Remove 10 decimals so we go from 18 decimals to 8 decimals let divisor = U256::from(10_000_000_000u64); @@ -119,7 +119,7 @@ impl ReceivedOutput<::G, Address> for Output { } } - fn balance(&self) -> Balance { + fn balance(&self) -> ExternalBalance { match self { Output::Output { key: _, instruction } => { let coin = coin_to_serai_coin(&instruction.coin).unwrap_or_else(|| { @@ -128,9 +128,11 @@ impl ReceivedOutput<::G, Address> for Output { "this never should have been yielded" ) }); - Balance { coin, amount: amount_to_serai_amount(coin, instruction.amount) } + ExternalBalance { coin, amount: amount_to_serai_amount(coin, instruction.amount) } + } + Output::Eventuality { .. } => { + ExternalBalance { coin: ExternalCoin::Ether, amount: ETHER_DUST } } - Output::Eventuality { .. 
} => Balance { coin: Coin::Ether, amount: ETHER_DUST }, } } fn data(&self) -> &[u8] { diff --git a/processor/ethereum/src/rpc.rs b/processor/ethereum/src/rpc.rs index b5b50cfa..57c14f59 100644 --- a/processor/ethereum/src/rpc.rs +++ b/processor/ethereum/src/rpc.rs @@ -7,7 +7,7 @@ use alloy_transport::{RpcError, TransportErrorKind}; use alloy_simple_request_transport::SimpleRequest; use alloy_provider::{Provider, RootProvider}; -use serai_client::primitives::{NetworkId, Coin, Amount}; +use serai_client::primitives::{ExternalNetworkId, ExternalCoin, Amount}; use tokio::task::JoinSet; @@ -30,7 +30,7 @@ pub(crate) struct Rpc { } impl ScannerFeed for Rpc { - const NETWORK: NetworkId = NetworkId::Ethereum; + const NETWORK: ExternalNetworkId = ExternalNetworkId::Ethereum; // We only need one confirmation as Ethereum properly finalizes const CONFIRMATIONS: u64 = 1; @@ -209,22 +209,22 @@ impl ScannerFeed for Rpc { } } - fn dust(coin: Coin) -> Amount { - assert_eq!(coin.network(), NetworkId::Ethereum); + fn dust(coin: ExternalCoin) -> Amount { + assert_eq!(coin.network(), ExternalNetworkId::Ethereum); match coin { - Coin::Ether => ETHER_DUST, - Coin::Dai => DAI_DUST, + ExternalCoin::Ether => ETHER_DUST, + ExternalCoin::Dai => DAI_DUST, _ => unreachable!(), } } fn cost_to_aggregate( &self, - coin: Coin, + coin: ExternalCoin, _reference_block: &Self::Block, ) -> impl Send + Future> { async move { - assert_eq!(coin.network(), NetworkId::Ethereum); + assert_eq!(coin.network(), ExternalNetworkId::Ethereum); // There is no cost to aggregate as we receive to an account Ok(Amount(0)) } diff --git a/processor/ethereum/src/scheduler.rs b/processor/ethereum/src/scheduler.rs index e8a437c1..207792ec 100644 --- a/processor/ethereum/src/scheduler.rs +++ b/processor/ethereum/src/scheduler.rs @@ -3,7 +3,7 @@ use std::collections::HashMap; use alloy_core::primitives::U256; use serai_client::{ - primitives::{NetworkId, Coin, Balance}, + primitives::{ExternalNetworkId, ExternalCoin, ExternalBalance}, networks::ethereum::Address, }; @@ -17,17 +17,17 @@ use ethereum_router::Coin as EthereumCoin; use crate::{DAI, transaction::Action, rpc::Rpc}; -fn coin_to_ethereum_coin(coin: Coin) -> EthereumCoin { - assert_eq!(coin.network(), NetworkId::Ethereum); +fn coin_to_ethereum_coin(coin: ExternalCoin) -> EthereumCoin { + assert_eq!(coin.network(), ExternalNetworkId::Ethereum); match coin { - Coin::Ether => EthereumCoin::Ether, - Coin::Dai => EthereumCoin::Erc20(DAI), + ExternalCoin::Ether => EthereumCoin::Ether, + ExternalCoin::Dai => EthereumCoin::Erc20(DAI), _ => unreachable!(), } } -fn balance_to_ethereum_amount(balance: Balance) -> U256 { - assert_eq!(balance.coin.network(), NetworkId::Ethereum); +fn balance_to_ethereum_amount(balance: ExternalBalance) -> U256 { + assert_eq!(balance.coin.network(), ExternalNetworkId::Ethereum); assert_eq!(balance.coin.decimals(), 8); // Restore 10 decimals so we go from 8 decimals to 18 decimals // TODO: Document the expectation all integrated coins have 18 decimals @@ -73,17 +73,17 @@ impl smart_contract_scheduler::SmartContract> for SmartContract { } let mut res = vec![]; - for coin in [Coin::Ether, Coin::Dai] { + for coin in [ExternalCoin::Ether, ExternalCoin::Dai] { let Some(outs) = outs.remove(&coin) else { continue }; assert!(!outs.is_empty()); let fee_per_gas = match coin { // 10 gwei - Coin::Ether => { + ExternalCoin::Ether => { U256::try_from(10u64).unwrap() * alloy_core::primitives::utils::Unit::GWEI.wei() } // 0.0003 DAI - Coin::Dai => { + ExternalCoin::Dai => { 
U256::try_from(30u64).unwrap() * alloy_core::primitives::utils::Unit::TWEI.wei() } _ => unreachable!(), diff --git a/processor/monero/src/primitives/output.rs b/processor/monero/src/primitives/output.rs index 201e75c9..b2b87a5c 100644 --- a/processor/monero/src/primitives/output.rs +++ b/processor/monero/src/primitives/output.rs @@ -8,7 +8,7 @@ use scale::{Encode, Decode}; use borsh::{BorshSerialize, BorshDeserialize}; use serai_client::{ - primitives::{Coin, Amount, Balance}, + primitives::{ExternalCoin, Amount, ExternalBalance}, networks::monero::Address, }; @@ -76,8 +76,8 @@ impl ReceivedOutput<<Ed25519 as Ciphersuite>::G, Address> for Output { None } - fn balance(&self) -> Balance { - Balance { coin: Coin::Monero, amount: Amount(self.0.commitment().amount) } + fn balance(&self) -> ExternalBalance { + ExternalBalance { coin: ExternalCoin::Monero, amount: Amount(self.0.commitment().amount) } } fn data(&self) -> &[u8] { diff --git a/processor/monero/src/rpc.rs b/processor/monero/src/rpc.rs index 9244b23f..5ca74d02 100644 --- a/processor/monero/src/rpc.rs +++ b/processor/monero/src/rpc.rs @@ -3,7 +3,7 @@ use core::future::Future; use monero_wallet::rpc::{RpcError, Rpc as RpcTrait}; use monero_simple_request_rpc::SimpleRequestRpc; -use serai_client::primitives::{NetworkId, Coin, Amount}; +use serai_client::primitives::{ExternalNetworkId, ExternalCoin, Amount}; use scanner::ScannerFeed; use signers::TransactionPublisher; @@ -19,7 +19,7 @@ pub(crate) struct Rpc<D: Db> { } impl<D: Db> ScannerFeed for Rpc<D> { - const NETWORK: NetworkId = NetworkId::Monero; + const NETWORK: ExternalNetworkId = ExternalNetworkId::Monero; // Outputs aren't spendable until 10 blocks later due to the 10-block lock // Since we assume scanned outputs are spendable, that sets a minimum confirmation depth of 10 // A 10-block reorganization hasn't been observed in years and shouldn't occur @@ -107,8 +107,8 @@ impl<D: Db> ScannerFeed for Rpc<D> { } } - fn dust(coin: Coin) -> Amount { - assert_eq!(coin, Coin::Monero); + fn dust(coin: ExternalCoin) -> Amount { + assert_eq!(coin, ExternalCoin::Monero); // 0.01 XMR Amount(10_000_000_000) @@ -116,11 +116,11 @@ impl<D: Db> ScannerFeed for Rpc<D> { fn cost_to_aggregate( &self, - coin: Coin, + coin: ExternalCoin, _reference_block: &Self::Block, ) -> impl Send + Future<Output = Result<Amount, Self::EphemeralError>> { async move { - assert_eq!(coin, Coin::Bitcoin); + assert_eq!(coin, ExternalCoin::Monero); // TODO Ok(Amount(0)) } diff --git a/processor/monero/src/scheduler.rs b/processor/monero/src/scheduler.rs index 489db810..9043f888 100644 --- a/processor/monero/src/scheduler.rs +++ b/processor/monero/src/scheduler.rs @@ -9,7 +9,7 @@ use ciphersuite::{Ciphersuite, Ed25519}; use monero_wallet::rpc::{FeeRate, RpcError}; use serai_client::{ - primitives::{Coin, Amount}, + primitives::{ExternalCoin, Amount}, networks::monero::Address, }; @@ -106,7 +106,7 @@ async fn signable_transaction( .map(|payment| { (MoneroAddress::from(*payment.address()), { let balance = payment.balance(); - assert_eq!(balance.coin, Coin::Monero); + assert_eq!(balance.coin, ExternalCoin::Monero); balance.amount.0 }) }) diff --git a/processor/primitives/src/output.rs b/processor/primitives/src/output.rs index 76acde60..e45b7344 100644 --- a/processor/primitives/src/output.rs +++ b/processor/primitives/src/output.rs @@ -5,7 +5,7 @@ use group::GroupEncoding; use borsh::{BorshSerialize, BorshDeserialize}; -use serai_primitives::{ExternalAddress, Balance}; +use serai_primitives::{ExternalAddress, ExternalBalance}; use crate::Id; @@ -133,7 +133,7 @@ pub trait ReceivedOutput<K: GroupEncoding, A: Address>: fn presumed_origin(&self) -> Option<A>; ///
The balance associated with this output. - fn balance(&self) -> Balance; + fn balance(&self) -> ExternalBalance; /// The arbitrary data (presumably an InInstruction) associated with this output. fn data(&self) -> &[u8]; diff --git a/processor/primitives/src/payment.rs b/processor/primitives/src/payment.rs index 59b10f7f..b892b2b4 100644 --- a/processor/primitives/src/payment.rs +++ b/processor/primitives/src/payment.rs @@ -3,7 +3,7 @@ use std::io; use scale::{Encode, Decode, IoReader}; use borsh::{BorshSerialize, BorshDeserialize}; -use serai_primitives::Balance; +use serai_primitives::ExternalBalance; use serai_coins_primitives::OutInstructionWithBalance; use crate::Address; @@ -12,7 +12,7 @@ use crate::Address; #[derive(Clone, BorshSerialize, BorshDeserialize)] pub struct Payment { address: A, - balance: Balance, + balance: ExternalBalance, } impl TryFrom for Payment { @@ -27,7 +27,7 @@ impl TryFrom for Payment { impl Payment { /// Create a new Payment. - pub fn new(address: A, balance: Balance) -> Self { + pub fn new(address: A, balance: ExternalBalance) -> Self { Payment { address, balance } } @@ -36,7 +36,7 @@ impl Payment { &self.address } /// The balance to transfer. - pub fn balance(&self) -> Balance { + pub fn balance(&self) -> ExternalBalance { self.balance } @@ -44,7 +44,7 @@ impl Payment { pub fn read(reader: &mut impl io::Read) -> io::Result { let address = A::deserialize_reader(reader)?; let reader = &mut IoReader(reader); - let balance = Balance::decode(reader).map_err(io::Error::other)?; + let balance = ExternalBalance::decode(reader).map_err(io::Error::other)?; Ok(Self { address, balance }) } /// Write the Payment. diff --git a/processor/scanner/src/batch/db.rs b/processor/scanner/src/batch/db.rs index 88ca2882..015b661b 100644 --- a/processor/scanner/src/batch/db.rs +++ b/processor/scanner/src/batch/db.rs @@ -7,7 +7,7 @@ use scale::{Encode, Decode, IoReader}; use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, create_db}; -use serai_primitives::Balance; +use serai_primitives::ExternalBalance; use serai_validator_sets_primitives::Session; use primitives::EncodableG; @@ -39,7 +39,7 @@ create_db!( pub(crate) struct ReturnInformation { pub(crate) address: AddressFor, - pub(crate) balance: Balance, + pub(crate) balance: ExternalBalance, } pub(crate) struct BatchDb(PhantomData); @@ -116,7 +116,7 @@ impl BatchDb { res.push((opt[0] == 1).then(|| { let address = AddressFor::::deserialize_reader(&mut buf).unwrap(); - let balance = Balance::decode(&mut IoReader(&mut buf)).unwrap(); + let balance = ExternalBalance::decode(&mut IoReader(&mut buf)).unwrap(); ReturnInformation { address, balance } })); } diff --git a/processor/scanner/src/lib.rs b/processor/scanner/src/lib.rs index 510af61b..d3e24183 100644 --- a/processor/scanner/src/lib.rs +++ b/processor/scanner/src/lib.rs @@ -10,7 +10,7 @@ use group::GroupEncoding; use borsh::{BorshSerialize, BorshDeserialize}; use serai_db::{Get, DbTxn, Db}; -use serai_primitives::{NetworkId, Coin, Amount}; +use serai_primitives::{ExternalNetworkId, ExternalCoin, Amount}; use serai_coins_primitives::OutInstructionWithBalance; use messages::substrate::ExecutedBatch; @@ -64,7 +64,7 @@ impl BlockExt for B { /// This defines the primitive types used, along with various getters necessary for indexing. pub trait ScannerFeed: 'static + Send + Sync + Clone { /// The ID of the network being scanned for. 
- const NETWORK: NetworkId; + const NETWORK: ExternalNetworkId; /// The amount of confirmations a block must have to be considered finalized. /// @@ -175,14 +175,14 @@ pub trait ScannerFeed: 'static + Send + Sync + Clone { /// /// This MUST be constant. Serai MUST NOT create internal outputs worth less than this. This /// SHOULD be a value worth handling at a human level. - fn dust(coin: Coin) -> Amount; + fn dust(coin: ExternalCoin) -> Amount; /// The cost to aggregate an input as of the specified block. /// /// This is defined as the transaction fee for a 2-input, 1-output transaction. fn cost_to_aggregate( &self, - coin: Coin, + coin: ExternalCoin, reference_block: &Self::Block, ) -> impl Send + Future>; } diff --git a/processor/scheduler/utxo/primitives/src/tree.rs b/processor/scheduler/utxo/primitives/src/tree.rs index d5b47309..565706a3 100644 --- a/processor/scheduler/utxo/primitives/src/tree.rs +++ b/processor/scheduler/utxo/primitives/src/tree.rs @@ -1,6 +1,6 @@ use borsh::{BorshSerialize, BorshDeserialize}; -use serai_primitives::{Coin, Amount, Balance}; +use serai_primitives::{ExternalCoin, Amount, ExternalBalance}; use primitives::{Address, Payment}; use scanner::ScannerFeed; @@ -52,7 +52,7 @@ impl TreeTransaction { /// payments should be made. pub fn payments( &self, - coin: Coin, + coin: ExternalCoin, branch_address: &A, input_value: u64, ) -> Option>> { @@ -115,7 +115,10 @@ impl TreeTransaction { .filter_map(|(payment, amount)| { amount.map(|amount| { // The existing payment, with the new amount - Payment::new(payment.address().clone(), Balance { coin, amount: Amount(amount) }) + Payment::new( + payment.address().clone(), + ExternalBalance { coin, amount: Amount(amount) }, + ) }) }) .collect() @@ -126,7 +129,7 @@ impl TreeTransaction { .filter_map(|amount| { amount.map(|amount| { // A branch output with the new amount - Payment::new(branch_address.clone(), Balance { coin, amount: Amount(amount) }) + Payment::new(branch_address.clone(), ExternalBalance { coin, amount: Amount(amount) }) }) }) .collect() diff --git a/processor/scheduler/utxo/standard/src/db.rs b/processor/scheduler/utxo/standard/src/db.rs index 00761595..128c5df6 100644 --- a/processor/scheduler/utxo/standard/src/db.rs +++ b/processor/scheduler/utxo/standard/src/db.rs @@ -2,7 +2,7 @@ use core::marker::PhantomData; use group::GroupEncoding; -use serai_primitives::{Coin, Amount, Balance}; +use serai_primitives::{ExternalCoin, Amount, ExternalBalance}; use borsh::BorshDeserialize; use serai_db::{Get, DbTxn, create_db, db_channel}; @@ -13,31 +13,31 @@ use scanner::{ScannerFeed, KeyFor, AddressFor, OutputFor}; create_db! { UtxoScheduler { - OperatingCosts: (coin: Coin) -> Amount, - SerializedOutputs: (key: &[u8], coin: Coin) -> Vec, - SerializedQueuedPayments: (key: &[u8], coin: Coin) -> Vec, + OperatingCosts: (coin: ExternalCoin) -> Amount, + SerializedOutputs: (key: &[u8], coin: ExternalCoin) -> Vec, + SerializedQueuedPayments: (key: &[u8], coin: ExternalCoin) -> Vec, } } db_channel! 
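// (db_channel!, like create_db!, comes from serai-db; each declared channel gets send/try_recv methods, so PendingBranch below behaves as a DB-backed FIFO of serialized TreeTransactions keyed by (key, balance). See queue_pending_branch/take_pending_branch further down.)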
{ UtxoScheduler { - PendingBranch: (key: &[u8], balance: Balance) -> Vec, + PendingBranch: (key: &[u8], balance: ExternalBalance) -> Vec, } } pub(crate) struct Db(PhantomData); impl Db { - pub(crate) fn operating_costs(getter: &impl Get, coin: Coin) -> Amount { + pub(crate) fn operating_costs(getter: &impl Get, coin: ExternalCoin) -> Amount { OperatingCosts::get(getter, coin).unwrap_or(Amount(0)) } - pub(crate) fn set_operating_costs(txn: &mut impl DbTxn, coin: Coin, amount: Amount) { + pub(crate) fn set_operating_costs(txn: &mut impl DbTxn, coin: ExternalCoin, amount: Amount) { OperatingCosts::set(txn, coin, &amount) } pub(crate) fn outputs( getter: &impl Get, key: KeyFor, - coin: Coin, + coin: ExternalCoin, ) -> Option>> { let buf = SerializedOutputs::get(getter, key.to_bytes().as_ref(), coin)?; let mut buf = buf.as_slice(); @@ -51,7 +51,7 @@ impl Db { pub(crate) fn set_outputs( txn: &mut impl DbTxn, key: KeyFor, - coin: Coin, + coin: ExternalCoin, outputs: &[OutputFor], ) { let mut buf = Vec::with_capacity(outputs.len() * 128); @@ -60,14 +60,14 @@ impl Db { } SerializedOutputs::set(txn, key.to_bytes().as_ref(), coin, &buf); } - pub(crate) fn del_outputs(txn: &mut impl DbTxn, key: KeyFor, coin: Coin) { + pub(crate) fn del_outputs(txn: &mut impl DbTxn, key: KeyFor, coin: ExternalCoin) { SerializedOutputs::del(txn, key.to_bytes().as_ref(), coin); } pub(crate) fn queued_payments( getter: &impl Get, key: KeyFor, - coin: Coin, + coin: ExternalCoin, ) -> Option>>> { let buf = SerializedQueuedPayments::get(getter, key.to_bytes().as_ref(), coin)?; let mut buf = buf.as_slice(); @@ -81,7 +81,7 @@ impl Db { pub(crate) fn set_queued_payments( txn: &mut impl DbTxn, key: KeyFor, - coin: Coin, + coin: ExternalCoin, queued: &[Payment>], ) { let mut buf = Vec::with_capacity(queued.len() * 128); @@ -90,14 +90,14 @@ impl Db { } SerializedQueuedPayments::set(txn, key.to_bytes().as_ref(), coin, &buf); } - pub(crate) fn del_queued_payments(txn: &mut impl DbTxn, key: KeyFor, coin: Coin) { + pub(crate) fn del_queued_payments(txn: &mut impl DbTxn, key: KeyFor, coin: ExternalCoin) { SerializedQueuedPayments::del(txn, key.to_bytes().as_ref(), coin); } pub(crate) fn queue_pending_branch( txn: &mut impl DbTxn, key: KeyFor, - balance: Balance, + balance: ExternalBalance, child: &TreeTransaction>, ) { PendingBranch::send(txn, key.to_bytes().as_ref(), balance, &borsh::to_vec(child).unwrap()) @@ -105,7 +105,7 @@ impl Db { pub(crate) fn take_pending_branch( txn: &mut impl DbTxn, key: KeyFor, - balance: Balance, + balance: ExternalBalance, ) -> Option>> { PendingBranch::try_recv(txn, key.to_bytes().as_ref(), balance) .map(|bytes| TreeTransaction::>::deserialize(&mut bytes.as_slice()).unwrap()) diff --git a/processor/scheduler/utxo/standard/src/lib.rs b/processor/scheduler/utxo/standard/src/lib.rs index e826c300..cc2e2d35 100644 --- a/processor/scheduler/utxo/standard/src/lib.rs +++ b/processor/scheduler/utxo/standard/src/lib.rs @@ -7,7 +7,7 @@ use std::collections::HashMap; use group::GroupEncoding; -use serai_primitives::{Coin, Amount, Balance}; +use serai_primitives::{ExternalCoin, Amount, ExternalBalance}; use serai_db::DbTxn; @@ -42,7 +42,7 @@ impl> Scheduler { block: &BlockFor, key_for_change: KeyFor, key: KeyFor, - coin: Coin, + coin: ExternalCoin, ) -> Result>, >::EphemeralError> { let mut eventualities = vec![]; @@ -79,7 +79,7 @@ impl> Scheduler { txn: &mut impl DbTxn, operating_costs: &mut u64, key: KeyFor, - coin: Coin, + coin: ExternalCoin, value_of_outputs: u64, ) -> Vec>> { // Fetch all payments for this key 
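For orientation, a sketch of the primitive shapes this whole rename leans on, as inferred from the hunks in this patch (not a verbatim copy of serai-primitives):

    // `NetworkId`/`Coin`/`Balance` cover Serai itself plus external chains, while the
    // `External*` types are the external-only halves. Processor and scheduler code never
    // handles SRI, so moving it onto the `External*` types lets it drop `Coin::Serai`
    // match arms and `unreachable!()`s.
    pub enum NetworkId { Serai, External(ExternalNetworkId) }
    pub enum Coin { Serai, External(ExternalCoin) }
    pub struct Balance { pub coin: Coin, pub amount: Amount }
    pub struct ExternalBalance { pub coin: ExternalCoin, pub amount: Amount }
    // Conversions visible in these diffs: `ExternalBalance: Into<Balance>`,
    // `NetworkId: TryInto<ExternalNetworkId>`, and `ExternalCoin::network()`
    // returning an `ExternalNetworkId`.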
@@ -133,7 +133,7 @@ impl> Scheduler { fn queue_branches( txn: &mut impl DbTxn, key: KeyFor, - coin: Coin, + coin: ExternalCoin, effected_payments: Vec, tx: TreeTransaction>, ) { @@ -149,7 +149,7 @@ impl> Scheduler { children thanks to our sort. */ for (amount, child) in effected_payments.into_iter().zip(children) { - Db::::queue_pending_branch(txn, key, Balance { coin, amount }, &child); + Db::::queue_pending_branch(txn, key, ExternalBalance { coin, amount }, &child); } } } @@ -216,8 +216,6 @@ impl> Scheduler { let branch_address = P::branch_address(key); 'coin: for coin in S::NETWORK.coins() { - let coin = *coin; - // Perform any input aggregation we should eventualities .append(&mut self.aggregate_inputs(txn, block, key_for_change, key, coin).await?); @@ -308,7 +306,7 @@ impl> Scheduler { block: &BlockFor, from: KeyFor, to: KeyFor, - coin: Coin, + coin: ExternalCoin, ) -> Result<(), >::EphemeralError> { let from_bytes = from.to_bytes().as_ref().to_vec(); // Ensure our inputs are aggregated @@ -349,10 +347,10 @@ impl> SchedulerTrait for Schedul fn activate_key(txn: &mut impl DbTxn, key: KeyFor) { for coin in S::NETWORK.coins() { - assert!(Db::::outputs(txn, key, *coin).is_none()); - Db::::set_outputs(txn, key, *coin, &[]); - assert!(Db::::queued_payments(txn, key, *coin).is_none()); - Db::::set_queued_payments(txn, key, *coin, &[]); + assert!(Db::::outputs(txn, key, coin).is_none()); + Db::::set_outputs(txn, key, coin, &[]); + assert!(Db::::queued_payments(txn, key, coin).is_none()); + Db::::set_queued_payments(txn, key, coin, &[]); } } @@ -368,18 +366,18 @@ impl> SchedulerTrait for Schedul for coin in S::NETWORK.coins() { // Move the payments to the new key { - let still_queued = Db::::queued_payments(txn, retiring_key, *coin).unwrap(); - let mut new_queued = Db::::queued_payments(txn, new_key, *coin).unwrap(); + let still_queued = Db::::queued_payments(txn, retiring_key, coin).unwrap(); + let mut new_queued = Db::::queued_payments(txn, new_key, coin).unwrap(); let mut queued = still_queued; queued.append(&mut new_queued); - Db::::set_queued_payments(txn, retiring_key, *coin, &[]); - Db::::set_queued_payments(txn, new_key, *coin, &queued); + Db::::set_queued_payments(txn, retiring_key, coin, &[]); + Db::::set_queued_payments(txn, new_key, coin, &queued); } // Move the outputs to the new key - self.flush_outputs(txn, &mut eventualities, block, retiring_key, new_key, *coin).await?; + self.flush_outputs(txn, &mut eventualities, block, retiring_key, new_key, coin).await?; } Ok(eventualities) } @@ -387,10 +385,10 @@ impl> SchedulerTrait for Schedul fn retire_key(txn: &mut impl DbTxn, key: KeyFor) { for coin in S::NETWORK.coins() { - assert!(Db::::outputs(txn, key, *coin).unwrap().is_empty()); - Db::::del_outputs(txn, key, *coin); - assert!(Db::::queued_payments(txn, key, *coin).unwrap().is_empty()); - Db::::del_queued_payments(txn, key, *coin); + assert!(Db::::outputs(txn, key, coin).unwrap().is_empty()); + Db::::del_outputs(txn, key, coin); + assert!(Db::::queued_payments(txn, key, coin).unwrap().is_empty()); + Db::::del_queued_payments(txn, key, coin); } } @@ -463,7 +461,7 @@ impl> SchedulerTrait for Schedul block, active_keys[0].0, active_keys[1].0, - *coin, + coin, ) .await?; } @@ -552,10 +550,10 @@ impl> SchedulerTrait for Schedul // Queue the payments for this key for coin in S::NETWORK.coins() { - let mut queued_payments = Db::::queued_payments(txn, fulfillment_key, *coin).unwrap(); + let mut queued_payments = Db::::queued_payments(txn, fulfillment_key, coin).unwrap(); queued_payments 
- .extend(payments.iter().filter(|payment| payment.balance().coin == *coin).cloned()); - Db::::set_queued_payments(txn, fulfillment_key, *coin, &queued_payments); + .extend(payments.iter().filter(|payment| payment.balance().coin == coin).cloned()); + Db::::set_queued_payments(txn, fulfillment_key, coin, &queued_payments); } // Handle the queued payments diff --git a/processor/scheduler/utxo/transaction-chaining/src/db.rs b/processor/scheduler/utxo/transaction-chaining/src/db.rs index 11bcd78d..68558e6f 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/db.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/db.rs @@ -2,7 +2,7 @@ use core::marker::PhantomData; use group::GroupEncoding; -use serai_primitives::{Coin, Amount}; +use serai_primitives::{ExternalCoin, Amount}; use serai_db::{Get, DbTxn, create_db}; @@ -11,28 +11,28 @@ use scanner::{ScannerFeed, KeyFor, AddressFor, OutputFor}; create_db! { TransactionChainingScheduler { - OperatingCosts: (coin: Coin) -> Amount, - SerializedOutputs: (key: &[u8], coin: Coin) -> Vec, + OperatingCosts: (coin: ExternalCoin) -> Amount, + SerializedOutputs: (key: &[u8], coin: ExternalCoin) -> Vec, AlreadyAccumulatedOutput: (id: &[u8]) -> (), // We should be immediately able to schedule the fulfillment of payments, yet this may not be // possible if we're in the middle of a multisig rotation (as our output set will be split) - SerializedQueuedPayments: (key: &[u8], coin: Coin) -> Vec, + SerializedQueuedPayments: (key: &[u8], coin: ExternalCoin) -> Vec, } } pub(crate) struct Db(PhantomData); impl Db { - pub(crate) fn operating_costs(getter: &impl Get, coin: Coin) -> Amount { + pub(crate) fn operating_costs(getter: &impl Get, coin: ExternalCoin) -> Amount { OperatingCosts::get(getter, coin).unwrap_or(Amount(0)) } - pub(crate) fn set_operating_costs(txn: &mut impl DbTxn, coin: Coin, amount: Amount) { + pub(crate) fn set_operating_costs(txn: &mut impl DbTxn, coin: ExternalCoin, amount: Amount) { OperatingCosts::set(txn, coin, &amount) } pub(crate) fn outputs( getter: &impl Get, key: KeyFor, - coin: Coin, + coin: ExternalCoin, ) -> Option>> { let buf = SerializedOutputs::get(getter, key.to_bytes().as_ref(), coin)?; let mut buf = buf.as_slice(); @@ -46,7 +46,7 @@ impl Db { pub(crate) fn set_outputs( txn: &mut impl DbTxn, key: KeyFor, - coin: Coin, + coin: ExternalCoin, outputs: &[OutputFor], ) { let mut buf = Vec::with_capacity(outputs.len() * 128); @@ -55,7 +55,7 @@ impl Db { } SerializedOutputs::set(txn, key.to_bytes().as_ref(), coin, &buf); } - pub(crate) fn del_outputs(txn: &mut impl DbTxn, key: KeyFor, coin: Coin) { + pub(crate) fn del_outputs(txn: &mut impl DbTxn, key: KeyFor, coin: ExternalCoin) { SerializedOutputs::del(txn, key.to_bytes().as_ref(), coin); } @@ -75,7 +75,7 @@ impl Db { pub(crate) fn queued_payments( getter: &impl Get, key: KeyFor, - coin: Coin, + coin: ExternalCoin, ) -> Option>>> { let buf = SerializedQueuedPayments::get(getter, key.to_bytes().as_ref(), coin)?; let mut buf = buf.as_slice(); @@ -89,7 +89,7 @@ impl Db { pub(crate) fn set_queued_payments( txn: &mut impl DbTxn, key: KeyFor, - coin: Coin, + coin: ExternalCoin, queued: &[Payment>], ) { let mut buf = Vec::with_capacity(queued.len() * 128); @@ -98,7 +98,7 @@ impl Db { } SerializedQueuedPayments::set(txn, key.to_bytes().as_ref(), coin, &buf); } - pub(crate) fn del_queued_payments(txn: &mut impl DbTxn, key: KeyFor, coin: Coin) { + pub(crate) fn del_queued_payments(txn: &mut impl DbTxn, key: KeyFor, coin: ExternalCoin) { SerializedQueuedPayments::del(txn, 
key.to_bytes().as_ref(), coin); } } diff --git a/processor/scheduler/utxo/transaction-chaining/src/lib.rs b/processor/scheduler/utxo/transaction-chaining/src/lib.rs index bb39dcd3..5f7275ce 100644 --- a/processor/scheduler/utxo/transaction-chaining/src/lib.rs +++ b/processor/scheduler/utxo/transaction-chaining/src/lib.rs @@ -7,7 +7,7 @@ use std::collections::HashMap; use group::GroupEncoding; -use serai_primitives::{Coin, Amount}; +use serai_primitives::{ExternalCoin, Amount}; use serai_db::DbTxn; @@ -72,7 +72,7 @@ impl>> Sched block: &BlockFor, key_for_change: KeyFor, key: KeyFor, - coin: Coin, + coin: ExternalCoin, ) -> Result>, >::EphemeralError> { let mut eventualities = vec![]; @@ -112,7 +112,7 @@ impl>> Sched txn: &mut impl DbTxn, operating_costs: &mut u64, key: KeyFor, - coin: Coin, + coin: ExternalCoin, value_of_outputs: u64, ) -> Vec>> { // Fetch all payments for this key @@ -184,8 +184,6 @@ impl>> Sched let branch_address = P::branch_address(key); 'coin: for coin in S::NETWORK.coins() { - let coin = *coin; - // Perform any input aggregation we should eventualities .append(&mut self.aggregate_inputs(txn, block, key_for_change, key, coin).await?); @@ -360,7 +358,7 @@ impl>> Sched block: &BlockFor, from: KeyFor, to: KeyFor, - coin: Coin, + coin: ExternalCoin, ) -> Result<(), >::EphemeralError> { let from_bytes = from.to_bytes().as_ref().to_vec(); // Ensure our inputs are aggregated @@ -404,10 +402,10 @@ impl>> Sched fn activate_key(txn: &mut impl DbTxn, key: KeyFor) { for coin in S::NETWORK.coins() { - assert!(Db::::outputs(txn, key, *coin).is_none()); - Db::::set_outputs(txn, key, *coin, &[]); - assert!(Db::::queued_payments(txn, key, *coin).is_none()); - Db::::set_queued_payments(txn, key, *coin, &[]); + assert!(Db::::outputs(txn, key, coin).is_none()); + Db::::set_outputs(txn, key, coin, &[]); + assert!(Db::::queued_payments(txn, key, coin).is_none()); + Db::::set_queued_payments(txn, key, coin, &[]); } } @@ -423,18 +421,18 @@ impl>> Sched for coin in S::NETWORK.coins() { // Move the payments to the new key { - let still_queued = Db::::queued_payments(txn, retiring_key, *coin).unwrap(); - let mut new_queued = Db::::queued_payments(txn, new_key, *coin).unwrap(); + let still_queued = Db::::queued_payments(txn, retiring_key, coin).unwrap(); + let mut new_queued = Db::::queued_payments(txn, new_key, coin).unwrap(); let mut queued = still_queued; queued.append(&mut new_queued); - Db::::set_queued_payments(txn, retiring_key, *coin, &[]); - Db::::set_queued_payments(txn, new_key, *coin, &queued); + Db::::set_queued_payments(txn, retiring_key, coin, &[]); + Db::::set_queued_payments(txn, new_key, coin, &queued); } // Move the outputs to the new key - self.flush_outputs(txn, &mut eventualities, block, retiring_key, new_key, *coin).await?; + self.flush_outputs(txn, &mut eventualities, block, retiring_key, new_key, coin).await?; } Ok(eventualities) } @@ -442,10 +440,10 @@ impl>> Sched fn retire_key(txn: &mut impl DbTxn, key: KeyFor) { for coin in S::NETWORK.coins() { - assert!(Db::::outputs(txn, key, *coin).unwrap().is_empty()); - Db::::del_outputs(txn, key, *coin); - assert!(Db::::queued_payments(txn, key, *coin).unwrap().is_empty()); - Db::::del_queued_payments(txn, key, *coin); + assert!(Db::::outputs(txn, key, coin).unwrap().is_empty()); + Db::::del_outputs(txn, key, coin); + assert!(Db::::queued_payments(txn, key, coin).unwrap().is_empty()); + Db::::del_queued_payments(txn, key, coin); } } @@ -481,7 +479,7 @@ impl>> Sched block, active_keys[0].0, active_keys[1].0, - *coin, + coin, ) 
.await?; } @@ -570,10 +568,10 @@ impl>> Sched // Queue the payments for this key for coin in S::NETWORK.coins() { - let mut queued_payments = Db::::queued_payments(txn, fulfillment_key, *coin).unwrap(); + let mut queued_payments = Db::::queued_payments(txn, fulfillment_key, coin).unwrap(); queued_payments - .extend(payments.iter().filter(|payment| payment.balance().coin == *coin).cloned()); - Db::::set_queued_payments(txn, fulfillment_key, *coin, &queued_payments); + .extend(payments.iter().filter(|payment| payment.balance().coin == coin).cloned()); + Db::::set_queued_payments(txn, fulfillment_key, coin, &queued_payments); } // Handle the queued payments diff --git a/substrate/client/src/serai/in_instructions.rs b/substrate/client/src/serai/in_instructions.rs index 7b0629f0..db9a4f78 100644 --- a/substrate/client/src/serai/in_instructions.rs +++ b/substrate/client/src/serai/in_instructions.rs @@ -1,10 +1,7 @@ pub use serai_abi::in_instructions::primitives; use primitives::SignedBatch; -use crate::{ - primitives::{BlockHash, ExternalNetworkId}, - Transaction, SeraiError, Serai, TemporalSerai, -}; +use crate::{primitives::ExternalNetworkId, Transaction, SeraiError, Serai, TemporalSerai}; pub type InInstructionsEvent = serai_abi::in_instructions::Event; @@ -12,6 +9,7 @@ const PALLET: &str = "InInstructions"; #[derive(Clone, Copy)] pub struct SeraiInInstructions<'a>(pub(crate) &'a TemporalSerai<'a>); +impl SeraiInInstructions<'_> { pub async fn last_batch_for_network( &self, network: ExternalNetworkId, diff --git a/substrate/client/src/serai/mod.rs b/substrate/client/src/serai/mod.rs index fda876b6..532c9cf2 100644 --- a/substrate/client/src/serai/mod.rs +++ b/substrate/client/src/serai/mod.rs @@ -16,7 +16,7 @@ pub use abi::{primitives, Transaction}; use abi::*; pub use primitives::{SeraiAddress, Signature, Amount}; -use primitives::{Header, NetworkId}; +use primitives::{Header, ExternalNetworkId}; pub mod coins; pub use coins::SeraiCoins; @@ -313,7 +313,7 @@ impl Serai { /// Return the P2P Multiaddrs for the validators of the specified network. 
pub async fn p2p_validators( &self, - network: NetworkId, + network: ExternalNetworkId, ) -> Result, SeraiError> { self.call("p2p_validators", network).await } diff --git a/substrate/client/src/serai/validator_sets.rs b/substrate/client/src/serai/validator_sets.rs index b475730d..3eddc943 100644 --- a/substrate/client/src/serai/validator_sets.rs +++ b/substrate/client/src/serai/validator_sets.rs @@ -5,10 +5,10 @@ use sp_runtime::BoundedVec; use serai_abi::{primitives::Amount, validator_sets::primitives::ExternalValidatorSet}; pub use serai_abi::validator_sets::primitives; -use primitives::{MAX_KEY_LEN, Session, ValidatorSet, KeyPair, SlashReport}; +use primitives::{MAX_KEY_LEN, Session, KeyPair, SlashReport}; use crate::{ - primitives::{NetworkId, ExternalNetworkId, EmbeddedEllipticCurve, SeraiAddress}, + primitives::{NetworkId, ExternalNetworkId, EmbeddedEllipticCurve}, Transaction, Serai, TemporalSerai, SeraiError, }; @@ -203,7 +203,7 @@ impl SeraiValidatorSets<'_> { } pub fn set_keys( - network: NetworkId, + network: ExternalNetworkId, key_pair: KeyPair, signature_participants: bitvec::vec::BitVec, signature: Signature, @@ -237,7 +237,7 @@ impl SeraiValidatorSets<'_> { } pub fn report_slashes( - network: NetworkId, + network: ExternalNetworkId, slashes: SlashReport, signature: Signature, ) -> Transaction { diff --git a/substrate/client/tests/batch.rs b/substrate/client/tests/batch.rs index 0edfaf2f..2d32462f 100644 --- a/substrate/client/tests/batch.rs +++ b/substrate/client/tests/batch.rs @@ -8,7 +8,7 @@ use blake2::{ use scale::Encode; use serai_client::{ - primitives::{BlockHash, NetworkId, ExternalCoin, Amount, ExternalBalance, SeraiAddress}, + primitives::{BlockHash, ExternalCoin, Amount, ExternalBalance, SeraiAddress}, coins::CoinsEvent, validator_sets::primitives::Session, in_instructions::{ diff --git a/substrate/client/tests/burn.rs b/substrate/client/tests/burn.rs index cba8e480..8351781e 100644 --- a/substrate/client/tests/burn.rs +++ b/substrate/client/tests/burn.rs @@ -11,7 +11,7 @@ use sp_core::Pair; use serai_client::{ primitives::{ - BlockHash, ExternalNetworkId, ExternalCoin, Amount, ExternalBalance, SeraiAddress, ExternalAddress, + BlockHash, ExternalCoin, Amount, ExternalBalance, SeraiAddress, ExternalAddress, insecure_pair_from_name, }, coins::{ diff --git a/substrate/client/tests/common/genesis_liquidity.rs b/substrate/client/tests/common/genesis_liquidity.rs index c602b2e6..cba6bdea 100644 --- a/substrate/client/tests/common/genesis_liquidity.rs +++ b/substrate/client/tests/common/genesis_liquidity.rs @@ -11,10 +11,11 @@ use sp_core::{sr25519::Signature, Pair as PairTrait}; use serai_abi::{ primitives::{ - BlockHash, ExternalNetworkId, ExternalCoin, Amount, ExternalBalance, SeraiAddress, insecure_pair_from_name, + EXTERNAL_COINS, BlockHash, ExternalNetworkId, NetworkId, ExternalCoin, Amount, ExternalBalance, + SeraiAddress, insecure_pair_from_name, }, - validator_sets::primitives::{musig_context, Session, ValidatorSet}, - genesis_liquidity::primitives::{oraclize_values_message, Values}, + validator_sets::primitives::{Session, ValidatorSet, musig_context}, + genesis_liquidity::primitives::{Values, oraclize_values_message}, in_instructions::primitives::{InInstruction, InInstructionWithBalance, Batch}, }; diff --git a/substrate/client/tests/common/in_instructions.rs b/substrate/client/tests/common/in_instructions.rs index 115b75d0..87e26c5d 100644 --- a/substrate/client/tests/common/in_instructions.rs +++ b/substrate/client/tests/common/in_instructions.rs @@ -9,7 +9,7 
@@ use scale::Encode; use sp_core::Pair; use serai_client::{ - primitives::{BlockHash, NetworkId, ExternalBalance, SeraiAddress, insecure_pair_from_name}, + primitives::{BlockHash, ExternalBalance, SeraiAddress, insecure_pair_from_name}, validator_sets::primitives::{ExternalValidatorSet, KeyPair}, in_instructions::{ primitives::{Batch, SignedBatch, batch_message, InInstruction, InInstructionWithBalance}, diff --git a/substrate/client/tests/common/validator_sets.rs b/substrate/client/tests/common/validator_sets.rs index 4771b5ed..008cb3fc 100644 --- a/substrate/client/tests/common/validator_sets.rs +++ b/substrate/client/tests/common/validator_sets.rs @@ -16,12 +16,12 @@ use frost::dkg::musig::musig; use schnorrkel::Schnorrkel; use serai_client::{ - primitives::EmbeddedEllipticCurve, + primitives::{EmbeddedEllipticCurve, Amount}, validator_sets::{ primitives::{MAX_KEY_LEN, ExternalValidatorSet, KeyPair, musig_context, set_keys_message}, ValidatorSetsEvent, }, - Amount, Serai, SeraiValidatorSets, + SeraiValidatorSets, Serai, }; use crate::common::tx::publish_tx; diff --git a/substrate/client/tests/dex.rs b/substrate/client/tests/dex.rs index 06bf42f9..93422f5e 100644 --- a/substrate/client/tests/dex.rs +++ b/substrate/client/tests/dex.rs @@ -6,7 +6,7 @@ use serai_abi::in_instructions::primitives::DexCall; use serai_client::{ primitives::{ - BlockHash, NetworkId, Coin, Amount, Balance, SeraiAddress, ExternalAddress, + BlockHash, ExternalCoin, Coin, Amount, ExternalBalance, Balance, SeraiAddress, ExternalAddress, insecure_pair_from_name, }, in_instructions::primitives::{ diff --git a/substrate/client/tests/dht.rs b/substrate/client/tests/dht.rs index 0d27c91e..8b8a078b 100644 --- a/substrate/client/tests/dht.rs +++ b/substrate/client/tests/dht.rs @@ -44,7 +44,7 @@ async fn dht() { assert!(!Serai::new(serai_rpc.clone()) .await .unwrap() - .p2p_validators(ExternalNetworkId::Bitcoin.into()) + .p2p_validators(ExternalNetworkId::Bitcoin) .await .unwrap() .is_empty()); diff --git a/substrate/client/tests/emissions.rs b/substrate/client/tests/emissions.rs index 6c131567..7ee843cb 100644 --- a/substrate/client/tests/emissions.rs +++ b/substrate/client/tests/emissions.rs @@ -5,8 +5,8 @@ use serai_client::TemporalSerai; use serai_abi::{ primitives::{ - NETWORKS, COINS, TARGET_BLOCK_TIME, FAST_EPOCH_DURATION, FAST_EPOCH_INITIAL_PERIOD, BlockHash, - Coin, + EXTERNAL_NETWORKS, NETWORKS, TARGET_BLOCK_TIME, FAST_EPOCH_DURATION, FAST_EPOCH_INITIAL_PERIOD, + BlockHash, ExternalNetworkId, NetworkId, ExternalCoin, Amount, ExternalBalance, }, validator_sets::primitives::Session, emissions::primitives::{INITIAL_REWARD_PER_BLOCK, SECURE_BY}, @@ -38,17 +38,16 @@ async fn send_batches(serai: &Serai, ids: &mut HashMap) let mut block = BlockHash([0; 32]); OsRng.fill_bytes(&mut block.0); - provide_batch( - serai, - Batch { - network, - id: ids[&network], - external_network_block_hash: block, - instructions: vec![], - }, - ) - .await; - } + provide_batch( + serai, + Batch { + network, + id: ids[&network], + external_network_block_hash: block, + instructions: vec![], + }, + ) + .await; } } diff --git a/substrate/client/tests/validator_sets.rs b/substrate/client/tests/validator_sets.rs index 14a5552f..32f5d481 100644 --- a/substrate/client/tests/validator_sets.rs +++ b/substrate/client/tests/validator_sets.rs @@ -7,11 +7,11 @@ use sp_core::{ use serai_client::{ primitives::{ - FAST_EPOCH_DURATION, TARGET_BLOCK_TIME, NETWORKS, BlockHash, NetworkId, EmbeddedEllipticCurve, - insecure_pair_from_name, + FAST_EPOCH_DURATION, 
TARGET_BLOCK_TIME, NETWORKS, BlockHash, ExternalNetworkId, NetworkId, + EmbeddedEllipticCurve, Amount, insecure_pair_from_name, }, validator_sets::{ - primitives::{Session, ValidatorSet, ExternalValidatorSet, KeyPair}, + primitives::{Session, ExternalValidatorSet, ValidatorSet, KeyPair}, ValidatorSetsEvent, }, in_instructions::{ @@ -313,8 +313,12 @@ async fn validator_set_rotation() { // provide a batch to complete the handover and retire the previous set let mut block_hash = BlockHash([0; 32]); OsRng.fill_bytes(&mut block_hash.0); - let batch = - Batch { network: network.try_into().unwrap(), id: 0, external_network_block_hash: block_hash, instructions: vec![] }; + let batch = Batch { + network: network.try_into().unwrap(), + id: 0, + external_network_block_hash: block_hash, + instructions: vec![], + }; publish_tx( &serai, &SeraiInInstructions::execute_batch(SignedBatch { diff --git a/substrate/coins/pallet/src/tests.rs b/substrate/coins/pallet/src/tests.rs index a6d16afd..52b81d37 100644 --- a/substrate/coins/pallet/src/tests.rs +++ b/substrate/coins/pallet/src/tests.rs @@ -58,7 +58,7 @@ fn burn_with_instruction() { // we shouldn't be able to burn more than what we have let mut instruction = OutInstructionWithBalance { - instruction: OutInstruction { address: ExternalAddress::new(vec![]).unwrap(), data: None }, + instruction: OutInstruction { address: ExternalAddress::new(vec![]).unwrap() }, balance: ExternalBalance { coin: coin.try_into().unwrap(), amount: Amount(balance.amount.0 + 1), diff --git a/substrate/in-instructions/pallet/src/lib.rs b/substrate/in-instructions/pallet/src/lib.rs index 07fb2294..23d8a875 100644 --- a/substrate/in-instructions/pallet/src/lib.rs +++ b/substrate/in-instructions/pallet/src/lib.rs @@ -67,7 +67,7 @@ pub mod pallet { in_instruction_results: bitvec::vec::BitVec, }, Halt { - network: NetworkId, + network: ExternalNetworkId, }, } @@ -103,7 +103,7 @@ pub mod pallet { fn execute(instruction: &InInstructionWithBalance) -> Result<(), DispatchError> { match &instruction.instruction { InInstruction::Transfer(address) => { - Coins::::mint(address.into(), instruction.balance.into())?; + Coins::::mint((*address).into(), instruction.balance.into())?; } InInstruction::Dex(call) => { // This will only be initiated by external chain transactions. That is why we only need @@ -222,11 +222,11 @@ pub mod pallet { } InInstruction::GenesisLiquidity(address) => { Coins::::mint(GENESIS_LIQUIDITY_ACCOUNT.into(), instruction.balance.into())?; - GenesisLiq::::add_coin_liquidity(address.into(), instruction.balance)?; + GenesisLiq::::add_coin_liquidity((*address).into(), instruction.balance)?; } InInstruction::SwapToStakedSRI(address, network) => { Coins::::mint(POL_ACCOUNT.into(), instruction.balance.into())?; - Emissions::::swap_to_staked_sri(address.into(), network, instruction.balance)?; + Emissions::::swap_to_staked_sri((*address).into(), *network, instruction.balance)?; } } Ok(()) @@ -319,7 +319,10 @@ pub mod pallet { // key is publishing `Batch`s. This should only happen once the current key has verified all // `Batch`s published by the prior key, meaning they are accepting the hand-over. 
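// In other words: the first `Batch` which only validates under the current key, and not the prior one, marks the current set's acceptance of the hand-over, at which point the prior session is retired below.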
if prior.is_some() && (!valid_by_prior) { - ValidatorSets::::retire_set(ValidatorSet { network: network.into(), session: prior_session }); + ValidatorSets::::retire_set(ValidatorSet { + network: network.into(), + session: prior_session, + }); } // check that this validator set isn't publishing a batch more than once per block diff --git a/substrate/in-instructions/primitives/src/lib.rs b/substrate/in-instructions/primitives/src/lib.rs index c3aaad1b..5c74bf55 100644 --- a/substrate/in-instructions/primitives/src/lib.rs +++ b/substrate/in-instructions/primitives/src/lib.rs @@ -20,7 +20,7 @@ use sp_std::vec::Vec; use sp_runtime::RuntimeDebug; #[rustfmt::skip] -use serai_primitives::{BlockHash, NetworkId, Balance, SeraiAddress, ExternalAddress, system_address}; +use serai_primitives::{BlockHash, ExternalNetworkId, NetworkId, ExternalBalance, Balance, SeraiAddress, ExternalAddress, system_address}; mod shorthand; pub use shorthand::*; diff --git a/substrate/node/src/chain_spec.rs b/substrate/node/src/chain_spec.rs index 7f153f1a..ebc47fcb 100644 --- a/substrate/node/src/chain_spec.rs +++ b/substrate/node/src/chain_spec.rs @@ -88,7 +88,10 @@ fn devnet_genesis( networks: key_shares.clone(), participants: validators.clone(), }, - emissions: EmissionsConfig { networks: key_shares, participants: validators.clone() }, + emissions: EmissionsConfig { + networks: key_shares, + participants: validators.iter().map(|(validator, _)| *validator).collect(), + }, signals: SignalsConfig::default(), babe: BabeConfig { authorities: validators.iter().map(|validator| (validator.0.into(), 1)).collect(), diff --git a/substrate/primitives/src/networks.rs b/substrate/primitives/src/networks.rs index d8c4047b..ace34127 100644 --- a/substrate/primitives/src/networks.rs +++ b/substrate/primitives/src/networks.rs @@ -26,9 +26,7 @@ pub enum EmbeddedEllipticCurve { } /// The type used to identify external networks. -#[derive( - Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode, PartialOrd, Ord, MaxEncodedLen, TypeInfo, -)] +#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Debug, TypeInfo)] #[cfg_attr(feature = "std", derive(Zeroize))] #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))] pub enum ExternalNetworkId { @@ -162,6 +160,17 @@ impl ExternalNetworkId { } impl NetworkId { + /// The embedded elliptic curve actively used for this network. + /// + /// This is guaranteed to return `[]`, `[Embedwards25519]`, or + /// `[Embedwards25519, *network specific curve*]`. 
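+  /// For example (an assumed concrete mapping, going by the curves referenced in this PR): Serai itself yields `[]`, an Ed25519 network such as Monero yields `[Embedwards25519]`, and a secp256k1 network such as Bitcoin or Ethereum yields `[Embedwards25519, Secq256k1]`.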
+ pub fn embedded_elliptic_curves(&self) -> &'static [EmbeddedEllipticCurve] { + match self { + Self::Serai => &[], + Self::External(network) => network.embedded_elliptic_curves(), + } + } + pub fn coins(&self) -> Vec { match self { Self::Serai => vec![Coin::Serai], diff --git a/substrate/validator-sets/pallet/src/lib.rs b/substrate/validator-sets/pallet/src/lib.rs index bc1452e0..755e980a 100644 --- a/substrate/validator-sets/pallet/src/lib.rs +++ b/substrate/validator-sets/pallet/src/lib.rs @@ -1154,8 +1154,8 @@ pub mod pallet { // session on this assumption assert_eq!(Pallet::::latest_decided_session(network.into()), Some(current_session)); - let participants = - Participants::::get(network).expect("session existed without participants"); + let participants = Participants::::get(NetworkId::from(network)) + .expect("session existed without participants"); // Check the bitvec is of the proper length if participants.len() != signature_participants.len() { @@ -1189,7 +1189,7 @@ pub mod pallet { // Verify the signature with the MuSig key of the signers // We theoretically don't need set_keys_message to bind to removed_participants, as the // key we're signing with effectively already does so, yet there's no reason not to - if !musig_key(set, &signers).verify(&set_keys_message(&set, key_pair), signature) { + if !musig_key(set.into(), &signers).verify(&set_keys_message(&set, key_pair), signature) { Err(InvalidTransaction::BadProof)?; } @@ -1207,8 +1207,10 @@ pub mod pallet { }; // There must have been a previous session is PendingSlashReport is populated - let set = - ExternalValidatorSet { network, session: Session(Self::session(network).unwrap().0 - 1) }; + let set = ExternalValidatorSet { + network, + session: Session(Self::session(NetworkId::from(network)).unwrap().0 - 1), + }; if !key.verify(&slashes.report_slashes_message(), signature) { Err(InvalidTransaction::BadProof)?; } diff --git a/substrate/validator-sets/primitives/src/lib.rs b/substrate/validator-sets/primitives/src/lib.rs index 21b628b9..04e3b548 100644 --- a/substrate/validator-sets/primitives/src/lib.rs +++ b/substrate/validator-sets/primitives/src/lib.rs @@ -140,7 +140,7 @@ pub fn musig_key(set: ValidatorSet, set_keys: &[Public]) -> Public { } /// The message for the `set_keys` signature. 
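 /// (Per the body below, this SCALE-encodes the domain tag `b"ValidatorSets-set_keys"` together with the set and the key pair.)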
-pub fn set_keys_message(set: &ValidatorSet, key_pair: &KeyPair) -> Vec { +pub fn set_keys_message(set: &ExternalValidatorSet, key_pair: &KeyPair) -> Vec { (b"ValidatorSets-set_keys", set, key_pair).encode() } From 9c33a711d7035eb70211a5d7d5d6b7442d4dc104 Mon Sep 17 00:00:00 2001 From: akildemir <34187742+akildemir@users.noreply.github.com> Date: Thu, 30 Jan 2025 12:13:21 +0300 Subject: [PATCH 366/368] add in instructions pallet tests (#608) * add pallet tests * set mock runtime AllowMint to correct type --- Cargo.lock | 4 + substrate/emissions/pallet/src/lib.rs | 17 +- substrate/genesis-liquidity/pallet/src/lib.rs | 2 + substrate/in-instructions/pallet/Cargo.toml | 27 +- substrate/in-instructions/pallet/src/lib.rs | 6 + substrate/in-instructions/pallet/src/mock.rs | 201 +++++++ substrate/in-instructions/pallet/src/tests.rs | 500 ++++++++++++++++++ 7 files changed, 749 insertions(+), 8 deletions(-) create mode 100644 substrate/in-instructions/pallet/src/mock.rs create mode 100644 substrate/in-instructions/pallet/src/tests.rs diff --git a/Cargo.lock b/Cargo.lock index 87d49f61..7b438c66 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -9276,10 +9276,14 @@ dependencies = [ "bitvec", "frame-support", "frame-system", + "pallet-babe", + "pallet-grandpa", + "pallet-timestamp", "parity-scale-codec", "scale-info", "serai-coins-pallet", "serai-dex-pallet", + "serai-economic-security-pallet", "serai-emissions-pallet", "serai-genesis-liquidity-pallet", "serai-in-instructions-primitives", diff --git a/substrate/emissions/pallet/src/lib.rs b/substrate/emissions/pallet/src/lib.rs index 7eb1c594..4241d186 100644 --- a/substrate/emissions/pallet/src/lib.rs +++ b/substrate/emissions/pallet/src/lib.rs @@ -12,7 +12,7 @@ pub mod pallet { use frame_system::{pallet_prelude::*, RawOrigin}; use frame_support::{pallet_prelude::*, sp_runtime::SaturatedConversion}; - use sp_std::{vec, vec::Vec, ops::Mul, collections::btree_map::BTreeMap}; + use sp_std::{vec, vec::Vec, collections::btree_map::BTreeMap}; use coins_pallet::{Config as CoinsConfig, Pallet as Coins}; use dex_pallet::{Config as DexConfig, Pallet as Dex}; @@ -59,6 +59,7 @@ pub mod pallet { NetworkHasEconomicSecurity, NoValueForCoin, InsufficientAllocation, + AmountOverflow, } #[pallet::event] @@ -412,9 +413,17 @@ pub mod pallet { let last_block = >::block_number() - 1u32.into(); let value = Dex::::spot_price_for_block(last_block, balance.coin) .ok_or(Error::::NoValueForCoin)?; - // TODO: may panic? It might be best for this math ops to return the result as is instead of - // doing an unwrap so that it can be properly dealt with. - let sri_amount = balance.amount.mul(value); + + let sri_amount = Amount( + u64::try_from( + u128::from(balance.amount.0) + .checked_mul(u128::from(value.0)) + .ok_or(Error::::AmountOverflow)? + .checked_div(u128::from(10u64.pow(balance.coin.decimals()))) + .ok_or(Error::::AmountOverflow)?, + ) + .map_err(|_| Error::::AmountOverflow)?, + ); // Mint Coins::::mint(to, Balance { coin: Coin::Serai, amount: sri_amount })?; diff --git a/substrate/genesis-liquidity/pallet/src/lib.rs b/substrate/genesis-liquidity/pallet/src/lib.rs index 3a78e493..2fd24589 100644 --- a/substrate/genesis-liquidity/pallet/src/lib.rs +++ b/substrate/genesis-liquidity/pallet/src/lib.rs @@ -64,6 +64,7 @@ pub mod pallet { /// Keeps shares and the amount of coins per account. 
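 /// (Keyed by external coin and account; each entry carries `coins` and `shares`, which the `liquidity` getter added below exposes and the new in-instructions tests read back.)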
#[pallet::storage] + #[pallet::getter(fn liquidity)] pub(crate) type Liquidity = StorageDoubleMap< _, Identity, @@ -76,6 +77,7 @@ pub mod pallet { /// Keeps the total shares and the total amount of coins per coin. #[pallet::storage] + #[pallet::getter(fn supply)] pub(crate) type Supply = StorageMap<_, Identity, ExternalCoin, LiquidityAmount, OptionQuery>; diff --git a/substrate/in-instructions/pallet/Cargo.toml b/substrate/in-instructions/pallet/Cargo.toml index 54cacf8a..c288e9fe 100644 --- a/substrate/in-instructions/pallet/Cargo.toml +++ b/substrate/in-instructions/pallet/Cargo.toml @@ -42,6 +42,14 @@ validator-sets-pallet = { package = "serai-validator-sets-pallet", path = "../.. genesis-liquidity-pallet = { package = "serai-genesis-liquidity-pallet", path = "../../genesis-liquidity/pallet", default-features = false } emissions-pallet = { package = "serai-emissions-pallet", path = "../../emissions/pallet", default-features = false } + +[dev-dependencies] +pallet-babe = { git = "https://github.com/serai-dex/substrate", default-features = false } +pallet-grandpa = { git = "https://github.com/serai-dex/substrate", default-features = false } +pallet-timestamp = { git = "https://github.com/serai-dex/substrate", default-features = false } + +economic-security-pallet = { package = "serai-economic-security-pallet", path = "../../economic-security/pallet", default-features = false } + [features] std = [ "scale/std", @@ -64,8 +72,19 @@ std = [ "validator-sets-pallet/std", "genesis-liquidity-pallet/std", "emissions-pallet/std", -] -default = ["std"] -# TODO -try-runtime = [] + "economic-security-pallet/std", + + "pallet-babe/std", + "pallet-grandpa/std", + "pallet-timestamp/std", +] + +try-runtime = [ + "frame-system/try-runtime", + "frame-support/try-runtime", + + "sp-runtime/try-runtime", +] + +default = ["std"] diff --git a/substrate/in-instructions/pallet/src/lib.rs b/substrate/in-instructions/pallet/src/lib.rs index 23d8a875..7580056d 100644 --- a/substrate/in-instructions/pallet/src/lib.rs +++ b/substrate/in-instructions/pallet/src/lib.rs @@ -9,6 +9,12 @@ use serai_primitives::*; pub use in_instructions_primitives as primitives; use primitives::*; +#[cfg(test)] +mod mock; + +#[cfg(test)] +mod tests; + // TODO: Investigate why Substrate generates these #[allow( unreachable_patterns, diff --git a/substrate/in-instructions/pallet/src/mock.rs b/substrate/in-instructions/pallet/src/mock.rs new file mode 100644 index 00000000..05417863 --- /dev/null +++ b/substrate/in-instructions/pallet/src/mock.rs @@ -0,0 +1,201 @@ +//! Test environment for InInstructions pallet. + +use super::*; + +use std::collections::HashMap; + +use frame_support::{ + construct_runtime, + traits::{ConstU16, ConstU32, ConstU64}, +}; + +use sp_core::{H256, Pair, sr25519::Public}; +use sp_runtime::{ + traits::{BlakeTwo256, IdentityLookup}, + BuildStorage, +}; + +use validator_sets::{primitives::MAX_KEY_SHARES_PER_SET, MembershipProof}; + +pub use crate as in_instructions; +pub use coins_pallet as coins; +pub use validator_sets_pallet as validator_sets; +pub use genesis_liquidity_pallet as genesis_liquidity; +pub use emissions_pallet as emissions; +pub use dex_pallet as dex; +pub use pallet_babe as babe; +pub use pallet_grandpa as grandpa; +pub use pallet_timestamp as timestamp; +pub use economic_security_pallet as economic_security; + +type Block = frame_system::mocking::MockBlock; +// Maximum number of authorities per session. 
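+// (Bounding BABE/GRANDPA authorities by MAX_KEY_SHARES_PER_SET keeps the mock consistent with the validator-sets pallet, which caps key shares per set at that constant.)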
+pub type MaxAuthorities = ConstU32<{ MAX_KEY_SHARES_PER_SET }>; + +pub const MEDIAN_PRICE_WINDOW_LENGTH: u16 = 10; + +construct_runtime!( + pub enum Test + { + System: frame_system, + Timestamp: timestamp, + Coins: coins, + LiquidityTokens: coins::::{Pallet, Call, Storage, Event}, + Emissions: emissions, + ValidatorSets: validator_sets, + GenesisLiquidity: genesis_liquidity, + EconomicSecurity: economic_security, + Dex: dex, + Babe: babe, + Grandpa: grandpa, + InInstructions: in_instructions, + } +); + +impl frame_system::Config for Test { + type BaseCallFilter = frame_support::traits::Everything; + type BlockWeights = (); + type BlockLength = (); + type RuntimeOrigin = RuntimeOrigin; + type RuntimeCall = RuntimeCall; + type Nonce = u64; + type Hash = H256; + type Hashing = BlakeTwo256; + type AccountId = Public; + type Lookup = IdentityLookup; + type Block = Block; + type RuntimeEvent = RuntimeEvent; + type BlockHashCount = ConstU64<250>; + type DbWeight = (); + type Version = (); + type PalletInfo = PalletInfo; + type AccountData = (); + type OnNewAccount = (); + type OnKilledAccount = (); + type SystemWeightInfo = (); + type SS58Prefix = (); + type OnSetCode = (); + type MaxConsumers = ConstU32<16>; +} + +impl timestamp::Config for Test { + type Moment = u64; + type OnTimestampSet = Babe; + type MinimumPeriod = ConstU64<{ (TARGET_BLOCK_TIME * 1000) / 2 }>; + type WeightInfo = (); +} + +impl babe::Config for Test { + type EpochDuration = ConstU64<{ FAST_EPOCH_DURATION }>; + + type ExpectedBlockTime = ConstU64<{ TARGET_BLOCK_TIME * 1000 }>; + type EpochChangeTrigger = babe::ExternalTrigger; + type DisabledValidators = ValidatorSets; + + type WeightInfo = (); + type MaxAuthorities = MaxAuthorities; + + type KeyOwnerProof = MembershipProof; + type EquivocationReportSystem = (); +} + +impl grandpa::Config for Test { + type RuntimeEvent = RuntimeEvent; + + type WeightInfo = (); + type MaxAuthorities = MaxAuthorities; + + type MaxSetIdSessionEntries = ConstU64<0>; + type KeyOwnerProof = MembershipProof; + type EquivocationReportSystem = (); +} + +impl coins::Config for Test { + type RuntimeEvent = RuntimeEvent; + type AllowMint = ValidatorSets; +} + +impl coins::Config for Test { + type RuntimeEvent = RuntimeEvent; + type AllowMint = (); +} + +impl dex::Config for Test { + type RuntimeEvent = RuntimeEvent; + + type LPFee = ConstU32<3>; // 0.3% + type MintMinLiquidity = ConstU64<10000>; + + type MaxSwapPathLength = ConstU32<3>; // coin1 -> SRI -> coin2 + + type MedianPriceWindowLength = ConstU16<{ MEDIAN_PRICE_WINDOW_LENGTH }>; + + type WeightInfo = dex::weights::SubstrateWeight; +} + +impl validator_sets::Config for Test { + type RuntimeEvent = RuntimeEvent; + type ShouldEndSession = Babe; +} + +impl genesis_liquidity::Config for Test { + type RuntimeEvent = RuntimeEvent; +} + +impl emissions::Config for Test { + type RuntimeEvent = RuntimeEvent; +} + +impl economic_security::Config for Test { + type RuntimeEvent = RuntimeEvent; +} + +impl Config for Test { + type RuntimeEvent = RuntimeEvent; +} + +// Amounts for single key share per network +pub fn key_shares() -> HashMap { + HashMap::from([ + (NetworkId::Serai, Amount(50_000 * 10_u64.pow(8))), + (NetworkId::External(ExternalNetworkId::Bitcoin), Amount(1_000_000 * 10_u64.pow(8))), + (NetworkId::External(ExternalNetworkId::Ethereum), Amount(1_000_000 * 10_u64.pow(8))), + (NetworkId::External(ExternalNetworkId::Monero), Amount(100_000 * 10_u64.pow(8))), + ]) +} + +pub(crate) fn new_test_ext() -> sp_io::TestExternalities { + let mut t = 
frame_system::GenesisConfig::<Test>::default().build_storage().unwrap(); + let networks: Vec<(NetworkId, Amount)> = key_shares().into_iter().collect::<Vec<_>>(); + + let accounts: Vec<Public> = vec![ + insecure_pair_from_name("Alice").public(), + insecure_pair_from_name("Bob").public(), + insecure_pair_from_name("Charlie").public(), + insecure_pair_from_name("Dave").public(), + insecure_pair_from_name("Eve").public(), + insecure_pair_from_name("Ferdie").public(), + ]; + let validators = accounts.clone(); + + coins::GenesisConfig::<Test> { + accounts: accounts + .into_iter() + .map(|a| (a, Balance { coin: Coin::Serai, amount: Amount(1 << 60) })) + .collect(), + _ignore: Default::default(), + } + .assimilate_storage(&mut t) + .unwrap(); + + validator_sets::GenesisConfig::<Test> { + networks: networks.clone(), + participants: validators.clone(), + } + .assimilate_storage(&mut t) + .unwrap(); + + let mut ext = sp_io::TestExternalities::new(t); + ext.execute_with(|| System::set_block_number(0)); + ext +} diff --git a/substrate/in-instructions/pallet/src/tests.rs b/substrate/in-instructions/pallet/src/tests.rs new file mode 100644 index 00000000..cc2b0f3e --- /dev/null +++ b/substrate/in-instructions/pallet/src/tests.rs @@ -0,0 +1,500 @@ +use super::*; +use crate::mock::*; + +use emissions_pallet::primitives::POL_ACCOUNT; +use genesis_liquidity_pallet::primitives::INITIAL_GENESIS_LP_SHARES; +use scale::Encode; + +use frame_support::{pallet_prelude::InvalidTransaction, traits::OnFinalize}; +use frame_system::RawOrigin; + +use sp_core::{sr25519::Public, Pair}; +use sp_runtime::{traits::ValidateUnsigned, transaction_validity::TransactionSource, BoundedVec}; + +use validator_sets::{Pallet as ValidatorSets, primitives::KeyPair}; +use coins::primitives::{OutInstruction, OutInstructionWithBalance}; +use genesis_liquidity::primitives::GENESIS_LIQUIDITY_ACCOUNT; + +fn set_keys_for_session(key: Public) { + for n in EXTERNAL_NETWORKS { + ValidatorSets::<Test>::set_keys( + RawOrigin::None.into(), + n, + BoundedVec::new(), + KeyPair(key, vec![].try_into().unwrap()), + Signature([0u8; 64]), + ) + .unwrap(); + } +} + +fn get_events() -> Vec<in_instructions::Event<Test>> { + let events = System::events() + .iter() + .filter_map(|event| { + if let RuntimeEvent::InInstructions(e) = &event.event { + Some(e.clone()) + } else { + None + } + }) + .collect::<Vec<_>>(); + + System::reset_events(); + events +} + +fn make_liquid_pool(coin: ExternalCoin, amount: u64) { + // mint coins so that we can add liquidity + let account = insecure_pair_from_name("make-pool-account").public(); + Coins::mint(account, ExternalBalance { coin, amount: Amount(amount) }.into()).unwrap(); + Coins::mint(account, Balance { coin: Coin::Serai, amount: Amount(amount) }).unwrap(); + + // make a liquid pool + Dex::add_liquidity(RawOrigin::Signed(account).into(), coin, amount, amount, 1, 1, account) + .unwrap(); +} + +#[test] +fn validate_batch() { + new_test_ext().execute_with(|| { + let pair = insecure_pair_from_name("Alice"); + set_keys_for_session(pair.public()); + + let mut batch_size = 0; + let mut batch = Batch { + network: ExternalNetworkId::Monero, + id: 1, + block: BlockHash([0u8; 32]), + instructions: vec![], + }; + + // a batch bigger than MAX_BATCH_SIZE should fail + while batch_size <= MAX_BATCH_SIZE + 1000 { + batch.instructions.push(InInstructionWithBalance { + instruction: InInstruction::Transfer(SeraiAddress::new([0u8; 32])), + balance: ExternalBalance { coin: ExternalCoin::Monero, amount: Amount(1) }, + }); + batch_size = batch.encode().len(); + } + + let call = pallet::Call::<Test>::execute_batch { 
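+    // (These tests drive the pallet's ValidateUnsigned impl directly: they build the bare
+    // `execute_batch` call and feed it to `validate_unsigned` with an External transaction
+    // source, instead of going through a transaction pool.)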
batch: SignedBatch { batch: batch.clone(), signature: Signature([0u8; 64]) }, + }; + assert_eq!( + InInstructions::validate_unsigned(TransactionSource::External, &call), + InvalidTransaction::ExhaustsResources.into() + ); + + // reduce the batch size to an allowed size + while batch_size > MAX_BATCH_SIZE { + batch.instructions.pop(); + batch_size = batch.encode().len(); + } + + // an all-zero signature should be invalid + let call = pallet::Call::<Test>::execute_batch { + batch: SignedBatch { batch: batch.clone(), signature: Signature([0u8; 64]) }, + }; + assert_eq!( + InInstructions::validate_unsigned(TransactionSource::External, &call), + InvalidTransaction::BadProof.into() + ); + + // submit a valid signature + let signature = pair.sign(&batch_message(&batch)); + + // the network must not be halted + InInstructions::halt(ExternalNetworkId::Monero).unwrap(); + let call = pallet::Call::<Test>::execute_batch { + batch: SignedBatch { batch: batch.clone(), signature }, + }; + assert_eq!( + InInstructions::validate_unsigned(TransactionSource::External, &call), + InvalidTransaction::Custom(1).into() // network halted error + ); + + // submit from an un-halted network + batch.network = ExternalNetworkId::Bitcoin; + let signature = pair.sign(&batch_message(&batch)); + + // can't submit in the first block (block 0) + let call = pallet::Call::<Test>::execute_batch { + batch: SignedBatch { batch: batch.clone(), signature: signature.clone() }, + }; + assert_eq!( + InInstructions::validate_unsigned(TransactionSource::External, &call), + InvalidTransaction::Future.into() + ); + + // update block number + System::set_block_number(1); + + // the first batch id should be 0 + let call = pallet::Call::<Test>::execute_batch { + batch: SignedBatch { batch: batch.clone(), signature: signature.clone() }, + }; + assert_eq!( + InInstructions::validate_unsigned(TransactionSource::External, &call), + InvalidTransaction::Future.into() + ); + + // update batch id + batch.id = 0; + let signature = pair.sign(&batch_message(&batch)); + + // can't have more than 1 batch per block + let call = pallet::Call::<Test>::execute_batch { + batch: SignedBatch { batch: batch.clone(), signature: signature.clone() }, + }; + assert_eq!( + InInstructions::validate_unsigned(TransactionSource::External, &call), + InvalidTransaction::Future.into() + ); + + // update block number + System::set_block_number(2); + + // the network and the instruction coins should match + let call = pallet::Call::<Test>::execute_batch { + batch: SignedBatch { batch: batch.clone(), signature }, + }; + assert_eq!( + InInstructions::validate_unsigned(TransactionSource::External, &call), + InvalidTransaction::Custom(2).into() // network and instruction coins don't match error + ); + + // update block number & batch + System::set_block_number(3); + for ins in &mut batch.instructions { + ins.balance.coin = ExternalCoin::Bitcoin; + } + let signature = pair.sign(&batch_message(&batch)); + + // the batch id can't be equal to or less than the previous id + let call = pallet::Call::<Test>::execute_batch { + batch: SignedBatch { batch: batch.clone(), signature }, + }; + assert_eq!( + InInstructions::validate_unsigned(TransactionSource::External, &call), + InvalidTransaction::Stale.into() + ); + + // update block number & batch + System::set_block_number(4); + batch.id += 2; + let signature = pair.sign(&batch_message(&batch)); + + // the batch id can't be incremented by more than one per batch + let call = pallet::Call::<Test>::execute_batch { + batch: SignedBatch { batch: batch.clone(), signature }, + }; + assert_eq!( 
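+    // (`Future` marks a transaction as not-yet-valid so the pool would retry it later,
+    // whereas `Stale` above rejects ids at or below the last executed batch outright.)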
InInstructions::validate_unsigned(TransactionSource::External, &call), + InvalidTransaction::Future.into() + ); + + // update block number & batch + System::set_block_number(5); + batch.id = (batch.id - 2) + 1; + let signature = pair.sign(&batch_message(&batch)); + + // it should now pass + let call = pallet::Call::<Test>::execute_batch { + batch: SignedBatch { batch: batch.clone(), signature }, + }; + InInstructions::validate_unsigned(TransactionSource::External, &call).unwrap(); + }); +} + +#[test] +fn transfer_instruction() { + new_test_ext().execute_with(|| { + let coin = ExternalCoin::Bitcoin; + let amount = Amount(2 * 10u64.pow(coin.decimals())); + let account = insecure_pair_from_name("random1").public(); + let batch = SignedBatch { + batch: Batch { + network: coin.network(), + id: 0, + block: BlockHash([0u8; 32]), + instructions: vec![InInstructionWithBalance { + instruction: InInstruction::Transfer(account.into()), + balance: ExternalBalance { coin, amount }, + }], + }, + signature: Signature([0u8; 64]), + }; + InInstructions::execute_batch(RawOrigin::None.into(), batch).unwrap(); + + // check that the account has the coins + assert_eq!(Coins::balance(account, coin.into()), amount); + }) +} + +#[test] +fn dex_instruction_add_liquidity() { + new_test_ext().execute_with(|| { + let coin = ExternalCoin::Ether; + let amount = Amount(2 * 10u64.pow(coin.decimals())); + let account = insecure_pair_from_name("random1").public(); + + let batch = SignedBatch { + batch: Batch { + network: coin.network(), + id: 0, + block: BlockHash([0u8; 32]), + instructions: vec![InInstructionWithBalance { + instruction: InInstruction::Dex(DexCall::SwapAndAddLiquidity(account.into())), + balance: ExternalBalance { coin, amount }, + }], + }, + signature: Signature([0u8; 64]), + }; + + // we need a liquid pool before we can swap + InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap(); + + // check that the instruction failed + assert_eq!( + get_events() + .into_iter() + .filter(|event| matches!(event, in_instructions::Event::<Test>::InstructionFailure { .. })) + .collect::<Vec<_>>(), + vec![in_instructions::Event::<Test>::InstructionFailure { + network: batch.batch.network, + id: batch.batch.id, + index: 0 + }] + ); + + let original_coin_amount = 5 * 10u64.pow(coin.decimals()); + make_liquid_pool(coin, original_coin_amount); + + // this should now be successful + InInstructions::execute_batch(RawOrigin::None.into(), batch).unwrap(); + + // check that the instruction was successful + assert_eq!( + get_events() + .into_iter() + .filter(|event| matches!(event, in_instructions::Event::<Test>::InstructionFailure { .. })) + .collect::<Vec<_>>(), + vec![] + ); + + // check that we now have an Ether pool with the correct liquidity + // We can't know the actual SRI amount since we don't know the result of the swap. + // Moreover, knowing exactly how much isn't the responsibility of the InInstructions pallet; + // it is the responsibility of the Dex pallet. + let (coin_amount, _serai_amount) = Dex::get_reserves(&coin.into(), &Coin::Serai).unwrap(); + assert_eq!(coin_amount, original_coin_amount + amount.0); + + // assert that the account got the liquidity tokens; again, we don't know how much, and + // it isn't this pallet's responsibility. 
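+    // (A stricter test would recompute the expected LP token amount from the pool reserves,
+    // but that swap math is covered by the Dex pallet's own tests.)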
+
+    // check that we now have an Ether pool with the correct liquidity
+    // we can't know the actual SRI amount since we don't know the result of the swap.
+    // Moreover, knowing exactly how much isn't the responsibility of the InInstructions pallet;
+    // it is the responsibility of the Dex pallet.
+    let (coin_amount, _serai_amount) = Dex::get_reserves(&coin.into(), &Coin::Serai).unwrap();
+    assert_eq!(coin_amount, original_coin_amount + amount.0);
+
+    // assert that the account got the liquidity tokens; again, we don't know how much, and
+    // it isn't this pallet's responsibility
+    assert!(LiquidityTokens::balance(account, coin.into()).0 > 0);
+
+    // check that the in-instructions executor account doesn't hold the coins
+    assert_eq!(Coins::balance(IN_INSTRUCTION_EXECUTOR.into(), coin.into()), Amount(0));
+    assert_eq!(Coins::balance(IN_INSTRUCTION_EXECUTOR.into(), Coin::Serai), Amount(0));
+  })
+}
+
+#[test]
+fn dex_instruction_swap() {
+  new_test_ext().execute_with(|| {
+    let coin = ExternalCoin::Bitcoin;
+    let amount = Amount(2 * 10u64.pow(coin.decimals()));
+    let account = insecure_pair_from_name("random1").public();
+
+    // make a pool so that we can actually swap
+    make_liquid_pool(coin, 5 * 10u64.pow(coin.decimals()));
+
+    let mut batch = SignedBatch {
+      batch: Batch {
+        network: coin.network(),
+        id: 0,
+        block: BlockHash([0u8; 32]),
+        instructions: vec![InInstructionWithBalance {
+          instruction: InInstruction::Dex(DexCall::Swap(
+            Balance { coin: Coin::Serai, amount: Amount(1) },
+            OutAddress::External(ExternalAddress::new([0u8; 64].to_vec()).unwrap()),
+          )),
+          balance: ExternalBalance { coin, amount },
+        }],
+      },
+      signature: Signature([0u8; 64]),
+    };
+
+    // we can't send SRI to an external address
+    InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap();
+
+    // check that the instruction failed
+    assert_eq!(
+      get_events()
+        .into_iter()
+        .filter(|event| matches!(event, in_instructions::Event::<Test>::InstructionFailure { .. }))
+        .collect::<Vec<_>>(),
+      vec![in_instructions::Event::<Test>::InstructionFailure {
+        network: batch.batch.network,
+        id: batch.batch.id,
+        index: 0
+      }]
+    );
+
+    // make it an internal address
+    batch.batch.instructions[0].instruction = InInstruction::Dex(DexCall::Swap(
+      Balance { coin: Coin::Serai, amount: Amount(1) },
+      OutAddress::Serai(account.into()),
+    ));
+
+    // check that the swap is successful this time
+    assert_eq!(Coins::balance(account, Coin::Serai), Amount(0));
+    InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap();
+    assert!(Coins::balance(account, Coin::Serai).0 > 0);
+
+    // make another pool for the external coin
+    let coin2 = ExternalCoin::Monero;
+    make_liquid_pool(coin2, 5 * 10u64.pow(coin.decimals()));
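+
+    // the swap below is a two-hop route, coin -> SRI -> coin2, since every Dex pool pairs an
+    // external coin against SRI (cf. the MaxSwapPathLength of 3 configured in the mocks)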
+    // update the batch
+    let out_addr = ExternalAddress::new([0u8; 64].to_vec()).unwrap();
+    batch.batch.instructions[0].instruction = InInstruction::Dex(DexCall::Swap(
+      Balance { coin: ExternalCoin::Monero.into(), amount: Amount(1) },
+      OutAddress::External(out_addr.clone()),
+    ));
+    InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap();
+
+    // check that we got the out instruction
+    let events = System::events()
+      .iter()
+      .filter_map(|event| {
+        if let RuntimeEvent::Coins(e) = &event.event {
+          if matches!(e, coins::Event::<Test>::BurnWithInstruction { .. }) {
+            Some(e.clone())
+          } else {
+            None
+          }
+        } else {
+          None
+        }
+      })
+      .collect::<Vec<_>>();
+
+    assert_eq!(
+      events,
+      vec![coins::Event::<Test>::BurnWithInstruction {
+        from: IN_INSTRUCTION_EXECUTOR.into(),
+        instruction: OutInstructionWithBalance {
+          instruction: OutInstruction { address: out_addr, data: None },
+          balance: ExternalBalance { coin: coin2, amount: Amount(68228493) }
+        }
+      }]
+    )
+  })
+}
+
+#[test]
+fn genesis_liquidity_instruction() {
+  new_test_ext().execute_with(|| {
+    let coin = ExternalCoin::Bitcoin;
+    let amount = Amount(2 * 10u64.pow(coin.decimals()));
+    let account = insecure_pair_from_name("random1").public();
+
+    let batch = SignedBatch {
+      batch: Batch {
+        network: coin.network(),
+        id: 0,
+        block: BlockHash([0u8; 32]),
+        instructions: vec![InInstructionWithBalance {
+          instruction: InInstruction::GenesisLiquidity(account.into()),
+          balance: ExternalBalance { coin, amount },
+        }],
+      },
+      signature: Signature([0u8; 64]),
+    };
+
+    InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap();
+
+    // check that the genesis liquidity account got the coins
+    assert_eq!(Coins::balance(GENESIS_LIQUIDITY_ACCOUNT.into(), coin.into()), amount);
+
+    // check that it registered the liquidity for the account
+    // detailed tests about the amounts have to be done in the GenesisLiquidity pallet tests
+    let liquidity_amount = GenesisLiquidity::liquidity(coin, account).unwrap();
+    assert_eq!(liquidity_amount.coins, amount.0);
+    assert_eq!(liquidity_amount.shares, INITIAL_GENESIS_LP_SHARES);
+
+    let supply = GenesisLiquidity::supply(coin).unwrap();
+    assert_eq!(supply.coins, amount.0);
+    assert_eq!(supply.shares, INITIAL_GENESIS_LP_SHARES);
+  })
+}
+
+#[test]
+fn swap_to_staked_sri_instruction() {
+  new_test_ext().execute_with(|| {
+    let coin = ExternalCoin::Monero;
+    let key_share =
+      ValidatorSets::<Test>::allocation_per_key_share(NetworkId::from(coin.network())).unwrap();
+    let amount = Amount(2 * key_share.0);
+    let account = insecure_pair_from_name("random1").public();
+
+    // make a pool so that we can actually swap
+    make_liquid_pool(coin, 5 * 10u64.pow(coin.decimals()));
+
+    // set the keys to set the TAS for the network
+    ValidatorSets::<Test>::set_keys(
+      RawOrigin::None.into(),
+      coin.network(),
+      Vec::new().try_into().unwrap(),
+      KeyPair(insecure_pair_from_name("random-key").public(), Vec::new().try_into().unwrap()),
+      Signature([0u8; 64]),
+    )
+    .unwrap();
+
+    // make sure the account doesn't already have liquidity tokens or an allocation
+    let current_liq_tokens = LiquidityTokens::balance(POL_ACCOUNT.into(), coin.into()).0;
+    assert_eq!(current_liq_tokens, 0);
+    assert_eq!(
+      ValidatorSets::<Test>::allocation((NetworkId::from(coin.network()), account)),
+      None
+    );
+
+    // we need this so that a value for the coin exists
+    Dex::on_finalize(0);
+    System::set_block_number(1); // we need this for the spot price
+
+    let batch = SignedBatch {
+      batch: Batch {
+        network: coin.network(),
+        id: 0,
+        block: BlockHash([0u8; 32]),
+        instructions: vec![InInstructionWithBalance {
+          instruction: InInstruction::SwapToStakedSRI(account.into(), coin.network().into()),
+          balance: ExternalBalance { coin, amount },
+        }],
+      },
+      signature: Signature([0u8; 64]),
+    };
+
+    InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap();
+
+    // assert that we added liquidity from the POL account
+    assert!(LiquidityTokens::balance(POL_ACCOUNT.into(), coin.into()).0 > current_liq_tokens);
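+
+    // the staked SRI is computed from the spot price as amount * value / 10^coin_decimals,
+    // i.e. `value` is the SRI received per one whole coin (so, illustratively, a 1:1 price
+    // stakes 2 SRI for the 2 whole coins sent here)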
+    // assert that the user's SRI was allocated for the network
+    let value = Dex::spot_price_for_block(0, coin).unwrap();
+    let sri_amount = Amount(
+      u64::try_from(
+        u128::from(amount.0)
+          .checked_mul(u128::from(value.0))
+          .unwrap()
+          .checked_div(u128::from(10u64.pow(coin.decimals())))
+          .unwrap(),
+      )
+      .unwrap(),
+    );
+    assert_eq!(
+      ValidatorSets::<Test>::allocation((NetworkId::from(coin.network()), account)).unwrap(),
+      sri_amount
+    );
+  })
+}

From 52d853c8ba0724cef29417076f4c30f053bc61a8 Mon Sep 17 00:00:00 2001
From: akildemir <34187742+akildemir@users.noreply.github.com>
Date: Thu, 30 Jan 2025 12:16:19 +0300
Subject: [PATCH 367/368] add validator sets pallet tests (#614)

* add validator sets pallet tests

* update tests with new types

---------

Co-authored-by: Luke Parker
---
 Cargo.lock                                   |   8 +
 substrate/validator-sets/pallet/Cargo.toml   |  23 +-
 substrate/validator-sets/pallet/src/lib.rs   |  19 +-
 substrate/validator-sets/pallet/src/mock.rs  | 210 +++++++
 substrate/validator-sets/pallet/src/tests.rs | 561 +++++++++++++++++++
 5 files changed, 817 insertions(+), 4 deletions(-)
 create mode 100644 substrate/validator-sets/pallet/src/mock.rs
 create mode 100644 substrate/validator-sets/pallet/src/tests.rs

diff --git a/Cargo.lock b/Cargo.lock
index 7b438c66..9fb17677 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -9869,11 +9869,17 @@ name = "serai-validator-sets-pallet"
 version = "0.1.0"
 dependencies = [
  "bitvec",
+ "ciphersuite",
  "frame-support",
  "frame-system",
+ "frost-schnorrkel",
+ "hashbrown 0.14.5",
+ "modular-frost",
 "pallet-babe",
 "pallet-grandpa",
+ "pallet-timestamp",
 "parity-scale-codec",
+ "rand_core",
 "scale-info",
 "serai-coins-pallet",
 "serai-dex-pallet",
 "serai-primitives",
 "serai-validator-sets-primitives",
 "serde",
 "sp-application-crypto",
+ "sp-consensus-babe",
 "sp-core",
 "sp-io",
 "sp-runtime",
 "sp-session",
 "sp-staking",
 "sp-std",
+ "zeroize",
 ]

diff --git a/substrate/validator-sets/pallet/Cargo.toml b/substrate/validator-sets/pallet/Cargo.toml
index 7f0a9850..78bc1718 100644
--- a/substrate/validator-sets/pallet/Cargo.toml
+++ b/substrate/validator-sets/pallet/Cargo.toml
@@ -43,6 +43,18 @@ validator-sets-primitives = { package = "serai-validator-sets-primitives", path
 coins-pallet = { package = "serai-coins-pallet", path = "../../coins/pallet", default-features = false }
 dex-pallet = { package = "serai-dex-pallet", path = "../../dex/pallet", default-features = false }

+[dev-dependencies]
+pallet-timestamp = { git = "https://github.com/serai-dex/substrate", default-features = false }
+
+sp-consensus-babe = { git = "https://github.com/serai-dex/substrate", default-features = false }
+
+ciphersuite = { path = "../../../crypto/ciphersuite", features = ["ristretto"] }
+frost = { package = "modular-frost", path = "../../../crypto/frost", features = ["tests"] }
+schnorrkel = { path = "../../../crypto/schnorrkel", package = "frost-schnorrkel" }
+
+zeroize = "^1.5"
+rand_core = "0.6"
+
 [features]
 std = [
   "bitvec/std",
@@ -57,12 +69,15 @@ std = [
   "sp-runtime/std",
   "sp-session/std",
   "sp-staking/std",
+
+  "sp-consensus-babe/std",

   "frame-system/std",
   "frame-support/std",

   "pallet-babe/std",
   "pallet-grandpa/std",
+  "pallet-timestamp/std",

   "serai-primitives/std",
   "validator-sets-primitives/std",
@@ -71,8 +86,12 @@ std = [
   "dex-pallet/std",
 ]

-# TODO
-try-runtime = []
+try-runtime = [
+  "frame-system/try-runtime",
+  "frame-support/try-runtime",
+
+  "sp-runtime/try-runtime",
+]

 runtime-benchmarks = [
   "frame-system/runtime-benchmarks",

diff --git a/substrate/validator-sets/pallet/src/lib.rs b/substrate/validator-sets/pallet/src/lib.rs
index 755e980a..5bfa58f0 100644
--- a/substrate/validator-sets/pallet/src/lib.rs
+++ b/substrate/validator-sets/pallet/src/lib.rs
@@ -1,5 +1,11 @@
 #![cfg_attr(not(feature = "std"), no_std)]

+#[cfg(test)]
+mod mock;
+
+#[cfg(test)]
+mod tests;
+
 use core::marker::PhantomData;

 use scale::{Encode, Decode};
@@ -321,6 +327,7 @@
   /// Pending deallocations, keyed by the Session they become unlocked on.
   #[pallet::storage]
+  #[pallet::getter(fn pending_deallocations)]
   type PendingDeallocations<T: Config> = StorageDoubleMap<
     _,
     Blake2_128Concat,
@@ -411,6 +418,7 @@
       let allocation_per_key_share = Self::allocation_per_key_share(network).unwrap().0;

       let mut participants = vec![];
+      let mut total_allocated_stake = 0;
       {
         let mut iter = SortedAllocationsIter::<T>::new(network);
         let mut key_shares = 0;
@@ -421,6 +429,7 @@
             (amount.0 / allocation_per_key_share).min(u64::from(MAX_KEY_SHARES_PER_SET_U32));

           participants.push((key, these_key_shares));
+          total_allocated_stake += amount.0;
           key_shares += these_key_shares;
         }
         amortize_excess_key_shares(&mut participants);
@@ -433,6 +442,12 @@
       let set = ValidatorSet { network, session };
       Pallet::<T>::deposit_event(Event::NewSet { set });

+      // other networks set their Session(0) TAS once they set their keys, but the Serai network
+      // doesn't have that step, so we set it here
+      if network == NetworkId::Serai && session == Session(0) {
+        TotalAllocatedStake::<T>::set(network, Some(Amount(total_allocated_stake)));
+      }
+
       Participants::<T>::set(network, Some(participants.try_into().unwrap()));
       SessionBeginBlock::<T>::set(
         network,
@@ -658,7 +673,7 @@
       // If we're not removing the entire allocation, yet the allocation is no longer at or above
       // the threshold for a key share, error
       let allocation_per_key_share = Self::allocation_per_key_share(network).unwrap().0;
-      if (new_allocation != 0) && (new_allocation < allocation_per_key_share) {
+      if (new_allocation > 0) && (new_allocation < allocation_per_key_share) {
         Err(Error::<T>::DeallocationWouldRemoveParticipant)?;
       }

@@ -819,7 +834,7 @@
       PendingDeallocations::<T>::take((network, key), session)
     }

-    fn rotate_session() {
+    pub(crate) fn rotate_session() {
       // next serai validators that is in the queue.
       let now_validators = Participants::<T>::get(NetworkId::Serai)
         .expect("no Serai participants upon rotate_session");

diff --git a/substrate/validator-sets/pallet/src/mock.rs b/substrate/validator-sets/pallet/src/mock.rs
new file mode 100644
index 00000000..d6d12050
--- /dev/null
+++ b/substrate/validator-sets/pallet/src/mock.rs
@@ -0,0 +1,210 @@
+//! Test environment for ValidatorSets pallet.
+
+use super::*;
+
+use std::collections::HashMap;
+
+use frame_support::{
+  construct_runtime,
+  traits::{ConstU16, ConstU32, ConstU64},
+};
+
+use sp_core::{
+  H256, Pair as PairTrait,
+  sr25519::{Public, Pair},
+};
+use sp_runtime::{
+  traits::{BlakeTwo256, IdentityLookup},
+  BuildStorage,
+};
+
+use serai_primitives::*;
+use validator_sets::{primitives::MAX_KEY_SHARES_PER_SET, MembershipProof};
+
+pub use crate as validator_sets;
+pub use coins_pallet as coins;
+pub use dex_pallet as dex;
+pub use pallet_babe as babe;
+pub use pallet_grandpa as grandpa;
+pub use pallet_timestamp as timestamp;
+
+type Block = frame_system::mocking::MockBlock<Test>;
+// Maximum number of authorities per session.
+pub type MaxAuthorities = ConstU32<{ MAX_KEY_SHARES_PER_SET }>;
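+
+// BABE's `c` parameter below: each slot has a 1-in-4 chance of having a primary author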
+pub const PRIMARY_PROBABILITY: (u64, u64) = (1, 4);
+pub const BABE_GENESIS_EPOCH_CONFIG: sp_consensus_babe::BabeEpochConfiguration =
+  sp_consensus_babe::BabeEpochConfiguration {
+    c: PRIMARY_PROBABILITY,
+    allowed_slots: sp_consensus_babe::AllowedSlots::PrimaryAndSecondaryPlainSlots,
+  };
+
+pub const MEDIAN_PRICE_WINDOW_LENGTH: u16 = 10;
+
+construct_runtime!(
+  pub enum Test
+  {
+    System: frame_system,
+    Timestamp: timestamp,
+    Coins: coins,
+    LiquidityTokens: coins::<Instance1>::{Pallet, Call, Storage, Event<T>},
+    ValidatorSets: validator_sets,
+    Dex: dex,
+    Babe: babe,
+    Grandpa: grandpa,
+  }
+);
+
+impl frame_system::Config for Test {
+  type BaseCallFilter = frame_support::traits::Everything;
+  type BlockWeights = ();
+  type BlockLength = ();
+  type RuntimeOrigin = RuntimeOrigin;
+  type RuntimeCall = RuntimeCall;
+  type Nonce = u64;
+  type Hash = H256;
+  type Hashing = BlakeTwo256;
+  type AccountId = Public;
+  type Lookup = IdentityLookup<Public>;
+  type Block = Block;
+  type RuntimeEvent = RuntimeEvent;
+  type BlockHashCount = ConstU64<250>;
+  type DbWeight = ();
+  type Version = ();
+  type PalletInfo = PalletInfo;
+  type AccountData = ();
+  type OnNewAccount = ();
+  type OnKilledAccount = ();
+  type SystemWeightInfo = ();
+  type SS58Prefix = ();
+  type OnSetCode = ();
+  type MaxConsumers = ConstU32<16>;
+}
+
+impl timestamp::Config for Test {
+  type Moment = u64;
+  type OnTimestampSet = Babe;
+  type MinimumPeriod = ConstU64<{ (TARGET_BLOCK_TIME * 1000) / 2 }>;
+  type WeightInfo = ();
+}
+
+impl babe::Config for Test {
+  type EpochDuration = ConstU64<{ FAST_EPOCH_DURATION }>;
+
+  type ExpectedBlockTime = ConstU64<{ TARGET_BLOCK_TIME * 1000 }>;
+  type EpochChangeTrigger = babe::ExternalTrigger;
+  type DisabledValidators = ValidatorSets;
+
+  type WeightInfo = ();
+  type MaxAuthorities = MaxAuthorities;
+
+  type KeyOwnerProof = MembershipProof<Self>;
+  type EquivocationReportSystem = ();
+}
+
+impl grandpa::Config for Test {
+  type RuntimeEvent = RuntimeEvent;
+
+  type WeightInfo = ();
+  type MaxAuthorities = MaxAuthorities;
+
+  type MaxSetIdSessionEntries = ConstU64<0>;
+  type KeyOwnerProof = MembershipProof<Self>;
+  type EquivocationReportSystem = ();
+}
+
+impl coins::Config for Test {
+  type RuntimeEvent = RuntimeEvent;
+  type AllowMint = ValidatorSets;
+}
+
+impl coins::Config<coins::Instance1> for Test {
+  type RuntimeEvent = RuntimeEvent;
+  type AllowMint = ();
+}
+
+impl dex::Config for Test {
+  type RuntimeEvent = RuntimeEvent;
+
+  type LPFee = ConstU32<3>; // 0.3%
+  type MintMinLiquidity = ConstU64<10000>;
+
+  type MaxSwapPathLength = ConstU32<3>; // coin1 -> SRI -> coin2
+
+  type MedianPriceWindowLength = ConstU16<{ MEDIAN_PRICE_WINDOW_LENGTH }>;
+
+  type WeightInfo = dex::weights::SubstrateWeight<Test>;
+}
+
+impl Config for Test {
+  type RuntimeEvent = RuntimeEvent;
+  type ShouldEndSession = Babe;
+}
+
+// we can't define these as consts, so they're functions
+pub fn genesis_participants() -> Vec<Pair> {
+  vec![
+    insecure_pair_from_name("Alice"),
+    insecure_pair_from_name("Bob"),
+    insecure_pair_from_name("Charlie"),
+    insecure_pair_from_name("Dave"),
+  ]
+}
+
+// Amounts for a single key share per network
+pub fn key_shares() -> HashMap<NetworkId, Amount> {
+  HashMap::from([
+    (NetworkId::Serai, Amount(50_000 * 10_u64.pow(8))),
+    (NetworkId::External(ExternalNetworkId::Bitcoin), Amount(1_000_000 * 10_u64.pow(8))),
+    (NetworkId::External(ExternalNetworkId::Ethereum), Amount(1_000_000 * 10_u64.pow(8))),
+    (NetworkId::External(ExternalNetworkId::Monero), Amount(100_000 * 10_u64.pow(8))),
+  ])
+}
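+
+// Builds the test externalities: genesis accounts funded with SRI, validator-sets genesis
+// (the per-network key-share amounts plus the four genesis participants), and matching BABE
+// and GRANDPA authority sets, starting at block 0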
+pub(crate) fn new_test_ext() -> sp_io::TestExternalities {
+  let mut t = frame_system::GenesisConfig::<Test>::default().build_storage().unwrap();
+  let networks: Vec<(NetworkId, Amount)> = key_shares().into_iter().collect::<Vec<_>>();
+
+  coins::GenesisConfig::<Test> {
+    accounts: genesis_participants()
+      .clone()
+      .into_iter()
+      .map(|a| (a.public(), Balance { coin: Coin::Serai, amount: Amount(1 << 60) }))
+      .collect(),
+    _ignore: Default::default(),
+  }
+  .assimilate_storage(&mut t)
+  .unwrap();
+
+  validator_sets::GenesisConfig::<Test> {
+    networks,
+    participants: genesis_participants().into_iter().map(|p| p.public()).collect(),
+  }
+  .assimilate_storage(&mut t)
+  .unwrap();
+
+  babe::GenesisConfig::<Test> {
+    authorities: genesis_participants()
+      .into_iter()
+      .map(|validator| (validator.public().into(), 1))
+      .collect(),
+    epoch_config: Some(BABE_GENESIS_EPOCH_CONFIG),
+    _config: PhantomData,
+  }
+  .assimilate_storage(&mut t)
+  .unwrap();
+
+  grandpa::GenesisConfig::<Test> {
+    authorities: genesis_participants()
+      .into_iter()
+      .map(|validator| (validator.public().into(), 1))
+      .collect(),
+    _config: PhantomData,
+  }
+  .assimilate_storage(&mut t)
+  .unwrap();
+
+  let mut ext = sp_io::TestExternalities::new(t);
+  ext.execute_with(|| System::set_block_number(0));
+  ext
+}

diff --git a/substrate/validator-sets/pallet/src/tests.rs b/substrate/validator-sets/pallet/src/tests.rs
new file mode 100644
index 00000000..6c407abd
--- /dev/null
+++ b/substrate/validator-sets/pallet/src/tests.rs
@@ -0,0 +1,561 @@
+use crate::{mock::*, primitives::*};
+
+use std::collections::HashMap;
+
+use ciphersuite::{Ciphersuite, Ristretto};
+use frost::dkg::musig::musig;
+use schnorrkel::Schnorrkel;
+
+use zeroize::Zeroizing;
+use rand_core::OsRng;
+
+use frame_support::{
+  assert_noop, assert_ok,
+  pallet_prelude::{InvalidTransaction, TransactionSource},
+  traits::{OnFinalize, OnInitialize},
+};
+use frame_system::RawOrigin;
+
+use sp_core::{
+  sr25519::{Public, Pair, Signature},
+  Pair as PairTrait,
+};
+use sp_runtime::{traits::ValidateUnsigned, BoundedVec};
+
+use serai_primitives::*;
+
+fn active_network_validators(network: NetworkId) -> Vec<(Public, u64)> {
+  if network == NetworkId::Serai {
+    Babe::authorities().into_iter().map(|(id, key_share)| (id.into_inner(), key_share)).collect()
+  } else {
+    ValidatorSets::participants_for_latest_decided_set(network).unwrap().into_inner()
+  }
+}
+
+fn verify_session_and_active_validators(network: NetworkId, participants: &[Public], session: u32) {
+  let mut validators: Vec<Public> = active_network_validators(network)
+    .into_iter()
+    .map(|(p, ks)| {
+      assert_eq!(ks, 1);
+      p
+    })
+    .collect();
+  validators.sort();
+
+  assert_eq!(ValidatorSets::session(network).unwrap(), Session(session));
+  assert_eq!(participants, validators);
+
+  // TODO: how to make sure block finalizations work as usual here?
+}
+
+fn get_session_at_which_changes_activate(network: NetworkId) -> u32 {
+  let current_session = ValidatorSets::session(network).unwrap().0;
+  // changes should be active in the next session
+  if network == NetworkId::Serai {
+    // it takes 1 extra session for the Serai network to make the changes active
+    current_session + 2
+  } else {
+    current_session + 1
+  }
+}
+
+fn set_keys_for_session(network: ExternalNetworkId) {
+  ValidatorSets::set_keys(
+    RawOrigin::None.into(),
+    network,
+    BoundedVec::new(),
+    KeyPair(insecure_pair_from_name("Alice").public(), vec![].try_into().unwrap()),
+    Signature([0u8; 64]),
+  )
+  .unwrap();
+}
+fn set_keys_signature(set: &ExternalValidatorSet, key_pair: &KeyPair, pairs: &[Pair]) -> Signature {
+  let mut pub_keys = vec![];
+  for pair in pairs {
+    let public_key =
+      <Ristretto as Ciphersuite>::read_G::<&[u8]>(&mut pair.public().0.as_ref()).unwrap();
+    pub_keys.push(public_key);
+  }
+
+  let mut threshold_keys = vec![];
+  for i in 0 .. pairs.len() {
+    let secret_key = <Ristretto as Ciphersuite>::read_F::<&[u8]>(
+      &mut pairs[i].as_ref().secret.to_bytes()[.. 32].as_ref(),
+    )
+    .unwrap();
+    assert_eq!(Ristretto::generator() * secret_key, pub_keys[i]);
+
+    threshold_keys.push(
+      musig::<Ristretto>(&musig_context((*set).into()), &Zeroizing::new(secret_key), &pub_keys)
+        .unwrap(),
+    );
+  }
+
+  let mut musig_keys = HashMap::new();
+  for tk in threshold_keys {
+    musig_keys.insert(tk.params().i(), tk.into());
+  }
+
+  let sig = frost::tests::sign_without_caching(
+    &mut OsRng,
+    frost::tests::algorithm_machines(&mut OsRng, &Schnorrkel::new(b"substrate"), &musig_keys),
+    &set_keys_message(set, &[], key_pair),
+  );
+
+  Signature(sig.to_bytes())
+}
+
+fn get_ordered_keys(network: NetworkId, participants: &[Pair]) -> Vec<Pair> {
+  // retrieve the current session validators so that we know the order of the keys,
+  // which is necessary for the correct MuSig signature
+  let validators = ValidatorSets::participants_for_latest_decided_set(network).unwrap();
+
+  // collect the pairs of the validators
+  let mut pairs = vec![];
+  for (v, _) in validators {
+    let p = participants.iter().find(|pair| pair.public() == v).unwrap().clone();
+    pairs.push(p);
+  }
+
+  pairs
+}
+fn rotate_session_until(network: NetworkId, session: u32) {
+  let mut current = ValidatorSets::session(network).unwrap().0;
+  while current < session {
+    Babe::on_initialize(System::block_number() + 1);
+    ValidatorSets::rotate_session();
+    if let NetworkId::External(n) = network {
+      set_keys_for_session(n);
+    }
+    ValidatorSets::retire_set(ValidatorSet { session: Session(current), network });
+    current += 1;
+  }
+  assert_eq!(current, session);
+}
+
+#[test]
+fn rotate_session() {
+  new_test_ext().execute_with(|| {
+    let genesis_participants: Vec<Public> =
+      genesis_participants().into_iter().map(|p| p.public()).collect();
+    let key_shares = key_shares();
+
+    let mut participants = HashMap::from([
+      (NetworkId::Serai, genesis_participants.clone()),
+      (NetworkId::External(ExternalNetworkId::Bitcoin), genesis_participants.clone()),
+      (NetworkId::External(ExternalNetworkId::Ethereum), genesis_participants.clone()),
+      (NetworkId::External(ExternalNetworkId::Monero), genesis_participants),
+    ]);
+
+    // rotate session
+    for network in NETWORKS {
+      let participants = participants.get_mut(&network).unwrap();
+
+      // verify for session 0
+      participants.sort();
+      if let NetworkId::External(n) = network {
+        set_keys_for_session(n);
+      }
+      verify_session_and_active_validators(network, participants, 0);
+
+      // add 1 participant
+      let new_participant = insecure_pair_from_name("new-guy").public();
+      Coins::mint(new_participant, Balance { coin: Coin::Serai, amount: key_shares[&network] })
+        .unwrap();
+      ValidatorSets::allocate(
+        RawOrigin::Signed(new_participant).into(),
+        network,
+        key_shares[&network],
+      )
+      .unwrap();
+      participants.push(new_participant);
+
+      // move the network to the activation session
+      let activation_session = get_session_at_which_changes_activate(network);
+      rotate_session_until(network, activation_session);
+
+      // verify
+      participants.sort();
+      verify_session_and_active_validators(network, participants, activation_session);
+
+      // remove 1 participant
+      let participant_to_remove = participants[0];
+      ValidatorSets::deallocate(
+        RawOrigin::Signed(participant_to_remove).into(),
+        network,
+        key_shares[&network],
+      )
+      .unwrap();
+      participants
+        .swap_remove(participants.iter().position(|k| *k == participant_to_remove).unwrap());
+
+      // check pending deallocations
+      let pending = ValidatorSets::pending_deallocations(
+        (network, participant_to_remove),
+        Session(if network == NetworkId::Serai {
+          activation_session + 3
+        } else {
+          activation_session + 2
+        }),
+      );
+      assert_eq!(pending, Some(key_shares[&network]));
+
+      // move the network to the activation session
+      let activation_session = get_session_at_which_changes_activate(network);
+      rotate_session_until(network, activation_session);
+
+      // verify
+      participants.sort();
+      verify_session_and_active_validators(network, participants, activation_session);
+    }
+  })
+}
+
+#[test]
+fn allocate() {
+  new_test_ext().execute_with(|| {
+    let genesis_participants: Vec<Public> =
+      genesis_participants().into_iter().map(|p| p.public()).collect();
+    let key_shares = key_shares();
+    let participant = insecure_pair_from_name("random1").public();
+    let network = NetworkId::External(ExternalNetworkId::Ethereum);
+
+    // check the genesis TAS (total allocated stake)
+    set_keys_for_session(network.try_into().unwrap());
+    assert_eq!(
+      ValidatorSets::total_allocated_stake(network).unwrap().0,
+      key_shares[&network].0 * u64::try_from(genesis_participants.len()).unwrap()
+    );
+
+    // we can't allocate less than a key share
+    let amount = Amount(key_shares[&network].0 * 3);
+    Coins::mint(participant, Balance { coin: Coin::Serai, amount }).unwrap();
+    assert_noop!(
+      ValidatorSets::allocate(
+        RawOrigin::Signed(participant).into(),
+        network,
+        Amount(key_shares[&network].0 - 1)
+      ),
+      validator_sets::Error::<Test>::InsufficientAllocation
+    );
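+
+    // note: `amount` here is 3 key shares; with the 4 genesis validators at one share each,
+    // that would hand this participant 3 of 7 shares (over a third of the set)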
+    // we can't allocate so much that the network loses its ability to handle any single node
+    // becoming byzantine
+    assert_noop!(
+      ValidatorSets::allocate(RawOrigin::Signed(participant).into(), network, amount),
+      validator_sets::Error::<Test>::AllocationWouldRemoveFaultTolerance
+    );
+
+    // we should be able to allocate a proper amount
+    assert_ok!(ValidatorSets::allocate(
+      RawOrigin::Signed(participant).into(),
+      network,
+      key_shares[&network]
+    ));
+    assert_eq!(Coins::balance(participant, Coin::Serai).0, amount.0 - key_shares[&network].0);
+
+    // check that the new amount is reflected in the TAS in the new session
+    rotate_session_until(network, 1);
+    assert_eq!(
+      ValidatorSets::total_allocated_stake(network).unwrap().0,
+      key_shares[&network].0 * (u64::try_from(genesis_participants.len()).unwrap() + 1)
+    );
+
+    // check that the new participants match
+    let mut active_participants: Vec<Public> =
+      active_network_validators(network).into_iter().map(|(p, _)| p).collect();
+
+    let mut current_participants = genesis_participants.clone();
+    current_participants.push(participant);
+
+    current_participants.sort();
+    active_participants.sort();
+    assert_eq!(current_participants, active_participants);
+  })
+}
+
+#[test]
+fn deallocate_pending() {
+  new_test_ext().execute_with(|| {
+    let genesis_participants: Vec<Public> =
+      genesis_participants().into_iter().map(|p| p.public()).collect();
+    let key_shares = key_shares();
+    let participant = insecure_pair_from_name("random1").public();
+    let network = NetworkId::External(ExternalNetworkId::Bitcoin);
+
+    // check the genesis TAS
+    set_keys_for_session(network.try_into().unwrap());
+    assert_eq!(
+      ValidatorSets::total_allocated_stake(network).unwrap().0,
+      key_shares[&network].0 * u64::try_from(genesis_participants.len()).unwrap()
+    );
+
+    // allocate some amount
+    Coins::mint(participant, Balance { coin: Coin::Serai, amount: key_shares[&network] }).unwrap();
+    assert_ok!(ValidatorSets::allocate(
+      RawOrigin::Signed(participant).into(),
+      network,
+      key_shares[&network]
+    ));
+    assert_eq!(Coins::balance(participant, Coin::Serai).0, 0);
+
+    // move to the next session
+    let mut current_session = ValidatorSets::session(network).unwrap().0;
+    current_session += 1;
+    rotate_session_until(network, current_session);
+    assert_eq!(
+      ValidatorSets::total_allocated_stake(network).unwrap().0,
+      key_shares[&network].0 * (u64::try_from(genesis_participants.len()).unwrap() + 1)
+    );
+
+    // we can deallocate all of our allocation
+    assert_ok!(ValidatorSets::deallocate(
+      RawOrigin::Signed(participant).into(),
+      network,
+      key_shares[&network]
+    ));
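+
+    // a deallocation only becomes claimable one session after it deactivates: consistent with
+    // get_session_at_which_changes_activate above, that's current + 2 for external networks
+    // and current + 3 for Serai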
+    // check pending deallocations
+    let pending_session =
+      if network == NetworkId::Serai { current_session + 3 } else { current_session + 2 };
+    assert_eq!(
+      ValidatorSets::pending_deallocations((network, participant), Session(pending_session)),
+      Some(key_shares[&network])
+    );
+
+    // we can't claim it immediately
+    assert_noop!(
+      ValidatorSets::claim_deallocation(
+        RawOrigin::Signed(participant).into(),
+        network,
+        Session(pending_session),
+      ),
+      validator_sets::Error::<Test>::NonExistentDeallocation
+    );
+
+    // we should be able to claim it in the pending session
+    rotate_session_until(network, pending_session);
+    assert_ok!(ValidatorSets::claim_deallocation(
+      RawOrigin::Signed(participant).into(),
+      network,
+      Session(pending_session),
+    ));
+  })
+}
+
+#[test]
+fn deallocate_immediately() {
+  new_test_ext().execute_with(|| {
+    let genesis_participants: Vec<Public> =
+      genesis_participants().into_iter().map(|p| p.public()).collect();
+    let key_shares = key_shares();
+    let participant = insecure_pair_from_name("random1").public();
+    let network = NetworkId::External(ExternalNetworkId::Monero);
+
+    // check the genesis TAS
+    set_keys_for_session(network.try_into().unwrap());
+    assert_eq!(
+      ValidatorSets::total_allocated_stake(network).unwrap().0,
+      key_shares[&network].0 * u64::try_from(genesis_participants.len()).unwrap()
+    );
+
+    // we can't deallocate when we don't have an allocation
+    assert_noop!(
+      ValidatorSets::deallocate(
+        RawOrigin::Signed(participant).into(),
+        network,
+        key_shares[&network]
+      ),
+      validator_sets::Error::<Test>::NonExistentValidator
+    );
+
+    // allocate some amount
+    Coins::mint(participant, Balance { coin: Coin::Serai, amount: key_shares[&network] }).unwrap();
+    assert_ok!(ValidatorSets::allocate(
+      RawOrigin::Signed(participant).into(),
+      network,
+      key_shares[&network]
+    ));
+    assert_eq!(Coins::balance(participant, Coin::Serai).0, 0);
+
+    // we can't deallocate more than our allocation
+    assert_noop!(
+      ValidatorSets::deallocate(
+        RawOrigin::Signed(participant).into(),
+        network,
+        Amount(key_shares[&network].0 + 1)
+      ),
+      validator_sets::Error::<Test>::NotEnoughAllocated
+    );
+
+    // we can't deallocate an amount that would leave us with less than a key share, unless it
+    // leaves exactly 0
+    assert_noop!(
+      ValidatorSets::deallocate(
+        RawOrigin::Signed(participant).into(),
+        network,
+        Amount(key_shares[&network].0 / 2)
+      ),
+      validator_sets::Error::<Test>::DeallocationWouldRemoveParticipant
+    );
+
+    // we can deallocate all of our allocation
+    assert_ok!(ValidatorSets::deallocate(
+      RawOrigin::Signed(participant).into(),
+      network,
+      key_shares[&network]
+    ));
+
+    // it should be immediately deallocated, since we are not yet in an active set
+    assert_eq!(Coins::balance(participant, Coin::Serai), key_shares[&network]);
+    assert!(ValidatorSets::pending_deallocations((network, participant), Session(1)).is_none());
+
+    // allocate again
+    assert_ok!(ValidatorSets::allocate(
+      RawOrigin::Signed(participant).into(),
+      network,
+      key_shares[&network]
+    ));
+    assert_eq!(Coins::balance(participant, Coin::Serai).0, 0);
+
+    // make a pool so that we have a security oracle value for the coin
+    let liq_acc = insecure_pair_from_name("liq-acc").public();
+    let coin = ExternalCoin::Monero;
+    let balance = ExternalBalance { coin, amount: Amount(2 * key_shares[&network].0) };
+    Coins::mint(liq_acc, balance.into()).unwrap();
+    Coins::mint(liq_acc, Balance { coin: Coin::Serai, amount: balance.amount }).unwrap();
+    Dex::add_liquidity(
+      RawOrigin::Signed(liq_acc).into(),
+      coin,
+      balance.amount.0 / 2,
+      balance.amount.0 / 2,
+      1,
+      1,
+      liq_acc,
+    )
+    .unwrap();
+    Dex::on_finalize(1);
+    assert!(Dex::security_oracle_value(coin).unwrap().0 > 0);
+
+    // we can't deallocate if it would break economic security
+    // The reason we don't have economic security for the network right now is that we set the
+    // coin/SRI value to 1:1 when making the pool and minted 2 * key_share of the coin, yet only
+    // allocated 1 key_share of SRI for the network, although securing that amount of the coin
+    // requires more than 3
+    assert_noop!(
+      ValidatorSets::deallocate(
+        RawOrigin::Signed(participant).into(),
+        network,
+        key_shares[&network]
+      ),
+      validator_sets::Error::<Test>::DeallocationWouldRemoveEconomicSecurity
+    );
+  })
+}
+
+#[test]
+fn set_keys_keys_exist() {
+  new_test_ext().execute_with(|| {
+    let network = ExternalNetworkId::Monero;
+
+    // set the keys first
+    ValidatorSets::set_keys(
+      RawOrigin::None.into(),
+      network,
+      Vec::new().try_into().unwrap(),
+      KeyPair(insecure_pair_from_name("name").public(), Vec::new().try_into().unwrap()),
+      Signature([0u8; 64]),
+    )
+    .unwrap();
+
+    let call = validator_sets::Call::<Test>::set_keys {
+      network,
+      removed_participants: Vec::new().try_into().unwrap(),
+      key_pair: KeyPair(insecure_pair_from_name("name").public(), Vec::new().try_into().unwrap()),
+      signature: Signature([0u8; 64]),
+    };
+
+    assert_eq!(
+      ValidatorSets::validate_unsigned(TransactionSource::External, &call),
+      InvalidTransaction::Stale.into()
+    );
+  })
+}
+
+#[test]
+fn set_keys_invalid_signature() {
+  new_test_ext().execute_with(|| {
+    let network = ExternalNetworkId::Ethereum;
+    let mut participants = get_ordered_keys(network.into(), &genesis_participants());
+
+    // we can't have an invalid set
+    let mut set = ExternalValidatorSet { network, session: Session(1) };
+    let key_pair =
+      KeyPair(insecure_pair_from_name("name").public(), Vec::new().try_into().unwrap());
+    let signature = set_keys_signature(&set, &key_pair, &participants);
+
+    let call = validator_sets::Call::<Test>::set_keys {
+      network,
+      removed_participants: Vec::new().try_into().unwrap(),
+      key_pair: key_pair.clone(),
+      signature,
+    };
+    assert_eq!(
+      ValidatorSets::validate_unsigned(TransactionSource::External, &call),
+      InvalidTransaction::BadProof.into()
+    );
+
+    // fix the set
+    set.session = Session(0);
+
+    // the participants should match
+    participants.push(insecure_pair_from_name("random1"));
+    let signature = set_keys_signature(&set, &key_pair, &participants);
+
+    let call = validator_sets::Call::<Test>::set_keys {
+      network,
+      removed_participants: Vec::new().try_into().unwrap(),
+      key_pair: key_pair.clone(),
+      signature,
+    };
+    assert_eq!(
+      ValidatorSets::validate_unsigned(TransactionSource::External, &call),
+      InvalidTransaction::BadProof.into()
+    );
+
+    // fix the participants
+    participants.pop();
+
+    // the message's key pair and the key pair to set should match
+    let key_pair2 =
+      KeyPair(insecure_pair_from_name("name2").public(), Vec::new().try_into().unwrap());
+    let signature = set_keys_signature(&set, &key_pair2, &participants);
+
+    let call = validator_sets::Call::<Test>::set_keys {
+      network,
+      removed_participants: Vec::new().try_into().unwrap(),
+      key_pair: key_pair.clone(),
+      signature,
+    };
+    assert_eq!(
+      ValidatorSets::validate_unsigned(TransactionSource::External, &call),
+      InvalidTransaction::BadProof.into()
+    );
+
+    // use the same key pair
+    let signature = set_keys_signature(&set, &key_pair, &participants);
+    let call = validator_sets::Call::<Test>::set_keys {
+      network,
+      removed_participants: Vec::new().try_into().unwrap(),
+      key_pair,
+      signature,
+    };
+    ValidatorSets::validate_unsigned(TransactionSource::External, &call).unwrap();
+
+    // TODO: the removed_participants parameter isn't tested since it will be removed in upcoming
+    // commits?
+  })
+}
+
+// TODO: add report_slashes tests when the feature is complete.

From e4cc23b72d337f4f4c0833a53f37605890c16e1a Mon Sep 17 00:00:00 2001
From: akildemir <34187742+akildemir@users.noreply.github.com>
Date: Thu, 30 Jan 2025 12:19:12 +0300
Subject: [PATCH 368/368] add economic security pallet tests (#623)

---
 Cargo.lock                                    |   8 +
 substrate/economic-security/pallet/Cargo.toml |  31 ++-
 substrate/economic-security/pallet/src/lib.rs |   6 +
 .../economic-security/pallet/src/mock.rs      | 217 ++++++++++++++++++
 .../economic-security/pallet/src/tests.rs     |  82 +++++++
 5 files changed, 343 insertions(+), 1 deletion(-)
 create mode 100644 substrate/economic-security/pallet/src/mock.rs
 create mode 100644 substrate/economic-security/pallet/src/tests.rs

diff --git a/Cargo.lock b/Cargo.lock
index 9fb17677..56e4dc96 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -9133,11 +9133,19 @@
 version = "0.1.0"
 dependencies = [
  "frame-support",
  "frame-system",
+ "pallet-babe",
+ "pallet-grandpa",
+ "pallet-timestamp",
 "parity-scale-codec",
 "scale-info",
 "serai-coins-pallet",
 "serai-dex-pallet",
 "serai-primitives",
+ "serai-validator-sets-pallet",
+ "sp-consensus-babe",
+ "sp-core",
+ "sp-io",
+ "sp-runtime",
 ]

diff --git a/substrate/economic-security/pallet/Cargo.toml b/substrate/economic-security/pallet/Cargo.toml
index 033a05fc..efd969c8 100644
--- a/substrate/economic-security/pallet/Cargo.toml
+++ b/substrate/economic-security/pallet/Cargo.toml
@@ -30,6 +30,19 @@ coins-pallet = { package = "serai-coins-pallet", path = "../../coins/pallet", default-features = false }

 serai-primitives = { path = "../../primitives", default-features = false }

+[dev-dependencies]
+pallet-babe = { git = "https://github.com/serai-dex/substrate", default-features = false }
+pallet-grandpa = { git = "https://github.com/serai-dex/substrate", default-features = false }
+pallet-timestamp = { git = "https://github.com/serai-dex/substrate", default-features = false }
+
+validator-sets-pallet = { package = "serai-validator-sets-pallet", path = "../../validator-sets/pallet", default-features = false }
+
+sp-io = { git = "https://github.com/serai-dex/substrate", default-features = false }
+sp-runtime = { git = "https://github.com/serai-dex/substrate", default-features = false }
+sp-core = { git = "https://github.com/serai-dex/substrate", default-features = false }
+sp-consensus-babe = { git = "https://github.com/serai-dex/substrate", default-features = false }
+
 [features]
 std = [
   "scale/std",
@@ -38,11 +51,27 @@
   "frame-system/std",
   "frame-support/std",

+  "sp-io/std",
+  "sp-core/std",
+  "sp-consensus-babe/std",
+
   "dex-pallet/std",
   "coins-pallet/std",
+  "validator-sets-pallet/std",

   "serai-primitives/std",
+
+  "pallet-babe/std",
+  "pallet-grandpa/std",
+  "pallet-timestamp/std",
 ]
-try-runtime = [] # TODO
+
+try-runtime = [
+  "frame-system/try-runtime",
+  "frame-support/try-runtime",
+
+  "sp-runtime/try-runtime",
+]
+
 default = ["std"]

diff --git a/substrate/economic-security/pallet/src/lib.rs b/substrate/economic-security/pallet/src/lib.rs
index 045297f4..20897aaa 100644
--- a/substrate/economic-security/pallet/src/lib.rs
+++ b/substrate/economic-security/pallet/src/lib.rs
@@ -1,5 +1,11 @@
 #![cfg_attr(not(feature = "std"), no_std)]

+#[cfg(test)]
+mod mock;
+
+#[cfg(test)]
+mod tests;
+
 #[allow(
   unreachable_patterns,
   clippy::cast_possible_truncation,

diff --git a/substrate/economic-security/pallet/src/mock.rs b/substrate/economic-security/pallet/src/mock.rs
new file mode 100644
index 00000000..ffa7d7fb
--- /dev/null
+++ b/substrate/economic-security/pallet/src/mock.rs
@@ -0,0 +1,217 @@
+//! Test environment for EconomicSecurity pallet.
+
+use super::*;
+
+use core::marker::PhantomData;
+use std::collections::HashMap;
+
+use frame_support::{
+  construct_runtime,
+  traits::{ConstU16, ConstU32, ConstU64},
+};
+
+use sp_core::{
+  H256, Pair as PairTrait,
+  sr25519::{Public, Pair},
+};
+use sp_runtime::{
+  traits::{BlakeTwo256, IdentityLookup},
+  BuildStorage,
+};
+
+use serai_primitives::*;
+use validator_sets::{primitives::MAX_KEY_SHARES_PER_SET, MembershipProof};
+
+pub use crate as economic_security;
+pub use coins_pallet as coins;
+pub use dex_pallet as dex;
+pub use pallet_babe as babe;
+pub use pallet_grandpa as grandpa;
+pub use pallet_timestamp as timestamp;
+pub use validator_sets_pallet as validator_sets;
+
+type Block = frame_system::mocking::MockBlock<Test>;
+// Maximum number of authorities per session.
+pub type MaxAuthorities = ConstU32<{ MAX_KEY_SHARES_PER_SET }>;
+
+pub const PRIMARY_PROBABILITY: (u64, u64) = (1, 4);
+pub const BABE_GENESIS_EPOCH_CONFIG: sp_consensus_babe::BabeEpochConfiguration =
+  sp_consensus_babe::BabeEpochConfiguration {
+    c: PRIMARY_PROBABILITY,
+    allowed_slots: sp_consensus_babe::AllowedSlots::PrimaryAndSecondaryPlainSlots,
+  };
+
+pub const MEDIAN_PRICE_WINDOW_LENGTH: u16 = 10;
+
+construct_runtime!(
+  pub enum Test
+  {
+    System: frame_system,
+    Timestamp: timestamp,
+    Coins: coins,
+    LiquidityTokens: coins::<Instance1>::{Pallet, Call, Storage, Event<T>},
+    ValidatorSets: validator_sets,
+    EconomicSecurity: economic_security,
+    Dex: dex,
+    Babe: babe,
+    Grandpa: grandpa,
+  }
+);
+
+impl frame_system::Config for Test {
+  type BaseCallFilter = frame_support::traits::Everything;
+  type BlockWeights = ();
+  type BlockLength = ();
+  type RuntimeOrigin = RuntimeOrigin;
+  type RuntimeCall = RuntimeCall;
+  type Nonce = u64;
+  type Hash = H256;
+  type Hashing = BlakeTwo256;
+  type AccountId = Public;
+  type Lookup = IdentityLookup<Public>;
+  type Block = Block;
+  type RuntimeEvent = RuntimeEvent;
+  type BlockHashCount = ConstU64<250>;
+  type DbWeight = ();
+  type Version = ();
+  type PalletInfo = PalletInfo;
+  type AccountData = ();
+  type OnNewAccount = ();
+  type OnKilledAccount = ();
+  type SystemWeightInfo = ();
+  type SS58Prefix = ();
+  type OnSetCode = ();
+  type MaxConsumers = ConstU32<16>;
+}
+
+impl timestamp::Config for Test {
+  type Moment = u64;
+  type OnTimestampSet = Babe;
+  type MinimumPeriod = ConstU64<{ (TARGET_BLOCK_TIME * 1000) / 2 }>;
+  type WeightInfo = ();
+}
+
+impl babe::Config for Test {
+  type EpochDuration = ConstU64<{ FAST_EPOCH_DURATION }>;
+
+  type ExpectedBlockTime = ConstU64<{ TARGET_BLOCK_TIME * 1000 }>;
+  type EpochChangeTrigger = babe::ExternalTrigger;
+  type DisabledValidators = ValidatorSets;
+
+  type WeightInfo = ();
+  type MaxAuthorities = MaxAuthorities;
+
+  type KeyOwnerProof = MembershipProof<Self>;
+  type EquivocationReportSystem = ();
+}
+
+impl grandpa::Config for Test {
+  type RuntimeEvent = RuntimeEvent;
+
+  type WeightInfo = ();
+  type MaxAuthorities = MaxAuthorities;
+
+  type MaxSetIdSessionEntries = ConstU64<0>;
+  type KeyOwnerProof = MembershipProof<Self>;
+  type EquivocationReportSystem = ();
+}
+
+impl coins::Config for Test {
+  type RuntimeEvent = RuntimeEvent;
+  type AllowMint = ValidatorSets;
+}
+
+impl coins::Config<coins::Instance1> for Test {
+  type RuntimeEvent = RuntimeEvent;
+  type AllowMint = ();
+}
+
+impl dex::Config for Test {
+  type RuntimeEvent = RuntimeEvent;
+
+  type LPFee = ConstU32<3>; // 0.3%
+  type MintMinLiquidity = ConstU64<10000>;
+
+  type MaxSwapPathLength = ConstU32<3>; // coin1 -> SRI -> coin2
+
+  type MedianPriceWindowLength = ConstU16<{ MEDIAN_PRICE_WINDOW_LENGTH }>;
+
+  type WeightInfo = dex::weights::SubstrateWeight<Test>;
+}
+
+impl validator_sets::Config for Test {
+  type RuntimeEvent = RuntimeEvent;
+  type ShouldEndSession = Babe;
+}
+
+impl Config for Test {
+  type RuntimeEvent = RuntimeEvent;
+}
+
+// we can't define these as consts, so they're functions
+pub fn genesis_participants() -> Vec<Pair> {
+  vec![
+    insecure_pair_from_name("Alice"),
+    insecure_pair_from_name("Bob"),
+    insecure_pair_from_name("Charlie"),
+    insecure_pair_from_name("Dave"),
+  ]
+}
+
+// Amounts for a single key share per network
+pub fn key_shares() -> HashMap<NetworkId, Amount> {
+  HashMap::from([
+    (NetworkId::Serai, Amount(50_000 * 10_u64.pow(8))),
+    (NetworkId::External(ExternalNetworkId::Bitcoin), Amount(1_000_000 * 10_u64.pow(8))),
+    (NetworkId::External(ExternalNetworkId::Ethereum), Amount(1_000_000 * 10_u64.pow(8))),
+    (NetworkId::External(ExternalNetworkId::Monero), Amount(100_000 * 10_u64.pow(8))),
+  ])
+}
+
+pub(crate) fn new_test_ext() -> sp_io::TestExternalities {
+  let mut t = frame_system::GenesisConfig::<Test>::default().build_storage().unwrap();
+  let networks: Vec<(NetworkId, Amount)> = key_shares().into_iter().collect::<Vec<_>>();
+
+  coins::GenesisConfig::<Test> {
+    accounts: genesis_participants()
+      .clone()
+      .into_iter()
+      .map(|a| (a.public(), Balance { coin: Coin::Serai, amount: Amount(1 << 60) }))
+      .collect(),
+    _ignore: Default::default(),
+  }
+  .assimilate_storage(&mut t)
+  .unwrap();
+
+  validator_sets::GenesisConfig::<Test> {
+    networks,
+    participants: genesis_participants().into_iter().map(|p| p.public()).collect(),
+  }
+  .assimilate_storage(&mut t)
+  .unwrap();
+
+  babe::GenesisConfig::<Test> {
+    authorities: genesis_participants()
+      .into_iter()
+      .map(|validator| (validator.public().into(), 1))
+      .collect(),
+    epoch_config: Some(BABE_GENESIS_EPOCH_CONFIG),
+    _config: PhantomData,
+  }
+  .assimilate_storage(&mut t)
+  .unwrap();
+
+  grandpa::GenesisConfig::<Test> {
+    authorities: genesis_participants()
+      .into_iter()
+      .map(|validator| (validator.public().into(), 1))
+      .collect(),
+    _config: PhantomData,
+  }
+  .assimilate_storage(&mut t)
+  .unwrap();
+
+  let mut ext = sp_io::TestExternalities::new(t);
+  ext.execute_with(|| System::set_block_number(0));
+  ext
+}

diff --git a/substrate/economic-security/pallet/src/tests.rs b/substrate/economic-security/pallet/src/tests.rs
new file mode 100644
index 00000000..a6010e71
--- /dev/null
+++ b/substrate/economic-security/pallet/src/tests.rs
@@ -0,0 +1,82 @@
+use crate::mock::*;
+
+use frame_support::traits::Hooks;
+use frame_system::RawOrigin;
+
+use sp_core::{sr25519::Signature, Pair as PairTrait};
+use sp_runtime::BoundedVec;
+
+use validator_sets::primitives::KeyPair;
+use serai_primitives::{
+  insecure_pair_from_name, Balance, Coin, ExternalBalance, ExternalCoin, ExternalNetworkId,
+  EXTERNAL_COINS, EXTERNAL_NETWORKS,
+};
+
+fn set_keys_for_session(network: ExternalNetworkId) {
+  ValidatorSets::set_keys(
+    RawOrigin::None.into(),
+    network,
+    BoundedVec::new(),
+    KeyPair(insecure_pair_from_name("Alice").public(), vec![].try_into().unwrap()),
+    Signature([0u8; 64]),
+  )
+  .unwrap();
+}
+
+fn make_pool_with_liquidity(coin: &ExternalCoin) {
+  // make a pool so that we have a security oracle value for the coin
+  let liq_acc = insecure_pair_from_name("liq-acc").public();
+  let balance = ExternalBalance { coin: *coin, amount: key_shares()[&coin.network().into()] };
+  Coins::mint(liq_acc, balance.into()).unwrap();
+  Coins::mint(liq_acc, Balance { coin: Coin::Serai, amount: balance.amount }).unwrap();
+
+  Dex::add_liquidity(
+    RawOrigin::Signed(liq_acc).into(),
+    *coin,
+    balance.amount.0 / 2,
+    balance.amount.0 / 2,
+    1,
+    1,
+    liq_acc,
+  )
+  .unwrap();
+  Dex::on_finalize(1);
+  assert!(Dex::security_oracle_value(coin).unwrap().0 > 0)
+}
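+
+// note: adding equal amounts of the coin and SRI above prices the coin at roughly 1:1 with
+// SRI, so the oracle value asserted there is just the spot price recorded on `on_finalize`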
+
+#[test]
+fn economic_security() {
+  new_test_ext().execute_with(|| {
+    // update the state
+    EconomicSecurity::on_initialize(1);
+
+    // make sure it is right at the beginning
+    // this is None at this point, since no set has set their keys, so the TAS isn't up-to-date
+    for network in EXTERNAL_NETWORKS {
+      assert_eq!(EconomicSecurity::economic_security_block(network), None);
+    }
+
+    // set the keys for the TAS and make pools for the oracle values
+    for coin in EXTERNAL_COINS {
+      set_keys_for_session(coin.network());
+      make_pool_with_liquidity(&coin);
+    }
+
+    // update the state
+    EconomicSecurity::on_initialize(1);
+
+    // check again. The reason we have economic security now is that we stake a key share per
+    // participant per network (a total of 4 key shares) in genesis for all networks.
+    for network in EXTERNAL_NETWORKS {
+      assert_eq!(EconomicSecurity::economic_security_block(network), Some(1));
+    }
+
+    // TODO: Not sure how much sense this test makes, since we start from an economically secure
+    // state. Ideally, we should start from a not economically secure state, stake the necessary
+    // amount, and then check whether the pallet set the value right, since that will be the
+    // mainnet path. But we can't do that at the moment, since the vs-pallet genesis build
+    // auto-stakes per network to construct the set. This also makes a missing piece of logic
+    // explicit: we need genesis validators to be in-set, but without their stake, or at least
+    // its effect on the TAS. So this test should be updated once that logic is coded.
+  });
+}