35 Commits

Author SHA1 Message Date
Luke Parker
ca93c82156 Remove borsh from dkg
It pulls in a lot of bespoke dependencies for little direct utility.

Moves the necessary code into the processor.
2025-11-16 18:25:23 -05:00
Luke Parker
5b1875dae6 Update lockfile after recent cherry picks 2025-11-16 18:21:02 -05:00
Luke Parker
bcd68441be Use Alpine to build the runtime
Smaller, works without issue.
2025-11-16 18:21:02 -05:00
Luke Parker
4ebf9ad9c7 Rust 1.91.1 due to the regression re: wasm builds 2025-11-16 18:21:02 -05:00
Luke Parker
807572199c Update misc versions 2025-11-16 18:21:02 -05:00
Luke Parker
3cdc1536c5 Make ethereum-schnorr-contract no-std and no-alloc eligible 2025-11-16 18:21:01 -05:00
Luke Parker
9e13e5ebff Patch from parity-bip39 back to bip39
Per https://github.com/michalkucharczyk/rust-bip39/tree/mku-2.0.1-release,
`parity-bip39` was a fork to publish a release of `bip39` with two specific PRs
merged. Not only have those PRs been merged, but `bip39` now accepts
`bitcoin_hashes 0.14` (https://github.com/rust-bitcoin/rust-bip39/pull/76),
making this a great time to reconcile (even though it does technically add a
git dependency until the new release is cut...).
2025-11-16 18:21:01 -05:00
Luke Parker
9b2c254eee Patch librocksdb-sys to never enable jemalloc, which conflicts with mimalloc
Allows us to update mimalloc and enable the newly added guard pages.

Conflict identified by @PlasmaPower.
2025-11-16 18:20:55 -05:00
Luke Parker
0883479068 Remove rust-src as a component for WASM
It's unnecessary since `wasm32v1-none`.
2025-11-16 17:57:07 -05:00
Luke Parker
c5480c63be Build and run the message queue over Alpine
We previously stopped doing so for stability reasons, but this _should_ be
tried again.
2025-11-16 17:56:51 -05:00
Luke Parker
4280ee6987 revm 33 2025-11-16 17:54:43 -05:00
Luke Parker
91673d7ae3 Remove std feature from revm
It's unnecessary and bloats the tree considerably.
2025-11-16 17:52:58 -05:00
Luke Parker
927f07b62b Move bitcoin-serai to core-json and feature-gate the RPC functionality 2025-11-16 17:52:33 -05:00
Luke Parker
7e774d6d2d Correct when spin::Lazy is exposed as std_shims::sync::LazyLock
It's intended to always be used, even on `std`, when `std::sync::LazyLock` is
not available.
2025-11-16 17:52:26 -05:00
Luke Parker
fccd06b376 Bump revm 2025-11-16 17:52:23 -05:00
Luke Parker
e3edc0a7fc Add patches to remove the unused optional dependencies tracked in tree
Also performs the usual `cargo update`.
2025-11-16 17:51:38 -05:00
Luke Parker
9c47ef2658 Restore deny exception for kayabaNerve/elliptic-curves, accidentally dropped when merging develop 2025-11-04 13:18:59 -05:00
Luke Parker
e1b6b638c6 Merge branch 'develop' into next 2025-11-04 13:14:38 -05:00
Luke Parker
c24768f922 Fix borks from the latest nightly
The `cargo doc` build started to fail with the rolling of `doc_auto_cfg` into
`doc_cfg`, so now we don't build docs for deps (as we can't reasonably update
`generic-array` at this time).

`home` has been patched because we're now able to, not as a direct requirement
of this PR.
2025-11-04 13:10:11 -05:00
Luke Parker
87ee879dea doc_auto_cfg -> doc_cfg 2025-11-04 10:20:17 -05:00
Luke Parker
b5603560e8 Merge branch 'develop' into next 2025-11-04 10:19:38 -05:00
Luke Parker
5818f1a41c Update nightly version 2025-11-04 10:05:08 -05:00
Luke Parker
1b781b4b57 Fix CI 2025-10-07 04:39:32 -04:00
Luke Parker
94faf098b6 Update nightly version 2025-10-05 18:44:04 -04:00
Luke Parker
03e45f73cd Merge branch 'develop' into next 2025-10-05 18:43:53 -04:00
Luke Parker
63f7e220c0 Update macOS labels in CI due to deprecation of macos-13 2025-10-05 10:59:40 -04:00
Luke Parker
7d49366373 Move develop to patch-polkadot-sdk (#678)
* Update `build-dependencies` CI action

* Update `develop` to `patch-polkadot-sdk`

Allows us to finally remove the old `serai-dex/substrate` repository _and_
should have CI pass without issue on `develop` again.

The changes made here should be trivial and maintain all prior
behavior/functionality. The most notable are to `chain_spec.rs`, in order to
still use a SCALE-encoded `GenesisConfig` (avoiding `serde_json`).

* CI fixes

* Add `/usr/local/opt/llvm/lib` to paths on macOS hosts

* Attempt to use `LD_LIBRARY_PATH` in macOS GitHub CI

* Use `libp2p 0.56` in `serai-node`

* Correct Windows build dependencies

* Correct `llvm/lib` path on macOS

* Correct how macOS 13 and 14 have different homebrew paths

* Use `sw_vers` instead of `uname` on macOS

Yields the macOS version instead of the kernel's version.

* Replace hard-coded path with the intended env variable to fix macOS 13

* Add `libclang-dev` as dependency to the Debian Dockerfile

* Set the `CODE` storage slot

* Update to a version of substrate without `wasmtimer`

Turns out `wasmtimer` is WASM only. This should restore the node's functioning
on non-WASM environments.

* Restore `clang` as a dependency in the Debian Dockerfile, as we require a C++ compiler

* Move from Debian bookworm to trixie

* Restore `chain_getBlockBin` to the RPC

* Always generate a new key for the P2P network

* Mention every account on-chain before they publish a transaction

`CheckNonce` requires accounts to have a provider in order to even have their
nonce considered. This shims that by claiming every account which signs a
transaction in a block has a provider at the start of that block.

The actual execution could presumably diverge between block building (which
sets the provider before each transaction) and execution (which sets the
providers at the start of the block). It doesn't diverge in our current
configuration and it won't be propagated to `next` (which doesn't use
`CheckNonce`).

Also uses explicit indexes for the `serai_abi::{Call, Event}` `enum`s.

* Adopt `patch-polkadot-sdk` with fixed peering

* Manually insert the authority discovery key into the keystore

I did try pulling in `pallet-authority-discovery` for this, updating
`SessionKeys`, but that was insufficient for whatever reason.

* Update to latest `substrate-wasm-builder`

* Fix timeline for incrementing providers

e1671dd71b incremented the providers for every
single transaction's sender before execution, noting the solution was fragile
but it worked for us at this time. It did not work for us at this time.

The new solution replaces `inc_providers` with direct access to the `Account`
`StorageMap` to increment the providers, achieving the desired goal, _without_
emitting an event (which is ordered, and the disparate order between building
and execution was causing mismatches of the state root).

This solution is also fragile and may also be insufficient. None of this code
exists anymore on `next`, however; it just has to work sufficiently for now. A
sketch of the approach follows this commit message.

* clippy
2025-10-05 10:58:08 -04:00
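A hedged sketch of the final approach described above — not the exact patch; `frame_system`'s `Account` storage map and its `providers` reference count are standard Substrate:

```rust
// Increment the sender's provider count by writing the `frame_system::Account`
// StorageMap directly. Unlike `inc_providers`, this deposits no `NewAccount`
// event, so the (ordered) event log stays identical between block building
// and block execution.
fn shim_provider<T: frame_system::Config>(who: &T::AccountId) {
  frame_system::Account::<T>::mutate(who, |account| {
    account.providers = account.providers.saturating_add(1);
  });
}
```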
Luke Parker
55ed33d2d1 Update to a version of Substrate which no longer cites our fork of substrate-bip39 2025-09-30 19:30:40 -04:00
Luke Parker
138a0e9b40 Resolve https://github.com/serai-dex/serai/issues/680 2025-09-30 01:05:17 -04:00
Luke Parker
4fc7263ac3 Make simple_request::Client generic over the executor
Part of https://github.com/serai-dex/serai/issues/682.

We don't remove the use of `tokio::sync::Mutex` now as `hyper` pulls in
`tokio::sync` anyways, so there's no point in replacing it. This doesn't yet
solve TLS for non-`tokio` `Client`s.
2025-09-30 01:05:12 -04:00
Luke Parker
f27fd59fa6 Update documentation within modular-frost
Resolves https://github.com/serai-dex/serai/issues/675.
2025-09-30 00:27:29 -04:00
Luke Parker
437f0e9a93 Remove serdect by removing the unnecessary "alloc" feature from crypto-bigint
This only works for the legacy `crypto-bigint` and downstream consumers who
don't have the modern `serdect` pulled in for independent reasons.
2025-09-26 20:58:45 -04:00
Luke Parker
cc5d38f1ce dkg-evrf Security Proofs (#681)
* Add audit statement for `dkg-evrf`

This doesn't cover the implementation, solely the academia and background.

Also moves the existing audit of the `crypto` folder for organizational
reasons.

* Add files via upload
2025-09-26 11:20:48 -04:00
Luke Parker
0ce025e0c2 Update build-dependencies CI action 2025-09-21 15:40:58 -04:00
Luke Parker
224cf4ea21 Update monero-oxide to the branch with the new RPC
See https://github.com/monero-oxide/monero-oxide/pull/66.

Allows us to remove the shim `simple-request 0.1` we had to define as we now
have `simple-request 0.2` in tree.
2025-09-18 19:00:10 -04:00
373 changed files with 8922 additions and 10004 deletions

View File

@@ -5,7 +5,7 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: "29.1"
+    default: "30.0"
 runs:
   using: "composite"

View File

@@ -7,6 +7,10 @@ runs:
     - name: Remove unused packages
       shell: bash
       run: |
+        # Ensure the repositories are synced
+        sudo apt update -y
+        # Actually perform the removals
         sudo apt remove -y "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
         sudo apt remove -y "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
         sudo apt remove -y "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
@@ -14,8 +18,9 @@ runs:
         sudo apt remove -y --allow-remove-essential -f shim-signed *python3*
         # This removal command requires the prior removals due to unmet dependencies otherwise
         sudo apt remove -y "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
         # Reinstall python3 as a general dependency of a functional operating system
-        sudo apt install python3
+        sudo apt install -y python3 --fix-missing
       if: runner.os == 'Linux'
     - name: Remove unused packages
@@ -33,19 +38,23 @@ runs:
       shell: bash
       run: |
         if [ "$RUNNER_OS" == "Linux" ]; then
-          sudo apt install -y ca-certificates protobuf-compiler
+          sudo apt install -y ca-certificates protobuf-compiler libclang-dev
         elif [ "$RUNNER_OS" == "Windows" ]; then
           choco install protoc
         elif [ "$RUNNER_OS" == "macOS" ]; then
-          brew install protobuf
+          brew install protobuf llvm
+          HOMEBREW_ROOT_PATH=/opt/homebrew # Apple Silicon
+          if [ $(uname -m) = "x86_64" ]; then HOMEBREW_ROOT_PATH=/usr/local; fi # Intel
+          ls $HOMEBREW_ROOT_PATH/opt/llvm/lib | grep "libclang.dylib" # Make sure this installed `libclang`
+          echo "DYLD_LIBRARY_PATH=$HOMEBREW_ROOT_PATH/opt/llvm/lib:$DYLD_LIBRARY_PATH" >> "$GITHUB_ENV"
         fi
     - name: Install solc
       shell: bash
       run: |
-        cargo +1.90 install svm-rs --version =0.5.19
-        svm install 0.8.26
-        svm use 0.8.26
+        cargo +1.91 install svm-rs --version =0.5.19
+        svm install 0.8.29
+        svm use 0.8.29
     - name: Remove preinstalled Docker
       shell: bash

View File

@@ -5,7 +5,7 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: v0.18.3.4
+    default: v0.18.4.3
 runs:
   using: "composite"

View File

@@ -5,7 +5,7 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: v0.18.3.4
+    default: v0.18.4.3
 runs:
   using: "composite"

View File

@@ -5,12 +5,12 @@ inputs:
   monero-version:
     description: "Monero version to download and run as a regtest node"
     required: false
-    default: v0.18.3.4
+    default: v0.18.4.3
   bitcoin-version:
     description: "Bitcoin version to download and run as a regtest node"
     required: false
-    default: "29.1"
+    default: "30.0"
 runs:
   using: "composite"

View File

@@ -1 +1 @@
-nightly-2025-09-01
+nightly-2025-11-11

View File

@@ -18,7 +18,7 @@ jobs:
           key: rust-advisory-db
       - name: Install cargo deny
-        run: cargo +1.90 install cargo-deny --version =0.18.4
+        run: cargo +1.91 install cargo-deny --version =0.18.5
       - name: Run cargo deny
         run: cargo deny -L error --all-features check --hide-inclusion-graph

View File

@@ -11,7 +11,7 @@ jobs:
   clippy:
     strategy:
       matrix:
-        os: [ubuntu-latest, macos-13, macos-14, windows-latest]
+        os: [ubuntu-latest, macos-15-intel, macos-latest, windows-latest]
     runs-on: ${{ matrix.os }}
     steps:
@@ -26,7 +26,7 @@ jobs:
         uses: ./.github/actions/build-dependencies
       - name: Install nightly rust
-        run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-src -c clippy
+        run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c clippy
       - name: Run Clippy
         run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
@@ -52,7 +52,7 @@ jobs:
           key: rust-advisory-db
       - name: Install cargo deny
-        run: cargo +1.90 install cargo-deny --version =0.18.4
+        run: cargo +1.91 install cargo-deny --version =0.18.5
       - name: Run cargo deny
         run: cargo deny -L error --all-features check --hide-inclusion-graph
@@ -88,8 +88,8 @@ jobs:
       - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
       - name: Verify all dependencies are in use
         run: |
-          cargo +1.90 install cargo-machete --version =0.9.1
-          cargo +1.90 machete
+          cargo +1.91 install cargo-machete --version =0.9.1
+          cargo +1.91 machete
   msrv:
     runs-on: ubuntu-latest
@@ -98,7 +98,7 @@ jobs:
       - name: Verify claimed `rust-version`
         shell: bash
         run: |
-          cargo +1.90 install cargo-msrv --version =0.18.4
+          cargo +1.91 install cargo-msrv --version =0.18.4
           function check_msrv {
             # We `cd` into the directory passed as the first argument, but will return to the
@@ -155,8 +155,6 @@ jobs:
           # Correct the last line, which was malleated to "],"
          members=$(echo "$members" | sed "$(echo "$members" | wc -l)s/\]\,/\]/")
-          # Don't check the patches
-          members=$(echo "$members" | grep -v "patches")
           # Don't check the following
           # Most of these are binaries, with the exception of the Substrate runtime which has a
           # bespoke build pipeline
@@ -192,12 +190,12 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - name: Build Dependencies
+        uses: ./.github/actions/build-dependencies
       - name: Slither
         run: |
-          python3 -m pip install solc-select
-          solc-select install 0.8.26
-          solc-select use 0.8.26
           python3 -m pip install slither-analyzer
           slither --include-paths ./networks/ethereum/schnorr/contracts/Schnorr.sol

View File

@@ -69,8 +69,8 @@ jobs:
         uses: ./.github/actions/build-dependencies
       - name: Buld Rust docs
         run: |
-          rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-docs -c rust-src
-          RUSTDOCFLAGS="--cfg docsrs" cargo +${{ steps.nightly.outputs.version }} doc --workspace --all-features
+          rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-docs
+          RUSTDOCFLAGS="--cfg docsrs" cargo +${{ steps.nightly.outputs.version }} doc --workspace --no-deps --all-features
           mv target/doc docs/_site/rust
       - name: Upload artifact

View File

@@ -61,7 +61,6 @@ jobs:
             -p serai-monero-processor \
             -p tendermint-machine \
             -p tributary-sdk \
-            -p serai-cosign-types \
             -p serai-cosign \
             -p serai-coordinator-substrate \
             -p serai-coordinator-tributary \
@@ -83,16 +82,21 @@ jobs:
         run: |
           GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
             -p serai-primitives \
-            -p serai-abi \
-            -p serai-core-pallet \
+            -p serai-coins-primitives \
             -p serai-coins-pallet \
-            -p serai-validator-sets-pallet \
-            -p serai-signals-pallet \
             -p serai-dex-pallet \
+            -p serai-validator-sets-primitives \
+            -p serai-validator-sets-pallet \
+            -p serai-genesis-liquidity-primitives \
             -p serai-genesis-liquidity-pallet \
-            -p serai-economic-security-pallet \
+            -p serai-emissions-primitives \
             -p serai-emissions-pallet \
+            -p serai-economic-security-pallet \
+            -p serai-in-instructions-primitives \
             -p serai-in-instructions-pallet \
+            -p serai-signals-primitives \
+            -p serai-signals-pallet \
+            -p serai-abi \
             -p serai-runtime \
             -p serai-node

.gitignore (vendored)
View File

@@ -1,7 +1,14 @@
 target
+# Don't commit any `Cargo.lock` which aren't the workspace's
+Cargo.lock
+!./Cargo.lock
+# Don't commit any `Dockerfile`, as they're auto-generated, except the only one which isn't
 Dockerfile
 Dockerfile.fast-epoch
 !orchestration/runtime/Dockerfile
 .test-logs
 .vscode

Cargo.lock (generated)

File diff suppressed because it is too large

View File

@@ -1,10 +1,6 @@
 [workspace]
 resolver = "2"
 members = [
-  # Rewrites/redirects
-  "patches/option-ext",
-  "patches/directories-next",
   "common/std-shims",
   "common/zalloc",
   "common/patchable-async-sleep",
@@ -73,7 +69,6 @@ members = [
   "coordinator/tributary-sdk/tendermint",
   "coordinator/tributary-sdk",
-  "coordinator/cosign/types",
   "coordinator/cosign",
   "coordinator/substrate",
   "coordinator/tributary",
@@ -82,17 +77,30 @@ members = [
   "coordinator",
   "substrate/primitives",
-  "substrate/abi",
-  "substrate/core",
-  "substrate/coins",
-  "substrate/validator-sets",
-  "substrate/signals",
-  "substrate/dex",
-  "substrate/genesis-liquidity",
-  "substrate/economic-security",
-  "substrate/emissions",
-  "substrate/in-instructions",
+  "substrate/coins/primitives",
+  "substrate/coins/pallet",
+  "substrate/dex/pallet",
+  "substrate/validator-sets/primitives",
+  "substrate/validator-sets/pallet",
+  "substrate/genesis-liquidity/primitives",
+  "substrate/genesis-liquidity/pallet",
+  "substrate/emissions/primitives",
+  "substrate/emissions/pallet",
+  "substrate/economic-security/pallet",
+  "substrate/in-instructions/primitives",
+  "substrate/in-instructions/pallet",
+  "substrate/signals/primitives",
+  "substrate/signals/pallet",
+  "substrate/abi",
   "substrate/runtime",
   "substrate/node",
@@ -164,16 +172,29 @@ panic = "unwind"
 overflow-checks = true
 [patch.crates-io]
+# Point to empty crates for unused crates in our tree
+ark-ff-3 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.3" }
+ark-ff-4 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.4" }
+c-kzg = { path = "patches/ethereum/c-kzg" }
+secp256k1-30 = { package = "secp256k1", path = "patches/ethereum/secp256k1-30" }
 # Dependencies from monero-oxide which originate from within our own tree
 std-shims = { path = "patches/std-shims" }
-simple-request = { path = "common/request" }
+simple-request = { path = "patches/simple-request" }
 multiexp = { path = "crypto/multiexp" }
 flexible-transcript = { path = "crypto/transcript" }
 ciphersuite = { path = "patches/ciphersuite" }
-dalek-ff-group = { path = "crypto/dalek-ff-group" }
+dalek-ff-group = { path = "patches/dalek-ff-group" }
 minimal-ed448 = { path = "crypto/ed448" }
 modular-frost = { path = "crypto/frost" }
+# This has a non-deprecated `std` alternative since Rust's 2024 edition
+home = { path = "patches/home" }
+# Updates to the latest version
+darling = { path = "patches/darling" }
+thiserror = { path = "patches/thiserror" }
 # https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
 lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
@@ -185,19 +206,22 @@ lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev
 option-ext = { path = "patches/option-ext" }
 directories-next = { path = "patches/directories-next" }
+# Patch from a fork back to upstream
+parity-bip39 = { path = "patches/parity-bip39" }
 # Patch to include `FromUniformBytes<64>` over `Scalar`
 k256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
 p256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
-# Patch due to `std` now including the required functionality
-is_terminal_polyfill = { path = "./patches/is_terminal_polyfill" }
+# `jemalloc` conflicts with `mimalloc`, so patch to a `rocksdb` which never uses `jemalloc`
+librocksdb-sys = { path = "patches/librocksdb-sys" }
 [workspace.lints.clippy]
-incompatible_msrv = "allow" # Manually verified with a GitHub workflow
-manual_is_multiple_of = "allow"
 unwrap_or_default = "allow"
 map_unwrap_or = "allow"
 needless_continue = "allow"
+manual_is_multiple_of = "allow"
+incompatible_msrv = "allow" # Manually verified with a GitHub workflow
 borrow_as_ptr = "deny"
 cast_lossless = "deny"
 cast_possible_truncation = "deny"
@@ -236,7 +260,7 @@ redundant_closure_for_method_calls = "deny"
 redundant_else = "deny"
 string_add_assign = "deny"
 string_slice = "deny"
-unchecked_duration_subtraction = "deny"
+unchecked_time_subtraction = "deny"
 uninlined_format_args = "deny"
 unnecessary_box_returns = "deny"
 unnecessary_join = "deny"
@@ -245,3 +269,6 @@ unnested_or_patterns = "deny"
 unused_async = "deny"
 unused_self = "deny"
 zero_sized_map_values = "deny"
+[workspace.lints.rust]
+unused = "allow" # TODO: https://github.com/rust-lang/rust/issues/147648

View File

@@ -0,0 +1,50 @@
# eVRF DKG

In 2024, the [eVRF paper](https://eprint.iacr.org/2024/397) was published to
the IACR preprint server. Within it was a one-round unbiased DKG and a
one-round unbiased threshold DKG. Unfortunately, both simply describe
communication of the secret shares as 'Alice sends $s_b$ to Bob'. In practice,
this necessitates an additional round of communication in which all
participants confirm they received their secret shares.
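As background (standard Shamir-style sharing in generic notation, not a
construction specific to the eVRF paper), the share in question is the
sender's secret polynomial evaluated at the recipient's index:

$$f_a(x) = \sum_{i=0}^{t-1} a_i x^i, \qquad s_b = f_a(b)$$

where $a_0$ is Alice's secret contribution and $t$ is the threshold.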
Within Serai, it was posited to use the same premises as the DDH eVRF itself to
achieve a verifiable encryption scheme. This allows the secret shares to be
posted to any 'bulletin board' (such as a blockchain) and for all observers to
confirm:

- A participant participated
- The secret shares sent can be received by the intended recipient so long as
  they can access the bulletin board

Additionally, Serai desired a robust scheme (albeit with a biased key as the
output, which is fine for our purposes). Accordingly, our implementation
instantiates the threshold eVRF DKG from the eVRF paper, with our own proposal
for verifiable encryption, with the caller allowed to decide the set of
participants. They may:

- Select everyone, collapsing to the non-threshold unbiased DKG from the eVRF
  paper
- Select a pre-determined set, collapsing to the threshold unbiased DKG from
  the eVRF paper
- Select a post-determined set (with any solution for the Common Subset
  problem), achieving a robust threshold biased DKG

Note that the eVRF paper proposes using the eVRF to sample coefficients, yet
this is unnecessary when the resulting key will be biased. Any proof of
knowledge for the coefficients, as necessary for their extraction within the
security proofs, would be sufficient.

MAGIC Grants contracted HashCloak to formalize Serai's proposal for a DKG and
provide proofs for its security. This resulted in
[this paper](<./Security Proofs.pdf>).

Our implementation itself is then built on top of the audited
[`generalized-bulletproofs`](https://github.com/kayabaNerve/monero-oxide/tree/generalized-bulletproofs/audits/crypto/generalized-bulletproofs)
and
[`generalized-bulletproofs-ec-gadgets`](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/fcmps).
Note we do not use the originally premised DDH eVRF but the one premised on
elliptic curve divisors, the methodology of which is commented on
[here](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/divisors).
Our implementation itself, however, is unaudited at this time.

Binary file not shown.

View File

@@ -17,7 +17,7 @@ rustdoc-args = ["--cfg", "docsrs"]
 workspace = true
 [dependencies]
-parity-db = { version = "0.5", default-features = false, features = ["arc"], optional = true }
+parity-db = { version = "0.5", default-features = false, optional = true }
 rocksdb = { version = "0.24", default-features = false, features = ["zstd"], optional = true }
 [features]

View File

@@ -15,7 +15,7 @@ pub fn serai_db_key(
 ///
 /// Creates a unit struct and a default implementation for the `key`, `get`, and `set`. The macro
 /// uses a syntax similar to defining a function. Parameters are concatenated to produce a key,
-/// they must be `borsh` serializable. The return type is used to auto (de)serialize the database
+/// they must be `scale` encodable. The return type is used to auto encode and decode the database
 /// value bytes using `borsh`.
 ///
 /// # Arguments
@@ -54,10 +54,11 @@ macro_rules! create_db {
       )?;
       impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
         pub(crate) fn key($($arg: $arg_type),*) -> Vec<u8> {
+          use scale::Encode;
           $crate::serai_db_key(
             stringify!($db_name).as_bytes(),
             stringify!($field_name).as_bytes(),
-            &borsh::to_vec(&($($arg),*)).unwrap(),
+            ($($arg),*).encode()
           )
         }
         pub(crate) fn set(
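A hedged usage sketch of `create_db!` after this change, with hypothetical database/field names (the syntax mirrors the `Cosign` and `RemoveParticipant` call sites later in this diff). Keys are now SCALE-encoded while values still round-trip via `borsh`:

```rust
use serai_db::create_db;

create_db! {
  // Illustrative names, not from the codebase
  ExampleDb {
    // `block_number` is SCALE-encoded into the key; the value is (de)serialized with borsh
    BlockHash: (block_number: u64) -> [u8; 32],
  }
}

// Generated (per the macro above): `BlockHash::key`, `BlockHash::get`, and `BlockHash::set`,
// e.g. `BlockHash::set(&mut txn, block_number, &hash);`
```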

View File

@@ -1,5 +1,5 @@
 #![cfg_attr(docsrs, feature(doc_cfg))]
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 // Obtain a variable from the Serai environment/secret store.
 pub fn var(variable: &str) -> Option<String> {

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![deny(missing_docs)]

View File

@@ -1,6 +1,6 @@
 [package]
 name = "simple-request"
-version = "0.2.0"
+version = "0.3.0"
 description = "A simple HTTP(S) request library"
 license = "MIT"
 repository = "https://github.com/serai-dex/serai/tree/develop/common/request"
@@ -19,10 +19,10 @@ workspace = true
 [dependencies]
 tower-service = { version = "0.3", default-features = false }
 hyper = { version = "1", default-features = false, features = ["http1", "client"] }
-hyper-util = { version = "0.1", default-features = false, features = ["http1", "client-legacy", "tokio"] }
+hyper-util = { version = "0.1", default-features = false, features = ["http1", "client-legacy"] }
 http-body-util = { version = "0.1", default-features = false }
 futures-util = { version = "0.3", default-features = false, features = ["std"] }
-tokio = { version = "1", default-features = false }
+tokio = { version = "1", default-features = false, features = ["sync"] }
 hyper-rustls = { version = "0.27", default-features = false, features = ["http1", "ring", "rustls-native-certs", "native-tokio"], optional = true }
@@ -30,7 +30,8 @@ zeroize = { version = "1", optional = true }
 base64ct = { version = "1", features = ["alloc"], optional = true }
 [features]
-tls = ["hyper-rustls"]
+tokio = ["hyper-util/tokio"]
+tls = ["tokio", "hyper-rustls"]
 webpki-roots = ["tls", "hyper-rustls/webpki-roots"]
 basic-auth = ["zeroize", "base64ct"]
 default = ["tls"]

View File

@@ -1,19 +1,20 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
+use core::{pin::Pin, future::Future};
 use std::sync::Arc;
-use tokio::sync::Mutex;
+use futures_util::FutureExt;
+use ::tokio::sync::Mutex;
 use tower_service::Service as TowerService;
+use hyper::{Uri, header::HeaderValue, body::Bytes, client::conn::http1::SendRequest, rt::Executor};
+pub use hyper;
+use hyper_util::client::legacy::{Client as HyperClient, connect::HttpConnector};
 #[cfg(feature = "tls")]
 use hyper_rustls::{HttpsConnectorBuilder, HttpsConnector};
-use hyper::{Uri, header::HeaderValue, body::Bytes, client::conn::http1::SendRequest};
-use hyper_util::{
-  rt::tokio::TokioExecutor,
-  client::legacy::{Client as HyperClient, connect::HttpConnector},
-};
-pub use hyper;
 mod request;
 pub use request::*;
@@ -37,21 +38,32 @@ type Connector = HttpConnector;
 type Connector = HttpsConnector<HttpConnector>;
 #[derive(Clone, Debug)]
-enum Connection {
+enum Connection<
+  E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
+> {
   ConnectionPool(HyperClient<Connector, Full<Bytes>>),
   Connection {
+    executor: E,
     connector: Connector,
     host: Uri,
     connection: Arc<Mutex<Option<SendRequest<Full<Bytes>>>>>,
   },
 }
+/// An HTTP client.
+///
+/// `tls` is only guaranteed to work when using the `tokio` executor. Instantiating a client when
+/// the `tls` feature is active without using the `tokio` executor will cause errors.
 #[derive(Clone, Debug)]
-pub struct Client {
-  connection: Connection,
+pub struct Client<
+  E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
+> {
+  connection: Connection<E>,
 }
-impl Client {
+impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>>
+  Client<E>
+{
   #[allow(clippy::unnecessary_wraps)]
   fn connector() -> Result<Connector, Error> {
     let mut res = HttpConnector::new();
@@ -59,6 +71,15 @@ impl Client {
     res.set_nodelay(true);
     res.set_reuse_address(true);
+    #[cfg(feature = "tls")]
+    if core::any::TypeId::of::<E>() !=
+      core::any::TypeId::of::<hyper_util::rt::tokio::TokioExecutor>()
+    {
+      Err(Error::ConnectionError(
+        "`tls` feature enabled but not using the `tokio` executor".into(),
+      ))?;
+    }
     #[cfg(feature = "tls")]
     res.enforce_http(false);
     #[cfg(feature = "tls")]
@@ -79,19 +100,23 @@ impl Client {
     Ok(res)
   }
-  pub fn with_connection_pool() -> Result<Client, Error> {
+  pub fn with_executor_and_connection_pool(executor: E) -> Result<Client<E>, Error> {
     Ok(Client {
       connection: Connection::ConnectionPool(
-        HyperClient::builder(TokioExecutor::new())
+        HyperClient::builder(executor)
           .pool_idle_timeout(core::time::Duration::from_secs(60))
          .build(Self::connector()?),
       ),
     })
   }
-  pub fn without_connection_pool(host: &str) -> Result<Client, Error> {
+  pub fn with_executor_and_without_connection_pool(
+    executor: E,
+    host: &str,
+  ) -> Result<Client<E>, Error> {
     Ok(Client {
       connection: Connection::Connection {
+        executor,
         connector: Self::connector()?,
         host: {
          let uri: Uri = host.parse().map_err(|_| Error::InvalidUri)?;
@@ -105,7 +130,7 @@ impl Client {
     })
   }
-  pub async fn request<R: Into<Request>>(&self, request: R) -> Result<Response<'_>, Error> {
+  pub async fn request<R: Into<Request>>(&self, request: R) -> Result<Response<'_, E>, Error> {
     let request: Request = request.into();
     let Request { mut request, response_size_limit } = request;
     if let Some(header_host) = request.headers().get(hyper::header::HOST) {
@@ -141,7 +166,7 @@ impl Client {
       Connection::ConnectionPool(client) => {
         client.request(request).await.map_err(Error::HyperUtil)?
       }
-      Connection::Connection { connector, host, connection } => {
+      Connection::Connection { executor, connector, host, connection } => {
        let mut connection_lock = connection.lock().await;
        // If there's not a connection...
@@ -153,9 +178,8 @@ impl Client {
          let call_res = call_res.map_err(Error::ConnectionError);
          let (requester, connection) =
            hyper::client::conn::http1::handshake(call_res?).await.map_err(Error::Hyper)?;
-          // This will die when we drop the requester, so we don't need to track an AbortHandle
-          // for it
-          tokio::spawn(connection);
+          // This task will die when we drop the requester
+          executor.execute(Box::pin(connection.map(|_| ())));
          *connection_lock = Some(requester);
        }
@@ -178,3 +202,22 @@ impl Client {
     Ok(Response { response, size_limit: response_size_limit, client: self })
   }
 }
+#[cfg(feature = "tokio")]
+mod tokio {
+  use hyper_util::rt::tokio::TokioExecutor;
+  use super::*;
+  pub type TokioClient = Client<TokioExecutor>;
+  impl Client<TokioExecutor> {
+    pub fn with_connection_pool() -> Result<Self, Error> {
+      Self::with_executor_and_connection_pool(TokioExecutor::new())
+    }
+    pub fn without_connection_pool(host: &str) -> Result<Self, Error> {
+      Self::with_executor_and_without_connection_pool(TokioExecutor::new(), host)
+    }
+  }
+}
+#[cfg(feature = "tokio")]
+pub use tokio::TokioClient;
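A hedged usage sketch, assuming the new `tokio` feature: the `TokioClient` alias restores the prior constructors atop the now executor-generic `Client`.

```rust
use simple_request::{TokioClient, Request, Error};

// Issue a request over a pooled client; per the documentation above, TLS is
// fine here since this is the `tokio` executor
async fn fetch(request: Request) -> Result<(), Error> {
  let client = TokioClient::with_connection_pool()?;
  let _response = client.request(request).await?;
  Ok(())
}
```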

View File

@@ -1,9 +1,11 @@
+use core::{pin::Pin, future::Future};
 use std::io;
 use hyper::{
   StatusCode,
   header::{HeaderValue, HeaderMap},
   body::Incoming,
+  rt::Executor,
 };
 use http_body_util::BodyExt;
@@ -14,13 +16,18 @@ use crate::{Client, Error};
 // Borrows the client so its async task lives as long as this response exists.
 #[allow(dead_code)]
 #[derive(Debug)]
-pub struct Response<'a> {
+pub struct Response<
+  'a,
+  E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
+> {
   pub(crate) response: hyper::Response<Incoming>,
   pub(crate) size_limit: Option<usize>,
-  pub(crate) client: &'a Client,
+  pub(crate) client: &'a Client<E>,
 }
-impl Response<'_> {
+impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>>
+  Response<'_, E>
+{
   pub fn status(&self) -> StatusCode {
     self.response.status()
   }

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]

View File

@@ -35,6 +35,9 @@ mod mutex_shim {
 pub use mutex_shim::{ShimMutex as Mutex, MutexGuard};
 #[rustversion::before(1.80)]
+pub use spin::Lazy as LazyLock;
+#[rustversion::since(1.80)]
 #[cfg(not(feature = "std"))]
 pub use spin::Lazy as LazyLock;
 #[rustversion::since(1.80)]
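A hedged sketch of the complete gating this hunk implies; the final `std` re-export falls past the hunk boundary and is assumed here:

```rust
#[rustversion::before(1.80)]
pub use spin::Lazy as LazyLock; // pre-1.80 compilers lack `std::sync::LazyLock` entirely
#[rustversion::since(1.80)]
#[cfg(not(feature = "std"))]
pub use spin::Lazy as LazyLock; // no-std builds use `spin` regardless of compiler version
#[rustversion::since(1.80)]
#[cfg(feature = "std")]
pub use std::sync::LazyLock; // assumed: the `std` re-export just past this hunk
```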

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![deny(missing_docs)]

View File

@@ -1,5 +1,5 @@
 #![cfg_attr(docsrs, feature(doc_cfg))]
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![cfg_attr(all(zalloc_rustc_nightly, feature = "allocator"), feature(allocator_api))]
 //! Implementation of a Zeroizing Allocator, enabling zeroizing memory on deallocation.
//! Implementation of a Zeroizing Allocator, enabling zeroizing memory on deallocation. //! Implementation of a Zeroizing Allocator, enabling zeroizing memory on deallocation.

View File

@@ -31,6 +31,7 @@ frost = { package = "modular-frost", path = "../crypto/frost" }
 frost-schnorrkel = { path = "../crypto/schnorrkel" }
 hex = { version = "0.4", default-features = false, features = ["std"] }
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive", "bit-vec"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
 zalloc = { path = "../common/zalloc" }
@@ -42,7 +43,7 @@ messages = { package = "serai-processor-messages", path = "../processor/messages
 message-queue = { package = "serai-message-queue", path = "../message-queue" }
 tributary-sdk = { path = "./tributary-sdk" }
-serai-client = { path = "../substrate/client", default-features = false, features = ["serai"] }
+serai-client = { path = "../substrate/client", default-features = false, features = ["serai", "borsh"] }
 log = { version = "0.4", default-features = false, features = ["std"] }
 env_logger = { version = "0.10", default-features = false, features = ["humantime"] }

View File

@@ -19,9 +19,11 @@ workspace = true
 [dependencies]
 blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
+schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
-serai-client = { path = "../../substrate/client", default-features = false, features = ["serai"] }
+serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] }
 log = { version = "0.4", default-features = false, features = ["std"] }
@@ -29,5 +31,3 @@ tokio = { version = "1", default-features = false }
 serai-db = { path = "../../common/db", version = "0.1.1" }
 serai-task = { path = "../../common/task", version = "0.1" }
-serai-cosign-types = { path = "./types" }

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![deny(missing_docs)]
@@ -7,6 +7,7 @@ use std::{sync::Arc, collections::HashMap, time::Instant};
 use blake2::{Digest, Blake2s256};
+use scale::{Encode, Decode};
 use borsh::{BorshSerialize, BorshDeserialize};
 use serai_client::{
@@ -18,8 +19,6 @@ use serai_client::{
 use serai_db::*;
 use serai_task::*;
-use serai_cosign_types::*;
 /// The cosigns which are intended to be performed.
 mod intend;
 /// The evaluator of the cosigns.
@@ -29,6 +28,9 @@ mod delay;
 pub use delay::BROADCAST_FREQUENCY;
 use delay::LatestCosignedBlockNumber;
+/// The schnorrkel context to used when signing a cosign.
+pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign";
 /// A 'global session', defined as all validator sets used for cosigning at a given moment.
 ///
 /// We evaluate cosign faults within a global session. This ensures even if cosigners cosign
@@ -76,6 +78,68 @@ enum HasEvents {
   No,
 }
+/// An intended cosign.
+#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+pub struct CosignIntent {
+  /// The global session this cosign is being performed under.
+  pub global_session: [u8; 32],
+  /// The number of the block to cosign.
+  pub block_number: u64,
+  /// The hash of the block to cosign.
+  pub block_hash: [u8; 32],
+  /// If this cosign must be handled before further cosigns are.
+  pub notable: bool,
+}
+/// A cosign.
+#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode, BorshSerialize, BorshDeserialize)]
+pub struct Cosign {
+  /// The global session this cosign is being performed under.
+  pub global_session: [u8; 32],
+  /// The number of the block to cosign.
+  pub block_number: u64,
+  /// The hash of the block to cosign.
+  pub block_hash: [u8; 32],
+  /// The actual cosigner.
+  pub cosigner: ExternalNetworkId,
+}
+impl CosignIntent {
+  /// Convert this into a `Cosign`.
+  pub fn into_cosign(self, cosigner: ExternalNetworkId) -> Cosign {
+    let CosignIntent { global_session, block_number, block_hash, notable: _ } = self;
+    Cosign { global_session, block_number, block_hash, cosigner }
+  }
+}
+impl Cosign {
+  /// The message to sign to sign this cosign.
+  ///
+  /// This must be signed with schnorrkel, the context set to `COSIGN_CONTEXT`.
+  pub fn signature_message(&self) -> Vec<u8> {
+    // We use a schnorrkel context to domain-separate this
+    self.encode()
+  }
+}
+/// A signed cosign.
+#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)]
+pub struct SignedCosign {
+  /// The cosign.
+  pub cosign: Cosign,
+  /// The signature for the cosign.
+  pub signature: [u8; 64],
+}
+impl SignedCosign {
+  fn verify_signature(&self, signer: serai_client::Public) -> bool {
+    let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false };
+    let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false };
+    signer.verify_simple(COSIGN_CONTEXT, &self.cosign.signature_message(), &signature).is_ok()
+  }
+}
 create_db! {
   Cosign {
     // The following are populated by the intend task and used throughout the library
View File

@@ -1,25 +0,0 @@
[package]
name = "serai-cosign-types"
version = "0.1.0"
description = "Evaluator of cosigns for the Serai network"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/cosign"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.85"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-primitives = { path = "../../../substrate/primitives", default-features = false, features = ["std"] }

View File

@@ -1,72 +0,0 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![deny(missing_docs)]
//! Types used when cosigning Serai. For more info, please see `serai-cosign`.
use borsh::{BorshSerialize, BorshDeserialize};
use serai_primitives::{crypto::Public, network_id::ExternalNetworkId};
/// The schnorrkel context to used when signing a cosign.
pub const COSIGN_CONTEXT: &[u8] = b"/serai/coordinator/cosign";
/// An intended cosign.
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct CosignIntent {
/// The global session this cosign is being performed under.
pub global_session: [u8; 32],
/// The number of the block to cosign.
pub block_number: u64,
/// The hash of the block to cosign.
pub block_hash: [u8; 32],
/// If this cosign must be handled before further cosigns are.
pub notable: bool,
}
/// A cosign.
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct Cosign {
/// The global session this cosign is being performed under.
pub global_session: [u8; 32],
/// The number of the block to cosign.
pub block_number: u64,
/// The hash of the block to cosign.
pub block_hash: [u8; 32],
/// The actual cosigner.
pub cosigner: ExternalNetworkId,
}
impl CosignIntent {
/// Convert this into a `Cosign`.
pub fn into_cosign(self, cosigner: ExternalNetworkId) -> Cosign {
let CosignIntent { global_session, block_number, block_hash, notable: _ } = self;
Cosign { global_session, block_number, block_hash, cosigner }
}
}
impl Cosign {
/// The message to sign to sign this cosign.
///
/// This must be signed with schnorrkel, the context set to `COSIGN_CONTEXT`.
pub fn signature_message(&self) -> Vec<u8> {
// We use a schnorrkel context to domain-separate this
borsh::to_vec(self).unwrap()
}
}
/// A signed cosign.
#[derive(Clone, Debug, BorshSerialize, BorshDeserialize)]
pub struct SignedCosign {
/// The cosign.
pub cosign: Cosign,
/// The signature for the cosign.
pub signature: [u8; 64],
}
impl SignedCosign {
/// Verify a cosign's signature.
pub fn verify_signature(&self, signer: Public) -> bool {
let Ok(signer) = schnorrkel::PublicKey::from_bytes(&signer.0) else { return false };
let Ok(signature) = schnorrkel::Signature::from_bytes(&self.signature) else { return false };
signer.verify_simple(COSIGN_CONTEXT, &self.cosign.signature_message(), &signature).is_ok()
}
}

View File

@@ -22,7 +22,7 @@ borsh = { version = "1", default-features = false, features = ["std", "derive",
 serai-db = { path = "../../common/db", version = "0.1" }
-serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] }
+serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] }
 serai-cosign = { path = "../cosign" }
 tributary-sdk = { path = "../tributary-sdk" }

View File

@@ -29,7 +29,7 @@ schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
 hex = { version = "0.4", default-features = false, features = ["std"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
-serai-client = { path = "../../../substrate/client", default-features = false, features = ["serai"] }
+serai-client = { path = "../../../substrate/client", default-features = false, features = ["serai", "borsh"] }
 serai-cosign = { path = "../../cosign" }
 tributary-sdk = { path = "../../tributary-sdk" }

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![deny(missing_docs)]

View File

@@ -92,7 +92,8 @@ impl SwarmTask {
         }
       }
       gossip::Event::Subscribed { .. } | gossip::Event::Unsubscribed { .. } => {}
-      gossip::Event::GossipsubNotSupported { peer_id } => {
+      gossip::Event::GossipsubNotSupported { peer_id } |
+      gossip::Event::SlowPeer { peer_id, .. } => {
         let _: Result<_, _> = self.swarm.disconnect_peer_id(peer_id);
       }
     }

View File

@@ -1,7 +1,7 @@
 use core::future::Future;
 use std::time::{Duration, SystemTime};
-use serai_primitives::{MAX_KEY_SHARES_PER_SET, ExternalValidatorSet};
+use serai_client::validator_sets::primitives::{MAX_KEY_SHARES_PER_SET, ExternalValidatorSet};
 use futures_lite::FutureExt;

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![deny(missing_docs)]
@@ -7,7 +7,7 @@ use std::collections::HashMap;
 use borsh::{BorshSerialize, BorshDeserialize};
-use serai_primitives::{network_id::ExternalNetworkId, validator_sets::ExternalValidatorSet};
+use serai_client::{primitives::ExternalNetworkId, validator_sets::primitives::ExternalValidatorSet};
 use serai_db::Db;
 use tributary_sdk::{ReadWrite, TransactionTrait, Tributary, TributaryReader};

View File

@@ -103,7 +103,7 @@ mod _internal_db {
     // Tributary transactions to publish from the DKG confirmation task
     TributaryTransactionsFromDkgConfirmation: (set: ExternalValidatorSet) -> Transaction,
     // Participants to remove
-    RemoveParticipant: (set: ExternalValidatorSet) -> Participant,
+    RemoveParticipant: (set: ExternalValidatorSet) -> u16,
   }
 }
@@ -139,10 +139,11 @@ impl RemoveParticipant {
   pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, participant: Participant) {
     // If this set has yet to be retired, send this transaction
     if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) {
-      _internal_db::RemoveParticipant::send(txn, set, &participant);
+      _internal_db::RemoveParticipant::send(txn, set, &u16::from(participant));
     }
   }
   pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<Participant> {
     _internal_db::RemoveParticipant::try_recv(txn, set)
+      .map(|i| Participant::new(i).expect("sent invalid participant index for removal"))
   }
 }

View File

@@ -284,7 +284,7 @@ async fn handle_network(
         &mut txn,
         ExternalValidatorSet { network, session },
         slash_report,
-        Signature(signature),
+        Signature::from(signature),
       );
     }
   },

View File

@@ -11,6 +11,7 @@ use tokio::sync::mpsc;
 use serai_db::{Get, DbTxn, Db as DbTrait, create_db, db_channel};
+use scale::Encode;
 use serai_client::validator_sets::primitives::ExternalValidatorSet;
 use tributary_sdk::{TransactionKind, TransactionError, ProvidedError, TransactionTrait, Tributary};
@@ -478,8 +479,7 @@ pub(crate) async fn spawn_tributary<P: P2p>(
     return;
   }
-  let genesis =
-    <[u8; 32]>::from(Blake2s::<U32>::digest(borsh::to_vec(&(set.serai_block, set.set)).unwrap()));
+  let genesis = <[u8; 32]>::from(Blake2s::<U32>::digest((set.serai_block, set.set).encode()));
   // Since the Serai block will be finalized, then cosigned, before we handle this, this time will
   // be a couple of minutes stale. While the Tributary will still function with a start time in the
View File

@@ -20,11 +20,12 @@ workspace = true
 [dependencies]
 bitvec = { version = "1", default-features = false, features = ["std"] }
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive", "bit-vec"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
 dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] }
-serai-client = { path = "../../substrate/client", version = "0.1", default-features = false, features = ["serai"] }
+serai-client = { path = "../../substrate/client", version = "0.1", default-features = false, features = ["serai", "borsh"] }
 log = { version = "0.4", default-features = false, features = ["std"] }

View File

@@ -1,9 +1,10 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![deny(missing_docs)]
 use std::collections::HashMap;
+use scale::{Encode, Decode};
 use borsh::{BorshSerialize, BorshDeserialize};
 use dkg::Participant;
@@ -177,13 +178,14 @@ impl Keys {
       signature_participants,
       signature,
     );
-    _public_db::Keys::set(txn, set.network, &(set.session, tx));
+    _public_db::Keys::set(txn, set.network, &(set.session, tx.encode()));
   }
   pub(crate) fn take(
     txn: &mut impl DbTxn,
     network: ExternalNetworkId,
   ) -> Option<(Session, Transaction)> {
-    _public_db::Keys::take(txn, network)
+    let (session, tx) = _public_db::Keys::take(txn, network)?;
+    Some((session, <_>::decode(&mut tx.as_slice()).unwrap()))
   }
 }
@@ -224,12 +226,13 @@ impl SlashReports {
       slash_report,
       signature,
     );
-    _public_db::SlashReports::set(txn, set.network, &(set.session, tx));
+    _public_db::SlashReports::set(txn, set.network, &(set.session, tx.encode()));
   }
   pub(crate) fn take(
     txn: &mut impl DbTxn,
     network: ExternalNetworkId,
   ) -> Option<(Session, Transaction)> {
-    _public_db::SlashReports::take(txn, network)
+    let (session, tx) = _public_db::SlashReports::take(txn, network)?;
+    Some((session, <_>::decode(&mut tx.as_slice()).unwrap()))
   }
 }
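Both accessors now persist the transaction as SCALE bytes and decode on the way out, keeping the stored tuple a plain `(Session, Vec<u8>)`. A self-contained sketch of the pattern (hypothetical `Store` type, not serai-db's API):

```rust
use scale::{Encode, Decode};

struct Store(Option<Vec<u8>>);

impl Store {
  fn set<T: Encode>(&mut self, value: &T) {
    // Persist the SCALE encoding rather than the typed value
    self.0 = Some(value.encode());
  }

  fn take<T: Decode>(&mut self) -> Option<T> {
    let bytes = self.0.take()?;
    // These bytes were produced by `set`, so decoding must succeed
    Some(T::decode(&mut bytes.as_slice()).unwrap())
  }
}
```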

View File

@@ -36,7 +36,7 @@ log = { version = "0.4", default-features = false, features = ["std"] }
 serai-db = { path = "../../common/db", version = "0.1" }
-borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
 futures-util = { version = "0.3", default-features = false, features = ["std", "sink", "channel"] }
 futures-channel = { version = "0.3", default-features = false, features = ["std", "sink"] }
 tendermint = { package = "tendermint-machine", path = "./tendermint", version = "0.2" }

View File

@@ -5,7 +5,7 @@ use ciphersuite::{group::GroupEncoding, *};
 use serai_db::{Get, DbTxn, Db};
-use borsh::BorshDeserialize;
+use scale::Decode;
 use tendermint::ext::{Network, Commit};
@@ -62,7 +62,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
     D::key(
       b"tributary_blockchain",
       b"next_nonce",
-      [genesis.as_slice(), signer.to_bytes().as_slice(), order].concat(),
+      [genesis.as_ref(), signer.to_bytes().as_ref(), order].concat(),
     )
   }
@@ -109,7 +109,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
   pub(crate) fn block_from_db(db: &D, genesis: [u8; 32], block: &[u8; 32]) -> Option<Block<T>> {
     db.get(Self::block_key(&genesis, block))
-      .map(|bytes| Block::<T>::read::<&[u8]>(&mut bytes.as_slice()).unwrap())
+      .map(|bytes| Block::<T>::read::<&[u8]>(&mut bytes.as_ref()).unwrap())
   }
   pub(crate) fn commit_from_db(db: &D, genesis: [u8; 32], block: &[u8; 32]) -> Option<Vec<u8>> {
@@ -169,7 +169,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
       // we must have a commit per valid hash
       let commit = Self::commit_from_db(db, genesis, &hash).unwrap();
       // commit has to be valid if it is coming from our db
-      Some(Commit::<N::SignatureScheme>::deserialize_reader(&mut commit.as_slice()).unwrap())
+      Some(Commit::<N::SignatureScheme>::decode(&mut commit.as_ref()).unwrap())
     };
     let unsigned_in_chain =
       |hash: [u8; 32]| db.get(Self::unsigned_included_key(&self.genesis, &hash)).is_some();
@@ -244,7 +244,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
     let commit = |block: u64| -> Option<Commit<N::SignatureScheme>> {
       let commit = self.commit_by_block_number(block)?;
       // commit has to be valid if it is coming from our db
-      Some(Commit::<N::SignatureScheme>::deserialize_reader(&mut commit.as_slice()).unwrap())
+      Some(Commit::<N::SignatureScheme>::decode(&mut commit.as_ref()).unwrap())
     };
     let mut txn_db = db.clone();

View File

@@ -3,11 +3,10 @@ use std::{sync::Arc, io};
 use zeroize::Zeroizing;
-use borsh::BorshDeserialize;
 use ciphersuite::*;
 use dalek_ff_group::Ristretto;
+use scale::Decode;
 use futures_channel::mpsc::UnboundedReceiver;
 use futures_util::{StreamExt, SinkExt};
 use ::tendermint::{
@@ -178,7 +177,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
     let block_number = BlockNumber(blockchain.block_number());
     let start_time = if let Some(commit) = blockchain.commit(&blockchain.tip()) {
-      Commit::<Validators>::deserialize_reader(&mut commit.as_slice()).unwrap().end_time
+      Commit::<Validators>::decode(&mut commit.as_ref()).unwrap().end_time
     } else {
       start_time
     };
@@ -277,8 +276,8 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
     }
     let block = TendermintBlock(block.serialize());
-    let mut commit_ref = commit.as_slice();
-    let Ok(commit) = Commit::<Arc<Validators>>::deserialize_reader(&mut commit_ref) else {
+    let mut commit_ref = commit.as_ref();
+    let Ok(commit) = Commit::<Arc<Validators>>::decode(&mut commit_ref) else {
       log::error!("sent an invalidly serialized commit");
       return false;
     };
@@ -328,7 +327,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
       Some(&TENDERMINT_MESSAGE) => {
         let Ok(msg) =
-          SignedMessageFor::<TendermintNetwork<D, T, P>>::deserialize_reader(&mut &msg[1 ..])
+          SignedMessageFor::<TendermintNetwork<D, T, P>>::decode::<&[u8]>(&mut &msg[1 ..])
         else {
           log::error!("received invalid tendermint message");
           return false;
@@ -368,17 +367,15 @@ impl<D: Db, T: TransactionTrait> TributaryReader<D, T> {
     Blockchain::<D, T>::commit_from_db(&self.0, self.1, hash)
   }
   pub fn parsed_commit(&self, hash: &[u8; 32]) -> Option<Commit<Validators>> {
-    self
-      .commit(hash)
-      .map(|commit| Commit::<Validators>::deserialize_reader(&mut commit.as_slice()).unwrap())
+    self.commit(hash).map(|commit| Commit::<Validators>::decode(&mut commit.as_ref()).unwrap())
   }
   pub fn block_after(&self, hash: &[u8; 32]) -> Option<[u8; 32]> {
     Blockchain::<D, T>::block_after(&self.0, self.1, hash)
   }
   pub fn time_of_block(&self, hash: &[u8; 32]) -> Option<u64> {
-    self.commit(hash).map(|commit| {
-      Commit::<Validators>::deserialize_reader(&mut commit.as_slice()).unwrap().end_time
-    })
+    self
+      .commit(hash)
+      .map(|commit| Commit::<Validators>::decode(&mut commit.as_ref()).unwrap().end_time)
   }
   pub fn locally_provided_txs_in_block(&self, hash: &[u8; 32], order: &str) -> bool {

View File

@@ -21,7 +21,7 @@ use schnorr::{
 use serai_db::Db;
-use borsh::{BorshSerialize, BorshDeserialize};
+use scale::{Encode, Decode};
 use tendermint::{
   SignedMessageFor,
   ext::{
@@ -248,7 +248,7 @@ impl Weights for Validators {
   }
 }
-#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
 pub struct TendermintBlock(pub Vec<u8>);
 impl BlockTrait for TendermintBlock {
   type Id = [u8; 32];
@@ -300,7 +300,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Network for TendermintNetwork<D, T, P>
   fn broadcast(&mut self, msg: SignedMessageFor<Self>) -> impl Send + Future<Output = ()> {
     async move {
       let mut to_broadcast = vec![TENDERMINT_MESSAGE];
-      msg.serialize(&mut to_broadcast).unwrap();
+      to_broadcast.extend(msg.encode());
      self.p2p.broadcast(self.genesis, to_broadcast).await
    }
  }
@@ -390,7 +390,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Network for TendermintNetwork<D, T, P>
       return invalid_block();
     };
-    let encoded_commit = borsh::to_vec(&commit).unwrap();
+    let encoded_commit = commit.encode();
     loop {
       let block_res = self.blockchain.write().await.add_block::<Self>(
         &block,
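Broadcasts are framed as a one-byte message kind followed by the SCALE encoding, matching the `Some(&TENDERMINT_MESSAGE)` decode path in the earlier hunk. A sketch of that framing in isolation (the constant's value here is a stand-in):

```rust
use scale::{Encode, Decode};

const TENDERMINT_MESSAGE: u8 = 0; // stand-in value for the real constant

fn frame<T: Encode>(msg: &T) -> Vec<u8> {
  let mut to_broadcast = vec![TENDERMINT_MESSAGE];
  to_broadcast.extend(msg.encode());
  to_broadcast
}

fn unframe<T: Decode>(msg: &[u8]) -> Option<T> {
  match msg.first() {
    // Strip the kind byte, then decode the rest as SCALE
    Some(&TENDERMINT_MESSAGE) => T::decode::<&[u8]>(&mut &msg[1 ..]).ok(),
    _ => None,
  }
}
```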

View File

@@ -1,6 +1,6 @@
 use std::io;
-use borsh::BorshDeserialize;
+use scale::{Encode, Decode, IoReader};
 use blake2::{Digest, Blake2s256};
@@ -27,14 +27,14 @@ pub enum TendermintTx {
 impl ReadWrite for TendermintTx {
   fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
-    Evidence::deserialize_reader(reader)
+    Evidence::decode(&mut IoReader(reader))
       .map(TendermintTx::SlashEvidence)
       .map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "invalid evidence format"))
   }
   fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
     match self {
-      TendermintTx::SlashEvidence(ev) => writer.write_all(&borsh::to_vec(&ev).unwrap()),
+      TendermintTx::SlashEvidence(ev) => writer.write_all(&ev.encode()),
     }
   }
 }
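`read` takes an `io::Read`, while SCALE's `Decode` wants its own `Input`; `IoReader` bridges the two. A minimal sketch of that adapter on its own, assuming only parity-scale-codec's `IoReader` as used above:

```rust
use std::io;
use scale::{Decode, IoReader};

fn read_scale<T: Decode, R: io::Read>(reader: &mut R) -> io::Result<T> {
  // IoReader wraps any io::Read so it can serve as a SCALE Input
  T::decode(&mut IoReader(reader))
    .map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "invalid SCALE encoding"))
}
```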

View File

@@ -10,6 +10,8 @@ use dalek_ff_group::Ristretto;
 use ciphersuite::*;
 use schnorr::SchnorrSignature;
+use scale::Encode;
 use ::tendermint::{
   ext::{Network, Signer as SignerTrait, SignatureScheme, BlockNumber, RoundNumber},
   SignedMessageFor, DataFor, Message, SignedMessage, Data, Evidence,
@@ -198,7 +200,7 @@ pub async fn signed_from_data<N: Network>(
     round: RoundNumber(round_number),
     data,
   };
-  let sig = signer.sign(&borsh::to_vec(&msg).unwrap()).await;
+  let sig = signer.sign(&msg.encode()).await;
  SignedMessage { msg, sig }
 }
@@ -211,5 +213,5 @@ pub async fn random_evidence_tx<N: Network>(
   let data = Data::Proposal(Some(RoundNumber(0)), b);
   let signer_id = signer.validator_id().await.unwrap();
   let signed = signed_from_data::<N>(signer, signer_id, 0, 0, data).await;
-  TendermintTx::SlashEvidence(Evidence::InvalidValidRound(borsh::to_vec(&signed).unwrap()))
+  TendermintTx::SlashEvidence(Evidence::InvalidValidRound(signed.encode()))
 }

View File

@@ -6,6 +6,8 @@ use rand::{RngCore, rngs::OsRng};
 use dalek_ff_group::Ristretto;
 use ciphersuite::*;
+use scale::Encode;
 use tendermint::{
   time::CanonicalInstant,
   round::RoundData,
@@ -50,10 +52,7 @@ async fn invalid_valid_round() {
     async move {
       let data = Data::Proposal(valid_round, TendermintBlock(vec![]));
       let signed = signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, data).await;
-      (
-        signed.clone(),
-        TendermintTx::SlashEvidence(Evidence::InvalidValidRound(borsh::to_vec(&signed).unwrap())),
-      )
+      (signed.clone(), TendermintTx::SlashEvidence(Evidence::InvalidValidRound(signed.encode())))
     }
   };
@@ -71,8 +70,7 @@ async fn invalid_valid_round() {
   let mut random_sig = [0u8; 64];
   OsRng.fill_bytes(&mut random_sig);
   signed.sig = random_sig;
-  let tx =
-    TendermintTx::SlashEvidence(Evidence::InvalidValidRound(borsh::to_vec(&signed).unwrap()));
+  let tx = TendermintTx::SlashEvidence(Evidence::InvalidValidRound(signed.encode()));
   // should fail
   assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
@@ -92,10 +90,7 @@ async fn invalid_precommit_signature() {
     let signed =
       signed_from_data::<N>(signer.clone().into(), signer_id, 1, 0, Data::Precommit(precommit))
        .await;
-    (
-      signed.clone(),
-      TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(borsh::to_vec(&signed).unwrap())),
-    )
+    (signed.clone(), TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(signed.encode())))
   }
 };
@@ -125,8 +120,7 @@ async fn invalid_precommit_signature() {
   let mut random_sig = [0u8; 64];
   OsRng.fill_bytes(&mut random_sig);
   signed.sig = random_sig;
-  let tx =
-    TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(borsh::to_vec(&signed).unwrap()));
+  let tx = TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(signed.encode()));
   assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
 }
}
@@ -144,32 +138,24 @@ async fn evidence_with_prevote() {
   // it should fail for all reasons.
   let mut txs = vec![];
   txs.push(TendermintTx::SlashEvidence(Evidence::InvalidPrecommit(
-    borsh::to_vec(
-      &&signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
-        .await,
-    )
-    .unwrap(),
+    signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
+      .await
+      .encode(),
   )));
   txs.push(TendermintTx::SlashEvidence(Evidence::InvalidValidRound(
-    borsh::to_vec(
-      &signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
-        .await,
-    )
-    .unwrap(),
+    signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
+      .await
+      .encode(),
  )));
   // Since these require a second message, provide this one again
   // ConflictingMessages can be fired for actually conflicting Prevotes however
   txs.push(TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    borsh::to_vec(
-      &signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
-        .await,
-    )
-    .unwrap(),
-    borsh::to_vec(
-      &signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
-        .await,
-    )
-    .unwrap(),
+    signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
+      .await
+      .encode(),
+    signed_from_data::<N>(signer.clone().into(), signer_id, 0, 0, Data::Prevote(block_id))
+      .await
+      .encode(),
   )));
   txs
 }
@@ -203,16 +189,16 @@ async fn conflicting_msgs_evidence_tx() {
   // non-conflicting data should fail
   let signed_1 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![0x11]))).await;
   let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    borsh::to_vec(&signed_1).unwrap(),
-    borsh::to_vec(&signed_1).unwrap(),
+    signed_1.encode(),
+    signed_1.encode(),
   ));
   assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
   // conflicting data should pass
   let signed_2 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![0x22]))).await;
   let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    borsh::to_vec(&signed_1).unwrap(),
-    borsh::to_vec(&signed_2).unwrap(),
+    signed_1.encode(),
+    signed_2.encode(),
   ));
   verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap();
@@ -220,16 +206,16 @@ async fn conflicting_msgs_evidence_tx() {
   // (except for Precommit)
   let signed_2 = signed_for_b_r(0, 1, Data::Proposal(None, TendermintBlock(vec![0x22]))).await;
   let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    borsh::to_vec(&signed_1).unwrap(),
-    borsh::to_vec(&signed_2).unwrap(),
+    signed_1.encode(),
+    signed_2.encode(),
   ));
   verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
   // Proposals for different block numbers should also fail as evidence
   let signed_2 = signed_for_b_r(1, 0, Data::Proposal(None, TendermintBlock(vec![0x22]))).await;
   let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    borsh::to_vec(&signed_1).unwrap(),
-    borsh::to_vec(&signed_2).unwrap(),
+    signed_1.encode(),
+    signed_2.encode(),
   ));
   verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
 }
@@ -239,16 +225,16 @@ async fn conflicting_msgs_evidence_tx() {
   // non-conflicting data should fail
   let signed_1 = signed_for_b_r(0, 0, Data::Prevote(Some([0x11; 32]))).await;
   let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    borsh::to_vec(&signed_1).unwrap(),
-    borsh::to_vec(&signed_1).unwrap(),
+    signed_1.encode(),
+    signed_1.encode(),
   ));
   assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
   // conflicting data should pass
   let signed_2 = signed_for_b_r(0, 0, Data::Prevote(Some([0x22; 32]))).await;
   let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    borsh::to_vec(&signed_1).unwrap(),
-    borsh::to_vec(&signed_2).unwrap(),
+    signed_1.encode(),
+    signed_2.encode(),
   ));
   verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap();
@@ -256,16 +242,16 @@ async fn conflicting_msgs_evidence_tx() {
   // (except for Precommit)
   let signed_2 = signed_for_b_r(0, 1, Data::Prevote(Some([0x22; 32]))).await;
   let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    borsh::to_vec(&signed_1).unwrap(),
-    borsh::to_vec(&signed_2).unwrap(),
+    signed_1.encode(),
+    signed_2.encode(),
   ));
   verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
   // Proposals for different block numbers should also fail as evidence
   let signed_2 = signed_for_b_r(1, 0, Data::Prevote(Some([0x22; 32]))).await;
   let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    borsh::to_vec(&signed_1).unwrap(),
-    borsh::to_vec(&signed_2).unwrap(),
+    signed_1.encode(),
+    signed_2.encode(),
   ));
   verify_tendermint_tx::<N>(&tx, &validators, commit).unwrap_err();
 }
@@ -287,8 +273,8 @@ async fn conflicting_msgs_evidence_tx() {
     .await;
   let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    borsh::to_vec(&signed_1).unwrap(),
-    borsh::to_vec(&signed_2).unwrap(),
+    signed_1.encode(),
+    signed_2.encode(),
   ));
   // update schema so that we don't fail due to invalid signature
@@ -306,8 +292,8 @@ async fn conflicting_msgs_evidence_tx() {
   let signed_1 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![]))).await;
   let signed_2 = signed_for_b_r(0, 0, Data::Prevote(None)).await;
   let tx = TendermintTx::SlashEvidence(Evidence::ConflictingMessages(
-    borsh::to_vec(&signed_1).unwrap(),
-    borsh::to_vec(&signed_2).unwrap(),
+    signed_1.encode(),
+    signed_2.encode(),
   ));
   assert!(verify_tendermint_tx::<N>(&tx, &validators, commit).is_err());
 }

View File

@@ -21,7 +21,7 @@ thiserror = { version = "2", default-features = false, features = ["std"] }
 hex = { version = "0.4", default-features = false, features = ["std"] }
 log = { version = "0.4", default-features = false, features = ["std"] }
-borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
+parity-scale-codec = { version = "3", default-features = false, features = ["std", "derive"] }
 futures-util = { version = "0.3", default-features = false, features = ["std", "async-await-macro", "sink", "channel"] }
 futures-channel = { version = "0.3", default-features = false, features = ["std", "sink"] }

View File

@@ -3,41 +3,33 @@ use std::{sync::Arc, collections::HashSet};
 use thiserror::Error;
-use borsh::{BorshSerialize, BorshDeserialize};
+use parity_scale_codec::{Encode, Decode};
 use crate::{SignedMessageFor, SlashEvent, commit_msg};
 /// An alias for a series of traits required for a type to be usable as a validator ID,
 /// automatically implemented for all types satisfying those traits.
 pub trait ValidatorId:
-  Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + BorshSerialize + BorshDeserialize
+  Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + Encode + Decode
 {
 }
-#[rustfmt::skip]
-impl<
-  V: Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + BorshSerialize + BorshDeserialize,
-> ValidatorId for V
+impl<V: Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + Encode + Decode> ValidatorId
+  for V
 {
 }
 /// An alias for a series of traits required for a type to be usable as a signature,
 /// automatically implemented for all types satisfying those traits.
-pub trait Signature:
-  Send + Sync + Clone + PartialEq + Eq + Debug + BorshSerialize + BorshDeserialize
-{
-}
-impl<S: Send + Sync + Clone + PartialEq + Eq + Debug + BorshSerialize + BorshDeserialize> Signature
-  for S
-{
-}
+pub trait Signature: Send + Sync + Clone + PartialEq + Eq + Debug + Encode + Decode {}
+impl<S: Send + Sync + Clone + PartialEq + Eq + Debug + Encode + Decode> Signature for S {}
 // Type aliases which are distinct according to the type system
 /// A struct containing a Block Number, wrapped to have a distinct type.
-#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)]
 pub struct BlockNumber(pub u64);
 /// A struct containing a round number, wrapped to have a distinct type.
-#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)]
 pub struct RoundNumber(pub u32);
 /// A signer for a validator.
@@ -135,7 +127,7 @@ impl<S: SignatureScheme> SignatureScheme for Arc<S> {
 /// A commit for a specific block.
 ///
 /// The list of validators have weight exceeding the threshold for a valid commit.
-#[derive(PartialEq, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(PartialEq, Debug, Encode, Decode)]
 pub struct Commit<S: SignatureScheme> {
   /// End time of the round which created this commit, used as the start time of the next block.
   pub end_time: u64,
@@ -193,7 +185,7 @@ impl<W: Weights> Weights for Arc<W> {
 }
 /// Simplified error enum representing a block's validity.
-#[derive(Clone, Copy, PartialEq, Eq, Debug, Error, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, Copy, PartialEq, Eq, Debug, Error, Encode, Decode)]
 pub enum BlockError {
   /// Malformed block which is wholly invalid.
   #[error("invalid block")]
@@ -205,20 +197,9 @@ pub enum BlockError {
 }
 /// Trait representing a Block.
-pub trait Block:
-  Send + Sync + Clone + PartialEq + Eq + Debug + BorshSerialize + BorshDeserialize
-{
+pub trait Block: Send + Sync + Clone + PartialEq + Eq + Debug + Encode + Decode {
   // Type used to identify blocks. Presumably a cryptographic hash of the block.
-  type Id: Send
-    + Sync
-    + Copy
-    + Clone
-    + PartialEq
-    + Eq
-    + AsRef<[u8]>
-    + Debug
-    + BorshSerialize
-    + BorshDeserialize;
+  type Id: Send + Sync + Copy + Clone + PartialEq + Eq + AsRef<[u8]> + Debug + Encode + Decode;
   /// Return the deterministic, unique ID for this block.
   fn id(&self) -> Self::Id;
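With `id` the only required item shown here, implementing Block is small. A minimal sketch, assuming the crate is imported as `tendermint` (as this workspace aliases tendermint-machine) and that `id` is all a type must provide, mirroring the TendermintBlock and TestBlock impls elsewhere in this tree:

```rust
use parity_scale_codec::{Encode, Decode};
use tendermint::ext::Block;

#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
struct ExampleBlock {
  hash: [u8; 32],
}

impl Block for ExampleBlock {
  // [u8; 32] satisfies every Id bound: Copy, AsRef<[u8]>, Encode, Decode, ...
  type Id = [u8; 32];
  fn id(&self) -> Self::Id {
    self.hash
  }
}
```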

View File

@@ -1,3 +1,5 @@
+#![expect(clippy::cast_possible_truncation)]
 use core::fmt::Debug;
 use std::{
@@ -6,7 +8,7 @@ use std::{
   collections::{VecDeque, HashMap},
 };
-use borsh::{BorshSerialize, BorshDeserialize};
+use parity_scale_codec::{Encode, Decode, IoReader};
 use futures_channel::mpsc;
 use futures_util::{
@@ -41,14 +43,14 @@ pub fn commit_msg(end_time: u64, id: &[u8]) -> Vec<u8> {
   [&end_time.to_le_bytes(), id].concat()
 }
-#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)]
 pub enum Step {
   Propose,
   Prevote,
   Precommit,
 }
-#[derive(Clone, Eq, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, Eq, Debug, Encode, Decode)]
 pub enum Data<B: Block, S: Signature> {
   Proposal(Option<RoundNumber>, B),
   Prevote(Option<B::Id>),
@@ -90,7 +92,7 @@ impl<B: Block, S: Signature> Data<B, S> {
   }
 }
-#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
 pub struct Message<V: ValidatorId, B: Block, S: Signature> {
   pub sender: V,
   pub block: BlockNumber,
@@ -100,7 +102,7 @@ pub struct Message<V: ValidatorId, B: Block, S: Signature> {
 }
 /// A signed Tendermint consensus message to be broadcast to the other validators.
-#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
 pub struct SignedMessage<V: ValidatorId, B: Block, S: Signature> {
   pub msg: Message<V, B, S>,
   pub sig: S,
@@ -117,18 +119,18 @@ impl<V: ValidatorId, B: Block, S: Signature> SignedMessage<V, B, S> {
     &self,
     signer: &Scheme,
   ) -> bool {
-    signer.verify(self.msg.sender, &borsh::to_vec(&self.msg).unwrap(), &self.sig)
+    signer.verify(self.msg.sender, &self.msg.encode(), &self.sig)
   }
 }
-#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, Decode)]
 pub enum SlashReason {
   FailToPropose,
   InvalidBlock,
   InvalidProposer,
 }
-#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
 pub enum Evidence {
   ConflictingMessages(Vec<u8>, Vec<u8>),
   InvalidPrecommit(Vec<u8>),
@@ -159,7 +161,7 @@ pub type SignedMessageFor<N> = SignedMessage<
 >;
 pub fn decode_signed_message<N: Network>(mut data: &[u8]) -> Option<SignedMessageFor<N>> {
-  SignedMessageFor::<N>::deserialize_reader(&mut data).ok()
+  SignedMessageFor::<N>::decode(&mut data).ok()
 }
 fn decode_and_verify_signed_message<N: Network>(
@@ -339,7 +341,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
       target: "tendermint",
       "proposer for block {}, round {round:?} was {} (me: {res})",
       self.block.number.0,
-      hex::encode(borsh::to_vec(&proposer).unwrap()),
+      hex::encode(proposer.encode()),
     );
     res
   }
@@ -420,11 +422,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
     // TODO: If the new slash event has evidence, emit to prevent a low-importance slash from
     // cancelling emission of high-importance slashes
     if !self.block.slashes.contains(&validator) {
-      log::info!(
-        target: "tendermint",
-        "Slashing validator {}",
-        hex::encode(borsh::to_vec(&validator).unwrap()),
-      );
+      log::info!(target: "tendermint", "Slashing validator {}", hex::encode(validator.encode()));
       self.block.slashes.insert(validator);
       self.network.slash(validator, slash_event).await;
     }
@@ -674,7 +672,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
       self
         .slash(
           msg.sender,
-          SlashEvent::WithEvidence(Evidence::InvalidPrecommit(borsh::to_vec(&signed).unwrap())),
+          SlashEvent::WithEvidence(Evidence::InvalidPrecommit(signed.encode())),
         )
        .await;
      Err(TendermintError::Malicious)?;
@@ -745,10 +743,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
         self.broadcast(Data::Prevote(None));
       }
       self
-        .slash(
-          msg.sender,
-          SlashEvent::WithEvidence(Evidence::InvalidValidRound(borsh::to_vec(&msg).unwrap())),
-        )
+        .slash(msg.sender, SlashEvent::WithEvidence(Evidence::InvalidValidRound(msg.encode())))
         .await;
       Err(TendermintError::Malicious)?;
     }
@@ -1039,7 +1034,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
     while !messages.is_empty() {
       self.network.broadcast(
-        SignedMessageFor::<N>::deserialize_reader(&mut messages)
+        SignedMessageFor::<N>::decode(&mut IoReader(&mut messages))
           .expect("saved invalid message to DB")
       ).await;
     }
@@ -1064,7 +1059,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
     } {
       if our_message {
         assert!(sig.is_none());
-        sig = Some(self.signer.sign(&borsh::to_vec(&msg).unwrap()).await);
+        sig = Some(self.signer.sign(&msg.encode()).await);
       }
       let sig = sig.unwrap();
@@ -1084,7 +1079,7 @@ impl<N: Network + 'static> TendermintMachine<N> {
     let message_tape_key = message_tape_key(self.genesis);
     let mut txn = self.db.txn();
     let mut message_tape = txn.get(&message_tape_key).unwrap_or(vec![]);
-    signed_msg.serialize(&mut message_tape).unwrap();
+    message_tape.extend(signed_msg.encode());
     txn.put(&message_tape_key, message_tape);
     txn.commit();
   }
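The message tape is just concatenated SCALE encodings; because SCALE is self-delimiting when decoded sequentially, replaying the tape is a loop of `decode` calls until the buffer empties. A self-contained sketch of the idea (stand-in `Msg` type):

```rust
use parity_scale_codec::{Encode, Decode};

#[derive(Debug, PartialEq, Encode, Decode)]
struct Msg(u32);

fn main() {
  // Append each message's encoding to one growing tape
  let mut tape = Vec::new();
  for msg in [Msg(1), Msg(2), Msg(3)] {
    tape.extend(msg.encode());
  }

  // Decode advances the slice past each message it consumes
  let mut reading = tape.as_slice();
  while !reading.is_empty() {
    let msg = Msg::decode(&mut reading).expect("saved invalid message");
    println!("{msg:?}");
  }
}
```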

View File

@@ -1,5 +1,7 @@
 use std::{sync::Arc, collections::HashMap};
+use parity_scale_codec::Encode;
 use crate::{ext::*, RoundNumber, Step, DataFor, SignedMessageFor, Evidence};
 type RoundLog<N> = HashMap<<N as Network>::ValidatorId, HashMap<Step, SignedMessageFor<N>>>;
@@ -37,10 +39,7 @@ impl<N: Network> MessageLog<N> {
         target: "tendermint",
         "Validator sent multiple messages for the same block + round + step"
       );
-      Err(Evidence::ConflictingMessages(
-        borsh::to_vec(&existing).unwrap(),
-        borsh::to_vec(&signed).unwrap(),
-      ))?;
+      Err(Evidence::ConflictingMessages(existing.encode(), signed.encode()))?;
     }
     return Ok(false);
   }

View File

@@ -4,7 +4,7 @@ use std::{
   time::{UNIX_EPOCH, SystemTime, Duration},
 };
-use borsh::{BorshSerialize, BorshDeserialize};
+use parity_scale_codec::{Encode, Decode};
 use futures_util::sink::SinkExt;
 use tokio::{sync::RwLock, time::sleep};
@@ -89,7 +89,7 @@ impl Weights for TestWeights {
   }
 }
-#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, PartialEq, Eq, Debug, Encode, Decode)]
 struct TestBlock {
   id: TestBlockId,
   valid: Result<(), BlockError>,

View File

@@ -21,6 +21,7 @@ workspace = true
 zeroize = { version = "^1.5", default-features = false, features = ["std"] }
 rand_core = { version = "0.6", default-features = false, features = ["std"] }
+scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
 blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] }
@@ -29,7 +30,7 @@ dalek-ff-group = { path = "../../crypto/dalek-ff-group", default-features = fals
 dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] }
 schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] }
-serai-primitives = { path = "../../substrate/primitives", default-features = false, features = ["std"] }
+serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] }
 serai-db = { path = "../../common/db" }
 serai-task = { path = "../../common/task", version = "0.1" }

View File

@@ -1,8 +1,11 @@
+#![expect(clippy::cast_possible_truncation)]
 use std::collections::HashMap;
+use scale::Encode;
 use borsh::{BorshSerialize, BorshDeserialize};
-use serai_primitives::{address::SeraiAddress, validator_sets::ExternalValidatorSet};
+use serai_client::{primitives::SeraiAddress, validator_sets::primitives::ExternalValidatorSet};
 use messages::sign::{VariantSignId, SignId};
@@ -13,7 +16,7 @@ use serai_cosign::CosignIntent;
 use crate::transaction::SigningProtocolRound;
 /// A topic within the database which the group participates in
-#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)]
 pub enum Topic {
   /// Vote to remove a participant
   RemoveParticipant {
@@ -122,7 +125,7 @@ impl Topic {
       Topic::DkgConfirmation { attempt, round: _ } => Some({
         let id = {
           let mut id = [0; 32];
-          let encoded_set = borsh::to_vec(set).unwrap();
+          let encoded_set = set.encode();
           id[.. encoded_set.len()].copy_from_slice(&encoded_set);
           VariantSignId::Batch(id)
         };
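`VariantSignId::Batch` wants a fixed 32-byte id, so the (short) SCALE encoding of the set is left-aligned into a zero-padded array. A sketch of that derivation in isolation; it assumes the encoding never exceeds 32 bytes, as `copy_from_slice` would panic otherwise:

```rust
use scale::Encode;

fn padded_id<T: Encode>(value: &T) -> [u8; 32] {
  let mut id = [0; 32];
  let encoded = value.encode();
  // Left-align the encoding; the remainder stays zeroed
  id[.. encoded.len()].copy_from_slice(&encoded);
  id
}
```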

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![deny(missing_docs)]
@@ -8,9 +8,9 @@ use std::collections::HashMap;
 use ciphersuite::group::GroupEncoding;
 use dkg::Participant;
-use serai_primitives::{
-  address::SeraiAddress,
-  validator_sets::{ExternalValidatorSet, Slash},
-};
+use serai_client::{
+  primitives::SeraiAddress,
+  validator_sets::primitives::{ExternalValidatorSet, Slash},
+};
 use serai_db::*;

View File

@@ -12,9 +12,10 @@ use ciphersuite::{
 use dalek_ff_group::Ristretto;
 use schnorr::SchnorrSignature;
+use scale::Encode;
 use borsh::{BorshSerialize, BorshDeserialize};
-use serai_primitives::{address::SeraiAddress, validator_sets::MAX_KEY_SHARES_PER_SET};
+use serai_client::{primitives::SeraiAddress, validator_sets::primitives::MAX_KEY_SHARES_PER_SET};
 use messages::sign::VariantSignId;
@@ -28,7 +29,7 @@ use tributary_sdk::{
 use crate::db::Topic;
 /// The round this data is for, within a signing protocol.
-#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
+#[derive(Clone, Copy, PartialEq, Eq, Debug, Encode, BorshSerialize, BorshDeserialize)]
 pub enum SigningProtocolRound {
   /// A preprocess.
   Preprocess,
@@ -241,20 +242,19 @@ impl TransactionTrait for Transaction {
   fn kind(&self) -> TransactionKind {
     match self {
       Transaction::RemoveParticipant { participant, signed } => TransactionKind::Signed(
-        borsh::to_vec(&(b"RemoveParticipant".as_slice(), participant)).unwrap(),
+        (b"RemoveParticipant", participant).encode(),
         signed.to_tributary_signed(0),
       ),
-      Transaction::DkgParticipation { signed, .. } => TransactionKind::Signed(
-        borsh::to_vec(&b"DkgParticipation".as_slice()).unwrap(),
-        signed.to_tributary_signed(0),
-      ),
+      Transaction::DkgParticipation { signed, .. } => {
+        TransactionKind::Signed(b"DkgParticipation".encode(), signed.to_tributary_signed(0))
+      }
       Transaction::DkgConfirmationPreprocess { attempt, signed, .. } => TransactionKind::Signed(
-        borsh::to_vec(&(b"DkgConfirmation".as_slice(), attempt)).unwrap(),
+        (b"DkgConfirmation", attempt).encode(),
        signed.to_tributary_signed(0),
      ),
      Transaction::DkgConfirmationShare { attempt, signed, .. } => TransactionKind::Signed(
-        borsh::to_vec(&(b"DkgConfirmation".as_slice(), attempt)).unwrap(),
+        (b"DkgConfirmation", attempt).encode(),
        signed.to_tributary_signed(1),
      ),
@@ -264,14 +264,13 @@ impl TransactionTrait for Transaction {
      Transaction::Batch { .. } => TransactionKind::Provided("Batch"),
      Transaction::Sign { id, attempt, round, signed, .. } => TransactionKind::Signed(
-        borsh::to_vec(&(b"Sign".as_slice(), id, attempt)).unwrap(),
+        (b"Sign", id, attempt).encode(),
        signed.to_tributary_signed(round.nonce()),
      ),
-      Transaction::SlashReport { signed, .. } => TransactionKind::Signed(
-        borsh::to_vec(&b"SlashReport".as_slice()).unwrap(),
-        signed.to_tributary_signed(0),
-      ),
+      Transaction::SlashReport { signed, .. } => {
+        TransactionKind::Signed(b"SlashReport".encode(), signed.to_tributary_signed(0))
+      }
    }
  }
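Each signed kind key is now a SCALE tuple encoding: a byte-string label concatenated with the distinguishing fields, yielding a compact, domain-separated ordering key per signing flow. A sketch of one such key, mirroring the `Sign` arm above:

```rust
use scale::Encode;

fn sign_kind_key(id: [u8; 32], attempt: u32) -> Vec<u8> {
  // SCALE encodes tuples by concatenating their fields; b"Sign" is a [u8; 4],
  // and fixed-size arrays encode without a length prefix
  (b"Sign", id, attempt).encode()
}
```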

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![cfg_attr(not(feature = "std"), no_std)]
 use zeroize::Zeroize;

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("lib.md")]
 #![cfg_attr(not(feature = "std"), no_std)]

View File

@@ -1,5 +1,5 @@
 #![allow(deprecated)]
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![no_std] // Prevents writing new code, in what should be a simple wrapper, which requires std
 #![doc = include_str!("../README.md")]
 #![allow(clippy::redundant_closure_call)]

View File

@@ -23,19 +23,12 @@ thiserror = { version = "2", default-features = false }
 std-shims = { version = "0.1", path = "../../common/std-shims", default-features = false, features = ["alloc"] }
-borsh = { version = "1", default-features = false, features = ["derive", "de_strict_order"], optional = true }
 ciphersuite = { path = "../ciphersuite", version = "^0.4.1", default-features = false, features = ["alloc"] }
 [features]
 std = [
   "thiserror/std",
   "std-shims/std",
-  "borsh?/std",
   "ciphersuite/std",
 ]
-borsh = ["dep:borsh"]
 default = ["std"]

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![no_std]

View File

@@ -26,21 +26,9 @@ presented in section 4.2 is extended, with the following changes:
   just one round.
 For a gist of the verifiable encryption scheme, please see
-https://gist.github.com/kayabaNerve/cfbde74b0660dfdf8dd55326d6ec33d7. Security
-proofs are currently being worked on.
-
----
-
-This library relies on an implementation of Bulletproofs and various
-zero-knowledge gadgets. This library uses
-[`generalized-bulletproofs`](https://docs.rs/generalized-bulletproofs),
-[`generalized-bulletproofs-circuit-abstraction`](https://docs.rs/generalized-bulletproofs-circuit-abstraction),
-and
-[`generalized-bulletproofs-ec-gadgets`](https://docs.rs/generalized-bulletproofs-ec-gadgets)
-from the Monero project's FCMP++ codebase. These libraries have received the
-following audits in the past:
-
-- https://github.com/kayabaNerve/monero-oxide/tree/fcmp++/audits/generalized-bulletproofs
-- https://github.com/kayabaNerve/monero-oxide/tree/fcmp++/audits/fcmps
+https://gist.github.com/kayabaNerve/cfbde74b0660dfdf8dd55326d6ec33d7. For
+security proofs and audit information, please see
+[here](../../../audits/crypto/dkg/evrf).
 ---

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![no_std]

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]
@@ -22,7 +22,6 @@ use ciphersuite::{
 /// The ID of a participant, defined as a non-zero u16.
 #[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Debug, Zeroize)]
-#[cfg_attr(feature = "borsh", derive(borsh::BorshSerialize))]
 pub struct Participant(u16);
 impl Participant {
   /// Create a new Participant identifier from a u16.
@@ -129,18 +128,8 @@ pub enum DkgError {
   NotParticipating,
 }
-// Manually implements BorshDeserialize so we can enforce it's a valid index
-#[cfg(feature = "borsh")]
-impl borsh::BorshDeserialize for Participant {
-  fn deserialize_reader<R: io::Read>(reader: &mut R) -> io::Result<Self> {
-    Participant::new(u16::deserialize_reader(reader)?)
-      .ok_or_else(|| io::Error::other("invalid participant"))
-  }
-}
 /// Parameters for a multisig.
 #[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
-#[cfg_attr(feature = "borsh", derive(borsh::BorshSerialize))]
 pub struct ThresholdParams {
   /// Participants needed to sign on behalf of the group.
   t: u16,
@@ -210,16 +199,6 @@ impl ThresholdParams {
   }
 }
-#[cfg(feature = "borsh")]
-impl borsh::BorshDeserialize for ThresholdParams {
-  fn deserialize_reader<R: io::Read>(reader: &mut R) -> io::Result<Self> {
-    let t = u16::deserialize_reader(reader)?;
-    let n = u16::deserialize_reader(reader)?;
-    let i = Participant::deserialize_reader(reader)?;
-    ThresholdParams::new(t, n, i).map_err(|e| io::Error::other(format!("{e:?}")))
-  }
-}
 /// A method of interpolation.
 #[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
 pub enum Interpolation<F: Zeroize + PrimeField> {

View File

@@ -33,6 +33,6 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] }
 ff-group-tests = { path = "../ff-group-tests" }
 [features]
-alloc = ["zeroize/alloc", "sha3/alloc", "crypto-bigint/alloc", "prime-field/alloc", "ciphersuite/alloc"]
+alloc = ["zeroize/alloc", "sha3/alloc", "prime-field/alloc", "ciphersuite/alloc"]
 std = ["alloc", "zeroize/std", "prime-field/std", "ciphersuite/std"]
 default = ["std"]

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![no_std]

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]

View File

@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 /// Tests for the Field trait.

View File

@@ -28,8 +28,10 @@ impl<A: Send + Sync + Clone + PartialEq + Debug + WriteAddendum> Addendum for A
 /// Algorithm trait usable by the FROST signing machine to produce signatures.
 pub trait Algorithm<C: Curve>: Send + Sync {
-  /// The transcript format this algorithm uses. This likely should NOT be the IETF-compatible
-  /// transcript included in this crate.
+  /// The transcript format this algorithm uses.
+  ///
+  /// This MUST NOT be the IETF-compatible transcript included in this crate UNLESS this is an
+  /// IETF-specified ciphersuite.
   type Transcript: Sync + Clone + Debug + Transcript;

   /// Serializable addendum, used in algorithms requiring more data than just the nonces.
   type Addendum: Addendum;
@@ -69,8 +71,10 @@ pub trait Algorithm<C: Curve>: Send + Sync {
   ) -> Result<(), FrostError>;

   /// Sign a share with the given secret/nonce.
+  ///
   /// The secret will already have had its Lagrange coefficient applied, so it is the necessary
   /// key share.
+  ///
   /// The nonce will already have been processed into the combined form d + (e * p).
   fn sign_share(
     &mut self,
@@ -85,6 +89,7 @@ pub trait Algorithm<C: Curve>: Send + Sync {
   fn verify(&self, group_key: C::G, nonces: &[Vec<C::G>], sum: C::F) -> Option<Self::Signature>;

   /// Verify a specific share given as a response.
+  ///
   /// This function should return a series of pairs whose products should sum to zero for a valid
   /// share. Any error raised is treated as the share being invalid.
   #[allow(clippy::type_complexity, clippy::result_unit_err)]
@@ -99,8 +104,10 @@
 mod sealed {
   pub use super::*;
-  /// IETF-compliant transcript. This is incredibly naive and should not be used within larger
-  /// protocols.
+  /// IETF-compliant transcript.
+  ///
+  /// This is incredibly naive and MUST NOT be used within larger protocols. No guarantees are made
+  /// about its safety EXCEPT as used with the IETF-specified FROST ciphersuites.
   #[derive(Clone, Debug)]
   pub struct IetfTranscript(pub(crate) Vec<u8>);

   impl Transcript for IetfTranscript {
@@ -131,6 +138,7 @@ pub(crate) use sealed::IetfTranscript;
 /// HRAm usable by the included Schnorr signature algorithm to generate challenges.
 pub trait Hram<C: Curve>: Send + Sync + Clone {
   /// HRAm function to generate a challenge.
+  ///
   /// H2 from the IETF draft, despite having a different argument set (not being pre-formatted).
   #[allow(non_snake_case)]
   fn hram(R: &C::G, A: &C::G, m: &[u8]) -> C::F;
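As a concrete (and hedged) illustration of the trait just documented, here is a sketch of a custom `Hram`, not code from the crate: the import paths and a `Curve::hash_to_F(dst, msg)` helper are assumptions about modular-frost's layout, and a real ciphersuite must use its specified challenge derivation rather than this ad-hoc one.

use ciphersuite::group::GroupEncoding;
// Paths assumed from modular-frost's module layout
use modular_frost::{curve::Curve, algorithm::Hram};

#[derive(Clone)]
struct ExampleHram;

impl<C: Curve> Hram<C> for ExampleHram {
  #[allow(non_snake_case)]
  fn hram(R: &C::G, A: &C::G, m: &[u8]) -> C::F {
    // Bind the nonce commitment, the group key, and the message into the challenge
    let mut data = R.to_bytes().as_ref().to_vec();
    data.extend(A.to_bytes().as_ref());
    data.extend(m);
    C::hash_to_F(b"example challenge", &data)
  }
}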


@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]


@@ -102,6 +102,7 @@ pub trait PreprocessMachine: Send {
   type SignMachine: SignMachine<Self::Signature, Preprocess = Self::Preprocess>;

   /// Perform the preprocessing round required in order to sign.
+  ///
   /// Returns a preprocess message to be broadcast to all participants, over an authenticated
   /// channel.
   fn preprocess<R: RngCore + CryptoRng>(self, rng: &mut R)
@@ -235,6 +236,8 @@ pub trait SignMachine<S>: Send + Sync + Sized {
   /// Takes in the participants' preprocess messages. Returns the signature share to be broadcast
   /// to all participants, over an authenticated channel. The parties who participate here will
   /// become the signing set for this session.
+  ///
+  /// The caller MUST only use preprocesses obtained via this machine's `read_preprocess` function.
   fn sign(
     self,
     commitments: HashMap<Participant, Self::Preprocess>,
@@ -421,7 +424,10 @@ pub trait SignatureMachine<S>: Send + Sync {
   fn read_share<R: Read>(&self, reader: &mut R) -> io::Result<Self::SignatureShare>;

   /// Complete signing.
+  ///
   /// Takes in everyone else's shares. Returns the signature.
+  ///
+  /// The caller MUST only use shares obtained via this machine's `read_share` function.
   fn complete(self, shares: HashMap<Participant, Self::SignatureShare>) -> Result<S, FrostError>;
 }
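The two new MUST clauses are easiest to see end to end. A hedged sketch of one participant's pass through the machines, under stated assumptions: the import paths follow modular-frost's re-exports, `sign` takes the message as its second argument per the published API, and the authenticated channel carrying the serialized messages is elided.

use std::collections::HashMap;

use rand_core::OsRng;
// Paths assumed from modular-frost's re-exports
use modular_frost::{
  Participant, FrostError,
  sign::{PreprocessMachine, SignMachine, SignatureMachine},
};

// One participant's pass through the three machines
fn sign_flow<M: PreprocessMachine>(
  machine: M,
  preprocesses: Vec<(Participant, Vec<u8>)>,
  shares: Vec<(Participant, Vec<u8>)>,
  msg: &[u8],
) -> Result<M::Signature, FrostError> {
  // Round 1: generate our preprocess (its broadcast is elided)
  let (machine, _our_preprocess) = machine.preprocess(&mut OsRng);

  // Others' preprocesses MUST round-trip through this machine's `read_preprocess`
  let mut read = HashMap::new();
  for (p, bytes) in preprocesses {
    read.insert(p, machine.read_preprocess(&mut bytes.as_slice()).expect("invalid preprocess"));
  }
  let (machine, _our_share) = machine.sign(read, msg)?;

  // Likewise, shares MUST round-trip through `read_share`
  let mut read = HashMap::new();
  for (p, bytes) in shares {
    read.insert(p, machine.read_share(&mut bytes.as_slice()).expect("invalid share"));
  }
  machine.complete(read)
}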


@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]


@@ -26,6 +26,6 @@ ff = { version = "0.13", default-features = false, features = ["bits"] }
 ff-group-tests = { version = "0.13", path = "../ff-group-tests", optional = true }

 [features]
-alloc = ["zeroize/alloc", "crypto-bigint/alloc", "ff/alloc"]
+alloc = ["zeroize/alloc", "ff/alloc"]
 std = ["alloc", "zeroize/std", "subtle/std", "rand_core/std", "ff/std", "ff-group-tests"]
 default = ["std"]


@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![no_std]


@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]


@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]


@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]


@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![no_std]
 #![allow(non_snake_case)]


@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![no_std]


@@ -10,6 +10,7 @@ ignore = [
   "RUSTSEC-2022-0061", # https://github.com/serai-dex/serai/227
   "RUSTSEC-2024-0370", # proc-macro-error is unmaintained
   "RUSTSEC-2024-0436", # paste is unmaintained
+  "RUSTSEC-2025-0057", # fxhash is unmaintained, fixed with bytecodealliance/wasmtime/pull/11634
 ]

 [licenses]
@@ -28,6 +29,7 @@ allow = [
   "ISC",
   "Zlib",
   "Unicode-3.0",
+  # "OpenSSL", # Commented as it's not currently in-use within the Serai tree
   "CDLA-Permissive-2.0",

   # Non-invasive copyleft
@@ -71,7 +73,6 @@ exceptions = [
   { allow = ["AGPL-3.0-only"], name = "serai-monero-processor" },
   { allow = ["AGPL-3.0-only"], name = "tributary-sdk" },
-  { allow = ["AGPL-3.0-only"], name = "serai-cosign-types" },
   { allow = ["AGPL-3.0-only"], name = "serai-cosign" },
   { allow = ["AGPL-3.0-only"], name = "serai-coordinator-substrate" },
   { allow = ["AGPL-3.0-only"], name = "serai-coordinator-tributary" },
@@ -79,7 +80,8 @@
   { allow = ["AGPL-3.0-only"], name = "serai-coordinator-libp2p-p2p" },
   { allow = ["AGPL-3.0-only"], name = "serai-coordinator" },
-  { allow = ["AGPL-3.0-only"], name = "serai-core-pallet" },
+  { allow = ["AGPL-3.0-only"], name = "pallet-session" },
   { allow = ["AGPL-3.0-only"], name = "serai-coins-pallet" },
   { allow = ["AGPL-3.0-only"], name = "serai-dex-pallet" },
@@ -138,5 +140,8 @@ allow-git = [
   "https://github.com/rust-lang-nursery/lazy-static.rs",
   "https://github.com/kayabaNerve/elliptic-curves",
   "https://github.com/monero-oxide/monero-oxide",
+  "https://github.com/kayabaNerve/monero-oxide",
+  "https://github.com/rust-bitcoin/rust-bip39",
+  "https://github.com/rust-rocksdb/rust-rocksdb",
   "https://github.com/serai-dex/patch-polkadot-sdk",
 ]


@@ -46,7 +46,7 @@ serai-db = { path = "../common/db", optional = true }
 serai-env = { path = "../common/env" }
-serai-primitives = { path = "../substrate/primitives", default-features = false, features = ["std"] }
+serai-primitives = { path = "../substrate/primitives", features = ["borsh"] }

 [features]
 parity-db = ["serai-db/parity-db"]


@@ -7,7 +7,7 @@ use dalek_ff_group::Ristretto;
 pub(crate) use ciphersuite::{group::GroupEncoding, WrappedGroup, GroupCanonicalEncoding};
 pub(crate) use schnorr_signatures::SchnorrSignature;

-pub(crate) use serai_primitives::network_id::ExternalNetworkId;
+pub(crate) use serai_primitives::ExternalNetworkId;

 pub(crate) use tokio::{
   io::{AsyncReadExt, AsyncWriteExt},
@@ -198,7 +198,7 @@ async fn main() {
     KEYS.write().unwrap().insert(service, key);
     let mut queues = QUEUES.write().unwrap();
     if service == Service::Coordinator {
-      for network in ExternalNetworkId::all() {
+      for network in serai_primitives::EXTERNAL_NETWORKS {
         queues.insert(
           (service, Service::Processor(network)),
           RwLock::new(Queue(db.clone(), service, Service::Processor(network))),
@@ -213,13 +213,12 @@
   };

   // Make queues for each ExternalNetworkId
-  for network in ExternalNetworkId::all() {
+  for network in serai_primitives::EXTERNAL_NETWORKS {
     // Use a match so we error if the list of NetworkIds changes
     let Some(key) = read_key(match network {
       ExternalNetworkId::Bitcoin => "BITCOIN_KEY",
       ExternalNetworkId::Ethereum => "ETHEREUM_KEY",
       ExternalNetworkId::Monero => "MONERO_KEY",
-      _ => panic!("unrecognized network"),
     }) else {
       continue;
     };
@@ -239,8 +238,7 @@
       // TODO: Add a magic value with a key at the start of the connection to make this authed
       let mut db = db.clone();
       tokio::spawn(async move {
-        loop {
-          let Ok(msg_len) = socket.read_u32_le().await else { break };
+        while let Ok(msg_len) = socket.read_u32_le().await {
          let mut buf = vec![0; usize::try_from(msg_len).unwrap()];
          let Ok(_) = socket.read_exact(&mut buf).await else { break };
          let msg = borsh::from_slice(&buf).unwrap();
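The `loop` to `while let` rewrite keeps the wire protocol as-is: a little-endian `u32` length prefix, then a borsh-encoded payload. A hedged sketch of the matching sender; `send_msg` and its generic message type are stand-ins, not crate code.

use tokio::{io::AsyncWriteExt, net::TcpStream};

// Hypothetical client-side helper; `M` stands in for the queue's message type
async fn send_msg<M: borsh::BorshSerialize>(
  socket: &mut TcpStream,
  msg: &M,
) -> std::io::Result<()> {
  let buf = borsh::to_vec(msg)?;
  // Little-endian u32 length prefix, then the borsh-encoded payload
  socket.write_u32_le(u32::try_from(buf.len()).unwrap()).await?;
  socket.write_all(&buf).await
}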


@@ -4,7 +4,7 @@ use ciphersuite::{group::GroupEncoding, FromUniformBytes, WrappedGroup, WithPref
 use borsh::{BorshSerialize, BorshDeserialize};

-use serai_primitives::network_id::ExternalNetworkId;
+use serai_primitives::ExternalNetworkId;

 #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, BorshSerialize, BorshDeserialize)]
 pub enum Service {


@@ -30,9 +30,9 @@ k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic"
 frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.11", default-features = false, features = ["secp256k1"] }

 hex = { version = "0.4", default-features = false, optional = true }
-serde = { version = "1", default-features = false, features = ["derive"], optional = true }
-serde_json = { version = "1", default-features = false, optional = true }
-simple-request = { path = "../../common/request", version = "0.2", default-features = false, features = ["tls", "basic-auth"], optional = true }
+core-json-traits = { version = "0.4", default-features = false, features = ["alloc"], optional = true }
+core-json-derive = { version = "0.4", default-features = false, optional = true }
+simple-request = { path = "../../common/request", version = "0.3", default-features = false, features = ["tokio", "tls", "basic-auth"], optional = true }

 [dev-dependencies]
 secp256k1 = { version = "0.29", default-features = false, features = ["std"] }
@@ -52,15 +52,16 @@ std = [
   "rand_core/std",
   "bitcoin/std",
-  "bitcoin/serde",
   "k256/std",
   "frost/std",
+]
+rpc = [
+  "std",
   "hex/std",
-  "serde/std",
-  "serde_json/std",
+  "core-json-traits",
+  "core-json-derive",
   "simple-request",
 ]

 hazmat = []
-default = ["std"]
+default = ["std", "rpc"]


@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![cfg_attr(not(feature = "std"), no_std)]
@@ -14,7 +14,7 @@ pub(crate) mod crypto;
 /// Wallet functionality to create transactions.
 pub mod wallet;

 /// A minimal asynchronous Bitcoin RPC client.
-#[cfg(feature = "std")]
+#[cfg(feature = "rpc")]
 pub mod rpc;

 #[cfg(test)]


@@ -1,12 +1,9 @@
-use core::fmt::Debug;
-use std::collections::HashSet;
+use core::{str::FromStr, fmt::Debug};
+use std::{io::Read, collections::HashSet};

 use thiserror::Error;

-use serde::{Deserialize, de::DeserializeOwned};
-use serde_json::json;
-
-use simple_request::{hyper, Request, Client};
+use simple_request::{hyper, Request, TokioClient as Client};

 use bitcoin::{
   hashes::{Hash, hex::FromHex},
@@ -14,19 +11,12 @@
   Txid, Transaction, BlockHash, Block,
 };

-#[derive(Clone, PartialEq, Eq, Debug, Deserialize)]
+#[derive(Clone, Debug)]
 pub struct Error {
   code: isize,
   message: String,
 }

-#[derive(Clone, Debug, Deserialize)]
-#[serde(untagged)]
-enum RpcResponse<T> {
-  Ok { result: T },
-  Err { error: Error },
-}
-
 /// A minimal asynchronous Bitcoin RPC client.
 #[derive(Clone, Debug)]
 pub struct Rpc {
@@ -34,14 +24,14 @@
   url: String,
 }

-#[derive(Clone, PartialEq, Eq, Debug, Error)]
+#[derive(Clone, Debug, Error)]
 pub enum RpcError {
   #[error("couldn't connect to node")]
   ConnectionError,
   #[error("request had an error: {0:?}")]
   RequestError(Error),
   #[error("node replied with invalid JSON")]
-  InvalidJson(serde_json::error::Category),
+  InvalidJson,
   #[error("node sent an invalid response ({0})")]
   InvalidResponse(&'static str),
   #[error("node was missing expected methods")]
@@ -66,7 +56,7 @@
       Rpc { client: Client::with_connection_pool().map_err(|_| RpcError::ConnectionError)?, url };

     // Make an RPC request to verify the node is reachable and sane
-    let res: String = rpc.rpc_call("help", json!([])).await?;
+    let res: String = rpc.call("help", "[]").await?;

     // Verify all methods we expect are present
     // If we had a more expanded RPC, due to differences in RPC versions, it wouldn't make sense to
@@ -103,22 +93,21 @@
   }

   /// Perform an arbitrary RPC call.
-  pub async fn rpc_call<Response: DeserializeOwned + Debug>(
+  pub async fn call<Response: 'static + Default + core_json_traits::JsonDeserialize>(
     &self,
     method: &str,
-    params: serde_json::Value,
+    params: &str,
   ) -> Result<Response, RpcError> {
     let mut request = Request::from(
       hyper::Request::post(&self.url)
         .header("Content-Type", "application/json")
         .body(
-          serde_json::to_vec(&json!({ "jsonrpc": "2.0", "method": method, "params": params }))
-            .unwrap()
-            .into(),
+          format!(r#"{{ "method": "{method}", "params": {params} }}"#).as_bytes().to_vec().into(),
         )
         .unwrap(),
     );
     request.with_basic_auth();
+    request.set_response_size_limit(Some(100 * 1024 * 1024));
     let mut res = self
       .client
       .request(request)
@@ -128,11 +117,52 @@
       .await
       .map_err(|_| RpcError::ConnectionError)?;

-    let res: RpcResponse<Response> =
-      serde_json::from_reader(&mut res).map_err(|e| RpcError::InvalidJson(e.classify()))?;
+    #[derive(Default, core_json_derive::JsonDeserialize)]
+    struct InternalError {
+      code: Option<i64>,
+      message: Option<String>,
+    }
+
+    #[derive(core_json_derive::JsonDeserialize)]
+    struct RpcResponse<T: core_json_traits::JsonDeserialize> {
+      result: Option<T>,
+      error: Option<InternalError>,
+    }
+    impl<T: core_json_traits::JsonDeserialize> Default for RpcResponse<T> {
+      fn default() -> Self {
+        Self { result: None, error: None }
+      }
+    }
+
+    // TODO: `core_json::ReadAdapter`
+    let mut res_vec = vec![];
+    res.read_to_end(&mut res_vec).map_err(|_| RpcError::ConnectionError)?;
+    let res = <RpcResponse<Response> as core_json_traits::JsonStructure>::deserialize_structure::<
+      _,
+      core_json_traits::ConstStack<32>,
+    >(res_vec.as_slice())
+    .map_err(|_| RpcError::InvalidJson)?;
+
     match res {
-      RpcResponse::Ok { result } => Ok(result),
-      RpcResponse::Err { error } => Err(RpcError::RequestError(error)),
+      RpcResponse { result: Some(result), error: None } => Ok(result),
+      RpcResponse { result: None, error: Some(error) } => {
+        let code =
+          error.code.ok_or_else(|| RpcError::InvalidResponse("error was missing `code`"))?;
+        let code = isize::try_from(code)
+          .map_err(|_| RpcError::InvalidResponse("error code exceeded isize::MAX"))?;
+        let message =
+          error.message.ok_or_else(|| RpcError::InvalidResponse("error was missing `message`"))?;
+        Err(RpcError::RequestError(Error { code, message }))
+      }
+      // `invalidateblock` yields this edge case
+      RpcResponse { result: None, error: None } => {
+        if core::any::TypeId::of::<Response>() == core::any::TypeId::of::<()>() {
+          Ok(Default::default())
+        } else {
+          Err(RpcError::InvalidResponse("response lacked both a result and an error"))
+        }
+      }
+      _ => Err(RpcError::InvalidResponse("response contained both a result and an error")),
     }
   }
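For clarity on what the rewritten `call` above puts on the wire: a minimal JSON-RPC envelope, without the `jsonrpc` and `id` fields the serde path used to send (Bitcoin Core tolerates their absence, which this code relies on). An illustrative fragment, not crate code:

fn main() {
  let method = "getblockcount";
  let params = "[]";
  // Same formatting as `call` above
  let body = format!(r#"{{ "method": "{method}", "params": {params} }}"#);
  assert_eq!(body, r#"{ "method": "getblockcount", "params": [] }"#);
  // A success response then decodes into `RpcResponse { result: Some(_), error: None }`
}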
@@ -145,16 +175,17 @@
     // tip block of the current chain. The "height" of a block is defined as the number of blocks
     // present when the block was created. Accordingly, the genesis block has height 0, and
     // getblockcount will return 0 when it's the only block, despite there being one block.
-    self.rpc_call("getblockcount", json!([])).await
+    usize::try_from(self.call::<u64>("getblockcount", "[]").await?)
+      .map_err(|_| RpcError::InvalidResponse("latest block number exceeded usize::MAX"))
   }

   /// Get the hash of a block by the block's number.
   pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
-    let mut hash = self
-      .rpc_call::<BlockHash>("getblockhash", json!([number]))
-      .await?
-      .as_raw_hash()
-      .to_byte_array();
+    let mut hash =
+      BlockHash::from_str(&self.call::<String>("getblockhash", &format!("[{number}]")).await?)
+        .map_err(|_| RpcError::InvalidResponse("block hash was not valid hex"))?
+        .as_raw_hash()
+        .to_byte_array();
     // bitcoin stores the inner bytes in reverse order.
     hash.reverse();
     Ok(hash)
@@ -162,16 +193,25 @@
   /// Get a block's number by its hash.
   pub async fn get_block_number(&self, hash: &[u8; 32]) -> Result<usize, RpcError> {
-    #[derive(Deserialize, Debug)]
+    #[derive(Default, core_json_derive::JsonDeserialize)]
     struct Number {
-      height: usize,
+      height: Option<u64>,
     }
-    Ok(self.rpc_call::<Number>("getblockheader", json!([hex::encode(hash)])).await?.height)
+    usize::try_from(
+      self
+        .call::<Number>("getblockheader", &format!(r#"["{}"]"#, hex::encode(hash)))
+        .await?
+        .height
+        .ok_or_else(|| {
+          RpcError::InvalidResponse("`getblockheader` did not include `height` field")
+        })?,
+    )
+    .map_err(|_| RpcError::InvalidResponse("block number exceeded usize::MAX"))
   }

   /// Get a block by its hash.
   pub async fn get_block(&self, hash: &[u8; 32]) -> Result<Block, RpcError> {
-    let hex = self.rpc_call::<String>("getblock", json!([hex::encode(hash), 0])).await?;
+    let hex = self.call::<String>("getblock", &format!(r#"["{}", 0]"#, hex::encode(hash))).await?;
     let bytes: Vec<u8> = FromHex::from_hex(&hex)
       .map_err(|_| RpcError::InvalidResponse("node didn't use hex to encode the block"))?;
     let block: Block = encode::deserialize(&bytes)
@@ -188,8 +228,13 @@
   /// Publish a transaction.
   pub async fn send_raw_transaction(&self, tx: &Transaction) -> Result<Txid, RpcError> {
-    let txid = match self.rpc_call("sendrawtransaction", json!([encode::serialize_hex(tx)])).await {
-      Ok(txid) => txid,
+    let txid = match self
+      .call::<String>("sendrawtransaction", &format!(r#"["{}"]"#, encode::serialize_hex(tx)))
+      .await
+    {
+      Ok(txid) => {
+        Txid::from_str(&txid).map_err(|_| RpcError::InvalidResponse("TXID was not valid hex"))?
+      }
       Err(e) => {
         // A const from Bitcoin's bitcoin/src/rpc/protocol.h
         const RPC_VERIFY_ALREADY_IN_CHAIN: isize = -27;
@@ -210,7 +255,8 @@
   /// Get a transaction by its hash.
   pub async fn get_transaction(&self, hash: &[u8; 32]) -> Result<Transaction, RpcError> {
-    let hex = self.rpc_call::<String>("getrawtransaction", json!([hex::encode(hash)])).await?;
+    let hex =
+      self.call::<String>("getrawtransaction", &format!(r#"["{}"]"#, hex::encode(hash))).await?;
     let bytes: Vec<u8> = FromHex::from_hex(&hex)
       .map_err(|_| RpcError::InvalidResponse("node didn't use hex to encode the transaction"))?;
     let tx: Transaction = encode::deserialize(&bytes)
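A hedged usage sketch of the reworked client: params are now a raw JSON array string, and the response type must satisfy `call`'s new bounds, as `u64` and `String` do in the methods above. `getbestblockhash` is an assumption here (a standard Bitcoin Core method this file doesn't wrap):

async fn example(rpc: &Rpc) -> Result<(), RpcError> {
  let height: u64 = rpc.call("getblockcount", "[]").await?;
  let best: String = rpc.call("getbestblockhash", "[]").await?;
  println!("tip {best} at height {height}");
  Ok(())
}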


@@ -14,9 +14,9 @@ pub(crate) async fn rpc() -> Rpc {
   // If this node has already been interacted with, clear its chain
   if rpc.get_latest_block_number().await.unwrap() > 0 {
     rpc
-      .rpc_call(
+      .call(
         "invalidateblock",
-        serde_json::json!([hex::encode(rpc.get_block_hash(1).await.unwrap())]),
+        &format!(r#"["{}"]"#, hex::encode(rpc.get_block_hash(1).await.unwrap())),
       )
       .await
       .unwrap()


@@ -41,21 +41,21 @@ async fn send_and_get_output(rpc: &Rpc, scanner: &Scanner, key: ProjectivePoint)
   let block_number = rpc.get_latest_block_number().await.unwrap() + 1;
   rpc
-    .rpc_call::<Vec<String>>(
+    .call::<Vec<String>>(
       "generatetoaddress",
-      serde_json::json!([
-        1,
-        Address::from_script(&p2tr_script_buf(key).unwrap(), Network::Regtest).unwrap()
-      ]),
+      &format!(
+        r#"[1, "{}"]"#,
+        Address::from_script(&p2tr_script_buf(key).unwrap(), Network::Regtest).unwrap()
+      ),
     )
     .await
     .unwrap();

   // Mine until maturity
   rpc
-    .rpc_call::<Vec<String>>(
+    .call::<Vec<String>>(
       "generatetoaddress",
-      serde_json::json!([100, Address::p2sh(Script::new(), Network::Regtest).unwrap()]),
+      &format!(r#"[100, "{}"]"#, Address::p2sh(Script::new(), Network::Regtest).unwrap()),
     )
     .await
     .unwrap();


@@ -19,7 +19,7 @@ workspace = true
 tower = "0.5"

 serde_json = { version = "1", default-features = false }
-simple-request = { path = "../../../common/request", version = "0.2", default-features = false }
+simple-request = { path = "../../../common/request", version = "0.3", default-features = false, features = ["tokio"] }

 alloy-json-rpc = { version = "1", default-features = false }
 alloy-transport = { version = "1", default-features = false }


@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]

 use core::task;
@@ -7,7 +7,7 @@ use std::io;
 use alloy_json_rpc::{RequestPacket, ResponsePacket};
 use alloy_transport::{TransportError, TransportErrorKind, TransportFut};

-use simple_request::{hyper, Error, Request, Client};
+use simple_request::{hyper, Error, Request, TokioClient as Client};

 use tower::Service;


@@ -1,4 +1,4 @@
-#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![cfg_attr(docsrs, feature(doc_cfg))]
 #![doc = include_str!("../README.md")]
 #![deny(missing_docs)]
@@ -6,7 +6,7 @@ use std::{path::PathBuf, fs, process::Command};
 /// Build contracts from the specified path, outputting the artifacts to the specified path.
 ///
-/// Requires solc 0.8.26.
+/// Requires solc 0.8.29.
 pub fn build(
   include_paths: &[&str],
   contracts_path: &str,
@@ -35,8 +35,8 @@ pub fn build(
       if let Some(version) = line.strip_prefix("Version: ") {
         let version =
           version.split('+').next().ok_or_else(|| "no value present on line".to_string())?;
-        if version != "0.8.26" {
-          Err(format!("version was {version}, 0.8.26 required"))?
+        if version != "0.8.29" {
+          Err(format!("version was {version}, 0.8.29 required"))?
         }
       }
     }


@@ -53,8 +53,7 @@ async fn main() {
     let db = db.clone();
     tokio::spawn(async move {
       let mut db = db.clone();
-      loop {
-        let Ok(msg_len) = socket.read_u32_le().await else { break };
+      while let Ok(msg_len) = socket.read_u32_le().await {
         let mut buf = vec![0; usize::try_from(msg_len).unwrap()];
         let Ok(_) = socket.read_exact(&mut buf).await else { break };


@@ -16,10 +16,12 @@ rustdoc-args = ["--cfg", "docsrs"]
 workspace = true

 [dependencies]
-subtle = { version = "2", default-features = false, features = ["std"] }
-sha3 = { version = "0.10", default-features = false, features = ["std"] }
-group = { version = "0.13", default-features = false, features = ["alloc"] }
-k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic"] }
+std-shims = { path = "../../../common/std-shims", version = "0.1", default-features = false }
+
+subtle = { version = "2", default-features = false }
+sha3 = { version = "0.10", default-features = false }
+group = { version = "0.13", default-features = false }
+k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic"] }

 [build-dependencies]
 build-solidity-contracts = { path = "../build-contracts", version = "0.1" }
@@ -40,3 +42,8 @@ alloy-provider = { version = "1", default-features = false }
 alloy-node-bindings = { version = "1", default-features = false }

 tokio = { version = "1", default-features = false, features = ["macros"] }
+
+[features]
+alloc = ["std-shims/alloc", "group/alloc"]
+std = ["alloc", "std-shims/std", "subtle/std", "sha3/std", "k256/std"]
+default = ["std"]

Some files were not shown because too many files have changed in this diff.