16 Commits

Author SHA1 Message Date
Luke Parker
1a766ab773 Populate UnbalancedMerkleTrees in headers 2025-03-04 06:00:06 -05:00
Luke Parker
df2ae10d2f Add an UnbalancedMerkleTree primitive
The reasoning for it is documented alongside the primitive itself. The plan is to use it within
our header for committing to the DAG (allowing one header per epoch, yet
logarithmic proofs for any header within the epoch), the transactions
commitment (allowing logarithmic proofs of a transaction within a block,
without padding), and the events commitment (allowing logarithmic proofs of
unique events within a block, despite events not inherently having a unique ID).

This also defines transaction hashes and performs the necessary modifications
for transactions to be unique.
2025-03-04 04:00:05 -05:00
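
The commit message describes committing to items without padding while keeping logarithmic proofs. As a rough illustration only (not the actual primitive; the pairing rule and hash interface here are assumptions), an unbalanced Merkle root can be computed by promoting any node left without a sibling to the next layer unchanged:

```rust
// A minimal sketch, assuming the "promote the lone node" pairing rule: no
// padding leaves are ever introduced, and a membership proof only needs one
// sibling per layer, so it stays logarithmic in the number of leaves.
fn unbalanced_merkle_root(
  hash_branch: &impl Fn(&[u8; 32], &[u8; 32]) -> [u8; 32],
  leaves: &[[u8; 32]],
) -> Option<[u8; 32]> {
  if leaves.is_empty() {
    return None;
  }
  let mut layer = leaves.to_vec();
  while layer.len() > 1 {
    let mut next = Vec::with_capacity(layer.len().div_ceil(2));
    for pair in layer.chunks(2) {
      match pair {
        // A full pair is hashed together
        [left, right] => next.push(hash_branch(left, right)),
        // A lone node is carried up as-is rather than padded
        [lone] => next.push(*lone),
        _ => unreachable!(),
      }
    }
    layer = next;
  }
  Some(layer[0])
}
```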
Luke Parker
b92ac4a15b Use borsh entirely in create_db 2025-02-26 14:50:59 -05:00
Luke Parker
51bae4fedc Remove now-consolidated primitives crates 2025-02-26 14:49:28 -05:00
Luke Parker
ee8b353132 Skeleton runtime with new types 2025-02-26 14:16:04 -05:00
Luke Parker
a2d558ee34 Have apply return Ok even if calls failed
This ensures fees are paid, and block building isn't interrupted, even for TXs
which error.
2025-02-26 08:00:07 -05:00
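
A rough sketch of the described behaviour (the names and signatures are assumptions, not the runtime's actual API): the fee is charged unconditionally and per-call failures are surfaced inside the Ok value instead of aborting the transaction's application:

```rust
// A minimal sketch, assuming a fee hook and a list of fallible calls. `apply`
// still returns Ok when individual calls error, so fee payment is retained and
// block building continues.
struct CallError;

fn apply(
  charge_fee: impl FnOnce(),
  calls: &[&dyn Fn() -> Result<(), CallError>],
) -> Result<Vec<Result<(), CallError>>, ()> {
  // Fees are collected regardless of how the calls fare
  charge_fee();
  // Each call's outcome is recorded, yet none of them fail the transaction
  Ok(calls.iter().map(|call| call()).collect())
}
```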
Luke Parker
3273a4b725 Serialize BoundedVec not with a u32 length, but the minimum-viable uN where N%8==0
This does break borsh's definition of a Vec EXCEPT if the BoundedVec is
considered an enum. For sufficiently low bounds, this is viable, though it
requires automated code generation to be sane.
2025-02-26 07:41:07 -05:00
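
As a sketch of the encoding described (assuming a bound below 256, so the minimum-viable length type is u8; this is an illustration, not the crate's implementation):

```rust
// A minimal sketch: the length prefix is a single byte rather than borsh's u32.
// Treating each possible length as an enum variant is what keeps this
// describable in borsh's terms, as the commit message notes.
fn serialize_bounded_vec(elems: &[[u8; 32]]) -> Vec<u8> {
  // The bound must fit the chosen uN; here N = 8
  let len = u8::try_from(elems.len()).expect("exceeded the BoundedVec's bound");
  let mut out = Vec::with_capacity(1 + (elems.len() * 32));
  out.push(len);
  for elem in elems {
    out.extend_from_slice(elem);
  }
  out
}
```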
Luke Parker
df87abbae0 Correct distinction/flow of check/validate/apply 2025-02-26 07:24:58 -05:00
Luke Parker
fdf2ec8e92 Make transaction an enum of Unsigned, Signed 2025-02-26 06:54:42 -05:00
Luke Parker
f92fe922a6 Remove RuntimeCall from Transaction
I believe this was originally here because we needed to return a reference, not
an owned instance, and this caching enabled returning one. Regardless, it isn't
valuable now.
2025-02-26 05:19:04 -05:00
Luke Parker
121a48b55c Add traits necessary for serai_abi::Transaction to be usable in-runtime 2025-02-26 05:05:35 -05:00
Luke Parker
dff9a04a8c Add the UNIX timestamp (in milliseconds) to the block
This is read from the BABE pre-digest when converting from a SubstrateHeader.
This causes the genesis block to have time 0 and all blocks produced with BABE
to have a time of the slot time. While the slot time is in 6-second intervals
(due to our target block time), defining in milliseconds preserves the ABI for
long-term goals (sub-second blocks).

Usage of the slot time deduplicates this field with BABE, and leaves the only
possible manipulation as whether or not to propose during a slot.

The actual reason this was implemented this way is because the Header trait is
overly restrictive and doesn't allow definition with new fields. Even if we
wanted to express the timestamp within the SubstrateHeader, we can't without
replacing Header::new and making a variety of changes to the polkadot-sdk
accordingly. Those aren't worth it at this moment compared to the solution
implemented.
2025-02-17 02:14:31 -05:00
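
A sketch of the described time derivation (the 6-second slot duration and genesis-at-zero behaviour come from the commit message; accessing the BABE pre-digest is elided, and the names here are assumptions):

```rust
// A minimal sketch: the slot number from the BABE pre-digest maps to a
// millisecond timestamp, so the ABI can later support sub-second blocks
// without changing the field's definition.
const SLOT_DURATION_MS: u64 = 6_000;

fn block_time_ms(babe_slot: Option<u64>) -> u64 {
  // The genesis block carries no BABE pre-digest, so its time is 0
  babe_slot.map_or(0, |slot| slot * SLOT_DURATION_MS)
}
```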
Luke Parker
2d8f70036a Redo primitives, abi
Consolidates all primitives into a single crate. We didn't benefit from the
prior fragmentation. I'm hesitant to say the new internal organization is better
(it may be just as clunky), but it's at least in a single crate (not spread out
over micro-crates).

The ABI is the most distinct change. We now entirely own it. Block header hashes don't
directly commit to any BABE data (avoiding potentially ~4 KB headers upon
session changes), and are hashed as borsh (a more widely used codec than
SCALE). There are still Substrate variants, using SCALE and with the BABE data,
but they're prunable from a protocol design perspective.

Defines a transaction as a Vec of Calls, allowing atomic operations.
2025-02-12 03:54:57 -05:00
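
A sketch of the transaction shape described above (all type and field names here are assumptions for illustration, not the ABI's actual definitions):

```rust
// A minimal sketch: a transaction is a Vec of calls which apply atomically,
// and headers/transactions are hashed over their borsh serialization rather
// than SCALE.
struct Call {
  // Which module/handler the call targets, and its serialized arguments
  pallet: u8,
  data: Vec<u8>,
}

struct Transaction {
  // Either every call applies or the transaction fails as a unit
  calls: Vec<Call>,
}
```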
Luke Parker
dd95494d9c Update deny, rust-src component 2025-02-04 08:12:02 -05:00
Luke Parker
653b0e0bbc Update the git tags
This does no actual migration work. It allows establishing the difference in
dependencies between substrate and polkadot-sdk/substrate.
2025-02-04 07:53:41 -05:00
Luke Parker
d78c92bc3e Update nightly version 2025-02-04 00:53:22 -05:00
755 changed files with 43785 additions and 16774 deletions

2
.github/LICENSE vendored
View File

@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) 2022-2025 Luke Parker Copyright (c) 2022-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal

View File

@@ -5,14 +5,14 @@ inputs:
version: version:
description: "Version to download and run" description: "Version to download and run"
required: false required: false
default: "30.0" default: "27.0"
runs: runs:
using: "composite" using: "composite"
steps: steps:
- name: Bitcoin Daemon Cache - name: Bitcoin Daemon Cache
id: cache-bitcoind id: cache-bitcoind
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with: with:
path: bitcoin.tar.gz path: bitcoin.tar.gz
key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }} key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -7,20 +7,13 @@ runs:
- name: Remove unused packages - name: Remove unused packages
shell: bash shell: bash
run: | run: |
# Ensure the repositories are synced sudo apt remove -y "*msbuild*" "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
sudo apt update -y
# Actually perform the removals
sudo apt remove -y "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
sudo apt remove -y "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*" sudo apt remove -y "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
sudo apt remove -y "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*" sudo apt remove -y "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
sudo apt remove -y --allow-remove-essential -f shim-signed *python3*
# This removal command requires the prior removals due to unmet dependencies otherwise
sudo apt remove -y "*qemu*" "*sql*" "*texinfo*" "*imagemagick*" sudo apt remove -y "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
sudo apt autoremove -y
# Reinstall python3 as a general dependency of a functional operating system sudo apt clean
sudo apt install -y python3 --fix-missing docker system prune -a --volumes
if: runner.os == 'Linux' if: runner.os == 'Linux'
- name: Remove unused packages - name: Remove unused packages
@@ -38,48 +31,19 @@ runs:
shell: bash shell: bash
run: | run: |
if [ "$RUNNER_OS" == "Linux" ]; then if [ "$RUNNER_OS" == "Linux" ]; then
sudo apt install -y ca-certificates protobuf-compiler libclang-dev sudo apt install -y ca-certificates protobuf-compiler
elif [ "$RUNNER_OS" == "Windows" ]; then elif [ "$RUNNER_OS" == "Windows" ]; then
choco install protoc choco install protoc
elif [ "$RUNNER_OS" == "macOS" ]; then elif [ "$RUNNER_OS" == "macOS" ]; then
brew install protobuf llvm brew install protobuf
HOMEBREW_ROOT_PATH=/opt/homebrew # Apple Silicon
if [ $(uname -m) = "x86_64" ]; then HOMEBREW_ROOT_PATH=/usr/local; fi # Intel
ls $HOMEBREW_ROOT_PATH/opt/llvm/lib | grep "libclang.dylib" # Make sure this installed `libclang`
echo "DYLD_LIBRARY_PATH=$HOMEBREW_ROOT_PATH/opt/llvm/lib:$DYLD_LIBRARY_PATH" >> "$GITHUB_ENV"
fi fi
- name: Install solc - name: Install solc
shell: bash shell: bash
run: | run: |
cargo +1.91 install svm-rs --version =0.5.19 cargo install svm-rs
svm install 0.8.29 svm install 0.8.26
svm use 0.8.29 svm use 0.8.26
- name: Remove preinstalled Docker
shell: bash
run: |
docker system prune -a --volumes
sudo apt remove -y *docker*
# Install uidmap which will be required for the explicitly installed Docker
sudo apt install uidmap
if: runner.os == 'Linux'
- name: Update system dependencies
shell: bash
run: |
sudo apt update -y
sudo apt upgrade -y
sudo apt autoremove -y
sudo apt clean
if: runner.os == 'Linux'
- name: Install rootless Docker
uses: docker/setup-docker-action@b60f85385d03ac8acfca6d9996982511d8620a19
with:
rootless: true
set-host: true
if: runner.os == 'Linux'
# - name: Cache Rust # - name: Cache Rust
# uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43 # uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43

View File

@@ -5,14 +5,14 @@ inputs:
version: version:
description: "Version to download and run" description: "Version to download and run"
required: false required: false
default: v0.18.4.3 default: v0.18.3.4
runs: runs:
using: "composite" using: "composite"
steps: steps:
- name: Monero Wallet RPC Cache - name: Monero Wallet RPC Cache
id: cache-monero-wallet-rpc id: cache-monero-wallet-rpc
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with: with:
path: monero-wallet-rpc path: monero-wallet-rpc
key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }} key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -5,14 +5,14 @@ inputs:
version: version:
description: "Version to download and run" description: "Version to download and run"
required: false required: false
default: v0.18.4.3 default: v0.18.3.4
runs: runs:
using: "composite" using: "composite"
steps: steps:
- name: Monero Daemon Cache - name: Monero Daemon Cache
id: cache-monerod id: cache-monerod
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with: with:
path: /usr/bin/monerod path: /usr/bin/monerod
key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }} key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -5,12 +5,12 @@ inputs:
monero-version: monero-version:
description: "Monero version to download and run as a regtest node" description: "Monero version to download and run as a regtest node"
required: false required: false
default: v0.18.4.3 default: v0.18.3.4
bitcoin-version: bitcoin-version:
description: "Bitcoin version to download and run as a regtest node" description: "Bitcoin version to download and run as a regtest node"
required: false required: false
default: "30.0" default: "27.1"
runs: runs:
using: "composite" using: "composite"

View File

@@ -1 +1 @@
nightly-2025-11-11 nightly-2025-02-01

View File

@@ -32,17 +32,13 @@ jobs:
-p dalek-ff-group \ -p dalek-ff-group \
-p minimal-ed448 \ -p minimal-ed448 \
-p ciphersuite \ -p ciphersuite \
-p ciphersuite-kp256 \
-p multiexp \ -p multiexp \
-p schnorr-signatures \ -p schnorr-signatures \
-p prime-field \ -p dleq \
-p short-weierstrass \ -p generalized-bulletproofs \
-p secq256k1 \ -p generalized-bulletproofs-circuit-abstraction \
-p embedwards25519 \ -p ec-divisors \
-p generalized-bulletproofs-ec-gadgets \
-p dkg \ -p dkg \
-p dkg-recovery \
-p dkg-dealer \
-p dkg-musig \
-p dkg-evrf \
-p modular-frost \ -p modular-frost \
-p frost-schnorrkel -p frost-schnorrkel

View File

@@ -12,13 +12,13 @@ jobs:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache - name: Advisory Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with: with:
path: ~/.cargo/advisory-db path: ~/.cargo/advisory-db
key: rust-advisory-db key: rust-advisory-db
- name: Install cargo deny - name: Install cargo deny
run: cargo +1.91 install cargo-deny --version =0.18.5 run: cargo install --locked cargo-deny
- name: Run cargo deny - name: Run cargo deny
run: cargo deny -L error --all-features check --hide-inclusion-graph run: cargo deny -L error --all-features check --hide-inclusion-graph

View File

@@ -11,7 +11,7 @@ jobs:
clippy: clippy:
strategy: strategy:
matrix: matrix:
os: [ubuntu-latest, macos-15-intel, macos-latest, windows-latest] os: [ubuntu-latest, macos-13, macos-14, windows-latest]
runs-on: ${{ matrix.os }} runs-on: ${{ matrix.os }}
steps: steps:
@@ -26,7 +26,7 @@ jobs:
uses: ./.github/actions/build-dependencies uses: ./.github/actions/build-dependencies
- name: Install nightly rust - name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c clippy run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasmv1-none -c clippy
- name: Run Clippy - name: Run Clippy
run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
@@ -46,13 +46,13 @@ jobs:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache - name: Advisory Cache
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with: with:
path: ~/.cargo/advisory-db path: ~/.cargo/advisory-db
key: rust-advisory-db key: rust-advisory-db
- name: Install cargo deny - name: Install cargo deny
run: cargo +1.91 install cargo-deny --version =0.18.5 run: cargo install --locked cargo-deny
- name: Run cargo deny - name: Run cargo deny
run: cargo deny -L error --all-features check --hide-inclusion-graph run: cargo deny -L error --all-features check --hide-inclusion-graph
@@ -88,114 +88,19 @@ jobs:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Verify all dependencies are in use - name: Verify all dependencies are in use
run: | run: |
cargo +1.91 install cargo-machete --version =0.9.1 cargo install cargo-machete
cargo +1.91 machete cargo machete
msrv:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Verify claimed `rust-version`
shell: bash
run: |
cargo +1.91 install cargo-msrv --version =0.18.4
function check_msrv {
# We `cd` into the directory passed as the first argument, but will return to the
# directory called from.
return_to=$(pwd)
echo "Checking $1"
cd $1
# We then find the existing `rust-version` using `grep` (for the right line) and then a
# regex (to strip to just the major and minor version).
existing=$(cat ./Cargo.toml | grep "rust-version" | grep -Eo "[0-9]+\.[0-9]+")
# We then backup the `Cargo.toml`, allowing us to restore it after, saving time on future
# MSRV checks (as they'll benefit from immediately exiting if the queried version is less
# than the declared MSRV).
mv ./Cargo.toml ./Cargo.toml.bak
# We then use an inverted (`-v`) grep to remove the existing `rust-version` from the
# `Cargo.toml`, as required because else earlier versions of Rust won't even attempt to
# compile this crate.
cat ./Cargo.toml.bak | grep -v "rust-version" > Cargo.toml
# We then find the actual `rust-version` using `cargo-msrv` (again stripping to just the
# major and minor version).
actual=$(cargo msrv find --output-format minimal | grep -Eo "^[0-9]+\.[0-9]+")
# Finally, we compare the two.
echo "Declared rust-version: $existing"
echo "Actual rust-version: $actual"
[ $existing == $actual ]
result=$?
# Restore the original `Cargo.toml`.
rm Cargo.toml
mv ./Cargo.toml.bak ./Cargo.toml
# Return to the directory called from and return the result.
cd $return_to
return $result
}
# Check each member of the workspace
function check_workspace {
# Get the members array from the workspace's `Cargo.toml`
cargo_toml_lines=$(cat ./Cargo.toml | wc -l)
# Keep all lines after the start of the array, then keep all lines before the next "]"
members=$(cat Cargo.toml | grep "members\ \=\ \[" -m1 -A$cargo_toml_lines | grep "]" -m1 -B$cargo_toml_lines)
# Parse out any comments, whitespace, including comments post-fixed on the same line as an entry
# We accomplish the latter by pruning all characters after the entry's ","
members=$(echo "$members" | grep -Ev "^[[:space:]]*(#|$)" | awk -F',' '{print $1","}')
# Replace the first line, which was "members = [" and is now "members = [,", with "["
members=$(echo "$members" | sed "1s/.*/\[/")
# Correct the last line, which was malleated to "],"
members=$(echo "$members" | sed "$(echo "$members" | wc -l)s/\]\,/\]/")
# Don't check the following
# Most of these are binaries, with the exception of the Substrate runtime which has a
# bespoke build pipeline
members=$(echo "$members" | grep -v "networks/ethereum/relayer\"")
members=$(echo "$members" | grep -v "message-queue\"")
members=$(echo "$members" | grep -v "processor/bin\"")
members=$(echo "$members" | grep -v "processor/bitcoin\"")
members=$(echo "$members" | grep -v "processor/ethereum\"")
members=$(echo "$members" | grep -v "processor/monero\"")
members=$(echo "$members" | grep -v "coordinator\"")
members=$(echo "$members" | grep -v "substrate/runtime\"")
members=$(echo "$members" | grep -v "substrate/node\"")
members=$(echo "$members" | grep -v "orchestration\"")
# Don't check the tests
members=$(echo "$members" | grep -v "mini\"")
members=$(echo "$members" | grep -v "tests/")
# Remove the trailing comma by replacing the last line's "," with ""
members=$(echo "$members" | sed "$(($(echo "$members" | wc -l) - 1))s/\,//")
echo $members | jq -r ".[]" | while read -r member; do
check_msrv $member
correct=$?
if [ $correct -ne 0 ]; then
return $correct
fi
done
}
check_workspace
slither: slither:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Slither - name: Slither
run: | run: |
python3 -m pip install solc-select
solc-select install 0.8.26
solc-select use 0.8.26
python3 -m pip install slither-analyzer python3 -m pip install slither-analyzer
slither --include-paths ./networks/ethereum/schnorr/contracts/Schnorr.sol slither --include-paths ./networks/ethereum/schnorr/contracts/Schnorr.sol

72
.github/workflows/monero-tests.yaml vendored Normal file
View File

@@ -0,0 +1,72 @@
name: Monero Tests
on:
push:
branches:
- develop
paths:
- "networks/monero/**"
- "processor/**"
pull_request:
paths:
- "networks/monero/**"
- "processor/**"
workflow_dispatch:
jobs:
# Only run these once since they will be consistent regardless of any node
unit-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
- name: Run Unit Tests Without Features
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-io --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-generators --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-primitives --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-mlsag --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-clsag --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-borromean --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-bulletproofs --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-rpc --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-address --lib
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --lib
# Doesn't run unit tests with features as the tests workflow will
integration-tests:
runs-on: ubuntu-latest
# Test against all supported protocol versions
strategy:
matrix:
version: [v0.17.3.2, v0.18.3.4]
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
with:
monero-version: ${{ matrix.version }}
- name: Run Integration Tests Without Features
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --test '*'
- name: Run Integration Tests
# Don't run if the tests workflow also will
if: ${{ matrix.version != 'v0.18.3.4' }}
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --all-features --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --all-features --test '*'

259
.github/workflows/msrv.yml vendored Normal file
View File

@@ -0,0 +1,259 @@
name: Weekly MSRV Check
on:
schedule:
- cron: "0 0 * * 0"
workflow_dispatch:
jobs:
msrv-common:
name: Run cargo msrv on common
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on common
run: |
cargo msrv verify --manifest-path common/zalloc/Cargo.toml
cargo msrv verify --manifest-path common/std-shims/Cargo.toml
cargo msrv verify --manifest-path common/env/Cargo.toml
cargo msrv verify --manifest-path common/db/Cargo.toml
cargo msrv verify --manifest-path common/task/Cargo.toml
cargo msrv verify --manifest-path common/request/Cargo.toml
cargo msrv verify --manifest-path common/patchable-async-sleep/Cargo.toml
msrv-crypto:
name: Run cargo msrv on crypto
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on crypto
run: |
cargo msrv verify --manifest-path crypto/transcript/Cargo.toml
cargo msrv verify --manifest-path crypto/ff-group-tests/Cargo.toml
cargo msrv verify --manifest-path crypto/dalek-ff-group/Cargo.toml
cargo msrv verify --manifest-path crypto/ed448/Cargo.toml
cargo msrv verify --manifest-path crypto/multiexp/Cargo.toml
cargo msrv verify --manifest-path crypto/dleq/Cargo.toml
cargo msrv verify --manifest-path crypto/ciphersuite/Cargo.toml
cargo msrv verify --manifest-path crypto/schnorr/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/generalized-bulletproofs/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/circuit-abstraction/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/divisors/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/ec-gadgets/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/embedwards25519/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/secq256k1/Cargo.toml
cargo msrv verify --manifest-path crypto/dkg/Cargo.toml
cargo msrv verify --manifest-path crypto/frost/Cargo.toml
cargo msrv verify --manifest-path crypto/schnorrkel/Cargo.toml
msrv-networks:
name: Run cargo msrv on networks
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on networks
run: |
cargo msrv verify --manifest-path networks/bitcoin/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/build-contracts/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/schnorr/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/alloy-simple-request-transport/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/relayer/Cargo.toml --features parity-db
cargo msrv verify --manifest-path networks/monero/io/Cargo.toml
cargo msrv verify --manifest-path networks/monero/generators/Cargo.toml
cargo msrv verify --manifest-path networks/monero/primitives/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/mlsag/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/clsag/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/borromean/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/bulletproofs/Cargo.toml
cargo msrv verify --manifest-path networks/monero/Cargo.toml
cargo msrv verify --manifest-path networks/monero/rpc/Cargo.toml
cargo msrv verify --manifest-path networks/monero/rpc/simple-request/Cargo.toml
cargo msrv verify --manifest-path networks/monero/wallet/address/Cargo.toml
cargo msrv verify --manifest-path networks/monero/wallet/Cargo.toml
cargo msrv verify --manifest-path networks/monero/verify-chain/Cargo.toml
msrv-message-queue:
name: Run cargo msrv on message-queue
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on message-queue
run: |
cargo msrv verify --manifest-path message-queue/Cargo.toml --features parity-db
msrv-processor:
name: Run cargo msrv on processor
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on processor
run: |
cargo msrv verify --manifest-path processor/view-keys/Cargo.toml
cargo msrv verify --manifest-path processor/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/messages/Cargo.toml
cargo msrv verify --manifest-path processor/scanner/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/smart-contract/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/standard/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/transaction-chaining/Cargo.toml
cargo msrv verify --manifest-path processor/key-gen/Cargo.toml
cargo msrv verify --manifest-path processor/frost-attempt-manager/Cargo.toml
cargo msrv verify --manifest-path processor/signers/Cargo.toml
cargo msrv verify --manifest-path processor/bin/Cargo.toml --features parity-db
cargo msrv verify --manifest-path processor/bitcoin/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/test-primitives/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/erc20/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/deployer/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/router/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/Cargo.toml
cargo msrv verify --manifest-path processor/monero/Cargo.toml
msrv-coordinator:
name: Run cargo msrv on coordinator
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on coordinator
run: |
cargo msrv verify --manifest-path coordinator/tributary-sdk/tendermint/Cargo.toml
cargo msrv verify --manifest-path coordinator/tributary-sdk/Cargo.toml
cargo msrv verify --manifest-path coordinator/cosign/Cargo.toml
cargo msrv verify --manifest-path coordinator/substrate/Cargo.toml
cargo msrv verify --manifest-path coordinator/tributary/Cargo.toml
cargo msrv verify --manifest-path coordinator/p2p/Cargo.toml
cargo msrv verify --manifest-path coordinator/p2p/libp2p/Cargo.toml
cargo msrv verify --manifest-path coordinator/Cargo.toml
msrv-substrate:
name: Run cargo msrv on substrate
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on substrate
run: |
cargo msrv verify --manifest-path substrate/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/coins/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/coins/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/dex/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/economic-security/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/genesis-liquidity/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/genesis-liquidity/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/in-instructions/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/in-instructions/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/validator-sets/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/validator-sets/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/emissions/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/emissions/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/signals/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/signals/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/abi/Cargo.toml
cargo msrv verify --manifest-path substrate/client/Cargo.toml
cargo msrv verify --manifest-path substrate/runtime/Cargo.toml
cargo msrv verify --manifest-path substrate/node/Cargo.toml
msrv-orchestration:
name: Run cargo msrv on orchestration
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on message-queue
run: |
cargo msrv verify --manifest-path orchestration/Cargo.toml
msrv-mini:
name: Run cargo msrv on mini
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on mini
run: |
cargo msrv verify --manifest-path mini/Cargo.toml

View File

@@ -34,3 +34,16 @@ jobs:
-p ethereum-schnorr-contract \ -p ethereum-schnorr-contract \
-p alloy-simple-request-transport \ -p alloy-simple-request-transport \
-p serai-ethereum-relayer \ -p serai-ethereum-relayer \
-p monero-io \
-p monero-generators \
-p monero-primitives \
-p monero-mlsag \
-p monero-clsag \
-p monero-borromean \
-p monero-bulletproofs \
-p monero-serai \
-p monero-rpc \
-p monero-simple-request-rpc \
-p monero-address \
-p monero-wallet \
-p monero-serai-verify-chain

View File

@@ -28,18 +28,8 @@ jobs:
- name: Install Build Dependencies - name: Install Build Dependencies
uses: ./.github/actions/build-dependencies uses: ./.github/actions/build-dependencies
- name: Get nightly version to use
id: nightly
shell: bash
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Install RISC-V Toolchain - name: Install RISC-V Toolchain
run: | run: sudo apt update && sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib && rustup target add riscv32imac-unknown-none-elf
sudo apt update
sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib
rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal --component rust-src --target riscv32imac-unknown-none-elf
- name: Verify no-std builds - name: Verify no-std builds
run: | run: CFLAGS=-I/usr/include cargo build --target riscv32imac-unknown-none-elf -p serai-no-std-tests
CFLAGS=-I/usr/include cargo +${{ steps.nightly.outputs.version }} build --target riscv32imac-unknown-none-elf -Z build-std=core -p serai-no-std-tests
CFLAGS=-I/usr/include cargo +${{ steps.nightly.outputs.version }} build --target riscv32imac-unknown-none-elf -Z build-std=core,alloc -p serai-no-std-tests --features "alloc"

View File

@@ -46,16 +46,16 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac uses: actions/checkout@v3
- name: Setup Ruby - name: Setup Ruby
uses: ruby/setup-ruby@44511735964dcb71245e7e55f72539531f7bc0eb uses: ruby/setup-ruby@v1
with: with:
bundler-cache: true bundler-cache: true
cache-version: 0 cache-version: 0
working-directory: "${{ github.workspace }}/docs" working-directory: "${{ github.workspace }}/docs"
- name: Setup Pages - name: Setup Pages
id: pages id: pages
uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b uses: actions/configure-pages@v3
- name: Build with Jekyll - name: Build with Jekyll
run: cd ${{ github.workspace }}/docs && bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}" run: cd ${{ github.workspace }}/docs && bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
env: env:
@@ -69,12 +69,12 @@ jobs:
uses: ./.github/actions/build-dependencies uses: ./.github/actions/build-dependencies
- name: Buld Rust docs - name: Buld Rust docs
run: | run: |
rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-docs rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasmv1-none -c rust-docs
RUSTDOCFLAGS="--cfg docsrs" cargo +${{ steps.nightly.outputs.version }} doc --workspace --no-deps --all-features RUSTDOCFLAGS="--cfg docsrs" cargo +${{ steps.nightly.outputs.version }} doc --workspace --all-features
mv target/doc docs/_site/rust mv target/doc docs/_site/rust
- name: Upload artifact - name: Upload artifact
uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b uses: actions/upload-pages-artifact@v3
with: with:
path: "docs/_site/" path: "docs/_site/"
@@ -88,4 +88,4 @@ jobs:
steps: steps:
- name: Deploy to GitHub Pages - name: Deploy to GitHub Pages
id: deployment id: deployment
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e uses: actions/deploy-pages@v4

7
.gitignore vendored
View File

@@ -1,14 +1,7 @@
target target
# Don't commit any `Cargo.lock` which aren't the workspace's
Cargo.lock
!./Cargo.lock
# Don't commit any `Dockerfile`, as they're auto-generated, except the only one which isn't
Dockerfile Dockerfile
Dockerfile.fast-epoch Dockerfile.fast-epoch
!orchestration/runtime/Dockerfile !orchestration/runtime/Dockerfile
.test-logs .test-logs
.vscode .vscode

6186
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -1,6 +1,20 @@
[workspace] [workspace]
resolver = "2" resolver = "2"
members = [ members = [
# Version patches
"patches/parking_lot_core",
"patches/parking_lot",
"patches/zstd",
"patches/rocksdb",
# std patches
"patches/matches",
"patches/is-terminal",
# Rewrites/redirects
"patches/option-ext",
"patches/directories-next",
"common/std-shims", "common/std-shims",
"common/zalloc", "common/zalloc",
"common/patchable-async-sleep", "common/patchable-async-sleep",
@@ -15,21 +29,19 @@ members = [
"crypto/dalek-ff-group", "crypto/dalek-ff-group",
"crypto/ed448", "crypto/ed448",
"crypto/ciphersuite", "crypto/ciphersuite",
"crypto/ciphersuite/kp256",
"crypto/multiexp", "crypto/multiexp",
"crypto/schnorr", "crypto/schnorr",
"crypto/dleq",
"crypto/prime-field", "crypto/evrf/secq256k1",
"crypto/short-weierstrass", "crypto/evrf/embedwards25519",
"crypto/secq256k1", "crypto/evrf/generalized-bulletproofs",
"crypto/embedwards25519", "crypto/evrf/circuit-abstraction",
"crypto/evrf/divisors",
"crypto/evrf/ec-gadgets",
"crypto/dkg", "crypto/dkg",
"crypto/dkg/recovery",
"crypto/dkg/dealer",
"crypto/dkg/musig",
"crypto/dkg/evrf",
"crypto/frost", "crypto/frost",
"crypto/schnorrkel", "crypto/schnorrkel",
@@ -40,6 +52,20 @@ members = [
"networks/ethereum/alloy-simple-request-transport", "networks/ethereum/alloy-simple-request-transport",
"networks/ethereum/relayer", "networks/ethereum/relayer",
"networks/monero/io",
"networks/monero/generators",
"networks/monero/primitives",
"networks/monero/ringct/mlsag",
"networks/monero/ringct/clsag",
"networks/monero/ringct/borromean",
"networks/monero/ringct/bulletproofs",
"networks/monero",
"networks/monero/rpc",
"networks/monero/rpc/simple-request",
"networks/monero/wallet/address",
"networks/monero/wallet",
"networks/monero/verify-chain",
"message-queue", "message-queue",
"processor/messages", "processor/messages",
@@ -77,31 +103,17 @@ members = [
"coordinator", "coordinator",
"substrate/primitives", "substrate/primitives",
"substrate/coins/primitives",
"substrate/coins/pallet",
"substrate/dex/pallet",
"substrate/validator-sets/primitives",
"substrate/validator-sets/pallet",
"substrate/genesis-liquidity/primitives",
"substrate/genesis-liquidity/pallet",
"substrate/emissions/primitives",
"substrate/emissions/pallet",
"substrate/economic-security/pallet",
"substrate/in-instructions/primitives",
"substrate/in-instructions/pallet",
"substrate/signals/primitives",
"substrate/signals/pallet",
"substrate/abi", "substrate/abi",
"substrate/coins",
"substrate/validator-sets",
"substrate/signals",
"substrate/dex",
"substrate/genesis-liquidity",
"substrate/economic-security",
"substrate/emissions",
"substrate/in-instructions",
"substrate/runtime", "substrate/runtime",
"substrate/node", "substrate/node",
@@ -121,83 +133,56 @@ members = [
"tests/reproducible-runtime", "tests/reproducible-runtime",
] ]
[profile.dev.package]
# Always compile Monero (and a variety of dependencies) with optimizations due # Always compile Monero (and a variety of dependencies) with optimizations due
# to the extensive operations required for Bulletproofs # to the extensive operations required for Bulletproofs
[profile.dev.package]
subtle = { opt-level = 3 } subtle = { opt-level = 3 }
sha3 = { opt-level = 3 }
blake2 = { opt-level = 3 }
ff = { opt-level = 3 } ff = { opt-level = 3 }
group = { opt-level = 3 } group = { opt-level = 3 }
crypto-bigint = { opt-level = 3 } crypto-bigint = { opt-level = 3 }
secp256k1 = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 } curve25519-dalek = { opt-level = 3 }
dalek-ff-group = { opt-level = 3 } dalek-ff-group = { opt-level = 3 }
minimal-ed448 = { opt-level = 3 }
multiexp = { opt-level = 3 } multiexp = { opt-level = 3 }
secq256k1 = { opt-level = 3 }
embedwards25519 = { opt-level = 3 }
generalized-bulletproofs = { opt-level = 3 }
generalized-bulletproofs-circuit-abstraction = { opt-level = 3 }
ec-divisors = { opt-level = 3 }
generalized-bulletproofs-ec-gadgets = { opt-level = 3 }
dkg = { opt-level = 3 }
monero-generators = { opt-level = 3 } monero-generators = { opt-level = 3 }
monero-borromean = { opt-level = 3 } monero-borromean = { opt-level = 3 }
monero-bulletproofs = { opt-level = 3 } monero-bulletproofs = { opt-level = 3 }
monero-mlsag = { opt-level = 3 } monero-mlsag = { opt-level = 3 }
monero-clsag = { opt-level = 3 } monero-clsag = { opt-level = 3 }
monero-oxide = { opt-level = 3 }
# Always compile the eVRF DKG tree with optimizations as well
secp256k1 = { opt-level = 3 }
secq256k1 = { opt-level = 3 }
embedwards25519 = { opt-level = 3 }
generalized-bulletproofs = { opt-level = 3 }
generalized-bulletproofs-circuit-abstraction = { opt-level = 3 }
generalized-bulletproofs-ec-gadgets = { opt-level = 3 }
# revm also effectively requires being built with optimizations
revm = { opt-level = 3 }
revm-bytecode = { opt-level = 3 }
revm-context = { opt-level = 3 }
revm-context-interface = { opt-level = 3 }
revm-database = { opt-level = 3 }
revm-database-interface = { opt-level = 3 }
revm-handler = { opt-level = 3 }
revm-inspector = { opt-level = 3 }
revm-interpreter = { opt-level = 3 }
revm-precompile = { opt-level = 3 }
revm-primitives = { opt-level = 3 }
revm-state = { opt-level = 3 }
[profile.release] [profile.release]
panic = "unwind" panic = "unwind"
overflow-checks = true
[patch.crates-io] [patch.crates-io]
# Point to empty crates for unused crates in our tree
ark-ff-3 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.3" }
ark-ff-4 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.4" }
c-kzg = { path = "patches/ethereum/c-kzg" }
secp256k1-30 = { package = "secp256k1", path = "patches/ethereum/secp256k1-30" }
# Dependencies from monero-oxide which originate from within our own tree
std-shims = { path = "patches/std-shims" }
simple-request = { path = "patches/simple-request" }
multiexp = { path = "crypto/multiexp" }
flexible-transcript = { path = "crypto/transcript" }
ciphersuite = { path = "patches/ciphersuite" }
dalek-ff-group = { path = "patches/dalek-ff-group" }
minimal-ed448 = { path = "crypto/ed448" }
modular-frost = { path = "crypto/frost" }
# This has a non-deprecated `std` alternative since Rust's 2024 edition
home = { path = "patches/home" }
# Updates to the latest version
darling = { path = "patches/darling" }
thiserror = { path = "patches/thiserror" }
# https://github.com/rust-lang-nursery/lazy-static.rs/issues/201 # https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" } lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
parking_lot_core = { path = "patches/parking_lot_core" }
parking_lot = { path = "patches/parking_lot" }
# wasmtime pulls in an old version for this
zstd = { path = "patches/zstd" }
# Needed for WAL compression
rocksdb = { path = "patches/rocksdb" }
# is-terminal now has an std-based solution with an equivalent API
is-terminal = { path = "patches/is-terminal" }
# So does matches
matches = { path = "patches/matches" }
# directories-next was created because directories was unmaintained # directories-next was created because directories was unmaintained
# directories-next is now unmaintained while directories is maintained # directories-next is now unmaintained while directories is maintained
# The directories author pulls in ridiculously pointless crates and prefers # The directories author pulls in ridiculously pointless crates and prefers
@@ -206,22 +191,10 @@ lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev
option-ext = { path = "patches/option-ext" } option-ext = { path = "patches/option-ext" }
directories-next = { path = "patches/directories-next" } directories-next = { path = "patches/directories-next" }
# Patch from a fork back to upstream
parity-bip39 = { path = "patches/parity-bip39" }
# Patch to include `FromUniformBytes<64>` over `Scalar`
k256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
p256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
# `jemalloc` conflicts with `mimalloc`, so patch to a `rocksdb` which never uses `jemalloc`
librocksdb-sys = { path = "patches/librocksdb-sys" }
[workspace.lints.clippy] [workspace.lints.clippy]
unwrap_or_default = "allow" unwrap_or_default = "allow"
map_unwrap_or = "allow" map_unwrap_or = "allow"
needless_continue = "allow" needless_continue = "allow"
manual_is_multiple_of = "allow"
incompatible_msrv = "allow" # Manually verified with a GitHub workflow
borrow_as_ptr = "deny" borrow_as_ptr = "deny"
cast_lossless = "deny" cast_lossless = "deny"
cast_possible_truncation = "deny" cast_possible_truncation = "deny"
@@ -260,7 +233,7 @@ redundant_closure_for_method_calls = "deny"
redundant_else = "deny" redundant_else = "deny"
string_add_assign = "deny" string_add_assign = "deny"
string_slice = "deny" string_slice = "deny"
unchecked_time_subtraction = "deny" unchecked_duration_subtraction = "deny"
uninlined_format_args = "deny" uninlined_format_args = "deny"
unnecessary_box_returns = "deny" unnecessary_box_returns = "deny"
unnecessary_join = "deny" unnecessary_join = "deny"
@@ -269,6 +242,3 @@ unnested_or_patterns = "deny"
unused_async = "deny" unused_async = "deny"
unused_self = "deny" unused_self = "deny"
zero_sized_map_values = "deny" zero_sized_map_values = "deny"
[workspace.lints.rust]
unused = "allow" # TODO: https://github.com/rust-lang/rust/issues/147648

View File

@@ -59,6 +59,7 @@ issued at the discretion of the Immunefi program managers.
- [Website](https://serai.exchange/): https://serai.exchange/ - [Website](https://serai.exchange/): https://serai.exchange/
- [Immunefi](https://immunefi.com/bounty/serai/): https://immunefi.com/bounty/serai/ - [Immunefi](https://immunefi.com/bounty/serai/): https://immunefi.com/bounty/serai/
- [Twitter](https://twitter.com/SeraiDEX): https://twitter.com/SeraiDEX - [Twitter](https://twitter.com/SeraiDEX): https://twitter.com/SeraiDEX
- [Mastodon](https://cryptodon.lol/@serai): https://cryptodon.lol/@serai
- [Discord](https://discord.gg/mpEUtJR3vz): https://discord.gg/mpEUtJR3vz - [Discord](https://discord.gg/mpEUtJR3vz): https://discord.gg/mpEUtJR3vz
- [Matrix](https://matrix.to/#/#serai:matrix.org): https://matrix.to/#/#serai:matrix.org - [Matrix](https://matrix.to/#/#serai:matrix.org): https://matrix.to/#/#serai:matrix.org
- [Reddit](https://www.reddit.com/r/SeraiDEX/): https://www.reddit.com/r/SeraiDEX/ - [Reddit](https://www.reddit.com/r/SeraiDEX/): https://www.reddit.com/r/SeraiDEX/

View File

@@ -1,14 +0,0 @@
# Trail of Bits Ethereum Contracts Audit, June 2025
This audit included:
- Our Schnorr contract and associated library (/networks/ethereum/schnorr)
- Our Ethereum primitives library (/processor/ethereum/primitives)
- Our Deployer contract and associated library (/processor/ethereum/deployer)
- Our ERC20 library (/processor/ethereum/erc20)
- Our Router contract and associated library (/processor/ethereum/router)
It encompasses up to commit 4e0c58464fc4673623938335f06e2e9ea96ca8dd.
Please see
https://github.com/trailofbits/publications/blob/30c4fa3ebf39ff8e4d23ba9567344ec9691697b5/reviews/2025-04-serai-dex-security-review.pdf
for the actual report.

View File

@@ -1,50 +0,0 @@
# eVRF DKG
In 2024, the [eVRF paper](https://eprint.iacr.org/2024/397) was published to
the IACR preprint server. Within it was a one-round unbiased DKG and a
one-round unbiased threshold DKG. Unfortunately, both simply describe
communication of the secret shares as 'Alice sends $s_b$ to Bob'. This causes,
in practice, the need for an additional round of communication to occur where
all participants confirm they received their secret shares.
Within Serai, it was posited to use the same premises as the DDH eVRF itself to
achieve a verifiable encryption scheme. This allows the secret shares to be
posted to any 'bulletin board' (such as a blockchain) and for all observers to
confirm:
- A participant participated
- The secret shares sent can be received by the intended recipient so long as
they can access the bulletin board
Additionally, Serai desired a robust scheme (albeit with a biased key as the
output, which is fine for our purposes). Accordingly, our implementation
instantiates the threshold eVRF DKG from the eVRF paper, with our own proposal
for verifiable encryption, with the caller allowed to decide the set of
participants. They may:
- Select everyone, collapsing to the non-threshold unbiased DKG from the eVRF
paper
- Select a pre-determined set, collapsing to the threshold unbiased DKG from
the eVRF paper
- Select a post-determined set (with any solution for the Common Subset
problem), allowing achieving a robust threshold biased DKG
Note that the eVRF paper proposes using the eVRF to sample coefficients yet
this is unnecessary when the resulting key will be biased. Any proof of
knowledge for the coefficients, as necessary for their extraction within the
security proofs, would be sufficient.
MAGIC Grants contracted HashCloak to formalize Serai's proposal for a DKG and
provide proofs for its security. This resulted in
[this paper](<./Security Proofs.pdf>).
Our implementation itself is then built on top of the audited
[`generalized-bulletproofs`](https://github.com/kayabaNerve/monero-oxide/tree/generalized-bulletproofs/audits/crypto/generalized-bulletproofs)
and
[`generalized-bulletproofs-ec-gadgets`](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/fcmps).
Note we do not use the originally premised DDH eVRF but rather the one premised
on elliptic curve divisors, the methodology of which is commented on
[here](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/divisors).
Our implementation itself is, however, unaudited at this time.

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/db"
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = [] keywords = []
edition = "2021" edition = "2021"
rust-version = "1.65" rust-version = "1.71"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -17,8 +17,8 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true workspace = true
[dependencies] [dependencies]
parity-db = { version = "0.5", default-features = false, optional = true } parity-db = { version = "0.4", default-features = false, optional = true }
rocksdb = { version = "0.24", default-features = false, features = ["zstd"], optional = true } rocksdb = { version = "0.23", default-features = false, features = ["zstd"], optional = true }
[features] [features]
parity-db = ["dep:parity-db"] parity-db = ["dep:parity-db"]

View File

@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) 2022-2025 Luke Parker Copyright (c) 2022-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal

View File

@@ -15,7 +15,7 @@ pub fn serai_db_key(
/// ///
/// Creates a unit struct and a default implementation for the `key`, `get`, and `set`. The macro /// Creates a unit struct and a default implementation for the `key`, `get`, and `set`. The macro
/// uses a syntax similar to defining a function. Parameters are concatenated to produce a key, /// uses a syntax similar to defining a function. Parameters are concatenated to produce a key,
/// they must be `scale` encodable. The return type is used to auto encode and decode the database /// they must be `borsh` serializable. The return type is used to auto (de)serialize the database
/// value bytes using `borsh`. /// value bytes using `borsh`.
/// ///
/// # Arguments /// # Arguments
@@ -54,11 +54,10 @@ macro_rules! create_db {
)?; )?;
impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? { impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
pub(crate) fn key($($arg: $arg_type),*) -> Vec<u8> { pub(crate) fn key($($arg: $arg_type),*) -> Vec<u8> {
use scale::Encode;
$crate::serai_db_key( $crate::serai_db_key(
stringify!($db_name).as_bytes(), stringify!($db_name).as_bytes(),
stringify!($field_name).as_bytes(), stringify!($field_name).as_bytes(),
($($arg),*).encode() &borsh::to_vec(&($($arg),*)).unwrap(),
) )
} }
pub(crate) fn set( pub(crate) fn set(

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/env"
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = [] keywords = []
edition = "2021" edition = "2021"
rust-version = "1.64" rust-version = "1.71"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true

2
common/env/LICENSE vendored
View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license AGPL-3.0-only license
Copyright (c) 2023-2025 Luke Parker Copyright (c) 2023 Luke Parker
This program is free software: you can redistribute it and/or modify This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -1,5 +1,5 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_auto_cfg))]
// Obtain a variable from the Serai environment/secret store. // Obtain a variable from the Serai environment/secret store.
pub fn var(variable: &str) -> Option<String> { pub fn var(variable: &str) -> Option<String> {

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/patchable-a
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["async", "sleep", "tokio", "smol", "async-std"] keywords = ["async", "sleep", "tokio", "smol", "async-std"]
edition = "2021" edition = "2021"
rust-version = "1.70" rust-version = "1.71"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true

View File

@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) 2024-2025 Luke Parker Copyright (c) 2024 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")] #![doc = include_str!("../README.md")]
#![deny(missing_docs)] #![deny(missing_docs)]

View File

@@ -1,9 +1,9 @@
[package] [package]
name = "simple-request" name = "simple-request"
version = "0.3.0" version = "0.1.0"
description = "A simple HTTP(S) request library" description = "A simple HTTP(S) request library"
license = "MIT" license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/request" repository = "https://github.com/serai-dex/serai/tree/develop/common/simple-request"
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["http", "https", "async", "request", "ssl"] keywords = ["http", "https", "async", "request", "ssl"]
edition = "2021" edition = "2021"
@@ -19,10 +19,9 @@ workspace = true
[dependencies] [dependencies]
tower-service = { version = "0.3", default-features = false } tower-service = { version = "0.3", default-features = false }
hyper = { version = "1", default-features = false, features = ["http1", "client"] } hyper = { version = "1", default-features = false, features = ["http1", "client"] }
hyper-util = { version = "0.1", default-features = false, features = ["http1", "client-legacy"] } hyper-util = { version = "0.1", default-features = false, features = ["http1", "client-legacy", "tokio"] }
http-body-util = { version = "0.1", default-features = false } http-body-util = { version = "0.1", default-features = false }
futures-util = { version = "0.3", default-features = false, features = ["std"] } tokio = { version = "1", default-features = false }
tokio = { version = "1", default-features = false, features = ["sync"] }
hyper-rustls = { version = "0.27", default-features = false, features = ["http1", "ring", "rustls-native-certs", "native-tokio"], optional = true } hyper-rustls = { version = "0.27", default-features = false, features = ["http1", "ring", "rustls-native-certs", "native-tokio"], optional = true }
@@ -30,8 +29,6 @@ zeroize = { version = "1", optional = true }
base64ct = { version = "1", features = ["alloc"], optional = true } base64ct = { version = "1", features = ["alloc"], optional = true }
[features] [features]
tokio = ["hyper-util/tokio"] tls = ["hyper-rustls"]
tls = ["tokio", "hyper-rustls"]
webpki-roots = ["tls", "hyper-rustls/webpki-roots"]
basic-auth = ["zeroize", "base64ct"] basic-auth = ["zeroize", "base64ct"]
default = ["tls"] default = ["tls"]

View File

@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) 2023-2025 Luke Parker Copyright (c) 2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,20 +1,19 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")] #![doc = include_str!("../README.md")]
use core::{pin::Pin, future::Future};
use std::sync::Arc; use std::sync::Arc;
use futures_util::FutureExt; use tokio::sync::Mutex;
use ::tokio::sync::Mutex;
use tower_service::Service as TowerService; use tower_service::Service as TowerService;
use hyper::{Uri, header::HeaderValue, body::Bytes, client::conn::http1::SendRequest, rt::Executor};
pub use hyper;
use hyper_util::client::legacy::{Client as HyperClient, connect::HttpConnector};
#[cfg(feature = "tls")] #[cfg(feature = "tls")]
use hyper_rustls::{HttpsConnectorBuilder, HttpsConnector}; use hyper_rustls::{HttpsConnectorBuilder, HttpsConnector};
use hyper::{Uri, header::HeaderValue, body::Bytes, client::conn::http1::SendRequest};
use hyper_util::{
rt::tokio::TokioExecutor,
client::legacy::{Client as HyperClient, connect::HttpConnector},
};
pub use hyper;
mod request; mod request;
pub use request::*; pub use request::*;
@@ -38,86 +37,52 @@ type Connector = HttpConnector;
type Connector = HttpsConnector<HttpConnector>; type Connector = HttpsConnector<HttpConnector>;
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
enum Connection< enum Connection {
E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
> {
ConnectionPool(HyperClient<Connector, Full<Bytes>>), ConnectionPool(HyperClient<Connector, Full<Bytes>>),
Connection { Connection {
executor: E,
connector: Connector, connector: Connector,
host: Uri, host: Uri,
connection: Arc<Mutex<Option<SendRequest<Full<Bytes>>>>>, connection: Arc<Mutex<Option<SendRequest<Full<Bytes>>>>>,
}, },
} }
/// An HTTP client.
///
/// `tls` is only guaranteed to work when using the `tokio` executor. Instantiating a client when
/// the `tls` feature is active without using the `tokio` executor will cause errors.
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
pub struct Client< pub struct Client {
E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>, connection: Connection,
> {
connection: Connection<E>,
} }
impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>> impl Client {
Client<E> fn connector() -> Connector {
{
#[allow(clippy::unnecessary_wraps)]
fn connector() -> Result<Connector, Error> {
let mut res = HttpConnector::new(); let mut res = HttpConnector::new();
res.set_keepalive(Some(core::time::Duration::from_secs(60))); res.set_keepalive(Some(core::time::Duration::from_secs(60)));
res.set_nodelay(true); res.set_nodelay(true);
res.set_reuse_address(true); res.set_reuse_address(true);
#[cfg(feature = "tls")]
if core::any::TypeId::of::<E>() !=
core::any::TypeId::of::<hyper_util::rt::tokio::TokioExecutor>()
{
Err(Error::ConnectionError(
"`tls` feature enabled but not using the `tokio` executor".into(),
))?;
}
#[cfg(feature = "tls")] #[cfg(feature = "tls")]
res.enforce_http(false); res.enforce_http(false);
#[cfg(feature = "tls")] #[cfg(feature = "tls")]
let https = HttpsConnectorBuilder::new().with_native_roots(); let res = HttpsConnectorBuilder::new()
#[cfg(all(feature = "tls", not(feature = "webpki-roots")))] .with_native_roots()
let https = https.map_err(|e| { .expect("couldn't fetch system's SSL roots")
Error::ConnectionError( .https_or_http()
format!("couldn't load system's SSL root certificates and webpki-roots unavilable: {e:?}") .enable_http1()
.into(), .wrap_connector(res);
) res
})?;
// Fallback to `webpki-roots` if present
#[cfg(all(feature = "tls", feature = "webpki-roots"))]
let https = https.unwrap_or(HttpsConnectorBuilder::new().with_webpki_roots());
#[cfg(feature = "tls")]
let res = https.https_or_http().enable_http1().wrap_connector(res);
Ok(res)
} }
pub fn with_executor_and_connection_pool(executor: E) -> Result<Client<E>, Error> { pub fn with_connection_pool() -> Client {
Ok(Client { Client {
connection: Connection::ConnectionPool( connection: Connection::ConnectionPool(
HyperClient::builder(executor) HyperClient::builder(TokioExecutor::new())
.pool_idle_timeout(core::time::Duration::from_secs(60)) .pool_idle_timeout(core::time::Duration::from_secs(60))
.build(Self::connector()?), .build(Self::connector()),
), ),
}) }
} }
pub fn with_executor_and_without_connection_pool( pub fn without_connection_pool(host: &str) -> Result<Client, Error> {
executor: E,
host: &str,
) -> Result<Client<E>, Error> {
Ok(Client { Ok(Client {
connection: Connection::Connection { connection: Connection::Connection {
executor, connector: Self::connector(),
connector: Self::connector()?,
host: { host: {
let uri: Uri = host.parse().map_err(|_| Error::InvalidUri)?; let uri: Uri = host.parse().map_err(|_| Error::InvalidUri)?;
if uri.host().is_none() { if uri.host().is_none() {
@@ -130,9 +95,9 @@ impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Outpu
}) })
} }
pub async fn request<R: Into<Request>>(&self, request: R) -> Result<Response<'_, E>, Error> { pub async fn request<R: Into<Request>>(&self, request: R) -> Result<Response<'_>, Error> {
let request: Request = request.into(); let request: Request = request.into();
let Request { mut request, response_size_limit } = request; let mut request = request.0;
if let Some(header_host) = request.headers().get(hyper::header::HOST) { if let Some(header_host) = request.headers().get(hyper::header::HOST) {
match &self.connection { match &self.connection {
Connection::ConnectionPool(_) => {} Connection::ConnectionPool(_) => {}
@@ -166,7 +131,7 @@ impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Outpu
Connection::ConnectionPool(client) => { Connection::ConnectionPool(client) => {
client.request(request).await.map_err(Error::HyperUtil)? client.request(request).await.map_err(Error::HyperUtil)?
} }
Connection::Connection { executor, connector, host, connection } => { Connection::Connection { connector, host, connection } => {
let mut connection_lock = connection.lock().await; let mut connection_lock = connection.lock().await;
// If there's not a connection... // If there's not a connection...
@@ -178,46 +143,28 @@ impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Outpu
let call_res = call_res.map_err(Error::ConnectionError); let call_res = call_res.map_err(Error::ConnectionError);
let (requester, connection) = let (requester, connection) =
hyper::client::conn::http1::handshake(call_res?).await.map_err(Error::Hyper)?; hyper::client::conn::http1::handshake(call_res?).await.map_err(Error::Hyper)?;
// This task will die when we drop the requester // This will die when we drop the requester, so we don't need to track an AbortHandle
executor.execute(Box::pin(connection.map(|_| ()))); // for it
tokio::spawn(connection);
*connection_lock = Some(requester); *connection_lock = Some(requester);
} }
let connection = connection_lock.as_mut().expect("lock over the connection was poisoned"); let connection = connection_lock.as_mut().unwrap();
let mut err = connection.ready().await.err(); let mut err = connection.ready().await.err();
if err.is_none() { if err.is_none() {
// Send the request // Send the request
let response = connection.send_request(request).await; let res = connection.send_request(request).await;
if let Ok(response) = response { if let Ok(res) = res {
return Ok(Response { response, size_limit: response_size_limit, client: self }); return Ok(Response(res, self));
} }
err = response.err(); err = res.err();
} }
// Since this connection has been put into an error state, drop it // Since this connection has been put into an error state, drop it
*connection_lock = None; *connection_lock = None;
Err(Error::Hyper(err.expect("only here if `err` is some yet no error")))? Err(Error::Hyper(err.unwrap()))?
} }
}; };
Ok(Response { response, size_limit: response_size_limit, client: self }) Ok(Response(response, self))
} }
} }
#[cfg(feature = "tokio")]
mod tokio {
use hyper_util::rt::tokio::TokioExecutor;
use super::*;
pub type TokioClient = Client<TokioExecutor>;
impl Client<TokioExecutor> {
pub fn with_connection_pool() -> Result<Self, Error> {
Self::with_executor_and_connection_pool(TokioExecutor::new())
}
pub fn without_connection_pool(host: &str) -> Result<Self, Error> {
Self::with_executor_and_without_connection_pool(TokioExecutor::new(), host)
}
}
}
#[cfg(feature = "tokio")]
pub use tokio::TokioClient;
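
For orientation, a minimal usage sketch of the executor-generic client shown in this hunk, assuming the `tokio` and `tls` features; `TokioClient`, `set_response_size_limit`, and `Response::body` are taken from the diff, while the URL handling and error plumbing are illustrative only:

```rust
use std::io::Read;

use simple_request::{hyper, Error, Full, Request, TokioClient};

async fn fetch(url: &str) -> Result<Vec<u8>, Error> {
  // Pooled client on the tokio executor, per `TokioClient::with_connection_pool`
  let client = TokioClient::with_connection_pool()?;

  // Wrap a plain hyper request; the body type is `Full<Bytes>` as in the diff
  let mut request: Request = hyper::Request::get(url)
    .body(Full::new(hyper::body::Bytes::new()))
    .expect("couldn't build a GET request")
    .into();
  // Refuse to buffer more than 1 MiB of response body
  request.set_response_size_limit(Some(1 << 20));

  let response = client.request(request).await?;
  let mut body = vec![];
  response
    .body()
    .await?
    .read_to_end(&mut body)
    .expect("reading from an in-memory cursor failed");
  Ok(body)
}
```

The limit set here is the one the streaming `Response::body` implementation later checks chunk by chunk.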

View File

@@ -7,15 +7,11 @@ pub use http_body_util::Full;
use crate::Error; use crate::Error;
#[derive(Debug)] #[derive(Debug)]
pub struct Request { pub struct Request(pub(crate) hyper::Request<Full<Bytes>>);
pub(crate) request: hyper::Request<Full<Bytes>>,
pub(crate) response_size_limit: Option<usize>,
}
impl Request { impl Request {
#[cfg(feature = "basic-auth")] #[cfg(feature = "basic-auth")]
fn username_password_from_uri(&self) -> Result<(String, String), Error> { fn username_password_from_uri(&self) -> Result<(String, String), Error> {
if let Some(authority) = self.request.uri().authority() { if let Some(authority) = self.0.uri().authority() {
let authority = authority.as_str(); let authority = authority.as_str();
if authority.contains('@') { if authority.contains('@') {
// Decode the username and password from the URI // Decode the username and password from the URI
@@ -40,10 +36,9 @@ impl Request {
let mut formatted = format!("{username}:{password}"); let mut formatted = format!("{username}:{password}");
let mut encoded = Base64::encode_string(formatted.as_bytes()); let mut encoded = Base64::encode_string(formatted.as_bytes());
formatted.zeroize(); formatted.zeroize();
self.request.headers_mut().insert( self.0.headers_mut().insert(
hyper::header::AUTHORIZATION, hyper::header::AUTHORIZATION,
HeaderValue::from_str(&format!("Basic {encoded}")) HeaderValue::from_str(&format!("Basic {encoded}")).unwrap(),
.expect("couldn't form header from base64-encoded string"),
); );
encoded.zeroize(); encoded.zeroize();
} }
@@ -64,17 +59,9 @@ impl Request {
pub fn with_basic_auth(&mut self) { pub fn with_basic_auth(&mut self) {
let _ = self.basic_auth_from_uri(); let _ = self.basic_auth_from_uri();
} }
/// Set a size limit for the response.
///
/// This may be exceeded by a single HTTP frame and accordingly isn't perfect.
pub fn set_response_size_limit(&mut self, response_size_limit: Option<usize>) {
self.response_size_limit = response_size_limit;
}
} }
impl From<hyper::Request<Full<Bytes>>> for Request { impl From<hyper::Request<Full<Bytes>>> for Request {
fn from(request: hyper::Request<Full<Bytes>>) -> Request { fn from(request: hyper::Request<Full<Bytes>>) -> Request {
Request { request, response_size_limit: None } Request(request)
} }
} }
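
Similarly, a small sketch of the feature-gated `basic-auth` flow above (the host and credentials below are placeholders): `with_basic_auth` moves credentials embedded in the URI into a `Basic` Authorization header, zeroizing the intermediate strings as shown.

```rust
use simple_request::{hyper, Full, Request};

fn authed_request() -> Request {
  let mut request: Request = hyper::Request::get("http://user:password@node.example:5132/")
    .body(Full::new(hyper::body::Bytes::new()))
    .expect("couldn't build a static request")
    .into();
  // Moves the `user:password` pair from the URI into an Authorization header
  request.with_basic_auth();
  request
}
```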

View File

@@ -1,54 +1,24 @@
use core::{pin::Pin, future::Future};
use std::io;
use hyper::{ use hyper::{
StatusCode, StatusCode,
header::{HeaderValue, HeaderMap}, header::{HeaderValue, HeaderMap},
body::Incoming, body::{Buf, Incoming},
rt::Executor,
}; };
use http_body_util::BodyExt; use http_body_util::BodyExt;
use futures_util::{Stream, StreamExt};
use crate::{Client, Error}; use crate::{Client, Error};
// Borrows the client so its async task lives as long as this response exists. // Borrows the client so its async task lives as long as this response exists.
#[allow(dead_code)] #[allow(dead_code)]
#[derive(Debug)] #[derive(Debug)]
pub struct Response< pub struct Response<'a>(pub(crate) hyper::Response<Incoming>, pub(crate) &'a Client);
'a, impl Response<'_> {
E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>,
> {
pub(crate) response: hyper::Response<Incoming>,
pub(crate) size_limit: Option<usize>,
pub(crate) client: &'a Client<E>,
}
impl<E: 'static + Send + Sync + Clone + Executor<Pin<Box<dyn Send + Future<Output = ()>>>>>
Response<'_, E>
{
pub fn status(&self) -> StatusCode { pub fn status(&self) -> StatusCode {
self.response.status() self.0.status()
} }
pub fn headers(&self) -> &HeaderMap<HeaderValue> { pub fn headers(&self) -> &HeaderMap<HeaderValue> {
self.response.headers() self.0.headers()
} }
pub async fn body(self) -> Result<impl std::io::Read, Error> { pub async fn body(self) -> Result<impl std::io::Read, Error> {
let mut body = self.response.into_body().into_data_stream(); Ok(self.0.into_body().collect().await.map_err(Error::Hyper)?.aggregate().reader())
let mut res: Vec<u8> = vec![];
loop {
if let Some(size_limit) = self.size_limit {
let (lower, upper) = body.size_hint();
if res.len().wrapping_add(upper.unwrap_or(lower)) > size_limit.min(usize::MAX - 1) {
Err(Error::ConnectionError("response exceeded size limit".into()))?;
}
}
let Some(part) = body.next().await else { break };
let part = part.map_err(Error::Hyper)?;
res.extend(part.as_ref());
}
Ok(io::Cursor::new(res))
} }
} }

View File

@@ -1,13 +1,13 @@
[package] [package]
name = "std-shims" name = "std-shims"
version = "0.1.5" version = "0.1.1"
description = "A series of std shims to make alloc more feasible" description = "A series of std shims to make alloc more feasible"
license = "MIT" license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/std-shims" repository = "https://github.com/serai-dex/serai/tree/develop/common/std-shims"
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["nostd", "no_std", "alloc", "io"] keywords = ["nostd", "no_std", "alloc", "io"]
edition = "2021" edition = "2021"
rust-version = "1.65" rust-version = "1.80"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -17,11 +17,9 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true workspace = true
[dependencies] [dependencies]
rustversion = { version = "1", default-features = false } spin = { version = "0.9", default-features = false, features = ["use_ticket_mutex", "lazy"] }
spin = { version = "0.10", default-features = false, features = ["use_ticket_mutex", "fair_mutex", "once", "lazy"] } hashbrown = { version = "0.15", default-features = false, features = ["default-hasher", "inline-more"] }
hashbrown = { version = "0.16", default-features = false, features = ["default-hasher", "inline-more"], optional = true }
[features] [features]
alloc = ["hashbrown"] std = []
std = ["alloc", "spin/std"]
default = ["std"] default = ["std"]

View File

@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) 2023-2025 Luke Parker Copyright (c) 2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,28 +1,6 @@
# `std` shims # std shims
`std-shims` is a Rust crate with two purposes: A crate which passes through to std when the default `std` feature is enabled,
- Expand the functionality of `core` and `alloc` yet provides a series of shims when it isn't.
- Polyfill functionality only available on newer versions of Rust
The goal is to make supporting no-`std` environments, and older versions of `HashSet` and `HashMap` are provided via `hashbrown`.
Rust, as simple as possible. For most use cases, replacing `std::` with
`std_shims::` and adding `use std_shims::prelude::*` is sufficient to take full
advantage of `std-shims`.
# API Surface
`std-shims` only aims to have items _mutually available_ between `alloc` (with
extra dependencies) and `std` publicly exposed. Items exclusive to `std`, with
no shims available, will not be exported by `std-shims`.
# Dependencies
`HashSet` and `HashMap` are provided via `hashbrown`. Synchronization
primitives are provided via `spin` (avoiding a requirement on
`critical-section`). Sections of `std::io` are independently matched as
possible. `rustversion` is used to detect when to provide polyfills.
# Disclaimer
No guarantee of one-to-one parity is provided. The shims provided aim to be
sufficient for the average case. Pull requests are _welcome_.

View File

@@ -1,7 +1,7 @@
#[cfg(all(feature = "alloc", not(feature = "std")))]
pub use extern_alloc::collections::*;
#[cfg(all(feature = "alloc", not(feature = "std")))]
pub use hashbrown::{HashSet, HashMap};
#[cfg(feature = "std")] #[cfg(feature = "std")]
pub use std::collections::*; pub use std::collections::*;
#[cfg(not(feature = "std"))]
pub use alloc::collections::*;
#[cfg(not(feature = "std"))]
pub use hashbrown::{HashSet, HashMap};

View File

@@ -1,74 +1,42 @@
#[cfg(feature = "std")]
pub use std::io::*;
#[cfg(not(feature = "std"))] #[cfg(not(feature = "std"))]
mod shims { mod shims {
use core::fmt::{self, Debug, Display, Formatter}; use core::fmt::{Debug, Formatter};
#[cfg(feature = "alloc")] use alloc::{boxed::Box, vec::Vec};
use extern_alloc::{boxed::Box, vec::Vec};
use crate::error::Error as CoreError;
/// The kind of error.
#[derive(Clone, Copy, PartialEq, Eq, Debug)] #[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum ErrorKind { pub enum ErrorKind {
UnexpectedEof, UnexpectedEof,
Other, Other,
} }
/// An error.
#[derive(Debug)]
pub struct Error { pub struct Error {
kind: ErrorKind, kind: ErrorKind,
#[cfg(feature = "alloc")] error: Box<dyn Send + Sync>,
error: Box<dyn Send + Sync + CoreError>,
} }
impl Display for Error { impl Debug for Error {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result { fn fmt(&self, fmt: &mut Formatter<'_>) -> core::result::Result<(), core::fmt::Error> {
<Self as Debug>::fmt(self, f) fmt.debug_struct("Error").field("kind", &self.kind).finish_non_exhaustive()
} }
} }
impl CoreError for Error {}
#[cfg(not(feature = "alloc"))]
pub trait IntoBoxSendSyncError {}
#[cfg(not(feature = "alloc"))]
impl<I> IntoBoxSendSyncError for I {}
#[cfg(feature = "alloc")]
pub trait IntoBoxSendSyncError: Into<Box<dyn Send + Sync + CoreError>> {}
#[cfg(feature = "alloc")]
impl<I: Into<Box<dyn Send + Sync + CoreError>>> IntoBoxSendSyncError for I {}
impl Error { impl Error {
/// Create a new error. pub fn new<E: 'static + Send + Sync>(kind: ErrorKind, error: E) -> Error {
/// Error { kind, error: Box::new(error) }
/// The error object itself is silently dropped when `alloc` is not enabled.
#[allow(unused)]
pub fn new<E: 'static + IntoBoxSendSyncError>(kind: ErrorKind, error: E) -> Error {
#[cfg(not(feature = "alloc"))]
let res = Error { kind };
#[cfg(feature = "alloc")]
let res = Error { kind, error: error.into() };
res
} }
/// Create a new error with `io::ErrorKind::Other` as its kind. pub fn other<E: 'static + Send + Sync>(error: E) -> Error {
/// Error { kind: ErrorKind::Other, error: Box::new(error) }
/// The error object itself is silently dropped when `alloc` is not enabled.
#[allow(unused)]
pub fn other<E: 'static + IntoBoxSendSyncError>(error: E) -> Error {
#[cfg(not(feature = "alloc"))]
let res = Error { kind: ErrorKind::Other };
#[cfg(feature = "alloc")]
let res = Error { kind: ErrorKind::Other, error: error.into() };
res
} }
/// The kind of error.
pub fn kind(&self) -> ErrorKind { pub fn kind(&self) -> ErrorKind {
self.kind self.kind
} }
/// Retrieve the inner error. pub fn into_inner(self) -> Option<Box<dyn Send + Sync>> {
#[cfg(feature = "alloc")]
pub fn into_inner(self) -> Option<Box<dyn Send + Sync + CoreError>> {
Some(self.error) Some(self.error)
} }
} }
@@ -96,12 +64,6 @@ mod shims {
} }
} }
impl<R: Read> Read for &mut R {
fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
R::read(*self, buf)
}
}
pub trait BufRead: Read { pub trait BufRead: Read {
fn fill_buf(&mut self) -> Result<&[u8]>; fn fill_buf(&mut self) -> Result<&[u8]>;
fn consume(&mut self, amt: usize); fn consume(&mut self, amt: usize);
@@ -126,7 +88,6 @@ mod shims {
} }
} }
#[cfg(feature = "alloc")]
impl Write for Vec<u8> { impl Write for Vec<u8> {
fn write(&mut self, buf: &[u8]) -> Result<usize> { fn write(&mut self, buf: &[u8]) -> Result<usize> {
self.extend(buf); self.extend(buf);
@@ -134,8 +95,6 @@ mod shims {
} }
} }
} }
#[cfg(not(feature = "std"))] #[cfg(not(feature = "std"))]
pub use shims::*; pub use shims::*;
#[cfg(feature = "std")]
pub use std::io::{ErrorKind, Error, Result, Read, BufRead, Write};
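
A brief sketch of writing against the shimmed `io` module so the same encoder compiles with `std` or, given `alloc`, without it; only names referenced in this hunk are used, and the helper itself is illustrative:

```rust
use std_shims::{vec::Vec, io::{self, Write}};

// `Vec<u8>: Write` per the shim above (and via `std::io` when `std` is enabled)
fn encode_length_prefixed(data: &[u8]) -> io::Result<Vec<u8>> {
  let mut buf = Vec::with_capacity(4 + data.len());
  // `Vec`'s `write` always consumes the entire buffer, so the written length is ignored
  buf.write(&u32::try_from(data.len()).expect("data exceeded u32::MAX bytes").to_le_bytes())?;
  buf.write(data)?;
  Ok(buf)
}
```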

View File

@@ -1,102 +1,13 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")] #![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)] #![cfg_attr(not(feature = "std"), no_std)]
#[cfg(not(feature = "alloc"))] pub extern crate alloc;
pub use core::*;
#[cfg(not(feature = "alloc"))]
pub use core::{alloc, borrow, ffi, fmt, slice, str, task};
#[cfg(not(feature = "std"))]
#[rustversion::before(1.81)]
pub mod error {
use core::fmt::{Debug, Display};
pub trait Error: Debug + Display {}
}
#[cfg(not(feature = "std"))]
#[rustversion::since(1.81)]
pub use core::error;
#[cfg(feature = "alloc")]
extern crate alloc as extern_alloc;
#[cfg(all(feature = "alloc", not(feature = "std")))]
pub use extern_alloc::{alloc, borrow, boxed, ffi, fmt, rc, slice, str, string, task, vec, format};
#[cfg(feature = "std")]
pub use std::{alloc, borrow, boxed, error, ffi, fmt, rc, slice, str, string, task, vec, format};
pub mod sync;
pub mod collections; pub mod collections;
pub mod io; pub mod io;
pub mod sync;
pub mod prelude { pub use alloc::vec;
// Shim the `std` prelude pub use alloc::str;
#[cfg(feature = "alloc")] pub use alloc::string;
pub use extern_alloc::{
format, vec,
borrow::ToOwned,
boxed::Box,
vec::Vec,
string::{String, ToString},
};
// Shim `div_ceil`
#[rustversion::before(1.73)]
#[doc(hidden)]
pub trait StdShimsDivCeil {
fn div_ceil(self, rhs: Self) -> Self;
}
#[rustversion::before(1.73)]
mod impl_divceil {
use super::StdShimsDivCeil;
impl StdShimsDivCeil for u8 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u16 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u32 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u64 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for u128 {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
impl StdShimsDivCeil for usize {
fn div_ceil(self, rhs: Self) -> Self {
(self + (rhs - 1)) / rhs
}
}
}
// Shim `io::Error::other`
#[cfg(feature = "std")]
#[rustversion::before(1.74)]
#[doc(hidden)]
pub trait StdShimsIoErrorOther {
fn other<E>(error: E) -> Self
where
E: Into<Box<dyn std::error::Error + Send + Sync>>;
}
#[cfg(feature = "std")]
#[rustversion::before(1.74)]
impl StdShimsIoErrorOther for std::io::Error {
fn other<E>(error: E) -> Self
where
E: Into<Box<dyn std::error::Error + Send + Sync>>,
{
std::io::Error::new(std::io::ErrorKind::Other, error)
}
}
}
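
A quick sketch of what the prelude buys downstream code: importing it makes `div_ceil` available even before Rust 1.73 via the `StdShimsDivCeil` polyfill, while newer toolchains resolve to the inherent method.

```rust
use std_shims::prelude::*;

// How many fixed-size chunks are needed to hold `len` bytes
fn chunks_needed(len: usize, chunk_size: usize) -> usize {
  len.div_ceil(chunk_size)
}
```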

View File

@@ -1,28 +1,19 @@
pub use core::sync::atomic; pub use core::sync::*;
#[cfg(all(feature = "alloc", not(feature = "std")))] pub use alloc::sync::*;
pub use extern_alloc::sync::{Arc, Weak};
#[cfg(feature = "std")]
pub use std::sync::{Arc, Weak};
mod mutex_shim { mod mutex_shim {
#[cfg(not(feature = "std"))]
pub use spin::{Mutex, MutexGuard};
#[cfg(feature = "std")] #[cfg(feature = "std")]
pub use std::sync::{Mutex, MutexGuard}; pub use std::sync::*;
#[cfg(not(feature = "std"))]
pub use spin::*;
/// A shimmed `Mutex` with an API mutual to `spin` and `std`.
#[derive(Default, Debug)] #[derive(Default, Debug)]
pub struct ShimMutex<T>(Mutex<T>); pub struct ShimMutex<T>(Mutex<T>);
impl<T> ShimMutex<T> { impl<T> ShimMutex<T> {
/// Construct a new `Mutex`.
pub const fn new(value: T) -> Self { pub const fn new(value: T) -> Self {
Self(Mutex::new(value)) Self(Mutex::new(value))
} }
/// Acquire a lock on the contents of the `Mutex`.
///
/// On no-`std` environments, this may spin until the lock is acquired. On `std` environments,
/// this may panic if the `Mutex` was poisoned.
pub fn lock(&self) -> MutexGuard<'_, T> { pub fn lock(&self) -> MutexGuard<'_, T> {
#[cfg(feature = "std")] #[cfg(feature = "std")]
let res = self.0.lock().unwrap(); let res = self.0.lock().unwrap();
@@ -34,12 +25,7 @@ mod mutex_shim {
} }
pub use mutex_shim::{ShimMutex as Mutex, MutexGuard}; pub use mutex_shim::{ShimMutex as Mutex, MutexGuard};
#[rustversion::before(1.80)]
pub use spin::Lazy as LazyLock;
#[rustversion::since(1.80)]
#[cfg(not(feature = "std"))]
pub use spin::Lazy as LazyLock;
#[rustversion::since(1.80)]
#[cfg(feature = "std")] #[cfg(feature = "std")]
pub use std::sync::LazyLock; pub use std::sync::LazyLock;
#[cfg(not(feature = "std"))]
pub use spin::Lazy as LazyLock;
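
A sketch of the shimmed synchronization primitives in use: the same `Mutex`/`LazyLock` spelling works on `std` (where `lock` may panic on poisoning) and on no-`std` (where it spins), so a static counter can be written once.

```rust
use std_shims::sync::{LazyLock, Mutex};

static COUNTER: LazyLock<Mutex<u64>> = LazyLock::new(|| Mutex::new(0));

fn next_id() -> u64 {
  let mut counter = COUNTER.lock();
  *counter += 1;
  *counter
}
```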

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license AGPL-3.0-only license
Copyright (c) 2022-2025 Luke Parker Copyright (c) 2022-2024 Luke Parker
This program is free software: you can redistribute it and/or modify This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")] #![doc = include_str!("../README.md")]
#![deny(missing_docs)] #![deny(missing_docs)]

View File

@@ -7,9 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/zalloc"
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = [] keywords = []
edition = "2021" edition = "2021"
# This must be specified with the patch version, else Rust believes `1.77` < `1.77.0` and will rust-version = "1.77"
# refuse to compile due to relying on versions introduced with `1.77.0`
rust-version = "1.77.0"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true

View File

@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) 2022-2025 Luke Parker Copyright (c) 2022-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,5 +1,5 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![cfg_attr(all(zalloc_rustc_nightly, feature = "allocator"), feature(allocator_api))] #![cfg_attr(all(zalloc_rustc_nightly, feature = "allocator"), feature(allocator_api))]
//! Implementation of a Zeroizing Allocator, enabling zeroizing memory on deallocation. //! Implementation of a Zeroizing Allocator, enabling zeroizing memory on deallocation.

View File

@@ -8,6 +8,7 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = [] keywords = []
edition = "2021" edition = "2021"
publish = false publish = false
rust-version = "1.81"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -21,13 +22,11 @@ zeroize = { version = "^1.5", default-features = false, features = ["std"] }
bitvec = { version = "1", default-features = false, features = ["std"] } bitvec = { version = "1", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] } rand_core = { version = "0.6", default-features = false, features = ["std"] }
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] } blake2 = { version = "0.10", default-features = false, features = ["std"] }
schnorrkel = { version = "0.11", default-features = false, features = ["std"] } schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
dalek-ff-group = { path = "../crypto/dalek-ff-group", default-features = false, features = ["std"] } ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std", "ristretto"] }
ciphersuite = { path = "../crypto/ciphersuite", default-features = false, features = ["std"] } dkg = { path = "../crypto/dkg", default-features = false, features = ["std"] }
dkg = { package = "dkg-musig", path = "../crypto/dkg/musig", default-features = false, features = ["std"] }
frost = { package = "modular-frost", path = "../crypto/frost" }
frost-schnorrkel = { path = "../crypto/schnorrkel" } frost-schnorrkel = { path = "../crypto/schnorrkel" }
hex = { version = "0.4", default-features = false, features = ["std"] } hex = { version = "0.4", default-features = false, features = ["std"] }
@@ -43,7 +42,7 @@ messages = { package = "serai-processor-messages", path = "../processor/messages
message-queue = { package = "serai-message-queue", path = "../message-queue" } message-queue = { package = "serai-message-queue", path = "../message-queue" }
tributary-sdk = { path = "./tributary-sdk" } tributary-sdk = { path = "./tributary-sdk" }
serai-client = { path = "../substrate/client", default-features = false, features = ["serai", "borsh"] } serai-client = { path = "../substrate/client", default-features = false, features = ["serai"] }
log = { version = "0.4", default-features = false, features = ["std"] } log = { version = "0.4", default-features = false, features = ["std"] }
env_logger = { version = "0.10", default-features = false, features = ["humantime"] } env_logger = { version = "0.10", default-features = false, features = ["humantime"] }

View File

@@ -8,7 +8,7 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = [] keywords = []
edition = "2021" edition = "2021"
publish = false publish = false
rust-version = "1.85" rust-version = "1.81"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -18,12 +18,12 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true workspace = true
[dependencies] [dependencies]
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] } blake2 = { version = "0.10", default-features = false, features = ["std"] }
schnorrkel = { version = "0.11", default-features = false, features = ["std"] } schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] } serai-client = { path = "../../substrate/client", default-features = false, features = ["serai"] }
log = { version = "0.4", default-features = false, features = ["std"] } log = { version = "0.4", default-features = false, features = ["std"] }

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license AGPL-3.0-only license
Copyright (c) 2023-2025 Luke Parker Copyright (c) 2023-2024 Luke Parker
This program is free software: you can redistribute it and/or modify This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -155,7 +155,7 @@ impl<D: Db> ContinuallyRan for CosignIntendTask<D> {
// Tell each set of their expectation to cosign this block // Tell each set of their expectation to cosign this block
for set in global_session_info.sets { for set in global_session_info.sets {
log::debug!("{set:?} will be cosigning block #{block_number}"); log::debug!("{:?} will be cosigning block #{block_number}", set);
IntendedCosigns::send( IntendedCosigns::send(
&mut txn, &mut txn,
set, set,

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")] #![doc = include_str!("../README.md")]
#![deny(missing_docs)] #![deny(missing_docs)]

View File

@@ -8,7 +8,7 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = [] keywords = []
edition = "2021" edition = "2021"
publish = false publish = false
rust-version = "1.85" rust-version = "1.81"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -22,7 +22,7 @@ borsh = { version = "1", default-features = false, features = ["std", "derive",
serai-db = { path = "../../common/db", version = "0.1" } serai-db = { path = "../../common/db", version = "0.1" }
serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] } serai-client = { path = "../../substrate/client", default-features = false, features = ["serai"] }
serai-cosign = { path = "../cosign" } serai-cosign = { path = "../cosign" }
tributary-sdk = { path = "../tributary-sdk" } tributary-sdk = { path = "../tributary-sdk" }

View File

@@ -8,7 +8,7 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = [] keywords = []
edition = "2021" edition = "2021"
publish = false publish = false
rust-version = "1.87" rust-version = "1.81"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -23,19 +23,19 @@ async-trait = { version = "0.1", default-features = false }
rand_core = { version = "0.6", default-features = false, features = ["std"] } rand_core = { version = "0.6", default-features = false, features = ["std"] }
zeroize = { version = "^1.5", default-features = false, features = ["std"] } zeroize = { version = "^1.5", default-features = false, features = ["std"] }
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] } blake2 = { version = "0.10", default-features = false, features = ["std"] }
schnorrkel = { version = "0.11", default-features = false, features = ["std"] } schnorrkel = { version = "0.11", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] } hex = { version = "0.4", default-features = false, features = ["std"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
serai-client = { path = "../../../substrate/client", default-features = false, features = ["serai", "borsh"] } serai-client = { path = "../../../substrate/client", default-features = false, features = ["serai"] }
serai-cosign = { path = "../../cosign" } serai-cosign = { path = "../../cosign" }
tributary-sdk = { path = "../../tributary-sdk" } tributary-sdk = { path = "../../tributary-sdk" }
futures-util = { version = "0.3", default-features = false, features = ["std"] } futures-util = { version = "0.3", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["sync"] } tokio = { version = "1", default-features = false, features = ["sync"] }
libp2p = { version = "0.56", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] } libp2p = { version = "0.54", default-features = false, features = ["tokio", "tcp", "noise", "yamux", "ping", "request-response", "gossipsub", "macros"] }
log = { version = "0.4", default-features = false, features = ["std"] } log = { version = "0.4", default-features = false, features = ["std"] }
serai-task = { path = "../../../common/task", version = "0.1" } serai-task = { path = "../../../common/task", version = "0.1" }

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")] #![doc = include_str!("../README.md")]
#![deny(missing_docs)] #![deny(missing_docs)]

View File

@@ -92,8 +92,7 @@ impl SwarmTask {
} }
} }
gossip::Event::Subscribed { .. } | gossip::Event::Unsubscribed { .. } => {} gossip::Event::Subscribed { .. } | gossip::Event::Unsubscribed { .. } => {}
gossip::Event::GossipsubNotSupported { peer_id } | gossip::Event::GossipsubNotSupported { peer_id } => {
gossip::Event::SlowPeer { peer_id, .. } => {
let _: Result<_, _> = self.swarm.disconnect_peer_id(peer_id); let _: Result<_, _> = self.swarm.disconnect_peer_id(peer_id);
} }
} }

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")] #![doc = include_str!("../README.md")]
#![deny(missing_docs)] #![deny(missing_docs)]

View File

@@ -103,7 +103,7 @@ mod _internal_db {
// Tributary transactions to publish from the DKG confirmation task // Tributary transactions to publish from the DKG confirmation task
TributaryTransactionsFromDkgConfirmation: (set: ExternalValidatorSet) -> Transaction, TributaryTransactionsFromDkgConfirmation: (set: ExternalValidatorSet) -> Transaction,
// Participants to remove // Participants to remove
RemoveParticipant: (set: ExternalValidatorSet) -> u16, RemoveParticipant: (set: ExternalValidatorSet) -> Participant,
} }
} }
} }
@@ -139,11 +139,10 @@ impl RemoveParticipant {
pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, participant: Participant) { pub(crate) fn send(txn: &mut impl DbTxn, set: ExternalValidatorSet, participant: Participant) {
// If this set has yet to be retired, send this transaction // If this set has yet to be retired, send this transaction
if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) { if RetiredTributary::get(txn, set.network).map(|session| session.0) < Some(set.session.0) {
_internal_db::RemoveParticipant::send(txn, set, &u16::from(participant)); _internal_db::RemoveParticipant::send(txn, set, &participant);
} }
} }
pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<Participant> { pub(crate) fn try_recv(txn: &mut impl DbTxn, set: ExternalValidatorSet) -> Option<Participant> {
_internal_db::RemoveParticipant::try_recv(txn, set) _internal_db::RemoveParticipant::try_recv(txn, set)
.map(|i| Participant::new(i).expect("sent invalid participant index for removal"))
} }
} }
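
For clarity, the index round trip this hunk revolves around, as a standalone sketch; the `u16::from`/`Participant::new` conversions are the ones the diff itself calls, while the helper names are hypothetical:

```rust
use dkg::Participant;

// Store a `Participant` as its u16 index...
fn to_index(participant: Participant) -> u16 {
  u16::from(participant)
}

// ... and rebuild it on read, treating an invalid index as a corrupted channel
fn from_index(index: u16) -> Participant {
  Participant::new(index).expect("sent invalid participant index for removal")
}
```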

View File

@@ -3,10 +3,13 @@ use std::{boxed::Box, collections::HashMap};
use zeroize::Zeroizing; use zeroize::Zeroizing;
use rand_core::OsRng; use rand_core::OsRng;
use ciphersuite::{group::GroupEncoding, *}; use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use dkg::{Participant, musig};
use frost_schnorrkel::{ use frost_schnorrkel::{
frost::{curve::Ristretto, FrostError, sign::*}, frost::{
dkg::{Participant, musig::musig},
FrostError,
sign::*,
},
Schnorrkel, Schnorrkel,
}; };
@@ -30,7 +33,7 @@ fn schnorrkel() -> Schnorrkel {
fn our_i( fn our_i(
set: &NewSetInformation, set: &NewSetInformation,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>, key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
data: &HashMap<Participant, Vec<u8>>, data: &HashMap<Participant, Vec<u8>>,
) -> Participant { ) -> Participant {
let public = SeraiAddress((Ristretto::generator() * key.deref()).to_bytes()); let public = SeraiAddress((Ristretto::generator() * key.deref()).to_bytes());
@@ -124,7 +127,7 @@ pub(crate) struct ConfirmDkgTask<CD: DbTrait, TD: DbTrait> {
set: NewSetInformation, set: NewSetInformation,
tributary_db: TD, tributary_db: TD,
key: Zeroizing<<Ristretto as WrappedGroup>::F>, key: Zeroizing<<Ristretto as Ciphersuite>::F>,
signer: Option<Signer>, signer: Option<Signer>,
} }
@@ -133,7 +136,7 @@ impl<CD: DbTrait, TD: DbTrait> ConfirmDkgTask<CD, TD> {
db: CD, db: CD,
set: NewSetInformation, set: NewSetInformation,
tributary_db: TD, tributary_db: TD,
key: Zeroizing<<Ristretto as WrappedGroup>::F>, key: Zeroizing<<Ristretto as Ciphersuite>::F>,
) -> Self { ) -> Self {
Self { db, set, tributary_db, key, signer: None } Self { db, set, tributary_db, key, signer: None }
} }
@@ -152,15 +155,16 @@ impl<CD: DbTrait, TD: DbTrait> ConfirmDkgTask<CD, TD> {
db: &mut CD, db: &mut CD,
set: ExternalValidatorSet, set: ExternalValidatorSet,
attempt: u32, attempt: u32,
key: Zeroizing<<Ristretto as WrappedGroup>::F>, key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
signer: &mut Option<Signer>, signer: &mut Option<Signer>,
) { ) {
// Perform the preprocess // Perform the preprocess
let public_key = Ristretto::generator() * key.deref();
let (machine, preprocess) = AlgorithmMachine::new( let (machine, preprocess) = AlgorithmMachine::new(
schnorrkel(), schnorrkel(),
// We use a 1-of-1 Musig here as we don't know who will actually be in this Musig yet // We use a 1-of-1 Musig here as we don't know who will actually be in this Musig yet
musig(musig_context(set.into()), key, &[public_key]).unwrap(), musig(&musig_context(set.into()), key, &[Ristretto::generator() * key.deref()])
.unwrap()
.into(),
) )
.preprocess(&mut OsRng); .preprocess(&mut OsRng);
// We take the preprocess so we can use it in a distinct machine with the actual Musig // We take the preprocess so we can use it in a distinct machine with the actual Musig
@@ -195,7 +199,7 @@ impl<CD: DbTrait, TD: DbTrait> ContinuallyRan for ConfirmDkgTask<CD, TD> {
// If we were sent a key to set, create the signer for it // If we were sent a key to set, create the signer for it
if self.signer.is_none() && KeysToConfirm::get(&self.db, self.set.set).is_some() { if self.signer.is_none() && KeysToConfirm::get(&self.db, self.set.set).is_some() {
// Create and publish the initial preprocess // Create and publish the initial preprocess
Self::preprocess(&mut self.db, self.set.set, 0, self.key.clone(), &mut self.signer); Self::preprocess(&mut self.db, self.set.set, 0, &self.key, &mut self.signer);
made_progress = true; made_progress = true;
} }
@@ -215,13 +219,7 @@ impl<CD: DbTrait, TD: DbTrait> ContinuallyRan for ConfirmDkgTask<CD, TD> {
id: messages::sign::SignId { attempt, .. }, id: messages::sign::SignId { attempt, .. },
} => { } => {
// Create and publish the preprocess for the specified attempt // Create and publish the preprocess for the specified attempt
Self::preprocess( Self::preprocess(&mut self.db, self.set.set, attempt, &self.key, &mut self.signer);
&mut self.db,
self.set.set,
attempt,
self.key.clone(),
&mut self.signer,
);
} }
messages::sign::CoordinatorMessage::Preprocesses { messages::sign::CoordinatorMessage::Preprocesses {
id: messages::sign::SignId { attempt, .. }, id: messages::sign::SignId { attempt, .. },
@@ -260,9 +258,9 @@ impl<CD: DbTrait, TD: DbTrait> ContinuallyRan for ConfirmDkgTask<CD, TD> {
}) })
.collect::<Vec<_>>(); .collect::<Vec<_>>();
let keys = let keys = musig(&musig_context(self.set.set.into()), &self.key, &musig_public_keys)
musig(musig_context(self.set.set.into()), self.key.clone(), &musig_public_keys) .unwrap()
.unwrap(); .into();
// Rebuild the machine // Rebuild the machine
let (machine, preprocess_from_cache) = let (machine, preprocess_from_cache) =

View File

@@ -4,10 +4,9 @@ use std::{sync::Arc, collections::HashMap, time::Instant};
use zeroize::{Zeroize, Zeroizing}; use zeroize::{Zeroize, Zeroizing};
use rand_core::{RngCore, OsRng}; use rand_core::{RngCore, OsRng};
use dalek_ff_group::Ristretto;
use ciphersuite::{ use ciphersuite::{
group::{ff::PrimeField, GroupEncoding}, group::{ff::PrimeField, GroupEncoding},
*, Ciphersuite, Ristretto,
}; };
use borsh::BorshDeserialize; use borsh::BorshDeserialize;
@@ -284,7 +283,7 @@ async fn handle_network(
&mut txn, &mut txn,
ExternalValidatorSet { network, session }, ExternalValidatorSet { network, session },
slash_report, slash_report,
Signature::from(signature), Signature(signature),
); );
} }
}, },
@@ -352,7 +351,7 @@ async fn main() {
let mut key_bytes = [0; 32]; let mut key_bytes = [0; 32];
key_bytes.copy_from_slice(&key_vec); key_bytes.copy_from_slice(&key_vec);
key_vec.zeroize(); key_vec.zeroize();
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::from_repr(key_bytes).unwrap()); let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::from_repr(key_bytes).unwrap());
key_bytes.zeroize(); key_bytes.zeroize();
key key
}; };
@@ -439,7 +438,7 @@ async fn main() {
EphemeralEventStream::new( EphemeralEventStream::new(
db.clone(), db.clone(),
serai.clone(), serai.clone(),
SeraiAddress((<Ristretto as WrappedGroup>::generator() * serai_key.deref()).to_bytes()), SeraiAddress((<Ristretto as Ciphersuite>::generator() * serai_key.deref()).to_bytes()),
) )
.continually_run(substrate_ephemeral_task_def, vec![substrate_task]), .continually_run(substrate_ephemeral_task_def, vec![substrate_task]),
); );
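
As a standalone sketch of the key-decoding pattern in this hunk; the `dalek_ff_group::Ristretto` plus `WrappedGroup` spelling follows one side of the diff and is an assumption of this example (the other side spells it `Ciphersuite`):

```rust
use zeroize::{Zeroize, Zeroizing};
use ciphersuite::{group::ff::PrimeField, *};
use dalek_ff_group::Ristretto;

fn decode_key(mut key_bytes: [u8; 32]) -> Zeroizing<<Ristretto as WrappedGroup>::F> {
  // Move the scalar into a zeroizing wrapper, then wipe the raw bytes
  let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::from_repr(key_bytes).unwrap());
  key_bytes.zeroize();
  key
}
```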

View File

@@ -3,8 +3,7 @@ use std::sync::Arc;
use zeroize::Zeroizing; use zeroize::Zeroizing;
use ciphersuite::*; use ciphersuite::{Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use tokio::sync::mpsc; use tokio::sync::mpsc;
@@ -23,7 +22,7 @@ use serai_coordinator_p2p::P2p;
use crate::{Db, KeySet}; use crate::{Db, KeySet};
pub(crate) struct SubstrateTask<P: P2p> { pub(crate) struct SubstrateTask<P: P2p> {
pub(crate) serai_key: Zeroizing<<Ristretto as WrappedGroup>::F>, pub(crate) serai_key: Zeroizing<<Ristretto as Ciphersuite>::F>,
pub(crate) db: Db, pub(crate) db: Db,
pub(crate) message_queue: Arc<MessageQueue>, pub(crate) message_queue: Arc<MessageQueue>,
pub(crate) p2p: P, pub(crate) p2p: P,

View File

@@ -4,8 +4,7 @@ use std::sync::Arc;
use zeroize::Zeroizing; use zeroize::Zeroizing;
use rand_core::OsRng; use rand_core::OsRng;
use blake2::{digest::typenum::U32, Digest, Blake2s}; use blake2::{digest::typenum::U32, Digest, Blake2s};
use ciphersuite::*; use ciphersuite::{Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use tokio::sync::mpsc; use tokio::sync::mpsc;
@@ -68,7 +67,9 @@ async fn provide_transaction<TD: DbTrait, P: P2p>(
// advancing // advancing
Err(ProvidedError::LocalMismatchesOnChain) => loop { Err(ProvidedError::LocalMismatchesOnChain) => loop {
log::error!( log::error!(
"Tributary {set:?} was supposed to provide {tx:?} but peers disagree, halting Tributary", "Tributary {:?} was supposed to provide {:?} but peers disagree, halting Tributary",
set,
tx,
); );
// Print this every five minutes as this does need to be handled // Print this every five minutes as this does need to be handled
tokio::time::sleep(Duration::from_secs(5 * 60)).await; tokio::time::sleep(Duration::from_secs(5 * 60)).await;
@@ -160,7 +161,7 @@ impl<CD: DbTrait, TD: DbTrait, P: P2p> ContinuallyRan
#[must_use] #[must_use]
async fn add_signed_unsigned_transaction<TD: DbTrait, P: P2p>( async fn add_signed_unsigned_transaction<TD: DbTrait, P: P2p>(
tributary: &Tributary<TD, Transaction, P>, tributary: &Tributary<TD, Transaction, P>,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>, key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
mut tx: Transaction, mut tx: Transaction,
) -> bool { ) -> bool {
// If this is a signed transaction, sign it // If this is a signed transaction, sign it
@@ -213,7 +214,7 @@ async fn add_with_recognition_check<TD: DbTrait, P: P2p>(
set: ExternalValidatorSet, set: ExternalValidatorSet,
tributary_db: &mut TD, tributary_db: &mut TD,
tributary: &Tributary<TD, Transaction, P>, tributary: &Tributary<TD, Transaction, P>,
key: &Zeroizing<<Ristretto as WrappedGroup>::F>, key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
tx: Transaction, tx: Transaction,
) -> bool { ) -> bool {
let kind = tx.kind(); let kind = tx.kind();
@@ -252,7 +253,7 @@ pub(crate) struct AddTributaryTransactionsTask<CD: DbTrait, TD: DbTrait, P: P2p>
tributary_db: TD, tributary_db: TD,
tributary: Tributary<TD, Transaction, P>, tributary: Tributary<TD, Transaction, P>,
set: NewSetInformation, set: NewSetInformation,
key: Zeroizing<<Ristretto as WrappedGroup>::F>, key: Zeroizing<<Ristretto as Ciphersuite>::F>,
} }
impl<CD: DbTrait, TD: DbTrait, P: P2p> ContinuallyRan for AddTributaryTransactionsTask<CD, TD, P> { impl<CD: DbTrait, TD: DbTrait, P: P2p> ContinuallyRan for AddTributaryTransactionsTask<CD, TD, P> {
type Error = DoesNotError; type Error = DoesNotError;
@@ -382,7 +383,7 @@ pub(crate) struct SignSlashReportTask<CD: DbTrait, TD: DbTrait, P: P2p> {
tributary_db: TD, tributary_db: TD,
tributary: Tributary<TD, Transaction, P>, tributary: Tributary<TD, Transaction, P>,
set: NewSetInformation, set: NewSetInformation,
key: Zeroizing<<Ristretto as WrappedGroup>::F>, key: Zeroizing<<Ristretto as Ciphersuite>::F>,
} }
impl<CD: DbTrait, TD: DbTrait, P: P2p> ContinuallyRan for SignSlashReportTask<CD, TD, P> { impl<CD: DbTrait, TD: DbTrait, P: P2p> ContinuallyRan for SignSlashReportTask<CD, TD, P> {
type Error = DoesNotError; type Error = DoesNotError;
@@ -470,7 +471,7 @@ pub(crate) async fn spawn_tributary<P: P2p>(
p2p: P, p2p: P,
p2p_add_tributary: &mpsc::UnboundedSender<(ExternalValidatorSet, Tributary<Db, Transaction, P>)>, p2p_add_tributary: &mpsc::UnboundedSender<(ExternalValidatorSet, Tributary<Db, Transaction, P>)>,
set: NewSetInformation, set: NewSetInformation,
serai_key: Zeroizing<<Ristretto as WrappedGroup>::F>, serai_key: Zeroizing<<Ristretto as Ciphersuite>::F>,
) { ) {
// Don't spawn retired Tributaries // Don't spawn retired Tributaries
if crate::db::RetiredTributary::get(&db, set.set.network).map(|session| session.0) >= if crate::db::RetiredTributary::get(&db, set.set.network).map(|session| session.0) >=
@@ -490,7 +491,7 @@ pub(crate) async fn spawn_tributary<P: P2p>(
let mut tributary_validators = Vec::with_capacity(set.validators.len()); let mut tributary_validators = Vec::with_capacity(set.validators.len());
for (validator, weight) in set.validators.iter().copied() { for (validator, weight) in set.validators.iter().copied() {
let validator_key = <Ristretto as GroupIo>::read_G(&mut validator.0.as_slice()) let validator_key = <Ristretto as Ciphersuite>::read_G(&mut validator.0.as_slice())
.expect("Serai validator had an invalid public key"); .expect("Serai validator had an invalid public key");
let weight = u64::from(weight); let weight = u64::from(weight);
tributary_validators.push((validator_key, weight)); tributary_validators.push((validator_key, weight));

View File

@@ -8,7 +8,7 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = [] keywords = []
edition = "2021" edition = "2021"
publish = false publish = false
rust-version = "1.85" rust-version = "1.81"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -25,7 +25,7 @@ borsh = { version = "1", default-features = false, features = ["std", "derive",
dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] } dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] }
serai-client = { path = "../../substrate/client", version = "0.1", default-features = false, features = ["serai", "borsh"] } serai-client = { path = "../../substrate/client", version = "0.1", default-features = false, features = ["serai"] }
log = { version = "0.4", default-features = false, features = ["std"] } log = { version = "0.4", default-features = false, features = ["std"] }

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license AGPL-3.0-only license
Copyright (c) 2023-2025 Luke Parker Copyright (c) 2023-2024 Luke Parker
This program is free software: you can redistribute it and/or modify This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")] #![doc = include_str!("../README.md")]
#![deny(missing_docs)] #![deny(missing_docs)]

View File

@@ -6,7 +6,7 @@ license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tributary-sdk" repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tributary-sdk"
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021" edition = "2021"
rust-version = "1.85" rust-version = "1.81"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -24,12 +24,11 @@ zeroize = { version = "^1.5", default-features = false, features = ["std"] }
rand = { version = "0.8", default-features = false, features = ["std"] } rand = { version = "0.8", default-features = false, features = ["std"] }
rand_chacha = { version = "0.3", default-features = false, features = ["std"] } rand_chacha = { version = "0.3", default-features = false, features = ["std"] }
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] } blake2 = { version = "0.10", default-features = false, features = ["std"] }
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", default-features = false, features = ["std", "recommended"] } transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", default-features = false, features = ["std", "recommended"] }
ciphersuite = { path = "../../crypto/ciphersuite", version = "0.4", default-features = false, features = ["std"] } ciphersuite = { package = "ciphersuite", path = "../../crypto/ciphersuite", version = "0.4", default-features = false, features = ["std", "ristretto"] }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", default-features = false, features = ["std"] } schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", version = "0.5", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", version = "0.5", default-features = false, features = ["std", "aggregate"] }
hex = { version = "0.4", default-features = false, features = ["std"] } hex = { version = "0.4", default-features = false, features = ["std"] }
log = { version = "0.4", default-features = false, features = ["std"] } log = { version = "0.4", default-features = false, features = ["std"] }

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license AGPL-3.0-only license
Copyright (c) 2023-2025 Luke Parker Copyright (c) 2023 Luke Parker
This program is free software: you can redistribute it and/or modify This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -1,7 +1,6 @@
use std::collections::{VecDeque, HashSet}; use std::collections::{VecDeque, HashSet};
use dalek_ff_group::Ristretto; use ciphersuite::{group::GroupEncoding, Ciphersuite, Ristretto};
use ciphersuite::{group::GroupEncoding, *};
use serai_db::{Get, DbTxn, Db}; use serai_db::{Get, DbTxn, Db};
@@ -21,7 +20,7 @@ pub(crate) struct Blockchain<D: Db, T: TransactionTrait> {
block_number: u64, block_number: u64,
tip: [u8; 32], tip: [u8; 32],
participants: HashSet<[u8; 32]>, participants: HashSet<<Ristretto as Ciphersuite>::G>,
provided: ProvidedTransactions<D, T>, provided: ProvidedTransactions<D, T>,
mempool: Mempool<D, T>, mempool: Mempool<D, T>,
@@ -56,7 +55,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
} }
fn next_nonce_key( fn next_nonce_key(
genesis: &[u8; 32], genesis: &[u8; 32],
signer: &<Ristretto as WrappedGroup>::G, signer: &<Ristretto as Ciphersuite>::G,
order: &[u8], order: &[u8],
) -> Vec<u8> { ) -> Vec<u8> {
D::key( D::key(
@@ -69,15 +68,12 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
pub(crate) fn new( pub(crate) fn new(
db: D, db: D,
genesis: [u8; 32], genesis: [u8; 32],
participants: &[<Ristretto as WrappedGroup>::G], participants: &[<Ristretto as Ciphersuite>::G],
) -> Self { ) -> Self {
let mut res = Self { let mut res = Self {
db: Some(db.clone()), db: Some(db.clone()),
genesis, genesis,
participants: participants participants: participants.iter().copied().collect(),
.iter()
.map(<<Ristretto as WrappedGroup>::G as GroupEncoding>::to_bytes)
.collect(),
block_number: 0, block_number: 0,
tip: genesis, tip: genesis,
@@ -176,7 +172,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
self.mempool.add::<N, _>( self.mempool.add::<N, _>(
|signer, order| { |signer, order| {
if self.participants.contains(&signer.to_bytes()) { if self.participants.contains(&signer) {
Some( Some(
db.get(Self::next_nonce_key(&self.genesis, &signer, &order)) db.get(Self::next_nonce_key(&self.genesis, &signer, &order))
.map_or(0, |bytes| u32::from_le_bytes(bytes.try_into().unwrap())), .map_or(0, |bytes| u32::from_le_bytes(bytes.try_into().unwrap())),
@@ -199,13 +195,13 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
pub(crate) fn next_nonce( pub(crate) fn next_nonce(
&self, &self,
signer: &<Ristretto as WrappedGroup>::G, signer: &<Ristretto as Ciphersuite>::G,
order: &[u8], order: &[u8],
) -> Option<u32> { ) -> Option<u32> {
if let Some(next_nonce) = self.mempool.next_nonce_in_mempool(signer, order.to_vec()) { if let Some(next_nonce) = self.mempool.next_nonce_in_mempool(signer, order.to_vec()) {
return Some(next_nonce); return Some(next_nonce);
} }
if self.participants.contains(&signer.to_bytes()) { if self.participants.contains(signer) {
Some( Some(
self self
.db .db
@@ -254,7 +250,7 @@ impl<D: Db, T: TransactionTrait> Blockchain<D, T> {
self.tip, self.tip,
self.provided.transactions.clone(), self.provided.transactions.clone(),
&mut |signer, order| { &mut |signer, order| {
if self.participants.contains(&signer.to_bytes()) { if self.participants.contains(signer) {
let key = Self::next_nonce_key(&self.genesis, signer, order); let key = Self::next_nonce_key(&self.genesis, signer, order);
let next = txn let next = txn
.get(&key) .get(&key)

View File

@@ -3,8 +3,7 @@ use std::{sync::Arc, io};
use zeroize::Zeroizing; use zeroize::Zeroizing;
use ciphersuite::*; use ciphersuite::{Ciphersuite, Ristretto};
use dalek_ff_group::Ristretto;
use scale::Decode; use scale::Decode;
use futures_channel::mpsc::UnboundedReceiver; use futures_channel::mpsc::UnboundedReceiver;
@@ -162,8 +161,8 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
db: D, db: D,
genesis: [u8; 32], genesis: [u8; 32],
start_time: u64, start_time: u64,
key: Zeroizing<<Ristretto as WrappedGroup>::F>, key: Zeroizing<<Ristretto as Ciphersuite>::F>,
validators: Vec<(<Ristretto as WrappedGroup>::G, u64)>, validators: Vec<(<Ristretto as Ciphersuite>::G, u64)>,
p2p: P, p2p: P,
) -> Option<Self> { ) -> Option<Self> {
log::info!("new Tributary with genesis {}", hex::encode(genesis)); log::info!("new Tributary with genesis {}", hex::encode(genesis));
@@ -235,7 +234,7 @@ impl<D: Db, T: TransactionTrait, P: P2p> Tributary<D, T, P> {
pub async fn next_nonce( pub async fn next_nonce(
&self, &self,
signer: &<Ristretto as WrappedGroup>::G, signer: &<Ristretto as Ciphersuite>::G,
order: &[u8], order: &[u8],
) -> Option<u32> { ) -> Option<u32> {
self.network.blockchain.read().await.next_nonce(signer, order) self.network.blockchain.read().await.next_nonce(signer, order)

View File

@@ -1,7 +1,6 @@
use std::collections::HashMap; use std::collections::HashMap;
use dalek_ff_group::Ristretto; use ciphersuite::{Ciphersuite, Ristretto};
use ciphersuite::{group::GroupEncoding, *};
use serai_db::{DbTxn, Db}; use serai_db::{DbTxn, Db};
@@ -21,9 +20,9 @@ pub(crate) struct Mempool<D: Db, T: TransactionTrait> {
db: D, db: D,
genesis: [u8; 32], genesis: [u8; 32],
last_nonce_in_mempool: HashMap<([u8; 32], Vec<u8>), u32>, last_nonce_in_mempool: HashMap<(<Ristretto as Ciphersuite>::G, Vec<u8>), u32>,
txs: HashMap<[u8; 32], Transaction<T>>, txs: HashMap<[u8; 32], Transaction<T>>,
txs_per_signer: HashMap<[u8; 32], u32>, txs_per_signer: HashMap<<Ristretto as Ciphersuite>::G, u32>,
} }
impl<D: Db, T: TransactionTrait> Mempool<D, T> { impl<D: Db, T: TransactionTrait> Mempool<D, T> {
@@ -82,7 +81,6 @@ impl<D: Db, T: TransactionTrait> Mempool<D, T> {
} }
Transaction::Application(tx) => match tx.kind() { Transaction::Application(tx) => match tx.kind() {
TransactionKind::Signed(order, Signed { signer, nonce, .. }) => { TransactionKind::Signed(order, Signed { signer, nonce, .. }) => {
let signer = signer.to_bytes();
let amount = *res.txs_per_signer.get(&signer).unwrap_or(&0) + 1; let amount = *res.txs_per_signer.get(&signer).unwrap_or(&0) + 1;
res.txs_per_signer.insert(signer, amount); res.txs_per_signer.insert(signer, amount);
@@ -108,7 +106,7 @@ impl<D: Db, T: TransactionTrait> Mempool<D, T> {
// Returns Ok(true) if new, Ok(false) if an already present unsigned, or the error. // Returns Ok(true) if new, Ok(false) if an already present unsigned, or the error.
pub(crate) fn add< pub(crate) fn add<
N: Network, N: Network,
F: FnOnce(<Ristretto as WrappedGroup>::G, Vec<u8>) -> Option<u32>, F: FnOnce(<Ristretto as Ciphersuite>::G, Vec<u8>) -> Option<u32>,
>( >(
&mut self, &mut self,
blockchain_next_nonce: F, blockchain_next_nonce: F,
@@ -141,8 +139,6 @@ impl<D: Db, T: TransactionTrait> Mempool<D, T> {
}; };
let mut next_nonce = blockchain_next_nonce; let mut next_nonce = blockchain_next_nonce;
let signer = signer.to_bytes();
if let Some(mempool_last_nonce) = if let Some(mempool_last_nonce) =
self.last_nonce_in_mempool.get(&(signer, order.clone())) self.last_nonce_in_mempool.get(&(signer, order.clone()))
{ {
@@ -182,10 +178,10 @@ impl<D: Db, T: TransactionTrait> Mempool<D, T> {
// Returns None if the mempool doesn't have a nonce tracked. // Returns None if the mempool doesn't have a nonce tracked.
pub(crate) fn next_nonce_in_mempool( pub(crate) fn next_nonce_in_mempool(
&self, &self,
signer: &<Ristretto as WrappedGroup>::G, signer: &<Ristretto as Ciphersuite>::G,
order: Vec<u8>, order: Vec<u8>,
) -> Option<u32> { ) -> Option<u32> {
self.last_nonce_in_mempool.get(&(signer.to_bytes(), order)).copied().map(|nonce| nonce + 1) self.last_nonce_in_mempool.get(&(*signer, order)).copied().map(|nonce| nonce + 1)
} }
/// Get transactions to include in a block. /// Get transactions to include in a block.
@@ -246,8 +242,6 @@ impl<D: Db, T: TransactionTrait> Mempool<D, T> {
if let Some(tx) = self.txs.remove(tx) { if let Some(tx) = self.txs.remove(tx) {
if let TransactionKind::Signed(order, Signed { signer, nonce, .. }) = tx.kind() { if let TransactionKind::Signed(order, Signed { signer, nonce, .. }) = tx.kind() {
let signer = signer.to_bytes();
let amount = *self.txs_per_signer.get(&signer).unwrap() - 1; let amount = *self.txs_per_signer.get(&signer).unwrap() - 1;
self.txs_per_signer.insert(signer, amount); self.txs_per_signer.insert(signer, amount);
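The hunks above are the mempool's nonce bookkeeping: last_nonce_in_mempool records, per (signer, order), the highest nonce already queued, and next_nonce_in_mempool returns that value plus one (or None, so the caller falls back to the blockchain's next nonce). A minimal, self-contained sketch of that behaviour, with the signer reduced to a plain label purely for illustration:

use std::collections::HashMap;

// Simplified stand-in for the map keyed by (signer, order) shown above
fn next_nonce_in_mempool(
  last_nonce_in_mempool: &HashMap<(&'static str, Vec<u8>), u32>,
  signer: &'static str,
  order: Vec<u8>,
) -> Option<u32> {
  // The map stores the last nonce present in the mempool; the next usable nonce is one higher
  last_nonce_in_mempool.get(&(signer, order)).copied().map(|nonce| nonce + 1)
}

fn main() {
  let mut last = HashMap::new();
  // Transactions with nonces 5 and 6 are queued for this signer/order, so 6 is recorded
  last.insert(("signer", vec![]), 6);
  assert_eq!(next_nonce_in_mempool(&last, "signer", vec![]), Some(7));
  // Nothing tracked for this signer: the caller uses the blockchain's next nonce instead
  assert_eq!(next_nonce_in_mempool(&last, "other", vec![]), None);
}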

View File

@@ -10,10 +10,12 @@ use rand_chacha::ChaCha12Rng;
use transcript::{Transcript, RecommendedTranscript}; use transcript::{Transcript, RecommendedTranscript};
use ciphersuite::{
  group::{ff::PrimeField, GroupEncoding},
  *,
};
use dalek_ff_group::Ristretto;
use ciphersuite::{
  group::{
    GroupEncoding,
    ff::{Field, PrimeField},
  },
  Ciphersuite, Ristretto,
};
use schnorr::{ use schnorr::{
SchnorrSignature, SchnorrSignature,
aggregate::{SchnorrAggregator, SchnorrAggregate}, aggregate::{SchnorrAggregator, SchnorrAggregate},
@@ -48,26 +50,24 @@ fn challenge(
key: [u8; 32], key: [u8; 32],
nonce: &[u8], nonce: &[u8],
msg: &[u8], msg: &[u8],
) -> <Ristretto as WrappedGroup>::F { ) -> <Ristretto as Ciphersuite>::F {
let mut transcript = RecommendedTranscript::new(b"Tributary Chain Tendermint Message"); let mut transcript = RecommendedTranscript::new(b"Tributary Chain Tendermint Message");
transcript.append_message(b"genesis", genesis); transcript.append_message(b"genesis", genesis);
transcript.append_message(b"key", key); transcript.append_message(b"key", key);
transcript.append_message(b"nonce", nonce); transcript.append_message(b"nonce", nonce);
transcript.append_message(b"message", msg); transcript.append_message(b"message", msg);
<Ristretto as WrappedGroup>::F::from_bytes_mod_order_wide(
  &transcript.challenge(b"schnorr").into(),
)
<Ristretto as Ciphersuite>::F::from_bytes_mod_order_wide(&transcript.challenge(b"schnorr").into())
} }
#[derive(Clone, PartialEq, Eq, Debug)] #[derive(Clone, PartialEq, Eq, Debug)]
pub struct Signer { pub struct Signer {
genesis: [u8; 32], genesis: [u8; 32],
key: Zeroizing<<Ristretto as WrappedGroup>::F>, key: Zeroizing<<Ristretto as Ciphersuite>::F>,
} }
impl Signer { impl Signer {
pub(crate) fn new(genesis: [u8; 32], key: Zeroizing<<Ristretto as WrappedGroup>::F>) -> Signer { pub(crate) fn new(genesis: [u8; 32], key: Zeroizing<<Ristretto as Ciphersuite>::F>) -> Signer {
Signer { genesis, key } Signer { genesis, key }
} }
} }
@@ -100,10 +100,10 @@ impl SignerTrait for Signer {
assert_eq!(nonce_ref, [0; 64].as_ref()); assert_eq!(nonce_ref, [0; 64].as_ref());
let nonce = let nonce =
Zeroizing::new(<Ristretto as WrappedGroup>::F::from_bytes_mod_order_wide(&nonce_arr)); Zeroizing::new(<Ristretto as Ciphersuite>::F::from_bytes_mod_order_wide(&nonce_arr));
nonce_arr.zeroize(); nonce_arr.zeroize();
assert!(!bool::from(nonce.ct_eq(&<Ristretto as WrappedGroup>::F::ZERO))); assert!(!bool::from(nonce.ct_eq(&<Ristretto as Ciphersuite>::F::ZERO)));
let challenge = challenge( let challenge = challenge(
self.genesis, self.genesis,
@@ -132,7 +132,7 @@ pub struct Validators {
impl Validators { impl Validators {
pub(crate) fn new( pub(crate) fn new(
genesis: [u8; 32], genesis: [u8; 32],
validators: Vec<(<Ristretto as WrappedGroup>::G, u64)>, validators: Vec<(<Ristretto as Ciphersuite>::G, u64)>,
) -> Option<Validators> { ) -> Option<Validators> {
let mut total_weight = 0; let mut total_weight = 0;
let mut weights = HashMap::new(); let mut weights = HashMap::new();
@@ -163,6 +163,7 @@ impl SignatureScheme for Validators {
type AggregateSignature = Vec<u8>; type AggregateSignature = Vec<u8>;
type Signer = Arc<Signer>; type Signer = Arc<Signer>;
#[must_use]
fn verify(&self, validator: Self::ValidatorId, msg: &[u8], sig: &Self::Signature) -> bool { fn verify(&self, validator: Self::ValidatorId, msg: &[u8], sig: &Self::Signature) -> bool {
if !self.weights.contains_key(&validator) { if !self.weights.contains_key(&validator) {
return false; return false;
@@ -195,6 +196,7 @@ impl SignatureScheme for Validators {
aggregate.serialize() aggregate.serialize()
} }
#[must_use]
fn verify_aggregate( fn verify_aggregate(
&self, &self,
signers: &[Self::ValidatorId], signers: &[Self::ValidatorId],
@@ -219,7 +221,7 @@ impl SignatureScheme for Validators {
signers signers
.iter() .iter()
.zip(challenges) .zip(challenges)
.map(|(s, c)| (<Ristretto as GroupIo>::read_G(&mut s.as_slice()).unwrap(), c)) .map(|(s, c)| (<Ristretto as Ciphersuite>::read_G(&mut s.as_slice()).unwrap(), c))
.collect::<Vec<_>>() .collect::<Vec<_>>()
.as_slice(), .as_slice(),
) )

View File

@@ -4,8 +4,7 @@ use scale::{Encode, Decode, IoReader};
use blake2::{Digest, Blake2s256}; use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto; use ciphersuite::{Ciphersuite, Ristretto};
use ciphersuite::*;
use crate::{ use crate::{
transaction::{Transaction, TransactionKind, TransactionError}, transaction::{Transaction, TransactionKind, TransactionError},
@@ -50,7 +49,7 @@ impl Transaction for TendermintTx {
Blake2s256::digest(self.serialize()).into() Blake2s256::digest(self.serialize()).into()
} }
fn sig_hash(&self, _genesis: [u8; 32]) -> <Ristretto as WrappedGroup>::F { fn sig_hash(&self, _genesis: [u8; 32]) -> <Ristretto as Ciphersuite>::F {
match self { match self {
TendermintTx::SlashEvidence(_) => panic!("sig_hash called on slash evidence transaction"), TendermintTx::SlashEvidence(_) => panic!("sig_hash called on slash evidence transaction"),
} }

View File

@@ -1,9 +1,10 @@
use std::{sync::Arc, io, collections::HashMap, fmt::Debug}; use std::{sync::Arc, io, collections::HashMap, fmt::Debug};
use blake2::{Digest, Blake2s256}; use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::{group::Group, *};
use ciphersuite::{
  group::{ff::Field, Group},
  Ciphersuite, Ristretto,
};
use schnorr::SchnorrSignature; use schnorr::SchnorrSignature;
use serai_db::MemDb; use serai_db::MemDb;
@@ -29,11 +30,11 @@ impl NonceTransaction {
nonce, nonce,
distinguisher, distinguisher,
Signed { Signed {
signer: <Ristretto as WrappedGroup>::G::identity(), signer: <Ristretto as Ciphersuite>::G::identity(),
nonce, nonce,
signature: SchnorrSignature::<Ristretto> { signature: SchnorrSignature::<Ristretto> {
R: <Ristretto as WrappedGroup>::G::identity(), R: <Ristretto as Ciphersuite>::G::identity(),
s: <Ristretto as WrappedGroup>::F::ZERO, s: <Ristretto as Ciphersuite>::F::ZERO,
}, },
}, },
) )

View File

@@ -10,8 +10,7 @@ use rand::rngs::OsRng;
use blake2::{Digest, Blake2s256}; use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto; use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use ciphersuite::*;
use serai_db::{DbTxn, Db, MemDb}; use serai_db::{DbTxn, Db, MemDb};
@@ -31,7 +30,7 @@ type N = TendermintNetwork<MemDb, SignedTransaction, DummyP2p>;
fn new_blockchain<T: TransactionTrait>( fn new_blockchain<T: TransactionTrait>(
genesis: [u8; 32], genesis: [u8; 32],
participants: &[<Ristretto as WrappedGroup>::G], participants: &[<Ristretto as Ciphersuite>::G],
) -> (MemDb, Blockchain<MemDb, T>) { ) -> (MemDb, Blockchain<MemDb, T>) {
let db = MemDb::new(); let db = MemDb::new();
let blockchain = Blockchain::new(db.clone(), genesis, participants); let blockchain = Blockchain::new(db.clone(), genesis, participants);
@@ -82,7 +81,7 @@ fn invalid_block() {
assert!(blockchain.verify_block::<N>(&block, &validators, false).is_err()); assert!(blockchain.verify_block::<N>(&block, &validators, false).is_err());
} }
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng)); let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let tx = crate::tests::signed_transaction(&mut OsRng, genesis, &key, 0); let tx = crate::tests::signed_transaction(&mut OsRng, genesis, &key, 0);
// Not a participant // Not a participant
@@ -134,7 +133,7 @@ fn invalid_block() {
blockchain.verify_block::<N>(&block, &validators, false).unwrap(); blockchain.verify_block::<N>(&block, &validators, false).unwrap();
match &mut block.transactions[0] { match &mut block.transactions[0] {
Transaction::Application(tx) => { Transaction::Application(tx) => {
tx.1.signature.s += <Ristretto as WrappedGroup>::F::ONE; tx.1.signature.s += <Ristretto as Ciphersuite>::F::ONE;
} }
_ => panic!("non-signed tx found"), _ => panic!("non-signed tx found"),
} }
@@ -150,7 +149,7 @@ fn invalid_block() {
fn signed_transaction() { fn signed_transaction() {
let genesis = new_genesis(); let genesis = new_genesis();
let validators = Arc::new(Validators::new(genesis, vec![]).unwrap()); let validators = Arc::new(Validators::new(genesis, vec![]).unwrap());
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng)); let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let tx = crate::tests::signed_transaction(&mut OsRng, genesis, &key, 0); let tx = crate::tests::signed_transaction(&mut OsRng, genesis, &key, 0);
let signer = tx.1.signer; let signer = tx.1.signer;
@@ -339,7 +338,7 @@ fn provided_transaction() {
#[tokio::test] #[tokio::test]
async fn tendermint_evidence_tx() { async fn tendermint_evidence_tx() {
let genesis = new_genesis(); let genesis = new_genesis();
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng)); let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let signer = Signer::new(genesis, key.clone()); let signer = Signer::new(genesis, key.clone());
let signer_id = Ristretto::generator() * key.deref(); let signer_id = Ristretto::generator() * key.deref();
let validators = Arc::new(Validators::new(genesis, vec![(signer_id, 1)]).unwrap()); let validators = Arc::new(Validators::new(genesis, vec![(signer_id, 1)]).unwrap());
@@ -379,7 +378,7 @@ async fn tendermint_evidence_tx() {
let mut mempool: Vec<Transaction<SignedTransaction>> = vec![]; let mut mempool: Vec<Transaction<SignedTransaction>> = vec![];
let mut signers = vec![]; let mut signers = vec![];
for _ in 0 .. 5 { for _ in 0 .. 5 {
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng)); let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let signer = Signer::new(genesis, key.clone()); let signer = Signer::new(genesis, key.clone());
let signer_id = Ristretto::generator() * key.deref(); let signer_id = Ristretto::generator() * key.deref();
signers.push((signer_id, 1)); signers.push((signer_id, 1));
@@ -446,7 +445,7 @@ async fn block_tx_ordering() {
} }
let genesis = new_genesis(); let genesis = new_genesis();
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng)); let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
// signer // signer
let signer = crate::tests::signed_transaction(&mut OsRng, genesis, &key, 0).1.signer; let signer = crate::tests::signed_transaction(&mut OsRng, genesis, &key, 0).1.signer;

View File

@@ -3,8 +3,7 @@ use std::{sync::Arc, collections::HashMap};
use zeroize::Zeroizing; use zeroize::Zeroizing;
use rand::{RngCore, rngs::OsRng}; use rand::{RngCore, rngs::OsRng};
use dalek_ff_group::Ristretto; use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use ciphersuite::*;
use tendermint::ext::Commit; use tendermint::ext::Commit;
@@ -33,7 +32,7 @@ async fn mempool_addition() {
Some(Commit::<Arc<Validators>> { end_time: 0, validators: vec![], signature: vec![] }) Some(Commit::<Arc<Validators>> { end_time: 0, validators: vec![], signature: vec![] })
}; };
let unsigned_in_chain = |_: [u8; 32]| false; let unsigned_in_chain = |_: [u8; 32]| false;
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng)); let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let first_tx = signed_transaction(&mut OsRng, genesis, &key, 0); let first_tx = signed_transaction(&mut OsRng, genesis, &key, 0);
let signer = first_tx.1.signer; let signer = first_tx.1.signer;
@@ -125,7 +124,7 @@ async fn mempool_addition() {
// If the mempool doesn't have a nonce for an account, it should successfully use the // If the mempool doesn't have a nonce for an account, it should successfully use the
// blockchain's // blockchain's
let second_key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng)); let second_key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
let tx = signed_transaction(&mut OsRng, genesis, &second_key, 2); let tx = signed_transaction(&mut OsRng, genesis, &second_key, 2);
let second_signer = tx.1.signer; let second_signer = tx.1.signer;
assert_eq!(mempool.next_nonce_in_mempool(&second_signer, vec![]), None); assert_eq!(mempool.next_nonce_in_mempool(&second_signer, vec![]), None);
@@ -165,7 +164,7 @@ fn too_many_mempool() {
Some(Commit::<Arc<Validators>> { end_time: 0, validators: vec![], signature: vec![] }) Some(Commit::<Arc<Validators>> { end_time: 0, validators: vec![], signature: vec![] })
}; };
let unsigned_in_chain = |_: [u8; 32]| false; let unsigned_in_chain = |_: [u8; 32]| false;
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng)); let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
// We should be able to add transactions up to the limit // We should be able to add transactions up to the limit
for i in 0 .. ACCOUNT_MEMPOOL_LIMIT { for i in 0 .. ACCOUNT_MEMPOOL_LIMIT {

View File

@@ -6,8 +6,10 @@ use rand::{RngCore, CryptoRng, rngs::OsRng};
use blake2::{Digest, Blake2s256}; use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto;
use ciphersuite::*;
use ciphersuite::{
  group::{ff::Field, Group},
  Ciphersuite, Ristretto,
};
use schnorr::SchnorrSignature; use schnorr::SchnorrSignature;
use scale::Encode; use scale::Encode;
@@ -31,11 +33,11 @@ mod tendermint;
pub fn random_signed<R: RngCore + CryptoRng>(rng: &mut R) -> Signed { pub fn random_signed<R: RngCore + CryptoRng>(rng: &mut R) -> Signed {
Signed { Signed {
signer: <Ristretto as WrappedGroup>::G::random(&mut *rng), signer: <Ristretto as Ciphersuite>::G::random(&mut *rng),
nonce: u32::try_from(rng.next_u64() >> 32 >> 1).unwrap(), nonce: u32::try_from(rng.next_u64() >> 32 >> 1).unwrap(),
signature: SchnorrSignature::<Ristretto> { signature: SchnorrSignature::<Ristretto> {
R: <Ristretto as WrappedGroup>::G::random(&mut *rng), R: <Ristretto as Ciphersuite>::G::random(&mut *rng),
s: <Ristretto as WrappedGroup>::F::random(rng), s: <Ristretto as Ciphersuite>::F::random(rng),
}, },
} }
} }
@@ -134,18 +136,18 @@ impl Transaction for SignedTransaction {
pub fn signed_transaction<R: RngCore + CryptoRng>( pub fn signed_transaction<R: RngCore + CryptoRng>(
rng: &mut R, rng: &mut R,
genesis: [u8; 32], genesis: [u8; 32],
key: &Zeroizing<<Ristretto as WrappedGroup>::F>, key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
nonce: u32, nonce: u32,
) -> SignedTransaction { ) -> SignedTransaction {
let mut data = vec![0; 512]; let mut data = vec![0; 512];
rng.fill_bytes(&mut data); rng.fill_bytes(&mut data);
let signer = <Ristretto as WrappedGroup>::generator() * **key; let signer = <Ristretto as Ciphersuite>::generator() * **key;
let mut tx = let mut tx =
SignedTransaction(data, Signed { signer, nonce, signature: random_signed(rng).signature }); SignedTransaction(data, Signed { signer, nonce, signature: random_signed(rng).signature });
let sig_nonce = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(rng)); let sig_nonce = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(rng));
tx.1.signature.R = Ristretto::generator() * sig_nonce.deref(); tx.1.signature.R = Ristretto::generator() * sig_nonce.deref();
tx.1.signature = SchnorrSignature::sign(key, sig_nonce, tx.sig_hash(genesis)); tx.1.signature = SchnorrSignature::sign(key, sig_nonce, tx.sig_hash(genesis));
@@ -160,7 +162,7 @@ pub fn random_signed_transaction<R: RngCore + CryptoRng>(
let mut genesis = [0; 32]; let mut genesis = [0; 32];
rng.fill_bytes(&mut genesis); rng.fill_bytes(&mut genesis);
let key = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut *rng)); let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut *rng));
// Shift over an additional bit to ensure it won't overflow when incremented // Shift over an additional bit to ensure it won't overflow when incremented
let nonce = u32::try_from(rng.next_u64() >> 32 >> 1).unwrap(); let nonce = u32::try_from(rng.next_u64() >> 32 >> 1).unwrap();
@@ -177,11 +179,12 @@ pub async fn tendermint_meta() -> ([u8; 32], Signer, [u8; 32], Arc<Validators>)
// signer // signer
let genesis = new_genesis(); let genesis = new_genesis();
let signer = let signer =
Signer::new(genesis, Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng))); Signer::new(genesis, Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng)));
let validator_id = signer.validator_id().await.unwrap(); let validator_id = signer.validator_id().await.unwrap();
// schema // schema
let signer_pub = <Ristretto as GroupIo>::read_G::<&[u8]>(&mut validator_id.as_slice()).unwrap();
let signer_pub =
  <Ristretto as Ciphersuite>::read_G::<&[u8]>(&mut validator_id.as_slice()).unwrap();
let validators = Arc::new(Validators::new(genesis, vec![(signer_pub, 1)]).unwrap()); let validators = Arc::new(Validators::new(genesis, vec![(signer_pub, 1)]).unwrap());
(genesis, signer, validator_id, validators) (genesis, signer, validator_id, validators)
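The comment above about shifting over an additional bit is load-bearing: rng.next_u64() >> 32 already fits in a u32 but could equal u32::MAX, so the extra >> 1 guarantees the nonce can later be incremented without overflow. A quick check of that arithmetic (values here are purely illustrative):

fn main() {
  // Worst case: every bit of the random u64 is set
  let raw: u64 = u64::MAX;
  // Drop the high 32 bits, then one more bit, so the result is at most u32::MAX / 2
  let nonce = u32::try_from(raw >> 32 >> 1).unwrap();
  assert_eq!(nonce, u32::MAX >> 1);
  // Incrementing can therefore never overflow
  assert!(nonce.checked_add(1).is_some());
}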

View File

@@ -2,8 +2,7 @@ use rand::rngs::OsRng;
use blake2::{Digest, Blake2s256}; use blake2::{Digest, Blake2s256};
use dalek_ff_group::Ristretto; use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use ciphersuite::*;
use crate::{ use crate::{
ReadWrite, ReadWrite,
@@ -69,7 +68,7 @@ fn signed_transaction() {
} }
{ {
let mut tx = tx.clone(); let mut tx = tx.clone();
tx.1.signature.s += <Ristretto as WrappedGroup>::F::ONE; tx.1.signature.s += <Ristretto as Ciphersuite>::F::ONE;
assert!(verify_transaction(&tx, genesis, &mut |_, _| Some(tx.1.nonce)).is_err()); assert!(verify_transaction(&tx, genesis, &mut |_, _| Some(tx.1.nonce)).is_err());
} }

View File

@@ -3,8 +3,7 @@ use std::sync::Arc;
use zeroize::Zeroizing; use zeroize::Zeroizing;
use rand::{RngCore, rngs::OsRng}; use rand::{RngCore, rngs::OsRng};
use dalek_ff_group::Ristretto; use ciphersuite::{Ristretto, Ciphersuite, group::ff::Field};
use ciphersuite::*;
use scale::Encode; use scale::Encode;
@@ -261,7 +260,7 @@ async fn conflicting_msgs_evidence_tx() {
let signed_1 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![0x11]))).await; let signed_1 = signed_for_b_r(0, 0, Data::Proposal(None, TendermintBlock(vec![0x11]))).await;
let signer_2 = let signer_2 =
Signer::new(genesis, Zeroizing::new(<Ristretto as WrappedGroup>::F::random(&mut OsRng))); Signer::new(genesis, Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng)));
let signed_id_2 = signer_2.validator_id().await.unwrap(); let signed_id_2 = signer_2.validator_id().await.unwrap();
let signed_2 = signed_from_data::<N>( let signed_2 = signed_from_data::<N>(
signer_2.into(), signer_2.into(),
@@ -278,9 +277,10 @@ async fn conflicting_msgs_evidence_tx() {
)); ));
// update schema so that we don't fail due to invalid signature // update schema so that we don't fail due to invalid signature
let signer_pub = <Ristretto as GroupIo>::read_G::<&[u8]>(&mut signer_id.as_slice()).unwrap();
let signer_pub =
  <Ristretto as Ciphersuite>::read_G::<&[u8]>(&mut signer_id.as_slice()).unwrap();
let signer_pub_2 = let signer_pub_2 =
<Ristretto as GroupIo>::read_G::<&[u8]>(&mut signed_id_2.as_slice()).unwrap(); <Ristretto as Ciphersuite>::read_G::<&[u8]>(&mut signed_id_2.as_slice()).unwrap();
let validators = let validators =
Arc::new(Validators::new(genesis, vec![(signer_pub, 1), (signer_pub_2, 1)]).unwrap()); Arc::new(Validators::new(genesis, vec![(signer_pub, 1), (signer_pub_2, 1)]).unwrap());

View File

@@ -8,9 +8,8 @@ use blake2::{Digest, Blake2b512};
use ciphersuite::{ use ciphersuite::{
group::{Group, GroupEncoding}, group::{Group, GroupEncoding},
*, Ciphersuite, Ristretto,
}; };
use dalek_ff_group::Ristretto;
use schnorr::SchnorrSignature; use schnorr::SchnorrSignature;
use crate::{TRANSACTION_SIZE_LIMIT, ReadWrite}; use crate::{TRANSACTION_SIZE_LIMIT, ReadWrite};
@@ -43,7 +42,7 @@ pub enum TransactionError {
/// Data for a signed transaction. /// Data for a signed transaction.
#[derive(Clone, PartialEq, Eq, Debug)] #[derive(Clone, PartialEq, Eq, Debug)]
pub struct Signed { pub struct Signed {
pub signer: <Ristretto as WrappedGroup>::G, pub signer: <Ristretto as Ciphersuite>::G,
pub nonce: u32, pub nonce: u32,
pub signature: SchnorrSignature<Ristretto>, pub signature: SchnorrSignature<Ristretto>,
} }
@@ -160,10 +159,10 @@ pub trait Transaction: 'static + Send + Sync + Clone + Eq + Debug + ReadWrite {
/// Do not override this unless you know what you're doing. /// Do not override this unless you know what you're doing.
/// ///
/// Panics if called on non-signed transactions. /// Panics if called on non-signed transactions.
fn sig_hash(&self, genesis: [u8; 32]) -> <Ristretto as WrappedGroup>::F { fn sig_hash(&self, genesis: [u8; 32]) -> <Ristretto as Ciphersuite>::F {
match self.kind() { match self.kind() {
TransactionKind::Signed(order, Signed { signature, .. }) => { TransactionKind::Signed(order, Signed { signature, .. }) => {
<Ristretto as WrappedGroup>::F::from_bytes_mod_order_wide( <Ristretto as Ciphersuite>::F::from_bytes_mod_order_wide(
&Blake2b512::digest( &Blake2b512::digest(
[ [
b"Tributary Signed Transaction", b"Tributary Signed Transaction",
@@ -182,8 +181,8 @@ pub trait Transaction: 'static + Send + Sync + Clone + Eq + Debug + ReadWrite {
} }
} }
pub trait GAIN: FnMut(&<Ristretto as WrappedGroup>::G, &[u8]) -> Option<u32> {} pub trait GAIN: FnMut(&<Ristretto as Ciphersuite>::G, &[u8]) -> Option<u32> {}
impl<F: FnMut(&<Ristretto as WrappedGroup>::G, &[u8]) -> Option<u32>> GAIN for F {} impl<F: FnMut(&<Ristretto as Ciphersuite>::G, &[u8]) -> Option<u32>> GAIN for F {}
pub(crate) fn verify_transaction<F: GAIN, T: Transaction>( pub(crate) fn verify_transaction<F: GAIN, T: Transaction>(
tx: &T, tx: &T,

View File

@@ -6,7 +6,7 @@ license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tendermint" repository = "https://github.com/serai-dex/serai/tree/develop/coordinator/tendermint"
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021" edition = "2021"
rust-version = "1.75" rust-version = "1.81"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true

View File

@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) 2022-2025 Luke Parker Copyright (c) 2022-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal

View File

@@ -114,6 +114,7 @@ impl<S: SignatureScheme> SignatureScheme for Arc<S> {
self.as_ref().aggregate(validators, msg, sigs) self.as_ref().aggregate(validators, msg, sigs)
} }
#[must_use]
fn verify_aggregate( fn verify_aggregate(
&self, &self,
signers: &[Self::ValidatorId], signers: &[Self::ValidatorId],

View File

@@ -1,5 +1,3 @@
#![expect(clippy::cast_possible_truncation)]
use core::fmt::Debug; use core::fmt::Debug;
use std::{ use std::{

View File

@@ -46,6 +46,7 @@ impl SignatureScheme for TestSignatureScheme {
type AggregateSignature = Vec<[u8; 32]>; type AggregateSignature = Vec<[u8; 32]>;
type Signer = TestSigner; type Signer = TestSigner;
#[must_use]
fn verify(&self, validator: u16, msg: &[u8], sig: &[u8; 32]) -> bool { fn verify(&self, validator: u16, msg: &[u8], sig: &[u8; 32]) -> bool {
(sig[.. 2] == validator.to_le_bytes()) && (sig[2 ..] == [msg, &[0; 30]].concat()[.. 30]) (sig[.. 2] == validator.to_le_bytes()) && (sig[2 ..] == [msg, &[0; 30]].concat()[.. 30])
} }
@@ -59,6 +60,7 @@ impl SignatureScheme for TestSignatureScheme {
sigs.to_vec() sigs.to_vec()
} }
#[must_use]
fn verify_aggregate( fn verify_aggregate(
&self, &self,
signers: &[TestValidatorId], signers: &[TestValidatorId],

View File

@@ -8,7 +8,7 @@ authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = [] keywords = []
edition = "2021" edition = "2021"
publish = false publish = false
rust-version = "1.85" rust-version = "1.81"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -24,13 +24,12 @@ rand_core = { version = "0.6", default-features = false, features = ["std"] }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] } scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std", "derive"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] } borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
blake2 = { version = "0.11.0-rc.0", default-features = false, features = ["alloc"] } blake2 = { version = "0.10", default-features = false, features = ["std"] }
ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] } ciphersuite = { path = "../../crypto/ciphersuite", default-features = false, features = ["std"] }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", default-features = false, features = ["std"] }
dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] } dkg = { path = "../../crypto/dkg", default-features = false, features = ["std"] }
schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] } schnorr = { package = "schnorr-signatures", path = "../../crypto/schnorr", default-features = false, features = ["std"] }
serai-client = { path = "../../substrate/client", default-features = false, features = ["serai", "borsh"] } serai-client = { path = "../../substrate/client", default-features = false, features = ["serai"] }
serai-db = { path = "../../common/db" } serai-db = { path = "../../common/db" }
serai-task = { path = "../../common/task", version = "0.1" } serai-task = { path = "../../common/task", version = "0.1" }

View File

@@ -1,5 +1,3 @@
#![expect(clippy::cast_possible_truncation)]
use std::collections::HashMap; use std::collections::HashMap;
use scale::Encode; use scale::Encode;

View File

@@ -1,4 +1,4 @@
#![cfg_attr(docsrs, feature(doc_cfg))] #![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")] #![doc = include_str!("../README.md")]
#![deny(missing_docs)] #![deny(missing_docs)]
@@ -253,7 +253,7 @@ impl<TD: Db, TDT: DbTxn, P: P2p> ScanBlock<'_, TD, TDT, P> {
let signer = signer(signed); let signer = signer(signed);
// Check the participant voted to be removed actually exists // Check the participant voted to be removed actually exists
if !self.validators.contains(&participant) { if !self.validators.iter().any(|validator| *validator == participant) {
TributaryDb::fatal_slash( TributaryDb::fatal_slash(
self.tributary_txn, self.tributary_txn,
self.set.set, self.set.set,

View File

@@ -6,10 +6,9 @@ use rand_core::{RngCore, CryptoRng};
use blake2::{digest::typenum::U32, Digest, Blake2b}; use blake2::{digest::typenum::U32, Digest, Blake2b};
use ciphersuite::{
  group::{Group, GroupEncoding},
  *,
};
use dalek_ff_group::Ristretto;
use ciphersuite::{
  group::{ff::Field, Group, GroupEncoding},
  Ciphersuite, Ristretto,
};
use schnorr::SchnorrSignature; use schnorr::SchnorrSignature;
use scale::Encode; use scale::Encode;
@@ -52,7 +51,7 @@ impl SigningProtocolRound {
#[derive(Clone, Copy, PartialEq, Eq, Debug)] #[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Signed { pub struct Signed {
/// The signer. /// The signer.
signer: <Ristretto as WrappedGroup>::G, signer: <Ristretto as Ciphersuite>::G,
/// The signature. /// The signature.
signature: SchnorrSignature<Ristretto>, signature: SchnorrSignature<Ristretto>,
} }
@@ -73,7 +72,7 @@ impl BorshDeserialize for Signed {
impl Signed { impl Signed {
/// Fetch the signer. /// Fetch the signer.
pub(crate) fn signer(&self) -> <Ristretto as WrappedGroup>::G { pub(crate) fn signer(&self) -> <Ristretto as Ciphersuite>::G {
self.signer self.signer
} }
@@ -86,10 +85,10 @@ impl Signed {
impl Default for Signed { impl Default for Signed {
fn default() -> Self { fn default() -> Self {
Self { Self {
signer: <Ristretto as WrappedGroup>::G::identity(), signer: <Ristretto as Ciphersuite>::G::identity(),
signature: SchnorrSignature { signature: SchnorrSignature {
R: <Ristretto as WrappedGroup>::G::identity(), R: <Ristretto as Ciphersuite>::G::identity(),
s: <Ristretto as WrappedGroup>::F::ZERO, s: <Ristretto as Ciphersuite>::F::ZERO,
}, },
} }
} }
@@ -356,7 +355,7 @@ impl Transaction {
&mut self, &mut self,
rng: &mut R, rng: &mut R,
genesis: [u8; 32], genesis: [u8; 32],
key: &Zeroizing<<Ristretto as WrappedGroup>::F>, key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
) { ) {
fn signed(tx: &mut Transaction) -> &mut Signed { fn signed(tx: &mut Transaction) -> &mut Signed {
#[allow(clippy::match_same_arms)] // This doesn't make semantic sense here #[allow(clippy::match_same_arms)] // This doesn't make semantic sense here
@@ -380,13 +379,13 @@ impl Transaction {
} }
// Decide the nonce to sign with // Decide the nonce to sign with
let sig_nonce = Zeroizing::new(<Ristretto as WrappedGroup>::F::random(rng)); let sig_nonce = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(rng));
{ {
// Set the signer and the nonce // Set the signer and the nonce
let signed = signed(self); let signed = signed(self);
signed.signer = Ristretto::generator() * key.deref(); signed.signer = Ristretto::generator() * key.deref();
signed.signature.R = <Ristretto as WrappedGroup>::generator() * sig_nonce.deref(); signed.signature.R = <Ristretto as Ciphersuite>::generator() * sig_nonce.deref();
} }
// Get the signature hash (which now includes `R || A` making it valid as the challenge) // Get the signature hash (which now includes `R || A` making it valid as the challenge)
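The ordering in this hunk matters: R and the signer are written into the transaction before the signature hash is taken, so the challenge commits to `R || A`, and only then is the Schnorr signature produced, exactly as the tributary tests earlier in this diff do with SchnorrSignature::sign. A minimal sketch of that flow, assuming the same APIs used throughout this diff and a caller-supplied challenge standing in for sig_hash:

use core::ops::Deref;
use zeroize::Zeroizing;
use rand::rngs::OsRng;
use ciphersuite::{group::ff::Field, Ciphersuite, Ristretto};
use schnorr::SchnorrSignature;

fn sign_with_precommitted_nonce(
  key: &Zeroizing<<Ristretto as Ciphersuite>::F>,
  // Stand-in for `self.sig_hash(genesis)`, which must already commit to R and the signer
  challenge: <Ristretto as Ciphersuite>::F,
) -> SchnorrSignature<Ristretto> {
  // Decide the nonce to sign with and derive its commitment R = G * r
  let sig_nonce = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut OsRng));
  let _r = Ristretto::generator() * sig_nonce.deref();
  // In the real code, R and the signer are written into the transaction at this point,
  // so the challenge taken afterwards binds them
  SchnorrSignature::sign(key, sig_nonce, challenge)
}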

View File

@@ -1,13 +1,13 @@
[package] [package]
name = "ciphersuite" name = "ciphersuite"
version = "0.4.2" version = "0.4.1"
description = "Ciphersuites built around ff/group" description = "Ciphersuites built around ff/group"
license = "MIT" license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/ciphersuite" repository = "https://github.com/serai-dex/serai/tree/develop/crypto/ciphersuite"
authors = ["Luke Parker <lukeparker5132@gmail.com>"] authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["ciphersuite", "ff", "group"] keywords = ["ciphersuite", "ff", "group"]
edition = "2021" edition = "2021"
rust-version = "1.85" rust-version = "1.80"
[package.metadata.docs.rs] [package.metadata.docs.rs]
all-features = true all-features = true
@@ -17,32 +17,69 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true workspace = true
[dependencies] [dependencies]
std-shims = { path = "../../common/std-shims", version = "0.1.4", default-features = false } std-shims = { path = "../../common/std-shims", version = "^0.1.1", default-features = false, optional = true }
rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["derive"] } zeroize = { version = "^1.5", default-features = false, features = ["derive"] }
subtle = { version = "^2.4", default-features = false } subtle = { version = "^2.4", default-features = false }
digest = { version = "0.11.0-rc.1", default-features = false } digest = { version = "0.10", default-features = false }
transcript = { package = "flexible-transcript", path = "../transcript", version = "^0.3.2", default-features = false }
sha2 = { version = "0.10", default-features = false, optional = true }
sha3 = { version = "0.10", default-features = false, optional = true }
ff = { version = "0.13", default-features = false, features = ["bits"] } ff = { version = "0.13", default-features = false, features = ["bits"] }
group = { version = "0.13", default-features = false } group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../dalek-ff-group", version = "0.4", default-features = false, optional = true }
elliptic-curve = { version = "0.13", default-features = false, features = ["hash2curve"], optional = true }
p256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"], optional = true }
k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"], optional = true }
minimal-ed448 = { path = "../ed448", version = "0.4", default-features = false, optional = true }
[dev-dependencies] [dev-dependencies]
hex = { version = "0.4", default-features = false, features = ["std"] } hex = { version = "0.4", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
ff-group-tests = { version = "0.13", path = "../ff-group-tests" } ff-group-tests = { version = "0.13", path = "../ff-group-tests" }
[features] [features]
alloc = ["zeroize/alloc", "digest/alloc", "ff/alloc"] alloc = ["std-shims"]
std = [ std = [
"alloc",
"std-shims/std", "std-shims/std",
"rand_core/std",
"zeroize/std", "zeroize/std",
"subtle/std", "subtle/std",
"digest/std",
"transcript/std",
"sha2?/std",
"sha3?/std",
"ff/std", "ff/std",
"dalek-ff-group?/std",
"elliptic-curve?/std",
"p256?/std",
"k256?/std",
"minimal-ed448?/std",
] ]
dalek = ["sha2", "dalek-ff-group"]
ed25519 = ["dalek"]
ristretto = ["dalek"]
kp256 = ["sha2", "elliptic-curve"]
p256 = ["kp256", "dep:p256"]
secp256k1 = ["kp256", "k256"]
ed448 = ["sha3", "minimal-ed448"]
default = ["std"] default = ["std"]

View File

@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) 2021-2025 Luke Parker Copyright (c) 2021-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal

View File

@@ -17,7 +17,9 @@ Secp256k1 and P-256 are offered via [k256](https://crates.io/crates/k256) and
[p256](https://crates.io/crates/p256), two libraries maintained by [p256](https://crates.io/crates/p256), two libraries maintained by
[RustCrypto](https://github.com/RustCrypto). [RustCrypto](https://github.com/RustCrypto).
Please see the [`ciphersuite-kp256`](https://docs.rs/ciphersuite-kp256) crate for more info.
Their `hash_to_F` is the
[IETF's hash to curve](https://www.ietf.org/archive/id/draft-irtf-cfrg-hash-to-curve-16.html),
yet applied to their scalar field.
### Ed25519/Ristretto ### Ed25519/Ristretto
@@ -25,7 +27,11 @@ Ed25519/Ristretto are offered via
[dalek-ff-group](https://crates.io/crates/dalek-ff-group), an ff/group wrapper [dalek-ff-group](https://crates.io/crates/dalek-ff-group), an ff/group wrapper
around [curve25519-dalek](https://crates.io/crates/curve25519-dalek). around [curve25519-dalek](https://crates.io/crates/curve25519-dalek).
Please see the [`dalek-ff-group`](https://docs.rs/dalek-ff-group) crate for more info.
Their `hash_to_F` is the wide reduction of SHA2-512, as used in
[RFC-8032](https://www.rfc-editor.org/rfc/rfc8032). This is also compliant with
the draft
[RFC-RISTRETTO](https://www.ietf.org/archive/id/draft-irtf-cfrg-ristretto255-decaf448-05.html).
The domain-separation tag is naively prefixed to the message.
### Ed448 ### Ed448
@@ -33,4 +39,6 @@ Ed448 is offered via [minimal-ed448](https://crates.io/crates/minimal-ed448), an
explicitly not recommended, unaudited, incomplete Ed448 implementation, limited explicitly not recommended, unaudited, incomplete Ed448 implementation, limited
to its prime-order subgroup. to its prime-order subgroup.
Please see the [`minimal-ed448`](https://docs.rs/minimal-ed448) crate for more info.
Its `hash_to_F` is the wide reduction of SHAKE256, with a 114-byte output, as
used in [RFC-8032](https://www.rfc-editor.org/rfc/rfc8032). The
domain-separation tag is naively prefixed to the message.
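Because the domain-separation tag is naively prefixed, a suffix of the DST can migrate into the message without changing the resulting scalar, the exact caveat repeated by the doc comments later in this diff. A minimal sketch of that collision, assuming the `ristretto` feature of the ciphersuite crate as restored here (the DST and message bytes are purely illustrative):

use ciphersuite::{Ciphersuite, Ristretto};

fn main() {
  // Both calls reduce SHA2-512("example-dstmessage"), so the scalars are identical
  let a = Ristretto::hash_to_F(b"example-dst", b"message");
  let b = Ristretto::hash_to_F(b"example-dstmess", b"age");
  assert_eq!(a, b);
}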

View File

@@ -1,51 +0,0 @@
[package]
name = "ciphersuite-kp256"
version = "0.4.0"
description = "Ciphersuites built around ff/group"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/crypto/ciphersuite/kp256"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["ciphersuite", "ff", "group"]
edition = "2021"
rust-version = "1.85"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
rand_core = { version = "0.6", default-features = false }
zeroize = { version = "^1.5", default-features = false, features = ["derive"] }
sha2 = { version = "0.11.0-rc.2", default-features = false }
p256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"] }
k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits", "hash2curve"] }
ciphersuite = { path = "../", version = "0.4", default-features = false }
[dev-dependencies]
hex = { version = "0.4", default-features = false, features = ["std"] }
rand_core = { version = "0.6", default-features = false, features = ["std"] }
ff-group-tests = { version = "0.13", path = "../../ff-group-tests" }
[features]
alloc = ["ciphersuite/alloc"]
std = [
"rand_core/std",
"zeroize/std",
"p256/std",
"k256/std",
"ciphersuite/std",
]
default = ["std"]

View File

@@ -1,3 +0,0 @@
# Ciphersuite {k, p}256
SECP256k1 and P-256 Ciphersuites around k256 and p256.

View File

@@ -1,54 +0,0 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(not(feature = "std"), no_std)]
use zeroize::Zeroize;
use sha2::Sha512;
use ciphersuite::{WrappedGroup, Id, WithPreferredHash, GroupCanonicalEncoding};
pub use k256;
pub use p256;
macro_rules! kp_curve {
(
$feature: literal,
$lib: ident,
$Ciphersuite: ident,
$ID: literal
) => {
impl WrappedGroup for $Ciphersuite {
type F = $lib::Scalar;
type G = $lib::ProjectivePoint;
fn generator() -> Self::G {
$lib::ProjectivePoint::GENERATOR
}
}
impl Id for $Ciphersuite {
const ID: &'static [u8] = $ID;
}
impl WithPreferredHash for $Ciphersuite {
type H = Sha512;
}
impl GroupCanonicalEncoding for $Ciphersuite {}
};
}
/// Ciphersuite for Secp256k1.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Secp256k1;
kp_curve!("secp256k1", k256, Secp256k1, b"secp256k1");
#[test]
fn test_secp256k1() {
ff_group_tests::group::test_prime_group_bits::<_, k256::ProjectivePoint>(&mut rand_core::OsRng);
}
/// Ciphersuite for P-256.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct P256;
kp_curve!("p256", p256, P256, b"P-256");
#[test]
fn test_p256() {
ff_group_tests::group::test_prime_group_bits::<_, p256::ProjectivePoint>(&mut rand_core::OsRng);
}

View File

@@ -0,0 +1,106 @@
use zeroize::Zeroize;
use sha2::{Digest, Sha512};
use group::Group;
use dalek_ff_group::Scalar;
use crate::Ciphersuite;
macro_rules! dalek_curve {
(
$feature: literal,
$Ciphersuite: ident,
$Point: ident,
$ID: literal
) => {
use dalek_ff_group::$Point;
impl Ciphersuite for $Ciphersuite {
type F = Scalar;
type G = $Point;
type H = Sha512;
const ID: &'static [u8] = $ID;
fn generator() -> Self::G {
$Point::generator()
}
fn reduce_512(mut scalar: [u8; 64]) -> Self::F {
let res = Scalar::from_bytes_mod_order_wide(&scalar);
scalar.zeroize();
res
}
fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F {
Scalar::from_hash(Sha512::new_with_prefix(&[dst, data].concat()))
}
}
};
}
/// Ciphersuite for Ristretto.
///
/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition
/// between the two. This means `dst: b"abc", data: b"def"`, will produce the same scalar as
/// `dst: "abcdef", data: b""`. Please use carefully, not letting dsts be substrings of each other.
#[cfg(any(test, feature = "ristretto"))]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Ristretto;
#[cfg(any(test, feature = "ristretto"))]
dalek_curve!("ristretto", Ristretto, RistrettoPoint, b"ristretto");
#[cfg(any(test, feature = "ristretto"))]
#[test]
fn test_ristretto() {
ff_group_tests::group::test_prime_group_bits::<_, RistrettoPoint>(&mut rand_core::OsRng);
assert_eq!(
Ristretto::hash_to_F(
b"FROST-RISTRETTO255-SHA512-v11nonce",
&hex::decode(
"\
81800157bb554f299fe0b6bd658e4c4591d74168b5177bf55e8dceed59dc80c7\
5c3430d391552f6e60ecdc093ff9f6f4488756aa6cebdbad75a768010b8f830e"
)
.unwrap()
)
.to_bytes()
.as_ref(),
&hex::decode("40f58e8df202b21c94f826e76e4647efdb0ea3ca7ae7e3689bc0cbe2e2f6660c").unwrap()
);
}
/// Ciphersuite for Ed25519, inspired by RFC-8032.
///
/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition
/// between the two. This means `dst: b"abc", data: b"def"`, will produce the same scalar as
/// `dst: "abcdef", data: b""`. Please use carefully, not letting dsts be substrings of each other.
#[cfg(feature = "ed25519")]
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Ed25519;
#[cfg(feature = "ed25519")]
dalek_curve!("ed25519", Ed25519, EdwardsPoint, b"edwards25519");
#[cfg(feature = "ed25519")]
#[test]
fn test_ed25519() {
ff_group_tests::group::test_prime_group_bits::<_, EdwardsPoint>(&mut rand_core::OsRng);
// Ideally, a test vector from RFC-8032 (not FROST) would be here
// Unfortunately, the IETF draft doesn't provide any vectors for the derived challenges
assert_eq!(
Ed25519::hash_to_F(
b"FROST-ED25519-SHA512-v11nonce",
&hex::decode(
"\
9d06a6381c7a4493929761a73692776772b274236fb5cfcc7d1b48ac3a9c249f\
929dcc590407aae7d388761cddb0c0db6f5627aea8e217f4a033f2ec83d93509"
)
.unwrap()
)
.to_bytes()
.as_ref(),
&hex::decode("70652da3e8d7533a0e4b9e9104f01b48c396b5b553717784ed8d05c6a36b9609").unwrap()
);
}

View File

@@ -0,0 +1,110 @@
use zeroize::Zeroize;
use digest::{
typenum::U114, core_api::BlockSizeUser, Update, Output, OutputSizeUser, FixedOutput,
ExtendableOutput, XofReader, HashMarker, Digest,
};
use sha3::Shake256;
use group::Group;
use minimal_ed448::{Scalar, Point};
use crate::Ciphersuite;
/// Shake256, fixed to a 114-byte output, as used by Ed448.
#[derive(Clone, Default)]
pub struct Shake256_114(Shake256);
impl BlockSizeUser for Shake256_114 {
type BlockSize = <Shake256 as BlockSizeUser>::BlockSize;
fn block_size() -> usize {
Shake256::block_size()
}
}
impl OutputSizeUser for Shake256_114 {
type OutputSize = U114;
fn output_size() -> usize {
114
}
}
impl Update for Shake256_114 {
fn update(&mut self, data: &[u8]) {
self.0.update(data);
}
fn chain(mut self, data: impl AsRef<[u8]>) -> Self {
Update::update(&mut self, data.as_ref());
self
}
}
impl FixedOutput for Shake256_114 {
fn finalize_fixed(self) -> Output<Self> {
let mut res = Default::default();
FixedOutput::finalize_into(self, &mut res);
res
}
fn finalize_into(self, out: &mut Output<Self>) {
let mut reader = self.0.finalize_xof();
reader.read(out);
}
}
impl HashMarker for Shake256_114 {}
/// Ciphersuite for Ed448, inspired by RFC-8032. This is not recommended for usage.
///
/// hash_to_F is implemented with a naive concatenation of the dst and data, allowing transposition
/// between the two. This means `dst: b"abc", data: b"def"`, will produce the same scalar as
/// `dst: "abcdef", data: b""`. Please use carefully, not letting dsts be substrings of each other.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Ed448;
impl Ciphersuite for Ed448 {
type F = Scalar;
type G = Point;
type H = Shake256_114;
const ID: &'static [u8] = b"ed448";
fn generator() -> Self::G {
Point::generator()
}
fn reduce_512(mut scalar: [u8; 64]) -> Self::F {
let res = Self::hash_to_F(b"Ciphersuite-reduce_512", &scalar);
scalar.zeroize();
res
}
fn hash_to_F(dst: &[u8], data: &[u8]) -> Self::F {
Scalar::wide_reduce(Self::H::digest([dst, data].concat()).as_ref().try_into().unwrap())
}
}
#[test]
fn test_ed448() {
use ff::PrimeField;
ff_group_tests::group::test_prime_group_bits::<_, Point>(&mut rand_core::OsRng);
// Ideally, a test vector from RFC-8032 (not FROST) would be here
// Unfortunately, the IETF draft doesn't provide any vectors for the derived challenges
assert_eq!(
Ed448::hash_to_F(
b"FROST-ED448-SHAKE256-v11nonce",
&hex::decode(
"\
89bf16040081ff2990336b200613787937ebe1f024b8cdff90eb6f1c741d91c1\
4a2b2f5858a932ad3d3b18bd16e76ced3070d72fd79ae4402df201f5\
25e754716a1bc1b87a502297f2a99d89ea054e0018eb55d39562fd01\
00"
)
.unwrap()
)
.to_repr()
.to_vec(),
hex::decode(
"\
67a6f023e77361707c6e894c625e809e80f33fdb310810053ae29e28\
e7011f3193b9020e73c183a98cc3a519160ed759376dd92c94831622\
00"
)
.unwrap()
);
}
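The Shake256_114 wrapper above exists solely to give SHAKE256 a fixed 114-byte output so it can satisfy the Digest-style traits. A rough standalone equivalent using sha3 directly (the function name here is illustrative, not part of the crate):

use sha3::{
  digest::{ExtendableOutput, Update, XofReader},
  Shake256,
};

// Read exactly 114 bytes from the SHAKE256 XOF, mirroring what the wrapper's
// FixedOutput implementation does via finalize_xof() and read()
fn shake256_114(data: &[u8]) -> [u8; 114] {
  let mut hasher = Shake256::default();
  hasher.update(data);
  let mut out = [0; 114];
  hasher.finalize_xof().read(&mut out);
  out
}

fn main() {
  assert_eq!(shake256_114(b"test").len(), 114);
}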

Some files were not shown because too many files have changed in this diff.