16 Commits

Author SHA1 Message Date
Luke Parker
0849d60f28 Run Bitcoin, Monero nodes on Alpine
While this didn't work well previously, presumably due to stack-size limitations,
a shell script is now included to raise the default stack-size limit, so this
should be tried again.
2025-12-08 02:30:34 -05:00
Luke Parker
3a792f9ce5 Update documentation on the serai-runtime build.rs 2025-12-08 02:22:29 -05:00
Luke Parker
50959fa0e3 Update the polkadot-sdk used
Removes `parity-wasm` as a dependency, closing
https://github.com/serai-dex/serai/issues/227 and tidying our `deny.toml`.

This removes the `import-memory` flag from the linker, as part of
`parity-wasm`'s usage was to map imports into exports
(5a1128b94b/substrate/client/executor/common/src/runtime_blob/runtime_blob.rs (L91-L142)).
2025-12-08 02:22:25 -05:00
Luke Parker
2fb90ebe55 Extend crates we patch to be empty from the Ethereum ecosystem
`ruint` pulls in many versions of many crates. This has it pull in fewer.
2025-12-06 08:27:34 -05:00
Luke Parker
b24adcbd14 Add panic-on-poison to no-std std_shims::sync::Mutex
We already had this behavior on `std`. It was omitted on no-`std` due to
deferring to `spin::Mutex`, which does not track poisoning at all. This
increases parity between the two.

Part of https://github.com/serai-dex/serai/issues/698.
2025-12-06 08:06:38 -05:00
Luke Parker
b791256648 Remove substrate-wasm-builder
By defining our own build script, we gain complete clarity and control over how
the WASM is built. This also removes the need to patch the upstream, which was
required because it allowed the host's environment variables to pollute the build.

Notable appreciation is given to
https://github.com/rust-lang/rust/issues/145491 for identifying an issue
encountered here, with the associated PR clarifying the linker flags necessary
to fix it.
2025-12-04 23:23:38 -05:00
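For the build-script approach this commit describes, a rough sketch of the general shape such a `build.rs` can take (an illustration only; the actual `serai-runtime` build script, its recursion guard, feature selection, and artifact paths may differ — only the `serai-runtime` package name and the `wasm32v1-none` target are taken from this changeset):

use std::{env, path::PathBuf, process::Command};

fn main() {
  // When cargo re-runs this script during the nested WASM build below, do nothing,
  // preventing infinite recursion (assumed guard, not taken from the source)
  if env::var("TARGET").as_deref() == Ok("wasm32v1-none") {
    return;
  }

  let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());

  let mut cargo = Command::new(env::var("CARGO").unwrap_or_else(|_| "cargo".into()));
  cargo
    // Don't forward the host's environment wholesale; re-add only what's needed to run cargo
    .env_clear()
    .env("PATH", env::var("PATH").unwrap_or_default())
    .env("HOME", env::var("HOME").unwrap_or_default())
    // Keep the nested build's artifacts under this build script's own OUT_DIR
    .env("CARGO_TARGET_DIR", out_dir.join("wasm"))
    .args(["build", "--release", "--target", "wasm32v1-none", "-p", "serai-runtime"]);
  assert!(cargo.status().unwrap().success(), "failed to build the WASM runtime");
}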
Luke Parker
36ac9c56a4 Remove workaround for lack of musl-dev now that musl-dev is provided in Rust Alpine images
Additionally, optimizes the build process a bit by leaving only the runtime
(and `busybox`) in the final image and by building the runtime
without `std` (as we solely need the WASM blob from this process).
2025-12-04 11:58:38 -05:00
Luke Parker
57bf4984f8 panic = "abort"
`panic = "unwind"` was originally a requirement of Substrate, notably due to
its [native runtime](https://github.com/paritytech/substrate/issues/10874).
This does not mean all of Serai should use this setting, however.

As the native runtime has been removed, we no longer need this for the
Substrate node. Reviewing our derivative, a panic guard is only used
when fetching the version from the runtime, causing an error on boot if a
panic occurs. Accordingly, we shouldn't need `panic = "unwind"`
within the node, and the runtime itself should be fine.

The rest of Serai's services already registered bespoke hooks to ensure any
panic caused the process to exit. Those are left as-is, even though they're
now unnecessary.
2025-12-04 11:58:38 -05:00
Luke Parker
87750407de cargo-deny 0.18.8, remove bip39 git dependency
The former is necessary due to `cargo-deny` misinterpreting select licenses.
The latter is finally possible with the recent 2.2.1 release 🎉
2025-12-04 11:58:28 -05:00
Luke Parker
3ce90c55d9 Define a 512 KiB block size limit 2025-12-02 21:24:05 -05:00
Luke Parker
ff95c58341 Round out the runtime
Ensures the block's size limit is respected.

Defines a policy for weights. While I'm unsure I want to commit to this
forever, I do want to acknowledge it's valid and well-defined.

Cleans up the `serai-runtime` crate a bit with further modules in the `wasm`
folder.
2025-12-02 21:16:34 -05:00
Luke Parker
98044f93b1 Stub the in-instructions pallet 2025-12-02 16:46:10 -05:00
Luke Parker
eb04f873d5 Stub the genesis-liquidity pallet 2025-12-02 16:46:06 -05:00
Luke Parker
af74c318aa Add event emissions to the DEX pallet 2025-12-02 13:31:33 -05:00
Luke Parker
d711d8915f Update docs Ruby/gem versions 2025-12-02 13:20:17 -05:00
Luke Parker
3d549564a8 Misc tweaks in the style of the last commit
Notably removes the `kvdb-rocksdb` patch via updating the Substrate version
used to one which disables the `jemalloc` feature itself.

Simplifies the path at which the Dockerfile's built WASM file is exposed to consumers.
This also ensures that if the image is built, the path of the WASM file is as
expected (previously unasserted).
2025-12-02 09:10:44 -05:00
62 changed files with 1591 additions and 2602 deletions

View File

@@ -52,7 +52,7 @@ runs:
- name: Install solc
shell: bash
run: |
cargo +1.91.1 install svm-rs --version =0.5.21
cargo +1.91.1 install svm-rs --version =0.5.22
svm install 0.8.29
svm use 0.8.29

View File

@@ -12,7 +12,7 @@ jobs:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install cargo deny
run: cargo +1.91.1 install cargo-deny --version =0.18.6
run: cargo +1.91.1 install cargo-deny --version =0.18.8
- name: Run cargo deny
run: cargo deny -L error --all-features check --hide-inclusion-graph

View File

@@ -46,7 +46,7 @@ jobs:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Install cargo deny
run: cargo +1.91.1 install cargo-deny --version =0.18.6
run: cargo +1.91.1 install cargo-deny --version =0.18.8
- name: Run cargo deny
run: cargo deny -L error --all-features check --hide-inclusion-graph

View File

@@ -33,4 +33,4 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Run Reproducible Runtime tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-reproducible-runtime-tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-reproducible-runtime-tests -- --nocapture

499
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -116,6 +116,19 @@ members = [
"tests/reproducible-runtime",
]
[profile.dev]
panic = "abort"
overflow-checks = true
[profile.release]
panic = "abort"
overflow-checks = true
# These do not respect the `panic` configuration value, so we don't provide them
[profile.test]
# panic = "abort" # https://github.com/rust-lang/issues/67650
overflow-checks = true
[profile.bench]
overflow-checks = true
[profile.dev.package]
# Always compile Monero (and a variety of dependencies) with optimizations due
# to the extensive operations required for Bulletproofs
@@ -165,17 +178,17 @@ revm-precompile = { opt-level = 3 }
revm-primitives = { opt-level = 3 }
revm-state = { opt-level = 3 }
[profile.release]
panic = "unwind"
overflow-checks = true
[patch.crates-io]
# Point to empty crates for crates unused within our tree
alloy-eip2124 = { path = "patches/ethereum/alloy-eip2124" }
ark-ff-3 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.3" }
ark-ff-4 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.4" }
c-kzg = { path = "patches/ethereum/c-kzg" }
secp256k1-30 = { package = "secp256k1", path = "patches/ethereum/secp256k1-30" }
fastrlp-3 = { package = "fastrlp", path = "patches/ethereum/fastrlp-0.3" }
fastrlp-4 = { package = "fastrlp", path = "patches/ethereum/fastrlp-0.4" }
primitive-types-12 = { package = "primitive-types", path = "patches/ethereum/primitive-types-0.12" }
rlp = { path = "patches/ethereum/rlp" }
secp256k1-30 = { package = "secp256k1", path = "patches/ethereum/secp256k1-0.30" }
# Dependencies from monero-oxide which originate from within our own tree, potentially shimmed to account for deviations since publishing
std-shims = { path = "patches/std-shims" }
@@ -214,9 +227,6 @@ parity-bip39 = { path = "patches/parity-bip39" }
k256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
p256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
# `jemalloc` conflicts with `mimalloc`, so patch to a `kvdb-rocksdb` which never exposes `jemalloc`
kvdb-rocksdb = { path = "patches/kvdb-rocksdb" }
[workspace.lints.clippy]
incompatible_msrv = "allow" # Manually verified with a GitHub workflow
manual_is_multiple_of = "allow"

View File

@@ -6,12 +6,63 @@ pub use std::sync::{Arc, Weak};
mod mutex_shim {
#[cfg(not(feature = "std"))]
pub use spin::{Mutex, MutexGuard};
mod spin_mutex {
use core::ops::{Deref, DerefMut};
// We wrap this in an `Option` so we can consider `None` as poisoned
pub(super) struct Mutex<T>(spin::Mutex<Option<T>>);
/// An acquired view of a `Mutex`.
pub struct MutexGuard<'mutex, T> {
mutex: spin::MutexGuard<'mutex, Option<T>>,
// This is `Some` for the lifetime of this guard, and is only represented as an `Option` due
// to needing to move it on `Drop` (which solely gives us a mutable reference to `self`)
value: Option<T>,
}
impl<T> Mutex<T> {
pub(super) const fn new(value: T) -> Self {
Self(spin::Mutex::new(Some(value)))
}
pub(super) fn lock(&self) -> MutexGuard<'_, T> {
let mut mutex = self.0.lock();
// Take from the `Mutex` so future acquisitions will see `None` unless this is restored
let value = mutex.take();
// Check the prior acquisition did in fact restore the value
if value.is_none() {
panic!("locking a `spin::Mutex` held by a thread which panicked");
}
MutexGuard { mutex, value }
}
}
impl<T> Deref for MutexGuard<'_, T> {
type Target = T;
fn deref(&self) -> &T {
self.value.as_ref().expect("no value yet checked upon lock acquisition")
}
}
impl<T> DerefMut for MutexGuard<'_, T> {
fn deref_mut(&mut self) -> &mut T {
self.value.as_mut().expect("no value yet checked upon lock acquisition")
}
}
impl<'mutex, T> Drop for MutexGuard<'mutex, T> {
fn drop(&mut self) {
// Restore the value
*self.mutex = self.value.take();
}
}
}
#[cfg(not(feature = "std"))]
pub use spin_mutex::*;
#[cfg(feature = "std")]
pub use std::sync::{Mutex, MutexGuard};
/// A shimmed `Mutex` with an API mutual to `spin` and `std`.
#[derive(Default, Debug)]
pub struct ShimMutex<T>(Mutex<T>);
impl<T> ShimMutex<T> {
/// Construct a new `Mutex`.
@@ -21,8 +72,9 @@ mod mutex_shim {
/// Acquire a lock on the contents of the `Mutex`.
///
/// On no-`std` environments, this may spin until the lock is acquired. On `std` environments,
/// this may panic if the `Mutex` was poisoned.
/// This will panic if the `Mutex` was poisoned.
///
/// On no-`std` environments, the implementation presumably defers to that of a spin lock.
pub fn lock(&self) -> MutexGuard<'_, T> {
#[cfg(feature = "std")]
let res = self.0.lock().unwrap();
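As a usage sketch of the behavior this change unifies (assuming `std_shims::sync::Mutex` re-exports the shim shown above), acquiring a lock previously held by a panicking thread now panics on both `std` and no-`std`:

use std::sync::Arc;
use std_shims::sync::Mutex;

fn main() {
  let mutex = Arc::new(Mutex::new(0u64));

  // Panic while holding the guard, leaving the `Mutex` poisoned
  let mutex_for_thread = mutex.clone();
  let _ = std::thread::spawn(move || {
    let _guard = mutex_for_thread.lock();
    panic!("poison the mutex");
  })
  .join();

  // With this change, this acquisition panics under the `spin`-backed no-`std` shim as well,
  // matching the existing panic-on-poison behavior of the `std` shim
  let _guard = mutex.lock();
}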

View File

@@ -7,8 +7,7 @@ db-urls = ["https://github.com/rustsec/advisory-db"]
yanked = "deny"
ignore = [
"RUSTSEC-2022-0061", # https://github.com/serai-dex/serai/227
"RUSTSEC-2024-0370", # proc-macro-error is unmaintained
"RUSTSEC-2024-0370", # `proc-macro-error` is unmaintained, in-tree due to Substrate/`litep2p`
"RUSTSEC-2024-0436", # paste is unmaintained
]
@@ -79,7 +78,6 @@ exceptions = [
{ allow = ["AGPL-3.0-only"], name = "serai-coordinator-libp2p-p2p" },
{ allow = ["AGPL-3.0-only"], name = "serai-coordinator" },
{ allow = ["AGPL-3.0-only"], name = "pallet-session" },
{ allow = ["AGPL-3.0-only"], name = "substrate-median" },
{ allow = ["AGPL-3.0-only"], name = "serai-core-pallet" },
@@ -152,6 +150,5 @@ allow-git = [
"https://github.com/rust-lang-nursery/lazy-static.rs",
"https://github.com/kayabaNerve/elliptic-curves",
"https://github.com/monero-oxide/monero-oxide",
"https://github.com/rust-bitcoin/rust-bip39",
"https://github.com/serai-dex/patch-polkadot-sdk",
]

View File

@@ -1 +1 @@
3.3.4
3.3.10

View File

@@ -1,4 +1,4 @@
source 'https://rubygems.org'
gem "jekyll", "~> 4.3.3"
gem "just-the-docs", "0.8.2"
gem "jekyll", "~> 4.4"
gem "just-the-docs", "0.10.1"

View File

@@ -1,34 +1,39 @@
GEM
remote: https://rubygems.org/
specs:
addressable (2.8.7)
public_suffix (>= 2.0.2, < 7.0)
bigdecimal (3.1.8)
addressable (2.8.8)
public_suffix (>= 2.0.2, < 8.0)
base64 (0.3.0)
bigdecimal (3.3.1)
colorator (1.1.0)
concurrent-ruby (1.3.4)
concurrent-ruby (1.3.5)
csv (3.3.5)
em-websocket (0.5.3)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0)
eventmachine (1.2.7)
ffi (1.17.0-x86_64-linux-gnu)
ffi (1.17.2-x86_64-linux-gnu)
forwardable-extended (2.6.0)
google-protobuf (4.28.2-x86_64-linux)
google-protobuf (4.33.1-x86_64-linux-gnu)
bigdecimal
rake (>= 13)
http_parser.rb (0.8.0)
i18n (1.14.6)
i18n (1.14.7)
concurrent-ruby (~> 1.0)
jekyll (4.3.4)
jekyll (4.4.1)
addressable (~> 2.4)
base64 (~> 0.2)
colorator (~> 1.0)
csv (~> 3.0)
em-websocket (~> 0.5)
i18n (~> 1.0)
jekyll-sass-converter (>= 2.0, < 4.0)
jekyll-watch (~> 2.0)
json (~> 2.6)
kramdown (~> 2.3, >= 2.3.1)
kramdown-parser-gfm (~> 1.0)
liquid (~> 4.0)
mercenary (>= 0.3.6, < 0.5)
mercenary (~> 0.3, >= 0.3.6)
pathutil (~> 0.9)
rouge (>= 3.0, < 5.0)
safe_yaml (~> 1.0)
@@ -36,19 +41,20 @@ GEM
webrick (~> 1.7)
jekyll-include-cache (0.2.1)
jekyll (>= 3.7, < 5.0)
jekyll-sass-converter (3.0.0)
sass-embedded (~> 1.54)
jekyll-sass-converter (3.1.0)
sass-embedded (~> 1.75)
jekyll-seo-tag (2.8.0)
jekyll (>= 3.8, < 5.0)
jekyll-watch (2.2.1)
listen (~> 3.0)
just-the-docs (0.8.2)
json (2.16.0)
just-the-docs (0.10.1)
jekyll (>= 3.8.5)
jekyll-include-cache
jekyll-seo-tag (>= 2.0)
rake (>= 12.3.1)
kramdown (2.4.0)
rexml
kramdown (2.5.1)
rexml (>= 3.3.9)
kramdown-parser-gfm (1.1.0)
kramdown (~> 2.0)
liquid (4.0.4)
@@ -58,27 +64,27 @@ GEM
mercenary (0.4.0)
pathutil (0.16.2)
forwardable-extended (~> 2.6)
public_suffix (6.0.1)
rake (13.2.1)
public_suffix (7.0.0)
rake (13.3.1)
rb-fsevent (0.11.2)
rb-inotify (0.11.1)
ffi (~> 1.0)
rexml (3.3.7)
rouge (4.4.0)
rexml (3.4.4)
rouge (4.6.1)
safe_yaml (1.0.5)
sass-embedded (1.79.3-x86_64-linux-gnu)
google-protobuf (~> 4.27)
sass-embedded (1.94.2-x86_64-linux-gnu)
google-protobuf (~> 4.31)
terminal-table (3.0.2)
unicode-display_width (>= 1.1.1, < 3)
unicode-display_width (2.6.0)
webrick (1.8.2)
webrick (1.9.2)
PLATFORMS
x86_64-linux
DEPENDENCIES
jekyll (~> 4.3.3)
just-the-docs (= 0.8.2)
jekyll (~> 4.4)
just-the-docs (= 0.10.1)
BUNDLED WITH
2.5.11
2.5.22

View File

@@ -0,0 +1,166 @@
#!/bin/bash
# Raises `PT_GNU_STACK`'s memory to be at least 8 MB.
#
# This causes `musl` to use an 8 MB default for new threads, resolving the primary
# compatibility issue faced when executing a program on a `musl` system.
#
# See https://wiki.musl-libc.org/functional-differences-from-glibc.html#Thread-stack-size
# for reference. This differs in that, instead of setting the value at link time, it
# patches the binary as an already-linked ELF executable.
set -eo pipefail
ELF="$1"
if [ ! -f "$ELF" ]; then
echo "\`increase_default_stack_size.sh\` [ELF binary]"
echo ""
echo "Sets the \`PT_GNU_STACK\` program header to its existing value or 8 MB,"
echo "whichever is greater."
exit 1
fi
function hex {
hexdump -e '1 1 "%.2x"' -v
}
function read_bytes {
dd status=none bs=1 skip=$1 count=$2 if="$ELF" | hex
}
function write_bytes {
POS=$1
BYTES=$2
while [ ! $BYTES = "" ]; do
printf "\x$(printf $BYTES | head -c2)" | dd status=none conv=notrunc bs=1 seek=$POS of="$ELF"
# Start with the third byte, as in, after the first two bytes
BYTES=$(printf $BYTES | tail -c+3)
POS=$(($POS + 1))
done
}
# Magic
MAGIC=$(read_bytes 0 4)
if [ ! $MAGIC = $(printf "\x7fELF" | hex) ]; then
echo "Not ELF"
exit 2
fi
# 1 if 32-bit, 2 if 64-bit
BITS=$(read_bytes 4 1)
case $BITS in
"01") BITS=32;;
"02") BITS=64;;
*)
echo "Not 32- or 64- bit"
exit 3
;;
esac
# For `value_per_bits a b`, `a` if 32-bit and `b` if 64-bit
function value_per_bits {
RESULT=$(($1))
if [ $BITS = 64 ]; then
RESULT=$(($2))
fi
printf $RESULT
}
# Read an integer by its offset, differing depending on if 32- or 64-bit
function read_integer_by_offset {
OFFSET=$(value_per_bits $1 $2)
printf $(( 0x$(swap_native_endian $(read_bytes $OFFSET $3)) ))
}
# 1 if little-endian, 2 if big-endian
LITTLE_ENDIAN=$(read_bytes 5 1)
case $LITTLE_ENDIAN in
"01") LITTLE_ENDIAN=1;;
"02") LITTLE_ENDIAN=0;;
*)
echo "Not little- or big- endian"
exit 4
;;
esac
# While this script's constants are written in big-endian, we need to work with the file in
# its declared endianness. This function swaps from big to native, or vice versa,
# as necessary.
function swap_native_endian {
BYTES="$1"
if [ "$BYTES" = "" ]; then
read BYTES
fi
if [ $LITTLE_ENDIAN -eq 0 ]; then
printf $BYTES
return
fi
while [ ! $BYTES = "" ]; do
printf $(printf $BYTES | tail -c2)
BYTES=$(printf $BYTES | head -c-2)
done
}
ELF_VERSION=$(read_bytes 6 1)
if [ ! $ELF_VERSION = "01" ]; then
echo "Unknown ELF Version ($ELF_VERSION)"
exit 5
fi
ELF_VERSION_2=$(read_bytes $((0x14)) 4)
if [ ! $ELF_VERSION_2 = $(swap_native_endian 00000001) ]; then
echo "Unknown secondary ELF Version ($ELF_VERSION_2)"
exit 6
fi
# Find where the program headers are
PROGRAM_HEADERS_OFFSET=$(read_integer_by_offset 0x1c 0x20 $(value_per_bits 4 8))
PROGRAM_HEADER_SIZE=$(value_per_bits 0x20 0x38)
DECLARED_PROGRAM_HEADER_SIZE=$(read_integer_by_offset 0x2a 0x36 2)
if [ ! $PROGRAM_HEADER_SIZE -eq $DECLARED_PROGRAM_HEADER_SIZE ]; then
echo "Unexpected size of a program header ($DECLARED_PROGRAM_HEADER_SIZE)"
exit 7
fi
function program_header_start {
printf $(($PROGRAM_HEADERS_OFFSET + ($1 * $PROGRAM_HEADER_SIZE)))
}
function read_program_header {
read_bytes $(program_header_start $1) $PROGRAM_HEADER_SIZE
}
# Iterate over each program header
PROGRAM_HEADERS=$(read_integer_by_offset 0x2c 0x38 2)
NEXT_PROGRAM_HEADER=$(( $PROGRAM_HEADERS - 1 ))
FOUND=0
while [ $NEXT_PROGRAM_HEADER -ne -1 ]; do
THIS_PROGRAM_HEADER=$NEXT_PROGRAM_HEADER
NEXT_PROGRAM_HEADER=$(( $NEXT_PROGRAM_HEADER - 1 ))
PROGRAM_HEADER=$(read_program_header $THIS_PROGRAM_HEADER)
HEADER_TYPE=$(printf $PROGRAM_HEADER | head -c8)
# `PT_GNU_STACK`
# https://github.com/torvalds/linux/blob/c2f2b01b74be8b40a2173372bcd770723f87e7b2/include/uapi/linux/elf.h#L41
if [ ! "$(swap_native_endian $HEADER_TYPE)" = "6474e551" ]; then
continue
fi
FOUND=1
MEMSZ_OFFSET=$(( $(program_header_start $THIS_PROGRAM_HEADER) + $(value_per_bits 0x14 0x28) ))
MEMSZ_LEN=$(value_per_bits 4 8)
# `MEMSZ_OFFSET MEMSZ_OFFSET` as we've already derived it depending on the amount of bits
MEMSZ=$(read_integer_by_offset $MEMSZ_OFFSET $MEMSZ_OFFSET $MEMSZ_LEN)
DESIRED_STACK_SIZE=$((8 * 1024 * 1024))
# Only run if the inherent value is _smaller_
if [ $MEMSZ -lt $DESIRED_STACK_SIZE ]; then
# `2 *`, as this is its length in hexadecimal
HEX_MEMSZ=$(printf %."$((2 * $MEMSZ_LEN))"x $DESIRED_STACK_SIZE)
write_bytes $MEMSZ_OFFSET $(swap_native_endian $HEX_MEMSZ)
fi
done
if [ $FOUND -eq 0 ]; then
echo "\`PT_GNU_STACK\` program header not found"
exit 8
fi
echo "All instances of \`PT_GNU_STACK\` patched to be at least 8 MB"
exit 0
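For context on the musl limitation this script works around: a Rust program can instead request a larger stack per thread at spawn time (a sketch only; the node binaries patched here, e.g. `monerod`, aren't Rust programs, hence patching the already-linked ELF):

use std::thread;

fn main() {
  // Explicitly request an 8 MiB stack, matching the value the script patches in,
  // rather than relying on musl's small default for new threads
  let handle = thread::Builder::new()
    .stack_size(8 * 1024 * 1024)
    .spawn(|| {
      // stack-heavy work goes here
    })
    .unwrap();
  handle.join().unwrap();
}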

View File

@@ -1,28 +1,8 @@
#check=skip=FromPlatformFlagConstDisallowed
# We want to explicitly set the platform to ensure a constant host environment
# rust:1.91.1-alpine as of November 11th, 2025 (GMT)
FROM --platform=linux/amd64 rust@sha256:700c0959b23445f69c82676b72caa97ca4359decd075dca55b13339df27dc4d3
# In order to compile the runtime, including the `proc-macro`s and build scripts, we need the
# required development libraries. These are traditionally provided by `musl-dev` which is not
# inherently included with this image (https://github.com/rust-lang/docker-rust/issues/68). While we
# could install it here, we'd be unable to pin the installed package by its hash as desired.
#
# Rust does have self-contained libraries, intended to be used when the desired development files
# are not otherwise available. These can be enabled with `link-self-contained=yes`. Unfortunately,
# this doesn't work here (https://github.com/rust-lang/rust/issues/149371).
#
# While we can't set `link-self-contained=yes`, we can install Rust's self-contained libraries onto
# our system so they're generally available.
RUN echo '#!/bin/sh' > libs.sh
RUN echo 'set -e' >> libs.sh
RUN echo 'SYSROOT=$(rustc --print sysroot)' >> libs.sh
RUN echo 'LIBS=$SYSROOT/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained' >> libs.sh
RUN echo 'ln -s $LIBS/Scrt1.o $LIBS/crti.o $LIBS/crtn.o /usr/lib' >> libs.sh
# We also need `libc.so` which is already present on the system, just not under that name
RUN echo 'ln -s /lib/libc.musl-x86_64.so.1 /usr/lib/libc.so' >> libs.sh
RUN /bin/sh ./libs.sh
# rust:1.91.1-alpine as of December 4th, 2025 (GMT)
FROM --platform=linux/amd64 rust@sha256:84f263251b0ada72c1913d82a824d47be15a607f3faf015d8bdae48db544cdf2 AS builder
# Add the WASM toolchain
RUN rustup target add wasm32v1-none
@@ -47,7 +27,16 @@ ADD AGPL-3.0 /serai
WORKDIR /serai
# Build the runtime
RUN cargo build --release -p serai-runtime
RUN cargo build --release -p serai-runtime --no-default-features
# Copy the runtime to the provided volume
CMD ["cp", "/serai/target/release/wbuild/serai-runtime/serai_runtime.wasm", "/volume/serai.wasm"]
# Copy the artifact to its own image which solely exists to further export it
FROM scratch
# Copy `busybox`, including the necessary shared libraries, from the builder for a functioning `cp`
COPY --from=builder /lib/ld-musl-x86_64.so.1 /lib/libc.musl-x86_64.so.1 /lib/
COPY --from=builder /bin/busybox /bin/
ENV LD_LIBRARY_PATH=/lib/
ENV PATH=/bin
# Copy the artifact itself
COPY --from=builder /serai/target/release/serai_runtime.wasm /serai.wasm
# By default, copy the artifact to `/volume`, presumably a provided volume
CMD ["busybox", "cp", "/serai.wasm", "/volume/serai.wasm"]

View File

@@ -29,7 +29,7 @@ RUN tar xzvf bitcoin-${BITCOIN_VERSION}-$(uname -m)-linux-gnu.tar.gz
RUN mv bitcoin-${BITCOIN_VERSION}/bin/bitcoind .
"#;
let setup = mimalloc(Os::Debian) + DOWNLOAD_BITCOIN;
let setup = mimalloc(Os::Alpine) + DOWNLOAD_BITCOIN;
let run_bitcoin = format!(
r#"
@@ -43,7 +43,7 @@ CMD ["/run.sh"]
network.label()
);
let run = os(Os::Debian, "", "bitcoin") + &run_bitcoin;
let run = os(Os::Alpine, "", "bitcoin") + &run_bitcoin;
let res = setup + &run;
let mut bitcoin_path = orchestration_path.to_path_buf();

View File

@@ -21,7 +21,7 @@ fn monero_internal(
};
#[rustfmt::skip]
let download_monero = format!(r#"
let mut download_monero = format!(r#"
FROM alpine:latest AS monero
RUN apk --no-cache add wget gnupg
@@ -41,6 +41,16 @@ RUN tar -xvjf monero-linux-{arch}-v{MONERO_VERSION}.tar.bz2 --strip-components=1
network.label(),
);
if os == Os::Alpine {
// Increase the default stack size, as Monero does heavily use its stack
download_monero += &format!(
r#"
ADD orchestration/increase_default_stack_size.sh .
RUN ./increase_default_stack_size.sh {monero_binary}
"#
);
}
let setup = mimalloc(os) + &download_monero;
let run_monero = format!(
@@ -69,13 +79,13 @@ CMD ["/run.sh"]
}
pub fn monero(orchestration_path: &Path, network: Network) {
monero_internal(network, Os::Debian, orchestration_path, "monero", "monerod", "18080 18081")
monero_internal(network, Os::Alpine, orchestration_path, "monero", "monerod", "18080 18081")
}
pub fn monero_wallet_rpc(orchestration_path: &Path) {
monero_internal(
Network::Dev,
Os::Debian,
Os::Alpine,
orchestration_path,
"monero-wallet-rpc",
"monero-wallet-rpc",

View File

@@ -1,10 +1,11 @@
[package]
name = "kvdb-rocksdb"
version = "0.20.99"
description = "Replacement for `kvdb-rocksdb` which removes the `jemalloc` feature"
name = "fastrlp"
version = "0.3.99"
description = "Patch to an empty crate"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/patches/kvdb-rocksdb"
repository = "https://github.com/serai-dex/serai/tree/develop/patches/ethereum/fastrlp-0.3"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
[package.metadata.docs.rs]
@@ -13,8 +14,6 @@ rustdoc-args = ["--cfg", "docsrs"]
[workspace]
[dependencies]
kvdb-rocksdb = { version = "0.21", default-features = false }
[features]
jemalloc = []
alloc = []
std = []

View File

@@ -0,0 +1,19 @@
[package]
name = "fastrlp"
version = "0.4.99"
description = "Patch to an empty crate"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/patches/ethereum/fastrlp-0.4"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[workspace]
[features]
alloc = []
std = []

View File

@@ -0,0 +1 @@
const _NEVER_COMPILED: [(); 0 - 1] = [(); 0 - 1];

View File

@@ -0,0 +1,18 @@
[package]
name = "primitive-types"
version = "0.12.99"
description = "Patch to an empty crate"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/patches/ethereum/primitive-types"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[workspace]
[features]
std = []

View File

@@ -0,0 +1 @@
const _NEVER_COMPILED: [(); 0 - 1] = [(); 0 - 1];

View File

@@ -0,0 +1,18 @@
[package]
name = "rlp"
version = "0.5.99"
description = "Patch to an empty crate"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/patches/ethereum/rlp"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[workspace]
[features]
std = []

View File

@@ -0,0 +1 @@
const _NEVER_COMPILED: [(); 0 - 1] = [(); 0 - 1];

View File

@@ -3,7 +3,7 @@ name = "secp256k1"
version = "0.30.99"
description = "Patch to an empty crate"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/patches/ethereum/secp256k1-30"
repository = "https://github.com/serai-dex/serai/tree/develop/patches/ethereum/secp256k1-0.30"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"

View File

@@ -0,0 +1 @@
const _NEVER_COMPILED: [(); 0 - 1] = [(); 0 - 1];

View File

@@ -1 +0,0 @@
pub use kvdb_rocksdb::*;

View File

@@ -15,7 +15,7 @@ rustdoc-args = ["--cfg", "docsrs"]
[workspace]
[dependencies]
bip39 = { git = "https://github.com/rust-bitcoin/rust-bip39", commit = "f735e2559f30049f6738d1bf68c69a0b7bd7b858", default-features = false }
bip39 = { version = "2.2.1", default-features = false }
[features]
default = ["bip39/default"]

View File

@@ -1,3 +1,3 @@
#![cfg_attr(not(feature = "std"), no_std)]
#![no_std]
pub use bip39::*;

View File

@@ -63,6 +63,11 @@ pub struct HeaderV1 {
pub consensus_commitment: [u8; 32],
}
impl HeaderV1 {
/// The size of a serialized V1 header.
pub const SIZE: usize = 8 + 32 + 8 + 32 + 32 + 32;
}
/// A header for a block.
#[derive(Clone, Copy, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum Header {
@@ -71,6 +76,9 @@ pub enum Header {
}
impl Header {
/// The size of a serialized header.
pub const SIZE: usize = 1 + HeaderV1::SIZE;
/// Get the hash of the header.
pub fn number(&self) -> u64 {
match self {
@@ -109,8 +117,8 @@ impl Header {
/// A block.
///
/// This does not guarantee consistency. The header's `transactions_root` may not match the
/// contained transactions.
/// This does not guarantee consistency nor validity. The header's `transactions_root` may not
/// match the contained transactions, among other ill effects.
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub struct Block {
/// The block's header.
@@ -119,6 +127,13 @@ pub struct Block {
pub transactions: Vec<Transaction>,
}
impl Block {
/// The size limit for a block.
///
/// This is not enforced upon deserialization. Be careful accordingly.
pub const SIZE_LIMIT: usize = 512 * 1024;
}
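A quick worked check of the constants introduced above (a sketch assuming `serai_abi`'s `HeaderV1`, `Header`, and `Block` are in scope; the values follow directly from the arithmetic in this diff):

const _: () = {
  assert!(HeaderV1::SIZE == 8 + 32 + 8 + 32 + 32 + 32); // 144 bytes
  assert!(Header::SIZE == 1 + HeaderV1::SIZE);          // 145 bytes, including the enum tag
  assert!(Block::SIZE_LIMIT == 512 * 1024);             // 524,288 bytes, i.e. 512 KiB
};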
#[cfg(feature = "substrate")]
mod substrate {
use core::fmt::Debug;
@@ -133,7 +148,7 @@ mod substrate {
use super::*;
// Add `serde` implementations which treat self as a `Vec<u8>`
// Add `serde` implementations which treat `self` as a `Vec<u8>`
impl sp_core::serde::Serialize for Transaction {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where

View File

@@ -79,41 +79,43 @@ impl Call {
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum Event {
/// Liquidity was added to a pool.
LiquidityAdded {
/// The account which added the liquidity.
origin: SeraiAddress,
/// The account which received the liquidity tokens.
LiquidityAddition {
/// The account which received the minted liquidity tokens.
recipient: SeraiAddress,
/// The pool liquidity was added to.
pool: ExternalCoin,
/// The amount of liquidity tokens which were minted.
liquidity_tokens_minted: Amount,
/// The amount of the coin which was added to the pool's liquidity.
external_coin_amount: Amount,
/// The liquidity tokens which were minted.
liquidity_tokens: ExternalBalance,
/// The amount of SRI which was added to the pool's liquidity.
sri_amount: Amount,
/// The amount of the coin which was added to the pool's liquidity.
external_coin_amount: Amount,
},
/// The specified liquidity tokens were transferred.
LiquidityTransfer {
/// The address transferred from.
from: SeraiAddress,
/// The address transferred to.
to: SeraiAddress,
/// The liquidity tokens transferred.
liquidity_tokens: ExternalBalance,
},
/// Liquidity was removed from a pool.
LiquidityRemoved {
LiquidityRemoval {
/// The account which removed the liquidity.
origin: SeraiAddress,
/// The pool liquidity was removed from.
pool: ExternalCoin,
/// The mount of liquidity tokens which were burnt.
liquidity_tokens_burnt: Amount,
/// The amount of the coin which was removed from the pool's liquidity.
external_coin_amount: Amount,
from: SeraiAddress,
/// The liquidity tokens which were burnt.
liquidity_tokens: ExternalBalance,
/// The amount of SRI which was removed from the pool's liquidity.
sri_amount: Amount,
/// The amount of the coin which was removed from the pool's liquidity.
external_coin_amount: Amount,
},
/// A swap through the liquidity pools occurred.
Swap {
/// The account which made the swap.
origin: SeraiAddress,
/// The recipient for the output of the swap.
recipient: SeraiAddress,
from: SeraiAddress,
/// The deltas incurred by the pools.
///
/// For a swap of sriABC to sriDEF, this would be

View File

@@ -1,23 +1,38 @@
use borsh::{BorshSerialize, BorshDeserialize};
use serai_primitives::{
crypto::Signature, address::SeraiAddress, balance::ExternalBalance, genesis::GenesisValues,
crypto::Signature, address::SeraiAddress, coin::ExternalCoin, balance::ExternalBalance,
genesis_liquidity::GenesisValues,
};
/// The address used to hold genesis liquidity for a pool.
pub fn address(coin: ExternalCoin) -> SeraiAddress {
SeraiAddress::system(borsh::to_vec(&(b"GenesisLiquidity", coin)).unwrap())
}
/// A call to the genesis liquidity.
#[derive(Clone, PartialEq, Eq, Debug, BorshSerialize, BorshDeserialize)]
pub enum Call {
/// Oraclize the value of non-Bitcoin external coins relative to Bitcoin.
/// Oraclize the values of the coins available on genesis, relative to BTC.
///
/// This will trigger the addition of the liquidity into the pools and their initialization.
oraclize_values {
/// The values of the non-Bitcoin external coins.
values: GenesisValues,
/// The signature by the genesis validators for these values.
signature: Signature,
},
/// Remove liquidity.
remove_liquidity {
/// Transfer genesis liquidity.
transfer_genesis_liquidity {
/// The account to transfer the liquidity to.
to: SeraiAddress,
/// The genesis liquidity to transfer.
genesis_liquidity: ExternalBalance,
},
/// Remove genesis liquidity.
remove_genesis_liquidity {
/// The genesis liquidity to remove.
balance: ExternalBalance,
genesis_liquidity: ExternalBalance,
},
}
@@ -25,7 +40,7 @@ impl Call {
pub(crate) fn is_signed(&self) -> bool {
match self {
Call::oraclize_values { .. } => false,
Call::remove_liquidity { .. } => true,
Call::transfer_genesis_liquidity { .. } | Call::remove_genesis_liquidity { .. } => true,
}
}
}
@@ -38,13 +53,22 @@ pub enum Event {
/// The recipient of the genesis liquidity.
recipient: SeraiAddress,
/// The coins added as genesis liquidity.
balance: ExternalBalance,
genesis_liquidity: ExternalBalance,
},
/// Genesis liquidity transferred.
GenesisLiquidityTransferred {
/// The address transferred from.
from: SeraiAddress,
/// The address transferred to.
to: SeraiAddress,
/// The genesis liquidity transferred.
genesis_liquidity: ExternalBalance,
},
/// Genesis liquidity removed.
GenesisLiquidityRemoved {
/// The account which removed the genesis liquidity.
origin: SeraiAddress,
from: SeraiAddress,
/// The amount of genesis liquidity removed.
balance: ExternalBalance,
genesis_liquidity: ExternalBalance,
},
}

View File

@@ -279,12 +279,13 @@ mod substrate {
/// The implicit context to verify transactions with.
fn implicit_context() -> ImplicitContext;
/// The size of the current block.
fn current_block_size(&self) -> usize;
/// If a block is present in the blockchain.
fn block_is_present_in_blockchain(&self, hash: &BlockHash) -> bool;
/// The time embedded into the current block.
///
/// Returns `None` if the time has yet to be set.
fn current_time(&self) -> Option<u64>;
fn current_time(&self) -> u64;
/// Get the next nonce for an account.
fn next_nonce(&self, signer: &SeraiAddress) -> u32;
/// If the signer can pay the SRI fee.
@@ -295,7 +296,7 @@ mod substrate {
) -> Result<(), TransactionValidityError>;
/// Begin execution of a transaction.
fn start_transaction(&self);
fn start_transaction(&self, len: usize);
/// Consume the next nonce for an account.
///
/// This MUST NOT be called if the next nonce is `u32::MAX`. The caller MAY panic in that case.
@@ -390,9 +391,14 @@ mod substrate {
impl<Context: TransactionContext> TransactionWithContext<Context> {
fn validate_except_fee<V: ValidateUnsigned<Call = Context::RuntimeCall>>(
&self,
len: usize,
source: TransactionSource,
mempool_priority_if_signed: u64,
) -> TransactionValidity {
if self.1.current_block_size().saturating_add(len) > crate::Block::SIZE_LIMIT {
Err(TransactionValidityError::Invalid(InvalidTransaction::ExhaustsResources))?;
}
match &self.0 {
Transaction::Unsigned { call } => {
let ValidTransaction { priority: _, requires, provides, longevity: _, propagate: _ } =
@@ -417,15 +423,10 @@ mod substrate {
Err(TransactionValidityError::Unknown(UnknownTransaction::CannotLookup))?;
}
if let Some(include_by) = *include_by {
if let Some(current_time) = self.1.current_time() {
if current_time >= u64::from(include_by) {
if self.1.current_time() >= u64::from(include_by) {
// Since this transaction has a time bound which has passed, error
Err(TransactionValidityError::Invalid(InvalidTransaction::Stale))?;
}
} else {
// Since this transaction has a time bound, yet we don't know the time, error
Err(TransactionValidityError::Invalid(InvalidTransaction::Stale))?;
}
}
{
@@ -471,7 +472,7 @@ mod substrate {
&self,
source: TransactionSource,
info: &DispatchInfo,
_len: usize,
len: usize,
) -> TransactionValidity {
let mempool_priority_if_signed = match &self.0 {
Transaction::Unsigned { .. } => {
@@ -493,19 +494,19 @@ mod substrate {
}
}
};
self.validate_except_fee::<V>(source, mempool_priority_if_signed)
self.validate_except_fee::<V>(len, source, mempool_priority_if_signed)
}
fn apply<V: ValidateUnsigned<Call = Context::RuntimeCall>>(
self,
_info: &DispatchInfo,
_len: usize,
len: usize,
) -> sp_runtime::ApplyExtrinsicResultWithInfo<PostDispatchInfo> {
// We use 0 for the mempool priority, as this is no longer in the mempool so it's irrelevant
self.validate_except_fee::<V>(TransactionSource::InBlock, 0)?;
self.validate_except_fee::<V>(len, TransactionSource::InBlock, 0)?;
// Start the transaction
self.1.start_transaction();
self.1.start_transaction(len);
let transaction_hash = self.0.hash();

View File

@@ -21,6 +21,8 @@ impl frame_system::Config for Test {
type AccountId = sp_core::sr25519::Public;
type Lookup = frame_support::sp_runtime::traits::IdentityLookup<Self::AccountId>;
type Block = frame_system::mocking::MockBlock<Test>;
type BlockLength = serai_core_pallet::Limits;
type BlockWeights = serai_core_pallet::Limits;
}
#[derive_impl(pallet_timestamp::config_preludes::TestDefaultConfig)]

View File

@@ -10,7 +10,7 @@ pub type CoinsEvent = serai_abi::coins::Event;
#[test]
fn mint() {
new_test_ext().execute_with(|| {
Core::start_transaction();
Core::start_transaction(0);
// minting u64::MAX should work
let coin = Coin::Serai;
@@ -51,7 +51,7 @@ fn mint() {
#[test]
fn burn_with_instruction() {
new_test_ext().execute_with(|| {
Core::start_transaction();
Core::start_transaction(0);
// mint some coin
let coin = Coin::External(ExternalCoin::Bitcoin);
@@ -106,7 +106,7 @@ fn burn_with_instruction() {
#[test]
fn transfer() {
new_test_ext().execute_with(|| {
Core::start_transaction();
Core::start_transaction(0);
// mint some coin
let coin = Coin::External(ExternalCoin::Bitcoin);

View File

@@ -8,6 +8,9 @@ extern crate alloc;
use frame_support::traits::{PreInherents, PostTransactions};
mod limits;
pub use limits::Limits;
mod iumt;
pub use iumt::*;
@@ -83,7 +86,8 @@ pub mod pallet {
#[pallet::config]
pub trait Config:
frame_system::Config<Hash: Into<[u8; 32]>> + pallet_timestamp::Config<Moment = u64>
frame_system::Config<Hash: Into<[u8; 32]>, BlockLength = Limits, BlockWeights = Limits>
+ pallet_timestamp::Config<Moment = u64>
{
}
@@ -120,7 +124,7 @@ pub mod pallet {
BlockTransactionsCommitmentMerkle::<T>::new_expecting_none();
BlockEventsCommitmentMerkle::<T>::new_expecting_none();
Self::start_transaction();
Self::start_transaction(0);
<_>::build(config);
Self::end_transaction([0; 32]);
@@ -130,7 +134,15 @@ pub mod pallet {
/// The code to run when beginning execution of a transaction.
///
/// The caller MUST ensure two transactions aren't simultaneously started.
pub fn start_transaction() {
pub fn start_transaction(len: usize) {
{
let existing_len = frame_system::AllExtrinsicsLen::<T>::get().unwrap_or(0);
let new_len = existing_len.saturating_add(u32::try_from(len).unwrap_or(u32::MAX));
// We panic here as this should've been caught earlier during validation
assert!(new_len <= u32::try_from(serai_abi::Block::SIZE_LIMIT).unwrap());
frame_system::AllExtrinsicsLen::<T>::set(Some(new_len));
}
TransactionEventsMerkle::<T>::new_expecting_none();
Self::deposit_event(Event::BeginTransaction);
}
@@ -192,7 +204,21 @@ impl<T: Config> PreInherents for StartOfBlock<T> {
BlockTransactionsCommitmentMerkle::<T>::new_expecting_none();
BlockEventsCommitmentMerkle::<T>::new_expecting_none();
Pallet::<T>::start_transaction();
/*
We assign the implicit transaction accompanying the block the length of the block itself: its
header's length and the length of the length-prefix for the list of transactions.
The length-prefix will be a little-endian `u32`, as `Block` will be borsh-serialized
(https://borsh.io).
The length of each actual transaction is expected to be accurate as the SCALE implementation
defers to the `borsh` serialization.
*/
assert!(
frame_system::AllExtrinsicsLen::<T>::get().is_none(),
"AllExtrinsicsLen wasn't killed at the end of the last block"
);
Pallet::<T>::start_transaction(serai_abi::Header::SIZE + 4);
// Handle the `SeraiPreExecutionDigest`
/*
@@ -220,7 +246,7 @@ impl<T: Config> PreInherents for StartOfBlock<T> {
pub struct EndOfBlock<T: Config>(PhantomData<T>);
impl<T: Config> PostTransactions for EndOfBlock<T> {
fn post_transactions() {
Pallet::<T>::start_transaction();
Pallet::<T>::start_transaction(0);
// Other modules' `PostTransactions`
@@ -229,6 +255,8 @@ impl<T: Config> PostTransactions for EndOfBlock<T> {
end_of_block_transaction_hash[.. 16].copy_from_slice(&[0xff; 16]);
Pallet::<T>::end_transaction(end_of_block_transaction_hash);
frame_system::AllExtrinsicsLen::<T>::kill();
use serai_abi::SeraiExecutionDigest;
frame_system::Pallet::<T>::deposit_log(
frame_support::sp_runtime::generic::DigestItem::Consensus(

View File

@@ -0,0 +1,34 @@
use sp_core::Get;
use frame_support::weights::Weight;
use frame_system::limits::{BlockLength, BlockWeights};
/// The limits for the Serai protocol.
pub struct Limits;
impl Get<BlockLength> for Limits {
fn get() -> BlockLength {
/*
We do not reserve an allocation for mandatory/operational transactions, assuming they'll be
prioritized in the mempool. This does technically give block producers an incentive to
misbehave by purposely favoring paying non-operational transactions over operational
transactions, but ensures the entire block is available to the transactions actually present
in the mempool.
*/
BlockLength::max(u32::try_from(serai_abi::Block::SIZE_LIMIT).unwrap())
}
}
impl Get<BlockWeights> for Limits {
fn get() -> BlockWeights {
/*
While Serai does limit the size of a block, every transaction is expected to operate in
complexity constant to the current state size, regardless of what the state is. Accordingly,
the most efficient set of transactions (basic transfers?) is expected to be within an order
of magnitude of the most expensive transactions (multi-pool swaps?).
Instead of engaging with the complexity within the consensus protocol of metering both
bandwidth and computation, we do not define limits for weights. We do, however, still use the
weight system in order to determine fee rates and ensure prioritization of
computationally-cheaper transactions. That solely serves as mempool policy, however.
*/
BlockWeights::simple_max(Weight::MAX)
}
}

View File

@@ -12,6 +12,9 @@ rust-version = "1.85"
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[package.metadata.cargo-machete]
ignored = ["scale"]
[lints]
workspace = true
@@ -23,8 +26,6 @@ sp-core = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-fea
frame-system = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
frame-support = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
substrate-median = { path = "../median", default-features = false }
serai-abi = { path = "../abi", default-features = false, features = ["substrate"] }
serai-core-pallet = { path = "../core", default-features = false }
serai-coins-pallet = { path = "../coins", default-features = false }
@@ -45,8 +46,6 @@ std = [
"frame-system/std",
"frame-support/std",
"substrate-median/std",
"serai-abi/std",
"serai-core-pallet/std",
"serai-coins-pallet/std",

View File

@@ -10,6 +10,8 @@ mod mock;
#[expect(clippy::cast_possible_truncation)]
#[frame_support::pallet]
mod pallet {
use alloc::vec::Vec;
use frame_system::pallet_prelude::*;
use frame_support::pallet_prelude::*;
@@ -18,7 +20,7 @@ mod pallet {
prelude::*,
dex::{Error as PrimitivesError, Reserves, Premise},
},
Event,
dex::Event,
};
use serai_core_pallet::Pallet as Core;
@@ -76,6 +78,10 @@ mod pallet {
}
}
/// The minimum amount of liquidity allowed to be initially added.
///
/// This should be sufficiently low that it isn't inaccessible, yet sufficiently high that future
/// additions can be reasonably granular when their share of the new supply is calculated.
const MINIMUM_LIQUIDITY: u64 = 1 << 16;
#[pallet::call]
@@ -99,10 +105,20 @@ mod pallet {
let (sri_actual, external_coin_actual, liquidity) = if supply == 0 {
let sri_actual = sri_intended;
let external_coin_actual = external_coin_intended;
let liquidity = Amount(
u64::try_from((u128::from(sri_actual.0) * u128::from(external_coin_actual.0)).isqrt())
.map_err(|_| Error::<T>::Overflow)?,
);
/*
The best way to explain this is to first consider how one would write shares of a
liquidity pool with only a single coin (however purposeless that may be). The immediate
suggestion would simply be to use the amount of the singular coin initially added as the
initial amount of shares, with further shares being distributed pro-rata as further
liquidity is added. This inherently has the amount of liquidity tokens approximate the
magnitude and scale of the underlying coin.
When we scale to the two-coin case, this methodology no longer immediately applies. The
solution here is to take the product, and then the square root, of the two values. This
provides a magnitude/scale of the liquidity tokens approximately in-between both coins.
*/
let liquidity = (u128::from(sri_actual.0) * u128::from(external_coin_actual.0)).isqrt();
let liquidity = Amount(u64::try_from(liquidity).map_err(|_| Error::<T>::Overflow)?);
if liquidity.0 < MINIMUM_LIQUIDITY {
Err(Error::<T>::InvalidLiquidity)?;
}
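As a worked illustration of the geometric-mean rule explained above (the numbers are arbitrary, chosen solely for this example):

fn main() {
  // Seeding a pool with 4_000_000 units of SRI and 250_000 units of the external coin:
  //   liquidity = isqrt(4_000_000 * 250_000) = isqrt(1_000_000_000_000) = 1_000_000
  // which sits between the magnitudes of the two deposits and clears
  // MINIMUM_LIQUIDITY = 1 << 16 = 65_536
  let liquidity = (4_000_000u128 * 250_000u128).isqrt();
  assert_eq!(liquidity, 1_000_000);
  assert!(liquidity >= u128::from(1u64 << 16));
}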
@@ -149,6 +165,10 @@ mod pallet {
Amount(sri_liquidity.min(external_coin_liquidity))
};
if liquidity == Amount(0) {
Err(Error::<T>::Unsatisfied)?;
}
(sri_actual, external_coin_actual, liquidity)
};
@@ -162,12 +182,15 @@ mod pallet {
pool.into(),
Balance { coin: Coin::from(external_coin), amount: external_coin_actual },
)?;
LiquidityTokens::<T>::mint(
from,
Balance { coin: Coin::from(external_coin), amount: liquidity },
)?;
let liquidity_tokens = ExternalBalance { coin: external_coin, amount: liquidity };
LiquidityTokens::<T>::mint(from, liquidity_tokens.into())?;
// TODO: Event
Self::emit_event(Event::LiquidityAddition {
recipient: from.into(),
liquidity_tokens,
sri_amount: sri_actual,
external_coin_amount: external_coin_actual,
});
Ok(())
}
@@ -183,7 +206,7 @@ mod pallet {
let from = ensure_signed(origin)?;
LiquidityTokens::<T>::transfer_fn(from, to.into(), liquidity_tokens.into())?;
// TODO: Event
Self::emit_event(Event::LiquidityTransfer { from: from.into(), to, liquidity_tokens });
Ok(())
}
@@ -234,7 +257,12 @@ mod pallet {
Balance { coin: Coin::from(external_coin), amount: external_coin_amount },
)?;
// TODO: Event
Self::emit_event(Event::LiquidityRemoval {
from: from.into(),
liquidity_tokens,
sri_amount,
external_coin_amount,
});
Ok(())
}
@@ -254,6 +282,7 @@ mod pallet {
let swaps = Premise::route(coins_to_swap.coin, minimum_to_receive.coin)
.ok_or(Error::<T>::FromToSelf)?;
let mut deltas = Vec::with_capacity(swaps.len() + 1);
for swap in &swaps {
let external_coin = swap.external_coin();
let pool = serai_abi::dex::address(external_coin);
@@ -273,11 +302,9 @@ mod pallet {
be credited both as part of the reserves _and_ the amount in if violated.
*/
assert!(transfer_from != pool, "swap routed from a coin to itself");
Coins::<T>::transfer_fn(
transfer_from.into(),
pool.into(),
Balance { coin: swap.r#in(), amount: next_amount },
)?;
let delta = Balance { coin: swap.r#in(), amount: next_amount };
Coins::<T>::transfer_fn(transfer_from.into(), pool.into(), delta)?;
deltas.push(delta);
// Update the current status
transfer_from = pool;
@@ -290,13 +317,11 @@ mod pallet {
}
// Transfer the resulting coins to the origin
Coins::<T>::transfer_fn(
transfer_from.into(),
origin.into(),
Balance { coin: minimum_to_receive.coin, amount: next_amount },
)?;
let delta = Balance { coin: minimum_to_receive.coin, amount: next_amount };
Coins::<T>::transfer_fn(transfer_from.into(), origin.into(), delta)?;
deltas.push(delta);
// TODO: Event
Self::emit_event(Event::Swap { from: origin, deltas });
Ok(())
}
@@ -316,6 +341,7 @@ mod pallet {
let swaps = Premise::route(maximum_to_swap.coin, coins_to_receive.coin)
.ok_or(Error::<T>::FromToSelf)?;
let mut deltas = Vec::with_capacity(swaps.len() + 1);
let mut i = swaps.len();
while {
i -= 1;
@@ -342,11 +368,9 @@ mod pallet {
excluded when determining the reserves on the next iteration.
*/
assert!(transfer_to != pool, "swap routed to a coin from itself");
Coins::<T>::transfer_fn(
pool.into(),
transfer_to.into(),
Balance { coin: swap.out(), amount: next_amount },
)?;
let delta = Balance { coin: swap.out(), amount: next_amount };
Coins::<T>::transfer_fn(pool.into(), transfer_to.into(), delta)?;
deltas.push(delta);
transfer_to = pool;
next_amount = swap.quote_for_out(reserves, next_amount).map_err(Error::<T>::from)?;
@@ -360,13 +384,12 @@ mod pallet {
}
// Transfer the necessary coins from the origin
Coins::<T>::transfer_fn(
origin.into(),
transfer_to.into(),
Balance { coin: maximum_to_swap.coin, amount: next_amount },
)?;
let delta = Balance { coin: maximum_to_swap.coin, amount: next_amount };
Coins::<T>::transfer_fn(origin.into(), transfer_to.into(), delta)?;
deltas.push(delta);
// TODO: Event
deltas.reverse();
Self::emit_event(Event::Swap { from: origin, deltas });
Ok(())
}

View File

@@ -25,6 +25,8 @@ impl frame_system::Config for Test {
type AccountId = sp_core::sr25519::Public;
type Lookup = frame_support::sp_runtime::traits::IdentityLookup<Self::AccountId>;
type Block = frame_system::mocking::MockBlock<Test>;
type BlockLength = serai_core_pallet::Limits;
type BlockWeights = serai_core_pallet::Limits;
}
#[derive_impl(pallet_timestamp::config_preludes::TestDefaultConfig)]

View File

@@ -1,10 +1,10 @@
[package]
name = "serai-genesis-liquidity-pallet"
version = "0.1.0"
description = "Genesis liquidity pallet for Serai"
description = "Genesis Liquidity pallet for Serai"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/substrate/genesis-liquidity"
authors = ["Akil Demir <akildemir72@gmail.com>"]
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
rust-version = "1.85"
@@ -21,40 +21,48 @@ workspace = true
[dependencies]
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
sp-core = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
frame-system = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
frame-support = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-std = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-core = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-application-crypto = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
dex-pallet = { package = "serai-dex-pallet", path = "../dex", default-features = false }
coins-pallet = { package = "serai-coins-pallet", path = "../coins", default-features = false }
validator-sets-pallet = { package = "serai-validator-sets-pallet", path = "../validator-sets", default-features = false }
economic-security-pallet = { package = "serai-economic-security-pallet", path = "../economic-security", default-features = false }
serai-primitives = { path = "../primitives", default-features = false }
serai-abi = { path = "../abi", default-features = false, features = ["substrate"] }
serai-core-pallet = { path = "../core", default-features = false }
serai-coins-pallet = { path = "../coins", default-features = false }
serai-dex-pallet = { path = "../dex", default-features = false }
[features]
std = [
"scale/std",
"sp-core/std",
"frame-system/std",
"frame-support/std",
"sp-std/std",
"sp-core/std",
"sp-application-crypto/std",
"coins-pallet/std",
"dex-pallet/std",
"validator-sets-pallet/std",
"economic-security-pallet/std",
"serai-primitives/std",
"serai-abi/std",
"serai-core-pallet/std",
"serai-coins-pallet/std",
"serai-dex-pallet/std",
]
try-runtime = [
"frame-system/try-runtime",
"frame-support/try-runtime",
"serai-abi/try-runtime",
"serai-core-pallet/try-runtime",
"serai-coins-pallet/try-runtime",
"serai-dex-pallet/try-runtime",
]
runtime-benchmarks = [
"frame-system/runtime-benchmarks",
"frame-support/runtime-benchmarks",
"serai-core-pallet/runtime-benchmarks",
"serai-coins-pallet/runtime-benchmarks",
"serai-dex-pallet/runtime-benchmarks",
]
try-runtime = [] # TODO
default = ["std"]

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license
Copyright (c) 2024-2025 Luke Parker
Copyright (c) 2023-2025 Luke Parker
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as

View File

@@ -0,0 +1,3 @@
# Genesis Liquidity Pallet
Pallet implementing the Serai protocol's genesis liquidity.

View File

@@ -1,464 +1,102 @@
#![cfg_attr(not(feature = "std"), no_std)]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
#![cfg_attr(not(any(feature = "std", test)), no_std)]
#[allow(
unreachable_patterns,
clippy::cast_possible_truncation,
clippy::no_effect_underscore_binding,
clippy::empty_docs
)]
extern crate alloc;
#[expect(clippy::cast_possible_truncation)]
#[frame_support::pallet]
pub mod pallet {
mod pallet {
use frame_system::pallet_prelude::*;
use frame_support::pallet_prelude::*;
use serai_abi::{
primitives::{prelude::*, crypto::Signature, genesis_liquidity::GenesisValues},
genesis_liquidity::Event,
};
use serai_core_pallet::Pallet as Core;
type Coins<T> = serai_coins_pallet::Pallet<T, serai_coins_pallet::CoinsInstance>;
type LiquidityTokens<T> =
serai_coins_pallet::Pallet<T, serai_coins_pallet::LiquidityTokensInstance>;
use super::*;
use frame_system::{pallet_prelude::*, RawOrigin};
use frame_support::{pallet_prelude::*, sp_runtime::SaturatedConversion};
use sp_std::{vec, vec::Vec};
use sp_core::sr25519::Signature;
use sp_application_crypto::RuntimePublic;
use dex_pallet::{Pallet as Dex, Config as DexConfig};
use coins_pallet::{Config as CoinsConfig, Pallet as Coins};
use validator_sets_pallet::{Config as VsConfig, Pallet as ValidatorSets};
use economic_security_pallet::{Config as EconomicSecurityConfig, Pallet as EconomicSecurity};
use serai_primitives::*;
use validator_sets_primitives::{ValidatorSet, musig_key};
pub use genesis_liquidity_primitives as primitives;
use primitives::*;
// TODO: Have a more robust way of accessing LiquidityTokens pallet.
/// LiquidityTokens Pallet as an instance of coins pallet.
pub type LiquidityTokens<T> = coins_pallet::Pallet<T, coins_pallet::Instance1>;
/// The configuration of this pallet.
#[pallet::config]
pub trait Config:
frame_system::Config
+ VsConfig
+ DexConfig
+ EconomicSecurityConfig
+ CoinsConfig
+ coins_pallet::Config<coins_pallet::Instance1>
+ serai_core_pallet::Config
+ serai_coins_pallet::Config<serai_coins_pallet::CoinsInstance>
+ serai_coins_pallet::Config<serai_coins_pallet::LiquidityTokensInstance>
+ serai_dex_pallet::Config
{
type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;
}
/// An error incurred.
#[pallet::error]
pub enum Error<T> {
GenesisPeriodEnded,
AmountOverflowed,
NotEnoughLiquidity,
CanOnlyRemoveFullAmount,
}
#[pallet::event]
#[pallet::generate_deposit(fn deposit_event)]
pub enum Event<T: Config> {
GenesisLiquidityAdded { by: SeraiAddress, balance: ExternalBalance },
GenesisLiquidityRemoved { by: SeraiAddress, balance: ExternalBalance },
GenesisLiquidityAddedToPool { coin: ExternalBalance, sri: Amount },
}
pub enum Error<T> {}
/// The Pallet struct.
#[pallet::pallet]
pub struct Pallet<T>(PhantomData<T>);
/// Keeps shares and the amount of coins per account.
#[pallet::storage]
#[pallet::getter(fn liquidity)]
pub(crate) type Liquidity<T: Config> = StorageDoubleMap<
_,
Identity,
ExternalCoin,
Blake2_128Concat,
PublicKey,
LiquidityAmount,
OptionQuery,
>;
/// Keeps the total shares and the total amount of coins per coin.
#[pallet::storage]
#[pallet::getter(fn supply)]
pub(crate) type Supply<T: Config> =
StorageMap<_, Identity, ExternalCoin, LiquidityAmount, OptionQuery>;
#[pallet::storage]
pub(crate) type Oracle<T: Config> = StorageMap<_, Identity, ExternalCoin, u64, OptionQuery>;
#[pallet::storage]
#[pallet::getter(fn genesis_complete_block)]
pub(crate) type GenesisCompleteBlock<T: Config> = StorageValue<_, u64, OptionQuery>;
#[pallet::hooks]
impl<T: Config> Hooks<BlockNumberFor<T>> for Pallet<T> {
fn on_initialize(n: BlockNumberFor<T>) -> Weight {
#[cfg(feature = "fast-epoch")]
let final_block = 10u64;
#[cfg(not(feature = "fast-epoch"))]
let final_block = MONTHS;
// Distribute the genesis sri to pools after a month
if (n.saturated_into::<u64>() >= final_block) &&
Self::oraclization_is_done() &&
GenesisCompleteBlock::<T>::get().is_none()
{
// mint the SRI
Coins::<T>::mint(
GENESIS_LIQUIDITY_ACCOUNT.into(),
Balance { coin: Coin::Serai, amount: Amount(GENESIS_SRI) },
)
.unwrap();
// get pool & total values
let mut pool_values = vec![];
let mut total_value: u128 = 0;
for coin in EXTERNAL_COINS {
// initial coin value in terms of btc
let Some(value) = Oracle::<T>::get(coin) else {
continue;
};
let pool_amount =
u128::from(Supply::<T>::get(coin).unwrap_or(LiquidityAmount::zero()).coins);
let pool_value = pool_amount
.checked_mul(value.into())
.unwrap()
.checked_div(10u128.pow(coin.decimals()))
.unwrap();
total_value = total_value.checked_add(pool_value).unwrap();
pool_values.push((coin, pool_amount, pool_value));
}
// add the liquidity per pool
let mut total_sri_distributed = 0;
let pool_values_len = pool_values.len();
for (i, (coin, pool_amount, pool_value)) in pool_values.into_iter().enumerate() {
// whatever sri left for the last coin should be ~= it's ratio
let sri_amount = if i == (pool_values_len - 1) {
GENESIS_SRI.checked_sub(total_sri_distributed).unwrap()
} else {
u64::try_from(
u128::from(GENESIS_SRI)
.checked_mul(pool_value)
.unwrap()
.checked_div(total_value)
.unwrap(),
)
.unwrap()
};
total_sri_distributed = total_sri_distributed.checked_add(sri_amount).unwrap();
// actually add the liquidity to dex
let origin = RawOrigin::Signed(GENESIS_LIQUIDITY_ACCOUNT.into());
let Ok(()) = Dex::<T>::add_liquidity(
origin.into(),
coin,
u64::try_from(pool_amount).unwrap(),
sri_amount,
u64::try_from(pool_amount).unwrap(),
sri_amount,
GENESIS_LIQUIDITY_ACCOUNT.into(),
) else {
continue;
};
// let everyone know about the event
Self::deposit_event(Event::GenesisLiquidityAddedToPool {
coin: ExternalBalance { coin, amount: Amount(u64::try_from(pool_amount).unwrap()) },
sri: Amount(sri_amount),
});
}
assert_eq!(total_sri_distributed, GENESIS_SRI);
// we shouldn't have any coins left in the genesis account at this point, including SRI.
// All of them were transferred to the pools.
for coin in COINS {
assert_eq!(Coins::<T>::balance(GENESIS_LIQUIDITY_ACCOUNT.into(), coin), Amount(0));
}
GenesisCompleteBlock::<T>::set(Some(n.saturated_into::<u64>()));
}
Weight::zero() // TODO
}
}
pub struct Pallet<T>(_);
impl<T: Config> Pallet<T> {
/// Add genesis liquidity for the given account. All accounts that provide liquidity
/// will receive the genesis SRI according to their liquidity ratio.
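/// For example (illustrative numbers only): if the existing supply for a coin is 100 shares
/// backing 100 coins, adding 50 more coins credits the caller with `100 * 50 / 100 = 50` new
/// shares, and the supply becomes 150 shares over 150 coins.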
pub fn add_coin_liquidity(account: PublicKey, balance: ExternalBalance) -> DispatchResult {
// check we are still in genesis period
if Self::genesis_ended() {
Err(Error::<T>::GenesisPeriodEnded)?;
}
// calculate new shares & supply
let (new_liquidity, new_supply) = if let Some(supply) = Supply::<T>::get(balance.coin) {
// calculate amount of shares for this amount
let shares = Self::mul_div(supply.shares, balance.amount.0, supply.coins)?;
// get new shares for this account
let existing =
Liquidity::<T>::get(balance.coin, account).unwrap_or(LiquidityAmount::zero());
(
LiquidityAmount {
shares: existing.shares.checked_add(shares).ok_or(Error::<T>::AmountOverflowed)?,
coins: existing
.coins
.checked_add(balance.amount.0)
.ok_or(Error::<T>::AmountOverflowed)?,
},
LiquidityAmount {
shares: supply.shares.checked_add(shares).ok_or(Error::<T>::AmountOverflowed)?,
coins: supply
.coins
.checked_add(balance.amount.0)
.ok_or(Error::<T>::AmountOverflowed)?,
},
)
} else {
let first_amount =
LiquidityAmount { shares: INITIAL_GENESIS_LP_SHARES, coins: balance.amount.0 };
(first_amount, first_amount)
};
// save
Liquidity::<T>::set(balance.coin, account, Some(new_liquidity));
Supply::<T>::set(balance.coin, Some(new_supply));
Self::deposit_event(Event::GenesisLiquidityAdded { by: account.into(), balance });
Ok(())
}
/// Returns the number of blocks since all networks reached economic security (i.e. since the
/// last network to do so reached it).
/// If any network has yet to reach that threshold, None is returned.
fn blocks_since_ec_security() -> Option<u64> {
let mut min = u64::MAX;
for n in EXTERNAL_NETWORKS {
let ec_security_block =
EconomicSecurity::<T>::economic_security_block(n)?.saturated_into::<u64>();
let current = <frame_system::Pallet<T>>::block_number().saturated_into::<u64>();
let diff = current.saturating_sub(ec_security_block);
min = diff.min(min);
}
Some(min)
}
fn genesis_ended() -> bool {
Self::oraclization_is_done() &&
<frame_system::Pallet<T>>::block_number().saturated_into::<u64>() >= MONTHS
}
fn oraclization_is_done() -> bool {
for c in EXTERNAL_COINS {
if Oracle::<T>::get(c).is_none() {
return false;
}
}
true
}
/// The minimum amount of liquidity allowed to be initially added.
///
/// This should be sufficiently low that it isn't inaccessible, yet sufficiently high that
/// future additions can be reasonably fine-grained when their share of the new supply is
/// calculated.
///
/// This constant is intentionally duplicated with `serai-dex-pallet`: while the two have the
/// same value, they are distinct constants and aren't required to be equivalent.
const MINIMUM_LIQUIDITY: u64 = 1 << 16;
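// Computes `(a * b) / c` (flooring) with the intermediate product widened to `u128` so it
// can't overflow `u64`; e.g. `mul_div(u64::MAX, 2, 4)` is fine even though `u64::MAX * 2`
// doesn't fit in a `u64`. Returns `AmountOverflowed` if `c` is zero or the result exceeds
// `u64::MAX`.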
fn mul_div(a: u64, b: u64, c: u64) -> Result<u64, Error<T>> {
let a = u128::from(a);
let b = u128::from(b);
let c = u128::from(c);
let result = a
.checked_mul(b)
.ok_or(Error::<T>::AmountOverflowed)?
.checked_div(c)
.ok_or(Error::<T>::AmountOverflowed)?;
result.try_into().map_err(|_| Error::<T>::AmountOverflowed)
}
}
impl<T: Config> Pallet<T> {
/// Add liquidity on behalf of the specified address.
pub fn add_liquidity(to: SeraiAddress, balance: ExternalBalance) -> Result<(), Error<T>> {
todo!("TODO")
}
}
#[pallet::call]
impl<T: Config> Pallet<T> {
/// Remove the provided genesis liquidity for an account.
#[pallet::call_index(0)]
#[pallet::weight((0, DispatchClass::Operational))] // TODO
pub fn remove_coin_liquidity(origin: OriginFor<T>, balance: ExternalBalance) -> DispatchResult {
let account = ensure_signed(origin)?;
let origin = RawOrigin::Signed(GENESIS_LIQUIDITY_ACCOUNT.into());
let supply = Supply::<T>::get(balance.coin).ok_or(Error::<T>::NotEnoughLiquidity)?;
// check whether the genesis period has ended
let (new_liquidity, new_supply) = if Self::genesis_ended() {
// see how many liquidity tokens we have
let total_liq_tokens =
LiquidityTokens::<T>::balance(GENESIS_LIQUIDITY_ACCOUNT.into(), Coin::Serai).0;
// get how much the user wants to remove
let LiquidityAmount { shares, coins } =
Liquidity::<T>::get(balance.coin, account).unwrap_or(LiquidityAmount::zero());
let total_shares = Supply::<T>::get(balance.coin).unwrap_or(LiquidityAmount::zero()).shares;
let user_liq_tokens = Self::mul_div(total_liq_tokens, shares, total_shares)?;
let amount_to_remove =
Self::mul_div(user_liq_tokens, balance.amount.0, INITIAL_GENESIS_LP_SHARES)?;
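// Illustrative example (made-up numbers): if the genesis account holds 1_000 liquidity
// tokens, the user holds 40 of the 100 total shares, and `balance.amount.0` equals
// INITIAL_GENESIS_LP_SHARES, then `user_liq_tokens` is 1_000 * 40 / 100 = 400 and the full
// 400 is removed; passing half of INITIAL_GENESIS_LP_SHARES would remove 200 instead.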
// remove liquidity from pool
let prev_sri = Coins::<T>::balance(GENESIS_LIQUIDITY_ACCOUNT.into(), Coin::Serai);
let prev_coin = Coins::<T>::balance(GENESIS_LIQUIDITY_ACCOUNT.into(), balance.coin.into());
Dex::<T>::remove_liquidity(
origin.clone().into(),
balance.coin,
amount_to_remove,
1,
1,
GENESIS_LIQUIDITY_ACCOUNT.into(),
)?;
let current_sri = Coins::<T>::balance(GENESIS_LIQUIDITY_ACCOUNT.into(), Coin::Serai);
let current_coin =
Coins::<T>::balance(GENESIS_LIQUIDITY_ACCOUNT.into(), balance.coin.into());
// burn the SRI if necessary
// TODO: take into consideration movement between pools.
let mut sri: u64 = current_sri.0.saturating_sub(prev_sri.0);
let distance_to_full_pay =
GENESIS_SRI_TRICKLE_FEED.saturating_sub(Self::blocks_since_ec_security().unwrap_or(0));
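// Illustrative example (made-up numbers): with GENESIS_SRI_TRICKLE_FEED = 1_000 blocks and
// 250 blocks elapsed since economic security, `distance_to_full_pay` is 750, so
// 750 / 1_000 = 75% of the withdrawn SRI is burnt and the remaining 25% is paid out. Once
// the full trickle-feed period has elapsed, `distance_to_full_pay` is 0 and nothing is burnt.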
let burn_sri_amount = u64::try_from(
u128::from(sri)
.checked_mul(u128::from(distance_to_full_pay))
.ok_or(Error::<T>::AmountOverflowed)?
.checked_div(u128::from(GENESIS_SRI_TRICKLE_FEED))
.ok_or(Error::<T>::AmountOverflowed)?,
)
.map_err(|_| Error::<T>::AmountOverflowed)?;
Coins::<T>::burn(
origin.clone().into(),
Balance { coin: Coin::Serai, amount: Amount(burn_sri_amount) },
)?;
sri = sri.checked_sub(burn_sri_amount).ok_or(Error::<T>::AmountOverflowed)?;
// transfer to owner
let coin_out = current_coin.0.saturating_sub(prev_coin.0);
Coins::<T>::transfer(
origin.clone().into(),
account,
Balance { coin: balance.coin.into(), amount: Amount(coin_out) },
)?;
Coins::<T>::transfer(
origin.into(),
account,
Balance { coin: Coin::Serai, amount: Amount(sri) },
)?;
// return new amounts
(
LiquidityAmount {
shares: shares.checked_sub(amount_to_remove).ok_or(Error::<T>::AmountOverflowed)?,
coins: coins.checked_sub(coin_out).ok_or(Error::<T>::AmountOverflowed)?,
},
LiquidityAmount {
shares: supply
.shares
.checked_sub(amount_to_remove)
.ok_or(Error::<T>::AmountOverflowed)?,
coins: supply.coins.checked_sub(coin_out).ok_or(Error::<T>::AmountOverflowed)?,
},
)
} else {
if balance.amount.0 != INITIAL_GENESIS_LP_SHARES {
Err(Error::<T>::CanOnlyRemoveFullAmount)?;
}
let existing =
Liquidity::<T>::get(balance.coin, account).ok_or(Error::<T>::NotEnoughLiquidity)?;
// transfer to the user
Coins::<T>::transfer(
origin.into(),
account,
Balance { coin: balance.coin.into(), amount: Amount(existing.coins) },
)?;
(
LiquidityAmount::zero(),
LiquidityAmount {
shares: supply
.shares
.checked_sub(existing.shares)
.ok_or(Error::<T>::AmountOverflowed)?,
coins: supply.coins.checked_sub(existing.coins).ok_or(Error::<T>::AmountOverflowed)?,
},
)
};
// save
if new_liquidity == LiquidityAmount::zero() {
Liquidity::<T>::set(balance.coin, account, None);
} else {
Liquidity::<T>::set(balance.coin, account, Some(new_liquidity));
}
Supply::<T>::set(balance.coin, Some(new_supply));
Self::deposit_event(Event::GenesisLiquidityRemoved { by: account.into(), balance });
Ok(())
}
/// A call to submit the initial coin values in terms of BTC.
///
/// This will trigger the addition of the liquidity into the pools and their initialization.
#[pallet::call_index(1)]
#[pallet::weight((0, DispatchClass::Operational))] // TODO
pub fn oraclize_values(
origin: OriginFor<T>,
values: Values,
_signature: Signature,
) -> DispatchResult {
ensure_none(origin)?;
// set their relative values
Oracle::<T>::set(ExternalCoin::Bitcoin, Some(10u64.pow(ExternalCoin::Bitcoin.decimals())));
Oracle::<T>::set(ExternalCoin::Monero, Some(values.monero));
Oracle::<T>::set(ExternalCoin::Ether, Some(values.ether));
Oracle::<T>::set(ExternalCoin::Dai, Some(values.dai));
Ok(())
}
}
#[pallet::validate_unsigned]
impl<T: Config> ValidateUnsigned for Pallet<T> {
type Call = Call<T>;
fn validate_unsigned(_: TransactionSource, call: &Self::Call) -> TransactionValidity {
match call {
Call::oraclize_values { ref values, ref signature } => {
let network = NetworkId::Serai;
let Some(session) = ValidatorSets::<T>::session(network) else {
return Err(TransactionValidityError::from(InvalidTransaction::Custom(0)));
};
let set = ValidatorSet { network, session };
let signers = ValidatorSets::<T>::participants_for_latest_decided_set(network)
.expect("no participant in the current set")
.into_iter()
.map(|(p, _)| p)
.collect::<Vec<_>>();
// check this didn't get called before
if Self::oraclization_is_done() {
Err(InvalidTransaction::Custom(1))?;
/// Transfer genesis liquidity.
#[pallet::call_index(1)]
#[pallet::weight((0, DispatchClass::Normal))] // TODO
pub fn transfer_genesis_liquidity(
origin: OriginFor<T>,
to: SeraiAddress,
genesis_liquidity: ExternalBalance,
) -> DispatchResult {
todo!("TODO")
}
// make sure the signers are setting the value at the end of the genesis period.
// we don't need this check for tests.
#[cfg(not(feature = "fast-epoch"))]
if <frame_system::Pallet<T>>::block_number().saturated_into::<u64>() < MONTHS {
Err(InvalidTransaction::Custom(2))?;
}
if !musig_key(set, &signers).verify(&oraclize_values_message(&set, values), signature) {
Err(InvalidTransaction::BadProof)?;
}
ValidTransaction::with_tag_prefix("GenesisLiquidity")
.and_provides((0, set))
.longevity(u64::MAX)
.propagate(true)
.build()
}
Call::remove_coin_liquidity { .. } => Err(InvalidTransaction::Call)?,
Call::__Ignore(_, _) => unreachable!(),
}
/// Remove genesis liquidity.
#[pallet::call_index(2)]
#[pallet::weight((0, DispatchClass::Normal))] // TODO
pub fn remove_genesis_liquidity(
origin: OriginFor<T>,
genesis_liquidity: ExternalBalance,
) -> DispatchResult {
todo!("TODO")
}
}
}

View File

@@ -3,7 +3,7 @@ name = "serai-in-instructions-pallet"
version = "0.1.0"
description = "Execute calls via In Instructions from unsigned transactions"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/substrate/genesis-liquidity"
repository = "https://github.com/serai-dex/serai/tree/develop/substrate/in-instructions"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
publish = false
@@ -20,67 +20,60 @@ ignored = ["scale"]
workspace = true
[dependencies]
bitvec = { version = "1", default-features = false, features = ["alloc"] }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive", "max-encoded-len"] }
sp-std = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-application-crypto = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-io = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-runtime = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-core = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
frame-system = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
frame-support = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
serai-primitives = { path = "../primitives", default-features = false }
coins-pallet = { package = "serai-coins-pallet", path = "../coins", default-features = false }
dex-pallet = { package = "serai-dex-pallet", path = "../dex", default-features = false }
validator-sets-pallet = { package = "serai-validator-sets-pallet", path = "../validator-sets", default-features = false }
genesis-liquidity-pallet = { package = "serai-genesis-liquidity-pallet", path = "../genesis-liquidity", default-features = false }
emissions-pallet = { package = "serai-emissions-pallet", path = "../emissions", default-features = false }
[dev-dependencies]
pallet-babe = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
pallet-grandpa = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
pallet-timestamp = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
economic-security-pallet = { package = "serai-economic-security-pallet", path = "../economic-security", default-features = false }
bitvec = { version = "1", default-features = false, features = ["alloc"] }
serai-abi = { path = "../abi", default-features = false, features = ["substrate"] }
serai-core-pallet = { path = "../core", default-features = false }
serai-coins-pallet = { path = "../coins", default-features = false }
serai-validator-sets-pallet = { path = "../validator-sets", default-features = false }
serai-dex-pallet = { path = "../dex", default-features = false }
serai-genesis-liquidity-pallet = { path = "../genesis-liquidity", default-features = false }
[features]
std = [
"scale/std",
"sp-std/std",
"sp-application-crypto/std",
"sp-io/std",
"sp-runtime/std",
"sp-core/std",
"frame-system/std",
"frame-support/std",
"serai-primitives/std",
"coins-pallet/std",
"dex-pallet/std",
"validator-sets-pallet/std",
"genesis-liquidity-pallet/std",
"emissions-pallet/std",
"economic-security-pallet/std",
"pallet-babe/std",
"pallet-grandpa/std",
"pallet-timestamp/std",
"bitvec/std",
"serai-abi/std",
"serai-core-pallet/std",
"serai-coins-pallet/std",
"serai-validator-sets-pallet/std",
"serai-dex-pallet/std",
"serai-genesis-liquidity-pallet/std",
]
try-runtime = [
"frame-system/try-runtime",
"frame-support/try-runtime",
"sp-runtime/try-runtime",
"serai-abi/try-runtime",
"serai-core-pallet/try-runtime",
"serai-coins-pallet/try-runtime",
"serai-validator-sets-pallet/try-runtime",
"serai-dex-pallet/try-runtime",
"serai-genesis-liquidity-pallet/try-runtime",
]
runtime-benchmarks = [
"frame-system/runtime-benchmarks",
"frame-support/runtime-benchmarks",
"serai-core-pallet/runtime-benchmarks",
"serai-coins-pallet/runtime-benchmarks",
"serai-validator-sets-pallet/runtime-benchmarks",
"serai-dex-pallet/runtime-benchmarks",
"serai-genesis-liquidity-pallet/runtime-benchmarks",
]
default = ["std"]

View File

@@ -0,0 +1 @@
# Serai In-Instructions Pallet

View File

@@ -1,387 +1,58 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(not(feature = "std"), no_std)]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
#![cfg_attr(not(any(feature = "std", test)), no_std)]
use sp_io::hashing::blake2_256;
extern crate alloc;
use serai_primitives::*;
pub use in_instructions_primitives as primitives;
use primitives::*;
#[cfg(test)]
mod mock;
#[cfg(test)]
mod tests;
// TODO: Investigate why Substrate generates these
#[allow(
unreachable_patterns,
clippy::cast_possible_truncation,
clippy::no_effect_underscore_binding,
clippy::empty_docs
)]
#[expect(clippy::cast_possible_truncation)]
#[frame_support::pallet]
pub mod pallet {
use sp_std::vec;
use sp_application_crypto::RuntimePublic;
use sp_runtime::traits::Zero;
use sp_core::sr25519::Public;
mod pallet {
use frame_system::pallet_prelude::*;
use frame_support::pallet_prelude::*;
use frame_system::{pallet_prelude::*, RawOrigin};
use coins_pallet::{
Config as CoinsConfig, Pallet as Coins,
primitives::{OutInstruction, OutInstructionWithBalance},
};
use dex_pallet::{Config as DexConfig, Pallet as Dex};
use validator_sets_pallet::{
primitives::{Session, ValidatorSet, ExternalValidatorSet},
Config as ValidatorSetsConfig, Pallet as ValidatorSets,
};
use serai_abi::{primitives::prelude::*, in_instructions::Event};
use genesis_liquidity_pallet::{
Pallet as GenesisLiq, Config as GenesisLiqConfig, primitives::GENESIS_LIQUIDITY_ACCOUNT,
};
use emissions_pallet::{Pallet as Emissions, Config as EmissionsConfig, primitives::POL_ACCOUNT};
use serai_core_pallet::Pallet as Core;
type Coins<T> = serai_coins_pallet::Pallet<T, serai_coins_pallet::CoinsInstance>;
type LiquidityTokens<T> =
serai_coins_pallet::Pallet<T, serai_coins_pallet::LiquidityTokensInstance>;
use super::*;
/// The configuration of this pallet.
#[pallet::config]
pub trait Config:
frame_system::Config
+ CoinsConfig
+ DexConfig
+ ValidatorSetsConfig
+ GenesisLiqConfig
+ EmissionsConfig
+ serai_core_pallet::Config
+ serai_coins_pallet::Config<serai_coins_pallet::CoinsInstance>
+ serai_validator_sets_pallet::Config
+ serai_coins_pallet::Config<serai_coins_pallet::LiquidityTokensInstance>
+ serai_dex_pallet::Config
+ serai_genesis_liquidity_pallet::Config
{
type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;
}
#[pallet::event]
#[pallet::generate_deposit(fn deposit_event)]
pub enum Event<T: Config> {
Batch {
network: ExternalNetworkId,
publishing_session: Session,
id: u32,
external_network_block_hash: BlockHash,
in_instructions_hash: [u8; 32],
in_instruction_results: bitvec::vec::BitVec<u8, bitvec::order::Lsb0>,
},
Halt {
network: ExternalNetworkId,
},
}
/// An error incurred.
#[pallet::error]
pub enum Error<T> {
/// Coin and OutAddress types don't match.
InvalidAddressForCoin,
}
pub enum Error<T> {}
/// The Pallet struct.
#[pallet::pallet]
pub struct Pallet<T>(PhantomData<T>);
// The ID of the last executed Batch for a network.
#[pallet::storage]
#[pallet::getter(fn batches)]
pub(crate) type LastBatch<T: Config> =
StorageMap<_, Identity, ExternalNetworkId, u32, OptionQuery>;
// The last Serai block in which this validator set included a batch
#[pallet::storage]
#[pallet::getter(fn last_batch_block)]
pub(crate) type LastBatchBlock<T: Config> =
StorageMap<_, Identity, ExternalNetworkId, BlockNumberFor<T>, OptionQuery>;
// Halted networks.
#[pallet::storage]
pub(crate) type Halted<T: Config> = StorageMap<_, Identity, ExternalNetworkId, (), OptionQuery>;
pub struct Pallet<T>(_);
impl<T: Config> Pallet<T> {
// Use a dedicated transaction layer when executing this InInstruction
// This lets it individually error without causing any storage modifications
#[frame_support::transactional]
fn execute(instruction: &InInstructionWithBalance) -> Result<(), DispatchError> {
match &instruction.instruction {
InInstruction::Transfer(address) => {
Coins::<T>::mint((*address).into(), instruction.balance.into())?;
fn emit_event(event: Event) {
Core::<T>::emit_event(event)
}
InInstruction::Dex(call) => {
// This will only be initiated by external chain transactions. That is why we only need
// add-liquidity and swaps here. Other functionality (such as remove_liquidity) will be
// called directly from Serai with a native transaction.
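// Illustrative flow (made-up numbers): receiving 2 BTC mints 2 BTC to the executor, swaps
// 1 BTC (half) for SRI, adds the remaining 1 BTC plus the SRI received as liquidity for
// `address`, and finally returns any leftover BTC/SRI dust to `address`.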
match call {
DexCall::SwapAndAddLiquidity(address) => {
let origin = RawOrigin::Signed(IN_INSTRUCTION_EXECUTOR.into());
let address = *address;
let coin = instruction.balance.coin;
// mint the given coin on the account
Coins::<T>::mint(IN_INSTRUCTION_EXECUTOR.into(), instruction.balance.into())?;
// swap half of it for SRI
let half = instruction.balance.amount.0 / 2;
let path = BoundedVec::try_from(vec![coin.into(), Coin::Serai]).unwrap();
Dex::<T>::swap_exact_tokens_for_tokens(
origin.clone().into(),
path,
half,
1, // minimum out, so we accept whatever we get.
IN_INSTRUCTION_EXECUTOR.into(),
)?;
// get how much we got for our swap
let sri_amount = Coins::<T>::balance(IN_INSTRUCTION_EXECUTOR.into(), Coin::Serai).0;
// add liquidity
Dex::<T>::add_liquidity(
origin.clone().into(),
coin,
half,
sri_amount,
1,
1,
address.into(),
)?;
// TODO: minimums are set to 1 above to guarantee the add liquidity call succeeds.
// Ideally we'd either get this info from the user or send the leftovers back to them.
// For now, send the leftovers back to the user.
let coin_balance = Coins::<T>::balance(IN_INSTRUCTION_EXECUTOR.into(), coin.into());
let sri_balance = Coins::<T>::balance(IN_INSTRUCTION_EXECUTOR.into(), Coin::Serai);
if coin_balance != Amount(0) {
Coins::<T>::transfer_internal(
IN_INSTRUCTION_EXECUTOR.into(),
address.into(),
Balance { coin: coin.into(), amount: coin_balance },
)?;
}
if sri_balance != Amount(0) {
Coins::<T>::transfer_internal(
IN_INSTRUCTION_EXECUTOR.into(),
address.into(),
Balance { coin: Coin::Serai, amount: sri_balance },
)?;
}
}
DexCall::Swap(out_balance, out_address) => {
let send_to_external = !out_address.is_native();
let native_coin = out_balance.coin.is_native();
// we can't send native coin to external chain
if native_coin && send_to_external {
Err(Error::<T>::InvalidAddressForCoin)?;
}
// mint the given coin on our account
Coins::<T>::mint(IN_INSTRUCTION_EXECUTOR.into(), instruction.balance.into())?;
// get the path
let mut path = vec![instruction.balance.coin.into(), Coin::Serai];
if !native_coin {
path.push(out_balance.coin);
}
// get the swap address
// if the address is internal, we can directly swap to it. if not, we swap to
// ourselves and burn the coins to send them back on the external chain.
let send_to = if send_to_external {
IN_INSTRUCTION_EXECUTOR
} else {
out_address.clone().as_native().unwrap()
};
// do the swap
let origin = RawOrigin::Signed(IN_INSTRUCTION_EXECUTOR.into());
Dex::<T>::swap_exact_tokens_for_tokens(
origin.clone().into(),
BoundedVec::try_from(path).unwrap(),
instruction.balance.amount.0,
out_balance.amount.0,
send_to.into(),
)?;
// burn the received coins so that they are sent back to the user
// if an external address was requested.
if send_to_external {
// see how much we got
let coin_balance =
Coins::<T>::balance(IN_INSTRUCTION_EXECUTOR.into(), out_balance.coin);
let instruction = OutInstructionWithBalance {
instruction: OutInstruction {
address: out_address.clone().as_external().unwrap(),
},
balance: ExternalBalance {
coin: out_balance.coin.try_into().unwrap(),
amount: coin_balance,
},
};
Coins::<T>::burn_with_instruction(origin.into(), instruction)?;
}
}
}
}
InInstruction::GenesisLiquidity(address) => {
Coins::<T>::mint(GENESIS_LIQUIDITY_ACCOUNT.into(), instruction.balance.into())?;
GenesisLiq::<T>::add_coin_liquidity((*address).into(), instruction.balance)?;
}
InInstruction::SwapToStakedSRI(address, network) => {
Coins::<T>::mint(POL_ACCOUNT.into(), instruction.balance.into())?;
Emissions::<T>::swap_to_staked_sri((*address).into(), *network, instruction.balance)?;
}
}
Ok(())
}
pub fn halt(network: ExternalNetworkId) -> Result<(), DispatchError> {
Halted::<T>::set(network, Some(()));
Self::deposit_event(Event::Halt { network });
Ok(())
}
}
fn keys_for_network<T: Config>(
network: ExternalNetworkId,
) -> Result<(Session, Option<Public>, Option<Public>), InvalidTransaction> {
// If there's no session set, and therefore no keys set, then this must be an invalid signature
let Some(session) = ValidatorSets::<T>::session(NetworkId::from(network)) else {
Err(InvalidTransaction::BadProof)?
};
let mut set = ExternalValidatorSet { network, session };
let latest = ValidatorSets::<T>::keys(set).map(|keys| keys.0);
let prior = if set.session.0 != 0 {
set.session.0 -= 1;
ValidatorSets::<T>::keys(set).map(|keys| keys.0)
} else {
None
};
if prior.is_none() && latest.is_none() {
Err(InvalidTransaction::BadProof)?;
}
Ok((session, prior, latest))
}
#[pallet::call]
impl<T: Config> Pallet<T> {
/// Execute a batch of `InInstruction`s.
#[pallet::call_index(0)]
#[pallet::weight((0, DispatchClass::Operational))] // TODO
pub fn execute_batch(origin: OriginFor<T>, _batch: SignedBatch) -> DispatchResult {
ensure_none(origin)?;
// The entire Batch execution is handled in pre_dispatch
Ok(())
}
}
#[pallet::validate_unsigned]
impl<T: Config> ValidateUnsigned for Pallet<T> {
type Call = Call<T>;
fn validate_unsigned(_: TransactionSource, call: &Self::Call) -> TransactionValidity {
// Match to be exhaustive
let batch = match call {
Call::execute_batch { ref batch } => batch,
Call::__Ignore(_, _) => unreachable!(),
};
// verify the batch size
// TODO: Merge this encode with the one done by batch_message
if batch.batch.encode().len() > MAX_BATCH_SIZE {
Err(InvalidTransaction::ExhaustsResources)?;
}
let network = batch.batch.network;
// verify the signature
let (current_session, prior, current) = keys_for_network::<T>(network)?;
let prior_session = Session(current_session.0 - 1);
let batch_message = batch_message(&batch.batch);
// Check the prior key first, since only a single `Batch` (the last one) will occur where
// prior is Some yet prior wasn't the signing key
let valid_by_prior =
if let Some(key) = prior { key.verify(&batch_message, &batch.signature) } else { false };
let valid = valid_by_prior ||
(if let Some(key) = current {
key.verify(&batch_message, &batch.signature)
} else {
false
});
if !valid {
Err(InvalidTransaction::BadProof)?;
}
let batch = &batch.batch;
if Halted::<T>::contains_key(network) {
Err(InvalidTransaction::Custom(1))?;
}
// If it wasn't valid by the prior key, meaning it was valid by the current key, the current
// key is publishing `Batch`s. This should only happen once the current key has verified all
// `Batch`s published by the prior key, meaning they are accepting the hand-over.
if prior.is_some() && (!valid_by_prior) {
ValidatorSets::<T>::retire_set(ValidatorSet {
network: network.into(),
session: prior_session,
});
}
// check that this validator set isn't publishing a batch more than once per block
let current_block = <frame_system::Pallet<T>>::block_number();
let last_block = LastBatchBlock::<T>::get(network).unwrap_or(Zero::zero());
if last_block >= current_block {
Err(InvalidTransaction::Future)?;
}
LastBatchBlock::<T>::insert(batch.network, frame_system::Pallet::<T>::block_number());
// Verify the batch is sequential
// LastBatch has the last ID set. The next ID should be it + 1
// If there's no ID, the next ID should be 0
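// e.g. if LastBatch is Some(4), only id == 5 is accepted: 4 or lower is Stale, 6 or higher
// is Future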
let expected = LastBatch::<T>::get(network).map_or(0, |prev| prev + 1);
if batch.id < expected {
Err(InvalidTransaction::Stale)?;
}
if batch.id > expected {
Err(InvalidTransaction::Future)?;
}
LastBatch::<T>::insert(batch.network, batch.id);
let in_instructions_hash = blake2_256(&batch.instructions.encode());
let mut in_instruction_results = bitvec::vec::BitVec::new();
for instruction in &batch.instructions {
// Verify this coin is for this network
if instruction.balance.coin.network() != batch.network {
Err(InvalidTransaction::Custom(2))?;
}
in_instruction_results.push(Self::execute(instruction).is_ok());
}
Self::deposit_event(Event::Batch {
network: batch.network,
publishing_session: if valid_by_prior { prior_session } else { current_session },
id: batch.id,
external_network_block_hash: batch.external_network_block_hash,
in_instructions_hash,
in_instruction_results,
});
ValidTransaction::with_tag_prefix("in-instructions")
.and_provides((batch.network, batch.id))
// Set a 10 block longevity, though this should be included in the next block
.longevity(10)
.propagate(true)
.build()
}
// Explicitly provide a pre-dispatch which calls validate_unsigned
fn pre_dispatch(call: &Self::Call) -> Result<(), TransactionValidityError> {
Self::validate_unsigned(TransactionSource::InBlock, call).map(|_| ())
#[pallet::weight((0, DispatchClass::Normal))] // TODO
pub fn execute_batch(origin: OriginFor<T>, batch: SignedBatch) -> DispatchResult {
todo!("TODO")
}
}
}

View File

@@ -1,209 +0,0 @@
//! Test environment for InInstructions pallet.
use super::*;
use std::collections::HashMap;
use frame_support::{
construct_runtime,
traits::{ConstU16, ConstU32, ConstU64},
};
use sp_core::{H256, Pair, sr25519::Public};
use sp_runtime::{
traits::{BlakeTwo256, IdentityLookup},
BuildStorage,
};
use validator_sets::{primitives::MAX_KEY_SHARES_PER_SET_U32, MembershipProof};
pub use crate as in_instructions;
pub use coins_pallet as coins;
pub use validator_sets_pallet as validator_sets;
pub use genesis_liquidity_pallet as genesis_liquidity;
pub use emissions_pallet as emissions;
pub use dex_pallet as dex;
pub use pallet_babe as babe;
pub use pallet_grandpa as grandpa;
pub use pallet_timestamp as timestamp;
pub use economic_security_pallet as economic_security;
type Block = frame_system::mocking::MockBlock<Test>;
// Maximum number of authorities per session.
pub type MaxAuthorities = ConstU32<{ MAX_KEY_SHARES_PER_SET_U32 }>;
pub const MEDIAN_PRICE_WINDOW_LENGTH: u16 = 10;
construct_runtime!(
pub enum Test
{
System: frame_system,
Timestamp: timestamp,
Coins: coins,
LiquidityTokens: coins::<Instance1>::{Pallet, Call, Storage, Event<T>},
Emissions: emissions,
ValidatorSets: validator_sets,
GenesisLiquidity: genesis_liquidity,
EconomicSecurity: economic_security,
Dex: dex,
Babe: babe,
Grandpa: grandpa,
InInstructions: in_instructions,
}
);
impl frame_system::Config for Test {
type BaseCallFilter = frame_support::traits::Everything;
type BlockWeights = ();
type BlockLength = ();
type RuntimeOrigin = RuntimeOrigin;
type RuntimeCall = RuntimeCall;
type Nonce = u64;
type Hash = H256;
type Hashing = BlakeTwo256;
type AccountId = Public;
type Lookup = IdentityLookup<Self::AccountId>;
type Block = Block;
type RuntimeEvent = RuntimeEvent;
type BlockHashCount = ConstU64<250>;
type DbWeight = ();
type Version = ();
type PalletInfo = PalletInfo;
type AccountData = ();
type OnNewAccount = ();
type OnKilledAccount = ();
type SystemWeightInfo = ();
type SS58Prefix = ();
type OnSetCode = ();
type MaxConsumers = ConstU32<16>;
}
impl timestamp::Config for Test {
type Moment = u64;
type OnTimestampSet = Babe;
type MinimumPeriod = ConstU64<{ (TARGET_BLOCK_TIME * 1000) / 2 }>;
type WeightInfo = ();
}
impl babe::Config for Test {
type EpochDuration = ConstU64<{ FAST_EPOCH_DURATION }>;
type ExpectedBlockTime = ConstU64<{ TARGET_BLOCK_TIME * 1000 }>;
type EpochChangeTrigger = babe::ExternalTrigger;
type DisabledValidators = ValidatorSets;
type WeightInfo = ();
type MaxAuthorities = MaxAuthorities;
type KeyOwnerProof = MembershipProof<Self>;
type EquivocationReportSystem = ();
}
impl grandpa::Config for Test {
type RuntimeEvent = RuntimeEvent;
type WeightInfo = ();
type MaxAuthorities = MaxAuthorities;
type MaxSetIdSessionEntries = ConstU64<0>;
type KeyOwnerProof = MembershipProof<Self>;
type EquivocationReportSystem = ();
}
impl coins::Config for Test {
type RuntimeEvent = RuntimeEvent;
type AllowMint = ValidatorSets;
}
impl coins::Config<coins::Instance1> for Test {
type RuntimeEvent = RuntimeEvent;
type AllowMint = ();
}
impl dex::Config for Test {
type RuntimeEvent = RuntimeEvent;
type LPFee = ConstU32<3>; // 0.3%
type MintMinLiquidity = ConstU64<10000>;
type MaxSwapPathLength = ConstU32<3>; // coin1 -> SRI -> coin2
type MedianPriceWindowLength = ConstU16<{ MEDIAN_PRICE_WINDOW_LENGTH }>;
type WeightInfo = dex::weights::SubstrateWeight<Test>;
}
impl validator_sets::Config for Test {
type RuntimeEvent = RuntimeEvent;
type ShouldEndSession = Babe;
}
impl genesis_liquidity::Config for Test {
type RuntimeEvent = RuntimeEvent;
}
impl emissions::Config for Test {
type RuntimeEvent = RuntimeEvent;
}
impl economic_security::Config for Test {
type RuntimeEvent = RuntimeEvent;
}
impl Config for Test {
type RuntimeEvent = RuntimeEvent;
}
// Amounts for single key share per network
pub fn key_shares() -> HashMap<NetworkId, Amount> {
HashMap::from([
(NetworkId::Serai, Amount(50_000 * 10_u64.pow(8))),
(NetworkId::External(ExternalNetworkId::Bitcoin), Amount(1_000_000 * 10_u64.pow(8))),
(NetworkId::External(ExternalNetworkId::Ethereum), Amount(1_000_000 * 10_u64.pow(8))),
(NetworkId::External(ExternalNetworkId::Monero), Amount(100_000 * 10_u64.pow(8))),
])
}
pub(crate) fn new_test_ext() -> sp_io::TestExternalities {
let mut t = frame_system::GenesisConfig::<Test>::default().build_storage().unwrap();
let networks: Vec<(NetworkId, Amount)> = key_shares().into_iter().collect::<Vec<_>>();
let accounts: Vec<Public> = vec![
insecure_pair_from_name("Alice").public(),
insecure_pair_from_name("Bob").public(),
insecure_pair_from_name("Charlie").public(),
insecure_pair_from_name("Dave").public(),
insecure_pair_from_name("Eve").public(),
insecure_pair_from_name("Ferdie").public(),
];
let validators = accounts.clone();
coins::GenesisConfig::<Test> {
accounts: accounts
.into_iter()
.map(|a| (a, Balance { coin: Coin::Serai, amount: Amount(1 << 60) }))
.collect(),
_ignore: Default::default(),
}
.assimilate_storage(&mut t)
.unwrap();
#[expect(unused_variables, unreachable_code, clippy::diverging_sub_expression)]
validator_sets::GenesisConfig::<Test> {
networks: networks.clone(),
participants: validators
.clone()
.into_iter()
.map(|p| {
let keys: validator_sets_pallet::AllEmbeddedEllipticCurveKeysAtGenesis = todo!("TODO");
(p, keys)
})
.collect(),
}
.assimilate_storage(&mut t)
.unwrap();
let mut ext = sp_io::TestExternalities::new(t);
ext.execute_with(|| System::set_block_number(0));
ext
}

View File

@@ -1,507 +0,0 @@
use super::*;
use crate::mock::*;
use emissions_pallet::primitives::POL_ACCOUNT;
use genesis_liquidity_pallet::primitives::INITIAL_GENESIS_LP_SHARES;
use scale::Encode;
use frame_support::{pallet_prelude::InvalidTransaction, traits::OnFinalize};
use frame_system::RawOrigin;
use sp_core::{sr25519::Public, Pair};
use sp_runtime::{traits::ValidateUnsigned, transaction_validity::TransactionSource};
use validator_sets::{Pallet as ValidatorSets, primitives::KeyPair};
use coins::primitives::{OutInstruction, OutInstructionWithBalance};
use genesis_liquidity::primitives::GENESIS_LIQUIDITY_ACCOUNT;
fn set_keys_for_session(key: Public) {
for n in EXTERNAL_NETWORKS {
ValidatorSets::<Test>::set_keys(
RawOrigin::None.into(),
n,
KeyPair(key, vec![].try_into().unwrap()),
vec![].try_into().unwrap(),
Signature([0u8; 64]),
)
.unwrap();
}
}
#[expect(dead_code)]
fn get_events() -> Vec<Event<Test>> {
let events = System::events()
.iter()
.filter_map(|event| {
if let RuntimeEvent::InInstructions(e) = &event.event {
Some(e.clone())
} else {
None
}
})
.collect::<Vec<_>>();
System::reset_events();
events
}
fn make_liquid_pool(coin: ExternalCoin, amount: u64) {
// mint coins so that we can add liquidity
let account = insecure_pair_from_name("make-pool-account").public();
Coins::mint(account, ExternalBalance { coin, amount: Amount(amount) }.into()).unwrap();
Coins::mint(account, Balance { coin: Coin::Serai, amount: Amount(amount) }).unwrap();
// make some liquid pool
Dex::add_liquidity(RawOrigin::Signed(account).into(), coin, amount, amount, 1, 1, account)
.unwrap();
}
#[test]
fn validate_batch() {
new_test_ext().execute_with(|| {
let pair = insecure_pair_from_name("Alice");
set_keys_for_session(pair.public());
let mut batch_size = 0;
let mut batch = Batch {
network: ExternalNetworkId::Monero,
id: 1,
external_network_block_hash: BlockHash([0u8; 32]),
instructions: vec![],
};
// batch size bigger than MAX_BATCH_SIZE should fail
while batch_size <= MAX_BATCH_SIZE + 1000 {
batch.instructions.push(InInstructionWithBalance {
instruction: InInstruction::Transfer(SeraiAddress::new([0u8; 32])),
balance: ExternalBalance { coin: ExternalCoin::Monero, amount: Amount(1) },
});
batch_size = batch.encode().len();
}
let call = pallet::Call::<Test>::execute_batch {
batch: SignedBatch { batch: batch.clone(), signature: Signature([0u8; 64]) },
};
assert_eq!(
InInstructions::validate_unsigned(TransactionSource::External, &call),
InvalidTransaction::ExhaustsResources.into()
);
// reduce the batch size to the allowed size
while batch_size > MAX_BATCH_SIZE {
batch.instructions.pop();
batch_size = batch.encode().len();
}
// 0 signature should be invalid
let call = pallet::Call::<Test>::execute_batch {
batch: SignedBatch { batch: batch.clone(), signature: Signature([0u8; 64]) },
};
assert_eq!(
InInstructions::validate_unsigned(TransactionSource::External, &call),
InvalidTransaction::BadProof.into()
);
// submit a valid signature
let signature = pair.sign(&batch_message(&batch));
// network shouldn't be halted
InInstructions::halt(ExternalNetworkId::Monero).unwrap();
let call = pallet::Call::<Test>::execute_batch {
batch: SignedBatch { batch: batch.clone(), signature },
};
assert_eq!(
InInstructions::validate_unsigned(TransactionSource::External, &call),
InvalidTransaction::Custom(1).into() // network halted error
);
// submit from an un-halted network
batch.network = ExternalNetworkId::Bitcoin;
let signature = pair.sign(&batch_message(&batch));
// can't submit in the first block (block 0)
let call = pallet::Call::<Test>::execute_batch {
batch: SignedBatch { batch: batch.clone(), signature: signature.clone() },
};
assert_eq!(
InInstructions::validate_unsigned(TransactionSource::External, &call),
InvalidTransaction::Future.into()
);
// update block number
System::set_block_number(1);
// first batch id should be 0
let call = pallet::Call::<Test>::execute_batch {
batch: SignedBatch { batch: batch.clone(), signature: signature.clone() },
};
assert_eq!(
InInstructions::validate_unsigned(TransactionSource::External, &call),
InvalidTransaction::Future.into()
);
// update batch id
batch.id = 0;
let signature = pair.sign(&batch_message(&batch));
// can't have more than 1 batch per block
let call = pallet::Call::<Test>::execute_batch {
batch: SignedBatch { batch: batch.clone(), signature: signature.clone() },
};
assert_eq!(
InInstructions::validate_unsigned(TransactionSource::External, &call),
InvalidTransaction::Future.into()
);
// update block number
System::set_block_number(2);
// network and the instruction coins should match
let call = pallet::Call::<Test>::execute_batch {
batch: SignedBatch { batch: batch.clone(), signature },
};
assert_eq!(
InInstructions::validate_unsigned(TransactionSource::External, &call),
InvalidTransaction::Custom(2).into() // network and instruction coins don't match error
);
// update block number & batch
System::set_block_number(3);
for ins in &mut batch.instructions {
ins.balance.coin = ExternalCoin::Bitcoin;
}
let signature = pair.sign(&batch_message(&batch));
// batch id can't be equal to or less than the previous id
let call = pallet::Call::<Test>::execute_batch {
batch: SignedBatch { batch: batch.clone(), signature },
};
assert_eq!(
InInstructions::validate_unsigned(TransactionSource::External, &call),
InvalidTransaction::Stale.into()
);
// update block number & batch
System::set_block_number(4);
batch.id += 2;
let signature = pair.sign(&batch_message(&batch));
// batch id can't be incremented by more than one per batch
let call = pallet::Call::<Test>::execute_batch {
batch: SignedBatch { batch: batch.clone(), signature },
};
assert_eq!(
InInstructions::validate_unsigned(TransactionSource::External, &call),
InvalidTransaction::Future.into()
);
// update block number & batch
System::set_block_number(5);
batch.id = (batch.id - 2) + 1;
let signature = pair.sign(&batch_message(&batch));
// it should now pass
let call = pallet::Call::<Test>::execute_batch {
batch: SignedBatch { batch: batch.clone(), signature },
};
InInstructions::validate_unsigned(TransactionSource::External, &call).unwrap();
});
}
#[test]
fn transfer_instruction() {
new_test_ext().execute_with(|| {
let coin = ExternalCoin::Bitcoin;
let amount = Amount(2 * 10u64.pow(coin.decimals()));
let account = insecure_pair_from_name("random1").public();
let batch = SignedBatch {
batch: Batch {
network: coin.network(),
id: 0,
external_network_block_hash: BlockHash([0u8; 32]),
instructions: vec![InInstructionWithBalance {
instruction: InInstruction::Transfer(account.into()),
balance: ExternalBalance { coin, amount },
}],
},
signature: Signature([0u8; 64]),
};
InInstructions::execute_batch(RawOrigin::None.into(), batch).unwrap();
// check that account has the coins
assert_eq!(Coins::balance(account, coin.into()), amount);
})
}
#[test]
fn dex_instruction_add_liquidity() {
new_test_ext().execute_with(|| {
let coin = ExternalCoin::Ether;
let amount = Amount(2 * 10u64.pow(coin.decimals()));
let account = insecure_pair_from_name("random1").public();
let batch = SignedBatch {
batch: Batch {
network: coin.network(),
id: 0,
external_network_block_hash: BlockHash([0u8; 32]),
instructions: vec![InInstructionWithBalance {
instruction: InInstruction::Dex(DexCall::SwapAndAddLiquidity(account.into())),
balance: ExternalBalance { coin, amount },
}],
},
signature: Signature([0u8; 64]),
};
// we should have a liquid pool before we can swap
InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap();
// check that the instruction failed
/* TODO
assert_eq!(
get_events()
.into_iter()
.filter(|event| matches!(event, in_instructions::Event::<Test>::InstructionFailure { .. }))
.collect::<Vec<_>>(),
vec![in_instructions::Event::<Test>::InstructionFailure {
network: batch.batch.network,
id: batch.batch.id,
index: 0
}]
);
*/
let original_coin_amount = 5 * 10u64.pow(coin.decimals());
make_liquid_pool(coin, original_coin_amount);
// this should now be successful
InInstructions::execute_batch(RawOrigin::None.into(), batch).unwrap();
// check that the instruction was successful
/* TODO
assert_eq!(
get_events()
.into_iter()
.filter(|event| matches!(event, in_instructions::Event::<Test>::InstructionFailure { .. }))
.collect::<Vec<_>>(),
vec![]
);
*/
// check that we now have an Ether pool with the correct liquidity
// we can't know the actual SRI amount since we don't know the result of the swap.
// Moreover, knowing exactly how much isn't the responsibility of the InInstructions pallet,
// it is the responsibility of the Dex pallet.
let (coin_amount, _serai_amount) = Dex::get_reserves(&coin.into(), &Coin::Serai).unwrap();
assert_eq!(coin_amount, original_coin_amount + amount.0);
// assert that the account got the liquidity tokens; again, we don't know how much and
// it isn't this pallet's responsibility.
assert!(LiquidityTokens::balance(account, coin.into()).0 > 0);
// check that the in-instructions executor account doesn't have the coins
assert_eq!(Coins::balance(IN_INSTRUCTION_EXECUTOR.into(), coin.into()), Amount(0));
assert_eq!(Coins::balance(IN_INSTRUCTION_EXECUTOR.into(), Coin::Serai), Amount(0));
})
}
#[test]
fn dex_instruction_swap() {
new_test_ext().execute_with(|| {
let coin = ExternalCoin::Bitcoin;
let amount = Amount(2 * 10u64.pow(coin.decimals()));
let account = insecure_pair_from_name("random1").public();
// make a pool so that we can actually swap
make_liquid_pool(coin, 5 * 10u64.pow(coin.decimals()));
let mut batch = SignedBatch {
batch: Batch {
network: coin.network(),
id: 0,
external_network_block_hash: BlockHash([0u8; 32]),
instructions: vec![InInstructionWithBalance {
instruction: InInstruction::Dex(DexCall::Swap(
Balance { coin: Coin::Serai, amount: Amount(1) },
OutAddress::External(ExternalAddress::new([0u8; 64].to_vec()).unwrap()),
)),
balance: ExternalBalance { coin, amount },
}],
},
signature: Signature([0u8; 64]),
};
// we can't send SRI to an external address
InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap();
// check that the instruction failed
/* TODO
assert_eq!(
get_events()
.into_iter()
.filter(|event| matches!(event, in_instructions::Event::<Test>::InstructionFailure { .. }))
.collect::<Vec<_>>(),
vec![in_instructions::Event::<Test>::InstructionFailure {
network: batch.batch.network,
id: batch.batch.id,
index: 0
}]
);
*/
// make it internal address
batch.batch.instructions[0].instruction = InInstruction::Dex(DexCall::Swap(
Balance { coin: Coin::Serai, amount: Amount(1) },
OutAddress::Serai(account.into()),
));
// check that swap is successful this time
assert_eq!(Coins::balance(account, Coin::Serai), Amount(0));
InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap();
assert!(Coins::balance(account, Coin::Serai).0 > 0);
// make another pool for external coin
let coin2 = ExternalCoin::Monero;
make_liquid_pool(coin2, 5 * 10u64.pow(coin.decimals()));
// update the batch
let out_addr = ExternalAddress::new([0u8; 64].to_vec()).unwrap();
batch.batch.instructions[0].instruction = InInstruction::Dex(DexCall::Swap(
Balance { coin: ExternalCoin::Monero.into(), amount: Amount(1) },
OutAddress::External(out_addr.clone()),
));
InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap();
// check that we got out instruction
let events = System::events()
.iter()
.filter_map(|event| {
if let RuntimeEvent::Coins(e) = &event.event {
if matches!(e, coins::Event::<Test>::BurnWithInstruction { .. }) {
Some(e.clone())
} else {
None
}
} else {
None
}
})
.collect::<Vec<_>>();
assert_eq!(
events,
vec![coins::Event::<Test>::BurnWithInstruction {
from: IN_INSTRUCTION_EXECUTOR.into(),
instruction: OutInstructionWithBalance {
instruction: OutInstruction { address: out_addr },
balance: ExternalBalance { coin: coin2, amount: Amount(68228493) }
}
}]
)
})
}
#[test]
fn genesis_liquidity_instruction() {
new_test_ext().execute_with(|| {
let coin = ExternalCoin::Bitcoin;
let amount = Amount(2 * 10u64.pow(coin.decimals()));
let account = insecure_pair_from_name("random1").public();
let batch = SignedBatch {
batch: Batch {
network: coin.network(),
id: 0,
external_network_block_hash: BlockHash([0u8; 32]),
instructions: vec![InInstructionWithBalance {
instruction: InInstruction::GenesisLiquidity(account.into()),
balance: ExternalBalance { coin, amount },
}],
},
signature: Signature([0u8; 64]),
};
InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap();
// check that genesis liq account got the coins
assert_eq!(Coins::balance(GENESIS_LIQUIDITY_ACCOUNT.into(), coin.into()), amount);
// check that it registered the liquidity for the account
// detailed tests about the amounts have to be done in the GenesisLiquidity pallet tests.
let liquidity_amount = GenesisLiquidity::liquidity(coin, account).unwrap();
assert_eq!(liquidity_amount.coins, amount.0);
assert_eq!(liquidity_amount.shares, INITIAL_GENESIS_LP_SHARES);
let supply = GenesisLiquidity::supply(coin).unwrap();
assert_eq!(supply.coins, amount.0);
assert_eq!(supply.shares, INITIAL_GENESIS_LP_SHARES);
})
}
#[test]
fn swap_to_staked_sri_instruction() {
new_test_ext().execute_with(|| {
let coin = ExternalCoin::Monero;
let key_share =
ValidatorSets::<Test>::allocation_per_key_share(NetworkId::from(coin.network())).unwrap();
let amount = Amount(2 * key_share.0);
let account = insecure_pair_from_name("random1").public();
// make a pool so that we can actually swap
make_liquid_pool(coin, 5 * 10u64.pow(coin.decimals()));
// set the keys to set the TAS for the network
ValidatorSets::<Test>::set_keys(
RawOrigin::None.into(),
coin.network(),
KeyPair(insecure_pair_from_name("random-key").public(), Vec::new().try_into().unwrap()),
Vec::new().try_into().unwrap(),
Signature([0u8; 64]),
)
.unwrap();
// make sure the account doesn't already have liquidity tokens or an allocation
let current_liq_tokens = LiquidityTokens::balance(POL_ACCOUNT.into(), coin.into()).0;
assert_eq!(current_liq_tokens, 0);
assert_eq!(ValidatorSets::<Test>::allocation((NetworkId::from(coin.network()), account)), None);
// we need this so that a value for the coin exists
Dex::on_finalize(0);
System::set_block_number(1); // we need this for the spot price
let batch = SignedBatch {
batch: Batch {
network: coin.network(),
id: 0,
external_network_block_hash: BlockHash([0u8; 32]),
instructions: vec![InInstructionWithBalance {
instruction: InInstruction::SwapToStakedSRI(account.into(), coin.network().into()),
balance: ExternalBalance { coin, amount },
}],
},
signature: Signature([0u8; 64]),
};
InInstructions::execute_batch(RawOrigin::None.into(), batch.clone()).unwrap();
// assert that we added liquidity from the POL account
assert!(LiquidityTokens::balance(POL_ACCOUNT.into(), coin.into()).0 > current_liq_tokens);
// assert that the user allocated SRI for the network
let value = Dex::spot_price_for_block(0, coin).unwrap();
let sri_amount = Amount(
u64::try_from(
u128::from(amount.0)
.checked_mul(u128::from(value.0))
.unwrap()
.checked_div(u128::from(10u64.pow(coin.decimals())))
.unwrap(),
)
.unwrap(),
);
assert_eq!(
ValidatorSets::<Test>::allocation((NetworkId::from(coin.network()), account)).unwrap(),
sri_amount
);
})
}

View File

@@ -74,7 +74,7 @@ fn wasm_binary(dev: bool) -> Vec<u8> {
}
log::info!("using built-in wasm");
serai_runtime::WASM_BINARY.ok_or("compiled in wasm not available").unwrap().to_vec()
serai_runtime::WASM.to_vec()
}
fn devnet_genesis(validators: &[&'static str], endowed_accounts: Vec<Public>) -> GenesisConfig {

View File

@@ -6,7 +6,7 @@ use sp_timestamp::InherentDataProvider as TimestampInherent;
use sp_consensus_babe::{SlotDuration, inherents::InherentDataProvider as BabeInherent};
use sp_io::SubstrateHostFunctions;
use sc_executor::{sp_wasm_interface::ExtendedHostFunctions, WasmExecutor};
use sc_executor::{sp_wasm_interface::ExtendedHostFunctions, HeapAllocStrategy, WasmExecutor};
use sc_network::{Event, NetworkEventStream, NetworkBackend};
use sc_service::{error::Error as ServiceError, Configuration, TaskManager, TFullClient};
@@ -99,14 +99,13 @@ pub fn new_partial(
})
.transpose()?;
#[allow(deprecated)]
let executor = Executor::new(
config.executor.wasm_method,
config.executor.default_heap_pages,
config.executor.max_runtime_instances,
None,
config.executor.runtime_cache_size,
);
let executor = Executor::builder()
.with_execution_method(config.executor.wasm_method)
.with_onchain_heap_alloc_strategy(HeapAllocStrategy::Dynamic { maximum_pages: None })
.with_offchain_heap_alloc_strategy(HeapAllocStrategy::Dynamic { maximum_pages: None })
.with_max_runtime_instances(config.executor.max_runtime_instances)
.with_runtime_cache_size(config.executor.runtime_cache_size)
.build();
let (client, backend, keystore_container, task_manager) = {
let telemetry = telemetry.as_ref().map(|(_, telemetry)| telemetry.handle());

View File

@@ -7,6 +7,10 @@ use crate::balance::Amount;
/// The value of non-Bitcoin external coins present at genesis, relative to Bitcoin.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize, BorshSerialize, BorshDeserialize)]
#[cfg_attr(
feature = "non_canonical_scale_derivations",
derive(scale::Encode, scale::Decode, scale::MaxEncodedLen, scale::DecodeWithMemTracking)
)]
pub struct GenesisValues {
/// The value of Ether, relative to Bitcoin.
pub ether: Amount,

View File

@@ -171,3 +171,26 @@ impl Zeroize for SignedBatch {
self.signature.0.as_mut().zeroize();
}
}
#[cfg(feature = "non_canonical_scale_derivations")]
impl scale::Encode for SignedBatch {
fn using_encoded<R, F: FnOnce(&[u8]) -> R>(&self, f: F) -> R {
f(&borsh::to_vec(self).unwrap())
}
}
#[cfg(feature = "non_canonical_scale_derivations")]
impl scale::MaxEncodedLen for SignedBatch {
fn max_encoded_len() -> usize {
Batch::MAX_SIZE + 64
}
}
#[cfg(feature = "non_canonical_scale_derivations")]
impl scale::EncodeLike<SignedBatch> for SignedBatch {}
#[cfg(feature = "non_canonical_scale_derivations")]
impl scale::Decode for SignedBatch {
fn decode<I: scale::Input>(input: &mut I) -> Result<Self, scale::Error> {
crate::read_scale_as_borsh(input)
}
}
#[cfg(feature = "non_canonical_scale_derivations")]
impl scale::DecodeWithMemTracking for SignedBatch {}
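// With this feature enabled, the SCALE encoding of a `SignedBatch` is (non-canonically)
// defined as its borsh encoding: `scale::Encode::encode(&signed_batch)` should yield the
// same bytes as `borsh::to_vec(&signed_batch)`, and `scale::Decode` reads them back via
// `read_scale_as_borsh`.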

View File

@@ -29,8 +29,8 @@ pub mod coin;
/// The `Amount`, `ExternalBalance`, and `Balance` types.
pub mod balance;
/// Types for genesis.
pub mod genesis;
/// Types for the genesis liquidity functionality.
pub mod genesis_liquidity;
/// Types for identifying networks and their properties.
pub mod network_id;

View File

@@ -17,11 +17,31 @@ ignored = ["scale"]
[lints]
workspace = true
[dependencies]
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
[target.'cfg(not(target_family = "wasm"))'.dependencies]
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"], optional = true }
sp-version = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false, optional = true }
sp-runtime = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false, optional = true }
sp-api = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false, optional = true }
sp-transaction-pool = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false, optional = true }
sp-inherents = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false, optional = true }
sp-block-builder = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false, optional = true }
sp-consensus-babe = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false, optional = true }
sp-consensus-grandpa = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false, optional = true }
sp-authority-discovery = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false, optional = true }
serai-abi = { path = "../abi", default-features = false, features = ["substrate"], optional = true }
[target.'cfg(target_family = "wasm")'.dependencies]
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["derive"] }
borsh = { version = "1", default-features = false }
sp-core = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-version = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-runtime = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-session = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-timestamp = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-api = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-transaction-pool = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
@@ -31,15 +51,6 @@ sp-consensus-babe = { git = "https://github.com/serai-dex/patch-polkadot-sdk", d
sp-consensus-grandpa = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-authority-discovery = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
serai-abi = { path = "../abi", default-features = false, features = ["substrate"] }
[target.'cfg(target_family = "wasm")'.dependencies]
borsh = { version = "1", default-features = false }
sp-core = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-session = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
sp-timestamp = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
frame-system = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
frame-support = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
frame-executive = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
@@ -49,14 +60,15 @@ pallet-session = { git = "https://github.com/serai-dex/patch-polkadot-sdk", defa
pallet-babe = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
pallet-grandpa = { git = "https://github.com/serai-dex/patch-polkadot-sdk", default-features = false }
serai-abi = { path = "../abi", default-features = false, features = ["substrate"] }
serai-core-pallet = { path = "../core", default-features = false }
serai-coins-pallet = { path = "../coins", default-features = false }
serai-validator-sets-pallet = { path = "../validator-sets", default-features = false }
serai-signals-pallet = { path = "../signals", default-features = false }
serai-dex-pallet = { path = "../dex", default-features = false }
[build-dependencies]
substrate-wasm-builder = { git = "https://github.com/serai-dex/patch-polkadot-sdk" }
serai-genesis-liquidity-pallet = { path = "../genesis-liquidity", default-features = false }
serai-in-instructions-pallet = { path = "../in-instructions", default-features = false }
[features]
std = [
@@ -70,6 +82,7 @@ std = [
"sp-runtime/std",
"sp-api/std",
"sp-transaction-pool/std",
"sp-inherents/std",
"sp-block-builder/std",
"sp-consensus-babe/std",
"sp-consensus-grandpa/std",
@@ -90,6 +103,8 @@ std = [
"serai-validator-sets-pallet/std",
"serai-signals-pallet/std",
"serai-dex-pallet/std",
"serai-genesis-liquidity-pallet/std",
"serai-in-instructions-pallet/std",
]
try-runtime = [
@@ -110,6 +125,8 @@ try-runtime = [
"serai-validator-sets-pallet/try-runtime",
"serai-signals-pallet/try-runtime",
"serai-dex-pallet/try-runtime",
"serai-genesis-liquidity-pallet/try-runtime",
"serai-in-instructions-pallet/try-runtime",
]
runtime-benchmarks = [
@@ -127,6 +144,8 @@ runtime-benchmarks = [
"serai-validator-sets-pallet/runtime-benchmarks",
"serai-signals-pallet/runtime-benchmarks",
"serai-dex-pallet/runtime-benchmarks",
"serai-genesis-liquidity-pallet/runtime-benchmarks",
"serai-in-instructions-pallet/runtime-benchmarks",
]
default = ["std"]

View File

@@ -1,4 +1,101 @@
fn main() {
#[cfg(feature = "std")]
substrate_wasm_builder::WasmBuilder::build_using_defaults();
use std::{path::PathBuf, fs, env, process::Command};
// Prevent recursing infinitely
if env::var("TARGET").unwrap() == "wasm32v1-none" {
return;
}
// https://github.com/rust-lang/rust/issues/145491
const ONE_45491: &str = "-C link-arg=--mllvm=-mcpu=mvp,--mllvm=-mattr=+mutable-globals";
const WASM: &str = "-C link-arg=--export-table";
const REQUIRED_BY_SUBSTRATE: &str = "--cfg substrate_runtime";
const SAFETY: &str = "-C overflow-checks=true -C panic=abort";
// `symbol-mangling-version` is defined to provide an explicit, canonical definition of symbols.
// `embed-bitcode=false` is set as the bitcode is unnecessary yet takes notable time to compile.
/*
Rust's LTO requires bitcode, forcing us to defer to the linker's LTO. While this would suggest
we _should_ set `embed-bitcode=true`, Rust's documentation suggests that's likely not desired
and should solely be done when compiling one library with mixed methods of linking. When
compiling and linking just once (as seen here), it's suggested to use the linker's LTO instead.
https://doc.rust-lang.org/1.91.1/rustc/codegen-options/index.html#embed-bitcode
*/
const COMPILATION: &str =
"-C symbol-mangling-version=v0 -C embed-bitcode=false -C linker-plugin-lto=true";
let profile = env::var("PROFILE").unwrap();
let release = profile == "release";
let rustflags = format!("{ONE_45491} {WASM} {REQUIRED_BY_SUBSTRATE} {SAFETY} {COMPILATION}");
let rustflags = if release {
format!("{rustflags} -C codegen-units=1 -C strip=symbols -C debug-assertions=false")
} else {
rustflags
};
let target_dir = PathBuf::from(env::var("OUT_DIR").unwrap()).join("target");
let cargo_command = || {
let cargo = env::var("CARGO").unwrap();
let mut command = Command::new(&cargo);
command
.current_dir(env::var("CARGO_MANIFEST_DIR").unwrap())
.env_clear()
.env("PATH", env::var("PATH").unwrap())
.env("CARGO", cargo)
.env("RUSTC", env::var("RUSTC").unwrap())
.env("RUSTFLAGS", &rustflags)
.env("CARGO_TARGET_DIR", &target_dir);
command
};
let workspace = {
let workspace = cargo_command()
.arg("locate-project")
.arg("--workspace")
.arg("--message-format")
.arg("plain")
.output()
.unwrap();
assert!(workspace.status.success());
let mut workspace = PathBuf::from(String::from_utf8(workspace.stdout).unwrap().trim());
assert_eq!(workspace.file_name().unwrap(), "Cargo.toml");
assert!(workspace.pop());
workspace
};
// Re-run anytime the workspace changes
// TODO: Re-run anytime `Cargo.lock` or specifically the `src` folders change
println!("cargo::rerun-if-changed={}", workspace.display());
let mut command = cargo_command();
command
.arg("rustc")
.arg("--package")
.arg(env::var("CARGO_PKG_NAME").unwrap())
.arg("--target")
.arg("wasm32v1-none")
.arg("--crate-type")
.arg("cdylib")
.arg("--no-default-features");
if release {
command.arg("--release");
}
assert!(command.status().unwrap().success());
// Place the resulting WASM blob into the parent `target` directory
{
let wasm_file = env::var("CARGO_PKG_NAME").unwrap().replace('-', "_") + ".wasm";
let src_file = target_dir.join("wasm32v1-none").join(&profile).join(&wasm_file);
let dst_file = {
      // TODO: This sets `dst_dir` to the default target directory, not the target directory actually in use
let mut dst_dir = workspace.clone();
// e.g. workspace/target/debug
dst_dir.extend(["target", &profile]);
let _ = fs::create_dir_all(&dst_dir);
// e.g. workspace/target/debug/serai_runtime.wasm
dst_dir.join(&wasm_file)
};
fs::copy(&src_file, &dst_file).unwrap();
}
}

View File

@@ -0,0 +1,39 @@
use alloc::vec::Vec;
use serai_abi::{
primitives::{
crypto::{Public, EmbeddedEllipticCurveKeys, SignedEmbeddedEllipticCurveKeys, KeyPair},
network_id::{ExternalNetworkId, NetworkId},
validator_sets::{Session, ExternalValidatorSet, ValidatorSet},
balance::{Amount, Balance},
address::SeraiAddress,
},
Event,
};
/// The genesis configuration for Serai.
#[derive(scale::Encode, scale::Decode)]
pub struct GenesisConfig {
/// The genesis validators for the network.
pub validators: Vec<(Public, Vec<SignedEmbeddedEllipticCurveKeys>)>,
/// The accounts to start with balances, intended solely for testing purposes.
pub coins: Vec<(Public, Balance)>,
}
sp_api::decl_runtime_apis! {
pub trait GenesisApi {
fn build(genesis: GenesisConfig);
}
pub trait SeraiApi {
fn events() -> Vec<Vec<Vec<u8>>>;
fn validators(network: NetworkId) -> Vec<Public>;
fn current_session(network: NetworkId) -> Option<Session>;
fn current_stake(network: NetworkId) -> Option<Amount>;
fn keys(set: ExternalValidatorSet) -> Option<KeyPair>;
fn current_validators(network: NetworkId) -> Option<Vec<SeraiAddress>>;
fn pending_slash_report(network: ExternalNetworkId) -> bool;
fn embedded_elliptic_curve_keys(
validator: SeraiAddress,
network: ExternalNetworkId,
) -> Option<EmbeddedEllipticCurveKeys>;
}
}

View File

@@ -1,222 +1,28 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(any(feature = "std", target_family = "wasm"))]
extern crate alloc;
use alloc::vec::Vec;
use serai_abi::{
primitives::{
crypto::{Public, EmbeddedEllipticCurveKeys, SignedEmbeddedEllipticCurveKeys, KeyPair},
network_id::{ExternalNetworkId, NetworkId},
validator_sets::{Session, ExternalValidatorSet, ValidatorSet},
balance::{Amount, Balance},
address::SeraiAddress,
},
Event,
};
#[cfg(feature = "std")]
include!(concat!(env!("OUT_DIR"), "/wasm_binary.rs"));
#[cfg(any(feature = "std", target_family = "wasm"))]
mod common;
#[cfg(any(feature = "std", target_family = "wasm"))]
pub use common::*;
// If this is WASM, we build the runtime proper
#[cfg(target_family = "wasm")]
mod wasm;
/// The genesis configuration for Serai.
#[derive(scale::Encode, scale::Decode)]
pub struct GenesisConfig {
/// The genesis validators for the network.
pub validators: Vec<(Public, Vec<SignedEmbeddedEllipticCurveKeys>)>,
/// The accounts to start with balances, intended solely for testing purposes.
pub coins: Vec<(Public, Balance)>,
}
// If this is `std`, we solely stub `impl_runtime_apis` to generate the `RuntimeApi` the node requires
#[cfg(feature = "std")]
mod std_runtime_api;
#[cfg(feature = "std")]
pub use std_runtime_api::RuntimeApi;
sp_api::decl_runtime_apis! {
pub trait GenesisApi {
fn build(genesis: GenesisConfig);
}
pub trait SeraiApi {
fn events() -> Vec<Vec<Vec<u8>>>;
fn validators(network: NetworkId) -> Vec<Public>;
fn current_session(network: NetworkId) -> Option<Session>;
fn current_stake(network: NetworkId) -> Option<Amount>;
fn keys(set: ExternalValidatorSet) -> Option<KeyPair>;
fn current_validators(network: NetworkId) -> Option<Vec<SeraiAddress>>;
fn pending_slash_report(network: ExternalNetworkId) -> bool;
fn embedded_elliptic_curve_keys(
validator: SeraiAddress,
network: ExternalNetworkId,
) -> Option<EmbeddedEllipticCurveKeys>;
}
}
// We stub `impl_runtime_apis` to generate the `RuntimeApi` object the node needs
#[cfg(not(target_family = "wasm"))]
mod apis {
use alloc::borrow::Cow;
use serai_abi::{SubstrateHeader as Header, SubstrateBlock as Block};
use super::*;
#[sp_version::runtime_version]
pub const VERSION: sp_version::RuntimeVersion = sp_version::RuntimeVersion {
spec_name: Cow::Borrowed("serai"),
impl_name: Cow::Borrowed("core"),
authoring_version: 0,
// Use the highest possible value so the node doesn't attempt to use this in place of the WASM
spec_version: 0xffffffff,
impl_version: 0,
apis: RUNTIME_API_VERSIONS,
transaction_version: 0,
system_version: 0,
};
/// A `struct` representing the runtime as necessary to define the available APIs.
pub struct Runtime;
sp_api::impl_runtime_apis! {
impl sp_api::Core<Block> for Runtime {
fn version() -> sp_version::RuntimeVersion {
VERSION
}
fn initialize_block(header: &Header) -> sp_runtime::ExtrinsicInclusionMode {
unimplemented!("runtime is only implemented when WASM")
}
fn execute_block(block: Block) {
unimplemented!("runtime is only implemented when WASM")
}
}
impl sp_block_builder::BlockBuilder<Block> for Runtime {
fn apply_extrinsic(
extrinsic: <Block as sp_runtime::traits::Block>::Extrinsic,
) -> sp_runtime::ApplyExtrinsicResult {
unimplemented!("runtime is only implemented when WASM")
}
fn finalize_block() -> Header {
unimplemented!("runtime is only implemented when WASM")
}
fn inherent_extrinsics(
data: sp_inherents::InherentData,
) -> Vec<<Block as sp_runtime::traits::Block>::Extrinsic> {
unimplemented!("runtime is only implemented when WASM")
}
fn check_inherents(
block: Block,
data: sp_inherents::InherentData,
) -> sp_inherents::CheckInherentsResult {
unimplemented!("runtime is only implemented when WASM")
}
}
impl sp_transaction_pool::runtime_api::TaggedTransactionQueue<Block> for Runtime {
fn validate_transaction(
source: sp_runtime::transaction_validity::TransactionSource,
tx: <Block as sp_runtime::traits::Block>::Extrinsic,
block_hash: <Block as sp_runtime::traits::Block>::Hash,
) -> sp_runtime::transaction_validity::TransactionValidity {
unimplemented!("runtime is only implemented when WASM")
}
}
impl sp_consensus_babe::BabeApi<Block> for Runtime {
fn configuration() -> sp_consensus_babe::BabeConfiguration {
unimplemented!("runtime is only implemented when WASM")
}
fn current_epoch_start() -> sp_consensus_babe::Slot {
unimplemented!("runtime is only implemented when WASM")
}
fn current_epoch() -> sp_consensus_babe::Epoch {
unimplemented!("runtime is only implemented when WASM")
}
fn next_epoch() -> sp_consensus_babe::Epoch {
unimplemented!("runtime is only implemented when WASM")
}
fn generate_key_ownership_proof(
_slot: sp_consensus_babe::Slot,
_authority_id: sp_consensus_babe::AuthorityId,
) -> Option<sp_consensus_babe::OpaqueKeyOwnershipProof> {
unimplemented!("runtime is only implemented when WASM")
}
fn submit_report_equivocation_unsigned_extrinsic(
equivocation_proof: sp_consensus_babe::EquivocationProof<Header>,
_: sp_consensus_babe::OpaqueKeyOwnershipProof,
) -> Option<()> {
unimplemented!("runtime is only implemented when WASM")
}
}
impl sp_consensus_grandpa::GrandpaApi<Block> for Runtime {
fn grandpa_authorities() -> sp_consensus_grandpa::AuthorityList {
unimplemented!("runtime is only implemented when WASM")
}
fn current_set_id() -> sp_consensus_grandpa::SetId {
unimplemented!("runtime is only implemented when WASM")
}
fn generate_key_ownership_proof(
_set_id: sp_consensus_grandpa::SetId,
_authority_id: sp_consensus_grandpa::AuthorityId,
) -> Option<sp_consensus_grandpa::OpaqueKeyOwnershipProof> {
unimplemented!("runtime is only implemented when WASM")
}
fn submit_report_equivocation_unsigned_extrinsic(
equivocation_proof: sp_consensus_grandpa::EquivocationProof<
<Block as sp_runtime::traits::Block>::Hash,
u64,
>,
_: sp_consensus_grandpa::OpaqueKeyOwnershipProof,
) -> Option<()> {
unimplemented!("runtime is only implemented when WASM")
}
}
impl sp_authority_discovery::AuthorityDiscoveryApi<Block> for Runtime {
fn authorities() -> Vec<sp_authority_discovery::AuthorityId> {
unimplemented!("runtime is only implemented when WASM")
}
}
impl crate::SeraiApi<Block> for Runtime {
fn events() -> Vec<Vec<Vec<u8>>> {
unimplemented!("runtime is only implemented when WASM")
}
fn validators(
network: NetworkId
) -> Vec<serai_abi::primitives::crypto::Public> {
unimplemented!("runtime is only implemented when WASM")
}
fn current_session(network: NetworkId) -> Option<Session> {
unimplemented!("runtime is only implemented when WASM")
}
fn current_stake(network: NetworkId) -> Option<Amount> {
unimplemented!("runtime is only implemented when WASM")
}
fn keys(set: ExternalValidatorSet) -> Option<KeyPair> {
unimplemented!("runtime is only implemented when WASM")
}
fn current_validators(network: NetworkId) -> Option<Vec<SeraiAddress>> {
unimplemented!("runtime is only implemented when WASM")
}
fn pending_slash_report(network: ExternalNetworkId) -> bool {
unimplemented!("runtime is only implemented when WASM")
}
fn embedded_elliptic_curve_keys(
validator: SeraiAddress,
network: ExternalNetworkId,
) -> Option<EmbeddedEllipticCurveKeys> {
unimplemented!("runtime is only implemented when WASM")
}
}
}
}
#[cfg(not(target_family = "wasm"))]
pub use apis::RuntimeApi;
// If this isn't WASM, regardless of what it is, we include the WASM blob from the build script
#[cfg(all(not(target_family = "wasm"), debug_assertions))]
pub const WASM: &[u8] =
include_bytes!(concat!(env!("OUT_DIR"), "/target/wasm32v1-none/debug/serai_runtime.wasm"));
#[cfg(all(not(target_family = "wasm"), not(debug_assertions)))]
pub const WASM: &[u8] =
include_bytes!(concat!(env!("OUT_DIR"), "/target/wasm32v1-none/release/serai_runtime.wasm"));

View File

@@ -0,0 +1,174 @@
use alloc::borrow::Cow;
use serai_abi::{
primitives::{
crypto::{KeyPair, EmbeddedEllipticCurveKeys},
network_id::{ExternalNetworkId, NetworkId},
validator_sets::{Session, ExternalValidatorSet},
balance::Amount,
address::SeraiAddress,
},
SubstrateHeader as Header, SubstrateBlock as Block,
};
use super::*;
#[sp_version::runtime_version]
pub const VERSION: sp_version::RuntimeVersion = sp_version::RuntimeVersion {
spec_name: Cow::Borrowed("serai"),
impl_name: Cow::Borrowed("core"),
authoring_version: 0,
// Use the highest possible value so the node doesn't attempt to use this in place of the WASM
spec_version: 0xffffffff,
impl_version: 0,
apis: RUNTIME_API_VERSIONS,
transaction_version: 0,
system_version: 0,
};
/// A `struct` representing the runtime as necessary to define the available APIs.
pub struct Runtime;
sp_api::impl_runtime_apis! {
impl sp_api::Core<Block> for Runtime {
fn version() -> sp_version::RuntimeVersion {
VERSION
}
fn initialize_block(header: &Header) -> sp_runtime::ExtrinsicInclusionMode {
unimplemented!("runtime is only implemented when WASM")
}
fn execute_block(block: Block) {
unimplemented!("runtime is only implemented when WASM")
}
}
impl sp_block_builder::BlockBuilder<Block> for Runtime {
fn apply_extrinsic(
extrinsic: <Block as sp_runtime::traits::Block>::Extrinsic,
) -> sp_runtime::ApplyExtrinsicResult {
unimplemented!("runtime is only implemented when WASM")
}
fn finalize_block() -> Header {
unimplemented!("runtime is only implemented when WASM")
}
fn inherent_extrinsics(
data: sp_inherents::InherentData,
) -> Vec<<Block as sp_runtime::traits::Block>::Extrinsic> {
unimplemented!("runtime is only implemented when WASM")
}
fn check_inherents(
block: Block,
data: sp_inherents::InherentData,
) -> sp_inherents::CheckInherentsResult {
unimplemented!("runtime is only implemented when WASM")
}
}
impl sp_transaction_pool::runtime_api::TaggedTransactionQueue<Block> for Runtime {
fn validate_transaction(
source: sp_runtime::transaction_validity::TransactionSource,
tx: <Block as sp_runtime::traits::Block>::Extrinsic,
block_hash: <Block as sp_runtime::traits::Block>::Hash,
) -> sp_runtime::transaction_validity::TransactionValidity {
unimplemented!("runtime is only implemented when WASM")
}
}
impl sp_consensus_babe::BabeApi<Block> for Runtime {
fn configuration() -> sp_consensus_babe::BabeConfiguration {
unimplemented!("runtime is only implemented when WASM")
}
fn current_epoch_start() -> sp_consensus_babe::Slot {
unimplemented!("runtime is only implemented when WASM")
}
fn current_epoch() -> sp_consensus_babe::Epoch {
unimplemented!("runtime is only implemented when WASM")
}
fn next_epoch() -> sp_consensus_babe::Epoch {
unimplemented!("runtime is only implemented when WASM")
}
fn generate_key_ownership_proof(
_slot: sp_consensus_babe::Slot,
_authority_id: sp_consensus_babe::AuthorityId,
) -> Option<sp_consensus_babe::OpaqueKeyOwnershipProof> {
unimplemented!("runtime is only implemented when WASM")
}
fn submit_report_equivocation_unsigned_extrinsic(
equivocation_proof: sp_consensus_babe::EquivocationProof<Header>,
_: sp_consensus_babe::OpaqueKeyOwnershipProof,
) -> Option<()> {
unimplemented!("runtime is only implemented when WASM")
}
}
impl sp_consensus_grandpa::GrandpaApi<Block> for Runtime {
fn grandpa_authorities() -> sp_consensus_grandpa::AuthorityList {
unimplemented!("runtime is only implemented when WASM")
}
fn current_set_id() -> sp_consensus_grandpa::SetId {
unimplemented!("runtime is only implemented when WASM")
}
fn generate_key_ownership_proof(
_set_id: sp_consensus_grandpa::SetId,
_authority_id: sp_consensus_grandpa::AuthorityId,
) -> Option<sp_consensus_grandpa::OpaqueKeyOwnershipProof> {
unimplemented!("runtime is only implemented when WASM")
}
fn submit_report_equivocation_unsigned_extrinsic(
equivocation_proof: sp_consensus_grandpa::EquivocationProof<
<Block as sp_runtime::traits::Block>::Hash,
u64,
>,
_: sp_consensus_grandpa::OpaqueKeyOwnershipProof,
) -> Option<()> {
unimplemented!("runtime is only implemented when WASM")
}
}
impl sp_authority_discovery::AuthorityDiscoveryApi<Block> for Runtime {
fn authorities() -> Vec<sp_authority_discovery::AuthorityId> {
unimplemented!("runtime is only implemented when WASM")
}
}
impl crate::SeraiApi<Block> for Runtime {
fn events() -> Vec<Vec<Vec<u8>>> {
unimplemented!("runtime is only implemented when WASM")
}
fn validators(
network: NetworkId
) -> Vec<serai_abi::primitives::crypto::Public> {
unimplemented!("runtime is only implemented when WASM")
}
fn current_session(network: NetworkId) -> Option<Session> {
unimplemented!("runtime is only implemented when WASM")
}
fn current_stake(network: NetworkId) -> Option<Amount> {
unimplemented!("runtime is only implemented when WASM")
}
fn keys(set: ExternalValidatorSet) -> Option<KeyPair> {
unimplemented!("runtime is only implemented when WASM")
}
fn current_validators(network: NetworkId) -> Option<Vec<SeraiAddress>> {
unimplemented!("runtime is only implemented when WASM")
}
fn pending_slash_report(network: ExternalNetworkId) -> bool {
unimplemented!("runtime is only implemented when WASM")
}
fn embedded_elliptic_curve_keys(
validator: SeraiAddress,
network: ExternalNetworkId,
) -> Option<EmbeddedEllipticCurveKeys> {
unimplemented!("runtime is only implemented when WASM")
}
}
}

View File

@@ -0,0 +1,127 @@
use super::*;
impl From<Option<SeraiAddress>> for RuntimeOrigin {
fn from(signer: Option<SeraiAddress>) -> Self {
match signer {
None => RuntimeOrigin::none(),
Some(signer) => RuntimeOrigin::signed(signer.into()),
}
}
}
impl From<serai_abi::Call> for RuntimeCall {
fn from(call: serai_abi::Call) -> Self {
match call {
serai_abi::Call::Coins(call) => {
use serai_abi::coins::Call;
use serai_coins_pallet::Call as Scall;
RuntimeCall::Coins(match call {
Call::transfer { to, coins } => Scall::transfer { to: to.into(), coins },
Call::burn { coins } => Scall::burn { coins },
Call::burn_with_instruction { instruction } => {
Scall::burn_with_instruction { instruction }
}
})
}
serai_abi::Call::ValidatorSets(call) => {
use serai_abi::validator_sets::Call;
use serai_validator_sets_pallet::Call as Scall;
RuntimeCall::ValidatorSets(match call {
Call::set_keys { network, key_pair, signature_participants, signature } => {
Scall::set_keys { network, key_pair, signature_participants, signature }
}
Call::report_slashes { network, slashes, signature } => {
Scall::report_slashes { network, slashes, signature }
}
Call::set_embedded_elliptic_curve_keys { keys } => {
Scall::set_embedded_elliptic_curve_keys { keys }
}
Call::allocate { network, amount } => Scall::allocate { network, amount },
Call::deallocate { network, amount } => Scall::deallocate { network, amount },
Call::claim_deallocation { deallocation } => Scall::claim_deallocation {
network: deallocation.network,
session: deallocation.session,
},
})
}
serai_abi::Call::Signals(call) => {
use serai_abi::signals::Call;
use serai_signals_pallet::Call as Scall;
RuntimeCall::Signals(match call {
Call::register_retirement_signal { in_favor_of } => {
Scall::register_retirement_signal { in_favor_of }
}
Call::revoke_retirement_signal { was_in_favor_of } => {
Scall::revoke_retirement_signal { retirement_signal: was_in_favor_of }
}
Call::favor { signal, with_network } => Scall::favor { signal, with_network },
Call::revoke_favor { signal, with_network } => {
Scall::revoke_favor { signal, with_network }
}
Call::stand_against { signal, with_network } => {
Scall::stand_against { signal, with_network }
}
})
}
serai_abi::Call::Dex(call) => {
use serai_abi::dex::Call;
RuntimeCall::Dex(match call {
Call::add_liquidity {
external_coin,
sri_intended,
external_coin_intended,
sri_minimum,
external_coin_minimum,
} => serai_dex_pallet::Call::add_liquidity {
external_coin,
sri_intended,
external_coin_intended,
sri_minimum,
external_coin_minimum,
},
Call::transfer_liquidity { to, liquidity_tokens } => {
serai_dex_pallet::Call::transfer_liquidity { to, liquidity_tokens }
}
Call::remove_liquidity { liquidity_tokens, sri_minimum, external_coin_minimum } => {
serai_dex_pallet::Call::remove_liquidity {
liquidity_tokens,
sri_minimum,
external_coin_minimum,
}
}
Call::swap { coins_to_swap, minimum_to_receive } => {
serai_dex_pallet::Call::swap { coins_to_swap, minimum_to_receive }
}
Call::swap_for { coins_to_receive, maximum_to_swap } => {
serai_dex_pallet::Call::swap_for { coins_to_receive, maximum_to_swap }
}
})
}
serai_abi::Call::GenesisLiquidity(call) => {
use serai_abi::genesis_liquidity::Call;
RuntimeCall::GenesisLiquidity(match call {
Call::oraclize_values { values, signature } => {
serai_genesis_liquidity_pallet::Call::oraclize_values { values, signature }
}
Call::transfer_genesis_liquidity { to, genesis_liquidity } => {
serai_genesis_liquidity_pallet::Call::transfer_genesis_liquidity {
to,
genesis_liquidity,
}
}
Call::remove_genesis_liquidity { genesis_liquidity } => {
serai_genesis_liquidity_pallet::Call::remove_genesis_liquidity { genesis_liquidity }
}
})
}
serai_abi::Call::InInstructions(call) => {
use serai_abi::in_instructions::Call;
RuntimeCall::InInstructions(match call {
Call::execute_batch { batch } => {
serai_in_instructions_pallet::Call::execute_batch { batch }
}
})
}
}
}
}

View File

@@ -1,7 +1,7 @@
use core::marker::PhantomData;
use alloc::{borrow::Cow, vec, vec::Vec};
use sp_core::{ConstU32, ConstU64, sr25519::Public};
use sp_core::{Get, ConstU32, ConstU64, sr25519::Public};
use sp_runtime::{
Perbill, Weight,
traits::{Header as _, Block as _},
@@ -21,51 +21,10 @@ use serai_abi::{
use serai_coins_pallet::{CoinsInstance, LiquidityTokensInstance};
/// The lookup for a SeraiAddress -> Public.
pub struct Lookup;
impl sp_runtime::traits::StaticLookup for Lookup {
type Source = SeraiAddress;
type Target = Public;
fn lookup(source: SeraiAddress) -> Result<Public, sp_runtime::traits::LookupError> {
Ok(source.into())
}
fn unlookup(source: Public) -> SeraiAddress {
source.into()
}
}
// TODO: Remove
#[sp_version::runtime_version]
pub const VERSION: RuntimeVersion = RuntimeVersion {
spec_name: Cow::Borrowed("serai"),
impl_name: Cow::Borrowed("core"),
authoring_version: 0,
spec_version: 0,
impl_version: 0,
apis: RUNTIME_API_VERSIONS,
transaction_version: 0,
system_version: 0,
};
frame_support::parameter_types! {
pub const Version: RuntimeVersion = VERSION;
// TODO
pub BlockLength: frame_system::limits::BlockLength =
frame_system::limits::BlockLength::max_with_normal_ratio(
100 * 1024,
Perbill::from_percent(75),
);
// TODO
pub BlockWeights: frame_system::limits::BlockWeights =
frame_system::limits::BlockWeights::with_sensible_defaults(
Weight::from_parts(
2u64 * frame_support::weights::constants::WEIGHT_REF_TIME_PER_SECOND,
u64::MAX,
),
Perbill::from_percent(75),
);
}
/// Maps `serai_abi` types into the types expected within the Substrate runtime
mod map;
/// The configuration for `frame_system`.
mod system;
#[frame_support::runtime]
mod runtime {
@@ -96,6 +55,12 @@ mod runtime {
#[runtime::pallet_index(6)]
pub type Dex = serai_dex_pallet::Pallet<Runtime>;
#[runtime::pallet_index(7)]
pub type GenesisLiquidity = serai_genesis_liquidity_pallet::Pallet<Runtime>;
#[runtime::pallet_index(8)]
pub type InInstructions = serai_in_instructions_pallet::Pallet<Runtime>;
#[runtime::pallet_index(0xfd)]
#[runtime::disable_inherent]
pub type Timestamp = pallet_timestamp::Pallet<Runtime>;
@@ -107,45 +72,6 @@ mod runtime {
pub type Grandpa = pallet_grandpa::Pallet<Runtime>;
}
impl frame_system::Config for Runtime {
type RuntimeEvent = RuntimeEvent;
type BaseCallFilter = frame_support::traits::Everything;
type BlockWeights = BlockWeights;
type BlockLength = BlockLength;
type RuntimeOrigin = RuntimeOrigin;
type RuntimeCall = RuntimeCall;
type Nonce = u32;
type Hash = <Self::Block as sp_runtime::traits::Block>::Hash;
type Hashing = sp_runtime::traits::BlakeTwo256;
type AccountId = sp_core::sr25519::Public;
type Lookup = Lookup;
type Block = Block;
// Don't track old block hashes within the System pallet
// We use not a number -> hash index, but a hash -> () index, in our own pallet
type BlockHashCount = ConstU64<1>;
type DbWeight = frame_support::weights::constants::RocksDbWeight;
type Version = Version;
type PalletInfo = PalletInfo;
type AccountData = ();
type OnNewAccount = ();
type OnKilledAccount = ();
// We use the default weights as we never expose/call any of these methods
type SystemWeightInfo = ();
// We also don't use the provided extensions framework
type ExtensionsWeightInfo = ();
  // We don't invoke any hooks on-set-code as we don't perform upgrades via the blockchain, but via
  // nodes, ensuring everyone who upgrades consents to the rules they upgrade to
type OnSetCode = ();
type MaxConsumers = ConstU32<{ u32::MAX }>;
// No migrations set
type SingleBlockMigrations = ();
type MultiBlockMigrator = ();
type PreInherents = serai_core_pallet::StartOfBlock<Runtime>;
type PostInherents = ();
type PostTransactions = serai_core_pallet::EndOfBlock<Runtime>;
}
impl serai_core_pallet::Config for Runtime {}
impl serai_coins_pallet::Config<CoinsInstance> for Runtime {
@@ -174,6 +100,8 @@ impl serai_coins_pallet::Config<LiquidityTokensInstance> for Runtime {
type AllowMint = serai_coins_pallet::AlwaysAllowMint;
}
impl serai_dex_pallet::Config for Runtime {}
impl serai_genesis_liquidity_pallet::Config for Runtime {}
impl serai_in_instructions_pallet::Config for Runtime {}
impl pallet_timestamp::Config for Runtime {
type Moment = u64;
@@ -229,119 +157,6 @@ impl pallet_grandpa::Config for Runtime {
type EquivocationReportSystem = ();
}
impl From<Option<SeraiAddress>> for RuntimeOrigin {
fn from(signer: Option<SeraiAddress>) -> Self {
match signer {
None => RuntimeOrigin::none(),
Some(signer) => RuntimeOrigin::signed(signer.into()),
}
}
}
impl From<serai_abi::Call> for RuntimeCall {
fn from(call: serai_abi::Call) -> Self {
match call {
serai_abi::Call::Coins(call) => {
use serai_abi::coins::Call;
use serai_coins_pallet::Call as Scall;
RuntimeCall::Coins(match call {
Call::transfer { to, coins } => Scall::transfer { to: to.into(), coins },
Call::burn { coins } => Scall::burn { coins },
Call::burn_with_instruction { instruction } => {
Scall::burn_with_instruction { instruction }
}
})
}
serai_abi::Call::ValidatorSets(call) => {
use serai_abi::validator_sets::Call;
use serai_validator_sets_pallet::Call as Scall;
RuntimeCall::ValidatorSets(match call {
Call::set_keys { network, key_pair, signature_participants, signature } => {
Scall::set_keys { network, key_pair, signature_participants, signature }
}
Call::report_slashes { network, slashes, signature } => {
Scall::report_slashes { network, slashes, signature }
}
Call::set_embedded_elliptic_curve_keys { keys } => {
Scall::set_embedded_elliptic_curve_keys { keys }
}
Call::allocate { network, amount } => Scall::allocate { network, amount },
Call::deallocate { network, amount } => Scall::deallocate { network, amount },
Call::claim_deallocation { deallocation } => Scall::claim_deallocation {
network: deallocation.network,
session: deallocation.session,
},
})
}
serai_abi::Call::Signals(call) => {
use serai_abi::signals::Call;
use serai_signals_pallet::Call as Scall;
RuntimeCall::Signals(match call {
Call::register_retirement_signal { in_favor_of } => {
Scall::register_retirement_signal { in_favor_of }
}
Call::revoke_retirement_signal { was_in_favor_of } => {
Scall::revoke_retirement_signal { retirement_signal: was_in_favor_of }
}
Call::favor { signal, with_network } => Scall::favor { signal, with_network },
Call::revoke_favor { signal, with_network } => {
Scall::revoke_favor { signal, with_network }
}
Call::stand_against { signal, with_network } => {
Scall::stand_against { signal, with_network }
}
})
}
serai_abi::Call::Dex(call) => {
use serai_abi::dex::Call;
match call {
Call::add_liquidity {
external_coin,
sri_intended,
external_coin_intended,
sri_minimum,
external_coin_minimum,
} => RuntimeCall::Dex(serai_dex_pallet::Call::add_liquidity {
external_coin,
sri_intended,
external_coin_intended,
sri_minimum,
external_coin_minimum,
}),
Call::transfer_liquidity { to, liquidity_tokens } => {
RuntimeCall::Dex(serai_dex_pallet::Call::transfer_liquidity { to, liquidity_tokens })
}
Call::remove_liquidity { liquidity_tokens, sri_minimum, external_coin_minimum } => {
RuntimeCall::Dex(serai_dex_pallet::Call::remove_liquidity {
liquidity_tokens,
sri_minimum,
external_coin_minimum,
})
}
Call::swap { coins_to_swap, minimum_to_receive } => {
RuntimeCall::Dex(serai_dex_pallet::Call::swap { coins_to_swap, minimum_to_receive })
}
Call::swap_for { coins_to_receive, maximum_to_swap } => {
RuntimeCall::Dex(serai_dex_pallet::Call::swap_for { coins_to_receive, maximum_to_swap })
}
}
}
serai_abi::Call::GenesisLiquidity(call) => {
use serai_abi::genesis_liquidity::Call;
match call {
Call::oraclize_values { .. } | Call::remove_liquidity { .. } => todo!("TODO"),
}
}
serai_abi::Call::InInstructions(call) => {
use serai_abi::in_instructions::Call;
match call {
Call::execute_batch { .. } => todo!("TODO"),
}
}
}
}
}
type Executive = frame_executive::Executive<Runtime, Block, Context, Runtime, AllPalletsWithSystem>;
const PRIMARY_PROBABILITY: (u64, u64) = (1, 4);
@@ -385,7 +200,7 @@ sp_api::impl_runtime_apis! {
impl sp_api::Core<Block> for Runtime {
fn version() -> RuntimeVersion {
VERSION
<Runtime as frame_system::Config>::Version::get()
}
fn initialize_block(header: &Header) -> sp_runtime::ExtrinsicInclusionMode {
Executive::initialize_block(header)
@@ -641,13 +456,19 @@ impl serai_abi::TransactionContext for Context {
}
}
/// The size of the current block.
fn current_block_size(&self) -> usize {
let current_block_size = frame_system::AllExtrinsicsLen::<Runtime>::get().unwrap_or(0);
usize::try_from(current_block_size).unwrap_or(usize::MAX)
}
/// If a block is present in the blockchain.
fn block_is_present_in_blockchain(&self, hash: &serai_abi::primitives::BlockHash) -> bool {
serai_core_pallet::Pallet::<Runtime>::block_exists(hash)
}
/// The time embedded into the current block.
fn current_time(&self) -> Option<u64> {
todo!("TODO")
fn current_time(&self) -> u64 {
pallet_timestamp::Pallet::<Runtime>::get()
}
/// Get the next nonce for an account.
fn next_nonce(&self, signer: &SeraiAddress) -> u32 {
@@ -669,8 +490,8 @@ impl serai_abi::TransactionContext for Context {
}
}
fn start_transaction(&self) {
Core::start_transaction()
fn start_transaction(&self, len: usize) {
Core::start_transaction(len)
}
fn consume_next_nonce(&self, signer: &SeraiAddress) {
serai_core_pallet::Pallet::<Runtime>::consume_next_nonce(signer)
@@ -700,26 +521,7 @@ impl serai_abi::TransactionContext for Context {
/* TODO
use validator_sets::MembershipProof;
const NORMAL_DISPATCH_RATIO: Perbill = Perbill::from_percent(75);
parameter_types! {
pub const Version: RuntimeVersion = VERSION;
pub const SS58Prefix: u8 = 42; // TODO: Remove for Bech32m
// 1 MB block size limit
pub BlockLength: system::limits::BlockLength =
system::limits::BlockLength::max_with_normal_ratio(BLOCK_SIZE, NORMAL_DISPATCH_RATIO);
pub BlockWeights: system::limits::BlockWeights =
system::limits::BlockWeights::with_sensible_defaults(
Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX),
NORMAL_DISPATCH_RATIO,
);
}
impl timestamp::Config for Runtime {
type Moment = u64;
type OnTimestampSet = Babe;
type MinimumPeriod = ConstU64<{ (TARGET_BLOCK_TIME * 1000) / 2 }>;
type WeightInfo = ();
}
@@ -756,14 +558,6 @@ impl signals::Config for Runtime {
type RetirementLockInDuration = ConstU32<{ (2 * 7 * 24 * 60 * 60) / (TARGET_BLOCK_TIME as u32) }>;
}
impl in_instructions::Config for Runtime {
type RuntimeEvent = RuntimeEvent;
}
impl genesis_liquidity::Config for Runtime {
type RuntimeEvent = RuntimeEvent;
}
impl emissions::Config for Runtime {
type RuntimeEvent = RuntimeEvent;
}
@@ -792,54 +586,4 @@ impl pallet_authorship::Config for Runtime {
/// Longevity of an offence report.
pub type ReportLongevity = <Runtime as pallet_babe::Config>::EpochDuration;
#[cfg(feature = "runtime-benchmarks")]
#[macro_use]
extern crate frame_benchmarking;
#[cfg(feature = "runtime-benchmarks")]
mod benches {
define_benchmarks!(
[frame_benchmarking, BaselineBench::<Runtime>]
[system, SystemBench::<Runtime>]
[balances, Balances]
[babe, Babe]
[grandpa, Grandpa]
);
}
sp_api::impl_runtime_apis! {
impl validator_sets::ValidatorSetsApi<Block> for Runtime {
fn external_network_key(network: ExternalNetworkId) -> Option<Vec<u8>> {
ValidatorSets::external_network_key(network)
}
}
impl dex::DexApi<Block> for Runtime {
fn quote_price_exact_tokens_for_tokens(
coin1: Coin,
coin2: Coin,
amount: SubstrateAmount,
include_fee: bool
) -> Option<SubstrateAmount> {
Dex::quote_price_exact_tokens_for_tokens(coin1, coin2, amount, include_fee)
}
fn quote_price_tokens_for_exact_tokens(
coin1: Coin,
coin2: Coin,
amount: SubstrateAmount,
include_fee: bool
) -> Option<SubstrateAmount> {
Dex::quote_price_tokens_for_exact_tokens(coin1, coin2, amount, include_fee)
}
fn get_reserves(coin1: Coin, coin2: Coin) -> Option<(SubstrateAmount, SubstrateAmount)> {
Dex::get_reserves(&coin1, &coin2).ok()
}
}
}
*/

View File

@@ -0,0 +1,94 @@
use super::*;
/// The lookup for a SeraiAddress -> Public.
pub struct Lookup;
impl sp_runtime::traits::StaticLookup for Lookup {
type Source = SeraiAddress;
type Target = Public;
fn lookup(source: SeraiAddress) -> Result<Public, sp_runtime::traits::LookupError> {
Ok(source.into())
}
fn unlookup(source: Public) -> SeraiAddress {
source.into()
}
}
/// The runtime version.
pub struct Version;
// TODO: Are we reasonably able to prune `RuntimeVersion` from Substrate?
impl Get<RuntimeVersion> for Version {
fn get() -> RuntimeVersion {
#[sp_version::runtime_version]
pub const VERSION: RuntimeVersion = RuntimeVersion {
spec_name: Cow::Borrowed("serai"),
impl_name: Cow::Borrowed("core"),
authoring_version: 0,
spec_version: 0,
impl_version: 0,
apis: RUNTIME_API_VERSIONS,
transaction_version: 0,
system_version: 0,
};
VERSION
}
}
impl frame_system::Config for Runtime {
type RuntimeOrigin = RuntimeOrigin;
type RuntimeCall = RuntimeCall;
type RuntimeEvent = RuntimeEvent;
type PalletInfo = PalletInfo;
type Hashing = sp_runtime::traits::BlakeTwo256;
type Hash = <Self::Block as sp_runtime::traits::Block>::Hash;
type Block = Block;
type AccountId = sp_core::sr25519::Public;
type Lookup = Lookup;
type Nonce = u32;
type PreInherents = serai_core_pallet::StartOfBlock<Runtime>;
type PostInherents = ();
type PostTransactions = serai_core_pallet::EndOfBlock<Runtime>;
/*
We do not globally filter the types of calls which may be performed. Instead, our ABI only
exposes the calls we want exposed, and each call individually errors if it's called when it
shouldn't be.
*/
type BaseCallFilter = frame_support::traits::Everything;
/*
We do not have `frame_system` track historical block hashes by their block number. Instead,
    `serai_core_pallet` populates a hash set (map of `[u8; 32] -> ()`) of all historical blocks'
hashes within itself.
    The usage of `1` here is solely because `frame_system` requires it to be at least `1`.
*/
type BlockHashCount = ConstU64<1>;
type Version = Version;
type BlockLength = serai_core_pallet::Limits;
type BlockWeights = serai_core_pallet::Limits;
// We assume `serai-node` will be run using the RocksDB backend
type DbWeight = frame_support::weights::constants::RocksDbWeight;
/*
    Serai does not expose `frame_system::Call`, so there is no consequence to using the default
    weights for these.
*/
type SystemWeightInfo = ();
  // We also don't use `frame_system`'s account system at all, leaving us to stub these out with `()`.
type AccountData = ();
type MaxConsumers = ConstU32<{ u32::MAX }>;
type OnNewAccount = ();
type OnKilledAccount = ();
  // Serai does not perform any 'on-chain upgrades', ensuring upgrades are opted into, and consented
  // to, by the entity running this node
type OnSetCode = ();
// We do not have any migrations declared
type SingleBlockMigrations = ();
type MultiBlockMigrator = ();
}

View File

@@ -54,8 +54,9 @@ pub fn reproducibly_builds() {
.arg("--quiet")
.arg("--rm")
.arg(&image)
.arg("busybox")
.arg("sha256sum")
.arg("/serai/target/release/wbuild/serai-runtime/serai_runtime.wasm")
.arg("/serai.wasm")
.output(),
);
// Attempt to clean up the image