Ethereum Integration (#557)

* Clean up Ethereum

* Consistent contract address for deployed contracts

* Flesh out Router a bit

* Add a Deployer for DoS-less deployment

* Implement Router-finding

* Use CREATE2 helper present in ethers

* Move from CREATE2 to CREATE

A bit more streamlined for our use case.

* Document ethereum-serai

* Tidy tests a bit

* Test updateSeraiKey

* Use encodePacked for updateSeraiKey

* Take in the block hash to read state during

* Add a Sandbox contract to the Ethereum integration

* Add retrieval of transfers from Ethereum

* Add inInstruction function to the Router

* Augment our handling of InInstructions events with a check that the transfer event also exists
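The cross-check in the bullet above can be sketched as follows. This is a toy model with illustrative types (`InInstructionEvent`, `TransferEvent`, `accept` are not the actual in-tree names): an InInstruction event is only accepted when a matching Transfer event, to the Router and for the same amount, exists in the same transaction.

```rust
// Toy sketch of the consistency check: only accept an InInstruction event if
// a matching Transfer event (same tx, to the Router, same amount) also exists.
#[derive(Clone, PartialEq, Debug)]
struct InInstructionEvent { tx: [u8; 32], amount: u64 }
#[derive(Clone, PartialEq, Debug)]
struct TransferEvent { tx: [u8; 32], to: [u8; 32], amount: u64 }

fn accept(
  router: [u8; 32],
  instruction: &InInstructionEvent,
  transfers: &[TransferEvent],
) -> bool {
  transfers.iter().any(|transfer| {
    (transfer.tx == instruction.tx) &&
      (transfer.to == router) &&
      (transfer.amount == instruction.amount)
  })
}

fn main() {
  let router = [9; 32];
  let instruction = InInstructionEvent { tx: [1; 32], amount: 100 };
  let good = TransferEvent { tx: [1; 32], to: router, amount: 100 };
  let spoofed = TransferEvent { tx: [1; 32], to: router, amount: 50 };
  // Accepted only when a genuine matching transfer backs the instruction.
  assert!(accept(router, &instruction, &[good]));
  assert!(!accept(router, &instruction, &[spoofed]));
}
```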

* Have the Deployer error upon failed deployments

* Add --via-ir

* Make get_transaction test-only

We only used it to get transactions to confirm the resolution of Eventualities.
Eventualities need to be modularized. By introducing the dedicated
confirm_completion function, we remove the need for a non-test get_transaction
AND begin this modularization (by no longer explicitly grabbing a transaction
to check with).

* Modularize Eventuality

Almost fully-deprecates the Transaction trait for Completion. Replaces
Transaction ID with Claim.
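The shape of that modularization can be sketched as a trait with `Claim` and `Completion` associated types. This is a hypothetical sketch, not the exact in-tree trait: the names `ToyEventuality` and `claim` are illustrative, and the toy impl stands in for a network where a completion is a (txid, raw bytes) pair and the claim is just the txid.

```rust
use core::fmt::Debug;

// Sketch of the modularized Eventuality trait: Claim replaces the transaction
// ID, Completion replaces the raw Transaction.
pub trait Eventuality: Sized + Clone + PartialEq + Debug {
  // A compact identifier a completion can be looked up by.
  type Claim: Clone + PartialEq + Debug;
  // The full on-chain artifact which completes this Eventuality.
  type Completion: Clone + PartialEq + Debug;

  // The ID binding this Eventuality to the Plan which created it.
  fn id(&self) -> [u8; 32];
  // Extract the claim from a completion.
  fn claim(completion: &Self::Completion) -> Self::Claim;
}

// Toy network: a completion is (txid, serialized tx); the claim is the txid.
#[derive(Clone, PartialEq, Debug)]
struct ToyEventuality { plan_id: [u8; 32] }
impl Eventuality for ToyEventuality {
  type Claim = [u8; 32];
  type Completion = ([u8; 32], Vec<u8>);
  fn id(&self) -> [u8; 32] { self.plan_id }
  fn claim(completion: &Self::Completion) -> Self::Claim { completion.0 }
}

fn main() {
  let eventuality = ToyEventuality { plan_id: [1; 32] };
  let completion = ([2; 32], vec![0xde, 0xad]);
  assert_eq!(ToyEventuality::claim(&completion), [2; 32]);
  assert_eq!(eventuality.id(), [1; 32]);
}
```

For Ethereum, the claim need not be a transaction hash at all (e.g. it can identify a signed RouterCommand), which is what lets the Transaction trait be nearly deprecated here.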

* Modularize the Scheduler behind a trait

* Add an extremely basic account Scheduler

* Add nonce uses, key rotation to the account scheduler

* Only report the account Scheduler empty after transferring keys

Also ban payments to the branch/change/forward addresses.

* Make fns reliant on state test-only

* Start of an Ethereum integration for the processor

* Add a session to the Router to prevent updateSeraiKey replaying

This would only happen if an old key was rotated to again, which would require
n-of-n collusion (already ridiculous, and a valid fault-attributable event). It
just clarifies the formal arguments.
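The replay protection can be sketched as follows, as a toy model rather than the Router's actual Solidity: each key update commits to a monotonically increasing session, so a signature produced for an old rotation can't be replayed even if the same key is later rotated to again. The names `Router` and `update_serai_key` mirror the contract but the types are illustrative.

```rust
// Toy sketch of session-based replay protection for key rotation.
struct Router { session: u64, key: [u8; 32] }

impl Router {
  // `signed_session` stands in for the session value the n-of-n signature
  // commits to; a stale session means a replayed (or reordered) update.
  fn update_serai_key(
    &mut self,
    signed_session: u64,
    new_key: [u8; 32],
  ) -> Result<(), &'static str> {
    if signed_session != self.session {
      return Err("stale session");
    }
    self.session += 1;
    self.key = new_key;
    Ok(())
  }
}

fn main() {
  let mut router = Router { session: 0, key: [0; 32] };
  assert!(router.update_serai_key(0, [1; 32]).is_ok());
  // Replaying the session-0 update (even to the same key) now fails.
  assert!(router.update_serai_key(0, [0; 32]).is_err());
  assert!(router.update_serai_key(1, [2; 32]).is_ok());
}
```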

* Add a RouterCommand + SignMachine for producing it to coins/ethereum

* Ethereum which compiles

* Have branch/change/forward return an option

Also defines a UtxoNetwork extension trait for MAX_INPUTS.

* Make external_address exclusively a test fn

* Move the "account" scheduler to "smart contract"

* Remove ABI artifact

* Move refund/forward Plan creation into the Processor

We create forward Plans in the scan path, and need to know their exact fees in
the scan path. This requires adding a somewhat wonky shim_forward_plan method
so we can obtain a Plan equivalent to the actual forward Plan for fee reasons,
yet don't expect it to be the actual forward Plan (which may be distinct if
the Plan pulls from the global state, such as with a nonce).

Also properly types a Scheduler addendum such that the SC scheduler isn't
cramming the nonce to use into the N::Output type.
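The shim idea above can be illustrated with a toy model (illustrative types, flat fee model, not the processor's actual code): the real forward Plan may pull global state such as a nonce, so the scan path builds a state-free shim with identical fee behavior, solely to know how much to deduct from the forwarded amount.

```rust
// Toy Plan: the actual forward Plan carries a nonce, the shim does not.
#[derive(Clone, PartialEq, Debug)]
struct Plan { amount: u64, nonce: Option<u64> }

// Fee model: flat fee, identical for shim and actual Plan by construction.
fn fee(_plan: &Plan) -> u64 { 10 }

// State-free shim, usable for fee estimation in the scan path.
fn shim_forward_plan(amount: u64) -> Plan { Plan { amount, nonce: None } }

// Actual Plan creation, which consumes the next nonce from mutable state.
fn forward_plan(next_nonce: &mut u64, amount: u64) -> Plan {
  let nonce = *next_nonce;
  *next_nonce += 1;
  Plan { amount, nonce: Some(nonce) }
}

fn main() {
  let mut next_nonce = 3;
  let shim = shim_forward_plan(100);
  let actual = forward_plan(&mut next_nonce, 100);
  // The shim differs from the actual Plan (no nonce) yet matches its fee.
  assert_ne!(shim, actual);
  assert_eq!(fee(&shim), fee(&actual));
  assert_eq!(next_nonce, 4);
}
```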

* Flesh out the Ethereum integration more

* Two commits ago, into the **Scheduler, not Processor

* Remove misc TODOs in SC Scheduler

* Add constructor to RouterCommandMachine

* RouterCommand read, pairing with the prior added write

* Further add serialization methods

* Have the Router's key included with the InInstruction

This does not use the key at the time of the event. This uses the key at the
end of the block for the event. It's much simpler than getting the full event
streams for each and checking when they interlace.

This does not read the state. Every block, this makes a request for every
single key update and simply chooses the last one. This allows pruning state,
only keeping the event tree. Ideally, we'd also introduce a cache to reduce the
cost of the filter (small in events yielded, long in blocks searched).

Since Serai doesn't have any forwarding TXs, nor Branches, nor change, all of
our Plans should solely have payments out, and there's no expectation of a Plan
being made under one key broken by it being received by another key.
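The "request every key update, choose the last" approach can be sketched as below. Types are illustrative (`KeyUpdate`, `key_at_end_of_block` are not in-tree names): per block, all key-update events are fetched and the one with the highest log index wins, with no state read or kept.

```rust
// A key-update event as observed in a block's logs.
#[derive(Clone, Copy, PartialEq, Debug)]
struct KeyUpdate { log_index: u64, key: [u8; 32] }

// The key "at the end of the block" is simply the update with the highest
// log index, or None if the block contained no updates.
fn key_at_end_of_block(mut updates: Vec<KeyUpdate>) -> Option<[u8; 32]> {
  updates.sort_by_key(|update| update.log_index);
  updates.last().map(|update| update.key)
}

fn main() {
  let updates = vec![
    KeyUpdate { log_index: 5, key: [1; 32] },
    KeyUpdate { log_index: 9, key: [2; 32] },
  ];
  assert_eq!(key_at_end_of_block(updates), Some([2; 32]));
  assert_eq!(key_at_end_of_block(vec![]), None);
}
```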

* Add read/write to InInstruction

* Abstract the ABI for Call/OutInstruction in ethereum-serai

* Fill out signable_transaction for Ethereum

* Move ethereum-serai to alloy

Resolves #331.

* Use the opaque sol macro instead of generated files

* Move the processor over to the now-alloy-based ethereum-serai

* Use the ecrecover provided by alloy

* Have the SC use nonce for rotation, not session (an independent nonce which wasn't synchronized)

* Always use the latest keys for SC scheduled plans

* get_eventuality_completions for Ethereum

* Finish fleshing out the processor Ethereum integration as needed for serai-processor tests

This doesn't support any actual deployments, not even the ones simulated by
serai-processor-docker-tests.

* Add alloy-simple-request-transport to the GH workflows

* cargo update

* Clarify a few comments and make one check more robust

* Use a string for 27.0 in .github

* Remove optional from no-longer-optional dependencies in processor

* Add alloy to git deny exception

* Fix no longer optional specification in processor's binaries feature

* Use a version of foundry from 2024

* Correct fetching Bitcoin TXs in the processor docker tests

* Update rustls to resolve RUSTSEC warnings

* Use the monthly nightly foundry, not the deleted daily nightly
Luke Parker
2024-04-21 06:02:12 -04:00
committed by GitHub
parent 43083dfd49
commit 0f0db14f05
58 changed files with 5031 additions and 1385 deletions


@@ -28,6 +28,7 @@ rand_core = { version = "0.6", default-features = false, features = ["std", "get
rand_chacha = { version = "0.3", default-features = false, features = ["std"] }
# Encoders
const-hex = { version = "1", default-features = false }
hex = { version = "0.4", default-features = false, features = ["std"] }
scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] }
borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }
@@ -40,11 +41,16 @@ transcript = { package = "flexible-transcript", path = "../crypto/transcript", d
frost = { package = "modular-frost", path = "../crypto/frost", default-features = false, features = ["ristretto"] }
frost-schnorrkel = { path = "../crypto/schnorrkel", default-features = false }
# Bitcoin/Ethereum
k256 = { version = "^0.13.1", default-features = false, features = ["std"], optional = true }
# Bitcoin
secp256k1 = { version = "0.28", default-features = false, features = ["std", "global-context", "rand-std"], optional = true }
k256 = { version = "^0.13.1", default-features = false, features = ["std"], optional = true }
bitcoin-serai = { path = "../coins/bitcoin", default-features = false, features = ["std"], optional = true }
# Ethereum
ethereum-serai = { path = "../coins/ethereum", default-features = false, optional = true }
# Monero
dalek-ff-group = { path = "../crypto/dalek-ff-group", default-features = false, features = ["std"], optional = true }
monero-serai = { path = "../coins/monero", default-features = false, features = ["std", "http-rpc", "multisig"], optional = true }
@@ -55,12 +61,12 @@ env_logger = { version = "0.10", default-features = false, features = ["humantim
tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] }
zalloc = { path = "../common/zalloc" }
serai-db = { path = "../common/db", optional = true }
serai-db = { path = "../common/db" }
serai-env = { path = "../common/env", optional = true }
# TODO: Replace with direct usage of primitives
serai-client = { path = "../substrate/client", default-features = false, features = ["serai"] }
messages = { package = "serai-processor-messages", path = "./messages", optional = true }
messages = { package = "serai-processor-messages", path = "./messages" }
message-queue = { package = "serai-message-queue", path = "../message-queue", optional = true }
@@ -69,6 +75,8 @@ frost = { package = "modular-frost", path = "../crypto/frost", features = ["test
sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] }
ethereum-serai = { path = "../coins/ethereum", default-features = false, features = ["tests"] }
dockertest = "0.4"
serai-docker-tests = { path = "../tests/docker" }
@@ -76,9 +84,11 @@ serai-docker-tests = { path = "../tests/docker" }
secp256k1 = ["k256", "frost/secp256k1"]
bitcoin = ["dep:secp256k1", "secp256k1", "bitcoin-serai", "serai-client/bitcoin"]
ethereum = ["secp256k1", "ethereum-serai"]
ed25519 = ["dalek-ff-group", "frost/ed25519"]
monero = ["ed25519", "monero-serai", "serai-client/monero"]
binaries = ["env_logger", "serai-env", "messages", "message-queue"]
binaries = ["env_logger", "serai-env", "message-queue"]
parity-db = ["serai-db/parity-db"]
rocksdb = ["serai-db/rocksdb"]


@@ -1,7 +1,15 @@
#![allow(dead_code)]
mod plan;
pub use plan::*;
mod db;
pub(crate) use db::*;
mod key_gen;
pub mod networks;
pub(crate) mod multisigs;
mod additional_key;
pub use additional_key::additional_key;


@@ -31,6 +31,8 @@ mod networks;
use networks::{Block, Network};
#[cfg(feature = "bitcoin")]
use networks::Bitcoin;
#[cfg(feature = "ethereum")]
use networks::Ethereum;
#[cfg(feature = "monero")]
use networks::Monero;
@@ -735,6 +737,7 @@ async fn main() {
};
let network_id = match env::var("NETWORK").expect("network wasn't specified").as_str() {
"bitcoin" => NetworkId::Bitcoin,
"ethereum" => NetworkId::Ethereum,
"monero" => NetworkId::Monero,
_ => panic!("unrecognized network"),
};
@@ -744,6 +747,8 @@ async fn main() {
match network_id {
#[cfg(feature = "bitcoin")]
NetworkId::Bitcoin => run(db, Bitcoin::new(url).await, coordinator).await,
#[cfg(feature = "ethereum")]
NetworkId::Ethereum => run(db.clone(), Ethereum::new(db, url).await, coordinator).await,
#[cfg(feature = "monero")]
NetworkId::Monero => run(db, Monero::new(url).await, coordinator).await,
_ => panic!("spawning a processor for an unsupported network"),


@@ -1,3 +1,5 @@
use std::io;
use ciphersuite::Ciphersuite;
pub use serai_db::*;
@@ -6,9 +8,59 @@ use serai_client::{primitives::Balance, in_instructions::primitives::InInstructi
use crate::{
Get, Plan,
networks::{Transaction, Network},
networks::{Output, Transaction, Network},
};
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum PlanFromScanning<N: Network> {
Refund(N::Output, N::Address),
Forward(N::Output),
}
impl<N: Network> PlanFromScanning<N> {
fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let mut kind = [0xff];
reader.read_exact(&mut kind)?;
match kind[0] {
0 => {
let output = N::Output::read(reader)?;
let mut address_vec_len = [0; 4];
reader.read_exact(&mut address_vec_len)?;
let mut address_vec =
vec![0; usize::try_from(u32::from_le_bytes(address_vec_len)).unwrap()];
reader.read_exact(&mut address_vec)?;
let address =
N::Address::try_from(address_vec).map_err(|_| "invalid address saved to disk").unwrap();
Ok(PlanFromScanning::Refund(output, address))
}
1 => {
let output = N::Output::read(reader)?;
Ok(PlanFromScanning::Forward(output))
}
_ => panic!("reading unrecognized PlanFromScanning"),
}
}
fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
match self {
PlanFromScanning::Refund(output, address) => {
writer.write_all(&[0])?;
output.write(writer)?;
let address_vec: Vec<u8> =
address.clone().try_into().map_err(|_| "invalid address being refunded to").unwrap();
writer.write_all(&u32::try_from(address_vec.len()).unwrap().to_le_bytes())?;
writer.write_all(&address_vec)
}
PlanFromScanning::Forward(output) => {
writer.write_all(&[1])?;
output.write(writer)
}
}
}
}
create_db!(
MultisigsDb {
NextBatchDb: () -> u32,
@@ -80,7 +132,11 @@ impl PlanDb {
) -> bool {
let plan = Plan::<N>::read::<&[u8]>(&mut &Self::get(getter, &id).unwrap()[8 ..]).unwrap();
assert_eq!(plan.id(), id);
(key == plan.key) && (Some(N::change_address(plan.key)) == plan.change)
if let Some(change) = N::change_address(plan.key) {
(key == plan.key) && (Some(change) == plan.change)
} else {
false
}
}
}
@@ -130,7 +186,7 @@ impl PlansFromScanningDb {
pub fn set_plans_from_scanning<N: Network>(
txn: &mut impl DbTxn,
block_number: usize,
plans: Vec<Plan<N>>,
plans: Vec<PlanFromScanning<N>>,
) {
let mut buf = vec![];
for plan in plans {
@@ -142,13 +198,13 @@ impl PlansFromScanningDb {
pub fn take_plans_from_scanning<N: Network>(
txn: &mut impl DbTxn,
block_number: usize,
) -> Option<Vec<Plan<N>>> {
) -> Option<Vec<PlanFromScanning<N>>> {
let block_number = u64::try_from(block_number).unwrap();
let res = Self::get(txn, block_number).map(|plans| {
let mut plans_ref = plans.as_slice();
let mut res = vec![];
while !plans_ref.is_empty() {
res.push(Plan::<N>::read(&mut plans_ref).unwrap());
res.push(PlanFromScanning::<N>::read(&mut plans_ref).unwrap());
}
res
});


@@ -7,7 +7,7 @@ use scale::{Encode, Decode};
use messages::SubstrateContext;
use serai_client::{
primitives::{MAX_DATA_LEN, NetworkId, Coin, ExternalAddress, BlockHash, Data},
primitives::{MAX_DATA_LEN, ExternalAddress, BlockHash, Data},
in_instructions::primitives::{
InInstructionWithBalance, Batch, RefundableInInstruction, Shorthand, MAX_BATCH_SIZE,
},
@@ -28,15 +28,12 @@ use scanner::{ScannerEvent, ScannerHandle, Scanner};
mod db;
use db::*;
#[cfg(not(test))]
mod scheduler;
#[cfg(test)]
pub mod scheduler;
pub(crate) mod scheduler;
use scheduler::Scheduler;
use crate::{
Get, Db, Payment, Plan,
networks::{OutputType, Output, Transaction, SignableTransaction, Block, PreparedSend, Network},
networks::{OutputType, Output, SignableTransaction, Eventuality, Block, PreparedSend, Network},
};
// InInstructionWithBalance from an external output
@@ -95,6 +92,8 @@ enum RotationStep {
ClosingExisting,
}
// This explicitly shouldn't take the database as we prepare Plans we won't execute for fee
// estimates
async fn prepare_send<N: Network>(
network: &N,
block_number: usize,
@@ -122,7 +121,7 @@ async fn prepare_send<N: Network>(
pub struct MultisigViewer<N: Network> {
activation_block: usize,
key: <N::Curve as Ciphersuite>::G,
scheduler: Scheduler<N>,
scheduler: N::Scheduler,
}
#[allow(clippy::type_complexity)]
@@ -131,7 +130,7 @@ pub enum MultisigEvent<N: Network> {
// Batches to publish
Batches(Option<(<N::Curve as Ciphersuite>::G, <N::Curve as Ciphersuite>::G)>, Vec<Batch>),
// Eventuality completion found on-chain
Completed(Vec<u8>, [u8; 32], N::Transaction),
Completed(Vec<u8>, [u8; 32], <N::Eventuality as Eventuality>::Completion),
}
pub struct MultisigManager<D: Db, N: Network> {
@@ -157,20 +156,7 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
assert!(current_keys.len() <= 2);
let mut actively_signing = vec![];
for (_, key) in &current_keys {
schedulers.push(
Scheduler::from_db(
raw_db,
*key,
match N::NETWORK {
NetworkId::Serai => panic!("adding a key for Serai"),
NetworkId::Bitcoin => Coin::Bitcoin,
// TODO: This is incomplete to DAI
NetworkId::Ethereum => Coin::Ether,
NetworkId::Monero => Coin::Monero,
},
)
.unwrap(),
);
schedulers.push(N::Scheduler::from_db(raw_db, *key, N::NETWORK).unwrap());
// Load any TXs being actively signed
let key = key.to_bytes();
@@ -245,17 +231,7 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
let viewer = Some(MultisigViewer {
activation_block,
key: external_key,
scheduler: Scheduler::<N>::new::<D>(
txn,
external_key,
match N::NETWORK {
NetworkId::Serai => panic!("adding a key for Serai"),
NetworkId::Bitcoin => Coin::Bitcoin,
// TODO: This is incomplete to DAI
NetworkId::Ethereum => Coin::Ether,
NetworkId::Monero => Coin::Monero,
},
),
scheduler: N::Scheduler::new::<D>(txn, external_key, N::NETWORK),
});
if self.existing.is_none() {
@@ -352,48 +328,30 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
(existing_outputs, new_outputs)
}
fn refund_plan(output: N::Output, refund_to: N::Address) -> Plan<N> {
fn refund_plan(
scheduler: &mut N::Scheduler,
txn: &mut D::Transaction<'_>,
output: N::Output,
refund_to: N::Address,
) -> Plan<N> {
log::info!("creating refund plan for {}", hex::encode(output.id()));
assert_eq!(output.kind(), OutputType::External);
Plan {
key: output.key(),
// Uses a payment as this will still be successfully sent due to fee amortization,
// and because change is currently always a Serai key
payments: vec![Payment { address: refund_to, data: None, balance: output.balance() }],
inputs: vec![output],
change: None,
}
scheduler.refund_plan::<D>(txn, output, refund_to)
}
fn forward_plan(&self, output: N::Output) -> Plan<N> {
// Returns the plan for forwarding if one is needed.
// Returns None if one is not needed to forward this output.
fn forward_plan(&mut self, txn: &mut D::Transaction<'_>, output: &N::Output) -> Option<Plan<N>> {
log::info!("creating forwarding plan for {}", hex::encode(output.id()));
/*
Sending a Plan, with arbitrary data proxying the InInstruction, would require adding
a flow for networks which drop their data to still embed arbitrary data. It'd also have
edge cases causing failures (we'd need to manually provide the origin if it was implied,
which may exceed the encoding limit).
Instead, we save the InInstruction as we scan this output. Then, when the output is
successfully forwarded, we simply read it from the local database. This also saves the
costs of embedding arbitrary data.
Since we can't rely on the Eventuality system to detect if it's a forwarded transaction,
due to the asynchronicity of the Eventuality system, we instead interpret a Forwarded
output which has an amount associated with an InInstruction which was forwarded as having
been forwarded.
*/
Plan {
key: self.existing.as_ref().unwrap().key,
payments: vec![Payment {
address: N::forward_address(self.new.as_ref().unwrap().key),
data: None,
balance: output.balance(),
}],
inputs: vec![output],
change: None,
let res = self.existing.as_mut().unwrap().scheduler.forward_plan::<D>(
txn,
output.clone(),
self.new.as_ref().expect("forwarding plan yet no new multisig").key,
);
if res.is_none() {
log::info!("no forwarding plan was necessary for {}", hex::encode(output.id()));
}
res
}
// Filter newly received outputs due to the step being RotationStep::ClosingExisting.
@@ -605,7 +563,31 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
block_number
{
// Load plans created when we scanned the block
plans = PlansFromScanningDb::take_plans_from_scanning::<N>(txn, block_number).unwrap();
let scanning_plans =
PlansFromScanningDb::take_plans_from_scanning::<N>(txn, block_number).unwrap();
// Expand into actual plans
plans = scanning_plans
.into_iter()
.map(|plan| match plan {
PlanFromScanning::Refund(output, refund_to) => {
let existing = self.existing.as_mut().unwrap();
if output.key() == existing.key {
Self::refund_plan(&mut existing.scheduler, txn, output, refund_to)
} else {
let new = self
.new
.as_mut()
.expect("new multisig didn't expect yet output wasn't for existing multisig");
assert_eq!(output.key(), new.key, "output wasn't for existing nor new multisig");
Self::refund_plan(&mut new.scheduler, txn, output, refund_to)
}
}
PlanFromScanning::Forward(output) => self
.forward_plan(txn, &output)
.expect("supposed to forward an output yet no forwarding plan"),
})
.collect();
for plan in &plans {
plans_from_scanning.insert(plan.id());
}
@@ -665,13 +647,23 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
});
for plan in &plans {
if plan.change == Some(N::change_address(plan.key)) {
// Assert these are only created during the expected step
match *step {
RotationStep::UseExisting => {}
RotationStep::NewAsChange |
RotationStep::ForwardFromExisting |
RotationStep::ClosingExisting => panic!("change was set to self despite rotating"),
// This first equality should 'never meaningfully' be false
// All created plans so far are by the existing multisig EXCEPT:
// A) If we created a refund plan from the new multisig (yet that wouldn't have change)
// B) The existing Scheduler returned a Plan for the new key (yet that happens with the SC
// scheduler, yet that doesn't have change)
// Despite being 'unnecessary' now, it's better to explicitly ensure and be robust
if plan.key == self.existing.as_ref().unwrap().key {
if let Some(change) = N::change_address(plan.key) {
if plan.change == Some(change) {
// Assert these (self-change) are only created during the expected step
match *step {
RotationStep::UseExisting => {}
RotationStep::NewAsChange |
RotationStep::ForwardFromExisting |
RotationStep::ClosingExisting => panic!("change was set to self despite rotating"),
}
}
}
}
}
@@ -853,15 +845,20 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
let plans_at_start = plans.len();
let (refund_to, instruction) = instruction_from_output::<N>(output);
if let Some(mut instruction) = instruction {
// Build a dedicated Plan forwarding this
let forward_plan = self.forward_plan(output.clone());
plans.push(forward_plan.clone());
let Some(shimmed_plan) = N::Scheduler::shim_forward_plan(
output.clone(),
self.new.as_ref().expect("forwarding from existing yet no new multisig").key,
) else {
// If this network doesn't need forwarding, report the output now
return true;
};
plans.push(PlanFromScanning::<N>::Forward(output.clone()));
// Set the instruction for this output to be returned
// We need to set it under the amount it's forwarded with, so prepare its forwarding
// TX to determine the fees involved
let PreparedSend { tx, post_fee_branches: _, operating_costs } =
prepare_send(network, block_number, forward_plan, 0).await;
prepare_send(network, block_number, shimmed_plan, 0).await;
// operating_costs should not increase in a forwarding TX
assert_eq!(operating_costs, 0);
@@ -872,12 +869,28 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
// letting it die out
if let Some(tx) = &tx {
instruction.balance.amount.0 -= tx.0.fee();
/*
Sending a Plan, with arbitrary data proxying the InInstruction, would require
adding a flow for networks which drop their data to still embed arbitrary data.
It'd also have edge cases causing failures (we'd need to manually provide the
origin if it was implied, which may exceed the encoding limit).
Instead, we save the InInstruction as we scan this output. Then, when the
output is successfully forwarded, we simply read it from the local database.
This also saves the costs of embedding arbitrary data.
Since we can't rely on the Eventuality system to detect if it's a forwarded
transaction, due to the asynchronicity of the Eventuality system, we instead
interpret a Forwarded output which has an amount associated with an
InInstruction which was forwarded as having been forwarded.
*/
ForwardedOutputDb::save_forwarded_output(txn, &instruction);
}
} else if let Some(refund_to) = refund_to {
if let Ok(refund_to) = refund_to.consume().try_into() {
// Build a dedicated Plan refunding this
plans.push(Self::refund_plan(output.clone(), refund_to));
plans.push(PlanFromScanning::Refund(output.clone(), refund_to));
}
}
@@ -909,7 +922,7 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
let Some(instruction) = instruction else {
if let Some(refund_to) = refund_to {
if let Ok(refund_to) = refund_to.consume().try_into() {
plans.push(Self::refund_plan(output.clone(), refund_to));
plans.push(PlanFromScanning::Refund(output.clone(), refund_to));
}
}
continue;
@@ -999,9 +1012,9 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
// This must be emitted before ScannerEvent::Block for all completions of known Eventualities
// within the block. Unknown Eventualities may have their Completed events emitted after
// ScannerEvent::Block however.
ScannerEvent::Completed(key, block_number, id, tx) => {
ResolvedDb::resolve_plan::<N>(txn, &key, id, &tx.id());
(block_number, MultisigEvent::Completed(key, id, tx))
ScannerEvent::Completed(key, block_number, id, tx_id, completion) => {
ResolvedDb::resolve_plan::<N>(txn, &key, id, &tx_id);
(block_number, MultisigEvent::Completed(key, id, completion))
}
};


@@ -17,15 +17,25 @@ use tokio::{
use crate::{
Get, DbTxn, Db,
networks::{Output, Transaction, EventualitiesTracker, Block, Network},
networks::{Output, Transaction, Eventuality, EventualitiesTracker, Block, Network},
};
#[derive(Clone, Debug)]
pub enum ScannerEvent<N: Network> {
// Block scanned
Block { is_retirement_block: bool, block: <N::Block as Block<N>>::Id, outputs: Vec<N::Output> },
Block {
is_retirement_block: bool,
block: <N::Block as Block<N>>::Id,
outputs: Vec<N::Output>,
},
// Eventuality completion found on-chain
Completed(Vec<u8>, usize, [u8; 32], N::Transaction),
Completed(
Vec<u8>,
usize,
[u8; 32],
<N::Transaction as Transaction<N>>::Id,
<N::Eventuality as Eventuality>::Completion,
),
}
pub type ScannerEventChannel<N> = mpsc::UnboundedReceiver<ScannerEvent<N>>;
@@ -555,19 +565,25 @@ impl<N: Network, D: Db> Scanner<N, D> {
}
}
for (id, (block_number, tx)) in network
for (id, (block_number, tx, completion)) in network
.get_eventuality_completions(scanner.eventualities.get_mut(&key_vec).unwrap(), &block)
.await
{
info!(
"eventuality {} resolved by {}, as found on chain",
hex::encode(id),
hex::encode(&tx.id())
hex::encode(tx.as_ref())
);
completion_block_numbers.push(block_number);
// This must be before the emission of ScannerEvent::Block, per commentary in mod.rs
if !scanner.emit(ScannerEvent::Completed(key_vec.clone(), block_number, id, tx)) {
if !scanner.emit(ScannerEvent::Completed(
key_vec.clone(),
block_number,
id,
tx,
completion,
)) {
return;
}
}


@@ -0,0 +1,95 @@
use core::fmt::Debug;
use std::io;
use ciphersuite::Ciphersuite;
use serai_client::primitives::{NetworkId, Balance};
use crate::{networks::Network, Db, Payment, Plan};
pub(crate) mod utxo;
pub(crate) mod smart_contract;
pub trait SchedulerAddendum: Send + Clone + PartialEq + Debug {
fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()>;
}
impl SchedulerAddendum for () {
fn read<R: io::Read>(_: &mut R) -> io::Result<Self> {
Ok(())
}
fn write<W: io::Write>(&self, _: &mut W) -> io::Result<()> {
Ok(())
}
}
pub trait Scheduler<N: Network>: Sized + Clone + PartialEq + Debug {
type Addendum: SchedulerAddendum;
/// Check if this Scheduler is empty.
fn empty(&self) -> bool;
/// Create a new Scheduler.
fn new<D: Db>(
txn: &mut D::Transaction<'_>,
key: <N::Curve as Ciphersuite>::G,
network: NetworkId,
) -> Self;
/// Load a Scheduler from the DB.
fn from_db<D: Db>(
db: &D,
key: <N::Curve as Ciphersuite>::G,
network: NetworkId,
) -> io::Result<Self>;
/// Check if a branch is usable.
fn can_use_branch(&self, balance: Balance) -> bool;
/// Schedule a series of outputs/payments.
fn schedule<D: Db>(
&mut self,
txn: &mut D::Transaction<'_>,
utxos: Vec<N::Output>,
payments: Vec<Payment<N>>,
key_for_any_change: <N::Curve as Ciphersuite>::G,
force_spend: bool,
) -> Vec<Plan<N>>;
/// Consume all payments still pending within this Scheduler, without scheduling them.
fn consume_payments<D: Db>(&mut self, txn: &mut D::Transaction<'_>) -> Vec<Payment<N>>;
/// Note a branch output as having been created, with the amount it was actually created with,
/// or not having been created due to being too small.
fn created_output<D: Db>(
&mut self,
txn: &mut D::Transaction<'_>,
expected: u64,
actual: Option<u64>,
);
/// Refund a specific output.
fn refund_plan<D: Db>(
&mut self,
txn: &mut D::Transaction<'_>,
output: N::Output,
refund_to: N::Address,
) -> Plan<N>;
/// Shim the forwarding Plan as necessary to obtain a fee estimate.
///
/// If this Scheduler is for a Network which requires forwarding, this must return Some with a
/// plan with identical fee behavior. If forwarding isn't necessary, returns None.
fn shim_forward_plan(output: N::Output, to: <N::Curve as Ciphersuite>::G) -> Option<Plan<N>>;
/// Forward a specific output to the new multisig.
///
/// Returns None if no forwarding is necessary. Must return Some if forwarding is necessary.
fn forward_plan<D: Db>(
&mut self,
txn: &mut D::Transaction<'_>,
output: N::Output,
to: <N::Curve as Ciphersuite>::G,
) -> Option<Plan<N>>;
}


@@ -0,0 +1,208 @@
use std::{io, collections::HashSet};
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use serai_client::primitives::{NetworkId, Coin, Balance};
use crate::{
Get, DbTxn, Db, Payment, Plan, create_db,
networks::{Output, Network},
multisigs::scheduler::{SchedulerAddendum, Scheduler as SchedulerTrait},
};
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Scheduler<N: Network> {
key: <N::Curve as Ciphersuite>::G,
coins: HashSet<Coin>,
rotated: bool,
}
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Addendum<N: Network> {
Nonce(u64),
RotateTo { nonce: u64, new_key: <N::Curve as Ciphersuite>::G },
}
impl<N: Network> SchedulerAddendum for Addendum<N> {
fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let mut kind = [0xff];
reader.read_exact(&mut kind)?;
match kind[0] {
0 => {
let mut nonce = [0; 8];
reader.read_exact(&mut nonce)?;
Ok(Addendum::Nonce(u64::from_le_bytes(nonce)))
}
1 => {
let mut nonce = [0; 8];
reader.read_exact(&mut nonce)?;
let nonce = u64::from_le_bytes(nonce);
let new_key = N::Curve::read_G(reader)?;
Ok(Addendum::RotateTo { nonce, new_key })
}
_ => Err(io::Error::other("reading unknown Addendum type"))?,
}
}
fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
match self {
Addendum::Nonce(nonce) => {
writer.write_all(&[0])?;
writer.write_all(&nonce.to_le_bytes())
}
Addendum::RotateTo { nonce, new_key } => {
writer.write_all(&[1])?;
writer.write_all(&nonce.to_le_bytes())?;
writer.write_all(new_key.to_bytes().as_ref())
}
}
}
}
create_db! {
SchedulerDb {
LastNonce: () -> u64,
RotatedTo: (key: &[u8]) -> Vec<u8>,
}
}
impl<N: Network<Scheduler = Self>> SchedulerTrait<N> for Scheduler<N> {
type Addendum = Addendum<N>;
/// Check if this Scheduler is empty.
fn empty(&self) -> bool {
self.rotated
}
/// Create a new Scheduler.
fn new<D: Db>(
_txn: &mut D::Transaction<'_>,
key: <N::Curve as Ciphersuite>::G,
network: NetworkId,
) -> Self {
assert!(N::branch_address(key).is_none());
assert!(N::change_address(key).is_none());
assert!(N::forward_address(key).is_none());
Scheduler { key, coins: network.coins().iter().copied().collect(), rotated: false }
}
/// Load a Scheduler from the DB.
fn from_db<D: Db>(
db: &D,
key: <N::Curve as Ciphersuite>::G,
network: NetworkId,
) -> io::Result<Self> {
Ok(Scheduler {
key,
coins: network.coins().iter().copied().collect(),
rotated: RotatedTo::get(db, key.to_bytes().as_ref()).is_some(),
})
}
fn can_use_branch(&self, _balance: Balance) -> bool {
false
}
fn schedule<D: Db>(
&mut self,
txn: &mut D::Transaction<'_>,
utxos: Vec<N::Output>,
payments: Vec<Payment<N>>,
key_for_any_change: <N::Curve as Ciphersuite>::G,
force_spend: bool,
) -> Vec<Plan<N>> {
for utxo in utxos {
assert!(self.coins.contains(&utxo.balance().coin));
}
let mut nonce = LastNonce::get(txn).map_or(0, |nonce| nonce + 1);
let mut plans = vec![];
for chunk in payments.as_slice().chunks(N::MAX_OUTPUTS) {
// Once we rotate, all further payments should be scheduled via the new multisig
assert!(!self.rotated);
plans.push(Plan {
key: self.key,
inputs: vec![],
payments: chunk.to_vec(),
change: None,
scheduler_addendum: Addendum::Nonce(nonce),
});
nonce += 1;
}
// If we're supposed to rotate to the new key, create an empty Plan which will signify the key
// update
if force_spend && (!self.rotated) {
plans.push(Plan {
key: self.key,
inputs: vec![],
payments: vec![],
change: None,
scheduler_addendum: Addendum::RotateTo { nonce, new_key: key_for_any_change },
});
nonce += 1;
self.rotated = true;
RotatedTo::set(
txn,
self.key.to_bytes().as_ref(),
&key_for_any_change.to_bytes().as_ref().to_vec(),
);
}
LastNonce::set(txn, &nonce);
plans
}
fn consume_payments<D: Db>(&mut self, _txn: &mut D::Transaction<'_>) -> Vec<Payment<N>> {
vec![]
}
fn created_output<D: Db>(
&mut self,
_txn: &mut D::Transaction<'_>,
_expected: u64,
_actual: Option<u64>,
) {
panic!("Smart Contract Scheduler created a Branch output")
}
/// Refund a specific output.
fn refund_plan<D: Db>(
&mut self,
txn: &mut D::Transaction<'_>,
output: N::Output,
refund_to: N::Address,
) -> Plan<N> {
let current_key = RotatedTo::get(txn, self.key.to_bytes().as_ref())
.and_then(|key_bytes| <N::Curve as Ciphersuite>::read_G(&mut key_bytes.as_slice()).ok())
.unwrap_or(self.key);
// Consume the next unused nonce and advance LastNonce past it
let nonce = LastNonce::get(txn).unwrap_or(0);
LastNonce::set(txn, &(nonce + 1));
Plan {
key: current_key,
inputs: vec![],
payments: vec![Payment { address: refund_to, data: None, balance: output.balance() }],
change: None,
scheduler_addendum: Addendum::Nonce(nonce),
}
}
fn shim_forward_plan(_output: N::Output, _to: <N::Curve as Ciphersuite>::G) -> Option<Plan<N>> {
None
}
/// Forward a specific output to the new multisig.
///
/// Returns None if no forwarding is necessary.
fn forward_plan<D: Db>(
&mut self,
_txn: &mut D::Transaction<'_>,
_output: N::Output,
_to: <N::Curve as Ciphersuite>::G,
) -> Option<Plan<N>> {
None
}
}
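The account Scheduler above turns payments into Plans by chunking them at MAX_OUTPUTS and attaching one sequential nonce per Plan. A minimal standalone sketch of that nonce assignment (the names and types here are illustrative, not the processor's actual API):

```rust
// Illustrative mirror of the account Scheduler's nonce assignment: payments
// are chunked by MAX_OUTPUTS and each chunk becomes one Plan consuming one
// nonce. Returns (nonce, payments_in_plan) pairs.
const MAX_OUTPUTS: usize = 256;

fn assign_nonces(payment_count: usize, mut next_nonce: u64) -> Vec<(u64, usize)> {
  let mut plans = vec![];
  let mut remaining = payment_count;
  while remaining != 0 {
    let in_this_plan = remaining.min(MAX_OUTPUTS);
    plans.push((next_nonce, in_this_plan));
    next_nonce += 1;
    remaining -= in_this_plan;
  }
  plans
}

fn main() {
  // 600 payments with MAX_OUTPUTS = 256 require three Plans (256, 256, 88)
  let plans = assign_nonces(600, 5);
  assert_eq!(plans, vec![(5, 256), (6, 256), (7, 88)]);
  println!("{plans:?}");
}
```

Since every Plan consumes exactly one nonce, a strictly sequential counter is what lets the on-chain Router reject replayed or out-of-order commands.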


@@ -5,16 +5,17 @@ use std::{
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use serai_client::primitives::{Coin, Amount, Balance};
use serai_client::primitives::{NetworkId, Coin, Amount, Balance};
use crate::{
networks::{OutputType, Output, Network},
DbTxn, Db, Payment, Plan,
networks::{OutputType, Output, Network, UtxoNetwork},
multisigs::scheduler::Scheduler as SchedulerTrait,
};
/// Stateless, deterministic output/payment manager.
#[derive(PartialEq, Eq, Debug)]
pub struct Scheduler<N: Network> {
/// Deterministic output/payment manager.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Scheduler<N: UtxoNetwork> {
key: <N::Curve as Ciphersuite>::G,
coin: Coin,
@@ -46,7 +47,7 @@ fn scheduler_key<D: Db, G: GroupEncoding>(key: &G) -> Vec<u8> {
D::key(b"SCHEDULER", b"scheduler", key.to_bytes())
}
impl<N: Network> Scheduler<N> {
impl<N: UtxoNetwork<Scheduler = Self>> Scheduler<N> {
pub fn empty(&self) -> bool {
self.queued_plans.is_empty() &&
self.plans.is_empty() &&
@@ -144,8 +145,18 @@ impl<N: Network> Scheduler<N> {
pub fn new<D: Db>(
txn: &mut D::Transaction<'_>,
key: <N::Curve as Ciphersuite>::G,
coin: Coin,
network: NetworkId,
) -> Self {
assert!(N::branch_address(key).is_some());
assert!(N::change_address(key).is_some());
assert!(N::forward_address(key).is_some());
let coin = {
let coins = network.coins();
assert_eq!(coins.len(), 1);
coins[0]
};
let res = Scheduler {
key,
coin,
@@ -159,7 +170,17 @@ impl<N: Network> Scheduler<N> {
res
}
pub fn from_db<D: Db>(db: &D, key: <N::Curve as Ciphersuite>::G, coin: Coin) -> io::Result<Self> {
pub fn from_db<D: Db>(
db: &D,
key: <N::Curve as Ciphersuite>::G,
network: NetworkId,
) -> io::Result<Self> {
let coin = {
let coins = network.coins();
assert_eq!(coins.len(), 1);
coins[0]
};
let scheduler = db.get(scheduler_key::<D, _>(&key)).unwrap_or_else(|| {
panic!("loading scheduler from DB without scheduler for {}", hex::encode(key.to_bytes()))
});
@@ -201,7 +222,7 @@ impl<N: Network> Scheduler<N> {
amount
};
let branch_address = N::branch_address(self.key);
let branch_address = N::branch_address(self.key).unwrap();
// If we have more payments than we can handle in a single TX, create plans for them
// TODO2: This isn't perfect. For 258 outputs, and a MAX_OUTPUTS of 16, this will create:
@@ -237,7 +258,8 @@ impl<N: Network> Scheduler<N> {
key: self.key,
inputs,
payments,
change: Some(N::change_address(key_for_any_change)).filter(|_| change),
change: Some(N::change_address(key_for_any_change).unwrap()).filter(|_| change),
scheduler_addendum: (),
}
}
@@ -305,7 +327,7 @@ impl<N: Network> Scheduler<N> {
its *own* branch address, since created_output is called on the signer's Scheduler.
*/
{
let branch_address = N::branch_address(self.key);
let branch_address = N::branch_address(self.key).unwrap();
payments =
payments.drain(..).filter(|payment| payment.address != branch_address).collect::<Vec<_>>();
}
@@ -357,7 +379,8 @@ impl<N: Network> Scheduler<N> {
key: self.key,
inputs: chunk,
payments: vec![],
change: Some(N::change_address(key_for_any_change)),
change: Some(N::change_address(key_for_any_change).unwrap()),
scheduler_addendum: (),
})
}
@@ -403,7 +426,8 @@ impl<N: Network> Scheduler<N> {
key: self.key,
inputs: self.utxos.drain(..).collect::<Vec<_>>(),
payments: vec![],
change: Some(N::change_address(key_for_any_change)),
change: Some(N::change_address(key_for_any_change).unwrap()),
scheduler_addendum: (),
});
}
@@ -435,9 +459,6 @@ impl<N: Network> Scheduler<N> {
// Note a branch output as having been created, with the amount it was actually created with,
// or not having been created due to being too small
// This can be called whenever, so long as it's properly ordered
// (it's independent to Serai/the chain we're scheduling over, yet still expects outputs to be
// created in the same order Plans are returned in)
pub fn created_output<D: Db>(
&mut self,
txn: &mut D::Transaction<'_>,
@@ -501,3 +522,106 @@ impl<N: Network> Scheduler<N> {
txn.put(scheduler_key::<D, _>(&self.key), self.serialize());
}
}
impl<N: UtxoNetwork<Scheduler = Self>> SchedulerTrait<N> for Scheduler<N> {
type Addendum = ();
/// Check if this Scheduler is empty.
fn empty(&self) -> bool {
Scheduler::empty(self)
}
/// Create a new Scheduler.
fn new<D: Db>(
txn: &mut D::Transaction<'_>,
key: <N::Curve as Ciphersuite>::G,
network: NetworkId,
) -> Self {
Scheduler::new::<D>(txn, key, network)
}
/// Load a Scheduler from the DB.
fn from_db<D: Db>(
db: &D,
key: <N::Curve as Ciphersuite>::G,
network: NetworkId,
) -> io::Result<Self> {
Scheduler::from_db::<D>(db, key, network)
}
/// Check if a branch is usable.
fn can_use_branch(&self, balance: Balance) -> bool {
Scheduler::can_use_branch(self, balance)
}
/// Schedule a series of outputs/payments.
fn schedule<D: Db>(
&mut self,
txn: &mut D::Transaction<'_>,
utxos: Vec<N::Output>,
payments: Vec<Payment<N>>,
key_for_any_change: <N::Curve as Ciphersuite>::G,
force_spend: bool,
) -> Vec<Plan<N>> {
Scheduler::schedule::<D>(self, txn, utxos, payments, key_for_any_change, force_spend)
}
/// Consume all payments still pending within this Scheduler, without scheduling them.
fn consume_payments<D: Db>(&mut self, txn: &mut D::Transaction<'_>) -> Vec<Payment<N>> {
Scheduler::consume_payments::<D>(self, txn)
}
/// Note a branch output as having been created, with the amount it was actually created with,
/// or not having been created due to being too small.
// TODO: Move this to Balance.
fn created_output<D: Db>(
&mut self,
txn: &mut D::Transaction<'_>,
expected: u64,
actual: Option<u64>,
) {
Scheduler::created_output::<D>(self, txn, expected, actual)
}
fn refund_plan<D: Db>(
&mut self,
_: &mut D::Transaction<'_>,
output: N::Output,
refund_to: N::Address,
) -> Plan<N> {
Plan {
key: output.key(),
// Uses a payment as this will still be successfully sent due to fee amortization,
// and because change is currently always a Serai key
payments: vec![Payment { address: refund_to, data: None, balance: output.balance() }],
inputs: vec![output],
change: None,
scheduler_addendum: (),
}
}
fn shim_forward_plan(output: N::Output, to: <N::Curve as Ciphersuite>::G) -> Option<Plan<N>> {
Some(Plan {
key: output.key(),
payments: vec![Payment {
address: N::forward_address(to).unwrap(),
data: None,
balance: output.balance(),
}],
inputs: vec![output],
change: None,
scheduler_addendum: (),
})
}
fn forward_plan<D: Db>(
&mut self,
_: &mut D::Transaction<'_>,
output: N::Output,
to: <N::Curve as Ciphersuite>::G,
) -> Option<Plan<N>> {
assert_eq!(self.key, output.key());
// Call the shim, as the shim already returns the actual forward plan
Self::shim_forward_plan(output, to)
}
}


@@ -52,9 +52,10 @@ use crate::{
networks::{
NetworkError, Block as BlockTrait, OutputType, Output as OutputTrait,
Transaction as TransactionTrait, SignableTransaction as SignableTransactionTrait,
Eventuality as EventualityTrait, EventualitiesTracker, Network,
Eventuality as EventualityTrait, EventualitiesTracker, Network, UtxoNetwork,
},
Payment,
multisigs::scheduler::utxo::Scheduler,
};
#[derive(Clone, PartialEq, Eq, Debug)]
@@ -178,14 +179,6 @@ impl TransactionTrait<Bitcoin> for Transaction {
hash.reverse();
hash
}
fn serialize(&self) -> Vec<u8> {
let mut buf = vec![];
self.consensus_encode(&mut buf).unwrap();
buf
}
fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
Transaction::consensus_decode(reader).map_err(|e| io::Error::other(format!("{e}")))
}
#[cfg(test)]
async fn fee(&self, network: &Bitcoin) -> u64 {
@@ -209,7 +202,23 @@ impl TransactionTrait<Bitcoin> for Transaction {
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Eventuality([u8; 32]);
#[derive(Clone, PartialEq, Eq, Default, Debug)]
pub struct EmptyClaim;
impl AsRef<[u8]> for EmptyClaim {
fn as_ref(&self) -> &[u8] {
&[]
}
}
impl AsMut<[u8]> for EmptyClaim {
fn as_mut(&mut self) -> &mut [u8] {
&mut []
}
}
impl EventualityTrait for Eventuality {
type Claim = EmptyClaim;
type Completion = Transaction;
fn lookup(&self) -> Vec<u8> {
self.0.to_vec()
}
@@ -224,6 +233,18 @@ impl EventualityTrait for Eventuality {
fn serialize(&self) -> Vec<u8> {
self.0.to_vec()
}
fn claim(_: &Transaction) -> EmptyClaim {
EmptyClaim
}
fn serialize_completion(completion: &Transaction) -> Vec<u8> {
let mut buf = vec![];
completion.consensus_encode(&mut buf).unwrap();
buf
}
fn read_completion<R: io::Read>(reader: &mut R) -> io::Result<Transaction> {
Transaction::consensus_decode(reader).map_err(|e| io::Error::other(format!("{e}")))
}
}
#[derive(Clone, Debug)]
@@ -374,8 +395,12 @@ impl Bitcoin {
for input in &tx.input {
let mut input_tx = input.previous_output.txid.to_raw_hash().to_byte_array();
input_tx.reverse();
in_value += self.get_transaction(&input_tx).await?.output
[usize::try_from(input.previous_output.vout).unwrap()]
in_value += self
.rpc
.get_transaction(&input_tx)
.await
.map_err(|_| NetworkError::ConnectionError)?
.output[usize::try_from(input.previous_output.vout).unwrap()]
.value
.to_sat();
}
@@ -537,6 +562,25 @@ impl Bitcoin {
}
}
// Bitcoin has a max weight of 400,000 (MAX_STANDARD_TX_WEIGHT)
// A non-SegWit TX will have 4 weight units per byte, leaving a max size of 100,000 bytes
// While our inputs are entirely SegWit, such fine tuning is not necessary and could create
// issues in the future (if the size decreases or we misevaluate it)
// It also offers a minimal amount of benefit when we are able to logarithmically accumulate
// inputs
// For 128-byte inputs (36-byte output specification, 64-byte signature, whatever overhead) and
// 64-byte outputs (40-byte script, 8-byte amount, whatever overhead), they together take up 192
// bytes
// 100,000 / 192 = 520
// 520 * 192 leaves 160 bytes of overhead for the transaction structure itself
const MAX_INPUTS: usize = 520;
const MAX_OUTPUTS: usize = 520;
fn address_from_key(key: ProjectivePoint) -> Address {
Address::new(BAddress::<NetworkChecked>::new(BNetwork::Bitcoin, address_payload(key).unwrap()))
.unwrap()
}
#[async_trait]
impl Network for Bitcoin {
type Curve = Secp256k1;
@@ -549,6 +593,8 @@ impl Network for Bitcoin {
type Eventuality = Eventuality;
type TransactionMachine = TransactionMachine;
type Scheduler = Scheduler<Bitcoin>;
type Address = Address;
const NETWORK: NetworkId = NetworkId::Bitcoin;
@@ -598,19 +644,7 @@ impl Network for Bitcoin {
// aggregation TX
const COST_TO_AGGREGATE: u64 = 800;
// Bitcoin has a max weight of 400,000 (MAX_STANDARD_TX_WEIGHT)
// A non-SegWit TX will have 4 weight units per byte, leaving a max size of 100,000 bytes
// While our inputs are entirely SegWit, such fine tuning is not necessary and could create
// issues in the future (if the size decreases or we misevaluate it)
// It also offers a minimal amount of benefit when we are able to logarithmically accumulate
// inputs
// For 128-byte inputs (36-byte output specification, 64-byte signature, whatever overhead) and
// 64-byte outputs (40-byte script, 8-byte amount, whatever overhead), they together take up 192
// bytes
// 100,000 / 192 = 520
// 520 * 192 leaves 160 bytes of overhead for the transaction structure itself
const MAX_INPUTS: usize = 520;
const MAX_OUTPUTS: usize = 520;
const MAX_OUTPUTS: usize = MAX_OUTPUTS;
fn tweak_keys(keys: &mut ThresholdKeys<Self::Curve>) {
*keys = tweak_keys(keys);
@@ -618,24 +652,24 @@ impl Network for Bitcoin {
scanner(keys.group_key());
}
fn external_address(key: ProjectivePoint) -> Address {
Address::new(BAddress::<NetworkChecked>::new(BNetwork::Bitcoin, address_payload(key).unwrap()))
.unwrap()
#[cfg(test)]
async fn external_address(&self, key: ProjectivePoint) -> Address {
address_from_key(key)
}
fn branch_address(key: ProjectivePoint) -> Address {
fn branch_address(key: ProjectivePoint) -> Option<Address> {
let (_, offsets, _) = scanner(key);
Self::external_address(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Branch]))
Some(address_from_key(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Branch])))
}
fn change_address(key: ProjectivePoint) -> Address {
fn change_address(key: ProjectivePoint) -> Option<Address> {
let (_, offsets, _) = scanner(key);
Self::external_address(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Change]))
Some(address_from_key(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Change])))
}
fn forward_address(key: ProjectivePoint) -> Address {
fn forward_address(key: ProjectivePoint) -> Option<Address> {
let (_, offsets, _) = scanner(key);
Self::external_address(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Forwarded]))
Some(address_from_key(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Forwarded])))
}
async fn get_latest_block_number(&self) -> Result<usize, NetworkError> {
@@ -682,7 +716,7 @@ impl Network for Bitcoin {
spent_tx.reverse();
let mut tx;
while {
tx = self.get_transaction(&spent_tx).await;
tx = self.rpc.get_transaction(&spent_tx).await;
tx.is_err()
} {
log::error!("couldn't get transaction from bitcoin node: {tx:?}");
@@ -710,7 +744,7 @@ impl Network for Bitcoin {
&self,
eventualities: &mut EventualitiesTracker<Eventuality>,
block: &Self::Block,
) -> HashMap<[u8; 32], (usize, Transaction)> {
) -> HashMap<[u8; 32], (usize, [u8; 32], Transaction)> {
let mut res = HashMap::new();
if eventualities.map.is_empty() {
return res;
@@ -719,11 +753,11 @@ impl Network for Bitcoin {
fn check_block(
eventualities: &mut EventualitiesTracker<Eventuality>,
block: &Block,
res: &mut HashMap<[u8; 32], (usize, Transaction)>,
res: &mut HashMap<[u8; 32], (usize, [u8; 32], Transaction)>,
) {
for tx in &block.txdata[1 ..] {
if let Some((plan, _)) = eventualities.map.remove(tx.id().as_slice()) {
res.insert(plan, (eventualities.block_number, tx.clone()));
res.insert(plan, (eventualities.block_number, tx.id(), tx.clone()));
}
}
@@ -770,7 +804,6 @@ impl Network for Bitcoin {
async fn needed_fee(
&self,
block_number: usize,
_: &[u8; 32],
inputs: &[Output],
payments: &[Payment<Self>],
change: &Option<Address>,
@@ -787,9 +820,11 @@ impl Network for Bitcoin {
&self,
block_number: usize,
plan_id: &[u8; 32],
_key: ProjectivePoint,
inputs: &[Output],
payments: &[Payment<Self>],
change: &Option<Address>,
(): &(),
) -> Result<Option<(Self::SignableTransaction, Self::Eventuality)>, NetworkError> {
Ok(self.make_signable_transaction(block_number, inputs, payments, change, false).await?.map(
|signable| {
@@ -803,7 +838,7 @@ impl Network for Bitcoin {
))
}
async fn attempt_send(
async fn attempt_sign(
&self,
keys: ThresholdKeys<Self::Curve>,
transaction: Self::SignableTransaction,
@@ -817,7 +852,7 @@ impl Network for Bitcoin {
)
}
async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<(), NetworkError> {
async fn publish_completion(&self, tx: &Transaction) -> Result<(), NetworkError> {
match self.rpc.send_raw_transaction(tx).await {
Ok(_) => (),
Err(RpcError::ConnectionError) => Err(NetworkError::ConnectionError)?,
@@ -828,12 +863,14 @@ impl Network for Bitcoin {
Ok(())
}
async fn get_transaction(&self, id: &[u8; 32]) -> Result<Transaction, NetworkError> {
self.rpc.get_transaction(id).await.map_err(|_| NetworkError::ConnectionError)
}
fn confirm_completion(&self, eventuality: &Self::Eventuality, tx: &Transaction) -> bool {
eventuality.0 == tx.id()
async fn confirm_completion(
&self,
eventuality: &Self::Eventuality,
_: &EmptyClaim,
) -> Result<Option<Transaction>, NetworkError> {
Ok(Some(
self.rpc.get_transaction(&eventuality.0).await.map_err(|_| NetworkError::ConnectionError)?,
))
}
#[cfg(test)]
@@ -841,6 +878,20 @@ impl Network for Bitcoin {
self.rpc.get_block_number(id).await.unwrap()
}
#[cfg(test)]
async fn check_eventuality_by_claim(
&self,
eventuality: &Self::Eventuality,
_: &EmptyClaim,
) -> bool {
self.rpc.get_transaction(&eventuality.0).await.is_ok()
}
#[cfg(test)]
async fn get_transaction_by_eventuality(&self, _: usize, id: &Eventuality) -> Transaction {
self.rpc.get_transaction(&id.0).await.unwrap()
}
#[cfg(test)]
async fn mine_block(&self) {
self
@@ -892,3 +943,7 @@ impl Network for Bitcoin {
self.get_block(block).await.unwrap()
}
}
impl UtxoNetwork for Bitcoin {
const MAX_INPUTS: usize = MAX_INPUTS;
}
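The MAX_INPUTS/MAX_OUTPUTS constants in the Bitcoin file come from the comment's arithmetic over MAX_STANDARD_TX_WEIGHT. This sketch re-derives the numbers; the 128-byte input and 64-byte output sizes are the comment's estimates, not exact serializations:

```rust
// Re-derivation of the 520 input/output cap from Bitcoin's standardness limit.
const MAX_STANDARD_TX_WEIGHT: u64 = 400_000;

fn max_io_pairs(input_size: u64, output_size: u64) -> u64 {
  // Non-SegWit bytes cost 4 weight units each, giving a 100,000-byte ceiling
  (MAX_STANDARD_TX_WEIGHT / 4) / (input_size + output_size)
}

fn main() {
  // 128-byte inputs + 64-byte outputs = 192 bytes per pair
  assert_eq!(max_io_pairs(128, 64), 520);
  // 520 pairs use 99,840 bytes, leaving 160 bytes for the TX structure itself
  assert_eq!(100_000 - (520 * 192), 160);
  println!("max pairs: {}", max_io_pairs(128, 64));
}
```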


@@ -0,0 +1,827 @@
use core::{fmt::Debug, time::Duration};
use std::{
sync::Arc,
collections::{HashSet, HashMap},
io,
};
use async_trait::async_trait;
use ciphersuite::{group::GroupEncoding, Ciphersuite, Secp256k1};
use frost::ThresholdKeys;
use ethereum_serai::{
alloy_core::primitives::U256,
alloy_rpc_types::{BlockNumberOrTag, Transaction},
alloy_simple_request_transport::SimpleRequest,
alloy_rpc_client::ClientBuilder,
alloy_provider::{Provider, RootProvider},
crypto::{PublicKey, Signature},
deployer::Deployer,
router::{Router, Coin as EthereumCoin, InInstruction as EthereumInInstruction},
machine::*,
};
#[cfg(test)]
use ethereum_serai::alloy_core::primitives::B256;
use tokio::{
time::sleep,
sync::{RwLock, RwLockReadGuard},
};
use serai_client::{
primitives::{Coin, Amount, Balance, NetworkId},
validator_sets::primitives::Session,
};
use crate::{
Db, Payment,
networks::{
OutputType, Output, Transaction as TransactionTrait, SignableTransaction, Block,
Eventuality as EventualityTrait, EventualitiesTracker, NetworkError, Network,
},
key_gen::NetworkKeyDb,
multisigs::scheduler::{
Scheduler as SchedulerTrait,
smart_contract::{Addendum, Scheduler},
},
};
#[cfg(not(test))]
const DAI: [u8; 20] =
match const_hex::const_decode_to_array(b"0x6B175474E89094C44Da98b954EedeAC495271d0F") {
Ok(res) => res,
Err(_) => panic!("invalid non-test DAI hex address"),
};
#[cfg(test)] // TODO
const DAI: [u8; 20] =
match const_hex::const_decode_to_array(b"0000000000000000000000000000000000000000") {
Ok(res) => res,
Err(_) => panic!("invalid test DAI hex address"),
};
fn coin_to_serai_coin(coin: &EthereumCoin) -> Option<Coin> {
match coin {
EthereumCoin::Ether => Some(Coin::Ether),
EthereumCoin::Erc20(token) => {
if *token == DAI {
return Some(Coin::Dai);
}
None
}
}
}
fn amount_to_serai_amount(coin: Coin, amount: U256) -> Amount {
assert_eq!(coin.network(), NetworkId::Ethereum);
assert_eq!(coin.decimals(), 8);
// Remove 10 decimals so we go from 18 decimals to 8 decimals
let divisor = U256::from(10_000_000_000u64);
// This is valid up to 184b, which is assumed for the coins allowed
Amount(u64::try_from(amount / divisor).unwrap())
}
fn balance_to_ethereum_amount(balance: Balance) -> U256 {
assert_eq!(balance.coin.network(), NetworkId::Ethereum);
assert_eq!(balance.coin.decimals(), 8);
// Restore 10 decimals so we go from 8 decimals to 18 decimals
let factor = U256::from(10_000_000_000u64);
U256::from(balance.amount.0) * factor
}
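amount_to_serai_amount and balance_to_ethereum_amount scale between Serai's 8 decimals and Ethereum's 18 by a factor of 10^10. A self-contained sketch of the same scaling, using u128 in place of alloy's U256:

```rust
// Sketch of the 8 <-> 18 decimal conversion above, with u128 standing in for
// U256. Function names are illustrative.
const DECIMAL_SHIFT: u128 = 10_000_000_000; // 10^10

fn to_serai_amount(wei: u128) -> u64 {
  // Valid while amounts stay under u64::MAX Serai units, per the code's
  // "valid up to 184b" assumption
  u64::try_from(wei / DECIMAL_SHIFT).unwrap()
}

fn to_ethereum_amount(serai: u64) -> u128 {
  u128::from(serai) * DECIMAL_SHIFT
}

fn main() {
  // 1 ETH = 10^18 wei = 10^8 Serai units
  assert_eq!(to_serai_amount(1_000_000_000_000_000_000), 100_000_000);
  assert_eq!(to_ethereum_amount(100_000_000), 1_000_000_000_000_000_000);
  // Division truncates: any wei amount under 10^10 is dropped on the way in
  assert_eq!(to_serai_amount(9_999_999_999), 0);
}
```

The truncation in the last assertion is why sub-10^10-wei dust can never round-trip through Serai.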
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Address(pub [u8; 20]);
impl TryFrom<Vec<u8>> for Address {
type Error = ();
fn try_from(bytes: Vec<u8>) -> Result<Address, ()> {
if bytes.len() != 20 {
Err(())?;
}
let mut res = [0; 20];
res.copy_from_slice(&bytes);
Ok(Address(res))
}
}
impl TryInto<Vec<u8>> for Address {
type Error = ();
fn try_into(self) -> Result<Vec<u8>, ()> {
Ok(self.0.to_vec())
}
}
impl ToString for Address {
fn to_string(&self) -> String {
ethereum_serai::alloy_core::primitives::Address::from(self.0).to_string()
}
}
impl SignableTransaction for RouterCommand {
fn fee(&self) -> u64 {
// Return a fee of 0 as we'll handle amortization on our end
0
}
}
#[async_trait]
impl<D: Debug + Db> TransactionTrait<Ethereum<D>> for Transaction {
type Id = [u8; 32];
fn id(&self) -> Self::Id {
self.hash.0
}
#[cfg(test)]
async fn fee(&self, _network: &Ethereum<D>) -> u64 {
// Return a fee of 0 as we'll handle amortization on our end
0
}
}
// We use 32-block Epochs as this Network's Block type.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Epoch {
// The hash of the block which ended the prior Epoch.
prior_end_hash: [u8; 32],
// The first block number within this Epoch.
start: u64,
// The hash of the last block within this Epoch.
end_hash: [u8; 32],
// The monotonic time for this Epoch.
time: u64,
}
impl Epoch {
fn end(&self) -> u64 {
self.start + 31
}
}
#[async_trait]
impl<D: Debug + Db> Block<Ethereum<D>> for Epoch {
type Id = [u8; 32];
fn id(&self) -> [u8; 32] {
self.end_hash
}
fn parent(&self) -> [u8; 32] {
self.prior_end_hash
}
async fn time(&self, _: &Ethereum<D>) -> u64 {
self.time
}
}
impl<D: Debug + Db> Output<Ethereum<D>> for EthereumInInstruction {
type Id = [u8; 32];
fn kind(&self) -> OutputType {
OutputType::External
}
fn id(&self) -> Self::Id {
let mut id = [0; 40];
id[.. 32].copy_from_slice(&self.id.0);
id[32 ..].copy_from_slice(&self.id.1.to_le_bytes());
*ethereum_serai::alloy_core::primitives::keccak256(id)
}
fn tx_id(&self) -> [u8; 32] {
self.id.0
}
fn key(&self) -> <Secp256k1 as Ciphersuite>::G {
self.key_at_end_of_block
}
fn presumed_origin(&self) -> Option<Address> {
Some(Address(self.from))
}
fn balance(&self) -> Balance {
let coin = coin_to_serai_coin(&self.coin).unwrap_or_else(|| {
panic!(
"requesting coin for an EthereumInInstruction with a coin {}",
"we don't handle. this never should have been yielded"
)
});
Balance { coin, amount: amount_to_serai_amount(coin, self.amount) }
}
fn data(&self) -> &[u8] {
&self.data
}
fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
EthereumInInstruction::write(self, writer)
}
fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
EthereumInInstruction::read(reader)
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Claim {
signature: [u8; 64],
}
impl AsRef<[u8]> for Claim {
fn as_ref(&self) -> &[u8] {
&self.signature
}
}
impl AsMut<[u8]> for Claim {
fn as_mut(&mut self) -> &mut [u8] {
&mut self.signature
}
}
impl Default for Claim {
fn default() -> Self {
Self { signature: [0; 64] }
}
}
impl From<&Signature> for Claim {
fn from(sig: &Signature) -> Self {
Self { signature: sig.to_bytes() }
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Eventuality(PublicKey, RouterCommand);
impl EventualityTrait for Eventuality {
type Claim = Claim;
type Completion = SignedRouterCommand;
fn lookup(&self) -> Vec<u8> {
match self.1 {
RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => {
nonce.as_le_bytes().to_vec()
}
}
}
fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
let point = Secp256k1::read_G(reader)?;
let command = RouterCommand::read(reader)?;
Ok(Eventuality(
PublicKey::new(point).ok_or(io::Error::other("unusable key within Eventuality"))?,
command,
))
}
fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
res.extend(self.0.point().to_bytes().as_slice());
self.1.write(&mut res).unwrap();
res
}
fn claim(completion: &Self::Completion) -> Self::Claim {
Claim::from(completion.signature())
}
fn serialize_completion(completion: &Self::Completion) -> Vec<u8> {
let mut res = vec![];
completion.write(&mut res).unwrap();
res
}
fn read_completion<R: io::Read>(reader: &mut R) -> io::Result<Self::Completion> {
SignedRouterCommand::read(reader)
}
}
#[derive(Clone, Debug)]
pub struct Ethereum<D: Debug + Db> {
// This DB is solely used to access the first key generated, as needed to determine the Router's
// address. Accordingly, all methods present are consistent to a Serai chain with a finalized
// first key (regardless of local state), and this is safe.
db: D,
provider: Arc<RootProvider<SimpleRequest>>,
deployer: Deployer,
router: Arc<RwLock<Option<Router>>>,
}
impl<D: Debug + Db> PartialEq for Ethereum<D> {
fn eq(&self, _other: &Ethereum<D>) -> bool {
true
}
}
impl<D: Debug + Db> Ethereum<D> {
pub async fn new(db: D, url: String) -> Self {
let provider = Arc::new(RootProvider::new(
ClientBuilder::default().transport(SimpleRequest::new(url), true),
));
#[cfg(test)] // TODO: Move to test code
provider.raw_request::<_, ()>("evm_setAutomine".into(), false).await.unwrap();
let mut deployer = Deployer::new(provider.clone()).await;
while !matches!(deployer, Ok(Some(_))) {
log::error!("Deployer wasn't deployed yet or networking error");
sleep(Duration::from_secs(5)).await;
deployer = Deployer::new(provider.clone()).await;
}
let deployer = deployer.unwrap().unwrap();
Ethereum { db, provider, deployer, router: Arc::new(RwLock::new(None)) }
}
// Obtain a reference to the Router, sleeping until it's deployed if it hasn't already been.
// This is guaranteed to return Some.
pub async fn router(&self) -> RwLockReadGuard<'_, Option<Router>> {
// If we've already instantiated the Router, return a read reference
{
let router = self.router.read().await;
if router.is_some() {
return router;
}
}
// Instantiate it
let mut router = self.router.write().await;
// If another attempt beat us to it, return
if router.is_some() {
drop(router);
return self.router.read().await;
}
// Get the first key from the DB
let first_key =
NetworkKeyDb::get(&self.db, Session(0)).expect("getting outputs before confirming a key");
let key = Secp256k1::read_G(&mut first_key.as_slice()).unwrap();
let public_key = PublicKey::new(key).unwrap();
// Find the router
let mut found = self.deployer.find_router(self.provider.clone(), &public_key).await;
while !matches!(found, Ok(Some(_))) {
log::error!("Router wasn't deployed yet or networking error");
sleep(Duration::from_secs(5)).await;
found = self.deployer.find_router(self.provider.clone(), &public_key).await;
}
// Set it
*router = Some(found.unwrap().unwrap());
// Downgrade to a read lock
// Explicitly doesn't use `downgrade` so that another pending write txn can realize it's no
// longer necessary
drop(router);
self.router.read().await
}
}
#[async_trait]
impl<D: Debug + Db> Network for Ethereum<D> {
type Curve = Secp256k1;
type Transaction = Transaction;
type Block = Epoch;
type Output = EthereumInInstruction;
type SignableTransaction = RouterCommand;
type Eventuality = Eventuality;
type TransactionMachine = RouterCommandMachine;
type Scheduler = Scheduler<Self>;
type Address = Address;
const NETWORK: NetworkId = NetworkId::Ethereum;
const ID: &'static str = "Ethereum";
const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize = 32 * 12;
const CONFIRMATIONS: usize = 1;
const DUST: u64 = 0; // TODO
const COST_TO_AGGREGATE: u64 = 0;
// TODO: usize::max, with a merkle tree in the router
const MAX_OUTPUTS: usize = 256;
fn tweak_keys(keys: &mut ThresholdKeys<Self::Curve>) {
while PublicKey::new(keys.group_key()).is_none() {
*keys = keys.offset(<Secp256k1 as Ciphersuite>::F::ONE);
}
}
#[cfg(test)]
async fn external_address(&self, _key: <Secp256k1 as Ciphersuite>::G) -> Address {
Address(self.router().await.as_ref().unwrap().address())
}
fn branch_address(_key: <Secp256k1 as Ciphersuite>::G) -> Option<Address> {
None
}
fn change_address(_key: <Secp256k1 as Ciphersuite>::G) -> Option<Address> {
None
}
fn forward_address(_key: <Secp256k1 as Ciphersuite>::G) -> Option<Address> {
None
}
async fn get_latest_block_number(&self) -> Result<usize, NetworkError> {
let actual_number = self
.provider
.get_block(BlockNumberOrTag::Finalized.into(), false)
.await
.map_err(|_| NetworkError::ConnectionError)?
.expect("no blocks were finalized")
.header
.number
.unwrap();
// Error if there hasn't been a full epoch yet
if actual_number < 32 {
Err(NetworkError::ConnectionError)?
}
// If this is 33, the division will return 1, yet 1 is the epoch in progress
let latest_full_epoch = (actual_number / 32).saturating_sub(1);
Ok(latest_full_epoch.try_into().unwrap())
}
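get_latest_block_number maps an L1 block number to the latest complete 32-block Epoch. The mapping can be checked in isolation (the function name is illustrative; the real code errors rather than saturates for blocks before the first full Epoch):

```rust
// Illustrative mirror of the Epoch numbering above: an Epoch spans blocks
// [32n, 32n + 31], and the division's quotient is the Epoch still in progress.
fn latest_full_epoch(block_number: u64) -> u64 {
  (block_number / 32).saturating_sub(1)
}

fn main() {
  // Block 33 sits inside Epoch 1, which is in progress, so Epoch 0 is latest
  assert_eq!(latest_full_epoch(33), 0);
  assert_eq!(latest_full_epoch(63), 0);
  // From block 64 on, Epoch 1 (blocks 32..=63) is the latest full Epoch
  assert_eq!(latest_full_epoch(64), 1);
}
```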
async fn get_block(&self, number: usize) -> Result<Self::Block, NetworkError> {
let latest_finalized = self.get_latest_block_number().await?;
if number > latest_finalized {
Err(NetworkError::ConnectionError)?
}
let start = number * 32;
let prior_end_hash = if start == 0 {
[0; 32]
} else {
self
.provider
.get_block(u64::try_from(start - 1).unwrap().into(), false)
.await
.ok()
.flatten()
.ok_or(NetworkError::ConnectionError)?
.header
.hash
.unwrap()
.into()
};
let end_header = self
.provider
.get_block(u64::try_from(start + 31).unwrap().into(), false)
.await
.ok()
.flatten()
.ok_or(NetworkError::ConnectionError)?
.header;
let end_hash = end_header.hash.unwrap().into();
let time = end_header.timestamp;
Ok(Epoch { prior_end_hash, start: start.try_into().unwrap(), end_hash, time })
}
async fn get_outputs(
&self,
block: &Self::Block,
_: <Secp256k1 as Ciphersuite>::G,
) -> Vec<Self::Output> {
let router = self.router().await;
let router = router.as_ref().unwrap();
// TODO: Top-level transfers
let mut all_events = vec![];
for block in block.start .. (block.start + 32) {
let mut events = router.in_instructions(block, &HashSet::from([DAI])).await;
while let Err(e) = events {
log::error!("couldn't connect to Ethereum node for the Router's events: {e:?}");
sleep(Duration::from_secs(5)).await;
events = router.in_instructions(block, &HashSet::from([DAI])).await;
}
all_events.extend(events.unwrap());
}
for event in &all_events {
assert!(
coin_to_serai_coin(&event.coin).is_some(),
"router yielded events for unrecognized coins"
);
}
all_events
}
async fn get_eventuality_completions(
&self,
eventualities: &mut EventualitiesTracker<Self::Eventuality>,
block: &Self::Block,
) -> HashMap<
[u8; 32],
(
usize,
<Self::Transaction as TransactionTrait<Self>>::Id,
<Self::Eventuality as EventualityTrait>::Completion,
),
> {
let mut res = HashMap::new();
if eventualities.map.is_empty() {
return res;
}
let router = self.router().await;
let router = router.as_ref().unwrap();
let past_scanned_epoch = loop {
match self.get_block(eventualities.block_number).await {
Ok(block) => break block,
Err(e) => log::error!("couldn't get the last scanned block in the tracker: {}", e),
}
sleep(Duration::from_secs(10)).await;
};
assert_eq!(
past_scanned_epoch.start / 32,
u64::try_from(eventualities.block_number).unwrap(),
"assumption of tracker block number's relation to epoch start is incorrect"
);
// Iterate from after the epoch number in the tracker to the end of this epoch
for block_num in (past_scanned_epoch.end() + 1) ..= block.end() {
let executed = loop {
match router.executed_commands(block_num).await {
Ok(executed) => break executed,
Err(e) => log::error!("couldn't get the executed commands in block {block_num}: {e}"),
}
sleep(Duration::from_secs(10)).await;
};
for executed in executed {
let lookup = executed.nonce.to_le_bytes().to_vec();
if let Some((plan_id, eventuality)) = eventualities.map.get(&lookup) {
if let Some(command) =
SignedRouterCommand::new(&eventuality.0, eventuality.1.clone(), &executed.signature)
{
res.insert(*plan_id, (block_num.try_into().unwrap(), executed.tx_id, command));
eventualities.map.remove(&lookup);
}
}
}
}
eventualities.block_number = (block.start / 32).try_into().unwrap();
res
}
async fn needed_fee(
&self,
_block_number: usize,
inputs: &[Self::Output],
_payments: &[Payment<Self>],
_change: &Option<Self::Address>,
) -> Result<Option<u64>, NetworkError> {
assert_eq!(inputs.len(), 0);
// Claim no fee is needed so we can perform amortization ourselves
Ok(Some(0))
}
async fn signable_transaction(
&self,
_block_number: usize,
_plan_id: &[u8; 32],
key: <Self::Curve as Ciphersuite>::G,
inputs: &[Self::Output],
payments: &[Payment<Self>],
change: &Option<Self::Address>,
scheduler_addendum: &<Self::Scheduler as SchedulerTrait<Self>>::Addendum,
) -> Result<Option<(Self::SignableTransaction, Self::Eventuality)>, NetworkError> {
assert_eq!(inputs.len(), 0);
assert!(change.is_none());
let chain_id = self.provider.get_chain_id().await.map_err(|_| NetworkError::ConnectionError)?;
// TODO: Perform fee amortization (in the scheduler?)
// TODO: Make this function internal and have needed_fee properly return None as expected?
// TODO: signable_transaction is written as if it cannot return None when needed_fee returns Some
// TODO: Why can this return None at all if it isn't allowed to return None?
let command = match scheduler_addendum {
Addendum::Nonce(nonce) => RouterCommand::Execute {
chain_id: U256::try_from(chain_id).unwrap(),
nonce: U256::try_from(*nonce).unwrap(),
outs: payments
.iter()
.filter_map(|payment| {
Some(OutInstruction {
target: if let Some(data) = payment.data.as_ref() {
// This introspects the Call serialization format, expecting the first 20 bytes to
// be the address
// This avoids wasting the 20 bytes allocated within the address
let full_data = [payment.address.0.as_slice(), data].concat();
let mut reader = full_data.as_slice();
let mut calls = vec![];
while !reader.is_empty() {
calls.push(Call::read(&mut reader).ok()?)
}
// The above must have executed at least once since reader contains the address
assert_eq!(calls[0].to, payment.address.0);
OutInstructionTarget::Calls(calls)
} else {
OutInstructionTarget::Direct(payment.address.0)
},
value: {
assert_eq!(payment.balance.coin, Coin::Ether); // TODO
balance_to_ethereum_amount(payment.balance)
},
})
})
.collect(),
},
Addendum::RotateTo { nonce, new_key } => {
assert!(payments.is_empty());
RouterCommand::UpdateSeraiKey {
chain_id: U256::try_from(chain_id).unwrap(),
nonce: U256::try_from(*nonce).unwrap(),
key: PublicKey::new(*new_key).expect("new key wasn't a valid ETH public key"),
}
}
};
Ok(Some((
command.clone(),
Eventuality(PublicKey::new(key).expect("key wasn't a valid ETH public key"), command),
)))
}
async fn attempt_sign(
&self,
keys: ThresholdKeys<Self::Curve>,
transaction: Self::SignableTransaction,
) -> Result<Self::TransactionMachine, NetworkError> {
Ok(
RouterCommandMachine::new(keys, transaction)
.expect("keys weren't usable to sign router commands"),
)
}
async fn publish_completion(
&self,
completion: &<Self::Eventuality as EventualityTrait>::Completion,
) -> Result<(), NetworkError> {
// Publish this to the dedicated TX server for a solver to actually publish
#[cfg(not(test))]
{
let _ = completion;
todo!("publish completion to the dedicated transaction server");
}
// Publish this using a dummy account we fund with magic RPC commands
#[cfg(test)]
{
use rand_core::OsRng;
use ciphersuite::group::ff::Field;
let key = <Secp256k1 as Ciphersuite>::F::random(&mut OsRng);
let address = ethereum_serai::crypto::address(&(Secp256k1::generator() * key));
// Set a 1.1 ETH balance
self
.provider
.raw_request::<_, ()>(
"anvil_setBalance".into(),
[Address(address).to_string(), "1100000000000000000".into()],
)
.await
.unwrap();
let router = self.router().await;
let router = router.as_ref().unwrap();
let mut tx = match completion.command() {
RouterCommand::UpdateSeraiKey { key, .. } => {
router.update_serai_key(key, completion.signature())
}
RouterCommand::Execute { outs, .. } => router.execute(
&outs.iter().cloned().map(Into::into).collect::<Vec<_>>(),
completion.signature(),
),
};
tx.gas_price = 100_000_000_000u128;
use ethereum_serai::alloy_consensus::SignableTransaction;
let sig =
k256::ecdsa::SigningKey::from(k256::elliptic_curve::NonZeroScalar::new(key).unwrap())
.sign_prehash_recoverable(tx.signature_hash().as_ref())
.unwrap();
let mut bytes = vec![];
tx.encode_with_signature_fields(&sig.into(), &mut bytes);
let _ = self.provider.send_raw_transaction(&bytes).await.ok().unwrap();
Ok(())
}
}
async fn confirm_completion(
&self,
eventuality: &Self::Eventuality,
claim: &<Self::Eventuality as EventualityTrait>::Claim,
) -> Result<Option<<Self::Eventuality as EventualityTrait>::Completion>, NetworkError> {
Ok(SignedRouterCommand::new(&eventuality.0, eventuality.1.clone(), &claim.signature))
}
#[cfg(test)]
async fn get_block_number(&self, id: &<Self::Block as Block<Self>>::Id) -> usize {
self
.provider
.get_block(B256::from(*id).into(), false)
.await
.unwrap()
.unwrap()
.header
.number
.unwrap()
.try_into()
.unwrap()
}
#[cfg(test)]
async fn check_eventuality_by_claim(
&self,
eventuality: &Self::Eventuality,
claim: &<Self::Eventuality as EventualityTrait>::Claim,
) -> bool {
SignedRouterCommand::new(&eventuality.0, eventuality.1.clone(), &claim.signature).is_some()
}
#[cfg(test)]
async fn get_transaction_by_eventuality(
&self,
block: usize,
eventuality: &Self::Eventuality,
) -> Self::Transaction {
match eventuality.1 {
RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => {
let router = self.router().await;
let router = router.as_ref().unwrap();
let block = u64::try_from(block).unwrap();
let filter = router
.key_updated_filter()
.from_block(block * 32)
.to_block(((block + 1) * 32) - 1)
.topic1(nonce);
let logs = self.provider.get_logs(&filter).await.unwrap();
if let Some(log) = logs.first() {
return self
.provider
.get_transaction_by_hash(log.clone().transaction_hash.unwrap())
.await
.unwrap();
};
let filter = router
.executed_filter()
.from_block(block * 32)
.to_block(((block + 1) * 32) - 1)
.topic1(nonce);
let logs = self.provider.get_logs(&filter).await.unwrap();
self.provider.get_transaction_by_hash(logs[0].transaction_hash.unwrap()).await.unwrap()
}
}
}
#[cfg(test)]
async fn mine_block(&self) {
self.provider.raw_request::<_, ()>("anvil_mine".into(), [32]).await.unwrap();
}
#[cfg(test)]
async fn test_send(&self, send_to: Self::Address) -> Self::Block {
use rand_core::OsRng;
use ciphersuite::group::ff::Field;
let key = <Secp256k1 as Ciphersuite>::F::random(&mut OsRng);
let address = ethereum_serai::crypto::address(&(Secp256k1::generator() * key));
// Set a 1.1 ETH balance
self
.provider
.raw_request::<_, ()>(
"anvil_setBalance".into(),
[Address(address).to_string(), "1100000000000000000".into()],
)
.await
.unwrap();
let tx = ethereum_serai::alloy_consensus::TxLegacy {
chain_id: None,
nonce: 0,
gas_price: 100_000_000_000u128,
gas_limit: 210_000u128,
to: ethereum_serai::alloy_core::primitives::TxKind::Call(send_to.0.into()),
// 1 ETH
value: U256::from_str_radix("1000000000000000000", 10).unwrap(),
input: vec![].into(),
};
use ethereum_serai::alloy_consensus::SignableTransaction;
let sig = k256::ecdsa::SigningKey::from(k256::elliptic_curve::NonZeroScalar::new(key).unwrap())
.sign_prehash_recoverable(tx.signature_hash().as_ref())
.unwrap();
let mut bytes = vec![];
tx.encode_with_signature_fields(&sig.into(), &mut bytes);
let pending_tx = self.provider.send_raw_transaction(&bytes).await.ok().unwrap();
// Mine an epoch containing this TX
self.mine_block().await;
assert!(pending_tx.get_receipt().await.unwrap().status());
// Yield the freshly mined block
self.get_block(self.get_latest_block_number().await.unwrap()).await.unwrap()
}
}
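The 32s scattered through this implementation come from treating each run of 32 Ethereum blocks as one scanning epoch, so log filters cover `block * 32` through `((block + 1) * 32) - 1` and the tracker records `block.start / 32`. A minimal sketch of that arithmetic (`EPOCH_LEN` and the helper names are illustrative, not items from the codebase):

```rust
// An assumption matching the hardcoded 32 used by the Ethereum integration.
const EPOCH_LEN: u64 = 32;

// The epoch an individual block belongs to, as used for the tracker.
fn epoch_of(block: u64) -> u64 {
  block / EPOCH_LEN
}

// The inclusive range of blocks an epoch spans, as used for log filters.
fn epoch_range(epoch: u64) -> (u64, u64) {
  (epoch * EPOCH_LEN, ((epoch + 1) * EPOCH_LEN) - 1)
}

fn main() {
  // Epoch 0 covers blocks 0 ..= 31, epoch 1 covers 32 ..= 63, and so on
  assert_eq!(epoch_range(1), (32, 63));
  assert_eq!(epoch_of(63), 1);
  assert_eq!(epoch_of(64), 2);
}
```

Because epochs are fixed-width and non-overlapping, a log found by one epoch's filter can never be attributed to another epoch.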

View File

@@ -21,12 +21,17 @@ pub mod bitcoin;
#[cfg(feature = "bitcoin")]
pub use self::bitcoin::Bitcoin;
#[cfg(feature = "ethereum")]
pub mod ethereum;
#[cfg(feature = "ethereum")]
pub use ethereum::Ethereum;
#[cfg(feature = "monero")]
pub mod monero;
#[cfg(feature = "monero")]
pub use monero::Monero;
use crate::{Payment, Plan};
use crate::{Payment, Plan, multisigs::scheduler::Scheduler};
#[derive(Clone, Copy, Error, Debug)]
pub enum NetworkError {
@@ -105,7 +110,7 @@ pub trait Output<N: Network>: Send + Sync + Sized + Clone + PartialEq + Eq + Deb
fn kind(&self) -> OutputType;
fn id(&self) -> Self::Id;
fn tx_id(&self) -> <N::Transaction as Transaction<N>>::Id;
fn tx_id(&self) -> <N::Transaction as Transaction<N>>::Id; // TODO: Review use of this
fn key(&self) -> <N::Curve as Ciphersuite>::G;
fn presumed_origin(&self) -> Option<N::Address>;
@@ -118,25 +123,33 @@ pub trait Output<N: Network>: Send + Sync + Sized + Clone + PartialEq + Eq + Deb
}
#[async_trait]
pub trait Transaction<N: Network>: Send + Sync + Sized + Clone + Debug {
pub trait Transaction<N: Network>: Send + Sync + Sized + Clone + PartialEq + Debug {
type Id: 'static + Id;
fn id(&self) -> Self::Id;
fn serialize(&self) -> Vec<u8>;
fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
// TODO: Move to Balance
#[cfg(test)]
async fn fee(&self, network: &N) -> u64;
}
pub trait SignableTransaction: Send + Sync + Clone + Debug {
// TODO: Move to Balance
fn fee(&self) -> u64;
}
pub trait Eventuality: Send + Sync + Clone + Debug {
pub trait Eventuality: Send + Sync + Clone + PartialEq + Debug {
type Claim: Send + Sync + Clone + PartialEq + Default + AsRef<[u8]> + AsMut<[u8]> + Debug;
type Completion: Send + Sync + Clone + PartialEq + Debug;
fn lookup(&self) -> Vec<u8>;
fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
fn serialize(&self) -> Vec<u8>;
fn claim(completion: &Self::Completion) -> Self::Claim;
// TODO: Make a dedicated Completion trait
fn serialize_completion(completion: &Self::Completion) -> Vec<u8>;
fn read_completion<R: io::Read>(reader: &mut R) -> io::Result<Self::Completion>;
}
#[derive(Clone, PartialEq, Eq, Debug)]
@@ -211,7 +224,7 @@ fn drop_branches<N: Network>(
) -> Vec<PostFeeBranch> {
let mut branch_outputs = vec![];
for payment in payments {
if payment.address == N::branch_address(key) {
if Some(&payment.address) == N::branch_address(key).as_ref() {
branch_outputs.push(PostFeeBranch { expected: payment.balance.amount.0, actual: None });
}
}
@@ -227,12 +240,12 @@ pub struct PreparedSend<N: Network> {
}
#[async_trait]
pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
pub trait Network: 'static + Send + Sync + Clone + PartialEq + Debug {
/// The elliptic curve used for this network.
type Curve: Curve;
/// The type representing the transaction for this network.
type Transaction: Transaction<Self>;
type Transaction: Transaction<Self>; // TODO: Review use of this
/// The type representing the block for this network.
type Block: Block<Self>;
@@ -246,7 +259,12 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
/// This must be binding to both the outputs expected and the plan ID.
type Eventuality: Eventuality;
/// The FROST machine to sign a transaction.
type TransactionMachine: PreprocessMachine<Signature = Self::Transaction>;
type TransactionMachine: PreprocessMachine<
Signature = <Self::Eventuality as Eventuality>::Completion,
>;
/// The scheduler for this network.
type Scheduler: Scheduler<Self>;
/// The type representing an address.
// This should NOT be a String, yet a tailored type representing an efficient binary encoding,
@@ -269,10 +287,6 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize;
/// The amount of confirmations required to consider a block 'final'.
const CONFIRMATIONS: usize;
/// The maximum amount of inputs which will fit in a TX.
/// This should be equal to MAX_OUTPUTS unless one is specifically limited.
/// A TX with MAX_INPUTS and MAX_OUTPUTS must not exceed the max size.
const MAX_INPUTS: usize;
/// The maximum amount of outputs which will fit in a TX.
/// This should be equal to MAX_INPUTS unless one is specifically limited.
/// A TX with MAX_INPUTS and MAX_OUTPUTS must not exceed the max size.
@@ -293,13 +307,16 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
fn tweak_keys(key: &mut ThresholdKeys<Self::Curve>);
/// Address for the given group key to receive external coins to.
fn external_address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
#[cfg(test)]
async fn external_address(&self, key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
/// Address for the given group key to use for scheduled branches.
fn branch_address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
fn branch_address(key: <Self::Curve as Ciphersuite>::G) -> Option<Self::Address>;
/// Address for the given group key to use for change.
fn change_address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
fn change_address(key: <Self::Curve as Ciphersuite>::G) -> Option<Self::Address>;
/// Address for forwarded outputs from prior multisigs.
fn forward_address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
///
/// forward_address must only return None if explicit forwarding isn't necessary.
fn forward_address(key: <Self::Curve as Ciphersuite>::G) -> Option<Self::Address>;
/// Get the latest block's number.
async fn get_latest_block_number(&self) -> Result<usize, NetworkError>;
@@ -349,13 +366,24 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
/// registered eventualities may have been completed in.
///
/// This may panic if not fed a block greater than the tracker's block number.
///
/// Plan ID -> (block number, TX ID, completion)
// TODO: get_eventuality_completions_internal + provided get_eventuality_completions for common
// code
// TODO: Consider having this return the Transaction + the Completion?
// Or Transaction with extract_completion?
async fn get_eventuality_completions(
&self,
eventualities: &mut EventualitiesTracker<Self::Eventuality>,
block: &Self::Block,
) -> HashMap<[u8; 32], (usize, Self::Transaction)>;
) -> HashMap<
[u8; 32],
(
usize,
<Self::Transaction as Transaction<Self>>::Id,
<Self::Eventuality as Eventuality>::Completion,
),
>;
/// Returns the needed fee to fulfill this Plan at this fee rate.
///
@@ -363,7 +391,6 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
async fn needed_fee(
&self,
block_number: usize,
plan_id: &[u8; 32],
inputs: &[Self::Output],
payments: &[Payment<Self>],
change: &Option<Self::Address>,
@@ -375,16 +402,25 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
/// 1) Call needed_fee
/// 2) If the Plan is fulfillable, amortize the fee
/// 3) Call signable_transaction *which MUST NOT return None if the above was done properly*
///
/// This takes a destructured Plan as some of these arguments are malleated from the original
/// Plan.
// TODO: Explicit AmortizedPlan?
#[allow(clippy::too_many_arguments)]
async fn signable_transaction(
&self,
block_number: usize,
plan_id: &[u8; 32],
key: <Self::Curve as Ciphersuite>::G,
inputs: &[Self::Output],
payments: &[Payment<Self>],
change: &Option<Self::Address>,
scheduler_addendum: &<Self::Scheduler as Scheduler<Self>>::Addendum,
) -> Result<Option<(Self::SignableTransaction, Self::Eventuality)>, NetworkError>;
/// Prepare a SignableTransaction for a transaction.
///
/// This must not persist anything as we will prepare Plans we never intend to execute.
async fn prepare_send(
&self,
block_number: usize,
@@ -395,13 +431,12 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
assert!((!plan.payments.is_empty()) || plan.change.is_some());
let plan_id = plan.id();
let Plan { key, inputs, mut payments, change } = plan;
let Plan { key, inputs, mut payments, change, scheduler_addendum } = plan;
let theoretical_change_amount =
inputs.iter().map(|input| input.balance().amount.0).sum::<u64>() -
payments.iter().map(|payment| payment.balance.amount.0).sum::<u64>();
let Some(tx_fee) = self.needed_fee(block_number, &plan_id, &inputs, &payments, &change).await?
else {
let Some(tx_fee) = self.needed_fee(block_number, &inputs, &payments, &change).await? else {
// This Plan is not fulfillable
// TODO: Have Plan explicitly distinguish payments and branches in two separate Vecs?
return Ok(PreparedSend {
@@ -466,7 +501,7 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
// Note the branch outputs' new values
let mut branch_outputs = vec![];
for (initial_amount, payment) in initial_payment_amounts.into_iter().zip(&payments) {
if payment.address == Self::branch_address(key) {
if Some(&payment.address) == Self::branch_address(key).as_ref() {
branch_outputs.push(PostFeeBranch {
expected: initial_amount,
actual: if payment.balance.amount.0 == 0 {
@@ -508,11 +543,20 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
)
})();
let Some(tx) =
self.signable_transaction(block_number, &plan_id, &inputs, &payments, &change).await?
let Some(tx) = self
.signable_transaction(
block_number,
&plan_id,
key,
&inputs,
&payments,
&change,
&scheduler_addendum,
)
.await?
else {
panic!(
"{}. {}: {}, {}: {:?}, {}: {:?}, {}: {:?}, {}: {}",
"{}. {}: {}, {}: {:?}, {}: {:?}, {}: {:?}, {}: {}, {}: {:?}",
"signable_transaction returned None for a TX we prior successfully calculated the fee for",
"id",
hex::encode(plan_id),
@@ -524,6 +568,8 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
change,
"successfully amortized fee",
tx_fee,
"scheduler's addendum",
scheduler_addendum,
)
};
@@ -546,31 +592,49 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
}
/// Attempt to sign a SignableTransaction.
async fn attempt_send(
async fn attempt_sign(
&self,
keys: ThresholdKeys<Self::Curve>,
transaction: Self::SignableTransaction,
) -> Result<Self::TransactionMachine, NetworkError>;
/// Publish a transaction.
async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<(), NetworkError>;
/// Get a transaction by its ID.
async fn get_transaction(
/// Publish a completion.
async fn publish_completion(
&self,
id: &<Self::Transaction as Transaction<Self>>::Id,
) -> Result<Self::Transaction, NetworkError>;
completion: &<Self::Eventuality as Eventuality>::Completion,
) -> Result<(), NetworkError>;
/// Confirm a plan was completed by the specified transaction.
// This is allowed to take shortcuts.
// This may assume an honest multisig, solely checking the inputs specified were spent.
// This may solely check the outputs are equivalent *so long as it's locked to the plan ID*.
fn confirm_completion(&self, eventuality: &Self::Eventuality, tx: &Self::Transaction) -> bool;
/// Confirm a plan was completed by the specified transaction, per our bounds.
///
/// Returns Err if there was an error with the confirmation methodology.
/// Returns Ok(None) if this is not a valid completion.
/// Returns Ok(Some(_)) with the completion if it's valid.
async fn confirm_completion(
&self,
eventuality: &Self::Eventuality,
claim: &<Self::Eventuality as Eventuality>::Claim,
) -> Result<Option<<Self::Eventuality as Eventuality>::Completion>, NetworkError>;
/// Get a block's number by its ID.
#[cfg(test)]
async fn get_block_number(&self, id: &<Self::Block as Block<Self>>::Id) -> usize;
/// Check an Eventuality is fulfilled by a claim.
#[cfg(test)]
async fn check_eventuality_by_claim(
&self,
eventuality: &Self::Eventuality,
claim: &<Self::Eventuality as Eventuality>::Claim,
) -> bool;
/// Get a transaction by the Eventuality it completes.
#[cfg(test)]
async fn get_transaction_by_eventuality(
&self,
block: usize,
eventuality: &Self::Eventuality,
) -> Self::Transaction;
#[cfg(test)]
async fn mine_block(&self);
@@ -579,3 +643,10 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
#[cfg(test)]
async fn test_send(&self, key: Self::Address) -> Self::Block;
}
pub trait UtxoNetwork: Network {
/// The maximum amount of inputs which will fit in a TX.
/// This should be equal to MAX_OUTPUTS unless one is specifically limited.
/// A TX with MAX_INPUTS and MAX_OUTPUTS must not exceed the max size.
const MAX_INPUTS: usize;
}
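The bounds on `Claim` above (`Default + AsRef<[u8]> + AsMut<[u8]>`) exist so claims have a fixed-width encoding: they can be stored concatenated and split back apart using the width of the `Default` value. A minimal sketch of that decode, where the `Claim` alias and `decode_claims` are stand-ins rather than items from the codebase:

```rust
// A stand-in for a concrete network's fixed-width Claim type.
type Claim = [u8; 4];

// Split a concatenation of claims back into individual claims, using the
// width of Claim::default() to know where each one ends.
fn decode_claims(mut bytes: &[u8]) -> Vec<Claim> {
  let mut res = vec![];
  while !bytes.is_empty() {
    let mut claim = Claim::default();
    let len = claim.as_ref().len();
    claim.as_mut().copy_from_slice(&bytes[.. len]);
    bytes = &bytes[len ..];
    res.push(claim);
  }
  res
}

fn main() {
  let stored = [1, 2, 3, 4, 5, 6, 7, 8];
  assert_eq!(decode_claims(&stored), vec![[1, 2, 3, 4], [5, 6, 7, 8]]);
}
```

This is the same shape of loop the signer's database uses when reading back the claims recorded for a plan.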

View File

@@ -39,8 +39,9 @@ use crate::{
networks::{
NetworkError, Block as BlockTrait, OutputType, Output as OutputTrait,
Transaction as TransactionTrait, SignableTransaction as SignableTransactionTrait,
Eventuality as EventualityTrait, EventualitiesTracker, Network,
Eventuality as EventualityTrait, EventualitiesTracker, Network, UtxoNetwork,
},
multisigs::scheduler::utxo::Scheduler,
};
#[derive(Clone, PartialEq, Eq, Debug)]
@@ -117,12 +118,6 @@ impl TransactionTrait<Monero> for Transaction {
fn id(&self) -> Self::Id {
self.hash()
}
fn serialize(&self) -> Vec<u8> {
self.serialize()
}
fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
Transaction::read(reader)
}
#[cfg(test)]
async fn fee(&self, _: &Monero) -> u64 {
@@ -131,6 +126,9 @@ impl TransactionTrait<Monero> for Transaction {
}
impl EventualityTrait for Eventuality {
type Claim = [u8; 32];
type Completion = Transaction;
// Use the TX extra to look up potential matches
// While anyone can forge this, a transaction with distinct outputs won't actually match
// Extra includes the one-time keys which are derived from the plan ID, so a collision here is a
@@ -145,6 +143,16 @@ impl EventualityTrait for Eventuality {
fn serialize(&self) -> Vec<u8> {
self.serialize()
}
fn claim(tx: &Transaction) -> [u8; 32] {
tx.id()
}
fn serialize_completion(completion: &Transaction) -> Vec<u8> {
completion.serialize()
}
fn read_completion<R: io::Read>(reader: &mut R) -> io::Result<Transaction> {
Transaction::read(reader)
}
}
#[derive(Clone, Debug)]
@@ -274,7 +282,8 @@ impl Monero {
async fn median_fee(&self, block: &Block) -> Result<Fee, NetworkError> {
let mut fees = vec![];
for tx_hash in &block.txs {
let tx = self.get_transaction(tx_hash).await?;
let tx =
self.rpc.get_transaction(*tx_hash).await.map_err(|_| NetworkError::ConnectionError)?;
// Only consider fees from RCT transactions, else the fee property read wouldn't be accurate
if tx.rct_signatures.rct_type() != RctType::Null {
continue;
@@ -454,6 +463,8 @@ impl Network for Monero {
type Eventuality = Eventuality;
type TransactionMachine = TransactionMachine;
type Scheduler = Scheduler<Monero>;
type Address = Address;
const NETWORK: NetworkId = NetworkId::Monero;
@@ -461,11 +472,6 @@ impl Network for Monero {
const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize = 120;
const CONFIRMATIONS: usize = 10;
// wallet2 will not create a transaction larger than 100kb, and Monero won't relay a transaction
// larger than 150kb. This fits within the 100kb mark
// Technically, it can be ~124, yet a small bit of buffer is appreciated
// TODO: Test creating a TX this big
const MAX_INPUTS: usize = 120;
const MAX_OUTPUTS: usize = 16;
// 0.01 XMR
@@ -478,20 +484,21 @@ impl Network for Monero {
// Monero doesn't require/benefit from tweaking
fn tweak_keys(_: &mut ThresholdKeys<Self::Curve>) {}
fn external_address(key: EdwardsPoint) -> Address {
#[cfg(test)]
async fn external_address(&self, key: EdwardsPoint) -> Address {
Self::address_internal(key, EXTERNAL_SUBADDRESS)
}
fn branch_address(key: EdwardsPoint) -> Address {
Self::address_internal(key, BRANCH_SUBADDRESS)
fn branch_address(key: EdwardsPoint) -> Option<Address> {
Some(Self::address_internal(key, BRANCH_SUBADDRESS))
}
fn change_address(key: EdwardsPoint) -> Address {
Self::address_internal(key, CHANGE_SUBADDRESS)
fn change_address(key: EdwardsPoint) -> Option<Address> {
Some(Self::address_internal(key, CHANGE_SUBADDRESS))
}
fn forward_address(key: EdwardsPoint) -> Address {
Self::address_internal(key, FORWARD_SUBADDRESS)
fn forward_address(key: EdwardsPoint) -> Option<Address> {
Some(Self::address_internal(key, FORWARD_SUBADDRESS))
}
async fn get_latest_block_number(&self) -> Result<usize, NetworkError> {
@@ -558,7 +565,7 @@ impl Network for Monero {
&self,
eventualities: &mut EventualitiesTracker<Eventuality>,
block: &Block,
) -> HashMap<[u8; 32], (usize, Transaction)> {
) -> HashMap<[u8; 32], (usize, [u8; 32], Transaction)> {
let mut res = HashMap::new();
if eventualities.map.is_empty() {
return res;
@@ -568,13 +575,13 @@ impl Network for Monero {
network: &Monero,
eventualities: &mut EventualitiesTracker<Eventuality>,
block: &Block,
res: &mut HashMap<[u8; 32], (usize, Transaction)>,
res: &mut HashMap<[u8; 32], (usize, [u8; 32], Transaction)>,
) {
for hash in &block.txs {
let tx = {
let mut tx;
while {
tx = network.get_transaction(hash).await;
tx = network.rpc.get_transaction(*hash).await;
tx.is_err()
} {
log::error!("couldn't get transaction {}: {}", hex::encode(hash), tx.err().unwrap());
@@ -587,7 +594,7 @@ impl Network for Monero {
if eventuality.matches(&tx) {
res.insert(
eventualities.map.remove(&tx.prefix.extra).unwrap().0,
(usize::try_from(block.number().unwrap()).unwrap(), tx),
(usize::try_from(block.number().unwrap()).unwrap(), tx.id(), tx),
);
}
}
@@ -625,14 +632,13 @@ impl Network for Monero {
async fn needed_fee(
&self,
block_number: usize,
plan_id: &[u8; 32],
inputs: &[Output],
payments: &[Payment<Self>],
change: &Option<Address>,
) -> Result<Option<u64>, NetworkError> {
Ok(
self
.make_signable_transaction(block_number, plan_id, inputs, payments, change, true)
.make_signable_transaction(block_number, &[0; 32], inputs, payments, change, true)
.await?
.map(|(_, signable)| signable.fee()),
)
@@ -642,9 +648,11 @@ impl Network for Monero {
&self,
block_number: usize,
plan_id: &[u8; 32],
_key: EdwardsPoint,
inputs: &[Output],
payments: &[Payment<Self>],
change: &Option<Address>,
(): &(),
) -> Result<Option<(Self::SignableTransaction, Self::Eventuality)>, NetworkError> {
Ok(
self
@@ -658,7 +666,7 @@ impl Network for Monero {
)
}
async fn attempt_send(
async fn attempt_sign(
&self,
keys: ThresholdKeys<Self::Curve>,
transaction: SignableTransaction,
@@ -669,7 +677,7 @@ impl Network for Monero {
}
}
async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<(), NetworkError> {
async fn publish_completion(&self, tx: &Transaction) -> Result<(), NetworkError> {
match self.rpc.publish_transaction(tx).await {
Ok(()) => Ok(()),
Err(RpcError::ConnectionError(e)) => {
@@ -682,12 +690,17 @@ impl Network for Monero {
}
}
async fn get_transaction(&self, id: &[u8; 32]) -> Result<Transaction, NetworkError> {
self.rpc.get_transaction(*id).await.map_err(map_rpc_err)
}
fn confirm_completion(&self, eventuality: &Eventuality, tx: &Transaction) -> bool {
eventuality.matches(tx)
async fn confirm_completion(
&self,
eventuality: &Eventuality,
id: &[u8; 32],
) -> Result<Option<Transaction>, NetworkError> {
let tx = self.rpc.get_transaction(*id).await.map_err(map_rpc_err)?;
if eventuality.matches(&tx) {
Ok(Some(tx))
} else {
Ok(None)
}
}
#[cfg(test)]
@@ -695,6 +708,31 @@ impl Network for Monero {
self.rpc.get_block(*id).await.unwrap().number().unwrap().try_into().unwrap()
}
#[cfg(test)]
async fn check_eventuality_by_claim(
&self,
eventuality: &Self::Eventuality,
claim: &[u8; 32],
) -> bool {
eventuality.matches(&self.rpc.get_transaction(*claim).await.unwrap())
}
#[cfg(test)]
async fn get_transaction_by_eventuality(
&self,
block: usize,
eventuality: &Eventuality,
) -> Transaction {
let block = self.rpc.get_block_by_number(block).await.unwrap();
for tx in &block.txs {
let tx = self.rpc.get_transaction(*tx).await.unwrap();
if eventuality.matches(&tx) {
return tx;
}
}
panic!("block didn't have a transaction for this eventuality")
}
#[cfg(test)]
async fn mine_block(&self) {
// https://github.com/serai-dex/serai/issues/198
@@ -775,3 +813,11 @@ impl Network for Monero {
self.get_block(block).await.unwrap()
}
}
impl UtxoNetwork for Monero {
// wallet2 will not create a transaction larger than 100kb, and Monero won't relay a transaction
// larger than 150kb. This fits within the 100kb mark
// Technically, it can be ~124, yet a small bit of buffer is appreciated
// TODO: Test creating a TX this big
const MAX_INPUTS: usize = 120;
}
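The reworked `confirm_completion` has three outcomes: `Err` when the confirmation methodology itself failed (e.g. an RPC error), `Ok(None)` when the claimed transaction doesn't satisfy the Eventuality, and `Ok(Some(completion))` when it does. A toy model of that contract, with every type a stand-in for the real ones:

```rust
// Stand-ins for the network's transaction and Eventuality types.
#[derive(Clone, Debug, PartialEq)]
struct Tx {
  id: u8,
}

struct Eventuality {
  expected_id: u8,
}

impl Eventuality {
  fn matches(&self, tx: &Tx) -> bool {
    tx.id == self.expected_id
  }
}

// Err: fetching failed. Ok(None): fetched, but not a valid completion.
// Ok(Some(tx)): a valid completion for this Eventuality.
fn confirm_completion(
  eventuality: &Eventuality,
  fetch: impl Fn(u8) -> Result<Tx, ()>,
  claim: u8,
) -> Result<Option<Tx>, ()> {
  let tx = fetch(claim)?;
  Ok(if eventuality.matches(&tx) { Some(tx) } else { None })
}

fn main() {
  let ev = Eventuality { expected_id: 5 };
  let fetch = |id| Ok(Tx { id });
  assert_eq!(confirm_completion(&ev, fetch, 5), Ok(Some(Tx { id: 5 })));
  assert_eq!(confirm_completion(&ev, fetch, 6), Ok(None));
}
```

Separating "couldn't check" from "checked and invalid" is what lets callers retry on connection errors without misreading them as failed completions.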

View File

@@ -8,7 +8,10 @@ use frost::curve::Ciphersuite;
use serai_client::primitives::Balance;
use crate::networks::{Output, Network};
use crate::{
networks::{Output, Network},
multisigs::scheduler::{SchedulerAddendum, Scheduler},
};
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Payment<N: Network> {
@@ -73,7 +76,7 @@ impl<N: Network> Payment<N> {
}
}
#[derive(Clone, PartialEq, Eq)]
#[derive(Clone, PartialEq)]
pub struct Plan<N: Network> {
pub key: <N::Curve as Ciphersuite>::G,
pub inputs: Vec<N::Output>,
@@ -90,7 +93,11 @@ pub struct Plan<N: Network> {
/// This MUST contain a Serai address. Operating costs may be deducted from the payments in this
/// Plan on the premise that the change address is Serai's, and accordingly, Serai will recoup
/// the operating costs.
//
// TODO: Consider moving to ::G?
pub change: Option<N::Address>,
/// The scheduler's additional data.
pub scheduler_addendum: <N::Scheduler as Scheduler<N>>::Addendum,
}
impl<N: Network> core::fmt::Debug for Plan<N> {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
@@ -100,6 +107,7 @@ impl<N: Network> core::fmt::Debug for Plan<N> {
.field("inputs", &self.inputs)
.field("payments", &self.payments)
.field("change", &self.change.as_ref().map(ToString::to_string))
.field("scheduler_addendum", &self.scheduler_addendum)
.finish()
}
}
@@ -125,6 +133,10 @@ impl<N: Network> Plan<N> {
transcript.append_message(b"change", change.to_string());
}
let mut addendum_bytes = vec![];
self.scheduler_addendum.write(&mut addendum_bytes).unwrap();
transcript.append_message(b"scheduler_addendum", addendum_bytes);
transcript
}
@@ -161,7 +173,8 @@ impl<N: Network> Plan<N> {
};
assert!(serai_client::primitives::MAX_ADDRESS_LEN <= u8::MAX.into());
writer.write_all(&[u8::try_from(change.len()).unwrap()])?;
writer.write_all(&change)
writer.write_all(&change)?;
self.scheduler_addendum.write(writer)
}
pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
@@ -193,6 +206,7 @@ impl<N: Network> Plan<N> {
})?)
};
Ok(Plan { key, inputs, payments, change })
let scheduler_addendum = <N::Scheduler as Scheduler<N>>::Addendum::read(reader)?;
Ok(Plan { key, inputs, payments, change, scheduler_addendum })
}
}
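Plan's serialization now appends the scheduler addendum after the length-prefixed change address. A sketch of just the length-prefixed portion and its round-trip, under the assumption that an empty encoding stands for `None` (the helper names are illustrative):

```rust
use std::io::{self, Read, Write};

// Write an optional change address as a one-byte length followed by its bytes.
fn write_change<W: Write>(writer: &mut W, change: &Option<Vec<u8>>) -> io::Result<()> {
  let change = change.clone().unwrap_or_default();
  // Mirrors the MAX_ADDRESS_LEN <= u8::MAX assertion in Plan::write
  assert!(change.len() <= usize::from(u8::MAX));
  writer.write_all(&[u8::try_from(change.len()).unwrap()])?;
  writer.write_all(&change)
}

// Read it back, treating a zero length as None.
fn read_change<R: Read>(reader: &mut R) -> io::Result<Option<Vec<u8>>> {
  let mut len = [0; 1];
  reader.read_exact(&mut len)?;
  let mut change = vec![0; usize::from(len[0])];
  reader.read_exact(&mut change)?;
  Ok(if change.is_empty() { None } else { Some(change) })
}

fn main() {
  let mut buf = vec![];
  write_change(&mut buf, &Some(vec![9, 9, 9])).unwrap();
  assert_eq!(read_change(&mut buf.as_slice()).unwrap(), Some(vec![9, 9, 9]));
}
```

The addendum following this prefix is self-delimiting (each `SchedulerAddendum` knows its own `read`), so no extra length field is needed for it.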

View File

@@ -2,7 +2,6 @@ use core::{marker::PhantomData, fmt};
use std::collections::HashMap;
use rand_core::OsRng;
use ciphersuite::group::GroupEncoding;
use frost::{
ThresholdKeys, FrostError,
sign::{Writable, PreprocessMachine, SignMachine, SignatureMachine},
@@ -17,7 +16,7 @@ pub use serai_db::*;
use crate::{
Get, DbTxn, Db,
networks::{Transaction, Eventuality, Network},
networks::{Eventuality, Network},
};
create_db!(
@@ -25,7 +24,7 @@ create_db!(
CompletionsDb: (id: [u8; 32]) -> Vec<u8>,
EventualityDb: (id: [u8; 32]) -> Vec<u8>,
AttemptDb: (id: &SignId) -> (),
TransactionDb: (id: &[u8]) -> Vec<u8>,
CompletionDb: (claim: &[u8]) -> Vec<u8>,
ActiveSignsDb: () -> Vec<[u8; 32]>,
CompletedOnChainDb: (id: &[u8; 32]) -> (),
}
@@ -59,12 +58,20 @@ impl CompletionsDb {
fn completions<N: Network>(
getter: &impl Get,
id: [u8; 32],
) -> Vec<<N::Transaction as Transaction<N>>::Id> {
let completions = Self::get(getter, id).unwrap_or_default();
) -> Vec<<N::Eventuality as Eventuality>::Claim> {
let Some(completions) = Self::get(getter, id) else { return vec![] };
// If this was set yet is empty, it's because it's the encoding of a claim with a length of 0
if completions.is_empty() {
let default = <N::Eventuality as Eventuality>::Claim::default();
assert_eq!(default.as_ref().len(), 0);
return vec![default];
}
let mut completions_ref = completions.as_slice();
let mut res = vec![];
while !completions_ref.is_empty() {
let mut id = <N::Transaction as Transaction<N>>::Id::default();
let mut id = <N::Eventuality as Eventuality>::Claim::default();
let id_len = id.as_ref().len();
id.as_mut().copy_from_slice(&completions_ref[.. id_len]);
completions_ref = &completions_ref[id_len ..];
@@ -73,25 +80,37 @@ impl CompletionsDb {
res
}
fn complete<N: Network>(txn: &mut impl DbTxn, id: [u8; 32], tx: &N::Transaction) {
let tx_id = tx.id();
// Transactions can be completed by multiple signatures
fn complete<N: Network>(
txn: &mut impl DbTxn,
id: [u8; 32],
completion: &<N::Eventuality as Eventuality>::Completion,
) {
// Eventualities can be completed by multiple signatures
// Save every solution in order to be robust
TransactionDb::save_transaction::<N>(txn, tx);
let mut existing = Self::get(txn, id).unwrap_or_default();
// Don't add this TX if it's already present
let tx_len = tx_id.as_ref().len();
assert_eq!(existing.len() % tx_len, 0);
CompletionDb::save_completion::<N>(txn, completion);
let mut i = 0;
while i < existing.len() {
if &existing[i .. (i + tx_len)] == tx_id.as_ref() {
return;
}
i += tx_len;
let claim = N::Eventuality::claim(completion);
let claim: &[u8] = claim.as_ref();
// If the claim has a 0-byte encoding, setting the key, even to an empty value, records the claim
if claim.is_empty() {
Self::set(txn, id, &vec![]);
return;
}
existing.extend(tx_id.as_ref());
let mut existing = Self::get(txn, id).unwrap_or_default();
assert_eq!(existing.len() % claim.len(), 0);
// Don't add this completion if it's already present
let mut i = 0;
while i < existing.len() {
if &existing[i .. (i + claim.len())] == claim {
return;
}
i += claim.len();
}
existing.extend(claim);
Self::set(txn, id, &existing);
}
}
@@ -110,25 +129,33 @@ impl EventualityDb {
}
}
impl TransactionDb {
fn save_transaction<N: Network>(txn: &mut impl DbTxn, tx: &N::Transaction) {
Self::set(txn, tx.id().as_ref(), &tx.serialize());
impl CompletionDb {
fn save_completion<N: Network>(
txn: &mut impl DbTxn,
completion: &<N::Eventuality as Eventuality>::Completion,
) {
let claim = N::Eventuality::claim(completion);
let claim: &[u8] = claim.as_ref();
Self::set(txn, claim, &N::Eventuality::serialize_completion(completion));
}
fn completion<N: Network>(
getter: &impl Get,
claim: &<N::Eventuality as Eventuality>::Claim,
) -> Option<<N::Eventuality as Eventuality>::Completion> {
Self::get(getter, claim.as_ref())
.map(|completion| N::Eventuality::read_completion::<&[u8]>(&mut completion.as_ref()).unwrap())
}
}
type PreprocessFor<N> = <<N as Network>::TransactionMachine as PreprocessMachine>::Preprocess;
type SignMachineFor<N> = <<N as Network>::TransactionMachine as PreprocessMachine>::SignMachine;
type SignatureShareFor<N> = <SignMachineFor<N> as SignMachine<
<<N as Network>::Eventuality as Eventuality>::Completion,
>>::SignatureShare;
type SignatureMachineFor<N> = <SignMachineFor<N> as SignMachine<
<<N as Network>::Eventuality as Eventuality>::Completion,
>>::SignatureMachine;
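These aliases project nested associated types once so later signatures stay readable. A compile-checked toy of the same pattern (every name here is illustrative):

```rust
// A machine exposes an output type; a network exposes a machine.
trait Machine {
  type Output;
}
trait Network {
  type TransactionMachine: Machine;
}

// Alias the doubly-nested projection once, as SignatureShareFor<N> does.
type OutputFor<N> = <<N as Network>::TransactionMachine as Machine>::Output;

struct Schnorr;
impl Machine for Schnorr {
  type Output = [u8; 32];
}
struct Toy;
impl Network for Toy {
  type TransactionMachine = Schnorr;
}

// Generic code can now name the projected type without repeating the path
fn zero_output<N: Network>() -> OutputFor<N>
where
  OutputFor<N>: Default,
{
  Default::default()
}
```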
pub struct Signer<N: Network, D: Db> {
db: PhantomData<D>,
@@ -164,12 +191,11 @@ impl<N: Network, D: Db> Signer<N, D> {
log::info!("rebroadcasting transactions for plans whose completions yet to be confirmed...");
loop {
for active in ActiveSignsDb::get(&db).unwrap_or_default() {
for claim in CompletionsDb::completions::<N>(&db, active) {
log::info!("rebroadcasting completion with claim {}", hex::encode(claim.as_ref()));
// TODO: Don't drop the error entirely. Check for invariants
let _ =
network.publish_completion(&CompletionDb::completion::<N>(&db, &claim).unwrap()).await;
}
}
// Only run every five minutes so we aren't frequently loading tens to hundreds of KB from
@@ -242,7 +268,7 @@ impl<N: Network, D: Db> Signer<N, D> {
fn complete(
&mut self,
id: [u8; 32],
claim: &<N::Eventuality as Eventuality>::Claim,
) -> ProcessorMessage {
// Assert we're actively signing for this TX
assert!(self.signable.remove(&id).is_some(), "completed a TX we weren't signing for");
@@ -256,7 +282,7 @@ impl<N: Network, D: Db> Signer<N, D> {
self.signing.remove(&id);
// Emit the event for it
ProcessorMessage::Completed { session: self.session, id, tx: claim.as_ref().to_vec() }
}
#[must_use]
@@ -264,16 +290,16 @@ impl<N: Network, D: Db> Signer<N, D> {
&mut self,
txn: &mut D::Transaction<'_>,
id: [u8; 32],
completion: &<N::Eventuality as Eventuality>::Completion,
) -> Option<ProcessorMessage> {
let first_completion = !Self::already_completed(txn, id);
// Save this completion to the DB
CompletedOnChainDb::complete_on_chain(txn, &id);
CompletionsDb::complete::<N>(txn, id, completion);
if first_completion {
Some(self.complete(id, &N::Eventuality::claim(completion)))
} else {
None
}
@@ -286,49 +312,50 @@ impl<N: Network, D: Db> Signer<N, D> {
&mut self,
txn: &mut D::Transaction<'_>,
id: [u8; 32],
claim: &<N::Eventuality as Eventuality>::Claim,
) -> Option<ProcessorMessage> {
if let Some(eventuality) = EventualityDb::eventuality::<N>(txn, id) {
match self.network.confirm_completion(&eventuality, claim).await {
Ok(Some(completion)) => {
info!(
"signer eventuality for {} resolved in {}",
hex::encode(id),
hex::encode(claim.as_ref())
);
let first_completion = !Self::already_completed(txn, id);
// Save this completion to the DB
CompletionsDb::complete::<N>(txn, id, &completion);
if first_completion {
return Some(self.complete(id, claim));
}
}
Ok(None) => {
warn!(
"a validator claimed {} completed {} when it did not",
hex::encode(claim.as_ref()),
hex::encode(id),
);
}
Err(_) => {
// Transaction hasn't hit our mempool/was dropped for a different signature
// The latter can happen given certain latency conditions/a single malicious signer
// In the case of a single malicious signer, they can drag multiple honest validators down
// with them, so we unfortunately can't slash on this case
warn!(
"a validator claimed {} completed {} yet we couldn't check that claim",
hex::encode(claim.as_ref()),
hex::encode(id),
);
}
}
} else {
warn!(
"informed of completion {} for eventuality {}, when we didn't have that eventuality",
hex::encode(claim.as_ref()),
hex::encode(id),
"which we already marked as completed",
);
}
None
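`confirm_completion` now distinguishes three outcomes: verified (`Ok(Some(_))`), provably false (`Ok(None)`), and unverifiable (`Err(_)`), and the signer logs each differently. A minimal sketch of that triage (the enum and helper are stand-ins, not project types):

```rust
#[derive(Debug, PartialEq)]
enum Verdict {
  Confirmed,
  FalseClaim,
  Unverifiable,
}

/// Collapse a `Result<Option<Completion>, Error>` into the three cases the
/// signer handles: resolved, claimed-but-false, and couldn't-check.
fn triage<C, E>(res: Result<Option<C>, E>) -> Verdict {
  match res {
    Ok(Some(_)) => Verdict::Confirmed,
    Ok(None) => Verdict::FalseClaim,
    Err(_) => Verdict::Unverifiable,
  }
}
```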
@@ -405,7 +432,7 @@ impl<N: Network, D: Db> Signer<N, D> {
let mut preprocesses = vec![];
let mut serialized_preprocesses = vec![];
for keys in &self.keys {
let machine = match self.network.attempt_sign(keys.clone(), tx.clone()).await {
Err(e) => {
error!("failed to attempt {}, #{}: {:?}", hex::encode(id.id), id.attempt, e);
return None;
@@ -572,7 +599,7 @@ impl<N: Network, D: Db> Signer<N, D> {
assert!(shares.insert(self.keys[i].params().i(), our_share).is_none());
}
let completion = match machine.complete(shares) {
Ok(res) => res,
Err(e) => match e {
FrostError::InternalError(_) |
@@ -588,40 +615,39 @@ impl<N: Network, D: Db> Signer<N, D> {
},
};
// Save the completion in case it's needed for recovery
CompletionsDb::complete::<N>(txn, id.id, &completion);
// Publish it
if let Err(e) = self.network.publish_completion(&completion).await {
error!("couldn't publish completion for plan {}: {:?}", hex::encode(id.id), e);
} else {
info!("published {} for plan {}", hex::encode(&tx_id), hex::encode(id.id));
info!("published completion for plan {}", hex::encode(id.id));
}
// Stop trying to sign for this TX
Some(self.complete(id.id, &N::Eventuality::claim(&completion)))
}
CoordinatorMessage::Reattempt { id } => self.attempt(txn, id.id, id.attempt).await,
CoordinatorMessage::Completed { session: _, id, tx: mut claim_vec } => {
let mut claim = <N::Eventuality as Eventuality>::Claim::default();
if claim.as_ref().len() != claim_vec.len() {
let true_len = claim_vec.len();
claim_vec.truncate(2 * claim.as_ref().len());
warn!(
"a validator claimed {}... (actual length {}) completed {} yet {}",
hex::encode(&claim_vec),
true_len,
hex::encode(id),
"that's not a valid TX ID",
"that's not a valid Claim",
);
return None;
}
claim.as_mut().copy_from_slice(&claim_vec);
self.claimed_eventuality_completion(txn, id, &claim).await
}
}
}
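The `Completed` handler above rejects a claim whose length doesn't match the expected width before copying it into the fixed-size buffer, since `copy_from_slice` panics on a length mismatch. The same guard as a tiny standalone helper (hypothetical, using a const-generic width):

```rust
/// Copy untrusted bytes into a fixed-width claim, rejecting wrong lengths
/// up front rather than panicking inside `copy_from_slice`.
fn decode_claim<const W: usize>(bytes: &[u8]) -> Option<[u8; W]> {
  if bytes.len() != W {
    return None;
  }
  let mut claim = [0u8; W];
  claim.copy_from_slice(bytes);
  Some(claim)
}
```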


@@ -13,18 +13,23 @@ use serai_db::{DbTxn, MemDb};
use crate::{
Plan, Db,
networks::{OutputType, Output, Block, UtxoNetwork},
multisigs::{
scheduler::Scheduler,
scanner::{ScannerEvent, Scanner, ScannerHandle},
},
tests::sign,
};
async fn spend<N: UtxoNetwork, D: Db>(
db: &mut D,
network: &N,
keys: &HashMap<Participant, ThresholdKeys<N::Curve>>,
scanner: &mut ScannerHandle<N, D>,
outputs: Vec<N::Output>,
) where
<N::Scheduler as Scheduler<N>>::Addendum: From<()>,
{
let key = keys[&Participant::new(1).unwrap()].group_key();
let mut keys_txs = HashMap::new();
@@ -41,7 +46,8 @@ async fn spend<N: Network, D: Db>(
key,
inputs: outputs.clone(),
payments: vec![],
change: Some(N::change_address(key).unwrap()),
scheduler_addendum: ().into(),
},
0,
)
@@ -70,13 +76,16 @@ async fn spend<N: Network, D: Db>(
scanner.release_lock().await;
txn.commit();
}
ScannerEvent::Completed(_, _, _, _, _) => {
panic!("unexpectedly got eventuality completion");
}
}
}
pub async fn test_addresses<N: UtxoNetwork>(network: N)
where
<N::Scheduler as Scheduler<N>>::Addendum: From<()>,
{
let mut keys = frost::tests::key_gen::<_, N::Curve>(&mut OsRng);
for keys in keys.values_mut() {
N::tweak_keys(keys);
@@ -101,10 +110,10 @@ pub async fn test_addresses<N: Network>(network: N) {
// Receive funds to the various addresses and make sure they're properly identified
let mut received_outputs = vec![];
for (kind, address) in [
(OutputType::External, N::external_address(&network, key).await),
(OutputType::Branch, N::branch_address(key).unwrap()),
(OutputType::Change, N::change_address(key).unwrap()),
(OutputType::Forwarded, N::forward_address(key).unwrap()),
] {
let block_id = network.test_send(address).await.id();
@@ -123,7 +132,7 @@ pub async fn test_addresses<N: Network>(network: N) {
txn.commit();
received_outputs.extend(outputs);
}
ScannerEvent::Completed(_, _, _, _, _) => {
panic!("unexpectedly got eventuality completion");
}
};


@@ -65,7 +65,7 @@ mod bitcoin {
.unwrap();
<Bitcoin as Network>::tweak_keys(&mut keys);
let group_key = keys.group_key();
let serai_btc_address = <Bitcoin as Network>::external_address(&btc, group_key).await;
// btc key pair to send from
let private_key = PrivateKey::new(SecretKey::new(&mut rand_core::OsRng), BNetwork::Regtest);


@@ -11,11 +11,11 @@ use tokio::{sync::Mutex, time::timeout};
use serai_db::{DbTxn, Db, MemDb};
use crate::{
networks::{OutputType, Output, Block, UtxoNetwork},
multisigs::scanner::{ScannerEvent, Scanner, ScannerHandle},
};
pub async fn new_scanner<N: UtxoNetwork, D: Db>(
network: &N,
db: &D,
group_key: <N::Curve as Ciphersuite>::G,
@@ -40,7 +40,7 @@ pub async fn new_scanner<N: Network, D: Db>(
scanner
}
pub async fn test_scanner<N: UtxoNetwork>(network: N) {
let mut keys =
frost::tests::key_gen::<_, N::Curve>(&mut OsRng).remove(&Participant::new(1).unwrap()).unwrap();
N::tweak_keys(&mut keys);
@@ -56,7 +56,7 @@ pub async fn test_scanner<N: Network>(network: N) {
let scanner = new_scanner(&network, &db, group_key, &first).await;
// Receive funds
let block = network.test_send(N::external_address(&network, keys.group_key()).await).await;
let block_id = block.id();
// Verify the Scanner picked them up
@@ -71,7 +71,7 @@ pub async fn test_scanner<N: Network>(network: N) {
assert_eq!(outputs[0].kind(), OutputType::External);
outputs
}
ScannerEvent::Completed(_, _, _, _, _) => {
panic!("unexpectedly got eventuality completion");
}
};
@@ -101,7 +101,7 @@ pub async fn test_scanner<N: Network>(network: N) {
.is_err());
}
pub async fn test_no_deadlock_in_multisig_completed<N: UtxoNetwork>(network: N) {
// Mine blocks so there's a confirmed block
for _ in 0 .. N::CONFIRMATIONS {
network.mine_block().await;
@@ -142,14 +142,14 @@ pub async fn test_no_deadlock_in_multisig_completed<N: Network>(network: N) {
assert!(!is_retirement_block);
block
}
ScannerEvent::Completed(_, _, _, _, _) => {
panic!("unexpectedly got eventuality completion");
}
};
match timeout(Duration::from_secs(30), scanner.events.recv()).await.unwrap().unwrap() {
ScannerEvent::Block { .. } => {}
ScannerEvent::Completed(_, _, _, _, _) => {
panic!("unexpectedly got eventuality completion");
}
};


@@ -17,19 +17,20 @@ use serai_client::{
use messages::sign::*;
use crate::{
Payment, Plan,
networks::{Output, Transaction, Eventuality, UtxoNetwork},
multisigs::scheduler::Scheduler,
signer::Signer,
};
#[allow(clippy::type_complexity)]
pub async fn sign<N: UtxoNetwork>(
network: N,
session: Session,
mut keys_txs: HashMap<
Participant,
(ThresholdKeys<N::Curve>, (N::SignableTransaction, N::Eventuality)),
>,
) -> <N::Eventuality as Eventuality>::Claim {
let actual_id = SignId { session, id: [0xaa; 32], attempt: 0 };
let mut keys = HashMap::new();
@@ -65,14 +66,15 @@ pub async fn sign<N: Network>(
let mut preprocesses = HashMap::new();
let mut eventuality = None;
for i in 1 ..= signers.len() {
let i = Participant::new(u16::try_from(i).unwrap()).unwrap();
let (tx, this_eventuality) = txs.remove(&i).unwrap();
let mut txn = dbs.get_mut(&i).unwrap().txn();
match signers
.get_mut(&i)
.unwrap()
.sign_transaction(&mut txn, actual_id.id, tx, &this_eventuality)
.await
{
// All participants should emit a preprocess
@@ -86,6 +88,11 @@ pub async fn sign<N: Network>(
_ => panic!("didn't get preprocess back"),
}
txn.commit();
if eventuality.is_none() {
eventuality = Some(this_eventuality.clone());
}
assert_eq!(eventuality, Some(this_eventuality));
}
let mut shares = HashMap::new();
@@ -140,19 +147,25 @@ pub async fn sign<N: Network>(
txn.commit();
}
let mut typed_claim = <N::Eventuality as Eventuality>::Claim::default();
typed_claim.as_mut().copy_from_slice(tx_id.unwrap().as_ref());
assert!(network.check_eventuality_by_claim(&eventuality.unwrap(), &typed_claim).await);
typed_claim
}
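The loop above pins the first participant's eventuality and requires every later one to match it. That all-must-agree check, extracted as a generic sketch (the helper name is ours):

```rust
/// Return the shared value if every item agrees, None otherwise
/// (including for an empty slice, where there is nothing agreed on).
fn all_agree<T: PartialEq + Clone>(items: &[T]) -> Option<T> {
  let mut agreed: Option<T> = None;
  for item in items {
    match &agreed {
      // The first item seen becomes the reference value
      None => agreed = Some(item.clone()),
      // Every later item must equal it
      Some(first) if first != item => return None,
      Some(_) => {}
    }
  }
  agreed
}
```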
pub async fn test_signer<N: UtxoNetwork>(network: N)
where
<N::Scheduler as Scheduler<N>>::Addendum: From<()>,
{
let mut keys = key_gen(&mut OsRng);
for keys in keys.values_mut() {
N::tweak_keys(keys);
}
let key = keys[&Participant::new(1).unwrap()].group_key();
let outputs = network
.get_outputs(&network.test_send(N::external_address(&network, key).await).await, key)
.await;
let sync_block = network.get_latest_block_number().await.unwrap() - N::CONFIRMATIONS;
let amount = 2 * N::DUST;
@@ -166,7 +179,7 @@ pub async fn test_signer<N: Network>(network: N) {
key,
inputs: outputs.clone(),
payments: vec![Payment {
address: N::external_address(&network, key).await,
data: None,
balance: Balance {
coin: match N::NETWORK {
@@ -178,7 +191,8 @@ pub async fn test_signer<N: Network>(network: N) {
amount: Amount(amount),
},
}],
change: Some(N::change_address(key).unwrap()),
scheduler_addendum: ().into(),
},
0,
)
@@ -191,13 +205,12 @@ pub async fn test_signer<N: Network>(network: N) {
keys_txs.insert(i, (keys, (signable, eventuality)));
}
// The signer may not publish the TX if it has a connection error
// It doesn't fail in this case
let claim = sign(network.clone(), Session(0), keys_txs).await;
// Mine a block, and scan it, to ensure that the TX actually made it on chain
network.mine_block().await;
let block_number = network.get_latest_block_number().await.unwrap();
let tx = network.get_transaction_by_eventuality(block_number, &eventualities[0]).await;
let outputs = network
.get_outputs(
&network.get_block(network.get_latest_block_number().await.unwrap()).await.unwrap(),
@@ -212,6 +225,7 @@ pub async fn test_signer<N: Network>(network: N) {
// Check the eventualities pass
for eventuality in eventualities {
let completion = network.confirm_completion(&eventuality, &claim).await.unwrap().unwrap();
assert_eq!(N::Eventuality::claim(&completion), claim);
}
}


@@ -15,7 +15,7 @@ use serai_client::{
use crate::{
Payment, Plan,
networks::{Output, Transaction, Eventuality, Block, UtxoNetwork},
multisigs::{
scanner::{ScannerEvent, Scanner},
scheduler::Scheduler,
@@ -24,7 +24,7 @@ use crate::{
};
// Tests the Scanner, Scheduler, and Signer together
pub async fn test_wallet<N: UtxoNetwork>(network: N) {
// Mine blocks so there's a confirmed block
for _ in 0 .. N::CONFIRMATIONS {
network.mine_block().await;
@@ -47,7 +47,7 @@ pub async fn test_wallet<N: Network>(network: N) {
network.mine_block().await;
}
let block = network.test_send(N::external_address(&network, key).await).await;
let block_id = block.id();
match timeout(Duration::from_secs(30), scanner.events.recv()).await.unwrap().unwrap() {
@@ -58,7 +58,7 @@ pub async fn test_wallet<N: Network>(network: N) {
assert_eq!(outputs.len(), 1);
(block_id, outputs)
}
ScannerEvent::Completed(_, _, _, _, _) => {
panic!("unexpectedly got eventuality completion");
}
}
@@ -69,22 +69,13 @@ pub async fn test_wallet<N: Network>(network: N) {
txn.commit();
let mut txn = db.txn();
let mut scheduler = N::Scheduler::new::<MemDb>(&mut txn, key, N::NETWORK);
let amount = 2 * N::DUST;
let plans = scheduler.schedule::<MemDb>(
&mut txn,
outputs.clone(),
vec![Payment {
address: N::external_address(&network, key).await,
data: None,
balance: Balance {
coin: match N::NETWORK {
@@ -100,27 +91,26 @@ pub async fn test_wallet<N: Network>(network: N) {
false,
);
txn.commit();
assert_eq!(plans.len(), 1);
assert_eq!(plans[0].key, key);
assert_eq!(plans[0].inputs, outputs);
assert_eq!(
plans[0].payments,
vec![Payment {
address: N::external_address(&network, key).await,
data: None,
balance: Balance {
coin: match N::NETWORK {
NetworkId::Serai => panic!("test_wallet called with Serai"),
NetworkId::Bitcoin => Coin::Bitcoin,
NetworkId::Ethereum => Coin::Ether,
NetworkId::Monero => Coin::Monero,
},
amount: Amount(amount),
}
}]
);
assert_eq!(plans[0].change, Some(N::change_address(key).unwrap()));
{
let mut buf = vec![];
@@ -143,10 +133,10 @@ pub async fn test_wallet<N: Network>(network: N) {
keys_txs.insert(i, (keys, (signable, eventuality)));
}
let claim = sign(network.clone(), Session(0), keys_txs).await;
network.mine_block().await;
let block_number = network.get_latest_block_number().await.unwrap();
let tx = network.get_transaction_by_eventuality(block_number, &eventualities[0]).await;
let block = network.get_block(block_number).await.unwrap();
let outputs = network.get_outputs(&block, key).await;
assert_eq!(outputs.len(), 2);
@@ -154,7 +144,8 @@ pub async fn test_wallet<N: Network>(network: N) {
assert!((outputs[0].balance().amount.0 == amount) || (outputs[1].balance().amount.0 == amount));
for eventuality in eventualities {
let completion = network.confirm_completion(&eventuality, &claim).await.unwrap().unwrap();
assert_eq!(N::Eventuality::claim(&completion), claim);
}
for _ in 1 .. N::CONFIRMATIONS {
@@ -168,7 +159,7 @@ pub async fn test_wallet<N: Network>(network: N) {
assert_eq!(block_id, block.id());
assert_eq!(these_outputs, outputs);
}
ScannerEvent::Completed(_, _, _, _, _) => {
panic!("unexpectedly got eventuality completion");
}
}