Mirror of https://github.com/serai-dex/serai.git (synced 2025-12-11 05:29:25 +00:00)

Ethereum Integration (#557)
* Clean up Ethereum
* Consistent contract address for deployed contracts
* Flesh out Router a bit
* Add a Deployer for DoS-less deployment
* Implement Router-finding
* Use CREATE2 helper present in ethers
* Move from CREATE2 to CREATE

  Bit more streamlined for our use case.

* Document ethereum-serai
* Tidy tests a bit
* Test updateSeraiKey
* Use encodePacked for updateSeraiKey
* Take in the block hash to read state during
* Add a Sandbox contract to the Ethereum integration
* Add retrieval of transfers from Ethereum
* Add inInstruction function to the Router
* Augment our handling of InInstructions events with a check that the transfer event also exists
* Have the Deployer error upon failed deployments
* Add --via-ir
* Make get_transaction test-only

  We only used it to get transactions to confirm the resolution of Eventualities. Eventualities need to be modularized. By introducing the dedicated confirm_completion function, we remove the need for a non-test get_transaction AND begin this modularization (by no longer explicitly grabbing a transaction to check with).

* Modularize Eventuality

  Almost fully deprecates the Transaction trait in favor of Completion. Replaces the Transaction ID with a Claim.

* Modularize the Scheduler behind a trait
* Add an extremely basic account Scheduler
* Add nonce uses, key rotation to the account scheduler
* Only report the account Scheduler empty after transferring keys

  Also bans payments to the branch/change/forward addresses.

* Make fns reliant on state test-only
* Start of an Ethereum integration for the processor
* Add a session to the Router to prevent updateSeraiKey replaying

  This would only happen if an old key was rotated to again, which would require n-of-n collusion (already ridiculous and a valid fault-attributable event). It just clarifies the formal arguments.

* Add a RouterCommand + SignMachine for producing it to coins/ethereum
* Ethereum which compiles
* Have branch/change/forward return an Option

  Also defines a UtxoNetwork extension trait for MAX_INPUTS.

* Make external_address exclusively a test fn
* Move the "account" scheduler to "smart contract"
* Remove ABI artifact
* Move refund/forward Plan creation into the Processor

  We create forward Plans in the scan path, and need to know their exact fees in the scan path. This requires adding a somewhat wonky shim_forward_plan method so we can obtain a Plan equivalent to the actual forward Plan for fee reasons, yet don't expect it to be the actual forward Plan (which may be distinct if the Plan pulls from the global state, such as with a nonce).

  Also properly types a Scheduler addendum such that the SC scheduler isn't cramming the nonce to use into the N::Output type.

* Flesh out the Ethereum integration more
* Two commits ago, into the **Scheduler, not Processor
* Remove misc TODOs in SC Scheduler
* Add constructor to RouterCommandMachine
* RouterCommand read, pairing with the prior-added write
* Further add serialization methods
* Have the Router's key included with the InInstruction

  This does not use the key at the time of the event. This uses the key at the end of the block for the event. It's much simpler than getting the full event streams for each and checking when they interlace.

  This does not read the state. Every block, this makes a request for every single key update and simply chooses the last one. This allows pruning state, only keeping the event tree. Ideally, we'd also introduce a cache to reduce the cost of the filter (small in events yielded, long in blocks searched).

  Since Serai doesn't have any forwarding TXs, nor Branches, nor change, all of our Plans should solely have payments out, and there's no expectation of a Plan made under one key being broken by it being received by another key.

* Add read/write to InInstruction
* Abstract the ABI for Call/OutInstruction in ethereum-serai
* Fill out signable_transaction for Ethereum
* Move ethereum-serai to alloy

  Resolves #331.

* Use the opaque sol macro instead of generated files
* Move the processor over to the now-alloy-based ethereum-serai
* Use the ecrecover provided by alloy
* Have the SC use a nonce for rotation, not the session (an independent nonce which wasn't synchronized)
* Always use the latest keys for SC scheduled plans
* get_eventuality_completions for Ethereum
* Finish fleshing out the processor Ethereum integration as needed for serai-processor tests

  This doesn't support any actual deployments, not even the ones simulated by serai-processor-docker-tests.

* Add alloy-simple-request-transport to the GH workflows
* cargo update
* Clarify a few comments and make one check more robust
* Use a string for 27.0 in .github
* Remove optional from no-longer-optional dependencies in processor
* Add alloy to git deny exception
* Fix no-longer-optional specification in processor's binaries feature
* Use a version of foundry from 2024
* Correct fetching Bitcoin TXs in the processor docker tests
* Update rustls to resolve RUSTSEC warnings
* Use the monthly nightly foundry, not the deleted daily nightly
@@ -1,3 +1,5 @@
+use std::io;
+
 use ciphersuite::Ciphersuite;

 pub use serai_db::*;
@@ -6,9 +8,59 @@ use serai_client::{primitives::Balance, in_instructions::primitives::InInstructi
 use crate::{
   Get, Plan,
-  networks::{Transaction, Network},
+  networks::{Output, Transaction, Network},
 };

+#[derive(Clone, PartialEq, Eq, Debug)]
+pub enum PlanFromScanning<N: Network> {
+  Refund(N::Output, N::Address),
+  Forward(N::Output),
+}
+
+impl<N: Network> PlanFromScanning<N> {
+  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
+    let mut kind = [0xff];
+    reader.read_exact(&mut kind)?;
+    match kind[0] {
+      0 => {
+        let output = N::Output::read(reader)?;
+
+        let mut address_vec_len = [0; 4];
+        reader.read_exact(&mut address_vec_len)?;
+        let mut address_vec =
+          vec![0; usize::try_from(u32::from_le_bytes(address_vec_len)).unwrap()];
+        reader.read_exact(&mut address_vec)?;
+        let address =
+          N::Address::try_from(address_vec).map_err(|_| "invalid address saved to disk").unwrap();
+
+        Ok(PlanFromScanning::Refund(output, address))
+      }
+      1 => {
+        let output = N::Output::read(reader)?;
+        Ok(PlanFromScanning::Forward(output))
+      }
+      _ => panic!("reading unrecognized PlanFromScanning"),
+    }
+  }
+  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
+    match self {
+      PlanFromScanning::Refund(output, address) => {
+        writer.write_all(&[0])?;
+        output.write(writer)?;
+
+        let address_vec: Vec<u8> =
+          address.clone().try_into().map_err(|_| "invalid address being refunded to").unwrap();
+        writer.write_all(&u32::try_from(address_vec.len()).unwrap().to_le_bytes())?;
+        writer.write_all(&address_vec)
+      }
+      PlanFromScanning::Forward(output) => {
+        writer.write_all(&[1])?;
+        output.write(writer)
+      }
+    }
+  }
+}
+
 create_db!(
   MultisigsDb {
     NextBatchDb: () -> u32,
@@ -80,7 +132,11 @@ impl PlanDb {
   ) -> bool {
     let plan = Plan::<N>::read::<&[u8]>(&mut &Self::get(getter, &id).unwrap()[8 ..]).unwrap();
     assert_eq!(plan.id(), id);
-    (key == plan.key) && (Some(N::change_address(plan.key)) == plan.change)
+    if let Some(change) = N::change_address(plan.key) {
+      (key == plan.key) && (Some(change) == plan.change)
+    } else {
+      false
+    }
   }
 }
@@ -130,7 +186,7 @@ impl PlansFromScanningDb {
   pub fn set_plans_from_scanning<N: Network>(
     txn: &mut impl DbTxn,
     block_number: usize,
-    plans: Vec<Plan<N>>,
+    plans: Vec<PlanFromScanning<N>>,
   ) {
     let mut buf = vec![];
     for plan in plans {
@@ -142,13 +198,13 @@ impl PlansFromScanningDb {
   pub fn take_plans_from_scanning<N: Network>(
     txn: &mut impl DbTxn,
     block_number: usize,
-  ) -> Option<Vec<Plan<N>>> {
+  ) -> Option<Vec<PlanFromScanning<N>>> {
     let block_number = u64::try_from(block_number).unwrap();
     let res = Self::get(txn, block_number).map(|plans| {
       let mut plans_ref = plans.as_slice();
       let mut res = vec![];
       while !plans_ref.is_empty() {
-        res.push(Plan::<N>::read(&mut plans_ref).unwrap());
+        res.push(PlanFromScanning::<N>::read(&mut plans_ref).unwrap());
       }
       res
     });
@@ -7,7 +7,7 @@ use scale::{Encode, Decode};
 use messages::SubstrateContext;

 use serai_client::{
-  primitives::{MAX_DATA_LEN, NetworkId, Coin, ExternalAddress, BlockHash, Data},
+  primitives::{MAX_DATA_LEN, ExternalAddress, BlockHash, Data},
   in_instructions::primitives::{
     InInstructionWithBalance, Batch, RefundableInInstruction, Shorthand, MAX_BATCH_SIZE,
   },
@@ -28,15 +28,12 @@ use scanner::{ScannerEvent, ScannerHandle, Scanner};
 mod db;
 use db::*;

-#[cfg(not(test))]
-mod scheduler;
-#[cfg(test)]
-pub mod scheduler;
+pub(crate) mod scheduler;
 use scheduler::Scheduler;

 use crate::{
   Get, Db, Payment, Plan,
-  networks::{OutputType, Output, Transaction, SignableTransaction, Block, PreparedSend, Network},
+  networks::{OutputType, Output, SignableTransaction, Eventuality, Block, PreparedSend, Network},
 };

 // InInstructionWithBalance from an external output
@@ -95,6 +92,8 @@ enum RotationStep {
   ClosingExisting,
 }

+// This explicitly shouldn't take the database as we prepare Plans we won't execute for fee
+// estimates
 async fn prepare_send<N: Network>(
   network: &N,
   block_number: usize,
@@ -122,7 +121,7 @@ async fn prepare_send<N: Network>(
 pub struct MultisigViewer<N: Network> {
   activation_block: usize,
   key: <N::Curve as Ciphersuite>::G,
-  scheduler: Scheduler<N>,
+  scheduler: N::Scheduler,
 }

 #[allow(clippy::type_complexity)]
@@ -131,7 +130,7 @@ pub enum MultisigEvent<N: Network> {
   // Batches to publish
   Batches(Option<(<N::Curve as Ciphersuite>::G, <N::Curve as Ciphersuite>::G)>, Vec<Batch>),
   // Eventuality completion found on-chain
-  Completed(Vec<u8>, [u8; 32], N::Transaction),
+  Completed(Vec<u8>, [u8; 32], <N::Eventuality as Eventuality>::Completion),
 }

 pub struct MultisigManager<D: Db, N: Network> {
@@ -157,20 +156,7 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
     assert!(current_keys.len() <= 2);
     let mut actively_signing = vec![];
     for (_, key) in &current_keys {
-      schedulers.push(
-        Scheduler::from_db(
-          raw_db,
-          *key,
-          match N::NETWORK {
-            NetworkId::Serai => panic!("adding a key for Serai"),
-            NetworkId::Bitcoin => Coin::Bitcoin,
-            // TODO: This is incomplete to DAI
-            NetworkId::Ethereum => Coin::Ether,
-            NetworkId::Monero => Coin::Monero,
-          },
-        )
-        .unwrap(),
-      );
+      schedulers.push(N::Scheduler::from_db(raw_db, *key, N::NETWORK).unwrap());

       // Load any TXs being actively signed
       let key = key.to_bytes();
@@ -245,17 +231,7 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
     let viewer = Some(MultisigViewer {
       activation_block,
       key: external_key,
-      scheduler: Scheduler::<N>::new::<D>(
-        txn,
-        external_key,
-        match N::NETWORK {
-          NetworkId::Serai => panic!("adding a key for Serai"),
-          NetworkId::Bitcoin => Coin::Bitcoin,
-          // TODO: This is incomplete to DAI
-          NetworkId::Ethereum => Coin::Ether,
-          NetworkId::Monero => Coin::Monero,
-        },
-      ),
+      scheduler: N::Scheduler::new::<D>(txn, external_key, N::NETWORK),
     });

     if self.existing.is_none() {
@@ -352,48 +328,30 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
     (existing_outputs, new_outputs)
   }

-  fn refund_plan(output: N::Output, refund_to: N::Address) -> Plan<N> {
+  fn refund_plan(
+    scheduler: &mut N::Scheduler,
+    txn: &mut D::Transaction<'_>,
+    output: N::Output,
+    refund_to: N::Address,
+  ) -> Plan<N> {
     log::info!("creating refund plan for {}", hex::encode(output.id()));
     assert_eq!(output.kind(), OutputType::External);
-    Plan {
-      key: output.key(),
-      // Uses a payment as this will still be successfully sent due to fee amortization,
-      // and because change is currently always a Serai key
-      payments: vec![Payment { address: refund_to, data: None, balance: output.balance() }],
-      inputs: vec![output],
-      change: None,
-    }
+    scheduler.refund_plan::<D>(txn, output, refund_to)
   }

-  fn forward_plan(&self, output: N::Output) -> Plan<N> {
+  // Returns the plan for forwarding if one is needed.
+  // Returns None if one is not needed to forward this output.
+  fn forward_plan(&mut self, txn: &mut D::Transaction<'_>, output: &N::Output) -> Option<Plan<N>> {
     log::info!("creating forwarding plan for {}", hex::encode(output.id()));
-
-    /*
-      Sending a Plan, with arbitrary data proxying the InInstruction, would require adding
-      a flow for networks which drop their data to still embed arbitrary data. It'd also have
-      edge cases causing failures (we'd need to manually provide the origin if it was implied,
-      which may exceed the encoding limit).
-
-      Instead, we save the InInstruction as we scan this output. Then, when the output is
-      successfully forwarded, we simply read it from the local database. This also saves the
-      costs of embedding arbitrary data.
-
-      Since we can't rely on the Eventuality system to detect if it's a forwarded transaction,
-      due to the asynchonicity of the Eventuality system, we instead interpret an Forwarded
-      output which has an amount associated with an InInstruction which was forwarded as having
-      been forwarded.
-    */
-
-    Plan {
-      key: self.existing.as_ref().unwrap().key,
-      payments: vec![Payment {
-        address: N::forward_address(self.new.as_ref().unwrap().key),
-        data: None,
-        balance: output.balance(),
-      }],
-      inputs: vec![output],
-      change: None,
-    }
+    let res = self.existing.as_mut().unwrap().scheduler.forward_plan::<D>(
+      txn,
+      output.clone(),
+      self.new.as_ref().expect("forwarding plan yet no new multisig").key,
+    );
+    if res.is_none() {
+      log::info!("no forwarding plan was necessary for {}", hex::encode(output.id()));
+    }
+    res
   }

   // Filter newly received outputs due to the step being RotationStep::ClosingExisting.
@@ -605,7 +563,31 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
       block_number
     {
       // Load plans created when we scanned the block
-      plans = PlansFromScanningDb::take_plans_from_scanning::<N>(txn, block_number).unwrap();
+      let scanning_plans =
+        PlansFromScanningDb::take_plans_from_scanning::<N>(txn, block_number).unwrap();
+      // Expand into actual plans
+      plans = scanning_plans
+        .into_iter()
+        .map(|plan| match plan {
+          PlanFromScanning::Refund(output, refund_to) => {
+            let existing = self.existing.as_mut().unwrap();
+            if output.key() == existing.key {
+              Self::refund_plan(&mut existing.scheduler, txn, output, refund_to)
+            } else {
+              let new = self
+                .new
+                .as_mut()
+                .expect("new multisig didn't expect yet output wasn't for existing multisig");
+              assert_eq!(output.key(), new.key, "output wasn't for existing nor new multisig");
+              Self::refund_plan(&mut new.scheduler, txn, output, refund_to)
+            }
+          }
+          PlanFromScanning::Forward(output) => self
+            .forward_plan(txn, &output)
+            .expect("supposed to forward an output yet no forwarding plan"),
+        })
+        .collect();

       for plan in &plans {
         plans_from_scanning.insert(plan.id());
       }
@@ -665,13 +647,23 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
     });

     for plan in &plans {
-      if plan.change == Some(N::change_address(plan.key)) {
-        // Assert these are only created during the expected step
-        match *step {
-          RotationStep::UseExisting => {}
-          RotationStep::NewAsChange |
-          RotationStep::ForwardFromExisting |
-          RotationStep::ClosingExisting => panic!("change was set to self despite rotating"),
+      // This first equality should 'never meaningfully' be false
+      // All created plans so far are by the existing multisig EXCEPT:
+      // A) If we created a refund plan from the new multisig (yet that wouldn't have change)
+      // B) The existing Scheduler returned a Plan for the new key (yet that happens with the SC
+      //    scheduler, yet that doesn't have change)
+      // Despite being 'unnecessary' now, it's better to explicitly ensure and be robust
+      if plan.key == self.existing.as_ref().unwrap().key {
+        if let Some(change) = N::change_address(plan.key) {
+          if plan.change == Some(change) {
+            // Assert these (self-change) are only created during the expected step
+            match *step {
+              RotationStep::UseExisting => {}
+              RotationStep::NewAsChange |
+              RotationStep::ForwardFromExisting |
+              RotationStep::ClosingExisting => panic!("change was set to self despite rotating"),
+            }
+          }
         }
       }
     }
@@ -853,15 +845,20 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
       let plans_at_start = plans.len();
       let (refund_to, instruction) = instruction_from_output::<N>(output);
       if let Some(mut instruction) = instruction {
-        // Build a dedicated Plan forwarding this
-        let forward_plan = self.forward_plan(output.clone());
-        plans.push(forward_plan.clone());
+        let Some(shimmed_plan) = N::Scheduler::shim_forward_plan(
+          output.clone(),
+          self.new.as_ref().expect("forwarding from existing yet no new multisig").key,
+        ) else {
+          // If this network doesn't need forwarding, report the output now
+          return true;
+        };
+        plans.push(PlanFromScanning::<N>::Forward(output.clone()));

         // Set the instruction for this output to be returned
         // We need to set it under the amount it's forwarded with, so prepare its forwarding
         // TX to determine the fees involved
         let PreparedSend { tx, post_fee_branches: _, operating_costs } =
-          prepare_send(network, block_number, forward_plan, 0).await;
+          prepare_send(network, block_number, shimmed_plan, 0).await;
         // operating_costs should not increase in a forwarding TX
         assert_eq!(operating_costs, 0);
@@ -872,12 +869,28 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
         // letting it die out
         if let Some(tx) = &tx {
           instruction.balance.amount.0 -= tx.0.fee();
+
+          /*
+            Sending a Plan, with arbitrary data proxying the InInstruction, would require
+            adding a flow for networks which drop their data to still embed arbitrary data.
+            It'd also have edge cases causing failures (we'd need to manually provide the
+            origin if it was implied, which may exceed the encoding limit).
+
+            Instead, we save the InInstruction as we scan this output. Then, when the
+            output is successfully forwarded, we simply read it from the local database.
+            This also saves the costs of embedding arbitrary data.
+
+            Since we can't rely on the Eventuality system to detect if it's a forwarded
+            transaction, due to the asynchronicity of the Eventuality system, we instead
+            interpret a Forwarded output which has an amount associated with an
+            InInstruction which was forwarded as having been forwarded.
+          */
           ForwardedOutputDb::save_forwarded_output(txn, &instruction);
         }
       } else if let Some(refund_to) = refund_to {
         if let Ok(refund_to) = refund_to.consume().try_into() {
           // Build a dedicated Plan refunding this
-          plans.push(Self::refund_plan(output.clone(), refund_to));
+          plans.push(PlanFromScanning::Refund(output.clone(), refund_to));
         }
       }
@@ -909,7 +922,7 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
       let Some(instruction) = instruction else {
         if let Some(refund_to) = refund_to {
           if let Ok(refund_to) = refund_to.consume().try_into() {
-            plans.push(Self::refund_plan(output.clone(), refund_to));
+            plans.push(PlanFromScanning::Refund(output.clone(), refund_to));
           }
         }
         continue;
@@ -999,9 +1012,9 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
       // This must be emitted before ScannerEvent::Block for all completions of known Eventualities
       // within the block. Unknown Eventualities may have their Completed events emitted after
       // ScannerEvent::Block however.
-      ScannerEvent::Completed(key, block_number, id, tx) => {
-        ResolvedDb::resolve_plan::<N>(txn, &key, id, &tx.id());
-        (block_number, MultisigEvent::Completed(key, id, tx))
+      ScannerEvent::Completed(key, block_number, id, tx_id, completion) => {
+        ResolvedDb::resolve_plan::<N>(txn, &key, id, &tx_id);
+        (block_number, MultisigEvent::Completed(key, id, completion))
       }
     };
@@ -17,15 +17,25 @@ use tokio::{
 use crate::{
   Get, DbTxn, Db,
-  networks::{Output, Transaction, EventualitiesTracker, Block, Network},
+  networks::{Output, Transaction, Eventuality, EventualitiesTracker, Block, Network},
 };

 #[derive(Clone, Debug)]
 pub enum ScannerEvent<N: Network> {
   // Block scanned
-  Block { is_retirement_block: bool, block: <N::Block as Block<N>>::Id, outputs: Vec<N::Output> },
+  Block {
+    is_retirement_block: bool,
+    block: <N::Block as Block<N>>::Id,
+    outputs: Vec<N::Output>,
+  },
   // Eventuality completion found on-chain
-  Completed(Vec<u8>, usize, [u8; 32], N::Transaction),
+  Completed(
+    Vec<u8>,
+    usize,
+    [u8; 32],
+    <N::Transaction as Transaction<N>>::Id,
+    <N::Eventuality as Eventuality>::Completion,
+  ),
 }

 pub type ScannerEventChannel<N> = mpsc::UnboundedReceiver<ScannerEvent<N>>;
@@ -555,19 +565,25 @@ impl<N: Network, D: Db> Scanner<N, D> {
           }
         }

-        for (id, (block_number, tx)) in network
+        for (id, (block_number, tx, completion)) in network
           .get_eventuality_completions(scanner.eventualities.get_mut(&key_vec).unwrap(), &block)
           .await
         {
           info!(
             "eventuality {} resolved by {}, as found on chain",
             hex::encode(id),
-            hex::encode(&tx.id())
+            hex::encode(tx.as_ref())
           );

           completion_block_numbers.push(block_number);
           // This must be before the emission of ScannerEvent::Block, per commentary in mod.rs
-          if !scanner.emit(ScannerEvent::Completed(key_vec.clone(), block_number, id, tx)) {
+          if !scanner.emit(ScannerEvent::Completed(
+            key_vec.clone(),
+            block_number,
+            id,
+            tx,
+            completion,
+          )) {
             return;
           }
         }
processor/src/multisigs/scheduler/mod.rs (new file, 95 lines)
@@ -0,0 +1,95 @@
+use core::fmt::Debug;
+use std::io;
+
+use ciphersuite::Ciphersuite;
+
+use serai_client::primitives::{NetworkId, Balance};
+
+use crate::{networks::Network, Db, Payment, Plan};
+
+pub(crate) mod utxo;
+pub(crate) mod smart_contract;
+
+pub trait SchedulerAddendum: Send + Clone + PartialEq + Debug {
+  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
+  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()>;
+}
+
+impl SchedulerAddendum for () {
+  fn read<R: io::Read>(_: &mut R) -> io::Result<Self> {
+    Ok(())
+  }
+  fn write<W: io::Write>(&self, _: &mut W) -> io::Result<()> {
+    Ok(())
+  }
+}
+
+pub trait Scheduler<N: Network>: Sized + Clone + PartialEq + Debug {
+  type Addendum: SchedulerAddendum;
+
+  /// Check if this Scheduler is empty.
+  fn empty(&self) -> bool;
+
+  /// Create a new Scheduler.
+  fn new<D: Db>(
+    txn: &mut D::Transaction<'_>,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> Self;
+
+  /// Load a Scheduler from the DB.
+  fn from_db<D: Db>(
+    db: &D,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> io::Result<Self>;
+
+  /// Check if a branch is usable.
+  fn can_use_branch(&self, balance: Balance) -> bool;
+
+  /// Schedule a series of outputs/payments.
+  fn schedule<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    utxos: Vec<N::Output>,
+    payments: Vec<Payment<N>>,
+    key_for_any_change: <N::Curve as Ciphersuite>::G,
+    force_spend: bool,
+  ) -> Vec<Plan<N>>;
+
+  /// Consume all payments still pending within this Scheduler, without scheduling them.
+  fn consume_payments<D: Db>(&mut self, txn: &mut D::Transaction<'_>) -> Vec<Payment<N>>;
+
+  /// Note a branch output as having been created, with the amount it was actually created with,
+  /// or not having been created due to being too small.
+  fn created_output<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    expected: u64,
+    actual: Option<u64>,
+  );
+
+  /// Refund a specific output.
+  fn refund_plan<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    output: N::Output,
+    refund_to: N::Address,
+  ) -> Plan<N>;
+
+  /// Shim the forwarding Plan as necessary to obtain a fee estimate.
+  ///
+  /// If this Scheduler is for a Network which requires forwarding, this must return Some with a
+  /// plan with identical fee behavior. If forwarding isn't necessary, returns None.
+  fn shim_forward_plan(output: N::Output, to: <N::Curve as Ciphersuite>::G) -> Option<Plan<N>>;
+
+  /// Forward a specific output to the new multisig.
+  ///
+  /// Returns None if no forwarding is necessary. Must return Some if forwarding is necessary.
+  fn forward_plan<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    output: N::Output,
+    to: <N::Curve as Ciphersuite>::G,
+  ) -> Option<Plan<N>>;
+}
processor/src/multisigs/scheduler/smart_contract.rs (new file, 208 lines)
@@ -0,0 +1,208 @@
+use std::{io, collections::HashSet};
+
+use ciphersuite::{group::GroupEncoding, Ciphersuite};
+
+use serai_client::primitives::{NetworkId, Coin, Balance};
+
+use crate::{
+  Get, DbTxn, Db, Payment, Plan, create_db,
+  networks::{Output, Network},
+  multisigs::scheduler::{SchedulerAddendum, Scheduler as SchedulerTrait},
+};
+
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub struct Scheduler<N: Network> {
+  key: <N::Curve as Ciphersuite>::G,
+  coins: HashSet<Coin>,
+  rotated: bool,
+}
+
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
+pub enum Addendum<N: Network> {
+  Nonce(u64),
+  RotateTo { nonce: u64, new_key: <N::Curve as Ciphersuite>::G },
+}
+
+impl<N: Network> SchedulerAddendum for Addendum<N> {
+  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
+    let mut kind = [0xff];
+    reader.read_exact(&mut kind)?;
+    match kind[0] {
+      0 => {
+        let mut nonce = [0; 8];
+        reader.read_exact(&mut nonce)?;
+        Ok(Addendum::Nonce(u64::from_le_bytes(nonce)))
+      }
+      1 => {
+        let mut nonce = [0; 8];
+        reader.read_exact(&mut nonce)?;
+        let nonce = u64::from_le_bytes(nonce);
+
+        let new_key = N::Curve::read_G(reader)?;
+        Ok(Addendum::RotateTo { nonce, new_key })
+      }
+      _ => Err(io::Error::other("reading unknown Addendum type"))?,
+    }
+  }
+  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
+    match self {
+      Addendum::Nonce(nonce) => {
+        writer.write_all(&[0])?;
+        writer.write_all(&nonce.to_le_bytes())
+      }
+      Addendum::RotateTo { nonce, new_key } => {
+        writer.write_all(&[1])?;
+        writer.write_all(&nonce.to_le_bytes())?;
+        writer.write_all(new_key.to_bytes().as_ref())
+      }
+    }
+  }
+}
|
||||
create_db! {
|
||||
SchedulerDb {
|
||||
LastNonce: () -> u64,
|
||||
RotatedTo: (key: &[u8]) -> Vec<u8>,
|
||||
}
|
||||
}
|
||||
|
||||
impl<N: Network<Scheduler = Self>> SchedulerTrait<N> for Scheduler<N> {
|
||||
type Addendum = Addendum<N>;
|
||||
|
||||
/// Check if this Scheduler is empty.
|
||||
fn empty(&self) -> bool {
|
||||
self.rotated
|
||||
}
|
||||
|
||||
/// Create a new Scheduler.
|
||||
fn new<D: Db>(
|
||||
_txn: &mut D::Transaction<'_>,
|
||||
key: <N::Curve as Ciphersuite>::G,
|
||||
network: NetworkId,
|
||||
) -> Self {
|
||||
assert!(N::branch_address(key).is_none());
|
||||
assert!(N::change_address(key).is_none());
|
||||
assert!(N::forward_address(key).is_none());
|
||||
|
||||
Scheduler { key, coins: network.coins().iter().copied().collect(), rotated: false }
|
||||
}
|
||||
|
||||
/// Load a Scheduler from the DB.
|
||||
fn from_db<D: Db>(
|
    db: &D,
    key: <N::Curve as Ciphersuite>::G,
    network: NetworkId,
  ) -> io::Result<Self> {
    Ok(Scheduler {
      key,
      coins: network.coins().iter().copied().collect(),
      rotated: RotatedTo::get(db, key.to_bytes().as_ref()).is_some(),
    })
  }

  fn can_use_branch(&self, _balance: Balance) -> bool {
    false
  }

  fn schedule<D: Db>(
    &mut self,
    txn: &mut D::Transaction<'_>,
    utxos: Vec<N::Output>,
    payments: Vec<Payment<N>>,
    key_for_any_change: <N::Curve as Ciphersuite>::G,
    force_spend: bool,
  ) -> Vec<Plan<N>> {
    for utxo in utxos {
      assert!(self.coins.contains(&utxo.balance().coin));
    }

    let mut nonce = LastNonce::get(txn).map_or(0, |nonce| nonce + 1);
    let mut plans = vec![];
    for chunk in payments.as_slice().chunks(N::MAX_OUTPUTS) {
      // Once we rotate, all further payments should be scheduled via the new multisig
      assert!(!self.rotated);
      plans.push(Plan {
        key: self.key,
        inputs: vec![],
        payments: chunk.to_vec(),
        change: None,
        scheduler_addendum: Addendum::Nonce(nonce),
      });
      nonce += 1;
    }

    // If we're supposed to rotate to the new key, create an empty Plan which will signify the key
    // update
    if force_spend && (!self.rotated) {
      plans.push(Plan {
        key: self.key,
        inputs: vec![],
        payments: vec![],
        change: None,
        scheduler_addendum: Addendum::RotateTo { nonce, new_key: key_for_any_change },
      });
      nonce += 1;
      self.rotated = true;
      RotatedTo::set(
        txn,
        self.key.to_bytes().as_ref(),
        &key_for_any_change.to_bytes().as_ref().to_vec(),
      );
    }

    LastNonce::set(txn, &nonce);

    plans
  }

  fn consume_payments<D: Db>(&mut self, _txn: &mut D::Transaction<'_>) -> Vec<Payment<N>> {
    vec![]
  }

  fn created_output<D: Db>(
    &mut self,
    _txn: &mut D::Transaction<'_>,
    _expected: u64,
    _actual: Option<u64>,
  ) {
    panic!("Smart Contract Scheduler created a Branch output")
  }

  /// Refund a specific output.
  fn refund_plan<D: Db>(
    &mut self,
    txn: &mut D::Transaction<'_>,
    output: N::Output,
    refund_to: N::Address,
  ) -> Plan<N> {
    let current_key = RotatedTo::get(txn, self.key.to_bytes().as_ref())
      .and_then(|key_bytes| <N::Curve as Ciphersuite>::read_G(&mut key_bytes.as_slice()).ok())
      .unwrap_or(self.key);

    let nonce = LastNonce::get(txn).map_or(0, |nonce| nonce + 1);
    LastNonce::set(txn, &(nonce + 1));
    Plan {
      key: current_key,
      inputs: vec![],
      payments: vec![Payment { address: refund_to, data: None, balance: output.balance() }],
      change: None,
      scheduler_addendum: Addendum::Nonce(nonce),
    }
  }

  fn shim_forward_plan(_output: N::Output, _to: <N::Curve as Ciphersuite>::G) -> Option<Plan<N>> {
    None
  }

  /// Forward a specific output to the new multisig.
  ///
  /// Returns None if no forwarding is necessary.
  fn forward_plan<D: Db>(
    &mut self,
    _txn: &mut D::Transaction<'_>,
    _output: N::Output,
    _to: <N::Curve as Ciphersuite>::G,
  ) -> Option<Plan<N>> {
    None
  }
}
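The smart-contract Scheduler above replaces UTXO selection with nonce allocation: every Plan, whether a payment batch or the empty key-rotation Plan, consumes the next contract nonce, and the last allocated nonce is persisted (`LastNonce`) so scheduling resumes deterministically. A minimal, self-contained sketch of that pattern, using stand-in types rather than the actual serai-processor code (the real `LastNonce` is a DB entry, modeled here as a plain `Option`):

```rust
// Hedged sketch of nonce-per-Plan allocation; `Addendum` loosely mirrors the
// scheduler addendum used above, simplified to drop the key material.
#[derive(Debug, PartialEq)]
enum Addendum {
  Nonce(u64),
  RotateTo { nonce: u64 },
}

struct NonceAllocator {
  // Stand-in for the LastNonce DB key: None until any nonce has been used
  last: Option<u64>,
}

impl NonceAllocator {
  fn next(&mut self) -> u64 {
    let nonce = self.last.map_or(0, |last| last + 1);
    self.last = Some(nonce);
    nonce
  }
}

fn main() {
  let mut alloc = NonceAllocator { last: None };
  // Two payment batches, then an empty Plan signifying the key update
  let plans = vec![
    Addendum::Nonce(alloc.next()),
    Addendum::Nonce(alloc.next()),
    Addendum::RotateTo { nonce: alloc.next() },
  ];
  assert_eq!(plans[2], Addendum::RotateTo { nonce: 2 });
  println!("{plans:?}");
}
```

Because each Plan is identified by its nonce rather than by inputs consumed, the scheduler itself stays nearly stateless: only the last nonce and the rotation flag need to survive a restart.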
@@ -5,16 +5,17 @@ use std::{
 
 use ciphersuite::{group::GroupEncoding, Ciphersuite};
 
-use serai_client::primitives::{Coin, Amount, Balance};
+use serai_client::primitives::{NetworkId, Coin, Amount, Balance};
 
 use crate::{
-  networks::{OutputType, Output, Network},
+  networks::{OutputType, Output, Network, UtxoNetwork},
   DbTxn, Db, Payment, Plan,
+  multisigs::scheduler::Scheduler as SchedulerTrait,
 };
 
-/// Stateless, deterministic output/payment manager.
-#[derive(PartialEq, Eq, Debug)]
-pub struct Scheduler<N: Network> {
+/// Deterministic output/payment manager.
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub struct Scheduler<N: UtxoNetwork> {
   key: <N::Curve as Ciphersuite>::G,
   coin: Coin,
 
@@ -46,7 +47,7 @@ fn scheduler_key<D: Db, G: GroupEncoding>(key: &G) -> Vec<u8> {
   D::key(b"SCHEDULER", b"scheduler", key.to_bytes())
 }
 
-impl<N: Network> Scheduler<N> {
+impl<N: UtxoNetwork<Scheduler = Self>> Scheduler<N> {
   pub fn empty(&self) -> bool {
     self.queued_plans.is_empty() &&
       self.plans.is_empty() &&
@@ -144,8 +145,18 @@ impl<N: Network> Scheduler<N> {
   pub fn new<D: Db>(
     txn: &mut D::Transaction<'_>,
     key: <N::Curve as Ciphersuite>::G,
-    coin: Coin,
+    network: NetworkId,
   ) -> Self {
+    assert!(N::branch_address(key).is_some());
+    assert!(N::change_address(key).is_some());
+    assert!(N::forward_address(key).is_some());
+
+    let coin = {
+      let coins = network.coins();
+      assert_eq!(coins.len(), 1);
+      coins[0]
+    };
+
     let res = Scheduler {
       key,
       coin,
@@ -159,7 +170,17 @@ impl<N: Network> Scheduler<N> {
     res
   }
 
-  pub fn from_db<D: Db>(db: &D, key: <N::Curve as Ciphersuite>::G, coin: Coin) -> io::Result<Self> {
+  pub fn from_db<D: Db>(
+    db: &D,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> io::Result<Self> {
+    let coin = {
+      let coins = network.coins();
+      assert_eq!(coins.len(), 1);
+      coins[0]
+    };
+
     let scheduler = db.get(scheduler_key::<D, _>(&key)).unwrap_or_else(|| {
       panic!("loading scheduler from DB without scheduler for {}", hex::encode(key.to_bytes()))
     });
@@ -201,7 +222,7 @@ impl<N: Network> Scheduler<N> {
       amount
     };
 
-    let branch_address = N::branch_address(self.key);
+    let branch_address = N::branch_address(self.key).unwrap();
 
     // If we have more payments than we can handle in a single TX, create plans for them
     // TODO2: This isn't perfect. For 258 outputs, and a MAX_OUTPUTS of 16, this will create:
@@ -237,7 +258,8 @@ impl<N: Network> Scheduler<N> {
       key: self.key,
       inputs,
       payments,
-      change: Some(N::change_address(key_for_any_change)).filter(|_| change),
+      change: Some(N::change_address(key_for_any_change).unwrap()).filter(|_| change),
+      scheduler_addendum: (),
     }
   }
 
@@ -305,7 +327,7 @@ impl<N: Network> Scheduler<N> {
       its *own* branch address, since created_output is called on the signer's Scheduler.
     */
     {
-      let branch_address = N::branch_address(self.key);
+      let branch_address = N::branch_address(self.key).unwrap();
       payments =
         payments.drain(..).filter(|payment| payment.address != branch_address).collect::<Vec<_>>();
     }
@@ -357,7 +379,8 @@ impl<N: Network> Scheduler<N> {
         key: self.key,
         inputs: chunk,
         payments: vec![],
-        change: Some(N::change_address(key_for_any_change)),
+        change: Some(N::change_address(key_for_any_change).unwrap()),
+        scheduler_addendum: (),
       })
     }
 
@@ -403,7 +426,8 @@ impl<N: Network> Scheduler<N> {
         key: self.key,
         inputs: self.utxos.drain(..).collect::<Vec<_>>(),
         payments: vec![],
-        change: Some(N::change_address(key_for_any_change)),
+        change: Some(N::change_address(key_for_any_change).unwrap()),
+        scheduler_addendum: (),
       });
     }
 
@@ -435,9 +459,6 @@ impl<N: Network> Scheduler<N> {
 
   // Note a branch output as having been created, with the amount it was actually created with,
   // or not having been created due to being too small
-  // This can be called whenever, so long as it's properly ordered
-  // (it's independent to Serai/the chain we're scheduling over, yet still expects outputs to be
-  // created in the same order Plans are returned in)
   pub fn created_output<D: Db>(
     &mut self,
     txn: &mut D::Transaction<'_>,
@@ -501,3 +522,106 @@ impl<N: Network> Scheduler<N> {
     txn.put(scheduler_key::<D, _>(&self.key), self.serialize());
   }
 }
+
+impl<N: UtxoNetwork<Scheduler = Self>> SchedulerTrait<N> for Scheduler<N> {
+  type Addendum = ();
+
+  /// Check if this Scheduler is empty.
+  fn empty(&self) -> bool {
+    Scheduler::empty(self)
+  }
+
+  /// Create a new Scheduler.
+  fn new<D: Db>(
+    txn: &mut D::Transaction<'_>,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> Self {
+    Scheduler::new::<D>(txn, key, network)
+  }
+
+  /// Load a Scheduler from the DB.
+  fn from_db<D: Db>(
+    db: &D,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> io::Result<Self> {
+    Scheduler::from_db::<D>(db, key, network)
+  }
+
+  /// Check if a branch is usable.
+  fn can_use_branch(&self, balance: Balance) -> bool {
+    Scheduler::can_use_branch(self, balance)
+  }
+
+  /// Schedule a series of outputs/payments.
+  fn schedule<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    utxos: Vec<N::Output>,
+    payments: Vec<Payment<N>>,
+    key_for_any_change: <N::Curve as Ciphersuite>::G,
+    force_spend: bool,
+  ) -> Vec<Plan<N>> {
+    Scheduler::schedule::<D>(self, txn, utxos, payments, key_for_any_change, force_spend)
+  }
+
+  /// Consume all payments still pending within this Scheduler, without scheduling them.
+  fn consume_payments<D: Db>(&mut self, txn: &mut D::Transaction<'_>) -> Vec<Payment<N>> {
+    Scheduler::consume_payments::<D>(self, txn)
+  }
+
+  /// Note a branch output as having been created, with the amount it was actually created with,
+  /// or not having been created due to being too small.
+  // TODO: Move this to Balance.
+  fn created_output<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    expected: u64,
+    actual: Option<u64>,
+  ) {
+    Scheduler::created_output::<D>(self, txn, expected, actual)
+  }
+
+  fn refund_plan<D: Db>(
+    &mut self,
+    _: &mut D::Transaction<'_>,
+    output: N::Output,
+    refund_to: N::Address,
+  ) -> Plan<N> {
+    Plan {
+      key: output.key(),
+      // Uses a payment as this will still be successfully sent due to fee amortization,
+      // and because change is currently always a Serai key
+      payments: vec![Payment { address: refund_to, data: None, balance: output.balance() }],
+      inputs: vec![output],
+      change: None,
+      scheduler_addendum: (),
+    }
+  }
+
+  fn shim_forward_plan(output: N::Output, to: <N::Curve as Ciphersuite>::G) -> Option<Plan<N>> {
+    Some(Plan {
+      key: output.key(),
+      payments: vec![Payment {
+        address: N::forward_address(to).unwrap(),
+        data: None,
+        balance: output.balance(),
+      }],
+      inputs: vec![output],
+      change: None,
+      scheduler_addendum: (),
+    })
+  }
+
+  fn forward_plan<D: Db>(
+    &mut self,
+    _: &mut D::Transaction<'_>,
+    output: N::Output,
+    to: <N::Curve as Ciphersuite>::G,
+  ) -> Option<Plan<N>> {
+    assert_eq!(self.key, output.key());
+    // Call shim as shim returns the actual
+    Self::shim_forward_plan(output, to)
+  }
+}
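The diff's core modularization is that a network names its Scheduler as an associated type, and each Scheduler names the per-Plan state it needs as an associated `Addendum` (unit for the UTXO scheduler, a nonce for the smart-contract one). A hedged, self-contained sketch of that pattern with simplified stand-in names, not the actual serai-processor API:

```rust
// Each Network picks its Scheduler; each Scheduler picks its per-Plan Addendum.
trait Network: Sized {
  type Scheduler: Scheduler<Self>;
}

trait Scheduler<N: Network>: Sized {
  // Scheduler-specific state attached to every Plan
  type Addendum: core::fmt::Debug + PartialEq;
  fn addendum_for(&mut self) -> Self::Addendum;
}

struct UtxoNet;
struct UtxoScheduler;
impl Network for UtxoNet {
  type Scheduler = UtxoScheduler;
}
impl Scheduler<UtxoNet> for UtxoScheduler {
  type Addendum = (); // UTXO Plans carry no extra state
  fn addendum_for(&mut self) {}
}

struct ContractNet;
struct ContractScheduler {
  next_nonce: u64,
}
impl Network for ContractNet {
  type Scheduler = ContractScheduler;
}
impl Scheduler<ContractNet> for ContractScheduler {
  type Addendum = u64; // each Plan consumes a contract nonce
  fn addendum_for(&mut self) -> u64 {
    let nonce = self.next_nonce;
    self.next_nonce += 1;
    nonce
  }
}

fn main() {
  let mut sc = ContractScheduler { next_nonce: 0 };
  assert_eq!(sc.addendum_for(), 0);
  assert_eq!(sc.addendum_for(), 1);
  let mut utxo = UtxoScheduler;
  assert_eq!(utxo.addendum_for(), ());
  println!("ok");
}
```

This is why the commit message notes the smart-contract scheduler no longer has to cram its nonce into `N::Output`: the addendum type carries it instead, while the processor stays generic over whichever scheduling strategy the network uses.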