Add support for multiple multisigs to the processor (#377)

* Design and document a multisig rotation flow

* Make Scanner::eventualities a HashMap so it's per-key

* Don't drop eventualities, always follow through on them

Technical improvements made along the way.

* Start creating an isolate object to manage multisigs, which doesn't require being a signer

Removes key from SubstrateBlock.

* Move Scanner/Scheduler under multisigs

* Move Batch construction into MultisigManager

* Clarify "should" in Multisig Rotation docs

* Add block_number to MultisigManager, as it controls the scanner

* Move sign_plans into MultisigManager

Removes ThresholdKeys from prepare_send.

* Make SubstrateMutable an alias for MultisigManager

* Rewrite Multisig Rotation

The prior scheme had an exploit possible where funds were sent to the old
multisig, then burnt on Serai to send from the new multisig, locking liquidity
for 6 hours. While a fee could be applied to stragglers to make this attack
unprofitable, the newly described scheme avoids all this.

* Add mini

mini is a miniature version of Serai, emphasizing Serai's nature as a
collection of independent clocks. The intended use is to identify race
conditions and prove protocols are comprehensive regarding when certain clocks
tick.

This uses loom, a prior candidate for evaluating the processor/coordinator as
free of race conditions (#361).
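As a minimal, self-contained sketch of the idea (not mini's or loom's actual API): exhaustively enumerate every ordering in which two independent clocks can tick, so an invariant can be checked under each interleaving.

```rust
// Sketch: enumerate all interleavings of `a` ticks of clock A and `b` ticks
// of clock B. A model checker like loom/mini runs the protocol under each.
fn interleavings(a: usize, b: usize) -> Vec<Vec<char>> {
  if (a == 0) && (b == 0) {
    return vec![vec![]];
  }
  let mut res = vec![];
  if a > 0 {
    for mut rest in interleavings(a - 1, b) {
      rest.insert(0, 'A');
      res.push(rest);
    }
  }
  if b > 0 {
    for mut rest in interleavings(a, b - 1) {
      rest.insert(0, 'B');
      res.push(rest);
    }
  }
  res
}

fn main() {
  // Two ticks of each clock yield C(4, 2) = 6 distinct orderings.
  assert_eq!(interleavings(2, 2).len(), 6);
}
```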

* Use mini to prove a race condition in the current multisig rotation docs, and prove safety of alternatives

Technically, the prior commit had mini prove the race condition.

The docs currently say the activation block of the new multisig is the block
after the next Batch's. If the two next Batches had already entered the
mempool prior to set_keys being called, the second next Batch would be
expected to contain the new key's data, yet fail to, as the key wasn't public
when the Batch was actually created.

The naive solution is to create a Batch, publish it, wait until it's included,
and only then scan the next block. This sets a bound of
`Batch publication time < block time`. Optimistically, we can publish a Batch
in 24s while our shortest block time is 2m. Accordingly, we should be fine with
the naive solution which doesn't take advantage of throughput. #333 may
significantly change latency however and require an algorithm whose throughput
exceeds the rate of blocks created.

In order to re-introduce parallelization, enabling throughput, we need to
define a safe range of blocks to scan without Serai ordering the first one.
mini demonstrates safety of scanning n blocks Serai hasn't acknowledged, so
long as the first is scanned before block n+1 is (shifting the n-block window).
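The window described above can be sketched as follows (hypothetical names, not the processor's actual API): the scanner may run up to `n` blocks past the last block Serai acknowledged, and may only advance further once the acknowledgement does.

```rust
// Sketch: an n-block scanning window over unacknowledged blocks.
struct Scanner {
  acked: u64,   // highest block number Serai has acknowledged
  scanned: u64, // highest block number scanned locally
  n: u64,       // window size: blocks we may scan without acknowledgement
}

impl Scanner {
  fn can_scan(&self, block: u64) -> bool {
    // Blocks are scanned in order, and only within the window.
    (block == self.scanned + 1) && (block <= self.acked + self.n)
  }
  fn scan(&mut self, block: u64) {
    assert!(self.can_scan(block), "scanner ran ahead of the safe window");
    self.scanned = block;
  }
  fn ack(&mut self, block: u64) {
    assert!(block <= self.scanned, "acknowledged a block which wasn't scanned");
    self.acked = self.acked.max(block);
  }
}

fn main() {
  let mut s = Scanner { acked: 0, scanned: 0, n: 3 };
  // We can scan blocks 1..=3 without any acknowledgement...
  for b in 1 ..= 3 {
    s.scan(b);
  }
  // ...yet block 4 requires Serai to have acknowledged block 1, shifting the
  // window forward.
  assert!(!s.can_scan(4));
  s.ack(1);
  assert!(s.can_scan(4));
}
```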

The docs will be updated next, to reflect this.

* Fix Multisig Rotation

I believe this is finally good enough to be final.

1) Fixes the race condition present in the prior document, as demonstrated by
mini.

`Batch`s for blocks `n` and `n+1` may have been in the mempool when a
multisig's activation block was set to `n`. This would cause a potentially
distinct `Batch` for `n+1`, despite `n+1` already having a signed `Batch`.

2) Tightens when UIs should use the new multisig to prevent eclipse attacks,
and adds protection against `Batch` publication delays.

3) Removes liquidity fragmentation by tightening flow/handling of latency.

4) Several clarifications and documentation of reasoning.

5) Correction of "prior multisig" to "all prior multisigs" regarding historical
verification, with explanation why.

* Clarify terminology in mini

Synchronizes it from my original thoughts on potential schema to the design
actually created.

* Remove most of processor's README for a reference to docs/processor

This does drop some misc commentary, though none too beneficial. The section on
scanning, deemed sufficiently beneficial, has been moved to a document and
expanded on.

* Update scanner TODOs in line with new docs

* Correct documentation on Bitcoin::Block::time, and Block::time

* Make the scanner in MultisigManager no longer public

* Always send ConfirmKeyPair, regardless of if in-set

* Cargo.lock changes from a prior commit

* Add a policy document on defining a Canonical Chain

I accidentally committed a version of this with a few headers earlier, and this
is a proper version.

* Competent MultisigManager::new

* Update processor's comments

* Add mini to copied files

* Re-organize Scanner per multisig rotation document

* Add RUST_LOG trace targets to e2e tests

* Have the scanner wait once it gets too far ahead

Also bug fixes.

* Add activation blocks to the scanner

* Split received outputs into existing/new in MultisigManager

* Select the proper scheduler

* Schedule multisig activation as detailed in documentation

* Have the Coordinator assert if multiple `Batch`s occur within a block

While the processor used to have ack_up_to_block, enabling skips in the block
acked, support for this was removed while reworking it for multiple multisigs.
Such skips should happen extremely infrequently.

While skip support would still be beneficial if multiple `Batch`s could occur
within a block, multiple `Batch`s were blocked for DoS reasons (with the
complexity here not being worth adding that ban as a policy).
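The assertion described can be sketched as (hypothetical names):

```rust
// Sketch: the Coordinator tracks the last block which produced a `Batch` and
// asserts each new `Batch` is for a distinct block.
struct Coordinator {
  last_batch_block: Option<u64>,
}

impl Coordinator {
  fn handle_batch(&mut self, block: u64) {
    assert!(
      self.last_batch_block != Some(block),
      "multiple `Batch`s occurred within a block",
    );
    self.last_batch_block = Some(block);
  }
}

fn main() {
  let mut c = Coordinator { last_batch_block: None };
  c.handle_batch(5);
  c.handle_batch(6); // fine: a distinct block
  assert_eq!(c.last_batch_block, Some(6));
}
```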

* Schedule payments to the proper multisig

* Correct >= to <

* Use the new multisig's key for change on schedule

* Don't report External TXs to prior multisig once deprecated

* Forward from the old multisig to the new one at all opportunities

* Move unfulfilled payments in queue from prior to new multisig

* Create MultisigsDb, splitting it out of MainDb

Drops the call to finish_signing from the Signer. While this will cause endless
re-attempts, the Signer will still consider them completed and drop them,
making this an O(n) cost at boot even if we did nothing from here.

The MultisigManager should call finish_signing once the Scanner completes the
Eventuality.
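This flow can be sketched as (hypothetical names, not the actual MultisigManager API): the manager tracks which plans are being signed and fires finish_signing only when the Scanner reports the Eventuality complete.

```rust
use std::collections::HashSet;

// Sketch: the Signer no longer calls finish_signing itself; the manager does
// so once the Scanner completes the corresponding Eventuality.
struct MultisigManager {
  signing: HashSet<[u8; 32]>, // plan IDs currently being signed
}

impl MultisigManager {
  // Called when the Scanner reports an Eventuality as completed. Returns true
  // if this plan was being signed, meaning finish_signing should now fire.
  fn eventuality_completed(&mut self, plan_id: [u8; 32]) -> bool {
    self.signing.remove(&plan_id)
  }
}

fn main() {
  let mut m = MultisigManager { signing: HashSet::from([[0; 32]]) };
  assert!(m.eventuality_completed([0; 32]));  // was signing: finish_signing
  assert!(!m.eventuality_completed([0; 32])); // already finished: no-op
}
```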

* Don't check Scanner-emitted completions, trust they are completions

Prevents needing to use async code to mark the completion and creates a
fault-free model. The current model, on fault, would cause a lack of marked
completion in the signer.

* Fix a possible panic in the processor

A shorter-chain reorg could cause this assert to trip. It's fixed by
de-duplicating the data, as the assertion checked consistency. Without the
potential for inconsistency, it's unnecessary.

* Document why an existing TODO isn't valid

* Change when we drop payments for being to the change address

The earlier timing prevents creating Plans solely to the branch address, where
the payments would be dropped and the TX would become an effective aggregation
TX.

* Extensively document solutions to Eventualities being potentially created after having already scanned their resolutions

* When closing, drop External/Branch outputs which don't cause progress

* Properly decide if Change outputs should be forward or not when closing

This completes all code needed to make the old multisig have a finite lifetime.

* Commentary on forwarding schemes

* Provide a 1 block window, with liquidity fragmentation risks, due to latency

On Bitcoin, this gives 10 minutes for the relevant Batch to be confirmed. On
Monero, 2 minutes. On Ethereum, ~6 minutes.

Also updates the Multisig Rotation document with the new forwarding plan.

* Implement transaction forwarding from old multisig to new multisig

Identifies a fault where Branch outputs which shouldn't be dropped may be, if
another output fulfills their next step. Locking Branch fulfillment down to
only Branch outputs is not done in this commit, but will be in the next.

* Only let Branch outputs fulfill branches

* Update TODOs

* Move the location of handling signer events to avoid a race condition

* Avoid a deadlock by using a RwLock on a single txn instead of two txns

* Move Batch ID out of the Scanner

* Increase from one block of latency on new keys activation to two

For Monero, this offered just two minutes when our latency to publish a Batch
is around a minute already. This does increase the time our liquidity can be
fragmented by up to 20 minutes (Bitcoin), yet it's a stupid attack only
possible once a week (when we rotate). Prioritizing normal users' transactions
not being subject to forwarding is more important here.

Ideally, we'd not add +2 blocks but rather a `time`-based offset, such as
+10 minutes, making this agnostic of the underlying network's block
scheduling. That's a complexity not worth it here.

* Split MultisigManager::substrate_block into multiple functions

* Further tweaks to substrate_block

* Acquire a lock on all Scanner operations after calling ack_block

Gives time to call register_eventuality and initiate signing.

* Merge sign_plans into substrate_block

Also ensure the Scanner's lock isn't prematurely released.

* Use a HashMap to pass to-be-forwarded instructions, not the DB

* Successfully determine in ClosingExisting

* Move from 2 blocks of latency when rotating to 10 minutes

Superior as noted in 6d07af92ce10cfd74c17eb3400368b0150eb36d7, now trivial to
implement thanks to the prior commit.

* Add note justifying measuring time in blocks when rotating
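As a worked example of measuring the window in blocks, a sketch converting a 10-minute target into a per-network block count, using the estimated block times added in this commit's diff (600s for Bitcoin, 120s for Monero):

```rust
// Sketch: convert a wall-clock window into a block count, rounding up so the
// window is never shorter than the target.
fn window_in_blocks(window_secs: usize, block_time_secs: usize) -> usize {
  (window_secs + block_time_secs - 1) / block_time_secs
}

fn main() {
  const TEN_MINUTES: usize = 10 * 60;
  assert_eq!(window_in_blocks(TEN_MINUTES, 600), 1); // Bitcoin
  assert_eq!(window_in_blocks(TEN_MINUTES, 120), 5); // Monero
}
```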

* Implement delaying of outputs received early to the new multisig per specification

* Documentation on why Branch outputs don't have the race condition concerns Change do

Also ensures 6 hours is at least N::CONFIRMATIONS, for sanity purposes.

* Remove TODO re: sanity checking Eventualities

We sanity check the Plan the Eventuality is derived from, and the Eventuality
is handled moments later (in the same file, with a clear call path). There's no
reason to add such APIs to Eventualities for a sanity check given that.

* Add TODO(now) for TODOs which must be done in this branch

Also deprecates a pair of TODOs to TODO2, and accepts the flow of the Signer
having the Eventuality.

* Correct errors in potential/future flow descriptions

* Accept having a single Plan Vec

Per the following code consuming it, there's no benefit to bifurcating it by
key.

* Only issue sign_transaction on boot for the proper signer

* Only set keys when participating in their construction

* Misc progress

Only send SubstrateBlockAck when we have a signer, as it's only used to tell
the Tributary of what Plans are being signed in response to this block.

Only immediately sets substrate_signer if session is 0.

On boot, doesn't panic if we don't have an active key (as we wouldn't if only
joining the next multisig). Continues.

* Correctly detect and set retirement block

Modifies the retirement block from the first block meeting requirements to the
block CONFIRMATIONS after it.

Adds an ack flow to the Scanner's Confirmed event and Block event to accomplish
this, which may deadlock at this time (will be fixed shortly).

Removes an invalid await (after a point declared unsafe to use await) from
MultisigManager::next_event.

* Remove deadlock in multisig_completed and document alternative

The alternative is simpler, albeit less efficient. There's no reason to adopt
it now, yet perhaps if it benefits modeling?

* Handle the final step of retirement, dropping the old key and setting new to existing

* Remove TODO about emitting a Block on every step

If we emit on NewAsChange, we lose the purpose of the NewAsChange period.

The only concern is if we reach ClosingExisting, and nothing has happened, then
all coins will still be in the old multisig until something finally does. This
isn't a problem worth solving, as it's latency under exceptional dead time.

* Add TODO about potentially not emitting a Block event for the retirement block

* Restore accidentally deleted CI file

* Pair of slight tweaks

* Add missing if statement

* Disable an assertion when testing

One of the test flows currently abuses the Scanner in a way triggering it.

This commit is contained in:
Luke Parker
2023-09-25 09:48:15 -04:00, committed by GitHub
parent fe19e8246e
commit ca69f97fef
50 changed files with 3490 additions and 1336 deletions


@@ -15,6 +15,7 @@ use tokio::time::sleep;
use bitcoin_serai::{
bitcoin::{
hashes::Hash as HashTrait,
key::{Parity, XOnlyPublicKey},
consensus::{Encodable, Decodable},
script::Instruction,
address::{NetworkChecked, Address as BAddress},
@@ -45,8 +46,9 @@ use serai_client::{
use crate::{
networks::{
NetworkError, Block as BlockTrait, OutputType, Output as OutputTrait,
Transaction as TransactionTrait, Eventuality as EventualityTrait, EventualitiesTracker,
PostFeeBranch, Network, drop_branches, amortize_fee,
Transaction as TransactionTrait, SignableTransaction as SignableTransactionTrait,
Eventuality as EventualityTrait, EventualitiesTracker, PostFeeBranch, Network, drop_branches,
amortize_fee,
},
Plan,
};
@@ -76,7 +78,7 @@ pub struct Output {
data: Vec<u8>,
}
impl OutputTrait for Output {
impl OutputTrait<Bitcoin> for Output {
type Id = OutputId;
fn kind(&self) -> OutputType {
@@ -97,6 +99,24 @@ impl OutputTrait for Output {
res
}
fn tx_id(&self) -> [u8; 32] {
let mut hash = *self.output.outpoint().txid.as_raw_hash().as_byte_array();
hash.reverse();
hash
}
fn key(&self) -> ProjectivePoint {
let script = &self.output.output().script_pubkey;
assert!(script.is_v1_p2tr());
let Instruction::PushBytes(key) = script.instructions_minimal().last().unwrap().unwrap() else {
panic!("last item in v1 Taproot script wasn't bytes")
};
let key = XOnlyPublicKey::from_slice(key.as_ref())
.expect("last item in v1 Taproot script wasn't x-only public key");
Secp256k1::read_G(&mut key.public_key(Parity::Even).serialize().as_slice()).unwrap() -
(ProjectivePoint::GENERATOR * self.output.offset())
}
fn balance(&self) -> Balance {
Balance { coin: SeraiCoin::Bitcoin, amount: Amount(self.output.value()) }
}
@@ -196,7 +216,6 @@ impl EventualityTrait for Eventuality {
#[derive(Clone, Debug)]
pub struct SignableTransaction {
keys: ThresholdKeys<Secp256k1>,
transcript: RecommendedTranscript,
actual: BSignableTransaction,
}
@@ -206,6 +225,11 @@ impl PartialEq for SignableTransaction {
}
}
impl Eq for SignableTransaction {}
impl SignableTransactionTrait for SignableTransaction {
fn fee(&self) -> u64 {
self.actual.fee()
}
}
impl BlockTrait<Bitcoin> for Block {
type Id = [u8; 32];
@@ -221,6 +245,8 @@ impl BlockTrait<Bitcoin> for Block {
hash
}
// TODO: Don't use this block's time, use the network time at this block
// TODO: Confirm network time is monotonic, enabling its usage here
fn time(&self) -> u64 {
self.header.time.into()
}
@@ -231,7 +257,7 @@ impl BlockTrait<Bitcoin> for Block {
}
}
const KEY_DST: &[u8] = b"Bitcoin Key";
const KEY_DST: &[u8] = b"Serai Bitcoin Output Offset";
lazy_static::lazy_static! {
static ref BRANCH_OFFSET: Scalar = Secp256k1::hash_to_F(KEY_DST, b"branch");
static ref CHANGE_OFFSET: Scalar = Secp256k1::hash_to_F(KEY_DST, b"change");
@@ -313,6 +339,7 @@ impl Network for Bitcoin {
const NETWORK: NetworkId = NetworkId::Bitcoin;
const ID: &'static str = "Bitcoin";
const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize = 600;
const CONFIRMATIONS: usize = 6;
// 0.0001 BTC, 10,000 satoshis
@@ -348,6 +375,11 @@ impl Network for Bitcoin {
Self::address(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Branch]))
}
fn change_address(key: ProjectivePoint) -> Self::Address {
let (_, offsets, _) = scanner(key);
Self::address(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Change]))
}
async fn get_latest_block_number(&self) -> Result<usize, NetworkError> {
self.rpc.get_latest_block_number().await.map_err(|_| NetworkError::ConnectionError)
}
@@ -358,11 +390,7 @@ impl Network for Bitcoin {
self.rpc.get_block(&block_hash).await.map_err(|_| NetworkError::ConnectionError)
}
async fn get_outputs(
&self,
block: &Self::Block,
key: ProjectivePoint,
) -> Result<Vec<Self::Output>, NetworkError> {
async fn get_outputs(&self, block: &Self::Block, key: ProjectivePoint) -> Vec<Self::Output> {
let (scanner, _, kinds) = scanner(key);
let mut outputs = vec![];
@@ -390,18 +418,20 @@ impl Network for Bitcoin {
};
data.truncate(MAX_DATA_LEN.try_into().unwrap());
outputs.push(Output { kind, output, data })
let output = Output { kind, output, data };
assert_eq!(output.tx_id(), tx.id());
outputs.push(output);
}
}
Ok(outputs)
outputs
}
async fn get_eventuality_completions(
&self,
eventualities: &mut EventualitiesTracker<Eventuality>,
block: &Self::Block,
) -> HashMap<[u8; 32], [u8; 32]> {
) -> HashMap<[u8; 32], (usize, Transaction)> {
let mut res = HashMap::new();
if eventualities.map.is_empty() {
return res;
@@ -410,7 +440,7 @@ impl Network for Bitcoin {
async fn check_block(
eventualities: &mut EventualitiesTracker<Eventuality>,
block: &Block,
res: &mut HashMap<[u8; 32], [u8; 32]>,
res: &mut HashMap<[u8; 32], (usize, Transaction)>,
) {
for tx in &block.txdata[1 ..] {
let input = &tx.input[0].previous_output;
@@ -430,7 +460,7 @@ impl Network for Bitcoin {
"dishonest multisig spent input on distinct set of outputs"
);
res.insert(plan, tx.id());
res.insert(plan, (eventualities.block_number, tx.clone()));
}
}
@@ -476,7 +506,6 @@ impl Network for Bitcoin {
async fn prepare_send(
&self,
keys: ThresholdKeys<Secp256k1>,
_: usize,
mut plan: Plan<Self>,
fee: Fee,
@@ -497,10 +526,7 @@ impl Network for Bitcoin {
match BSignableTransaction::new(
plan.inputs.iter().map(|input| input.output.clone()).collect(),
&payments,
plan.change.map(|key| {
let (_, offsets, _) = scanner(key);
Self::address(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Change])).0
}),
plan.change.as_ref().map(|change| change.0.clone()),
None,
fee.0,
) {
@@ -544,7 +570,7 @@ impl Network for Bitcoin {
Ok((
Some((
SignableTransaction { keys, transcript: plan.transcript(), actual: signable },
SignableTransaction { transcript: plan.transcript(), actual: signable },
Eventuality { plan_binding_input, outputs },
)),
branch_outputs,
@@ -553,13 +579,14 @@ impl Network for Bitcoin {
async fn attempt_send(
&self,
keys: ThresholdKeys<Self::Curve>,
transaction: Self::SignableTransaction,
) -> Result<Self::TransactionMachine, NetworkError> {
Ok(
transaction
.actual
.clone()
.multisig(transaction.keys.clone(), transaction.transcript)
.multisig(keys.clone(), transaction.transcript)
.expect("used the wrong keys"),
)
}


@@ -1,4 +1,4 @@
use core::fmt::Debug;
use core::{fmt::Debug, time::Duration};
use std::{io, collections::HashMap};
use async_trait::async_trait;
@@ -12,6 +12,10 @@ use frost::{
use serai_client::primitives::{NetworkId, Balance};
use log::error;
use tokio::time::sleep;
#[cfg(feature = "bitcoin")]
pub mod bitcoin;
#[cfg(feature = "bitcoin")]
@@ -90,14 +94,17 @@ impl OutputType {
}
}
pub trait Output: Send + Sync + Sized + Clone + PartialEq + Eq + Debug {
pub trait Output<N: Network>: Send + Sync + Sized + Clone + PartialEq + Eq + Debug {
type Id: 'static + Id;
fn kind(&self) -> OutputType;
fn id(&self) -> Self::Id;
fn tx_id(&self) -> <N::Transaction as Transaction<N>>::Id;
fn key(&self) -> <N::Curve as Ciphersuite>::G;
fn balance(&self) -> Balance;
// TODO: Remove this?
fn amount(&self) -> u64 {
self.balance().amount.0
}
@@ -117,6 +124,10 @@ pub trait Transaction<N: Network>: Send + Sync + Sized + Clone + Debug {
async fn fee(&self, network: &N) -> u64;
}
pub trait SignableTransaction: Send + Sync + Clone + Debug {
fn fee(&self) -> u64;
}
pub trait Eventuality: Send + Sync + Clone + Debug {
fn lookup(&self) -> Vec<u8>;
@@ -172,10 +183,11 @@ impl<E: Eventuality> Default for EventualitiesTracker<E> {
}
pub trait Block<N: Network>: Send + Sync + Sized + Clone + Debug {
// This is currently bounded to being 32-bytes.
// This is currently bounded to being 32 bytes.
type Id: 'static + Id;
fn id(&self) -> Self::Id;
fn parent(&self) -> Self::Id;
// The monotonic network time at this block.
fn time(&self) -> u64;
fn median_fee(&self) -> N::Fee;
}
@@ -275,9 +287,9 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
/// The type containing all information on a scanned output.
// This is almost certainly distinct from the network's native output type.
type Output: Output;
type Output: Output<Self>;
/// The type containing all information on a planned transaction, waiting to be signed.
type SignableTransaction: Send + Sync + Clone + Debug;
type SignableTransaction: SignableTransaction;
/// The type containing all information to check if a plan was completed.
///
/// This must be binding to both the outputs expected and the plan ID.
@@ -302,6 +314,8 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
const NETWORK: NetworkId;
/// String ID for this network.
const ID: &'static str;
/// The estimated amount of time a block will take.
const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize;
/// The amount of confirmations required to consider a block 'final'.
const CONFIRMATIONS: usize;
/// The maximum amount of inputs which will fit in a TX.
@@ -322,8 +336,9 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
/// Address for the given group key to receive external coins to.
fn address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
/// Address for the given group key to use for scheduled branches.
// This is purely used for debugging purposes. Any output may be used to execute a branch.
fn branch_address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
/// Address for the given group key to use for change.
fn change_address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
/// Get the latest block's number.
async fn get_latest_block_number(&self) -> Result<usize, NetworkError>;
@@ -334,24 +349,26 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
&self,
block: &Self::Block,
key: <Self::Curve as Ciphersuite>::G,
) -> Result<Vec<Self::Output>, NetworkError>;
) -> Vec<Self::Output>;
/// Get the registered eventualities completed within this block, and any prior blocks which
/// registered eventualities may have been completed in.
///
/// This will panic if not fed a new block.
/// This may panic if not fed a block greater than the tracker's block number.
// TODO: get_eventuality_completions_internal + provided get_eventuality_completions for common
// code
async fn get_eventuality_completions(
&self,
eventualities: &mut EventualitiesTracker<Self::Eventuality>,
block: &Self::Block,
) -> HashMap<[u8; 32], <Self::Transaction as Transaction<Self>>::Id>;
) -> HashMap<[u8; 32], (usize, Self::Transaction)>;
/// Prepare a SignableTransaction for a transaction.
///
/// Returns None for the transaction if the SignableTransaction was dropped due to lack of value.
#[rustfmt::skip]
async fn prepare_send(
&self,
keys: ThresholdKeys<Self::Curve>,
block_number: usize,
plan: Plan<Self>,
fee: Self::Fee,
@@ -363,6 +380,7 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
/// Attempt to sign a SignableTransaction.
async fn attempt_send(
&self,
keys: ThresholdKeys<Self::Curve>,
transaction: Self::SignableTransaction,
) -> Result<Self::TransactionMachine, NetworkError>;
@@ -396,3 +414,35 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
#[cfg(test)]
async fn test_send(&self, key: Self::Address) -> Self::Block;
}
// TODO: Move into above trait
pub async fn get_latest_block_number<N: Network>(network: &N) -> usize {
loop {
match network.get_latest_block_number().await {
Ok(number) => {
return number;
}
Err(e) => {
error!(
"couldn't get the latest block number in main's error-free get_block. {} {}",
"this should only happen if the node is offline. error: ", e
);
sleep(Duration::from_secs(10)).await;
}
}
}
}
pub async fn get_block<N: Network>(network: &N, block_number: usize) -> N::Block {
loop {
match network.get_block(block_number).await {
Ok(block) => {
return block;
}
Err(e) => {
error!("couldn't get block {block_number} in main's error-free get_block. error: {}", e);
sleep(Duration::from_secs(10)).await;
}
}
}
}


@@ -37,8 +37,9 @@ use crate::{
Payment, Plan, additional_key,
networks::{
NetworkError, Block as BlockTrait, OutputType, Output as OutputTrait,
Transaction as TransactionTrait, Eventuality as EventualityTrait, EventualitiesTracker,
PostFeeBranch, Network, drop_branches, amortize_fee,
Transaction as TransactionTrait, SignableTransaction as SignableTransactionTrait,
Eventuality as EventualityTrait, EventualitiesTracker, PostFeeBranch, Network, drop_branches,
amortize_fee,
},
};
@@ -49,7 +50,7 @@ const EXTERNAL_SUBADDRESS: Option<SubaddressIndex> = SubaddressIndex::new(0, 0);
const BRANCH_SUBADDRESS: Option<SubaddressIndex> = SubaddressIndex::new(1, 0);
const CHANGE_SUBADDRESS: Option<SubaddressIndex> = SubaddressIndex::new(2, 0);
impl OutputTrait for Output {
impl OutputTrait<Monero> for Output {
// While we could use (tx, o), using the key ensures we won't be susceptible to the burning bug.
// While we already are immune, thanks to using featured address, this doesn't hurt and is
// technically more efficient.
@@ -68,6 +69,14 @@ impl OutputTrait for Output {
self.0.output.data.key.compress().to_bytes()
}
fn tx_id(&self) -> [u8; 32] {
self.0.output.absolute.tx
}
fn key(&self) -> EdwardsPoint {
EdwardsPoint(self.0.output.data.key - (EdwardsPoint::generator().0 * self.0.key_offset()))
}
fn balance(&self) -> Balance {
Balance { coin: SeraiCoin::Monero, amount: Amount(self.0.commitment().amount) }
}
@@ -130,10 +139,14 @@ impl EventualityTrait for Eventuality {
#[derive(Clone, Debug)]
pub struct SignableTransaction {
keys: ThresholdKeys<Ed25519>,
transcript: RecommendedTranscript,
actual: MSignableTransaction,
}
impl SignableTransactionTrait for SignableTransaction {
fn fee(&self) -> u64 {
self.actual.fee()
}
}
impl BlockTrait<Monero> for Block {
type Id = [u8; 32];
@@ -145,6 +158,7 @@ impl BlockTrait<Monero> for Block {
self.header.previous
}
// TODO: Check Monero enforces this to be monotonic and sane
fn time(&self) -> u64 {
self.header.timestamp
}
@@ -227,6 +241,7 @@ impl Network for Monero {
const NETWORK: NetworkId = NetworkId::Monero;
const ID: &'static str = "Monero";
const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize = 120;
const CONFIRMATIONS: usize = 10;
// wallet2 will not create a transaction larger than 100kb, and Monero won't relay a transaction
@@ -250,6 +265,10 @@ impl Network for Monero {
Self::address_internal(key, BRANCH_SUBADDRESS)
}
fn change_address(key: EdwardsPoint) -> Self::Address {
Self::address_internal(key, CHANGE_SUBADDRESS)
}
async fn get_latest_block_number(&self) -> Result<usize, NetworkError> {
// Monero defines height as chain length, so subtract 1 for block number
Ok(self.rpc.get_height().await.map_err(|_| NetworkError::ConnectionError)? - 1)
@@ -267,15 +286,19 @@ impl Network for Monero {
)
}
async fn get_outputs(
&self,
block: &Block,
key: EdwardsPoint,
) -> Result<Vec<Self::Output>, NetworkError> {
let mut txs = Self::scanner(key)
.scan(&self.rpc, block)
.await
.map_err(|_| NetworkError::ConnectionError)?
async fn get_outputs(&self, block: &Block, key: EdwardsPoint) -> Vec<Self::Output> {
let outputs = loop {
match Self::scanner(key).scan(&self.rpc, block).await {
Ok(outputs) => break outputs,
Err(e) => {
log::error!("couldn't scan block {}: {e:?}", hex::encode(block.id()));
sleep(Duration::from_secs(60)).await;
continue;
}
}
};
let mut txs = outputs
.iter()
.filter_map(|outputs| Some(outputs.not_locked()).filter(|outputs| !outputs.is_empty()))
.collect::<Vec<_>>();
@@ -305,14 +328,14 @@ impl Network for Monero {
}
}
Ok(outputs)
outputs
}
async fn get_eventuality_completions(
&self,
eventualities: &mut EventualitiesTracker<Eventuality>,
block: &Block,
) -> HashMap<[u8; 32], [u8; 32]> {
) -> HashMap<[u8; 32], (usize, Transaction)> {
let mut res = HashMap::new();
if eventualities.map.is_empty() {
return res;
@@ -322,7 +345,7 @@ impl Network for Monero {
network: &Monero,
eventualities: &mut EventualitiesTracker<Eventuality>,
block: &Block,
res: &mut HashMap<[u8; 32], [u8; 32]>,
res: &mut HashMap<[u8; 32], (usize, Transaction)>,
) {
for hash in &block.txs {
let tx = {
@@ -339,7 +362,7 @@ impl Network for Monero {
if let Some((_, eventuality)) = eventualities.map.get(&tx.prefix.extra) {
if eventuality.matches(&tx) {
res.insert(eventualities.map.remove(&tx.prefix.extra).unwrap().0, tx.hash());
res.insert(eventualities.map.remove(&tx.prefix.extra).unwrap().0, (block.number(), tx));
}
}
}
@@ -373,7 +396,6 @@ impl Network for Monero {
async fn prepare_send(
&self,
keys: ThresholdKeys<Ed25519>,
block_number: usize,
mut plan: Plan<Self>,
fee: Fee,
@@ -457,9 +479,7 @@ impl Network for Monero {
Some(Zeroizing::new(plan.id())),
inputs.clone(),
payments,
plan.change.map(|key| {
Change::fingerprintable(Self::address_internal(key, CHANGE_SUBADDRESS).into())
}),
plan.change.map(|change| Change::fingerprintable(change.into())),
vec![],
fee,
) {
@@ -509,7 +529,6 @@ impl Network for Monero {
let branch_outputs = amortize_fee(&mut plan, tx_fee);
let signable = SignableTransaction {
keys,
transcript,
actual: match signable(plan, Some(tx_fee))? {
Some(signable) => signable,
@@ -522,9 +541,10 @@ impl Network for Monero {
async fn attempt_send(
&self,
keys: ThresholdKeys<Self::Curve>,
transaction: SignableTransaction,
) -> Result<Self::TransactionMachine, NetworkError> {
match transaction.actual.clone().multisig(transaction.keys.clone(), transaction.transcript) {
match transaction.actual.clone().multisig(keys, transaction.transcript) {
Ok(machine) => Ok(machine),
Err(e) => panic!("failed to create a multisig machine for TX: {e}"),
}