Mirror of https://github.com/serai-dex/serai.git (synced 2025-12-08 12:19:24 +00:00)
Processor (#259)
* Initial work on a message box
* Finish message-box (untested)
* Expand documentation
* Embed the recipient in the signature challenge. Prevents a message from A -> B from being read as from A -> C.
* Update documentation by bifurcating sender/receiver
* Panic on receiving an invalid signature. If we've received an invalid signature in an authenticated system, a service is malicious, critically faulty (equivalent to malicious), or the message layer has been compromised (or is otherwise critically faulty). Please note a receiver who handles a message they shouldn't will trigger this. That falls under being critically faulty.
* Documentation and helper methods. SecureMessage::new and SecureMessage::serialize. Secure Debug for MessageBox.
* Have SecureMessage not be serialized by default. Allows passing around in-memory, if desired, and moves the error from decrypt to new (which performs deserialization). Decrypt no longer has an error since it panics if given an invalid signature, due to this being intranet code.
* Explain and improve nonce handling. Includes a missing zeroize call.
* Rebase to latest develop. Updates to transcript 0.2.0.
* Add a test for the MessageBox
* Export PrivateKey and PublicKey
* Also test serialization
* Add a key_gen binary to message_box
* Have SecureMessage support Serde
* Add encrypt_to_bytes and decrypt_from_bytes
* Support String ser via base64
* Rename encrypt/decrypt to encrypt_bytes/decrypt_to_bytes
* Directly operate with values supporting Borsh
* Use bincode instead of Borsh. By staying inside of serde, we'll support many more structs. While bincode isn't canonical, we don't need canonicity on an authenticated, internal system.
* Turn PrivateKey, PublicKey into structs. Uses Zeroizing for the PrivateKey per #150.
* from_string functions intended for loading from an env
* Use &str for PublicKey from_string (now from_str). The PrivateKey takes the String to take ownership of its memory and zeroize it. That isn't needed with PublicKeys.
* Finish updating from develop
* Resolve warning
* Use ZeroizingAlloc on the key_gen binary
* Move message-box from crypto/ to common/
* Move key serialization functions to ser
* add/remove functions in MessageBox
* Implement Hash on dalek_ff_group Points
* Make MessageBox generic to its key. Exposes a &'static str variant for internal use and a RistrettoPoint variant for external use.
* Add Private to_string as deprecated. Stub before more competent tooling is deployed.
* Private to_public
* Test both Internal and External MessageBox, only use PublicKey in the pub API
* Remove panics on invalid signatures. Leftover from when this was solely internal, which is now unsafe.
* Chicken scratch a Scanner task
* Add a write function to the DKG library. Enables writing directly to a file. Also modifies serialize to return Zeroizing<Vec<u8>> instead of just Vec<u8>.
* Make dkg::encryption pub
* Remove encryption from MessageBox
* Use a 64-bit block number in Substrate. We use a 64-bit block number in general since u32 only works for 120 years (with a 1 second block time). As some chains even push the 1 second threshold, especially ones based on DAG consensus, this becomes potentially as low as 60 years. While that should still be plenty, it's not worth wondering/debating. Since Serai uses 64-bit block numbers elsewhere, this ensures consistency.
* Misc crypto lints
* Get the scanner scratch to compile
* Initial scanner test
* First few lines of scheduler
* Further work on scheduler, solidify API
* Define Scheduler TX format
* Branch creation algorithm
* Document when the branch algorithm isn't perfect
* Only scan confirmed blocks
* Document Coin
* Remove Canonical/ChainNumber from processor. The processor should be abstracted from canonical numbers thanks to the coordinator, making this unnecessary.
* Add README documenting processor flow
* Use Zeroize on substrate primitives
* Define messages from/to the processor
* Correct over-specified versioning
* Correct build re: in_instructions::primitives
* Debug/some serde in crypto/
* Use a struct for ValidatorSetInstance
* Add a processor key_gen task. Redoes DB handling code.
* Replace trait + impl with wrapper struct
* Add a key confirmation flow to the key gen task
* Document concerns on key_gen
* Start on a signer task
* Add Send to FROST traits
* Move processor lib.rs to main.rs. Adds a dummy main to reduce clippy dead_code warnings.
* Further flesh out main.rs
* Move the DB trait to AsRef<[u8]>
* Signer task
* Remove a panic in bitcoin when there's insufficient funds. Unchecked underflow.
* Have Monero's mine_block mine one block, not 10. It was initially a nicety to deal with the 10 block lock. C::CONFIRMATIONS should be used for that instead.
* Test signer
* Replace channel expects with log statements. The expects weren't problematic and made for nicer code; they just cluttered test output.
* Remove the old wallet file. It predates the coordinator design and shouldn't be used.
* Rename tests/scan.rs to tests/scanner.rs
* Add a wallet test. Complements the recently removed wallet file by adding a test for the scanner, scheduler, and signer together.
* Work on a run function. Triggers a clippy ICE.
* Resolve clippy ICE. The issue was the non-fully specified lambda in signer.
* Add KeyGenEvent and KeyGenOrder. Needed so we get KeyConfirmed messages from the key gen task. While we could've read the CoordinatorMessage to see that, routing through the key gen task ensures we only handle it once it's been successfully saved to disk.
* Expand scanner test
* Clarify processor documentation
* Have the Scanner load keys on boot/save outputs to disk
* Use Vec<u8> for Block ID. Much more flexible.
* Panic if we see the same output multiple times
* Have the Scanner DB mark itself as corrupt when doing a multi-put. This REALLY should be a TX. Since we don't have a TX API right now, this at least offers detection.
* Have DST'd DB keys accept AsRef<[u8]>
* Restore polling all signers. Writes a custom future to do so. Also loads signers on boot using what the scanner claims are active keys.
* Schedule OutInstructions. Adds a data field to Payment. Also cleans some dead code.
* Panic if we create an invalid transaction. Saves the TX once it's successfully signed so, if we do panic, we have a copy.
* Route coordinator messages to their respective signer. Requires adding key to the SignId.
* Send SignTransaction orders for all plans
* Add a timer to retry sign_plans when prepare_send fails
* Minor fmt'ing
* Basic Fee API
* Move the change key into Plan
* Properly route activation_number
* Remove ScannerEvent::Block. It's not used under current designs.
* Nicen logs
* Add utilities to get a block's number
* Have main issue AckBlock. Also has a few misc lints.
* Parse instructions out of outputs
* Tweak TODOs and remove an unwrap
* Update Bitcoin max input/output quantity
* Only read one piece of data from Monero. Due to output randomization, reading more is infeasible.
* Embed plan IDs into the TXs they create. We need to stop attempting signing if we've already signed a protocol. Ideally, any one of the participating signers should be able to provide a proof the TX was successfully signed. We can't just run a second signing protocol though, as a single malicious signer could complete the TX signature, and publish it, yet not complete the secondary signature. The TX itself has to be sufficient to show that the TX matches the plan. This is done by embedding the ID, so plans with matching addresses/amounts are distinguished, and by allowing verification that a TX actually matches a set of addresses/amounts. For Monero, this will need augmenting with the ephemeral keys (or usage of a static seed for them).
* Don't use OP_RETURN to encode the plan ID on Bitcoin. We can use the inputs to distinguish identical-output plans without issue.
* Update OP_RETURN data access. It's not required to be the last output.
* Add Eventualities to Monero. An Eventuality is an effective equivalent to a SignableTransaction, declared not by the inputs it spends, but by the outputs it creates. Eventualities are also bound to a 32-byte RNG seed, enabling usage of a hash-based identifier in a SignableTransaction, allowing multiple SignableTransactions with the same output set to have different Eventualities. In order to prevent triggering the burning bug, the RNG seed is hashed with the planned-to-be-used inputs' output keys. While this does bind to them, it's only loosely bound. The TX actually created may use different inputs entirely if a forgery is crafted (which requires no brute forcing). Binding to the key images would provide a strong binding, yet would require knowing the key images, which requires active communication with the spend key. The purpose of this is so a multisig can identify whether a Transaction the entire group planned has been executed by a subset of the group or not. Once a plan is created, it can have an Eventuality made. The Eventuality's extra is able to be inserted into a HashMap, so all new on-chain transactions can be trivially checked as potential candidates (a sketch of this lookup follows this list). Once a potential candidate is found, a check involving ECC ops can be performed. While this is arguably a DoS vector, the underlying Monero blockchain would need to be spammed with transactions to trigger it. Accordingly, it becomes a Monero blockchain DoS vector, when this code is written on the premise of the Monero blockchain functioning. Accordingly, it is considered handled. If a forgery does match, it must have created the exact same outputs the multisig would've. Accordingly, it's argued the multisig shouldn't mind. This entire suite of code is only necessary due to the lack of outgoing view keys, yet it's able to avoid an interactive protocol to communicate key images on every single received output. While this could be locked to the multisig feature, there's no practical benefit to doing so.
* Add support for encoding Monero address to instructions
* Move Serai's Monero address encoding into serai-client. serai-client is meant to be a single library enabling using Serai. While it was originally written as an RPC client for Serai, apps actually using Serai will primarily be sending transactions on connected networks. Sending those transactions requires proper {In, Out}Instructions, including proper address encoding. Not only has address encoding been moved, but the subxt client is now behind a feature. Coin integrations have their own features, which are on by default; primitives are always exposed.
* Reorganize file layout a bit, add feature flags to processor
* Tidy up ETH Dockerfile
* Add Bitcoin address encoding
* Move Bitcoin::Address to serai-client's
* Comment where tweaking needs to happen
* Add an API to check if a plan was completed in a specific TX. This allows any participating signer to submit the TX ID to prevent further signing attempts. Also performs some API cleanup.
* Minimize FROST dependencies
* Use a seeded RNG for key gen
* Tweak keys from key gen
* Test proper usage of Branch/Change addresses. Adds a more descriptive error to an error case in decoys, and pads Monero payments as needed.
* Also test spending the change output
* Add queued_plans to the Scheduler. queued_plans is for payments to be issued when an amount appears, yet the amount is currently pre-fee. Once the output is actually created, the Scheduler should be notified of the amount it was created with, moving from queued_plans to plans under the actual amount. Also tightens debug_asserts to asserts for invariants which are at risk of being exclusive to prod.
* Add missing tweak_keys call
* Correct decoy selection height handling
* Add a few log statements to the scheduler
* Simplify test's get_block_number
* Simplify, while making more robust, branch address handling in Scheduler
* Have fees deducted from payments. Corrects Monero's handling of fees when there's no change address. Adds a DUST variable, as needed due to 1_00_000_000 not being enough to pay its fee on Monero.
* Add comment to Monero
* Consolidate BTC/XMR prepare_send code. These aren't fully consolidated; we'd need a SignableTransaction trait for that. This is a lot cleaner though.
* Ban integrated addresses. The reasoning why is accordingly documented.
* Tidy TODOs/dust handling
* Update README TODO
* Use a deterministic protocol version in Monero
* Test rebuilt KeyGen machines function as expected
* Use a more robust KeyGen entropy system
* Add DB TXNs. Also load entropy from env.
* Add a loop for processing messages from substrate. Allows detecting if we're behind, and if so, waiting to handle the message.
* Set Monero MAX_INPUTS properly. The previous number was based on an old hard fork. With the ring size having increased, transactions have since got larger.
* Distinguish TODOs into TODO and TODO2s. TODO2s are for after protonet.
* Zeroize secret share repr in ThresholdCore write
* Work on Eventualities. Adds serialization and stops signing when an eventuality is proven.
* Use a more robust DB key schema
* Update to {k, p}256 0.12
* cargo +nightly clippy
* cargo update
* Slight message-box tweaks
* Update to recent Monero merge
* Add a Coordinator trait for communication with coordinator
* Remove KeyGenHandle for just KeyGen. While KeyGen previously accepted instructions over a channel, this breaks the ack flow needed for coordinator communication. Now, KeyGen is the direct object with a handle() function for messages. Thankfully, this ended up being rather trivial for KeyGen as it has no background tasks.
* Add a handle function to Signer. Enables determining when it's finished handling a CoordinatorMessage and therefore creating an acknowledgement.
* Save transactions used to complete eventualities
* Use a more intelligent sleep in the signer
* Emit SignedTransaction with the first ID *we can still get from our node*
* Move Substrate message handling into the new coordinator recv loop
* Add handle function to Scanner
* Remove the plans timer. Enables ensuring the ordering of the handling of plans.
* Remove the outputs function which panicked if a precondition wasn't met. The new API only returns outputs upon satisfaction of the precondition.
* Convert SignerOrder::SignTransaction to a function
* Remove the key_gen object from sign_plans
* Refactor out get_fee/prepare_send into dedicated functions
* Save plans being signed to the DB
* Reload transactions being signed on boot
* Stop reloading TXs being signed (and report it to peers)
* Remove message-box from the processor branch. We don't use it here yet.
* cargo +nightly fmt
* Move back common/zalloc
* Update subxt to 0.27
* Zeroize ^1.5, not 1
* Update GitHub workflow
* Remove usage of SignId in completed
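The two-stage Eventuality check described in the "Add Eventualities to Monero" entry is worth seeing in miniature: a cheap HashMap lookup on a transaction's extra gates the expensive ECC comparison. A minimal sketch with hypothetical names (EventualitiesTracker, deep_check), not the processor's actual API:

use std::collections::HashMap;

// Hypothetical illustration of the extra -> eventuality lookup.
struct Eventuality {
  // Expected outputs, RNG seed commitment, etc. would live here
  outputs: Vec<[u8; 32]>,
}

struct EventualitiesTracker {
  // Transaction extra -> (plan ID, eventuality)
  eventualities: HashMap<Vec<u8>, ([u8; 32], Eventuality)>,
}

impl EventualitiesTracker {
  // Cheap pass: only transactions whose extra matches become candidates.
  // `deep_check` stands in for the ECC-level verification of the created outputs.
  fn check(
    &self,
    tx_extra: &[u8],
    deep_check: impl Fn(&Eventuality) -> bool,
  ) -> Option<[u8; 32]> {
    let (plan, eventuality) = self.eventualities.get(tx_extra)?;
    deep_check(eventuality).then_some(*plan)
  }
}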
@@ -1,318 +0,0 @@
use std::{io, collections::HashMap};

use async_trait::async_trait;

#[rustfmt::skip]
use bitcoin::{
  hashes::Hash, schnorr::TweakedPublicKey, OutPoint, Transaction, Block, Network, Address
};

#[cfg(test)]
use bitcoin::{
  secp256k1::{SECP256K1, SecretKey, Message},
  PrivateKey, PublicKey, EcdsaSighashType,
  blockdata::script::Builder,
  PackedLockTime, Sequence, Script, Witness, TxIn, TxOut,
};

use transcript::RecommendedTranscript;
use k256::{
  ProjectivePoint, Scalar,
  elliptic_curve::sec1::{ToEncodedPoint, Tag},
};
use frost::{curve::Secp256k1, ThresholdKeys};

use bitcoin_serai::{
  crypto::{x_only, make_even},
  wallet::{SpendableOutput, TransactionMachine, SignableTransaction as BSignableTransaction},
  rpc::Rpc,
};

use crate::coin::{CoinError, Block as BlockTrait, OutputType, Output as OutputTrait, Coin};

impl BlockTrait for Block {
  type Id = [u8; 32];
  fn id(&self) -> Self::Id {
    self.block_hash().as_hash().into_inner()
  }
}

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Fee(u64);

#[derive(Clone, Debug)]
pub struct Output(SpendableOutput, OutputType);
impl OutputTrait for Output {
  type Id = [u8; 36];

  fn kind(&self) -> OutputType {
    self.1
  }

  fn id(&self) -> Self::Id {
    self.0.id()
  }

  fn amount(&self) -> u64 {
    self.0.output.value
  }

  fn serialize(&self) -> Vec<u8> {
    let mut res = self.0.serialize();
    self.1.write(&mut res).unwrap();
    res
  }

  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    Ok(Output(SpendableOutput::read(reader)?, OutputType::read(reader)?))
  }
}

#[derive(Debug)]
pub struct SignableTransaction {
  keys: ThresholdKeys<Secp256k1>,
  transcript: RecommendedTranscript,
  actual: BSignableTransaction,
}

fn next_key(mut key: ProjectivePoint, i: usize) -> (ProjectivePoint, Scalar) {
  let mut offset = Scalar::ZERO;
  for _ in 0 .. i {
    key += ProjectivePoint::GENERATOR;
    offset += Scalar::ONE;

    let even_offset;
    (key, even_offset) = make_even(key);
    offset += Scalar::from(even_offset);
  }
  (key, offset)
}

fn branch(key: ProjectivePoint) -> (ProjectivePoint, Scalar) {
  next_key(key, 1)
}

fn change(key: ProjectivePoint) -> (ProjectivePoint, Scalar) {
  next_key(key, 2)
}

#[derive(Clone, Debug)]
pub struct Bitcoin {
  pub(crate) rpc: Rpc,
}

impl Bitcoin {
  pub async fn new(url: String) -> Bitcoin {
    Bitcoin { rpc: Rpc::new(url) }
  }

  #[cfg(test)]
  pub async fn fresh_chain(&self) {
    if self.rpc.get_latest_block_number().await.unwrap() > 0 {
      self
        .rpc
        .rpc_call("invalidateblock", serde_json::json!([self.rpc.get_block_hash(1).await.unwrap()]))
        .await
        .unwrap()
    }
  }
}

#[async_trait]
impl Coin for Bitcoin {
  type Curve = Secp256k1;

  type Fee = Fee;
  type Transaction = Transaction;
  type Block = Block;

  type Output = Output;
  type SignableTransaction = SignableTransaction;
  type TransactionMachine = TransactionMachine;

  type Address = Address;

  const ID: &'static [u8] = b"Bitcoin";
  const CONFIRMATIONS: usize = 3;

  // TODO: Get hard numbers and tune
  const MAX_INPUTS: usize = 128;
  const MAX_OUTPUTS: usize = 16;

  fn tweak_keys(&self, key: &mut ThresholdKeys<Self::Curve>) {
    let (_, offset) = make_even(key.group_key());
    *key = key.offset(Scalar::from(offset));
  }

  fn address(&self, key: ProjectivePoint) -> Self::Address {
    debug_assert!(key.to_encoded_point(true).tag() == Tag::CompressedEvenY, "YKey is odd");
    Address::p2tr_tweaked(
      TweakedPublicKey::dangerous_assume_tweaked(x_only(&key)),
      Network::Regtest,
    )
  }

  fn branch_address(&self, key: ProjectivePoint) -> Self::Address {
    self.address(branch(key).0)
  }

  async fn get_latest_block_number(&self) -> Result<usize, CoinError> {
    Ok(self.rpc.get_latest_block_number().await.map_err(|_| CoinError::ConnectionError)?)
  }

  async fn get_block(&self, number: usize) -> Result<Self::Block, CoinError> {
    let block_hash =
      self.rpc.get_block_hash(number).await.map_err(|_| CoinError::ConnectionError)?;
    self.rpc.get_block(&block_hash).await.map_err(|_| CoinError::ConnectionError)
  }

  async fn get_outputs(
    &self,
    block: &Self::Block,
    key: ProjectivePoint,
  ) -> Result<Vec<Self::Output>, CoinError> {
    let external = (key, Scalar::ZERO);
    let branch = branch(key);
    let change = change(key);

    let entry =
      |pair: (_, _), kind| (self.address(pair.0).script_pubkey().to_bytes(), (pair.1, kind));
    let scripts = HashMap::from([
      entry(external, OutputType::External),
      entry(branch, OutputType::Branch),
      entry(change, OutputType::Change),
    ]);

    let mut outputs = Vec::new();
    // Skip the coinbase transaction which is burdened by maturity
    for tx in &block.txdata[1 ..] {
      for (vout, output) in tx.output.iter().enumerate() {
        if let Some(info) = scripts.get(&output.script_pubkey.to_bytes()) {
          outputs.push(Output(
            SpendableOutput {
              offset: info.0,
              output: output.clone(),
              outpoint: OutPoint { txid: tx.txid(), vout: u32::try_from(vout).unwrap() },
            },
            info.1,
          ));
        }
      }
    }

    Ok(outputs)
  }

  async fn prepare_send(
    &self,
    keys: ThresholdKeys<Secp256k1>,
    transcript: RecommendedTranscript,
    _: usize,
    mut inputs: Vec<Output>,
    payments: &[(Address, u64)],
    change_key: Option<ProjectivePoint>,
    fee: Fee,
  ) -> Result<Self::SignableTransaction, CoinError> {
    Ok(SignableTransaction {
      keys,
      transcript,
      actual: BSignableTransaction::new(
        inputs.drain(..).map(|input| input.0).collect(),
        payments,
        change_key.map(|change_key| self.address(change(change_key).0)),
        fee.0,
      )
      .ok_or(CoinError::NotEnoughFunds)?,
    })
  }

  async fn attempt_send(
    &self,
    transaction: Self::SignableTransaction,
  ) -> Result<Self::TransactionMachine, CoinError> {
    transaction
      .actual
      .clone()
      .multisig(transaction.keys.clone(), transaction.transcript.clone())
      .await
      .map_err(|_| CoinError::ConnectionError)
  }

  async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<Vec<u8>, CoinError> {
    Ok(self.rpc.send_raw_transaction(tx).await.unwrap().to_vec())
  }

  #[cfg(test)]
  async fn get_fee(&self) -> Self::Fee {
    Fee(1)
  }

  #[cfg(test)]
  async fn mine_block(&self) {
    self
      .rpc
      .rpc_call::<Vec<String>>(
        "generatetoaddress",
        serde_json::json!([
          1,
          Address::p2sh(&Script::new(), Network::Regtest).unwrap().to_string()
        ]),
      )
      .await
      .unwrap();
  }

  #[cfg(test)]
  async fn test_send(&self, address: Self::Address) {
    let secret_key = SecretKey::new(&mut rand_core::OsRng);
    let private_key = PrivateKey::new(secret_key, Network::Regtest);
    let public_key = PublicKey::from_private_key(SECP256K1, &private_key);
    let main_addr = Address::p2pkh(&public_key, Network::Regtest);

    let new_block = self.get_latest_block_number().await.unwrap() + 1;
    self
      .rpc
      .rpc_call::<Vec<String>>("generatetoaddress", serde_json::json!([1, main_addr]))
      .await
      .unwrap();

    for _ in 0 .. 100 {
      self.mine_block().await;
    }

    // TODO: Consider grabbing bdk as a dev dependency
    let tx = self.get_block(new_block).await.unwrap().txdata.swap_remove(0);
    let mut tx = Transaction {
      version: 2,
      lock_time: PackedLockTime::ZERO,
      input: vec![TxIn {
        previous_output: OutPoint { txid: tx.txid(), vout: 0 },
        script_sig: Script::default(),
        sequence: Sequence(u32::MAX),
        witness: Witness::default(),
      }],
      output: vec![TxOut {
        value: tx.output[0].value - 10000,
        script_pubkey: address.script_pubkey(),
      }],
    };

    let mut der = SECP256K1
      .sign_ecdsa_low_r(
        &Message::from(
          tx.signature_hash(0, &main_addr.script_pubkey(), EcdsaSighashType::All.to_u32())
            .as_hash(),
        ),
        &private_key.inner,
      )
      .serialize_der()
      .to_vec();
    der.push(1);
    tx.input[0].script_sig = Builder::new().push_slice(&der).push_key(&public_key).into_script();

    self.rpc.send_raw_transaction(&tx).await.unwrap();
    for _ in 0 .. Self::CONFIRMATIONS {
      self.mine_block().await;
    }
  }
}
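A note on the even-Y handling above: Taproot addresses commit to x-only public keys, so tweak_keys and next_key offset a key by the generator until its compressed encoding has an even Y, accumulating the scalar offset so threshold signers can apply it to their keys. A sketch of what bitcoin_serai's make_even amounts to, written directly against k256 as an assumption rather than the crate's actual code:

use k256::{
  ProjectivePoint,
  elliptic_curve::sec1::{ToEncodedPoint, Tag},
};

// Add G until the point's compressed encoding has an even Y, counting additions.
// next_key above folds this count into the scalar offset it returns.
fn make_even_sketch(mut key: ProjectivePoint) -> (ProjectivePoint, u64) {
  let mut offset = 0;
  while key.to_encoded_point(true).tag() == Tag::CompressedOddY {
    key += ProjectivePoint::GENERATOR;
    offset += 1;
  }
  (key, offset)
}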
@@ -1,134 +0,0 @@
use std::io;

use async_trait::async_trait;
use thiserror::Error;

use transcript::RecommendedTranscript;
use frost::{
  curve::{Ciphersuite, Curve},
  ThresholdKeys,
  sign::PreprocessMachine,
};

pub mod bitcoin;
pub use self::bitcoin::Bitcoin;

pub mod monero;
pub use self::monero::Monero;

#[derive(Clone, Copy, Error, Debug)]
pub enum CoinError {
  #[error("failed to connect to coin daemon")]
  ConnectionError,
  #[error("not enough funds")]
  NotEnoughFunds,
}

pub trait Block: Sized + Clone {
  type Id: Clone + Copy + AsRef<[u8]>;
  fn id(&self) -> Self::Id;
}

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum OutputType {
  External,
  Branch,
  Change,
}

impl OutputType {
  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    writer.write_all(&[match self {
      OutputType::External => 0,
      OutputType::Branch => 1,
      OutputType::Change => 2,
    }])
  }

  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let mut byte = [0; 1];
    reader.read_exact(&mut byte)?;
    Ok(match byte[0] {
      0 => OutputType::External,
      1 => OutputType::Branch,
      2 => OutputType::Change,
      _ => Err(io::Error::new(io::ErrorKind::Other, "invalid OutputType"))?,
    })
  }
}

pub trait Output: Sized + Clone {
  type Id: Clone + Copy + AsRef<[u8]>;

  fn kind(&self) -> OutputType;

  fn id(&self) -> Self::Id;
  fn amount(&self) -> u64;

  fn serialize(&self) -> Vec<u8>;
  fn read<R: std::io::Read>(reader: &mut R) -> std::io::Result<Self>;
}

#[async_trait]
pub trait Coin {
  type Curve: Curve;

  type Fee: Copy;
  type Transaction;
  type Block: Block;

  type Output: Output;
  type SignableTransaction;
  type TransactionMachine: PreprocessMachine<Signature = Self::Transaction>;

  type Address: Send;

  const ID: &'static [u8];
  const CONFIRMATIONS: usize;
  const MAX_INPUTS: usize;
  const MAX_OUTPUTS: usize; // TODO: Decide if this includes change or not

  fn tweak_keys(&self, key: &mut ThresholdKeys<Self::Curve>);

  /// Address for the given group key to receive external coins to.
  // Doesn't have to take self, enables some level of caching which is pleasant
  fn address(&self, key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
  /// Address for the given group key to use for scheduled branches.
  fn branch_address(&self, key: <Self::Curve as Ciphersuite>::G) -> Self::Address;

  async fn get_latest_block_number(&self) -> Result<usize, CoinError>;
  async fn get_block(&self, number: usize) -> Result<Self::Block, CoinError>;
  async fn get_outputs(
    &self,
    block: &Self::Block,
    key: <Self::Curve as Ciphersuite>::G,
  ) -> Result<Vec<Self::Output>, CoinError>;

  #[allow(clippy::too_many_arguments)]
  async fn prepare_send(
    &self,
    keys: ThresholdKeys<Self::Curve>,
    transcript: RecommendedTranscript,
    block_number: usize,
    inputs: Vec<Self::Output>,
    payments: &[(Self::Address, u64)],
    change: Option<<Self::Curve as Ciphersuite>::G>,
    fee: Self::Fee,
  ) -> Result<Self::SignableTransaction, CoinError>;

  async fn attempt_send(
    &self,
    transaction: Self::SignableTransaction,
  ) -> Result<Self::TransactionMachine, CoinError>;

  async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<Vec<u8>, CoinError>;

  #[cfg(test)]
  async fn get_fee(&self) -> Self::Fee;

  #[cfg(test)]
  async fn mine_block(&self);

  #[cfg(test)]
  async fn test_send(&self, key: Self::Address);
}
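To make the Coin trait's intended use concrete, here is a hypothetical consumer, roughly the shape of the scanner's loop: walk blocks with sufficient confirmations and collect outputs for a key. Illustrative only; the real scanner persists state and emits events.

use frost::curve::Ciphersuite;

async fn scan_confirmed<C: Coin>(
  coin: &C,
  key: <C::Curve as Ciphersuite>::G,
  from: usize,
) -> Result<Vec<C::Output>, CoinError> {
  let mut outputs = vec![];
  let tip = coin.get_latest_block_number().await?;
  // The tip itself has one confirmation, so stop CONFIRMATIONS - 1 blocks short of it
  for number in from ..= tip.saturating_sub(C::CONFIRMATIONS - 1) {
    let block = coin.get_block(number).await?;
    outputs.extend(coin.get_outputs(&block, key).await?);
  }
  Ok(outputs)
}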
@@ -1,331 +0,0 @@
use async_trait::async_trait;

use zeroize::Zeroizing;

use curve25519_dalek::scalar::Scalar;

use dalek_ff_group as dfg;
use transcript::RecommendedTranscript;
use frost::{curve::Ed25519, ThresholdKeys};

use monero_serai::{
  transaction::Transaction,
  block::Block as MBlock,
  rpc::Rpc,
  wallet::{
    ViewPair, Scanner,
    address::{Network, SubaddressIndex, AddressSpec, MoneroAddress},
    Fee, SpendableOutput, Change, SignableTransaction as MSignableTransaction, TransactionMachine,
  },
};

use crate::{
  additional_key,
  coin::{CoinError, Block as BlockTrait, OutputType, Output as OutputTrait, Coin},
};

#[derive(Clone, Debug)]
pub struct Block([u8; 32], MBlock);
impl BlockTrait for Block {
  type Id = [u8; 32];
  fn id(&self) -> Self::Id {
    self.0
  }
}

#[derive(Clone, Debug)]
pub struct Output(SpendableOutput);
impl From<SpendableOutput> for Output {
  fn from(output: SpendableOutput) -> Output {
    Output(output)
  }
}

const EXTERNAL_SUBADDRESS: Option<SubaddressIndex> = SubaddressIndex::new(0, 0);
const BRANCH_SUBADDRESS: Option<SubaddressIndex> = SubaddressIndex::new(1, 0);
const CHANGE_SUBADDRESS: Option<SubaddressIndex> = SubaddressIndex::new(2, 0);

impl OutputTrait for Output {
  // While we could use (tx, o), using the key ensures we won't be susceptible to the burning bug.
  // While we already are immune, thanks to using featured address, this doesn't hurt and is
  // technically more efficient.
  type Id = [u8; 32];

  fn kind(&self) -> OutputType {
    match self.0.output.metadata.subaddress {
      EXTERNAL_SUBADDRESS => OutputType::External,
      BRANCH_SUBADDRESS => OutputType::Branch,
      CHANGE_SUBADDRESS => OutputType::Change,
      _ => panic!("unrecognized address was scanned for"),
    }
  }

  fn id(&self) -> Self::Id {
    self.0.output.data.key.compress().to_bytes()
  }

  fn amount(&self) -> u64 {
    self.0.commitment().amount
  }

  fn serialize(&self) -> Vec<u8> {
    self.0.serialize()
  }

  fn read<R: std::io::Read>(reader: &mut R) -> std::io::Result<Self> {
    SpendableOutput::read(reader).map(Output)
  }
}

#[derive(Debug)]
pub struct SignableTransaction {
  keys: ThresholdKeys<Ed25519>,
  transcript: RecommendedTranscript,
  // Monero height, defined as the length of the chain
  height: usize,
  actual: MSignableTransaction,
}

#[derive(Clone, Debug)]
pub struct Monero {
  pub(crate) rpc: Rpc,
  view: Zeroizing<Scalar>,
}

impl Monero {
  pub async fn new(url: String) -> Monero {
    Monero { rpc: Rpc::new(url).unwrap(), view: Zeroizing::new(additional_key::<Monero>(0).0) }
  }

  fn view_pair(&self, spend: dfg::EdwardsPoint) -> ViewPair {
    ViewPair::new(spend.0, self.view.clone())
  }

  fn address_internal(
    &self,
    spend: dfg::EdwardsPoint,
    subaddress: Option<SubaddressIndex>,
  ) -> MoneroAddress {
    self.view_pair(spend).address(
      Network::Mainnet,
      AddressSpec::Featured { subaddress, payment_id: None, guaranteed: true },
    )
  }

  fn scanner(&self, spend: dfg::EdwardsPoint) -> Scanner {
    let mut scanner = Scanner::from_view(self.view_pair(spend), None);
    debug_assert!(EXTERNAL_SUBADDRESS.is_none());
    scanner.register_subaddress(BRANCH_SUBADDRESS.unwrap());
    scanner.register_subaddress(CHANGE_SUBADDRESS.unwrap());
    scanner
  }

  #[cfg(test)]
  fn test_view_pair() -> ViewPair {
    use group::Group;
    ViewPair::new(*dfg::EdwardsPoint::generator(), Zeroizing::new(Scalar::one()))
  }

  #[cfg(test)]
  fn test_scanner() -> Scanner {
    Scanner::from_view(Self::test_view_pair(), Some(std::collections::HashSet::new()))
  }

  #[cfg(test)]
  fn test_address() -> MoneroAddress {
    Self::test_view_pair().address(Network::Mainnet, AddressSpec::Standard)
  }
}

#[async_trait]
impl Coin for Monero {
  type Curve = Ed25519;

  type Fee = Fee;
  type Transaction = Transaction;
  type Block = Block;

  type Output = Output;
  type SignableTransaction = SignableTransaction;
  type TransactionMachine = TransactionMachine;

  type Address = MoneroAddress;

  const ID: &'static [u8] = b"Monero";
  const CONFIRMATIONS: usize = 10;
  // Testnet TX bb4d188a4c571f2f0de70dca9d475abc19078c10ffa8def26dd4f63ce1bcfd79 uses 146 inputs
  // while using less than 100kb of space, albeit with just 2 outputs (though outputs share a BP)
  // The TX size limit is half the contextual median block weight, where said weight is >= 300,000
  // This means any TX which fits into 150kb will be accepted by Monero
  // 128, even with 16 outputs, should fit into 100kb. Further efficiency by 192 may be viable
  // TODO: Get hard numbers and tune
  const MAX_INPUTS: usize = 128;
  const MAX_OUTPUTS: usize = 16;

  // Monero doesn't require/benefit from tweaking
  fn tweak_keys(&self, _: &mut ThresholdKeys<Self::Curve>) {}

  fn address(&self, key: dfg::EdwardsPoint) -> Self::Address {
    self.address_internal(key, EXTERNAL_SUBADDRESS)
  }

  fn branch_address(&self, key: dfg::EdwardsPoint) -> Self::Address {
    self.address_internal(key, BRANCH_SUBADDRESS)
  }

  async fn get_latest_block_number(&self) -> Result<usize, CoinError> {
    // Monero defines height as chain length, so subtract 1 for block number
    Ok(self.rpc.get_height().await.map_err(|_| CoinError::ConnectionError)? - 1)
  }

  async fn get_block(&self, number: usize) -> Result<Self::Block, CoinError> {
    let hash = self.rpc.get_block_hash(number).await.map_err(|_| CoinError::ConnectionError)?;
    let block = self.rpc.get_block(hash).await.map_err(|_| CoinError::ConnectionError)?;
    Ok(Block(hash, block))
  }

  async fn get_outputs(
    &self,
    block: &Self::Block,
    key: dfg::EdwardsPoint,
  ) -> Result<Vec<Self::Output>, CoinError> {
    let mut transactions = self
      .scanner(key)
      .scan(&self.rpc, &block.1)
      .await
      .map_err(|_| CoinError::ConnectionError)?
      .iter()
      .map(|outputs| outputs.not_locked())
      .collect::<Vec<_>>();

    // This should be pointless as we shouldn't be able to scan for any other subaddress
    // This just ensures nothing invalid makes it through
    for transaction in transactions.iter_mut() {
      *transaction = transaction
        .drain(..)
        .filter(|output| {
          [EXTERNAL_SUBADDRESS, BRANCH_SUBADDRESS, CHANGE_SUBADDRESS]
            .contains(&output.output.metadata.subaddress)
        })
        .collect();
    }

    Ok(
      transactions
        .drain(..)
        .flat_map(|mut outputs| outputs.drain(..).map(Output::from).collect::<Vec<_>>())
        .collect(),
    )
  }

  async fn prepare_send(
    &self,
    keys: ThresholdKeys<Ed25519>,
    transcript: RecommendedTranscript,
    block_number: usize,
    mut inputs: Vec<Output>,
    payments: &[(MoneroAddress, u64)],
    change: Option<dfg::EdwardsPoint>,
    fee: Fee,
  ) -> Result<SignableTransaction, CoinError> {
    Ok(SignableTransaction {
      keys,
      transcript,
      height: block_number + 1,
      actual: MSignableTransaction::new(
        self.rpc.get_protocol().await.unwrap(), // TODO: Make this deterministic
        inputs.drain(..).map(|input| input.0).collect(),
        payments.to_vec(),
        change
          .map(|change| Change::fingerprintable(self.address_internal(change, CHANGE_SUBADDRESS))),
        vec![],
        fee,
      )
      .map_err(|_| CoinError::ConnectionError)?,
    })
  }

  async fn attempt_send(
    &self,
    transaction: SignableTransaction,
  ) -> Result<Self::TransactionMachine, CoinError> {
    transaction
      .actual
      .clone()
      .multisig(
        &self.rpc,
        transaction.keys.clone(),
        transaction.transcript.clone(),
        transaction.height,
      )
      .await
      .map_err(|_| CoinError::ConnectionError)
  }

  async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<Vec<u8>, CoinError> {
    self.rpc.publish_transaction(tx).await.map_err(|_| CoinError::ConnectionError)?;
    Ok(tx.hash().to_vec())
  }

  #[cfg(test)]
  async fn get_fee(&self) -> Self::Fee {
    self.rpc.get_fee().await.unwrap()
  }

  #[cfg(test)]
  async fn mine_block(&self) {
    #[derive(serde::Deserialize, Debug)]
    struct EmptyResponse {}
    let _: EmptyResponse = self
      .rpc
      .rpc_call(
        "json_rpc",
        Some(serde_json::json!({
          "method": "generateblocks",
          "params": {
            "wallet_address": Self::test_address().to_string(),
            "amount_of_blocks": 10
          },
        })),
      )
      .await
      .unwrap();
  }

  #[cfg(test)]
  async fn test_send(&self, address: Self::Address) {
    use zeroize::Zeroizing;
    use rand_core::OsRng;

    let new_block = self.get_latest_block_number().await.unwrap() + 1;

    self.mine_block().await;
    for _ in 0 .. 7 {
      self.mine_block().await;
    }

    let outputs = Self::test_scanner()
      .scan(&self.rpc, &self.rpc.get_block_by_number(new_block).await.unwrap())
      .await
      .unwrap()
      .swap_remove(0)
      .ignore_timelock();

    let amount = outputs[0].commitment().amount;
    let fee = 3000000000; // TODO
    let tx = MSignableTransaction::new(
      self.rpc.get_protocol().await.unwrap(),
      outputs,
      vec![(address, amount - fee)],
      Some(Change::new(&Self::test_view_pair(), true)),
      vec![],
      self.rpc.get_fee().await.unwrap(),
    )
    .unwrap()
    .sign(&mut OsRng, &self.rpc, &Zeroizing::new(Scalar::one()))
    .await
    .unwrap();
    self.rpc.publish_transaction(&tx).await.unwrap();
    self.mine_block().await;
  }
}
processor/src/coins/bitcoin.rs (new file, 517 lines)
@@ -0,0 +1,517 @@
use std::{io, collections::HashMap};

use async_trait::async_trait;

use bitcoin::{
  hashes::Hash as HashTrait,
  schnorr::TweakedPublicKey,
  consensus::{Encodable, Decodable},
  psbt::serialize::Serialize,
  OutPoint,
  blockdata::script::Instruction,
  Transaction, Block, Network, Address as BAddress,
};

#[cfg(test)]
use bitcoin::{
  secp256k1::{SECP256K1, SecretKey, Message},
  PrivateKey, PublicKey, EcdsaSighashType,
  blockdata::script::Builder,
  PackedLockTime, Sequence, Script, Witness, TxIn, TxOut,
};

use transcript::RecommendedTranscript;
use k256::{
  ProjectivePoint, Scalar,
  elliptic_curve::sec1::{ToEncodedPoint, Tag},
};
use frost::{curve::Secp256k1, ThresholdKeys};

use bitcoin_serai::{
  crypto::{x_only, make_even},
  wallet::{SpendableOutput, TransactionMachine, SignableTransaction as BSignableTransaction},
  rpc::{RpcError, Rpc},
};

use serai_client::coins::bitcoin::Address;

use crate::{
  coins::{
    CoinError, Block as BlockTrait, OutputType, Output as OutputTrait,
    Transaction as TransactionTrait, Eventuality, PostFeeBranch, Coin, drop_branches, amortize_fee,
  },
  Plan,
};

#[derive(Clone, PartialEq, Eq, Debug)]
pub struct OutputId(pub [u8; 36]);
impl Default for OutputId {
  fn default() -> Self {
    Self([0; 36])
  }
}
impl AsRef<[u8]> for OutputId {
  fn as_ref(&self) -> &[u8] {
    self.0.as_ref()
  }
}
impl AsMut<[u8]> for OutputId {
  fn as_mut(&mut self) -> &mut [u8] {
    self.0.as_mut()
  }
}

#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Output {
  kind: OutputType,
  output: SpendableOutput,
  data: Vec<u8>,
}

impl OutputTrait for Output {
  type Id = OutputId;

  fn kind(&self) -> OutputType {
    self.kind
  }

  fn id(&self) -> Self::Id {
    OutputId(self.output.id())
  }

  fn amount(&self) -> u64 {
    self.output.output.value
  }

  fn data(&self) -> &[u8] {
    &self.data
  }

  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    self.kind.write(writer)?;
    self.output.write(writer)?;
    writer.write_all(&u16::try_from(self.data.len()).unwrap().to_le_bytes())?;
    writer.write_all(&self.data)
  }

  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    Ok(Output {
      kind: OutputType::read(reader)?,
      output: SpendableOutput::read(reader)?,
      data: {
        let mut data_len = [0; 2];
        reader.read_exact(&mut data_len)?;

        let mut data = vec![0; usize::from(u16::from_le_bytes(data_len))];
        reader.read_exact(&mut data)?;
        data
      },
    })
  }
}

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Fee(u64);

#[async_trait]
impl TransactionTrait<Bitcoin> for Transaction {
  type Id = [u8; 32];
  fn id(&self) -> Self::Id {
    let mut hash = self.txid().as_hash().into_inner();
    hash.reverse();
    hash
  }
  fn serialize(&self) -> Vec<u8> {
    Serialize::serialize(self)
  }
  #[cfg(test)]
  async fn fee(&self, coin: &Bitcoin) -> u64 {
    let mut value = 0;
    for input in &self.input {
      let output = input.previous_output;
      let mut hash = output.txid.as_hash().into_inner();
      hash.reverse();
      value += coin.rpc.get_transaction(&hash).await.unwrap().output
        [usize::try_from(output.vout).unwrap()]
      .value;
    }
    for output in &self.output {
      value -= output.value;
    }
    value
  }
}

impl Eventuality for OutPoint {
  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    OutPoint::consensus_decode(reader)
      .map_err(|_| io::Error::new(io::ErrorKind::Other, "couldn't decode outpoint as eventuality"))
  }
  fn serialize(&self) -> Vec<u8> {
    let mut buf = Vec::with_capacity(36);
    self.consensus_encode(&mut buf).unwrap();
    buf
  }
}

#[derive(Clone, Debug)]
pub struct SignableTransaction {
  keys: ThresholdKeys<Secp256k1>,
  transcript: RecommendedTranscript,
  actual: BSignableTransaction,
}
impl PartialEq for SignableTransaction {
  fn eq(&self, other: &SignableTransaction) -> bool {
    self.actual == other.actual
  }
}
impl Eq for SignableTransaction {}

impl BlockTrait<Bitcoin> for Block {
  type Id = [u8; 32];
  fn id(&self) -> Self::Id {
    let mut hash = self.block_hash().as_hash().into_inner();
    hash.reverse();
    hash
  }
  fn median_fee(&self) -> Fee {
    // TODO
    Fee(20)
  }
}

fn next_key(mut key: ProjectivePoint, i: usize) -> (ProjectivePoint, Scalar) {
  let mut offset = Scalar::ZERO;
  for _ in 0 .. i {
    key += ProjectivePoint::GENERATOR;
    offset += Scalar::ONE;

    let even_offset;
    (key, even_offset) = make_even(key);
    offset += Scalar::from(even_offset);
  }
  (key, offset)
}

fn branch(key: ProjectivePoint) -> (ProjectivePoint, Scalar) {
  next_key(key, 1)
}

fn change(key: ProjectivePoint) -> (ProjectivePoint, Scalar) {
  next_key(key, 2)
}

#[derive(Clone, Debug)]
pub struct Bitcoin {
  pub(crate) rpc: Rpc,
}
// Shim required for testing/debugging purposes due to generic arguments also necessitating trait
// bounds
impl PartialEq for Bitcoin {
  fn eq(&self, _: &Self) -> bool {
    true
  }
}
impl Eq for Bitcoin {}

impl Bitcoin {
  pub fn new(url: String) -> Bitcoin {
    Bitcoin { rpc: Rpc::new(url) }
  }

  #[cfg(test)]
  pub async fn fresh_chain(&self) {
    if self.rpc.get_latest_block_number().await.unwrap() > 0 {
      self
        .rpc
        .rpc_call(
          "invalidateblock",
          serde_json::json!([hex::encode(self.rpc.get_block_hash(1).await.unwrap())]),
        )
        .await
        .unwrap()
    }
  }
}

#[async_trait]
impl Coin for Bitcoin {
  type Curve = Secp256k1;

  type Fee = Fee;
  type Transaction = Transaction;
  type Block = Block;

  type Output = Output;
  type SignableTransaction = SignableTransaction;
  // Valid given an honest multisig, as assumed
  // Only the multisig can spend this output and the multisig, if spending this output, will
  // always create a specific plan
  type Eventuality = OutPoint;
  type TransactionMachine = TransactionMachine;

  type Address = Address;

  const ID: &'static str = "Bitcoin";
  const CONFIRMATIONS: usize = 3;

  // 0.0001 BTC
  #[allow(clippy::inconsistent_digit_grouping)]
  const DUST: u64 = 1_00_000_000 / 10_000;

  // Bitcoin has a max weight of 400,000 (MAX_STANDARD_TX_WEIGHT)
  // A non-SegWit TX will have 4 weight units per byte, leaving a max size of 100,000 bytes
  // While our inputs are entirely SegWit, such fine tuning is not necessary and could create
  // issues in the future (if the size decreases or we mis-evaluate it)
  // It also offers a minimal amount of benefit when we are able to logarithmically accumulate
  // inputs
  // For 128-byte inputs (40-byte output specification, 64-byte signature, whatever overhead) and
  // 64-byte outputs (40-byte script, 8-byte amount, whatever overhead), they together take up 192
  // bytes
  // 100,000 / 192 = 520
  // 520 * 192 leaves 160 bytes of overhead for the transaction structure itself
  const MAX_INPUTS: usize = 520;
  const MAX_OUTPUTS: usize = 520;

  fn tweak_keys(key: &mut ThresholdKeys<Self::Curve>) {
    let (_, offset) = make_even(key.group_key());
    *key = key.offset(Scalar::from(offset));
  }

  fn address(key: ProjectivePoint) -> Self::Address {
    assert!(key.to_encoded_point(true).tag() == Tag::CompressedEvenY, "YKey is odd");
    Address(BAddress::p2tr_tweaked(
      TweakedPublicKey::dangerous_assume_tweaked(x_only(&key)),
      Network::Bitcoin,
    ))
  }

  fn branch_address(key: ProjectivePoint) -> Self::Address {
    Self::address(branch(key).0)
  }

  async fn get_latest_block_number(&self) -> Result<usize, CoinError> {
    self.rpc.get_latest_block_number().await.map_err(|_| CoinError::ConnectionError)
  }

  async fn get_block(&self, number: usize) -> Result<Self::Block, CoinError> {
    let block_hash =
      self.rpc.get_block_hash(number).await.map_err(|_| CoinError::ConnectionError)?;
    self.rpc.get_block(&block_hash).await.map_err(|_| CoinError::ConnectionError)
  }

  async fn get_outputs(
    &self,
    block: &Self::Block,
    key: ProjectivePoint,
  ) -> Result<Vec<Self::Output>, CoinError> {
    let external = (key, Scalar::ZERO);
    let branch = branch(key);
    let change = change(key);

    let entry =
      |pair: (_, _), kind| (Self::address(pair.0).0.script_pubkey().to_bytes(), (pair.1, kind));
    let scripts = HashMap::from([
      entry(external, OutputType::External),
      entry(branch, OutputType::Branch),
      entry(change, OutputType::Change),
    ]);

    let mut outputs = Vec::new();
    // Skip the coinbase transaction which is burdened by maturity
    for tx in &block.txdata[1 ..] {
      for (vout, output) in tx.output.iter().enumerate() {
        if let Some(info) = scripts.get(&output.script_pubkey.to_bytes()) {
          outputs.push(Output {
            kind: info.1,
            output: SpendableOutput {
              offset: info.0,
              output: output.clone(),
              outpoint: OutPoint { txid: tx.txid(), vout: u32::try_from(vout).unwrap() },
            },
            data: (|| {
              for output in &tx.output {
                if output.script_pubkey.is_op_return() {
                  match output.script_pubkey.instructions_minimal().last() {
                    Some(Ok(Instruction::PushBytes(data))) => return data.to_vec(),
                    _ => continue,
                  }
                }
              }
              vec![]
            })(),
          });
        }
      }
    }

    Ok(outputs)
  }

  async fn prepare_send(
    &self,
    keys: ThresholdKeys<Secp256k1>,
    _: usize,
    mut plan: Plan<Self>,
    fee: Fee,
  ) -> Result<(Option<(SignableTransaction, Self::Eventuality)>, Vec<PostFeeBranch>), CoinError> {
    let signable = |plan: &Plan<Self>, tx_fee: Option<_>| {
      let mut payments = vec![];
      for payment in &plan.payments {
        // If we're solely estimating the fee, don't actually specify an amount
        // This won't affect the fee calculation yet will ensure we don't hit an out of funds error
        payments
          .push((payment.address.0.clone(), if tx_fee.is_none() { 0 } else { payment.amount }));
      }

      match BSignableTransaction::new(
        plan.inputs.iter().map(|input| input.output.clone()).collect(),
        &payments,
        plan.change.map(|key| Self::address(change(key).0).0),
        None,
        fee.0,
      ) {
        Some(signable) => Some(signable),
        // TODO: Use a proper error here
        None => {
          if tx_fee.is_none() {
            // Not enough funds
            None
          } else {
            panic!("didn't have enough funds for a Bitcoin TX");
          }
        }
      }
    };

    let tx_fee = match signable(&plan, None) {
      Some(tx) => tx.fee(),
      None => return Ok((None, drop_branches(&plan))),
    };

    let branch_outputs = amortize_fee(&mut plan, tx_fee);

    Ok((
      Some((
        SignableTransaction {
          keys,
          transcript: plan.transcript(),
          actual: signable(&plan, Some(tx_fee)).unwrap(),
        },
        plan.inputs[0].output.outpoint,
      )),
      branch_outputs,
    ))
  }

  async fn attempt_send(
    &self,
    transaction: Self::SignableTransaction,
  ) -> Result<Self::TransactionMachine, CoinError> {
    transaction
      .actual
      .clone()
      .multisig(transaction.keys.clone(), transaction.transcript.clone())
      .await
      .map_err(|_| CoinError::ConnectionError)
  }

  async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<(), CoinError> {
    match self.rpc.send_raw_transaction(tx).await {
      Ok(_) => (),
      Err(RpcError::ConnectionError) => Err(CoinError::ConnectionError)?,
      // TODO: Distinguish already in pool vs double spend (other signing attempt succeeded) vs
      // invalid transaction
      Err(e) => panic!("failed to publish TX {:?}: {e}", tx.txid()),
    }
    Ok(())
  }

  async fn get_transaction(&self, id: &[u8; 32]) -> Result<Transaction, CoinError> {
    self.rpc.get_transaction(id).await.map_err(|_| CoinError::ConnectionError)
  }

  fn confirm_completion(&self, eventuality: &OutPoint, tx: &Transaction) -> bool {
    eventuality == &tx.input[0].previous_output
  }

  #[cfg(test)]
  async fn get_block_number(&self, id: &[u8; 32]) -> usize {
    self.rpc.get_block_number(id).await.unwrap()
  }

  #[cfg(test)]
  async fn get_fee(&self) -> Self::Fee {
    Fee(1)
  }

  #[cfg(test)]
  async fn mine_block(&self) {
    self
      .rpc
      .rpc_call::<Vec<String>>(
        "generatetoaddress",
        serde_json::json!([
          1,
          BAddress::p2sh(&Script::new(), Network::Regtest).unwrap().to_string()
        ]),
      )
      .await
      .unwrap();
  }

  #[cfg(test)]
  async fn test_send(&self, address: Self::Address) -> Block {
    let secret_key = SecretKey::new(&mut rand_core::OsRng);
    let private_key = PrivateKey::new(secret_key, Network::Regtest);
    let public_key = PublicKey::from_private_key(SECP256K1, &private_key);
    let main_addr = BAddress::p2pkh(&public_key, Network::Regtest);

    let new_block = self.get_latest_block_number().await.unwrap() + 1;
    self
      .rpc
      .rpc_call::<Vec<String>>("generatetoaddress", serde_json::json!([1, main_addr]))
      .await
      .unwrap();

    for _ in 0 .. 100 {
      self.mine_block().await;
    }

    let tx = self.get_block(new_block).await.unwrap().txdata.swap_remove(0);
    let mut tx = Transaction {
      version: 2,
      lock_time: PackedLockTime::ZERO,
      input: vec![TxIn {
        previous_output: OutPoint { txid: tx.txid(), vout: 0 },
        script_sig: Script::default(),
        sequence: Sequence(u32::MAX),
        witness: Witness::default(),
      }],
      output: vec![TxOut {
        value: tx.output[0].value - 10000,
        script_pubkey: address.0.script_pubkey(),
      }],
    };

    let mut der = SECP256K1
      .sign_ecdsa_low_r(
        &Message::from(
          tx.signature_hash(0, &main_addr.script_pubkey(), EcdsaSighashType::All.to_u32())
            .as_hash(),
        ),
        &private_key.inner,
      )
      .serialize_der()
      .to_vec();
    der.push(1);
    tx.input[0].script_sig = Builder::new().push_slice(&der).push_key(&public_key).into_script();

    let block = self.get_latest_block_number().await.unwrap() + 1;
    self.rpc.send_raw_transaction(&tx).await.unwrap();
    for _ in 0 .. Self::CONFIRMATIONS {
      self.mine_block().await;
    }
    self.get_block(block).await.unwrap()
  }
}
processor/src/coins/mod.rs (new file, 298 lines)
@@ -0,0 +1,298 @@
use core::fmt::Debug;
use std::io;

use async_trait::async_trait;
use thiserror::Error;

use frost::{
  curve::{Ciphersuite, Curve},
  ThresholdKeys,
  sign::PreprocessMachine,
};

#[cfg(feature = "bitcoin")]
pub mod bitcoin;
#[cfg(feature = "bitcoin")]
pub use self::bitcoin::Bitcoin;

#[cfg(feature = "monero")]
pub mod monero;
#[cfg(feature = "monero")]
pub use monero::Monero;

use crate::Plan;

#[derive(Clone, Copy, Error, Debug)]
pub enum CoinError {
  #[error("failed to connect to coin daemon")]
  ConnectionError,
}

pub trait Id:
  Send + Sync + Clone + Default + PartialEq + AsRef<[u8]> + AsMut<[u8]> + Debug
{
}
impl<I: Send + Sync + Clone + Default + PartialEq + AsRef<[u8]> + AsMut<[u8]> + Debug> Id for I {}

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum OutputType {
  // Needs to be processed/sent up to Substrate
  External,

  // Given a known output set, and a known series of outbound transactions, we should be able to
  // form a completely deterministic schedule S. The issue is when S has TXs which spend prior TXs
  // in S (which is needed for our logarithmic scheduling). In order to have the descendant TX, say
  // S[1], build off S[0], we need to observe when S[0] is included on-chain.
  //
  // We cannot.
  //
  // Monero (and other privacy coins) do not expose their UTXO graphs. Even if we know how to
  // create S[0], and the actual payment info behind it, we cannot observe it on the blockchain
  // unless we participated in creating it. Locking the entire schedule, when we cannot sign for
  // the entire schedule at once, to a single signing set isn't feasible.
  //
  // While any member of the active signing set can provide data enabling other signers to
  // participate, it's several KB of data which we then have to code communication for.
  // The other option is to simply not observe S[0]. Instead, observe a TX with an identical output
  // to the one in S[0] we intended to use for S[1]. It's either from S[0], or Eve, a malicious
  // actor, has sent us a forged TX which is... equally as usable? so who cares?
  //
  // The only issue is if we have multiple outputs on-chain with identical amounts and purposes.
  // Accordingly, when the scheduler makes a plan for when a specific output is available, it
  // shouldn't write that plan. It should *push* that plan to a queue of plans to perform when
  // instances of that output occur.
  Branch,

  // Should be added to the available UTXO pool with no further action
  Change,
}

impl OutputType {
  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    writer.write_all(&[match self {
      OutputType::External => 0,
      OutputType::Branch => 1,
      OutputType::Change => 2,
    }])
  }

  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let mut byte = [0; 1];
    reader.read_exact(&mut byte)?;
    Ok(match byte[0] {
      0 => OutputType::External,
      1 => OutputType::Branch,
      2 => OutputType::Change,
      _ => Err(io::Error::new(io::ErrorKind::Other, "invalid OutputType"))?,
    })
  }
}
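
// A minimal round-trip sketch of the above encoding (not from the original diff): each variant
// should survive write followed by read, and an unknown discriminant should error.
#[cfg(test)]
#[test]
fn output_type_round_trip() {
  for kind in [OutputType::External, OutputType::Branch, OutputType::Change] {
    let mut buf = vec![];
    kind.write(&mut buf).unwrap();
    assert_eq!(OutputType::read::<&[u8]>(&mut buf.as_ref()).unwrap(), kind);
  }
  assert_eq!(
    OutputType::read::<&[u8]>(&mut [3u8].as_ref()).unwrap_err().kind(),
    io::ErrorKind::Other,
  );
}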

pub trait Output: Send + Sync + Sized + Clone + PartialEq + Eq + Debug {
  type Id: 'static + Id;

  fn kind(&self) -> OutputType;

  fn id(&self) -> Self::Id;
  fn amount(&self) -> u64;

  fn data(&self) -> &[u8];

  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()>;
  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
}

#[async_trait]
pub trait Transaction<C: Coin>: Send + Sync + Sized + Clone + Debug {
  type Id: 'static + Id;
  fn id(&self) -> Self::Id;
  fn serialize(&self) -> Vec<u8>;

  #[cfg(test)]
  async fn fee(&self, coin: &C) -> u64;
}

pub trait Eventuality: Send + Sync + Clone + Debug {
  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
  fn serialize(&self) -> Vec<u8>;
}

pub trait Block<C: Coin>: Send + Sync + Sized + Clone + Debug {
  type Id: 'static + Id;
  fn id(&self) -> Self::Id;
  fn median_fee(&self) -> C::Fee;
}

// The post-fee value of an expected branch.
pub struct PostFeeBranch {
  pub expected: u64,
  pub actual: Option<u64>,
}

// Return the PostFeeBranches needed when dropping a transaction
pub fn drop_branches<C: Coin>(plan: &Plan<C>) -> Vec<PostFeeBranch> {
  let mut branch_outputs = vec![];
  for payment in &plan.payments {
    if payment.address == C::branch_address(plan.key) {
      branch_outputs.push(PostFeeBranch { expected: payment.amount, actual: None });
    }
  }
  branch_outputs
}

// Amortize a fee over the plan's payments
pub fn amortize_fee<C: Coin>(plan: &mut Plan<C>, tx_fee: u64) -> Vec<PostFeeBranch> {
  // No payments to amortize over
  if plan.payments.is_empty() {
    return vec![];
  }

  // Amortize the transaction fee across outputs
  let payments_len = u64::try_from(plan.payments.len()).unwrap();
  // Use a formula which will round up
  let output_fee = (tx_fee + (payments_len - 1)) / payments_len;

  let mut branch_outputs = vec![];
  for payment in plan.payments.iter_mut() {
    let mut post_fee = payment.amount.checked_sub(output_fee);
    // If this is under our dust threshold, drop it
    if let Some(amount) = post_fee {
      if amount < C::DUST {
        post_fee = None;
      }
    }

    // Note the branch output, if this is one
    if payment.address == C::branch_address(plan.key) {
      branch_outputs.push(PostFeeBranch { expected: payment.amount, actual: post_fee });
    }
    payment.amount = post_fee.unwrap_or(0);
  }
  // Drop payments now worth 0
  plan.payments = plan.payments.drain(..).filter(|payment| payment.amount != 0).collect();
  branch_outputs
}
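
// Worked sketch of the amortization arithmetic above (not from the original diff). With a
// 1000-unit TX fee split over 3 payments, the round-up formula charges each payment
// ceil(1000 / 3) = 334 units, and a payment left under a (hypothetical, sketch-only) dust bound
// is zeroed so it gets dropped.
#[cfg(test)]
#[test]
fn amortization_arithmetic() {
  const DUST: u64 = 100; // hypothetical dust bound for this sketch
  let (tx_fee, payments_len) = (1000u64, 3u64);
  let output_fee = (tx_fee + (payments_len - 1)) / payments_len;
  assert_eq!(output_fee, 334);

  // A 500-unit payment pays the fee and survives; a 400-unit payment drops below dust
  assert_eq!(500u64.checked_sub(output_fee).filter(|amount| *amount >= DUST), Some(166));
  assert_eq!(400u64.checked_sub(output_fee).filter(|amount| *amount >= DUST), None);
}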

#[async_trait]
pub trait Coin: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
  /// The elliptic curve used for this coin.
  type Curve: Curve;

  /// The type representing the fee for this coin.
  // This should likely be a u64, wrapped in a type which implements appropriate fee logic.
  type Fee: Copy;

  /// The type representing the transaction for this coin.
  type Transaction: Transaction<Self>;
  /// The type representing the block for this coin.
  type Block: Block<Self>;

  /// The type containing all information on a scanned output.
  // This is almost certainly distinct from the coin's native output type.
  type Output: Output;
  /// The type containing all information on a planned transaction, waiting to be signed.
  type SignableTransaction: Send + Sync + Clone + Debug;
  /// The type containing all information to check if a plan was completed.
  type Eventuality: Eventuality;
  /// The FROST machine to sign a transaction.
  type TransactionMachine: PreprocessMachine<Signature = Self::Transaction>;

  /// The type representing an address.
  // This should NOT be a String, but rather a tailored type representing an efficient binary
  // encoding, as detailed in the integration documentation.
  type Address: Send
    + Sync
    + Clone
    + PartialEq
    + Eq
    + Debug
    + ToString
    + TryInto<Vec<u8>>
    + TryFrom<Vec<u8>>;

  /// String ID for this coin.
  const ID: &'static str;
  /// The number of confirmations required to consider a block 'final'.
  const CONFIRMATIONS: usize;
  /// The maximum number of inputs which will fit in a TX.
  /// This should be equal to MAX_OUTPUTS unless one is specifically limited.
  /// A TX with MAX_INPUTS and MAX_OUTPUTS must not exceed the max size.
  const MAX_INPUTS: usize;
  /// The maximum number of outputs which will fit in a TX.
  /// This should be equal to MAX_INPUTS unless one is specifically limited.
  /// A TX with MAX_INPUTS and MAX_OUTPUTS must not exceed the max size.
  const MAX_OUTPUTS: usize;

  /// Minimum output value which will be handled.
  const DUST: u64;

  /// Tweak keys for this coin.
  fn tweak_keys(key: &mut ThresholdKeys<Self::Curve>);

  /// Address for the given group key to receive external coins to.
  fn address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
  /// Address for the given group key to use for scheduled branches.
  // This is purely used for debugging purposes. Any output may be used to execute a branch.
  fn branch_address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;

  /// Get the latest block's number.
  async fn get_latest_block_number(&self) -> Result<usize, CoinError>;
  /// Get a block by its number.
  async fn get_block(&self, number: usize) -> Result<Self::Block, CoinError>;
  /// Get the outputs within a block for a specific key.
  async fn get_outputs(
    &self,
    block: &Self::Block,
    key: <Self::Curve as Ciphersuite>::G,
  ) -> Result<Vec<Self::Output>, CoinError>;

  /// Prepare a SignableTransaction for a transaction.
  /// Returns None for the transaction if the SignableTransaction was dropped due to lack of value.
  #[rustfmt::skip]
  async fn prepare_send(
    &self,
    keys: ThresholdKeys<Self::Curve>,
    block_number: usize,
    plan: Plan<Self>,
    fee: Self::Fee,
  ) -> Result<
    (Option<(Self::SignableTransaction, Self::Eventuality)>, Vec<PostFeeBranch>),
    CoinError
  >;

  /// Attempt to sign a SignableTransaction.
  async fn attempt_send(
    &self,
    transaction: Self::SignableTransaction,
  ) -> Result<Self::TransactionMachine, CoinError>;

  /// Publish a transaction.
  async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<(), CoinError>;

  /// Get a transaction by its ID.
  async fn get_transaction(
    &self,
    id: &<Self::Transaction as Transaction<Self>>::Id,
  ) -> Result<Self::Transaction, CoinError>;

  /// Confirm a plan was completed by the specified transaction.
  // This is allowed to take shortcuts.
  // This may assume an honest multisig, solely checking the inputs specified were spent.
  // This may solely check the outputs are equivalent *so long as it's locked to the plan ID*.
  fn confirm_completion(&self, eventuality: &Self::Eventuality, tx: &Self::Transaction) -> bool;

  /// Get a block's number by its ID.
  #[cfg(test)]
  async fn get_block_number(&self, id: &<Self::Block as Block<Self>>::Id) -> usize;

  #[cfg(test)]
  async fn get_fee(&self) -> Self::Fee;

  #[cfg(test)]
  async fn mine_block(&self);

  /// Sends to the specified address.
  /// Additionally mines enough blocks so that the TX is past the confirmation depth.
  #[cfg(test)]
  async fn test_send(&self, key: Self::Address) -> Self::Block;
}
processor/src/coins/monero.rs (new file, 503 lines)
@@ -0,0 +1,503 @@
use std::io;

use async_trait::async_trait;

use zeroize::Zeroizing;

use transcript::RecommendedTranscript;

use group::{ff::Field, Group};
use dalek_ff_group::{Scalar, EdwardsPoint};
use frost::{curve::Ed25519, ThresholdKeys};

use monero_serai::{
  Protocol,
  transaction::Transaction,
  block::Block as MBlock,
  rpc::{RpcError, Rpc},
  wallet::{
    ViewPair, Scanner,
    address::{Network, SubaddressIndex, AddressSpec},
    Fee, SpendableOutput, Change, TransactionError, SignableTransaction as MSignableTransaction,
    Eventuality, TransactionMachine,
  },
};

pub use serai_client::{primitives::MAX_DATA_LEN, coins::monero::Address};

use crate::{
  Payment, Plan, additional_key,
  coins::{
    CoinError, Block as BlockTrait, OutputType, Output as OutputTrait,
    Transaction as TransactionTrait, Eventuality as EventualityTrait, PostFeeBranch, Coin,
    drop_branches, amortize_fee,
  },
};

#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Output(SpendableOutput, Vec<u8>);

const EXTERNAL_SUBADDRESS: Option<SubaddressIndex> = SubaddressIndex::new(0, 0);
const BRANCH_SUBADDRESS: Option<SubaddressIndex> = SubaddressIndex::new(1, 0);
const CHANGE_SUBADDRESS: Option<SubaddressIndex> = SubaddressIndex::new(2, 0);

impl OutputTrait for Output {
  // While we could use (tx, o), using the key ensures we won't be susceptible to the burning bug.
  // While we already are immune, thanks to using featured addresses, this doesn't hurt and is
  // technically more efficient.
  type Id = [u8; 32];

  fn kind(&self) -> OutputType {
    match self.0.output.metadata.subaddress {
      EXTERNAL_SUBADDRESS => OutputType::External,
      BRANCH_SUBADDRESS => OutputType::Branch,
      CHANGE_SUBADDRESS => OutputType::Change,
      _ => panic!("unrecognized address was scanned for"),
    }
  }

  fn id(&self) -> Self::Id {
    self.0.output.data.key.compress().to_bytes()
  }

  fn amount(&self) -> u64 {
    self.0.commitment().amount
  }

  fn data(&self) -> &[u8] {
    &self.1
  }

  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    self.0.write(writer)?;
    writer.write_all(&u16::try_from(self.1.len()).unwrap().to_le_bytes())?;
    writer.write_all(&self.1)?;
    Ok(())
  }

  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let output = SpendableOutput::read(reader)?;

    let mut data_len = [0; 2];
    reader.read_exact(&mut data_len)?;

    let mut data = vec![0; usize::from(u16::from_le_bytes(data_len))];
    reader.read_exact(&mut data)?;

    Ok(Output(output, data))
  }
}

#[async_trait]
impl TransactionTrait<Monero> for Transaction {
  type Id = [u8; 32];
  fn id(&self) -> Self::Id {
    self.hash()
  }
  fn serialize(&self) -> Vec<u8> {
    self.serialize()
  }
  #[cfg(test)]
  async fn fee(&self, _: &Monero) -> u64 {
    self.rct_signatures.base.fee
  }
}

impl EventualityTrait for Eventuality {
  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    Eventuality::read(reader)
  }
  fn serialize(&self) -> Vec<u8> {
    self.serialize()
  }
}

#[derive(Clone, Debug)]
pub struct SignableTransaction {
  keys: ThresholdKeys<Ed25519>,
  transcript: RecommendedTranscript,
  // Monero height, defined as the length of the chain
  height: usize,
  actual: MSignableTransaction,
}

#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Block([u8; 32], MBlock);
impl BlockTrait<Monero> for Block {
  type Id = [u8; 32];
  fn id(&self) -> Self::Id {
    self.0
  }

  fn median_fee(&self) -> Fee {
    // TODO
    Fee { per_weight: 80000, mask: 10000 }
  }
}

#[derive(Clone, Debug)]
pub struct Monero {
  pub(crate) rpc: Rpc,
}
// Shim required for testing/debugging purposes due to generic arguments also necessitating trait
// bounds
impl PartialEq for Monero {
  fn eq(&self, _: &Self) -> bool {
    true
  }
}
impl Eq for Monero {}

impl Monero {
  pub fn new(url: String) -> Monero {
    Monero { rpc: Rpc::new(url).unwrap() }
  }

  fn view_pair(spend: EdwardsPoint) -> ViewPair {
    ViewPair::new(spend.0, Zeroizing::new(additional_key::<Monero>(0).0))
  }

  fn address_internal(spend: EdwardsPoint, subaddress: Option<SubaddressIndex>) -> Address {
    Address::new(Self::view_pair(spend).address(
      Network::Mainnet,
      AddressSpec::Featured { subaddress, payment_id: None, guaranteed: true },
    ))
    .unwrap()
  }

  fn scanner(spend: EdwardsPoint) -> Scanner {
    let mut scanner = Scanner::from_view(Self::view_pair(spend), None);
    debug_assert!(EXTERNAL_SUBADDRESS.is_none());
    scanner.register_subaddress(BRANCH_SUBADDRESS.unwrap());
    scanner.register_subaddress(CHANGE_SUBADDRESS.unwrap());
    scanner
  }

  #[cfg(test)]
  fn test_view_pair() -> ViewPair {
    ViewPair::new(*EdwardsPoint::generator(), Zeroizing::new(Scalar::one().0))
  }

  #[cfg(test)]
  fn test_scanner() -> Scanner {
    Scanner::from_view(Self::test_view_pair(), Some(std::collections::HashSet::new()))
  }

  #[cfg(test)]
  fn test_address() -> Address {
    Address::new(Self::test_view_pair().address(Network::Mainnet, AddressSpec::Standard)).unwrap()
  }
}

#[async_trait]
impl Coin for Monero {
  type Curve = Ed25519;

  type Fee = Fee;
  type Transaction = Transaction;
  type Block = Block;

  type Output = Output;
  type SignableTransaction = SignableTransaction;
  type Eventuality = Eventuality;
  type TransactionMachine = TransactionMachine;

  type Address = Address;

  const ID: &'static str = "Monero";
  const CONFIRMATIONS: usize = 10;

  // wallet2 will not create a transaction larger than 100kb, and Monero won't relay a transaction
  // larger than 150kb. This fits within the 100kb mark.
  // Technically, it can be ~124, yet a small bit of buffer is appreciated
  // TODO: Test creating a TX this big
  const MAX_INPUTS: usize = 120;
  const MAX_OUTPUTS: usize = 16;

  // 0.01 XMR
  const DUST: u64 = 10000000000;

  // Monero doesn't require/benefit from tweaking
  fn tweak_keys(_: &mut ThresholdKeys<Self::Curve>) {}

  fn address(key: EdwardsPoint) -> Self::Address {
    Self::address_internal(key, EXTERNAL_SUBADDRESS)
  }

  fn branch_address(key: EdwardsPoint) -> Self::Address {
    Self::address_internal(key, BRANCH_SUBADDRESS)
  }

  async fn get_latest_block_number(&self) -> Result<usize, CoinError> {
    // Monero defines height as chain length, so subtract 1 for block number
    Ok(self.rpc.get_height().await.map_err(|_| CoinError::ConnectionError)? - 1)
  }

  async fn get_block(&self, number: usize) -> Result<Self::Block, CoinError> {
    let hash = self.rpc.get_block_hash(number).await.map_err(|_| CoinError::ConnectionError)?;
    let block = self.rpc.get_block(hash).await.map_err(|_| CoinError::ConnectionError)?;
    Ok(Block(hash, block))
  }

  async fn get_outputs(
    &self,
    block: &Self::Block,
    key: EdwardsPoint,
  ) -> Result<Vec<Self::Output>, CoinError> {
    let mut txs = Self::scanner(key)
      .scan(&self.rpc, &block.1)
      .await
      .map_err(|_| CoinError::ConnectionError)?
      .iter()
      .filter_map(|outputs| Some(outputs.not_locked()).filter(|outputs| !outputs.is_empty()))
      .collect::<Vec<_>>();

    // This should be pointless as we shouldn't be able to scan for any other subaddress
    // This just ensures nothing invalid makes it through
    for tx_outputs in &txs {
      for output in tx_outputs {
        assert!([EXTERNAL_SUBADDRESS, BRANCH_SUBADDRESS, CHANGE_SUBADDRESS]
          .contains(&output.output.metadata.subaddress));
      }
    }

    let mut outputs = Vec::with_capacity(txs.len());
    for mut tx_outputs in txs.drain(..) {
      for output in tx_outputs.drain(..) {
        let mut data = output.arbitrary_data().get(0).cloned().unwrap_or(vec![]);

        // The Output serialization code above uses u16 to represent length
        data.truncate(u16::MAX.into());
        // Monero data segments should be <= 255 already, and MAX_DATA_LEN is currently 512
        // This just allows either Monero to change, or MAX_DATA_LEN to change, without introducing
        // complications
        data.truncate(MAX_DATA_LEN.try_into().unwrap());

        outputs.push(Output(output, data));
      }
    }

    Ok(outputs)
  }

  async fn prepare_send(
    &self,
    keys: ThresholdKeys<Ed25519>,
    block_number: usize,
    mut plan: Plan<Self>,
    fee: Fee,
  ) -> Result<(Option<(SignableTransaction, Eventuality)>, Vec<PostFeeBranch>), CoinError> {
    // Sanity check this has at least one output planned
    assert!((!plan.payments.is_empty()) || plan.change.is_some());

    let protocol = Protocol::v16;
    // Check a fork hasn't occurred which this processor hasn't been updated for
    assert_eq!(protocol, self.rpc.get_protocol().await.map_err(|_| CoinError::ConnectionError)?);

    let signable = |plan: &mut Plan<Self>, tx_fee: Option<_>| {
      // Monero requires at least two outputs
      // If we only have one output planned, add a dummy payment
      let outputs = plan.payments.len() + usize::from(u8::from(plan.change.is_some()));
      if outputs == 0 {
        return Ok(None);
      } else if outputs == 1 {
        plan.payments.push(Payment {
          address: Address::new(
            ViewPair::new(EdwardsPoint::generator().0, Zeroizing::new(Scalar::one().0))
              .address(Network::Mainnet, AddressSpec::Standard),
          )
          .unwrap(),
          amount: 0,
          data: None,
        });
      }

      let mut payments = vec![];
      for payment in &plan.payments {
        // If we're solely estimating the fee, don't actually specify an amount
        // This won't affect the fee calculation yet will ensure we don't hit an out of funds error
        payments.push((
          payment.address.clone().into(),
          if tx_fee.is_none() { 0 } else { payment.amount },
        ));
      }

      match MSignableTransaction::new(
        protocol,
        // Use the plan ID as the r_seed
        // This perfectly binds the plan while simultaneously allowing verifying the plan was
        // executed with no additional communication
        Some(Zeroizing::new(plan.id())),
        plan.inputs.iter().cloned().map(|input| input.0).collect(),
        payments,
        plan.change.map(|key| {
          Change::fingerprintable(Self::address_internal(key, CHANGE_SUBADDRESS).into())
        }),
        vec![],
        fee,
      ) {
        Ok(signable) => Ok(Some(signable)),
        Err(e) => match e {
          TransactionError::MultiplePaymentIds => {
            panic!("multiple payment IDs despite not supporting integrated addresses");
          }
          TransactionError::NoInputs |
          TransactionError::NoOutputs |
          TransactionError::NoChange |
          TransactionError::TooManyOutputs |
          TransactionError::TooMuchData |
          TransactionError::TooLargeTransaction |
          TransactionError::WrongPrivateKey => {
            panic!("created an invalid Monero transaction: {e}");
          }
          TransactionError::ClsagError(_) |
          TransactionError::InvalidTransaction(_) |
          TransactionError::FrostError(_) => {
            panic!("supposedly unreachable (at this time) Monero error: {e}");
          }
          TransactionError::NotEnoughFunds(_, _) => {
            if tx_fee.is_none() {
              Ok(None)
            } else {
              panic!("didn't have enough funds for a Monero TX");
            }
          }
          TransactionError::RpcError(e) => {
            log::error!("RpcError when preparing transaction: {e:?}");
            Err(CoinError::ConnectionError)
          }
        },
      }
    };

    let tx_fee = match signable(&mut plan, None)? {
      Some(tx) => tx.fee(),
      None => return Ok((None, drop_branches(&plan))),
    };

    let branch_outputs = amortize_fee(&mut plan, tx_fee);

    let signable = SignableTransaction {
      keys,
      transcript: plan.transcript(),
      height: block_number + 1,
      actual: match signable(&mut plan, Some(tx_fee))? {
        Some(signable) => signable,
        None => return Ok((None, branch_outputs)),
      },
    };
    let eventuality = signable.actual.eventuality().unwrap();
    Ok((Some((signable, eventuality)), branch_outputs))
  }

  async fn attempt_send(
    &self,
    transaction: SignableTransaction,
  ) -> Result<Self::TransactionMachine, CoinError> {
    transaction
      .actual
      .clone()
      .multisig(
        &self.rpc,
        transaction.keys.clone(),
        transaction.transcript.clone(),
        transaction.height,
      )
      .await
      .map_err(|_| CoinError::ConnectionError)
  }

  async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<(), CoinError> {
    match self.rpc.publish_transaction(tx).await {
      Ok(_) => Ok(()),
      Err(RpcError::ConnectionError) => Err(CoinError::ConnectionError)?,
      // TODO: Distinguish already in pool vs double spend (other signing attempt succeeded) vs
      // invalid transaction
      Err(e) => panic!("failed to publish TX {:?}: {e}", tx.hash()),
    }
  }

  async fn get_transaction(&self, id: &[u8; 32]) -> Result<Transaction, CoinError> {
    self.rpc.get_transaction(*id).await.map_err(|_| CoinError::ConnectionError)
  }

  fn confirm_completion(&self, eventuality: &Eventuality, tx: &Transaction) -> bool {
    eventuality.matches(tx)
  }

  #[cfg(test)]
  async fn get_block_number(&self, id: &[u8; 32]) -> usize {
    self.rpc.get_block(*id).await.unwrap().number()
  }

  #[cfg(test)]
  async fn get_fee(&self) -> Self::Fee {
    self.rpc.get_fee().await.unwrap()
  }

  #[cfg(test)]
  async fn mine_block(&self) {
    // https://github.com/serai-dex/serai/issues/198
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;

    #[derive(serde::Deserialize, Debug)]
    struct EmptyResponse {}
    let _: EmptyResponse = self
      .rpc
      .rpc_call(
        "json_rpc",
        Some(serde_json::json!({
          "method": "generateblocks",
          "params": {
            "wallet_address": Self::test_address().to_string(),
            "amount_of_blocks": 1
          },
        })),
      )
      .await
      .unwrap();
  }

  #[cfg(test)]
  async fn test_send(&self, address: Self::Address) -> Block {
    use zeroize::Zeroizing;
    use rand_core::OsRng;

    let new_block = self.get_latest_block_number().await.unwrap() + 1;
    for _ in 0 .. 80 {
      self.mine_block().await;
    }

    let outputs = Self::test_scanner()
      .scan(&self.rpc, &self.rpc.get_block_by_number(new_block).await.unwrap())
      .await
      .unwrap()
      .swap_remove(0)
      .ignore_timelock();

    let amount = outputs[0].commitment().amount;
    // The dust should always be sufficient for the fee
    let fee = Monero::DUST;

    let tx = MSignableTransaction::new(
      self.rpc.get_protocol().await.unwrap(),
      None,
      outputs,
      vec![(address.into(), amount - fee)],
      Some(Change::fingerprintable(Self::test_address().into())),
      vec![],
      self.rpc.get_fee().await.unwrap(),
    )
    .unwrap()
    .sign(&mut OsRng, &self.rpc, &Zeroizing::new(Scalar::one().0))
    .await
    .unwrap();

    let block = self.get_latest_block_number().await.unwrap() + 1;
    self.rpc.publish_transaction(&tx).await.unwrap();
    for _ in 0 .. 10 {
      self.mine_block().await;
    }
    self.get_block(block).await.unwrap()
  }
}
processor/src/coordinator.rs (new file, 42 lines)
@@ -0,0 +1,42 @@
use std::{
  sync::{Arc, RwLock},
  collections::VecDeque,
};

use messages::{ProcessorMessage, CoordinatorMessage};

// TODO: Also include the coin block height here so we can delay handling if not synced?
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Message {
  pub id: u64,
  pub msg: CoordinatorMessage,
}

#[async_trait::async_trait]
pub trait Coordinator {
  async fn send(&mut self, msg: ProcessorMessage);
  async fn recv(&mut self) -> Message;
  async fn ack(&mut self, msg: Message);
}

// TODO: Move this to tests
pub struct MemCoordinator(Arc<RwLock<VecDeque<Message>>>);
impl MemCoordinator {
  #[allow(clippy::new_without_default)]
  pub fn new() -> MemCoordinator {
    MemCoordinator(Arc::new(RwLock::new(VecDeque::new())))
  }
}

#[async_trait::async_trait]
impl Coordinator for MemCoordinator {
  async fn send(&mut self, _: ProcessorMessage) {
    todo!()
  }
  async fn recv(&mut self) -> Message {
    todo!()
  }
  async fn ack(&mut self, _: Message) {
    todo!()
  }
}
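
// A hedged sketch (not from the original diff) of how the todo!()s above could be filled in,
// shown on a hypothetical QueueCoordinator so it doesn't collide with the stub. Given the
// semantics main.rs relies on: recv returns the oldest unacknowledged Message without removing
// it (so a crash before ack would redeliver it), and ack pops it once fully handled. Sent
// ProcessorMessages are dropped here, as nothing consumes them in this sketch.
pub struct QueueCoordinator(Arc<RwLock<VecDeque<Message>>>);

#[async_trait::async_trait]
impl Coordinator for QueueCoordinator {
  async fn send(&mut self, _: ProcessorMessage) {}

  async fn recv(&mut self) -> Message {
    loop {
      if let Some(msg) = self.0.read().unwrap().front().cloned() {
        return msg;
      }
      // Poll until a test pushes a message onto the shared queue
      tokio::time::sleep(core::time::Duration::from_millis(100)).await;
    }
  }

  async fn ack(&mut self, msg: Message) {
    assert_eq!(self.0.write().unwrap().pop_front(), Some(msg));
  }
}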
processor/src/db.rs (new file, 149 lines)
@@ -0,0 +1,149 @@
use core::{marker::PhantomData, fmt::Debug};
use std::{
  sync::{Arc, RwLock},
  collections::HashMap,
};

use crate::{Plan, coins::Coin};

pub trait DbTxn: Send + Sync + Clone + Debug {
  fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>);
  fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>>;
  fn del(&mut self, key: impl AsRef<[u8]>);
  fn commit(self);
}

pub trait Db: 'static + Send + Sync + Clone + Debug {
  type Transaction: DbTxn;
  fn key(db_dst: &'static [u8], item_dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> {
    let db_len = u8::try_from(db_dst.len()).unwrap();
    let dst_len = u8::try_from(item_dst.len()).unwrap();
    [[db_len].as_ref(), db_dst, [dst_len].as_ref(), item_dst, key.as_ref()].concat().to_vec()
  }
  fn txn(&mut self) -> Self::Transaction;
  fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>>;
}

// TODO: Replace this with RocksDB
#[derive(Clone, Debug)]
pub struct MemDb(Arc<RwLock<HashMap<Vec<u8>, Vec<u8>>>>);
impl MemDb {
  #[allow(clippy::new_without_default)]
  pub fn new() -> MemDb {
    MemDb(Arc::new(RwLock::new(HashMap::new())))
  }
}

impl DbTxn for MemDb {
  fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
    self.0.write().unwrap().insert(key.as_ref().to_vec(), value.as_ref().to_vec());
  }
  fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
    self.0.read().unwrap().get(key.as_ref()).cloned()
  }
  fn del(&mut self, key: impl AsRef<[u8]>) {
    self.0.write().unwrap().remove(key.as_ref());
  }
  fn commit(self) {}
}

impl Db for MemDb {
  type Transaction = MemDb;
  fn txn(&mut self) -> MemDb {
    Self(self.0.clone())
  }
  fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
    self.0.read().unwrap().get(key.as_ref()).cloned()
  }
}
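
// A small sketch (not from the original diff) of the length-prefixed DST layout Db::key
// produces, plus MemDb's write-through "transaction" semantics: since MemDb::txn just clones
// the Arc, puts are visible immediately and commit is a no-op.
#[cfg(test)]
#[test]
fn mem_db_sketch() {
  let key = MemDb::key(b"MAIN", b"plan", [0xaa_u8]);
  let mut expected = vec![4u8];
  expected.extend(b"MAIN");
  expected.push(4);
  expected.extend(b"plan");
  expected.push(0xaa);
  assert_eq!(key, expected);

  let mut db = MemDb::new();
  let mut txn = db.txn();
  txn.put(&key, b"value");
  txn.commit();
  assert_eq!(db.get(&key), Some(b"value".to_vec()));
}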

#[derive(Debug)]
pub struct MainDb<C: Coin, D: Db>(D, PhantomData<C>);
impl<C: Coin, D: Db> MainDb<C, D> {
  pub fn new(db: D) -> Self {
    Self(db, PhantomData)
  }

  fn main_key(dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> {
    D::key(b"MAIN", dst, key)
  }

  fn plan_key(id: &[u8]) -> Vec<u8> {
    Self::main_key(b"plan", id)
  }
  fn signing_key(key: &[u8]) -> Vec<u8> {
    Self::main_key(b"signing", key)
  }
  pub fn save_signing(&mut self, key: &[u8], block_number: u64, time: u64, plan: &Plan<C>) {
    let id = plan.id();
    // Creating a TXN here is arguably an anti-pattern, yet nothing here expects atomicity
    let mut txn = self.0.txn();

    {
      let mut signing = txn.get(Self::signing_key(key)).unwrap_or(vec![]);

      // If we've already noted we're signing this, return
      assert_eq!(signing.len() % 32, 0);
      for i in 0 .. (signing.len() / 32) {
        if signing[(i * 32) .. ((i + 1) * 32)] == id {
          return;
        }
      }

      signing.extend(&id);
      // Save the full updated list, not just this ID, or prior IDs would be dropped
      txn.put(Self::signing_key(key), signing);
    }

    {
      let mut buf = block_number.to_le_bytes().to_vec();
      buf.extend(&time.to_le_bytes());
      plan.write(&mut buf).unwrap();
      txn.put(Self::plan_key(&id), &buf);
    }

    txn.commit();
  }

  pub fn signing(&self, key: &[u8]) -> Vec<(u64, u64, Plan<C>)> {
    let signing = self.0.get(Self::signing_key(key)).unwrap_or(vec![]);
    let mut res = vec![];

    assert_eq!(signing.len() % 32, 0);
    for i in 0 .. (signing.len() / 32) {
      let id = &signing[(i * 32) .. ((i + 1) * 32)];
      let buf = self.0.get(Self::plan_key(id)).unwrap();

      let block_number = u64::from_le_bytes(buf[.. 8].try_into().unwrap());
      let time = u64::from_le_bytes(buf[8 .. 16].try_into().unwrap());
      let plan = Plan::<C>::read::<&[u8]>(&mut &buf[16 ..]).unwrap();
      assert_eq!(id, &plan.id());
      res.push((block_number, time, plan));
    }

    res
  }

  pub fn finish_signing(&mut self, key: &[u8], id: [u8; 32]) {
    let mut signing = self.0.get(Self::signing_key(key)).unwrap_or(vec![]);
    assert_eq!(signing.len() % 32, 0);

    let mut found = false;
    for i in 0 .. (signing.len() / 32) {
      let start = i * 32;
      // Each entry is a 32-byte ID, so the window ends 32 bytes past its start
      let end = start + 32;
      if signing[start .. end] == id {
        found = true;
        signing = [&signing[.. start], &signing[end ..]].concat().to_vec();
        break;
      }
    }

    if !found {
      log::warn!("told to finish signing {} yet wasn't actively signing it", hex::encode(id));
    }

    let mut txn = self.0.txn();
    txn.put(Self::signing_key(key), signing);
    txn.commit();
  }
}
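
// A standalone sketch (not from the original diff) of the signing-list format above: 32-byte
// plan IDs concatenated into one value, appended on save and spliced out by window on finish.
#[cfg(test)]
#[test]
fn signing_list_format() {
  let (a, b) = ([1u8; 32], [2u8; 32]);
  let mut signing = vec![];
  signing.extend(&a);
  signing.extend(&b);
  assert_eq!(signing.len() % 32, 0);

  // Remove b (the entry at index 1) by splicing out its 32-byte window
  let (start, end) = (1 * 32, (1 * 32) + 32);
  assert_eq!(signing[start .. end], b);
  let signing = [&signing[.. start], &signing[end ..]].concat();
  assert_eq!(signing, a.to_vec());
}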
processor/src/key_gen.rs (new file, 308 lines)
@@ -0,0 +1,308 @@
use core::marker::PhantomData;
use std::collections::HashMap;

use zeroize::Zeroizing;

use rand_core::SeedableRng;
use rand_chacha::ChaCha20Rng;

use transcript::{Transcript, RecommendedTranscript};
use group::GroupEncoding;
use frost::{
  curve::Ciphersuite,
  dkg::{Participant, ThresholdParams, ThresholdCore, ThresholdKeys, encryption::*, frost::*},
};

use log::info;

use serai_client::validator_sets::primitives::ValidatorSetInstance;
use messages::key_gen::*;

use crate::{DbTxn, Db, coins::Coin};

#[derive(Debug)]
pub enum KeyGenEvent<C: Ciphersuite> {
  KeyConfirmed { activation_number: usize, keys: ThresholdKeys<C> },
  ProcessorMessage(ProcessorMessage),
}

#[derive(Clone, Debug)]
struct KeyGenDb<C: Coin, D: Db>(D, PhantomData<C>);
impl<C: Coin, D: Db> KeyGenDb<C, D> {
  fn key_gen_key(dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> {
    D::key(b"KEY_GEN", dst, key)
  }

  fn params_key(set: &ValidatorSetInstance) -> Vec<u8> {
    Self::key_gen_key(b"params", bincode::serialize(set).unwrap())
  }
  fn save_params(
    &mut self,
    txn: &mut D::Transaction,
    set: &ValidatorSetInstance,
    params: &ThresholdParams,
  ) {
    txn.put(Self::params_key(set), bincode::serialize(params).unwrap());
  }
  fn params(&self, set: &ValidatorSetInstance) -> ThresholdParams {
    // Directly unwraps the .get() as this will only be called after being set
    bincode::deserialize(&self.0.get(Self::params_key(set)).unwrap()).unwrap()
  }

  // Not scoped to the set since that'd have latter attempts overwrite former
  // A former attempt may become the finalized attempt, even if it doesn't in a timely manner
  // Overwriting its commitments would be accordingly poor
  fn commitments_key(id: &KeyGenId) -> Vec<u8> {
    Self::key_gen_key(b"commitments", bincode::serialize(id).unwrap())
  }
  fn save_commitments(
    &mut self,
    txn: &mut D::Transaction,
    id: &KeyGenId,
    commitments: &HashMap<Participant, Vec<u8>>,
  ) {
    txn.put(Self::commitments_key(id), bincode::serialize(commitments).unwrap());
  }
  fn commitments(
    &self,
    id: &KeyGenId,
    params: ThresholdParams,
  ) -> HashMap<Participant, EncryptionKeyMessage<C::Curve, Commitments<C::Curve>>> {
    bincode::deserialize::<HashMap<Participant, Vec<u8>>>(
      &self.0.get(Self::commitments_key(id)).unwrap(),
    )
    .unwrap()
    .drain()
    .map(|(i, bytes)| {
      (
        i,
        EncryptionKeyMessage::<C::Curve, Commitments<C::Curve>>::read::<&[u8]>(
          &mut bytes.as_ref(),
          params,
        )
        .unwrap(),
      )
    })
    .collect()
  }

  fn generated_keys_key(id: &KeyGenId) -> Vec<u8> {
    Self::key_gen_key(b"generated_keys", bincode::serialize(id).unwrap())
  }
  fn save_keys(&mut self, txn: &mut D::Transaction, id: &KeyGenId, keys: &ThresholdCore<C::Curve>) {
    txn.put(Self::generated_keys_key(id), keys.serialize());
  }

  fn keys_key(key: &<C::Curve as Ciphersuite>::G) -> Vec<u8> {
    Self::key_gen_key(b"keys", key.to_bytes())
  }
  fn confirm_keys(&mut self, txn: &mut D::Transaction, id: &KeyGenId) -> ThresholdKeys<C::Curve> {
    let keys_vec = self.0.get(Self::generated_keys_key(id)).unwrap();
    let mut keys =
      ThresholdKeys::new(ThresholdCore::read::<&[u8]>(&mut keys_vec.as_ref()).unwrap());
    C::tweak_keys(&mut keys);
    txn.put(Self::keys_key(&keys.group_key()), keys_vec);
    keys
  }
  fn keys(&self, key: &<C::Curve as Ciphersuite>::G) -> ThresholdKeys<C::Curve> {
    let mut keys = ThresholdKeys::new(
      ThresholdCore::read::<&[u8]>(&mut self.0.get(Self::keys_key(key)).unwrap().as_ref()).unwrap(),
    );
    C::tweak_keys(&mut keys);
    keys
  }
}

/// Coded so if the processor spontaneously reboots, one of two paths occur:
/// 1) It didn't send its response, so the attempt will be aborted
/// 2) It did send its response, and has locally saved enough data to continue
#[derive(Debug)]
pub struct KeyGen<C: Coin, D: Db> {
  db: KeyGenDb<C, D>,
  entropy: Zeroizing<[u8; 32]>,

  active_commit: HashMap<ValidatorSetInstance, SecretShareMachine<C::Curve>>,
  active_share: HashMap<ValidatorSetInstance, KeyMachine<C::Curve>>,
}

impl<C: Coin, D: Db> KeyGen<C, D> {
  #[allow(clippy::new_ret_no_self)]
  pub fn new(db: D, entropy: Zeroizing<[u8; 32]>) -> KeyGen<C, D> {
    KeyGen {
      db: KeyGenDb(db, PhantomData::<C>),
      entropy,

      active_commit: HashMap::new(),
      active_share: HashMap::new(),
    }
  }

  pub fn keys(&self, key: &<C::Curve as Ciphersuite>::G) -> ThresholdKeys<C::Curve> {
    self.db.keys(key)
  }

  pub async fn handle(&mut self, msg: CoordinatorMessage) -> KeyGenEvent<C::Curve> {
    let context = |id: &KeyGenId| {
      // TODO2: Also embed the chain ID/genesis block
      format!(
        "Serai Key Gen. Session: {}, Index: {}, Attempt: {}",
        id.set.session.0, id.set.index.0, id.attempt
      )
    };

    let rng = |label, id: KeyGenId| {
      let mut transcript = RecommendedTranscript::new(label);
      transcript.append_message(b"entropy", self.entropy.as_ref());
      transcript.append_message(b"context", context(&id));
      ChaCha20Rng::from_seed(transcript.rng_seed(b"rng"))
    };
    let coefficients_rng = |id| rng(b"Key Gen Coefficients", id);
    let secret_shares_rng = |id| rng(b"Key Gen Secret Shares", id);
    let share_rng = |id| rng(b"Key Gen Share", id);

    let key_gen_machine = |id, params| {
      KeyGenMachine::new(params, context(&id)).generate_coefficients(&mut coefficients_rng(id))
    };

    match msg {
      CoordinatorMessage::GenerateKey { id, params } => {
        info!("Generating new key. ID: {:?} Params: {:?}", id, params);

        // Remove old attempts
        if self.active_commit.remove(&id.set).is_none() &&
          self.active_share.remove(&id.set).is_none()
        {
          // If we haven't handled this set before, save the params
          // This may overwrite previously written params if we rebooted, yet that isn't a
          // concern
          let mut txn = self.db.0.txn();
          self.db.save_params(&mut txn, &id.set, &params);
          txn.commit();
        }

        let (machine, commitments) = key_gen_machine(id, params);
        self.active_commit.insert(id.set, machine);

        KeyGenEvent::ProcessorMessage(ProcessorMessage::Commitments {
          id,
          commitments: commitments.serialize(),
        })
      }

      CoordinatorMessage::Commitments { id, commitments } => {
        info!("Received commitments for {:?}", id);

        if self.active_share.contains_key(&id.set) {
          // We should've been told of a new attempt before receiving commitments again
          // The coordinator is either missing messages or repeating itself
          // Either way, it's faulty
          panic!("commitments when already handled commitments");
        }

        let params = self.db.params(&id.set);

        // Parse the commitments
        let parsed = match commitments
          .iter()
          .map(|(i, commitments)| {
            EncryptionKeyMessage::<C::Curve, Commitments<C::Curve>>::read::<&[u8]>(
              &mut commitments.as_ref(),
              params,
            )
            .map(|commitments| (*i, commitments))
          })
          .collect()
        {
          Ok(commitments) => commitments,
          Err(e) => todo!("malicious signer: {:?}", e),
        };

        // Get the machine, rebuilding it if we don't have it
        // We won't if the processor rebooted
        // This *may* be inconsistent if we receive a KeyGen for attempt x, then commitments for
        // attempt y
        // The coordinator is trusted to be proper in this regard
        let machine =
          self.active_commit.remove(&id.set).unwrap_or_else(|| key_gen_machine(id, params).0);

        let (machine, mut shares) =
          match machine.generate_secret_shares(&mut secret_shares_rng(id), parsed) {
            Ok(res) => res,
            Err(e) => todo!("malicious signer: {:?}", e),
          };
        self.active_share.insert(id.set, machine);

        let mut txn = self.db.0.txn();
        self.db.save_commitments(&mut txn, &id, &commitments);
        txn.commit();

        KeyGenEvent::ProcessorMessage(ProcessorMessage::Shares {
          id,
          shares: shares.drain().map(|(i, share)| (i, share.serialize())).collect(),
        })
      }

      CoordinatorMessage::Shares { id, mut shares } => {
        info!("Received shares for {:?}", id);

        let params = self.db.params(&id.set);

        // Parse the shares
        let shares = match shares
          .drain()
          .map(|(i, share)| {
            EncryptedMessage::<C::Curve, SecretShare<<C::Curve as Ciphersuite>::F>>::read::<&[u8]>(
              &mut share.as_ref(),
              params,
            )
            .map(|share| (i, share))
          })
          .collect()
        {
          Ok(shares) => shares,
          Err(e) => todo!("malicious signer: {:?}", e),
        };

        // Same commentary on inconsistency as above exists
        let machine = self.active_share.remove(&id.set).unwrap_or_else(|| {
          key_gen_machine(id, params)
            .0
            .generate_secret_shares(&mut secret_shares_rng(id), self.db.commitments(&id, params))
            .unwrap()
            .0
        });

        // TODO2: Handle the blame machine properly
        let keys = (match machine.calculate_share(&mut share_rng(id), shares) {
          Ok(res) => res,
          Err(e) => todo!("malicious signer: {:?}", e),
        })
        .complete();

        let mut txn = self.db.0.txn();
        self.db.save_keys(&mut txn, &id, &keys);
        txn.commit();

        let mut keys = ThresholdKeys::new(keys);
        C::tweak_keys(&mut keys);
        KeyGenEvent::ProcessorMessage(ProcessorMessage::GeneratedKey {
          id,
          key: keys.group_key().to_bytes().as_ref().to_vec(),
        })
      }

      CoordinatorMessage::ConfirmKey { context, id } => {
        let mut txn = self.db.0.txn();
        let keys = self.db.confirm_keys(&mut txn, &id);
        txn.commit();

        info!("Confirmed key {} from {:?}", hex::encode(keys.group_key().to_bytes()), id);

        KeyGenEvent::KeyConfirmed {
          activation_number: context.coin_latest_block_number.try_into().unwrap(),
          keys,
        }
      }
    }
  }
}
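
// Hedged sketch (not from the original diff): the reboot path above, which rebuilds a machine
// via key_gen_machine and re-derives its secret shares, only works because these
// transcript-seeded RNGs are deterministic in (entropy, context, label). A minimal check of
// that property, using a dummy entropy value:
#[cfg(test)]
#[test]
fn deterministic_key_gen_rng() {
  use rand_core::{RngCore, SeedableRng};
  use rand_chacha::ChaCha20Rng;
  use transcript::{Transcript, RecommendedTranscript};

  let rng = |label: &'static [u8]| {
    let mut transcript = RecommendedTranscript::new(label);
    transcript.append_message(b"entropy", [0xff_u8; 32]);
    transcript.append_message(b"context", "Serai Key Gen. Session: 0, Index: 0, Attempt: 0");
    ChaCha20Rng::from_seed(transcript.rng_seed(b"rng"))
  };

  // The same label yields the same stream, while distinct labels diverge
  assert_eq!(rng(b"Key Gen Coefficients").next_u64(), rng(b"Key Gen Coefficients").next_u64());
  assert!(rng(b"Key Gen Coefficients").next_u64() != rng(b"Key Gen Share").next_u64());
}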

@@ -1,43 +0,0 @@
use std::{marker::Send, collections::HashMap};

use async_trait::async_trait;
use thiserror::Error;

use frost::{curve::Ciphersuite, Participant, FrostError};

mod coin;
use coin::{CoinError, Coin};

mod wallet;

#[cfg(test)]
mod tests;

#[derive(Clone, Error, Debug)]
pub enum NetworkError {}

#[async_trait]
pub trait Network: Send {
  async fn round(&mut self, data: Vec<u8>) -> Result<HashMap<Participant, Vec<u8>>, NetworkError>;
}

#[derive(Clone, Error, Debug)]
pub enum SignError {
  #[error("FROST had an error {0}")]
  FrostError(FrostError),
  #[error("coin had an error {0}")]
  CoinError(CoinError),
  #[error("network had an error {0}")]
  NetworkError(NetworkError),
}

// Generate a static additional key for a given chain in a globally consistent manner
// Doesn't consider the current group key to increase the simplicity of verifying Serai's status
// Takes an index, k, to support protocols which use multiple secondary keys
// Presumably a view key
pub(crate) fn additional_key<C: Coin>(k: u64) -> <C::Curve as Ciphersuite>::F {
  <C::Curve as Ciphersuite>::hash_to_F(
    b"Serai DEX Additional Key",
    &[C::ID, &k.to_le_bytes()].concat(),
  )
}
processor/src/main.rs (new file, 458 lines)
@@ -0,0 +1,458 @@
use std::{
  env,
  pin::Pin,
  task::{Poll, Context},
  future::Future,
  time::{Duration, SystemTime},
  collections::{VecDeque, HashMap},
};

use zeroize::{Zeroize, Zeroizing};

use transcript::{Transcript, RecommendedTranscript};
use group::GroupEncoding;
use frost::curve::Ciphersuite;

use log::{info, warn, error};
use tokio::time::sleep;

use scale::Decode;

use serai_client::{
  primitives::{Amount, WithAmount},
  tokens::primitives::OutInstruction,
  in_instructions::primitives::{Shorthand, RefundableInInstruction},
};

use messages::{SubstrateContext, sign, substrate, CoordinatorMessage, ProcessorMessage};

mod plan;
pub use plan::*;

mod db;
pub use db::*;

mod coordinator;
pub use coordinator::*;

mod coins;
use coins::{OutputType, Output, PostFeeBranch, Block, Coin};
#[cfg(feature = "bitcoin")]
use coins::Bitcoin;
#[cfg(feature = "monero")]
use coins::Monero;

mod key_gen;
use key_gen::{KeyGenEvent, KeyGen};

mod signer;
use signer::{SignerEvent, Signer, SignerHandle};

mod scanner;
use scanner::{ScannerEvent, Scanner, ScannerHandle};

mod scheduler;
use scheduler::Scheduler;

#[cfg(test)]
mod tests;

// Generate a static additional key for a given chain in a globally consistent manner
// Doesn't consider the current group key to increase the simplicity of verifying Serai's status
// Takes an index, k, to support protocols which use multiple secondary keys
// Presumably a view key
pub(crate) fn additional_key<C: Coin>(k: u64) -> <C::Curve as Ciphersuite>::F {
  <C::Curve as Ciphersuite>::hash_to_F(
    b"Serai DEX Additional Key",
    &[C::ID.as_bytes(), &k.to_le_bytes()].concat(),
  )
}
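
// For reference (not from the original diff): the Monero integration consumes this as its view
// key via additional_key::<Monero>(0) in coins/monero.rs, so every processor derives the same
// ViewPair for a given group key without any communication. A minimal sketch of the "globally
// consistent" property this relies on:
#[cfg(all(test, feature = "monero"))]
#[test]
fn additional_key_is_static() {
  // Any two processors derive the same value for the same index, while distinct indexes yield
  // distinct keys
  assert_eq!(additional_key::<Monero>(0), additional_key::<Monero>(0));
  assert!(additional_key::<Monero>(0) != additional_key::<Monero>(1));
}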

struct SignerMessageFuture<'a, C: Coin, D: Db>(&'a mut HashMap<Vec<u8>, SignerHandle<C, D>>);
impl<'a, C: Coin, D: Db> Future for SignerMessageFuture<'a, C, D> {
  type Output = (Vec<u8>, SignerEvent<C>);
  fn poll(mut self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Self::Output> {
    for (key, signer) in self.0.iter_mut() {
      match signer.events.poll_recv(ctx) {
        Poll::Ready(event) => return Poll::Ready((key.clone(), event.unwrap())),
        Poll::Pending => {}
      }
    }
    Poll::Pending
  }
}

async fn get_fee<C: Coin>(coin: &C, block_number: usize) -> C::Fee {
  loop {
    // TODO2: Use a fee representative of several blocks
    match coin.get_block(block_number).await {
      Ok(block) => {
        return block.median_fee();
      }
      Err(e) => {
        error!("couldn't get block {}: {e}", block_number);
        // Since this block is considered finalized, we shouldn't be unable to get it unless the
        // node is offline, hence the long sleep
        sleep(Duration::from_secs(60)).await;
      }
    }
  }
}

async fn prepare_send<C: Coin, D: Db>(
  coin: &C,
  signer: &SignerHandle<C, D>,
  block_number: usize,
  fee: C::Fee,
  plan: Plan<C>,
) -> (Option<(C::SignableTransaction, C::Eventuality)>, Vec<PostFeeBranch>) {
  let keys = signer.keys().await;
  loop {
    match coin.prepare_send(keys.clone(), block_number, plan.clone(), fee).await {
      Ok(prepared) => {
        return prepared;
      }
      Err(e) => {
        error!("couldn't prepare a send for plan {}: {e}", hex::encode(plan.id()));
        // The processor is either trying to create an invalid TX (fatal) or the node went
        // offline
        // The former requires a patch, the latter is a connection issue
        // If the latter, this is an appropriate sleep. If the former, we should panic, yet
        // this won't flood the console ad infinitum
        sleep(Duration::from_secs(60)).await;
      }
    }
  }
}

async fn sign_plans<C: Coin, D: Db>(
  db: &mut MainDb<C, D>,
  coin: &C,
  schedulers: &mut HashMap<Vec<u8>, Scheduler<C>>,
  signers: &HashMap<Vec<u8>, SignerHandle<C, D>>,
  context: SubstrateContext,
  plans: Vec<Plan<C>>,
) {
  let mut plans = VecDeque::from(plans);
  let start = SystemTime::UNIX_EPOCH.checked_add(Duration::from_secs(context.time)).unwrap();
  let block_number = context.coin_latest_block_number.try_into().unwrap();

  let fee = get_fee(coin, block_number).await;

  while let Some(plan) = plans.pop_front() {
    let id = plan.id();
    info!("preparing plan {}: {:?}", hex::encode(id), plan);

    let key = plan.key.to_bytes();
    db.save_signing(key.as_ref(), context.coin_latest_block_number, context.time, &plan);
    let (tx, branches) = prepare_send(coin, &signers[key.as_ref()], block_number, fee, plan).await;

    // TODO: If we reboot mid-sign_plans, for a DB-backed scheduler, these may be partially
    // executed
    // Global TXN object for the entire coordinator message?
    // Re-ser the scheduler after every sign_plans call?
    // To clarify, the scheduler is distinct as it mutates itself on new data.
    // The key_gen/scanner/signer are designed to be deterministic to new data, irrelevant to prior
    // states.
    for branch in branches {
      schedulers
        .get_mut(key.as_ref())
        .expect("didn't have a scheduler for a key we have a plan for")
        .created_output(branch.expected, branch.actual);
    }

    if let Some((tx, eventuality)) = tx {
      // TODO: Handle detection of already signed TXs (either on-chain or notified by a peer)
      signers[key.as_ref()].sign_transaction(id, start, tx, eventuality).await;
    }
  }
}

async fn run<C: Coin, D: Db, Co: Coordinator>(raw_db: D, coin: C, mut coordinator: Co) {
  let mut entropy_transcript = {
    let entropy =
      Zeroizing::new(env::var("ENTROPY").expect("entropy wasn't provided as an env var"));
    if entropy.len() != 64 {
      panic!("entropy isn't the right length");
    }
    let bytes = Zeroizing::new(hex::decode(entropy).expect("entropy wasn't hex-formatted"));
    let mut entropy = Zeroizing::new([0; 32]);
    entropy.as_mut().copy_from_slice(bytes.as_ref());

    let mut transcript = RecommendedTranscript::new(b"Serai Processor Entropy");
    transcript.append_message(b"entropy", entropy.as_ref());
    transcript
  };

  let mut entropy = |label| {
    let mut challenge = entropy_transcript.challenge(label);
    let mut res = Zeroizing::new([0; 32]);
    res.as_mut().copy_from_slice(&challenge[.. 32]);
    challenge.zeroize();
    res
  };

  // We don't need to re-issue GenerateKey orders because the coordinator is expected to
  // schedule/notify us of new attempts
  let mut key_gen = KeyGen::<C, _>::new(raw_db.clone(), entropy(b"key-gen_entropy"));
  // The scanner has no long-standing orders to re-issue
  let (mut scanner, active_keys) = Scanner::new(coin.clone(), raw_db.clone());

  let mut schedulers = HashMap::<Vec<u8>, Scheduler<C>>::new();
  let mut signers = HashMap::new();

  let mut main_db = MainDb::new(raw_db.clone());

  for key in &active_keys {
    // TODO: Load existing schedulers

    let signer = Signer::new(raw_db.clone(), coin.clone(), key_gen.keys(key));

    // Load any TXs being actively signed
    let key = key.to_bytes();
    for (block_number, start, plan) in main_db.signing(key.as_ref()) {
      let block_number = block_number.try_into().unwrap();
      let start = SystemTime::UNIX_EPOCH.checked_add(Duration::from_secs(start)).unwrap();

      let fee = get_fee(&coin, block_number).await;

      let id = plan.id();
      info!("reloading plan {}: {:?}", hex::encode(id), plan);

      let (Some((tx, eventuality)), _) =
        prepare_send(&coin, &signer, block_number, fee, plan).await else {
          panic!("previously created transaction is no longer being created")
        };
      signer.sign_transaction(id, start, tx, eventuality).await;
    }

    signers.insert(key.as_ref().to_vec(), signer);
  }

  // We can't load this from the DB as we can't guarantee atomic increments with the ack function
  let mut last_coordinator_msg = None;

  loop {
    tokio::select! {
      // This blocks the entire processor until it finishes handling this message
      // KeyGen specifically may take a notable amount of processing time
      // While that shouldn't be an issue in practice, as after processing an attempt it'll handle
      // the other messages in the queue, it may be beneficial to parallelize these
      // They could likely be parallelized by type (KeyGen, Sign, Substrate) without issue
      msg = coordinator.recv() => {
        assert_eq!(msg.id, (last_coordinator_msg.unwrap_or(msg.id - 1) + 1));
        last_coordinator_msg = Some(msg.id);

        // If this message expects a higher block number than we have, halt until synced
        async fn wait<C: Coin, D: Db>(
          coin: &C,
          scanner: &ScannerHandle<C, D>,
          context: &SubstrateContext
        ) {
          let needed = usize::try_from(context.coin_latest_block_number).unwrap();

          loop {
            let Ok(actual) = coin.get_latest_block_number().await else {
              error!("couldn't get the latest block number");
              // Sleep for a minute as node errors should be incredibly uncommon yet take multiple
              // seconds to resolve
              sleep(Duration::from_secs(60)).await;
              continue;
            };

            // Check our daemon has this block
            // CONFIRMATIONS - 1 since any block's TXs have one confirmation (the block itself)
            let confirmed = actual.saturating_sub(C::CONFIRMATIONS - 1);
            if needed > confirmed {
              // This may occur within some natural latency window
              warn!(
                "node is desynced. need block {}, have {}",
                // Print the block needed for the needed block to be confirmed
                needed + (C::CONFIRMATIONS - 1),
                actual,
              );
              // Sleep for one second per needed block
              // If the node is disconnected from the network, this will be faster than it should
              // be, yet presumably it just needs a moment to sync up
              sleep(Duration::from_secs((needed - confirmed).try_into().unwrap())).await;
            }

            // Check our scanner has scanned it
            // This check does void the need for the last one, yet it provides a bit better
            // debugging
            let ram_scanned = scanner.ram_scanned().await;
            if ram_scanned < needed {
              warn!("scanner is behind. need block {}, scanned up to {}", needed, ram_scanned);
              sleep(Duration::from_secs((needed - ram_scanned).try_into().unwrap())).await;
            }

            // TODO: Sanity check we got an AckBlock (or this is the AckBlock) for the block in
            // question

            /*
            let synced = |context: &SubstrateContext, key| -> Result<(), ()> {
              // Check that we've synced this block and can actually operate on it ourselves
              let latest = scanner.latest_scanned(key);
              if usize::try_from(context.coin_latest_block_number).unwrap() < latest {
                log::warn!(
                  "coin node disconnected/desynced from rest of the network. \
                  our block: {latest:?}, network's acknowledged: {}",
                  context.coin_latest_block_number
                );
                Err(())?;
              }
              Ok(())
            };
            */

            break;
          }
        }

        match &msg.msg {
          CoordinatorMessage::KeyGen(_) => {},
          CoordinatorMessage::Sign(_) => {},
          CoordinatorMessage::Substrate(msg) => {
            match msg {
              substrate::CoordinatorMessage::BlockAcknowledged { context, .. } => {
                wait(&coin, &scanner, context).await;
              },
              substrate::CoordinatorMessage::Burns { context, .. } => {
                wait(&coin, &scanner, context).await;
              },
            }
          },
        }

        match msg.msg.clone() {
          CoordinatorMessage::KeyGen(msg) => {
            match key_gen.handle(msg).await {
              KeyGenEvent::KeyConfirmed { activation_number, keys } => {
                let key = keys.group_key();
                scanner.rotate_key(activation_number, key).await;
                schedulers.insert(key.to_bytes().as_ref().to_vec(), Scheduler::<C>::new(key));
                signers.insert(
                  keys.group_key().to_bytes().as_ref().to_vec(),
                  Signer::new(raw_db.clone(), coin.clone(), keys)
                );
              },

              // TODO: This may be fired multiple times. What's our plan for that?
              KeyGenEvent::ProcessorMessage(msg) => {
                coordinator.send(ProcessorMessage::KeyGen(msg)).await;
              },
            }
          }

          CoordinatorMessage::Sign(msg) => {
            signers[msg.key()].handle(msg).await;
          }

          CoordinatorMessage::Substrate(msg) => {
            match msg {
              substrate::CoordinatorMessage::BlockAcknowledged { context, key: key_vec, block } => {
                let key =
                  <C::Curve as Ciphersuite>::read_G::<&[u8]>(&mut key_vec.as_ref()).unwrap();
                let mut block_id = <C::Block as Block<C>>::Id::default();
                block_id.as_mut().copy_from_slice(&block);

                let plans = schedulers
                  .get_mut(&key_vec)
                  .expect("key we don't have a scheduler for acknowledged a block")
                  .add_outputs(scanner.ack_block(key, block_id).await);
|
||||
sign_plans(&mut main_db, &coin, &mut schedulers, &signers, context, plans).await;
|
||||
}
|
||||
|
||||
substrate::CoordinatorMessage::Burns { context, burns } => {
|
||||
// TODO2: Rewrite rotation documentation
|
||||
let schedule_key = active_keys.last().expect("burn event despite no keys");
|
||||
let scheduler = schedulers.get_mut(schedule_key.to_bytes().as_ref()).unwrap();
|
||||
|
||||
let mut payments = vec![];
|
||||
for out in burns.clone() {
|
||||
let WithAmount { data: OutInstruction { address, data }, amount } = out;
|
||||
if let Ok(address) = C::Address::try_from(address.consume()) {
|
||||
payments.push(Payment {
|
||||
address,
|
||||
data: data.map(|data| data.consume()),
|
||||
amount: amount.0,
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
let plans = scheduler.schedule(payments);
|
||||
sign_plans(&mut main_db, &coin, &mut schedulers, &signers, context, plans).await;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
coordinator.ack(msg).await;
|
||||
},
|
||||
|
||||
msg = scanner.events.recv() => {
|
||||
// These need to be sent to the coordinator which needs to check they aren't replayed
|
||||
// TODO
|
||||
match msg.unwrap() {
|
||||
ScannerEvent::Outputs(key, block, outputs) => {
|
||||
coordinator.send(ProcessorMessage::Substrate(substrate::ProcessorMessage::Update {
|
||||
key: key.to_bytes().as_ref().to_vec(),
|
||||
block: block.as_ref().to_vec(),
|
||||
instructions: outputs.iter().filter_map(|output| {
|
||||
// If these aren't externally received funds, don't handle it as an instruction
|
||||
if output.kind() != OutputType::External {
|
||||
return None;
|
||||
}
|
||||
|
||||
let shorthand = Shorthand::decode(&mut output.data()).ok()?;
|
||||
let instruction = RefundableInInstruction::try_from(shorthand).ok()?;
|
||||
// TODO2: Set instruction.origin if not set (and handle refunds in general)
|
||||
Some(WithAmount { data: instruction.instruction, amount: Amount(output.amount()) })
|
||||
}).collect(),
|
||||
})).await;
|
||||
},
|
||||
}
|
||||
},
|
||||
|
||||
(key, msg) = SignerMessageFuture(&mut signers) => {
|
||||
match msg {
|
||||
SignerEvent::SignedTransaction { id, tx } => {
|
||||
main_db.finish_signing(&key, id);
|
||||
coordinator
|
||||
.send(ProcessorMessage::Sign(sign::ProcessorMessage::Completed {
|
||||
key,
|
||||
id,
|
||||
tx: tx.as_ref().to_vec()
|
||||
}))
|
||||
.await;
|
||||
|
||||
// TODO
|
||||
// 1) We need to stop signing whenever a peer informs us or the chain has an
|
||||
// eventuality
|
||||
// 2) If a peer informed us of an eventuality without an outbound payment, stop
|
||||
// scanning the chain for it (or at least ack it's solely for sanity purposes?)
|
||||
// 3) When the chain has an eventuality, if it had an outbound payment, report it up to
|
||||
// Substrate for logging purposes
|
||||
},
|
||||
SignerEvent::ProcessorMessage(msg) => {
|
||||
coordinator.send(ProcessorMessage::Sign(msg)).await;
|
||||
},
|
||||
}
|
||||
},
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[tokio::main]
|
||||
async fn main() {
|
||||
let db = MemDb::new(); // TODO
|
||||
let coordinator = MemCoordinator::new(); // TODO
|
||||
let url = env::var("COIN_RPC").expect("coin rpc wasn't specified as an env var");
|
||||
match env::var("COIN").expect("coin wasn't specified as an env var").as_str() {
|
||||
#[cfg(feature = "bitcoin")]
|
||||
"bitcoin" => run(db, Bitcoin::new(url), coordinator).await,
|
||||
#[cfg(feature = "monero")]
|
||||
"monero" => run(db, Monero::new(url), coordinator).await,
|
||||
_ => panic!("unrecognized coin"),
|
||||
}
|
||||
}
|
||||
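The assert at the top of the coordinator arm enforces strictly sequential message IDs: the first message after boot is accepted as-is, and every later one must be exactly the previous ID plus one. A minimal sketch of that rule, using a hypothetical `expect_next` helper (not part of the processor) that mirrors the `last_coordinator_msg` handling above:

fn expect_next(last: Option<u64>, received: u64) -> Option<u64> {
  // On boot there's no prior ID, so the first message is accepted as-is;
  // afterwards, only last + 1 passes (the same expression the run loop asserts)
  assert_eq!(received, last.unwrap_or(received - 1) + 1);
  Some(received)
}

fn main() {
  let mut last = None;
  for id in [5, 6, 7] {
    // A gap or replay (e.g. 5 then 7, or 5 then 5) would panic here
    last = expect_next(last, id);
  }
}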
processor/src/plan.rs (new file, 153 lines)
@@ -0,0 +1,153 @@
use std::io;

use transcript::{Transcript, RecommendedTranscript};
use group::GroupEncoding;
use frost::curve::Ciphersuite;

use crate::coins::{Output, Coin};

#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Payment<C: Coin> {
  pub address: C::Address,
  pub data: Option<Vec<u8>>,
  pub amount: u64,
}

impl<C: Coin> Payment<C> {
  pub fn transcript<T: Transcript>(&self, transcript: &mut T) {
    transcript.domain_separate(b"payment");
    transcript.append_message(b"address", self.address.to_string().as_bytes());
    if let Some(data) = self.data.as_ref() {
      transcript.append_message(b"data", data);
    }
    transcript.append_message(b"amount", self.amount.to_le_bytes());
  }

  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    let address: Vec<u8> = self
      .address
      .clone()
      .try_into()
      .map_err(|_| io::Error::new(io::ErrorKind::Other, "address couldn't be serialized"))?;
    writer.write_all(&u32::try_from(address.len()).unwrap().to_le_bytes())?;
    writer.write_all(&address)?;

    writer.write_all(&[u8::from(self.data.is_some())])?;
    if let Some(data) = &self.data {
      writer.write_all(&u32::try_from(data.len()).unwrap().to_le_bytes())?;
      writer.write_all(data)?;
    }

    writer.write_all(&self.amount.to_le_bytes())
  }

  pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let mut buf = [0; 4];
    reader.read_exact(&mut buf)?;
    let mut address = vec![0; usize::try_from(u32::from_le_bytes(buf)).unwrap()];
    reader.read_exact(&mut address)?;
    let address = C::Address::try_from(address)
      .map_err(|_| io::Error::new(io::ErrorKind::Other, "invalid address"))?;

    let mut buf = [0; 1];
    reader.read_exact(&mut buf)?;
    let data = if buf[0] == 1 {
      let mut buf = [0; 4];
      reader.read_exact(&mut buf)?;
      let mut data = vec![0; usize::try_from(u32::from_le_bytes(buf)).unwrap()];
      reader.read_exact(&mut data)?;
      Some(data)
    } else {
      None
    };

    let mut buf = [0; 8];
    reader.read_exact(&mut buf)?;
    let amount = u64::from_le_bytes(buf);

    Ok(Payment { address, data, amount })
  }
}

#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Plan<C: Coin> {
  pub key: <C::Curve as Ciphersuite>::G,
  pub inputs: Vec<C::Output>,
  pub payments: Vec<Payment<C>>,
  pub change: Option<<C::Curve as Ciphersuite>::G>,
}

impl<C: Coin> Plan<C> {
  pub fn transcript(&self) -> RecommendedTranscript {
    let mut transcript = RecommendedTranscript::new(b"Serai Processor Plan ID");
    transcript.domain_separate(b"meta");
    transcript.append_message(b"key", self.key.to_bytes());

    transcript.domain_separate(b"inputs");
    for input in &self.inputs {
      transcript.append_message(b"input", input.id());
    }

    transcript.domain_separate(b"payments");
    for payment in &self.payments {
      payment.transcript(&mut transcript);
    }

    if let Some(change) = self.change {
      transcript.append_message(b"change", change.to_bytes());
    }

    transcript
  }

  pub fn id(&self) -> [u8; 32] {
    let challenge = self.transcript().challenge(b"id");
    let mut res = [0; 32];
    res.copy_from_slice(&challenge[.. 32]);
    res
  }

  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    writer.write_all(self.key.to_bytes().as_ref())?;

    writer.write_all(&u32::try_from(self.inputs.len()).unwrap().to_le_bytes())?;
    for input in &self.inputs {
      input.write(writer)?;
    }

    writer.write_all(&u32::try_from(self.payments.len()).unwrap().to_le_bytes())?;
    for payment in &self.payments {
      payment.write(writer)?;
    }

    writer.write_all(&[u8::from(self.change.is_some())])?;
    if let Some(change) = &self.change {
      writer.write_all(change.to_bytes().as_ref())?;
    }

    Ok(())
  }

  pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let key = C::Curve::read_G(reader)?;

    let mut inputs = vec![];
    let mut buf = [0; 4];
    reader.read_exact(&mut buf)?;
    for _ in 0 .. u32::from_le_bytes(buf) {
      inputs.push(C::Output::read(reader)?);
    }

    let mut payments = vec![];
    reader.read_exact(&mut buf)?;
    for _ in 0 .. u32::from_le_bytes(buf) {
      payments.push(Payment::<C>::read(reader)?);
    }

    let mut buf = [0; 1];
    reader.read_exact(&mut buf)?;
    let change = if buf[0] == 1 { Some(C::Curve::read_G(reader)?) } else { None };

    Ok(Plan { key, inputs, payments, change })
  }
}
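Payment and Plan both use the same simple wire format: a little-endian u32 length prefix before each variable-length field, and a one-byte presence flag before each optional field. A self-contained sketch of that length-prefixing pattern (these helpers are illustrative, not the processor's API):

use std::io::{self, Read, Write};

// Mirrors the encoding Payment::write/read use above: a little-endian u32 length,
// then the raw bytes.
fn write_vec<W: Write>(writer: &mut W, bytes: &[u8]) -> io::Result<()> {
  writer.write_all(&u32::try_from(bytes.len()).unwrap().to_le_bytes())?;
  writer.write_all(bytes)
}

fn read_vec<R: Read>(reader: &mut R) -> io::Result<Vec<u8>> {
  let mut len = [0; 4];
  reader.read_exact(&mut len)?;
  let mut bytes = vec![0; usize::try_from(u32::from_le_bytes(len)).unwrap()];
  reader.read_exact(&mut bytes)?;
  Ok(bytes)
}

fn main() -> io::Result<()> {
  let mut buf = vec![];
  write_vec(&mut buf, b"serai")?;
  assert_eq!(read_vec(&mut &buf[..])?, b"serai");
  Ok(())
}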
processor/src/scanner.rs (new file, 384 lines)
@@ -0,0 +1,384 @@
use core::{marker::PhantomData, time::Duration};
use std::{
  sync::Arc,
  collections::{HashSet, HashMap},
};

use group::GroupEncoding;
use frost::curve::Ciphersuite;

use log::{info, debug, warn};
use tokio::{
  sync::{RwLock, mpsc},
  time::sleep,
};

use crate::{
  DbTxn, Db,
  coins::{Output, Block, Coin},
};

#[derive(Clone, Debug)]
pub enum ScannerEvent<C: Coin> {
  // Outputs received
  Outputs(<C::Curve as Ciphersuite>::G, <C::Block as Block<C>>::Id, Vec<C::Output>),
}

pub type ScannerEventChannel<C> = mpsc::UnboundedReceiver<ScannerEvent<C>>;

#[derive(Clone, Debug)]
struct ScannerDb<C: Coin, D: Db>(D, PhantomData<C>);
impl<C: Coin, D: Db> ScannerDb<C, D> {
  fn scanner_key(dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> {
    D::key(b"SCANNER", dst, key)
  }

  fn block_key(number: usize) -> Vec<u8> {
    Self::scanner_key(b"block_id", u64::try_from(number).unwrap().to_le_bytes())
  }
  fn block_number_key(id: &<C::Block as Block<C>>::Id) -> Vec<u8> {
    Self::scanner_key(b"block_number", id)
  }
  fn save_block(
    &mut self,
    txn: &mut D::Transaction,
    number: usize,
    id: &<C::Block as Block<C>>::Id,
  ) {
    txn.put(Self::block_number_key(id), u64::try_from(number).unwrap().to_le_bytes());
    txn.put(Self::block_key(number), id);
  }
  fn block(&self, number: usize) -> Option<<C::Block as Block<C>>::Id> {
    self.0.get(Self::block_key(number)).map(|id| {
      let mut res = <C::Block as Block<C>>::Id::default();
      res.as_mut().copy_from_slice(&id);
      res
    })
  }
  fn block_number(&self, id: &<C::Block as Block<C>>::Id) -> Option<usize> {
    self
      .0
      .get(Self::block_number_key(id))
      .map(|number| u64::from_le_bytes(number.try_into().unwrap()).try_into().unwrap())
  }

  fn active_keys_key() -> Vec<u8> {
    Self::scanner_key(b"active_keys", b"")
  }
  fn add_active_key(&mut self, txn: &mut D::Transaction, key: <C::Curve as Ciphersuite>::G) {
    let mut keys = self.0.get(Self::active_keys_key()).unwrap_or(vec![]);
    // TODO: Don't do this if the key is already marked active (which can happen based on reboot
    // timing)
    keys.extend(key.to_bytes().as_ref());
    txn.put(Self::active_keys_key(), keys);
  }
  fn active_keys(&self) -> Vec<<C::Curve as Ciphersuite>::G> {
    let bytes_vec = self.0.get(Self::active_keys_key()).unwrap_or(vec![]);
    let mut bytes: &[u8] = bytes_vec.as_ref();

    let mut res = Vec::with_capacity(bytes.len() / 32);
    while !bytes.is_empty() {
      res.push(C::Curve::read_G(&mut bytes).unwrap());
    }
    res
  }

  fn seen_key(id: &<C::Output as Output>::Id) -> Vec<u8> {
    Self::scanner_key(b"seen", id)
  }
  fn seen(&self, id: &<C::Output as Output>::Id) -> bool {
    self.0.get(Self::seen_key(id)).is_some()
  }

  fn outputs_key(
    key: &<C::Curve as Ciphersuite>::G,
    block: &<C::Block as Block<C>>::Id,
  ) -> Vec<u8> {
    let key_bytes = key.to_bytes();
    let key = key_bytes.as_ref();
    // This should be safe without the bincode serialize. Using bincode lets us not worry/have to
    // think about this
    let db_key = bincode::serialize(&(key, block.as_ref())).unwrap();
    // Assert this is actually length prefixing
    debug_assert!(db_key.len() >= (1 + key.len() + 1 + block.as_ref().len()));
    Self::scanner_key(b"outputs", db_key)
  }
  fn save_outputs(
    &mut self,
    txn: &mut D::Transaction,
    key: &<C::Curve as Ciphersuite>::G,
    block: &<C::Block as Block<C>>::Id,
    outputs: &[C::Output],
  ) {
    let mut bytes = Vec::with_capacity(outputs.len() * 64);
    for output in outputs {
      output.write(&mut bytes).unwrap();
    }
    txn.put(Self::outputs_key(key, block), bytes);
  }
  fn outputs(
    &self,
    key: &<C::Curve as Ciphersuite>::G,
    block: &<C::Block as Block<C>>::Id,
  ) -> Option<Vec<C::Output>> {
    let bytes_vec = self.0.get(Self::outputs_key(key, block))?;
    let mut bytes: &[u8] = bytes_vec.as_ref();

    let mut res = vec![];
    while !bytes.is_empty() {
      res.push(C::Output::read(&mut bytes).unwrap());
    }
    Some(res)
  }

  fn scanned_block_key(key: &<C::Curve as Ciphersuite>::G) -> Vec<u8> {
    Self::scanner_key(b"scanned_block", key.to_bytes())
  }
  fn save_scanned_block(
    &mut self,
    txn: &mut D::Transaction,
    key: &<C::Curve as Ciphersuite>::G,
    block: usize,
  ) -> Vec<C::Output> {
    let new_key = self.0.get(Self::scanned_block_key(key)).is_none();
    let outputs = self.block(block).and_then(|id| self.outputs(key, &id));
    // Either this is a new key, with no outputs, or we're acknowledging this block
    // If we're acknowledging it, we should have outputs available
    assert_eq!(new_key, outputs.is_none());
    let outputs = outputs.unwrap_or(vec![]);

    // Mark all the outputs from this block as seen
    for output in &outputs {
      txn.put(Self::seen_key(&output.id()), b"");
    }

    txn.put(Self::scanned_block_key(key), u64::try_from(block).unwrap().to_le_bytes());

    // Return this block's outputs so they can be pruned from the RAM cache
    outputs
  }
  fn latest_scanned_block(&self, key: <C::Curve as Ciphersuite>::G) -> usize {
    let bytes = self.0.get(Self::scanned_block_key(&key)).unwrap_or(vec![0; 8]);
    u64::from_le_bytes(bytes.try_into().unwrap()).try_into().unwrap()
  }
}
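Every ScannerDb key is namespaced under b"SCANNER" plus a per-table DST via D::key, so the b"block_id", b"seen", b"outputs", etc. tables can't collide. The exact shape of Db::key isn't shown in this diff; the following is a minimal sketch of one way such a helper could be built, assuming each component is length-prefixed to rule out concatenation collisions:

fn dst_key(db: &'static [u8], dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> {
  let mut res = vec![];
  for part in [db, dst, key.as_ref()] {
    // Length-prefix each component (an assumption about Db::key, not its actual code)
    res.extend(u32::try_from(part.len()).unwrap().to_le_bytes());
    res.extend(part);
  }
  res
}

fn main() {
  // Distinct DSTs can never produce the same DB key, even for the same inner key
  assert_ne!(
    dst_key(b"SCANNER", b"block_id", [1u8]),
    dst_key(b"SCANNER", b"block_number", [1u8]),
  );
}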

/// The Scanner emits events relating to the blockchain, notably received outputs.
/// It WILL NOT fail to emit an event, even if it reboots at selected moments.
/// It MAY fire the same event multiple times.
#[derive(Debug)]
pub struct Scanner<C: Coin, D: Db> {
  coin: C,
  db: ScannerDb<C, D>,
  keys: Vec<<C::Curve as Ciphersuite>::G>,

  ram_scanned: HashMap<Vec<u8>, usize>,
  ram_outputs: HashSet<Vec<u8>>,

  events: mpsc::UnboundedSender<ScannerEvent<C>>,
}

#[derive(Debug)]
pub struct ScannerHandle<C: Coin, D: Db> {
  scanner: Arc<RwLock<Scanner<C, D>>>,
  pub events: ScannerEventChannel<C>,
}

impl<C: Coin, D: Db> ScannerHandle<C, D> {
  pub async fn ram_scanned(&self) -> usize {
    let mut res = None;
    for scanned in self.scanner.read().await.ram_scanned.values() {
      if res.is_none() {
        res = Some(*scanned);
      }
      // Returns the lowest scanned value so no matter the keys interacted with, this is
      // sufficiently scanned
      res = Some(res.unwrap().min(*scanned));
    }
    res.unwrap_or(0)
  }

  /// Rotate the key being scanned for.
  ///
  /// If no key has been prior set, this will become the key with no further actions.
  ///
  /// If a key has been prior set, both keys will be scanned for as detailed in the Multisig
  /// documentation. The old key will eventually stop being scanned for, leaving just the
  /// updated-to key.
  pub async fn rotate_key(&self, activation_number: usize, key: <C::Curve as Ciphersuite>::G) {
    let mut scanner = self.scanner.write().await;
    if !scanner.keys.is_empty() {
      // Protonet will have a single, static validator set
      // TODO2
      panic!("only a single key is supported at this time");
    }

    info!("Rotating to key {}", hex::encode(key.to_bytes()));
    let mut txn = scanner.db.0.txn();
    assert!(scanner.db.save_scanned_block(&mut txn, &key, activation_number).is_empty());
    scanner.db.add_active_key(&mut txn, key);
    txn.commit();
    scanner.keys.push(key);
  }

  /// Acknowledge having handled a block for a key.
  pub async fn ack_block(
    &self,
    key: <C::Curve as Ciphersuite>::G,
    id: <C::Block as Block<C>>::Id,
  ) -> Vec<C::Output> {
    let mut scanner = self.scanner.write().await;
    debug!("Block {} acknowledged", hex::encode(&id));
    let number =
      scanner.db.block_number(&id).expect("main loop trying to operate on data we haven't scanned");

    let mut txn = scanner.db.0.txn();
    let outputs = scanner.db.save_scanned_block(&mut txn, &key, number);
    txn.commit();

    for output in &outputs {
      scanner.ram_outputs.remove(output.id().as_ref());
    }

    outputs
  }
}

impl<C: Coin, D: Db> Scanner<C, D> {
  #[allow(clippy::new_ret_no_self)]
  pub fn new(coin: C, db: D) -> (ScannerHandle<C, D>, Vec<<C::Curve as Ciphersuite>::G>) {
    let (events_send, events_recv) = mpsc::unbounded_channel();

    let db = ScannerDb(db, PhantomData);
    let keys = db.active_keys();

    let scanner = Arc::new(RwLock::new(Scanner {
      coin,
      db,
      keys: keys.clone(),

      ram_scanned: HashMap::new(),
      ram_outputs: HashSet::new(),

      events: events_send,
    }));
    tokio::spawn(Scanner::run(scanner.clone()));

    (ScannerHandle { scanner, events: events_recv }, keys)
  }

  fn emit(&mut self, event: ScannerEvent<C>) -> bool {
    if self.events.send(event).is_err() {
      info!("Scanner handler was dropped. Shutting down?");
      return false;
    }
    true
  }

  // An async function, to be spawned on a task, to discover and report outputs
  async fn run(scanner: Arc<RwLock<Self>>) {
    loop {
      // Only check every five seconds for new blocks
      sleep(Duration::from_secs(5)).await;

      // Scan new blocks
      {
        let mut scanner = scanner.write().await;
        let latest = scanner.coin.get_latest_block_number().await;
        let latest = match latest {
          // Only scan confirmed blocks, which we consider effectively finalized
          // CONFIRMATIONS - 1 as whatever's in the latest block already has 1 confirm
          Ok(latest) => latest.saturating_sub(C::CONFIRMATIONS.saturating_sub(1)),
          Err(_) => {
            warn!("Couldn't get {}'s latest block number", C::ID);
            sleep(Duration::from_secs(60)).await;
            continue;
          }
        };

        for key in scanner.keys.clone() {
          let key_vec = key.to_bytes().as_ref().to_vec();
          let latest_scanned = {
            // Grab the latest scanned block according to the DB
            let db_scanned = scanner.db.latest_scanned_block(key);
            // We may, within this process's lifetime, have scanned more blocks
            // If they're still being processed, we will not have officially written them to the DB
            // as scanned yet
            // That way, if the process terminates, and is rebooted, we'll rescan from a handled
            // point, re-firing all events along the way, enabling them to be properly processed
            // In order to not re-fire them within this process's lifetime, check our RAM cache
            // of what we've scanned
            // We are allowed to re-fire them within this lifetime. It's just wasteful
            let ram_scanned = scanner.ram_scanned.get(&key_vec).cloned().unwrap_or(0);
            // Pick whichever is higher
            db_scanned.max(ram_scanned)
          };

          for i in (latest_scanned + 1) ..= latest {
            // TODO2: Check for key deprecation

            let block = match scanner.coin.get_block(i).await {
              Ok(block) => block,
              Err(_) => {
                warn!("Couldn't get {} block {i}", C::ID);
                break;
              }
            };
            let block_id = block.id();

            if let Some(id) = scanner.db.block(i) {
              // TODO2: Also check this block builds off the previous block
              if id != block.id() {
                panic!("{} reorg'd from {id:?} to {:?}", C::ID, hex::encode(block_id));
              }
            } else {
              info!("Found new block: {}", hex::encode(&block_id));
              let mut txn = scanner.db.0.txn();
              scanner.db.save_block(&mut txn, i, &block_id);
              txn.commit();
            }

            let outputs = match scanner.coin.get_outputs(&block, key).await {
              Ok(outputs) => outputs,
              Err(_) => {
                warn!("Couldn't scan {} block {i:?}", C::ID);
                break;
              }
            };

            // Panic if we've already seen these outputs
            for output in &outputs {
              let id = output.id();
              // On Bitcoin, the output ID should be unique for a given chain
              // On Monero, it's trivial to make an output sharing an ID with another
              // We should only scan outputs with valid IDs however, which will be unique
              let seen = scanner.db.seen(&id);
              let id = id.as_ref().to_vec();
              if seen || scanner.ram_outputs.contains(&id) {
                panic!("scanned an output multiple times");
              }
              scanner.ram_outputs.insert(id);
            }

            // TODO: Still fire an empty Outputs event if we haven't had inputs in a while
            if outputs.is_empty() {
              continue;
            }

            // Save the outputs to disk
            let mut txn = scanner.db.0.txn();
            scanner.db.save_outputs(&mut txn, &key, &block_id, &outputs);
            txn.commit();

            // Send all outputs
            if !scanner.emit(ScannerEvent::Outputs(key, block_id, outputs)) {
              return;
            }
            // Write this number as scanned so we won't re-fire these outputs
            scanner.ram_scanned.insert(key_vec.clone(), i);
          }
        }
      }
    }
  }
}
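Both the scanner's run loop and main's wait function use the same confirmation window: a block counts as confirmed once CONFIRMATIONS - 1 blocks sit on top of it, since the block itself already provides one confirmation. A standalone sketch of that arithmetic (not the processor's API):

// With CONFIRMATIONS = 10 and a chain tip of 100, the newest block holding 10
// confirmations is 100 - (10 - 1) = 91.
fn confirmed_height(latest: usize, confirmations: usize) -> usize {
  latest.saturating_sub(confirmations.saturating_sub(1))
}

fn main() {
  assert_eq!(confirmed_height(100, 10), 91);
  // saturating_sub keeps a young chain (tip still inside the window) from underflowing
  assert_eq!(confirmed_height(3, 10), 0);
}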
@@ -1,4 +1,265 @@
// For n existing inputs, and n target outputs, multiplex the inputs in while log scheduling the
// outputs out. Monero, which has a limit of 16 TXOs, could do 15 at a time, carrying a change
// Combined with the 20 minute lock, this is completely infeasible. By instead doing 15 TX seeds,
// and then 16 outputs on each, in just two lock cycles you can accomplish 240 TXs (not just 30).
use std::collections::{VecDeque, HashMap};

use frost::curve::Ciphersuite;

use crate::{
  coins::{Output, Coin},
  Payment, Plan,
};

/// Stateless, deterministic output/payment manager.
#[derive(Debug)]
pub struct Scheduler<C: Coin> {
  key: <C::Curve as Ciphersuite>::G,

  // Serai, when it has more outputs expected than it can handle in a single transaction, will
  // schedule the outputs to be handled later. Immediately, it just creates additional outputs
  // which will eventually handle those outputs
  //
  // These maps map output amounts, which we'll receive in the future, to the payments they should
  // be used on
  //
  // When those output amounts appear, their payments should be scheduled
  // The Vec<Payment> is for all payments that should be done per output instance
  // The VecDeque allows multiple sets of payments with the same sum amount to properly co-exist
  //
  // queued_plans are for outputs which we will create, yet when created, will have their amount
  // reduced by the fee it cost to be created. The Scheduler will then be told what amount the
  // output actually has, and it'll be moved into plans
  //
  // TODO2: Consider edge case where branch/change isn't mined yet keys are deprecated
  queued_plans: HashMap<u64, VecDeque<Vec<Payment<C>>>>,
  plans: HashMap<u64, VecDeque<Vec<Payment<C>>>>,

  // UTXOs available
  utxos: Vec<C::Output>,

  // Payments awaiting scheduling due to the output availability problem
  payments: VecDeque<Payment<C>>,
}

impl<C: Coin> Scheduler<C> {
  pub fn new(key: <C::Curve as Ciphersuite>::G) -> Self {
    Scheduler {
      key,
      queued_plans: HashMap::new(),
      plans: HashMap::new(),
      utxos: vec![],
      payments: VecDeque::new(),
    }
  }

  fn execute(&mut self, inputs: Vec<C::Output>, mut payments: Vec<Payment<C>>) -> Plan<C> {
    // This must be equal to plan.key due to how coins detect they created outputs which are to
    // the branch address
    let branch_address = C::branch_address(self.key);
    // created_output will be called any time we send to a branch address
    // If it's called, and it wasn't expecting to be called, that's almost certainly an error
    // The only way it wouldn't be is if someone on Serai triggered a burn to a branch, which is
    // pointless anyways
    // If we allow such behavior, we lose the ability to detect the aforementioned class of errors
    // Ignore these payments so we can safely assert there
    let mut payments =
      payments.drain(..).filter(|payment| payment.address != branch_address).collect::<Vec<_>>();

    let mut change = false;
    let mut max = C::MAX_OUTPUTS;

    let payment_amounts =
      |payments: &Vec<Payment<C>>| payments.iter().map(|payment| payment.amount).sum::<u64>();

    // Requires a change output
    if inputs.iter().map(Output::amount).sum::<u64>() != payment_amounts(&payments) {
      change = true;
      max -= 1;
    }

    let mut add_plan = |payments| {
      let amount = payment_amounts(&payments);
      self.queued_plans.entry(amount).or_insert(VecDeque::new()).push_back(payments);
      amount
    };

    // If we have more payments than we can handle in a single TX, create plans for them
    // TODO2: This isn't perfect. For 258 outputs, and a MAX_OUTPUTS of 16, this will create:
    // 15 branches of 16 leaves
    // 1 branch of:
    // - 1 branch of 16 leaves
    // - 2 leaves
    // If this was perfect, the heaviest branch would have 1 branch of 3 leaves and 15 leaves
    while payments.len() > max {
      // The resulting TX will have the remaining payments and a new branch payment
      let to_remove = (payments.len() + 1) - C::MAX_OUTPUTS;
      // Don't remove more than possible
      let to_remove = to_remove.min(C::MAX_OUTPUTS);

      // Create the plan
      let removed = payments.drain((payments.len() - to_remove) ..).collect::<Vec<_>>();
      assert_eq!(removed.len(), to_remove);
      let amount = add_plan(removed);

      // Create the payment for the plan
      // Push it to the front so it's not moved into a branch until all lower-depth items are
      payments.insert(0, Payment { address: branch_address.clone(), data: None, amount });
    }

    // TODO2: Use the latest key for change
    // TODO2: Update rotation documentation
    Plan { key: self.key, inputs, payments, change: Some(self.key).filter(|_| change) }
  }
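The while loop above builds the branch tree bottom-up: each iteration peels off up to MAX_OUTPUTS payments into a queued plan and replaces them with one branch payment. A standalone sketch simulating it with the numbers from the TODO2 comment (258 payments, MAX_OUTPUTS = 16, no change output for simplicity); payments are reduced to a count, so this only shows how many branch plans get queued and how large the final TX ends up:

fn main() {
  const MAX_OUTPUTS: usize = 16;
  let max = MAX_OUTPUTS;
  let mut payments = 258usize;
  let mut branch_plans = vec![];
  while payments > max {
    let to_remove = ((payments + 1) - MAX_OUTPUTS).min(MAX_OUTPUTS);
    branch_plans.push(to_remove);
    // The removed payments are replaced by a single payment to the branch address
    payments = payments - to_remove + 1;
  }
  // 17 branch plans get queued (sixteen of 16 payments, one of 3; some of those
  // payments are themselves branches), and the final TX carries 16 outputs
  assert_eq!(branch_plans.len(), 17);
  assert_eq!(payments, 16);
}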

  // When Substrate emits `Updates` for a coin, all outputs should be added up to the
  // acknowledged block.
  pub fn add_outputs(&mut self, mut utxos: Vec<C::Output>) -> Vec<Plan<C>> {
    let mut txs = vec![];

    for utxo in utxos.drain(..) {
      // If we can fulfill planned TXs with this output, do so
      // We could limit this to UTXOs where `utxo.kind() == OutputType::Branch`, yet there's no
      // practical benefit in doing so
      if let Some(plans) = self.plans.get_mut(&utxo.amount()) {
        // Execute the first set of payments possible with an output of this amount
        let payments = plans.pop_front().unwrap();
        // They won't be equal if we dropped payments due to being dust
        assert!(utxo.amount() >= payments.iter().map(|payment| payment.amount).sum::<u64>());

        // If we've grabbed the last plan for this output amount, remove it from the map
        if plans.is_empty() {
          self.plans.remove(&utxo.amount());
        }

        // Create a TX for these payments
        txs.push(self.execute(vec![utxo], payments));
      } else {
        self.utxos.push(utxo);
      }
    }

    // Sort the retained UTXOs by amount, highest first
    self.utxos.sort_by(|a, b| a.amount().cmp(&b.amount()).reverse());

    // Return the now possible TXs
    log::info!("created {} planned TXs to sign from now received outputs", txs.len());
    txs
  }

  // Schedule a series of payments. This should be called after `add_outputs`.
  pub fn schedule(&mut self, payments: Vec<Payment<C>>) -> Vec<Plan<C>> {
    log::debug!("scheduling payments");
    assert!(!payments.is_empty(), "tried to schedule zero payments");

    // Add all new payments to the list of pending payments
    self.payments.extend(payments);

    // If we don't have UTXOs available, don't try to continue
    if self.utxos.is_empty() {
      return vec![];
    }

    // We always want to aggregate our UTXOs into a single UTXO in the name of simplicity
    // We may have more UTXOs than will fit into a TX though
    // We use the most valuable UTXOs to handle our current payments, and we return aggregation TXs
    // for the rest of the inputs
    // Since we do multiple aggregation TXs at once, this will execute in logarithmic time
    let utxos = self.utxos.drain(..).collect::<Vec<_>>();
    let mut utxo_chunks =
      utxos.chunks(C::MAX_INPUTS).map(|chunk| chunk.to_vec()).collect::<Vec<_>>();
    let utxos = utxo_chunks.remove(0);

    // If the last chunk exists and only has one output, don't try aggregating it
    // Just immediately consider it another output
    if let Some(mut chunk) = utxo_chunks.pop() {
      if chunk.len() == 1 {
        self.utxos.push(chunk.pop().unwrap());
      } else {
        utxo_chunks.push(chunk);
      }
    }

    let mut aggregating = vec![];
    for chunk in utxo_chunks.drain(..) {
      aggregating.push(Plan {
        key: self.key,
        inputs: chunk,
        payments: vec![],
        change: Some(self.key),
      })
    }

    // We want to use all possible UTXOs for all possible payments
    let mut balance = utxos.iter().map(Output::amount).sum::<u64>();

    // If we can't fulfill the next payment, we have encountered an instance of the UTXO
    // availability problem
    // This shows up in coins like Monero, where because we spent outputs, our change has yet to
    // re-appear. Since it has yet to re-appear, we only operate with a balance which is a subset
    // of our total balance
    // Despite this, we may be ordered to fulfill a payment which is our total balance
    // The solution is to wait for the temporarily unavailable change outputs to re-appear,
    // granting us access to our full balance
    let mut executing = vec![];
    while !self.payments.is_empty() {
      let amount = self.payments[0].amount;
      if balance.checked_sub(amount).is_some() {
        balance -= amount;
        executing.push(self.payments.pop_front().unwrap());
      } else {
        // We can't afford this payment right now; payments are handled in order, so leave it
        // (and everything after it) pending until further outputs appear
        break;
      }
    }

    // Now that we have the list of payments we can successfully handle right now, create the TX
    // for them
    let mut txs = vec![self.execute(utxos, executing)];
    txs.append(&mut aggregating);
    log::info!("created {} TXs to sign", txs.len());
    txs
  }

  // Note a branch output as having been created, with the amount it was actually created with,
  // or not having been created due to being too small
  // This can be called whenever, so long as it's properly ordered
  // (it's independent to Serai/the chain we're scheduling over, yet still expects outputs to be
  // created in the same order Plans are returned in)
  pub fn created_output(&mut self, expected: u64, actual: Option<u64>) {
    log::debug!("output expected to have {} had {:?} after fees", expected, actual);

    // Get the payments this output is expected to handle
    let queued = self.queued_plans.get_mut(&expected).unwrap();
    let mut payments = queued.pop_front().unwrap();
    assert_eq!(expected, payments.iter().map(|payment| payment.amount).sum::<u64>());
    // If this was the last set of payments at this amount, remove it
    if queued.is_empty() {
      self.queued_plans.remove(&expected);
    }

    // If we didn't actually create this output, return, dropping the child payments
    let actual = match actual {
      Some(actual) => actual,
      None => return,
    };

    // Amortize the fee amongst all payments
    // While some coins, like Ethereum, may have some payments take notably more gas, those
    // payments will have their own gas deducted when they're created. The difference in output
    // value present here is solely the cost of the branch, which is used for all of these
    // payments, regardless of how much they'll end up costing
    let diff = expected - actual;
    let payments_len = u64::try_from(payments.len()).unwrap();
    let per_payment = diff / payments_len;
    // The above division isn't perfect
    let mut remainder = diff - (per_payment * payments_len);

    for payment in payments.iter_mut() {
      payment.amount = payment.amount.saturating_sub(per_payment + remainder);
      // Only subtract the remainder once
      remainder = 0;
    }

    // Drop payments now below the dust threshold
    let payments =
      payments.drain(..).filter(|payment| payment.amount >= C::DUST).collect::<Vec<_>>();
    // Sanity check this was done properly
    assert!(actual >= payments.iter().map(|payment| payment.amount).sum::<u64>());

    self.plans.entry(actual).or_insert(VecDeque::new()).push_back(payments);
  }
}
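A standalone sketch of the fee amortization in created_output, with hypothetical numbers (not the processor's API): a branch expected to carry 100_000 units arrives as 99_098 after fees; the 902-unit difference is split evenly across 3 payments, with the division's remainder taken from the first payment only.

fn main() {
  let expected: u64 = 100_000;
  let actual: u64 = 99_098;
  let mut payments: Vec<u64> = vec![40_000, 35_000, 25_000];

  let diff = expected - actual; // 902
  let per_payment = diff / payments.len() as u64; // 300
  let mut remainder = diff - (per_payment * payments.len() as u64); // 2

  for payment in payments.iter_mut() {
    *payment = payment.saturating_sub(per_payment + remainder);
    remainder = 0; // only the first payment eats the remainder
  }

  assert_eq!(payments, vec![39_698, 34_700, 24_700]);
  // The amortized payments never exceed what the branch actually holds
  assert!(actual >= payments.iter().sum::<u64>());
}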
processor/src/signer.rs (new file, 512 lines)
@@ -0,0 +1,512 @@
use core::{marker::PhantomData, fmt};
use std::{
  sync::Arc,
  time::{SystemTime, Duration},
  collections::HashMap,
};

use rand_core::OsRng;

use group::GroupEncoding;
use frost::{
  ThresholdKeys,
  sign::{Writable, PreprocessMachine, SignMachine, SignatureMachine},
};

use log::{info, debug, warn, error};
use tokio::{
  sync::{RwLock, mpsc},
  time::sleep,
};

use messages::sign::*;
use crate::{
  DbTxn, Db,
  coins::{Transaction, Eventuality, Coin},
};

const CHANNEL_MSG: &str = "Signer handler was dropped. Shutting down?";

#[derive(Debug)]
pub enum SignerEvent<C: Coin> {
  SignedTransaction { id: [u8; 32], tx: <C::Transaction as Transaction<C>>::Id },
  ProcessorMessage(ProcessorMessage),
}

pub type SignerEventChannel<C> = mpsc::UnboundedReceiver<SignerEvent<C>>;

#[derive(Debug)]
struct SignerDb<C: Coin, D: Db>(D, PhantomData<C>);
impl<C: Coin, D: Db> SignerDb<C, D> {
  fn sign_key(dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> {
    D::key(b"SIGNER", dst, key)
  }

  fn completed_key(id: [u8; 32]) -> Vec<u8> {
    Self::sign_key(b"completed", id)
  }
  fn complete(
    &mut self,
    txn: &mut D::Transaction,
    id: [u8; 32],
    tx: <C::Transaction as Transaction<C>>::Id,
  ) {
    // Transactions can be completed by multiple signatures
    // Save every solution in order to be robust
    let mut existing = txn.get(Self::completed_key(id)).unwrap_or(vec![]);
    // TODO: Don't do this if this TX is already present
    existing.extend(tx.as_ref());
    txn.put(Self::completed_key(id), existing);
  }
  fn completed(&self, id: [u8; 32]) -> Option<Vec<u8>> {
    self.0.get(Self::completed_key(id))
  }

  fn eventuality_key(id: [u8; 32]) -> Vec<u8> {
    Self::sign_key(b"eventuality", id)
  }
  fn save_eventuality(
    &mut self,
    txn: &mut D::Transaction,
    id: [u8; 32],
    eventuality: C::Eventuality,
  ) {
    txn.put(Self::eventuality_key(id), eventuality.serialize());
  }
  fn eventuality(&self, id: [u8; 32]) -> Option<C::Eventuality> {
    Some(
      C::Eventuality::read::<&[u8]>(&mut self.0.get(Self::eventuality_key(id))?.as_ref()).unwrap(),
    )
  }

  fn attempt_key(id: &SignId) -> Vec<u8> {
    Self::sign_key(b"attempt", bincode::serialize(id).unwrap())
  }
  fn attempt(&mut self, txn: &mut D::Transaction, id: &SignId) {
    txn.put(Self::attempt_key(id), []);
  }
  fn has_attempt(&mut self, id: &SignId) -> bool {
    self.0.get(Self::attempt_key(id)).is_some()
  }

  fn save_transaction(&mut self, txn: &mut D::Transaction, tx: &C::Transaction) {
    txn.put(Self::sign_key(b"tx", tx.id()), tx.serialize());
  }
}
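`complete` stores completing TX IDs by appending them to one DB value as fixed-width records, which sign_transaction later walks in ID-sized chunks (hence its `txs.len() % tx_id_len == 0` assert). A minimal sketch of that record layout, with `ID_LEN` standing in for the coin's TX ID width (an assumption for illustration):

const ID_LEN: usize = 32;

fn completions(stored: &[u8]) -> impl Iterator<Item = [u8; ID_LEN]> + '_ {
  // The stored value must be a whole number of fixed-width IDs
  assert_eq!(stored.len() % ID_LEN, 0);
  stored.chunks(ID_LEN).map(|chunk| chunk.try_into().unwrap())
}

fn main() {
  let mut stored = vec![];
  stored.extend([1u8; ID_LEN]); // first claimed completion
  stored.extend([2u8; ID_LEN]); // a distinct, also-valid completion
  assert_eq!(completions(&stored).count(), 2);
}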

/// Coded so if the processor spontaneously reboots, one of two paths occur:
/// 1) It either didn't send its response, so the attempt will be aborted
/// 2) It did send its response, and has locally saved enough data to continue
pub struct Signer<C: Coin, D: Db> {
  coin: C,
  db: SignerDb<C, D>,

  keys: ThresholdKeys<C::Curve>,

  signable: HashMap<[u8; 32], (SystemTime, C::SignableTransaction)>,
  attempt: HashMap<[u8; 32], u32>,
  preprocessing: HashMap<[u8; 32], <C::TransactionMachine as PreprocessMachine>::SignMachine>,
  #[allow(clippy::type_complexity)]
  signing: HashMap<
    [u8; 32],
    <
      <C::TransactionMachine as PreprocessMachine>::SignMachine as SignMachine<C::Transaction>
    >::SignatureMachine,
  >,

  events: mpsc::UnboundedSender<SignerEvent<C>>,
}

impl<C: Coin, D: Db> fmt::Debug for Signer<C, D> {
  fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
    fmt
      .debug_struct("Signer")
      .field("coin", &self.coin)
      .field("signable", &self.signable)
      .field("attempt", &self.attempt)
      .finish_non_exhaustive()
  }
}

#[derive(Debug)]
pub struct SignerHandle<C: Coin, D: Db> {
  signer: Arc<RwLock<Signer<C, D>>>,
  pub events: SignerEventChannel<C>,
}

impl<C: Coin, D: Db> Signer<C, D> {
  #[allow(clippy::new_ret_no_self)]
  pub fn new(db: D, coin: C, keys: ThresholdKeys<C::Curve>) -> SignerHandle<C, D> {
    let (events_send, events_recv) = mpsc::unbounded_channel();

    let signer = Arc::new(RwLock::new(Signer {
      coin,
      db: SignerDb(db, PhantomData),

      keys,

      signable: HashMap::new(),
      attempt: HashMap::new(),
      preprocessing: HashMap::new(),
      signing: HashMap::new(),

      events: events_send,
    }));

    tokio::spawn(Signer::run(signer.clone()));

    SignerHandle { signer, events: events_recv }
  }

  fn verify_id(&self, id: &SignId) -> Result<(), ()> {
    if !id.signing_set(&self.keys.params()).contains(&self.keys.params().i()) {
      panic!("coordinator sent us preprocesses for a signing attempt we're not participating in");
    }

    // Check the attempt lines up
    match self.attempt.get(&id.id) {
      // If we don't have an attempt logged, it's because the coordinator is faulty OR
      // because we rebooted
      None => {
        warn!("not attempting {:?}. this is an error if we didn't reboot", id);
        // Don't panic on the assumption we rebooted
        Err(())?;
      }
      Some(attempt) => {
        // This could be an old attempt, or it may be a 'future' attempt if we rebooted and
        // our SystemTime wasn't monotonic, as it's not guaranteed to be
        if attempt != &id.attempt {
          debug!("sent signing data for a distinct attempt");
          Err(())?;
        }
      }
    }

    Ok(())
  }

  fn emit(&mut self, event: SignerEvent<C>) -> bool {
    if self.events.send(event).is_err() {
      info!("{}", CHANNEL_MSG);
      false
    } else {
      true
    }
  }

  async fn handle(&mut self, msg: CoordinatorMessage) {
    match msg {
      CoordinatorMessage::Preprocesses { id, mut preprocesses } => {
        if self.verify_id(&id).is_err() {
          return;
        }

        let machine = match self.preprocessing.remove(&id.id) {
          // Either rebooted or RPC error, or some invariant
          None => {
            warn!("not preprocessing for {:?}. this is an error if we didn't reboot", id);
            return;
          }
          Some(machine) => machine,
        };

        let preprocesses = match preprocesses
          .drain()
          .map(|(l, preprocess)| {
            machine
              .read_preprocess::<&[u8]>(&mut preprocess.as_ref())
              .map(|preprocess| (l, preprocess))
          })
          .collect::<Result<_, _>>()
        {
          Ok(preprocesses) => preprocesses,
          Err(e) => todo!("malicious signer: {:?}", e),
        };

        // Use an empty message, as expected of TransactionMachines
        let (machine, share) = match machine.sign(preprocesses, &[]) {
          Ok(res) => res,
          Err(e) => todo!("malicious signer: {:?}", e),
        };
        self.signing.insert(id.id, machine);

        // Broadcast our share
        self.emit(SignerEvent::ProcessorMessage(ProcessorMessage::Share {
          id,
          share: share.serialize(),
        }));
      }

      CoordinatorMessage::Shares { id, mut shares } => {
        if self.verify_id(&id).is_err() {
          return;
        }

        let machine = match self.signing.remove(&id.id) {
          // Rebooted, RPC error, or some invariant
          None => {
            // If preprocessing has this ID, it means we were never sent the preprocess by the
            // coordinator
            if self.preprocessing.contains_key(&id.id) {
              panic!("never preprocessed yet signing?");
            }

            warn!("not signing for {:?}. this is an error if we didn't reboot", id);
            return;
          }
          Some(machine) => machine,
        };

        let shares = match shares
          .drain()
          .map(|(l, share)| {
            machine.read_share::<&[u8]>(&mut share.as_ref()).map(|share| (l, share))
          })
          .collect::<Result<_, _>>()
        {
          Ok(shares) => shares,
          Err(e) => todo!("malicious signer: {:?}", e),
        };

        let tx = match machine.complete(shares) {
          Ok(res) => res,
          Err(e) => todo!("malicious signer: {:?}", e),
        };

        // Save the transaction in case it's needed for recovery
        let mut txn = self.db.0.txn();
        self.db.save_transaction(&mut txn, &tx);
        self.db.complete(&mut txn, id.id, tx.id());
        txn.commit();

        // Publish it
        if let Err(e) = self.coin.publish_transaction(&tx).await {
          error!("couldn't publish {:?}: {:?}", tx, e);
        } else {
          info!("published {:?}", hex::encode(tx.id()));
        }

        // Stop trying to sign for this TX
        assert!(self.signable.remove(&id.id).is_some());
        assert!(self.attempt.remove(&id.id).is_some());
        assert!(self.preprocessing.remove(&id.id).is_none());
        assert!(self.signing.remove(&id.id).is_none());

        self.emit(SignerEvent::SignedTransaction { id: id.id, tx: tx.id() });
      }

      CoordinatorMessage::Completed { key: _, id, tx: tx_vec } => {
        let mut tx = <C::Transaction as Transaction<C>>::Id::default();
        if tx.as_ref().len() != tx_vec.len() {
          warn!(
            "a validator claimed {} completed {id:?} yet that's not a valid TX ID",
            hex::encode(&tx_vec)
          );
          return;
        }
        tx.as_mut().copy_from_slice(&tx_vec);

        if let Some(eventuality) = self.db.eventuality(id) {
          // Transaction hasn't hit our mempool/was dropped for a different signature
          // The latter can happen given certain latency conditions/a single malicious signer
          // In the case of a single malicious signer, they can drag multiple honest
          // validators down with them, so we unfortunately can't slash on this case
          let Ok(tx) = self.coin.get_transaction(&tx).await else {
            todo!("queue checking eventualities"); // or give up here?
          };

          if self.coin.confirm_completion(&eventuality, &tx) {
            // Stop trying to sign for this TX
            let mut txn = self.db.0.txn();
            self.db.save_transaction(&mut txn, &tx);
            self.db.complete(&mut txn, id, tx.id());
            txn.commit();

            self.signable.remove(&id);
            self.attempt.remove(&id);
            self.preprocessing.remove(&id);
            self.signing.remove(&id);

            self.emit(SignerEvent::SignedTransaction { id, tx: tx.id() });
          } else {
            warn!("a validator claimed {} completed {id:?} when it did not", hex::encode(&tx.id()));
          }
        }
      }
    }
  }

  // An async function, to be spawned on a task, to handle signing
  async fn run(signer_arc: Arc<RwLock<Self>>) {
    const SIGN_TIMEOUT: u64 = 30;

    loop {
      // Sleep until a timeout expires (or five seconds expire)
      // Since this code starts new sessions, it will delay any ordered signing sessions from
      // starting for up to 5 seconds, hence why this number can't be too high (such as 30 seconds,
      // the full timeout)
      // This won't delay re-attempting any signing session however, nor will it block the
      // sign_transaction function (since this doesn't hold any locks)
      sleep({
        let now = SystemTime::now();
        let mut lowest = Duration::from_secs(5);
        let signer = signer_arc.read().await;
        for (id, (start, _)) in &signer.signable {
          let until = if let Some(attempt) = signer.attempt.get(id) {
            // Get when this attempt times out
            (*start + Duration::from_secs(u64::from(attempt + 1) * SIGN_TIMEOUT))
              .duration_since(now)
              .unwrap_or(Duration::ZERO)
          } else {
            Duration::ZERO
          };

          if until < lowest {
            lowest = until;
          }
        }
        lowest
      })
      .await;

      // Because a signing attempt has timed out (or five seconds has passed), check all
      // sessions' timeouts
      {
        let mut signer = signer_arc.write().await;
        let keys = signer.signable.keys().cloned().collect::<Vec<_>>();
        for id in keys {
          let (start, tx) = &signer.signable[&id];
          let start = *start;

          let attempt = u32::try_from(
            SystemTime::now().duration_since(start).unwrap_or(Duration::ZERO).as_secs() /
              SIGN_TIMEOUT,
          )
          .unwrap();

          // Check if we're already working on this attempt
          if let Some(curr_attempt) = signer.attempt.get(&id) {
            if curr_attempt >= &attempt {
              continue;
            }
          }

          // Start this attempt
          // Clone the TX so we don't have an immutable borrow preventing the below mutable actions
          // (also because we do need an owned tx anyways)
          let tx = tx.clone();

          // Delete any existing machines
          signer.preprocessing.remove(&id);
          signer.signing.remove(&id);

          // Update the attempt number so we don't re-enter this conditional
          signer.attempt.insert(id, attempt);

          let id =
            SignId { key: signer.keys.group_key().to_bytes().as_ref().to_vec(), id, attempt };
          // Only preprocess if we're a signer
          if !id.signing_set(&signer.keys.params()).contains(&signer.keys.params().i()) {
            continue;
          }
          info!("selected to sign {:?}", id);

          // If we reboot mid-sign, the current design has us abort all signs and wait for latter
          // attempts/new signing protocols
          // This is distinct from the DKG which will continue DKG sessions, even on reboot
          // This is because signing is tolerant of failures of up to 1/3rd of the group
          // The DKG requires 100% participation
          // While we could apply similar tricks as the DKG (a seeded RNG) to achieve support for
          // reboots, it's not worth the complexity when messing up here leaks our secret share
          //
          // Despite this, on reboot, we'll get told of active signing items, and may be in this
          // branch again for something we've already attempted
          //
          // Only run if this hasn't already been attempted
          if signer.db.has_attempt(&id) {
            warn!("already attempted {:?}. this is an error if we didn't reboot", id);
            continue;
          }

          let mut txn = signer.db.0.txn();
          signer.db.attempt(&mut txn, &id);
          txn.commit();

          // Attempt to create the TX
          let machine = match signer.coin.attempt_send(tx).await {
            Err(e) => {
              error!("failed to attempt {:?}: {:?}", id, e);
              continue;
            }
            Ok(machine) => machine,
          };

          let (machine, preprocess) = machine.preprocess(&mut OsRng);
          signer.preprocessing.insert(id.id, machine);

          // Broadcast our preprocess
          if !signer.emit(SignerEvent::ProcessorMessage(ProcessorMessage::Preprocess {
            id,
            preprocess: preprocess.serialize(),
          })) {
            return;
          }
        }
      }
    }
  }
}

impl<C: Coin, D: Db> SignerHandle<C, D> {
  pub async fn keys(&self) -> ThresholdKeys<C::Curve> {
    self.signer.read().await.keys.clone()
  }

  pub async fn sign_transaction(
    &self,
    id: [u8; 32],
    start: SystemTime,
    tx: C::SignableTransaction,
    eventuality: C::Eventuality,
  ) {
    let mut signer = self.signer.write().await;

    if let Some(txs) = signer.db.completed(id) {
      debug!("SignTransaction order for ID we've already completed signing");

      // Find the first instance we noted as having completed *and can still get from our node*
      let mut tx = None;
      let mut buf = <C::Transaction as Transaction<C>>::Id::default();
      let tx_id_len = buf.as_ref().len();
      assert_eq!(txs.len() % tx_id_len, 0);
      for id in 0 .. (txs.len() / tx_id_len) {
        let start = id * tx_id_len;
        buf.as_mut().copy_from_slice(&txs[start .. (start + tx_id_len)]);
        if signer.coin.get_transaction(&buf).await.is_ok() {
          tx = Some(buf);
          break;
        }
      }

      // Fire the SignedTransaction event again
      if let Some(tx) = tx {
        if !signer.emit(SignerEvent::SignedTransaction { id, tx }) {
          return;
        }
      } else {
        warn!("completed signing {} yet couldn't get any of the completing TXs", hex::encode(id));
      }
      return;
    }

    let mut txn = signer.db.0.txn();
    signer.db.save_eventuality(&mut txn, id, eventuality);
    txn.commit();

    signer.signable.insert(id, (start, tx));
  }

  pub async fn handle(&self, msg: CoordinatorMessage) {
    self.signer.write().await.handle(msg).await;
  }
}
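A standalone sketch of the re-attempt arithmetic in `run` above: with SIGN_TIMEOUT = 30, the attempt number is simply how many full 30-second windows have elapsed since the signing session's scheduled start, so every timeout expiry bumps the attempt exactly once.

const SIGN_TIMEOUT: u64 = 30;

fn attempt_number(elapsed_secs: u64) -> u32 {
  u32::try_from(elapsed_secs / SIGN_TIMEOUT).unwrap()
}

fn main() {
  assert_eq!(attempt_number(0), 0);  // first attempt starts immediately
  assert_eq!(attempt_number(29), 0); // still within the first attempt's window
  assert_eq!(attempt_number(30), 1); // first timeout expired; re-attempt
  assert_eq!(attempt_number(95), 3);
}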
processor/src/tests/addresses.rs (new file, 98 lines)
@@ -0,0 +1,98 @@
use core::time::Duration;
|
||||
use std::collections::HashMap;
|
||||
|
||||
use rand_core::OsRng;
|
||||
|
||||
use frost::{Participant, ThresholdKeys};
|
||||
|
||||
use tokio::time::timeout;
|
||||
|
||||
use crate::{
|
||||
Plan, Db,
|
||||
coins::{OutputType, Output, Block, Coin},
|
||||
scanner::{ScannerEvent, Scanner, ScannerHandle},
|
||||
tests::{util::db::MemDb, sign},
|
||||
};
|
||||
|
||||
async fn spend<C: Coin, D: Db>(
|
||||
coin: &C,
|
||||
keys: &HashMap<Participant, ThresholdKeys<C::Curve>>,
|
||||
scanner: &mut ScannerHandle<C, D>,
|
||||
outputs: Vec<C::Output>,
|
||||
) -> Vec<C::Output> {
|
||||
let key = keys[&Participant::new(1).unwrap()].group_key();
|
||||
|
||||
let mut keys_txs = HashMap::new();
|
||||
for (i, keys) in keys {
|
||||
keys_txs.insert(
|
||||
*i,
|
||||
(
|
||||
keys.clone(),
|
||||
coin
|
||||
.prepare_send(
|
||||
keys.clone(),
|
||||
coin.get_latest_block_number().await.unwrap() - C::CONFIRMATIONS,
|
||||
// Send to a change output
|
||||
Plan { key, inputs: outputs.clone(), payments: vec![], change: Some(key) },
|
||||
coin.get_fee().await,
|
||||
)
|
||||
.await
|
||||
.unwrap()
|
||||
.0
|
||||
.unwrap(),
|
||||
),
|
||||
);
|
||||
}
|
||||
sign(coin.clone(), keys_txs).await;
|
||||
|
||||
for _ in 0 .. C::CONFIRMATIONS {
|
||||
coin.mine_block().await;
|
||||
}
|
||||
match timeout(Duration::from_secs(10), scanner.events.recv()).await.unwrap().unwrap() {
|
||||
ScannerEvent::Outputs(this_key, _, outputs) => {
|
||||
assert_eq!(this_key, key);
|
||||
assert_eq!(outputs.len(), 1);
|
||||
// Make sure this is actually a change output
|
||||
assert_eq!(outputs[0].kind(), OutputType::Change);
|
||||
outputs
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn test_addresses<C: Coin>(coin: C) {
|
||||
let mut keys = frost::tests::key_gen::<_, C::Curve>(&mut OsRng);
|
||||
for (_, keys) in keys.iter_mut() {
|
||||
C::tweak_keys(keys);
|
||||
}
|
||||
let key = keys[&Participant::new(1).unwrap()].group_key();
|
||||
|
||||
// Mine blocks so there's a confirmed block
|
||||
for _ in 0 .. C::CONFIRMATIONS {
|
||||
coin.mine_block().await;
|
||||
}
|
||||
|
||||
let db = MemDb::new();
|
||||
let (mut scanner, active_keys) = Scanner::new(coin.clone(), db.clone());
|
||||
assert!(active_keys.is_empty());
|
||||
scanner.rotate_key(coin.get_latest_block_number().await.unwrap(), key).await;
|
||||
|
||||
// Receive funds to the branch address and make sure it's properly identified
|
||||
let block_id = coin.test_send(C::branch_address(key)).await.id();
|
||||
|
||||
// Verify the Scanner picked them up
|
||||
let outputs =
|
||||
match timeout(Duration::from_secs(10), scanner.events.recv()).await.unwrap().unwrap() {
|
||||
ScannerEvent::Outputs(this_key, block, outputs) => {
|
||||
assert_eq!(this_key, key);
|
||||
assert_eq!(block, block_id);
|
||||
assert_eq!(outputs.len(), 1);
|
||||
assert_eq!(outputs[0].kind(), OutputType::Branch);
|
||||
outputs
|
||||
}
|
||||
};
|
||||
|
||||
// Spend the branch output, creating a change output and ensuring we actually get change
|
||||
let outputs = spend(&coin, &keys, &mut scanner, outputs).await;
|
||||
// Also test spending the change output
|
||||
spend(&coin, &keys, &mut scanner, outputs).await;
|
||||
}
|
||||
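The test above relies on the Scanner classifying outputs by the address which received them. As a sketch of what consumers branch on (assuming, per the tests in this diff, that OutputType has exactly these three variants):

// Illustrative only; the match-arm comments are not from this diff
use crate::coins::{Output, OutputType};

fn classify<O: Output>(output: &O) {
  match output.kind() {
    OutputType::External => { /* a user deposit to the externally-shared address */ }
    OutputType::Branch => { /* Scheduler fan-out, as received by test_addresses */ }
    OutputType::Change => { /* change returned to the key, as asserted in spend */ }
  }
}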
processor/src/tests/bitcoin.rs (deleted)
@@ -1,12 +0,0 @@
use crate::{
  coin::{Coin, Bitcoin},
  tests::test_send,
};

#[tokio::test]
async fn bitcoin() {
  let bitcoin = Bitcoin::new("http://serai:seraidex@127.0.0.1:18443".to_string()).await;
  bitcoin.fresh_chain().await;
  let fee = bitcoin.get_fee().await;
  test_send(bitcoin, fee).await;
}
processor/src/tests/key_gen.rs (new file, 136 lines)
@@ -0,0 +1,136 @@
use core::time::Duration;
use std::collections::HashMap;

use zeroize::Zeroizing;

use rand_core::{RngCore, OsRng};

use group::GroupEncoding;
use frost::{Participant, ThresholdParams, tests::clone_without};

use serai_client::validator_sets::primitives::{Session, ValidatorSetIndex, ValidatorSetInstance};

use messages::{SubstrateContext, key_gen::*};
use crate::{
  coins::Coin,
  key_gen::{KeyGenEvent, KeyGen},
  tests::util::db::MemDb,
};

const ID: KeyGenId = KeyGenId {
  set: ValidatorSetInstance { session: Session(1), index: ValidatorSetIndex(2) },
  attempt: 3,
};

pub async fn test_key_gen<C: Coin>() {
  let mut entropies = HashMap::new();
  let mut dbs = HashMap::new();
  let mut key_gens = HashMap::new();
  for i in 1 ..= 5 {
    let mut entropy = Zeroizing::new([0; 32]);
    OsRng.fill_bytes(entropy.as_mut());
    entropies.insert(i, entropy);
    dbs.insert(i, MemDb::new());
    key_gens.insert(i, KeyGen::<C, _>::new(dbs[&i].clone(), entropies[&i].clone()));
  }

  let mut all_commitments = HashMap::new();
  for i in 1 ..= 5 {
    let key_gen = key_gens.get_mut(&i).unwrap();
    if let KeyGenEvent::ProcessorMessage(ProcessorMessage::Commitments { id, commitments }) =
      key_gen
        .handle(CoordinatorMessage::GenerateKey {
          id: ID,
          params: ThresholdParams::new(3, 5, Participant::new(u16::try_from(i).unwrap()).unwrap())
            .unwrap(),
        })
        .await
    {
      assert_eq!(id, ID);
      all_commitments.insert(Participant::new(u16::try_from(i).unwrap()).unwrap(), commitments);
    } else {
      panic!("didn't get commitments back");
    }
  }

  // 1 is rebuilt on every step
  // 2 is rebuilt here
  // 3 ... are rebuilt once, one at each of the following steps
  let rebuild = |key_gens: &mut HashMap<_, _>, i| {
    key_gens.remove(&i);
    key_gens.insert(i, KeyGen::<C, _>::new(dbs[&i].clone(), entropies[&i].clone()));
  };
  rebuild(&mut key_gens, 1);
  rebuild(&mut key_gens, 2);

  let mut all_shares = HashMap::new();
  for i in 1 ..= 5 {
    let key_gen = key_gens.get_mut(&i).unwrap();
    let i = Participant::new(u16::try_from(i).unwrap()).unwrap();
    if let KeyGenEvent::ProcessorMessage(ProcessorMessage::Shares { id, shares }) = key_gen
      .handle(CoordinatorMessage::Commitments {
        id: ID,
        commitments: clone_without(&all_commitments, &i),
      })
      .await
    {
      assert_eq!(id, ID);
      all_shares.insert(i, shares);
    } else {
      panic!("didn't get shares back");
    }
  }

  // Rebuild 1 and 3
  rebuild(&mut key_gens, 1);
  rebuild(&mut key_gens, 3);

  let mut res = None;
  for i in 1 ..= 5 {
    let key_gen = key_gens.get_mut(&i).unwrap();
    let i = Participant::new(u16::try_from(i).unwrap()).unwrap();
    if let KeyGenEvent::ProcessorMessage(ProcessorMessage::GeneratedKey { id, key }) = key_gen
      .handle(CoordinatorMessage::Shares {
        id: ID,
        shares: all_shares
          .iter()
          .filter_map(|(l, shares)| if i == *l { None } else { Some((*l, shares[&i].clone())) })
          .collect(),
      })
      .await
    {
      assert_eq!(id, ID);
      if res.is_none() {
        res = Some(key.clone());
      }
      assert_eq!(res.as_ref().unwrap(), &key);
    } else {
      panic!("didn't get key back");
    }
  }

  // Rebuild 1 and 4
  rebuild(&mut key_gens, 1);
  rebuild(&mut key_gens, 4);

  for i in 1 ..= 5 {
    let key_gen = key_gens.get_mut(&i).unwrap();
    if let KeyGenEvent::KeyConfirmed { activation_number, keys } = key_gen
      .handle(CoordinatorMessage::ConfirmKey {
        context: SubstrateContext { time: 0, coin_latest_block_number: 111 },
        id: ID,
      })
      .await
    {
      assert_eq!(activation_number, 111);
      assert_eq!(
        keys.params(),
        ThresholdParams::new(3, 5, Participant::new(u16::try_from(i).unwrap()).unwrap()).unwrap()
      );
      assert_eq!(keys.group_key().to_bytes().as_ref(), res.as_ref().unwrap());
    } else {
      panic!("didn't get key back");
    }
  }
  tokio::time::sleep(Duration::from_secs(1)).await;
}
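The rebuild closure in this test models a reboot: a KeyGen dropped mid-protocol and reconstructed from the same DB and entropy must resume cleanly. A sketch of the invariant, with `db` and `entropy` as hypothetical bindings:

// State must live in the DB, not in the KeyGen value itself
let key_gen = KeyGen::<C, _>::new(db.clone(), entropy.clone());
// ... handle GenerateKey/Commitments, which persist progress into `db` ...
drop(key_gen);
// A fresh instance over the same DB must answer the next CoordinatorMessage identically
let key_gen = KeyGen::<C, _>::new(db.clone(), entropy.clone());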
processor/src/tests/literal/mod.rs (new file, 43 lines)
@@ -0,0 +1,43 @@
#[cfg(feature = "bitcoin")]
mod bitcoin {
  use crate::coins::Bitcoin;

  async fn bitcoin() -> Bitcoin {
    let bitcoin = Bitcoin::new("http://serai:seraidex@127.0.0.1:18443".to_string());
    bitcoin.fresh_chain().await;
    bitcoin
  }

  test_coin!(
    Bitcoin,
    bitcoin,
    bitcoin_key_gen,
    bitcoin_scanner,
    bitcoin_signer,
    bitcoin_wallet,
    bitcoin_addresses,
  );
}

#[cfg(feature = "monero")]
mod monero {
  use crate::coins::{Coin, Monero};

  async fn monero() -> Monero {
    let monero = Monero::new("http://127.0.0.1:18081".to_string());
    while monero.get_latest_block_number().await.unwrap() < 150 {
      monero.mine_block().await;
    }
    monero
  }

  test_coin!(
    Monero,
    monero,
    monero_key_gen,
    monero_scanner,
    monero_signer,
    monero_wallet,
    monero_addresses,
  );
}
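test_coin!, defined in tests/mod.rs below, stamps out one #[tokio::test] per flow. Roughly, the Bitcoin invocation above expands to the following (a sketch eliding the sequential!/async_sequential! plumbing, which serializes the node-touching tests):

use crate::tests::{test_key_gen, test_scanner, test_signer, test_wallet, test_addresses};

#[tokio::test]
async fn bitcoin_key_gen() {
  test_key_gen::<Bitcoin>().await;
}

// bitcoin_scanner, bitcoin_signer, bitcoin_wallet, and bitcoin_addresses all take this
// shape, each calling the module-local bitcoin() constructor
#[tokio::test]
async fn bitcoin_scanner() {
  test_scanner(bitcoin().await).await;
}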
processor/src/tests/mod.rs (modified)
@@ -1,5 +1,99 @@
mod send;
pub(crate) use send::test_send;
pub(crate) mod util;

mod bitcoin;
mod monero;
mod key_gen;
pub(crate) use key_gen::test_key_gen;

mod scanner;
pub(crate) use scanner::test_scanner;

mod signer;
pub(crate) use signer::{sign, test_signer};

mod wallet;
pub(crate) use wallet::test_wallet;

mod addresses;
pub(crate) use addresses::test_addresses;

// Effective Once
lazy_static::lazy_static! {
  static ref INIT_LOGGER: () = env_logger::init();
}

#[macro_export]
macro_rules! sequential {
  () => {
    lazy_static::lazy_static! {
      static ref SEQUENTIAL: tokio::sync::Mutex<()> = tokio::sync::Mutex::new(());
    }
  };
}

#[macro_export]
macro_rules! async_sequential {
  ($(async fn $name: ident() $body: block)*) => {
    $(
      #[tokio::test]
      async fn $name() {
        *$crate::tests::INIT_LOGGER;
        let guard = SEQUENTIAL.lock().await;
        let local = tokio::task::LocalSet::new();
        local.run_until(async move {
          if let Err(err) = tokio::task::spawn_local(async move { $body }).await {
            drop(guard);
            Err(err).unwrap()
          }
        }).await;
      }
    )*
  }
}
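A usage sketch of this pair of macros: sequential! declares the per-module lock which async_sequential! acquires, so tests sharing one node run one at a time (the test name here is hypothetical):

sequential!();

async_sequential! {
  async fn uses_the_shared_node() {
    // Runs on a tokio LocalSet, serialized behind SEQUENTIAL; a panic in the body is
    // re-raised by Err(err).unwrap() above, after drop(guard) releases the lock
  }
}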
#[macro_export]
macro_rules! test_coin {
  (
    $C: ident,
    $coin: ident,
    $key_gen: ident,
    $scanner: ident,
    $signer: ident,
    $wallet: ident,
    $addresses: ident,
  ) => {
    use $crate::tests::{test_key_gen, test_scanner, test_signer, test_wallet, test_addresses};

    // This doesn't interact with a node and accordingly doesn't need to be run sequentially
    #[tokio::test]
    async fn $key_gen() {
      test_key_gen::<$C>().await;
    }

    sequential!();

    async_sequential! {
      async fn $scanner() {
        test_scanner($coin().await).await;
      }
    }

    async_sequential! {
      async fn $signer() {
        test_signer($coin().await).await;
      }
    }

    async_sequential! {
      async fn $wallet() {
        test_wallet($coin().await).await;
      }
    }

    async_sequential! {
      async fn $addresses() {
        test_addresses($coin().await).await;
      }
    }
  };
}

mod literal;
processor/src/tests/monero.rs (deleted)
@@ -1,11 +0,0 @@
use crate::{
  coin::{Coin, Monero},
  tests::test_send,
};

#[tokio::test]
async fn monero() {
  let monero = Monero::new("http://127.0.0.1:18081".to_string()).await;
  let fee = monero.get_fee().await;
  test_send(monero, fee).await;
}
processor/src/tests/scanner.rs (new file, 72 lines)
@@ -0,0 +1,72 @@
use core::time::Duration;
use std::sync::{Arc, Mutex};

use rand_core::OsRng;

use frost::Participant;

use tokio::time::timeout;

use crate::{
  coins::{OutputType, Output, Block, Coin},
  scanner::{ScannerEvent, Scanner, ScannerHandle},
  tests::util::db::MemDb,
};

pub async fn test_scanner<C: Coin>(coin: C) {
  let mut keys =
    frost::tests::key_gen::<_, C::Curve>(&mut OsRng).remove(&Participant::new(1).unwrap()).unwrap();
  C::tweak_keys(&mut keys);

  // Mine blocks so there's a confirmed block
  for _ in 0 .. C::CONFIRMATIONS {
    coin.mine_block().await;
  }

  let first = Arc::new(Mutex::new(true));
  let db = MemDb::new();
  let new_scanner = || async {
    let (scanner, active_keys) = Scanner::new(coin.clone(), db.clone());
    let mut first = first.lock().unwrap();
    if *first {
      assert!(active_keys.is_empty());
      scanner.rotate_key(coin.get_latest_block_number().await.unwrap(), keys.group_key()).await;
      *first = false;
    } else {
      assert_eq!(active_keys.len(), 1);
    }
    scanner
  };
  let scanner = new_scanner().await;

  // Receive funds
  let block_id = coin.test_send(C::address(keys.group_key())).await.id();

  // Verify the Scanner picked them up
  let verify_event = |mut scanner: ScannerHandle<C, MemDb>| async {
    let outputs =
      match timeout(Duration::from_secs(10), scanner.events.recv()).await.unwrap().unwrap() {
        ScannerEvent::Outputs(key, block, outputs) => {
          assert_eq!(key, keys.group_key());
          assert_eq!(block, block_id);
          assert_eq!(outputs.len(), 1);
          assert_eq!(outputs[0].kind(), OutputType::External);
          outputs
        }
      };
    (scanner, outputs)
  };
  let (mut scanner, outputs) = verify_event(scanner).await;

  // Create a new scanner off the current DB and verify it re-emits the above events
  verify_event(new_scanner().await).await;

  // Acknowledge the block
  assert_eq!(scanner.ack_block(keys.group_key(), block_id.clone()).await, outputs);

  // There should be no more events
  assert!(timeout(Duration::from_secs(10), scanner.events.recv()).await.is_err());

  // Create a new scanner off the current DB and make sure it also does nothing
  assert!(timeout(Duration::from_secs(10), new_scanner().await.events.recv()).await.is_err());
}
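Stated directly, the tail of this test pins down the Scanner's delivery contract: events are at-least-once until acknowledged, and exactly-zero afterwards. The three lines that encode it, lifted from above:

// Pending events survive reboot (a fresh Scanner over the same DB re-emits them)...
verify_event(new_scanner().await).await;
// ...acknowledging the block consumes them...
assert_eq!(scanner.ack_block(keys.group_key(), block_id.clone()).await, outputs);
// ...after which neither the live scanner nor a rebuilt one emits anything
assert!(timeout(Duration::from_secs(10), new_scanner().await.events.recv()).await.is_err());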
processor/src/tests/send.rs (deleted)
@@ -1,113 +0,0 @@
use std::{
  sync::{Arc, RwLock},
  collections::HashMap,
};

use async_trait::async_trait;

use rand_core::OsRng;

use frost::Participant;

use crate::{
  NetworkError, Network,
  coin::Coin,
  wallet::{WalletKeys, MemCoinDb, Wallet},
};

#[derive(Clone)]
struct LocalNetwork {
  i: Participant,
  size: u16,
  round: usize,
  #[allow(clippy::type_complexity)]
  rounds: Arc<RwLock<Vec<HashMap<Participant, Vec<u8>>>>>,
}

impl LocalNetwork {
  fn new(size: u16) -> Vec<LocalNetwork> {
    let rounds = Arc::new(RwLock::new(vec![]));
    let mut res = vec![];
    for i in 1 ..= size {
      res.push(LocalNetwork {
        i: Participant::new(i).unwrap(),
        size,
        round: 0,
        rounds: rounds.clone(),
      });
    }
    res
  }
}

#[async_trait]
impl Network for LocalNetwork {
  async fn round(&mut self, data: Vec<u8>) -> Result<HashMap<Participant, Vec<u8>>, NetworkError> {
    {
      let mut rounds = self.rounds.write().unwrap();
      if rounds.len() == self.round {
        rounds.push(HashMap::new());
      }
      rounds[self.round].insert(self.i, data);
    }

    while {
      let read = self.rounds.try_read().unwrap();
      read[self.round].len() != usize::from(self.size)
    } {
      tokio::task::yield_now().await;
    }

    let mut res = self.rounds.try_read().unwrap()[self.round].clone();
    res.remove(&self.i);
    self.round += 1;
    Ok(res)
  }
}

pub async fn test_send<C: Coin + Clone>(coin: C, fee: C::Fee) {
  // Mine blocks so there's a confirmed block
  coin.mine_block().await;
  let latest = coin.get_latest_block_number().await.unwrap();

  let mut keys = frost::tests::key_gen::<_, C::Curve>(&mut OsRng);
  let threshold = keys[&Participant::new(1).unwrap()].params().t();
  let mut networks = LocalNetwork::new(threshold);

  let mut wallets = vec![];
  for i in 1 ..= threshold {
    let mut wallet = Wallet::new(MemCoinDb::new(), coin.clone());
    wallet.acknowledge_block(0, latest);
    wallet.add_keys(&WalletKeys::new(keys.remove(&Participant::new(i).unwrap()).unwrap(), 0));
    wallets.push(wallet);
  }

  // Get the chain to a length where blocks have sufficient confirmations
  while (latest + (C::CONFIRMATIONS - 1)) > coin.get_latest_block_number().await.unwrap() {
    coin.mine_block().await;
  }

  for wallet in wallets.iter_mut() {
    // Poll to activate the keys
    wallet.poll().await.unwrap();
  }

  coin.test_send(wallets[0].address()).await;

  let mut futures = vec![];
  for (network, wallet) in networks.iter_mut().zip(wallets.iter_mut()) {
    wallet.poll().await.unwrap();

    let latest = coin.get_latest_block_number().await.unwrap();
    wallet.acknowledge_block(1, latest - (C::CONFIRMATIONS - 1));
    let signable = wallet
      .prepare_sends(1, vec![(wallet.address(), 100000000)], fee)
      .await
      .unwrap()
      .1
      .swap_remove(0);
    futures.push(wallet.attempt_send(network, signable));
  }

  println!("{:?}", hex::encode(futures::future::join_all(futures).await.swap_remove(0).unwrap()));
}
processor/src/tests/signer.rs (new file, 187 lines)
@@ -0,0 +1,187 @@
use std::{
  time::{Duration, SystemTime},
  collections::HashMap,
};

use rand_core::OsRng;

use group::GroupEncoding;
use frost::{
  Participant, ThresholdKeys,
  dkg::tests::{key_gen, clone_without},
};

use tokio::time::timeout;

use messages::sign::*;
use crate::{
  Payment, Plan,
  coins::{Output, Transaction, Coin},
  signer::{SignerEvent, Signer},
  tests::util::db::MemDb,
};

#[allow(clippy::type_complexity)]
pub async fn sign<C: Coin>(
  coin: C,
  mut keys_txs: HashMap<
    Participant,
    (ThresholdKeys<C::Curve>, (C::SignableTransaction, C::Eventuality)),
  >,
) -> <C::Transaction as Transaction<C>>::Id {
  let actual_id = SignId {
    key: keys_txs[&Participant::new(1).unwrap()].0.group_key().to_bytes().as_ref().to_vec(),
    id: [0xaa; 32],
    attempt: 0,
  };

  let signing_set = actual_id.signing_set(&keys_txs[&Participant::new(1).unwrap()].0.params());
  let mut keys = HashMap::new();
  let mut txs = HashMap::new();
  for (i, (these_keys, this_tx)) in keys_txs.drain() {
    assert_eq!(actual_id.signing_set(&these_keys.params()), signing_set);
    keys.insert(i, these_keys);
    txs.insert(i, this_tx);
  }

  let mut signers = HashMap::new();
  for i in 1 ..= keys.len() {
    let i = Participant::new(u16::try_from(i).unwrap()).unwrap();
    signers.insert(i, Signer::new(MemDb::new(), coin.clone(), keys.remove(&i).unwrap()));
  }

  let start = SystemTime::now();
  for i in 1 ..= signers.len() {
    let i = Participant::new(u16::try_from(i).unwrap()).unwrap();
    let (tx, eventuality) = txs.remove(&i).unwrap();
    signers[&i].sign_transaction(actual_id.id, start, tx, eventuality).await;
  }

  let mut preprocesses = HashMap::new();
  for i in &signing_set {
    if let Some(SignerEvent::ProcessorMessage(ProcessorMessage::Preprocess { id, preprocess })) =
      signers.get_mut(i).unwrap().events.recv().await
    {
      assert_eq!(id, actual_id);
      preprocesses.insert(*i, preprocess);
    } else {
      panic!("didn't get preprocess back");
    }
  }

  let mut shares = HashMap::new();
  for i in &signing_set {
    signers[i]
      .handle(CoordinatorMessage::Preprocesses {
        id: actual_id.clone(),
        preprocesses: clone_without(&preprocesses, i),
      })
      .await;
    if let Some(SignerEvent::ProcessorMessage(ProcessorMessage::Share { id, share })) =
      signers.get_mut(i).unwrap().events.recv().await
    {
      assert_eq!(id, actual_id);
      shares.insert(*i, share);
    } else {
      panic!("didn't get share back");
    }
  }

  let mut tx_id = None;
  for i in &signing_set {
    signers[i]
      .handle(CoordinatorMessage::Shares {
        id: actual_id.clone(),
        shares: clone_without(&shares, i),
      })
      .await;
    if let Some(SignerEvent::SignedTransaction { id, tx }) =
      signers.get_mut(i).unwrap().events.recv().await
    {
      assert_eq!(id, actual_id.id);
      if tx_id.is_none() {
        tx_id = Some(tx.clone());
      }
      assert_eq!(tx_id, Some(tx));
    } else {
      panic!("didn't get TX back");
    }
  }

  // Make sure the signers not included didn't do anything
  let mut excluded = (1 ..= signers.len())
    .map(|i| Participant::new(u16::try_from(i).unwrap()).unwrap())
    .collect::<Vec<_>>();
  for i in signing_set {
    excluded.remove(excluded.binary_search(&i).unwrap());
  }
  for i in excluded {
    assert!(timeout(
      Duration::from_secs(1),
      signers.get_mut(&Participant::new(u16::try_from(i).unwrap()).unwrap()).unwrap().events.recv()
    )
    .await
    .is_err());
  }

  tx_id.unwrap()
}

pub async fn test_signer<C: Coin>(coin: C) {
  let mut keys = key_gen(&mut OsRng);
  for (_, keys) in keys.iter_mut() {
    C::tweak_keys(keys);
  }
  let key = keys[&Participant::new(1).unwrap()].group_key();

  let outputs = coin.get_outputs(&coin.test_send(C::address(key)).await, key).await.unwrap();
  let sync_block = coin.get_latest_block_number().await.unwrap() - C::CONFIRMATIONS;
  let fee = coin.get_fee().await;

  let amount = 2 * C::DUST;
  let mut keys_txs = HashMap::new();
  let mut eventualities = vec![];
  for (i, keys) in keys.drain() {
    let (signable, eventuality) = coin
      .prepare_send(
        keys.clone(),
        sync_block,
        Plan {
          key,
          inputs: outputs.clone(),
          payments: vec![Payment { address: C::address(key), data: None, amount }],
          change: Some(key),
        },
        fee,
      )
      .await
      .unwrap()
      .0
      .unwrap();

    eventualities.push(eventuality.clone());
    keys_txs.insert(i, (keys, (signable, eventuality)));
  }

  // The signer may not publish the TX if it has a connection error
  // It doesn't fail in this case
  let txid = sign(coin.clone(), keys_txs).await;
  let tx = coin.get_transaction(&txid).await.unwrap();
  assert_eq!(tx.id(), txid);
  // Mine a block, and scan it, to ensure that the TX actually made it on chain
  coin.mine_block().await;
  let outputs = coin
    .get_outputs(&coin.get_block(coin.get_latest_block_number().await.unwrap()).await.unwrap(), key)
    .await
    .unwrap();
  assert_eq!(outputs.len(), 2);
  // Adjust the amount for the fees
  let amount = amount - tx.fee(&coin).await;
  // Check either output since Monero will randomize its output order
  assert!((outputs[0].amount() == amount) || (outputs[1].amount() == amount));

  // Check the eventualities pass
  for eventuality in eventualities {
    assert!(coin.confirm_completion(&eventuality, &tx));
  }
}
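One invariant in sign above deserves calling out: every participant derives the signing set purely from the SignId, so all of them must agree on it or FROST preprocessing would diverge. A restatement of the check the drain loop performs:

// Each participant's params must yield the same deterministic t-of-n signing set
let reference = actual_id.signing_set(&keys_txs[&Participant::new(1).unwrap()].0.params());
for (_, (keys, _)) in &keys_txs {
  assert_eq!(actual_id.signing_set(&keys.params()), reference);
}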
processor/src/tests/util/db.rs (new file, 42 lines)
@@ -0,0 +1,42 @@
use std::{
  sync::{Arc, RwLock},
  collections::HashMap,
};

use crate::{DbTxn, Db};

#[derive(Clone, Debug)]
pub struct MemDb(Arc<RwLock<HashMap<Vec<u8>, Vec<u8>>>>);
impl MemDb {
  pub(crate) fn new() -> MemDb {
    MemDb(Arc::new(RwLock::new(HashMap::new())))
  }
}
impl Default for MemDb {
  fn default() -> MemDb {
    MemDb::new()
  }
}

impl DbTxn for MemDb {
  fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
    self.0.write().unwrap().insert(key.as_ref().to_vec(), value.as_ref().to_vec());
  }
  fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
    self.0.read().unwrap().get(key.as_ref()).cloned()
  }
  fn del(&mut self, key: impl AsRef<[u8]>) {
    self.0.write().unwrap().remove(key.as_ref());
  }
  fn commit(self) {}
}

impl Db for MemDb {
  type Transaction = MemDb;
  fn txn(&mut self) -> MemDb {
    Self(self.0.clone())
  }
  fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
    self.0.read().unwrap().get(key.as_ref()).cloned()
  }
}
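Note the transaction semantics here: txn() clones the Arc, so puts hit the shared map immediately and commit() is a no-op. That's deliberate slack for tests, which need shared-state plumbing rather than atomicity. A small demonstration of the non-atomicity:

use crate::{Db, DbTxn};

let mut db = MemDb::new();
let mut txn = db.txn();
txn.put(b"k", b"v");
// Visible before commit, since the txn shares the underlying HashMap
assert_eq!(db.get(b"k"), Some(b"v".to_vec()));
txn.commit(); // no-op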
processor/src/tests/util/mod.rs (new file, 1 line)
@@ -0,0 +1 @@
pub(crate) mod db;
processor/src/tests/wallet.rs (new file, 108 lines)
@@ -0,0 +1,108 @@
use std::{time::Duration, collections::HashMap};

use rand_core::OsRng;

use frost::{Participant, dkg::tests::key_gen};

use tokio::time::timeout;

use crate::{
  Payment, Plan,
  coins::{Output, Transaction, Block, Coin},
  scanner::{ScannerEvent, Scanner},
  scheduler::Scheduler,
  tests::{util::db::MemDb, sign},
};

// Tests the Scanner, Scheduler, and Signer together
pub async fn test_wallet<C: Coin>(coin: C) {
  let mut keys = key_gen(&mut OsRng);
  for (_, keys) in keys.iter_mut() {
    C::tweak_keys(keys);
  }
  let key = keys[&Participant::new(1).unwrap()].group_key();

  let (mut scanner, active_keys) = Scanner::new(coin.clone(), MemDb::new());
  assert!(active_keys.is_empty());
  let (block_id, outputs) = {
    scanner.rotate_key(coin.get_latest_block_number().await.unwrap(), key).await;

    let block_id = coin.test_send(C::address(key)).await.id();

    match timeout(Duration::from_secs(10), scanner.events.recv()).await.unwrap().unwrap() {
      ScannerEvent::Outputs(this_key, block, outputs) => {
        assert_eq!(this_key, key);
        assert_eq!(block, block_id);
        assert_eq!(outputs.len(), 1);
        (block_id, outputs)
      }
    }
  };

  let mut scheduler = Scheduler::new(key);
  // Add these outputs, which should return no plans
  assert!(scheduler.add_outputs(outputs.clone()).is_empty());

  let amount = 2 * C::DUST;
  let plans = scheduler.schedule(vec![Payment { address: C::address(key), data: None, amount }]);
  assert_eq!(
    plans,
    vec![Plan {
      key,
      inputs: outputs,
      payments: vec![Payment { address: C::address(key), data: None, amount }],
      change: Some(key),
    }]
  );

  {
    let mut buf = vec![];
    plans[0].write(&mut buf).unwrap();
    assert_eq!(plans[0], Plan::<C>::read::<&[u8]>(&mut buf.as_ref()).unwrap());
  }

  // Execute the plan
  let fee = coin.get_fee().await;
  let mut keys_txs = HashMap::new();
  let mut eventualities = vec![];
  for (i, keys) in keys.drain() {
    let (signable, eventuality) = coin
      .prepare_send(keys.clone(), coin.get_block_number(&block_id).await, plans[0].clone(), fee)
      .await
      .unwrap()
      .0
      .unwrap();

    eventualities.push(eventuality.clone());
    keys_txs.insert(i, (keys, (signable, eventuality)));
  }

  let txid = sign(coin.clone(), keys_txs).await;
  let tx = coin.get_transaction(&txid).await.unwrap();
  coin.mine_block().await;
  let block_number = coin.get_latest_block_number().await.unwrap();
  let block = coin.get_block(block_number).await.unwrap();
  let outputs = coin.get_outputs(&block, key).await.unwrap();
  assert_eq!(outputs.len(), 2);
  let amount = amount - tx.fee(&coin).await;
  assert!((outputs[0].amount() == amount) || (outputs[1].amount() == amount));

  for eventuality in eventualities {
    assert!(coin.confirm_completion(&eventuality, &tx));
  }

  for _ in 1 .. C::CONFIRMATIONS {
    coin.mine_block().await;
  }

  match timeout(Duration::from_secs(10), scanner.events.recv()).await.unwrap().unwrap() {
    ScannerEvent::Outputs(this_key, block_id, these_outputs) => {
      assert_eq!(this_key, key);
      assert_eq!(block_id, block.id());
      assert_eq!(these_outputs, outputs);
    }
  }

  // Check the Scanner DB can reload the outputs
  assert_eq!(scanner.ack_block(key, block.id()).await, outputs);
}
processor/src/wallet.rs (deleted)
@@ -1,385 +0,0 @@
use std::collections::HashMap;

use rand_core::OsRng;

use group::GroupEncoding;

use transcript::{Transcript, RecommendedTranscript};
use frost::{
  curve::{Ciphersuite, Curve},
  FrostError, ThresholdKeys,
  sign::{Writable, PreprocessMachine, SignMachine, SignatureMachine},
};

use crate::{
  coin::{CoinError, Output, Coin},
  SignError, Network,
};

pub struct WalletKeys<C: Curve> {
  keys: ThresholdKeys<C>,
  creation_block: usize,
}

impl<C: Curve> WalletKeys<C> {
  pub fn new(keys: ThresholdKeys<C>, creation_block: usize) -> WalletKeys<C> {
    WalletKeys { keys, creation_block }
  }

  // Bind this key to a specific network by applying an additive offset
  // While it would be fine to just use C::ID, including the group key creates distinct
  // offsets instead of static offsets. Under a statically offset system, a BTC key could
  // have X subtracted to find the potential group key, and then have Y added to find the
  // potential ETH group key. While this shouldn't be an issue, as this isn't a private
  // system, there are potentially other benefits to binding this to a specific group key
  // It's no longer possible to influence group key gen to key cancel without breaking the hash
  // function as well, although that degree of influence means key gen is broken already
  fn bind(&self, chain: &[u8]) -> ThresholdKeys<C> {
    const DST: &[u8] = b"Serai Processor Wallet Chain Bind";
    let mut transcript = RecommendedTranscript::new(DST);
    transcript.append_message(b"chain", chain);
    transcript.append_message(b"curve", C::ID);
    transcript.append_message(b"group_key", self.keys.group_key().to_bytes());
    self.keys.offset(<C as Ciphersuite>::hash_to_F(DST, &transcript.challenge(b"offset")))
  }
}
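Concretely, the binding argued for above gives the same group key unrelated derived keys per chain, since the transcript differs in its "chain" message. A sketch (bind is private, so this is illustrative only; wallet_keys is a hypothetical WalletKeys value):

let btc = wallet_keys.bind(b"BTC");
let eth = wallet_keys.bind(b"ETH");
// Distinct transcripts yield distinct offsets, hence distinct group keys
assert!(btc.group_key() != eth.group_key());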
pub trait CoinDb {
  // Set a block as scanned to
  fn scanned_to_block(&mut self, block: usize);
  // Acknowledge a specific block number as part of a canonical block
  fn acknowledge_block(&mut self, canonical: usize, block: usize);

  // Adds an output to the DB. Returns false if the output was already added
  fn add_output<O: Output>(&mut self, output: &O) -> bool;

  // Block this coin has been scanned to (inclusive)
  fn scanned_block(&self) -> usize;
  // Acknowledged block for a given canonical block
  fn acknowledged_block(&self, canonical: usize) -> usize;
}

pub struct MemCoinDb {
  // Block number of the block this coin has been scanned to
  scanned_block: usize,
  // Acknowledged block for a given canonical block
  acknowledged_blocks: HashMap<usize, usize>,
  outputs: HashMap<Vec<u8>, Vec<u8>>,
}

impl MemCoinDb {
  pub fn new() -> MemCoinDb {
    MemCoinDb { scanned_block: 0, acknowledged_blocks: HashMap::new(), outputs: HashMap::new() }
  }
}

impl CoinDb for MemCoinDb {
  fn scanned_to_block(&mut self, block: usize) {
    self.scanned_block = block;
  }

  fn acknowledge_block(&mut self, canonical: usize, block: usize) {
    debug_assert!(!self.acknowledged_blocks.contains_key(&canonical));
    self.acknowledged_blocks.insert(canonical, block);
  }

  fn add_output<O: Output>(&mut self, output: &O) -> bool {
    // This would be insecure as we're indexing by ID and this will replace the output as a whole
    // Multiple outputs may have the same ID in edge cases such as Monero, where outputs are ID'd
    // by output key, not by hash + index
    // self.outputs.insert(output.id(), output).is_some()
    let id = output.id().as_ref().to_vec();
    if self.outputs.contains_key(&id) {
      return false;
    }
    self.outputs.insert(id, output.serialize());
    true
  }

  fn scanned_block(&self) -> usize {
    self.scanned_block
  }

  fn acknowledged_block(&self, canonical: usize) -> usize {
    self.acknowledged_blocks[&canonical]
  }
}

fn select_inputs<C: Coin>(inputs: &mut Vec<C::Output>) -> (Vec<C::Output>, u64) {
  // Sort to ensure determinism. Inefficient, yet produces the most legible code to be optimized
  // later
  inputs.sort_by_key(|a| a.amount());

  // Select the maximum amount of outputs possible
  let res = inputs.split_off(inputs.len() - C::MAX_INPUTS.min(inputs.len()));
  // Calculate their sum value, minus the fee needed to spend them
  let sum = res.iter().map(|input| input.amount()).sum();
  // sum -= C::MAX_FEE; // TODO
  (res, sum)
}

fn select_outputs<C: Coin>(
  payments: &mut Vec<(C::Address, u64)>,
  value: &mut u64,
) -> Vec<(C::Address, u64)> {
  // Prioritize large payments which will most efficiently use large inputs
  payments.sort_by(|a, b| a.1.cmp(&b.1));

  // Grab the payments this will successfully fund
  let mut outputs = vec![];
  let mut p = payments.len();
  while p != 0 {
    p -= 1;
    if *value >= payments[p].1 {
      *value -= payments[p].1;
      // Swap remove will either pop the tail or insert an element that wouldn't fit, making it
      // always safe to move past
      outputs.push(payments.swap_remove(p));
    }
    // Doesn't break in this else case as a smaller payment may still fit
  }

  outputs
}

// Optimizes on the expectation selected/inputs are sorted from lowest value to highest
fn refine_inputs<C: Coin>(
  selected: &mut Vec<C::Output>,
  inputs: &mut Vec<C::Output>,
  mut remaining: u64,
) {
  // Drop unused inputs
  let mut s = 0;
  while remaining > selected[s].amount() {
    remaining -= selected[s].amount();
    s += 1;
  }
  // Add them back to the inputs pool
  inputs.extend(selected.drain(.. s));

  // Replace large inputs with smaller ones
  for s in (0 .. selected.len()).rev() {
    for input in inputs.iter_mut() {
      // Doesn't break due to inputs no longer being sorted
      // This could be made faster if we prioritized small input usage over transaction size/fees
      // TODO: Consider. This would implicitly consolidate inputs which would be advantageous
      if selected[s].amount() < input.amount() {
        continue;
      }

      // If we can successfully replace this input, do so
      let diff = selected[s].amount() - input.amount();
      if remaining > diff {
        remaining -= diff;

        let old = selected[s].clone();
        selected[s] = input.clone();
        *input = old;
      }
    }
  }
}

#[allow(clippy::type_complexity)]
fn select_inputs_outputs<C: Coin>(
  inputs: &mut Vec<C::Output>,
  outputs: &mut Vec<(C::Address, u64)>,
) -> (Vec<C::Output>, Vec<(C::Address, u64)>) {
  if inputs.is_empty() {
    return (vec![], vec![]);
  }

  let (mut selected, mut value) = select_inputs::<C>(inputs);

  let outputs = select_outputs::<C>(outputs, &mut value);
  if outputs.is_empty() {
    inputs.extend(selected);
    return (vec![], vec![]);
  }

  refine_inputs::<C>(&mut selected, inputs, value);
  (selected, outputs)
}
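A worked trace of the pipeline above, with illustrative numbers: say C::MAX_INPUTS = 2, inputs hold amounts [1, 5, 10], and payments are [(A, 8), (B, 3)]. select_inputs sorts ascending and splits off the largest two, so selected = [5, 10] and value = 15. select_outputs walks the sorted payments from the largest down, funding A (value drops to 7) then B (value drops to 4). refine_inputs then tries to swap the 5 for the unused 1, but the shortfall of 5 - 1 = 4 is not strictly less than the remaining 4, so no swap happens and the final selection spends [5, 10] to pay A and B, with 4 left over (change, ignoring the TODO'd fee deduction).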
#[allow(clippy::type_complexity)]
pub struct Wallet<D: CoinDb, C: Coin> {
  db: D,
  coin: C,
  keys: Vec<(ThresholdKeys<C::Curve>, Vec<C::Output>)>,
  pending: Vec<(usize, ThresholdKeys<C::Curve>)>,
}

impl<D: CoinDb, C: Coin> Wallet<D, C> {
  pub fn new(db: D, coin: C) -> Wallet<D, C> {
    Wallet { db, coin, keys: vec![], pending: vec![] }
  }

  pub fn scanned_block(&self) -> usize {
    self.db.scanned_block()
  }
  pub fn acknowledge_block(&mut self, canonical: usize, block: usize) {
    self.db.acknowledge_block(canonical, block);
  }
  pub fn acknowledged_block(&self, canonical: usize) -> usize {
    self.db.acknowledged_block(canonical)
  }

  pub fn add_keys(&mut self, keys: &WalletKeys<C::Curve>) {
    let creation_block = keys.creation_block;
    let mut keys = keys.bind(C::ID);
    self.coin.tweak_keys(&mut keys);
    self.pending.push((self.acknowledged_block(creation_block), keys));
  }

  pub fn address(&self) -> C::Address {
    self.coin.address(self.keys[self.keys.len() - 1].0.group_key())
  }

  pub async fn poll(&mut self) -> Result<(), CoinError> {
    if self.coin.get_latest_block_number().await? < (C::CONFIRMATIONS - 1) {
      return Ok(());
    }
    let confirmed_block = self.coin.get_latest_block_number().await? - (C::CONFIRMATIONS - 1);

    // Will never scan the genesis block, which shouldn't be an issue
    for b in (self.scanned_block() + 1) ..= confirmed_block {
      // If any keys activated at this block, shift them over
      {
        let mut k = 0;
        while k < self.pending.len() {
          // TODO
          //if b < self.pending[k].0 {
          //} else if b == self.pending[k].0 {
          if b <= self.pending[k].0 {
            self.keys.push((self.pending.swap_remove(k).1, vec![]));
          } else {
            k += 1;
          }
        }
      }

      let block = self.coin.get_block(b).await?;
      for (keys, outputs) in self.keys.iter_mut() {
        outputs.extend(
          self
            .coin
            .get_outputs(&block, keys.group_key())
            .await?
            .drain(..)
            .filter(|output| self.db.add_output(output)),
        );
      }

      self.db.scanned_to_block(b);
    }

    Ok(())
  }

  // This should be called whenever new outputs are received, meaning there was a new block
  // If these outputs were received and sent to Substrate, it should be called after they're
  // included in a block and we have results to act on
  // If these outputs weren't sent to Substrate (change), it should be called immediately
  // with all payments still queued from the last call
  pub async fn prepare_sends(
    &mut self,
    canonical: usize,
    mut payments: Vec<(C::Address, u64)>,
    fee: C::Fee,
  ) -> Result<(Vec<(C::Address, u64)>, Vec<C::SignableTransaction>), CoinError> {
    if payments.is_empty() {
      return Ok((vec![], vec![]));
    }

    let acknowledged_block = self.acknowledged_block(canonical);

    // TODO: Log schedule outputs when MAX_OUTPUTS is lower than payments.len()
    // Payments is the first set of TXs in the schedule
    // As each payment re-appears, let mut payments = schedule[payment] where the only input is
    // the source payment
    // let (mut payments, schedule) = schedule(payments);

    let mut txs = vec![];
    for (keys, outputs) in self.keys.iter_mut() {
      while !outputs.is_empty() {
        let (inputs, outputs) = select_inputs_outputs::<C>(outputs, &mut payments);
        // If we can no longer process any payments, move to the next set of keys
        if outputs.is_empty() {
          debug_assert_eq!(inputs.len(), 0);
          break;
        }

        // Create the transcript for this transaction
        let mut transcript = RecommendedTranscript::new(b"Serai Processor Wallet Send");
        transcript
          .append_message(b"canonical_block", u64::try_from(canonical).unwrap().to_le_bytes());
        transcript.append_message(
          b"acknowledged_block",
          u64::try_from(acknowledged_block).unwrap().to_le_bytes(),
        );
        transcript.append_message(b"index", u64::try_from(txs.len()).unwrap().to_le_bytes());

        let tx = self
          .coin
          .prepare_send(
            keys.clone(),
            transcript,
            acknowledged_block,
            inputs,
            &outputs,
            Some(keys.group_key()),
            fee,
          )
          .await?;
        // self.db.save_tx(tx) // TODO
        txs.push(tx);
      }
    }

    Ok((payments, txs))
  }

  pub async fn attempt_send<N: Network>(
    &mut self,
    network: &mut N,
    prepared: C::SignableTransaction,
  ) -> Result<Vec<u8>, SignError> {
    let attempt = self.coin.attempt_send(prepared).await.map_err(SignError::CoinError)?;

    let (attempt, commitments) = attempt.preprocess(&mut OsRng);
    let commitments = network
      .round(commitments.serialize())
      .await
      .map_err(SignError::NetworkError)?
      .drain()
      .map(|(validator, preprocess)| {
        Ok((
          validator,
          attempt
            .read_preprocess::<&[u8]>(&mut preprocess.as_ref())
            .map_err(|_| SignError::FrostError(FrostError::InvalidPreprocess(validator)))?,
        ))
      })
      .collect::<Result<HashMap<_, _>, _>>()?;

    let (attempt, share) = attempt.sign(commitments, b"").map_err(SignError::FrostError)?;
    let shares = network
      .round(share.serialize())
      .await
      .map_err(SignError::NetworkError)?
      .drain()
      .map(|(validator, share)| {
        Ok((
          validator,
          attempt
            .read_share::<&[u8]>(&mut share.as_ref())
            .map_err(|_| SignError::FrostError(FrostError::InvalidShare(validator)))?,
        ))
      })
      .collect::<Result<HashMap<_, _>, _>>()?;

    let tx = attempt.complete(shares).map_err(SignError::FrostError)?;

    self.coin.publish_transaction(&tx).await.map_err(SignError::CoinError)
  }
}