serai/tests/processor/src/lib.rs
Luke Parker e4e4245ee3 One Round DKG (#589)
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++

* Initial eVRF implementation

Not quite done yet. It still needs to communicate the resulting points and proofs so they can be
extracted from the Pedersen Commitments and returned, and then it needs to be tested.

* Add the openings of the PCs to the eVRF as necessary

* Add implementation of secq256k1

* Make DKG Encryption a bit more flexible

No longer requires the use of an EncryptionKeyMessage, and allows pre-defined
keys for encryption.

* Make NUM_BITS an argument for the field macro

* Have the eVRF take a Zeroizing private key

* Initial eVRF-based DKG

* Add embedwards25519 curve

* Inline the eVRF into the DKG library

Due to how we're handling share encryption, we'd either need two circuits or to
dedicate this circuit to the DKG. The latter makes sense at this time.

* Add documentation to the eVRF-based DKG

* Add paragraph claiming robustness

* Update to the new eVRF proof

* Finish routing the eVRF functionality

Still needs errors and serialization, along with a few other TODOs.

* Add initial eVRF DKG test

* Improve eVRF DKG

Updates how we calculate verification shares, improves performance when
extracting multiple sets of keys, and adds more to the test for it.

* Start using a proper error for the eVRF DKG

* Resolve various TODOs

Supports recovering multiple key shares from the eVRF DKG.

Inlines two loops to save 2**16 iterations.

Adds support for creating a constant time representation of scalars < NUM_BITS.

* Ban zero ECDH keys, document non-zero requirements

* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519

* Add Ristretto eVRF trait impls

* Support participating multiple times in the eVRF DKG

* Only participate once per key, not once per key share

* Rewrite processor key-gen around the eVRF DKG

Still a WIP.

* Finish routing the new key gen in the processor

Doesn't touch the tests, coordinator, nor Substrate yet.
`cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor`
does pass.

* Deduplicate and better document in processor key_gen

* Update serai-processor tests to the new key gen

* Correct the number of yx coefficients, get processor key gen test to pass

* Add embedded elliptic curve keys to Substrate

* Update processor key gen tests to the eVRF DKG

* Have set_keys take signature_participants, not removed_participants

Now, no one is removed from the DKG. Only `t` participants publish the key, however.

Uses a BitVec for an efficient encoding of the participants.

* Update the coordinator binary for the new DKG

This does not yet update any tests.

* Add sensible Debug to key_gen::[Processor, Coordinator]Message

* Have the DKG explicitly declare how to interpolate its shares

Removes the hack for MuSig where we multiply keys by the inverse of their
Lagrange interpolation factor.

* Replace Interpolation::None with Interpolation::Constant

Allows the MuSig DKG to keep the secret share as the original private key, so
FROST nonces are derived consistently regardless of the MuSig context.

* Get coordinator tests to pass

* Update spec to the new DKG

* Get clippy to pass across the repo

* cargo machete

* Add an extra sleep to ensure expected ordering of `Participation`s

* Update orchestration

* Remove bad panic in coordinator

It expected ConfirmationShare to be n-of-n, not t-of-n.

* Improve documentation on functions

* Update TX size limit

We no longer have to support the ridiculous case of having 49 DKG
participations within a 101-of-150 DKG. It does remain quite high due to
needing to _sign_ so many times. It may be optimal for parties with multiple
key shares to independently send their preprocesses/shares (despite the
overhead that'll cause with signatures and the transaction structure).

* Correct error in the Processor spec document

* Update a few comments in the validator-sets pallet

* Send/Recv Participation one at a time

Sending all, then attempting to receive all in an expected order, wasn't working
even with notable delays between sending messages. This points to the mempool
not working as expected...

* Correct ThresholdKeys serialization in modular-frost test

* Update the existing TX size limit test for the new DKG parameters

* Increase time allowed for the DKG on the GH CI

* Correct construction of signature_participants in serai-client tests

Fault identified by akil.

* Further contextualize DkgConfirmer by ValidatorSet

Caught by a safety check that we don't reuse preprocesses across messages. That
raises the question: were we previously reusing preprocesses (reusing keys)?
Except that would have caused a variety of signing failures (suggesting some
staggered timing avoided it in practice, but yes, this was possible in theory).

* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests

* Correct shimmed setting of a secq256k1 key

* cargo fmt

* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test

The key_gen function expects the random values to have already been decided.

* Big-endian secq256k1 scalars

Also restores the prior, safer, Encryption::register function.
2024-09-19 21:43:26 -04:00


#![allow(clippy::needless_pass_by_ref_mut)] // False positives
use std::sync::{OnceLock, Mutex};
use zeroize::Zeroizing;
use ciphersuite::{
group::{ff::PrimeField, GroupEncoding},
Ciphersuite, Secp256k1, Ed25519, Ristretto,
};
use dkg::evrf::*;
use serai_client::primitives::{NetworkId, insecure_arbitrary_key_from_name};
use messages::{ProcessorMessage, CoordinatorMessage};
use serai_message_queue::{Service, Metadata, client::MessageQueue};
use dockertest::{
PullPolicy, Image, LogAction, LogPolicy, LogSource, LogOptions, StartPolicy,
TestBodySpecification, DockerOperations,
};
mod networks;
pub use networks::*;
#[cfg(test)]
mod tests;
static UNIQUE_ID: OnceLock<Mutex<u16>> = OnceLock::new();
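/// The eVRF public keys for a processor: one over the curve embedded into Ristretto (used for
/// Substrate) and one over the curve embedded into the external network's curve.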
#[allow(dead_code)]
#[derive(Clone)]
pub struct EvrfPublicKeys {
substrate: [u8; 32],
network: Vec<u8>,
}
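/// Build the Docker composition(s) for a processor of the specified network.
///
/// Returns the processor composition (plus, for Ethereum, a relayer composition) alongside the
/// eVRF public keys derived from the passed-in name.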
pub fn processor_instance(
name: &str,
network: NetworkId,
port: u32,
message_queue_key: <Ristretto as Ciphersuite>::F,
) -> (Vec<TestBodySpecification>, EvrfPublicKeys) {
let substrate_evrf_key =
insecure_arbitrary_key_from_name::<<Ristretto as EvrfCurve>::EmbeddedCurve>(name);
let substrate_evrf_pub_key =
(<Ristretto as EvrfCurve>::EmbeddedCurve::generator() * substrate_evrf_key).to_bytes();
let substrate_evrf_key = substrate_evrf_key.to_repr();
let (network_evrf_key, network_evrf_pub_key) = match network {
NetworkId::Serai => panic!("starting a processor for Serai"),
NetworkId::Bitcoin | NetworkId::Ethereum => {
let evrf_key =
insecure_arbitrary_key_from_name::<<Secp256k1 as EvrfCurve>::EmbeddedCurve>(name);
let pub_key =
(<Secp256k1 as EvrfCurve>::EmbeddedCurve::generator() * evrf_key).to_bytes().to_vec();
(evrf_key.to_repr(), pub_key)
}
NetworkId::Monero => {
let evrf_key =
insecure_arbitrary_key_from_name::<<Ed25519 as EvrfCurve>::EmbeddedCurve>(name);
let pub_key =
(<Ed25519 as EvrfCurve>::EmbeddedCurve::generator() * evrf_key).to_bytes().to_vec();
(evrf_key.to_repr(), pub_key)
}
};
let network_str = match network {
NetworkId::Serai => panic!("starting a processor for Serai"),
NetworkId::Bitcoin => "bitcoin",
NetworkId::Ethereum => "ethereum",
NetworkId::Monero => "monero",
};
let image = format!("{network_str}-processor");
serai_docker_tests::build(image.clone());
let mut res = vec![TestBodySpecification::with_image(
Image::with_repository(format!("serai-dev-{image}")).pull_policy(PullPolicy::Never),
)
.replace_env(
[
("MESSAGE_QUEUE_KEY".to_string(), hex::encode(message_queue_key.to_repr())),
("SUBSTRATE_EVRF_KEY".to_string(), hex::encode(substrate_evrf_key)),
("NETWORK_EVRF_KEY".to_string(), hex::encode(network_evrf_key)),
("NETWORK".to_string(), network_str.to_string()),
("NETWORK_RPC_LOGIN".to_string(), format!("{RPC_USER}:{RPC_PASS}")),
("NETWORK_RPC_PORT".to_string(), port.to_string()),
("DB_PATH".to_string(), "./processor-db".to_string()),
("RUST_LOG".to_string(), "serai_processor=trace,".to_string()),
]
.into(),
)];
if network == NetworkId::Ethereum {
serai_docker_tests::build("ethereum-relayer".to_string());
res.push(
TestBodySpecification::with_image(
Image::with_repository("serai-dev-ethereum-relayer".to_string())
.pull_policy(PullPolicy::Never),
)
.replace_env(
[
("DB_PATH".to_string(), "./ethereum-relayer-db".to_string()),
("RUST_LOG".to_string(), "serai_ethereum_relayer=trace,".to_string()),
]
.into(),
)
.set_publish_all_ports(true),
);
}
(res, EvrfPublicKeys { substrate: substrate_evrf_pub_key, network: network_evrf_pub_key })
}
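/// The keys for interacting with a processor: the coordinator's message-queue private key and
/// the processor's eVRF public keys.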
pub struct ProcessorKeys {
coordinator: <Ristretto as Ciphersuite>::F,
evrf: EvrfPublicKeys,
}
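/// Handles for the network node, message queue, processor, and relayer (an empty String if
/// there is no relayer) compositions.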
pub type Handles = (String, String, String, String);
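/// Build the compositions for an entire processor stack: the external network's node, the
/// message queue, the processor itself, and (for Ethereum) its relayer.
///
/// Returns the handles for these compositions, the keys needed to interact with the processor,
/// and the compositions themselves.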
pub fn processor_stack(
name: &str,
network: NetworkId,
network_hostname_override: Option<String>,
) -> (Handles, ProcessorKeys, Vec<TestBodySpecification>) {
let (network_composition, network_rpc_port) = network_instance(network);
let (coord_key, message_queue_keys, message_queue_composition) =
serai_message_queue_tests::instance();
let (mut processor_compositions, evrf_keys) =
processor_instance(name, network, network_rpc_port, message_queue_keys[&network]);
// Give every item in this stack a unique ID
// Uses a Mutex as we can't generate an 8-byte random ID without hitting hostname length limits
let unique_id = {
let unique_id_mutex = UNIQUE_ID.get_or_init(|| Mutex::new(0));
let mut unique_id_lock = unique_id_mutex.lock().unwrap();
let unique_id = *unique_id_lock;
*unique_id_lock += 1;
unique_id
};
let mut compositions = vec![];
let mut handles = vec![];
for (name, composition) in [
Some((
match network {
NetworkId::Serai => unreachable!(),
NetworkId::Bitcoin => "bitcoin",
NetworkId::Ethereum => "ethereum",
NetworkId::Monero => "monero",
},
network_composition,
)),
Some(("message_queue", message_queue_composition)),
Some(("processor", processor_compositions.remove(0))),
processor_compositions.pop().map(|composition| ("relayer", composition)),
]
.into_iter()
.flatten()
{
let handle = format!("processor-{name}-{unique_id}");
compositions.push(
composition.set_start_policy(StartPolicy::Strict).set_handle(handle.clone()).set_log_options(
Some(LogOptions {
action: LogAction::Forward,
policy: if handle.contains("-processor-") {
LogPolicy::Always
} else {
LogPolicy::OnError
},
source: LogSource::Both,
}),
),
);
handles.push(handle);
}
let processor_composition = compositions.get_mut(2).unwrap();
processor_composition.inject_container_name(
network_hostname_override.unwrap_or_else(|| handles[0].clone()),
"NETWORK_RPC_HOSTNAME",
);
if let Some(hostname) = handles.get(3) {
processor_composition.inject_container_name(hostname, "ETHEREUM_RELAYER_HOSTNAME");
processor_composition.modify_env("ETHEREUM_RELAYER_PORT", "20830");
}
processor_composition.inject_container_name(handles[1].clone(), "MESSAGE_QUEUE_RPC");
(
(
handles[0].clone(),
handles[1].clone(),
handles[2].clone(),
handles.get(3).cloned().unwrap_or(String::new()),
),
ProcessorKeys { coordinator: coord_key, evrf: evrf_keys },
compositions,
)
}
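/// A test coordinator for a single processor, exchanging messages with it over the message
/// queue.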
pub struct Coordinator {
network: NetworkId,
network_handle: String,
#[allow(unused)]
message_queue_handle: String,
#[allow(unused)]
processor_handle: String,
relayer_handle: String,
evrf_keys: EvrfPublicKeys,
next_send_id: u64,
next_recv_id: u64,
queue: MessageQueue,
}
impl Coordinator {
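/// Create a new Coordinator for a processor stack, blocking until the external network's RPC is
/// reachable (waiting up to a minute).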
pub fn new(
network: NetworkId,
ops: &DockerOperations,
handles: Handles,
keys: ProcessorKeys,
) -> Coordinator {
let rpc = ops.handle(&handles.1).host_port(2287).unwrap();
let rpc = rpc.0.to_string() + ":" + &rpc.1.to_string();
let res = Coordinator {
network,
network_handle: handles.0,
message_queue_handle: handles.1,
processor_handle: handles.2,
relayer_handle: handles.3,
evrf_keys: keys.evrf,
next_send_id: 0,
next_recv_id: 0,
queue: MessageQueue::new(Service::Coordinator, rpc, Zeroizing::new(keys.coordinator)),
};
// Sleep for up to a minute in case the external network's RPC has yet to start
// Gets an async handle to block on since this function plays nicer when it isn't itself async
{
let ops = ops.clone();
let network_handle = res.network_handle.clone();
std::thread::spawn(move || {
let runtime = tokio::runtime::Runtime::new().unwrap();
let handle = runtime.handle();
let _async = handle.enter();
let rpc_url = network_rpc(network, &ops, &network_handle);
let mut iters = 0;
while iters < 60 {
match network {
NetworkId::Bitcoin => {
use bitcoin_serai::rpc::Rpc;
// Bitcoin's Rpc::new will test the connection
if handle.block_on(Rpc::new(rpc_url.clone())).is_ok() {
break;
}
}
NetworkId::Ethereum => {
use std::sync::Arc;
use ethereum_serai::{
alloy::{
simple_request_transport::SimpleRequest,
rpc_client::ClientBuilder,
provider::{Provider, RootProvider},
network::Ethereum,
},
deployer::Deployer,
};
let provider = Arc::new(RootProvider::<_, Ethereum>::new(
ClientBuilder::default().transport(SimpleRequest::new(rpc_url.clone()), true),
));
if handle
.block_on(provider.raw_request::<_, ()>("evm_setAutomine".into(), [false]))
.is_ok()
{
handle.block_on(async {
// Deploy the deployer
let tx = Deployer::deployment_tx();
let signer = tx.recover_signer().unwrap();
let (tx, sig, _) = tx.into_parts();
provider
.raw_request::<_, ()>(
"anvil_setBalance".into(),
[signer.to_string(), (tx.gas_limit * tx.gas_price).to_string()],
)
.await
.unwrap();
let mut bytes = vec![];
tx.encode_with_signature_fields(&sig, &mut bytes);
let _ = provider.send_raw_transaction(&bytes).await.unwrap();
provider.raw_request::<_, ()>("anvil_mine".into(), [96]).await.unwrap();
let _ = Deployer::new(provider.clone()).await.unwrap().unwrap();
// Sleep until the actual time is ahead of whatever time is in the epoch we just
// mined
tokio::time::sleep(core::time::Duration::from_secs(30)).await;
});
break;
}
}
NetworkId::Monero => {
use monero_simple_request_rpc::SimpleRequestRpc;
use monero_wallet::rpc::Rpc;
// Monero's Rpc::new won't test the connection, so call get_height
if handle
.block_on(SimpleRequestRpc::new(rpc_url.clone()))
.ok()
.and_then(|rpc| handle.block_on(rpc.get_height()).ok())
.is_some()
{
break;
}
}
NetworkId::Serai => panic!("processor is booting with external network of Serai"),
}
println!("external network RPC has yet to boot, waiting 1 sec, attempt {iters}");
handle.block_on(tokio::time::sleep(core::time::Duration::from_secs(1)));
iters += 1;
}
if iters == 60 {
panic!("couldn't connect to external network {network:?} after 60s");
}
})
.join()
.unwrap();
}
res
}
/// Get the eVRF keys for the associated processor.
pub fn evrf_keys(&self) -> EvrfPublicKeys {
self.evrf_keys.clone()
}
/// Send a message to a processor as its coordinator.
pub async fn send_message(&mut self, msg: impl Into<CoordinatorMessage>) {
let msg: CoordinatorMessage = msg.into();
self
.queue
.queue(
Metadata {
from: Service::Coordinator,
to: Service::Processor(self.network),
intent: msg.intent(),
},
borsh::to_vec(&msg).unwrap(),
)
.await;
self.next_send_id += 1;
}
/// Receive a message from a processor as its coordinator.
pub async fn recv_message(&mut self) -> ProcessorMessage {
let msg = tokio::time::timeout(
core::time::Duration::from_secs(20),
self.queue.next(Service::Processor(self.network)),
)
.await
.unwrap();
assert_eq!(msg.from, Service::Processor(self.network));
assert_eq!(msg.id, self.next_recv_id);
self.queue.ack(Service::Processor(self.network), msg.id).await;
self.next_recv_id += 1;
borsh::from_slice(&msg.msg).unwrap()
}
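/// Add a block to the external network.
///
/// Returns the block's hash and its serialization. For Ethereum, this mines 96 blocks and
/// returns the hash of the block ending the epoch alongside the dumped chain state.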
pub async fn add_block(&self, ops: &DockerOperations) -> ([u8; 32], Vec<u8>) {
let rpc_url = network_rpc(self.network, ops, &self.network_handle);
match self.network {
NetworkId::Bitcoin => {
use bitcoin_serai::{
bitcoin::{consensus::Encodable, network::Network, Script, Address},
rpc::Rpc,
};
// Mine a block
let rpc = Rpc::new(rpc_url).await.expect("couldn't connect to the Bitcoin RPC");
rpc
.rpc_call::<Vec<String>>(
"generatetoaddress",
serde_json::json!([1, Address::p2sh(Script::new(), Network::Regtest).unwrap()]),
)
.await
.unwrap();
// Get it so we can return it
let hash = rpc.get_block_hash(rpc.get_latest_block_number().await.unwrap()).await.unwrap();
let block = rpc.get_block(&hash).await.unwrap();
let mut block_buf = vec![];
block.consensus_encode(&mut block_buf).unwrap();
(hash, block_buf)
}
NetworkId::Ethereum => {
use ethereum_serai::alloy::{
simple_request_transport::SimpleRequest,
rpc_types::{BlockTransactionsKind, BlockNumberOrTag},
rpc_client::ClientBuilder,
provider::{Provider, RootProvider},
network::Ethereum,
};
let provider = RootProvider::<_, Ethereum>::new(
ClientBuilder::default().transport(SimpleRequest::new(rpc_url.clone()), true),
);
let start = provider
.get_block(BlockNumberOrTag::Latest.into(), BlockTransactionsKind::Hashes)
.await
.unwrap()
.unwrap()
.header
.number;
// We mine 96 blocks to mine one epoch, then cause its finalization
provider.raw_request::<_, ()>("anvil_mine".into(), [96]).await.unwrap();
let end_of_epoch = start + 31;
let hash = provider
.get_block(BlockNumberOrTag::Number(end_of_epoch).into(), BlockTransactionsKind::Hashes)
.await
.unwrap()
.unwrap()
.header
.hash;
let state = provider
.raw_request::<_, String>("anvil_dumpState".into(), ())
.await
.unwrap()
.into_bytes();
(hash.into(), state)
}
NetworkId::Monero => {
use curve25519_dalek::{constants::ED25519_BASEPOINT_POINT, scalar::Scalar};
use monero_simple_request_rpc::SimpleRequestRpc;
use monero_wallet::{rpc::Rpc, address::Network, ViewPair};
let rpc = SimpleRequestRpc::new(rpc_url).await.expect("couldn't connect to the Monero RPC");
rpc
.generate_blocks(
&ViewPair::new(ED25519_BASEPOINT_POINT, Zeroizing::new(Scalar::ONE))
.unwrap()
.legacy_address(Network::Mainnet),
1,
)
.await
.unwrap();
let hash = rpc.get_block_hash(rpc.get_height().await.unwrap() - 1).await.unwrap();
(hash, rpc.get_block(hash).await.unwrap().serialize())
}
NetworkId::Serai => panic!("processor tests adding block to Serai"),
}
}
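/// Sync the other coordinators' external network nodes to this coordinator's, submitting any
/// blocks they're missing (for Ethereum, by loading this node's dumped state).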
pub async fn sync(&self, ops: &DockerOperations, others: &[Coordinator]) {
let rpc_url = network_rpc(self.network, ops, &self.network_handle);
match self.network {
NetworkId::Bitcoin => {
use bitcoin_serai::{bitcoin::consensus::Encodable, rpc::Rpc};
let rpc = Rpc::new(rpc_url).await.expect("couldn't connect to the Bitcoin RPC");
let to = rpc.get_latest_block_number().await.unwrap();
for coordinator in others {
let other_rpc = Rpc::new(network_rpc(self.network, ops, &coordinator.network_handle))
.await
.expect("couldn't connect to the Bitcoin RPC");
let from = other_rpc.get_latest_block_number().await.unwrap() + 1;
for b in from ..= to {
let mut buf = vec![];
rpc
.get_block(&rpc.get_block_hash(b).await.unwrap())
.await
.unwrap()
.consensus_encode(&mut buf)
.unwrap();
let res: Option<String> = other_rpc
.rpc_call("submitblock", serde_json::json!([hex::encode(buf)]))
.await
.unwrap();
if let Some(err) = res {
panic!("submitblock failed: {err}");
}
}
}
}
NetworkId::Ethereum => {
use ethereum_serai::alloy::{
simple_request_transport::SimpleRequest,
rpc_types::{BlockTransactionsKind, BlockNumberOrTag},
rpc_client::ClientBuilder,
provider::{Provider, RootProvider},
network::Ethereum,
};
let (expected_number, state) = {
let provider = RootProvider::<_, Ethereum>::new(
ClientBuilder::default().transport(SimpleRequest::new(rpc_url.clone()), true),
);
let expected_number = provider
.get_block(BlockNumberOrTag::Latest.into(), BlockTransactionsKind::Hashes)
.await
.unwrap()
.unwrap()
.header
.number;
(
expected_number,
provider.raw_request::<_, String>("anvil_dumpState".into(), ()).await.unwrap(),
)
};
for coordinator in others {
let rpc_url = network_rpc(coordinator.network, ops, &coordinator.network_handle);
let provider = RootProvider::<_, Ethereum>::new(
ClientBuilder::default().transport(SimpleRequest::new(rpc_url.clone()), true),
);
assert!(provider
.raw_request::<_, bool>("anvil_loadState".into(), &[&state])
.await
.unwrap());
let new_number = provider
.get_block(BlockNumberOrTag::Latest.into(), BlockTransactionsKind::Hashes)
.await
.unwrap()
.unwrap()
.header
.number;
// TODO: https://github.com/foundry-rs/foundry/issues/7955
let _ = expected_number;
let _ = new_number;
//assert_eq!(expected_number, new_number);
}
}
NetworkId::Monero => {
use monero_simple_request_rpc::SimpleRequestRpc;
use monero_wallet::rpc::Rpc;
let rpc = SimpleRequestRpc::new(rpc_url).await.expect("couldn't connect to the Monero RPC");
let to = rpc.get_height().await.unwrap();
for coordinator in others {
let other_rpc = SimpleRequestRpc::new(network_rpc(
coordinator.network,
ops,
&coordinator.network_handle,
))
.await
.expect("couldn't connect to the Monero RPC");
let from = other_rpc.get_height().await.unwrap();
for b in from .. to {
let block =
rpc.get_block(rpc.get_block_hash(b).await.unwrap()).await.unwrap().serialize();
let res: serde_json::Value = other_rpc
.json_rpc_call("submit_block", Some(serde_json::json!([hex::encode(block)])))
.await
.unwrap();
let err = res.get("error");
if err.is_some() && (err.unwrap() != &serde_json::Value::Null) {
panic!("failed to submit Monero block: {res}");
}
}
}
}
NetworkId::Serai => panic!("processors tests syncing Serai nodes"),
}
}
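/// Publish a serialized transaction to this coordinator's external network.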
pub async fn publish_transaction(&self, ops: &DockerOperations, tx: &[u8]) {
let rpc_url = network_rpc(self.network, ops, &self.network_handle);
match self.network {
NetworkId::Bitcoin => {
use bitcoin_serai::{
bitcoin::{consensus::Decodable, Transaction},
rpc::Rpc,
};
let rpc =
Rpc::new(rpc_url).await.expect("couldn't connect to the coordinator's Bitcoin RPC");
rpc.send_raw_transaction(&Transaction::consensus_decode(&mut &*tx).unwrap()).await.unwrap();
}
NetworkId::Ethereum => {
use ethereum_serai::alloy::{
simple_request_transport::SimpleRequest,
rpc_client::ClientBuilder,
provider::{Provider, RootProvider},
network::Ethereum,
};
let provider = RootProvider::<_, Ethereum>::new(
ClientBuilder::default().transport(SimpleRequest::new(rpc_url.clone()), true),
);
let _ = provider.send_raw_transaction(tx).await.unwrap();
}
NetworkId::Monero => {
use monero_simple_request_rpc::SimpleRequestRpc;
use monero_wallet::{transaction::Transaction, rpc::Rpc};
let rpc = SimpleRequestRpc::new(rpc_url)
.await
.expect("couldn't connect to the coordinator's Monero RPC");
rpc.publish_transaction(&Transaction::read(&mut &*tx).unwrap()).await.unwrap();
}
NetworkId::Serai => panic!("processor tests broadcasting block to Serai"),
}
}
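/// Publish a transaction which completes an eventuality (a no-op for Ethereum).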
pub async fn publish_eventuality_completion(&self, ops: &DockerOperations, tx: &[u8]) {
match self.network {
NetworkId::Bitcoin | NetworkId::Monero => self.publish_transaction(ops, tx).await,
NetworkId::Ethereum => (),
NetworkId::Serai => panic!("processor tests broadcasting block to Serai"),
}
}
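/// Fetch a transaction published to the external network, if one is found.
///
/// For Bitcoin, the passed-in bytes are ignored and the sole mempool transaction is returned.
/// For Ethereum, the passed-in bytes are a signature searched for within the relayer's commands.
/// For Monero, the passed-in bytes are the transaction's hash.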
pub async fn get_published_transaction(
&self,
ops: &DockerOperations,
tx: &[u8],
) -> Option<Vec<u8>> {
let rpc_url = network_rpc(self.network, ops, &self.network_handle);
match self.network {
NetworkId::Bitcoin => {
use bitcoin_serai::{bitcoin::consensus::Encodable, rpc::Rpc};
let rpc =
Rpc::new(rpc_url).await.expect("couldn't connect to the coordinator's Bitcoin RPC");
// Bitcoin publishes a 0-byte TX ID to reduce variables
// Accordingly, read the mempool to find the (presumed relevant) TX
let entries: Vec<String> =
rpc.rpc_call("getrawmempool", serde_json::json!([false])).await.unwrap();
assert_eq!(entries.len(), 1, "more than one entry in the mempool, so unclear which to get");
let mut hash = [0; 32];
hash.copy_from_slice(&hex::decode(&entries[0]).unwrap());
if let Ok(tx) = rpc.get_transaction(&hash).await {
let mut buf = vec![];
tx.consensus_encode(&mut buf).unwrap();
Some(buf)
} else {
None
}
}
NetworkId::Ethereum => {
/*
let provider = RootProvider::<_, Ethereum>::new(
ClientBuilder::default().transport(SimpleRequest::new(rpc_url.clone()), true),
);
let mut hash = [0; 32];
hash.copy_from_slice(tx);
let tx = provider.get_transaction_by_hash(hash.into()).await.unwrap()?;
let (tx, sig, _) = Signed::<TxLegacy>::try_from(tx).unwrap().into_parts();
let mut bytes = vec![];
tx.encode_with_signature_fields(&sig, &mut bytes);
Some(bytes)
*/
// This is being passed a signature. We need to check the relayer has a TX with this
// signature
use tokio::{
io::{AsyncReadExt, AsyncWriteExt},
net::TcpStream,
};
let (ip, port) = ops.handle(&self.relayer_handle).host_port(20831).unwrap();
let relayer_url = format!("{ip}:{port}");
let mut socket = TcpStream::connect(&relayer_url).await.unwrap();
// Iterate over every published command
for i in 1 .. u32::MAX {
socket.write_all(&i.to_le_bytes()).await.unwrap();
let mut recvd_len = [0; 4];
socket.read_exact(&mut recvd_len).await.unwrap();
if recvd_len == [0; 4] {
break;
}
let mut msg = vec![0; usize::try_from(u32::from_le_bytes(recvd_len)).unwrap()];
socket.read_exact(&mut msg).await.unwrap();
for start_pos in 0 .. msg.len() {
if (start_pos + tx.len()) > msg.len() {
break;
}
if &msg[start_pos .. (start_pos + tx.len())] == tx {
return Some(msg);
}
}
}
None
}
NetworkId::Monero => {
use monero_simple_request_rpc::SimpleRequestRpc;
use monero_wallet::rpc::Rpc;
let rpc = SimpleRequestRpc::new(rpc_url)
.await
.expect("couldn't connect to the coordinator's Monero RPC");
let mut hash = [0; 32];
hash.copy_from_slice(tx);
if let Ok(tx) = rpc.get_transaction(hash).await {
Some(tx.serialize())
} else {
None
}
}
NetworkId::Serai => panic!("processor tests broadcasting block to Serai"),
}
}
}