Add support for multiple multisigs to the processor (#377)

* Design and document a multisig rotation flow

* Make Scanner::eventualities a HashMap so it's per-key

* Don't drop eventualities, always follow through on them

Technical improvements made along the way.

* Start creating an isolated object to manage multisigs, which doesn't require being a signer

Removes key from SubstrateBlock.

* Move Scanner/Scheduler under multisigs

* Move Batch construction into MultisigManager

* Clarify "should" in Multisig Rotation docs

* Add block_number to MultisigManager, as it controls the scanner

* Move sign_plans into MultisigManager

Removes ThresholdKeys from prepare_send.

* Make SubstrateMutable an alias for MultisigManager

* Rewrite Multisig Rotation

The prior scheme had a possible exploit where funds were sent to the old
multisig, then burnt on Serai to send from the new multisig, locking liquidity
for 6 hours. While a fee could be applied to stragglers to make this attack
unprofitable, the newly described scheme avoids the issue entirely.

* Add mini

mini is a miniature version of Serai, emphasizing Serai's nature as a
collection of independent clocks. The intended use is to identify race
conditions and prove protocols are comprehensive regarding when certain clocks
tick.

This uses loom, a prior candidate for evaluating the processor/coordinator as
free of race conditions (#361).

* Use mini to prove a race condition in the current multisig rotation docs, and prove safety of alternatives

Technically, the prior commit had mini prove the race condition.

The docs currently say the activation block of the new multisig is the block
after the next Batch's. If the two next Batches had already entered the
mempool, prior to set_keys being called, the second next Batch would be
expected to contain the new key's data yet fail to as the key wasn't public
when the Batch was actually created.

The naive solution is to create a Batch, publish it, wait until it's included,
and only then scan the next block. This sets a bound of
`Batch publication time < block time`. Optimistically, we can publish a Batch
in 24s while our shortest block time is 2m. Accordingly, we should be fine with
the naive solution which doesn't take advantage of throughput. #333 may
significantly change latency however and require an algorithm whose throughput
exceeds the rate of blocks created.

In order to re-introduce parallelization, enabling throughput, we need to
define a safe range of blocks to scan without Serai ordering the first one.
mini demonstrates safety of scanning n blocks Serai hasn't acknowledged, so
long as the first is scanned before block n+1 is (shifting the n-block window).
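
As a rough illustration of that rule, the window logic reduces to the following
minimal sketch (the names are illustrative, not the processor's actual API):

```rust
/// Sketch of the n-block scanning window described above.
struct ScanWindow {
  // Latest block Serai has acknowledged via a signed Batch
  acked: u64,
  // How many unacknowledged blocks we're willing to scan ahead
  window: u64,
}

impl ScanWindow {
  // A block may be scanned so long as it's within `window` blocks of the last acknowledged one
  fn can_scan(&self, block: u64) -> bool {
    block <= self.acked + self.window
  }

  // Serai acknowledging a block shifts the window forward
  fn ack(&mut self, block: u64) {
    assert!(block > self.acked, "acknowledgements must move forward");
    self.acked = block;
  }
}
```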

The docs will be updated next, to reflect this.

* Fix Multisig Rotation

I believe this is finally good enough to be final.

1) Fixes the race condition present in the prior document, as demonstrated by
mini.

`Batch`s for blocks `n` and `n+1` may have been in the mempool when a
multisig's activation block was set to `n`. This would cause a potentially
distinct `Batch` for `n+1`, despite `n+1` already having a signed `Batch`.

2) Tightens when UIs should use the new multisig, to prevent eclipse attacks,
and adds protection against `Batch` publication delays.

3) Removes liquidity fragmentation by tightening flow/handling of latency.

4) Several clarifications and documentation of reasoning.

5) Correction of "prior multisig" to "all prior multisigs" regarding historical
verification, with explanation why.

* Clarify terminology in mini

Synchronizes it from my original thoughts on potential schema to the design
actually created.

* Remove most of processor's README for a reference to docs/processor

This does drop some misc commentary, though none too beneficial. The section on
scanning, deemed sufficiently beneficial, has been moved to a document and
expanded on.

* Update scanner TODOs in line with new docs

* Correct documentation on Bitcoin::Block::time, and Block::time

* Make the scanner in MultisigManager no longer public

* Always send ConfirmKeyPair, regardless of if in-set

* Cargo.lock changes from a prior commit

* Add a policy document on defining a Canonical Chain

I accidentally committed a version of this with a few headers earlier, and this
is a proper version.

* Competent MultisigManager::new

* Update processor's comments

* Add mini to copied files

* Re-organize Scanner per multisig rotation document

* Add RUST_LOG trace targets to e2e tests

* Have the scanner wait once it gets too far ahead

Also bug fixes.

* Add activation blocks to the scanner

* Split received outputs into existing/new in MultisigManager

* Select the proper scheduler

* Schedule multisig activation as detailed in documentation

* Have the Coordinator assert if multiple `Batch`s occur within a block

While the processor used to have ack_up_to_block, enabling skips in the block
acked, support for this was removed while reworking it for multiple multisigs.
Skips should happen extremely infrequently regardless.

While support for skips would still be beneficial if multiple `Batch`s could
occur within a block, multiple `Batch`s were blocked for DoS reasons (and the
complexity here isn't worth formalizing that ban as a policy).
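
A hedged sketch of the check being described (hypothetical names, not the
coordinator's actual code):

```rust
/// Sketch: enforce at most one Batch per network block.
fn assert_new_batch_block(last_batch_block: &mut Option<u64>, batch_block: u64) {
  if let Some(last) = *last_batch_block {
    // Multiple Batches within a block are blocked for DoS reasons
    assert!(batch_block > last, "multiple Batches within a block");
  }
  *last_batch_block = Some(batch_block);
}
```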

* Schedule payments to the proper multisig

* Correct >= to <

* Use the new multisig's key for change on schedule

* Don't report External TXs to prior multisig once deprecated

* Forward from the old multisig to the new one at all opportunities

* Move unfulfilled payments in queue from prior to new multisig

* Create MultisigsDb, splitting it out of MainDb

Drops the call to finish_signing from the Signer. While this will cause endless
re-attempts, the Signer will still consider them completed and drop them,
making this an O(n) cost at boot even if we did nothing from here.

The MultisigManager should call finish_signing once the Scanner completes the
Eventuality.

* Don't check Scanner-emitted completions, trust they are completions

Prevents needing to use async code to mark the completion and creates a
fault-free model. The current model, on fault, would cause a lack of marked
completion in the signer.

* Fix a possible panic in the processor

A shorter-chain reorg could cause this assert to trip. It's fixed by
de-duplicating the data, as the assertion checked consistency. Without the
potential for inconsistency, it's unnecessary.

* Document why an existing TODO isn't valid

* Change when we drop payments for being to the change address

The earlier timing prevents creating Plans solely to the branch address, which
would cause the payments to be dropped and the TX to become an effective
aggregation TX.

* Extensively document solutions to Eventualities being potentially created after having already scanned their resolutions

* When closing, drop External/Branch outputs which don't cause progress

* Properly decide if Change outputs should be forwarded or not when closing

This completes all code needed to make the old multisig have a finite lifetime.

* Commentary on forwarding schemes

* Provide a 1 block window, with liquidity fragmentation risks, due to latency

On Bitcoin, this will be 10 minutes for the relevant Batch to be confirmed. On
Monero, 2 minutes. On Ethereum, ~6 minutes.

Also updates the Multisig Rotation document with the new forwarding plan.

* Implement transaction forwarding from old multisig to new multisig

Identifies a fault where Branch outputs which shouldn't be dropped may be, if
another output fulfills their next step. Locking Branch fulfillment down to
only Branch outputs is not done in this commit, but will be in the next.

* Only let Branch outputs fulfill branches

* Update TODOs

* Move the location of handling signer events to avoid a race condition

* Avoid a deadlock by using a RwLock on a single txn instead of two txns

* Move Batch ID out of the Scanner

* Increase from one block of latency on new keys activation to two

For Monero, this offered just two minutes when our latency to publish a Batch
is around a minute already. This does increase the time our liquidity can be
fragmented by up to 20 minutes (Bitcoin), yet it's a stupid attack only
possible once a week (when we rotate). Prioritizing normal users' transactions
not being subject to forwarding is more important here.

Ideally, we'd not use +2 blocks but rather + `time`, such as +10 minutes, making
this agnostic of the underlying network's block scheduling. That's a complexity
not worth it right now.

* Split MultisigManager::substrate_block into multiple functions

* Further tweaks to substrate_block

* Acquire a lock on all Scanner operations after calling ack_block

Gives time to call register_eventuality and initiate signing.

* Merge sign_plans into substrate_block

Also ensure the Scanner's lock isn't prematurely released.

* Use a HashMap to pass to-be-forwarded instructions, not the DB

* Successfully determine in ClosingExisting

* Move from 2 blocks of latency when rotating to 10 minutes

Superior as noted in 6d07af92ce10cfd74c17eb3400368b0150eb36d7, and now trivial
to implement thanks to the prior commit.
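
A minimal sketch of measuring the rotation window in time rather than a fixed
block count (constants and names are illustrative assumptions):

```rust
/// Sketch: convert a time-based window into a per-network block count, rounding up.
fn activation_offset_blocks(window_secs: u64, est_block_time_secs: u64) -> u64 {
  (window_secs + est_block_time_secs - 1) / est_block_time_secs
}

// e.g. a 10-minute window is 1 block on Bitcoin (600s blocks) yet 5 blocks on Monero (120s blocks)
```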

* Add note justifying measuring time in blocks when rotating

* Implement delaying of outputs received early to the new multisig per specification

* Documentation on why Branch outputs don't have the race condition concerns Change do

Also ensures 6 hours is at least N::CONFIRMATIONS, for sanity purposes.

* Remove TODO re: sanity checking Eventualities

We sanity check the Plan the Eventuality is derived from, and the Eventuality
is handled moments later (in the same file, with a clear call path). There's no
reason to add such APIs to Eventualities for a sanity check given that.

* Add TODO(now) for TODOs which must be done in this branch

Also deprecates a pair of TODOs to TODO2, and accepts the flow of the Signer
having the Eventuality.

* Correct errors in potential/future flow descriptions

* Accept having a single Plan Vec

Per the following code consuming it, there's no benefit to bifurcating it by
key.

* Only issue sign_transaction on boot for the proper signer

* Only set keys when participating in their construction

* Misc progress

Only send SubstrateBlockAck when we have a signer, as it's only used to tell
the Tributary of what Plans are being signed in response to this block.

Only immediately sets substrate_signer if session is 0.

On boot, doesn't panic if we don't have an active key (as we wouldn't if only
joining the next multisig); it continues instead.

* Correctly detect and set retirement block

Modifies the retirement block from first block meeting requirements to block
CONFIRMATIONS after.

Adds an ack flow to the Scanner's Confirmed event and Block event to accomplish
this, which may deadlock at this time (will be fixed shortly).

Removes an invalid await (after a point declared unsafe to use await) from
MultisigsManager::next_event.
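
As a sketch of the timing change (names illustrative), the retirement block is
now offset by the confirmation depth:

```rust
/// Sketch: retire CONFIRMATIONS blocks after the first block meeting the requirements.
fn retirement_block(block_meeting_requirements: u64, confirmations: u64) -> u64 {
  block_meeting_requirements + confirmations
}
```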

* Remove deadlock in multisig_completed and document alternative

The alternative is simpler, albeit less efficient. There's no reason to adopt
it now, though it may be worth adopting if it benefits modeling.

* Handle the final step of retirement, dropping the old key and setting new to existing

* Remove TODO about emitting a Block on every step

If we emit on NewAsChange, we lose the purpose of the NewAsChange period.

The only concern is if we reach ClosingExisting, and nothing has happened, then
all coins will still be in the old multisig until something finally does. This
isn't a problem worth solving, as it's latency under exceptional dead time.

* Add TODO about potentially not emitting a Block event for the retirement block

* Restore accidentally deleted CI file

* Pair of slight tweaks

* Add missing if statement

* Disable an assertion when testing

One of the test flows currently abuses the Scanner in a way triggering it.

This commit is contained in:
Author: Luke Parker, 2023-09-25 09:48:15 -04:00 (committed by GitHub)
Parent: fe19e8246e · Commit: ca69f97fef
50 changed files with 3490 additions and 1336 deletions

@@ -0,0 +1,460 @@
use std::{
io::{self, Read},
collections::{VecDeque, HashMap},
};
use ciphersuite::{group::GroupEncoding, Ciphersuite};
use crate::{
networks::{OutputType, Output, Network},
DbTxn, Db, Payment, Plan,
};
/// Stateless, deterministic output/payment manager.
#[derive(PartialEq, Eq, Debug)]
pub struct Scheduler<N: Network> {
key: <N::Curve as Ciphersuite>::G,
  // Serai, when it has more outputs expected than it can handle in a single transaction, will
  // schedule the outputs to be handled later. Immediately, it just creates additional outputs
  // which will eventually handle those outputs
//
// These maps map output amounts, which we'll receive in the future, to the payments they should
// be used on
//
// When those output amounts appear, their payments should be scheduled
// The Vec<Payment> is for all payments that should be done per output instance
// The VecDeque allows multiple sets of payments with the same sum amount to properly co-exist
//
  // queued_plans are for outputs which we will create, yet when created, will have their amount
  // reduced by the fee it cost to be created. The Scheduler will then be told what amount the
  // output actually has, and it'll be moved into plans
queued_plans: HashMap<u64, VecDeque<Vec<Payment<N>>>>,
plans: HashMap<u64, VecDeque<Vec<Payment<N>>>>,
// UTXOs available
utxos: Vec<N::Output>,
// Payments awaiting scheduling due to the output availability problem
payments: VecDeque<Payment<N>>,
}
fn scheduler_key<D: Db, G: GroupEncoding>(key: &G) -> Vec<u8> {
D::key(b"SCHEDULER", b"scheduler", key.to_bytes())
}
impl<N: Network> Scheduler<N> {
pub fn empty(&self) -> bool {
self.queued_plans.is_empty() &&
self.plans.is_empty() &&
self.utxos.is_empty() &&
self.payments.is_empty()
}
fn read<R: Read>(key: <N::Curve as Ciphersuite>::G, reader: &mut R) -> io::Result<Self> {
let mut read_plans = || -> io::Result<_> {
let mut all_plans = HashMap::new();
let mut all_plans_len = [0; 4];
reader.read_exact(&mut all_plans_len)?;
for _ in 0 .. u32::from_le_bytes(all_plans_len) {
let mut amount = [0; 8];
reader.read_exact(&mut amount)?;
let amount = u64::from_le_bytes(amount);
let mut plans = VecDeque::new();
let mut plans_len = [0; 4];
reader.read_exact(&mut plans_len)?;
for _ in 0 .. u32::from_le_bytes(plans_len) {
let mut payments = vec![];
let mut payments_len = [0; 4];
reader.read_exact(&mut payments_len)?;
for _ in 0 .. u32::from_le_bytes(payments_len) {
payments.push(Payment::read(reader)?);
}
plans.push_back(payments);
}
all_plans.insert(amount, plans);
}
Ok(all_plans)
};
let queued_plans = read_plans()?;
let plans = read_plans()?;
let mut utxos = vec![];
let mut utxos_len = [0; 4];
reader.read_exact(&mut utxos_len)?;
for _ in 0 .. u32::from_le_bytes(utxos_len) {
utxos.push(N::Output::read(reader)?);
}
let mut payments = VecDeque::new();
let mut payments_len = [0; 4];
reader.read_exact(&mut payments_len)?;
for _ in 0 .. u32::from_le_bytes(payments_len) {
payments.push_back(Payment::read(reader)?);
}
Ok(Scheduler { key, queued_plans, plans, utxos, payments })
}
// TODO2: Get rid of this
// We reserialize the entire scheduler on any mutation to save it to the DB which is horrible
// We should have an incremental solution
fn serialize(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(4096);
let mut write_plans = |plans: &HashMap<u64, VecDeque<Vec<Payment<N>>>>| {
res.extend(u32::try_from(plans.len()).unwrap().to_le_bytes());
for (amount, list_of_plans) in plans {
res.extend(amount.to_le_bytes());
res.extend(u32::try_from(list_of_plans.len()).unwrap().to_le_bytes());
for plan in list_of_plans {
res.extend(u32::try_from(plan.len()).unwrap().to_le_bytes());
for payment in plan {
payment.write(&mut res).unwrap();
}
}
}
};
write_plans(&self.queued_plans);
write_plans(&self.plans);
res.extend(u32::try_from(self.utxos.len()).unwrap().to_le_bytes());
for utxo in &self.utxos {
utxo.write(&mut res).unwrap();
}
res.extend(u32::try_from(self.payments.len()).unwrap().to_le_bytes());
for payment in &self.payments {
payment.write(&mut res).unwrap();
}
debug_assert_eq!(&Self::read(self.key, &mut res.as_slice()).unwrap(), self);
res
}
pub fn new<D: Db>(txn: &mut D::Transaction<'_>, key: <N::Curve as Ciphersuite>::G) -> Self {
let res = Scheduler {
key,
queued_plans: HashMap::new(),
plans: HashMap::new(),
utxos: vec![],
payments: VecDeque::new(),
};
// Save it to disk so from_db won't panic if we don't mutate it before rebooting
txn.put(scheduler_key::<D, _>(&res.key), res.serialize());
res
}
pub fn from_db<D: Db>(db: &D, key: <N::Curve as Ciphersuite>::G) -> io::Result<Self> {
let scheduler = db.get(scheduler_key::<D, _>(&key)).unwrap_or_else(|| {
panic!("loading scheduler from DB without scheduler for {}", hex::encode(key.to_bytes()))
});
let mut reader_slice = scheduler.as_slice();
let reader = &mut reader_slice;
Self::read(key, reader)
}
pub fn can_use_branch(&self, amount: u64) -> bool {
self.plans.contains_key(&amount)
}
fn execute(
&mut self,
inputs: Vec<N::Output>,
mut payments: Vec<Payment<N>>,
key_for_any_change: <N::Curve as Ciphersuite>::G,
) -> Plan<N> {
let mut change = false;
let mut max = N::MAX_OUTPUTS;
let payment_amounts =
|payments: &Vec<Payment<N>>| payments.iter().map(|payment| payment.amount).sum::<u64>();
// Requires a change output
if inputs.iter().map(Output::amount).sum::<u64>() != payment_amounts(&payments) {
change = true;
max -= 1;
}
let mut add_plan = |payments| {
let amount = payment_amounts(&payments);
#[allow(clippy::unwrap_or_default)]
self.queued_plans.entry(amount).or_insert(VecDeque::new()).push_back(payments);
amount
};
let branch_address = N::branch_address(self.key);
// If we have more payments than we can handle in a single TX, create plans for them
// TODO2: This isn't perfect. For 258 outputs, and a MAX_OUTPUTS of 16, this will create:
// 15 branches of 16 leaves
// 1 branch of:
// - 1 branch of 16 leaves
// - 2 leaves
// If this was perfect, the heaviest branch would have 1 branch of 3 leaves and 15 leaves
while payments.len() > max {
// The resulting TX will have the remaining payments and a new branch payment
      let to_remove = (payments.len() + 1) - max;
// Don't remove more than possible
let to_remove = to_remove.min(N::MAX_OUTPUTS);
// Create the plan
let removed = payments.drain((payments.len() - to_remove) ..).collect::<Vec<_>>();
assert_eq!(removed.len(), to_remove);
let amount = add_plan(removed);
// Create the payment for the plan
// Push it to the front so it's not moved into a branch until all lower-depth items are
payments.insert(0, Payment { address: branch_address.clone(), data: None, amount });
}
Plan {
key: self.key,
inputs,
payments,
change: Some(N::change_address(key_for_any_change)).filter(|_| change),
}
}
fn add_outputs(
&mut self,
mut utxos: Vec<N::Output>,
key_for_any_change: <N::Curve as Ciphersuite>::G,
) -> Vec<Plan<N>> {
log::info!("adding {} outputs", utxos.len());
let mut txs = vec![];
for utxo in utxos.drain(..) {
if utxo.kind() == OutputType::Branch {
let amount = utxo.amount();
if let Some(plans) = self.plans.get_mut(&amount) {
// Execute the first set of payments possible with an output of this amount
let payments = plans.pop_front().unwrap();
// They won't be equal if we dropped payments due to being dust
assert!(amount >= payments.iter().map(|payment| payment.amount).sum::<u64>());
// If we've grabbed the last plan for this output amount, remove it from the map
if plans.is_empty() {
self.plans.remove(&amount);
}
// Create a TX for these payments
txs.push(self.execute(vec![utxo], payments, key_for_any_change));
continue;
}
}
self.utxos.push(utxo);
}
log::info!("{} planned TXs have had their required inputs confirmed", txs.len());
txs
}
// Schedule a series of outputs/payments.
pub fn schedule<D: Db>(
&mut self,
txn: &mut D::Transaction<'_>,
utxos: Vec<N::Output>,
mut payments: Vec<Payment<N>>,
key_for_any_change: <N::Curve as Ciphersuite>::G,
force_spend: bool,
) -> Vec<Plan<N>> {
// Drop payments to our own branch address
/*
created_output will be called any time we send to a branch address. If it's called, and it
wasn't expecting to be called, that's almost certainly an error. The only way to guarantee
this however is to only have us send to a branch address when creating a branch, hence the
dropping of pointless payments.
This is not comprehensive as a payment may still be made to another active multisig's branch
address, depending on timing. This is safe as the issue only occurs when a multisig sends to
its *own* branch address, since created_output is called on the signer's Scheduler.
*/
{
let branch_address = N::branch_address(self.key);
payments =
payments.drain(..).filter(|payment| payment.address != branch_address).collect::<Vec<_>>();
}
let mut plans = self.add_outputs(utxos, key_for_any_change);
log::info!("scheduling {} new payments", payments.len());
// Add all new payments to the list of pending payments
self.payments.extend(payments);
let payments_at_start = self.payments.len();
log::info!("{} payments are now scheduled", payments_at_start);
// If we don't have UTXOs available, don't try to continue
if self.utxos.is_empty() {
log::info!("no utxos currently avilable");
return plans;
}
// Sort UTXOs so the highest valued ones are first
self.utxos.sort_by(|a, b| a.amount().cmp(&b.amount()).reverse());
// We always want to aggregate our UTXOs into a single UTXO in the name of simplicity
// We may have more UTXOs than will fit into a TX though
// We use the most valuable UTXOs to handle our current payments, and we return aggregation TXs
// for the rest of the inputs
// Since we do multiple aggregation TXs at once, this will execute in logarithmic time
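    // For example (illustrative numbers): with MAX_INPUTS = 16 and 40 sorted UTXOs, the chunks
    // are 16, 16, and 8 UTXOs. The first (most valuable) chunk funds the scheduled payments below,
    // while the remaining chunks become aggregation TXs paying out to the change address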
let utxos = self.utxos.drain(..).collect::<Vec<_>>();
let mut utxo_chunks =
utxos.chunks(N::MAX_INPUTS).map(|chunk| chunk.to_vec()).collect::<Vec<_>>();
// Use the first chunk for any scheduled payments, since it has the most value
let utxos = utxo_chunks.remove(0);
// If the last chunk exists and only has one output, don't try aggregating it
// Just immediately consider it another output
if let Some(mut chunk) = utxo_chunks.pop() {
if chunk.len() == 1 {
self.utxos.push(chunk.pop().unwrap());
} else {
utxo_chunks.push(chunk);
}
}
for chunk in utxo_chunks.drain(..) {
// TODO: While payments have their TXs' fees deducted from themselves, that doesn't hold here
// We need the documented, but not yet implemented, virtual amount scheme to solve this
log::debug!("aggregating a chunk of {} inputs", N::MAX_INPUTS);
plans.push(Plan {
key: self.key,
inputs: chunk,
payments: vec![],
change: Some(N::change_address(key_for_any_change)),
})
}
// We want to use all possible UTXOs for all possible payments
let mut balance = utxos.iter().map(Output::amount).sum::<u64>();
// If we can't fulfill the next payment, we have encountered an instance of the UTXO
// availability problem
// This shows up in networks like Monero, where because we spent outputs, our change has yet to
// re-appear. Since it has yet to re-appear, we only operate with a balance which is a subset
// of our total balance
// Despite this, we may be ordered to fulfill a payment which is our total balance
// The solution is to wait for the temporarily unavailable change outputs to re-appear,
// granting us access to our full balance
let mut executing = vec![];
while !self.payments.is_empty() {
let amount = self.payments[0].amount;
if balance.checked_sub(amount).is_some() {
balance -= amount;
executing.push(self.payments.pop_front().unwrap());
} else {
// Doesn't check if other payments would fit into the current batch as doing so may never
        // let enough inputs become simultaneously available to enable handling of payments[0]
break;
}
}
// Now that we have the list of payments we can successfully handle right now, create the TX
// for them
if !executing.is_empty() {
plans.push(self.execute(utxos, executing, key_for_any_change));
} else {
// If we don't have any payments to execute, save these UTXOs for later
self.utxos.extend(utxos);
}
// If we're instructed to force a spend, do so
// This is used when an old multisig is retiring and we want to always transfer outputs to the
// new one, regardless if we currently have payments
if force_spend && (!self.utxos.is_empty()) {
assert!(self.utxos.len() <= N::MAX_INPUTS);
plans.push(Plan {
key: self.key,
inputs: self.utxos.drain(..).collect::<Vec<_>>(),
payments: vec![],
change: Some(N::change_address(key_for_any_change)),
});
}
txn.put(scheduler_key::<D, _>(&self.key), self.serialize());
log::info!(
"created {} plans containing {} payments to sign",
plans.len(),
payments_at_start - self.payments.len(),
);
plans
}
pub fn consume_payments<D: Db>(&mut self, txn: &mut D::Transaction<'_>) -> Vec<Payment<N>> {
let res: Vec<_> = self.payments.drain(..).collect();
if !res.is_empty() {
txn.put(scheduler_key::<D, _>(&self.key), self.serialize());
}
res
}
// Note a branch output as having been created, with the amount it was actually created with,
// or not having been created due to being too small
// This can be called whenever, so long as it's properly ordered
  // (it's independent of Serai/the chain we're scheduling over, yet still expects outputs to be
// created in the same order Plans are returned in)
pub fn created_output<D: Db>(
&mut self,
txn: &mut D::Transaction<'_>,
expected: u64,
actual: Option<u64>,
) {
log::debug!("output expected to have {} had {:?} after fees", expected, actual);
// Get the payments this output is expected to handle
let queued = self.queued_plans.get_mut(&expected).unwrap();
let mut payments = queued.pop_front().unwrap();
assert_eq!(expected, payments.iter().map(|payment| payment.amount).sum::<u64>());
// If this was the last set of payments at this amount, remove it
if queued.is_empty() {
self.queued_plans.remove(&expected);
}
// If we didn't actually create this output, return, dropping the child payments
let actual = match actual {
Some(actual) => actual,
None => return,
};
// Amortize the fee amongst all payments
// While some networks, like Ethereum, may have some payments take notably more gas, those
// payments will have their own gas deducted when they're created. The difference in output
// value present here is solely the cost of the branch, which is used for all of these
// payments, regardless of how much they'll end up costing
    let diff = expected - actual;
let payments_len = u64::try_from(payments.len()).unwrap();
let per_payment = diff / payments_len;
// The above division isn't perfect
let mut remainder = diff - (per_payment * payments_len);
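    // For example (illustrative numbers): if the branch cost 10 in fees across 3 payments,
    // per_payment = 3 and remainder = 1, so the first payment is reduced by 4 and the rest by 3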
for payment in payments.iter_mut() {
payment.amount = payment.amount.saturating_sub(per_payment + remainder);
// Only subtract the remainder once
remainder = 0;
}
// Drop payments now below the dust threshold
let payments =
payments.drain(..).filter(|payment| payment.amount >= N::DUST).collect::<Vec<_>>();
// Sanity check this was done properly
assert!(actual >= payments.iter().map(|payment| payment.amount).sum::<u64>());
if payments.is_empty() {
return;
}
#[allow(clippy::unwrap_or_default)]
self.plans.entry(actual).or_insert(VecDeque::new()).push_back(payments);
// TODO2: This shows how ridiculous the serialize function is
txn.put(scheduler_key::<D, _>(&self.key), self.serialize());
}
}